From: MAHESH BODAPATI
To: libc-alpha@sourceware.org
Cc: bergner@linux.ibm.com, murphyp@linux.ibm.com, adhemerval.zanella@linaro.org, Mahesh Bodapati
Subject: [PATCH v3] powerpc64: optimize strcpy and stpcpy for POWER9/10
Date: Thu, 27 Jun 2024 08:58:35 -0400
Message-ID: <20240627125835.3039577-1-bmahi496@linux.ibm.com>
From: Mahesh Bodapati

This patch modifies the current POWER9 implementation of strcpy and
stpcpy to optimize it for POWER9/10.  Since no new POWER10 instructions
are used, the original POWER9 strcpy is modified rather than adding a
separate POWER10 implementation.  The changes also apply to stpcpy,
which shares the implementation and only adds some extra code before
returning.

Improvements compared to the POWER9 version:

- Use simple comparisons for the first ~512 bytes.  The main loop is
  good for long strings, but checking 16B at a time is better for
  shorter strings.  After aligning the address to 16 bytes, the loop
  is unrolled four times, checking 128 bytes per iteration.  There may
  be some overlap with the main loop for unaligned strings, but this
  is still better for shorter strings.

- For longer strings, loop over 64-byte blocks using 4 consecutive
  lxv/stxv instructions.

These changes show an average improvement of 13%.

Reviewed-by: Paul E. Murphy
---
 sysdeps/powerpc/powerpc64/le/power9/strcpy.S | 306 ++++++++++++++-----
 1 file changed, 238 insertions(+), 68 deletions(-)

diff --git a/sysdeps/powerpc/powerpc64/le/power9/strcpy.S b/sysdeps/powerpc/powerpc64/le/power9/strcpy.S
index 603bde1e39..206eeafcd6 100644
--- a/sysdeps/powerpc/powerpc64/le/power9/strcpy.S
+++ b/sysdeps/powerpc/powerpc64/le/power9/strcpy.S
@@ -42,22 +42,48 @@
    if USE_AS_STPCPY is defined.
 
-   The implementation can load bytes past a null terminator, but only
-   up to the next 16B boundary, so it never crosses a page.  */
+   This implementation never reads across a page boundary, but may
+   read beyond the NUL terminator.  */
 
-/* Load quadword at addr+offset to vreg, check for null bytes,
+/* Load 4 quadwords, merge into one VR for speed and check for NUL
+   and branch to label if NUL is found.  */
+#define CHECK_64B(offset,addr,label) \
+	lxv	32+v4,(offset+0)(addr);	\
+	lxv	32+v5,(offset+16)(addr);	\
+	lxv	32+v6,(offset+32)(addr);	\
+	lxv	32+v7,(offset+48)(addr);	\
+	vminub	v14,v4,v5;	\
+	vminub	v15,v6,v7;	\
+	vminub	v16,v14,v15;	\
+	vcmpequb.	v0,v16,v18;	\
+	beq	cr6,$+12;	\
+	li	r7,offset;	\
+	b	L(label);	\
+	stxv	32+v4,(offset+0)(r11);	\
+	stxv	32+v5,(offset+16)(r11);	\
+	stxv	32+v6,(offset+32)(r11);	\
+	stxv	32+v7,(offset+48)(r11)
+
+/* Load quadword at addr+offset to vreg, check for NUL bytes,
    and branch to label if any are found.  */
-#define CHECK16(vreg,offset,addr,label) \
-	lxv	vreg+32,offset(addr);	\
-	vcmpequb.	v6,vreg,v18;	\
+#define CHECK_16B(vreg,offset,addr,label) \
+	lxv	vreg+32,offset(addr);	\
+	vcmpequb.	v15,vreg,v18;	\
 	bne	cr6,L(label);
 
+/* Store vreg2 with length if NUL is found.  */
+#define STORE_WITH_LEN(vreg1,vreg2,reg) \
+	vctzlsbb	r8,vreg1;	\
+	addi	r9,r8,1;	\
+	sldi	r9,r9,56;	\
+	stxvl	32+vreg2,reg,r9;
+
 .machine power9
 ENTRY_TOCLESS (FUNC_NAME, 4)
 	CALL_MCOUNT 2
 
-	vspltisb v18,0		/* Zeroes in v18  */
-	vspltisb v19,-1		/* 0xFF bytes in v19  */
+	vspltisb v18,0		/* Zeroes in v18.  */
+	vspltisb v19,-1		/* 0xFF bytes in v19.  */
 
 	/* Next 16B-aligned address. Prepare address for L(loop).  */
 	addi	r5,r4,16
@@ -70,14 +96,11 @@ ENTRY_TOCLESS (FUNC_NAME, 4)
 	lvsr	v1,0,r4
 	vperm	v0,v19,v0,v1
 
-	vcmpequb. v6,v0,v18	/* 0xff if byte is NULL, 0x00 otherwise  */
+	vcmpequb. v6,v0,v18	/* 0xff if byte is NUL, 0x00 otherwise.  */
 	beq	cr6,L(no_null)
 
-	/* There's a null byte.  */
-	vctzlsbb r8,v6		/* Number of trailing zeroes  */
-	addi	r9,r8,1		/* Add null byte.  */
-	sldi	r10,r9,56	/* stxvl wants size in top 8 bits.  */
-	stxvl	32+v0,r3,r10	/* Partial store  */
+	/* There's a NUL byte.  */
+	STORE_WITH_LEN(v6,v0,r3)
 
 #ifdef USE_AS_STPCPY
 	/* stpcpy returns the dest address plus the size not counting the
@@ -87,17 +110,22 @@ ENTRY_TOCLESS (FUNC_NAME, 4)
 	blr
 
 L(no_null):
-	sldi	r10,r8,56	/* stxvl wants size in top 8 bits  */
-	stxvl	32+v0,r3,r10	/* Partial store  */
+	sldi	r10,r8,56	/* stxvl wants size in top 8 bits.  */
+	stxvl	32+v0,r3,r10	/* Partial store.  */
 
+/* The main loop is optimized for longer strings (> 512 bytes),
+   so checking the first bytes in 16B chunks benefits shorter
+   strings a lot.  */
 	.p2align 4
-L(loop):
-	CHECK16(v0,0,r5,tail1)
-	CHECK16(v1,16,r5,tail2)
-	CHECK16(v2,32,r5,tail3)
-	CHECK16(v3,48,r5,tail4)
-	CHECK16(v4,64,r5,tail5)
-	CHECK16(v5,80,r5,tail6)
+L(aligned):
+	CHECK_16B(v0,0,r5,tail1)
+	CHECK_16B(v1,16,r5,tail2)
+	CHECK_16B(v2,32,r5,tail3)
+	CHECK_16B(v3,48,r5,tail4)
+	CHECK_16B(v4,64,r5,tail5)
+	CHECK_16B(v5,80,r5,tail6)
+	CHECK_16B(v6,96,r5,tail7)
+	CHECK_16B(v7,112,r5,tail8)
 
 	stxv	32+v0,0(r11)
 	stxv	32+v1,16(r11)
@@ -105,21 +133,146 @@ L(loop):
 	stxv	32+v3,48(r11)
 	stxv	32+v4,64(r11)
 	stxv	32+v5,80(r11)
+	stxv	32+v6,96(r11)
+	stxv	32+v7,112(r11)
+
+	addi	r11,r11,128
+
+	CHECK_16B(v0,128,r5,tail1)
+	CHECK_16B(v1,128+16,r5,tail2)
+	CHECK_16B(v2,128+32,r5,tail3)
+	CHECK_16B(v3,128+48,r5,tail4)
+	CHECK_16B(v4,128+64,r5,tail5)
+	CHECK_16B(v5,128+80,r5,tail6)
+	CHECK_16B(v6,128+96,r5,tail7)
+	CHECK_16B(v7,128+112,r5,tail8)
+
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	stxv	32+v5,80(r11)
+	stxv	32+v6,96(r11)
+	stxv	32+v7,112(r11)
+
+	addi	r11,r11,128
+
+	CHECK_16B(v0,256,r5,tail1)
+	CHECK_16B(v1,256+16,r5,tail2)
+	CHECK_16B(v2,256+32,r5,tail3)
+	CHECK_16B(v3,256+48,r5,tail4)
+	CHECK_16B(v4,256+64,r5,tail5)
+	CHECK_16B(v5,256+80,r5,tail6)
+	CHECK_16B(v6,256+96,r5,tail7)
+	CHECK_16B(v7,256+112,r5,tail8)
+
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	stxv	32+v5,80(r11)
+	stxv	32+v6,96(r11)
+	stxv	32+v7,112(r11)
+
+	addi	r11,r11,128
+
+	CHECK_16B(v0,384,r5,tail1)
+	CHECK_16B(v1,384+16,r5,tail2)
+	CHECK_16B(v2,384+32,r5,tail3)
+	CHECK_16B(v3,384+48,r5,tail4)
+	CHECK_16B(v4,384+64,r5,tail5)
+	CHECK_16B(v5,384+80,r5,tail6)
+	CHECK_16B(v6,384+96,r5,tail7)
+	CHECK_16B(v7,384+112,r5,tail8)
+
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	stxv	32+v5,80(r11)
+	stxv	32+v6,96(r11)
+	stxv	32+v7,112(r11)
+
+	/* Align src pointer down to a 64B boundary.  */
+	addi	r5,r4,512
+	clrrdi	r5,r5,6
+	subf	r7,r4,r5
+	add	r11,r3,r7
+
+/* Switch to a more aggressive approach checking 64B each time.  */
+	.p2align 5
+L(strcpy_loop):
+	CHECK_64B(0,r5,tail_64b)
+	CHECK_64B(64,r5,tail_64b)
+	CHECK_64B(128,r5,tail_64b)
+	CHECK_64B(192,r5,tail_64b)
 
-	addi	r5,r5,96
-	addi	r11,r11,96
+	CHECK_64B(256,r5,tail_64b)
+	CHECK_64B(256+64,r5,tail_64b)
+	CHECK_64B(256+128,r5,tail_64b)
+	CHECK_64B(256+192,r5,tail_64b)
 
+	addi	r5,r5,512
+	addi	r11,r11,512
+
+	b	L(strcpy_loop)
+
+	.p2align 5
+L(tail_64b):
+	/* OK, we found a NUL byte.  Let's look for it in the current 64-byte
+	   block and mark it in its corresponding VR.  */
+	add	r11,r11,r7
+	vcmpequb. v8,v4,v18
+	beq	cr6,L(no_null_16B)
+	/* There's a NUL byte.  */
+	STORE_WITH_LEN(v8,v4,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
+#endif
+	blr
+
+L(no_null_16B):
+	stxv	32+v4,0(r11)
+	vcmpequb. v8,v5,v18
+	beq	cr6,L(no_null_32B)
+	/* There's a NUL byte.  */
+	addi	r11,r11,16
+	STORE_WITH_LEN(v8,v5,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
+#endif
+	blr
+
+L(no_null_32B):
+	stxv	32+v5,16(r11)
+	vcmpequb. v8,v6,v18
+	beq	cr6,L(no_null_48B)
+	/* There's a NUL byte.  */
+	addi	r11,r11,32
+	STORE_WITH_LEN(v8,v6,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
+#endif
+	blr
 
-	b	L(loop)
+L(no_null_48B):
+	stxv	32+v6,32(r11)
+	vcmpequb. v8,v7,v18
+	/* There's a NUL byte.  */
+	addi	r11,r11,48
+	STORE_WITH_LEN(v8,v7,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
+#endif
+	blr
 
 	.p2align 4
 L(tail1):
-	vctzlsbb r8,v6		/* Number of trailing zeroes  */
-	addi	r9,r8,1		/* Add null terminator  */
-	sldi	r9,r9,56	/* stxvl wants size in top 8 bits  */
-	stxvl	32+v0,r11,r9	/* Partial store  */
+	/* There's a NUL byte.  */
+	STORE_WITH_LEN(v15,v0,r11)
 #ifdef USE_AS_STPCPY
-	/* stpcpy returns the dest address plus the size not counting the
-	   final '\0'.  */
 	add	r3,r11,r8
 #endif
 	blr
@@ -127,11 +280,9 @@ L(tail1):
 	.p2align 4
 L(tail2):
 	stxv	32+v0,0(r11)
-	vctzlsbb r8,v6
-	addi	r9,r8,1
-	sldi	r9,r9,56
-	addi	r11,r11,16
-	stxvl	32+v1,r11,r9
+	/* There's a NUL byte.  */
+	addi	r11,r11,16
+	STORE_WITH_LEN(v15,v1,r11)
 #ifdef USE_AS_STPCPY
 	add	r3,r11,r8
 #endif
@@ -141,11 +292,8 @@ L(tail2):
 L(tail3):
 	stxv	32+v0,0(r11)
 	stxv	32+v1,16(r11)
-	vctzlsbb r8,v6
-	addi	r9,r8,1
-	sldi	r9,r9,56
-	addi	r11,r11,32
-	stxvl	32+v2,r11,r9
+	addi	r11,r11,32
+	STORE_WITH_LEN(v15,v2,r11)
 #ifdef USE_AS_STPCPY
 	add	r3,r11,r8
 #endif
@@ -156,11 +304,8 @@ L(tail4):
 	stxv	32+v0,0(r11)
 	stxv	32+v1,16(r11)
 	stxv	32+v2,32(r11)
-	vctzlsbb r8,v6
-	addi	r9,r8,1
-	sldi	r9,r9,56
-	addi	r11,r11,48
-	stxvl	32+v3,r11,r9
+	addi	r11,r11,48
+	STORE_WITH_LEN(v15,v3,r11)
 #ifdef USE_AS_STPCPY
 	add	r3,r11,r8
 #endif
@@ -168,34 +313,59 @@ L(tail4):
 	.p2align 4
 L(tail5):
-	stxv	32+v0,0(r11)
-	stxv	32+v1,16(r11)
-	stxv	32+v2,32(r11)
-	stxv	32+v3,48(r11)
-	vctzlsbb r8,v6
-	addi	r9,r8,1
-	sldi	r9,r9,56
-	addi	r11,r11,64
-	stxvl	32+v4,r11,r9
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	addi	r11,r11,64
+	STORE_WITH_LEN(v15,v4,r11)
 #ifdef USE_AS_STPCPY
-	add	r3,r11,r8
+	add	r3,r11,r8
 #endif
 	blr
 
 	.p2align 4
 L(tail6):
-	stxv	32+v0,0(r11)
-	stxv	32+v1,16(r11)
-	stxv	32+v2,32(r11)
-	stxv	32+v3,48(r11)
-	stxv	32+v4,64(r11)
-	vctzlsbb r8,v6
-	addi	r9,r8,1
-	sldi	r9,r9,56
-	addi	r11,r11,80
-	stxvl	32+v5,r11,r9
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	addi	r11,r11,80
+	STORE_WITH_LEN(v15,v5,r11)
 #ifdef USE_AS_STPCPY
-	add	r3,r11,r8
+	add	r3,r11,r8
+#endif
+	blr
+
+	.p2align 4
+L(tail7):
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	stxv	32+v5,80(r11)
+	addi	r11,r11,96
+	STORE_WITH_LEN(v15,v6,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
+#endif
+	blr
+
+	.p2align 4
+L(tail8):
+	stxv	32+v0,0(r11)
+	stxv	32+v1,16(r11)
+	stxv	32+v2,32(r11)
+	stxv	32+v3,48(r11)
+	stxv	32+v4,64(r11)
+	stxv	32+v5,80(r11)
+	stxv	32+v6,96(r11)
+	addi	r11,r11,112
+	STORE_WITH_LEN(v15,v7,r11)
+#ifdef USE_AS_STPCPY
+	add	r3,r11,r8
 #endif
 	blr
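For reference, the two-phase strategy the commit message describes (16B checks for the first ~512 bytes, then 64B blocks) can be sketched in portable C. This is only an illustration: the function and helper names below are invented, and unlike the vector code (which finds the NUL with vcmpequb./vctzlsbb, does the partial copy with a single stxvl, and may read up to the next block boundary past the NUL), this sketch scans byte by byte and never reads past the terminator.

```c
#include <stddef.h>
#include <string.h>

/* Return 1 and set *pos if a NUL appears in the first n bytes of s.
   Stands in for the vcmpequb./vctzlsbb sequence in the patch.  */
static int chunk_has_nul(const char *s, size_t n, size_t *pos)
{
    for (size_t i = 0; i < n; i++)
        if (s[i] == '\0') {
            *pos = i;
            return 1;
        }
    return 0;
}

/* Illustrative stpcpy: returns a pointer to the copied NUL's
   predecessor position (dst + strlen(src)), like stpcpy.  */
char *sketch_stpcpy(char *dst, const char *src)
{
    size_t off = 0, nul;

    /* Phase 1: check/copy 16B chunks for the first ~512 bytes,
       which favors short strings (the unrolled CHECK_16B blocks).  */
    while (off < 512) {
        if (chunk_has_nul(src + off, 16, &nul)) {
            memcpy(dst + off, src + off, nul + 1);  /* partial store */
            return dst + off + nul;
        }
        memcpy(dst + off, src + off, 16);
        off += 16;
    }

    /* Phase 2: switch to 64B blocks for long strings
       (the CHECK_64B loop in the patch).  */
    for (;;) {
        if (chunk_has_nul(src + off, 64, &nul)) {
            memcpy(dst + off, src + off, nul + 1);
            return dst + off + nul;
        }
        memcpy(dst + off, src + off, 64);
        off += 64;
    }
}
```

strcpy is the same routine returning dst instead of the end pointer, matching how the assembly builds both functions from one body under USE_AS_STPCPY.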