From patchwork Wed May 16 08:34:18 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 914265
From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: "Naveen N. Rao", Simon Guo, Cyril Bur
Subject: [PATCH v4 1/4] powerpc/64: Align bytes before falling back to .Lshort in powerpc64 memcmp()
Date: Wed, 16 May 2018 16:34:18 +0800
Message-Id: <1526459661-17323-2-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>
References: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

Currently the 64-byte version of memcmp() on powerpc falls back to .Lshort
(per-byte compare mode) if either the src or dst address is not 8-byte
aligned. This can be optimized in two situations:

1) If both addresses have the same offset from an 8-byte boundary:
memcmp() can first compare the unaligned bytes up to the 8-byte boundary
and then compare the rest of the 8-byte-aligned content in .Llong mode.

2) If the src/dst addresses do not have the same offset from an 8-byte
boundary: memcmp() can align the src address to 8 bytes, increment the dst
address accordingly, then load src with aligned loads and dst with
unaligned loads.

This patch optimizes memcmp() for the above two situations.

Tested with both little and big endian. The performance results below are
based on little endian. The following is the test result with src/dst
having the same offset (a similar result was observed when src/dst have
different offsets):

(1) 256 bytes
Test with the existing tools/testing/selftests/powerpc/stringloops/memcmp:
- without patch
	29.773018302 seconds time elapsed	( +- 0.09% )
- with patch
	16.485568173 seconds time elapsed	( +- 0.02% )
-> There is ~80% improvement

(2) 32 bytes
To observe the performance impact on < 32 bytes, modify
tools/testing/selftests/powerpc/stringloops/memcmp.c as follows:
-------
 #include <malloc.h>
 #include "utils.h"

-#define SIZE 256
+#define SIZE 32
 #define ITERATIONS 10000

 int test_memcmp(const void *s1, const void *s2, size_t n);
--------

- without patch
	0.244746482 seconds time elapsed	( +- 0.36% )
- with patch
	0.215069477 seconds time elapsed	( +- 0.51% )
-> There is ~13% improvement

(3) 0~8 bytes
To observe the performance impact on < 8 bytes, modify
tools/testing/selftests/powerpc/stringloops/memcmp.c as follows:
-------
 #include <malloc.h>
 #include "utils.h"

-#define SIZE 256
-#define ITERATIONS 10000
+#define SIZE 8
+#define ITERATIONS 1000000

 int test_memcmp(const void *s1, const void *s2, size_t n);
-------

- without patch
	1.845642503 seconds time elapsed	( +- 0.12% )
- with patch
	1.849767135 seconds time elapsed	( +- 0.26% )
-> They are nearly the same (-0.2%).
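For illustration, situation 1) can be sketched in C as below. This is not
the patch's implementation (the assembly in the diff resolves the head
bytes with a single shifted double-word compare rather than a byte loop);
the function name memcmp_sameoffset is made up for this sketch:
------
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static int memcmp_sameoffset(const unsigned char *s1,
			     const unsigned char *s2, size_t n)
{
	/* precondition: ((uintptr_t)s1 & 7) == ((uintptr_t)s2 & 7) */
	size_t head = (8 - ((uintptr_t)s1 & 7)) & 7;
	size_t i;

	if (head > n)
		head = n;

	/* compare the unaligned bytes up to the 8-byte boundary first */
	for (i = 0; i < head; i++)
		if (s1[i] != s2[i])
			return s1[i] < s2[i] ? -1 : 1;
	s1 += head;
	s2 += head;
	n -= head;

	/* both pointers are now 8-byte aligned: the .Llong-style loop */
	while (n >= 8) {
		uint64_t a, b;

		memcpy(&a, s1, 8);	/* aligned double-word load */
		memcpy(&b, s2, 8);
		if (a != b)
			return memcmp(s1, s2, 8); /* locate differing byte */
		s1 += 8;
		s2 += 8;
		n -= 8;
	}
	return n ? memcmp(s1, s2, n) : 0;
}
------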
Signed-off-by: Simon Guo
---
 arch/powerpc/lib/memcmp_64.S | 143 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 136 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
index d75d18b..f20e883 100644
--- a/arch/powerpc/lib/memcmp_64.S
+++ b/arch/powerpc/lib/memcmp_64.S
@@ -24,28 +24,41 @@
 #define rH	r31
 
 #ifdef __LITTLE_ENDIAN__
+#define LH	lhbrx
+#define LW	lwbrx
 #define LD	ldbrx
 #else
+#define LH	lhzx
+#define LW	lwzx
 #define LD	ldx
 #endif
 
+/*
+ * There are 2 categories for memcmp:
+ * 1) src/dst has the same offset to the 8 bytes boundary. The handlers
+ * are named like .Lsameoffset_xxxx
+ * 2) src/dst has different offset to the 8 bytes boundary. The handlers
+ * are named like .Ldiffoffset_xxxx
+ */
 _GLOBAL(memcmp)
 	cmpdi	cr1,r5,0
 
-	/* Use the short loop if both strings are not 8B aligned */
-	or	r6,r3,r4
+	/* Use the short loop if the src/dst addresses are not
+	 * with the same offset of 8 bytes align boundary.
+	 */
+	xor	r6,r3,r4
 	andi.	r6,r6,7
 
-	/* Use the short loop if length is less than 32B */
-	cmpdi	cr6,r5,31
+	/* Fall back to short loop if compare at aligned addrs
+	 * with less than 8 bytes.
+	 */
+	cmpdi	cr6,r5,7
 
 	beq	cr1,.Lzero
-	bne	.Lshort
-	bgt	cr6,.Llong
+	bgt	cr6,.Lno_short
 
 .Lshort:
 	mtctr	r5
-
 1:	lbz	rA,0(r3)
 	lbz	rB,0(r4)
 	subf.	rC,rB,rA
@@ -78,11 +91,90 @@ _GLOBAL(memcmp)
 	li	r3,0
 	blr
 
+.Lno_short:
+	dcbt	0,r3
+	dcbt	0,r4
+	bne	.Ldiffoffset_8bytes_make_align_start
+
+
+.Lsameoffset_8bytes_make_align_start:
+	/* attempt to compare bytes not aligned with 8 bytes so that
+	 * rest comparison can run based on 8 bytes alignment.
+	 */
+	andi.	r6,r3,7
+
+	/* Try to compare the first double word which is not 8 bytes aligned:
+	 * load the first double word at (src & ~7UL) and shift left appropriate
+	 * bits before comparison.
+	 */
+	clrlwi	r6,r3,29
+	rlwinm	r6,r6,3,0,28
+	beq	.Lsameoffset_8bytes_aligned
+	clrrdi	r3,r3,3
+	clrrdi	r4,r4,3
+	LD	rA,0,r3
+	LD	rB,0,r4
+	sld	rA,rA,r6
+	sld	rB,rB,r6
+	cmpld	cr0,rA,rB
+	srwi	r6,r6,3
+	bne	cr0,.LcmpAB_lightweight
+	subfic	r6,r6,8
+	subfc.	r5,r6,r5
+	addi	r3,r3,8
+	addi	r4,r4,8
+	beq	.Lzero
+
+.Lsameoffset_8bytes_aligned:
+	/* now we are aligned with 8 bytes.
+	 * Use .Llong loop if left cmp bytes are equal or greater than 32B.
+	 */
+	cmpdi	cr6,r5,31
+	bgt	cr6,.Llong
+
+.Lcmp_lt32bytes:
+	/* compare 1 ~ 32 bytes, at least r3 addr is 8 bytes aligned now */
+	cmpdi	cr5,r5,7
+	srdi	r0,r5,3
+	ble	cr5,.Lcmp_rest_lt8bytes
+
+	/* handle 8 ~ 31 bytes */
+	clrldi	r5,r5,61
+	mtctr	r0
+2:
+	LD	rA,0,r3
+	LD	rB,0,r4
+	cmpld	cr0,rA,rB
+	addi	r3,r3,8
+	addi	r4,r4,8
+	bne	cr0,.LcmpAB_lightweight
+	bdnz	2b
+
+	cmpwi	r5,0
+	beq	.Lzero
+
+.Lcmp_rest_lt8bytes:
+	/* Here we have less than 8 bytes to compare. At least the s1
+	 * address is aligned with 8 bytes.
+	 * The next double words are loaded and shifted right by the
+	 * appropriate number of bits.
+	 */
+	subfic	r6,r5,8
+	rlwinm	r6,r6,3,0,28
+	LD	rA,0,r3
+	LD	rB,0,r4
+	srd	rA,rA,r6
+	srd	rB,rB,r6
+	cmpld	cr0,rA,rB
+	bne	cr0,.LcmpAB_lightweight
+	b	.Lzero
+
 .Lnon_zero:
 	mr	r3,rC
 	blr
 
 .Llong:
+	/* At least s1 addr is aligned with 8 bytes */
 	li	off8,8
 	li	off16,16
 	li	off24,24
@@ -232,4 +324,41 @@ _GLOBAL(memcmp)
 	ld	r28,-32(r1)
 	ld	r27,-40(r1)
 	blr
+
+.LcmpAB_lightweight:	/* skip NV GPRS restore */
+	li	r3,1
+	bgt	cr0,8f
+	li	r3,-1
+8:
+	blr
+
+.Ldiffoffset_8bytes_make_align_start:
+	/* now try to align s1 with 8 bytes */
+	andi.	r6,r3,0x7
+	rlwinm	r6,r6,3,0,28
+	beq	.Ldiffoffset_align_s1_8bytes
+
+	clrrdi	r3,r3,3
+	LD	rA,0,r3
+	LD	rB,0,r4		/* unaligned load */
+	sld	rA,rA,r6
+	srd	rA,rA,r6
+	srd	rB,rB,r6
+	cmpld	cr0,rA,rB
+	srwi	r6,r6,3
+	bne	cr0,.LcmpAB_lightweight
+
+	subfic	r6,r6,8
+	subfc.	r5,r6,r5
+	addi	r3,r3,8
+	add	r4,r4,r6
+
+	beq	.Lzero
+
+.Ldiffoffset_align_s1_8bytes:
+	/* now s1 is aligned with 8 bytes. */
+	cmpdi	cr5,r5,31
+	ble	cr5,.Lcmp_lt32bytes
+	b	.Llong
+
 EXPORT_SYMBOL(memcmp)
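The .Ldiffoffset_* path above boils down to: align s1, then stream aligned
loads from s1 against unaligned loads from s2. A rough C analogue follows;
memcmp_diffoffset and load_u64 are made-up names, and the assembly handles
the head with one masked double-word compare instead of a byte loop:
------
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint64_t load_u64(const unsigned char *p)
{
	uint64_t v;

	memcpy(&v, p, 8);	/* may compile to an unaligned load */
	return v;
}

static int memcmp_diffoffset(const unsigned char *s1,
			     const unsigned char *s2, size_t n)
{
	size_t head = (8 - ((uintptr_t)s1 & 7)) & 7;
	size_t i;

	if (head > n)
		head = n;

	/* compare until s1 reaches an 8-byte boundary */
	for (i = 0; i < head; i++)
		if (s1[i] != s2[i])
			return s1[i] < s2[i] ? -1 : 1;
	s1 += head;
	s2 += head;
	n -= head;

	/* s1 is now aligned; s2 keeps its odd offset and is loaded
	 * unaligned, mirroring the aligned-LD / unaligned-LD pairing
	 * in the assembly. */
	while (n >= 8) {
		uint64_t a = load_u64(s1);	/* aligned */
		uint64_t b = load_u64(s2);	/* unaligned */

		if (a != b)
			return memcmp(s1, s2, 8);
		s1 += 8;
		s2 += 8;
		n -= 8;
	}
	return n ? memcmp(s1, s2, n) : 0;
}
------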
From patchwork Wed May 16 08:34:19 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 914271

From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: "Naveen N. Rao", Simon Guo, Cyril Bur
Subject: [PATCH v4 2/4] powerpc/64: enhance memcmp() with VMX instructions for long bytes comparison
Date: Wed, 16 May 2018 16:34:19 +0800
Message-Id: <1526459661-17323-3-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>
References: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch adds VMX primitives to do memcmp() when the compare size
exceeds 4K bytes. The KSM feature can benefit from this.

Test result with the following test program (replace the "^>" with ""):
------
># cat tools/testing/selftests/powerpc/stringloops/memcmp.c
>#include <malloc.h>
>#include <stdio.h>
>#include <stdlib.h>
>#include <string.h>
>#include "utils.h"
>#define SIZE (1024 * 1024 * 900)
>#define ITERATIONS 40

int test_memcmp(const void *s1, const void *s2, size_t n);

static int testcase(void)
{
	char *s1;
	char *s2;
	unsigned long i;

	s1 = memalign(128, SIZE);
	if (!s1) {
		perror("memalign");
		exit(1);
	}

	s2 = memalign(128, SIZE);
	if (!s2) {
		perror("memalign");
		exit(1);
	}

	for (i = 0; i < SIZE; i++) {
		s1[i] = i & 0xff;
		s2[i] = i & 0xff;
	}

	for (i = 0; i < ITERATIONS; i++) {
		int ret = test_memcmp(s1, s2, SIZE);

		if (ret) {
			printf("return %d at[%ld]! should have returned zero\n", ret, i);
			abort();
		}
	}

	return 0;
}

int main(void)
{
	return test_harness(testcase, "memcmp");
}
------

Without this patch (but with the first patch "powerpc/64: Align bytes
before falling back to .Lshort in powerpc64 memcmp()" in the series):

	4.726728762 seconds time elapsed	( +- 3.54% )

With the VMX patch:

	4.234335473 seconds time elapsed	( +- 2.63% )

There is a ~10% improvement.

Testing with an unaligned and different-offset version (shifting s1 and
s2 by a random offset within 16 bytes) can achieve a higher improvement
than 10%.
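At a high level, the control flow this patch adds can be summarized by
the following C-like sketch. VMX_OPS_THRES, enter_vmx_ops() and
exit_vmx_ops() come from the diff below; memcmp64, vmx_compare_loop and
scalar_compare_loop are placeholder names for this sketch only:
------
#include <stddef.h>

int enter_vmx_ops(void);		/* from vmx-helper.c (renamed below) */
void *exit_vmx_ops(void *dest);		/* ditto; dest is passed through */
/* placeholders standing in for the loops in memcmp_64.S: */
int vmx_compare_loop(const void *s1, const void *s2, size_t n);
int scalar_compare_loop(const void *s1, const void *s2, size_t n);

#define VMX_OPS_THRES 4096	/* only use VMX for compares >= 4KB */

static int memcmp64(const void *s1, const void *s2, size_t n)
{
	/* enter_vmx_ops() returns 0 when VMX cannot be used (e.g. in
	 * interrupt context); on success it disables preemption and
	 * enables kernel Altivec. */
	if (n >= VMX_OPS_THRES && enter_vmx_ops()) {
		/* compare 32 bytes per iteration with lvx + vcmpequd. */
		int ret = vmx_compare_loop(s1, s2, n);

		exit_vmx_ops(NULL);	/* drop Altivec, re-enable preemption */
		return ret;
	}

	/* small sizes, or no VMX: scalar 8-bytes-at-a-time path */
	return scalar_compare_loop(s1, s2, n);
}
------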
Signed-off-by: Simon Guo
---
 arch/powerpc/include/asm/asm-prototypes.h |   4 +-
 arch/powerpc/lib/copypage_power7.S        |   4 +-
 arch/powerpc/lib/memcmp_64.S              | 231 ++++++++++++++++++++++++++++++
 arch/powerpc/lib/memcpy_power7.S          |   6 +-
 arch/powerpc/lib/vmx-helper.c             |   4 +-
 5 files changed, 240 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index d9713ad..31fdcee 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -49,8 +49,8 @@ void __trace_hcall_exit(long opcode, unsigned long retval,
 /* VMX copying */
 int enter_vmx_usercopy(void);
 int exit_vmx_usercopy(void);
-int enter_vmx_copy(void);
-void * exit_vmx_copy(void *dest);
+int enter_vmx_ops(void);
+void *exit_vmx_ops(void *dest);
 
 /* Traps */
 long machine_check_early(struct pt_regs *regs);

diff --git a/arch/powerpc/lib/copypage_power7.S b/arch/powerpc/lib/copypage_power7.S
index 8fa73b7..e38f956 100644
--- a/arch/powerpc/lib/copypage_power7.S
+++ b/arch/powerpc/lib/copypage_power7.S
@@ -57,7 +57,7 @@ _GLOBAL(copypage_power7)
 	std	r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
 	std	r0,16(r1)
 	stdu	r1,-STACKFRAMESIZE(r1)
-	bl	enter_vmx_copy
+	bl	enter_vmx_ops
 	cmpwi	r3,0
 	ld	r0,STACKFRAMESIZE+16(r1)
 	ld	r3,STK_REG(R31)(r1)
@@ -100,7 +100,7 @@ _GLOBAL(copypage_power7)
 	addi	r3,r3,128
 	bdnz	1b
 
-	b	exit_vmx_copy		/* tail call optimise */
+	b	exit_vmx_ops		/* tail call optimise */
 #else
 	li	r0,(PAGE_SIZE/128)

diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
index f20e883..6303bbf 100644
--- a/arch/powerpc/lib/memcmp_64.S
+++ b/arch/powerpc/lib/memcmp_64.S
@@ -27,12 +27,73 @@
 #define LH	lhbrx
 #define LW	lwbrx
 #define LD	ldbrx
+#define LVS	lvsr
+#define VPERM(_VRT,_VRA,_VRB,_VRC) \
+	vperm _VRT,_VRB,_VRA,_VRC
 #else
 #define LH	lhzx
 #define LW	lwzx
 #define LD	ldx
+#define LVS	lvsl
+#define VPERM(_VRT,_VRA,_VRB,_VRC) \
+	vperm _VRT,_VRA,_VRB,_VRC
 #endif
 
+#define VMX_OPS_THRES	4096
+#define ENTER_VMX_OPS	\
+	mflr	r0;	\
+	std	r3,-STACKFRAMESIZE+STK_REG(R31)(r1); \
+	std	r4,-STACKFRAMESIZE+STK_REG(R30)(r1); \
+	std	r5,-STACKFRAMESIZE+STK_REG(R29)(r1); \
+	std	r0,16(r1); \
+	stdu	r1,-STACKFRAMESIZE(r1); \
+	bl	enter_vmx_ops; \
+	cmpwi	cr1,r3,0; \
+	ld	r0,STACKFRAMESIZE+16(r1); \
+	ld	r3,STK_REG(R31)(r1); \
+	ld	r4,STK_REG(R30)(r1); \
+	ld	r5,STK_REG(R29)(r1); \
+	addi	r1,r1,STACKFRAMESIZE; \
+	mtlr	r0
+
+#define EXIT_VMX_OPS \
+	mflr	r0; \
+	std	r3,-STACKFRAMESIZE+STK_REG(R31)(r1); \
+	std	r4,-STACKFRAMESIZE+STK_REG(R30)(r1); \
+	std	r5,-STACKFRAMESIZE+STK_REG(R29)(r1); \
+	std	r0,16(r1); \
+	stdu	r1,-STACKFRAMESIZE(r1); \
+	bl	exit_vmx_ops; \
+	ld	r0,STACKFRAMESIZE+16(r1); \
+	ld	r3,STK_REG(R31)(r1); \
+	ld	r4,STK_REG(R30)(r1); \
+	ld	r5,STK_REG(R29)(r1); \
+	addi	r1,r1,STACKFRAMESIZE; \
+	mtlr	r0
+
+/*
+ * LD_VSR_CROSS16B loads the 2nd 16 bytes for _vaddr, which is not aligned
+ * with a 16-byte boundary, and permutes the result with the 1st 16 bytes.
+ *
+ * | y y y y  y y y y  y y y y  y 0 1 2 | 3 4 5 6  7 8 9 a  b c d e  f z z z |
+ *   ^                                  ^                                  ^
+ * 0xbbbb10                          0xbbbb20                          0xbbbb30
+ *                                ^
+ *                              _vaddr
+ *
+ *
+ * _vmask is the mask generated by LVS
+ * _v1st_qw is the 1st aligned QW of current addr which is already loaded.
+ *   for example: 0xyyyyyyyyyyyyy012 for big endian
+ * _v2nd_qw is the 2nd aligned QW of cur _vaddr to be loaded.
+ *   for example: 0x3456789abcdefzzz for big endian
+ * The permute result is saved in _v_res.
+ *   for example: 0x0123456789abcdef for big endian.
+ */
+#define LD_VSR_CROSS16B(_vaddr,_vmask,_v1st_qw,_v2nd_qw,_v_res) \
+	lvx	_v2nd_qw,_vaddr,off16; \
+	VPERM(_v_res,_v1st_qw,_v2nd_qw,_vmask)
+
 /*
  * There are 2 categories for memcmp:
  * 1) src/dst has the same offset to the 8 bytes boundary. The handlers
@@ -174,6 +235,13 @@ _GLOBAL(memcmp)
 	blr
 
 .Llong:
+#ifdef CONFIG_ALTIVEC
+	/* Try to use vmx loop if length is larger than 4K */
+	cmpldi	cr6,r5,VMX_OPS_THRES
+	bge	cr6,.Lsameoffset_vmx_cmp
+
+.Llong_novmx_cmp:
+#endif
 	/* At least s1 addr is aligned with 8 bytes */
 	li	off8,8
 	li	off16,16
@@ -332,7 +400,94 @@ _GLOBAL(memcmp)
 8:
 	blr
 
+#ifdef CONFIG_ALTIVEC
+.Lsameoffset_vmx_cmp:
+	/* Enter with src/dst addrs has the same offset with 8 bytes
+	 * align boundary
+	 */
+	ENTER_VMX_OPS
+	beq	cr1,.Llong_novmx_cmp
+
+3:
+	/* need to check whether r4 has the same offset with r3
+	 * for 16 bytes boundary.
+	 */
+	xor	r0,r3,r4
+	andi.	r0,r0,0xf
+	bne	.Ldiffoffset_vmx_cmp_start
+
+	/* len is no less than 4KB. Need to align with 16 bytes further.
+	 */
+	andi.	rA,r3,8
+	LD	rA,0,r3
+	beq	4f
+	LD	rB,0,r4
+	cmpld	cr0,rA,rB
+	addi	r3,r3,8
+	addi	r4,r4,8
+	addi	r5,r5,-8
+
+	beq	cr0,4f
+	/* save and restore cr0 */
+	mfocrf	r5,64
+	EXIT_VMX_OPS
+	mtocrf	64,r5
+	b	.LcmpAB_lightweight
+
+4:
+	/* compare 32 bytes for each loop */
+	srdi	r0,r5,5
+	mtctr	r0
+	clrldi	r5,r5,59
+	li	off16,16
+
+.balign 16
+5:
+	lvx	v0,0,r3
+	lvx	v1,0,r4
+	vcmpequd. v0,v0,v1
+	bf	24,7f
+	lvx	v0,off16,r3
+	lvx	v1,off16,r4
+	vcmpequd. v0,v0,v1
+	bf	24,6f
+	addi	r3,r3,32
+	addi	r4,r4,32
+	bdnz	5b
+
+	EXIT_VMX_OPS
+	cmpdi	r5,0
+	beq	.Lzero
+	b	.Lcmp_lt32bytes
+
+6:
+	addi	r3,r3,16
+	addi	r4,r4,16
+
+7:
+	/* diff the last 16 bytes */
+	EXIT_VMX_OPS
+	LD	rA,0,r3
+	LD	rB,0,r4
+	cmpld	cr0,rA,rB
+	li	off8,8
+	bne	cr0,.LcmpAB_lightweight
+
+	LD	rA,off8,r3
+	LD	rB,off8,r4
+	cmpld	cr0,rA,rB
+	bne	cr0,.LcmpAB_lightweight
+	b	.Lzero
+#endif
+
 .Ldiffoffset_8bytes_make_align_start:
+#ifdef CONFIG_ALTIVEC
+	/* only do vmx ops when the size exceeds 4K bytes */
+	cmpdi	cr5,r5,VMX_OPS_THRES
+	bge	cr5,.Ldiffoffset_vmx_cmp
+.Ldiffoffset_novmx_cmp:
+#endif
+
 	/* now try to align s1 with 8 bytes */
 	andi.	r6,r3,0x7
 	rlwinm	r6,r6,3,0,28
@@ -359,6 +514,82 @@ _GLOBAL(memcmp)
 	/* now s1 is aligned with 8 bytes. */
 	cmpdi	cr5,r5,31
 	ble	cr5,.Lcmp_lt32bytes
+
+#ifdef CONFIG_ALTIVEC
+	b	.Llong_novmx_cmp
+#else
 	b	.Llong
+#endif
+
+#ifdef CONFIG_ALTIVEC
+.Ldiffoffset_vmx_cmp:
+	ENTER_VMX_OPS
+	beq	cr1,.Ldiffoffset_novmx_cmp
+
+.Ldiffoffset_vmx_cmp_start:
+	/* Firstly try to align r3 with 16 bytes */
+	andi.	r6,r3,0xf
+	li	off16,16
+	beq	.Ldiffoffset_vmx_s1_16bytes_align
+
+	LVS	v3,0,r3
+	LVS	v4,0,r4
+
+	lvx	v5,0,r3
+	lvx	v6,0,r4
+	LD_VSR_CROSS16B(r3,v3,v5,v7,v9)
+	LD_VSR_CROSS16B(r4,v4,v6,v8,v10)
+
+	vcmpequb. v7,v9,v10
+	bnl	cr6,.Ldiffoffset_vmx_diff_found
+
+	subfic	r6,r6,16
+	subf	r5,r6,r5
+	add	r3,r3,r6
+	add	r4,r4,r6
+
+.Ldiffoffset_vmx_s1_16bytes_align:
+	/* now s1 is aligned with 16 bytes */
+	lvx	v6,0,r4
+	LVS	v4,0,r4
+	srdi	r6,r5,5		/* loop for 32 bytes each */
+	clrldi	r5,r5,59
+	mtctr	r6
+
+.balign 16
+.Ldiffoffset_vmx_32bytesloop:
+	/* the first qw of r4 was saved in v6 */
+	lvx	v9,0,r3
+	LD_VSR_CROSS16B(r4,v4,v6,v8,v10)
+	vcmpequb.	v7,v9,v10
+	vor	v6,v8,v8
+	bnl	cr6,.Ldiffoffset_vmx_diff_found
+
+	addi	r3,r3,16
+	addi	r4,r4,16
+
+	lvx	v9,0,r3
+	LD_VSR_CROSS16B(r4,v4,v6,v8,v10)
+	vcmpequb.	v7,v9,v10
+	vor	v6,v8,v8
+	bnl	cr6,.Ldiffoffset_vmx_diff_found
+
+	addi	r3,r3,16
+	addi	r4,r4,16
+
+	bdnz	.Ldiffoffset_vmx_32bytesloop
+
+	EXIT_VMX_OPS
+
+	cmpdi	r5,0
+	beq	.Lzero
+	b	.Lcmp_lt32bytes
+
+.Ldiffoffset_vmx_diff_found:
+	EXIT_VMX_OPS
+	/* anyway, the diff will appear in next 16 bytes */
+	li	r5,16
+	b	.Lcmp_lt32bytes
+
+#endif
 EXPORT_SYMBOL(memcmp)

diff --git a/arch/powerpc/lib/memcpy_power7.S b/arch/powerpc/lib/memcpy_power7.S
index df7de9d..070cdf6 100644
--- a/arch/powerpc/lib/memcpy_power7.S
+++ b/arch/powerpc/lib/memcpy_power7.S
@@ -230,7 +230,7 @@ _GLOBAL(memcpy_power7)
 	std	r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
 	std	r0,16(r1)
 	stdu	r1,-STACKFRAMESIZE(r1)
-	bl	enter_vmx_copy
+	bl	enter_vmx_ops
 	cmpwi	cr1,r3,0
 	ld	r0,STACKFRAMESIZE+16(r1)
 	ld	r3,STK_REG(R31)(r1)
@@ -445,7 +445,7 @@ _GLOBAL(memcpy_power7)
 
 15:	addi	r1,r1,STACKFRAMESIZE
 	ld	r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
-	b	exit_vmx_copy		/* tail call optimise */
+	b	exit_vmx_ops		/* tail call optimise */
 
 .Lvmx_unaligned_copy:
 	/* Get the destination 16B aligned */
@@ -649,5 +649,5 @@ _GLOBAL(memcpy_power7)
 
 15:	addi	r1,r1,STACKFRAMESIZE
 	ld	r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
-	b	exit_vmx_copy		/* tail call optimise */
+	b	exit_vmx_ops		/* tail call optimise */
 #endif /* CONFIG_ALTIVEC */

diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index bf925cd..9f34049 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -53,7 +53,7 @@ int exit_vmx_usercopy(void)
 	return 0;
 }
 
-int enter_vmx_copy(void)
+int enter_vmx_ops(void)
 {
 	if (in_interrupt())
 		return 0;
@@ -70,7 +70,7 @@ int enter_vmx_copy(void)
  * passed a pointer to the destination which we return as required by a
  * memcpy implementation.
  */
-void *exit_vmx_copy(void *dest)
+void *exit_vmx_ops(void *dest)
 {
 	disable_kernel_altivec();
 	preempt_enable();
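The LD_VSR_CROSS16B macro above is the classic Altivec two-aligned-loads-
plus-permute idiom for unaligned 16-byte loads. Roughly the same thing can
be written with C intrinsics as below (big-endian flavour with vec_lvsl;
the patch uses lvsr and swapped vperm operands on little endian, per its
LVS/VPERM macros). load_unaligned_16b is a made-up name; build with
-maltivec:
------
#include <altivec.h>

/* Synthesize an unaligned 16-byte load from two aligned lvx loads plus
 * a permute. vec_ld ignores the low 4 address bits, so v1/v2 are the two
 * aligned quadwords straddling p, and the vec_lvsl mask selects the 16
 * bytes starting at p from their concatenation. */
static vector unsigned char load_unaligned_16b(const unsigned char *p)
{
	vector unsigned char v1 = vec_ld(0, p);
	vector unsigned char v2 = vec_ld(16, p);
	vector unsigned char mask = vec_lvsl(0, p);

	return vec_perm(v1, v2, mask);
}
------
As in the assembly loop, the first aligned quadword can be kept in a
register across iterations (the patch's v6), so the steady state costs one
lvx and one vperm per 16 bytes instead of two loads.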
From patchwork Wed May 16 08:34:20 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 914273

From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: "Naveen N. Rao", Simon Guo, Cyril Bur
Subject: [PATCH v4 3/4] powerpc/64: add 32 bytes prechecking before using VMX optimization on memcmp()
Date: Wed, 16 May 2018 16:34:20 +0800
Message-Id: <1526459661-17323-4-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>
References: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch builds on the previous VMX patch for memcmp(). To optimize
ppc64 memcmp() with VMX instructions, we need to account for the VMX
penalty this brings: if the kernel uses VMX instructions, it needs to
save/restore the current thread's VMX registers. There are 32 x 128-bit
VMX registers on PPC, which means 32 x 16 = 512 bytes for load and store.
The major concern regarding memcmp() performance in the kernel is KSM,
which uses memcmp() frequently to merge identical pages. So it makes
sense to take some measures/enhancements for KSM and see whether any
improvement can be achieved.

Cyril Bur pointed out in the following mail that memcmp() calls from KSM
have a high probability of failing (mismatching) early, within the first
bytes:
https://patchwork.ozlabs.org/patch/817322/#1773629

This patch follows up on that observation. Per some testing, KSM
memcmp() usually fails within the first 32 bytes. More specifically:
- 76% of cases fail/mismatch before 16 bytes;
- 83% of cases fail/mismatch before 32 bytes;
- 84% of cases fail/mismatch before 64 bytes.

So 32 bytes looks a better pre-checking size than the other choices.
This patch adds a 32-byte pre-check before jumping into VMX operations,
to avoid the unnecessary VMX penalty. Testing shows a ~20% improvement
on memcmp() average execution time with this patch.

The detailed data and analysis are at:
https://github.com/justdoitqd/publicFiles/blob/master/memcmp/README.md

Any suggestion is welcome.

Signed-off-by: Simon Guo
---
 arch/powerpc/lib/memcmp_64.S | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
index 6303bbf..df2eec0 100644
--- a/arch/powerpc/lib/memcmp_64.S
+++ b/arch/powerpc/lib/memcmp_64.S
@@ -405,6 +405,35 @@ _GLOBAL(memcmp)
 	/* Enter with src/dst addrs has the same offset with 8 bytes
 	 * align boundary
 	 */
+
+#ifdef CONFIG_KSM
+	/* KSM will always compare at page boundary so it falls into
+	 * .Lsameoffset_vmx_cmp.
+	 *
+	 * There is an optimization for KSM based on the following fact:
+	 * KSM page memcmp() tends to fail early in the first bytes.
+	 * Statistics show 76% of KSM memcmp() fails in the first 16
+	 * bytes, 83% fails in the first 32 bytes, and 84% fails in the
+	 * first 64 bytes.
+	 *
+	 * Before applying VMX instructions, which will lead to a 32x128-bit
+	 * VMX register load/restore penalty, let us compare the first 32
+	 * bytes so that we can catch the ~80% fail cases.
+	 */
+
+	li	r0,4
+	mtctr	r0
+.Lksm_32B_loop:
+	LD	rA,0,r3
+	LD	rB,0,r4
+	cmpld	cr0,rA,rB
+	addi	r3,r3,8
+	addi	r4,r4,8
+	bne	cr0,.LcmpAB_lightweight
+	addi	r5,r5,-8
+	bdnz	.Lksm_32B_loop
+#endif
+
 	ENTER_VMX_OPS
 	beq	cr1,.Llong_novmx_cmp
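In C terms, the added precheck is equivalent to something like the sketch
below (illustration only; the real code is the .Lksm_32B_loop above, which
falls through to ENTER_VMX_OPS when all 32 bytes match, and the function
name memcmp_precheck_32b is made up):
------
#include <stdint.h>
#include <string.h>

static int memcmp_precheck_32b(const unsigned char *s1,
			       const unsigned char *s2)
{
	int i;

	/* Four 8-byte compares before paying the 512-byte VMX
	 * save/restore cost; ~80% of KSM calls resolve here. */
	for (i = 0; i < 4; i++) {
		uint64_t a, b;

		memcpy(&a, s1 + 8 * i, 8);
		memcpy(&b, s2 + 8 * i, 8);
		if (a != b)
			return 1;	/* difference found: skip VMX */
	}
	return 0;			/* first 32 bytes equal: use VMX */
}
------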
From patchwork Wed May 16 08:34:21 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 914274

From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: "Naveen N. Rao", Simon Guo, Cyril Bur
Subject: [PATCH v4 4/4] powerpc/selftest: update memcmp_64 selftest for VMX implementation
Date: Wed, 16 May 2018 16:34:21 +0800
Message-Id: <1526459661-17323-5-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>
References: <1526459661-17323-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch reworks the memcmp_64 selftest so that it covers more test
cases. It adds test cases for:
- memcmp over 4K bytes in size;
- s1/s2 with different/random offsets from a 16-byte boundary;
- enter/exit_vmx_ops pairing.

Signed-off-by: Simon Guo
---
 .../selftests/powerpc/copyloops/asm/ppc_asm.h      |  4 +-
 .../testing/selftests/powerpc/stringloops/Makefile |  2 +-
 .../selftests/powerpc/stringloops/asm/ppc_asm.h    | 22 +++++
 .../testing/selftests/powerpc/stringloops/memcmp.c | 98 +++++++++++++++++-----
 4 files changed, 101 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
index 5ffe04d..dfce161 100644
--- a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
@@ -36,11 +36,11 @@
 	li	r3,0
 	blr
 
-FUNC_START(enter_vmx_copy)
+FUNC_START(enter_vmx_ops)
 	li	r3,1
 	blr
 
-FUNC_START(exit_vmx_copy)
+FUNC_START(exit_vmx_ops)
 	blr
 
 FUNC_START(memcpy_power7)

diff --git a/tools/testing/selftests/powerpc/stringloops/Makefile b/tools/testing/selftests/powerpc/stringloops/Makefile
index 1125e48..75a3d2fe 100644
--- a/tools/testing/selftests/powerpc/stringloops/Makefile
+++ b/tools/testing/selftests/powerpc/stringloops/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 # The loops are all 64-bit code
-CFLAGS += -m64
+CFLAGS += -m64 -DCONFIG_KSM
 CFLAGS += -I$(CURDIR)
 
 TEST_GEN_PROGS := memcmp

diff --git a/tools/testing/selftests/powerpc/stringloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/stringloops/asm/ppc_asm.h
index 136242e..185d257 100644
--- a/tools/testing/selftests/powerpc/stringloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/stringloops/asm/ppc_asm.h
@@ -1,4 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _PPC_ASM_H
+#define _PPC_ASM_H
 #include <ppc-asm.h>
 
 #ifndef r1
@@ -6,3 +8,23 @@
 #endif
 
 #define _GLOBAL(A) FUNC_START(test_ ## A)
+
+#define CONFIG_ALTIVEC
+
+#define R14 r14
+#define R15 r15
+#define R16 r16
+#define R17 r17
+#define R18 r18
+#define R19 r19
+#define R20 r20
+#define R21 r21
+#define R22 r22
+#define R29 r29
+#define R30 r30
+#define R31 r31
+
+#define STACKFRAMESIZE	256
+#define STK_REG(i)	(112 + ((i)-14)*8)
+
+#endif

diff --git a/tools/testing/selftests/powerpc/stringloops/memcmp.c b/tools/testing/selftests/powerpc/stringloops/memcmp.c
index 8250db2..b5cf717 100644
--- a/tools/testing/selftests/powerpc/stringloops/memcmp.c
+++ b/tools/testing/selftests/powerpc/stringloops/memcmp.c
@@ -2,20 +2,40 @@
 #include <malloc.h>
 #include <stdlib.h>
 #include <string.h>
+#include <time.h>
 #include "utils.h"
 
 #define SIZE 256
 #define ITERATIONS 10000
 
+#define LARGE_SIZE (5 * 1024)
+#define LARGE_ITERATIONS 1000
+#define LARGE_MAX_OFFSET 32
+#define LARGE_SIZE_START 4096
+
+#define MAX_OFFSET_DIFF_S1_S2 48
+
+int vmx_count;
+int enter_vmx_ops(void)
+{
+	vmx_count++;
+	return 1;
+}
+
+void exit_vmx_ops(void)
+{
+	vmx_count--;
+}
 int test_memcmp(const void *s1, const void *s2, size_t n);
 
 /* test all offsets and lengths */
-static void test_one(char *s1, char *s2)
+static void test_one(char *s1, char *s2, unsigned long max_offset,
+		unsigned long size_start, unsigned long max_size)
 {
 	unsigned long offset, size;
 
-	for (offset = 0; offset < SIZE; offset++) {
-		for (size = 0; size < (SIZE-offset); size++) {
+	for (offset = 0; offset < max_offset; offset++) {
+		for (size = size_start; size < (max_size - offset); size++) {
 			int x, y;
 			unsigned long i;
 
@@ -35,70 +55,104 @@ static void test_one(char *s1, char *s2)
 				printf("\n");
 				abort();
 			}
+
+			if (vmx_count != 0) {
+				printf("vmx enter/exit not paired.(offset:%ld size:%ld s1:%p s2:%p vc:%d\n",
+					offset, size, s1, s2, vmx_count);
+				printf("\n");
+				abort();
+			}
 		}
 	}
 }
 
-static int testcase(void)
+static int testcase(bool islarge)
 {
 	char *s1;
 	char *s2;
 	unsigned long i;
 
-	s1 = memalign(128, SIZE);
+	unsigned long comp_size = (islarge ? LARGE_SIZE : SIZE);
+	unsigned long alloc_size = comp_size + MAX_OFFSET_DIFF_S1_S2;
+	int iterations = islarge ? LARGE_ITERATIONS : ITERATIONS;
+
+	s1 = memalign(128, alloc_size);
 	if (!s1) {
 		perror("memalign");
 		exit(1);
 	}
 
-	s2 = memalign(128, SIZE);
+	s2 = memalign(128, alloc_size);
 	if (!s2) {
 		perror("memalign");
 		exit(1);
 	}
 
-	srandom(1);
+	srandom(time(0));
 
-	for (i = 0; i < ITERATIONS; i++) {
+	for (i = 0; i < iterations; i++) {
 		unsigned long j;
 		unsigned long change;
+		char *rand_s1 = s1;
+		char *rand_s2 = s2;
 
-		for (j = 0; j < SIZE; j++)
+		for (j = 0; j < alloc_size; j++)
 			s1[j] = random();
 
-		memcpy(s2, s1, SIZE);
+		rand_s1 += random() % MAX_OFFSET_DIFF_S1_S2;
+		rand_s2 += random() % MAX_OFFSET_DIFF_S1_S2;
+		memcpy(rand_s2, rand_s1, comp_size);
 
 		/* change one byte */
-		change = random() % SIZE;
-		s2[change] = random() & 0xff;
-
-		test_one(s1, s2);
+		change = random() % comp_size;
+		rand_s2[change] = random() & 0xff;
+
+		if (islarge)
+			test_one(rand_s1, rand_s2, LARGE_MAX_OFFSET,
+					LARGE_SIZE_START, comp_size);
+		else
+			test_one(rand_s1, rand_s2, SIZE, 0, comp_size);
 	}
 
-	srandom(1);
+	srandom(time(0));
 
-	for (i = 0; i < ITERATIONS; i++) {
+	for (i = 0; i < iterations; i++) {
 		unsigned long j;
 		unsigned long change;
+		char *rand_s1 = s1;
+		char *rand_s2 = s2;
 
-		for (j = 0; j < SIZE; j++)
+		for (j = 0; j < alloc_size; j++)
 			s1[j] = random();
 
-		memcpy(s2, s1, SIZE);
+		rand_s1 += random() % MAX_OFFSET_DIFF_S1_S2;
+		rand_s2 += random() % MAX_OFFSET_DIFF_S1_S2;
+		memcpy(rand_s2, rand_s1, comp_size);
 
 		/* change multiple bytes, 1/8 of total */
-		for (j = 0; j < SIZE / 8; j++) {
-			change = random() % SIZE;
+		for (j = 0; j < comp_size / 8; j++) {
+			change = random() % comp_size;
 			s2[change] = random() & 0xff;
 		}
 
-		test_one(s1, s2);
+		if (islarge)
+			test_one(rand_s1, rand_s2, LARGE_MAX_OFFSET,
+					LARGE_SIZE_START, comp_size);
+		else
+			test_one(rand_s1, rand_s2, SIZE, 0, comp_size);
 	}
 
 	return 0;
 }
 
+static int testcases(void)
+{
+	testcase(0);
+	testcase(1);
+	return 0;
+}
+
 int main(void)
 {
-	return test_harness(testcase, "memcmp");
+	return test_harness(testcases, "memcmp");
 }
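For reference, assuming the usual powerpc selftest build flow (the exact
targets may vary by tree), the reworked test can be built and run from
the kernel source root with:
------
make -C tools/testing/selftests/powerpc/stringloops
./tools/testing/selftests/powerpc/stringloops/memcmp
------
The -DCONFIG_KSM added to CFLAGS above compiles the KSM 32-byte precheck
path into the userspace build, and the stub enter/exit_vmx_ops() counters
let test_one() verify that every ENTER_VMX_OPS is paired with an
EXIT_VMX_OPS.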