From patchwork Sun Nov 12 20:51:20 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1862914
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: x86: Fix unchecked AVX512-VBMI2 usage in strrchr-evex-base.S
Date: Sun, 12 Nov 2023 14:51:20 -0600
Message-Id: <20231112205120.3672636-1-goldstein.w.n@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231101221657.311121-1-goldstein.w.n@gmail.com>
References: <20231101221657.311121-1-goldstein.w.n@gmail.com>

strrchr-evex-base used `vpcompress{b|d}` in the page-cross logic but was
missing the CPU_FEATURE checks for VBMI2 in the ifunc/ifunc-impl-list.

The fix is either to add those checks or to change the logic so it does
not use `vpcompress{b|d}`. This patch takes the latter approach so that
the strrchr-evex implementation remains usable on SKX.

The new implementation is a bit slower, but this is a cold path so it's
probably okay.
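For context, the first option would have meant gating the EVEX variants
on AVX512-VBMI2 at ifunc-selection time. Below is a minimal,
self-contained sketch of that CPUID test (illustrative only: the helper
name is hypothetical, and glibc itself would instead extend its
CPU_FEATURE_USABLE checks in the ifunc/ifunc-impl-list code):

#include <cpuid.h>
#include <stdio.h>

/* AVX512-VBMI2 (which provides vpcompressb/vpcompressw) is reported in
   CPUID.(EAX=7,ECX=0):ECX bit 6.  */
#define AVX512_VBMI2_BIT (1u << 6)

static int
avx512_vbmi2_present (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    return 0;
  /* NB: a complete usability check must also verify OS support for the
     AVX-512 register state via XGETBV (XCR0), which glibc's
     CPU_FEATURE_USABLE handles internally.  */
  return (ecx & AVX512_VBMI2_BIT) != 0;
}

int
main (void)
{
  printf ("AVX512-VBMI2 present: %d\n", avx512_vbmi2_present ());
  return 0;
}

SKX predates VBMI2 (it first appeared in Ice Lake), which is why the
unchecked `vpcompressb` would fault there.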
Reviewed-by: Sunil K Pandey
---
 sysdeps/x86_64/multiarch/strrchr-evex-base.S | 75 +++++++++++++-------
 1 file changed, 51 insertions(+), 24 deletions(-)

diff --git a/sysdeps/x86_64/multiarch/strrchr-evex-base.S b/sysdeps/x86_64/multiarch/strrchr-evex-base.S
index cd6a0a870a..da5bf1237a 100644
--- a/sysdeps/x86_64/multiarch/strrchr-evex-base.S
+++ b/sysdeps/x86_64/multiarch/strrchr-evex-base.S
@@ -35,18 +35,20 @@
 # define CHAR_SIZE	4
 # define VPCMP		vpcmpd
 # define VPMIN		vpminud
-# define VPCOMPRESS	vpcompressd
 # define VPTESTN	vptestnmd
 # define VPTEST		vptestmd
 # define VPBROADCAST	vpbroadcastd
 # define VPCMPEQ	vpcmpeqd
 
 # else
-# define SHIFT_REG	VRDI
+#  if VEC_SIZE == 64
+#   define SHIFT_REG	VRCX
+#  else
+#   define SHIFT_REG	VRDI
+#  endif
 # define CHAR_SIZE	1
 # define VPCMP		vpcmpb
 # define VPMIN		vpminub
-# define VPCOMPRESS	vpcompressb
 # define VPTESTN	vptestnmb
 # define VPTEST		vptestmb
 # define VPBROADCAST	vpbroadcastb
@@ -56,6 +58,12 @@
 #  define KORTEST_M	KORTEST
 # endif
 
+# if VEC_SIZE == 32 || (defined USE_AS_WCSRCHR)
+#  define SHIFT_R(cnt, val)	shrx cnt, val, val
+# else
+#  define SHIFT_R(cnt, val)	shr %cl, val
+# endif
+
 # define VMATCH	VMM(0)
 # define CHAR_PER_VEC	(VEC_SIZE / CHAR_SIZE)
 # define PAGE_SIZE	4096
@@ -71,7 +79,7 @@ ENTRY_P2ALIGN(STRRCHR, 6)
 	andl	$(PAGE_SIZE - 1), %eax
 	cmpl	$(PAGE_SIZE - VEC_SIZE), %eax
 	jg	L(cross_page_boundary)
-
+L(page_cross_continue):
 	VMOVU	(%rdi), %VMM(1)
 	/* k0 has a 1 for each zero CHAR in YMM1.  */
 	VPTESTN	%VMM(1), %VMM(1), %k0
@@ -79,7 +87,7 @@ ENTRY_P2ALIGN(STRRCHR, 6)
 	test	%VGPR(rsi), %VGPR(rsi)
 	jz	L(aligned_more)
 	/* fallthrough: zero CHAR in first VEC.  */
-L(page_cross_return):
+
 	/* K1 has a 1 for each search CHAR match in VEC(1).  */
 	VPCMPEQ	%VMATCH, %VMM(1), %k1
 	KMOV	%k1, %VGPR(rax)
@@ -167,7 +175,6 @@ L(first_vec_x1_return):
 
 	.p2align 4,, 12
 L(aligned_more):
-L(page_cross_continue):
 	/* Need to keep original pointer incase VEC(1) has last match.  */
 	movq	%rdi, %rsi
 	andq	$-VEC_SIZE, %rdi
@@ -340,34 +347,54 @@ L(return_new_match_ret):
 	leaq	(VEC_SIZE * 2)(%rdi, %rax, CHAR_SIZE), %rax
 	ret
 
-	.p2align 4,, 4
 L(cross_page_boundary):
+	/* eax contains all the page offset bits of src (rdi). `xor rdi,
+	   rax` sets pointer will all page offset bits cleared so
+	   offset of (PAGE_SIZE - VEC_SIZE) will get last aligned VEC
+	   before page cross (guaranteed to be safe to read). Doing this
+	   as opposed to `movq %rdi, %rax; andq $-VEC_SIZE, %rax` saves
+	   a bit of code size.  */
 	xorq	%rdi, %rax
-	mov	$-1, %VRDX
-	VMOVU	(PAGE_SIZE - VEC_SIZE)(%rax), %VMM(6)
-	VPTESTN	%VMM(6), %VMM(6), %k0
+	VMOVU	(PAGE_SIZE - VEC_SIZE)(%rax), %VMM(1)
+	VPTESTN	%VMM(1), %VMM(1), %k0
 	KMOV	%k0, %VRSI
 
-# ifdef USE_AS_WCSRCHR
+	/* Shift out zero CHAR matches that are before the beginning of
+	   src (rdi).  */
+# if VEC_SIZE == 64 || (defined USE_AS_WCSRCHR)
 	movl	%edi, %ecx
-	and	$(VEC_SIZE - 1), %ecx
+# endif
+# ifdef USE_AS_WCSRCHR
+	andl	$(VEC_SIZE - 1), %ecx
 	shrl	$2, %ecx
 # endif
-	shlx	%SHIFT_REG, %VRDX, %VRDX
+	SHIFT_R	(%SHIFT_REG, %VRSI)
+# if VEC_SIZE == 32 || (defined USE_AS_WCSRCHR)
+	/* For strrchr-evex512 we use SHIFT_R as shr which will set zero
+	   flag.  */
+	test	%VRSI, %VRSI
+# endif
+	jz	L(page_cross_continue)
 
+	/* Found zero CHAR so need to test for search CHAR.  */
+	VPCMPEQ	%VMATCH, %VMM(1), %k1
+	KMOV	%k1, %VRAX
+	/* Shift out search CHAR matches that are before the beginning of
+	   src (rdi).  */
+	SHIFT_R	(%SHIFT_REG, %VRAX)
+	/* Check if any search CHAR match in range.  */
+	blsmsk	%VRSI, %VRSI
+	and	%VRSI, %VRAX
+	jz	L(ret2)
+	bsr	%VRAX, %VRAX
 # ifdef USE_AS_WCSRCHR
-	kmovw	%edx, %k1
+	leaq	(%rdi, %rax, CHAR_SIZE), %rax
 # else
-	KMOV	%VRDX, %k1
+	addq	%rdi, %rax
 # endif
-
-	VPCOMPRESS %VMM(6), %VMM(1){%k1}{z}
-	/* We could technically just jmp back after the vpcompress but
-	   it doesn't save any 16-byte blocks.  */
-	shrx	%SHIFT_REG, %VRSI, %VRSI
-	test	%VRSI, %VRSI
-	jnz	L(page_cross_return)
-	jmp	L(page_cross_continue)
-	/* 1-byte from cache line.  */
+L(ret2):
+	ret
	/* 3 bytes from cache-line for evex.  */
+	/* 0 bytes from cache-line for evex512.  */
 END(STRRCHR)
 #endif
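A note on the replacement sequence: the new page-cross tail finds the
last search CHAR at or before the terminating NUL using only mask
arithmetic (blsmsk/and/bsr), so no VBMI2 instruction is needed. A scalar
C model of what it computes (hypothetical helper, assuming the 32-bit
mask case, i.e. VEC_SIZE == 32; `zero` and `match` stand for the
already-shifted k-register masks):

#include <stddef.h>
#include <stdint.h>

/* zero:  one bit per CHAR of the vector that was NUL (KMOV of k0 after
   VPTESTN).  match: one bit per CHAR equal to the search CHAR (KMOV of
   k1 after VPCMPEQ).  Both have already been shifted so that bit 0 is
   the first valid CHAR of src.  The caller guarantees zero != 0; the
   zero == 0 case branches back via jz L(page_cross_continue).  */
static const char *
last_match_before_nul (const char *src, uint32_t zero, uint32_t match)
{
  /* blsmsk %VRSI, %VRSI: every bit up to and including the lowest set
     bit of zero, i.e. all positions at or before the NUL.  */
  uint32_t in_range = zero ^ (zero - 1);
  /* and %VRSI, %VRAX: discard matches past the end of the string.  */
  match &= in_range;
  if (match == 0)
    return NULL;		/* jz L(ret2) */
  /* bsr %VRAX, %VRAX: highest remaining bit = last occurrence.  */
  return src + (31 - __builtin_clz (match));
}

Because the result can be produced directly from the two masks, the
page-cross block now returns by itself instead of jumping back through
a vpcompress-adjusted vector, which is what removes the VBMI2
dependency (and the L(page_cross_return) label).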