From patchwork Tue Aug 13 13:55:25 2024
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: x86: Add new cpu-flag `Prefer_Non_Temporal`
Date: Tue, 13 Aug 2024 21:55:25 +0800
Message-Id: <20240813135525.144529-1-goldstein.w.n@gmail.com>
In-Reply-To: <20240710065226.2509525-1-goldstein.w.n@gmail.com>
References: <20240710065226.2509525-1-goldstein.w.n@gmail.com>

The goal of this flag is to allow targets which don't prefer/have ERMS
to still access the non-temporal memset implementation.
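Not part of the patch, but for context: like the other preferred-feature bits, the new flag would be toggled through the existing `glibc.cpu.hwcaps` tunable (the `tst-hwcap-tunables.c` hunk below exercises exactly this spelling). A minimal sketch — `sh -c 'echo …'` stands in for whatever real program would inherit the environment:

```shell
# Enable the new preference for one process tree; the child sees the
# tunable string in its environment (a real target binary would go here).
GLIBC_TUNABLES=glibc.cpu.hwcaps=Prefer_Non_Temporal_Without_ERMS \
  sh -c 'echo "$GLIBC_TUNABLES"'
# A leading '-' disables the bit instead, as in the test below:
#   glibc.cpu.hwcaps=-Prefer_Non_Temporal_Without_ERMS
```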
---
 sysdeps/x86/cpu-tunables.c                    |  4 ++
 sysdeps/x86/dl-cacheinfo.h                    | 46 ++++++++++++++-----
 ...cpu-features-preferred_feature_index_1.def |  1 +
 sysdeps/x86/tst-hwcap-tunables.c              |  6 ++-
 sysdeps/x86_64/multiarch/ifunc-memset.h       | 19 ++++++--
 5 files changed, 58 insertions(+), 18 deletions(-)

diff --git a/sysdeps/x86/cpu-tunables.c b/sysdeps/x86/cpu-tunables.c
index ccc6b64dc2..b01b703517 100644
--- a/sysdeps/x86/cpu-tunables.c
+++ b/sysdeps/x86/cpu-tunables.c
@@ -255,6 +255,10 @@ TUNABLE_CALLBACK (set_hwcaps) (tunable_val_t *valp)
 		(n, cpu_features, Prefer_PMINUB_for_stringop, SSE2, 26);
 	    }
 	  break;
+	case 32:
+	  CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features,
+					    Prefer_Non_Temporal_Without_ERMS, 32);
+	  break;
 	}
     }
 }
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index a1c03b8903..600bd66a93 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -988,14 +988,6 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   if (CPU_FEATURE_USABLE_P (cpu_features, FSRM))
     rep_movsb_threshold = 2112;
 
-  /* Non-temporal stores are more performant on Intel and AMD hardware above
-     non_temporal_threshold.  Enable this for both Intel and AMD hardware.  */
-  unsigned long int memset_non_temporal_threshold = SIZE_MAX;
-  if (!CPU_FEATURES_ARCH_P (cpu_features, Avoid_Non_Temporal_Memset)
-      && (cpu_features->basic.kind == arch_kind_intel
-	  || cpu_features->basic.kind == arch_kind_amd))
-    memset_non_temporal_threshold = non_temporal_threshold;
-
   /* For AMD CPUs that support ERMS (Zen3+), REP MOVSB is in a lot of cases
      slower than the vectorized path (and for some alignments, it is really
      slow, check BZ #30994).  */
@@ -1017,6 +1009,15 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   if (tunable_size != 0)
     shared = tunable_size;
 
+  /* Non-temporal stores are more performant on Intel and AMD hardware above
+     non_temporal_threshold.  Enable this for both Intel and AMD hardware.  */
+  unsigned long int memset_non_temporal_threshold = SIZE_MAX;
+  if (!CPU_FEATURES_ARCH_P (cpu_features, Avoid_Non_Temporal_Memset)
+      && (CPU_FEATURES_ARCH_P (cpu_features, Prefer_Non_Temporal_Without_ERMS)
+	  || cpu_features->basic.kind == arch_kind_intel
+	  || cpu_features->basic.kind == arch_kind_amd))
+    memset_non_temporal_threshold = non_temporal_threshold;
+
   tunable_size = TUNABLE_GET (x86_non_temporal_threshold, long int, NULL);
   if (tunable_size > minimum_non_temporal_threshold
       && tunable_size <= maximum_non_temporal_threshold)
@@ -1042,14 +1043,37 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
        slightly better than ERMS.  */
     rep_stosb_threshold = SIZE_MAX;
 
+  /*
+     For memset, the non-temporal implementation is only accessed through the
+     stosb code.  ie:
+     ```
+     if (size >= rep_stosb_thresh)
+       {
+	 if (size >= non_temporal_thresh)
+	   {
+	     do_non_temporal ();
+	   }
+	 do_stosb ();
+       }
+     do_normal_vec_loop ();
+     ```
+     So if we prefer non-temporal, set `rep_stosb_thresh = non_temporal_thresh`
+     to enable the implementation.  If `rep_stosb_thresh = non_temporal_thresh`,
+     `rep stosb` will never be used.
+   */
+  TUNABLE_SET_WITH_BOUNDS (x86_memset_non_temporal_threshold,
+			   memset_non_temporal_threshold,
+			   minimum_non_temporal_threshold, SIZE_MAX);
+  if (CPU_FEATURES_ARCH_P (cpu_features, Prefer_Non_Temporal_Without_ERMS))
+    rep_stosb_threshold
+      = TUNABLE_GET (x86_memset_non_temporal_threshold, long int, NULL);
+
+
   TUNABLE_SET_WITH_BOUNDS (x86_data_cache_size, data, 0, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_shared_cache_size, shared, 0, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
 			   minimum_non_temporal_threshold,
 			   maximum_non_temporal_threshold);
-  TUNABLE_SET_WITH_BOUNDS (x86_memset_non_temporal_threshold,
-			   memset_non_temporal_threshold,
-			   minimum_non_temporal_threshold, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
 			   minimum_rep_movsb_threshold, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
index 61bbbc2e89..60a3a382ff 100644
--- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
+++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
@@ -34,3 +34,4 @@ BIT (MathVec_Prefer_No_AVX512)
 BIT (Prefer_FSRM)
 BIT (Avoid_Short_Distance_REP_MOVSB)
 BIT (Avoid_Non_Temporal_Memset)
+BIT (Prefer_Non_Temporal_Without_ERMS)
\ No newline at end of file
diff --git a/sysdeps/x86/tst-hwcap-tunables.c b/sysdeps/x86/tst-hwcap-tunables.c
index 94307283d7..4495d795e4 100644
--- a/sysdeps/x86/tst-hwcap-tunables.c
+++ b/sysdeps/x86/tst-hwcap-tunables.c
@@ -60,7 +60,8 @@ static const struct test_t
     /* Disable everything.  */
     "-Prefer_ERMS,-Prefer_FSRM,-AVX,-AVX2,-AVX512F,-AVX512VL,"
     "-SSE4_1,-SSE4_2,-SSSE3,-Fast_Unaligned_Load,-ERMS,"
-    "-AVX_Fast_Unaligned_Load,-Avoid_Non_Temporal_Memset",
+    "-AVX_Fast_Unaligned_Load,-Avoid_Non_Temporal_Memset,"
+    "-Prefer_Non_Temporal_Without_ERMS",
     test_1,
     array_length (test_1)
   },
@@ -68,7 +69,8 @@ static const struct test_t
     /* Same as before, but with some empty suboptions.  */
     ",-,-Prefer_ERMS,-Prefer_FSRM,-AVX,-AVX2,-AVX512F,-AVX512VL,"
     "-SSE4_1,-SSE4_2,-SSSE3,-Fast_Unaligned_Load,,-,"
-    "-ERMS,-AVX_Fast_Unaligned_Load,-Avoid_Non_Temporal_Memset,-,",
+    "-ERMS,-AVX_Fast_Unaligned_Load,-Avoid_Non_Temporal_Memset,"
+    "-Prefer_Non_Temporal_Without_ERMS,-,",
     test_1,
     array_length (test_1)
   }
diff --git a/sysdeps/x86_64/multiarch/ifunc-memset.h b/sysdeps/x86_64/multiarch/ifunc-memset.h
index 7a637ef7ca..3ca64bb853 100644
--- a/sysdeps/x86_64/multiarch/ifunc-memset.h
+++ b/sysdeps/x86_64/multiarch/ifunc-memset.h
@@ -61,7 +61,9 @@ IFUNC_SELECTOR (void)
 	  && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
 	  && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2))
 	{
-	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)
+	      || CPU_FEATURES_ARCH_P (cpu_features,
+				      Prefer_Non_Temporal_Without_ERMS))
 	    return OPTIMIZE (avx512_unaligned_erms);
 
 	  return OPTIMIZE (avx512_unaligned);
@@ -76,7 +78,9 @@ IFUNC_SELECTOR (void)
 	  && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
 	  && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2))
 	{
-	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)
+	      || CPU_FEATURES_ARCH_P (cpu_features,
+				      Prefer_Non_Temporal_Without_ERMS))
 	    return OPTIMIZE (evex_unaligned_erms);
 
 	  return OPTIMIZE (evex_unaligned);
@@ -84,7 +88,9 @@ IFUNC_SELECTOR (void)
 
       if (CPU_FEATURE_USABLE_P (cpu_features, RTM))
 	{
-	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)
+	      || CPU_FEATURES_ARCH_P (cpu_features,
+				      Prefer_Non_Temporal_Without_ERMS))
 	    return OPTIMIZE (avx2_unaligned_erms_rtm);
 
 	  return OPTIMIZE (avx2_unaligned_rtm);
@@ -93,14 +99,17 @@ IFUNC_SELECTOR (void)
 
       if (X86_ISA_CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER, !))
 	{
-	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+	  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)
+	      || CPU_FEATURES_ARCH_P (cpu_features,
+				      Prefer_Non_Temporal_Without_ERMS))
 	    return OPTIMIZE (avx2_unaligned_erms);
 
 	  return OPTIMIZE (avx2_unaligned);
 	}
     }
 
-  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)
+      || CPU_FEATURES_ARCH_P (cpu_features, Prefer_Non_Temporal_Without_ERMS))
     return OPTIMIZE (sse2_unaligned_erms);
 
   return OPTIMIZE (sse2_unaligned);