From patchwork Wed May 10 00:33:33 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1779149
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
Date: Tue, 9 May 2023 19:33:33 -0500
Message-Id: <20230510003336.637851-1-goldstein.w.n@gmail.com>
In-Reply-To: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 / ncores_per_socket'. This patch updates that value to roughly 'sizeof_L3 / 4'.

The original value (specifically dividing by `ncores_per_socket`) was chosen to limit the amount of other threads' data a `memcpy`/`memset` could evict. Dividing by 'ncores_per_socket', however, leads to exceedingly low non-temporal thresholds and to using non-temporal stores in cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory, so using them at a size much smaller than L3 can place soon-to-be-accessed data much further away than it otherwise could be. As well, modern machines are able to detect streaming patterns (especially if REP MOVSB is used) and provide LRU hints to the memory subsystem. This in effect caps the total amount of eviction at 1/cache_associativity, far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold were done comparing non-temporal stores versus standard cacheable stores. A better comparison (linked below) is against REP MOVSB which, on the measured systems, is nearly 2x faster than non-temporal stores at the low end of the previous threshold, and within 10% for copies over 100MB (well past even the current threshold). In cases with a low number of threads competing for bandwidth, REP MOVSB is ~2x faster up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs such as Broadwell prefer something closer to `8`. This patch is meant to be followed up by another one that makes the divisor cpu-specific, but in the meantime (and for easier backporting), it settles on `4` as a middle ground.
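To make the size of the change concrete, here is a small standalone sketch of the two default formulas discussed above (an illustration only, with made-up example numbers; `l3_size` and `ncores_per_socket` are hypothetical names, not the variables glibc uses):

#include <stdio.h>

/* Old default: 3/4 of one core's share of the L3.  */
static unsigned long
old_threshold (unsigned long l3_size, unsigned int ncores_per_socket)
{
  return (l3_size / ncores_per_socket) * 3 / 4;
}

/* New default: 1/4 of the whole L3.  */
static unsigned long
new_threshold (unsigned long l3_size)
{
  return l3_size / 4;
}

int
main (void)
{
  /* Example: a 32 MB L3 shared by 16 cores.  */
  unsigned long l3 = 32UL * 1024 * 1024;
  printf ("old default: %lu bytes\n", old_threshold (l3, 16)); /* 1.5 MB */
  printf ("new default: %lu bytes\n", new_threshold (l3));     /* 8 MB */
  return 0;
}

With these example numbers the new default is roughly 5x higher than the old one, which matches the motivation above: copies in the few-MB range stop being forced through non-temporal stores.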
Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable stores were done using: https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a pdf in the GitHub repo):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4a1a5423ff 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                        long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-               & 0xff);
-        }
-
-      /* Cap usage of highest cache level to the number of supported
-         threads.  */
-      if (shared > 0 && threads > 0)
-        shared /= threads;
+        intel_bug_no_cache_info:
+          /* Assume that all logical threads share the highest cache
+             level.  */
+          threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+                     & 0xff);
+
+          /* Get per-thread size of highest level cache.  */
+          if (shared_per_thread > 0 && threads > 0)
+            shared_per_thread /= threads;
+        }
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+        shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
          = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
          = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
         shared = core;
+
+      if (shared_per_thread <= 0)
+        shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB. As well, it is the point where the fact that non-temporal
+     stores are forced back to main memory would have already occurred
+     for the majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing is
+     capped at 1/associativity.  */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking.  Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint.  As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value
      of 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is
      best if that operation cannot overflow.  Minimum of 0x4040 (16448) because the
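As a side note on the truncated context above: the `SIZE_MAX >> 4` / `0x4040` discussion amounts to clamping the computed threshold into a safe range. The following is a hedged sketch of that intent, not the literal glibc code:

#include <stdint.h>

/* Sketch: keep the threshold where the later ">> 4" in
   memmove-vec-unaligned-erms cannot overflow, and above the
   0x4040 (16448 byte) minimum mentioned in the comment.  */
static unsigned long
clamp_non_temporal_threshold (unsigned long threshold)
{
  const unsigned long minimum = 0x4040;
  const unsigned long maximum = SIZE_MAX >> 4;
  if (threshold < minimum)
    return minimum;
  if (threshold > maximum)
    return maximum;
  return threshold;
}

For completeness (not part of this patch): the computed default can normally still be overridden at run time through the `glibc.cpu.x86_non_temporal_threshold` tunable, name as I recall it, which is why only the default formula changes here.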
From patchwork Wed May 10 00:33:35 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1779151
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v6 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
Date: Tue, 9 May 2023 19:33:35 -0500
Message-Id: <20230510003336.637851-3-goldstein.w.n@gmail.com>
In-Reply-To: <20230510003336.637851-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230510003336.637851-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

Different systems prefer different divisors. From benchmarks[1] run so far, the following divisors have been found:
    ICX : 2
    SKX : 2
    BWD : 8

For Intel, we are generalizing that BWD and older prefer `8` as a divisor, and SKL and newer prefer `2`. This number can be further tuned as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
---
 sysdeps/x86/cpu-features.c         | 16 +++++++++++++--
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 9d433f8144..4cc1cd9fed 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -637,6 +637,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -720,8 +721,8 @@ init_cpu_features (struct cpu_features *cpu_features)
             of Core i3/i5/i7 processors if AVX is available.  */
          if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
            break;
-       case INTEL_BIGCORE_NEHALEM:
-       case INTEL_BIGCORE_WESTMERE:
+
+       enable_modern_features:
          /* Rep string instructions, unaligned load, unaligned copy,
             and pminub are fast on Intel Core i3, i5 and i7.  */
          cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -730,11 +731,20 @@ init_cpu_features (struct cpu_features *cpu_features)
               | bit_arch_Prefer_PMINUB_for_stringop);
          break;
 
+       case INTEL_BIGCORE_NEHALEM:
+       case INTEL_BIGCORE_WESTMERE:
+         /* Older CPUs prefer non-temporal stores at lower threshold.  */
+         cpu_features->cachesize_non_temporal_divisor = 8;
+         goto enable_modern_features;
+
          /* Default tuned Bigcore microarch.  */
        case INTEL_BIGCORE_SANDYBRIDGE:
        case INTEL_BIGCORE_IVYBRIDGE:
        case INTEL_BIGCORE_HASWELL:
        case INTEL_BIGCORE_BROADWELL:
+         cpu_features->cachesize_non_temporal_divisor = 8;
+         goto default_tuning;
+
        case INTEL_BIGCORE_SKYLAKE:
        case INTEL_BIGCORE_AMBERLAKE:
        case INTEL_BIGCORE_COFFEELAKE:
@@ -755,11 +765,13 @@ init_cpu_features (struct cpu_features *cpu_features)
        case INTEL_BIGCORE_SAPPHIRERAPIDS:
        case INTEL_BIGCORE_EMERALDRAPIDS:
        case INTEL_BIGCORE_GRANITERAPIDS:
+         cpu_features->cachesize_non_temporal_divisor = 2;
          goto default_tuning;
 
          /* Default tuned Mixed (bigcore + atom SOC).  */
        case INTEL_MIXED_LAKEFIELD:
        case INTEL_MIXED_ALDERLAKE:
+         cpu_features->cachesize_non_temporal_divisor = 2;
          goto default_tuning;
        }
 
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..864b00a521 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well, it is the point where the fact that non-temporal
-     stores are forced back to main memory would have already occurred
-     for the majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing is
-     capped at 1/associativity.  */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/2, 1/8] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor`, which
+     is microarch specific; the default is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB. As well, it is the point where the fact that
+     non-temporal stores are forced back to main memory would have already
+     occurred for the majority of the lines in the copy. Note, concerns about
+     the entire L3 cache being evicted by the copy are mostly alleviated by the
+     fact that modern HW detects streaming patterns and provides proper LRU
+     hints so that the maximum thrashing is capped at 1/associativity.  */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking.  Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint.  As well, their performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified, we default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
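Pulling the pieces of this patch together, the following hypothetical sketch (made-up enum tags and simplified flow, not the actual glibc dispatch shown in the hunks above) illustrates how the per-microarchitecture divisor feeds the default threshold:

/* Hypothetical microarchitecture tags, for illustration only.  */
enum uarch { UARCH_BROADWELL, UARCH_SKYLAKE, UARCH_ICELAKE, UARCH_OTHER };

/* Divisors from the commit message: BWD and older prefer 8,
   SKL and newer prefer 2, everything else keeps the generic 4.  */
static unsigned long
cachesize_non_temporal_divisor (enum uarch u)
{
  switch (u)
    {
    case UARCH_BROADWELL:
      return 8;
    case UARCH_SKYLAKE:
    case UARCH_ICELAKE:
      return 2;
    default:
      return 4;
    }
}

/* Default threshold = shared L3 size / per-uarch divisor, as in the patch.  */
static unsigned long
default_non_temporal_threshold (unsigned long shared_l3, enum uarch u)
{
  return shared_l3 / cachesize_non_temporal_divisor (u);
}

The design keeps `4` as the fallback so CPUs without an explicit entry keep the behavior introduced in patch 1/4.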
From patchwork Wed May 10 00:33:36 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1779150
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell'
Date: Tue, 9 May 2023 19:33:36 -0500
Message-Id: <20230510003336.637851-4-goldstein.w.n@gmail.com>
In-Reply-To: <20230510003336.637851-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230510003336.637851-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

Saltwell is just a die shrink of Bonnell, so the same micro-architectural optimization preferences apply.
---
 sysdeps/x86/cpu-features.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 4cc1cd9fed..517b1be34c 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -678,7 +678,9 @@ init_cpu_features (struct cpu_features *cpu_features)
          break;
 
          /* Unaligned load versions are faster than SSSE3
-            on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+            on Saltwell, Airmont, Silvermont, Goldmont, and Goldmont
+            Plus.  */
+       case INTEL_ATOM_SALTWELL:
        case INTEL_ATOM_AIRMONT:
        case INTEL_ATOM_SILVERMONT:
        case INTEL_ATOM_GOLDMONT:
@@ -710,7 +712,6 @@ init_cpu_features (struct cpu_features *cpu_features)
          /* Default tuned atom microarch.  */
        case INTEL_ATOM_SIERRAFOREST:
        case INTEL_ATOM_GRANDRIDGE:
-       case INTEL_ATOM_SALTWELL:
          goto default_tuning;
 
          /* Bigcore Tuning.  */