From patchwork Tue Apr 25 03:43:20 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1773170
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
Date: Mon, 24 Apr 2023 22:43:20 -0500
Message-Id: <20230425034320.2136731-1-goldstein.w.n@gmail.com>
In-Reply-To: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

The current `non_temporal_threshold` is set to roughly
'3/4 * sizeof_L3 / ncores_per_socket'. This patch updates that value to
roughly 'sizeof_L3 / 2'.

The original value (specifically dividing by `ncores_per_socket`) was
chosen to limit the amount of other threads' data a `memcpy`/`memset`
could evict. Dividing by 'ncores_per_socket', however, leads to
exceedingly low non-temporal thresholds and to using non-temporal
stores in cases where `rep movsb` is multiple times faster.
Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if
`rep movsb` is used) and provide LRU hints to the memory subsystem.
This in effect caps the total amount of eviction at
1/cache_associativity, far below meaningfully thrashing the entire
cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores against standard cacheable
stores. A better comparison (linked below) is against `rep movsb`
which, on the measured systems, is nearly 2x faster than non-temporal
stores at the low end of the previous threshold, and within 10% for
copies over 100MB (well past even the current threshold). In cases
with a low number of threads competing for bandwidth, `rep movsb` is
~2x faster up to `sizeof_L3`.
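For illustration only (not part of the patch), here is a minimal C
sketch of how the old and new defaults differ; `sizeof_l3` and
`ncores_per_socket` are stand-ins for the values dl-cacheinfo.h
derives from CPUID:

#include <stdio.h>

/* Sketch, not the actual dl-cacheinfo.h code.  */
static unsigned long
old_default_threshold (unsigned long sizeof_l3,
		       unsigned int ncores_per_socket)
{
  /* Old default: 3/4 of one thread's share of the L3.  */
  return sizeof_l3 / ncores_per_socket * 3 / 4;
}

static unsigned long
new_default_threshold (unsigned long sizeof_l3)
{
  /* New default: half of the whole L3.  */
  return sizeof_l3 / 2;
}

int
main (void)
{
  /* Example: a 32MB L3 shared by 16 cores.  */
  unsigned long l3 = 32UL << 20;
  printf ("old: %lu KB\n", old_default_threshold (l3, 16) >> 10); /* 1536 KB */
  printf ("new: %lu KB\n", new_default_threshold (l3) >> 10);	  /* 16384 KB */
  return 0;
}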
Benchmarks comparing non-temporal stores, `rep movsb`, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in PDF on the GitHub repo):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..df75fbd868 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
 		       long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
 	}
       else
 	{
-intel_bug_no_cache_info:
-	  /* Assume that all logical threads share the highest cache
-	     level.  */
-	  threads
-	    = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-	}
-
-      /* Cap usage of highest cache level to the number of supported
-	 threads.  */
-      if (shared > 0 && threads > 0)
-	shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-	core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
 	/* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of the
+     size of the chip's cache. For most Intel and AMD processors with
+     an initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using 1/2 of the L3 is meant
+     to estimate the point where non-temporal stores begin
+     out-competing other methods, as well as the point where
+     write-back to memory would already have occurred for the
+     majority of the lines in the copy.
+     Note, concerns about the entire L3 cache being evicted by the
+     copy are mostly alleviated by the fact that modern HW detects
+     streaming patterns and provides proper LRU hints so that the
+     maximum thrashing is capped at 1/associativity.  */
+  unsigned long int non_temporal_threshold = shared / 2;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable
+     stores run a higher risk of actually thrashing the cache as they
+     don't have a HW LRU hint. As well, their performance in highly
+     parallel situations is noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts
      the value of 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH`
      (4) and it is best if that operation cannot overflow. Minimum of
     0x4040 (16448) because the
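As a rough illustration of how such a threshold drives strategy
selection (a hypothetical standalone sketch, not glibc's actual
dispatch, which lives in the memmove-vec-unaligned-erms assembly;
the threshold constant and function names here are made up):

#include <stddef.h>
#include <emmintrin.h> /* SSE2: _mm_stream_si128, _mm_sfence.  */

/* Hypothetical threshold; in glibc the tuned value is
   x86_non_temporal_threshold.  */
static size_t non_temporal_threshold = 16UL << 20; /* e.g. 16MB.  */

/* Cacheable copy via `rep movsb` (fast when ERMS is available).  */
static void
copy_rep_movsb (void *dst, const void *src, size_t n)
{
  asm volatile ("rep movsb"
		: "+D" (dst), "+S" (src), "+c" (n)
		:: "memory");
}

/* Simplified non-temporal copy: assumes both pointers are 16-byte
   aligned and n is a multiple of 16; a real implementation handles
   the edges.  */
static void
copy_non_temporal (void *dst, const void *src, size_t n)
{
  __m128i *d = (__m128i *) dst;
  const __m128i *s = (const __m128i *) src;
  for (size_t i = 0; i < n / 16; i++)
    _mm_stream_si128 (d + i, _mm_load_si128 (s + i));
  _mm_sfence (); /* Make the streaming stores globally visible.  */
}

static void
copy_dispatch (void *dst, const void *src, size_t n)
{
  if (n < non_temporal_threshold)
    /* Below the threshold, the cacheable `rep movsb` wins; streaming
       detection caps eviction at roughly 1/associativity.  */
    copy_rep_movsb (dst, src, n);
  else
    /* Above it, non-temporal stores avoid displacing the L3.  */
    copy_non_temporal (dst, src, n);
}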