From patchwork Mon Apr 24 05:03:29 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1772545
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
Date: Mon, 24 Apr 2023 00:03:29 -0500
Message-Id: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
From: Noah Goldstein
Reply-To: Noah Goldstein

The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 / ncores_per_socket'. This patch updates that value to roughly 'sizeof_L3 / 2'.

The original value (specifically the division by `ncores_per_socket`) was chosen to limit the amount of other threads' data a `memcpy`/`memset` could evict. Dividing by 'ncores_per_socket', however, leads to exceedingly low non-temporal thresholds and to using non-temporal stores in cases where `rep movsb` is multiple times faster. Furthermore, non-temporal stores are written directly back to main memory, so using them at a size much smaller than L3 can place soon-to-be-accessed data much further away than it otherwise would be. As well, modern machines are able to detect streaming patterns (especially if `rep movsb` is used) and provide LRU hints to the memory subsystem. This in effect caps the total amount of eviction at 1/cache_associativity, far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold were done comparing non-temporal stores against standard cacheable stores. A better comparison (linked below) is against `rep movsb`, which, on the measured systems, is nearly 2x faster than non-temporal stores at the low end of the previous threshold, and within 10% for copies over 100MB (well past even the current threshold). In cases with a low number of threads competing for bandwidth, `rep movsb` is ~2x faster up to `sizeof_L3`.

Benchmarks comparing non-temporal stores, `rep movsb`, and cacheable stores were done using: https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github): https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
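To make the scale of the change concrete, here is a minimal sketch of the two defaults, assuming a hypothetical machine with a 32 MB shared L3 and 16 cores per socket (the real values are read from CPUID by dl_init_cacheinfo; the numbers here are illustrative only):

#include <stdio.h>

int
main (void)
{
  /* Assumed example values; glibc detects the real ones at startup.  */
  unsigned long shared_l3 = 32UL * 1024 * 1024;
  unsigned long ncores_per_socket = 16;

  /* Old default: 3/4 of one thread's share of the L3.  */
  unsigned long old_threshold = (shared_l3 / ncores_per_socket) * 3 / 4;

  /* New default: half of the whole L3.  */
  unsigned long new_threshold = shared_l3 / 2;

  printf ("old: %lu KB, new: %lu KB\n",
          old_threshold / 1024, new_threshold / 1024);
  /* Prints "old: 1536 KB, new: 16384 KB", i.e. copies between roughly
     1.5 MB and 16 MB keep using `rep movsb` rather than non-temporal
     stores under the new default.  */
  return 0;
}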
--- sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++--------------------- 1 file changed, 14 insertions(+), 21 deletions(-) diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h index ec88945b39..f25309dbc8 100644 --- a/sysdeps/x86/dl-cacheinfo.h +++ b/sysdeps/x86/dl-cacheinfo.h @@ -604,20 +604,11 @@ intel_bug_no_cache_info: = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16) & 0xff); } - - /* Cap usage of highest cache level to the number of supported - threads. */ - if (shared > 0 && threads > 0) - shared /= threads; } /* Account for non-inclusive L2 and L3 caches. */ if (!inclusive_cache) - { - if (threads_l2 > 0) - core /= threads_l2; - shared += core; - } + shared += core; *shared_ptr = shared; *threads_ptr = threads; @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) cpu_features->level3_cache_linesize = level3_cache_linesize; cpu_features->level4_cache_size = level4_cache_size; - /* The default setting for the non_temporal threshold is 3/4 of one - thread's share of the chip's cache. For most Intel and AMD processors - with an initial release date between 2017 and 2020, a thread's typical - share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4 - threshold leaves 125 KBytes to 500 KBytes of the thread's data - in cache after a maximum temporal copy, which will maintain - in cache a reasonable portion of the thread's stack and other - active data. If the threshold is set higher than one thread's - share of the cache, it has a substantial risk of negatively - impacting the performance of other threads running on the chip. */ - unsigned long int non_temporal_threshold = shared * 3 / 4; + /* The default setting for the non_temporal threshold is 1/2 of the size + of the chip's cache. For most Intel and AMD processors with an + initial release date between 2017 and 2023, a thread's typical + share of the cache is from 18-64MB. Using 1/2 of the L3 is meant to + estimate the point where non-temporal stores begin outcompeting + other methods, as well as the point where non-temporal + stores being forced back to main memory would already have occurred for the + majority of the lines in the copy. Note, concerns about the + entire L3 cache being evicted by the copy are mostly alleviated + by the fact that modern HW detects streaming patterns and + provides proper LRU hints so that the maximum thrashing is + capped at 1/associativity. */ + unsigned long int non_temporal_threshold = shared / 2; /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best if that operation cannot overflow.
Minimum of 0x4040 (16448) because the

From patchwork Wed May 10 00:33:34 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1779148
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
Date: Tue, 9 May 2023 19:33:34 -0500
Message-Id: <20230510003336.637851-2-goldstein.w.n@gmail.com>
In-Reply-To: <20230510003336.637851-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230510003336.637851-1-goldstein.w.n@gmail.com>
From: Noah Goldstein
Reply-To: Noah Goldstein

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and setting preferred features, is difficult to follow/extend. The cases use magic numbers and many microarchitectures are missing. This makes it difficult to reason about what is implemented so far and/or how/where to add support for new features.

This patch splits the model detection and preference setting stages so that CPU preferences can be set based on a complete list of available microarchitectures, rather than based on model magic numbers. (A small standalone sketch of this decode-then-tune pattern is included after the patch below.)

--- sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++-------- 1 file changed, 317 insertions(+), 84 deletions(-) diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c index 5bff8ec0b4..9d433f8144 100644 --- a/sysdeps/x86/cpu-features.c +++ b/sysdeps/x86/cpu-features.c @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load == index_arch_Fast_Copy_Backward)), "Incorrect index_arch_Fast_Unaligned_Load"); + +/* Intel Family-6 microarch list. */ +enum +{ + /* Atom processors. */ + INTEL_ATOM_BONNELL, + INTEL_ATOM_SALTWELL, + INTEL_ATOM_SILVERMONT, + INTEL_ATOM_AIRMONT, + INTEL_ATOM_GOLDMONT, + INTEL_ATOM_GOLDMONT_PLUS, + INTEL_ATOM_SIERRAFOREST, + INTEL_ATOM_GRANDRIDGE, + INTEL_ATOM_TREMONT, + + /* Bigcore processors.
*/ + INTEL_BIGCORE_MEROM, + INTEL_BIGCORE_PENRYN, + INTEL_BIGCORE_DUNNINGTON, + INTEL_BIGCORE_NEHALEM, + INTEL_BIGCORE_WESTMERE, + INTEL_BIGCORE_SANDYBRIDGE, + INTEL_BIGCORE_IVYBRIDGE, + INTEL_BIGCORE_HASWELL, + INTEL_BIGCORE_BROADWELL, + INTEL_BIGCORE_SKYLAKE, + INTEL_BIGCORE_AMBERLAKE, + INTEL_BIGCORE_COFFEELAKE, + INTEL_BIGCORE_WHISKEYLAKE, + INTEL_BIGCORE_KABYLAKE, + INTEL_BIGCORE_COMETLAKE, + INTEL_BIGCORE_SKYLAKE_AVX512, + INTEL_BIGCORE_CANNONLAKE, + INTEL_BIGCORE_CASCADELAKE, + INTEL_BIGCORE_COOPERLAKE, + INTEL_BIGCORE_ICELAKE, + INTEL_BIGCORE_TIGERLAKE, + INTEL_BIGCORE_ROCKETLAKE, + INTEL_BIGCORE_SAPPHIRERAPIDS, + INTEL_BIGCORE_RAPTORLAKE, + INTEL_BIGCORE_EMERALDRAPIDS, + INTEL_BIGCORE_METEORLAKE, + INTEL_BIGCORE_LUNARLAKE, + INTEL_BIGCORE_ARROWLAKE, + INTEL_BIGCORE_GRANITERAPIDS, + + /* Mixed (bigcore + atom SOC). */ + INTEL_MIXED_LAKEFIELD, + INTEL_MIXED_ALDERLAKE, + + /* KNL. */ + INTEL_KNIGHTS_MILL, + INTEL_KNIGHTS_LANDING, + + /* Unknown. */ + INTEL_UNKNOWN, +}; + +static unsigned int +intel_get_fam6_microarch (unsigned int model, unsigned int stepping) +{ + switch (model) + { + case 0x1C: + case 0x26: + return INTEL_ATOM_BONNELL; + case 0x27: + case 0x35: + case 0x36: + return INTEL_ATOM_SALTWELL; + case 0x37: + case 0x4A: + case 0x4D: + case 0x5D: + return INTEL_ATOM_SILVERMONT; + case 0x4C: + case 0x5A: + case 0x75: + return INTEL_ATOM_AIRMONT; + case 0x5C: + case 0x5F: + return INTEL_ATOM_GOLDMONT; + case 0x7A: + return INTEL_ATOM_GOLDMONT_PLUS; + case 0xAF: + return INTEL_ATOM_SIERRAFOREST; + case 0xB6: + return INTEL_ATOM_GRANDRIDGE; + case 0x86: + case 0x96: + case 0x9C: + return INTEL_ATOM_TREMONT; + case 0x0F: + case 0x16: + return INTEL_BIGCORE_MEROM; + case 0x17: + return INTEL_BIGCORE_PENRYN; + case 0x1D: + return INTEL_BIGCORE_DUNNINGTON; + case 0x1A: + case 0x1E: + case 0x1F: + case 0x2E: + return INTEL_BIGCORE_NEHALEM; + case 0x25: + case 0x2C: + case 0x2F: + return INTEL_BIGCORE_WESTMERE; + case 0x2A: + case 0x2D: + return INTEL_BIGCORE_SANDYBRIDGE; + case 0x3A: + case 0x3E: + return INTEL_BIGCORE_IVYBRIDGE; + case 0x3C: + case 0x3F: + case 0x45: + case 0x46: + return INTEL_BIGCORE_HASWELL; + case 0x3D: + case 0x47: + case 0x4F: + case 0x56: + return INTEL_BIGCORE_BROADWELL; + case 0x4E: + case 0x5E: + return INTEL_BIGCORE_SKYLAKE; + case 0x8E: + switch (stepping) + { + case 0x09: + return INTEL_BIGCORE_AMBERLAKE; + case 0x0A: + return INTEL_BIGCORE_COFFEELAKE; + case 0x0B: + case 0x0C: + return INTEL_BIGCORE_WHISKEYLAKE; + default: + return INTEL_BIGCORE_KABYLAKE; + } + case 0x9E: + switch (stepping) + { + case 0x0A: + case 0x0B: + case 0x0C: + case 0x0D: + return INTEL_BIGCORE_COFFEELAKE; + default: + return INTEL_BIGCORE_KABYLAKE; + } + case 0xA5: + case 0xA6: + return INTEL_BIGCORE_COMETLAKE; + case 0x66: + return INTEL_BIGCORE_CANNONLAKE; + case 0x55: + switch (stepping) + { + case 0x06: + case 0x07: + return INTEL_BIGCORE_CASCADELAKE; + case 0x0b: + return INTEL_BIGCORE_COOPERLAKE; + default: + return INTEL_BIGCORE_SKYLAKE_AVX512; + } + case 0x6A: + case 0x6C: + case 0x7D: + case 0x7E: + case 0x9D: + return INTEL_BIGCORE_ICELAKE; + case 0x8C: + case 0x8D: + return INTEL_BIGCORE_TIGERLAKE; + case 0xA7: + return INTEL_BIGCORE_ROCKETLAKE; + case 0x8F: + return INTEL_BIGCORE_SAPPHIRERAPIDS; + case 0xB7: + case 0xBA: + case 0xBF: + return INTEL_BIGCORE_RAPTORLAKE; + case 0xCF: + return INTEL_BIGCORE_EMERALDRAPIDS; + case 0xAA: + case 0xAC: + return INTEL_BIGCORE_METEORLAKE; + case 0xbd: + return INTEL_BIGCORE_LUNARLAKE; + case 0xc6: + return 
INTEL_BIGCORE_ARROWLAKE; + case 0xAD: + case 0xAE: + return INTEL_BIGCORE_GRANITERAPIDS; + case 0x8A: + return INTEL_MIXED_LAKEFIELD; + case 0x97: + case 0x9A: + case 0xBE: + return INTEL_MIXED_ALDERLAKE; + case 0x85: + return INTEL_KNIGHTS_MILL; + case 0x57: + return INTEL_KNIGHTS_LANDING; + default: + return INTEL_UNKNOWN; + } +} + static inline void init_cpu_features (struct cpu_features *cpu_features) { @@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features) if (family == 0x06) { model += extended_model; - switch (model) + unsigned int microarch + = intel_get_fam6_microarch (model, stepping); + + switch (microarch) { - case 0x1c: - case 0x26: - /* BSF is slow on Atom. */ + /* Atom / KNL tuning. */ + case INTEL_ATOM_BONNELL: + /* BSF is slow on Bonnell. */ cpu_features->preferred[index_arch_Slow_BSF] - |= bit_arch_Slow_BSF; + |= bit_arch_Slow_BSF; break; - case 0x57: - /* Knights Landing. Enable Silvermont optimizations. */ - - case 0x7a: - /* Unaligned load versions are faster than SSSE3 - on Goldmont Plus. */ - - case 0x5c: - case 0x5f: /* Unaligned load versions are faster than SSSE3 - on Goldmont. */ + on Airmont, Silvermont, Goldmont, and Goldmont Plus. */ + case INTEL_ATOM_AIRMONT: + case INTEL_ATOM_SILVERMONT: + case INTEL_ATOM_GOLDMONT: + case INTEL_ATOM_GOLDMONT_PLUS: - case 0x4c: - case 0x5a: - case 0x75: - /* Airmont is a die shrink of Silvermont. */ + /* Knights Landing. Enable Silvermont optimizations. */ + case INTEL_KNIGHTS_LANDING: - case 0x37: - case 0x4a: - case 0x4d: - case 0x5d: - /* Unaligned load versions are faster than SSSE3 - on Silvermont. */ cpu_features->preferred[index_arch_Fast_Unaligned_Load] - |= (bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop - | bit_arch_Slow_SSE4_2); + |= (bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop + | bit_arch_Slow_SSE4_2); break; - case 0x86: - case 0x96: - case 0x9c: + case INTEL_ATOM_TREMONT: /* Enable rep string instructions, unaligned load, unaligned - copy, pminub and avoid SSE 4.2 on Tremont. */ + copy, pminub and avoid SSE 4.2 on Tremont. */ cpu_features->preferred[index_arch_Fast_Rep_String] - |= (bit_arch_Fast_Rep_String - | bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop - | bit_arch_Slow_SSE4_2); + |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop + | bit_arch_Slow_SSE4_2); break; + /* Default tuned KNL microarch. */ + case INTEL_KNIGHTS_MILL: + goto default_tuning; + /* Default tuned atom microarch. */ + case INTEL_ATOM_SIERRAFOREST: + case INTEL_ATOM_GRANDRIDGE: + case INTEL_ATOM_SALTWELL: + goto default_tuning; + + /* Bigcore Tuning. */ + case INTEL_UNKNOWN: default: + default_tuning: /* Unknown family 0x06 processors. Assuming this is one of Core i3/i5/i7 processors if AVX is available. */ if (!CPU_FEATURES_CPU_P (cpu_features, AVX)) break; - /* Fall through. */ - - case 0x1a: - case 0x1e: - case 0x1f: - case 0x25: - case 0x2c: - case 0x2e: - case 0x2f: + case INTEL_BIGCORE_NEHALEM: + case INTEL_BIGCORE_WESTMERE: /* Rep string instructions, unaligned load, unaligned copy, and pminub are fast on Intel Core i3, i5 and i7. 
*/ cpu_features->preferred[index_arch_Fast_Rep_String] - |= (bit_arch_Fast_Rep_String - | bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop); + |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop); break; + + /* Default tuned Bigcore microarch. */ + case INTEL_BIGCORE_SANDYBRIDGE: + case INTEL_BIGCORE_IVYBRIDGE: + case INTEL_BIGCORE_HASWELL: + case INTEL_BIGCORE_BROADWELL: + case INTEL_BIGCORE_SKYLAKE: + case INTEL_BIGCORE_AMBERLAKE: + case INTEL_BIGCORE_COFFEELAKE: + case INTEL_BIGCORE_WHISKEYLAKE: + case INTEL_BIGCORE_KABYLAKE: + case INTEL_BIGCORE_COMETLAKE: + case INTEL_BIGCORE_SKYLAKE_AVX512: + case INTEL_BIGCORE_CASCADELAKE: + case INTEL_BIGCORE_COOPERLAKE: + case INTEL_BIGCORE_CANNONLAKE: + case INTEL_BIGCORE_ICELAKE: + case INTEL_BIGCORE_TIGERLAKE: + case INTEL_BIGCORE_ROCKETLAKE: + case INTEL_BIGCORE_RAPTORLAKE: + case INTEL_BIGCORE_METEORLAKE: + case INTEL_BIGCORE_LUNARLAKE: + case INTEL_BIGCORE_ARROWLAKE: + case INTEL_BIGCORE_SAPPHIRERAPIDS: + case INTEL_BIGCORE_EMERALDRAPIDS: + case INTEL_BIGCORE_GRANITERAPIDS: + goto default_tuning; + + /* Default tuned Mixed (bigcore + atom SOC). */ + case INTEL_MIXED_LAKEFIELD: + case INTEL_MIXED_ALDERLAKE: + goto default_tuning; } - /* Disable TSX on some processors to avoid TSX on kernels that - weren't updated with the latest microcode package (which - disables broken feature by default). */ - switch (model) + /* Disable TSX on some processors to avoid TSX on kernels that + weren't updated with the latest microcode package (which + disables broken feature by default). */ + switch (microarch) { - case 0x55: + case INTEL_BIGCORE_SKYLAKE_AVX512: + /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and + Cooperlake also have model 0x55 but stepping 5/6 and 11 + respectively so double check the stepping to be safe. */ if (stepping <= 5) goto disable_tsx; break; - case 0x8e: - /* NB: Although the errata documents that for model == 0x8e, - only 0xb stepping or lower are impacted, the intention of - the errata was to disable TSX on all client processors on - all steppings. Include 0xc stepping which is an Intel - Core i7-8665U, a client mobile processor. */ - case 0x9e: - if (stepping > 0xc) + + case INTEL_BIGCORE_SKYLAKE: + case INTEL_BIGCORE_AMBERLAKE: + case INTEL_BIGCORE_COFFEELAKE: + case INTEL_BIGCORE_WHISKEYLAKE: + case INTEL_BIGCORE_KABYLAKE: + /* NB: Although the errata documents that for model == 0x8e + (skylake client), only 0xb stepping or lower are impacted, + the intention of the errata was to disable TSX on all client + processors on all steppings. Include 0xc stepping which is + an Intel Core i7-8665U, a client mobile processor. */ + if ((model == 0x8e || model == 0x9e) && stepping > 0xc) break; - /* Fall through. */ - case 0x4e: - case 0x5e: - { + /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for processors listed in: https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html */ -disable_tsx: + disable_tsx: CPU_FEATURE_UNSET (cpu_features, HLE); CPU_FEATURE_UNSET (cpu_features, RTM); CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT); - } - break; - case 0x3f: - /* Xeon E7 v3 with stepping >= 4 has working TSX. */ - if (stepping >= 4) break; - /* Fall through. 
*/ - case 0x3c: - case 0x45: - case 0x46: - /* Disable Intel TSX on Haswell processors (except Xeon E7 v3 - with stepping >= 4) to avoid TSX on kernels that weren't - updated with the latest microcode package (which disables - broken feature by default). */ - CPU_FEATURE_UNSET (cpu_features, RTM); - break; + + case INTEL_BIGCORE_HASWELL: + /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working + TSX. Haswell also includes other model numbers that have + working TSX. */ + if (model == 0x3f && stepping >= 4) + break; + + CPU_FEATURE_UNSET (cpu_features, RTM); + break; + } }

From patchwork Fri May 12 22:03:26 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1780843
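For reference, the decode-then-tune pattern introduced by the refactor above boils down to the following minimal standalone sketch (not glibc code: the enum and function names are illustrative, and only a couple of model numbers from the table in the patch are shown):

#include <stdio.h>

enum intel_uarch { UARCH_SKYLAKE, UARCH_ICELAKE, UARCH_UNKNOWN };

/* Stage 1: map the raw family-6 model number to a named microarch.  */
static enum intel_uarch
decode_fam6_model (unsigned int model)
{
  switch (model)
    {
    case 0x4E:			/* Skylake mobile.  */
    case 0x5E:			/* Skylake desktop.  */
      return UARCH_SKYLAKE;
    case 0x6A:			/* Icelake server.  */
    case 0x7E:			/* Icelake client.  */
      return UARCH_ICELAKE;
    default:
      return UARCH_UNKNOWN;
    }
}

int
main (void)
{
  /* For family 6 the display model is (extended_model << 4) + model,
     which is why the real code does `model += extended_model` with
     extended_model already shifted left by 4.  */
  unsigned int base_model = 0xE, extended_model = 0x5 << 4;
  unsigned int model = base_model + extended_model;	/* 0x5E.  */

  /* Stage 2: key tuning decisions off the named microarch, not off
     magic model numbers.  */
  switch (decode_fam6_model (model))
    {
    case UARCH_SKYLAKE:
      puts ("apply Skylake-class tuning");
      break;
    case UARCH_ICELAKE:
      puts ("apply Icelake-class tuning");
      break;
    default:
      puts ("fall back to default tuning");
      break;
    }
  return 0;
}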
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v8 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
Date: Fri, 12 May 2023 17:03:26 -0500
Message-Id: <20230512220326.1918608-1-goldstein.w.n@gmail.com>
In-Reply-To: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
From: Noah Goldstein
Reply-To: Noah Goldstein

Different systems prefer different divisors. From benchmarks[1] run so far, the following divisors have been found:

    ICX : 2
    SKX : 2
    BWD : 8

For Intel, we are generalizing that BWD and older prefer 8 as a divisor, and SKL and newer prefer 2. This number can be further tuned as more benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
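As a rough illustration of what the divisor does, here is a minimal sketch assuming a hypothetical 64 MB shared L3 (glibc reads the real cache size at startup; the fallback to 4 mirrors the generic default set in init_cpu_features):

#include <stdio.h>

static unsigned long
default_nt_threshold (unsigned long shared_l3, unsigned long divisor)
{
  if (divisor == 0)
    divisor = 4;	/* Generic fallback when no divisor was set.  */
  return shared_l3 / divisor;
}

int
main (void)
{
  unsigned long shared_l3 = 64UL * 1024 * 1024;	/* Assumed L3 size.  */

  /* Divisors found by the benchmarks above: 2 for SKX/ICX-class cores,
     8 for BWD and older.  */
  printf ("ICX/SKX (div 2): %lu MB\n", default_nt_threshold (shared_l3, 2) >> 20);
  printf ("generic (div 4): %lu MB\n", default_nt_threshold (shared_l3, 4) >> 20);
  printf ("BWD     (div 8): %lu MB\n", default_nt_threshold (shared_l3, 8) >> 20);
  return 0;
}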
--- sysdeps/x86/cpu-features.c | 27 +++++++++++++++++-------- sysdeps/x86/dl-cacheinfo.h | 32 ++++++++++++++++++------------ sysdeps/x86/include/cpu-features.h | 3 +++ 3 files changed, 41 insertions(+), 21 deletions(-) diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c index 264d309dd7..3ec7e6f2df 100644 --- a/sysdeps/x86/cpu-features.c +++ b/sysdeps/x86/cpu-features.c @@ -638,6 +638,7 @@ init_cpu_features (struct cpu_features *cpu_features) unsigned int stepping = 0; enum cpu_features_kind kind; + cpu_features->cachesize_non_temporal_divisor = 4; #if !HAS_CPUID if (__get_cpuid_max (0, 0) == 0) { @@ -717,12 +718,13 @@ init_cpu_features (struct cpu_features *cpu_features) /* Bigcore/Default Tuning. */ default: + default_tuning: /* Unknown family 0x06 processors. Assuming this is one of Core i3/i5/i7 processors if AVX is available. */ if (!CPU_FEATURES_CPU_P (cpu_features, AVX)) break; - case INTEL_BIGCORE_NEHALEM: - case INTEL_BIGCORE_WESTMERE: + + enable_modern_features: /* Rep string instructions, unaligned load, unaligned copy, and pminub are fast on Intel Core i3, i5 and i7. */ cpu_features->preferred[index_arch_Fast_Rep_String] @@ -731,12 +733,20 @@ init_cpu_features (struct cpu_features *cpu_features) | bit_arch_Prefer_PMINUB_for_stringop); break; - /* - Default tuned Bigcore microarch. + case INTEL_BIGCORE_NEHALEM: + case INTEL_BIGCORE_WESTMERE: + /* Older CPUs prefer non-temporal stores at lower threshold. */ + cpu_features->cachesize_non_temporal_divisor = 8; + goto enable_modern_features; + + /* Default tuned Bigcore microarch. */ case INTEL_BIGCORE_SANDYBRIDGE: case INTEL_BIGCORE_IVYBRIDGE: case INTEL_BIGCORE_HASWELL: case INTEL_BIGCORE_BROADWELL: + cpu_features->cachesize_non_temporal_divisor = 8; + goto default_tuning; + case INTEL_BIGCORE_SKYLAKE: case INTEL_BIGCORE_AMBERLAKE: case INTEL_BIGCORE_COFFEELAKE: @@ -755,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features) case INTEL_BIGCORE_SAPPHIRERAPIDS: case INTEL_BIGCORE_EMERALDRAPIDS: case INTEL_BIGCORE_GRANITERAPIDS: - */ + cpu_features->cachesize_non_temporal_divisor = 2; + goto default_tuning; - /* - Default tuned Mixed (bigcore + atom SOC). + /* Default tuned Mixed (bigcore + atom SOC). */ case INTEL_MIXED_LAKEFIELD: case INTEL_MIXED_ALDERLAKE: - */ + cpu_features->cachesize_non_temporal_divisor = 2; + goto default_tuning; } /* Disable TSX on some processors to avoid TSX on kernels that diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h index 4a1a5423ff..864b00a521 100644 --- a/sysdeps/x86/dl-cacheinfo.h +++ b/sysdeps/x86/dl-cacheinfo.h @@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) cpu_features->level3_cache_linesize = level3_cache_linesize; cpu_features->level4_cache_size = level4_cache_size; - /* The default setting for the non_temporal threshold is 1/4 of size - of the chip's cache. For most Intel and AMD processors with an - initial release date between 2017 and 2023, a thread's typical - share of the cache is from 18-64MB. Using the 1/4 L3 is meant to - estimate the point where non-temporal stores begin outcompeting - REP MOVSB. As well the point where the fact that non-temporal - stores are forced back to main memory would already occurred to the - majority of the lines in the copy. Note, concerns about the - entire L3 cache being evicted by the copy are mostly alleviated - by the fact that modern HW detects streaming patterns and - provides proper LRU hints so that the maximum thrashing - capped at 1/associativity. */ - unsigned long int non_temporal_threshold = shared / 4; + unsigned long int cachesize_non_temporal_divisor + = cpu_features->cachesize_non_temporal_divisor; + if (cachesize_non_temporal_divisor <= 0) + cachesize_non_temporal_divisor = 4; + + /* The default setting for the non_temporal threshold is [1/2, 1/8] of the size + of the chip's cache (depending on `cachesize_non_temporal_divisor`, which + is microarch specific; the default is 1/4). For most Intel and AMD + processors with an initial release date between 2017 and 2023, a thread's + typical share of the cache is from 18-64MB. Using a reasonable size + fraction of L3 is meant to estimate the point where non-temporal stores + begin outcompeting REP MOVSB, as well as the point where + non-temporal stores being forced back to main memory would already have occurred + for the majority of the lines in the copy. Note, concerns about the entire + L3 cache being evicted by the copy are mostly alleviated by the fact that + modern HW detects streaming patterns and provides proper LRU hints so that + the maximum thrashing is capped at 1/associativity.
*/ + unsigned long int non_temporal_threshold + = shared / cachesize_non_temporal_divisor; /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run a higher risk of actually thrashing the cache as they don't have a HW LRU hint. As well, their performance in highly parallel situations is diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h index 40b8129d6a..f5b9dd54fe 100644 --- a/sysdeps/x86/include/cpu-features.h +++ b/sysdeps/x86/include/cpu-features.h @@ -915,6 +915,9 @@ struct cpu_features unsigned long int shared_cache_size; /* Threshold to use non temporal store. */ unsigned long int non_temporal_threshold; + /* When no user non_temporal_threshold is specified, we default to + cachesize / cachesize_non_temporal_divisor. */ + unsigned long int cachesize_non_temporal_divisor; /* Threshold to use "rep movsb". */ unsigned long int rep_movsb_threshold; /* Threshold to stop using "rep movsb". */