From patchwork Tue Aug 13 18:57:13 2024
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 1972059
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com
Subject: x86: Use `Avoid_Non_Temporal_Memset` to control non-temporal path
Date: Wed, 14 Aug 2024 02:57:13 +0800
Message-Id: <20240813185714.2999710-1-goldstein.w.n@gmail.com>

This is just a refactor and there should be no behavioral change from
this commit.  The goal is to make `Avoid_Non_Temporal_Memset` a more
universal knob for controlling whether we use non-temporal memset,
rather than having extra vendor-specific logic.
---
 sysdeps/x86/cpu-features.c | 16 ++++++++++++++++
 sysdeps/x86/dl-cacheinfo.h | 15 +++++++--------
 2 files changed, 23 insertions(+), 8 deletions(-)

(A short illustrative sketch of the gating logic this converges on
follows the diff.)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 18ed008040..a4786d23c7 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -756,6 +756,12 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  /* Default is to avoid non-temporal memset on non-Intel/AMD hardware.  As
+     of this writing we only have benchmarks indicating its profitability
+     on Intel/AMD.  */
+  cpu_features->preferred[index_arch_Avoid_Non_Temporal_Memset]
+      |= bit_arch_Avoid_Non_Temporal_Memset;
+
   cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
@@ -781,6 +787,11 @@ init_cpu_features (struct cpu_features *cpu_features)
 
       update_active (cpu_features);
 
+      /* Benchmarks indicate non-temporal memset can be profitable on Intel
+         hardware.  */
+      cpu_features->preferred[index_arch_Avoid_Non_Temporal_Memset]
+          &= ~bit_arch_Avoid_Non_Temporal_Memset;
+
       if (family == 0x06)
 	{
 	  model += extended_model;
@@ -992,6 +1003,11 @@ https://www.intel.com/content/www/us/en/support/articles/000059422/processors.ht
 
       ecx = cpu_features->features[CPUID_INDEX_1].cpuid.ecx;
 
+      /* Benchmarks indicate non-temporal memset can be profitable on AMD
+         hardware.  */
+      cpu_features->preferred[index_arch_Avoid_Non_Temporal_Memset]
+          &= ~bit_arch_Avoid_Non_Temporal_Memset;
+
       if (CPU_FEATURE_USABLE_P (cpu_features, AVX))
 	{
 	  /* Since the FMA4 bit is in CPUID_INDEX_80000001 and
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index a1c03b8903..3d0c8d43b8 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -988,14 +988,6 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   if (CPU_FEATURE_USABLE_P (cpu_features, FSRM))
     rep_movsb_threshold = 2112;
 
-  /* Non-temporal stores are more performant on Intel and AMD hardware above
-     non_temporal_threshold.  Enable this for both Intel and AMD hardware.  */
-  unsigned long int memset_non_temporal_threshold = SIZE_MAX;
-  if (!CPU_FEATURES_ARCH_P (cpu_features, Avoid_Non_Temporal_Memset)
-      && (cpu_features->basic.kind == arch_kind_intel
-          || cpu_features->basic.kind == arch_kind_amd))
-    memset_non_temporal_threshold = non_temporal_threshold;
-
   /* For AMD CPUs that support ERMS (Zen3+), REP MOVSB is in a lot of cases
      slower than the vectorized path (and for some alignments, it is really
      slow, check BZ #30994).  */
@@ -1017,6 +1009,13 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   if (tunable_size != 0)
     shared = tunable_size;
 
+  /* Non-temporal stores are more performant on some hardware above
+     non_temporal_threshold.  Currently Prefer_Non_Temporal is set for both
+     Intel and AMD hardware.  */
+  unsigned long int memset_non_temporal_threshold = SIZE_MAX;
+  if (!CPU_FEATURES_ARCH_P (cpu_features, Avoid_Non_Temporal_Memset))
+    memset_non_temporal_threshold = non_temporal_threshold;
+
   tunable_size = TUNABLE_GET (x86_non_temporal_threshold, long int, NULL);
   if (tunable_size > minimum_non_temporal_threshold
       && tunable_size <= maximum_non_temporal_threshold)
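
To make the refactor concrete, here is a minimal, self-contained sketch of
the pattern the patch converges on: set the "avoid" bit by default, clear it
in the vendor-specific init paths where benchmarks justify the non-temporal
path, and have the cache-info code consult only that bit.  This is not glibc
code: `struct fake_cpu`, `BIT_AVOID_NON_TEMPORAL_MEMSET`, and the 4 MiB
threshold below are hypothetical stand-ins for `cpu_features->preferred[]`,
`bit_arch_Avoid_Non_Temporal_Memset`, and the computed
`non_temporal_threshold`.

/* Illustrative sketch only; names and values are simplified stand-ins.  */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BIT_AVOID_NON_TEMPORAL_MEMSET (1u << 0)

enum vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_OTHER };

struct fake_cpu
{
  enum vendor vendor;
  unsigned int preferred;          /* stand-in for cpu_features->preferred[] */
  size_t non_temporal_threshold;   /* stand-in for the computed threshold */
};

static void
init_features (struct fake_cpu *cpu)
{
  /* Default: avoid non-temporal memset everywhere.  */
  cpu->preferred |= BIT_AVOID_NON_TEMPORAL_MEMSET;

  /* Vendor-specific init paths clear the bit where benchmarks indicate the
     non-temporal path is profitable (Intel and AMD in the patch).  */
  if (cpu->vendor == VENDOR_INTEL || cpu->vendor == VENDOR_AMD)
    cpu->preferred &= ~BIT_AVOID_NON_TEMPORAL_MEMSET;
}

static size_t
memset_non_temporal_threshold (const struct fake_cpu *cpu)
{
  /* After the refactor, cache-info code only consults the knob; it no longer
     re-checks the vendor.  SIZE_MAX effectively disables the path.  */
  if (cpu->preferred & BIT_AVOID_NON_TEMPORAL_MEMSET)
    return SIZE_MAX;
  return cpu->non_temporal_threshold;
}

int
main (void)
{
  /* 4 MiB is an arbitrary illustrative threshold, not a glibc value.  */
  struct fake_cpu amd   = { VENDOR_AMD,   0, 4 * 1024 * 1024 };
  struct fake_cpu other = { VENDOR_OTHER, 0, 4 * 1024 * 1024 };

  init_features (&amd);
  init_features (&other);

  printf ("amd threshold:   %zu\n", memset_non_temporal_threshold (&amd));
  printf ("other threshold: %zu\n", memset_non_temporal_threshold (&other));
  return 0;
}

The behavioral point the patch preserves is that non-Intel/AMD parts still
end up with the memset threshold at SIZE_MAX (non-temporal path disabled);
only the mechanism changes, from vendor checks in dl-cacheinfo.h to a single
knob set during CPU feature initialization.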