
[v11,1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`

Message ID 20230607181803.4154764-1-goldstein.w.n@gmail.com
State New
Series [v11,1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`

Commit Message

Noah Goldstein June 7, 2023, 6:18 p.m. UTC
The current `non_temporal_threshold` is set to roughly `3/4 * sizeof_L3 /
ncores_per_socket`. This patch updates that value to roughly
`sizeof_L3 / 4`.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by `ncores_per_socket`, however, leads to exceedingly low
non-temporal thresholds, and thus to using non-temporal stores in
cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at sizes much smaller than L3 can place soon-to-be
accessed data much further away than it otherwise would be. As well,
modern machines are able to detect streaming patterns (especially if
REP MOVSB is used) and provide LRU hints to the memory subsystem. This
in effect caps the total amount of eviction at 1/cache_associativity,
far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores against standard cacheable
stores. A better comparison (linked below) is against REP MOVSB,
which, on the measured systems, is nearly 2x faster than non-temporal
stores at the low end of the previous threshold, and within 10% for
copies over 100MB (well past even the current threshold). In cases
with a low number of threads competing for bandwidth, REP MOVSB is
~2x faster up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a PDF in the GitHub repo):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

Comments

Noah Goldstein June 7, 2023, 6:19 p.m. UTC | #1
On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> [commit message quoted above, trimmed]
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index 877e73d700..3bd3b3ec1b 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin out-competing
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, their performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

Now that Carlos, DJ, and HJ have signed off on this, and Carlos + DJ
have reproduced the results, I'm going to push this shortly.

Thank you all for the review!
Noah Goldstein Aug. 14, 2023, 11 p.m. UTC | #2
On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> [full patch quoted above, trimmed]

Hi All,

I want to backport this series (minus CPUID codes) to 2.28 - 2.37.

The patches I want to backport are:

1/4
```
commit af992e7abdc9049714da76cae1e5e18bc4838fb8
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Wed Jun 7 13:18:01 2023 -0500

    x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
```

2/4
```
commit 47f747217811db35854ea06741be3685e8bbd44d
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Mon Jul 17 23:14:33 2023 -0500

    x86: Fix slight bug in `shared_per_thread` cache size calculation.
```

3/4
```
commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Tue Jul 18 10:27:59 2023 -0500

    [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
NT threshold.
```

4/4
```
commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
origin/HEAD, master)
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Thu Aug 10 19:28:24 2023 -0500

    x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
```

The proposed patches are at:
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37

I know the protocol is normally not to backport optimizations, but I'd
argue these are closer to bug fixes for a severe misconfiguration than a
proper optimization series.  As well, the risk of introducing a
correctness-related bug is exceedingly low.

Typically, the kind of optimization patch that is discouraged is one
that actually changes a particular function, i.e. if these fixes were
made directly to the memmove implementation. These patches, however,
don't touch any of the memmove code itself; they just re-tune a value
used by memmove, which seems categorically different.

The value also only informs memmove strategy. If these patches turn
out to be deeply buggy and set the new threshold incorrectly, the
blowback is limited to bad performance (which we already have), and
is extremely unlikely to affect correctness in any way.

Thoughts?
Noah Goldstein Aug. 22, 2023, 3:11 p.m. UTC | #3
On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> [full patch and earlier reply quoted above, trimmed]
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > +     size of the chip's cache. For most Intel and AMD processors with
> > +     an initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using 1/4 of the L3 is meant
> > +     to estimate the point where non-temporal stores begin
> > +     out-competing REP MOVSB, as well as the point where the forced
> > +     write-back to main memory of non-temporal stores would already
> > +     have occurred for the majority of the lines in the copy. Note,
> > +     concerns about the entire L3 cache being evicted by the copy are
> > +     mostly alleviated by the fact that modern HW detects streaming
> > +     patterns and provides proper LRU hints so that the maximum
> > +     thrashing is capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold = shared / 4;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, their performance in highly parallel situations is
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
> Hi All,
>
> I want to backport this series (minus CPUID codes) to 2.28 - 2.37
>
> The patches I want to backport are:
>
> 1/4
> ```
> commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Wed Jun 7 13:18:01 2023 -0500
>
>     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> ```
>
> 2/4
> ```
> commit 47f747217811db35854ea06741be3685e8bbd44d
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Mon Jul 17 23:14:33 2023 -0500
>
>     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> ```
>
> 3/4
> ```
> commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Tue Jul 18 10:27:59 2023 -0500
>
>     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> NT threshold.
> ```
>
> 4/4
> ```
> commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> origin/HEAD, master)
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Thu Aug 10 19:28:24 2023 -0500
>
>     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> ```
>
> The proposed patches are at:
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
>
> I know the protocol is normally not to backport optimizations, but I'd argue
> these are closer to bug fixes for a severe misconfiguration than a proper
> optimization series.  As well, the risk of introducing a correctness-related
> bug is exceedingly low.
>
> Typically, the type of optimization patch that is discouraged is one that
> actually changes a particular function, i.e. if these fixes were made
> directly to the memmove implementation. These patches, however, don't touch
> any of the memmove code itself; they just re-tune a value used by
> memmove, which seems categorically different.
>
> The value also only informs memmove strategy. If these patches turn
> out to be deeply buggy and set the new threshold incorrectly, the
> blowback is limited to bad performance (which we already have), and they
> are extremely unlikely to affect correctness in any way.
>
> Thoughts?

Ping/Any Objections to me backporting?
Noah Goldstein Aug. 24, 2023, 5:06 p.m. UTC | #4
On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 4'.
> > >
> > > The original value (specifically, dividing by `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > non-temporal thresholds and to using non-temporal stores in
> > > cases where REP MOVSB is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to main memory,
> > > so using them at a size much smaller than L3 can place soon-to-be
> > > accessed data much further away than it otherwise would be. As well,
> > > modern machines are able to detect streaming patterns (especially if
> > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > in effect caps the total amount of eviction at 1/cache_associativity,
> > > far below meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that led to this small threshold
> > > were done comparing non-temporal stores versus standard cacheable
> > > stores. A better comparison (linked below) is to REP MOVSB which,
> > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > at the low end of the previous threshold, and within 10% for over
> > > 100MB copies (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > up to `sizeof_L3`.
> > >
> > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > to be followed up by another one to make the divisor cpu-specific, but
> > > in the meantime (and for easier backporting), this patch settles on
> > > `4` as a middle-ground.
> > >
> > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > stores were done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > ---
> > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index 877e73d700..3bd3b3ec1b 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > >  }
> > >
> > >  static void
> > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > >                  long int core)
> > >  {
> > >    unsigned int eax;
> > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >    unsigned int family = cpu_features->basic.family;
> > >    unsigned int model = cpu_features->basic.model;
> > >    long int shared = *shared_ptr;
> > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > >    unsigned int threads = *threads_ptr;
> > >    bool inclusive_cache = true;
> > >    bool support_count_mask = true;
> > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >        /* Try L2 otherwise.  */
> > >        level  = 2;
> > >        shared = core;
> > > +      shared_per_thread = core;
> > >        threads_l2 = 0;
> > >        threads_l3 = -1;
> > >      }
> > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >          }
> > >        else
> > >          {
> > > -intel_bug_no_cache_info:
> > > -          /* Assume that all logical threads share the highest cache
> > > -             level.  */
> > > -          threads
> > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > -              & 0xff);
> > > -        }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > > +       intel_bug_no_cache_info:
> > > +         /* Assume that all logical threads share the highest cache
> > > +            level.  */
> > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > +                    & 0xff);
> > > +
> > > +         /* Get per-thread size of highest level cache.  */
> > > +         if (shared_per_thread > 0 && threads > 0)
> > > +           shared_per_thread /= threads;
> > > +       }
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > >      {
> > >        if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > +       shared_per_thread += core / threads_l2;
> > >        shared += core;
> > >      }
> > >
> > >    *shared_ptr = shared;
> > > +  *shared_per_thread_ptr = shared_per_thread;
> > >    *threads_ptr = threads;
> > >  }
> > >
> > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    /* Find out what brand of processor.  */
> > >    long int data = -1;
> > >    long int shared = -1;
> > > +  long int shared_per_thread = -1;
> > >    long int core = -1;
> > >    unsigned int threads = 0;
> > >    unsigned long int level1_icache_size = -1;
> > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size
> > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level4_cache_size
> > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > >      {
> > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > >      {
> > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        if (shared <= 0)
> > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > >         shared = core;
> > > +
> > > +      if (shared_per_thread <= 0)
> > > +       shared_per_thread = shared;
> > >      }
> > >
> > >    cpu_features->level1_icache_size = level1_icache_size;
> > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/4 of size
> > > +     of the chip's cache. For most Intel and AMD processors with an
> > > +     initial release date between 2017 and 2023, a thread's typical
> > > +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > > +     estimate the point where non-temporal stores begin out-competing
> > > +     REP MOVSB. As well the point where the fact that non-temporal
> > > +     stores are forced back to main memory would already occurred to the
> > > +     majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the maximum thrashing
> > > +     capped at 1/associativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > +     hint. As well, their performance in highly parallel situations is
> > > +     noticeably worse.  */
> > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > --
> > > 2.34.1
> > >
> >
> > Hi All,
> >
> > I want to backport this series (minus CPUID codes) too 2.28 - 2.37
> >
> > The patches I want to backport are:
> >
> > 1/4
> > ```
> > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Wed Jun 7 13:18:01 2023 -0500
> >
> >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > ```
> >
> > 2/4
> > ```
> > commit 47f747217811db35854ea06741be3685e8bbd44d
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Mon Jul 17 23:14:33 2023 -0500
> >
> >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > ```
> >
> > 3/4
> > ```
> > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Tue Jul 18 10:27:59 2023 -0500
> >
> >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > NT threshold.
> > ```
> >
> > 4/4
> > ```
> > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > origin/HEAD, master)
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Thu Aug 10 19:28:24 2023 -0500
> >
> >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > ```
> >
> > The proposed patches are at:
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> >
> > I know the protocol is not to normally backport optimizations, but I'd argue
> > these are closer to bug fixes for a severe misconfiguration than a proper
> > optimization series.  As well, the risk of introducing a correctness related
> > bug is exceedingly low.
> >
> > Typically the type of optimization patch this is discouraged are the ones
> > that actually change a particular function. I.e if these fixes where directly
> > to the memmove implementation. These patches, however, don't touch
> > any of the memmove code itself, and are just re-tuning a value used by
> > memmove which seems categorically different.
> >
> > The value also only informs memmove strategy. If these patches turn
> > out to be deeply buggy and set the new threshold incorrectly, the
> > blowback is limited to a bad performance (which we already have),
> > and is extremely unlikely to affect correctness in any way.
> >
> > Thoughts?
>
> Ping/Any Objections to me backporting?

I am going to take the continued lack of objections to mean no one
has an issue with me backporting these.

I will start backporting next week. I will do so piecemeal to give time
for issues to emerge before fully committing.
Noah Goldstein Aug. 28, 2023, 8:02 p.m. UTC | #5
On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 4`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal thresholds and leads to using non-temporal stores in
> > > > cases where REP MOVSB is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to main memory
> > > > so using it at a size much smaller than L3 can place soon to be
> > > > accessed data much further away than it otherwise could be. As well,
> > > > modern machines are able to detect streaming patterns (especially if
> > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > > far below meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > up to `sizeof_L3`.
> > > >
> > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > in the meantime (and for easier backporting), this patch settles on
> > > > `4` as a middle-ground.
> > > >
> > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index 877e73d700..3bd3b3ec1b 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > >  }
> > > >
> > > >  static void
> > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > >                  long int core)
> > > >  {
> > > >    unsigned int eax;
> > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >    unsigned int family = cpu_features->basic.family;
> > > >    unsigned int model = cpu_features->basic.model;
> > > >    long int shared = *shared_ptr;
> > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > >    unsigned int threads = *threads_ptr;
> > > >    bool inclusive_cache = true;
> > > >    bool support_count_mask = true;
> > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >        /* Try L2 otherwise.  */
> > > >        level  = 2;
> > > >        shared = core;
> > > > +      shared_per_thread = core;
> > > >        threads_l2 = 0;
> > > >        threads_l3 = -1;
> > > >      }
> > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >          }
> > > >        else
> > > >          {
> > > > -intel_bug_no_cache_info:
> > > > -          /* Assume that all logical threads share the highest cache
> > > > -             level.  */
> > > > -          threads
> > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > -              & 0xff);
> > > > -        }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > > +       intel_bug_no_cache_info:
> > > > +         /* Assume that all logical threads share the highest cache
> > > > +            level.  */
> > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > +                    & 0xff);
> > > > +
> > > > +         /* Get per-thread size of highest level cache.  */
> > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > +           shared_per_thread /= threads;
> > > > +       }
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > >      {
> > > >        if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > +       shared_per_thread += core / threads_l2;
> > > >        shared += core;
> > > >      }
> > > >
> > > >    *shared_ptr = shared;
> > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > >    *threads_ptr = threads;
> > > >  }
> > > >
> > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    /* Find out what brand of processor.  */
> > > >    long int data = -1;
> > > >    long int shared = -1;
> > > > +  long int shared_per_thread = -1;
> > > >    long int core = -1;
> > > >    unsigned int threads = 0;
> > > >    unsigned long int level1_icache_size = -1;
> > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size
> > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level4_cache_size
> > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > >      {
> > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > >      {
> > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        if (shared <= 0)
> > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > >         shared = core;
> > > > +
> > > > +      if (shared_per_thread <= 0)
> > > > +       shared_per_thread = shared;
> > > >      }
> > > >
> > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > +     an initial release date between 2017 and 2023, the L3 cache size
> > > > +     is in the range of 18-64MB. Using 1/4 of the L3 is meant to
> > > > +     estimate the point where non-temporal stores begin out-competing
> > > > +     REP MOVSB, as well as the point where the write-back to main
> > > > +     memory that non-temporal stores force would have occurred anyway
> > > > +     for the majority of lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints, so the maximum thrashing is
> > > > +     capped at 1/associativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > +     hint. As well, their performance in highly parallel situations is
> > > > +     noticeably worse.  */
> > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
> > >
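For a concrete sense of the scale of the change in the patch quoted above, the old and new heuristics can be compared on a hypothetical part. These are illustrative numbers only (a 32 MiB L3 shared by 16 single-threaded cores), not measurements of any specific CPU, and the function names are made up for the sketch:

```c
#include <assert.h>

/* Hypothetical part: 32 MiB L3 shared by 16 single-threaded cores.
   Illustrative numbers only, not a specific CPU.  */
enum { L3_BYTES = 32 << 20, NCORES = 16 };

/* Old heuristic: 3/4 of one core's share of the L3.  */
static long
old_nt_threshold (void)
{
  return (long) (L3_BYTES / NCORES) * 3 / 4;   /* 1.5 MiB */
}

/* New heuristic: 1/4 of the whole L3.  */
static long
new_nt_threshold (void)
{
  return (long) L3_BYTES / 4;                  /* 8 MiB */
}
```

On this hypothetical part the new threshold is over 5x the old one, which is the gap the commit message describes as "exceedingly low" thresholds pushing copies into non-temporal stores too early.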
> > > Hi All,
> > >
> > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37
> > >
> > > The patches I want to backport are:
> > >
> > > 1/4
> > > ```
> > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > >
> > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > ```
> > >
> > > 2/4
> > > ```
> > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > >
> > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > ```
> > >
> > > 3/4
> > > ```
> > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > >
> > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > NT threshold.
> > > ```
> > >
> > > 4/4
> > > ```
> > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > > origin/HEAD, master)
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > >
> > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > > ```
> > >
> > > The proposed patches are at:
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > >
> > > I know the protocol is normally not to backport optimizations, but I'd
> > > argue these are closer to bug fixes for a severe misconfiguration than a
> > > proper optimization series.  As well, the risk of introducing a
> > > correctness-related bug is exceedingly low.
> > >
> > > Typically the type of optimization patch that is discouraged is one
> > > that actually changes a particular function, i.e. if these fixes were
> > > made directly to the memmove implementation. These patches, however,
> > > don't touch any of the memmove code itself; they just re-tune a value
> > > used by memmove, which seems categorically different.
> > >
> > > The value also only informs memmove strategy. If these patches turn
> > > out to be deeply buggy and set the new threshold incorrectly, the
> > > blowback is limited to bad performance (which we already have), and
> > > is extremely unlikely to affect correctness in any way.
> > >
> > > Thoughts?
> >
> > Ping/Any Objections to me backporting?
>
> I am going to take the continued lack of objections to mean no one
> has an issue with me backporting these.
>
> I will start backporting next week. I will do so piecemeal to give time
> for issues to emerge before fully committing.

Have done the 2.37 backport. If nothing comes up this week, I will proceed
to 2.36 and further.
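As a reading aid for the series being backported, the threshold selection the patch converges on can be sketched as a standalone function. This is a simplified sketch, not the glibc code; `nt_threshold_sketch` is a made-up name, and the `0x4040` floor and `SIZE_MAX >> 4` cap mirror the clamps described at the end of the quoted hunk:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified sketch of the new non-temporal threshold selection in
   dl_init_cacheinfo: 1/4 of the whole L3 when ERMS (REP MOVSB) is
   usable, otherwise 3/4 of one thread's share of the L3.  The clamp
   values mirror the ones mentioned in the patch.  */
static unsigned long
nt_threshold_sketch (long shared, long shared_per_thread, int has_erms)
{
  unsigned long t = has_erms ? (unsigned long) shared / 4
                             : (unsigned long) shared_per_thread * 3 / 4;
  /* Cap so memmove's right-shift by LOG_4X_MEMCPY_THRESH (4) cannot
     overflow.  */
  if (t > (SIZE_MAX >> 4))
    t = SIZE_MAX >> 4;
  /* Floor of 0x4040 (16448), per the comment in the patch.  */
  if (t < 0x4040)
    t = 0x4040;
  return t;
}
```

For example, a 32 MiB L3 with a 2 MiB per-thread share yields 8 MiB with ERMS and 1.5 MiB without it under this sketch.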
Noah Goldstein Sept. 5, 2023, 3:37 p.m. UTC | #6
On Mon, Aug 28, 2023 at 3:02 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > > >
> > > > > The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > > > > ncores_per_socket'. This patch updates that value to roughly
> > > > > 'sizeof_L3 / 4'.
> > > > >
> > > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > > could evict.
> > > > >
> > > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > > > non-temporal thresholds and to using non-temporal stores in
> > > > > cases where REP MOVSB is multiple times faster.
> > > > >
> > > > > Furthermore, non-temporal stores are written directly to main memory,
> > > > > so using them at a size much smaller than L3 can place
> > > > > soon-to-be-accessed data much further away than it otherwise could
> > > > > be. As well, modern machines are able to detect streaming patterns
> > > > > (especially if REP MOVSB is used) and provide LRU hints to the memory
> > > > > subsystem. This in effect caps the total amount of eviction at
> > > > > 1/cache_associativity, far below meaningfully thrashing the entire cache.
> > > > >
> > > > > As best I can tell, the benchmarks that led to this small threshold
> > > > > were done comparing non-temporal stores versus standard cacheable
> > > > > stores. A better comparison (linked below) is to REP MOVSB which,
> > > > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > > > at the low-end of the previous threshold, and within 10% for over
> > > > > 100MB copies (well past even the current threshold). In cases with a
> > > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > > up to `sizeof_L3`.
> > > > >
> > > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > > in the meantime (and for easier backporting), this patch settles on
> > > > > `4` as a middle-ground.
> > > > >
> > > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > > stores were done using:
> > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > >
> > > > > Sheets results (also available in pdf on the github):
> > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > > > ---
> > > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > > >
> > > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > > index 877e73d700..3bd3b3ec1b 100644
> > > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > > >  }
> > > > >
> > > > >  static void
> > > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > > >                  long int core)
> > > > >  {
> > > > >    unsigned int eax;
> > > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >    unsigned int family = cpu_features->basic.family;
> > > > >    unsigned int model = cpu_features->basic.model;
> > > > >    long int shared = *shared_ptr;
> > > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > > >    unsigned int threads = *threads_ptr;
> > > > >    bool inclusive_cache = true;
> > > > >    bool support_count_mask = true;
> > > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >        /* Try L2 otherwise.  */
> > > > >        level  = 2;
> > > > >        shared = core;
> > > > > +      shared_per_thread = core;
> > > > >        threads_l2 = 0;
> > > > >        threads_l3 = -1;
> > > > >      }
> > > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >          }
> > > > >        else
> > > > >          {
> > > > > -intel_bug_no_cache_info:
> > > > > -          /* Assume that all logical threads share the highest cache
> > > > > -             level.  */
> > > > > -          threads
> > > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > -              & 0xff);
> > > > > -        }
> > > > > -
> > > > > -        /* Cap usage of highest cache level to the number of supported
> > > > > -           threads.  */
> > > > > -        if (shared > 0 && threads > 0)
> > > > > -          shared /= threads;
> > > > > +       intel_bug_no_cache_info:
> > > > > +         /* Assume that all logical threads share the highest cache
> > > > > +            level.  */
> > > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > +                    & 0xff);
> > > > > +
> > > > > +         /* Get per-thread size of highest level cache.  */
> > > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > > +           shared_per_thread /= threads;
> > > > > +       }
> > > > >      }
> > > > >
> > > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > > >    if (!inclusive_cache)
> > > > >      {
> > > > >        if (threads_l2 > 0)
> > > > > -        core /= threads_l2;
> > > > > +       shared_per_thread += core / threads_l2;
> > > > >        shared += core;
> > > > >      }
> > > > >
> > > > >    *shared_ptr = shared;
> > > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > > >    *threads_ptr = threads;
> > > > >  }
> > > > >
> > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    /* Find out what brand of processor.  */
> > > > >    long int data = -1;
> > > > >    long int shared = -1;
> > > > > +  long int shared_per_thread = -1;
> > > > >    long int core = -1;
> > > > >    unsigned int threads = 0;
> > > > >    unsigned long int level1_icache_size = -1;
> > > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size
> > > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level4_cache_size
> > > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > > >      {
> > > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > > >      {
> > > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        if (shared <= 0)
> > > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > > >         shared = core;
> > > > > +
> > > > > +      if (shared_per_thread <= 0)
> > > > > +       shared_per_thread = shared;
> > > > >      }
> > > > >
> > > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > > >
> > > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > -     in cache after a maximum temporal copy, which will maintain
> > > > > -     in cache a reasonable portion of the thread's stack and other
> > > > > -     active data. If the threshold is set higher than one thread's
> > > > > -     share of the cache, it has a substantial risk of negatively
> > > > > -     impacting the performance of other threads running on the chip. */
> > > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > > > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > > > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > > > +     an initial release date between 2017 and 2023, the L3 cache size
> > > > > > +     is in the range of 18-64MB. Using 1/4 of the L3 is meant to
> > > > > > +     estimate the point where non-temporal stores begin out-competing
> > > > > > +     REP MOVSB, as well as the point where the write-back to main
> > > > > > +     memory that non-temporal stores force would have occurred anyway
> > > > > > +     for the majority of lines in the copy. Note, concerns about the
> > > > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > > > +     by the fact that modern HW detects streaming patterns and
> > > > > > +     provides proper LRU hints, so the maximum thrashing is
> > > > > > +     capped at 1/associativity. */
> > > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > > +     hint. As well, their performance in highly parallel situations is
> > > > > +     noticeably worse.  */
> > > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > > --
> > > > > 2.34.1
> > > > >
> > > >
> [...]
> Have done the 2.37 backport. If nothing comes up this week will proceed
> to 2.36 and further.
Have seen no issues with the 2.37 backport.
Going to backport to 2.36/2.35/2.34. If no issues arise, I will complete
backporting through 2.28 next week.
Noah Goldstein Sept. 12, 2023, 3:50 a.m. UTC | #7
On Tue, Sep 5, 2023 at 8:37 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> [...]
> > > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > > > --
> > > > > > 2.34.1
> > > > > >
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37
> > > > >
> > > > > The patches I want to backport are:
> > > > >
> > > > > 1/4
> > > > > ```
> > > > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > > > >
> > > > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > > > ```
> > > > >
> > > > > 2/4
> > > > > ```
> > > > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > > > >
> > > > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > > > ```
> > > > >
> > > > > 3/4
> > > > > ```
> > > > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > > > >
> > > > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > > > NT threshold.
> > > > > ```
> > > > >
> > > > > 4/4
> > > > > ```
> > > > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > > > > origin/HEAD, master)
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > > > >
> > > > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > > > > ```
> > > > >
> > > > > The proposed patches are at:
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > > > >
> > > > > I know the protocol is normally not to backport optimizations, but I'd
> > > > > argue these are closer to bug fixes for a severe misconfiguration than
> > > > > to a proper optimization series.  As well, the risk of introducing a
> > > > > correctness-related bug is exceedingly low.
> > > > >
> > > > > Typically, the type of optimization patch that is discouraged is one
> > > > > that actually changes a particular function, i.e., if these fixes were
> > > > > made directly to the memmove implementation. These patches, however,
> > > > > don't touch any of the memmove code itself; they just re-tune a value
> > > > > used by memmove, which seems categorically different.
> > > > >
> > > > > The value also only informs memmove strategy. If these patches turn
> > > > > out to be deeply buggy and set the new threshold incorrectly, the
> > > > > blowback is limited to bad performance (which we already have),
> > > > > and is extremely unlikely to affect correctness in any way.
> > > > >
> > > > > Thoughts?
> > > >
> > > > Ping/Any Objections to me backporting?
> > >
> > > I am going to take the continued lack of objections to mean no one
> > > has issue with me backporting these.
> > >
> > > I will start backporting next week. Will do so piecemeal to give time
> > > for issues to emerge before fully committing.
> >
> > Have done the 2.37 backport. If nothing comes up this week will proceed
> > to 2.36 and further.
> I have seen no issues with the 2.37 backport.
> Going to backport to 2.36/2.35/2.34. If no issues arise, I will complete
> backporting through 2.28 next week.

I have backported these patches to 2.28-2.33. I expect this issue to be
fully resolved now.
Patch

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 877e73d700..3bd3b3ec1b 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@  handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin out-competing
+     REP MOVSB. As well the point where the fact that non-temporal
+     stores are forced back to main memory would already occurred to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the