Message ID | 20240209183435.328126-1-philip.cox@canonical.com |
---|---|
State | New |
Series | [SRU,jammy:linux] mm/memcontrol.c: remove the redundant updating of stats_flush_threshold |
On 24/02/09 01:34PM, Philip Cox wrote:
> From: Jiebin Sun <jiebin.sun@intel.com>
>
> BugLink: https://bugs.launchpad.net/bugs/2052827
>
> Remove the redundant updating of stats_flush_threshold. If the global var
> stats_flush_threshold has exceeded the trigger value for
> __mem_cgroup_flush_stats, further increment is unnecessary.
>
> Apply the patch and test the pts/hackbench-1.0.0 Count:4 (160 threads).
>
> Score gain: 1.95x
> Reduce CPU cycles in __mod_memcg_lruvec_state (44.88% -> 0.12%)
>
> CPU: ICX 8380 x 2 sockets
> Core number: 40 x 2 physical cores
> Benchmark: pts/hackbench-1.0.0 Count:4 (160 threads)
>
> Link: https://lkml.kernel.org/r/20220722164949.47760-1-jiebin.sun@intel.com
> Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
> Acked-by: Shakeel Butt <shakeelb@google.com>
> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> Acked-by: Muchun Song <songmuchun@bytedance.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Amadeusz Sawiski <amadeuszx.slawinski@linux.intel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> (cherry picked from commit 873f64b791a2b43c246e78b7d9fdd64ce909685b)
> Signed-off-by: Philip Cox <philip.cox@canonical.com>
> ---
>  mm/memcontrol.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6f969ba0d688..f922613ddf8c 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -662,7 +662,14 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
>
>  	x = __this_cpu_add_return(stats_updates, abs(val));
>  	if (x > MEMCG_CHARGE_BATCH) {
> -		atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
> +		/*
> +		 * If stats_flush_threshold exceeds the threshold
> +		 * (>num_online_cpus()), cgroup stats update will be triggered
> +		 * in __mem_cgroup_flush_stats(). Increasing this var further
> +		 * is redundant and simply adds overhead in atomic update.
> +		 */
> +		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
> +			atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
>  		__this_cpu_write(stats_updates, 0);
>  	}
>  }

I know that this is a clean cherry-pick, and it does look alright, patch-wise. Even so, I still consider that a cover letter would be minimal friction and in line with our contribution guidelines. Again, the patch looks good to me, so:

Acked-by: Andrei Gherzan <andrei.gherzan@canonical.com>
Acked-by: Thibault Ferrante <thibault.ferrante@canonical.com>

On 09-02-2024 19:34, Philip Cox wrote:
> [...]

--
Thibault
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6f969ba0d688..f922613ddf8c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -662,7 +662,14 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 
 	x = __this_cpu_add_return(stats_updates, abs(val));
 	if (x > MEMCG_CHARGE_BATCH) {
-		atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
+		/*
+		 * If stats_flush_threshold exceeds the threshold
+		 * (>num_online_cpus()), cgroup stats update will be triggered
+		 * in __mem_cgroup_flush_stats(). Increasing this var further
+		 * is redundant and simply adds overhead in atomic update.
+		 */
+		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
+			atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
 		__this_cpu_write(stats_updates, 0);
 	}
 }