Message ID | 20221013151647.1857994-2-npiggin@gmail.com (mailing list archive) |
---|---|
State | Accepted |
Series | [1/3] powerpc/64s: Disable preemption in hash lazy mmu mode |
On Fri, Oct 14, 2022 at 01:16:46AM +1000, Nicholas Piggin wrote:
> stop_machine_cpuslocked takes a mutex so it must be called in a
> preemptible context, so it can't simply be fixed by disabling
> preemption.
>
> This is not a bug, because CPU hotplug is locked, so this processor will
> call in to the stop machine function. So raw_smp_processor_id() could be
> used. This leaves a small chance that this thread will be migrated to
> another CPU, so the master work would be done by a CPU from a different
> context. Better for test coverage to make that a common case by just
> having the first CPU to call in become the master.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Tested-by: Guenter Roeck <linux@roeck-us.net>

> ---
>  arch/powerpc/mm/book3s64/hash_pgtable.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
> index 747492edb75a..51f48984abca 100644
> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
> @@ -404,7 +404,8 @@ EXPORT_SYMBOL_GPL(hash__has_transparent_hugepage);
>
>  struct change_memory_parms {
>  	unsigned long start, end, newpp;
> -	unsigned int step, nr_cpus, master_cpu;
> +	unsigned int step, nr_cpus;
> +	atomic_t master_cpu;
>  	atomic_t cpu_counter;
>  };
>
> @@ -478,7 +479,8 @@ static int change_memory_range_fn(void *data)
>  {
>  	struct change_memory_parms *parms = data;
>
> -	if (parms->master_cpu != smp_processor_id())
> +	// First CPU goes through, all others wait.
> +	if (atomic_xchg(&parms->master_cpu, 1) == 1)
>  		return chmem_secondary_loop(parms);
>
>  	// Wait for all but one CPU (this one) to call-in
> @@ -516,7 +518,7 @@ static bool hash__change_memory_range(unsigned long start, unsigned long end,
>  	chmem_parms.end = end;
>  	chmem_parms.step = step;
>  	chmem_parms.newpp = newpp;
> -	chmem_parms.master_cpu = smp_processor_id();
> +	atomic_set(&chmem_parms.master_cpu, 0);
>
>  	cpus_read_lock();
>
> --
> 2.37.2
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 747492edb75a..51f48984abca 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -404,7 +404,8 @@ EXPORT_SYMBOL_GPL(hash__has_transparent_hugepage);
 
 struct change_memory_parms {
 	unsigned long start, end, newpp;
-	unsigned int step, nr_cpus, master_cpu;
+	unsigned int step, nr_cpus;
+	atomic_t master_cpu;
 	atomic_t cpu_counter;
 };
 
@@ -478,7 +479,8 @@ static int change_memory_range_fn(void *data)
 {
 	struct change_memory_parms *parms = data;
 
-	if (parms->master_cpu != smp_processor_id())
+	// First CPU goes through, all others wait.
+	if (atomic_xchg(&parms->master_cpu, 1) == 1)
 		return chmem_secondary_loop(parms);
 
 	// Wait for all but one CPU (this one) to call-in
@@ -516,7 +518,7 @@ static bool hash__change_memory_range(unsigned long start, unsigned long end,
 	chmem_parms.end = end;
 	chmem_parms.step = step;
 	chmem_parms.newpp = newpp;
-	chmem_parms.master_cpu = smp_processor_id();
+	atomic_set(&chmem_parms.master_cpu, 0);
 
 	cpus_read_lock();
stop_machine_cpuslocked takes a mutex so it must be called in a
preemptible context, so it can't simply be fixed by disabling
preemption.

This is not a bug, because CPU hotplug is locked, so this processor will
call in to the stop machine function. So raw_smp_processor_id() could be
used. This leaves a small chance that this thread will be migrated to
another CPU, so the master work would be done by a CPU from a different
context. Better for test coverage to make that a common case by just
having the first CPU to call in become the master.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/hash_pgtable.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)