Message ID | 20181114170319.24828-1-lvivier@redhat.com (mailing list archive)
---|---
State | Not Applicable
Series | powerpc/numa: fix hot-added CPU on memory-less node
Context | Check | Description
---|---|---
snowpatch_ozlabs/apply_patch | success | next/apply_patch Successfully applied
snowpatch_ozlabs/build-ppc64le | success | build succeeded & removed 0 sparse warning(s)
snowpatch_ozlabs/build-ppc64be | success | build succeeded & removed 0 sparse warning(s)
snowpatch_ozlabs/build-ppc64e | success | build succeeded & removed 0 sparse warning(s)
snowpatch_ozlabs/build-pmac32 | success | build succeeded & removed 0 sparse warning(s)
snowpatch_ozlabs/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 15 lines checked
On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
> Trying to hotplug a CPU on an empty NUMA node (without
> memory or CPU) crashes the kernel when the CPU is onlined.
>
> During the onlining process, the kernel calls start_secondary()
> that ends by calling
> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
> that relies on NODE_DATA(nid)->node_zonelists and in our case
> NODE_DATA(nid) is NULL.
>
> To fix that, add the same checking as we already have in
> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
> the first online node.
>
> Bug: https://github.com/linuxppc/linux/issues/184
> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
> (powerpc/numa: Ensure nodes initialized for hotplug)
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  arch/powerpc/mm/numa.c | 9 +++++++++
>  1 file changed, 9 insertions(+)

This patch causes a regression for the cold-plug NUMA case (Case 1) and for
the hotplug + reboot case (Case 2), adding all vCPUs to node 0.

Env:
HW: Power8 host
Kernel: 4.20-rc2 + this patch

Case 1:
1. Boot a guest with 8 vCPUs (all available), spread out over 4 NUMA nodes.

<vcpu placement='static'>8</vcpu>
...
<numa>
  <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
  <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
  <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
  <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
</numa>

2. Check lscpu --- all vCPUs are added to node0 --> NOK

# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0-7
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):

Without this patch it was working fine:
# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0,1
NUMA node1 CPU(s):   2,3
NUMA node2 CPU(s):   4,5
NUMA node3 CPU(s):   6,7

Case 2:
1. Boot a guest with 8 vCPUs (2 available, 6 possible), spread out over 4 NUMA nodes.

<vcpu placement='static' current='2'>8</vcpu>
...
<numa>
  <cell id='0' cpus='0-1' memory='0' unit='KiB'/>
  <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
  <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
  <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
</numa>

2. Hotplug all vCPUs.

# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        2
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0,1,4-7
NUMA node1 CPU(s):   2,3

3. Reboot the guest.

# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0-7
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):

Without this patch, Case 2 crashes the guest during hotplug, i.e. the
issue reported in https://github.com/linuxppc/linux/issues/184

Regards,
-Satheesh.
>
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index 3a048e98a132..1b2d25a3c984 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
>  	if (nid < 0 || !node_possible(nid))
>  		nid = first_online_node;
>
> +	if (NODE_DATA(nid) == NULL) {
> +		/*
> +		 * Default to using the nearest node that has memory installed.
> +		 * Otherwise, it would be necessary to patch the kernel MM code
> +		 * to deal with more memoryless-node error conditions.
> +		 */
> +		nid = first_online_node;
> +	}
> +
>  	map_cpu_to_node(lcpu, nid);
>  	of_node_put(cpu);
>  out:
> --
> 2.17.2
On 15/11/2018 10:19, Satheesh Rajendran wrote:
> On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
>> Trying to hotplug a CPU on an empty NUMA node (without
>> memory or CPU) crashes the kernel when the CPU is onlined.
>>
>> During the onlining process, the kernel calls start_secondary()
>> that ends by calling
>> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
>> that relies on NODE_DATA(nid)->node_zonelists and in our case
>> NODE_DATA(nid) is NULL.
>>
>> To fix that, add the same checking as we already have in
>> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
>> the first online node.
>>
>> Bug: https://github.com/linuxppc/linux/issues/184
>> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
>> (powerpc/numa: Ensure nodes initialized for hotplug)
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>>  arch/powerpc/mm/numa.c | 9 +++++++++
>>  1 file changed, 9 insertions(+)
>
> This patch causes a regression for the cold-plug NUMA case (Case 1) and for
> the hotplug + reboot case (Case 2), adding all vCPUs to node 0.
>
> Env:
> HW: Power8 host
> Kernel: 4.20-rc2 + this patch
>
> Case 1:
> 1. Boot a guest with 8 vCPUs (all available), spread out over 4 NUMA nodes.
>
> <vcpu placement='static'>8</vcpu>
> ...
> <numa>
>   <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
>   <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>   <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>   <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
> </numa>
>
> 2. Check lscpu --- all vCPUs are added to node0 --> NOK
>
> # lscpu
> ...
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):
> NUMA node2 CPU(s):
> NUMA node3 CPU(s):
>
> Without this patch it was working fine:
> # lscpu
> ...
> NUMA node0 CPU(s):   0,1
> NUMA node1 CPU(s):   2,3
> NUMA node2 CPU(s):   4,5
> NUMA node3 CPU(s):   6,7

Good point, thank you.

I'm going to look at what happens, and at how the cold-plug case manages to
online CPUs on nodes with NODE_DATA() set to NULL (because that is what the
patch changes).

Thanks,
Laurent
On 15/11/2018 10:19, Satheesh Rajendran wrote:
> On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
>> Trying to hotplug a CPU on an empty NUMA node (without
>> memory or CPU) crashes the kernel when the CPU is onlined.
>>
>> During the onlining process, the kernel calls start_secondary()
>> that ends by calling
>> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
>> that relies on NODE_DATA(nid)->node_zonelists and in our case
>> NODE_DATA(nid) is NULL.
>>
>> To fix that, add the same checking as we already have in
>> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
>> the first online node.
>>
>> Bug: https://github.com/linuxppc/linux/issues/184
>> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
>> (powerpc/numa: Ensure nodes initialized for hotplug)
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>>  arch/powerpc/mm/numa.c | 9 +++++++++
>>  1 file changed, 9 insertions(+)
>
> This patch causes a regression for the cold-plug NUMA case (Case 1) and for
> the hotplug + reboot case (Case 2), adding all vCPUs to node 0.
>
> Env:
> HW: Power8 host
> Kernel: 4.20-rc2 + this patch
>
> Case 1:
> 1. Boot a guest with 8 vCPUs (all available), spread out over 4 NUMA nodes.
>
> <vcpu placement='static'>8</vcpu>
> ...
> <numa>
>   <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
>   <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>   <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>   <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
> </numa>
>
> 2. Check lscpu --- all vCPUs are added to node0 --> NOK
>
> # lscpu
> Architecture:        ppc64le
> Byte Order:          Little Endian
> CPU(s):              8
> On-line CPU(s) list: 0-7
> Thread(s) per core:  1
> Core(s) per socket:  8
> Socket(s):           1
> NUMA node(s):        4
> Model:               2.1 (pvr 004b 0201)
> Model name:          POWER8 (architected), altivec supported
> Hypervisor vendor:   KVM
> Virtualization type: para
> L1d cache:           64K
> L1i cache:           32K
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):
> NUMA node2 CPU(s):
> NUMA node3 CPU(s):
>
> Without this patch it was working fine:
> # lscpu
> Architecture:        ppc64le
> Byte Order:          Little Endian
> CPU(s):              8
> On-line CPU(s) list: 0-7
> Thread(s) per core:  1
> Core(s) per socket:  8
> Socket(s):           1
> NUMA node(s):        4
> Model:               2.1 (pvr 004b 0201)
> Model name:          POWER8 (architected), altivec supported
> Hypervisor vendor:   KVM
> Virtualization type: para
> L1d cache:           64K
> L1i cache:           32K
> NUMA node0 CPU(s):   0,1
> NUMA node1 CPU(s):   2,3
> NUMA node2 CPU(s):   4,5
> NUMA node3 CPU(s):   6,7
>
> Case 2:
> 1. Boot a guest with 8 vCPUs (2 available, 6 possible), spread out over 4 NUMA nodes.
>
> <vcpu placement='static' current='2'>8</vcpu>
> ...
> <numa>
>   <cell id='0' cpus='0-1' memory='0' unit='KiB'/>
>   <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>   <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>   <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
> </numa>
>
> 2. Hotplug all vCPUs.
>
> # lscpu
> Architecture:        ppc64le
> Byte Order:          Little Endian
> CPU(s):              8
> On-line CPU(s) list: 0-7
> Thread(s) per core:  1
> Core(s) per socket:  8
> Socket(s):           1
> NUMA node(s):        2
> Model:               2.1 (pvr 004b 0201)
> Model name:          POWER8 (architected), altivec supported
> Hypervisor vendor:   KVM
> Virtualization type: para
> L1d cache:           64K
> L1i cache:           32K
> NUMA node0 CPU(s):   0,1,4-7
> NUMA node1 CPU(s):   2,3
>
> 3. Reboot the guest.
>
> # lscpu
> Architecture:        ppc64le
> Byte Order:          Little Endian
> CPU(s):              8
> On-line CPU(s) list: 0-7
> Thread(s) per core:  1
> Core(s) per socket:  8
> Socket(s):           1
> NUMA node(s):        4
> Model:               2.1 (pvr 004b 0201)
> Model name:          POWER8 (architected), altivec supported
> Hypervisor vendor:   KVM
> Virtualization type: para
> L1d cache:           64K
> L1i cache:           32K
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):
> NUMA node2 CPU(s):
> NUMA node3 CPU(s):
>
> Without this patch, Case 2 crashes the guest during hotplug, i.e. the
> issue reported in https://github.com/linuxppc/linux/issues/184
>
> Regards,
> -Satheesh.
>
>> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
>> index 3a048e98a132..1b2d25a3c984 100644
>> --- a/arch/powerpc/mm/numa.c
>> +++ b/arch/powerpc/mm/numa.c
>> @@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
>>  	if (nid < 0 || !node_possible(nid))
>>  		nid = first_online_node;
>>
>> +	if (NODE_DATA(nid) == NULL) {
>> +		/*
>> +		 * Default to using the nearest node that has memory installed.
>> +		 * Otherwise, it would be necessary to patch the kernel MM code
>> +		 * to deal with more memoryless-node error conditions.
>> +		 */
>> +		nid = first_online_node;
>> +	}
>> +
>>  	map_cpu_to_node(lcpu, nid);
>>  	of_node_put(cpu);
>>  out:
>> --
>> 2.17.2

I have worked on this problem for a while, and I don't see any easy fix.
It seems the kernel is not ready to online a memory-less/CPU-less node when
someone hotplugs a CPU into it. I think we have to fix several areas to be
able to do that.

Perhaps someone from IBM could have a better view of what is needed?
Michael? Nathan?

Thanks,
Laurent
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 3a048e98a132..1b2d25a3c984 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
 	if (nid < 0 || !node_possible(nid))
 		nid = first_online_node;
 
+	if (NODE_DATA(nid) == NULL) {
+		/*
+		 * Default to using the nearest node that has memory installed.
+		 * Otherwise, it would be necessary to patch the kernel MM code
+		 * to deal with more memoryless-node error conditions.
+		 */
+		nid = first_online_node;
+	}
+
 	map_cpu_to_node(lcpu, nid);
 	of_node_put(cpu);
 out:
Trying to hotplug a CPU on an empty NUMA node (without
memory or CPU) crashes the kernel when the CPU is onlined.

During the onlining process, the kernel calls start_secondary()
that ends by calling
set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
that relies on NODE_DATA(nid)->node_zonelists and in our case
NODE_DATA(nid) is NULL.

To fix that, add the same checking as we already have in
find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
the first online node.

Bug: https://github.com/linuxppc/linux/issues/184
Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6 ("powerpc/numa: Ensure nodes initialized for hotplug")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 arch/powerpc/mm/numa.c | 9 +++++++++
 1 file changed, 9 insertions(+)