From patchwork Tue Feb 12 21:48:24 2019
From: Laurent Vivier
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Thomas Huth, Eduardo Habkost, qemu-ppc@nongnu.org,
    Igor Mammedov, Paolo Bonzini, David Gibson
Date: Tue, 12 Feb 2019 22:48:24 +0100
Message-Id: <20190212214827.30543-2-lvivier@redhat.com>
In-Reply-To: <20190212214827.30543-1-lvivier@redhat.com>
References: <20190212214827.30543-1-lvivier@redhat.com>
Subject: [Qemu-devel] [RFC 1/4] numa, spapr: add thread-id in the possible_cpus list

spapr_possible_cpu_arch_ids() counts only cores, and so the number of
available CPUs is the number of vCPUs divided by smp_threads.
...
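To make the counting concrete, here is a minimal standalone sketch (plain C,
not part of the patch and not QEMU code), assuming the maxcpus=8/threads=2
topology of the example command line that follows:

    /* Before the patch, possible_cpus has one slot per core, so a vCPU
     * (thread) index passed to "-numa node,cpus=N" cannot be matched
     * one-to-one against the list. */
    #include <stdio.h>

    int main(void)
    {
        const int max_cpus = 8;    /* maxcpus=8 in the example below */
        const int smp_threads = 2; /* threads=2 */

        printf("possible_cpus slots before: %d (one per core)\n",
               max_cpus / smp_threads);
        printf("possible_cpus slots after:  %d (one per thread)\n",
               max_cpus);

        /* Before the patch, slot 3 carries core-id 3 * smp_threads = 6,
         * which is why the old error message reports "CPU 3 [core-id: 6]". */
        printf("slot 3 maps to core-id %d before the patch\n",
               3 * smp_threads);
        return 0;
    }

The command line below exercises exactly this topology: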
    -smp 4,maxcpus=8,cores=2,threads=2,sockets=2 \
    -numa node,cpus=0,cpus=1 \
    -numa node,cpus=3,cpus=4 \
    -numa node -numa node

This generates (info hotpluggable-cpus):

  node-id: 0 core-id: 0 thread-id: 0 [thread-id: 1]
  node-id: 0 core-id: 6 thread-id: 0 [thread-id: 1]
  node-id: 1 core-id: 2 thread-id: 0 [thread-id: 1]
  node-id: 1 core-id: 4 thread-id: 0 [thread-id: 1]

And this command line generates the following error:

  CPU(s) not present in any NUMA nodes: CPU 3 [core-id: 6]

That is wrong, because CPU 3 [core-id: 6] is assigned to node-id 0.
Moreover, "cpus=4" is not valid, because it means core-id 8 but maxcpus is 8.

With this patch we now have:

  node-id: 0 core-id: 0 thread-id: 0
  node-id: 0 core-id: 0 thread-id: 1
  node-id: 0 core-id: 1 thread-id: 0
  node-id: 1 core-id: 1 thread-id: 1
  node-id: 0 core-id: 2 thread-id: 1
  node-id: 1 core-id: 2 thread-id: 0
  node-id: 0 core-id: 3 thread-id: 1
  node-id: 0 core-id: 3 thread-id: 0

CPUs 0 (core-id: 0 thread-id: 0) and 1 (core-id: 0 thread-id: 1) are
correctly assigned to node-id 0, and CPUs 3 (core-id: 1 thread-id: 1) and
4 (core-id: 2 thread-id: 0) are correctly assigned to node-id 1.
All other CPUs are assigned to node-id 0 by default.

And the error message is also correct:

  CPU(s) not present in any NUMA nodes: CPU 2 [core-id: 1, thread-id: 0], \
  CPU 5 [core-id: 2, thread-id: 1], \
  CPU 6 [core-id: 3, thread-id: 0], \
  CPU 7 [core-id: 3, thread-id: 1]

Fixes: ec78f8114bc4 ("numa: use possible_cpus for not mapped CPUs check")
Cc: imammedo@redhat.com

Before commit ec78f8114bc4, the output was correct:

  CPU(s) not present in any NUMA nodes: 2 5 6 7

Signed-off-by: Laurent Vivier
---
 hw/ppc/spapr.c | 33 +++++++++++++--------------------
 1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 332cba89d425..7196ba09da34 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2404,15 +2404,13 @@ static void spapr_validate_node_memory(MachineState *machine, Error **errp)
 /* find cpu slot in machine->possible_cpus by core_id */
 static CPUArchId *spapr_find_cpu_slot(MachineState *ms, uint32_t id, int *idx)
 {
-    int index = id / smp_threads;
-
-    if (index >= ms->possible_cpus->len) {
+    if (id >= ms->possible_cpus->len) {
         return NULL;
     }
     if (idx) {
-        *idx = index;
+        *idx = id;
     }
-    return &ms->possible_cpus->cpus[index];
+    return &ms->possible_cpus->cpus[id];
 }

 static void spapr_set_vsmt_mode(sPAPRMachineState *spapr, Error **errp)
@@ -2514,7 +2512,7 @@ static void spapr_init_cpus(sPAPRMachineState *spapr)
             error_report("This machine version does not support CPU hotplug");
             exit(1);
         }
-        boot_cores_nr = possible_cpus->len;
+        boot_cores_nr = possible_cpus->len / smp_threads;
     }

     if (smc->pre_2_10_has_unused_icps) {
@@ -2528,7 +2526,7 @@ static void spapr_init_cpus(sPAPRMachineState *spapr)
         }
     }

-    for (i = 0; i < possible_cpus->len; i++) {
+    for (i = 0; i < possible_cpus->len / smp_threads; i++) {
         int core_id = i * smp_threads;

         if (mc->has_hotpluggable_cpus) {
@@ -3795,21 +3793,16 @@ spapr_cpu_index_to_props(MachineState *machine, unsigned cpu_index)

 static int64_t spapr_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx / smp_cores % nb_numa_nodes;
+    return idx / (smp_cores * smp_threads) % nb_numa_nodes;
 }

 static const CPUArchIdList *spapr_possible_cpu_arch_ids(MachineState *machine)
 {
     int i;
     const char *core_type;
-    int spapr_max_cores = max_cpus / smp_threads;
-    MachineClass *mc = MACHINE_GET_CLASS(machine);

-    if (!mc->has_hotpluggable_cpus) {
-        spapr_max_cores = QEMU_ALIGN_UP(smp_cpus, smp_threads) / smp_threads;
-    }
     if (machine->possible_cpus) {
-        assert(machine->possible_cpus->len == spapr_max_cores);
+        assert(machine->possible_cpus->len == max_cpus);
         return machine->possible_cpus;
     }

@@ -3820,16 +3813,16 @@ static const CPUArchIdList *spapr_possible_cpu_arch_ids(MachineState *machine)
     }

     machine->possible_cpus = g_malloc0(sizeof(CPUArchIdList) +
-                             sizeof(CPUArchId) * spapr_max_cores);
-    machine->possible_cpus->len = spapr_max_cores;
+                             sizeof(CPUArchId) * max_cpus);
+    machine->possible_cpus->len = max_cpus;
     for (i = 0; i < machine->possible_cpus->len; i++) {
-        int core_id = i * smp_threads;
-
         machine->possible_cpus->cpus[i].type = core_type;
         machine->possible_cpus->cpus[i].vcpus_count = smp_threads;
-        machine->possible_cpus->cpus[i].arch_id = core_id;
+        machine->possible_cpus->cpus[i].arch_id = i;
         machine->possible_cpus->cpus[i].props.has_core_id = true;
-        machine->possible_cpus->cpus[i].props.core_id = core_id;
+        machine->possible_cpus->cpus[i].props.core_id = i / smp_threads;
+        machine->possible_cpus->cpus[i].props.has_thread_id = true;
+        machine->possible_cpus->cpus[i].props.thread_id = i % smp_threads;
     }
     return machine->possible_cpus;
 }
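
For reference only (not part of the patch), here is a standalone sketch of the
per-thread slot layout that the patched spapr_possible_cpu_arch_ids() loop
builds, again assuming the example's maxcpus=8 and threads=2:

    /* Standalone illustration (not QEMU code): one possible_cpus entry per
     * vCPU, with arch-id, core-id and thread-id derived from the slot index
     * the same way as in the patched loop above. */
    #include <stdio.h>

    int main(void)
    {
        const int max_cpus = 8;    /* maxcpus=8 in the example command line */
        const int smp_threads = 2; /* threads=2 */

        for (int i = 0; i < max_cpus; i++) {
            printf("slot %d: arch-id %d core-id %d thread-id %d\n",
                   i, i, i / smp_threads, i % smp_threads);
        }
        return 0;
    }

With one slot per thread, the indices given to "-numa node,cpus=N" line up
with the slots, which is what makes the node assignments and the error message
quoted above come out right.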