From patchwork Tue Aug 1 10:35:10 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 1815453
From: Zhao Liu
To: Eduardo Habkost, Marcel Apfelbaum, Philippe Mathieu-Daudé, Yanan Wang,
    "Michael S. Tsirkin", Richard Henderson, Paolo Bonzini
Tsirkin" , Richard Henderson , Paolo Bonzini Cc: qemu-devel@nongnu.org, Zhenyu Wang , Xiaoyao Li , Babu Moger , Zhao Liu Subject: [PATCH v3 00/17] Support smp.clusters for x86 Date: Tue, 1 Aug 2023 18:35:10 +0800 Message-Id: <20230801103527.397756-1-zhao1.liu@linux.intel.com> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-Host-Lookup-Failed: Reverse DNS lookup failed for 192.55.52.88 (failed) Received-SPF: none client-ip=192.55.52.88; envelope-from=zhao1.liu@linux.intel.com; helo=mgamail.intel.com X-Spam_score_int: -34 X-Spam_score: -3.5 X-Spam_bar: --- X-Spam_report: (-3.5 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, RDNS_NONE=0.793, SPF_HELO_NONE=0.001, SPF_NONE=0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org From: Zhao Liu Hi list, This is the our v3 patch series, rebased on the master branch at the commit 234320cd0573 ("Merge tag 'pull-target-arm-20230731' of https: //git.linaro.org/people/pmaydell/qemu-arm into staging"). Comparing with v2 [1], v3 mainly adds "Tested-by", "Reviewed-by" and "ACKed-by" (for PC related patchies) tags and minor code changes (Pls see changelog). # Introduction This series add the cluster support for x86 PC machine, which allows x86 can use smp.clusters to configure x86 modlue level CPU topology. And since the compatibility issue (see section: ## Why not share L2 cache in cluster directly), this series also introduce a new command to adjust the x86 L2 cache topology. Welcome your comments! # Backgroud The "clusters" parameter in "smp" is introduced by ARM [2], but x86 hasn't supported it. At present, x86 defaults L2 cache is shared in one core, but this is not enough. There're some platforms that multiple cores share the same L2 cache, e.g., Alder Lake-P shares L2 cache for one module of Atom cores [3], that is, every four Atom cores shares one L2 cache. Therefore, we need the new CPU topology level (cluster/module). Another reason is for hybrid architecture. cluster support not only provides another level of topology definition in x86, but would aslo provide required code change for future our hybrid topology support. # Overview ## Introduction of module level for x86 "cluster" in smp is the CPU topology level which is between "core" and die. For x86, the "cluster" in smp is corresponding to the module level [4], which is above the core level. So use the "module" other than "cluster" in x86 code. And please note that x86 already has a cpu topology level also named "cluster" [4], this level is at the upper level of the package. Here, the cluster in x86 cpu topology is completely different from the "clusters" as the smp parameter. After the module level is introduced, the cluster as the smp parameter will actually refer to the module level of x86. 
## Why not share L2 cache in cluster directly

Though "clusters" was introduced to help define the L2 cache topology
[2], using cluster to define x86's L2 cache topology would cause a
compatibility problem:

Currently, x86 defaults to the L2 cache being shared within one core,
which actually implies the default setting "1 core per L2 cache" and
therefore implicitly defaults to having as many L2 caches as cores. For
example (i386 PC machine):

-smp 16,sockets=2,dies=2,cores=2,threads=2,maxcpus=16 (*)

Considering the topology of the L2 cache, this (*) implicitly means
"1 core per L2 cache" and "2 L2 caches per die".

If we used cluster to configure the L2 cache topology with a new default
setting "1 cluster per L2 cache", the above semantics would change to
"2 cores per cluster" and "1 cluster per L2 cache", that is, "2 cores per
L2 cache". The same command line (*) would then produce a different L2
cache topology, further affecting the performance of the virtual machine.

Therefore, x86 should only treat cluster as a CPU topology level and, for
compatibility, avoid using it to change the L2 cache topology by default.

## Module level in CPUID

Currently, we don't expose the module level in CPUID.1FH, because current
Linux (v6.2-rc6) doesn't support the module level, and exposing the
module and die levels at the same time in CPUID.1FH causes Linux to
calculate a wrong die_id. The module level should not be exposed until
real machines have the module level in CPUID.1FH.

We can configure CPUID.04H.02H (L2 cache topology) with the module level
via a new property:

-cpu x-l2-cache-topo=cluster

For more information about this property, please see the section
"## New property: x-l2-cache-topo".

## New cache topology info in CPUCacheInfo

Currently, by default, the cache topology is encoded as:
1. i/d cache is shared in one core.
2. L2 cache is shared in one core.
3. L3 cache is shared in one die.

This default general setting has caused a misunderstanding, namely that
the cache topology is completely equated with a specific CPU topology,
such as the connection between the L2 cache and the core level, and the
connection between the L3 cache and the die level. In fact, these cache
topologies depend on the specific platform and are not static. For
example, on Alder Lake-P, every four Atom cores share the same L2 cache
[3].

Thus, in this patch set, we explicitly define the corresponding cache
topology for the different CPU models, which has two benefits:
1. It is easy to extend to new CPU models in the future that have a
   different cache topology.
2. It can easily support a custom cache topology via a property
   (e.g., x-l2-cache-topo).

## New property: x-l2-cache-topo

The property x-l2-cache-topo will be used to change the L2 cache topology
in CPUID.04H. It allows the user to set whether the L2 cache is shared at
the core level or at the cluster level. If the user passes
"-cpu x-l2-cache-topo=[core|cluster]", the old L2 cache topology will be
overridden by the new setting.

Since CPUID.04H is used by Intel CPUs, this property is available on
Intel CPUs for now. When necessary, it can be extended to
CPUID[0x8000001D] for AMD CPUs.
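As a rough illustration of how a cache's share level can drive the CPUID
encoding, the sketch below derives the "maximum number of addressable IDs
for logical processors sharing this cache" field (CPUID.04H EAX[25:14],
stored minus 1) from the APIC ID bit widths of the levels at or below the
share level. The enum and helper names are hypothetical and only sketch
the idea; they are not the series' actual code.

#include <stdint.h>

/* Hypothetical share levels, mirroring "shared in one core/module/die". */
enum cache_share_level { SHARE_CORE, SHARE_MODULE, SHARE_DIE };

/*
 * EAX[25:14] encodes "maximum addressable IDs sharing this cache" - 1;
 * the addressable ID space is the APIC ID sub-field covering all
 * topology levels below the cache's share level.
 */
static uint32_t eax_25_14_for_share_level(enum cache_share_level level,
                                          unsigned thread_bits,
                                          unsigned core_bits,
                                          unsigned module_bits)
{
    unsigned width = thread_bits;          /* shared within one core   */

    if (level >= SHARE_MODULE) {
        width += core_bits;                /* shared within one module */
    }
    if (level >= SHARE_DIE) {
        width += module_bits;              /* shared within one die    */
    }
    return (1u << width) - 1;
}

Under this scheme, switching the L2 share level from core to
cluster/module widens the field from thread_bits to
thread_bits + core_bits, which is exactly the kind of guest-visible
difference discussed in the compatibility section above.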
# Patch description

patch 1-2: Cleanups of coding style and test name.
patch 3-4,15: Fixes for x86 topology, Intel L1 cache topology and AMD
              cache topology encoding.
patch 5-6: Cleanups of topology-related CPUID encoding and QEMU topology
           variables.
patch 7-12: Add the module as the new CPU topology level in x86; it
            corresponds to the cluster level in the generic code.
patch 13,14,16: Add cache topology information to the cache models.
patch 17: Introduce a new property to configure the L2 cache topology.

[1]: https://lists.gnu.org/archive/html/qemu-devel/2023-05/msg07179.html
[2]: https://patchew.org/QEMU/20211228092221.21068-1-wangyanan55@huawei.com/
[3]: https://www.intel.com/content/www/us/en/products/platforms/details/alder-lake-p.html
[4]: SDM, vol.3, ch.9, 9.9.1 Hierarchical Mapping of Shared Resources.

Best Regards,
Zhao

---
Changelog:

Changes since v2:
 * Add "Tested-by", "Reviewed-by" and "Acked-by" tags.
 * Use the newly added wrapped helper to get cores per socket in
   qemu_init_vcpu().

Changes since v1:
 * Reordered patches. (Yanan)
 * Deprecated the patch to fix the comment of machine_parse_smp_config().
   (Yanan)
 * Rename test-x86-cpuid.c to test-x86-topo.c. (Yanan)
 * Split the Intel L1 cache topology fix into a separate patch. (Yanan)
 * Combined module_id and APIC ID for module level support into one
   patch. (Yanan)
 * Make the cache_into_passthrough case of the CPUID 0x04 leaf in
   cpu_x86_cpuid() use max_processor_ids_for_cache() and
   max_core_ids_in_package() to encode CPUID[4]. (Yanan)
 * Add the prefix "CPU_TOPO_LEVEL_*" for CPU topology level names. (Yanan)
 * Rename the "INVALID" level to "CPU_TOPO_LEVEL_UNKNOW". (Yanan)

---
Zhao Liu (10):
  i386: Fix comment style in topology.h
  tests: Rename test-x86-cpuid.c to test-x86-topo.c
  i386/cpu: Fix i/d-cache topology to core level for Intel CPU
  i386/cpu: Use APIC ID offset to encode cache topo in CPUID[4]
  i386/cpu: Consolidate the use of topo_info in cpu_x86_cpuid()
  i386: Add cache topology info in CPUCacheInfo
  i386: Use CPUCacheInfo.share_level to encode CPUID[4]
  i386: Fix NumSharingCache for CPUID[0x8000001D].EAX[bits 25:14]
  i386: Use CPUCacheInfo.share_level to encode
    CPUID[0x8000001D].EAX[bits 25:14]
  i386: Add new property to control L2 cache topo in CPUID.04H

Zhuocheng Ding (7):
  softmmu: Fix CPUSTATE.nr_cores' calculation
  i386: Introduce module-level cpu topology to CPUX86State
  i386: Support modules_per_die in X86CPUTopoInfo
  i386: Support module_id in X86CPUTopoIDs
  i386/cpu: Introduce cluster-id to X86CPU
  tests: Add test case of APIC ID for module level parsing
  hw/i386/pc: Support smp.clusters for x86 PC machine

 MAINTAINERS                                |   2 +-
 hw/i386/pc.c                               |   1 +
 hw/i386/x86.c                              |  49 +++++-
 include/hw/core/cpu.h                      |   2 +-
 include/hw/i386/topology.h                 |  68 +++++---
 qemu-options.hx                            |  10 +-
 softmmu/cpus.c                             |   2 +-
 target/i386/cpu.c                          | 158 ++++++++++++++----
 target/i386/cpu.h                          |  25 +++
 tests/unit/meson.build                     |   4 +-
 .../{test-x86-cpuid.c => test-x86-topo.c}  |  58 ++++---
 11 files changed, 280 insertions(+), 99 deletions(-)
 rename tests/unit/{test-x86-cpuid.c => test-x86-topo.c} (73%)