From patchwork Fri Jan 10 18:40:52 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 2032782
Date: Fri, 10 Jan 2025 18:40:52 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-26-8419288bc805@google.com>
Subject: [PATCH RFC v2 26/29] x86: Create library for flushing L1D for L1TF
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson,
 Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon,
 Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven,
 Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn,
 Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao,
 Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
 Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
 "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov,
 Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
 Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig,
 Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport,
 Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin,
 Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo,
 Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel,
 Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, linux-arch@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, kvm@vger.kernel.org,
 linux-efi@vger.kernel.org, Brendan Jackman

ASI will need to use this L1D flushing logic so put it in a library
where it can be used independently of KVM.

Since we're creating this library, it starts to look messy if we don't
also use it in the double-opt-in (both kernel cmdline and prctl)
mm-switching flush logic which is there for mitigating Snoop-Assisted L1
Data Sampling ("SAL1DS"). However, that logic doesn't use any
software-based fallback for flushing on CPUs without the L1D_FLUSH
command. In that case the prctl opt-in will fail.

One option would be to just start using the software fallback sequence
currently done by VMX code, but Linus didn't seem happy with a similar
sequence being used here [1]. CPUs affected by SAL1DS are a subset of
those affected by L1TF, so it wouldn't be completely insane to assume
that the same sequence works for both cases, but I'll err on the side of
caution and avoid risk of giving users a false impression that the
kernel has really flushed L1D for them.

[1] https://lore.kernel.org/linux-kernel/CAHk-=whC4PUhErcoDhCbTOdmPPy-Pj8j9ytsdcyz9TorOb4KUw@mail.gmail.com/

Instead, create this awkward library that is scoped specifically to
L1TF, which will be used only by VMX and ASI, and has an annoying "only
sometimes works" doc-comment. Users of the library can then infer from
that comment whether they have flushed L1D.

No functional change intended.

Checkpatch-args: --ignore=COMMIT_LOG_LONG_LINE
Signed-off-by: Brendan Jackman
---
 arch/x86/Kconfig            |  4 ++
 arch/x86/include/asm/l1tf.h | 11 ++++++
 arch/x86/kvm/Kconfig        |  1 +
 arch/x86/kvm/vmx/vmx.c      | 66 +++-----------------------------
 arch/x86/lib/Makefile       |  1 +
 arch/x86/lib/l1tf.c         | 94 +++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 117 insertions(+), 60 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ae31f36ce23d7c29d1e90b726c5a2e6ea5a63c8d..ca984dc7ee2f2b68c3ce1bcb5055047ca4f2a65d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2523,6 +2523,7 @@ config MITIGATION_ADDRESS_SPACE_ISOLATION
 	bool "Allow code to run with a reduced kernel address space"
 	default n
 	depends on X86_64 && !PARAVIRT && !UML
+	select X86_L1TF_FLUSH_LIB
 	help
 	  This feature provides the ability to run some kernel code
 	  with a reduced kernel address space. This can be used to
@@ -3201,6 +3202,9 @@ config HAVE_ATOMIC_IOMAP
 	def_bool y
 	depends on X86_32
 
+config X86_L1TF_FLUSH_LIB
+	def_bool n
+
 source "arch/x86/kvm/Kconfig"
 
 source "arch/x86/Kconfig.assembler"
diff --git a/arch/x86/include/asm/l1tf.h b/arch/x86/include/asm/l1tf.h
new file mode 100644
index 0000000000000000000000000000000000000000..e0be19c588bb5ec5c76a1861492e48b88615b4b8
--- /dev/null
+++ b/arch/x86/include/asm/l1tf.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_L1TF_FLUSH_H
+#define _ASM_L1TF_FLUSH_H
+
+#ifdef CONFIG_X86_L1TF_FLUSH_LIB
+int l1tf_flush_setup(void);
+void l1tf_flush(void);
+#endif /* CONFIG_X86_L1TF_FLUSH_LIB */
+
+#endif
+
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index f09f13c01c6bbd28fa37fdf50547abf4403658c9..81c71510e33e52447882ab7b22682199c57b492e 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -92,6 +92,7 @@ config KVM_SW_PROTECTED_VM
 config KVM_INTEL
 	tristate "KVM for Intel (and compatible) processors support"
 	depends on KVM && IA32_FEAT_CTL
+	select X86_L1TF_FLUSH_LIB
 	help
 	  Provides support for KVM on processors equipped with Intel's VT
 	  extensions, a.k.a. Virtual Machine Extensions (VMX).
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0e90463f1f2183b8d716f85d5c8a8af8958fef0b..b1a02f27b3abce0ef6ac448b66bef2c653a52eef 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include <asm/l1tf.h>
 #include
 #include
 #include
@@ -250,9 +251,6 @@ static void *vmx_l1d_flush_pages;
 
 static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 {
-	struct page *page;
-	unsigned int i;
-
 	if (!boot_cpu_has_bug(X86_BUG_L1TF)) {
 		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
 		return 0;
@@ -288,26 +286,11 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 		l1tf = VMENTER_L1D_FLUSH_ALWAYS;
 	}
 
-	if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
-	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
-		/*
-		 * This allocation for vmx_l1d_flush_pages is not tied to a VM
-		 * lifetime and so should not be charged to a memcg.
-		 */
-		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
-		if (!page)
-			return -ENOMEM;
-		vmx_l1d_flush_pages = page_address(page);
+	if (l1tf != VMENTER_L1D_FLUSH_NEVER) {
+		int err = l1tf_flush_setup();
 
-		/*
-		 * Initialize each page with a different pattern in
-		 * order to protect against KSM in the nested
-		 * virtualization case.
-		 */
-		for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
-			memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
-			       PAGE_SIZE);
-		}
+		if (err)
+			return err;
 	}
 
 	l1tf_vmx_mitigation = l1tf;
@@ -6652,20 +6635,8 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return ret;
 }
 
-/*
- * Software based L1D cache flush which is used when microcode providing
- * the cache control MSR is not loaded.
- *
- * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to
- * flush it is required to read in 64 KiB because the replacement algorithm
- * is not exactly LRU. This could be sized at runtime via topology
- * information but as all relevant affected CPUs have 32KiB L1D cache size
- * there is no point in doing so.
- */
 static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
-	int size = PAGE_SIZE << L1D_CACHE_ORDER;
-
 	/*
 	 * This code is only executed when the flush mode is 'cond' or
 	 * 'always'
@@ -6695,32 +6666,7 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 
 	vcpu->stat.l1d_flush++;
 
-	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
-		native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
-		return;
-	}
-
-	asm volatile(
-		/* First ensure the pages are in the TLB */
-		"xorl %%eax, %%eax\n"
-		".Lpopulate_tlb:\n\t"
-		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl $4096, %%eax\n\t"
-		"cmpl %%eax, %[size]\n\t"
-		"jne .Lpopulate_tlb\n\t"
-		"xorl %%eax, %%eax\n\t"
-		"cpuid\n\t"
-		/* Now fill the cache */
-		"xorl %%eax, %%eax\n"
-		".Lfill_cache:\n"
-		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl $64, %%eax\n\t"
-		"cmpl %%eax, %[size]\n\t"
-		"jne .Lfill_cache\n\t"
-		"lfence\n"
-		:: [flush_pages] "r" (vmx_l1d_flush_pages),
-		   [size] "r" (size)
-		: "eax", "ebx", "ecx", "edx");
+	l1tf_flush();
 }
 
 void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 98583a9dbab337e09a2e58905e5200499a496a07..b0a45bd70b40743a3fccb352b9641caacac83275 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -37,6 +37,7 @@ lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
 lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
 lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
 lib-$(CONFIG_MITIGATION_RETPOLINE) += retpoline.o
+lib-$(CONFIG_X86_L1TF_FLUSH_LIB) += l1tf.o
 
 obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
 obj-y += iomem.o
diff --git a/arch/x86/lib/l1tf.c b/arch/x86/lib/l1tf.c
new file mode 100644
index 0000000000000000000000000000000000000000..c474f18ae331c8dfa7a029c457dd3cf75bebf808
--- /dev/null
+++ b/arch/x86/lib/l1tf.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/gfp.h>
+#include <linux/mm.h>
+#include <linux/export.h>
+
+#include <asm/cpufeature.h>
+#include <asm/l1tf.h>
+#include <asm/msr.h>
+
+#define L1D_CACHE_ORDER 4
+static void *l1tf_flush_pages;
+
+int l1tf_flush_setup(void)
+{
+	struct page *page;
+	unsigned int i;
+
+	if (l1tf_flush_pages || boot_cpu_has(X86_FEATURE_FLUSH_L1D))
+		return 0;
+
+	page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
+	if (!page)
+		return -ENOMEM;
+	l1tf_flush_pages = page_address(page);
+
+	/*
+	 * Initialize each page with a different pattern in
+	 * order to protect against KSM in the nested
+	 * virtualization case.
+	 */
+	for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
+		memset(l1tf_flush_pages + i * PAGE_SIZE, i + 1,
+		       PAGE_SIZE);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(l1tf_flush_setup);
+
+/*
+ * Flush L1D in a way that:
+ *
+ * - definitely works on CPUs X86_FEATURE_FLUSH_L1D (because the SDM says so).
+ * - almost definitely works on other CPUs with L1TF (because someone on LKML
+ *   said someone from Intel said so).
+ * - may or may not work on other CPUs.
+ *
+ * Don't call unless l1tf_flush_setup() has returned successfully.
+ */
+noinstr void l1tf_flush(void)
+{
+	int size = PAGE_SIZE << L1D_CACHE_ORDER;
+
+	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
+		native_wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
+		return;
+	}
+
+	if (WARN_ON(!l1tf_flush_pages))
+		return;
+
+	/*
+	 * This sequence was provided by Intel for the purpose of mitigating
+	 * L1TF on VMX.
+	 *
+	 * The L1D cache is 32 KiB on Nehalem and some later microarchitectures,
+	 * but to flush it is required to read in 64 KiB because the replacement
+	 * algorithm is not exactly LRU. This could be sized at runtime via
+	 * topology information but as all relevant affected CPUs have 32KiB L1D
+	 * cache size there is no point in doing so.
+	 */
+	asm volatile(
+		/* First ensure the pages are in the TLB */
+		"xorl %%eax, %%eax\n"
+		".Lpopulate_tlb:\n\t"
+		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+		"addl $4096, %%eax\n\t"
+		"cmpl %%eax, %[size]\n\t"
+		"jne .Lpopulate_tlb\n\t"
+		"xorl %%eax, %%eax\n\t"
+		"cpuid\n\t"
+		/* Now fill the cache */
+		"xorl %%eax, %%eax\n"
+		".Lfill_cache:\n"
+		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+		"addl $64, %%eax\n\t"
+		"cmpl %%eax, %[size]\n\t"
+		"jne .Lfill_cache\n\t"
+		"lfence\n"
+		:: [flush_pages] "r" (l1tf_flush_pages),
+		   [size] "r" (size)
+		: "eax", "ebx", "ecx", "edx");
+}
+EXPORT_SYMBOL(l1tf_flush);
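
Usage sketch (not part of the patch): the new API is meant to be consumed
the same way as the vmx_setup_l1d_flush() / vmx_l1d_flush() conversion in
the diff above. In the example below only l1tf_flush_setup(), l1tf_flush()
and <asm/l1tf.h> come from this patch; the caller names (example_init(),
example_before_untrusted_code(), example_flush_ready) are hypothetical.

#include <linux/init.h>
#include <linux/types.h>

#include <asm/l1tf.h>

/* Illustrative state, not in the patch. */
static bool example_flush_ready;

static int __init example_init(void)
{
	int err;

	/*
	 * Allocate the software-fallback pages. This is a no-op on CPUs
	 * with X86_FEATURE_FLUSH_L1D or if setup already ran.
	 */
	err = l1tf_flush_setup();
	if (err)
		return err;	/* e.g. -ENOMEM: l1tf_flush() must not be called */

	example_flush_ready = true;
	return 0;
}

static void example_before_untrusted_code(void)
{
	/*
	 * Per the doc-comment in l1tf.c, this is architecturally guaranteed
	 * only with X86_FEATURE_FLUSH_L1D and believed-sufficient on
	 * L1TF-affected CPUs; callers shouldn't promise more than that.
	 */
	if (example_flush_ready)
		l1tf_flush();
}

As in the VMX conversion, a setup failure (-ENOMEM when the fallback pages
can't be allocated) has to be treated as "the flush is unavailable";
l1tf_flush() itself never reports failure.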