From patchwork Sun Oct 20 19:47:34 2024
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 13/13] RISC-V: KVM: Use NACL HFENCEs for KVM request based HFENCEs
Date: Mon, 21 Oct 2024 01:17:34 +0530
Message-ID: <20241020194734.58686-14-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241020194734.58686-1-apatel@ventanamicro.com>
References: <20241020194734.58686-1-apatel@ventanamicro.com>
When running under some other hypervisor, use SBI NACL based HFENCEs
for TLB shoot-down via KVM requests. This makes HFENCEs faster whenever
SBI nested acceleration is available.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/tlb.c | 57 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 40 insertions(+), 17 deletions(-)

diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 23c0e82b5103..2f91ea5f8493 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -14,6 +14,7 @@
 #include <asm/csr.h>
 #include <asm/cpufeature.h>
 #include <asm/insn-def.h>
+#include <asm/kvm_nacl.h>
 
 #define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
@@ -186,18 +187,24 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vmid *vmid;
+	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+	unsigned long vmid = READ_ONCE(v->vmid);
 
-	vmid = &vcpu->kvm->arch.vmid;
-	kvm_riscv_local_hfence_gvma_vmid_all(READ_ONCE(vmid->vmid));
+	if (kvm_riscv_nacl_available())
+		nacl_hfence_gvma_vmid_all(nacl_shmem(), vmid);
+	else
+		kvm_riscv_local_hfence_gvma_vmid_all(vmid);
 }
 
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vmid *vmid;
+	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+	unsigned long vmid = READ_ONCE(v->vmid);
 
-	vmid = &vcpu->kvm->arch.vmid;
-	kvm_riscv_local_hfence_vvma_all(READ_ONCE(vmid->vmid));
+	if (kvm_riscv_nacl_available())
+		nacl_hfence_vvma_all(nacl_shmem(), vmid);
+	else
+		kvm_riscv_local_hfence_vvma_all(vmid);
 }
 
 static bool vcpu_hfence_dequeue(struct kvm_vcpu *vcpu,
@@ -251,6 +258,7 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu,
 
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 {
+	unsigned long vmid;
 	struct kvm_riscv_hfence d = { 0 };
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
 
@@ -259,26 +267,41 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 		case KVM_RISCV_HFENCE_UNKNOWN:
 			break;
 		case KVM_RISCV_HFENCE_GVMA_VMID_GPA:
-			kvm_riscv_local_hfence_gvma_vmid_gpa(
-						READ_ONCE(v->vmid),
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_gvma_vmid(nacl_shmem(), vmid,
+						      d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_gvma_vmid_gpa(vmid, d.addr,
+								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			kvm_riscv_local_hfence_vvma_asid_gva(
-						READ_ONCE(v->vmid), d.asid,
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma_asid(nacl_shmem(), vmid, d.asid,
+						      d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_vvma_asid_gva(vmid, d.asid, d.addr,
+								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_ALL:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			kvm_riscv_local_hfence_vvma_asid_all(
-						READ_ONCE(v->vmid), d.asid);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, d.asid);
+			else
+				kvm_riscv_local_hfence_vvma_asid_all(vmid, d.asid);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
-			kvm_riscv_local_hfence_vvma_gva(
-						READ_ONCE(v->vmid),
-						d.addr, d.size, d.order);
+			vmid = READ_ONCE(v->vmid);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma(nacl_shmem(), vmid,
+						 d.addr, d.size, d.order);
+			else
+				kvm_riscv_local_hfence_vvma_gva(vmid, d.addr,
+								d.size, d.order);
 			break;
 		default:
 			break;
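
Not part of the patch: a minimal sketch of the dispatch pattern that every
conversion above follows, assuming the kvm_riscv_nacl_available(),
nacl_shmem() and nacl_hfence_*() helpers introduced earlier in this series
(the function name below is hypothetical). When nested acceleration is
available the HFENCE request is handed to the host hypervisor through the
NACL shared memory helpers; otherwise the existing local HFENCE instruction
path is kept as a fallback.

	/* Illustrative sketch only; mirrors the shape of the converted functions. */
	static void sketch_hfence_gvma_vmid_all(struct kvm_vcpu *vcpu)
	{
		struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
		unsigned long vmid = READ_ONCE(v->vmid);

		if (kvm_riscv_nacl_available())
			/* Hand the fence to the host via the NACL shared memory. */
			nacl_hfence_gvma_vmid_all(nacl_shmem(), vmid);
		else
			/* No nested acceleration: issue a local HFENCE.GVMA. */
			kvm_riscv_local_hfence_gvma_vmid_all(vmid);
	}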