From patchwork Sun Oct 20 19:47:32 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 1999663
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, Andrew Jones, Anup Patel, kvm@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v2 11/13] RISC-V: KVM: Use SBI sync SRET call when available
Date: Mon, 21 Oct 2024 01:17:32 +0530
Message-ID: <20241020194734.58686-12-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241020194734.58686-1-apatel@ventanamicro.com>
References: <20241020194734.58686-1-apatel@ventanamicro.com>
Implement an optimized KVM world-switch using SBI sync SRET call
when SBI nested acceleration extension is available. This improves
KVM world-switch when KVM RISC-V is running as a Guest under some
other hypervisor.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_nacl.h |  6 ++++
 arch/riscv/kvm/vcpu.c             | 48 ++++++++++++++++++++++++++++---
 arch/riscv/kvm/vcpu_switch.S      | 29 +++++++++++++++++++
 3 files changed, 79 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_nacl.h b/arch/riscv/include/asm/kvm_nacl.h
index 8f3e3ebf5017..4124d5e06a0f 100644
--- a/arch/riscv/include/asm/kvm_nacl.h
+++ b/arch/riscv/include/asm/kvm_nacl.h
@@ -12,6 +12,8 @@
 #include
 #include
 
+struct kvm_vcpu_arch;
+
 DECLARE_STATIC_KEY_FALSE(kvm_riscv_nacl_available);
 #define kvm_riscv_nacl_available() \
 	static_branch_unlikely(&kvm_riscv_nacl_available)
@@ -43,6 +45,10 @@ void __kvm_riscv_nacl_hfence(void *shmem,
 			     unsigned long page_num,
 			     unsigned long page_count);
 
+void __kvm_riscv_nacl_switch_to(struct kvm_vcpu_arch *vcpu_arch,
+				unsigned long sbi_ext_id,
+				unsigned long sbi_func_id);
+
 int kvm_riscv_nacl_enable(void);
 
 void kvm_riscv_nacl_disable(void);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 0aad58f984ff..e191e6eae0c0 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -770,19 +770,59 @@ static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *v
  */
 static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 {
+	void *nsh;
 	struct kvm_cpu_context *gcntx = &vcpu->arch.guest_context;
 	struct kvm_cpu_context *hcntx = &vcpu->arch.host_context;
 
 	kvm_riscv_vcpu_swap_in_guest_state(vcpu);
 	guest_state_enter_irqoff();
 
-	hcntx->hstatus = ncsr_swap(CSR_HSTATUS, gcntx->hstatus);
+	if (kvm_riscv_nacl_sync_sret_available()) {
+		nsh = nacl_shmem();
 
-	nsync_csr(-1UL);
+		if (kvm_riscv_nacl_autoswap_csr_available()) {
+			hcntx->hstatus =
+				nacl_csr_read(nsh, CSR_HSTATUS);
+			nacl_scratch_write_long(nsh,
+						SBI_NACL_SHMEM_AUTOSWAP_OFFSET +
+						SBI_NACL_SHMEM_AUTOSWAP_HSTATUS,
+						gcntx->hstatus);
+			nacl_scratch_write_long(nsh,
+						SBI_NACL_SHMEM_AUTOSWAP_OFFSET,
+						SBI_NACL_SHMEM_AUTOSWAP_FLAG_HSTATUS);
+		} else if (kvm_riscv_nacl_sync_csr_available()) {
+			hcntx->hstatus = nacl_csr_swap(nsh,
+						       CSR_HSTATUS, gcntx->hstatus);
+		} else {
+			hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus);
+		}
 
-	__kvm_riscv_switch_to(&vcpu->arch);
+		nacl_scratch_write_longs(nsh,
+					 SBI_NACL_SHMEM_SRET_OFFSET +
+					 SBI_NACL_SHMEM_SRET_X(1),
+					 &gcntx->ra,
+					 SBI_NACL_SHMEM_SRET_X_LAST);
+
+		__kvm_riscv_nacl_switch_to(&vcpu->arch, SBI_EXT_NACL,
+					   SBI_EXT_NACL_SYNC_SRET);
+
+		if (kvm_riscv_nacl_autoswap_csr_available()) {
+			nacl_scratch_write_long(nsh,
+						SBI_NACL_SHMEM_AUTOSWAP_OFFSET,
+						0);
+			gcntx->hstatus = nacl_scratch_read_long(nsh,
+								SBI_NACL_SHMEM_AUTOSWAP_OFFSET +
+								SBI_NACL_SHMEM_AUTOSWAP_HSTATUS);
+		} else {
+			gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
+		}
+	} else {
+		hcntx->hstatus = csr_swap(CSR_HSTATUS, gcntx->hstatus);
 
-	gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
+		__kvm_riscv_switch_to(&vcpu->arch);
+
+		gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
+	}
 
 	vcpu->arch.last_exit_cpu = vcpu->cpu;
 	guest_state_exit_irqoff();
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
index 9f13e5ce6a18..47686bcb21e0 100644
--- a/arch/riscv/kvm/vcpu_switch.S
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -218,6 +218,35 @@ SYM_FUNC_START(__kvm_riscv_switch_to)
 	ret
 SYM_FUNC_END(__kvm_riscv_switch_to)
 
+	/*
+	 * Parameters:
+	 *  A0 <= Pointer to struct kvm_vcpu_arch
+	 *  A1 <= SBI extension ID
+	 *  A2 <= SBI function ID
+	 */
+SYM_FUNC_START(__kvm_riscv_nacl_switch_to)
+	SAVE_HOST_GPRS
+
+	SAVE_HOST_AND_RESTORE_GUEST_CSRS .Lkvm_nacl_switch_return
+
+	/* Resume Guest using SBI nested acceleration */
+	add	a6, a2, zero
+	add	a7, a1, zero
+	ecall
+
+	/* Back to Host */
+	.align 2
+.Lkvm_nacl_switch_return:
+	SAVE_GUEST_GPRS
+
+	SAVE_GUEST_AND_RESTORE_HOST_CSRS
+
+	RESTORE_HOST_GPRS
+
+	/* Return to C code */
+	ret
+SYM_FUNC_END(__kvm_riscv_nacl_switch_to)
+
 SYM_CODE_START(__kvm_riscv_unpriv_trap)
 	/*
 	 * We assume that faulting unpriv load/store instruction is
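
The "ecall" sequence in the vcpu_switch.S hunk above follows the standard SBI
calling convention: extension ID in a7, function ID in a6, error/value pair
returned in a0/a1. As a rough illustration of that convention from C, here is
a minimal, self-contained sketch; the sbi_ecall0() helper and the struct
layout shown are hypothetical (the kernel has its own sbi_ecall()
infrastructure), and the numeric values behind SBI_EXT_NACL and
SBI_EXT_NACL_SYNC_SRET come from the SBI specification, not from this patch.

/*
 * Hedged sketch, not the kernel implementation: issue a zero-argument SBI
 * call.  Function ID goes in a6, extension ID in a7, "ecall" traps into the
 * SBI implementation, and error/value come back in a0/a1.
 */
struct sbiret {
	long error;
	long value;
};

static struct sbiret sbi_ecall0(unsigned long ext_id, unsigned long func_id)
{
	register unsigned long a0 asm("a0");
	register unsigned long a1 asm("a1");
	register unsigned long a6 asm("a6") = func_id;
	register unsigned long a7 asm("a7") = ext_id;

	/* Same instruction the vcpu_switch.S hunk executes after loading a6/a7 */
	asm volatile("ecall"
		     : "=r" (a0), "=r" (a1)
		     : "r" (a6), "r" (a7)
		     : "memory");

	return (struct sbiret){ .error = a0, .value = a1 };
}

In the world-switch itself such a call is only meaningful after the guest GPRs
have been staged in the SRET scratch area of the NACL shared memory (the
nacl_scratch_write_longs() call in the vcpu.c hunk) and, when the autoswap
feature is available, the HSTATUS autoswap slot has been armed.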