From patchwork Mon Oct 4 15:59:58 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536189
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Subject: [PATCH v3 01/52] powerpc/64s: Remove WORT SPR from POWER9/10 (take 2)
Date: Tue, 5 Oct 2021 01:59:58 +1000
Message-Id: <20211004160049.1338837-2-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

This removes a missed remnant of the WORT SPR.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/platforms/powernv/idle.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index e3ffdc8e8567..86e787502e42 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -589,7 +589,6 @@ struct p9_sprs {
 	u64 purr;
 	u64 spurr;
 	u64 dscr;
-	u64 wort;
 	u64 ciabr;
 	u64 mmcra;

From patchwork Mon Oct 4 15:59:59 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536190
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Subject: [PATCH v3 02/52] powerpc/64s: guard optional TIDR SPR with CPU ftr test
Date: Tue, 5 Oct 2021 01:59:59 +1000
Message-Id: <20211004160049.1338837-3-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

The TIDR SPR only exists on POWER9. Avoid accessing it when the
feature bit for it is not set.

Signed-off-by: Nicholas Piggin
Reviewed-by: Fabiano Rosas
---
 arch/powerpc/kvm/book3s_hv.c | 12 ++++++++----
 arch/powerpc/xmon/xmon.c     | 10 ++++++++--
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2acb1c96cfaf..f4a779fffd18 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3767,7 +3767,8 @@ static void load_spr_state(struct kvm_vcpu *vcpu)
 	mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
 	mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
 	mtspr(SPRN_BESCR, vcpu->arch.bescr);
-	mtspr(SPRN_TIDR, vcpu->arch.tid);
+	if (cpu_has_feature(CPU_FTR_P9_TIDR))
+		mtspr(SPRN_TIDR, vcpu->arch.tid);
 	mtspr(SPRN_AMR, vcpu->arch.amr);
 	mtspr(SPRN_UAMOR, vcpu->arch.uamor);
@@ -3793,7 +3794,8 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
 	vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
 	vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
 	vcpu->arch.bescr = mfspr(SPRN_BESCR);
-	vcpu->arch.tid = mfspr(SPRN_TIDR);
+	if (cpu_has_feature(CPU_FTR_P9_TIDR))
+		vcpu->arch.tid = mfspr(SPRN_TIDR);
 	vcpu->arch.amr = mfspr(SPRN_AMR);
 	vcpu->arch.uamor = mfspr(SPRN_UAMOR);
 	vcpu->arch.dscr = mfspr(SPRN_DSCR);
@@ -3813,7 +3815,8 @@ struct p9_host_os_sprs {
 static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
 {
 	host_os_sprs->dscr = mfspr(SPRN_DSCR);
-	host_os_sprs->tidr = mfspr(SPRN_TIDR);
+	if (cpu_has_feature(CPU_FTR_P9_TIDR))
+		host_os_sprs->tidr = mfspr(SPRN_TIDR);
 	host_os_sprs->iamr = mfspr(SPRN_IAMR);
 	host_os_sprs->amr = mfspr(SPRN_AMR);
 	host_os_sprs->fscr = mfspr(SPRN_FSCR);
@@ -3827,7 +3830,8 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
 	mtspr(SPRN_UAMOR, 0);
 	mtspr(SPRN_DSCR, host_os_sprs->dscr);
-	mtspr(SPRN_TIDR, host_os_sprs->tidr);
+	if (cpu_has_feature(CPU_FTR_P9_TIDR))
+		mtspr(SPRN_TIDR, host_os_sprs->tidr);
 	mtspr(SPRN_IAMR, host_os_sprs->iamr);
 	if (host_os_sprs->amr != vcpu->arch.amr)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index dd8241c009e5..7958e5aae844 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2107,8 +2107,14 @@ static void dump_300_sprs(void)
 	if (!cpu_has_feature(CPU_FTR_ARCH_300))
 		return;
-	printf("pidr   = %.16lx  tidr  = %.16lx\n",
-		mfspr(SPRN_PID), mfspr(SPRN_TIDR));
+	if (cpu_has_feature(CPU_FTR_P9_TIDR)) {
+		printf("pidr   = %.16lx  tidr  = %.16lx\n",
+			mfspr(SPRN_PID), mfspr(SPRN_TIDR));
+	} else {
+		printf("pidr   = %.16lx\n",
+			mfspr(SPRN_PID));
+	}
+
 	printf("psscr  = %.16lx\n",
 		hv ? mfspr(SPRN_PSSCR) : mfspr(SPRN_PSSCR_PR));
From patchwork Mon Oct 4 16:00:00 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536191
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, Alexey Kardashevskiy
Subject: [PATCH v3 03/52] KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host
Date: Tue, 5 Oct 2021 02:00:00 +1000
Message-Id: <20211004160049.1338837-4-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

The host Linux timer code arms the decrementer with the value
'decrementers_next_tb - current_tb' using set_dec(), which stores
val - 1 on Book3S-64. That is not quite the same as what KVM does
to re-arm the host decrementer when exiting the guest.

This shouldn't be a significant change, but it makes the logic match
and avoids this small extra change being brought into the next patch.

Suggested-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index f4a779fffd18..6a07a79f07d8 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4050,7 +4050,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;
-	mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb());
+	set_dec(local_paca->kvm_hstate.dec_expires - mftb());
 	/* We may have raced with new irq work */
 	if (test_irq_work_pending())
 		set_dec(1);
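For illustration, the difference this patch removes boils down to the
following sketch (not part of the patch; the set_dec() behaviour is as
described in the commit message, helper names are made up):

	/* Illustration only: two ways of arming DEC on the way out of a guest. */
	static void rearm_host_dec_old(u64 dec_expires)
	{
		mtspr(SPRN_DEC, dec_expires - mftb());	/* raw SPR write, what KVM used to do */
	}

	static void rearm_host_dec_like_host(u64 dec_expires)
	{
		set_dec(dec_expires - mftb());		/* stores (val - 1) on Book3S-64 */
	}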
From patchwork Mon Oct 4 16:00:01 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536192
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Subject: [PATCH v3 04/52] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read
Date: Tue, 5 Oct 2021 02:00:01 +1000
Message-Id: <20211004160049.1338837-5-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

There is no need to save away the host DEC value, as it is derived
from the host timer subsystem, which maintains the next timer time,
so it can be restored from there.
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/time.h |  5 +++++
 arch/powerpc/kernel/time.c      |  1 +
 arch/powerpc/kvm/book3s_hv.c    | 14 +++++++-------
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 8c2c3dd4ddba..fd09b4797fd7 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -111,6 +111,11 @@ static inline unsigned long test_irq_work_pending(void)
 DECLARE_PER_CPU(u64, decrementers_next_tb);
+static inline u64 timer_get_next_tb(void)
+{
+	return __this_cpu_read(decrementers_next_tb);
+}
+
 /* Convert timebase ticks to nanoseconds */
 unsigned long long tb_to_ns(unsigned long long tb_ticks);
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 934d8ae66cc6..e84a087223ce 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -107,6 +107,7 @@ struct clock_event_device decrementer_clockevent = {
 EXPORT_SYMBOL(decrementer_clockevent);
 DEFINE_PER_CPU(u64, decrementers_next_tb);
+EXPORT_SYMBOL_GPL(decrementers_next_tb);
 static DEFINE_PER_CPU(struct clock_event_device, decrementers);
 #define XSEC_PER_SEC (1024*1024)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6a07a79f07d8..30d400bf161b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3860,18 +3860,17 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 	struct p9_host_os_sprs host_os_sprs;
 	s64 dec;
-	u64 tb;
+	u64 tb, next_timer;
 	int trap, save_pmu;
 	WARN_ON_ONCE(vcpu->arch.ceded);
-	dec = mfspr(SPRN_DEC);
 	tb = mftb();
-	if (dec < 0)
+	next_timer = timer_get_next_tb();
+	if (tb >= next_timer)
 		return BOOK3S_INTERRUPT_HV_DECREMENTER;
-	local_paca->kvm_hstate.dec_expires = dec + tb;
-	if (local_paca->kvm_hstate.dec_expires < time_limit)
-		time_limit = local_paca->kvm_hstate.dec_expires;
+	if (next_timer < time_limit)
+		time_limit = next_timer;
 	save_p9_host_os_sprs(&host_os_sprs);
@@ -4050,7 +4049,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;
-	set_dec(local_paca->kvm_hstate.dec_expires - mftb());
+	next_timer = timer_get_next_tb();
+	set_dec(next_timer - mftb());
 	/* We may have raced with new irq work */
 	if (test_irq_work_pending())
 		set_dec(1);
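A minimal sketch of the entry-side check this enables, restating the
logic from the diff above (illustrative helper name; the real logic
lives in kvmhv_p9_guest_entry()):

	static int guest_entry_time_check_sketch(u64 *time_limit)
	{
		u64 tb = mftb();			/* current timebase */
		u64 next_timer = timer_get_next_tb();	/* next host timer, in tb units */

		if (tb >= next_timer)			/* host timer already due */
			return BOOK3S_INTERRUPT_HV_DECREMENTER;
		if (next_timer < *time_limit)		/* don't run the guest past it */
			*time_limit = next_timer;
		return 0;
	}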
From patchwork Mon Oct 4 16:00:02 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536193
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, Alexey Kardashevskiy
Subject: [PATCH v3 05/52] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC
Date: Tue, 5 Oct 2021 02:00:02 +1000
Message-Id: <20211004160049.1338837-6-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

On processors that don't suppress the HDEC exceptions when
LPCR[HDICE]=0, this could help reduce needless guest exits due to
leftover exceptions on entering the guest.
Reviewed-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/time.h       | 2 ++
 arch/powerpc/kernel/time.c            | 1 +
 arch/powerpc/kvm/book3s_hv_p9_entry.c | 3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index fd09b4797fd7..69b6be617772 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -18,6 +18,8 @@
 #include
 /* time.c */
+extern u64 decrementer_max;
+
 extern unsigned long tb_ticks_per_jiffy;
 extern unsigned long tb_ticks_per_usec;
 extern unsigned long tb_ticks_per_sec;
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index e84a087223ce..6ce40d2ac201 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -88,6 +88,7 @@ static struct clocksource clocksource_timebase = {
 #define DECREMENTER_DEFAULT_MAX 0x7FFFFFFF
 u64 decrementer_max = DECREMENTER_DEFAULT_MAX;
+EXPORT_SYMBOL_GPL(decrementer_max);	/* for KVM HDEC */
 static int decrementer_set_next_event(unsigned long evt,
 				      struct clock_event_device *dev);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 961b3d70483c..0ff9ddb5e7ca 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -504,7 +504,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 		vc->tb_offset_applied = 0;
 	}
-	mtspr(SPRN_HDEC, 0x7fffffff);
+	/* HDEC must be at least as large as DEC, so decrementer_max fits */
+	mtspr(SPRN_HDEC, decrementer_max);
 	save_clear_guest_mmu(kvm, vcpu);
 	switch_mmu_to_host(kvm, host_pidr);
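The reasoning behind reusing decrementer_max for HDEC can be sketched
as follows (illustration only; the exact large-decrementer width is
machine-dependent and the helper name is made up):

	/* Sketch: park HDEC as far out as the hardware allows. */
	static void park_hdec_sketch(void)
	{
		/*
		 * HDEC is at least as wide as DEC, so a value that fits in DEC
		 * (decrementer_max) always fits in HDEC.  With the large
		 * decrementer enabled this is far longer than the old
		 * hard-coded 0x7fffffff.
		 */
		mtspr(SPRN_HDEC, decrementer_max);
	}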
From patchwork Mon Oct 4 16:00:03 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536194
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, Fabiano Rosas
Subject: [PATCH v3 06/52] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit
Date: Tue, 5 Oct 2021 02:00:03 +1000
Message-Id: <20211004160049.1338837-7-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

mftb is serialising (dispatch next-to-complete), so it is heavyweight
for an mfspr. Avoid reading it multiple times in the entry or exit
paths. A small number of cycles of delay to timers is tolerable.

Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c          | 4 ++--
 arch/powerpc/kvm/book3s_hv_p9_entry.c | 5 +++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 30d400bf161b..e4482bf546ed 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3927,7 +3927,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	 *
 	 * XXX: Another day's problem.
 	 */
-	mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb());
+	mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
 	if (kvmhv_on_pseries()) {
 		/*
@@ -4050,7 +4050,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->in_guest = 0;
 	next_timer = timer_get_next_tb();
-	set_dec(next_timer - mftb());
+	set_dec(next_timer - tb);
 	/* We may have raced with new irq work */
 	if (test_irq_work_pending())
 		set_dec(1);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 0ff9ddb5e7ca..bd8cf0a65ce8 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -203,7 +203,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 	unsigned long host_dawr1;
 	unsigned long host_dawrx1;
-	hdec = time_limit - mftb();
+	tb = mftb();
+	hdec = time_limit - tb;
 	if (hdec < 0)
 		return BOOK3S_INTERRUPT_HV_DECREMENTER;
@@ -215,7 +216,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 	vcpu->arch.ceded = 0;
 	if (vc->tb_offset) {
-		u64 new_tb = mftb() + vc->tb_offset;
+		u64 new_tb = tb + vc->tb_offset;
 		mtspr(SPRN_TBU40, new_tb);
 		tb = mftb();
 		if ((tb & 0xffffff) < (new_tb & 0xffffff))
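The pattern the patch applies is simply to read the timebase once per
path and reuse the value; a rough sketch (names are illustrative, not
the kernel's):

	static void entry_timing_sketch(struct kvm_vcpu *vcpu, u64 time_limit)
	{
		u64 tb = mftb();		/* one serialising read per entry */
		s64 hdec = time_limit - tb;

		if (hdec < 0)
			return;			/* time budget already exhausted */

		/* Reuse tb rather than issuing further mftb reads. */
		mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
	}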
From patchwork Mon Oct 4 16:00:04 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536195
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Subject: [PATCH v3 07/52] powerpc/time: add API for KVM to re-arm the host timer/decrementer
Date: Tue, 5 Oct 2021 02:00:04 +1000
Message-Id: <20211004160049.1338837-8-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

Rather than have KVM look up the host timer and fiddle with the
irq-work internal details, have the powerpc/time.c code provide a
function for KVM to re-arm the Linux timer code when exiting a guest.

This implementation has one improvement over the existing code: if a
timer has already expired, the decrementer interrupt is marked as
soft-pending rather than setting DEC to a -ve value, which tended to
cause host timers to take two interrupts (first the hdec to exit the
guest, then the immediate dec).

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/time.h | 16 +++-------
 arch/powerpc/kernel/time.c      | 52 +++++++++++++++++++++++++++------
 arch/powerpc/kvm/book3s_hv.c    |  7 ++---
 3 files changed, 49 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 69b6be617772..924b2157882f 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low,
 extern void secondary_cpu_time_init(void);
 extern void __init time_init(void);
-#ifdef CONFIG_PPC64
-static inline unsigned long test_irq_work_pending(void)
-{
-	unsigned long x;
-
-	asm volatile("lbz %0,%1(13)"
-		: "=r" (x)
-		: "i" (offsetof(struct paca_struct, irq_work_pending)));
-	return x;
-}
-#endif
-
 DECLARE_PER_CPU(u64, decrementers_next_tb);
 static inline u64 timer_get_next_tb(void)
@@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void)
 	return __this_cpu_read(decrementers_next_tb);
 }
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+void timer_rearm_host_dec(u64 now);
+#endif
+
 /* Convert timebase ticks to nanoseconds */
 unsigned long long tb_to_ns(unsigned long long tb_ticks);
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 6ce40d2ac201..2a6c118a43fb 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -498,6 +498,16 @@ EXPORT_SYMBOL(profile_pc);
  * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
  */
 #ifdef CONFIG_PPC64
+static inline unsigned long test_irq_work_pending(void)
+{
+	unsigned long x;
+
+	asm volatile("lbz %0,%1(13)"
+		: "=r" (x)
+		: "i" (offsetof(struct paca_struct, irq_work_pending)));
+	return x;
+}
+
 static inline void set_irq_work_pending_flag(void)
 {
 	asm volatile("stb %0,%1(13)" : :
@@ -541,13 +551,44 @@ void arch_irq_work_raise(void)
 	preempt_enable();
 }
+static void set_dec_or_work(u64 val)
+{
+	set_dec(val);
+	/* We may have raced with new irq work */
+	if (unlikely(test_irq_work_pending()))
+		set_dec(1);
+}
+
 #else  /* CONFIG_IRQ_WORK */
 #define test_irq_work_pending()	0
 #define clear_irq_work_pending()
+static void set_dec_or_work(u64 val)
+{
+	set_dec(val);
+}
 #endif /* CONFIG_IRQ_WORK */
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+void timer_rearm_host_dec(u64 now)
+{
+	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+
+	WARN_ON_ONCE(!arch_irqs_disabled());
+	WARN_ON_ONCE(mfmsr() & MSR_EE);
+
+	if (now >= *next_tb) {
+		local_paca->irq_happened |= PACA_IRQ_DEC;
+	} else {
+		now = *next_tb - now;
+		if (now <= decrementer_max)
+			set_dec_or_work(now);
+	}
+}
+EXPORT_SYMBOL_GPL(timer_rearm_host_dec);
+#endif
+
 /*
  * timer_interrupt - gets called when the decrementer overflows,
  * with interrupts disabled.
@@ -608,10 +649,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
 	} else {
 		now = *next_tb - now;
 		if (now <= decrementer_max)
-			set_dec(now);
-		/* We may have raced with new irq work */
-		if (test_irq_work_pending())
-			set_dec(1);
+			set_dec_or_work(now);
 		__this_cpu_inc(irq_stat.timer_irqs_others);
 	}
@@ -853,11 +891,7 @@ static int decrementer_set_next_event(unsigned long evt,
 				      struct clock_event_device *dev)
 {
 	__this_cpu_write(decrementers_next_tb, get_tb() + evt);
-	set_dec(evt);
-
-	/* We may have raced with new irq work */
-	if (test_irq_work_pending())
-		set_dec(1);
+	set_dec_or_work(evt);
 	return 0;
 }
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e4482bf546ed..e83c7aa7dbba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4049,11 +4049,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;
-	next_timer = timer_get_next_tb();
-	set_dec(next_timer - tb);
-	/* We may have raced with new irq work */
-	if (test_irq_work_pending())
-		set_dec(1);
+	timer_rearm_host_dec(tb);
+
 	mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
 	kvmhv_load_host_pmu();
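The calling convention for the new helper is narrow: interrupts must
still be hard-disabled and the caller passes a timebase value it has
already read. A hedged sketch of the exit-path usage, assuming that
context (the wrapper name is illustrative):

	static void guest_exit_rearm_sketch(void)
	{
		u64 tb = mftb();	/* interrupts are still hard-disabled here */

		/*
		 * timer_rearm_host_dec() either marks PACA_IRQ_DEC soft-pending
		 * (the timer already expired; it is replayed when EE is restored)
		 * or programs DEC for the remaining time via set_dec_or_work().
		 */
		timer_rearm_host_dec(tb);
	}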
From patchwork Mon Oct 4 16:00:05 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536197
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, Fabiano Rosas
Subject: [PATCH v3 08/52] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests
Date: Tue, 5 Oct 2021 02:00:05 +1000
Message-Id: <20211004160049.1338837-9-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

HV interrupts may be taken with the MMU enabled when radix guests are
running. Enable LPCR[HAIL] on ISA v3.1 processors for radix guests.

Make this depend on the host LPCR[HAIL] being enabled. Currently that
is always enabled, but having this test means any issue that might
require LPCR[HAIL] to be disabled in the host will not have to be
duplicated in KVM.

This optimisation takes 1380 cycles off a NULL hcall entry+exit micro
benchmark on a POWER10.
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e83c7aa7dbba..463534402107 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5047,6 +5047,8 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
  */
 int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
 {
+	unsigned long lpcr, lpcr_mask;
+
 	if (nesting_enabled(kvm))
 		kvmhv_release_all_nested(kvm);
 	kvmppc_rmap_reset(kvm);
@@ -5056,8 +5058,13 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
 	kvm->arch.radix = 0;
 	spin_unlock(&kvm->mmu_lock);
 	kvmppc_free_radix(kvm);
-	kvmppc_update_lpcr(kvm, LPCR_VPM1,
-			   LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+
+	lpcr = LPCR_VPM1;
+	lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+	if (cpu_has_feature(CPU_FTR_ARCH_31))
+		lpcr_mask |= LPCR_HAIL;
+	kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
+
 	return 0;
 }
@@ -5067,6 +5074,7 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
  */
 int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
 {
+	unsigned long lpcr, lpcr_mask;
 	int err;
 	err = kvmppc_init_vm_radix(kvm);
@@ -5078,8 +5086,17 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
 	kvm->arch.radix = 1;
 	spin_unlock(&kvm->mmu_lock);
 	kvmppc_free_hpt(&kvm->arch.hpt);
-	kvmppc_update_lpcr(kvm, LPCR_UPRT | LPCR_GTSE | LPCR_HR,
-			   LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+
+	lpcr = LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+	lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+		lpcr_mask |= LPCR_HAIL;
+		if (cpu_has_feature(CPU_FTR_HVMODE) &&
+				(kvm->arch.host_lpcr & LPCR_HAIL))
+			lpcr |= LPCR_HAIL;
+	}
+	kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
+
 	return 0;
 }
@@ -5243,6 +5260,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
 		kvm->arch.mmu_ready = 1;
 		lpcr &= ~LPCR_VPM1;
 		lpcr |= LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+		if (cpu_has_feature(CPU_FTR_HVMODE) &&
+		    cpu_has_feature(CPU_FTR_ARCH_31) &&
+		    (kvm->arch.host_lpcr & LPCR_HAIL))
+			lpcr |= LPCR_HAIL;
 		ret = kvmppc_init_vm_radix(kvm);
 		if (ret) {
 			kvmppc_free_lpid(kvm->arch.lpid);
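The LPCR update is expressed as a (value, mask) pair so only the named
bits change; HAIL is added to the mask on ISA v3.1 and set only when
the host itself runs with HAIL. A condensed sketch of the radix case,
restating the logic from the diff above (illustrative function name):

	static void radix_lpcr_update_sketch(struct kvm *kvm)
	{
		unsigned long lpcr = LPCR_UPRT | LPCR_GTSE | LPCR_HR;
		unsigned long lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR;

		if (cpu_has_feature(CPU_FTR_ARCH_31)) {
			lpcr_mask |= LPCR_HAIL;		/* bit is managed on ISA v3.1 */
			if (cpu_has_feature(CPU_FTR_HVMODE) &&
			    (kvm->arch.host_lpcr & LPCR_HAIL))
				lpcr |= LPCR_HAIL;	/* only if the host uses it */
		}
		kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);	/* touch only masked bits */
	}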
From patchwork Mon Oct 4 16:00:06 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1536198
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin, Fabiano Rosas
Subject: [PATCH v3 09/52] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime
Date: Tue, 5 Oct 2021 02:00:06 +1000
Message-Id: <20211004160049.1338837-10-npiggin@gmail.com>
In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com>
References: <20211004160049.1338837-1-npiggin@gmail.com>

This register controls supervisor SPR modifications, and as such is
only relevant for KVM. KVM always sets AMOR to ~0 on guest entry, and
never restores it coming back out to the host, so it can be kept
constant and avoid the mtSPR in KVM guest entry.
Reviewed-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kernel/cpu_setup_power.c    |  8 ++++++++
 arch/powerpc/kernel/dt_cpu_ftrs.c        |  2 ++
 arch/powerpc/kvm/book3s_hv_p9_entry.c    |  2 --
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  |  2 --
 arch/powerpc/mm/book3s64/radix_pgtable.c | 15 ---------------
 arch/powerpc/platforms/powernv/idle.c    |  8 +++-----
 6 files changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
index 3cca88ee96d7..a29dc8326622 100644
--- a/arch/powerpc/kernel/cpu_setup_power.c
+++ b/arch/powerpc/kernel/cpu_setup_power.c
@@ -137,6 +137,7 @@ void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t)
 		return;
 	mtspr(SPRN_LPID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
 }
@@ -150,6 +151,7 @@ void __restore_cpu_power7(void)
 		return;
 	mtspr(SPRN_LPID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
 }
@@ -164,6 +166,7 @@ void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t)
 		return;
 	mtspr(SPRN_LPID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
 	init_HFSCR();
@@ -184,6 +187,7 @@ void __restore_cpu_power8(void)
 		return;
 	mtspr(SPRN_LPID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
 	init_HFSCR();
@@ -202,6 +206,7 @@ void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t)
 	mtspr(SPRN_PSSCR, 0);
 	mtspr(SPRN_LPID, 0);
 	mtspr(SPRN_PID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
 			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -223,6 +228,7 @@ void __restore_cpu_power9(void)
 	mtspr(SPRN_PSSCR, 0);
 	mtspr(SPRN_LPID, 0);
 	mtspr(SPRN_PID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
 			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -242,6 +248,7 @@ void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t)
 	mtspr(SPRN_PSSCR, 0);
 	mtspr(SPRN_LPID, 0);
 	mtspr(SPRN_PID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
 			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -264,6 +271,7 @@ void __restore_cpu_power10(void)
 	mtspr(SPRN_PSSCR, 0);
 	mtspr(SPRN_LPID, 0);
 	mtspr(SPRN_PID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_PCR, PCR_MASK);
 	init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
 			 LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 358aee7c2d79..0a6b36b4bda8 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -80,6 +80,7 @@ static void __restore_cpu_cpufeatures(void)
 	mtspr(SPRN_LPCR, system_registers.lpcr);
 	if (hv_mode) {
 		mtspr(SPRN_LPID, 0);
+		mtspr(SPRN_AMOR, ~0);
 		mtspr(SPRN_HFSCR, system_registers.hfscr);
 		mtspr(SPRN_PCR, system_registers.pcr);
 	}
@@ -216,6 +217,7 @@ static int __init feat_enable_hv(struct dt_cpu_feature *f)
 	}
 	mtspr(SPRN_LPID, 0);
+	mtspr(SPRN_AMOR, ~0);
 	lpcr = mfspr(SPRN_LPCR);
 	lpcr &= ~LPCR_LPES0; /* HV external interrupts */
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index bd8cf0a65ce8..a7f63082b4e3 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -286,8 +286,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 	mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2);
 	mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3);
-	mtspr(SPRN_AMOR, ~0UL);
-
 	local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9;
 	/*
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 90484425a1e6..a5a2ef1c70ec 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -772,10 +772,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	/* Restore AMR and UAMOR, set AMOR to all 1s */
 	ld	r5,VCPU_AMR(r4)
 	ld	r6,VCPU_UAMOR(r4)
-	li	r7,-1
 	mtspr	SPRN_AMR,r5
 	mtspr	SPRN_UAMOR,r6
-	mtspr	SPRN_AMOR,r7
 	/* Restore state of CTRL run bit; assume 1 on entry */
 	lwz	r5,VCPU_CTRL(r4)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index ae20add7954a..1e48c55e3e5d 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -572,18 +572,6 @@ void __init radix__early_init_devtree(void)
 	return;
 }
-static void radix_init_amor(void)
-{
-	/*
-	 * In HV mode, we init AMOR (Authority Mask Override Register) so that
-	 * the hypervisor and guest can setup IAMR (Instruction Authority Mask
-	 * Register), enable key 0 and set it to 1.
-	 *
-	 * AMOR = 0b1100 .... 0000 (Mask for key 0 is 11)
-	 */
-	mtspr(SPRN_AMOR, (3ul << 62));
-}
-
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
@@ -644,7 +632,6 @@ void __init radix__early_init_mmu(void)
 		lpcr = mfspr(SPRN_LPCR);
 		mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
 		radix_init_partition_table();
-		radix_init_amor();
 	} else {
 		radix_init_pseries();
 	}
@@ -668,8 +655,6 @@ void radix__early_init_mmu_secondary(void)
 		set_ptcr_when_no_uv(__pa(partition_tb) |
 				    (PATB_SIZE_SHIFT - 12));
-
-		radix_init_amor();
 	}
 	radix__switch_mmu_context(NULL, &init_mm);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 86e787502e42..3bc84e2fe064 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -306,8 +306,8 @@ struct p7_sprs {
 	/* per thread SPRs that get lost in shallow states */
 	u64 amr;
 	u64 iamr;
-	u64 amor;
 	u64 uamor;
+	/* amor is restored to constant ~0 */
 };
 static unsigned long power7_idle_insn(unsigned long type)
@@ -378,7 +378,6 @@ static unsigned long power7_idle_insn(unsigned long type)
 	if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
 		sprs.amr = mfspr(SPRN_AMR);
 		sprs.iamr = mfspr(SPRN_IAMR);
-		sprs.amor = mfspr(SPRN_AMOR);
 		sprs.uamor = mfspr(SPRN_UAMOR);
 	}
@@ -397,7 +396,7 @@ static unsigned long power7_idle_insn(unsigned long type)
 		 */
 		mtspr(SPRN_AMR, sprs.amr);
 		mtspr(SPRN_IAMR, sprs.iamr);
-		mtspr(SPRN_AMOR, sprs.amor);
+		mtspr(SPRN_AMOR, ~0);
 		mtspr(SPRN_UAMOR, sprs.uamor);
 	}
 }
@@ -686,7 +685,6 @@ static unsigned long power9_idle_stop(unsigned long psscr)
 	sprs.amr = mfspr(SPRN_AMR);
 	sprs.iamr = mfspr(SPRN_IAMR);
-	sprs.amor = mfspr(SPRN_AMOR);
 	sprs.uamor = mfspr(SPRN_UAMOR);
 	srr1 = isa300_idle_stop_mayloss(psscr);		/* go idle */
@@ -707,7 +705,7 @@ static unsigned long power9_idle_stop(unsigned long psscr)
 	 */
 	mtspr(SPRN_AMR, sprs.amr);
 	mtspr(SPRN_IAMR, sprs.iamr);
-	mtspr(SPRN_AMOR, sprs.amor);
+	mtspr(SPRN_AMOR, ~0);
 	mtspr(SPRN_UAMOR, sprs.uamor);
 	/*
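A short sketch of the convention this patch establishes (illustrative
function, not part of the patch):

	/* Sketch: AMOR becomes a boot-time constant after this change. */
	static void amor_init_sketch(void)
	{
		/*
		 * ~0 places no restriction on AMR/IAMR updates by
		 * lower-privileged modes, which is what both the host kernel
		 * and KVM guests expect.  Every CPU setup and idle-wakeup path
		 * writes this same constant, so nothing needs to read or save
		 * AMOR any more.
		 */
		mtspr(SPRN_AMOR, ~0);
	}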
Nicholas Piggin X-Patchwork-Id: 1536199 Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au.
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:25 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 10/52] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Date: Tue, 5 Oct 2021 02:00:07 +1000 Message-Id: <20211004160049.1338837-11-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Provide a config option that controls the workaround added by commit 63279eeb7f93 ("KVM: PPC: Book3S HV: Always save guest pmu for guest capable of nesting"). The option defaults to y for now, but is expected to go away within a few releases. Nested capable guests running with the earlier commit ("KVM: PPC: Book3S HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU in-use status of their guests, which means the parent does not need to unconditionally save the PMU for nested capable guests. After this latest round of performance optimisations, this option costs about 540 cycles or 10% entry/exit performance on a POWER9 nested-capable guest. Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/Kconfig | 15 +++++++++++++++ arch/powerpc/kvm/book3s_hv.c | 10 ++++++++-- 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig index ff581d70f20c..1e7aae522be8 100644 --- a/arch/powerpc/kvm/Kconfig +++ b/arch/powerpc/kvm/Kconfig @@ -130,6 +130,21 @@ config KVM_BOOK3S_HV_EXIT_TIMING If unsure, say N. +config KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND + bool "Nested L0 host workaround for L1 KVM host PMU handling bug" if EXPERT + depends on KVM_BOOK3S_HV_POSSIBLE + default !EXPERT + help + Old nested HV capable Linux guests have a bug where they don't + reflect the PMU in-use status of their L2 guest to the L0 host + while the L2 PMU registers are live. This can result in loss + of L2 PMU register state, causing perf to not work correctly in + L2 guests. + + Selecting this option for the L0 host implements a workaround for + those buggy L1s which saves the L2 state, at the cost of performance + in all nested-capable guest entry/exit. + config KVM_BOOKE_HV bool diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 463534402107..945fc9a96439 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4034,8 +4034,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; save_pmu = lp->pmcregs_in_use; } - /* Must save pmu if this guest is capable of running nested guests */ - save_pmu |= nesting_enabled(vcpu->kvm); + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2.
+ */ + save_pmu |= nesting_enabled(vcpu->kvm); + } kvmhv_save_guest_pmu(vcpu, save_pmu); #ifdef CONFIG_PPC_PSERIES From patchwork Mon Oct 4 16:00:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536200 Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au.
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:28 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Madhavan Srinivasan , Athira Jajeev Subject: [PATCH v3 11/52] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Date: Tue, 5 Oct 2021 02:00:08 +1000 Message-Id: <20211004160049.1338837-12-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org KVM PMU management code looks for particular frozen/disabled bits in the PMU registers so it knows whether it must clear them when coming out of a guest or not. Setting this up helps KVM make these optimisations without getting confused. Longer term the better approach might be to move guest/host PMU switching to the perf subsystem. Cc: Madhavan Srinivasan Reviewed-by: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/cpu_setup_power.c | 4 ++-- arch/powerpc/kernel/dt_cpu_ftrs.c | 6 +++--- arch/powerpc/kvm/book3s_hv.c | 5 +++++ 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c index a29dc8326622..3dc61e203f37 100644 --- a/arch/powerpc/kernel/cpu_setup_power.c +++ b/arch/powerpc/kernel/cpu_setup_power.c @@ -109,7 +109,7 @@ static void init_PMU_HV_ISA207(void) static void init_PMU(void) { mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -123,7 +123,7 @@ static void init_PMU_ISA31(void) { mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } /* diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index 0a6b36b4bda8..06a089fbeaa7 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -353,7 +353,7 @@ static void init_pmu_power8(void) } mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); mtspr(SPRN_MMCRS, 0); @@ -392,7 +392,7 @@ static void init_pmu_power9(void) mtspr(SPRN_MMCRC, 0); mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -428,7 +428,7 @@ static void init_pmu_power10(void) mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 945fc9a96439..b069209b49b2 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2715,6 +2715,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) #endif #endif vcpu->arch.mmcr[0] = MMCR0_FC; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[0] |= MMCR0_PMCCEXT; + vcpu->arch.mmcra = MMCRA_BHRB_DISABLE; + } + vcpu->arch.ctrl = CTRL_RUNLATCH; /* default to host PVR, since we can't spoof it */ kvmppc_set_pvr_hv(vcpu, mfspr(SPRN_PVR)); From patchwork Mon Oct 4 16:00:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536201 Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au.
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:30 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Madhavan Srinivasan , Athira Jajeev Subject: [PATCH v3 12/52] powerpc/64s: Implement PMU override command line option Date: Tue, 5 Oct 2021 02:00:09 +1000 Message-Id: <20211004160049.1338837-13-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org It can be useful in simulators (with very constrained environments) to allow some PMCs to run from boot so they can be sampled directly by a test harness, rather than having to run perf. A previous change freezes counters at boot by default, so provide a boot time option to un-freeze (plus a bit more flexibility). Cc: Madhavan Srinivasan Reviewed-by: Athira Jajeev Signed-off-by: Nicholas Piggin --- .../admin-guide/kernel-parameters.txt | 8 +++++ arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++ 2 files changed, 43 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 91ba391f9b32..02a80c02a713 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4120,6 +4120,14 @@ Override pmtimer IOPort with a hex value. e.g. pmtmr=0x508 + pmu_override= [PPC] Override the PMU. + This option takes over the PMU facility, so it is no + longer usable by perf. Setting this option starts the + PMU counters by setting MMCR0 to 0 (the FC bit is + cleared). If a number is given, then MMCR1 is set to + that number, otherwise (e.g., 'pmu_override=on'), MMCR1 + remains 0. + pm_debug_messages [SUSPEND,KNL] Enable suspend/resume debug messages during boot up. 
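A brief usage sketch (not part of the patch; the MMCR1 value is an arbitrary example, and the behaviour described follows from the core-book3s.c change below): on a bare-metal (HV mode) kernel, booting with

	pmu_override=0x1e

would disable perf's use of the PMU, program MMCR1 with 0x1e on each CPU, and clear MMCR0[FC] so the counters run from early boot, whereas plain 'pmu_override=on' only unfreezes the counters and leaves MMCR1 at 0.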
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 73e62e9b179b..8d4ff93462fb 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -2419,8 +2419,24 @@ int register_power_pmu(struct power_pmu *pmu) } #ifdef CONFIG_PPC64 +static bool pmu_override = false; +static unsigned long pmu_override_val; +static void do_pmu_override(void *data) +{ + ppc_set_pmu_inuse(1); + if (pmu_override_val) + mtspr(SPRN_MMCR1, pmu_override_val); + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC); +} + static int __init init_ppc64_pmu(void) { + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) { + pr_warn("disabling perf due to pmu_override= command line option.\n"); + on_each_cpu(do_pmu_override, NULL, 1); + return 0; + } + /* run through all the pmu drivers one at a time */ if (!init_power5_pmu()) return 0; @@ -2442,4 +2458,23 @@ static int __init init_ppc64_pmu(void) return init_generic_compat_pmu(); } early_initcall(init_ppc64_pmu); + +static int __init pmu_setup(char *str) +{ + unsigned long val; + + if (!early_cpu_has_feature(CPU_FTR_HVMODE)) + return 0; + + pmu_override = true; + + if (kstrtoul(str, 0, &val)) + val = 0; + + pmu_override_val = val; + + return 1; +} +__setup("pmu_override=", pmu_setup); + #endif From patchwork Mon Oct 4 16:00:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536202 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=qK56cSp0; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSb6LqDz9t5G for ; Tue, 5 Oct 2021 03:01:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236237AbhJDQDY (ORCPT ); Mon, 4 Oct 2021 12:03:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236240AbhJDQDX (ORCPT ); Mon, 4 Oct 2021 12:03:23 -0400 Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com [IPv6:2607:f8b0:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8ED6DC061749 for ; Mon, 4 Oct 2021 09:01:34 -0700 (PDT) Received: by mail-pf1-x430.google.com with SMTP id p1so8842461pfh.8 for ; Mon, 04 Oct 2021 09:01:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=hd4O7Rf6OIqE/DcaMPT47l5IcsSJY4MdZ1sUKCDHWDg=; b=qK56cSp0fbpi2tyvLZ2zlSNM5Vn5UGsHDhgpMq376OUT34fV+w44bMjWzMzrDKh9Qh ILk2KaGP4M+jpHzytrkMXUEq8a7yba70eDckOI7Rx+1ojRam6CserSDfTs4id3h3N0tY moC+s6hfJsJXLWyx8W7yQ5+uRNq2ngGPkfnwMDrXmRZW/qYFDccyjKH6wCOwN4WGoiii uLbOSOwG11yDiuOMxY1f4lzuH2YM7rcgA4kpMdAG4kRz/gnqCMpLz2vy5qu6utPWVyKN 6ncAeFIeJV1DmNxJbz5zQWGUMaP10KEiF+zgUgyzD7b/e+21b29YhRvF4fSRsOZuFZK/ txPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=hd4O7Rf6OIqE/DcaMPT47l5IcsSJY4MdZ1sUKCDHWDg=; b=5Cum7L0t4rHiSvjbDs2taXILlxg4amqcz4WsX4bOWGBEgSvVIVPTt+e6NgjwyIG1O7 OVJTEuUl4BfXmW7lv0KXroYlMOnDQQ3+7et/oTekQPR2EOVpbqLMUbpxzpaEijPdFqwS uY1BE8PmxUt586iNbGIJD/sWZ/kCcvGVdSYro3oIXHRFbFZdI+A2Sj9Bcd96s8wystyX nIbrZ23ZQHn5SBzv3Xj1t3Wcra0GgAIk7z7gelIyfaP/3WnLqAqd01UXmaO8nlxdzm// 1gmghqZIezgeRY+NXqMVVfXeU2kqs9n9/hO5LCGW3HGfJslimSTxS0g6tUN9JmhR9X7W AfMA== X-Gm-Message-State: AOAM5322l4H0NdfLLiedjzNoH+agFInNr9y4VnUt8JE9K9pdRSMgp2Vy iOH3b1NYl5Pvhf/acSthc3HC6ezfb34= X-Google-Smtp-Source: ABdhPJynOof1MVVWod6vEKlQ2EAEhGBlp8memDE/20+x3SI0KmZcVDr/9CFUIk95V+Ik4pNzEkvukw== X-Received: by 2002:a63:b505:: with SMTP id y5mr11546130pge.91.1633363293798; Mon, 04 Oct 2021 09:01:33 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:33 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Madhavan Srinivasan , Athira Jajeev Subject: [PATCH v3 13/52] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Date: Tue, 5 Oct 2021 02:00:10 +1000 Message-Id: <20211004160049.1338837-14-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Implement the P9 path PMU save/restore code in C, and remove the POWER9/10 code from the P7/8 path assembly. Cc: Madhavan Srinivasan Reviewed-by: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/asm-prototypes.h | 5 - arch/powerpc/kvm/book3s_hv.c | 221 +++++++++++++++++++--- arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +---- 4 files changed, 208 insertions(+), 74 deletions(-) diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index 222823861a67..41b8a1e1144a 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -141,11 +141,6 @@ static inline void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 msr, bool preserve_nv) { } #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -void kvmhv_save_host_pmu(void); -void kvmhv_load_host_pmu(void); -void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use); -void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu); - void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu); long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index b069209b49b2..211184544599 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3762,6 +3762,196 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } +/* + * Privileged (non-hypervisor) host registers to save. 
+ */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. + */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } +} + +static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ +} + +static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This is option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2. 
+ */ + save_pmu |= nesting_enabled(vcpu->kvm); + } + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + } else { + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + } +} + +static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} + static void load_spr_state(struct kvm_vcpu *vcpu) { mtspr(SPRN_DSCR, vcpu->arch.dscr); @@ -3806,17 +3996,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.dscr = mfspr(SPRN_DSCR); } -/* - * Privileged (non-hypervisor) host registers to save. 
- */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; -}; - static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { host_os_sprs->dscr = mfspr(SPRN_DSCR); @@ -3866,7 +4045,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; - int trap, save_pmu; + int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -3879,7 +4058,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */ + save_p9_host_pmu(&host_os_sprs); kvmppc_subcore_enter_guest(); @@ -3909,7 +4088,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, barrier(); } #endif - kvmhv_load_guest_pmu(vcpu); + load_p9_guest_pmu(vcpu); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4031,24 +4210,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - save_pmu = 1; if (vcpu->arch.vpa.pinned_addr) { struct lppaca *lp = vcpu->arch.vpa.pinned_addr; u32 yield_count = be32_to_cpu(lp->yield_count) + 1; lp->yield_count = cpu_to_be32(yield_count); vcpu->arch.vpa.dirty = 1; - save_pmu = lp->pmcregs_in_use; - } - if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { - /* - * Save pmu if this guest is capable of running nested guests. - * This is option is for old L1s that do not set their - * lppaca->pmcregs_in_use properly when entering their L2. - */ - save_pmu |= nesting_enabled(vcpu->kvm); } - kvmhv_save_guest_pmu(vcpu, save_pmu); + save_p9_guest_pmu(vcpu); #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { barrier(); @@ -4064,7 +4233,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmhv_load_host_pmu(); + load_p9_host_pmu(&host_os_sprs); kvmppc_subcore_exit_guest(); diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 4444f83cb133..59d89e4b154a 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -104,7 +104,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtlr r0 blr -_GLOBAL(kvmhv_save_host_pmu) +/* + * void kvmhv_save_host_pmu(void) + */ +kvmhv_save_host_pmu: BEGIN_FTR_SECTION /* Work around P8 PMAE bug */ li r3, -1 @@ -138,14 +141,6 @@ BEGIN_FTR_SECTION std r8, HSTATE_MMCR2(r13) std r9, HSTATE_SIER(r13) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, HSTATE_MMCR3(r13) - std r6, HSTATE_SIER2(r13) - std r7, HSTATE_SIER3(r13) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mfspr r3, SPRN_PMC1 mfspr r5, SPRN_PMC2 mfspr r6, SPRN_PMC3 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index a5a2ef1c70ec..7fa0df632f89 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -2770,10 +2770,11 @@ kvmppc_msr_interrupt: blr /* + * void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu) + * * Load up guest PMU state. R3 points to the vcpu struct. 
*/ -_GLOBAL(kvmhv_load_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_guest_pmu) +kvmhv_load_guest_pmu: mr r4, r3 mflr r0 li r3, 1 @@ -2807,27 +2808,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG) mtspr SPRN_MMCRA, r6 mtspr SPRN_SIAR, r7 mtspr SPRN_SDAR, r8 -BEGIN_FTR_SECTION - ld r5, VCPU_MMCR + 24(r4) - ld r6, VCPU_SIER + 8(r4) - ld r7, VCPU_SIER + 16(r4) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION ld r5, VCPU_MMCR + 16(r4) ld r6, VCPU_SIER(r4) mtspr SPRN_MMCR2, r5 mtspr SPRN_SIER, r6 -BEGIN_FTR_SECTION_NESTED(96) lwz r7, VCPU_PMC + 24(r4) lwz r8, VCPU_PMC + 28(r4) ld r9, VCPU_MMCRS(r4) mtspr SPRN_SPMC1, r7 mtspr SPRN_SPMC2, r8 mtspr SPRN_MMCRS, r9 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) mtspr SPRN_MMCR0, r3 isync @@ -2835,10 +2826,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) blr /* + * void kvmhv_load_host_pmu(void) + * * Reload host PMU state saved in the PACA by kvmhv_save_host_pmu. */ -_GLOBAL(kvmhv_load_host_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_host_pmu) +kvmhv_load_host_pmu: mflr r0 lbz r4, PACA_PMCINUSE(r13) /* is the host using the PMU? */ cmpwi r4, 0 @@ -2876,25 +2868,18 @@ BEGIN_FTR_SECTION mtspr SPRN_MMCR2, r8 mtspr SPRN_SIER, r9 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - ld r5, HSTATE_MMCR3(r13) - ld r6, HSTATE_SIER2(r13) - ld r7, HSTATE_SIER3(r13) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mtspr SPRN_MMCR0, r3 isync mtlr r0 23: blr /* + * void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use) + * * Save guest PMU state into the vcpu struct. * r3 = vcpu, r4 = full save flag (PMU in use flag set in VPA) */ -_GLOBAL(kvmhv_save_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_save_guest_pmu) +kvmhv_save_guest_pmu: mr r9, r3 mr r8, r4 BEGIN_FTR_SECTION @@ -2943,14 +2928,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) BEGIN_FTR_SECTION std r10, VCPU_MMCR + 16(r9) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, VCPU_MMCR + 24(r9) - std r6, VCPU_SIER + 8(r9) - std r7, VCPU_SIER + 16(r9) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) std r7, VCPU_SIAR(r9) std r8, VCPU_SDAR(r9) mfspr r3, SPRN_PMC1 @@ -2968,7 +2945,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION mfspr r5, SPRN_SIER std r5, VCPU_SIER(r9) -BEGIN_FTR_SECTION_NESTED(96) mfspr r6, SPRN_SPMC1 mfspr r7, SPRN_SPMC2 mfspr r8, SPRN_MMCRS @@ -2977,7 +2953,6 @@ BEGIN_FTR_SECTION_NESTED(96) std r8, VCPU_MMCRS(r9) lis r4, 0x8000 mtspr SPRN_MMCRS, r4 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 22: blr From patchwork Mon Oct 4 16:00:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536203 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=ier9SG47; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP 
id 4HNQSf3crgz9t5G for ; Tue, 5 Oct 2021 03:01:38 +1100 (AEDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:36 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Athira Jajeev Subject: [PATCH v3 14/52] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Date: Tue, 5 Oct 2021 02:00:11 +1000 Message-Id: <20211004160049.1338837-15-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than guest/host save/restore functions, implement context switch functions that take care of details like the VPA update for nested. The reason to split these kinds of helpers into explicit save/load functions is mainly to schedule SPR access nicely, but the PMU is a special case where the load requires mtSPR (to stop counters) and other difficulties, so there's less possibility to schedule those nicely.
The SPR accesses also have side-effects if the PMU is running, and in later changes we keep the host PMU running as long as possible so this code can be better profiled, which also complicates scheduling. Reviewed-by: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++------------------- 1 file changed, 28 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 211184544599..29a8c770c4a6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3817,7 +3817,8 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) isync(); } -static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { if (ppc_get_pmu_inuse()) { /* @@ -3851,10 +3852,21 @@ static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) host_os_sprs->sier3 = mfspr(SPRN_SIER3); } } -} -static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + if (vcpu->arch.vpa.pinned_addr) { + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; + } else { + get_lppaca()->pmcregs_in_use = 1; + } + barrier(); + } +#endif + + /* load guest */ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); @@ -3879,7 +3891,8 @@ static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) /* No isync necessary because we're starting counters */ } -static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +static void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { struct lppaca *lp; int save_pmu = 1; @@ -3922,10 +3935,15 @@ static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) } else { freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); } -} -static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + if (ppc_get_pmu_inuse()) { mtspr(SPRN_PMC1, host_os_sprs->pmc1); mtspr(SPRN_PMC2, host_os_sprs->pmc2); @@ -4058,8 +4076,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - save_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -4076,19 +4092,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } - barrier(); - } -#endif - load_p9_guest_pmu(vcpu); + switch_pmu_to_guest(vcpu, &host_os_sprs); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4217,14 +4221,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; } - save_p9_guest_pmu(vcpu); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif + switch_pmu_to_host(vcpu, &host_os_sprs); vc->entry_exit_map = 0x101; vc->in_guest = 0; @@ -4233,8 +4230,6 @@ static int kvmhv_p9_guest_entry(struct 
kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - load_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_exit_guest(); return trap; From patchwork Mon Oct 4 16:00:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536204 Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au.
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:38 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Athira Jajeev Subject: [PATCH v3 15/52] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Date: Tue, 5 Oct 2021 02:00:12 +1000 Message-Id: <20211004160049.1338837-16-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The pmcregs_in_use field in the guest VPA can not be trusted to reflect what the guest is doing with PMU SPRs, so the PMU must always be managed (stopped) when exiting the guest, and SPR values set when entering the guest to ensure it can't cause a covert channel or otherwise cause other guests or the host to misbehave. So prevent guest access to the PMU with HFSCR[PM] if pmcregs_in_use is clear, and avoid the PMU SPR access on every partition switch. Guests that set pmcregs_in_use incorrectly or when first setting it and using the PMU will take a hypervisor facility unavailable interrupt that will bring in the PMU SPRs. Reviewed-by: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 131 ++++++++++++++++++++++++++--------- 1 file changed, 98 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 29a8c770c4a6..6bbd670658b9 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1421,6 +1421,23 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +/* + * If the lppaca had pmcregs_in_use clear when we exited the guest, then + * HFSCR_PM is cleared for next entry. If the guest then tries to access + * the PMU SPRs, we get this facility unavailable interrupt. Putting HFSCR_PM + * back in the guest HFSCR will cause the next entry to load the PMU SPRs and + * allow the guest access to continue. + */ +static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_PM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_PM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1702,16 +1719,22 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, * to emulate. * Otherwise, we just generate a program interrupt to the guest. */ - case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: { + u64 cause = vcpu->arch.hfscr >> 56; + r = EMULATE_FAIL; - if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) && - cpu_has_feature(CPU_FTR_ARCH_300)) - r = kvmppc_emulate_doorbell_instr(vcpu); + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + if (cause == FSCR_MSGP_LG) + r = kvmppc_emulate_doorbell_instr(vcpu); + if (cause == FSCR_PM_LG) + r = kvmppc_pmu_unavailable(vcpu); + } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); r = RESUME_GUEST; } break; + } case BOOK3S_INTERRUPT_HV_RM_HARD: r = RESUME_PASSTHROUGH; @@ -2750,6 +2773,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; + /* + * PM is demand-faulted so start with it clear. 
+ */ + vcpu->arch.hfscr &= ~HFSCR_PM; + kvmppc_mmu_book3s_hv_init(vcpu); vcpu->arch.state = KVMPPC_VCPU_NOTREADY; @@ -3820,6 +3848,14 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ if (ppc_get_pmu_inuse()) { /* * It might be better to put PMU handling (at least for the @@ -3854,41 +3890,47 @@ static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, } #ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ if (kvmhv_on_pseries()) { barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } + get_lppaca()->pmcregs_in_use = load_pmu; barrier(); } #endif - /* load guest */ - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); + /* + * Load guest. If the VPA said the PMCs are not in use but the guest + * tried to access them anyway, HFSCR[PM] will be set by the HFAC + * fault so we can make forward progress. + */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } } static void switch_pmu_to_host(struct kvm_vcpu *vcpu, @@ -3932,9 +3974,32 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, vcpu->arch.sier[1] = mfspr(SPRN_SIER2); vcpu->arch.sier[2] = mfspr(SPRN_SIER3); } - } else { + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - } + + /* + * Demand-fault PMU register access in the guest. 
+ * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit to avoid side-channels. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time. + */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { From patchwork Mon Oct 4 16:00:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536205 X-Received: by 2002:a63:af4a:: with SMTP id
s10mr11525585pgo.469.1633363301244; Mon, 04 Oct 2021 09:01:41 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:41 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Fabiano Rosas Subject: [PATCH v3 16/52] KVM: PPC: Book3S HV P9: Factor out yield_count increment Date: Tue, 5 Oct 2021 02:00:13 +1000 Message-Id: <20211004160049.1338837-17-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Factor duplicated code into a helper function. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6bbd670658b9..f0ad3fb2eabd 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4118,6 +4118,16 @@ static inline bool hcall_is_xics(unsigned long req) req == H_IPOLL || req == H_XIRR || req == H_XIRR_X; } +static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + if (lp) { + u32 yield_count = be32_to_cpu(lp->yield_count) + 1; + lp->yield_count = cpu_to_be32(yield_count); + vcpu->arch.vpa.dirty = 1; + } +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -4146,12 +4156,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 1; vc->in_guest = 1; - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) @@ -4279,12 +4284,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); switch_pmu_to_host(vcpu, &host_os_sprs); From patchwork Mon Oct 4 16:00:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536206 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=Ntq5DylH; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSn0RRtz9t5G for ; Tue, 5 Oct 2021 03:01:45 +1100 (AEDT) Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S236243AbhJDQDd (ORCPT ); Mon, 4 Oct 2021 12:03:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236129AbhJDQDc (ORCPT ); Mon, 4 Oct 2021 12:03:32 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EE599C061745 for ; Mon, 4 Oct 2021 09:01:43 -0700 (PDT) Received: by mail-pg1-x533.google.com with SMTP id 66so16566340pgc.9 for ; Mon, 04 Oct 2021 09:01:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KTcpw9sCjqvIJZJ39IDi/BwDIrqeO0Am9Hi11LMRpIU=; b=Ntq5DylHcg5DlRpyldG4xAcjuJIlDVF1Ee+2DZJp92VuDmm4ydDvc6DkvWFc+TBuL9 6SuHdZSSoaU11Ob5tuVFkzcibGwpxFZRNmPFQcKCINGxSWTzANzbETfUEX4zT1jgFyoT YLryASnuZPkz65d25nI+QtIs8+FpL50pBAha28vDZDhr8VHN3LTnSYRwY6QYjQZHa4a8 NKbcjzhY1SjWiE/HtWdy3zNmtAYEXXmt5Ci9RkEU6MFy860mZ/57EX1d89E33uRS4Rv0 F923zdLFAFkGlcz/wBvF9s7XoBRSgQt30eSSzO4qk2fZb+1XncYwrHKuecxMeHGjHSVS ioyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KTcpw9sCjqvIJZJ39IDi/BwDIrqeO0Am9Hi11LMRpIU=; b=UtzB/e2eT8OIXs1gTAoZRzJLGzVV7QuKfCPqNbUMFemdC2v4Dm975Q4b0ZcjizXUoi TQktCUX7BBvHDP+fCbrET8sUdCNGXx04Fg7Xe0vvvB9Rvnbc37nLK9wjHNfJso1MMwTh 5Tx6zpyOlJl9v8hvtHCPvpYj+yo25Ew77v5J27o1wzaBHQnuPFCCq/R0GeoJ9jaeiyvV Whmb0iYKnOu1FhWy27thCcwioVcw07ylr8vetCon/nrPIkiNiBV4Ph07r7avSzKp2mbX bk8Uuy7NJpyGv9T0oMNjefbG2sPCqfhRmFrqWGj9ui892K+duhbWSMSo7AriBWVLWnyQ lcDg== X-Gm-Message-State: AOAM533BpNf/4AHZtm8231kxM2CLjoiBapHd225rEQOrwG8L/o31c9H0 MB2TTK+xwRRiXm/wWW8JCSYvJjypK1w= X-Google-Smtp-Source: ABdhPJy5P8mVVOCVTJhMsvahaySzsxEMfCsHGzIlWkVh73FNZIVY7qjPB2jWzkLNOLqin0wQFCn2Qw== X-Received: by 2002:a63:200a:: with SMTP id g10mr11298425pgg.242.1633363303366; Mon, 04 Oct 2021 09:01:43 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:43 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 17/52] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Date: Tue, 5 Oct 2021 02:00:14 +1000 Message-Id: <20211004160049.1338837-18-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Processors that support KVM HV do not require read-modify-write of the CTRL SPR to set/clear their thread's runlatch. Just write 1 or 0 to it. 
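For illustration, the change boils down to replacing a read-modify-write of the run-latch bit with a direct store. A minimal sketch, assuming the usual mfspr()/mtspr() accessors and the SPRN_CTRLF/SPRN_CTRLT definitions from asm/reg.h (the helper names here are illustrative only, not taken from the patch):

	/* Old pattern: read CTRLF, clear the run bit, write back via CTRLT */
	static inline void runlatch_clear_rmw(void)
	{
		mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1);
	}

	/* New pattern: on CPUs that support KVM HV the other CTRL bits do not
	 * need to be preserved, so the run bit can simply be written. */
	static inline void runlatch_clear_direct(void)
	{
		mtspr(SPRN_CTRLT, 0);
	}

The practical effect is that the mfspr disappears from the guest entry/exit and nap paths, as the hunks below show.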
Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv.c | 2 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++++++--------- 2 files changed, 7 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f0ad3fb2eabd..1c5b81bd02c1 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4058,7 +4058,7 @@ static void load_spr_state(struct kvm_vcpu *vcpu) */ if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); + mtspr(SPRN_CTRLT, 0); } static void store_spr_state(struct kvm_vcpu *vcpu) diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 7fa0df632f89..070e228b3c20 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -775,12 +775,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtspr SPRN_AMR,r5 mtspr SPRN_UAMOR,r6 - /* Restore state of CTRL run bit; assume 1 on entry */ + /* Restore state of CTRL run bit; the host currently has it set to 1 */ lwz r5,VCPU_CTRL(r4) andi. r5,r5,1 bne 4f - mfspr r6,SPRN_CTRLF - clrrdi r6,r6,1 + li r6,0 mtspr SPRN_CTRLT,r6 4: /* Secondary threads wait for primary to have done partition switch */ @@ -1203,12 +1202,12 @@ guest_bypass: stw r0, VCPU_CPU(r9) stw r0, VCPU_THREAD_CPU(r9) - /* Save guest CTRL register, set runlatch to 1 */ + /* Save guest CTRL register, set runlatch to 1 if it was clear */ mfspr r6,SPRN_CTRLF stw r6,VCPU_CTRL(r9) andi. r0,r6,1 bne 4f - ori r6,r6,1 + li r6,1 mtspr SPRN_CTRLT,r6 4: /* @@ -2178,8 +2177,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) * Also clear the runlatch bit before napping. */ kvm_do_nap: - mfspr r0, SPRN_CTRLF - clrrdi r0, r0, 1 + li r0,0 mtspr SPRN_CTRLT, r0 li r0,1 @@ -2198,8 +2196,7 @@ kvm_nap_sequence: /* desired LPCR value in r5 */ bl isa206_idle_insn_mayloss - mfspr r0, SPRN_CTRLF - ori r0, r0, 1 + li r0,1 mtspr SPRN_CTRLT, r0 mtspr SPRN_SRR1, r3 From patchwork Mon Oct 4 16:00:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536207 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=IqCkpfAb; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSq0DPfz9t6h for ; Tue, 5 Oct 2021 03:01:47 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236240AbhJDQDf (ORCPT ); Mon, 4 Oct 2021 12:03:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236129AbhJDQDf (ORCPT ); Mon, 4 Oct 2021 12:03:35 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22868C061745 for ; Mon, 4 Oct 2021 09:01:46 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id t11so185356plq.11 for ; Mon, 04 Oct 2021 09:01:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=BSpAfqkgDPepZQ3GfNve5U7U5lObdzccs8al+a/vxj0=; b=IqCkpfAb746JTjXXOlrU9/9XyZ4v5/VseeUrceuibJEaNgm38bLE/QJzcDgX+Z+vpW EfsuTtW6qMhx/+zfMB3Dn7+FDGPe8CxYFVzcppylROPglw322XQAmYHc+ScHArFgFU3F Ve5kOplnwKdqR/YxFuP4jfFddEuZqKzM62+iXLAhSQOZaZvrxKV04Ji16tO0TSUT8f50 1UjjSTBaA54EI2pOQii8/Coo/BAltJlEi0UdK++n9oKTXfcrFuTGp2wBtaiAlSPsHjx9 duxaxbtVtxD4ngVn/cSB5trDFNkvlTp4J2EOMXJFEvVNMDfjke1w8h5Cj8Bb/Ed4pJ7I c+fw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=BSpAfqkgDPepZQ3GfNve5U7U5lObdzccs8al+a/vxj0=; b=x/vnuBwhyNEhcb7VMk7RMxu/Af9hn7aCf0IJFeIHuz6oz5PijfjGDn4rNpY1u3J8O0 fVJFUQpu2x91Goahe/s6UG96lpc+DYa8dpri6IpEK5XgV39lxM6q5+bMx5bN0gnoxmT4 2aBA1cUa8kWgCBdFzbL3EttPkXRdwZgqdJn7TOvOD0k3NPDB6v6/rM/LiyLCDr9u5r0f g+Y3dP+6uVwYlLAeG7IPxoqjRE/jzUMDVg+HUCNN0pPQHdmWZvVmgCguAyM++XARH47u I+KowqhTwkfLqzNkJeJjN+VMypYyRjMUVsQpnwmUEZimXZEnJrPsq8zL85TqExM038Uk IX6g== X-Gm-Message-State: AOAM532zkvUrV5B3IDtzmH4d+V/LEuqz/dQM0K5vfEgbT4dW3rq8gsYJ wAXWNoAiARVt8ux/923BqCYB/fX+ZGw= X-Google-Smtp-Source: ABdhPJy7cR2TZgBjVDra3SULaeyL51sytbKil2sKTuW6jFcSH9HYTe/YDMGM7AUNc0n6mmrOs2JYeA== X-Received: by 2002:a17:903:2283:b0:13e:acd8:86c2 with SMTP id b3-20020a170903228300b0013eacd886c2mr427722plh.78.1633363305523; Mon, 04 Oct 2021 09:01:45 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:45 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 18/52] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Date: Tue, 5 Oct 2021 02:00:15 +1000 Message-Id: <20211004160049.1338837-19-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the SPR update into its relevant helper function. This will help with SPR scheduling improvements in later changes. 
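Concretely, the mtspr of SPRN_SPRG_VDSO_WRITE moves from the tail of kvmhv_p9_guest_entry() into restore_p9_host_os_sprs(), so the host-side SPR restores end up grouped in one helper. A rough, abridged sketch of the resulting shape (based on the hunks below, not the complete function):

	static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
					    struct p9_host_os_sprs *host_os_sprs)
	{
		/* SPRG used by the VDSO is host state; restore it here with
		 * the other host SPRs rather than at the end of guest entry. */
		mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);

		mtspr(SPRN_PSPB, 0);
		mtspr(SPRN_UAMOR, 0);
		/* ... remaining host SPR restores ... */
	}

Keeping these stores together is what the later SPR scheduling changes in the series build on.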
Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/kvm/book3s_hv.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 1c5b81bd02c1..fca89ed2244f 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4093,6 +4093,8 @@ static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + mtspr(SPRN_PSPB, 0); mtspr(SPRN_UAMOR, 0); @@ -4293,8 +4295,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, timer_rearm_host_dec(tb); - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmppc_subcore_exit_guest(); return trap; From patchwork Mon Oct 4 16:00:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536209 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=CnRmKiLz; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSt74Fmz9t6h for ; Tue, 5 Oct 2021 03:01:50 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236249AbhJDQDj (ORCPT ); Mon, 4 Oct 2021 12:03:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDh (ORCPT ); Mon, 4 Oct 2021 12:03:37 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4D662C061745 for ; Mon, 4 Oct 2021 09:01:48 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id np13so1886668pjb.4 for ; Mon, 04 Oct 2021 09:01:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=2M5qUMpszc7y0kTAP66EoNWnoS+C/f8xv2fiQGab9VE=; b=CnRmKiLz1JL74iQrGl08XR3HTo747VKcYQ20r0eDBIIsPI+1Lpr+eknpdC4p6KaWDh CWnq+y/FYtUwOwfGNF1y3jDh+qCEWw3XC5/LUoL/INrZ71O7njvMKLVmB6yikUD0WFhp 4wR3wSfZf4xcSWfarpxiPVCGyGSrZJM0IbhhQs4LJY4Ws0lv+tHLtnc9KD70/c3bgOBe aHFhXwjtv0b4JneUN1WYxmJVr+fwhmH7RLG+d/xfwgObQrq8+Ofh+z9tbhyyI7fjFwcn 5hE26w2BGI07ihOwRs0Ly+QKnrHJ/ILrCbB5Fyv5y9aeIjY8rhP8Jz4u4be34w+taMss V/CQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=2M5qUMpszc7y0kTAP66EoNWnoS+C/f8xv2fiQGab9VE=; b=IAy8db9VQQ59c7ihfJUJ1QXIdlR6GQ9IjLorU8XKcO7THsRBWZP/8hZ03wPxiU3E1x bV+kEX7Y5KCPWGgKqBNSY0KQtOBdLIamGut11rUq23mwIBidn/gawna71ZGE1ISoHSAX M23lUlFHHbECDIkcfpOJUszLI8ijs1Vv4lWw/t846lhvfN/EogAXSN1pkU8xO8HcGu1H OUXDmgEvdMZSV62SHPr4VrWRnNgkkQ/2KvWzdeQUSTIcOfjHuOGcNchEmzNoj1TCG2Mp 
Fwj5r0Wp0tIz0Aeo/OYl29cYJFEvxHKYIarBLvmStDRzocA/p0AAQY0JsbazCBnKdc0U Fqaw== X-Gm-Message-State: AOAM5313RFAZ9L07BkfpiKBGTUV/utNiq9mduc+kKz8mZSANJi5eqqQl jTOKMnC9O0LksxsB2eKHSzPZ7UgtR3Y= X-Google-Smtp-Source: ABdhPJzRUb46DftirO/YcVQWJy7uI+GSu/NkkbbWaPfkFPFMnhTIUgJhYF9N0XI6+rrDfjjwoLu3Kw== X-Received: by 2002:a17:902:9b88:b0:13e:55b1:2939 with SMTP id y8-20020a1709029b8800b0013e55b12939mr424178plp.80.1633363307649; Mon, 04 Oct 2021 09:01:47 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:47 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 19/52] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Date: Tue, 5 Oct 2021 02:00:16 +1000 Message-Id: <20211004160049.1338837-20-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This reduces the number of mtmsrd required to enable facility bits when saving/restoring registers, by having the KVM code set all bits up front rather than using individual facility functions that set their particular MSR bits. Signed-off-by: Nicholas Piggin Reviewed-by: Fabiano Rosas --- arch/powerpc/include/asm/switch_to.h | 2 + arch/powerpc/kernel/process.c | 28 +++++++++++++ arch/powerpc/kvm/book3s_hv.c | 59 ++++++++++++++++++--------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 1 + 4 files changed, 71 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index 9d1fbd8be1c7..e8013cd6b646 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -112,6 +112,8 @@ static inline void clear_task_ebb(struct task_struct *t) #endif } +void kvmppc_save_user_regs(void); + extern int set_thread_tidr(struct task_struct *t); #endif /* _ASM_POWERPC_SWITCH_TO_H */ diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 50436b52c213..3fca321b820d 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1156,6 +1156,34 @@ static inline void save_sprs(struct thread_struct *t) #endif } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void kvmppc_save_user_regs(void) +{ + unsigned long usermsr; + + if (!current->thread.regs) + return; + + usermsr = current->thread.regs->msr; + + if (usermsr & MSR_FP) + save_fpu(current); + + if (usermsr & MSR_VEC) + save_altivec(current); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + if (usermsr & MSR_TM) { + current->thread.tm_tfhar = mfspr(SPRN_TFHAR); + current->thread.tm_tfiar = mfspr(SPRN_TFIAR); + current->thread.tm_texasr = mfspr(SPRN_TEXASR); + current->thread.regs->msr &= ~MSR_TM; + } +#endif +} +EXPORT_SYMBOL_GPL(kvmppc_save_user_regs); +#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ + static inline void restore_sprs(struct thread_struct *old_thread, struct thread_struct *new_thread) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index fca89ed2244f..16365c0e9872 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4140,6 +4140,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, 
next_timer; + unsigned long msr; int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -4151,8 +4152,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (next_timer < time_limit) time_limit = next_timer; + vcpu->arch.ceded = 0; + save_p9_host_os_sprs(&host_os_sprs); + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -4161,12 +4177,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + msr = mfmsr(); /* TM restore can update msr */ + } switch_pmu_to_guest(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); @@ -4275,7 +4292,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, restore_p9_host_os_sprs(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4825,19 +4841,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) unsigned long user_tar = 0; unsigned int user_vrsave; struct kvm *kvm; + unsigned long msr; if (!vcpu->arch.sane) { run->exit_reason = KVM_EXIT_INTERNAL_ERROR; return -EINVAL; } + /* No need to go into the guest when all we'll do is come back out */ + if (signal_pending(current)) { + run->exit_reason = KVM_EXIT_INTR; + return -EINTR; + } + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM /* * Don't allow entry with a suspended transaction, because * the guest entry/exit code will lose it. - * If the guest has TM enabled, save away their TM-related SPRs - * (they will get restored by the TM unavailable interrupt). */ -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) && current->thread.regs && (current->thread.regs->msr & MSR_TM)) { if (MSR_TM_ACTIVE(current->thread.regs->msr)) { @@ -4845,12 +4866,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) run->fail_entry.hardware_entry_failure_reason = 0; return -EINVAL; } - /* Enable TM so we can read the TM SPRs */ - mtmsr(mfmsr() | MSR_TM); - current->thread.tm_tfhar = mfspr(SPRN_TFHAR); - current->thread.tm_tfiar = mfspr(SPRN_TFIAR); - current->thread.tm_texasr = mfspr(SPRN_TEXASR); - current->thread.regs->msr &= ~MSR_TM; } #endif @@ -4865,18 +4880,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_core_prepare_to_enter(vcpu); - /* No need to go into the guest when all we'll do is come back out */ - if (signal_pending(current)) { - run->exit_reason = KVM_EXIT_INTR; - return -EINTR; - } - kvm = vcpu->kvm; atomic_inc(&kvm->arch.vcpus_running); /* Order vcpus_running vs. 
mmu_ready, see kvmppc_alloc_reset_hpt */ smp_mb(); - flush_all_to_thread(current); + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + kvmppc_save_user_regs(); /* Save userspace EBB and other register values */ if (cpu_has_feature(CPU_FTR_ARCH_207S)) { diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index a7f63082b4e3..fb9cb34445ea 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -224,6 +224,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } + /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); host_hfscr = mfspr(SPRN_HFSCR); From patchwork Mon Oct 4 16:00:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536210 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=EDKj8e2O; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSv2XyVz9t5G for ; Tue, 5 Oct 2021 03:01:51 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236129AbhJDQDj (ORCPT ); Mon, 4 Oct 2021 12:03:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236250AbhJDQDj (ORCPT ); Mon, 4 Oct 2021 12:03:39 -0400 Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 722BAC061745 for ; Mon, 4 Oct 2021 09:01:50 -0700 (PDT) Received: by mail-pg1-x536.google.com with SMTP id 66so16566660pgc.9 for ; Mon, 04 Oct 2021 09:01:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kbNMN5/ycF90abyDvuus1UyFcgyOK6fk2IGspBrPPPc=; b=EDKj8e2OCD1oNd6XMKXNSyzhvjqRttg0JdlauhpsHZig3ufREVZQJZXGJ9Pjygq0F3 5r7thMf9llIC+mx+PAznq0dMCNPUvg419LUXbxQ/1JE3WnaUb1SB+WM/J0Xih3tYcnis BStK+ZJF1FQ3ES5bOQtPtcWHwG2VXEOmSeuMxpXdv8Kt7iKc3T22YE1bzeY2n0VUZM2q GOmTYo7z6ldBS9JFkJ/M4kFRUljo2D+/XRIRyrMuuAtX2nqAFPNUMaFDTve1foR/FnhH IfuE+g4D0BCajyxF1s4Y2vjtdWzwaT4PNYkI6jONzYSs69nSeDxInXAMlpH1BU7Am55l pVww== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kbNMN5/ycF90abyDvuus1UyFcgyOK6fk2IGspBrPPPc=; b=ZflZh2+3thqDelvtDakSwFSe5UUuIxLQWveacEvhTryoQASmQAqftJYt+gwk61DBqW EleVu3KuOBOscOS9cJ2kx9am/WvYNQRiO0gMzph4I5/iXGc5z+TDDLmpWz05jr3cvDy8 
ceeKNmAR4mKrElnqEO03eZ0rNKO7mMs8rmoHoV2N3rBmXN2j1l4p3XrdhW/Dp63cCM1/ ZKtiJeuZSqvgVzu5JML74UMVGfHJGQiEV4OO8tbc/5g/K8+Mp6JaQg718jl5N6YnmPzm rYGMsoRuniWPByFxVYBcE5gMohscn3SEQdf3GYFGNL6pXQeZWnEdsndE2bjH29cQikyv M6bg== X-Gm-Message-State: AOAM532Sj7oV2NvN9g9+Fsex7EjXnxAXdIUmGXN5geKwp6JAGxVkF7T+ kJw1mG+cfRTT6IhpYYzuiPxpYXdj5yw= X-Google-Smtp-Source: ABdhPJxYgoK0eSIGz6nCE8LCrmlN+ynSfHHlsBS/gxu1ZN3R1c4RJZLKXwXJjDWqduKRYw303ZEMVQ== X-Received: by 2002:a63:710d:: with SMTP id m13mr11593324pgc.467.1633363309813; Mon, 04 Oct 2021 09:01:49 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:49 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 20/52] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Date: Tue, 5 Oct 2021 02:00:17 +1000 Message-Id: <20211004160049.1338837-21-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Moving the mtmsrd after the host SPRs are saved and before the guest SPRs start to be loaded can prevent an SPR scoreboard stall (because the mtmsrd is L=1 type, which does not cause context synchronisation). This also now makes it more convenient to combine with the mtmsrd L=0 instruction to enable facilities just below, but that is not done yet. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 16365c0e9872..7e8ddffd61c7 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4156,6 +4156,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable.
+ */ + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + /* MSR bits may have been cleared by context switch */ msr = 0; if (IS_ENABLED(CONFIG_PPC_FPU)) @@ -4667,6 +4679,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvmppc_vcore *vc; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; + unsigned long flags; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4710,11 +4723,11 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); - local_irq_disable(); - hard_irq_disable(); + /* flags save not required, but irq_pmu has no disable/enable API */ + powerpc_local_irq_pmu_save(flags); if (signal_pending(current)) goto sigpend; - if (lazy_irq_pending() || need_resched() || !kvm->arch.mmu_ready) + if (need_resched() || !kvm->arch.mmu_ready) goto out; if (!nested) { @@ -4769,7 +4782,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); @@ -4827,7 +4840,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; out: - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); preempt_enable(); goto done; } From patchwork Mon Oct 4 16:00:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536212 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=XITgc8RJ; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQSx6q5Lz9t5G for ; Tue, 5 Oct 2021 03:01:53 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236247AbhJDQDm (ORCPT ); Mon, 4 Oct 2021 12:03:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43466 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDl (ORCPT ); Mon, 4 Oct 2021 12:03:41 -0400 Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com [IPv6:2607:f8b0:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E55B3C061745 for ; Mon, 4 Oct 2021 09:01:52 -0700 (PDT) Received: by mail-pg1-x52d.google.com with SMTP id m21so16926702pgu.13 for ; Mon, 04 Oct 2021 09:01:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+uifPGYlRIbN5mTHvUbZjtaaYYRvM7KqJ3vO8EmF4DI=; b=XITgc8RJByYf5tfAAvYEE3fH5jgqKB2ALbN1qiAbVXPBSv3UghC3UhFKAggSOwhafw nd9sbGhwE0w/Ce6sxG/M4VHE4EMtvxNJypiDXY/5Rr5wPcO59ioDlh+FDVev9c/un9zR ZOS6i7BAzbp76Vb7vJpA6JHPtpRUqxcQSpo1eyrH/YY3pWwD6H7uLBZfvrRVRZcqpQDS HihpwPE8PpWqiOQq/+UIirBaDJZfb6utgiZPv9rt3vhQpTBkz0JKHaOR3UksFClgl8PX 2e664zqByaQX1VH0Dq17LnQdUKi6HxhZxhTD7l2Q+78gAa4qUgdvm/mEvYq5ALIlzwa2 k6hA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+uifPGYlRIbN5mTHvUbZjtaaYYRvM7KqJ3vO8EmF4DI=; b=t0p6QoXicVQefhQKre93HV12REaepXfnmcLXJdVBI96zK+n/vYhOuIALWjUZE08CAn HN5DWOhSWqAU/J1j8tXw57j+Z/a/9tI+ZX4SX3DO4bBl7QIbIEfs3cK5eTrnyzf4UbdU XHqiBUAC7jb2/LPme1yriPzWj43UExDgt71mVNuttiG5eo+wfxdm8f6mVXy3ia2YB6mt Q21qbHSMx80Ei1lBKWuXsS7iAsQ1Y+u6RH9ikeVsW6gj5x83KjohqYvq+/wvfC8hkJql n2xRtmE+joAMXjpILIR+4pCTknEUVi1hMxw7f3DDQ1W/XG7TQSxlYeuVXaO7eAHkaC19 JM6Q== X-Gm-Message-State: AOAM531N1LhjOGXTZgMZuWn+A2JBoUYzfmtSEFoLdpG+XMfskdLg2aKf +KkRIjj4ryPjqyJp4/4pvlaSwx33s0A= X-Google-Smtp-Source: ABdhPJyNq27FnsuPIjdFF/2bUzMcOMAx0XQUSvD/HRDWTzd9kWE+rNvvjQl1qezB9/4JVlhHsWfEMQ== X-Received: by 2002:a63:ce52:: with SMTP id r18mr11325811pgi.350.1633363312287; Mon, 04 Oct 2021 09:01:52 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:52 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Fabiano Rosas Subject: [PATCH v3 21/52] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Date: Tue, 5 Oct 2021 02:00:18 +1000 Message-Id: <20211004160049.1338837-22-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Small cleanup makes it a bit easier to match up entry and exit operations. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7e8ddffd61c7..0a711d929531 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3070,6 +3070,13 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } +/* Old path does this in asm */ +static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) +{ + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; +} + static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -4297,8 +4304,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = (s32) dec; tb = mftb(); vcpu->arch.dec_expires = dec + tb; - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; store_spr_state(vcpu); @@ -4782,6 +4787,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); + kvmppc_stop_thread(vcpu); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); From patchwork Mon Oct 4 16:00:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536213 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=oJAyyEaK; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; 
envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQT06cCqz9t5G for ; Tue, 5 Oct 2021 03:01:56 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236250AbhJDQDp (ORCPT ); Mon, 4 Oct 2021 12:03:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDo (ORCPT ); Mon, 4 Oct 2021 12:03:44 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4543C061745 for ; Mon, 4 Oct 2021 09:01:55 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id w11so180919plz.13 for ; Mon, 04 Oct 2021 09:01:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=3XXedmojiFo3V1R6yotydbEJx5BiJd/Iil1JAE4Tbzw=; b=oJAyyEaKMMxeJFzp9m94CHwwN6MsxeUeX5Em//BMXx4L3l1SAFgu5tsEimNt20pJvL m+LdwLB4AUdDSwe+6pOCihNOfRcS26j1XV4Y0dGgPXC2z8gKaRUTA4DX+PEgXVa6nvsy qOnFLEcYPBHN4+DQfbIwhdgQ20aPbj6KrZ+ibJp9x6fKoVcFwEAws8njp1+Lf+6UUuwF gz9QxSmGbW9940PG3SH2tagkSmORSpnD+g5rY5fJu1/d1CCnxt+TI5EIs9CZeG5s/pLo 8GuJFUV1T/s8ettewKVirYT1wORZXgrd8RuGSHfbg7r87w7yzhS7lz4U40vg8H579ndS YHDg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=3XXedmojiFo3V1R6yotydbEJx5BiJd/Iil1JAE4Tbzw=; b=mQ0bmavYOwPO5qs8extbgnE9VwwcunTFxQeHdlCgOlyay5VYqnwVwtNERSfHSP9AwF J16LVfNoh8AO5K4AGEtyWDaoxOP2sNrT9cPlLT6LAVdHSVoyDfopoQW8JZybFzFAwrjR ElbJa0Nx3w1h14OciYq4hu5+LdegemAHJnqbsSeakUunbkV1mulk9DEVDDv/Oj8bmW+y YTR0RXcbZBBApts9qeCWKrEPaNK6u7ksxbFLovdtk/72qc3ozp7tdsRXba8OgcjTEPc/ dwCOHTtn1UvSfq6pXJFFKXObfMa5mZg+82aFPKCBQ9PHp6X8Qe/COFHj+QDnUy39QE5b lZLQ== X-Gm-Message-State: AOAM530RQP4ZllTPfQudAchszfZMAK4lT06fmHmhDn+/tcNl3uKlGLZo 9EsWbMB3UEhdakJHdj/vWK4o0Uk3UfQ= X-Google-Smtp-Source: ABdhPJxLBjq0vXdrnCbGlYLGCQPjPnmal5uR6YNaH5nqpla61yDE9CeyIAddifuqNvQVWvaTEQJItA== X-Received: by 2002:a17:90a:b783:: with SMTP id m3mr38286500pjr.183.1633363314620; Mon, 04 Oct 2021 09:01:54 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:54 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 22/52] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Date: Tue, 5 Oct 2021 02:00:19 +1000 Message-Id: <20211004160049.1338837-23-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Change dec_expires to be relative to the guest timebase, and allow it to be moved into low level P9 guest entry functions, to improve SPR access scheduling. 
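The conversion this relies on is simple arithmetic: the guest timebase is the host timebase plus the vcore's tb_offset, so an expiry stored in guest-TB units translates back for host-side comparisons by subtracting the offset. A minimal sketch mirroring the kvmppc_dec_expires_host_tb() helper added in the diff below (the function name here is illustrative):

	/*
	 * guest_tb = host_tb + vc->tb_offset, so a DEC expiry kept in
	 * guest-timebase units converts to the host view by subtracting
	 * the vcore's offset.
	 */
	static inline u64 dec_expires_in_host_tb(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset;
	}

On the entry side, DEC itself is then programmed directly from the guest-TB value (dec_expires minus the current, already-offset timebase), which is what allows the mtspr to move into the low level entry code.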
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s.h | 6 +++ arch/powerpc/include/asm/kvm_host.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 58 +++++++++++++------------ arch/powerpc/kvm/book3s_hv_nested.c | 3 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 10 ++++- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 13 ------ 6 files changed, 49 insertions(+), 43 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h index caaa0f592d8e..15b573671f99 100644 --- a/arch/powerpc/include/asm/kvm_book3s.h +++ b/arch/powerpc/include/asm/kvm_book3s.h @@ -406,6 +406,12 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu) return vcpu->arch.fault_dar; } +/* Expiry time of vcpu DEC relative to host TB */ +static inline u64 kvmppc_dec_expires_host_tb(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset; +} + static inline bool is_kvmppc_resume_guest(int r) { return (r == RESUME_GUEST || r == RESUME_GUEST_NV); diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 080a7feb7731..c5fc4d016695 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -741,7 +741,7 @@ struct kvm_vcpu_arch { struct hrtimer dec_timer; u64 dec_jiffies; - u64 dec_expires; + u64 dec_expires; /* Relative to guest timebase. */ unsigned long pending_exceptions; u8 ceded; u8 prodded; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0a711d929531..4abe4a24e5e7 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2261,8 +2261,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, *val = get_reg_val(id, vcpu->arch.vcore->arch_compat); break; case KVM_REG_PPC_DEC_EXPIRY: - *val = get_reg_val(id, vcpu->arch.dec_expires + - vcpu->arch.vcore->tb_offset); + *val = get_reg_val(id, vcpu->arch.dec_expires); break; case KVM_REG_PPC_ONLINE: *val = get_reg_val(id, vcpu->arch.online); @@ -2514,8 +2513,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val)); break; case KVM_REG_PPC_DEC_EXPIRY: - vcpu->arch.dec_expires = set_reg_val(id, *val) - - vcpu->arch.vcore->tb_offset; + vcpu->arch.dec_expires = set_reg_val(id, *val); break; case KVM_REG_PPC_ONLINE: i = set_reg_val(id, *val); @@ -2902,13 +2900,13 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) unsigned long dec_nsec, now; now = get_tb(); - if (now > vcpu->arch.dec_expires) { + if (now > kvmppc_dec_expires_host_tb(vcpu)) { /* decrementer has already gone negative */ kvmppc_core_queue_dec(vcpu); kvmppc_core_prepare_to_enter(vcpu); return; } - dec_nsec = tb_to_ns(vcpu->arch.dec_expires - now); + dec_nsec = tb_to_ns(kvmppc_dec_expires_host_tb(vcpu) - now); hrtimer_start(&vcpu->arch.dec_timer, dec_nsec, HRTIMER_MODE_REL); vcpu->arch.timer_running = 1; } @@ -3380,7 +3378,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) */ spin_unlock(&vc->lock); /* cancel pending dec exception if dec is positive */ - if (now < vcpu->arch.dec_expires && + if (now < kvmppc_dec_expires_host_tb(vcpu) && kvmppc_core_pending_dec(vcpu)) kvmppc_core_dequeue_dec(vcpu); @@ -4211,20 +4209,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, load_spr_state(vcpu); - /* - * When setting DEC, we must always deal with irq_work_raise via NMI vs - * setting DEC. 
The problem occurs right as we switch into guest mode - * if a NMI hits and sets pending work and sets DEC, then that will - * apply to the guest and not bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] for - * example) and set HDEC to 1? That wouldn't solve the nested hv - * case which needs to abort the hcall or zero the time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); - if (kvmhv_on_pseries()) { /* * We need to save and restore the guest visible part of the @@ -4250,6 +4234,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, hvregs.vcpu_token = vcpu->vcpu_id; } hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), @@ -4261,6 +4262,12 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && kvmppc_get_gpr(vcpu, 3) == H_CEDE) { @@ -4268,6 +4275,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_set_gpr(vcpu, 3, 0); trap = 0; } + } else { kvmppc_xive_push_vcpu(vcpu); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); @@ -4299,12 +4307,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; - store_spr_state(vcpu); restore_p9_host_os_sprs(vcpu, &host_os_sprs); @@ -4801,7 +4803,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < vcpu->arch.dec_expires) || + ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index ed8a2c9f5629..7bed0b91245e 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -358,6 +358,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) /* convert TB values/offsets to host (L0) values */ hdec_exp = l2_hv.hdec_expiry - vc->tb_offset; vc->tb_offset += l2_hv.tb_offset; + vcpu->arch.dec_expires += l2_hv.tb_offset; /* set L1 state to L2 state */ vcpu->arch.nested = l2; @@ -399,6 +400,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) if (l2_regs.msr & 
MSR_TS_MASK) vcpu->arch.shregs.msr |= MSR_TS_S; vc->tb_offset = saved_l1_hv.tb_offset; + /* XXX: is this always the same delta as saved_l1_hv.tb_offset? */ + vcpu->arch.dec_expires -= l2_hv.tb_offset; restore_hv_regs(vcpu, &saved_l1_hv); vcpu->arch.purr += delta_purr; vcpu->arch.spurr += delta_spurr; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index fb9cb34445ea..814b0dfd590f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -188,7 +188,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; - s64 hdec; + s64 hdec, dec; u64 tb, purr, spurr; u64 *exsave; bool ri_set; @@ -317,6 +317,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: #endif @@ -461,6 +463,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + tb; + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 070e228b3c20..b25d5a339bb6 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -808,10 +808,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) * Set the decrementer to the guest decrementer. 
*/ ld r8,VCPU_DEC_EXPIRES(r4) - /* r8 is a host timebase value here, convert to guest TB */ - ld r5,HSTATE_KVM_VCORE(r13) - ld r6,VCORE_TB_OFFSET_APPL(r5) - add r8,r8,r6 mftb r7 subf r3,r7,r8 mtspr SPRN_DEC,r3 @@ -1186,9 +1182,6 @@ guest_bypass: mftb r6 extsw r5,r5 16: add r5,r5,r6 - /* r5 is a guest timebase value here, convert to host TB */ - ld r4,VCORE_TB_OFFSET_APPL(r3) - subf r5,r4,r5 std r5,VCPU_DEC_EXPIRES(r9) /* Increment exit count, poke other threads to exit */ @@ -2154,9 +2147,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) /* save expiry time of guest decrementer */ add r3, r3, r5 ld r4, HSTATE_KVM_VCPU(r13) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - subf r3, r6, r3 /* convert to host TB value */ std r3, VCPU_DEC_EXPIRES(r4) #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING @@ -2253,9 +2243,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) /* Restore guest decrementer */ ld r3, VCPU_DEC_EXPIRES(r4) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - add r3, r3, r6 /* convert host TB to guest TB value */ mftb r7 subf r3, r7, r3 mtspr SPRN_DEC, r3 From patchwork Mon Oct 4 16:00:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536215 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=Q2st7IA+; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTG6jQ3z9t5G for ; Tue, 5 Oct 2021 03:02:10 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236267AbhJDQDr (ORCPT ); Mon, 4 Oct 2021 12:03:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDq (ORCPT ); Mon, 4 Oct 2021 12:03:46 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B97CDC061746 for ; Mon, 4 Oct 2021 09:01:57 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id l6so191623plh.9 for ; Mon, 04 Oct 2021 09:01:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=Q2st7IA+GB2J+xGfUo5M/91N33BF/4uiPcYwT8WayzLhI4OVKyfURKgwEzxKcT90z6 xcljRDJkowhFe8Pz/K22/a6wAcFi9DSTvzqOsivQDn6V6VOvWKgCHipBDTwB21NJd+aC 1z/UJkEPPureUrlUPDqP17po/gpf+ywAlKpq+dFA4QWlLX5lpuerPQdKZbRD/H5wC09R ClXEhcTbZ5kmg8o2JArEG3S39MDi2bkAIQzCCPQ1XeOA9mgolaH0fz634yzXcJsOf3Ji McAYKF+YR3myCMuztpEgbPFq/UJ2WIsMxPgy6UKLSHrkAr6asLGK2qu7/KSguF7mxkdS VbWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=UjGoIK6CCGwCIvJRCuy+smTuK4JYIa3fpvtUmLSCQo2cTGGfGAqP+SDuSnj18n7BEG 
NljxRICWbh9F6RcPxB97h/P4wlLUYBaRlVVT06N12ltsz2S4EZBAGhchi6+Ir6PhmS4D EPENL8SsJjNQgRStzX5JjgUQTMZ15Oo55eeeGSO3miadTmQzirKBREUelh3oqaQahDFp nHKsTnH/TcVEgwJvwrerv2uWIuKISNTaYCcKMXWHYyzhdxKSl1Atm1k9rKFexNqpW5Tw GLzh4MiB/9ry4k5faRASlFp91RF87qpj29U6HdrdFRLpN59WoW7FiSOztgmi/6a/gFZq XjjQ== X-Gm-Message-State: AOAM531WZs5YgYNEavqLDt/X3hBvQlncQUtcbNSj5SQjpxtiRaKN+6Sz Kxglk59npO189CMK/wCruM0HnQEPdO8= X-Google-Smtp-Source: ABdhPJw4EV0wPMjuOJXMgE5bP61ZjBY1L6yljvIKLwCq3aWSII2odyqJx4cOC5Y/xdv/sOXThT2m4w== X-Received: by 2002:a17:90b:3149:: with SMTP id ip9mr26701634pjb.13.1633363317178; Mon, 04 Oct 2021 09:01:57 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:56 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 23/52] KVM: PPC: Book3S HV P9: Move TB updates Date: Tue, 5 Oct 2021 02:00:20 +1000 Message-Id: <20211004160049.1338837-24-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the TB updates between saving and loading guest and host SPRs, to improve scheduling by keeping issue-NTC operations together as much as possible. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 36 +++++++++++++-------------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 814b0dfd590f..e7793bb806eb 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -215,15 +215,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = vc->tb_offset; - } - /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); @@ -238,6 +229,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + if (vc->tb_offset) { + u64 new_tb = tb + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = vc->tb_offset; + } + if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); mtspr(SPRN_DPDES, vc->dpdes); @@ -469,6 +469,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc tb = mftb(); vcpu->arch.dec_expires = dec + tb; + if (vc->tb_offset_applied) { + u64 new_tb = tb - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = 0; + } + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -503,15 +512,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); - if (vc->tb_offset_applied) { - u64 new_tb = mftb() - 
vc->tb_offset_applied; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = 0; - } - /* HDEC must be at least as large as DEC, so decrementer_max fits */ mtspr(SPRN_HDEC, decrementer_max); From patchwork Mon Oct 4 16:00:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536216 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=TJtsIc5I; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTl4Jndz9t6h for ; Tue, 5 Oct 2021 03:02:35 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236255AbhJDQDt (ORCPT ); Mon, 4 Oct 2021 12:03:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43502 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDt (ORCPT ); Mon, 4 Oct 2021 12:03:49 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C6AAC061745 for ; Mon, 4 Oct 2021 09:02:00 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id k23so1001287pji.0 for ; Mon, 04 Oct 2021 09:02:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OxU54WAHH2jpclwON2BfRFJZbfjN3bTAVSLd8U/g6eA=; b=TJtsIc5IxL99bMnoFFg7fwPY4lqgoKfj3wFoNUpMT09fkyAi8BSRWWF7ZCzbMXidF5 bxyE+DgbW0B3MDlMI4z2k7QHVO7JcUVyoanwfKkIXw6TSTAiHZFFD11YBE1f/zMhSNaW YmE7HuS5Ap1Kw0aaBFgBwagPjgLCHFJy4GlnWbDguUDyfi+JIPlxBibGo5Bf8o13Zhe3 fiKJMJS1UQEOYJRdJXcrTxX9B6Lr35LTP4F2hJ5TIFNqxishv4OIunpcGE1FxnfNusEq BYkN755Yrma9DoCRyMzM7AIniYAOrvje42/cuoWxY+9j2+Tmx5aJOTJWV0RSE5j4utxH reqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OxU54WAHH2jpclwON2BfRFJZbfjN3bTAVSLd8U/g6eA=; b=VlQinnXQaA2n54Oom+t5lsLp1CvatBfWbuEP4fK5YcpKkeG4LQiSbGKKGf8h6Evze0 1zQIs1S1OThxBm9o2ZXU3Cd9qwOCg76Wsj7QepHlfH9nlM8CgBHmGx3TI0PAhz2GjxfD JfnAG23dicPcO7XvBNFA2v4kwvsK0MkKJ9ecqFLOxHPRDqv+ABqR4PhDSdVakB4Hz8Wv I9Djg1Q3s3NQCXNSEcZz+iZtWS7+7oukG3Cksku8zAy9dJFTPawTYDPmvHe+jRomwUt3 IQc4TyHLRGXsL5X2YSeHxVThbm/e4irbRbHkHsSDWbkUumLE63kRGZNvvCMuMGMTA5ID WP/g== X-Gm-Message-State: AOAM531lkOsSFKvj4v+2jQDRmI1lX8bhGEUSp4LV3BnIYu+eLgfQhPuM Kbc62ky+0Wa967cCUamsHEYNOErKdAo= X-Google-Smtp-Source: ABdhPJwdnVMocCUzcnlNJkOAdbWzHt7n/SRTHEk/p3qo9Dp3om/eXqYBipV/HRSanmEQfRfunuNRZQ== X-Received: by 2002:a17:90b:3591:: with SMTP id mm17mr1829344pjb.140.1633363319379; Mon, 04 Oct 2021 09:01:59 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:01:59 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 24/52] KVM: PPC: Book3S HV P9: Optimise timebase reads Date: Tue, 5 Oct 2021 02:00:21 +1000 Message-Id: <20211004160049.1338837-25-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Reduce the number of mfTB executed by passing the current timebase around entry and exit code rather than read it multiple times. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 88 +++++++++++++----------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 33 +++++---- 3 files changed, 65 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index fff391b9b97b..0a319ed9c2fd 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,7 +154,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } -int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr); +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #endif diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 4abe4a24e5e7..f3c052b8b7ee 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -276,22 +276,22 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) * they should never fail.) */ -static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); - vc->preempt_tb = mftb(); + vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); } -static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { - vc->stolen_tb += mftb() - vc->preempt_tb; + vc->stolen_tb += tb - vc->preempt_tb; vc->preempt_tb = TB_NIL; } spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -301,6 +301,7 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -309,12 +310,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) * ever sets it to NULL. 
*/ if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST && vcpu->arch.busy_preempt != TB_NIL) { - vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; + vcpu->arch.busy_stolen += now - vcpu->arch.busy_preempt; vcpu->arch.busy_preempt = TB_NIL; } spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); @@ -324,13 +325,14 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) - vcpu->arch.busy_preempt = mftb(); + vcpu->arch.busy_preempt = now; spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); } @@ -685,7 +687,7 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) } static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc) + struct kvmppc_vcore *vc, u64 tb) { struct dtl_entry *dt; struct lppaca *vpa; @@ -696,7 +698,7 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = mftb(); + now = tb; core_stolen = vcore_stolen_time(vc, now); stolen = core_stolen - vcpu->arch.stolen_logged; vcpu->arch.stolen_logged = core_stolen; @@ -2914,14 +2916,14 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) extern int __kvmppc_vcore_entry(void); static void kvmppc_remove_runnable(struct kvmppc_vcore *vc, - struct kvm_vcpu *vcpu) + struct kvm_vcpu *vcpu, u64 tb) { u64 now; if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) return; spin_lock_irq(&vcpu->arch.tbacct_lock); - now = mftb(); + now = tb; vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - vcpu->arch.stolen_logged; vcpu->arch.busy_preempt = now; @@ -3172,14 +3174,14 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) } /* Start accumulating stolen time */ - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); spin_lock(&lp->lock); @@ -3306,7 +3308,7 @@ static void prepare_threads(struct kvmppc_vcore *vc) vcpu->arch.ret = RESUME_GUEST; else continue; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } } @@ -3325,7 +3327,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) list_del_init(&pvc->preempt_list); if (pvc->runner == NULL) { pvc->vcore_state = VCORE_INACTIVE; - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); } spin_unlock(&pvc->lock); continue; @@ -3334,7 +3336,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) spin_unlock(&pvc->lock); continue; } - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); pvc->vcore_state = VCORE_PIGGYBACK; if (cip->total_threads >= target_threads) break; @@ -3401,7 +3403,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) else ++still_running; } else { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); 
} } @@ -3410,7 +3412,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) kvmppc_vcore_preempt(vc); } else if (vc->runner) { vc->vcore_state = VCORE_PREEMPT; - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } else { vc->vcore_state = VCORE_INACTIVE; } @@ -3541,7 +3543,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) { for_each_runnable_thread(i, vcpu, vc) { vcpu->arch.ret = -EBUSY; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } goto out; @@ -3673,7 +3675,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc); + kvmppc_create_dtl_entry(vcpu, pvc, mftb()); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4139,20 +4141,17 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) * Guest entry for POWER9 and later CPUs. */ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, - unsigned long lpcr) + unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; s64 dec; - u64 tb, next_timer; + u64 next_timer; unsigned long msr; int trap; - WARN_ON_ONCE(vcpu->arch.ceded); - - tb = mftb(); next_timer = timer_get_next_tb(); - if (tb >= next_timer) + if (*tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; @@ -4249,7 +4248,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * * XXX: Another day's problem. */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); @@ -4265,8 +4264,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && @@ -4278,7 +4277,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } else { kvmppc_xive_push_vcpu(vcpu); - trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); + trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4309,6 +4308,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); + timer_rearm_host_dec(*tb); + restore_p9_host_os_sprs(vcpu, &host_os_sprs); store_fp_state(&vcpu->arch.fp); @@ -4328,8 +4329,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - timer_rearm_host_dec(tb); - kvmppc_subcore_exit_guest(); return trap; @@ -4583,7 +4582,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, mftb()); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if 
(vc->vcore_state == VCORE_SLEEPING) { @@ -4618,7 +4617,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) for_each_runnable_thread(i, v, vc) { kvmppc_core_prepare_to_enter(v); if (signal_pending(v->arch.run_task)) { - kvmppc_remove_runnable(vc, v); + kvmppc_remove_runnable(vc, v, mftb()); v->stat.signal_exits++; v->run->exit_reason = KVM_EXIT_INTR; v->arch.ret = -EINTR; @@ -4659,7 +4658,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) kvmppc_vcore_end_preempt(vc); if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; @@ -4687,6 +4686,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; unsigned long flags; + u64 tb; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4697,7 +4697,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc = vcpu->arch.vcore; vcpu->arch.ceded = 0; vcpu->arch.run_task = current; - vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb()); vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; @@ -4722,7 +4721,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); init_vcore_to_run(vc); - vc->preempt_tb = TB_NIL; preempt_disable(); pcpu = smp_processor_id(); @@ -4732,6 +4730,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* flags save not required, but irq_pmu has no disable/enable API */ powerpc_local_irq_pmu_save(flags); + if (signal_pending(current)) goto sigpend; if (need_resched() || !kvm->arch.mmu_ready) @@ -4754,12 +4753,17 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } + tb = mftb(); + + vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); + vc->preempt_tb = TB_NIL; + kvmppc_clear_host_core(pcpu); local_paca->kvm_hstate.napping = 0; local_paca->kvm_hstate.kvm_split_mode = NULL; kvmppc_start_thread(vcpu, vc); - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, tb); trace_kvm_guest_enter(vcpu); vc->vcore_state = VCORE_RUNNING; @@ -4774,7 +4778,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* Tell lockdep that we're about to enable interrupts */ trace_hardirqs_on(); - trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr); + trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr, &tb); vcpu->arch.trap = trap; trace_hardirqs_off(); @@ -4803,7 +4807,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || + ((tb < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); @@ -4839,7 +4843,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, trace_kvmppc_run_core(vc, 1); done: - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index e7793bb806eb..2bd96d8256d1 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -183,13 +183,13 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) } } -int 
kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; s64 hdec, dec; - u64 tb, purr, spurr; + u64 purr, spurr; u64 *exsave; bool ri_set; int trap; @@ -203,8 +203,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr1; unsigned long host_dawrx1; - tb = mftb(); - hdec = time_limit - tb; + hdec = time_limit - *tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; @@ -230,11 +229,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; + u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = vc->tb_offset; } @@ -317,7 +318,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - *tb); #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: @@ -466,15 +467,17 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; + *tb = mftb(); + vcpu->arch.dec_expires = dec + *tb; if (vc->tb_offset_applied) { - u64 new_tb = tb - vc->tb_offset_applied; + u64 new_tb = *tb - vc->tb_offset_applied; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = 0; } From patchwork Mon Oct 4 16:00:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536217 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=ngoqcK76; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTm1DCpz9t5G for ; Tue, 5 Oct 2021 03:02:36 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236270AbhJDQDv (ORCPT ); Mon, 4 Oct 2021 12:03:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43512 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236245AbhJDQDv (ORCPT ); Mon, 4 Oct 2021 12:03:51 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AB07C061745 for ; Mon, 4 Oct 2021 
09:02:02 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id l6so191766plh.9 for ; Mon, 04 Oct 2021 09:02:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Xn4wh7zfveyTBFbm0dkvLMIw9n95PVL7hzWsJ1Vsmpw=; b=ngoqcK76pz+7dM35BoLUMaBeR48MnmkTh8kFN+yZQMQ66Dt/GylFfGGrYfKXhikCKk v8c7JWzTnIVFvqNEPDYdDOlgfEInb6tdn2VNVFhRILVjbTsUvPHhwYDDYCk2rvz4ixGU n8flP1WB9qEtrH/gXru9gCe6CQTcNk29oqJDJM1rreqQh6L1V+TY6C3Jaha/IyJlbaag O4UJ94Kup694AkoeTh/4MCfn9sK0JP+QB/VQNKJZ9aQO9FzMAS5cG8aVGjFtEG/ZDEIx XkAazQ0LF2ecPEu26O+g1XO/xt4vUPAUbs1yGHj4dxx4vHj8Fehi8zRRiBZ+PrN+DXoK 4Fug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Xn4wh7zfveyTBFbm0dkvLMIw9n95PVL7hzWsJ1Vsmpw=; b=Z2BySDq10Mu1Gedhum6IpjRHfj1J3IX+ihpo2DLAP4FIF1oQlDXOZwNVTxqShOFIVe MJVmdNEhS/T7ocNoIJq/a1M2KeTj37A8pd3P+twEc5t0elXO82ElYU/vLYd2r7aVPOSi zodW4YYRyjP0JKUvKz2ct5nA9jmiymTKV23UDm7fZ/W1cC7X0X5fei6VjxTTPlz4O1lb XEfASYW8Aup1fBrBe1lI1DBe57p8pa/oMwCJLmitKifuqJxnye6WXMxnZK65VVku1SsS 97OaLiR+6wPdSUVUl8MR3AlLvkSic0T641xqrXnRZf7/k0+nRyGyU88ARW9yRfKC/mnd qALA== X-Gm-Message-State: AOAM5311sEsFoA/jMQOdbCGF1HqqthpnkIaITkNVBSTQ4VQG2sfcRTrX AnXlstviNmgazF2lWfXO6iRHtlEJWN8= X-Google-Smtp-Source: ABdhPJynBljTaDF/mKgN7BThuWj+RDiBNngwjhXttSoNOeKKtA+trSUomggos2CT3OIj0shw3DStlQ== X-Received: by 2002:a17:90a:8b8d:: with SMTP id z13mr33332818pjn.214.1633363321520; Mon, 04 Oct 2021 09:02:01 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.01.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:01 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 25/52] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Date: Tue, 5 Oct 2021 02:00:22 +1000 Message-Id: <20211004160049.1338837-26-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Avoid interleaving mfSPR and mtSPR to reduce SPR scoreboard stalls. 
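Back-to-back mfSPRs (and back-to-back mtSPRs) can issue and overlap in the SPR unit, whereas alternating a read with a write tends to serialise on the scoreboard. The standalone sketch below only illustrates that ordering principle; read_spr()/write_spr() are made-up stand-ins for the privileged mfspr()/mtspr() accessors and the register indices are arbitrary:

#include <stdio.h>

/* Made-up stand-ins for mfspr()/mtspr(); real SPR access is privileged. */
static unsigned long fake_sprs[4];
static unsigned long read_spr(int n) { return fake_sprs[n]; }
static void write_spr(int n, unsigned long v) { fake_sprs[n] = v; }

struct ctx { unsigned long purr, spurr, dpdes, vtb; };

/* Interleaved: reads and writes to the (modelled) SPR file alternate,
 * which is the access pattern the patch tries to avoid. */
static void switch_interleaved(struct ctx *host, const struct ctx *guest)
{
	host->purr  = read_spr(0); write_spr(0, guest->purr);
	host->spurr = read_spr(1); write_spr(1, guest->spurr);
	host->dpdes = read_spr(2); write_spr(2, guest->dpdes);
	host->vtb   = read_spr(3); write_spr(3, guest->vtb);
}

/* Grouped: all reads are issued together, then all writes, so independent
 * accesses have a better chance of overlapping. */
static void switch_grouped(struct ctx *host, const struct ctx *guest)
{
	host->purr  = read_spr(0);
	host->spurr = read_spr(1);
	host->dpdes = read_spr(2);
	host->vtb   = read_spr(3);

	write_spr(0, guest->purr);
	write_spr(1, guest->spurr);
	write_spr(2, guest->dpdes);
	write_spr(3, guest->vtb);
}

int main(void)
{
	struct ctx host = { 0 }, guest = { 1, 2, 3, 4 };

	switch_interleaved(&host, &guest);
	switch_grouped(&host, &guest);
	printf("host copy: %lu %lu %lu %lu\n",
	       host.purr, host.spurr, host.dpdes, host.vtb);
	return 0;
}

Roughly the same reshuffle is what the diff below does with PURR/SPURR and DPDES/VTB: the reads are hoisted to sit with the other mfSPRs and the restoring mtSPRs are deferred to sit with the other writes.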
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 19 +++++++++++-------- 2 files changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f3c052b8b7ee..823d64047d01 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4308,10 +4308,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4326,6 +4322,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 2bd96d8256d1..bd0021cd3a67 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -228,6 +228,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); + local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -244,8 +247,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); - local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); @@ -448,10 +449,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); - mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr + - purr - vcpu->arch.purr); - mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr + - spurr - vcpu->arch.spurr); + local_paca->kvm_hstate.host_purr += purr - vcpu->arch.purr; + local_paca->kvm_hstate.host_spurr += spurr - vcpu->arch.spurr; vcpu->arch.purr = purr; vcpu->arch.spurr = spurr; @@ -464,6 +463,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + vc->dpdes = mfspr(SPRN_DPDES); + vc->vtb = mfspr(SPRN_VTB); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -481,6 +483,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); + mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -509,8 +514,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - vc->dpdes = mfspr(SPRN_DPDES); - vc->vtb = mfspr(SPRN_VTB); mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Oct 4 16:00:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536218 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=gFr9aUUe; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTm3ZJfz9t9b for ; Tue, 5 Oct 2021 03:02:36 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236266AbhJDQDy (ORCPT ); Mon, 4 Oct 2021 12:03:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43524 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236275AbhJDQDx (ORCPT ); Mon, 4 Oct 2021 12:03:53 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 90269C061745 for ; Mon, 4 Oct 2021 09:02:04 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id kk10so2602865pjb.1 for ; Mon, 04 Oct 2021 09:02:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8AxVO/grgzWXIBf5o+D+hI/Dho5dubgQSTSKv2Waw1o=; b=gFr9aUUeYrVK4tRDw1ipvrotNBm+76TE4YJj7slaVaSut8Rf6KuoLLzDBTP8BRAcwx N/w3WAXg/Q0fRwbrEvIENOX3uSxu2UNrANDfwdFS4PB/WDf+ZsjNIz3mCh/xmOWDZImk 383Bhr4EHQ/v+nF0bt/A5NRoZwlk3fTOQjAun+/D2F01+2arwI4bp3TPaXgDAbKyL0uZ xgDXbXrM7WyLmIM+HF4VHPgcOxKE+0FyBC+smgpIHCfd8ocirVFNlylv3WvB/r2qWvRV 2aOA06fRzczSAceNW7v9BQW5057FSoc49b6u1jk5px3UmCEgdhnI2CW/YSSavJvO8xfN 6eJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8AxVO/grgzWXIBf5o+D+hI/Dho5dubgQSTSKv2Waw1o=; b=35JTjnNeFAroM1YJYHayRRPnEHpXeWCC2gZWs0AQsNBUG2WjH1R1mOq3JhmL/STvcx S+nToorWFS6vnQFLaWav+f/LABU6/+W6knrEzPuPN7+w9Dry+jZDU6MYksYWmqlOlLbx 84M4EeKiNyBQrYGIFYoJLh4ANbEyuOyG8/RF8BaX0F4f3X32ojS1cPv9oDNdHUlqChWX rMjwcipt6WUAhjh6Jhx/dNIN+wZG2rTdk1jdRKGtFkQzL+xmf/6rkRa9PK2fAmTz7pwD Jcbc1tMx6iA4sausUxau7p6Pje7gkdqc16+CTbO73ExSIbnpBN9mWriFxTQMlJngNGqr jzQQ== X-Gm-Message-State: AOAM5305POtyhELpzoUSTqSqoxJrFMVcWgt5VNj/gZ9E2/Xc0o38SPiL V/MvRYm80zPcavl5LBQNCGrLau3KkNQ= X-Google-Smtp-Source: ABdhPJyGBm0Nvq7K+JRWjMHxSMiuVauRE5/0AInNGV/sjMf31pMTCo1eNr3DbMUPq9wxiF6uNbjvOg== X-Received: by 2002:a17:90b:1b03:: with SMTP id nu3mr5377951pjb.76.1633363323713; Mon, 04 Oct 2021 09:02:03 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:03 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 26/52] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Date: Tue, 5 Oct 2021 02:00:23 +1000 Message-Id: <20211004160049.1338837-27-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Keep better track of the current SPR value in places where they are to be loaded with a new context, to reduce expensive mtSPR operations. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 51 ++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 823d64047d01..460290cc79af 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4042,20 +4042,28 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, } } -static void load_spr_state(struct kvm_vcpu *vcpu) +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { - mtspr(SPRN_DSCR, vcpu->arch.dscr); - mtspr(SPRN_IAMR, vcpu->arch.iamr); - mtspr(SPRN_PSPB, vcpu->arch.pspb); - mtspr(SPRN_FSCR, vcpu->arch.fscr); mtspr(SPRN_TAR, vcpu->arch.tar); mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); mtspr(SPRN_BESCR, vcpu->arch.bescr); + if (cpu_has_feature(CPU_FTR_P9_TIDR)) mtspr(SPRN_TIDR, vcpu->arch.tid); - mtspr(SPRN_AMR, vcpu->arch.amr); - mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); /* * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] @@ -4070,20 +4078,21 @@ static void load_spr_state(struct kvm_vcpu *vcpu) static void store_spr_state(struct kvm_vcpu *vcpu) { - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); - - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.tar = mfspr(SPRN_TAR); vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); + if (cpu_has_feature(CPU_FTR_P9_TIDR)) vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); vcpu->arch.amr = mfspr(SPRN_AMR); vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) @@ -4094,6 +4103,7 @@ static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); } /* vcpu guest regs must already be saved */ @@ -4102,19 +4112,20 @@ static void 
restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, { mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - mtspr(SPRN_PSPB, 0); - mtspr(SPRN_UAMOR, 0); - - mtspr(SPRN_DSCR, host_os_sprs->dscr); if (cpu_has_feature(CPU_FTR_P9_TIDR)) mtspr(SPRN_TIDR, host_os_sprs->tidr); - mtspr(SPRN_IAMR, host_os_sprs->iamr); - + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) @@ -4206,7 +4217,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu); + load_spr_state(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* From patchwork Mon Oct 4 16:00:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536219 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=kmBFcBfv; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTm6PyMz9t5G for ; Tue, 5 Oct 2021 03:02:36 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236245AbhJDQD5 (ORCPT ); Mon, 4 Oct 2021 12:03:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236284AbhJDQDz (ORCPT ); Mon, 4 Oct 2021 12:03:55 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D1569C061749 for ; Mon, 4 Oct 2021 09:02:06 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id s11so16966848pgr.11 for ; Mon, 04 Oct 2021 09:02:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NfJOjCJ/v6ZV2LFYZwsNEIjww7KBaxYWLhodjyZWfLI=; b=kmBFcBfvDyvkYTfWJeoHR03tCDT3ZOvbzr+qWLrNctK8phXRCA4yZzY8DwHkmVpl6i 5/XNkLmZFL63Fsi2Rz5F42PGAJtQiKWSWOCxC8I4BLay4Xdt85+FcnrhuR3EjhCcqBJe XiJgZzPs5Zbiv+HrSlLHNzZTlxfwwKrSOhK5MvsadscXOrPA/IxRiWS8ZCRMvWDa4YG/ WeabSMO8r+b29nCzB94Y4h3TE9R+59KMVsWxuWQINYGZeP161FBwW8FAhd8HteK2SzGN WmQ8dtpkG3z0kaY4w5k3ZKuaTAMYoIosxCv/XdhuwA4kiLB9EK2tq23h/XPOeoWYHrME 5AIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NfJOjCJ/v6ZV2LFYZwsNEIjww7KBaxYWLhodjyZWfLI=; b=fOtYCzii4gwy2tUSfDm5qapLcfvRl+5HU9SiMuWYWnQxoBaTn0YQ3uHnXRq/NkYl8n 
DC6BPArdnoO5lss80Uler6Wy+inBRM6HycxsYp3kyiqKgCADXADtYJLsZiYkHyPxSvkk j9E+6wKSt2EGm9gRI6Axw4axgZEpfEDqhqd7jbH9XvoSkYYtT8Z9SaQZborJTty1P7DC 0C6qhMJHJQg62ObvlQua8Syum4f612yYGV6wVFQw7QPG9d7eWDAUpO2/uWKfRGP18gYi NUwqduQo/9w1VaqXiR5Y2NUcPM3tMZPOFfbwrf1VnkmjwBydO65O8LliF0K9u/qhetjj BHTA== X-Gm-Message-State: AOAM532KGNUn8C+ReTOFevMQxKFT/MIWDETxP6lbkuhYNHe7j188mdEH d9MjX7G5HuhO7oNoKZOfVwyChKHW4zA= X-Google-Smtp-Source: ABdhPJwHfQ82qhsj2MZOiN/a0Ys0BngjnAHpeoAK+RJN54quEMc+t27HX7MeBOnqe7l83lNwt3N9uQ== X-Received: by 2002:a63:6e02:: with SMTP id j2mr11581304pgc.157.1633363326090; Mon, 04 Oct 2021 09:02:06 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:05 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 27/52] KVM: PPC: Book3S HV P9: Juggle SPR switching around Date: Tue, 5 Oct 2021 02:00:24 +1000 Message-Id: <20211004160049.1338837-28-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This juggles SPR switching on the entry and exit sides to be more symmetric, which makes the next refactoring patch possible with no functional change. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 460290cc79af..e817159cd53f 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4209,7 +4209,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, msr = mfmsr(); /* TM restore can update msr */ } - switch_pmu_to_guest(vcpu, &host_os_sprs); + load_spr_state(vcpu, &host_os_sprs); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC @@ -4217,7 +4217,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu, &host_os_sprs); + switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* @@ -4317,6 +4317,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } + switch_pmu_to_host(vcpu, &host_os_sprs); + store_spr_state(vcpu); store_fp_state(&vcpu->arch.fp); @@ -4331,8 +4333,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - switch_pmu_to_host(vcpu, &host_os_sprs); - timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); From patchwork Mon Oct 4 16:00:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536220 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=ZCX0inXv; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTn1gtQz9t6h for ; Tue, 5 Oct 2021 03:02:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236282AbhJDQD6 (ORCPT ); Mon, 4 Oct 2021 12:03:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236275AbhJDQD5 (ORCPT ); Mon, 4 Oct 2021 12:03:57 -0400 Received: from mail-pg1-x531.google.com (mail-pg1-x531.google.com [IPv6:2607:f8b0:4864:20::531]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4C9DC061745 for ; Mon, 4 Oct 2021 09:02:08 -0700 (PDT) Received: by mail-pg1-x531.google.com with SMTP id g184so16988973pgc.6 for ; Mon, 04 Oct 2021 09:02:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cHcJEgGt4n//rNMwXrbb2Y/Mu7D+Fv1X8CuV0T7O8Vs=; b=ZCX0inXv0MrSy7yOTcot9uSY5rrJGLIWLiBIXdPGSJF6ASQUYWsaU0/w5uKPzWV+Ui aWfbjTapfvTvkn+PALykkkqrRyOv/scTGMbD5fEwq+W5rMZbInywp1Dn9azhGAp1YIen /FpV4zfB6u8JxhpreDlufSAvb/j4qRtndFI0tdpYHxL7zXKpkWsj+j/EW17Jm+e+A3U4 QGOirG8ta+hzx3mjWOglhvMyO7nLfkdft/8yORz23dTBVkFcpNtO1FLCMJ+B2xeidln4 aC0wqJAXZlM/AiwblHZ5c9GUvpTN4oNxT9jTwhI+cQDv1RdxF0okXw9n2KEUXP+rVlxW wp1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cHcJEgGt4n//rNMwXrbb2Y/Mu7D+Fv1X8CuV0T7O8Vs=; b=CDjmGTEG99rxyaR6lITl3QNp5cOle5WA8qDysVNPRR2Zf23w8IysC2h/W2N9rbIJWq ErN+9rUlIUCWbCuw4qDLaoMhvpS1DYztkW6w7DCQroHDKpx5RRyWF2SjAjYa14gMAN91 IIb63rYeqeqJBRTvb3vH+92VBJ0H6XIq1j7ykfZ+3074XGDdECx8wqFRu9ReM9Ohy5F/ l8PfijcG8Fms6wtZCB1oTIwDQhi9Crc2FAbXl+SMTeTnlDZqgpYirF2cZmc4wvQmc8nG Mh5DhDULUSltn/3Xb4XXLROjvEZC/Puszg2akkJd+du7sTvzHSGDJhoWknuKz71tV0lp HbdQ== X-Gm-Message-State: AOAM532YGJapZUTUHu9I58J3/1YS9nhvDYFU2RFlUFZldxhvJzIHmEoc 8fb8FEI1UIoFcB0x6qvJPALz8VaTncE= X-Google-Smtp-Source: ABdhPJyMLdF78dTzsR3aMr2JFmPBHAVM6pfQ4cZXRuCnk2nN9vfsDlg5NOECIMknwDtpoTpN83YdPg== X-Received: by 2002:a63:4003:: with SMTP id n3mr11399906pga.413.1633363328192; Mon, 04 Oct 2021 09:02:08 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:08 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 28/52] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Date: Tue, 5 Oct 2021 02:00:25 +1000 Message-Id: <20211004160049.1338837-29-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This should be no functional difference but makes the caller easier to read. 
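One detail the new helpers make explicit: restoring transactional state can change the MSR, so load_vcpu_state() returns whether the caller's cached MSR may be stale, and mfmsr() is only redone in that case. A standalone sketch of that caller pattern follows; read_msr(), load_state() and the MSR value are invented stand-ins for illustration, not the kernel interfaces:

#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for mfmsr(); treat it as expensive to call. */
static unsigned long read_msr(void)
{
	return 0x42;	/* arbitrary value for the sketch */
}

/* Stand-in for load_vcpu_state(): returns true only when it did something
 * (here, a simulated TM restore) that may have changed the current MSR. */
static bool load_state(bool need_tm_restore)
{
	if (need_tm_restore) {
		/* ... restore transactional state, possibly clobbering MSR ... */
		return true;
	}
	/* ... load the remaining registers; MSR untouched ... */
	return false;
}

int main(void)
{
	unsigned long msr = read_msr();		/* cached once up front */

	if (load_state(true))
		msr = read_msr();		/* refresh only when it may have changed */

	printf("msr = %#lx\n", msr);
	return 0;
}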
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 65 +++++++++++++++++++++++------------- 1 file changed, 41 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e817159cd53f..8d721baf8c6b 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4095,6 +4095,44 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } +/* Returns true if current MSR and/or guest MSR may have changed */ +static bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} + +static void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} + static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { host_os_sprs->dscr = mfspr(SPRN_DSCR); @@ -4203,19 +4241,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - msr = mfmsr(); /* TM restore can update msr */ - } - - load_spr_state(vcpu, &host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ switch_pmu_to_guest(vcpu, &host_os_sprs); @@ -4319,17 +4346,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + store_vcpu_state(vcpu); vcpu_vpa_increment_dispatch(vcpu); From patchwork Mon Oct 4 16:00:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536222 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=e1chmxW8; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTn6gF4z9t6h for ; Tue, 5 Oct 2021 03:02:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org 
via listexpand id S236275AbhJDQEG (ORCPT ); Mon, 4 Oct 2021 12:04:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235926AbhJDQEE (ORCPT ); Mon, 4 Oct 2021 12:04:04 -0400 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 405CCC061745 for ; Mon, 4 Oct 2021 09:02:14 -0700 (PDT) Received: by mail-pf1-x42a.google.com with SMTP id g2so14899003pfc.6 for ; Mon, 04 Oct 2021 09:02:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=F12/OBmYvdaBp6rWo0HElwKfUd3YlItcUKLxdK7NzgI=; b=e1chmxW82eaW1hr1xTZc1xvbSHZBA5mSnVH7BhMtjvYIBnDKOnGFUp8FHz3t9kmg3B 0O2791ToxCFVtMIMFYexTna/GecSbSSJC1sTYV1nzOK27WUVSYN4T4j5LlkkltVspge5 2lVNl6D6errwMN0H4G/u4Px3whVGoBOVFkgcMvVXcLj+3kglqQXxhArYpakhghtSPK8E R7d7eEsL4e8k385uNC3jICXJdN9JAd/Y2CoPYHDAbJpWzzDH7iCac7stsOH7zSFJGVq1 g0qSLwzTBlKRPBLFmoIe+3ATCa+LIa5GPFfpXSB4Gomqj06nHzVUmT36sAOJRVrm2Zdk ShQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=F12/OBmYvdaBp6rWo0HElwKfUd3YlItcUKLxdK7NzgI=; b=vDrHITuCYS7Z6VH7Zih7PfeWNcg6iPnEw+Q/wLevUOaqj5D57bYVXo22QFW/hg/CFH 3t/PM6Id8Ly53A1qgwjgzt4rTq0G4ma8bU0KH+dhIvb4gkQ/LIBoTxEv32CIsobubun5 aLQ9kMroWx+uImISoL0cUDWCSrWjE0kBFFsfSkwDDmfcJgPOcZ/E14rIPdXryAuAVnFv lAOAXZRMu1YxZIvk+UdyzmhCbvLHVfAUK5wIUpoZZpeD/vMHydGsnCIDCtzKjvAnS929 GeYs8zbH284a5WuJJMWgXhMLrxnfyxPznn2W6VYaUd+OF9OyLJNj5M9nfxVnxU0YkSbq Ll3A== X-Gm-Message-State: AOAM532yv0XdDK0QG8IYdNGwWiFkiksrEwfa3tmMG45D3IAKpqe0/4BJ 1rxzbvJVfvITVO3ygjGCfS5byW6hP18= X-Google-Smtp-Source: ABdhPJw6oOxuNjWhL1OV5h29gPSEyRWJDdtIcKiCDXEu+omHHjhmFKXq7dnjmwgp9tFt7VwjRoHM9g== X-Received: by 2002:a62:60c7:0:b0:40a:4bd7:752c with SMTP id u190-20020a6260c7000000b0040a4bd7752cmr26672894pfb.52.1633363330772; Mon, 04 Oct 2021 09:02:10 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:10 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 29/52] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in Date: Tue, 5 Oct 2021 02:00:26 +1000 Message-Id: <20211004160049.1338837-30-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the P9 guest/host register switching functions to the built-in P9 entry code, and export it for nested to use as well. This allows more flexibility in scheduling these supervisor privileged SPR accesses with the HV privileged and PR SPR accesses in the low level entry code. 
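The mechanics are the usual ones for sharing built-in code with a module: declare the helpers in a private header, define them in the always-built entry file, and EXPORT_SYMBOL_GPL() them so a modular book3s_hv.c can still link against them. A bare-bones sketch of that layout, using a hypothetical do_switch() helper rather than the real functions (the file names in the comments are only markers):

/* book3s_hv.h (shared private header) -- sketch */
struct kvm_vcpu;			/* forward declaration is enough here */
void do_switch(struct kvm_vcpu *vcpu);	/* hypothetical helper */

/* book3s_hv_p9_entry.c (always built in) -- sketch */
#include <linux/export.h>
#include "book3s_hv.h"

void do_switch(struct kvm_vcpu *vcpu)
{
	/* low level guest/host SPR switching would live here */
}
EXPORT_SYMBOL_GPL(do_switch);		/* visible to the (possibly modular) HV code */

/* book3s_hv.c (may be a module) -- sketch */
#include "book3s_hv.h"

static void caller(struct kvm_vcpu *vcpu)
{
	do_switch(vcpu);		/* resolved through the export at module load */
}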
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 379 +------------------------- arch/powerpc/kvm/book3s_hv.h | 45 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 353 ++++++++++++++++++++++++ 3 files changed, 399 insertions(+), 378 deletions(-) create mode 100644 arch/powerpc/kvm/book3s_hv.h diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 8d721baf8c6b..580bac4753f6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -80,6 +80,7 @@ #include #include "book3s.h" +#include "book3s_hv.h" #define CREATE_TRACE_POINTS #include "trace_hv.h" @@ -127,11 +128,6 @@ static bool nested = true; module_param(nested, bool, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(nested, "Enable nested virtualization (only on POWER9)"); -static inline bool nesting_enabled(struct kvm *kvm) -{ - return kvm->arch.nested_enable && kvm_is_radix(kvm); -} - static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu); /* @@ -3797,379 +3793,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } -/* - * Privileged (non-hypervisor) host registers to save. - */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; - - unsigned int pmc1; - unsigned int pmc2; - unsigned int pmc3; - unsigned int pmc4; - unsigned int pmc5; - unsigned int pmc6; - unsigned long mmcr0; - unsigned long mmcr1; - unsigned long mmcr2; - unsigned long mmcr3; - unsigned long mmcra; - unsigned long siar; - unsigned long sier1; - unsigned long sier2; - unsigned long sier3; - unsigned long sdar; -}; - -static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) -{ - if (!(mmcr0 & MMCR0_FC)) - goto do_freeze; - if (mmcra & MMCRA_SAMPLE_ENABLE) - goto do_freeze; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - if (!(mmcr0 & MMCR0_PMCCEXT)) - goto do_freeze; - if (!(mmcra & MMCRA_BHRB_DISABLE)) - goto do_freeze; - } - return; - -do_freeze: - mmcr0 = MMCR0_FC; - mmcra = 0; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mmcr0 |= MMCR0_PMCCEXT; - mmcra = MMCRA_BHRB_DISABLE; - } - - mtspr(SPRN_MMCR0, mmcr0); - mtspr(SPRN_MMCRA, mmcra); - isync(); -} - -static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int load_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - load_pmu = lp->pmcregs_in_use; - - /* Save host */ - if (ppc_get_pmu_inuse()) { - /* - * It might be better to put PMU handling (at least for the - * host) in the perf subsystem because it knows more about what - * is being used. 
- */ - - /* POWER9, POWER10 do not implement HPMC or SPMC */ - - host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); - host_os_sprs->mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); - - host_os_sprs->pmc1 = mfspr(SPRN_PMC1); - host_os_sprs->pmc2 = mfspr(SPRN_PMC2); - host_os_sprs->pmc3 = mfspr(SPRN_PMC3); - host_os_sprs->pmc4 = mfspr(SPRN_PMC4); - host_os_sprs->pmc5 = mfspr(SPRN_PMC5); - host_os_sprs->pmc6 = mfspr(SPRN_PMC6); - host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); - host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); - host_os_sprs->sdar = mfspr(SPRN_SDAR); - host_os_sprs->siar = mfspr(SPRN_SIAR); - host_os_sprs->sier1 = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); - host_os_sprs->sier2 = mfspr(SPRN_SIER2); - host_os_sprs->sier3 = mfspr(SPRN_SIER3); - } - } - -#ifdef CONFIG_PPC_PSERIES - /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = load_pmu; - barrier(); - } -#endif - - /* - * Load guest. If the VPA said the PMCs are not in use but the guest - * tried to access them anyway, HFSCR[PM] will be set by the HFAC - * fault so we can make forward progress. - */ - if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ - - if (!vcpu->arch.nested && - (vcpu->arch.hfscr_permitted & HFSCR_PM)) - vcpu->arch.hfscr |= HFSCR_PM; - } -} - -static void switch_pmu_to_host(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int save_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - save_pmu = lp->pmcregs_in_use; - if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { - /* - * Save pmu if this guest is capable of running nested guests. - * This is option is for old L1s that do not set their - * lppaca->pmcregs_in_use properly when entering their L2. 
- */ - save_pmu |= nesting_enabled(vcpu->kvm); - } - - if (save_pmu) { - vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); - vcpu->arch.mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); - - vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); - vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); - vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); - vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); - vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); - vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); - vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); - vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); - vcpu->arch.sdar = mfspr(SPRN_SDAR); - vcpu->arch.siar = mfspr(SPRN_SIAR); - vcpu->arch.sier[0] = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); - vcpu->arch.sier[1] = mfspr(SPRN_SIER2); - vcpu->arch.sier[2] = mfspr(SPRN_SIER3); - } - - } else if (vcpu->arch.hfscr & HFSCR_PM) { - /* - * The guest accessed PMC SPRs without specifying they should - * be preserved, or it cleared pmcregs_in_use after the last - * access. Just ensure they are frozen. - */ - freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - - /* - * Demand-fault PMU register access in the guest. - * - * This is used to grab the guest's VPA pmcregs_in_use value - * and reflect it into the host's VPA in the case of a nested - * hypervisor. - * - * It also avoids having to zero-out SPRs after each guest - * exit to avoid side-channels when. - * - * This is cleared here when we exit the guest, so later HFSCR - * interrupt handling can add it back to run the guest with - * PM enabled next time. - */ - if (!vcpu->arch.nested) - vcpu->arch.hfscr &= ~HFSCR_PM; - } /* otherwise the PMU should still be frozen */ - -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif - - if (ppc_get_pmu_inuse()) { - mtspr(SPRN_PMC1, host_os_sprs->pmc1); - mtspr(SPRN_PMC2, host_os_sprs->pmc2); - mtspr(SPRN_PMC3, host_os_sprs->pmc3); - mtspr(SPRN_PMC4, host_os_sprs->pmc4); - mtspr(SPRN_PMC5, host_os_sprs->pmc5); - mtspr(SPRN_PMC6, host_os_sprs->pmc6); - mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); - mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); - mtspr(SPRN_SDAR, host_os_sprs->sdar); - mtspr(SPRN_SIAR, host_os_sprs->siar); - mtspr(SPRN_SIER, host_os_sprs->sier1); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); - mtspr(SPRN_SIER2, host_os_sprs->sier2); - mtspr(SPRN_SIER3, host_os_sprs->sier3); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, host_os_sprs->mmcra); - mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); - isync(); - } -} - -static void load_spr_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); - - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - mtspr(SPRN_TIDR, vcpu->arch.tid); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, vcpu->arch.iamr); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, vcpu->arch.amr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, vcpu->arch.dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, vcpu->arch.pspb); - - /* - * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] - * clear (or hstate set appropriately to catch those registers - * being clobbered if 
we take a MCE or SRESET), so those are done - * later. - */ - - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 0); -} - -static void store_spr_state(struct kvm_vcpu *vcpu) -{ - vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); - - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - vcpu->arch.tid = mfspr(SPRN_TIDR); - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.amr = mfspr(SPRN_AMR); - vcpu->arch.uamor = mfspr(SPRN_UAMOR); - vcpu->arch.fscr = mfspr(SPRN_FSCR); - vcpu->arch.dscr = mfspr(SPRN_DSCR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); -} - -/* Returns true if current MSR and/or guest MSR may have changed */ -static bool load_vcpu_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - bool ret = false; - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; - } - - load_spr_state(vcpu, host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - - return ret; -} - -static void store_vcpu_state(struct kvm_vcpu *vcpu) -{ - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -} - -static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) -{ - host_os_sprs->dscr = mfspr(SPRN_DSCR); - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); - host_os_sprs->iamr = mfspr(SPRN_IAMR); - host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); -} - -/* vcpu guest regs must already be saved */ -static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, host_os_sprs->iamr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, 0); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, 0); - - /* Save guest CTRL register, set runlatch to 1 */ - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 1); -} - static inline bool hcall_is_xics(unsigned long req) { return req == H_EOI || req == H_CPPR || req == H_IPI || diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h new file mode 100644 index 000000000000..d7485b9e9762 --- /dev/null +++ b/arch/powerpc/kvm/book3s_hv.h @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * Privileged (non-hypervisor) host registers to save. 
+ */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +static inline bool nesting_enabled(struct kvm *kvm) +{ + return kvm->arch.nested_enable && kvm_is_radix(kvm); +} + +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void store_vcpu_state(struct kvm_vcpu *vcpu); +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs); +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index bd0021cd3a67..784ff5429ebc 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -4,8 +4,361 @@ #include #include #include +#include #include +#include "book3s_hv.h" + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. + */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } + +#ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = load_pmu; + barrier(); + } +#endif + + /* + * Load guest. 
If the VPA said the PMCs are not in use but the guest + tried to access them anyway, HFSCR[PM] will be set by the HFAC + fault so we can make forward progress. + */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ + + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_guest); + +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2. + */ + save_pmu |= nesting_enabled(vcpu->kvm); + } + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + + /* + * Demand-fault PMU register access in the guest. + * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit, to avoid side-channels. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time.
+ */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ + +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_host); + +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_TAR, vcpu->arch.tar); + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (cpu_has_feature(CPU_FTR_P9_TIDR)) + mtspr(SPRN_TIDR, vcpu->arch.tid); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); + + /* + * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] + * clear (or hstate set appropriately to catch those registers + * being clobbered if we take a MCE or SRESET), so those are done + * later. 
+ */ + + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 0); +} + +static void store_spr_state(struct kvm_vcpu *vcpu) +{ + vcpu->arch.tar = mfspr(SPRN_TAR); + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (cpu_has_feature(CPU_FTR_P9_TIDR)) + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); + vcpu->arch.amr = mfspr(SPRN_AMR); + vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); + vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); +} + +/* Returns true if current MSR and/or guest MSR may have changed */ +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} +EXPORT_SYMBOL_GPL(load_vcpu_state); + +void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} +EXPORT_SYMBOL_GPL(store_vcpu_state); + +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) +{ + if (cpu_has_feature(CPU_FTR_P9_TIDR)) + host_os_sprs->tidr = mfspr(SPRN_TIDR); + host_os_sprs->iamr = mfspr(SPRN_IAMR); + host_os_sprs->amr = mfspr(SPRN_AMR); + host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); +} +EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); + +/* vcpu guest regs must already be saved */ +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + + if (cpu_has_feature(CPU_FTR_P9_TIDR)) + mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, host_os_sprs->amr); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); + + /* Save guest CTRL register, set runlatch to 1 */ + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 1); +} +EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); + #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) { From patchwork Mon Oct 4 16:00:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536221 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=MTD1iv4a; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) 
smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTn4JqMz9t5G for ; Tue, 5 Oct 2021 03:02:37 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236284AbhJDQEF (ORCPT ); Mon, 4 Oct 2021 12:04:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43574 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236275AbhJDQEE (ORCPT ); Mon, 4 Oct 2021 12:04:04 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B1B64C061746 for ; Mon, 4 Oct 2021 09:02:15 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id l6so192318plh.9 for ; Mon, 04 Oct 2021 09:02:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FkWMuHfPOxxrU47+vjgWI+l5GmxefJupZqH5KrybsgE=; b=MTD1iv4alXD6Gy8oE6uF5PjwPsDfr22C3DVhNrnbtt7Wt2wrv6zhcuHozDRV8id3Ry cE12gZpnxiLFer5SMxBhYTrnNuiuCqu3g+6XGBklgtZtMjmp05Ot4j940G/x59dHJ/P2 SjKA/+H9WPgmyh2lB7IHlgYGaLGg9sPdA9Y4n6YEJ3PDitF+Hyjxf37MsH0Qg4PeZ24W aYIRieoHc86KxyjhPN3Sa2cKOGL9qexQg8CqyVgCv7uqAEM/GpClZVHF8YZygr/qSU1y NJwsCvdw8tF3ityQnAboSLty0VZkbmp/3O5EVBFnE8DwNbBGB9+MGWzS9S5H6Te8/Rk4 HA7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FkWMuHfPOxxrU47+vjgWI+l5GmxefJupZqH5KrybsgE=; b=EuX33Gm/z1ESXqF/yGyFEGFwCQrn5ayXhvi8WOStoeX1CO6LqyaZrT2SIjwNTEb8uH QaTIyXeO7n/SOXDmxGuypHbHsETMSytZXr3LdVVGSAgELEefX1+TK4Mg2eZhzYNWrFxA YZ1zaF2bFfAi6KwHLAzkHNa7p9PfvYu7JNovq/kbxANhv2iP9ayGDNhvxYqSeomcuMuD 5Z36IFphs3IOGB/gZTW75k0Pl6cw1U8lDecH9fyLWxlaOoWJk53dvl0doQb8dJtjnjZ3 4hWJSdV+rPYIe5hUwu81blwYsJtL/F+yWjDMO051sJ2jxZNPL8Ggoau/HQQQiVij6orj JpHQ== X-Gm-Message-State: AOAM530rFHX13aDzpa2S3W4Tpng6gRvu3NiHbaqHYyPp1cU1TfYs+FP0 KGl2zQbLmrSYKxtetZv96Pur/ydUaVI= X-Google-Smtp-Source: ABdhPJwcKy7fbp9xvWCGnLZvfxut9nPBACN51oBqrzFVv3fqeNPJF4xKGk5f49QKX4MEqgPkoiUs3A== X-Received: by 2002:a17:902:ba8e:b0:13e:c690:5acb with SMTP id k14-20020a170902ba8e00b0013ec6905acbmr408788pls.63.1633363334967; Mon, 04 Oct 2021 09:02:14 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:14 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 30/52] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function Date: Tue, 5 Oct 2021 02:00:27 +1000 Message-Id: <20211004160049.1338837-31-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the part of the guest entry which is specific to nested HV into its own function. This is just refactoring. 
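With this refactoring, the call site in kvmhv_p9_guest_entry() ends up with roughly the following shape (a simplified sketch of the result; both function names are taken from the diff and comments below, and the surrounding entry/exit bookkeeping is omitted):

	if (kvmhv_on_pseries()) {
		/* nested HV: ask the hypervisor below us to run the vCPU */
		trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
	} else {
		/* bare-metal HV: do the SPR/MMU switch and enter the guest directly */
		trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
	}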
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 125 +++++++++++++++++++---------------- 1 file changed, 67 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 580bac4753f6..a57727463980 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3809,6 +3809,72 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) } } +/* call our hypervisor to load up HV regs and go */ +static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + unsigned long host_psscr; + struct hv_guest_state hvregs; + int trap; + s64 dec; + + /* + * We need to save and restore the guest visible part of the + * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor + * doesn't do this for us. Note only required if pseries since + * this is done in kvmhv_vcpu_entry_p9() below otherwise. + */ + host_psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); + hvregs.lpcr = lpcr; + vcpu->arch.regs.msr = vcpu->arch.shregs.msr; + hvregs.version = HV_GUEST_STATE_VERSION; + if (vcpu->arch.nested) { + hvregs.lpid = vcpu->arch.nested->shadow_lpid; + hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; + } else { + hvregs.lpid = vcpu->kvm->arch.lpid; + hvregs.vcpu_token = vcpu->vcpu_id; + } + hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); + + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); + mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), + __pa(&vcpu->arch.regs)); + kvmhv_restore_hv_return_state(vcpu, &hvregs); + vcpu->arch.shregs.msr = vcpu->arch.regs.msr; + vcpu->arch.shregs.dar = mfspr(SPRN_DAR); + vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, host_psscr); + + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + + return trap; +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -3817,7 +3883,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; - s64 dec; u64 next_timer; unsigned long msr; int trap; @@ -3870,63 +3935,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { - /* - * We need to save and restore the guest visible part of the - * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor - * doesn't do this for us. Note only required if pseries since - * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
- */ - unsigned long host_psscr; - /* call our hypervisor to load up HV regs and go */ - struct hv_guest_state hvregs; - - host_psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); - kvmhv_save_hv_regs(vcpu, &hvregs); - hvregs.lpcr = lpcr; - vcpu->arch.regs.msr = vcpu->arch.shregs.msr; - hvregs.version = HV_GUEST_STATE_VERSION; - if (vcpu->arch.nested) { - hvregs.lpid = vcpu->arch.nested->shadow_lpid; - hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; - } else { - hvregs.lpid = vcpu->kvm->arch.lpid; - hvregs.vcpu_token = vcpu->vcpu_id; - } - hvregs.hdec_expiry = time_limit; - - /* - * When setting DEC, we must always deal with irq_work_raise - * via NMI vs setting DEC. The problem occurs right as we - * switch into guest mode if a NMI hits and sets pending work - * and sets DEC, then that will apply to the guest and not - * bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] - * for example) and set HDEC to 1? That wouldn't solve the - * nested hv case which needs to abort the hcall or zero the - * time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); - - mtspr(SPRN_DAR, vcpu->arch.shregs.dar); - mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); - trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), - __pa(&vcpu->arch.regs)); - kvmhv_restore_hv_return_state(vcpu, &hvregs); - vcpu->arch.shregs.msr = vcpu->arch.regs.msr; - vcpu->arch.shregs.dar = mfspr(SPRN_DAR); - vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); - vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); - - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - *tb = mftb(); - vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && From patchwork Mon Oct 4 16:00:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536223 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=mI9AjCze; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTp3Flkz9t5G for ; Tue, 5 Oct 2021 03:02:38 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233981AbhJDQEH (ORCPT ); Mon, 4 Oct 2021 12:04:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43588 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235926AbhJDQEH (ORCPT ); Mon, 4 Oct 2021 12:04:07 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D60D3C061746 for ; Mon, 4 Oct 2021 09:02:17 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id c29so4513930pfp.2 for ; Mon, 04 Oct 2021 09:02:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=beEVDL7ciCBVPGVzvAuitGlc6Wqq+k8/KKHAATblqC0=; b=mI9AjCzeoCedsJ1xJfDu5fsb+y6KB7EfqYAcD5imOsOUpel2i1Q0YugrFQxdLOVZin SBM92fx4kKSyDXWDfX1vkQXOvRi42T6NQY8G41u57hwwnEGRCwneJQiKbJnglb2+kaTN d3de0UcI8m+Mc4J+8Hw0AovjLtoiedVblPNet+TLIQCLJ4ixb2K4xrRAR6WNjJJO/9dP 9Qfor5HnKbLf4j3kJVAvb5EbBHOHsnPGyLwQUF6BocEjzn72y7isbFPHtxFddK6uIn8I 3qOtVX+n3AM6OtTzBui9ubkd35dhHxNNtadHAKSXCpuRCd4p2KsxHD9bk5PrfcgdZ/RV zIyw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=beEVDL7ciCBVPGVzvAuitGlc6Wqq+k8/KKHAATblqC0=; b=AkM+4oboryoUSmC2K0K00XIZpl8ss3GBnVZ9H85rslxASYJLn+xuS5yWAAmM+zoPXV ICdXrX47BtLYKbw5PSS5UdiOUTds/Ea2roR67RcPtNOIOfwZJc7q5Zzvqe24crQElVj8 ue7+ijGr0GCDGDB5kpwDnqfDzJ3KGpSKCKfNWtm/i1OoDJx+61dHaZYR5YLjxF4UTVIR wp1s6wNoLejfPZDsmtdbIoSCRYu9/EOQbGOY9xpPhn08cUjl2taU0NNHxwte1pDqHWEC HzyGAVJRv4VKl3Yxm72p91/VeJDBRJE0Dfe83wsCg/TSlJglpYdb21FL313vG//havVa z0tw== X-Gm-Message-State: AOAM530F0AmSCpw52OtCpRfGYw1mJTCF/wJDppvrMX3amNt3ks8dQLt6 QP3XUY1aXsjCFNem9GK5/GIC8tPxdLE= X-Google-Smtp-Source: ABdhPJzpM53cwZq/beanPAPaTVoH5U58elKmite+2ZU85YQ/Op1yGdXhQn4yQ42QPnONey3I0XRj3w== X-Received: by 2002:a63:b505:: with SMTP id y5mr11550127pge.91.1633363337230; Mon, 04 Oct 2021 09:02:17 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:17 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 31/52] KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low level entry Date: Tue, 5 Oct 2021 02:00:28 +1000 Message-Id: <20211004160049.1338837-32-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move register saving and loading from kvmhv_p9_guest_entry() into the HV and nested entry handlers. Accesses are scheduled to reduce mtSPR / mfSPR interleaving which reduces SPR scoreboard stalls. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 79 ++++++++++------------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 96 ++++++++++++++++++++------- 2 files changed, 109 insertions(+), 66 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index a57727463980..db42eeb27c15 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3814,9 +3814,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long host_psscr; + unsigned long msr; struct hv_guest_state hvregs; - int trap; + struct p9_host_os_sprs host_os_sprs; s64 dec; + int trap; + + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); /* * We need to save and restore the guest visible part of the @@ -3825,6 +3831,27 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
*/ host_psscr = mfspr(SPRN_PSSCR_PR); + + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* TM restore can update msr */ + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; @@ -3866,12 +3893,20 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; *tb = mftb(); vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + switch_pmu_to_host(vcpu, &host_os_sprs); + return trap; } @@ -3882,9 +3917,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; - struct p9_host_os_sprs host_os_sprs; u64 next_timer; - unsigned long msr; int trap; next_timer = timer_get_next_tb(); @@ -3895,33 +3928,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - save_p9_host_os_sprs(&host_os_sprs); - - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. 
- */ - hard_irq_disable(); - if (lazy_irq_pending()) - return 0; - - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -3929,11 +3935,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) - msr = mfmsr(); /* MSR may have been updated */ - - switch_pmu_to_guest(vcpu, &host_os_sprs); - if (kvmhv_on_pseries()) { trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); @@ -3976,16 +3977,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - switch_pmu_to_host(vcpu, &host_os_sprs); - - store_vcpu_state(vcpu); - vcpu_vpa_increment_dispatch(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 784ff5429ebc..fa080533bd8d 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -538,6 +538,7 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { + struct p9_host_os_sprs host_os_sprs; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -567,9 +568,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - /* Could avoid mfmsr by passing around, but probably no big deal */ - msr = mfmsr(); - host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_dawr0 = mfspr(SPRN_DAWR0); @@ -584,6 +582,41 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. + */ + hard_irq_disable(); + if (lazy_irq_pending()) { + trap = 0; + goto out; + } + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -642,6 +675,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); + /* + * It might be preferable to load_vcpu_state here, in order to get the + * GPR/FP register loads executing in parallel with the previous mtSPR + * instructions, but for now that can't be done because the TM handling + * in load_vcpu_state can change some SPRs and vcpu state (nip, msr). + * But TM could be split out if this would be a significant benefit. + */ + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; /* @@ -819,6 +860,20 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * If we are in real mode, only switch MMU on after the MMU is + * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. + */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -851,6 +906,19 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DAWRX1, host_dawrx1); } + mtspr(SPRN_DPDES, 0); + if (vc->pcr) + mtspr(SPRN_PCR, PCR_MASK); + + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); + + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + if (kvm_is_radix(kvm)) { /* * Since this is radix, do a eieio; tlbsync; ptesync sequence @@ -867,26 +935,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - mtspr(SPRN_DPDES, 0); - if (vc->pcr) - mtspr(SPRN_PCR, PCR_MASK); - - /* HDEC must be at least as large as DEC, so decrementer_max fits */ - mtspr(SPRN_HDEC, decrementer_max); - - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, host_pidr); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. 
- */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - - __mtmsrd(msr, 0); +out: + switch_pmu_to_host(vcpu, &host_os_sprs); end_timing(vcpu); From patchwork Mon Oct 4 16:00:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536224 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=QQQKWRfh; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTp5v1Rz9t9b for ; Tue, 5 Oct 2021 03:02:38 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236280AbhJDQEK (ORCPT ); Mon, 4 Oct 2021 12:04:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43600 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235926AbhJDQEJ (ORCPT ); Mon, 4 Oct 2021 12:04:09 -0400 Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com [IPv6:2607:f8b0:4864:20::52e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47901C061745 for ; Mon, 4 Oct 2021 09:02:20 -0700 (PDT) Received: by mail-pg1-x52e.google.com with SMTP id 75so16991460pga.3 for ; Mon, 04 Oct 2021 09:02:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nw+tyfcHLpLdeHB2w2xt0EdeJ3op84k0Tqa/00/sMlA=; b=QQQKWRfhFnTxudCfehAR+peIPTn68znqLLiZU86od0c40MStRasZ0h39O6G8dER/3I PeYdfrurA3Y02cjC1SvQvuaxzv6gnkUY20vyDyUP5LUDEQm7LzEnXv1oF5psZImZRoii m9MMdA5wsXibGb2zQEE2+HWXkHnX0GOTwQw+bfoWP0hgdOSmlPEhfy7JYMw6byWA3QGx IJi/NABHQ5r30rDkbVGDq+BqmEwzeV/IedkUCx/Gl64KcfOl1aiRMp6l7wzS96eroBkp 1GTdqm4UwkjSXkT7tiEeFre9m3RBa2QcQigAD7UIMWF1vz8UhPh1hsZwHDdG4YijNKXX DKOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nw+tyfcHLpLdeHB2w2xt0EdeJ3op84k0Tqa/00/sMlA=; b=mdfY3pcj3TecZl44LXK42S9ZpoRbwK6oA2tZrMcZyCyV2ltDgLpUcdfCAoYZeojpUI nmIZ2m3BghZVW/9ClxwtZ83vy4pEzOow4CuDFOBUhNLJITR4O/hd999KmqP+Oto+F3Nj 7TCd7zav+ip51jGb4hF/+NGugDt34ZviZfdwjS8p90QlQuVtPBrW0sBof/mFl69zhYbO e9uf0eu91D3eqvXON4TKlpQBpoXDLd5Ieez6tk28hYsNMR5CXnKDSVs9sfntItv5Wxke 1nQEJAKXJfz4BKOnBaMxMpMNwxngVZSN9ZY0z7uPyxStyYunmCP9QNa00ie4nqwbmOXm NJdg== X-Gm-Message-State: AOAM533ugqYCet8hLr41oO5zxHwmTsSVV6PoUqU5i5PIii5tGqV0kdGb CxQUJNAEdP+/JRNsT5OBWTm+CGNXEHA= X-Google-Smtp-Source: ABdhPJyIsfqeG/Mh1wu9pNeXGjllDPl6AWQsRBG0NXuAIklS2i5+g/aAkqsfdqFlERZET/MpBpwKrA== X-Received: by 2002:a63:e741:: with SMTP id j1mr11594783pgk.86.1633363339405; Mon, 04 Oct 2021 09:02:19 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:19 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 32/52] KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit Date: Tue, 5 Oct 2021 02:00:29 +1000 Message-Id: <20211004160049.1338837-33-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org If TM is not active, only TM register state needs to be saved and restored, avoiding several mfmsr/mtmsrd instructions and improving performance. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 27 +++++++++++++++++++++++---- 1 file changed, 23 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index fa080533bd8d..6bef509bccb8 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -287,11 +287,20 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, { bool ret = false; +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_restore_tm_hv(vcpu, guest_msr, true); + ret = true; + } else { + mtspr(SPRN_TEXASR, vcpu->arch.texasr); + mtspr(SPRN_TFHAR, vcpu->arch.tfhar); + mtspr(SPRN_TFIAR, vcpu->arch.tfiar); + } } +#endif load_spr_state(vcpu, host_os_sprs); @@ -315,9 +324,19 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) #endif vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_save_tm_hv(vcpu, guest_msr, true); + } else { + vcpu->arch.texasr = mfspr(SPRN_TEXASR); + vcpu->arch.tfhar = mfspr(SPRN_TFHAR); + vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + } + } +#endif } EXPORT_SYMBOL_GPL(store_vcpu_state); From patchwork Mon Oct 4 16:00:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536225 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=VEmWxjjt; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTq17Dxz9t6h for ; Tue, 5 Oct 2021 03:02:39 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236287AbhJDQEM (ORCPT ); Mon, 4 Oct 2021 12:04:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43614 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235926AbhJDQEL (ORCPT ); Mon, 4 Oct 2021 12:04:11 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4757C061745 for ; Mon, 4 Oct 2021 09:02:22 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id 75so16991552pga.3 for ; Mon, 04 Oct 2021 09:02:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Ow8cYghbA2bnydpS+MNEJPmRY9uv0/URFDPZIuLqe34=; b=VEmWxjjtJgcn907S87MPwBbiKZ4eygPZE+dXkofvgkMg+pKmw7DDeWn5Y2pN25EnQd bnQeS1VimSvi9qVo5n+NXhJa9tS1E/NZpHwudWc6jeRIPJpknLIGaHLrElJ3SNsLWjpp hyTfSuA2ICX1xpiJC1OrMoImo1zmSnkiRB0rgQFNc05EpeElBAAEvkLeDiH3nh+oZGrj qltW42gRoCCfn1Zy6cjw09/W7CMaRVTr2RUmPGBDNZxtKsiRuX7oOYfG00jQWX6Ojhw0 02MFzOC3MAZr39pcZcdDGK/y56KrTM8upu5rj4GUraTUC+C9Iuk82qqYwyu1Ii/0AzPD DEtQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Ow8cYghbA2bnydpS+MNEJPmRY9uv0/URFDPZIuLqe34=; b=wTy2VSz0yfhHI5ZE2acl3X8C0M+ezVNFeo5rw+pOqxnRCj0CCOvMk6xjU54EFALi5a 7LuVW1cxqc6cJVB8El2PhmOypyjX0X0xtfcVukrx3LHSSg9ECPBFThNx6g9RA09Qq+/3 wxZ6J8SQTIhOxD0ySUDzUc7CkjX/SAfCO1tQGd0Wkpp4YF/mtwYO4PZa9weKMMj2jMGb EFYNYH+r0i4zyG6vDXcQEDFmlD+pcwo8LUd1EtiHgHGUhM15a0bngmHkw9tT8QP7bg2E okzICIRyeqQmGxJA7JZ4ztzyd3o5I0mloO8rlplWTDjynDVUY9uUXZj3IZzThf8YuejQ DtRQ== X-Gm-Message-State: AOAM5323lelpsKRZYgrmcXRzDv3+Ft2Yr0HIlUdIWOlWw7JLwdpvGr/A bS/zyEMorBmd2+lNIpa0soRL7S7t7vA= X-Google-Smtp-Source: ABdhPJwdOPIdI6NPSumcNUNoT0WRyLbrL2YZd8kgEZ/NWbnGWwDkeCSwNbq33rhu/cMGb7C7mqoJpQ== X-Received: by 2002:a05:6a00:1944:b0:438:d002:6e35 with SMTP id s4-20020a056a00194400b00438d0026e35mr25462359pfk.20.1633363342102; Mon, 04 Oct 2021 09:02:22 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:21 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 33/52] KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible Date: Tue, 5 Oct 2021 02:00:30 +1000 Message-Id: <20211004160049.1338837-34-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This moves PMU switch to guest as late as possible in entry, and switch back to host as early as possible at exit. This helps the host get the most perf coverage of KVM entry/exit code as possible. This is slightly suboptimal for SPR scheduling point of view when the PMU is enabled, but when perf is disabled there is no real difference. 
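Sketching the reordering in the hunks below: the PMU context switch now brackets only the low-level guest entry itself, so the rest of the entry and exit paths keep running with the host PMU configuration and stay visible to host perf (simplified; on pseries the same pair of calls brackets the H_ENTER_NESTED hcall instead):

	/* host SPR save, guest SPR/MMU load: still counted by the host PMU */
	switch_pmu_to_guest(vcpu, &host_os_sprs);
	kvmppc_p9_enter_guest(vcpu);
	switch_pmu_to_host(vcpu, &host_os_sprs);
	/* interrupt handling and SPR restore: again counted by the host PMU */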
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 6 ++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index db42eeb27c15..5a1859311b3e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3820,8 +3820,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -3884,9 +3882,11 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + switch_pmu_to_guest(vcpu, &host_os_sprs); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), __pa(&vcpu->arch.regs)); kvmhv_restore_hv_return_state(vcpu, &hvregs); + switch_pmu_to_host(vcpu, &host_os_sprs); vcpu->arch.shregs.msr = vcpu->arch.regs.msr; vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); @@ -3905,8 +3905,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns restore_p9_host_os_sprs(vcpu, &host_os_sprs); - switch_pmu_to_host(vcpu, &host_os_sprs); - return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 6bef509bccb8..619bbcd47b92 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -601,8 +601,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -744,7 +742,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc accumulate_time(vcpu, &vcpu->arch.guest_time); + switch_pmu_to_guest(vcpu, &host_os_sprs); kvmppc_p9_enter_guest(vcpu); + switch_pmu_to_host(vcpu, &host_os_sprs); accumulate_time(vcpu, &vcpu->arch.rm_intr); @@ -955,8 +955,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc asm volatile(PPC_CP_ABORT); out: - switch_pmu_to_host(vcpu, &host_os_sprs); - end_timing(vcpu); return trap; From patchwork Mon Oct 4 16:00:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536226 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=mg0Iw6mc; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTq3TRpz9t5G for ; Tue, 5 Oct 2021 03:02:39 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235212AbhJDQES (ORCPT ); Mon, 4 Oct 2021 12:04:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43628 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236289AbhJDQEN (ORCPT ); Mon, 4 Oct 2021 12:04:13 -0400 
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF858C061746 for ; Mon, 4 Oct 2021 09:02:24 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id j4so209455plx.4 for ; Mon, 04 Oct 2021 09:02:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TM/HAR2kIbbjsWEciD5SVgeeCTO1PDyzdJl58aiAKUU=; b=mg0Iw6mcWciAkTXZmbSjn0fIPZxsDgy0BgteZZFXRyFvyilYuuOastBAzHwDoRBuYf LvYLGn8PfLpVon3pT+gN0MQxG0unh6NK5OpSlaS1ix4OKzf1ka9tAe+/v0hXA/Z9yLOd uadUFkDtiz7XSrutFCTO6HdJOhWxU/LiqJV3GrSK7nQO/EY7gXIVP90qaKrX1eIaMxgq 3bGmoNiiGojqCBFOFQ5cVWYdEDv8A6aUCuuC6Nnx14tQDjXkGP37p0sGzm493oSPLlF4 oBSOgPKIzX0pvUYGEziebIAt3qF915nahHz1wSM6wWBeOfGczKkCzVs4PHR1vVGWm4Nj 1H9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TM/HAR2kIbbjsWEciD5SVgeeCTO1PDyzdJl58aiAKUU=; b=6bN2RyL8Re0OWj09I0sAs4eFIK7kek4vfrgs+ffOCP4ckdo1rp7X41LRYNdetIF2y9 ybWkJsr6cdcD2YUHC+kcIstaqE63mI7210l/3l7l5aJBwYGaya1MPVbYcQt60/SlucqV 9J/EA36NoO/JZXSZ4Ugu1z5TuPZJeOeDV2G95TAFSnZFonO+TpC7cmKf1gaMhbfRZMyS eRbSozMZxtAzwEhlTDUeI+tRAFoidX8DXCeueEnEIjFNM1Ccr0HpscP/IC/Hl67xonts oJUpPT+tTkRDQdeqvAwjGWiAY8604exUmvNl5gVaTUm863CTjgYSbObFgesF3ONb9i7t AUXg== X-Gm-Message-State: AOAM5314wVm5s0jFMp2ZnN8SkxMTs3Elgw/etozgoXijILzVf5Pb30V/ aZvcqJAURhWgejtWvLsif0S09XENGcs= X-Google-Smtp-Source: ABdhPJw9i0hkx/ZJX4gMlIdhV58CWJ6jGL1hsGTn/P4mcWSdf8vQ6enEN7lxxJB7ZPLqZykU8TlfWA== X-Received: by 2002:a17:902:868d:b0:13d:dfa7:f3f2 with SMTP id g13-20020a170902868d00b0013ddfa7f3f2mr456589plo.30.1633363344226; Mon, 04 Oct 2021 09:02:24 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:24 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 34/52] KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors that require it Date: Tue, 5 Oct 2021 02:00:31 +1000 Message-Id: <20211004160049.1338837-35-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use CPU_FTR_P9_RADIX_PREFETCH_BUG to apply the workaround, to test for DD2.1 and below processors. This saves a mtSPR in guest entry. 
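For reference, the two sides of the change in the hunks below amount to roughly this sketch: entry plants the canary only on affected processors, and the HDSI exit path only treats the canary specially there.

	/* guest entry: only DD2.1 and earlier need the canary */
	if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG))
		mtspr(SPRN_HDSISR, HDSISR_CANARY);

	/* HDSI exit handling: the canary can only be seen on those processors */
	if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) &&
	    unlikely(vcpu->arch.fault_dsisr == HDSISR_CANARY)) {
		r = RESUME_GUEST;	/* just retry */
	}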
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 3 ++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5a1859311b3e..6fb941aa77f1 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1590,7 +1590,8 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, unsigned long vsid; long err; - if (vcpu->arch.fault_dsisr == HDSISR_CANARY) { + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) && + unlikely(vcpu->arch.fault_dsisr == HDSISR_CANARY)) { r = RESUME_GUEST; /* Just retry if it's the canary */ break; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 619bbcd47b92..67f57b03a896 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -683,9 +683,11 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc * HDSI which should correctly update the HDSISR the second time HDSI * entry. * - * Just do this on all p9 processors for now. + * The "radix prefetch bug" test can be used to test for this bug, as + * it also exists fo DD2.1 and below. */ - mtspr(SPRN_HDSISR, HDSISR_CANARY); + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + mtspr(SPRN_HDSISR, HDSISR_CANARY); mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); From patchwork Mon Oct 4 16:00:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536227 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=dD75TwsV; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTq66MQz9t6h for ; Tue, 5 Oct 2021 03:02:39 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236302AbhJDQES (ORCPT ); Mon, 4 Oct 2021 12:04:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43640 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236292AbhJDQEQ (ORCPT ); Mon, 4 Oct 2021 12:04:16 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0450BC061753 for ; Mon, 4 Oct 2021 09:02:27 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id t4so242467plo.0 for ; Mon, 04 Oct 2021 09:02:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=c2h6hidknEDcFoUX05/j1Ej4Ouz2ogn73Ns94+r0eEk=; b=dD75TwsVYQlxNjW1EDlQhUGoyN0jlOo8DSumQUxFoahJxXp8/4MPPZwf95QCrPCOg5 +rqYR8KqqRmaOsxHAWBXZJtLUuD8zapKuaIzMvFKn+7aIV1emSlI/o3hcQqInWifIglg uP33nV2itQqwov7m80bjBinG6BFzidypSKd3le0MueteljlZqX9MCbrULoo0CsZQuvOD NnbRZ0dsiXBe02o21uhKI486CpWUeWEpHDU2TmkAqnti9KQGaKYW7+mjzY0eraL89hoS 
79AL356P7sAo/qRnw3tk+W6va26ko52A8Zi9TzSEl2K596O999L/VKMmtPTcHxnArNty AS7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=c2h6hidknEDcFoUX05/j1Ej4Ouz2ogn73Ns94+r0eEk=; b=e/T3ntFc8n4W6zAY4N/tF5ERjiyCoWE3SfUfcTc7FgnUPD2ZH9U3CZj/cpYuZ8JLnZ +6ByTQLVbHN3zbT0GLAm8dWRB7E5eJAUzP0aiHN4sxTklINiqW6/8+gfagdkedeDf/Mt pGqZYzgj7iu7rimHIASZZy1c2JMEdTXU3TfoMouRupVHahDdtDetTjtJ5gzDmLTHGRwB SCIWqjNoEUJhc9bx4UloMiBDKBAifg2QBs8gd3hVXgnh/0SZZo7rr9tjcuEivhBSI+YE pyOn0xv/sZdqI+TjyQWuH+8GSZa2ln8h61jJCXx5OHjE23uIt35On7gydpOAfWF54t2g yKfg== X-Gm-Message-State: AOAM530RxIEYSvnMS+W5rwED/ZgfzywV+Hzj2YN7geYTPb0zESXgmL7m 7GQ1vUvoCT5Vw78K0dBoFpaxhETodUE= X-Google-Smtp-Source: ABdhPJzuLrGXs46c3TVnrZtvSmM5Np40Qc40RJxs/U84mvaQ1xhPfmn1OH7uneOvZx+5B9hYrMpkpA== X-Received: by 2002:a17:902:8214:b0:13e:c554:6b55 with SMTP id x20-20020a170902821400b0013ec5546b55mr430735pln.80.1633363346386; Mon, 04 Oct 2021 09:02:26 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:26 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 35/52] KVM: PPC: Book3S HV P9: More SPR speed improvements Date: Tue, 5 Oct 2021 02:00:32 +1000 Message-Id: <20211004160049.1338837-36-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This avoids more scoreboard stalls and reduces mtSPRs. 
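The pattern used throughout the hunks below is to skip an mtSPR when the guest value already matches the host value, or when the guest value is zero; a representative excerpt (not the full set of registers the patch touches):

	if (dawr_enabled()) {
		if (vcpu->arch.dawr0 != host_dawr0)
			mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
		if (vcpu->arch.dawrx0 != host_dawrx0)
			mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
	}
	if (vc->dpdes)
		mtspr(SPRN_DPDES, vc->dpdes);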
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 73 ++++++++++++++++----------- 1 file changed, 43 insertions(+), 30 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 67f57b03a896..a23f09fa7d2d 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -645,24 +645,29 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } - if (vc->pcr) - mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); + if (vc->pcr) + mtspr(SPRN_PCR, vc->pcr | PCR_MASK); + if (vc->dpdes) + mtspr(SPRN_DPDES, vc->dpdes); + if (dawr_enabled()) { - mtspr(SPRN_DAWR0, vcpu->arch.dawr0); - mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, vcpu->arch.dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, vcpu->arch.dawr1); - mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, vcpu->arch.dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); } } - mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_IC, vcpu->arch.ic); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, vcpu->arch.ciabr); mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -881,20 +886,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, host_pidr); - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. - */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - __mtmsrd(msr, 0); - - store_vcpu_state(vcpu); - dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -912,6 +903,22 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * Enable MSR here in order to have facilities enabled to save + * guest registers. This enables MMU (if we were in realmode), so + * only switch MMU on after the MMU is switched to host, to avoid + * the P9_RADIX_PREFETCH_BUG or hash guest context. 
+ */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); @@ -919,15 +926,21 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); mtspr(SPRN_HFSCR, host_hfscr); - mtspr(SPRN_CIABR, host_ciabr); - mtspr(SPRN_DAWR0, host_dawr0); - mtspr(SPRN_DAWRX0, host_dawrx0); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, host_ciabr); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, host_dawr1); - mtspr(SPRN_DAWRX1, host_dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); } - mtspr(SPRN_DPDES, 0); + if (vc->dpdes) + mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Oct 4 16:00:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536228 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=WpCXD4wi; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTr1M5dz9t5G for ; Tue, 5 Oct 2021 03:02:40 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236300AbhJDQES (ORCPT ); Mon, 4 Oct 2021 12:04:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43652 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236290AbhJDQES (ORCPT ); Mon, 4 Oct 2021 12:04:18 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69D55C061745 for ; Mon, 4 Oct 2021 09:02:29 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id d13-20020a17090ad3cd00b0019e746f7bd4so4285371pjw.0 for ; Mon, 04 Oct 2021 09:02:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=fq87rfMDwDfdeH36xguI8Sx1EqHcJXrKBlZ1OzcLmnA=; b=WpCXD4wi/JI5J9ibOIBdVVZ0N5PAqJddWLsrhhWtAGvTAK8HiI+aJUGd671MHfe+/8 0EvIltI4Zm6O7rM0jvNygX3+LyfIXolv7Ln5h519+x2sseaauZ7+ixU90yQ1LrZL9AUK 3F/KI0sEnaGBRdTCa/idSsO5BABqcsPAVksPEXDoU//aG6rxcIaYeO4axxrJkVu2P6Rx yVVxPqXTsP9aEdY5cpd+IgNzN1ufwjAqcy2kVD6gnRDA9ThzOJLyg3uv2EHaATTFoDUW 6U5fL6Rf/FWIgwlZiHev1eqIeqUMnho85KkB8qA0o622zoQjXqVGonxqcLSTdOQbsLON BT0A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=fq87rfMDwDfdeH36xguI8Sx1EqHcJXrKBlZ1OzcLmnA=; b=bsKr7HTqNjsDGdUQp5f40+TWCqeZmkiYe/erqcU4VXbTgckRKUU1Zfe+Ce2aVaVXIR SoG+LRj1jbOUj6F+oNPJpbGuQhYrzaOMk877k9QkBYCPFN2XDzUeLUKSg9hHpCXryBg4 6+Nee2cJmYjH1piBtbrTmHnXTNAk8NpLznYpUcBlRgS1pu+x8YY/FELABwpD5AkKRiIa osEgCVeuulbGMIcyRN2LWOdLWYnhpNGaTlsAP3dwazeQjcrsP7Dp6fnHsQLkAc7hc23W 9KXY+Opw65hqEB0kROYGiLILWp+KDnUR0+ICckriryI6N7Ih7rK5a8ojpkQguQe0tlOH In7Q== X-Gm-Message-State: AOAM533JrsFthNCJO7rerAPSCm+7+egYxbDsU4+d3eAyIPKtub8PWTX4 bYCKDfG0l+DxOSKtjgrEH+/10H35JwE= X-Google-Smtp-Source: ABdhPJwehgfWWGzyLyrKMEkByEkD9YvBHWhENI58G7BW0sEb2h7rAE9jlDKCpERH+RyvGgKKToOEXw== X-Received: by 2002:a17:902:7e84:b0:13e:d793:20d8 with SMTP id z4-20020a1709027e8400b0013ed79320d8mr400306pla.67.1633363348816; Mon, 04 Oct 2021 09:02:28 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:28 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Fabiano Rosas Subject: [PATCH v3 36/52] KVM: PPC: Book3S HV P9: Demand fault EBB facility registers Date: Tue, 5 Oct 2021 02:00:33 +1000 Message-Id: <20211004160049.1338837-37-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for EBB, with a hysteresis counter similar to the load_fp etc counters in context switching that implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities. 
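The mechanism is: clear HFSCR[EBB] at vcpu creation so the guest's first use of the facility traps as a facility-unavailable interrupt, grant the facility in that handler, then let a wrapping u8 counter drop the bit again after a window of partition switches, so a facility the guest has stopped using eventually stops being saved and restored. Below is a stand-alone sketch of just that counter logic, under the assumption described above; HFSCR_EBB is a placeholder bit and the vcpu structure is reduced to the two relevant fields (the real code also skips the counter for nested guests).

/*
 * Illustrative hysteresis counter for HFSCR demand faulting.  Not the
 * kernel code: bit positions and structures are simplified placeholders.
 */
#include <stdio.h>
#include <stdint.h>

#define HFSCR_EBB   (1ul << 7)      /* placeholder bit position */

struct vcpu {
	unsigned long hfscr;        /* facilities currently granted to guest */
	uint8_t load_ebb;           /* hysteresis counter, like load_fp */
};

/* Guest touched EBB while the facility was disabled: grant it again. */
static void ebb_unavailable_fault(struct vcpu *v)
{
	v->hfscr |= HFSCR_EBB;
}

/*
 * On each exit with EBB granted, count; when the u8 wraps (after 256
 * partition switches) drop the bit so the save/restore stops until the
 * guest faults on the facility again.
 */
static void exit_accounting(struct vcpu *v)
{
	if (v->hfscr & HFSCR_EBB) {
		if (++v->load_ebb == 0)
			v->hfscr &= ~HFSCR_EBB;
	}
}

int main(void)
{
	struct vcpu v = { .hfscr = 0, .load_ebb = 0 };

	ebb_unavailable_fault(&v);              /* guest starts using EBB */
	for (int i = 0; i < 300; i++)           /* many partition switches */
		exit_accounting(&v);
	printf("EBB still granted: %s\n",
	       (v.hfscr & HFSCR_EBB) ? "yes" : "no");
	return 0;
}

A guest actively using EBB pays one cheap fault per window and gets the bit back; a guest that has stopped using it stops paying the register save/restore cost on every entry and exit.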
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 16 +++++++++++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 28 +++++++++++++++++++++------ 3 files changed, 37 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index c5fc4d016695..9c63eff35812 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -579,6 +579,7 @@ struct kvm_vcpu_arch { ulong cfar; ulong ppr; u32 pspb; + u8 load_ebb; ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6fb941aa77f1..070469867bf5 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1436,6 +1436,16 @@ static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_EBB)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_EBB; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1727,6 +1737,8 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, r = kvmppc_emulate_doorbell_instr(vcpu); if (cause == FSCR_PM_LG) r = kvmppc_pmu_unavailable(vcpu); + if (cause == FSCR_EBB_LG) + r = kvmppc_ebb_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2771,9 +2783,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM is demand-faulted so start with it clear. + * PM, EBB is demand-faulted so start with it clear. */ - vcpu->arch.hfscr &= ~HFSCR_PM; + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); kvmppc_mmu_book3s_hv_init(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index a23f09fa7d2d..929a7c336b09 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -232,9 +232,12 @@ static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + } if (cpu_has_feature(CPU_FTR_P9_TIDR)) mtspr(SPRN_TIDR, vcpu->arch.tid); @@ -265,9 +268,22 @@ static void load_spr_state(struct kvm_vcpu *vcpu, static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + /* + * This is like load_fp in context switching, turn off the + * facility after it wraps the u8 to try avoiding saving + * and restoring the registers each partition switch. 
+ */ + if (!vcpu->arch.nested) { + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } if (cpu_has_feature(CPU_FTR_P9_TIDR)) vcpu->arch.tid = mfspr(SPRN_TIDR); From patchwork Mon Oct 4 16:00:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536229 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=MNKKpyfN; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTr3jvtz9t6h for ; Tue, 5 Oct 2021 03:02:40 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236333AbhJDQEZ (ORCPT ); Mon, 4 Oct 2021 12:04:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43660 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236294AbhJDQEU (ORCPT ); Mon, 4 Oct 2021 12:04:20 -0400 Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com [IPv6:2607:f8b0:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DFBD8C061745 for ; Mon, 4 Oct 2021 09:02:31 -0700 (PDT) Received: by mail-pf1-x431.google.com with SMTP id q23so14901738pfs.9 for ; Mon, 04 Oct 2021 09:02:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QhuEbcyyjvEHV1sroWyOcCg/EId/n5w8MxW9uQ4jdko=; b=MNKKpyfNHLMOI8glb86UjO1gY/+U0Jk7MzNXswi9yKdef1zXYTZvaQowvcmj9V1Aqz ipQsjDnK9VI3wyRTCJlbkooKhN5kXAUpD7/pFFOtPolC0NSg0x93x6KAf+0L75Sd0yBX SpGm905Q9PUoW7QnRbxMHrFmCTgaKqFZi1Db/R1ZHXcmNu1PScSwyxcwooFJ8u78vibw 1CX1BEs+YxV29R3OHIOEs1d83xAJJrGODVv99z5dPsdYqCCZH3KbaUtH6KGok/znW/yp JoWmYg8EK09GPZB/LOVsZojQOL98ayqxpwi9UyS1ZxrPvqgw+mX+sB5MCJQIKz1j9TkJ AFbQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QhuEbcyyjvEHV1sroWyOcCg/EId/n5w8MxW9uQ4jdko=; b=iSLbDIpjke4ICGkdP0Q6WndpC5JQDuqQZkMTvsyQLaRsZkJHYhpXN5Py3/rnI4wlU0 MgRvsgwlUXswt/+nalERGFErKDxeWOtkZCJGCbnx7EIx1xSaiAk5jm5CECAydzqVXrvQ nnFa8uAvA3O71osC3mcYRZoI5b/h36m3SRr1gzBhBdGoWKiOezT/AnrEH0L/q2pYkPCA 2KQvhXXmYZNj/dDAWt3plVBboqFMSO06z7Uzafc572vYV/alWp1sgNzHL1NUvUamwF/C IHy16HAFPMkR20U6Z1U5cIc1Mi1/d4jBg5jrvWj+uRUQvNriHexJGRy0xV3fFNTqn2zG ZOeQ== X-Gm-Message-State: AOAM531Pbqx3oquM0aP6Jz+5Lx3ey/Kx1zMUUCitpXSnC0s95vqbtIKG zyuxLdFZqbDU7BjNNNEHJIkWSxgOqmY= X-Google-Smtp-Source: ABdhPJx0xmL3OnSeN0aiB2a8LGaWnPsl/gFU0IR57UiyNJzFlWJ0pvzG2LMGL2uIs8IRVhAZp8gBbw== X-Received: by 2002:a05:6a00:cc9:b0:44c:2305:50de with SMTP id b9-20020a056a000cc900b0044c230550demr15939524pfv.79.1633363351338; Mon, 04 Oct 2021 09:02:31 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:31 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin , Fabiano Rosas Subject: [PATCH v3 37/52] KVM: PPC: Book3S HV P9: Demand fault TM facility registers Date: Tue, 5 Oct 2021 02:00:34 +1000 Message-Id: <20211004160049.1338837-38-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for TM, with a hysteresis counter similar to the load_fp etc counters in context switching that implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 3 +++ arch/powerpc/kvm/book3s_hv.c | 26 ++++++++++++++++++++------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 15 +++++++++++---- 3 files changed, 34 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 9c63eff35812..92925f82a1e3 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -580,6 +580,9 @@ struct kvm_vcpu_arch { ulong ppr; u32 pspb; u8 load_ebb; +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + u8 load_tm; +#endif ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 070469867bf5..e9037f7a3737 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1446,6 +1446,16 @@ static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_tm_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_TM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_TM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1739,6 +1749,8 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, r = kvmppc_pmu_unavailable(vcpu); if (cause == FSCR_EBB_LG) r = kvmppc_ebb_unavailable(vcpu); + if (cause == FSCR_TM_LG) + r = kvmppc_tm_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2783,9 +2795,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM, EBB is demand-faulted so start with it clear. + * PM, EBB, TM are demand-faulted so start with it clear. 
*/ - vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB | HFSCR_TM); kvmppc_mmu_book3s_hv_init(vcpu); @@ -3855,8 +3867,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); @@ -4582,8 +4595,9 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 929a7c336b09..8499e8a9ca8f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -310,7 +310,7 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, if (MSR_TM_ACTIVE(guest_msr)) { kvmppc_restore_tm_hv(vcpu, guest_msr, true); ret = true; - } else { + } else if (vcpu->arch.hfscr & HFSCR_TM) { mtspr(SPRN_TEXASR, vcpu->arch.texasr); mtspr(SPRN_TFHAR, vcpu->arch.tfhar); mtspr(SPRN_TFIAR, vcpu->arch.tfiar); @@ -346,10 +346,16 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) unsigned long guest_msr = vcpu->arch.shregs.msr; if (MSR_TM_ACTIVE(guest_msr)) { kvmppc_save_tm_hv(vcpu, guest_msr, true); - } else { + } else if (vcpu->arch.hfscr & HFSCR_TM) { vcpu->arch.texasr = mfspr(SPRN_TEXASR); vcpu->arch.tfhar = mfspr(SPRN_TFHAR); vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + + if (!vcpu->arch.nested) { + vcpu->arch.load_tm++; /* see load_ebb comment */ + if (!vcpu->arch.load_tm) + vcpu->arch.hfscr &= ~HFSCR_TM; + } } } #endif @@ -641,8 +647,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ From patchwork Mon Oct 4 16:00:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536230 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=aa+V/DKU; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTr6MHHz9t5G for ; Tue, 5 Oct 2021 03:02:40 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236294AbhJDQEZ (ORCPT ); Mon, 4 Oct 2021 12:04:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43674 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229635AbhJDQEX (ORCPT ); Mon, 4 Oct 2021 12:04:23 -0400 Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com [IPv6:2607:f8b0:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 51DB2C061746 for ; Mon, 4 Oct 2021 09:02:34 -0700 (PDT) Received: by mail-pg1-x52c.google.com with SMTP id 133so17002783pgb.1 for ; Mon, 04 Oct 2021 09:02:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=wkexsEOAVWXa6SP1z3QouXaKbTqKQYxhAWf7gffMh3A=; b=aa+V/DKUu5ZG43rwzwPT5Up2WHq7gydSdPPXN+iPlScH8Sd57ZrlSzllLHIWLna83w Pj7KSV9yAIeWnKUYUukT4L8GxCcaVBMGwRArP+Aa48odGEaWFe+zpKw62cuCyAkSXSD6 vmmVWp6jBSRfZHPU7Qq2r18QxmRCvRM7sOk51DI1dK1fU0US+FWbFqXDRIGbZudBdD7/ BoqQeggUCJn+v0CQmQAuG+3MoyK3tZEAIFlkWKqFYGscYrK4hKQ+XwPhJePm/FgUTwLM OgLgbymP6vTq7cAJ4adBv0988BljJ7IUPf3oUQBZvbzRW3XPVwuY37cEavzJBewiHx0V IyEg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=wkexsEOAVWXa6SP1z3QouXaKbTqKQYxhAWf7gffMh3A=; b=qfltlbffzKQS3m5VrhL0U0xJqxnNlpYlaY5IK+kgBmhE/zKv76fq25GSHBup7hxhdH YuXfUVPome5NWSePrFt9Fs813pZVe3lKq9BcCNHHiKSDVS8Fg1hipm+nLwVoHxguFN9i Yoc/WUrIXyoJiRQyBgRvpSkeU0ljx0RV/ogo514ndUysiiCDhRAvm1vONI3i7HDoaq/g R4Jz8NqpVu3zX9h0U1cTj74aQC7x+DWmITv7zUiVYV5P3SI0BoBvKTjgG9tMrG9uYAV3 WeWOnwj+5344w6y0unyJqqhzx+PBV9kDbY+mcGrJwdOgaNcRAv6yrw3g7qv5VKf0VEgK 2zpg== X-Gm-Message-State: AOAM530yAi/vGYjlGQNBGBS+Yhf0i0NUI/5ck/B8aWYeVkuUncR8R9mA iHIOCBSR6K8fzM5LALO+Gf7adrl+6Lc= X-Google-Smtp-Source: ABdhPJyVts0WOd2f1YH/HiuCLKbXMqHRXf4oG8muSDOuCcQN+BgnIEGqP9eV64UKcV57jAOPFDbjhQ== X-Received: by 2002:a62:8f8a:0:b0:44c:6e3c:4807 with SMTP id n132-20020a628f8a000000b0044c6e3c4807mr2778896pfd.68.1633363353507; Mon, 04 Oct 2021 09:02:33 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:33 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 38/52] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs Date: Tue, 5 Oct 2021 02:00:35 +1000 Message-Id: <20211004160049.1338837-39-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Linux implements SPR save/restore including storage space for registers in the task struct for process context switching. Make use of this similarly to the way we make use of the context switching fp/vec save restore. This improves code reuse, allows some stack space to be saved, and helps with avoiding VRSAVE updates if they are not required. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/switch_to.h | 1 + arch/powerpc/kernel/process.c | 6 ++ arch/powerpc/kvm/book3s_hv.c | 21 +----- arch/powerpc/kvm/book3s_hv.h | 3 - arch/powerpc/kvm/book3s_hv_p9_entry.c | 93 +++++++++++++++++++-------- 5 files changed, 73 insertions(+), 51 deletions(-) diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index e8013cd6b646..1f43ef696033 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -113,6 +113,7 @@ static inline void clear_task_ebb(struct task_struct *t) } void kvmppc_save_user_regs(void); +void kvmppc_save_current_sprs(void); extern int set_thread_tidr(struct task_struct *t); diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 3fca321b820d..b2d191a8fbf9 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1182,6 +1182,12 @@ void kvmppc_save_user_regs(void) #endif } EXPORT_SYMBOL_GPL(kvmppc_save_user_regs); + +void kvmppc_save_current_sprs(void) +{ + save_sprs(¤t->thread); +} +EXPORT_SYMBOL_GPL(kvmppc_save_current_sprs); #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ static inline void restore_sprs(struct thread_struct *old_thread, diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e9037f7a3737..f4445cc5a29a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4540,9 +4540,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) struct kvm_run *run = vcpu->run; int r; int srcu_idx; - unsigned long ebb_regs[3] = {}; /* shut up GCC */ - unsigned long user_tar = 0; - unsigned int user_vrsave; struct kvm *kvm; unsigned long msr; @@ -4603,14 +4600,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_save_user_regs(); - /* Save userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - ebb_regs[0] = mfspr(SPRN_EBBHR); - ebb_regs[1] = mfspr(SPRN_EBBRR); - ebb_regs[2] = mfspr(SPRN_BESCR); - user_tar = mfspr(SPRN_TAR); - } - user_vrsave = mfspr(SPRN_VRSAVE); + kvmppc_save_current_sprs(); vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; @@ -4651,15 +4641,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) } } while (is_kvmppc_resume_guest(r)); - /* Restore userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - mtspr(SPRN_EBBHR, ebb_regs[0]); - mtspr(SPRN_EBBRR, ebb_regs[1]); - mtspr(SPRN_BESCR, 
ebb_regs[2]); - mtspr(SPRN_TAR, user_tar); - } - mtspr(SPRN_VRSAVE, user_vrsave); - vcpu->arch.state = KVMPPC_VCPU_NOTREADY; atomic_dec(&kvm->arch.vcpus_running); diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h index d7485b9e9762..6b7f07d9026b 100644 --- a/arch/powerpc/kvm/book3s_hv.h +++ b/arch/powerpc/kvm/book3s_hv.h @@ -4,11 +4,8 @@ * Privileged (non-hypervisor) host registers to save. */ struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; unsigned long iamr; unsigned long amr; - unsigned long fscr; unsigned int pmc1; unsigned int pmc2; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 8499e8a9ca8f..093ac0453d91 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -231,15 +231,26 @@ EXPORT_SYMBOL_GPL(switch_pmu_to_host); static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* TAR is very fast */ mtspr(SPRN_TAR, vcpu->arch.tar); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + current->thread.vrsave != vcpu->arch.vrsave) + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + if (current->thread.ebbhr != vcpu->arch.ebbhr) + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + if (current->thread.ebbrr != vcpu->arch.ebbrr) + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + if (current->thread.bescr != vcpu->arch.bescr) + mtspr(SPRN_BESCR, vcpu->arch.bescr); } - if (cpu_has_feature(CPU_FTR_P9_TIDR)) + if (cpu_has_feature(CPU_FTR_P9_TIDR) && + current->thread.tidr != vcpu->arch.tid) mtspr(SPRN_TIDR, vcpu->arch.tid); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, vcpu->arch.iamr); @@ -247,9 +258,9 @@ static void load_spr_state(struct kvm_vcpu *vcpu, mtspr(SPRN_AMR, vcpu->arch.amr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) + if (current->thread.fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) + if (current->thread.dscr != vcpu->arch.dscr) mtspr(SPRN_DSCR, vcpu->arch.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, vcpu->arch.pspb); @@ -269,20 +280,15 @@ static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - /* - * This is like load_fp in context switching, turn off the - * facility after it wraps the u8 to try avoiding saving - * and restoring the registers each partition switch. 
- */ - if (!vcpu->arch.nested) { - vcpu->arch.load_ebb++; - if (!vcpu->arch.load_ebb) - vcpu->arch.hfscr &= ~HFSCR_EBB; - } } if (cpu_has_feature(CPU_FTR_P9_TIDR)) @@ -324,7 +330,6 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); #endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); return ret; } @@ -338,7 +343,6 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); #endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); #ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) || @@ -364,12 +368,8 @@ EXPORT_SYMBOL_GPL(store_vcpu_state); void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); } EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); @@ -377,26 +377,63 @@ EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* + * current->thread.xxx registers must all be restored to host + * values before a potential context switch, othrewise the context + * switch itself will overwrite current->thread.xxx with the values + * from the guest SPRs. + */ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - if (cpu_has_feature(CPU_FTR_P9_TIDR)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (cpu_has_feature(CPU_FTR_P9_TIDR) && + current->thread.tidr != vcpu->arch.tid) + mtspr(SPRN_TIDR, current->thread.tidr); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, host_os_sprs->iamr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (current->thread.fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, current->thread.fscr); + if (current->thread.dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, current->thread.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, 1); + +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + vcpu->arch.vrsave != current->thread.vrsave) + mtspr(SPRN_VRSAVE, current->thread.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { + if (vcpu->arch.bescr != current->thread.bescr) + mtspr(SPRN_BESCR, current->thread.bescr); + if (vcpu->arch.ebbhr != current->thread.ebbhr) + mtspr(SPRN_EBBHR, current->thread.ebbhr); + if (vcpu->arch.ebbrr != current->thread.ebbrr) + mtspr(SPRN_EBBRR, current->thread.ebbrr); + + if (!vcpu->arch.nested) { + /* + * This is like load_fp in context switching, turn off + * the facility after it wraps the u8 to try avoiding + * saving and restoring the registers each partition + * switch. 
+ */ + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } + + if (vcpu->arch.tar != current->thread.tar) + mtspr(SPRN_TAR, current->thread.tar); } EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); From patchwork Mon Oct 4 16:00:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536231 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=Hy52umQO; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTs1c51z9t6h for ; Tue, 5 Oct 2021 03:02:41 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230284AbhJDQEZ (ORCPT ); Mon, 4 Oct 2021 12:04:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43682 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232582AbhJDQEZ (ORCPT ); Mon, 4 Oct 2021 12:04:25 -0400 Received: from mail-pg1-x531.google.com (mail-pg1-x531.google.com [IPv6:2607:f8b0:4864:20::531]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E660C061745 for ; Mon, 4 Oct 2021 09:02:36 -0700 (PDT) Received: by mail-pg1-x531.google.com with SMTP id 133so17002880pgb.1 for ; Mon, 04 Oct 2021 09:02:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=U3rikcBBW6faBn9HW3G/ktrFps4HGeSwQYtxL/+Kwgo=; b=Hy52umQOBt8v9QuYumotMc+kSrjUGAZoKUKZFj5nBcJTyiZQnJxeAEsR0ie3Z8XMwZ L4FQrms+jX0aYmH3lQg9XVY1oajRLdP72wcp2KCduFmyM9V7io/d9Rk0sjiN7+T0+rcT UhQfH33NhMoU48SEQOM0taJuj0GKerLknWXv611g/T3HbQLGY9JysqTqNWWBYYlFlKmH tdSHZiYBMliAer1Mp43QkhZC4KvBOKFhsZ+xELy+5RA75cJ4HGl1LSy1xqslepRrh1Vk bOhPrvAQq6wphSgejk0liwfSANJ5oeDvljm4TBG2VerLyFluLsi6hH5Dc0A2kM0/KBVN JFGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=U3rikcBBW6faBn9HW3G/ktrFps4HGeSwQYtxL/+Kwgo=; b=BbYy01lRfClWdny6eNV/lTfGQkYAKygbXS5ORToIW7V7piF1o7RkXGwh7wpVBHv788 DHAbELAjEj9Q5BIVoZBlCDSMkA5LrsfKiyoYO4ivmPy8O1Oq7PaDP13+Q901sG1WMXAO NTL0QbzppQRM1e+dJuB2qM141rHtr/o6wwqLL66nEV8JxdlsfojTNLfg3UNJ9qrHXcqa 5flu3tdIeYYbEaIlPH/Jlk2kfz0iYDTa61FpbB9bpT6Lzu9BesrqCsahgXnnhG2h8nKQ dFm1/o4VEwq8p40l8RF3dXm2pkgR19yG5CO9LbP1a9fRcRxw7Q4il2AVc7tykmIDHjWP isYQ== X-Gm-Message-State: AOAM530jG9E45twgzwnBtlZCX3CCwgRIg/hiWo82VwswrC/79tDQAEHO rM0xZ+sejMYEQiI2TvxKyCSovKJmmf0= X-Google-Smtp-Source: ABdhPJzsypEJIIifJ7XSCEdsBaVL8nldxpkpYEg6Z1RMD1hwbzyAaneO3LlWwUJ2YY3JmOPVZ+3Z3w== X-Received: by 2002:a65:4209:: with SMTP id c9mr11252545pgq.399.1633363355674; Mon, 04 Oct 2021 09:02:35 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:35 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 39/52] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Date: Tue, 5 Oct 2021 02:00:36 +1000 Message-Id: <20211004160049.1338837-40-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Tighten up partition switching code synchronisation and comments. In particular, hwsync ; isync is required after the last access that is performed in the context of a partition, before the partition is switched away from. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_64_entry.S | 11 +++++-- arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 40 +++++++++++++++++++------- 3 files changed, 42 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S index 983b8c18bc31..05e003eb5d90 100644 --- a/arch/powerpc/kvm/book3s_64_entry.S +++ b/arch/powerpc/kvm/book3s_64_entry.S @@ -374,11 +374,16 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX) BEGIN_FTR_SECTION mtspr SPRN_DAWRX1,r10 END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) - mtspr SPRN_PID,r10 /* - * Switch to host MMU mode + * Switch to host MMU mode (don't have the real host PID but we aren't + * going back to userspace). */ + hwsync + isync + + mtspr SPRN_PID,r10 + ld r10, HSTATE_KVM_VCPU(r13) ld r10, VCPU_KVM(r10) lwz r10, KVM_HOST_LPID(r10) @@ -389,6 +394,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1) ld r10, KVM_HOST_LPCR(r10) mtspr SPRN_LPCR,r10 + isync + /* * Set GUEST_MODE_NONE so the handler won't branch to KVM, and clear * MSR_RI in r12 ([H]SRR1) so the handler won't try to return. diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index 16359525a40f..8cebe5542256 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -57,6 +57,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, preempt_disable(); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the lpid first to avoid running host with unallocated pid */ old_lpid = mfspr(SPRN_LPID); if (old_lpid != lpid) @@ -75,6 +77,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, ret = __copy_to_user_inatomic((void __user *)to, from, n); pagefault_enable(); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the pid first to avoid running host with unallocated pid */ if (quadrant == 1 && pid != old_pid) mtspr(SPRN_PID, old_pid); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 093ac0453d91..323b692bbfe2 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -531,17 +531,19 @@ static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u6 lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; /* - * All the isync()s are overkill but trivially follow the ISA - * requirements. Some can likely be replaced with justification - * comment for why they are not needed. 
+ * Prior memory accesses to host PID Q3 must be completed before we + * start switching, and stores must be drained to avoid not-my-LPAR + * logic (see switch_mmu_to_host). */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_LPID, lpid); - isync(); mtspr(SPRN_LPCR, lpcr); - isync(); mtspr(SPRN_PID, vcpu->arch.pid); - isync(); + /* + * isync not required here because we are HRFID'ing to guest before + * any guest context access, which is context synchronising. + */ } static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) @@ -551,25 +553,41 @@ static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpid = kvm->arch.lpid; + /* + * See switch_mmu_to_guest_radix. ptesync should not be required here + * even if the host is in HPT mode because speculative accesses would + * not cause RC updates (we are in real mode). + */ + asm volatile("hwsync" ::: "memory"); + isync(); mtspr(SPRN_LPID, lpid); mtspr(SPRN_LPCR, lpcr); mtspr(SPRN_PID, vcpu->arch.pid); for (i = 0; i < vcpu->arch.slb_max; i++) mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv); - - isync(); + /* + * isync not required here, see switch_mmu_to_guest_radix. + */ } static void switch_mmu_to_host(struct kvm *kvm, u32 pid) { + /* + * The guest has exited, so guest MMU context is no longer being + * non-speculatively accessed, but a hwsync is needed before the + * mtLPIDR / mtPIDR switch, in order to ensure all stores are drained, + * so the not-my-LPAR tlbie logic does not overlook them. + */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_PID, pid); - isync(); mtspr(SPRN_LPID, kvm->arch.host_lpid); - isync(); mtspr(SPRN_LPCR, kvm->arch.host_lpcr); - isync(); + /* + * isync is not required after the switch, because mtmsrd with L=0 + * is performed after this switch, which is context synchronising. 
+ */ if (!radix_enabled()) slb_restore_bolted_realmode(); From patchwork Mon Oct 4 16:00:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536232 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=FuPAPdd4; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTs3yNLz9t5G for ; Tue, 5 Oct 2021 03:02:41 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236314AbhJDQE1 (ORCPT ); Mon, 4 Oct 2021 12:04:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43694 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229635AbhJDQE1 (ORCPT ); Mon, 4 Oct 2021 12:04:27 -0400 Received: from mail-pg1-x530.google.com (mail-pg1-x530.google.com [IPv6:2607:f8b0:4864:20::530]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5CD91C061745 for ; Mon, 4 Oct 2021 09:02:38 -0700 (PDT) Received: by mail-pg1-x530.google.com with SMTP id g184so16990328pgc.6 for ; Mon, 04 Oct 2021 09:02:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AdZU0vsaA7eoEc1W7hEx0HHSrDx68parS+Neoj58wvs=; b=FuPAPdd4AuCMzRlUAw1cO3YyYFfbBll4q/e22WAjqphgz+yORSeWc1whfUX0xYO+tG b2fJBd5sSqXGF9cwPjM+mxuP9lZRwwEv37XVnMSAmk70ENRjTJcbH8cJ8ykv1VfGAWMs K57sksAIEjAPB1iZ6eutfhzrFodokQQZ5BR8YF3ftYXeNF2cRkWImWzOgTU28duvo41K 3VgexiwLGFSHBo2oCaNpSkrEgllgmK9GlMfPI096srSoWquBbSngcT4SWa0vOIa0pNcH HQny8M7qI7TOfdduBacS6aF1UFWF8WkxQhUP9rFgFTd9yx41LeMTWvLT8ggzZppH3TFV qJPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AdZU0vsaA7eoEc1W7hEx0HHSrDx68parS+Neoj58wvs=; b=xqY8PjC06swqr/8bSbnr0yfCQhdo7FCBFBTeGAp++cFjFoViO7q/x6LZZyZcaVumEj pbpzICLjef3wvZlX496Ze5nMx9DtGGU0/ibM5x1jmTuVrZzDUVn0aT+QU9tVYV/b46Ov UocKbe0o3trt/jA3F3dBh0FoxlgpveVdVUwGQSofQJZnUVllmD2OLNqhn0sq2RJTvm/W 5/9tTDFynFiw9DWmw4FrHz6MDk9kd8UqO2wjoBknTMfarKttQZbt4QMDn6BjROw2s0BZ gegFyUnVCsZb42fgcyjIqD4XSOZm8GslRcVYPZ6ePBcbqnLUeV9UED4ENOBHLTWu7bLJ aBNw== X-Gm-Message-State: AOAM532dK0XeXigKhSxPCLTxIcWt5DEWQSBmFGRcXtVtW3uLDe8lf1qL RrDbkQ88cyNxyVrsKLyiXz8tvIRjkks= X-Google-Smtp-Source: ABdhPJwv7mubAZbz9fLyx8zI6rdNmj3T5moHXZxRqMNMlkCpzygbuyOQ2xtOyp4IrySG5p75oaTEHA== X-Received: by 2002:a63:dd51:: with SMTP id g17mr11486439pgj.47.1633363357835; Mon, 04 Oct 2021 09:02:37 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:37 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 40/52] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Date: Tue, 5 Oct 2021 02:00:37 +1000 Message-Id: <20211004160049.1338837-41-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Some of the DAWR SPR access is already predicated on dawr_enabled(), apply this to the remainder of the accesses. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 34 ++++++++++++++++----------- 1 file changed, 20 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 323b692bbfe2..0f341011816c 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -666,13 +666,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_dawr0 = mfspr(SPRN_DAWR0); - host_dawrx0 = mfspr(SPRN_DAWRX0); host_psscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - host_dawr1 = mfspr(SPRN_DAWR1); - host_dawrx1 = mfspr(SPRN_DAWRX1); + + if (dawr_enabled()) { + host_dawr0 = mfspr(SPRN_DAWR0); + host_dawrx0 = mfspr(SPRN_DAWRX0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + host_dawr1 = mfspr(SPRN_DAWR1); + host_dawrx1 = mfspr(SPRN_DAWRX1); + } } local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); @@ -1006,15 +1009,18 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); - if (vcpu->arch.dawr0 != host_dawr0) - mtspr(SPRN_DAWR0, host_dawr0); - if (vcpu->arch.dawrx0 != host_dawrx0) - mtspr(SPRN_DAWRX0, host_dawrx0); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - if (vcpu->arch.dawr1 != host_dawr1) - mtspr(SPRN_DAWR1, host_dawr1); - if (vcpu->arch.dawrx1 != host_dawrx1) - mtspr(SPRN_DAWRX1, host_dawrx1); + + if (dawr_enabled()) { + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); + } } if (vc->dpdes) From patchwork Mon Oct 4 16:00:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536233 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=Cp9TFuDn; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) 
by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTs6c1mz9t6h for ; Tue, 5 Oct 2021 03:02:41 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232582AbhJDQEa (ORCPT ); Mon, 4 Oct 2021 12:04:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231148AbhJDQE3 (ORCPT ); Mon, 4 Oct 2021 12:04:29 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB665C061745 for ; Mon, 4 Oct 2021 09:02:40 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id cs11-20020a17090af50b00b0019fe3df3dddso247760pjb.0 for ; Mon, 04 Oct 2021 09:02:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=UPwQjon/JGAwP2hNwxa5hh2Gm9nZDXFP065oxigESDI=; b=Cp9TFuDnn9PJg4RnIi5LgHZZNizTxMSa0BnQHLvQggcgyzXhPeCRJ0NEX6C75vDAGR XPwnPldKswCJvCR+tYxOBiy99l9hkz+cMHSgvIPF//aI/4ccVCLr4sWwc4E+pHGxYKku mbgKLR/lRUtSZ2v2eKT6iZKW02RgE4LawNVIk5Oxo21A4oDXdmAOXDXpZZXk22a5JDs0 22DIiS4E1K3si7dVvnl6kydK5l0VvCIIyQJ4Pi4Jv+1M8mwJ28Z/PoFAzE18sz07eVo4 tuV8tnJRQE68kisDVM/6mNnYllnkB4RTBXkoT3ut8HyDB2kQCSpxkaJ/qko4j2muh+Uj 5ocw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=UPwQjon/JGAwP2hNwxa5hh2Gm9nZDXFP065oxigESDI=; b=Bb5+Lf98UKqK6ZulNSRQKvkOqSXYcmDHUsXnr0RFSMzBAHYEcQ6bsGSg3sEtpKCxu+ Z/xFzIrDJ+0fPbM79NCUBb37zqNqdtyvj0KwaGdxq0aZPHLXixqh9rMMGh0rC1IiV8dx 3GpOY/GSQRh3rXYDVmuWTbWpPFCfiWJQ0/fV3MW5IlTIKphmocU29wKU/fjRtMJsI3Sc HMf8SROojEZtNP4pq8WrY/lE4Yc8X9LQQaTtnu9T4TnT98hptGGbXV0sIhsVKjNRSh4E zOy0/MIMlwZM9hrS9egxmYX00ZNr+tnnwRj3nWUvsTZcowMaAj4bSIcamPuJ2fN7Ocnq f0Qw== X-Gm-Message-State: AOAM530x7SnYCHNKVsPMn3rp91XgCRhRb2+eSs6QrmzX2hhxRifMt7dD MZFKm1iDWW4tQSNlepXwOcffi3a6ozU= X-Google-Smtp-Source: ABdhPJy45b5shGF85tjBMKoIrwjTXQgDTjxMEs1AFFjyfkfEIb4NOz+fsiDzSekMZeuXua4qLG2Iqg== X-Received: by 2002:a17:902:e8ce:b0:132:b140:9540 with SMTP id v14-20020a170902e8ce00b00132b1409540mr441432plg.28.1633363360045; Mon, 04 Oct 2021 09:02:40 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:39 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 41/52] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Date: Tue, 5 Oct 2021 02:00:38 +1000 Message-Id: <20211004160049.1338837-42-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This also moves the PSSCR update in nested entry to avoid a SPR scoreboard stall. 
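This follows the same only-write-if-changed approach for PSSCR_PR in the nested entry path: snapshot the host value once, write the guest value only when it differs, and defer the conditional restore until after the other exit work so the mtspr is not sitting in a dependent chain. A hedged sketch of that shape follows; the stubs, ordering and the single guest value are illustrative only (the real code re-reads the guest's PSSCR after exit before comparing).

/*
 * Illustrative "snapshot, conditionally write, conditionally restore later"
 * shape used for PSSCR_PR.  mtspr()/mfspr() are stubbed.
 */
#include <stdio.h>

static unsigned long fake_psscr_pr;
#define mtspr_psscr_pr(v) (fake_psscr_pr = (v))
#define mfspr_psscr_pr()  (fake_psscr_pr)

static void nested_entry_exit(unsigned long guest_psscr)
{
	unsigned long host_psscr = mfspr_psscr_pr();   /* snapshot once */

	if (guest_psscr != host_psscr)                 /* entry */
		mtspr_psscr_pr(guest_psscr);

	/* ... run the guest, do the unconditional exit work first ... */

	if (guest_psscr != host_psscr)                 /* deferred restore */
		mtspr_psscr_pr(host_psscr);
}

int main(void)
{
	mtspr_psscr_pr(0);          /* host value */
	nested_entry_exit(0);       /* common case: no SPR writes at all */
	nested_entry_exit(1);       /* guest differs: write and restore */
	printf("PSSCR_PR = %lu\n", mfspr_psscr_pr());
	return 0;
}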
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 7 +++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 26 +++++++++++++++++++------- 2 files changed, 24 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f4445cc5a29a..f10cb4167549 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3876,7 +3876,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; vcpu->arch.regs.msr = vcpu->arch.shregs.msr; @@ -3917,7 +3919,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); store_vcpu_state(vcpu); @@ -3930,6 +3931,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, host_psscr); return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 0f341011816c..eae9d806d704 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -649,6 +649,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr0; unsigned long host_dawrx0; unsigned long host_psscr; + unsigned long host_hpsscr; unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; @@ -666,7 +667,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_psscr = mfspr(SPRN_PSSCR); + host_psscr = mfspr(SPRN_PSSCR_PR); + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + host_hpsscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); if (dawr_enabled()) { @@ -750,8 +753,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } else { + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + } mtspr(SPRN_HFSCR, vcpu->arch.hfscr); @@ -957,7 +966,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ic = mfspr(SPRN_IC); vcpu->arch.pid = mfspr(SPRN_PID); - vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS; + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0); vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1); @@ -1003,9 +1012,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); - /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ - mtspr(SPRN_PSSCR, host_psscr | - (local_paca->kvm_hstate.fake_suspend << 
PSSCR_FAKE_SUSPEND_LG)); + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ + mtspr(SPRN_PSSCR, host_hpsscr | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } + mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); From patchwork Mon Oct 4 16:00:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536234 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=e+GWjShn; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTv62H6z9t5G for ; Tue, 5 Oct 2021 03:02:43 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229635AbhJDQEc (ORCPT ); Mon, 4 Oct 2021 12:04:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43718 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231148AbhJDQEb (ORCPT ); Mon, 4 Oct 2021 12:04:31 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EC14FC061745 for ; Mon, 4 Oct 2021 09:02:42 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id r201so11054513pgr.4 for ; Mon, 04 Oct 2021 09:02:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=A5szwdcp8gimTWh83kL+eYl+KLG10+CVqh8cwZxo6/c=; b=e+GWjShnYaFzfI7wJ36T6V/Zk98hgHTcjWJp5SiZKF0BgPJDFwKIeezIhIZbknJxpk 6MdZ/d5DshIBXSs/S3Fb7YdC4tAKFGXK98gTPepNe6qmnaRjlgLYaw27t/QFgNfssxUI 7ML2gx2XlMVhVAmyw/yEiM+SClcgXEDbhs2CEp+wqY/9oIhhYMmMNLMySCZcG/aoOFmQ FIkgUpf2zovaTiax/ehXo3uoY+X61juQ/Kfg8VEnZoA8Dv/dZj/QrZzPp2eXULOrqQiM 1cCeJSQQi2jVgdsZdUJLB9wkg4JcrOPgF1mxQ/Na5w75m+cXyLUiNNTHL0QqATIg7PTF fUZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=A5szwdcp8gimTWh83kL+eYl+KLG10+CVqh8cwZxo6/c=; b=t5zY4iBjuioseexg9kfTw1jB8i7eX/eIKRI8oRjuybz6MUjW18omKVl6r6Cq5hoS7M fhY3WZdjfPP3u6ng3skVIbP6xg8B+K9UALjOPLEGMCmbVfeDbBJumUnAldNf8Tkm8EHC +Pp3mW78Q0u/meI3j2AluqKdlvYvpMCtXJnupO1FniGpvNUEQQdMYk2GOk+G006dVEDi LHx/01clhj4OhsG0zcUjFtKdJfp+Hi7HmAWJTBUpFCdb+JpDWn77O3TNtKqvCIigqYqE g/AuCKFMaXpsQmplTKfH3Wj61jsBnBQydXlVQLag74YbOaEKef/eHnH4GanWVj+oznxV XrhA== X-Gm-Message-State: AOAM533DYkoGncbQKpLo6r1PA1s2R/h9nQmXm1+NVVd/RHgiyfBbXLvn QaK5FHpN0/BFB0KHyepVYcqwWX+AFIw= X-Google-Smtp-Source: ABdhPJwNXUfyXHg1yzKfJoKSyALY7bUoUHvkNOX/NnXrS8Tugninsd4rl5KnNdqFvPh9W/eYBZYAOQ== X-Received: by 2002:a63:131f:: with SMTP id i31mr11639549pgl.207.1633363362384; Mon, 04 Oct 2021 09:02:42 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:42 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 42/52] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Date: Tue, 5 Oct 2021 02:00:39 +1000 Message-Id: <20211004160049.1338837-43-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use the existing TLB flushing logic to IPI the previous CPU and run the necessary barriers before running a guest vCPU on a new physical CPU, to do the necessary radix GTSE barriers for handling the case of an interrupted guest tlbie sequence. This results in more IPIs than the TLB flush logic requires, but it's a significant win for common case scheduling when the vCPU remains on the same physical CPU. This saves about 520 cycles (nearly 10%) on a guest entry+exit micro benchmark on a POWER9. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 31 +++++++++++++++++++++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 9 -------- 2 files changed, 27 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f10cb4167549..d2a9c930f33e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3025,6 +3025,25 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) smp_call_function_single(i, do_nothing, NULL, 1); } +static void do_migrate_away_vcpu(void *arg) +{ + struct kvm_vcpu *vcpu = arg; + struct kvm *kvm = vcpu->kvm; + + /* + * If the guest has GTSE, it may execute tlbie, so do a eieio; tlbsync; + * ptesync sequence on the old CPU before migrating to a new one, in + * case we interrupted the guest between a tlbie ; eieio ; + * tlbsync; ptesync sequence. + * + * Otherwise, ptesync is sufficient. + */ + if (kvm->arch.lpcr & LPCR_GTSE) + asm volatile("eieio; tlbsync; ptesync"); + else + asm volatile("ptesync"); +} + static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; @@ -3052,10 +3071,14 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) * so we use a single bit in .need_tlb_flush for all 4 threads. */ if (prev_cpu != pcpu) { - if (prev_cpu >= 0 && - cpu_first_tlb_thread_sibling(prev_cpu) != - cpu_first_tlb_thread_sibling(pcpu)) - radix_flush_cpu(kvm, prev_cpu, vcpu); + if (prev_cpu >= 0) { + if (cpu_first_tlb_thread_sibling(prev_cpu) != + cpu_first_tlb_thread_sibling(pcpu)) + radix_flush_cpu(kvm, prev_cpu, vcpu); + + smp_call_function_single(prev_cpu, + do_migrate_away_vcpu, vcpu, 1); + } if (nested) nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu; else diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index eae9d806d704..a45ba584c734 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -1049,15 +1049,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - if (kvm_is_radix(kvm)) { - /* - * Since this is radix, do a eieio; tlbsync; ptesync sequence - * in case we interrupted the guest between a tlbie and a - * ptesync. 
- */ - asm volatile("eieio; tlbsync; ptesync"); - } - /* * cp_abort is required if the processor supports local copy-paste * to clear the copy buffer that was under control of the guest. From patchwork Mon Oct 4 16:00:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536236 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=RUJnCCc3; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQTy1p84z9t6h for ; Tue, 5 Oct 2021 03:02:46 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236334AbhJDQEe (ORCPT ); Mon, 4 Oct 2021 12:04:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234465AbhJDQEe (ORCPT ); Mon, 4 Oct 2021 12:04:34 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22254C061746 for ; Mon, 4 Oct 2021 09:02:45 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id b22so217616pls.1 for ; Mon, 04 Oct 2021 09:02:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iyZ5LHFchUzwewehQfyXIN1Gtjf00pJhqWY3AC6un9o=; b=RUJnCCc3E/nnL6wYPHxLPvUyXUOvCO5MtKmpet1oulLn3lO2XUdE4E0YHiz1ESLSmo jZm90E33B0KQ7Zn8KlzCI+l5wUs+YdNczQQcZL6rilJ94mf5WIm/Fs7ixVZKH6XxapvN 7DTX/bxqlkyTmXjFbqqwXRfxPBxZVwL7G/5lup1fPPgtsulX+a7+9VbsAtfQ5DNLbpdb SozuhhCB9VyW/dmE9QyLxipU8GxQq7UcYwmoPD0YpKjs1g/3Fx7aQOsXJiaD6Mu2fslO I9KSq4ON1foXvgDiER8Sx4iA+KwHTQ51v9mDbKtAztBlSFfBNTuV2Pkz0s2GXB81TJTk Su4Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iyZ5LHFchUzwewehQfyXIN1Gtjf00pJhqWY3AC6un9o=; b=imQDzsW604BGkUO6ln/LL54mBcn5IytFNsoHPRViI/CzRhyL1eLrUK2p8ZYT8AQwMm e29EXDSsMmIkc0Y1f19+YBxftPPWZZWscSUCsa7D1w8EaL2w+55abBKNIdz5ySG8+0Q7 rl4ohzZWsuQg17/tlrspwD/PWBm78lPTZ3jO/vbTq4Z1tOMOrrsn0Jt/QuUw0Z0DBbsI USgnj9jWeKS7Afvqcxg1fNSjFDgUj371PCksi8EMfHESDZjKktfYw5S1Ru5rkJkaPWYK /r0LCYEkIHDRHIGumR9GFH1DN1OZezIyryF0Ds+hy1Z7As351zxXBG3WMoa4d5WijitG 7nyA== X-Gm-Message-State: AOAM531HqVmwX0/DWi17ZD4Sc3t6kNdFiYRmVHkAM3hv2XXFDnEHRWCv E4CaGmueOhHG0IeXFNukoiNIJCHKIbU= X-Google-Smtp-Source: ABdhPJx9dS/yuRqgplT1tXygYaE+tzgDGvHd9GbuxXoPKZfj4fTAVaNjWT/6gRQl7sjMg39gLsnGDw== X-Received: by 2002:a17:90b:1649:: with SMTP id il9mr38111339pjb.206.1633363364512; Mon, 04 Oct 2021 09:02:44 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:44 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 43/52] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Date: Tue, 5 Oct 2021 02:00:40 +1000 Message-Id: <20211004160049.1338837-44-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb() is expensive and one can be avoided on nested guest dispatch. If the time checking code distinguishes between the L0 timer and the nested HV timer, then both can be tested in the same place with the same mftb() value. This also nicely illustrates the relationship between the L0 and nested HV timers. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_asm.h | 1 + arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++ arch/powerpc/kvm/book3s_hv_nested.c | 5 ----- 3 files changed, 13 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h index fbbf3cec92e9..d68d71987d5c 100644 --- a/arch/powerpc/include/asm/kvm_asm.h +++ b/arch/powerpc/include/asm/kvm_asm.h @@ -79,6 +79,7 @@ #define BOOK3S_INTERRUPT_FP_UNAVAIL 0x800 #define BOOK3S_INTERRUPT_DECREMENTER 0x900 #define BOOK3S_INTERRUPT_HV_DECREMENTER 0x980 +#define BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER 0x1980 #define BOOK3S_INTERRUPT_DOORBELL 0xa00 #define BOOK3S_INTERRUPT_SYSCALL 0xc00 #define BOOK3S_INTERRUPT_TRACE 0xd00 diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d2a9c930f33e..342b4f125d03 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1486,6 +1486,10 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, run->ready_for_interrupt_injection = 1; switch (vcpu->arch.trap) { /* We're good on these - the host merely wanted to get our attention */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + WARN_ON_ONCE(1); /* Should never happen */ + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + fallthrough; case BOOK3S_INTERRUPT_HV_DECREMENTER: vcpu->stat.dec_exits++; r = RESUME_GUEST; @@ -1814,6 +1818,12 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) vcpu->stat.ext_intr_exits++; r = RESUME_GUEST; break; + /* These need to go to the nested HV */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + vcpu->stat.dec_exits++; + r = RESUME_HOST; + break; /* SR/HMI/PMI are HV interrupts that host has handled. 
Resume guest.*/ case BOOK3S_INTERRUPT_HMI: case BOOK3S_INTERRUPT_PERFMON: @@ -3975,6 +3985,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; + else if (*tb >= time_limit) /* nested time limit */ + return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER; vcpu->arch.ceded = 0; diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 7bed0b91245e..e57c08b968c0 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -375,11 +375,6 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) vcpu->arch.ret = RESUME_GUEST; vcpu->arch.trap = 0; do { - if (mftb() >= hdec_exp) { - vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; - r = RESUME_HOST; - break; - } r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr); } while (is_kvmppc_resume_guest(r)); From patchwork Mon Oct 4 16:00:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536237 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=ogBhi8K7; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQV05bqBz9t6h for ; Tue, 5 Oct 2021 03:02:48 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234595AbhJDQEh (ORCPT ); Mon, 4 Oct 2021 12:04:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43740 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231148AbhJDQEg (ORCPT ); Mon, 4 Oct 2021 12:04:36 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 99053C061746 for ; Mon, 4 Oct 2021 09:02:47 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id d13-20020a17090ad3cd00b0019e746f7bd4so4286048pjw.0 for ; Mon, 04 Oct 2021 09:02:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0RII5FvLpORli3bjpPpeJOyKoOvf8B8BgGYRmW8rRY0=; b=ogBhi8K7JWbFOHugXDNIy0mCsX74QETMzkHtBAU4TS5z/WBBp7AEOtgCWeBXb/FuD4 t/4SoXWsIj7b/Ot9OFQsW/vwSTzuXULp62CLORRj0ztmfGeClINdPt5pO5bgWnLj95rd 1Q6Zjs9JHuIiy5VlE9Or9EdGGu2KkhJnOtXFXmZY1lS7U/9gjjKW5BSi2TN7d3DtMVCc c1a/5S+rPl10iLJQBAfIwl/tlg29j8y4Pdv6lyYvQwhn7k8lV0k+xVRHdtKUamHcKLkL GaFKD0Enrefa/1SdaW2j5Xa5KxiU7RoGh8AnznvzBGA1XH6qmxpecvyfRSCzoudkqCZc XQ2Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0RII5FvLpORli3bjpPpeJOyKoOvf8B8BgGYRmW8rRY0=; b=Cuq3f5F2fpsEQIB2DqqNDHB7Uq1jEZKowmgBeA4OGoWWTtLbAUOCLTSBhBXkUVUnun zsotiS6LKCNKsqQy7oWb35FIykgc4AJtoL3CB6/5/yRRG+zRTydOQc4dqUQI4JAuYusL bqpQIvQo+qpHVirQI1APrSBmGQAIU60T4SdI7Qv2RfnWD5/iVvs0ivgZgm7HHPUGReSI 
Aw8pXt5778Dq1H4yHch/4Moe+tk1qxQueqMqt8SHPlzHd+c2iOU5Y8ETEeF7Ami5A3F4 fHZt52vew4tTmRH0jBUlRZDXGVWi6cWf0qrCa5YiCMEIEdefWpHPYFhnZQDrvdxcqqif k5Mw== X-Gm-Message-State: AOAM533O2Nix+1gjNE+bN8apv0ROGoBcThyh6hvMY0foDPot5fJOkW7K so/DpCrzAykV5AedvjYIrRzsM5cYW+s= X-Google-Smtp-Source: ABdhPJyZr4aLAllbcw3R0EGHzxDozwzPPqgm9xoViRYfVq+Epl71H80xIAMWDGyykTi+rPnpzUd1KQ== X-Received: by 2002:a17:90b:1d8f:: with SMTP id pf15mr30080478pjb.200.1633363366959; Mon, 04 Oct 2021 09:02:46 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:46 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 44/52] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Date: Tue, 5 Oct 2021 02:00:41 +1000 Message-Id: <20211004160049.1338837-45-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rearrange the MSR saving on entry so it does not follow the mtmsrd to disable interrupts, avoiding a possible RAW scoreboard stall. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 + arch/powerpc/kvm/book3s_hv.c | 18 ++----- arch/powerpc/kvm/book3s_hv_p9_entry.c | 66 +++++++++++++++--------- 3 files changed, 47 insertions(+), 39 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 0a319ed9c2fd..96f0fda50a07 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,6 +154,8 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 342b4f125d03..0bbef4587f41 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3878,6 +3878,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; + msr = mfmsr(); + save_p9_host_os_sprs(&host_os_sprs); /* @@ -3888,24 +3890,10 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns */ host_psscr = mfspr(SPRN_PSSCR_PR); - hard_irq_disable(); + kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) return 0; - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index a45ba584c734..646f487ebf97 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -632,6 +632,44 
@@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) } } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr) +{ + unsigned long msr_needed = 0; + + msr &= ~MSR_EE; + + /* MSR bits may have been cleared by context switch so must recheck */ + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr_needed |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr_needed |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr_needed |= MSR_VSX; + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) + msr_needed |= MSR_TM; + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. + */ + if ((msr & msr_needed) != msr_needed) { + msr |= msr_needed; + __mtmsrd(msr, 0); + } else { + __hard_irq_disable(); + } + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + + return msr; +} +EXPORT_SYMBOL_GPL(kvmppc_msr_hard_disable_set_facilities); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct p9_host_os_sprs host_os_sprs; @@ -665,6 +703,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; + /* Save MSR for restore, with EE clear. */ + msr = mfmsr() & ~MSR_EE; + host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_psscr = mfspr(SPRN_PSSCR_PR); @@ -686,35 +727,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc save_p9_host_os_sprs(&host_os_sprs); - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. - */ - hard_irq_disable(); + msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) { trap = 0; goto out; } - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* MSR may have been updated */ From patchwork Mon Oct 4 16:00:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536238 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=jGT2p6ld; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQV26HjCz9t5G for ; Tue, 5 Oct 2021 03:02:50 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236335AbhJDQEj (ORCPT ); Mon, 4 Oct 2021 12:04:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43754 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234465AbhJDQEi (ORCPT ); Mon, 4 Oct 2021 12:04:38 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3A30C061745 for ; Mon, 4 Oct 2021 09:02:49 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id oa12-20020a17090b1bcc00b0019f715462a8so293855pjb.3 for ; Mon, 04 Oct 2021 09:02:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EPQZDhgPqi5g0CfouGHl1rvGEVKlv1XdekS1P2LkqM4=; b=jGT2p6ldMXThpfkhQJu2ytHRGELk/8SfvLk0Prd5doutrh1lOyv8tPlyn/VzDmrg4R VR7ZqyI/ECY/46f1BvRPyfK8D3Ozx68r1ikoW5krfQMGhDruugUfFMqGcjBtfUtNtnbM 1ffrST3fiot3MZt7nw/n3/DY7Ni1z7lQoRJLzrO3C5K627JOO1wytUOnJNYsNov8IRJz QUartURyW0NwLMPrQC6MaP3jbTwux2IZowTcF+6i4GRMFked9KIU3v8IuB20VPjnNwlX yp/yjgdZ+yYi0ruAqO9fdlpOzQ6xl9Bn2wTT90Pbf4DcCWumwn5inhcFC8m+SyCAZYG4 /tIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EPQZDhgPqi5g0CfouGHl1rvGEVKlv1XdekS1P2LkqM4=; b=caXt0cG7k1tZrN9kONxQk7bjBGOhbrauRxb123o+z9ZkQXFkaNSJqp7+1kGB942NwM UDO8LS7kLbA+D1sOWa+SPMs2e1bZsim4rtZ80oYrXTYpcViKdj9ReQHT+2EGl3tlFRDN VjZQy/9uqYAsynQhDoXy0+VvuMht3QuwU3S5M/MZIJnx5qzR3hMuiLFr1/uMGrNXNs6Y ekA3Mz6pFfzJAGzDFmdQlq3QC55ueHfEy/FeU7VPEc7896nFqoI2CgyVUJkfQSJJtiKd Pfmo6Q5Srbc3cX8SZSCxgd9MlPnfM1+LKO++OCMY/jDktgBcTahbNJaoQWugX6TFx0jO Lm7A== X-Gm-Message-State: AOAM530QoPlfcdez1s9HhQufldiK6d65Ox0ZnuhyQO1itLZNM5tf1rA/ s8rmZIOFNo0uZmuIz/mMy+btRl+Qxwk= X-Google-Smtp-Source: ABdhPJyKmvYE4L++eCYVyA+2RJsqZeQTlLo+rudyoP6QA7smhpqFh7ozmGlrFA6ads/+gv+RXkon7A== X-Received: by 2002:a17:90a:df8a:: with SMTP id p10mr4053591pjv.137.1633363369138; Mon, 04 Oct 2021 09:02:49 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:48 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 45/52] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving Date: Tue, 5 Oct 2021 02:00:42 +1000 Message-Id: <20211004160049.1338837-46-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org slbmfee/slbmfev instructions are very expensive, moreso than a regular mfspr instruction, so minimising them significantly improves hash guest exit performance. The slbmfev is only required if slbmfee found a valid SLB entry. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 22 ++++++++++++++++++---- 1 file changed, 18 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 646f487ebf97..99ce5805ea28 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -487,10 +487,22 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator #define accumulate_time(vcpu, next) do {} while (0) #endif -static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev) +static inline u64 mfslbv(unsigned int idx) { - asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx)); - asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx)); + u64 slbev; + + asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (idx)); + + return slbev; +} + +static inline u64 mfslbe(unsigned int idx) +{ + u64 slbee; + + asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (idx)); + + return slbee; } static inline void mtslb(u64 slbee, u64 slbev) @@ -620,8 +632,10 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) */ for (i = 0; i < vcpu->arch.slb_nr; i++) { u64 slbee, slbev; - mfslb(i, &slbee, &slbev); + + slbee = mfslbe(i); if (slbee & SLB_ESID_V) { + slbev = mfslbv(i); vcpu->arch.slb[nr].orige = slbee | i; vcpu->arch.slb[nr].origv = slbev; nr++; From patchwork Mon Oct 4 16:00:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536239 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=cpBuA2+p; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQV468jwz9t5G for ; Tue, 5 Oct 2021 03:02:52 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234465AbhJDQEl (ORCPT ); Mon, 4 Oct 2021 12:04:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231148AbhJDQEk (ORCPT ); Mon, 4 Oct 2021 12:04:40 -0400 Received: from 
mail-pg1-x52a.google.com (mail-pg1-x52a.google.com [IPv6:2607:f8b0:4864:20::52a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06B30C061745 for ; Mon, 4 Oct 2021 09:02:52 -0700 (PDT) Received: by mail-pg1-x52a.google.com with SMTP id h3so6337768pgb.7 for ; Mon, 04 Oct 2021 09:02:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vCwWZGrYE8T5YkED8uLzl3raa2ughTUaoc+B3T67xTw=; b=cpBuA2+p/mQO3rKiuksM1s0lHQmv86BzOqNOKncg79TDpe6EJeli+QnzXdKgMzlRLs 99BA0E+OWvTqJPh8gRe32pLGGuokKo0TGClU6753EomRo4z7JEF7FV4vSmknBKqUMXPm hwoEVffSfEDOxDcBbMbIZooGE0vkzOpzzLqiAW0OAy178zu6LQoXnSm1rontg7QtjKIJ CrUEV//hc/HaWMowRnpjee2Fsi55Qgic1DEH8r3BCTypshYWzeUX1SDhonu7acepCcs1 RPaazrn1u8lpULKgEUnh/hkl5Bra5Xk7o0vru3LHwsknbiDMaph25u/VVcX1n8wwYSB5 cneA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vCwWZGrYE8T5YkED8uLzl3raa2ughTUaoc+B3T67xTw=; b=Y7zRmTM/FAkl6KUhPP9AN61A7miBRRuXY0pg+pJIxF5vgTLpsva7OedEm9OVPGk0n3 buJlsxHda1pVtilAPlJqfK5bUEVUHgU6lpm7A9N+e0uViFKbXC8GZNaSmp/lGuypkh/0 4UV0ChHQ5Q7j42zWl7cmJkLCqFb/LctJgGLRVhPFibtYZ2A8wZy5pN8QEOGfY1qVMauK DsNrioRW7HNlQrbYLM+SWT/TQp4vEYv+9QtM8JvgFVglxsiYwwq8Xo9BA52DkAj26XcS jNyhAxOQ/skh7YilGTP017WpQ4fb9NUIDbJwyaIvORBbDixX5riOZSlJllmpJ3pxHQ07 zixg== X-Gm-Message-State: AOAM5339UIevBq2Wyh4WIgGBDEQd0fOUeUvyYrwEpYoU960gMEh9aSX1 0C6FUUMS+QFrSTFXekOaOqvVH/Bb5Y0= X-Google-Smtp-Source: ABdhPJwsgIX03roCMtb6NAeSHPwc6TBmSZsvJ+1epu+plLdCue3rqiWj47CVjfAgsnqf0VTXdwC5GQ== X-Received: by 2002:a05:6a00:c81:b029:30e:21bf:4c15 with SMTP id a1-20020a056a000c81b029030e21bf4c15mr25167641pfv.70.1633363371378; Mon, 04 Oct 2021 09:02:51 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:51 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 46/52] KVM: PPC: Book3S HV P9: Avoid changing MSR[RI] in entry and exit Date: Tue, 5 Oct 2021 02:00:43 +1000 Message-Id: <20211004160049.1338837-47-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org kvm_hstate.in_guest provides the equivalent of MSR[RI]=0 protection, and it covers the existing MSR[RI]=0 section in late entry and early exit, so clearing and setting MSR[RI] in those cases does not actually do anything useful. Remove the RI manipulation and replace it with comments. Make the in_guest memory accesses a bit closer to a proper critical section pattern. This speeds up guest entry/exit performance. This also removes the MSR[RI] warnings which aren't very interesting and would cause crashes if they hit due to causing an interrupt in non-recoverable code. 
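To make the idea concrete, here is a minimal sketch of the in_guest critical-section pattern that replaces the MSR[RI] toggling. It is condensed from the diff below (all names appear there), not a complete function:

	/*
	 * Sketch: while in_guest is set, an SRESET or MCE taken in HV mode
	 * is handled by kvmppc_p9_bad_interrupt, which does not return, so
	 * explicitly clearing MSR[RI] around this window adds nothing.
	 */
	WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_HV_P9);
	barrier();		/* open the in_guest critical section */

	/* ... switch MMU to guest, enter the guest, save exit state ... */

	barrier();		/* close the in_guest critical section */
	WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_NONE);
	/* interrupts are recoverable again from this point */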
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 50 ++++++++++++--------------- 1 file changed, 23 insertions(+), 27 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 99ce5805ea28..5a71532a3adf 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -829,7 +829,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc * But TM could be split out if this would be a significant benefit. */ - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; + /* + * MSR[RI] does not need to be cleared (and is not, for radix guests + * with no prefetch bug), because in_guest is set. If we take a SRESET + * or MCE with in_guest set but still in HV mode, then + * kvmppc_p9_bad_interrupt handles the interrupt, which effectively + * clears MSR[RI] and doesn't return. + */ + WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_HV_P9); + barrier(); /* Open in_guest critical section */ /* * Hash host, hash guest, or radix guest with prefetch bug, all have @@ -841,14 +849,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc save_clear_host_mmu(kvm); - if (kvm_is_radix(kvm)) { + if (kvm_is_radix(kvm)) switch_mmu_to_guest_radix(kvm, vcpu, lpcr); - if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - __mtmsrd(0, 1); /* clear RI */ - - } else { + else switch_mmu_to_guest_hpt(kvm, vcpu, lpcr); - } /* TLBIEL uses LPID=LPIDR, so run this after setting guest LPID */ kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); @@ -903,19 +907,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.regs.gpr[3] = local_paca->kvm_hstate.scratch2; /* - * Only set RI after reading machine check regs (DAR, DSISR, SRR0/1) - * and hstate scratch (which we need to move into exsave to make - * re-entrant vs SRESET/MCE) + * After reading machine check regs (DAR, DSISR, SRR0/1) and hstate + * scratch (which we need to move into exsave to make re-entrant vs + * SRESET/MCE), register state is protected from reentrancy. However + * timebase, MMU, among other state is still set to guest, so don't + * enable MSR[RI] here. It gets enabled at the end, after in_guest + * is cleared. + * + * It is possible an NMI could come in here, which is why it is + * important to save the above state early so it can be debugged. */ - if (ri_set) { - if (unlikely(!(mfmsr() & MSR_RI))) { - __mtmsrd(MSR_RI, 1); - WARN_ON_ONCE(1); - } - } else { - WARN_ON_ONCE(mfmsr() & MSR_RI); - __mtmsrd(MSR_RI, 1); - } vcpu->arch.regs.gpr[9] = exsave[EX_R9/sizeof(u64)]; vcpu->arch.regs.gpr[10] = exsave[EX_R10/sizeof(u64)]; @@ -973,13 +974,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HSRR0, vcpu->arch.regs.nip); mtspr(SPRN_HSRR1, vcpu->arch.shregs.msr); - - /* - * tm_return_to_guest re-loads SRR0/1, DAR, - * DSISR after RI is cleared, in case they had - * been clobbered by a MCE. 
- */ - __mtmsrd(0, 1); /* clear RI */ goto tm_return_to_guest; } } @@ -1079,7 +1073,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc restore_p9_host_os_sprs(vcpu, &host_os_sprs); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + barrier(); /* Close in_guest critical section */ + WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_NONE); + /* Interrupts are recoverable at this point */ /* * cp_abort is required if the processor supports local copy-paste From patchwork Mon Oct 4 16:00:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536240 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=PQ/QJx0l; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQV765Q5z9t5G for ; Tue, 5 Oct 2021 03:02:55 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231148AbhJDQEo (ORCPT ); Mon, 4 Oct 2021 12:04:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43780 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234670AbhJDQEn (ORCPT ); Mon, 4 Oct 2021 12:04:43 -0400 Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com [IPv6:2607:f8b0:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E1D98C061745 for ; Mon, 4 Oct 2021 09:02:54 -0700 (PDT) Received: by mail-pf1-x42d.google.com with SMTP id s16so14946880pfk.0 for ; Mon, 04 Oct 2021 09:02:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nuVTSGJxHnIlv0kTLuX5DnMA5uRT7Y6LvkwGWm5jYwY=; b=PQ/QJx0lA7uR3YdVgf0IYhBobUcA59zS103cQx4nivXznPw0uOZyQu55O9Q6Qlo5Bh r/1Uf+N2GlqO7B/DW1oqqp+ogZvr61O720EVf+l3GlHUyXHKafz+YDcgQuK85HChktPi M3Wi6vgWw9OE6IQuhU1vxufXL8ALTMwpHyWmWqjDF++xoqa+lsrQr5L+tAfGBXQDhbJa ubR4gu6sBYkMB0TZsqZnqw1dkSLCQBOmOlQxwz36cU159cb3OTV1pCsOFaOpFB+lTmq4 HGEdKOp7j1ErL+bgnufMcneP4iAwLiGcC9uC9bMEFBZLERRM2jsYdRBGwyxJQVhmbVw5 ScLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nuVTSGJxHnIlv0kTLuX5DnMA5uRT7Y6LvkwGWm5jYwY=; b=lm4XDxcgjkmkCUIUxixX5KW+IFIUxFAP0H4wks4sxRzqeh0vtz6W9cYLZpU6nG2ZV2 SS7amHugeYUiFzaUrRKrqgKE4H95N4RasnqyeMfOQZ+px4dQTreGZOu2TCGBEZIJRlqy YLUZYI6qlvuOtLC/RMXyHrv0RU9nYNRUZ/SGeOXbRJ2UVmMbP0SuIIYhU2q8CFnogg2E JZIQDbrlzb6++q310bJsYHjtvRrcVwVRNU+MlT1DZllOMxUaa7m5GCtyW+myH2mkpIaz rVeaBq5AbHbF0KsmNNtu8b0xg5y3NjGNNjGR/mWbJ1s5AW1V+6iwndpLkBiClPXTSp8T VHwQ== X-Gm-Message-State: AOAM5320FJ37GDAjO5U9nVz997w2Cc38LHJWpWhr7vjVrdmZcisES/Nd pOzcgIsfNeCCWkns4y0iqyAoEAdHe7o= X-Google-Smtp-Source: ABdhPJwtq6ifviyS0m2a4kL00d7vB9BI9GC3qc5G52IEgsAUrkqJgvUOBLf/09hb57HFJQypT02h3g== X-Received: by 2002:aa7:80d1:0:b029:399:ce3a:d617 with SMTP id 
a17-20020aa780d10000b0290399ce3ad617mr26833434pfn.16.1633363373616; Mon, 04 Oct 2021 09:02:53 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:53 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 47/52] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready Date: Tue, 5 Oct 2021 02:00:44 +1000 Message-Id: <20211004160049.1338837-48-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The mmu will almost always be ready. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0bbef4587f41..6e072e2e130a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4408,7 +4408,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc->runner = vcpu; /* See if the MMU is ready to go */ - if (!kvm->arch.mmu_ready) { + if (unlikely(!kvm->arch.mmu_ready)) { r = kvmhv_setup_mmu(vcpu); if (r) { run->exit_reason = KVM_EXIT_FAIL_ENTRY; From patchwork Mon Oct 4 16:00:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536241 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=i5mlYDaS; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQV963n8z9t5G for ; Tue, 5 Oct 2021 03:02:57 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236337AbhJDQEq (ORCPT ); Mon, 4 Oct 2021 12:04:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234670AbhJDQEp (ORCPT ); Mon, 4 Oct 2021 12:04:45 -0400 Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com [IPv6:2607:f8b0:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 75741C061745 for ; Mon, 4 Oct 2021 09:02:56 -0700 (PDT) Received: by mail-pf1-x42d.google.com with SMTP id q23so14902872pfs.9 for ; Mon, 04 Oct 2021 09:02:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YNeeHAwcCGfFVZGJd6KmD3cGTet2+e2859nzFZQ6v6c=; b=i5mlYDaSPcYMV3O/Sfv9CWWiupBmzJ+B3CNOaTr3nanCjPzd+YLaboTiGyI+aZCxFT 5R8z0yOGcO9sh61o9ypZamYbIAWVtAEuMjNyxJHAbnKF9dElBYEuGgdbM3ImfErVM7vm VJ+oKcMk91WWoIqpH598TvWIAyuB1nxAkzdOFB94B2yjeHPKWcmOc/ZAnCovVim8VMOf ymW6TLO6C+k2jTJQYCnG6YOuN8Zz57lkxUTLiSQp7R2VJA8cqR/YF2Obzneoq5p9kV5P 
MYzlZvuHlORDwHcg6AZVSWczfIoDk1d2GNtefmcaxbvNfZY4TfM/u2x+FCn4rvtaCYMN aqag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YNeeHAwcCGfFVZGJd6KmD3cGTet2+e2859nzFZQ6v6c=; b=zKqcy6OKsnpdgP2lj23aSro6QWi8m14qaZmp8+YNXfYlnqvDYv98xvDorbzIpFIAj7 +GkqqR6QJY4+ttKwskbbM2DsoIP/x7W+pK4NYy0+mqPGOojDyXlEUPW2QDftoXu11CCk adWeAVEBdkNb6UL1k/N/R4r+BJDgl33YODMu7x4te2mSrk/tW57AfguzCuViKbNvoOFc GDNrnNEvvamCnuqiTuXMByXDkqp1kuZtTw7hczWlHR5EIhyBsTIPSKfoumsXXVesVDVB jX3sfyPqh1zYn55dYI5L8YmTKmZp+O6ZPvkBK4l2nzNf+jcBouNhZ3Jwtt5PCMuLub1g 9ACg== X-Gm-Message-State: AOAM531PUue5+SbGa+6Msp3WyXuBnaglB6DuJ1cK0zE8eFCTCkt9vYfi 1j3z5cEGpcoA4JsRpvhIqseKo6MKoN4= X-Google-Smtp-Source: ABdhPJxg3+4MAtPcy+Y3J78KAgivx5YlJ4LEyQzAIq0g6jIgEGQeAxgmfr6vrz+2B9AyA1bl9U3S8g== X-Received: by 2002:aa7:824b:0:b0:44c:22ad:2763 with SMTP id e11-20020aa7824b000000b0044c22ad2763mr15715789pfn.63.1633363375814; Mon, 04 Oct 2021 09:02:55 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:55 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 48/52] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit Date: Tue, 5 Oct 2021 02:00:45 +1000 Message-Id: <20211004160049.1338837-49-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org cpu_in_guest is set to determine if a CPU needs to be IPI'ed to exit the guest and notice the need_tlb_flush bit. This can be implemented as a global per-CPU pointer to the currently running guest instead of per-guest cpumasks, saving 2 atomics per entry/exit. P7/8 doesn't require cpu_in_guest, nor does a nested HV (only the L0 does), so move it to the P9 HV path. 
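As a rough sketch of the mechanism (condensed from the diff below, not a drop-in replacement): the P9 entry path publishes the running guest in a per-CPU pointer, and the flush path IPIs only sibling threads whose pointer matches the target guest:

	static DEFINE_PER_CPU(struct kvm *, cpu_in_guest);

	/* Entry/exit: publish which guest this CPU is running. */
	__this_cpu_write(cpu_in_guest, kvm);
	trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
	__this_cpu_write(cpu_in_guest, NULL);

	/* Flush side: IPI only siblings actually running this guest. */
	for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu);
			i += cpu_tlb_thread_sibling_step()) {
		struct kvm *running = *per_cpu_ptr(&cpu_in_guest, i);

		if (running == kvm)
			smp_call_function_single(i, do_nothing, NULL, 1);
	}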
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 1 - arch/powerpc/include/asm/kvm_host.h | 1 - arch/powerpc/kvm/book3s_hv.c | 38 +++++++++++++----------- 3 files changed, 21 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 96f0fda50a07..fe07558173ef 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -44,7 +44,6 @@ struct kvm_nested_guest { struct mutex tlb_lock; /* serialize page faults and tlbies */ struct kvm_nested_guest *next; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; short prev_cpu[NR_CPUS]; u8 radix; /* is this nested guest radix */ }; diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 92925f82a1e3..4de418f6c0a2 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -287,7 +287,6 @@ struct kvm_arch { u32 online_vcores; atomic_t hpte_mod_interest; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; u8 radix; u8 fwnmi_enabled; u8 secure_guest; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6e072e2e130a..6574e8a3731e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3009,30 +3009,33 @@ static void kvmppc_release_hwthread(int cpu) tpaca->kvm_hstate.kvm_split_mode = NULL; } +static DEFINE_PER_CPU(struct kvm *, cpu_in_guest); + static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; - cpumask_t *cpu_in_guest; int i; cpu = cpu_first_tlb_thread_sibling(cpu); - if (nested) { + if (nested) cpumask_set_cpu(cpu, &nested->need_tlb_flush); - cpu_in_guest = &nested->cpu_in_guest; - } else { + else cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush); - cpu_in_guest = &kvm->arch.cpu_in_guest; - } /* - * Make sure setting of bit in need_tlb_flush precedes - * testing of cpu_in_guest bits. The matching barrier on - * the other side is the first smp_mb() in kvmppc_run_core(). + * Make sure setting of bit in need_tlb_flush precedes testing of + * cpu_in_guest. The matching barrier on the other side is hwsync + * when switching to guest MMU mode, which happens between + * cpu_in_guest being set to the guest kvm, and need_tlb_flush bit + * being tested. 
*/ smp_mb(); for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu); - i += cpu_tlb_thread_sibling_step()) - if (cpumask_test_cpu(i, cpu_in_guest)) + i += cpu_tlb_thread_sibling_step()) { + struct kvm *running = *per_cpu_ptr(&cpu_in_guest, i); + + if (running == kvm) smp_call_function_single(i, do_nothing, NULL, 1); + } } static void do_migrate_away_vcpu(void *arg) @@ -3100,7 +3103,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) { int cpu; struct paca_struct *tpaca; - struct kvm *kvm = vc->kvm; cpu = vc->pcpu; if (vcpu) { @@ -3111,7 +3113,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) cpu += vcpu->arch.ptid; vcpu->cpu = vc->pcpu; vcpu->arch.thread_cpu = cpu; - cpumask_set_cpu(cpu, &kvm->arch.cpu_in_guest); } tpaca = paca_ptrs[cpu]; tpaca->kvm_hstate.kvm_vcpu = vcpu; @@ -3829,7 +3830,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) kvmppc_release_hwthread(pcpu + i); if (sip && sip->napped[i]) kvmppc_ipi_thread(pcpu + i); - cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest); } spin_unlock(&vc->lock); @@ -3997,8 +3997,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } } else { + struct kvm *kvm = vcpu->kvm; + kvmppc_xive_push_vcpu(vcpu); + + __this_cpu_write(cpu_in_guest, kvm); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); + __this_cpu_write(cpu_in_guest, NULL); + if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4023,7 +4029,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } kvmppc_xive_pull_vcpu(vcpu); - if (kvm_is_radix(vcpu->kvm)) + if (kvm_is_radix(kvm)) vcpu->arch.slb_max = 0; } @@ -4500,8 +4506,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, powerpc_local_irq_pmu_restore(flags); - cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); - preempt_enable(); /* From patchwork Mon Oct 4 16:00:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536242 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=WPMXaN0H; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQVD75mpz9t5G for ; Tue, 5 Oct 2021 03:03:00 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236340AbhJDQEs (ORCPT ); Mon, 4 Oct 2021 12:04:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43800 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234670AbhJDQEr (ORCPT ); Mon, 4 Oct 2021 12:04:47 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF575C061746 for ; Mon, 4 Oct 2021 09:02:58 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id m5so2168609pfk.7 for ; Mon, 04 Oct 2021 09:02:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=WSA9KNw+NCGluNGMWhmCZ5xJhNmCaVO+jtqY+PDjziM=; b=WPMXaN0HlBOdwuVrNVxQ5RMrBbpfht7lKCuB4x2rsv0FdYJYvkzuTA/qPzublqTx95 V/sQmuSbmw9Pkp9fYQyx/elGS3jSgYmjtT5ZfPwtTwj1B4C6FdsNNH2raWpAWi5GgtPv 0JJt+kCjevk/k6j6PYXihIYGG21oWuyWzW5x/uWrNj6iXoZ6XIu0ISjd/27kNYNpqp1L Ttug1diMyrFEF0DNJytSonFTxG/yihQn53sBWCFB9DbnrF0ObFcbWSQ02a7atRtjSsmR qElsjPBAYU/TNiBwdZKeHQ1qQcAxMN6ZqKrDTaov89P7KIBDGbux6chyzkcXN2jjPyHf ONew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=WSA9KNw+NCGluNGMWhmCZ5xJhNmCaVO+jtqY+PDjziM=; b=3P+1y6QFDBrAgbWIQT5QPkJ/M1sEoTUNcgKZ/rbi69iz5Jr0y4ydQVokxqSl6WMdJj 3uNorijgqrDrV5GN0Nz8hMrW1bvRZG0gjb4P0yGRo+dxsJtgdcxCnXhQDGm6lSP6Xfmu seSwvaz8Xau+SbIHojftxTIvHbaR+M7VMXYE6zVjmjUtw5dX4Z1d+ZQhygYQrng8RWWH CL1p+0WUxwMogY2yYvh+OVZozXEDvA8up/NuIc9WhMmonJFmhEQjYMQ54AraOu2Gh+XQ PpzQL2+SVSZA4/c9p64yhj54w5RzpKuG2GK3BIwUhvMzA8p2e4Yx14nzzIBt+yqe5slJ QS1w== X-Gm-Message-State: AOAM533uFCKQ/BesVE0JCSElTX3niLOTgMQfbwMBT29VfKLnHD2/ES8i a+MZdkSMIoPRI9IsSe9/L1NhRXw/L8M= X-Google-Smtp-Source: ABdhPJwz5HJuEcc3d2byvuLv7N/j2pw5/8BrlF0wHgVsv794DR969uoaRKzkoLcaXhpS8X1M/ru7Cw== X-Received: by 2002:a05:6a00:188b:b0:44c:70a8:cf4 with SMTP id x11-20020a056a00188b00b0044c70a80cf4mr2241186pfh.85.1633363377978; Mon, 04 Oct 2021 09:02:57 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. [115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:57 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 49/52] KVM: PPC: Book3S HV P9: Remove most of the vcore logic Date: Tue, 5 Oct 2021 02:00:46 +1000 Message-Id: <20211004160049.1338837-50-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path always uses one vcpu per vcore, so none of the vcore, locks, stolen time, blocking logic, shared waitq, etc., is required. Remove most of it. 
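For illustration, a condensed sketch of the per-vcpu halt loop that takes the place of the vcore-wide kvmppc_vcore_blocked() path on P9 (taken from the diff below, with tracepoints trimmed):

	prepare_to_rcuwait(&vcpu->wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (signal_pending(current)) {
			vcpu->stat.signal_exits++;
			run->exit_reason = KVM_EXIT_INTR;
			vcpu->arch.ret = -EINTR;
			break;
		}
		if (kvmppc_vcpu_check_block(vcpu))
			break;
		schedule();
	}
	finish_rcuwait(&vcpu->wait);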
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 147 ++++++++++++++++++++--------------- 1 file changed, 85 insertions(+), 62 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6574e8a3731e..d614f83c7b3f 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -276,6 +276,8 @@ static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -285,6 +287,8 @@ static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { vc->stolen_tb += tb - vc->preempt_tb; @@ -297,7 +301,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -321,7 +330,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) kvmppc_core_start_stolen(vc, now); @@ -673,6 +687,8 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) u64 p; unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); p = vc->stolen_tb; if (vc->vcore_state != VCORE_INACTIVE && @@ -695,13 +711,19 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; now = tb; - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + stolen = 0; + } else { + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + } + if (!dt || !vpa) return; memset(dt, 0, sizeof(struct dtl_entry)); @@ -898,13 +920,14 @@ static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target) * mode handler is not called but no other threads are in the * source vcore. 
*/ - - spin_lock(&vcore->lock); - if (target->arch.state == KVMPPC_VCPU_RUNNABLE && - vcore->vcore_state != VCORE_INACTIVE && - vcore->runner) - target = vcore->runner; - spin_unlock(&vcore->lock); + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + spin_lock(&vcore->lock); + if (target->arch.state == KVMPPC_VCPU_RUNNABLE && + vcore->vcore_state != VCORE_INACTIVE && + vcore->runner) + target = vcore->runner; + spin_unlock(&vcore->lock); + } return kvm_vcpu_yield_to(target); } @@ -3125,13 +3148,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } -/* Old path does this in asm */ -static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) -{ - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; -} - static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -3220,6 +3236,8 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp = this_cpu_ptr(&preempted_vcores); + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + vc->vcore_state = VCORE_PREEMPT; vc->pcpu = smp_processor_id(); if (vc->num_threads < threads_per_vcore(vc->kvm)) { @@ -3236,6 +3254,8 @@ static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); @@ -3964,7 +3984,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { - struct kvmppc_vcore *vc = vcpu->arch.vcore; u64 next_timer; int trap; @@ -3980,9 +3999,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_subcore_enter_guest(); - vc->entry_exit_map = 1; - vc->in_guest = 1; - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4035,9 +4051,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - vc->entry_exit_map = 0x101; - vc->in_guest = 0; - kvmppc_subcore_exit_guest(); return trap; @@ -4103,6 +4116,13 @@ static bool kvmppc_vcpu_woken(struct kvm_vcpu *vcpu) return false; } +static bool kvmppc_vcpu_check_block(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + return true; + return false; +} + /* * Check to see if any of the runnable vcpus on the vcore have pending * exceptions or are no longer ceded @@ -4113,7 +4133,7 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc) int i; for_each_runnable_thread(i, vcpu, vc) { - if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + if (kvmppc_vcpu_check_block(vcpu)) return 1; } @@ -4130,6 +4150,8 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) int do_sleep = 1; u64 block_ns; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Poll for pending exceptions and ceded state */ cur = start_poll = ktime_get(); if (vc->halt_poll_ns) { @@ -4407,11 +4429,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; vcpu->arch.run_task = current; vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; - vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; - vc->runnable_threads[0] = vcpu; - vc->n_runnable = 1; - vc->runner = vcpu; /* See if the MMU is ready to go */ if (unlikely(!kvm->arch.mmu_ready)) { @@ -4429,11 +4447,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); - 
init_vcore_to_run(vc); - preempt_disable(); pcpu = smp_processor_id(); - vc->pcpu = pcpu; if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); @@ -4462,21 +4477,23 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } - tb = mftb(); + if (vcpu->arch.timer_running) { + hrtimer_try_to_cancel(&vcpu->arch.dec_timer); + vcpu->arch.timer_running = 0; + } - vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); - vc->preempt_tb = TB_NIL; + tb = mftb(); - kvmppc_clear_host_core(pcpu); + vcpu->cpu = pcpu; + vcpu->arch.thread_cpu = pcpu; + local_paca->kvm_hstate.kvm_vcpu = vcpu; + local_paca->kvm_hstate.ptid = 0; + local_paca->kvm_hstate.fake_suspend = 0; - local_paca->kvm_hstate.napping = 0; - local_paca->kvm_hstate.kvm_split_mode = NULL; - kvmppc_start_thread(vcpu, vc); + vc->pcpu = pcpu; // for kvmppc_create_dtl_entry kvmppc_create_dtl_entry(vcpu, vc, tb); - trace_kvm_guest_enter(vcpu); - vc->vcore_state = VCORE_RUNNING; - trace_kvmppc_run_core(vc, 0); + trace_kvm_guest_enter(vcpu); guest_enter_irqoff(); @@ -4498,11 +4515,10 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, set_irq_happened(trap); - kvmppc_set_host_core(pcpu); - guest_exit_irqoff(); - kvmppc_stop_thread(vcpu); + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; powerpc_local_irq_pmu_restore(flags); @@ -4529,28 +4545,31 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, } vcpu->arch.ret = r; - if (is_kvmppc_resume_guest(r) && vcpu->arch.ceded && - !kvmppc_vcpu_woken(vcpu)) { + if (is_kvmppc_resume_guest(r) && !kvmppc_vcpu_check_block(vcpu)) { kvmppc_set_timer(vcpu); - while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) { + + prepare_to_rcuwait(&vcpu->wait); + for (;;) { + set_current_state(TASK_INTERRUPTIBLE); if (signal_pending(current)) { vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; break; } - spin_lock(&vc->lock); - kvmppc_vcore_blocked(vc); - spin_unlock(&vc->lock); + + if (kvmppc_vcpu_check_block(vcpu)) + break; + + trace_kvmppc_vcore_blocked(vc, 0); + schedule(); + trace_kvmppc_vcore_blocked(vc, 1); } + finish_rcuwait(&vcpu->wait); } vcpu->arch.ceded = 0; - vc->vcore_state = VCORE_INACTIVE; - trace_kvmppc_run_core(vc, 1); - done: - kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; @@ -4632,7 +4651,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_save_current_sprs(); - vcpu->arch.waitp = &vcpu->arch.vcore->wait; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; @@ -5094,6 +5114,9 @@ void kvmppc_alloc_host_rm_ops(void) int cpu, core; int size; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + /* Not the first time here ? 
*/ if (kvmppc_host_rm_ops_hv != NULL) return; From patchwork Mon Oct 4 16:00:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536243 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=XuviA3b7; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by bilbo.ozlabs.org (Postfix) with ESMTP id 4HNQVG4wXXz9t5G for ; Tue, 5 Oct 2021 03:03:02 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234670AbhJDQEu (ORCPT ); Mon, 4 Oct 2021 12:04:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43810 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236338AbhJDQEt (ORCPT ); Mon, 4 Oct 2021 12:04:49 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B49B7C061745 for ; Mon, 4 Oct 2021 09:03:00 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id g13-20020a17090a3c8d00b00196286963b9so3080142pjc.3 for ; Mon, 04 Oct 2021 09:03:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iAp5cZPlH2CQn7Ni2mzcHZSZ7/uct2D+QzsCLoH2W/0=; b=XuviA3b7q6DWcTCbVp2sJoDpCn3wytM5lazWjg5MGsb/SYjmiZ4E5opHkuVHRuJYIl gxi0YzWtIuZ2lc95Lm+FLSCMhTp4vzHU3c4TkEjVlrpqX15E7PJvmXH2x6JX5pFfzPl4 ZiAKVkhED30oODf3am+0NypHe1VBrwVPKMO1hV2To+MTGv8WyL4jTGlJIo9Q6kqturtB +TgkqMH/CZNdPBy7OXUk+GMyaT7jZUYGI4XgQYY5lrF5C+O7bl3BjE4Zw+y6ItUtrMjG Q6FH8adB0z8pd09cH7Uai2IZwtDbbjI7pfc2+gXYsQrx9WvB9HIUgGGAlQpG4OQSe2Hm qq4g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iAp5cZPlH2CQn7Ni2mzcHZSZ7/uct2D+QzsCLoH2W/0=; b=5QMPq0tWxCx3Hi46JSOisUYQRBiV7E781kJnpcBexaR/cR01wjiuAp3jsPFI6wtdSf q4OnUkcfioNiLfymd6/WD0tGPF8x2mYtLXkFVnUlMtoXAHk7erR4ST4ElHXvuWiqy74l yJ/+oMiHrjY4RjqLRCwj26DudXiNqni+pUyWzu7fL4bWHdY94DewRK/BC7JoK7ZJWP+4 eSCgDy2/eShLB8A9CR2DDOnchxXoi/1ms/aWI7f4ijyidwWDY2FVXS0NW4lyjpyhkGXK djh0z31Am+6X6nnfmMKk6Lz4Rc6LR9wL1+ACUqChNJm7dNAO1pb/rL8Lt886Xtea6H6J gz5g== X-Gm-Message-State: AOAM531A/ls4d+HikNGfUSK58wIKNX9HhAW0H+x8LA8IkpNM8D55b/1k Qs6NM1AwdPbKebOA61lXjoJcjMT1sC8= X-Google-Smtp-Source: ABdhPJw7jTzQ4D3ZQvNDT7P6IeqodWCkjSNS1WudIYUggjGNdSkwjYqfGKnjF+BNSBGxuk7VgLJ6Yw== X-Received: by 2002:a17:90b:3591:: with SMTP id mm17mr1836820pjb.140.1633363380123; Mon, 04 Oct 2021 09:03:00 -0700 (PDT) Received: from bobo.ozlabs.ibm.com (115-64-153-41.tpgi.com.au. 
[115.64.153.41]) by smtp.gmail.com with ESMTPSA id 130sm15557223pfz.77.2021.10.04.09.02.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Oct 2021 09:02:59 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 50/52] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry Date: Tue, 5 Oct 2021 02:00:47 +1000 Message-Id: <20211004160049.1338837-51-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This goes further to removing vcores from the P9 path. Also avoid the memset in favour of explicitly initialising all fields. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++++++--------------- 1 file changed, 35 insertions(+), 26 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d614f83c7b3f..57bf49c90e73 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -698,41 +698,30 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) return p; } -static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc, u64 tb) +static void __kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + unsigned int pcpu, u64 now, + unsigned long stolen) { struct dtl_entry *dt; struct lppaca *vpa; - unsigned long stolen; - unsigned long core_stolen; - u64 now; - unsigned long flags; dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = tb; - - if (cpu_has_feature(CPU_FTR_ARCH_300)) { - stolen = 0; - } else { - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); - } if (!dt || !vpa) return; - memset(dt, 0, sizeof(struct dtl_entry)); + dt->dispatch_reason = 7; - dt->processor_id = cpu_to_be16(vc->pcpu + vcpu->arch.ptid); - dt->timebase = cpu_to_be64(now + vc->tb_offset); + dt->preempt_reason = 0; + dt->processor_id = cpu_to_be16(pcpu + vcpu->arch.ptid); dt->enqueue_to_dispatch_time = cpu_to_be32(stolen); + dt->ready_to_enqueue_time = 0; + dt->waiting_to_ready_time = 0; + dt->timebase = cpu_to_be64(now); + dt->fault_addr = 0; dt->srr0 = cpu_to_be64(kvmppc_get_pc(vcpu)); dt->srr1 = cpu_to_be64(vcpu->arch.shregs.msr); + ++dt; if (dt == vcpu->arch.dtl.pinned_end) dt = vcpu->arch.dtl.pinned_addr; @@ -743,6 +732,27 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, vcpu->arch.dtl.dirty = true; } +static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + struct kvmppc_vcore *vc) +{ + unsigned long stolen; + unsigned long core_stolen; + u64 now; + unsigned long flags; + + now = mftb(); + + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + __kvmppc_create_dtl_entry(vcpu, vc->pcpu, now + vc->tb_offset, stolen); +} + /* See if there is a doorbell interrupt pending for a vcpu */ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) { @@ -3750,7 +3760,7 @@ static 
noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc, mftb()); + kvmppc_create_dtl_entry(vcpu, pvc); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4313,7 +4323,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc, mftb()); + kvmppc_create_dtl_entry(vcpu, vc); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if (vc->vcore_state == VCORE_SLEEPING) { @@ -4490,8 +4500,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, local_paca->kvm_hstate.ptid = 0; local_paca->kvm_hstate.fake_suspend = 0; - vc->pcpu = pcpu; // for kvmppc_create_dtl_entry - kvmppc_create_dtl_entry(vcpu, vc, tb); + __kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0); trace_kvm_guest_enter(vcpu);
From patchwork Mon Oct 4 16:00:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536244
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 51/52] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Date: Tue, 5 Oct 2021 02:00:48 +1000 Message-Id: <20211004160049.1338837-52-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path uses vc->dpdes only for msgsndp / SMT emulation. This adds an ordering requirement between vcpu->doorbell_request and vc->dpdes for no real benefit. Use vcpu->doorbell_request directly. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 18 ++++++++++-------- arch/powerpc/kvm/book3s_hv_builtin.c | 2 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 14 ++++++++++---- 3 files changed, 22 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 57bf49c90e73..351018f617fb 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -761,6 +761,8 @@ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) if (vcpu->arch.doorbell_request) return true; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return false; /* * Ensure that the read of vcore->dpdes comes after the read * of vcpu->doorbell_request. This barrier matches the @@ -2185,8 +2187,10 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, * either vcore->dpdes or doorbell_request. * On POWER8, doorbell_request is 0.
*/ - *val = get_reg_val(id, vcpu->arch.vcore->dpdes | - vcpu->arch.doorbell_request); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + *val = get_reg_val(id, vcpu->arch.doorbell_request); + else + *val = get_reg_val(id, vcpu->arch.vcore->dpdes); break; case KVM_REG_PPC_VTB: *val = get_reg_val(id, vcpu->arch.vcore->vtb); @@ -2423,7 +2427,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, vcpu->arch.pspb = set_reg_val(id, *val); break; case KVM_REG_PPC_DPDES: - vcpu->arch.vcore->dpdes = set_reg_val(id, *val); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.doorbell_request = set_reg_val(id, *val) & 1; + else + vcpu->arch.vcore->dpdes = set_reg_val(id, *val); break; case KVM_REG_PPC_VTB: vcpu->arch.vcore->vtb = set_reg_val(id, *val); @@ -4472,11 +4479,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (!nested) { kvmppc_core_prepare_to_enter(vcpu); - if (vcpu->arch.doorbell_request) { - vc->dpdes = 1; - smp_wmb(); - vcpu->arch.doorbell_request = 0; - } if (test_bit(BOOK3S_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions)) lpcr |= LPCR_MER; diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index fcf4760a3a0e..a4fc4b2d3806 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -649,6 +649,8 @@ void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu) int ext; unsigned long lpcr; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Insert EXTERNAL bit into LPCR at the MER bit position */ ext = (vcpu->arch.pending_exceptions >> BOOK3S_IRQPRIO_EXTERNAL) & 1; lpcr = mfspr(SPRN_LPCR); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 5a71532a3adf..fbecbdc42c26 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -705,6 +705,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; + unsigned long dpdes; hdec = time_limit - *tb; if (hdec < 0) @@ -767,8 +768,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - if (vc->dpdes) - mtspr(SPRN_DPDES, vc->dpdes); + if (vcpu->arch.doorbell_request) { + vcpu->arch.doorbell_request = 0; + mtspr(SPRN_DPDES, 1); + } if (dawr_enabled()) { if (vcpu->arch.dawr0 != host_dawr0) @@ -999,7 +1002,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); - vc->dpdes = mfspr(SPRN_DPDES); + dpdes = mfspr(SPRN_DPDES); + if (dpdes) + vcpu->arch.doorbell_request = 1; + vc->vtb = mfspr(SPRN_VTB); dec = mfspr(SPRN_DEC); @@ -1061,7 +1067,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } } - if (vc->dpdes) + if (dpdes) mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Mon Oct 4 16:00:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1536245 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20210112 header.b=Iwuc5/Be; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF 
authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=)
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Cc: Nicholas Piggin Subject: [PATCH v3 52/52] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Date: Tue, 5 Oct 2021 02:00:49 +1000 Message-Id: <20211004160049.1338837-53-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20211004160049.1338837-1-npiggin@gmail.com> References: <20211004160049.1338837-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On POWER9 and newer, rather than the complex HMI synchronisation and subcore state, have each thread un-apply the guest TB offset before calling into the early HMI handler.
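In rough terms the per-thread un-apply step works like the sketch below. This is illustrative only: mftb(), mtspr() and SPRN_TBU40 are the usual powerpc kernel primitives, but the write_tb_upper40() helper name is made up here; the patch itself open-codes the same logic in kvmppc_p9_realmode_hmi_handler() in the book3s_hv_ras.c hunk further down.

/* Illustrative sketch, not part of the posted patch. */
static void write_tb_upper40(u64 new_tb)
{
	/* TBU40 writes only the upper 40 bits; the low 24 bits keep ticking. */
	mtspr(SPRN_TBU40, new_tb);
	if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
		/*
		 * The free-running low bits ended up behind the target's low
		 * bits, leaving the result just short of the intended value,
		 * so bump the upper bits by one low-bits period.
		 */
		new_tb += 0x1000000;
		mtspr(SPRN_TBU40, new_tb);
	}
}

	/* Before the early HMI handler: bring this thread back to host TB. */
	if (vc->tb_offset_applied) {
		write_tb_upper40(mftb() - vc->tb_offset_applied);
		vc->tb_offset_applied = 0;
	}

The handler then re-applies vc->tb_offset the same way before returning to the guest, so it never has to know whether OPAL resynced the timebase.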
This allows the subcore state to be avoided, including subcore enter / exit guest, which includes an expensive divide that shows up slightly in profiles. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_ppc.h | 1 + arch/powerpc/kvm/book3s_hv.c | 12 +++--- arch/powerpc/kvm/book3s_hv_hmi.c | 7 +++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 2 +- arch/powerpc/kvm/book3s_hv_ras.c | 54 +++++++++++++++++++++++++++ 5 files changed, 67 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 671fbd1a765e..70ffcb3c91bf 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -760,6 +760,7 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu); void kvmppc_subcore_enter_guest(void); void kvmppc_subcore_exit_guest(void); long kvmppc_realmode_hmi_handler(void); +long kvmppc_p9_realmode_hmi_handler(struct kvm_vcpu *vcpu); long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags, long pte_index, unsigned long pteh, unsigned long ptel); long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags, diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 351018f617fb..449ac0a19ceb 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4014,8 +4014,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - kvmppc_subcore_enter_guest(); - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4068,8 +4066,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - kvmppc_subcore_exit_guest(); - return trap; } @@ -6069,9 +6065,11 @@ static int kvmppc_book3s_init_hv(void) if (r) return r; - r = kvm_init_subcore_bitmap(); - if (r) - return r; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + r = kvm_init_subcore_bitmap(); + if (r) + return r; + } /* * We need a way of accessing the XICS interrupt controller, diff --git a/arch/powerpc/kvm/book3s_hv_hmi.c b/arch/powerpc/kvm/book3s_hv_hmi.c index 9af660476314..1ec50c69678b 100644 --- a/arch/powerpc/kvm/book3s_hv_hmi.c +++ b/arch/powerpc/kvm/book3s_hv_hmi.c @@ -20,10 +20,15 @@ void wait_for_subcore_guest_exit(void) /* * NULL bitmap pointer indicates that KVM module hasn't - * been loaded yet and hence no guests are running. + * been loaded yet and hence no guests are running, or running + * on POWER9 or newer CPU. + * * If no KVM is in use, no need to co-ordinate among threads * as all of them will always be in host and no one is going * to modify TB other than the opal hmi handler. + * + * POWER9 and newer don't need this synchronisation. + * * Hence, just return from here. 
*/ if (!local_paca->sibling_subcore_state) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index fbecbdc42c26..86a222f97e8e 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -938,7 +938,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc kvmppc_realmode_machine_check(vcpu); } else if (unlikely(trap == BOOK3S_INTERRUPT_HMI)) { - kvmppc_realmode_hmi_handler(); + kvmppc_p9_realmode_hmi_handler(vcpu); } else if (trap == BOOK3S_INTERRUPT_H_EMUL_ASSIST) { vcpu->arch.emul_inst = mfspr(SPRN_HEIR); diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c index d4bca93b79f6..ccfd96965630 100644 --- a/arch/powerpc/kvm/book3s_hv_ras.c +++ b/arch/powerpc/kvm/book3s_hv_ras.c @@ -136,6 +136,60 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu) vcpu->arch.mce_evt = mce_evt; } + +long kvmppc_p9_realmode_hmi_handler(struct kvm_vcpu *vcpu) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + long ret = 0; + + /* + * Unapply and clear the offset first. That way, if the TB was not + * resynced then it will remain in host-offset, and if it was resynced + * then it is brought into host-offset. Then the tb offset is + * re-applied before continuing with the KVM exit. + * + * This way, we don't need to actually know whether not OPAL resynced + * the timebase or do any of the complicated dance that the P7/8 + * path requires. + */ + if (vc->tb_offset_applied) { + u64 new_tb = mftb() - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = 0; + } + + local_paca->hmi_irqs++; + + if (hmi_handle_debugtrig(NULL) >= 0) { + ret = 1; + goto out; + } + + if (ppc_md.hmi_exception_early) + ppc_md.hmi_exception_early(NULL); + +out: + if (vc->tb_offset) { + u64 new_tb = mftb() + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = vc->tb_offset; + } + + return ret; +} + +/* + * The following subcore HMI handling is all only for pre-POWER9 CPUs. + */ + /* Check if dynamic split is in force and return subcore size accordingly. */ static inline int kvmppc_cur_subcore_size(void) {