From patchwork Wed Aug 11 16:00:35 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515872
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy
Subject: [PATCH v2 01/60] KVM: PPC: Book3S HV: Initialise vcpu MSR with MSR_ME
Date: Thu, 12 Aug 2021 02:00:35 +1000
Message-Id: <20210811160134.904987-2-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

It is possible to create a VCPU without setting the MSR before running
it, which results in a warning in kvmhv_vcpu_entry_p9() that MSR_ME is
not set. This is pretty harmless because the MSR_ME bit is added to
HSRR1 before HRFID to guest, and a normal qemu guest doesn't hit it.

Initialise the vcpu MSR with MSR_ME set.

Reported-by: Alexey Kardashevskiy
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index ce7ff12cfc03..2afe7a95fc9c 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2687,6 +2687,7 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
 	spin_lock_init(&vcpu->arch.vpa_update_lock);
 	spin_lock_init(&vcpu->arch.tbacct_lock);
 	vcpu->arch.busy_preempt = TB_NIL;
+	vcpu->arch.shregs.msr = MSR_ME;
 	vcpu->arch.intr_msr = MSR_SF | MSR_ME;

 	/*
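A hedged, standalone illustration of why the unset MSR is mostly benign and
what the fix changes (toy types, not kernel code; only the bit position
mirrors the kernel's MSR_ME_LG = 12):

#include <stdint.h>
#include <stdio.h>

#define MSR_ME	(1UL << 12)	/* machine check enable */

struct toy_vcpu { unsigned long msr; };

static void vcpu_create(struct toy_vcpu *v)
{
	v->msr = MSR_ME;	/* the fix: never start with ME clear */
}

static unsigned long guest_entry(const struct toy_vcpu *v)
{
	if (!(v->msr & MSR_ME))
		fprintf(stderr, "WARN: entering guest with MSR_ME clear\n");
	return v->msr | MSR_ME;	/* entry path adds ME before HRFID anyway */
}

int main(void)
{
	struct toy_vcpu v;

	vcpu_create(&v);
	printf("HSRR1 = %#lx\n", guest_entry(&v));
	return 0;
}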
From patchwork Wed Aug 11 16:00:36 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515873
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 02/60] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path
Date: Thu, 12 Aug 2021 02:00:36 +1000
Message-Id: <20210811160134.904987-3-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

TM fake-suspend emulation is only used by POWER9. Remove it from the
old code path.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 42 -------------------------
 1 file changed, 42 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 8dd437d7a2c6..75079397c2a5 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1088,12 +1088,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	cmpwi	r12, BOOK3S_INTERRUPT_H_INST_STORAGE
 	beq	kvmppc_hisi

-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-	/* For softpatch interrupt, go off and do TM instruction emulation */
-	cmpwi	r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
-	beq	kvmppc_tm_emul
-#endif
-
 	/* See if this is a leftover HDEC interrupt */
 	cmpwi	r12,BOOK3S_INTERRUPT_HV_DECREMENTER
 	bne	2f
@@ -1599,42 +1593,6 @@ maybe_reenter_guest:
 	blt	deliver_guest_interrupt
 	b	guest_exit_cont

-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Softpatch interrupt for transactional memory emulation cases
- * on POWER9 DD2.2. This is early in the guest exit path - we
- * haven't saved registers or done a treclaim yet.
- */
-kvmppc_tm_emul:
-	/* Save instruction image in HEIR */
-	mfspr	r3, SPRN_HEIR
-	stw	r3, VCPU_HEIR(r9)
-
-	/*
-	 * The cases we want to handle here are those where the guest
-	 * is in real suspend mode and is trying to transition to
-	 * transactional mode.
-	 */
-	lbz	r0, HSTATE_FAKE_SUSPEND(r13)
-	cmpwi	r0, 0		/* keep exiting guest if in fake suspend */
-	bne	guest_exit_cont
-	rldicl	r3, r11, 64 - MSR_TS_S_LG, 62
-	cmpwi	r3, 1		/* or if not in suspend state */
-	bne	guest_exit_cont
-
-	/* Call C code to do the emulation */
-	mr	r3, r9
-	bl	kvmhv_p9_tm_emulation_early
-	nop
-	ld	r9, HSTATE_KVM_VCPU(r13)
-	li	r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
-	cmpwi	r3, 0
-	beq	guest_exit_cont		/* continue exiting if not handled */
-	ld	r10, VCPU_PC(r9)
-	ld	r11, VCPU_MSR(r9)
-	b	fast_interrupt_c_return	/* go back to guest if handled */
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-
 /*
  * Check whether an HDSI is an HPTE not found fault or something else.
  * If it is an HPTE not found fault that is due to the guest accessing
From patchwork Wed Aug 11 16:00:37 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515874
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 03/60] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt NIP
Date: Thu, 12 Aug 2021 02:00:37 +1000
Message-Id: <20210811160134.904987-4-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

The softpatch interrupt sets HSRR0 to the faulting instruction +4, so
the handler should subtract 4 to get the faulting instruction address
when the interrupt is a TM softpatch interrupt (the instruction was not
executed) and the instruction was not emulated.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_tm.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index cc90b8b82329..e7c36f8bf205 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -46,6 +46,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 	u64 newmsr, bescr;
 	int ra, rs;

+	/*
+	 * The TM softpatch interrupt sets NIP to the instruction following
+	 * the faulting instruction, which is not executed. Rewind nip to the
+	 * faulting instruction so it looks like a normal synchronous
+	 * interrupt, then update nip in the places where the instruction is
+	 * emulated.
+	 */
+	vcpu->arch.regs.nip -= 4;
+
 	/*
 	 * rfid, rfebb, and mtmsrd encode bit 31 = 0 since it's a reserved bit
 	 * in these instructions, so masking bit 31 out doesn't change these
@@ -67,7 +76,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 			       (newmsr & MSR_TM)));
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.cfar = vcpu->arch.regs.nip;
 		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return RESUME_GUEST;

@@ -100,7 +109,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.bescr = bescr;
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.cfar = vcpu->arch.regs.nip;
 		vcpu->arch.regs.nip = vcpu->arch.ebbrr;
 		return RESUME_GUEST;

@@ -116,6 +125,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		newmsr = (newmsr & ~MSR_LE) | (msr & MSR_LE);
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
+		vcpu->arch.regs.nip += 4;
 		return RESUME_GUEST;

 	/* ignore bit 31, see comment above */
@@ -152,6 +162,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 			msr = (msr & ~MSR_TS_MASK) | MSR_TS_S;
 		}
 		vcpu->arch.shregs.msr = msr;
+		vcpu->arch.regs.nip += 4;
 		return RESUME_GUEST;

 	/* ignore bit 31, see comment above */
@@ -189,6 +200,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
 			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
 		vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
+		vcpu->arch.regs.nip += 4;
 		return RESUME_GUEST;

 	/* ignore bit 31, see comment above */
@@ -220,6 +232,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
 			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
 		vcpu->arch.shregs.msr = msr | MSR_TS_S;
+		vcpu->arch.regs.nip += 4;
 		return RESUME_GUEST;
 	}
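A standalone sketch (not part of the patch) of the rewind-then-advance
pattern the commit message describes; "nip" here is a plain variable
standing in for vcpu->arch.regs.nip:

#include <stdio.h>

/* Illustrative stand-in for the softpatch emulation entry point. */
static int emulate_tm_insn(unsigned long *nip, int emulated)
{
	/*
	 * The softpatch interrupt delivers NIP pointing at the instruction
	 * *after* the faulting one, which did not execute. Rewind first so
	 * the state looks like a normal synchronous interrupt...
	 */
	*nip -= 4;

	if (!emulated)
		return -1;	/* leave NIP on the faulting instruction */

	/* ...and only step past it once it has actually been emulated. */
	*nip += 4;
	return 0;
}

int main(void)
{
	unsigned long nip = 0x1004;	/* as delivered: faulting insn + 4 */

	emulate_tm_insn(&nip, 0);
	printf("not emulated: nip=%#lx\n", nip);	/* 0x1000 */

	nip = 0x1004;
	emulate_tm_insn(&nip, 1);
	printf("emulated:     nip=%#lx\n", nip);	/* 0x1004 */
	return 0;
}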
From patchwork Wed Aug 11 16:00:38 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515875
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 04/60] KVM: PPC: Book3S HV Nested: Fix TM softpatch HFAC interrupt emulation
Date: Thu, 12 Aug 2021 02:00:38 +1000
Message-Id: <20210811160134.904987-5-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

Have the TM softpatch emulation code set up the HFAC interrupt and
return -1 in case an instruction was executed with HFSCR bits clear,
and have the interrupt exit handler fall through to the HFAC handler.
When the L0 is running a nested guest, this ensures the HFAC interrupt
is correctly passed up to the L1.

The "direct guest" exit handler will turn these into PROGILL program
interrupts so functionality in practice will be unchanged. But it's
possible an L1 would want to handle these in a different way.

Also rearrange the FAC interrupt emulation code to match the HFAC
format while here (mainly, adding the FSCR_INTR_CAUSE mask).
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/reg.h  |  3 ++-
 arch/powerpc/kvm/book3s_hv.c    | 35 ++++++++++++++++----------
 arch/powerpc/kvm/book3s_hv_tm.c | 44 ++++++++++++++++++---------------
 3 files changed, 48 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index be85cf156a1f..e9d27265253b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -415,6 +415,7 @@
 #define   FSCR_TAR	__MASK(FSCR_TAR_LG)
 #define   FSCR_EBB	__MASK(FSCR_EBB_LG)
 #define   FSCR_DSCR	__MASK(FSCR_DSCR_LG)
+#define   FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56)	/* interrupt cause */
 #define SPRN_HFSCR	0xbe	/* HV=1 Facility Status & Control Register */
 #define   HFSCR_PREFIX	__MASK(FSCR_PREFIX_LG)
 #define   HFSCR_MSGP	__MASK(FSCR_MSGP_LG)
@@ -426,7 +427,7 @@
 #define   HFSCR_DSCR	__MASK(FSCR_DSCR_LG)
 #define   HFSCR_VECVSX	__MASK(FSCR_VECVSX_LG)
 #define   HFSCR_FP	__MASK(FSCR_FP_LG)
-#define   HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56)	/* interrupt cause */
+#define   HFSCR_INTR_CAUSE FSCR_INTR_CAUSE
 #define SPRN_TAR	0x32f	/* Target Address Register */
 #define SPRN_LPCR	0x13E	/* LPAR Control Register */
 #define   LPCR_VPM0	ASM_CONST(0x8000000000000000)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2afe7a95fc9c..e79eedb65e6b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1682,6 +1682,21 @@ XXX benchmark guest exits
 			r = RESUME_GUEST;
 		}
 		break;
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	case BOOK3S_INTERRUPT_HV_SOFTPATCH:
+		/*
+		 * This occurs for various TM-related instructions that
+		 * we need to emulate on POWER9 DD2.2. We have already
+		 * handled the cases where the guest was in real-suspend
+		 * mode and was transitioning to transactional state.
+		 */
+		r = kvmhv_p9_tm_emulation(vcpu);
+		if (r != -1)
+			break;
+		fallthrough; /* go to facility unavailable handler */
+#endif
+
 	/*
 	 * This occurs if the guest (kernel or userspace), does something that
 	 * is prohibited by HFSCR.
@@ -1700,18 +1715,6 @@
 		}
 		break;

-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-	case BOOK3S_INTERRUPT_HV_SOFTPATCH:
-		/*
-		 * This occurs for various TM-related instructions that
-		 * we need to emulate on POWER9 DD2.2. We have already
-		 * handled the cases where the guest was in real-suspend
-		 * mode and was transitioning to transactional state.
-		 */
-		r = kvmhv_p9_tm_emulation(vcpu);
-		break;
-#endif
-
 	case BOOK3S_INTERRUPT_HV_RM_HARD:
 		r = RESUME_PASSTHROUGH;
 		break;
@@ -1814,9 +1817,15 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 		 * mode and was transitioning to transactional state.
 		 */
 		r = kvmhv_p9_tm_emulation(vcpu);
-		break;
+		if (r != -1)
+			break;
+		fallthrough; /* go to facility unavailable handler */
 #endif

+	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+		r = RESUME_HOST;
+		break;
+
 	case BOOK3S_INTERRUPT_HV_RM_HARD:
 		vcpu->arch.trap = 0;
 		r = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index e7c36f8bf205..866cadd70094 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -88,14 +88,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		}
 		/* check EBB facility is available */
 		if (!(vcpu->arch.hfscr & HFSCR_EBB)) {
-			/* generate an illegal instruction interrupt */
-			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
-			return RESUME_GUEST;
+			vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+			vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56;
+			vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+			return -1; /* rerun host interrupt handler */
 		}
 		if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) {
 			/* generate a facility unavailable interrupt */
-			vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
-				((u64)FSCR_EBB_LG << 56);
+			vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+			vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56;
 			kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL);
 			return RESUME_GUEST;
 		}
@@ -138,14 +139,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		}
 		/* check for TM disabled in the HFSCR or MSR */
 		if (!(vcpu->arch.hfscr & HFSCR_TM)) {
-			/* generate an illegal instruction interrupt */
-			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
-			return RESUME_GUEST;
+			vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+			vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+			vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+			return -1; /* rerun host interrupt handler */
 		}
 		if (!(msr & MSR_TM)) {
 			/* generate a facility unavailable interrupt */
-			vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
-				((u64)FSCR_TM_LG << 56);
+			vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+			vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
 			kvmppc_book3s_queue_irqprio(vcpu,
 						BOOK3S_INTERRUPT_FAC_UNAVAIL);
 			return RESUME_GUEST;
@@ -169,14 +171,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 	case (PPC_INST_TRECLAIM & PO_XOP_OPCODE_MASK):
 		/* check for TM disabled in the HFSCR or MSR */
 		if (!(vcpu->arch.hfscr & HFSCR_TM)) {
-			/* generate an illegal instruction interrupt */
-			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
-			return RESUME_GUEST;
+			vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+			vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+			vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+			return -1; /* rerun host interrupt handler */
 		}
 		if (!(msr & MSR_TM)) {
 			/* generate a facility unavailable interrupt */
-			vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
-				((u64)FSCR_TM_LG << 56);
+			vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+			vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
 			kvmppc_book3s_queue_irqprio(vcpu,
 						BOOK3S_INTERRUPT_FAC_UNAVAIL);
 			return RESUME_GUEST;
@@ -208,14 +211,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		/* XXX do we need to check for PR=0 here? */
 		/* check for TM disabled in the HFSCR or MSR */
 		if (!(vcpu->arch.hfscr & HFSCR_TM)) {
-			/* generate an illegal instruction interrupt */
-			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
-			return RESUME_GUEST;
+			vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+			vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+			vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+			return -1; /* rerun host interrupt handler */
 		}
 		if (!(msr & MSR_TM)) {
 			/* generate a facility unavailable interrupt */
-			vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
-				((u64)FSCR_TM_LG << 56);
+			vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+			vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
 			kvmppc_book3s_queue_irqprio(vcpu,
 						BOOK3S_INTERRUPT_FAC_UNAVAIL);
 			return RESUME_GUEST;
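A hedged, self-contained sketch of the interrupt-cause encoding the patch
above relies on: the top byte of (H)FSCR records which facility caused the
interrupt. The constants here are simplified stand-ins, not the kernel
definitions:

#include <stdint.h>
#include <stdio.h>

#define FSCR_TM_LG		5		/* illustrative facility number */
#define FSCR_INTR_CAUSE		(0xFFULL << 56)	/* top byte holds the cause */
#define HFSCR_INTR_CAUSE	FSCR_INTR_CAUSE

/* Record which facility caused the (H)FAC interrupt in the top byte. */
static uint64_t set_intr_cause(uint64_t hfscr, unsigned int cause_lg)
{
	hfscr &= ~HFSCR_INTR_CAUSE;
	hfscr |= (uint64_t)cause_lg << 56;
	return hfscr;
}

int main(void)
{
	uint64_t hfscr = set_intr_cause(0, FSCR_TM_LG);

	/* The exit handler recovers the cause with a plain shift. */
	printf("cause = %llu\n", (unsigned long long)(hfscr >> 56));
	return 0;
}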
From patchwork Wed Aug 11 16:00:39 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515876
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v2 05/60] KVM: PPC: Book3S HV Nested: Sanitise vcpu registers
Date: Thu, 12 Aug 2021 02:00:39 +1000
Message-Id: <20210811160134.904987-6-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

From: Fabiano Rosas

As one of the arguments of the H_ENTER_NESTED hypercall, the nested
hypervisor (L1) prepares a structure containing the values of various
hypervisor-privileged registers with which it wants the nested guest
(L2) to run. Since the nested HV runs in supervisor mode it needs the
host to write to these registers.

To stop a nested HV manipulating this mechanism and using a nested
guest as a proxy to access a facility that has been made unavailable
to it, we have a routine that sanitises the values of the HV registers
before copying them into the nested guest's vcpu struct.

However, when coming out of the guest the values are copied as they
were back into L1 memory, which means that any sanitisation we did
during guest entry will be exposed to L1 after H_ENTER_NESTED returns.

This patch alters this sanitisation to have effect on the vcpu->arch
registers directly before entering and after exiting the guest,
leaving the structure that is copied back into L1 unchanged (except
when we really want L1 to access the value, e.g. the Cause bits of
HFSCR).

Signed-off-by: Fabiano Rosas
Reviewed-by: Nicholas Piggin
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_nested.c | 94 ++++++++++++++---------------
 1 file changed, 46 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 898f942eb198..1eb4e989edc7 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -105,7 +105,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;

 	hr->dpdes = vc->dpdes;
-	hr->hfscr = vcpu->arch.hfscr;
 	hr->purr = vcpu->arch.purr;
 	hr->spurr = vcpu->arch.spurr;
 	hr->ic = vcpu->arch.ic;
@@ -128,55 +127,17 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	case BOOK3S_INTERRUPT_H_INST_STORAGE:
 		hr->asdr = vcpu->arch.fault_gpa;
 		break;
+	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
+			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
+		break;
 	case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
 		hr->heir = vcpu->arch.emul_inst;
 		break;
 	}
 }

-/*
- * This can result in some L0 HV register state being leaked to an L1
- * hypervisor when the hv_guest_state is copied back to the guest after
- * being modified here.
- *
- * There is no known problem with such a leak, and in many cases these
- * register settings could be derived by the guest by observing behaviour
- * and timing, interrupts, etc., but it is an issue to consider.
- */
-static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
-{
-	struct kvmppc_vcore *vc = vcpu->arch.vcore;
-	u64 mask;
-
-	/*
-	 * Don't let L1 change LPCR bits for the L2 except these:
-	 */
-	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
-		LPCR_LPES | LPCR_MER;
-
-	/*
-	 * Additional filtering is required depending on hardware
-	 * and configuration.
-	 */
-	hr->lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
-			(vc->lpcr & ~mask) | (hr->lpcr & mask));
-
-	/*
-	 * Don't let L1 enable features for L2 which we've disabled for L1,
-	 * but preserve the interrupt cause field.
-	 */
-	hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
-
-	/* Don't let data address watchpoint match in hypervisor state */
-	hr->dawrx0 &= ~DAWRX_HYP;
-	hr->dawrx1 &= ~DAWRX_HYP;
-
-	/* Don't let completed instruction address breakpt match in HV state */
-	if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
-		hr->ciabr &= ~CIABR_PRIV;
-}
-
-static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *hr)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;

@@ -288,6 +249,43 @@ static int kvmhv_write_guest_state_and_regs(struct kvm_vcpu *vcpu,
 				     sizeof(struct pt_regs));
 }

+static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
+			    const struct hv_guest_state *l2_hv,
+			    const struct hv_guest_state *l1_hv, u64 *lpcr)
+{
+	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+	u64 mask;
+
+	restore_hv_regs(vcpu, l2_hv);
+
+	/*
+	 * Don't let L1 change LPCR bits for the L2 except these:
+	 */
+	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+		LPCR_LPES | LPCR_MER;
+
+	/*
+	 * Additional filtering is required depending on hardware
+	 * and configuration.
+	 */
+	*lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
+			(vc->lpcr & ~mask) | (*lpcr & mask));
+
+	/*
+	 * Don't let L1 enable features for L2 which we've disabled for L1,
+	 * but preserve the interrupt cause field.
+	 */
+	vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
+
+	/* Don't let data address watchpoint match in hypervisor state */
+	vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP;
+	vcpu->arch.dawrx1 = l2_hv->dawrx1 & ~DAWRX_HYP;
+
+	/* Don't let completed instruction address breakpt match in HV state */
+	if ((l2_hv->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
+		vcpu->arch.ciabr = l2_hv->ciabr & ~CIABR_PRIV;
+}
+
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 {
 	long int err, r;
@@ -296,7 +294,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	struct hv_guest_state l2_hv = {0}, saved_l1_hv;
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 	u64 hv_ptr, regs_ptr;
-	u64 hdec_exp;
+	u64 hdec_exp, lpcr;
 	s64 delta_purr, delta_spurr, delta_ic, delta_vtb;

 	if (vcpu->kvm->arch.l1_ptcr == 0)
@@ -369,8 +367,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	/* Guest must always run with ME enabled, HV disabled. */
 	vcpu->arch.shregs.msr = (vcpu->arch.regs.msr | MSR_ME) & ~MSR_HV;

-	sanitise_hv_regs(vcpu, &l2_hv);
-	restore_hv_regs(vcpu, &l2_hv);
+	lpcr = l2_hv.lpcr;
+	load_l2_hv_regs(vcpu, &l2_hv, &saved_l1_hv, &lpcr);

 	vcpu->arch.ret = RESUME_GUEST;
 	vcpu->arch.trap = 0;
@@ -380,7 +378,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 			r = RESUME_HOST;
 			break;
 		}
-		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, l2_hv.lpcr);
+		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
 	} while (is_kvmppc_resume_guest(r));

 	/* save L2 state for return */
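A standalone sketch of the entry-side filtering idea described above
(illustrative structure and field names only; the real code operates on
vcpu->arch and hv_guest_state): the request from L1 is filtered into the
vcpu state that actually runs, while the request structure itself is left
untouched, so nothing sanitised leaks back to L1.

#include <stdint.h>
#include <stdio.h>

#define HFSCR_INTR_CAUSE (0xFFULL << 56)

struct l1_request { uint64_t hfscr; };	/* what L1 asked for (copied back as-is) */
struct vcpu_state { uint64_t hfscr; };	/* what the L2 actually runs with */

static void load_l2_hfscr(struct vcpu_state *vcpu, const struct l1_request *req,
			  uint64_t l1_permitted_hfscr)
{
	/* Filter against what the host permits, but keep the cause byte. */
	vcpu->hfscr = req->hfscr & (HFSCR_INTR_CAUSE | l1_permitted_hfscr);
}

int main(void)
{
	struct l1_request req = { .hfscr = 0xffULL };
	struct vcpu_state vcpu;

	load_l2_hfscr(&vcpu, &req, 0x0fULL);
	printf("requested %#llx, running with %#llx, returned %#llx\n",
	       (unsigned long long)req.hfscr,
	       (unsigned long long)vcpu.hfscr,
	       (unsigned long long)req.hfscr);
	return 0;
}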
From patchwork Wed Aug 11 16:00:40 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515877
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 06/60] KVM: PPC: Book3S HV Nested: Make nested HFSCR state accessible
Date: Thu, 12 Aug 2021 02:00:40 +1000
Message-Id: <20210811160134.904987-7-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

When the L0 runs a nested L2, there are several permutations of HFSCR
that can be relevant: the HFSCR that the L1 vcpu requested, the HFSCR
that the L1 vcpu may use, and the HFSCR that is actually being used to
run the L2.

The L1 requested HFSCR is not accessible outside the nested hcall
handler, so copy that into a new kvm_nested_guest.hfscr field.

The permitted HFSCR is taken from the HFSCR that the L1 runs with,
which is also not accessible while the hcall is being made. Move this
into a new kvm_vcpu_arch.hfscr_permitted field.

These will be used by the next patch to improve facility handling for
nested guests, and later by facility demand faulting patches.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 1 +
 arch/powerpc/include/asm/kvm_host.h      | 2 ++
 arch/powerpc/kvm/book3s_hv.c             | 2 ++
 arch/powerpc/kvm/book3s_hv_nested.c      | 5 +++--
 4 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index eaf3a562bf1e..19b6942c6969 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -39,6 +39,7 @@ struct kvm_nested_guest {
 	pgd_t *shadow_pgtable;	/* our page table for this guest */
 	u64 l1_gr_to_hr;	/* L1's addr of part'n-scoped table */
 	u64 process_table;	/* process table entry for this guest */
+	u64 hfscr;		/* HFSCR that the L1 requested for this nested guest */
 	long refcnt;		/* number of pointers to this struct */
 	struct mutex tlb_lock;	/* serialize page faults and tlbies */
 	struct kvm_nested_guest *next;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 9f52f282b1aa..a779f7849cfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -811,6 +811,8 @@ struct kvm_vcpu_arch {

 	u32 online;

+	u64 hfscr_permitted;	/* A mask of permitted HFSCR facilities */
+
 	/* For support of nested guests */
 	struct kvm_nested_guest *nested;
 	u32 nested_vcpu_id;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e79eedb65e6b..c65bd8fa4368 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2718,6 +2718,8 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
 	if (cpu_has_feature(CPU_FTR_TM_COMP))
 		vcpu->arch.hfscr |= HFSCR_TM;

+	vcpu->arch.hfscr_permitted = vcpu->arch.hfscr;
+
 	kvmppc_mmu_book3s_hv_init(vcpu);

 	vcpu->arch.state = KVMPPC_VCPU_NOTREADY;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 1eb4e989edc7..5ad5014c6f68 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -272,10 +272,10 @@ static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
 			(vc->lpcr & ~mask) | (*lpcr & mask));

 	/*
-	 * Don't let L1 enable features for L2 which we've disabled for L1,
+	 * Don't let L1 enable features for L2 which we don't allow for L1,
 	 * but preserve the interrupt cause field.
 	 */
-	vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
+	vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | vcpu->arch.hfscr_permitted);

 	/* Don't let data address watchpoint match in hypervisor state */
 	vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP;
@@ -362,6 +362,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	/* set L1 state to L2 state */
 	vcpu->arch.nested = l2;
 	vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
+	l2->hfscr = l2_hv.hfscr;
 	vcpu->arch.regs = l2_regs;

 	/* Guest must always run with ME enabled, HV disabled. */
From patchwork Wed Aug 11 16:00:41 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515878
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v2 07/60] KVM: PPC: Book3S HV Nested: Stop forwarding all HFUs to L1
Date: Thu, 12 Aug 2021 02:00:41 +1000
Message-Id: <20210811160134.904987-8-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

From: Fabiano Rosas

If the nested hypervisor has no access to a facility because it has
been disabled by the host, it should also not be able to see the
Hypervisor Facility Unavailable that arises from one of its guests
trying to access the facility.

This patch turns a HFU that happened in L2 into a Hypervisor Emulation
Assistance interrupt and forwards it to L1 for handling. The ones that
happened because L1 explicitly disabled the facility for L2 are still
let through, along with the corresponding Cause bits in the HFSCR.

Signed-off-by: Fabiano Rosas
[np: move handling into kvmppc_handle_nested_exit]
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index c65bd8fa4368..7bc9d487bc1a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1733,6 +1733,7 @@ XXX benchmark guest exits

 static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 {
+	struct kvm_nested_guest *nested = vcpu->arch.nested;
 	int r;
 	int srcu_idx;

@@ -1822,9 +1823,35 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 		fallthrough; /* go to facility unavailable handler */
 #endif

-	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
-		r = RESUME_HOST;
+	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: {
+		u64 cause = vcpu->arch.hfscr >> 56;
+
+		/*
+		 * Only pass HFU interrupts to the L1 if the facility is
+		 * permitted but disabled by the L1's HFSCR, otherwise
+		 * the interrupt does not make sense to the L1 so turn
+		 * it into a HEAI.
+		 */
+		if (!(vcpu->arch.hfscr_permitted & (1UL << cause)) ||
+				(nested->hfscr & (1UL << cause))) {
+			vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
+
+			/*
+			 * If the fetch failed, return to guest and
+			 * try executing it again.
+			 */
+			r = kvmppc_get_last_inst(vcpu, INST_GENERIC,
+						 &vcpu->arch.emul_inst);
+			if (r != EMULATE_DONE)
+				r = RESUME_GUEST;
+			else
+				r = RESUME_HOST;
+		} else {
+			r = RESUME_HOST;
+		}
 		break;
+	}

 	case BOOK3S_INTERRUPT_HV_RM_HARD:
 		vcpu->arch.trap = 0;
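A hedged sketch of the forwarding decision described above (standalone,
simplified types; "permitted" plays the role of hfscr_permitted and
"l1_hfscr" the HFSCR the L1 asked to give its L2):

#include <stdint.h>
#include <stdio.h>

enum exit_action { FORWARD_HFU_TO_L1, CONVERT_TO_HEAI };

static enum exit_action classify_hfu(uint64_t permitted, uint64_t l1_hfscr,
				     unsigned int cause)
{
	uint64_t bit = 1ULL << cause;

	/*
	 * Forward the HFU only when the facility is one the host permits
	 * but the L1 chose to disable for its L2; anything else would
	 * expose a host-side restriction, so report a HEAI instead.
	 */
	if ((permitted & bit) && !(l1_hfscr & bit))
		return FORWARD_HFU_TO_L1;
	return CONVERT_TO_HEAI;
}

int main(void)
{
	/* Facility 5 permitted by the host, disabled by L1 for its L2. */
	printf("%d\n", classify_hfu(1ULL << 5, 0, 5));	/* forward to L1 */
	/* Facility 7 not permitted by the host at all. */
	printf("%d\n", classify_hfu(0, 0, 7));		/* convert to HEAI */
	return 0;
}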
From patchwork Wed Aug 11 16:00:42 2021
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1515879
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org, Fabiano Rosas
Subject: [PATCH v2 08/60] KVM: PPC: Book3S HV Nested: save_hv_return_state does not require trap argument
Date: Thu, 12 Aug 2021 02:00:42 +1000
Message-Id: <20210811160134.904987-9-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>

From: Fabiano Rosas

vcpu is already an argument so vcpu->arch.trap can be used directly.

Signed-off-by: Fabiano Rosas
Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_nested.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 5ad5014c6f68..ed8a2c9f5629 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
 	hr->dawrx1 = swab64(hr->dawrx1);
 }

-static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
+static void save_hv_return_state(struct kvm_vcpu *vcpu,
 				 struct hv_guest_state *hr)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -118,7 +118,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	hr->pidr = vcpu->arch.pid;
 	hr->cfar = vcpu->arch.cfar;
 	hr->ppr = vcpu->arch.ppr;
-	switch (trap) {
+	switch (vcpu->arch.trap) {
 	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
 		hr->hdar = vcpu->arch.fault_dar;
 		hr->hdsisr = vcpu->arch.fault_dsisr;
@@ -389,7 +389,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	delta_spurr = vcpu->arch.spurr - l2_hv.spurr;
 	delta_ic = vcpu->arch.ic - l2_hv.ic;
 	delta_vtb = vc->vtb - l2_hv.vtb;
-	save_hv_return_state(vcpu, vcpu->arch.trap, &l2_hv);
+	save_hv_return_state(vcpu, &l2_hv);

 	/* restore L1 state */
 	vcpu->arch.nested = NULL;
s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eZIRwNA4o4bMaBzjGFVMA8AYhuzRrj44ddH2iqzIflE=; b=NAkhG87wcUA8ta7y0aFfdVUVX3SdYhHQr9/iNt3ExP81clRIWk8oPiBlBdzYCFt4nT IBwuXy66zdIxNboGkVy5XRpeE7SPgqDGW1W89yYBVdmItBW0ULOPx/g4VH6Rd5LGF5t+ tr7l7J68qitSQMKXDuTART27BOA4wt1ChwU7ZRGPPgfqCbmh1Kci+7V/OQtUUbQ4COil dOfEZS9YU1tF1CuHDRxRxSsbPVUiR/1uLyznZ80q+9lguMRtZQK2hG8Pgu5Wd71vBwPy IghH+UoB/XTimidGlsDntDp09uQnYEXVThXq1JpM9QoqOk9KGPaI0Ch43hc11T+tNDSS +xaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eZIRwNA4o4bMaBzjGFVMA8AYhuzRrj44ddH2iqzIflE=; b=NkzKCOa2/g7njR/854/LozI8hYEDne9bxvpflf/4EYP0D0gkdCS3EqkvXnCRQc1XRG 9e5EJMXMtYDGmnwBWAfXVmRql4E9IXBpEK9DJAEHouRYYJpiJY0mL8TnbsCdTVSnCK1a SmT/gL0Bx5rB/h6NGUV+1nJdW3SzA9UEqnncIuokRneUOH6WGukPhVbX+l+ymaTIXTaH ErgcmWqdB9JvM+DOKewF6Lk7HepcrRB7H+0iYPvw6UKCm1ypJTDuRTanJYHyc14v1rXu 0USXV72Hgmb2o4GaK42PMw/7h7uVbmjzPTnmBfk8vLpQdEjxfa6GyOgpAl770rvNIlWg RC9g== X-Gm-Message-State: AOAM532cRCfsAGdb311elyGJLAXAOUQJFj2gMBPwA8sWcice7o9wCeBT 94pKWCf9sEBeR5MvDTHFbFWzwdaJEH0= X-Google-Smtp-Source: ABdhPJxXfhjv6ppy9tJgNemJJcDdIqbOkE6ioo6BrgS0G55roCMvlVPTGWcmDTp2qJtr0cKvG+vy/A== X-Received: by 2002:a17:90a:de8b:: with SMTP id n11mr11380877pjv.31.1628697727552; Wed, 11 Aug 2021 09:02:07 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:07 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 09/60] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live Date: Thu, 12 Aug 2021 02:00:43 +1000 Message-Id: <20210811160134.904987-10-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs, switch the pmcregs_in_use field in the L1 lppaca to the value advertised by the L2 in its VPA. On the way out of the L2, set it back after saving the L2 PMU registers (if they were in-use). This transfers the PMU liveness indication between the L1 and L2 at the points where the registers are not live. This fixes the nested HV bug for which a workaround was added to the L0 HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu for guest capable of nesting"), which explains the problem in detail. That workaround is no longer required for guests that include this bug fix. 
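As an aside for context (an illustrative sketch, not part of this patch): the value "advertised by the L2 in its VPA" is whatever the L2 kernel last wrote through the existing ppc_set_pmu_inuse() helper, visible in the asm/pmc.h hunk below. The function below is hypothetical and only shows the shape of that guest-side advertisement.

#include <asm/pmc.h>

/* Hypothetical guest-side example: flag the PMU as live in the lppaca/VPA
 * before touching counter SPRs, and clear the flag once the counters are
 * idle. This is the field the L1 mirrors to the L0 in this patch.
 */
static void example_l2_pmu_session(void)
{
	ppc_set_pmu_inuse(1);	/* lppaca->pmcregs_in_use = 1 on an LPAR */
	/* ... program MMCR0/PMCs and count something ... */
	ppc_set_pmu_inuse(0);	/* PMU state need not be preserved any more */
}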
Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall") Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/pmc.h | 7 +++++++ arch/powerpc/kvm/book3s_hv.c | 20 ++++++++++++++++++++ 2 files changed, 27 insertions(+) diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h index c6bbe9778d3c..3c09109e708e 100644 --- a/arch/powerpc/include/asm/pmc.h +++ b/arch/powerpc/include/asm/pmc.h @@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse) #endif } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +static inline int ppc_get_pmu_inuse(void) +{ + return get_paca()->pmcregs_in_use; +} +#endif + extern void power4_enable_pmcs(void); #else /* CONFIG_PPC64 */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7bc9d487bc1a..1a3ea0ea7514 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -59,6 +59,7 @@ #include #include #include +#include #include #include #include @@ -3894,6 +3895,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + if (vcpu->arch.vpa.pinned_addr) { + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; + } else { + get_lppaca()->pmcregs_in_use = 1; + } + barrier(); + } +#endif kvmhv_load_guest_pmu(vcpu); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); @@ -4028,6 +4041,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_pmu |= nesting_enabled(vcpu->kvm); kvmhv_save_guest_pmu(vcpu, save_pmu); +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif vc->entry_exit_map = 0x101; vc->in_guest = 0; From patchwork Wed Aug 11 16:00:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515882 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=aDlRdsD4; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2F15NDz9sXS for ; Thu, 12 Aug 2021 02:02:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233227AbhHKQCf (ORCPT ); Wed, 11 Aug 2021 12:02:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51768 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232847AbhHKQCe (ORCPT ); Wed, 11 Aug 2021 12:02:34 -0400 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2C8DC061765 for ; Wed, 11 Aug 2021 09:02:10 -0700 (PDT) Received: by mail-pj1-x102c.google.com with SMTP id w14so4172234pjh.5 for ; Wed, 11 Aug 2021 09:02:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=jCxdivQtSElLr5d9idnYO7f6LUBSBDiT6V+2D6aLUKg=; b=aDlRdsD4I51ZZcZxRl0CmF1zHn4RYxWsHJ4/iLOfOd8NbRtE2QUQg60i0d0dn6IaSf OHfNgTaHEh1+Zwo1ZuhciLciJua9HiZI+t7dR3KthS+vpzKhjvDV3V+j6iO11KMniFWO y95cThRmDMfOwBvQkTzrbjQBxBPBwBe2YckNxbDGA2ypVJO61Bcg9OE9MkLoDt8vB6jM 9vxgiCduW63oudsTcY9h4a/lZwv/HWqdxfSNIYUsgTYksCvhch9IcYH1y8qPW/Yd0HhC C3qTG2E1WYWgayKHvqimfxuU00dNXFCe3QG2Nnz3quJpubM7fQ5Nt0U77ieT1iHsxe/h wJqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=jCxdivQtSElLr5d9idnYO7f6LUBSBDiT6V+2D6aLUKg=; b=hEywnByEVoGWRZYOQmu6eO5m3JbTH8p5zxr/DCI9ILDraCZLfb3LM9DaFV08REFu2/ DnR6BleVXLyCXH13/eF2o5l1QkJVtp5o0AX9eZyBePCkUJO5+q/CdEEE+IiSwA0juyrG VYD2zy9049UEzhc6ZUluDCb/+2SsOV+4v1UStLHDzKwKyIhsWHAuj9s9jErN09KSbhaa oxSSaZoLVRgO8HEQEpTDI2V9rx9MqqgmjM8B25b0a9OfOtulDHD78S+SZ343QJO3vm96 rA/gkjPqb56s9oxgPEozbX5sNHA5eKlAqpi5g0zNdfQj4evGIz9w2hHQ5BigYSSsfrrF HVNg== X-Gm-Message-State: AOAM533oK71vyukkIAR/EkbrGh1wm6obX2ssyXBX1Ef82CeB5PLOechV zZ5wzescOXqFEymJZUypy2Llb+xGYpw= X-Google-Smtp-Source: ABdhPJwOe47kCVYD2CjJf/UNfti3v0EKt3Z98V8sfY0s93Cmt6jR9l3cIXm7nAZJRd6rLoInIll1YQ== X-Received: by 2002:a17:902:c711:b029:12c:9b3c:9986 with SMTP id p17-20020a170902c711b029012c9b3c9986mr19502076plp.44.1628697730313; Wed, 11 Aug 2021 09:02:10 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:10 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 10/60] powerpc/64s: Remove WORT SPR from POWER9/10 Date: Thu, 12 Aug 2021 02:00:44 +1000 Message-Id: <20210811160134.904987-11-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This register is not architected and not implemented in POWER9 or 10, it just reads back zeroes for compatibility. 
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 3 --- arch/powerpc/platforms/powernv/idle.c | 2 -- 2 files changed, 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 1a3ea0ea7514..3198f79572d8 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3770,7 +3770,6 @@ static void load_spr_state(struct kvm_vcpu *vcpu) mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); mtspr(SPRN_BESCR, vcpu->arch.bescr); - mtspr(SPRN_WORT, vcpu->arch.wort); mtspr(SPRN_TIDR, vcpu->arch.tid); mtspr(SPRN_AMR, vcpu->arch.amr); mtspr(SPRN_UAMOR, vcpu->arch.uamor); @@ -3797,7 +3796,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - vcpu->arch.wort = mfspr(SPRN_WORT); vcpu->arch.tid = mfspr(SPRN_TIDR); vcpu->arch.amr = mfspr(SPRN_AMR); vcpu->arch.uamor = mfspr(SPRN_UAMOR); @@ -3829,7 +3827,6 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { mtspr(SPRN_PSPB, 0); - mtspr(SPRN_WORT, 0); mtspr(SPRN_UAMOR, 0); mtspr(SPRN_DSCR, host_os_sprs->dscr); diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index 1e908536890b..df19e2ff9d3c 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -667,7 +667,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) sprs.purr = mfspr(SPRN_PURR); sprs.spurr = mfspr(SPRN_SPURR); sprs.dscr = mfspr(SPRN_DSCR); - sprs.wort = mfspr(SPRN_WORT); sprs.ciabr = mfspr(SPRN_CIABR); sprs.mmcra = mfspr(SPRN_MMCRA); @@ -785,7 +784,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) mtspr(SPRN_PURR, sprs.purr); mtspr(SPRN_SPURR, sprs.spurr); mtspr(SPRN_DSCR, sprs.dscr); - mtspr(SPRN_WORT, sprs.wort); mtspr(SPRN_CIABR, sprs.ciabr); mtspr(SPRN_MMCRA, sprs.mmcra); From patchwork Wed Aug 11 16:00:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515883 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=lma6aBQN; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2G377Vz9sXN for ; Thu, 12 Aug 2021 02:02:14 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232847AbhHKQCh (ORCPT ); Wed, 11 Aug 2021 12:02:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQCh (ORCPT ); Wed, 11 Aug 2021 12:02:37 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94C81C061765 for ; Wed, 11 Aug 2021 09:02:13 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id d1so3307518pll.1 for ; Wed, 11 Aug 2021 09:02:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=DKq8DJSxGhBXzYj1xzUohFIXvPf01oJFRn8nuM9P7p4=; b=lma6aBQNpLAl+jS3h8+vESjIcKgyn9gViolaxPgZ+neVXaOq/Tm52zDzZq2zQCKK/M BhFYNFSZppyaXcvmgmr8ema84WlMud1Vmhua00bA+/tZDu+Sccfo+I/wzv/QbI9O1ZhG uLvg+uV77Ye4vtGFz21ji5RuAdM+bglisqYk7Ndft/EpNpdDbxZ87iDt0QIpsHZ/1qyM ggfbWhnyzedtE98zjQbOLYm1NvPmL2NP5IhAukaI3lF5ZCJufSSjfAGro6K05SXERAFy jaQvB3AhF11YfpFMuFhI6O/ngMhXqzkQpI+Gx/dTWYNWW0iwudusnbIUCCShxzY3YncM WfIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=DKq8DJSxGhBXzYj1xzUohFIXvPf01oJFRn8nuM9P7p4=; b=GDKc5iX4GM9tFT48/ZMYMjj2Twe+JIg4P7rqNd55zjI3fQwju+qctsLWZk+vMIeFL7 ou2qLXwKwDR9R1/HqwsO40ZmQSAFf7KzyZVSvwR7xNWtLQXURJaL5gINGEo7YC7O2XYF ndGvk8s7nTXv8JxqcaclSyWTAxNKeCDNVAWyPtweaq2pppxBzTjhA0fbhqM0G1ixXJbi 2JUR2gRfmTIpVxuLrXMSueVhEeDomgiJ7SvpoDZdE3w1tloPOR7rZrsgSKcrUbrOZr6t u6BuIvtPg6f6zigeVFv5+MguEjE0CWbxb/vdFm4AGNLoeWJamRWq0vuB4xUqAubvrGoJ oUzQ== X-Gm-Message-State: AOAM532rnViisJjKwY8NP7NQPN1cdom+QYNJCVEP7p9yvuUvl7SPVmy5 99ca1cF7b0fWHX8pMtnGjNAbjiogW7k= X-Google-Smtp-Source: ABdhPJycpQsCjxqz5RVJTlxfpp/h1mcxCPlbQ4A5gGw9kh5PMpEuxbaTy7hWJkJJGorXl329MBvphw== X-Received: by 2002:a17:90a:1a51:: with SMTP id 17mr10940798pjl.59.1628697733000; Wed, 11 Aug 2021 09:02:13 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:12 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v2 11/60] KMV: PPC: Book3S HV P9: Use set_dec to set decrementer to host Date: Thu, 12 Aug 2021 02:00:45 +1000 Message-Id: <20210811160134.904987-12-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The host Linux timer code arms the decrementer with the value 'decrementers_next_tb - current_tb' using set_dec(), which stores val - 1 on Book3S-64, which is not quite the same as what KVM does to re-arm the host decrementer when exiting the guest. This shouldn't be a significant change, but it makes the logic match and avoids this small extra change being brought into the next patch. 
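For reference, a minimal sketch of the set_dec() behaviour the paragraph above relies on, paraphrased from that description rather than copied from arch/powerpc/include/asm/time.h (sketch_set_dec() is an invented name): on Book3S-64 the helper arms the decrementer with val - 1, so using set_dec() makes KVM's guest-exit path agree with the host timer code.

/* Paraphrase of the semantics described above; assumes Book3S-64 and is
 * not the kernel's exact definition.
 */
static inline void sketch_set_dec(u64 val)
{
	mtspr(SPRN_DEC, val - 1);
}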
Suggested-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3198f79572d8..b60a70177507 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4049,7 +4049,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb()); + set_dec(local_paca->kvm_hstate.dec_expires - mftb()); /* We may have raced with new irq work */ if (test_irq_work_pending()) set_dec(1); From patchwork Wed Aug 11 16:00:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515884 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=N5sjZMtT; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2K0dT5z9sX1 for ; Thu, 12 Aug 2021 02:02:17 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233385AbhHKQCk (ORCPT ); Wed, 11 Aug 2021 12:02:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQCj (ORCPT ); Wed, 11 Aug 2021 12:02:39 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EAB54C061765 for ; Wed, 11 Aug 2021 09:02:15 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id a8so4183090pjk.4 for ; Wed, 11 Aug 2021 09:02:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Tl3zXkbeXpBq6mAh0/9Zs7l8/9YC+oa4x6XGIsxxGGo=; b=N5sjZMtT9K69iH2m2eteFZv++BFi2L8HHmNSZUVktJrQhe1hEbvxTqWylZkE6OsKDS 0GY8jbZsQOd4hh+OgUudQTt1+8zKLvBil2ro86wa9cQB5ik5E2wITYCCYcPp8ngik6mR argJbXnnIkvL6+HWf8JQATw3gApQVFhnHp/9qr/TRoPAm4uJgP/2yiMczHz9xaDX1icS QeYJX4HnPEOCw+Lm7lhr5Na/wN22hhfPuDonJqstJcB4/VE362BFF1uyBsqrdtpIDS7b 1BAsmuPZ2OtFOsqQgOOTx4AGfOPNYymXpVDkKGLaT20GNrtdNc3/RpTqum9GEITg8pT/ KXEA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Tl3zXkbeXpBq6mAh0/9Zs7l8/9YC+oa4x6XGIsxxGGo=; b=qwzqcxLqyHnwu2VXSVW3kqfZvl+ec1ehbwLLgIYNw1aaMcK5AoGjIK1+95aPwsfx2h kCkZPq0uqkSoLa9dq5x/7OWMxK7h06/wXxN9cQWW5tLrb75onpZ9hEEQiawO2z8i/Byh hnVV25pFcl62317gItq0XebMJHPOSjJ9QEmcIgv9C7GtBrB6xXLrB0Ium2F8H1e0E0t8 2kj/lrQk10kUgX70kSJVJyqE/tZ/XXwpk/VR1RhynZOR9dEfJFJBVCOb2Xi23+9zHXQh 8qKRQXXWVq1tIyeyxtN5zgLE33gaqOw7IoAHN9sGWyF4SpvpsaVtNVduBJou1GvRPKIU A0WQ== X-Gm-Message-State: AOAM5304wzNtk0Vo1UwPrFpIyZaTnUB5Jz/k5hsJ95Pxr8tESEKRq0d5 X/63BnQA+Xq/ktru7AzvsJpfMOoy51Q= X-Google-Smtp-Source: 
ABdhPJxs5j/8Y/rZDioQ1c2Qs6LYpXVO0wQceVK3BdUJRs9Ssodo3Da77SJxVwOUu1xKcy2MXTFAlA== X-Received: by 2002:a62:7bd4:0:b029:3b7:29bf:b0f with SMTP id w203-20020a627bd40000b02903b729bf0b0fmr35332836pfc.15.1628697735410; Wed, 11 Aug 2021 09:02:15 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:15 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 12/60] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Date: Thu, 12 Aug 2021 02:00:46 +1000 Message-Id: <20210811160134.904987-13-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org There is no need to save away the host DEC value, as it is derived from the host timer subsystem which maintains the next timer time, so it can be restored from there. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 5 +++++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv.c | 14 +++++++------- 3 files changed, 13 insertions(+), 7 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 8c2c3dd4ddba..fd09b4797fd7 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -111,6 +111,11 @@ static inline unsigned long test_irq_work_pending(void) DECLARE_PER_CPU(u64, decrementers_next_tb); +static inline u64 timer_get_next_tb(void) +{ + return __this_cpu_read(decrementers_next_tb); +} + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index e45ce427bffb..01df89918aa4 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -108,6 +108,7 @@ struct clock_event_device decrementer_clockevent = { EXPORT_SYMBOL(decrementer_clockevent); DEFINE_PER_CPU(u64, decrementers_next_tb); +EXPORT_SYMBOL_GPL(decrementers_next_tb); static DEFINE_PER_CPU(struct clock_event_device, decrementers); #define XSEC_PER_SEC (1024*1024) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index b60a70177507..34e95d3c89e5 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3859,18 +3859,17 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; s64 dec; - u64 tb; + u64 tb, next_timer; int trap, save_pmu; WARN_ON_ONCE(vcpu->arch.ceded); - dec = mfspr(SPRN_DEC); tb = mftb(); - if (dec < 0) + next_timer = timer_get_next_tb(); + if (tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; - local_paca->kvm_hstate.dec_expires = dec + tb; - if (local_paca->kvm_hstate.dec_expires < time_limit) - time_limit = local_paca->kvm_hstate.dec_expires; + if (next_timer < time_limit) + time_limit = next_timer; save_p9_host_os_sprs(&host_os_sprs); @@ -4049,7 +4048,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - set_dec(local_paca->kvm_hstate.dec_expires - mftb()); + next_timer = timer_get_next_tb(); + set_dec(next_timer - mftb()); /* We may have raced with new irq work */ if (test_irq_work_pending()) 
set_dec(1); From patchwork Wed Aug 11 16:00:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515885 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=GBHq1Ace; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2M55tKz9tT8 for ; Thu, 12 Aug 2021 02:02:19 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233603AbhHKQCm (ORCPT ); Wed, 11 Aug 2021 12:02:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51800 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233543AbhHKQCm (ORCPT ); Wed, 11 Aug 2021 12:02:42 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B4CEEC061765 for ; Wed, 11 Aug 2021 09:02:18 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id j1so4189249pjv.3 for ; Wed, 11 Aug 2021 09:02:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kGMO3KzFWfaiWdxRFMMRg8fmrR2Jvox/ePS+dlDySeM=; b=GBHq1AcepfNTDrAR75+JWS/sN0/dqJ8bw/Rvp6fGzw1okqZFnlW8Qnbgt7Y6iwmIyZ Xt7vq6puT2R4K2uxhtFwtKq7ABnWTgYwz6CxtgDDokY1YaJA0OhHUKa6o2/BIxh07+qM yCpDwJ67cX6FF5PeU2Q4rHFGh1BjAk0+HhppMLUWYMVckZbZskLUl7k83JWLggiTlu/V Gc3DhoM0bUHyZk6hPHwrU+k6Cq1JWlSbMQmh/jpBKvGWq4TWtaXg+PgLZY84ExDUvghJ NFTNlbvQBx6pYdJKcJFgUZyHdcrYuuUysBq+pzApJcsQcnTwFUQubx65mOk9E28e4PlI 9n+Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kGMO3KzFWfaiWdxRFMMRg8fmrR2Jvox/ePS+dlDySeM=; b=SdPrd5pYcMRgkxRstJIq1xSgChs379RY5BScnfz1Iq1c1EtADfrHRN5q5QjGGzZcjC 0tqTc5YRQzpAuJK6PQnGZn+TtfrKxlRiDjyay51ChoXf+tuS6xzFLOgC1ojz5GPco5vZ ux3TuvZh6KnJrbHbvJhqE4neIe8DaQTK7CVGJBFdZHWs73zj1wEn5LfTldsfeZlgB2Lr ZvAvXeVd9EH4Bkcr43LrJ3Rjk1gxJzaNM/+Dageg0CaGTPWkjH9Zh5QS8F2QqjxHSbtb 9iNvYZCkk8KnLJNbcBWXVexQg3kWfMBtL19lBy1c8nZWMxRSVBwc+ZurA6apvyTNAbSh +/BQ== X-Gm-Message-State: AOAM532CajCMDrfZdy1kdxFePIBXFm5Wra96g1A7sgEBXQs+aLvfsPmV dKilW50YyVUwi34jmLV837rmd7cLa6U= X-Google-Smtp-Source: ABdhPJzups4vB4lpN4PHX/B6Vj+o5rtr93EsL4hB0BC1sN6/y7NvThkPJ47sQLjPnBOHYgkBnFRH7g== X-Received: by 2002:a05:6a00:a0d:b029:38d:6310:36ab with SMTP id p13-20020a056a000a0db029038d631036abmr34383437pfh.34.1628697738182; Wed, 11 Aug 2021 09:02:18 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:17 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Alexey Kardashevskiy Subject: [PATCH v2 13/60] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Date: Thu, 12 Aug 
2021 02:00:47 +1000 Message-Id: <20210811160134.904987-14-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0, this could help reduce needless guest exits due to leftover exceptions on entering the guest. Reviewed-by: Alexey Kardashevskiy Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 2 ++ arch/powerpc/kernel/time.c | 1 + arch/powerpc/kvm/book3s_hv_p9_entry.c | 3 ++- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index fd09b4797fd7..69b6be617772 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -18,6 +18,8 @@ #include /* time.c */ +extern u64 decrementer_max; + extern unsigned long tb_ticks_per_jiffy; extern unsigned long tb_ticks_per_usec; extern unsigned long tb_ticks_per_sec; diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 01df89918aa4..72d872b49167 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -89,6 +89,7 @@ static struct clocksource clocksource_timebase = { #define DECREMENTER_DEFAULT_MAX 0x7FFFFFFF u64 decrementer_max = DECREMENTER_DEFAULT_MAX; +EXPORT_SYMBOL_GPL(decrementer_max); /* for KVM HDEC */ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 961b3d70483c..0ff9ddb5e7ca 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -504,7 +504,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } - mtspr(SPRN_HDEC, 0x7fffffff); + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); save_clear_guest_mmu(kvm, vcpu); switch_mmu_to_host(kvm, host_pidr); From patchwork Wed Aug 11 16:00:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515886 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=MKBLyv6w; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2Q3V2lz9t1s for ; Thu, 12 Aug 2021 02:02:21 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233608AbhHKQCp (ORCPT ); Wed, 11 Aug 2021 12:02:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233543AbhHKQCo (ORCPT ); Wed, 11 Aug 2021 12:02:44 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 315E8C061765 for ; Wed, 11 Aug 2021 09:02:21 -0700 (PDT) Received: 
by mail-pj1-x102a.google.com with SMTP id lw7-20020a17090b1807b029017881cc80b7so10332803pjb.3 for ; Wed, 11 Aug 2021 09:02:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mf3TsS6jnjzVv5z7Oc/l3ztXe15tmjXwWcLWuTb9UUM=; b=MKBLyv6wGjirWARVahG68wX6UrgEZSgOvPPYa+e+MMEbAl+ZEUFhYynkZ9VgiCZ6+2 RT0p9qMBvtrwv0eUPtM81dEYCiHDM6kP44liDTZ54jMvUXsIUOHjH4i85t28AhNWH0gz DOjQfDpiljVuuE4WKftOVFxenUjuTdnOjJNEtKhwlkaX0Yq/gjt0Zqac02fb3WfOsySi StBUQiDe843a17MNiLDPPTU26xjV5Q38V4CB2ihhy32f2e03Ue0LkHcycjH2yBMNcOtC IJliefC+5Y6abbhVY6pVGPXuvYEUh4MNHCc8zPoTonxIipADPsqSXMpeCrcvnfXelCI2 8EWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mf3TsS6jnjzVv5z7Oc/l3ztXe15tmjXwWcLWuTb9UUM=; b=ED57lL6AYvqak5bJX8IhEOZXOWxZSI06mjyQM27SfYcpe/3w6tdA5upwnoOOLtWNJN 6tfRS4cTd+GNavZOh2VPyADGXe5u4v5RUjWv+ho4vs7GQji0GwLzRsyse+RWuuEe7f/T 9pjm48+0AtR5ZYHd9AWjjBKTGK8UeI1TUDEweZw0+QY+NubjgqTS79YtQknoN9K7OOnf hyTD5YLGexxgJ+nmlLGpDKYBOzI37oHTywXiJxcUCTUau+a7Grneut5XZvz++tYZYlVI 8T17AEw/NMYVATmj9HTKO1lxfgAnZ2cKnQCnpwEadjDVXOqmHsCZMLQaUX7hfUxoHB9M UfXQ== X-Gm-Message-State: AOAM530Cbi/nsuYFIy60hQtfJkeI0FpLNYY5bgXBXyd+Q/OXZOxSKm79 aIW4ngoQOj9dWEfHJTxqkcTVA7TkBEc= X-Google-Smtp-Source: ABdhPJypWybYsNqExAn5HNW8Otzxniv9N4Q81dTKLO/4cyNhrdpr+vPvmKt/IyP44fho1NG0XMLA0A== X-Received: by 2002:a63:f342:: with SMTP id t2mr540567pgj.45.1628697740696; Wed, 11 Aug 2021 09:02:20 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:20 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 14/60] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Date: Thu, 12 Aug 2021 02:00:48 +1000 Message-Id: <20210811160134.904987-15-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb is serialising (dispatch next-to-complete) so it is heavy weight for a mfspr. Avoid reading it multiple times in the entry or exit paths. A small number of cycles delay to timers is tolerable. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 4 ++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 5 +++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 34e95d3c89e5..49a3c1e6a544 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3926,7 +3926,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * * XXX: Another day's problem. 
*/ - mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb()); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); if (kvmhv_on_pseries()) { /* @@ -4049,7 +4049,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->in_guest = 0; next_timer = timer_get_next_tb(); - set_dec(next_timer - mftb()); + set_dec(next_timer - tb); /* We may have raced with new irq work */ if (test_irq_work_pending()) set_dec(1); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 0ff9ddb5e7ca..bd8cf0a65ce8 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -203,7 +203,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr1; unsigned long host_dawrx1; - hdec = time_limit - mftb(); + tb = mftb(); + hdec = time_limit - tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; @@ -215,7 +216,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; if (vc->tb_offset) { - u64 new_tb = mftb() + vc->tb_offset; + u64 new_tb = tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); tb = mftb(); if ((tb & 0xffffff) < (new_tb & 0xffffff)) From patchwork Wed Aug 11 16:00:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515887 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=DU/XdwOT; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2S5mLlz9t23 for ; Thu, 12 Aug 2021 02:02:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233333AbhHKQCr (ORCPT ); Wed, 11 Aug 2021 12:02:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51836 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232946AbhHKQCr (ORCPT ); Wed, 11 Aug 2021 12:02:47 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BBC9EC061765 for ; Wed, 11 Aug 2021 09:02:23 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id bo18so4283684pjb.0 for ; Wed, 11 Aug 2021 09:02:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=jy8DCOB124Cxz6kPB5i1NMcJiSWvFl8tCGaD2ovTI/0=; b=DU/XdwOTubNov3Q6Gjg0T3JUR1X5+Eh7mdrTxw1mPTyGh5BdmdxkNGD+AYyWHu3vHf MhQbPItUw0kf2KOQQ0Mqd28Y2//x0UNYOy/lffoLITfJiMwmdAqlsg36NH0nTl7+Q9lb OZyLPGxWKKAWOpvOFqIBeP2PgOOiQ8mSXBzHOMdV69e5VangoyOoXQYSPelfvAa5JWqp srFmliThGhNG4Nowba03ZSA701XJHpNKTqYlAprY5kNO7QTXsyxKyGFLFYrpM3wL6Oo4 OlGJdGq1XvrQ+m/8MTABihzPmFQrfhkewMwDStQBBtBA+Jt3vSxNHrWHGXOm5xoBG97n Po6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=jy8DCOB124Cxz6kPB5i1NMcJiSWvFl8tCGaD2ovTI/0=; b=n2wEZFJELjN/R03/+JTD71AAIFdD8nTk0BhM9culrai/UGIDDMktGwuS49ZhhSgkoa //vaBcwCMQpS2vS4feiEfzs489mvVzSZ6bbK7Kwo7NUv0rlWNrAWVWV1ddoJ5BVlTA0S 0qxMOibwvhmNbOq7yTxe7dXN3s2iw84DE00PE8Brq06NpD4GD+zBcJx57yA9YfffZYuM haPYDLFxpnerBvFWW60sWL6uigsaKwnN4zNtAcYEpfmrg+mXf18yHi6DWKSTeS9wMWyY QcrVanindY4r1IKdNULtyfG6UMaP6O7W+lW8cLFxk3MUp0M69T5wbKRS0FIWZ4eU6ufR 3Ziw== X-Gm-Message-State: AOAM532bNom1yD/NnflxZ+bnPclSyyX8V1CftkazpmUh9pDuxt0HR2Ts V+cWZDMRJqt2f7Ntvwm3HPUQpJcPjT4= X-Google-Smtp-Source: ABdhPJxpamzXgIwKgVvlI/MpCvGFBmyAL3P/GP4G39jZqu0fOvGOXbrMHPPfphvhDatZpJdLR2BxoQ== X-Received: by 2002:a63:464b:: with SMTP id v11mr15830pgk.26.1628697743153; Wed, 11 Aug 2021 09:02:23 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:22 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 15/60] powerpc/time: add API for KVM to re-arm the host timer/decrementer Date: Thu, 12 Aug 2021 02:00:49 +1000 Message-Id: <20210811160134.904987-16-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than have KVM look up the host timer and fiddle with the irq-work internal details, have the powerpc/time.c code provide a function for KVM to re-arm the Linux timer code when exiting a guest. This implementation has an improvement over the existing code: it marks a decrementer interrupt as soft-pending if a timer has expired, rather than setting DEC to a -ve value, which tended to cause host timers to take two interrupts (first hdec to exit the guest, then the immediate dec). Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/time.h | 16 +++------- arch/powerpc/kernel/time.c | 52 +++++++++++++++++++++++++++------ arch/powerpc/kvm/book3s_hv.c | 7 ++--- 3 files changed, 49 insertions(+), 26 deletions(-) diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h index 69b6be617772..924b2157882f 100644 --- a/arch/powerpc/include/asm/time.h +++ b/arch/powerpc/include/asm/time.h @@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low, extern void secondary_cpu_time_init(void); extern void __init time_init(void); -#ifdef CONFIG_PPC64 -static inline unsigned long test_irq_work_pending(void) -{ - unsigned long x; - - asm volatile("lbz %0,%1(13)" - : "=r" (x) - : "i" (offsetof(struct paca_struct, irq_work_pending))); - return x; -} -#endif - DECLARE_PER_CPU(u64, decrementers_next_tb); static inline u64 timer_get_next_tb(void) @@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void) return __this_cpu_read(decrementers_next_tb); } +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now); +#endif + /* Convert timebase ticks to nanoseconds */ unsigned long long tb_to_ns(unsigned long long tb_ticks); diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index 72d872b49167..016828b7401b 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -499,6 +499,16 @@ EXPORT_SYMBOL(profile_pc); * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
*/ #ifdef CONFIG_PPC64 +static inline unsigned long test_irq_work_pending(void) +{ + unsigned long x; + + asm volatile("lbz %0,%1(13)" + : "=r" (x) + : "i" (offsetof(struct paca_struct, irq_work_pending))); + return x; +} + static inline void set_irq_work_pending_flag(void) { asm volatile("stb %0,%1(13)" : : @@ -542,13 +552,44 @@ void arch_irq_work_raise(void) preempt_enable(); } +static void set_dec_or_work(u64 val) +{ + set_dec(val); + /* We may have raced with new irq work */ + if (unlikely(test_irq_work_pending())) + set_dec(1); +} + #else /* CONFIG_IRQ_WORK */ #define test_irq_work_pending() 0 #define clear_irq_work_pending() +static void set_dec_or_work(u64 val) +{ + set_dec(val); +} #endif /* CONFIG_IRQ_WORK */ +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE +void timer_rearm_host_dec(u64 now) +{ + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); + + WARN_ON_ONCE(!arch_irqs_disabled()); + WARN_ON_ONCE(mfmsr() & MSR_EE); + + if (now >= *next_tb) { + local_paca->irq_happened |= PACA_IRQ_DEC; + } else { + now = *next_tb - now; + if (now <= decrementer_max) + set_dec_or_work(now); + } +} +EXPORT_SYMBOL_GPL(timer_rearm_host_dec); +#endif + /* * timer_interrupt - gets called when the decrementer overflows, * with interrupts disabled. @@ -609,10 +650,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt) } else { now = *next_tb - now; if (now <= decrementer_max) - set_dec(now); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(now); __this_cpu_inc(irq_stat.timer_irqs_others); } @@ -854,11 +892,7 @@ static int decrementer_set_next_event(unsigned long evt, struct clock_event_device *dev) { __this_cpu_write(decrementers_next_tb, get_tb() + evt); - set_dec(evt); - - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + set_dec_or_work(evt); return 0; } diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 49a3c1e6a544..5e48a929d670 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4048,11 +4048,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - next_timer = timer_get_next_tb(); - set_dec(next_timer - tb); - /* We may have raced with new irq work */ - if (test_irq_work_pending()) - set_dec(1); + timer_rearm_host_dec(tb); + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); kvmhv_load_host_pmu(); From patchwork Wed Aug 11 16:00:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515889 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ajkySZGK; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2W2Y2Vz9sXV for ; Thu, 12 Aug 2021 02:02:27 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233338AbhHKQCu (ORCPT ); Wed, 11 Aug 2021 12:02:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) 
by vger.kernel.org with ESMTP id S232946AbhHKQCu (ORCPT ); Wed, 11 Aug 2021 12:02:50 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 84F50C061765 for ; Wed, 11 Aug 2021 09:02:26 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id t7-20020a17090a5d87b029017807007f23so10307149pji.5 for ; Wed, 11 Aug 2021 09:02:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=PvCtp8x/xnVOg7P/kY+PiLHYDRcQOyLkNAyPlsDlMl8=; b=ajkySZGKELPOFGlAUa0DcURtiIKi9T3MyAEIAFYJvPxxWt9C91f84Djb5JTPQCUFDQ pJi0II2JSDY5y670wq5wlqBTT291LpkBwSrjMzCoOjL/uvaQDHXKcEkEGTi/MR0ZSM0d CVQyKQPDn0vXr8+v4Rtd0AAOmF24Ew3j+uvsWVO3sOX+e7jZkCTWsUh4xHUWnPe2KWvI y4qVxsudnZI93rN+VL2lD1Ehem6MT4xIhHgBmciaxi8PivyEm4KktZlQlR8sqIdROgfh 9k999+8Qu3uD5Fh5bgvenxgcJ/QiLFGbJy07en+uRZiwYfv7jMOCep6KMrZPwytwtTgC sn6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=PvCtp8x/xnVOg7P/kY+PiLHYDRcQOyLkNAyPlsDlMl8=; b=NmhMsX7+v68ZqzKU0bvj41f7cFxyCXkENHB59WGuC6gbvmGzKGyWHxannwtPJK+EL0 8KtvTvNgS05efWiLjTF+8fzEX/FMVfc2MO7V8YlrV3eScw3m3aE9A2k5ecd+TmJoGwLx kGp6Wwv6h25fcg5c/aEsnCCAW5pmMDB9i6Uym6LYw0kt0/pefFG2zs8DS5mX1U33UTJC o1mvb1aiKLSg5hfg3gmPm6LFrNEKkEX8VAGOX3zs8X4dtthqjEv6LImqBXBOZOGh+U3R u2WAs8UgkxJrM3QkuLC9ypOvrOY0A6IiBw+O4mmSASPacLZG6MWchB4QtDZ37wEO5XHd uAng== X-Gm-Message-State: AOAM532O9BHXLxQfoZZIYsB2HUHCsnIWyyFNUUOBAdQPvn2GGn1kcILp vXxeKXDSPM57k6vv2sN1efXFB+0p9ic= X-Google-Smtp-Source: ABdhPJwM2JXz9u/GyBAQDqJFThzbzJNZQ9eRAZFa1EjHHIAXHksIN4bDV6a30r/1jGgLUAZRKZRnrg== X-Received: by 2002:a17:90b:3758:: with SMTP id ne24mr16269601pjb.218.1628697745967; Wed, 11 Aug 2021 09:02:25 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:25 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 16/60] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests Date: Thu, 12 Aug 2021 02:00:50 +1000 Message-Id: <20210811160134.904987-17-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org HV interrupts may be taken with the MMU enabled when radix guests are running. Enable LPCR[HAIL] on ISA v3.1 processors for radix guests. Make this depend on the host LPCR[HAIL] being enabled. Currently that is always enabled, but having this test means any issue that might require LPCR[HAIL] to be disabled in the host will not have to be duplicated in KVM. This optimisation takes 1380 cycles off a NULL hcall entry+exit micro benchmark on a POWER10. 
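Condensing the rule that the diff below applies at its three LPCR update sites (an illustrative restatement only; sketch_guest_lpcr_hail() is an invented helper, not part of the patch): the guest gets LPCR[HAIL] only when it runs radix, the CPU is ISA v3.1, and the HV-mode host itself has LPCR[HAIL] set.

/* Illustrative restatement of the HAIL policy added below. */
static unsigned long sketch_guest_lpcr_hail(struct kvm *kvm, bool radix_guest)
{
	if (radix_guest &&
	    cpu_has_feature(CPU_FTR_ARCH_31) &&
	    cpu_has_feature(CPU_FTR_HVMODE) &&
	    (kvm->arch.host_lpcr & LPCR_HAIL))
		return LPCR_HAIL;
	return 0;
}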
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 29 +++++++++++++++++++++++++---- 1 file changed, 25 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 5e48a929d670..2fe01dc2062f 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -5034,6 +5034,8 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu) */ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) { + unsigned long lpcr, lpcr_mask; + if (nesting_enabled(kvm)) kvmhv_release_all_nested(kvm); kvmppc_rmap_reset(kvm); @@ -5043,8 +5045,13 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) kvm->arch.radix = 0; spin_unlock(&kvm->mmu_lock); kvmppc_free_radix(kvm); - kvmppc_update_lpcr(kvm, LPCR_VPM1, - LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR); + + lpcr = LPCR_VPM1; + lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_ARCH_31)) + lpcr_mask |= LPCR_HAIL; + kvmppc_update_lpcr(kvm, lpcr, lpcr_mask); + return 0; } @@ -5054,6 +5061,7 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm) */ int kvmppc_switch_mmu_to_radix(struct kvm *kvm) { + unsigned long lpcr, lpcr_mask; int err; err = kvmppc_init_vm_radix(kvm); @@ -5065,8 +5073,17 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm) kvm->arch.radix = 1; spin_unlock(&kvm->mmu_lock); kvmppc_free_hpt(&kvm->arch.hpt); - kvmppc_update_lpcr(kvm, LPCR_UPRT | LPCR_GTSE | LPCR_HR, - LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR); + + lpcr = LPCR_UPRT | LPCR_GTSE | LPCR_HR; + lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + lpcr_mask |= LPCR_HAIL; + if (cpu_has_feature(CPU_FTR_HVMODE) && + (kvm->arch.host_lpcr & LPCR_HAIL)) + lpcr |= LPCR_HAIL; + } + kvmppc_update_lpcr(kvm, lpcr, lpcr_mask); + return 0; } @@ -5230,6 +5247,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm) kvm->arch.mmu_ready = 1; lpcr &= ~LPCR_VPM1; lpcr |= LPCR_UPRT | LPCR_GTSE | LPCR_HR; + if (cpu_has_feature(CPU_FTR_HVMODE) && + cpu_has_feature(CPU_FTR_ARCH_31) && + (kvm->arch.host_lpcr & LPCR_HAIL)) + lpcr |= LPCR_HAIL; ret = kvmppc_init_vm_radix(kvm); if (ret) { kvmppc_free_lpid(kvm->arch.lpid); From patchwork Wed Aug 11 16:00:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515890 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=fAA4kxhE; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2Z1Gwjz9t4b for ; Thu, 12 Aug 2021 02:02:30 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232946AbhHKQCx (ORCPT ); Wed, 11 Aug 2021 12:02:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233609AbhHKQCx (ORCPT ); Wed, 11 Aug 2021 12:02:53 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS 
id 5D9A4C061765 for ; Wed, 11 Aug 2021 09:02:29 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id fa24-20020a17090af0d8b0290178bfa69d97so5809865pjb.0 for ; Wed, 11 Aug 2021 09:02:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=O/sqZTnORdZuItaT1yAiOHwnavM4EItQXgoU+iu2aPc=; b=fAA4kxhEbTwbiOXg5yCqxeDBawIM+5SiKntmU59nwBTx3YG48mZV9UCg70OCuh3RY1 6gH8dDMObgW8l2UES4ODenHlsxODyww3lPSx4FwEU63f2SpHHIAzxOlAUF8IxaMOaTdX hHcPAaEHDgnmKtxvVib6Sbpver9+Gju/6zIVeiLabGGtxpB61pICRtZazWY5ePV0XYIk m3Kv9u9kRkXlUG+AJX2to5L07xCljwSxwx3SnaAnLuHCvmF7VJPdOFKdTyGNoijl8mpX /50m3BwPnuHYEtP/TA0IjsCyq2JYTVFiqV6/KN7RN35AwXvAwo6PErhTX6Ss7KYWDgBq qyEw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=O/sqZTnORdZuItaT1yAiOHwnavM4EItQXgoU+iu2aPc=; b=J7lmn5J9w7CivVD7QuDqoY4BXIM2CHPGqQlJ3WADLTHeDDcxpAnw1DXRn00npKNZE3 R6jJrBdadegzAWOX4GPyT4Gdgx6feUsFTvqIbrECrDQoUpfZA/deN7MBC/oaT+yNbRSo Y3FLxMi2BfPceu++Jo8BEDm8hMiVqXxrVaGSdhSxqzoncgvN7XQ4VDa7pPcpBsQKmIYX SF9ejxRImDFIFk8TXNbFqCMr8/y/JpcqEDoU6XbDmk18P2cDB6lnzWjFfQxDLFnU3++E kIHEgdHhNxrliO4wqt9CUXelXQt7vMZpebMXZF0GBrchBCf8EDAsMGcXbQfnwOiuR+fT pxlw== X-Gm-Message-State: AOAM5305pmNd/XHZVMjbfISlBdBAgW35a+0A3yZzCS9VUm/SLr1bfL3p GuUnEqbxho+96oRQwvAM79UH55uqBC8= X-Google-Smtp-Source: ABdhPJxRk5lT65wV9miwWbYeCWtZRrQEo390TolWS+ANIV1KqhWIz6XYlZ8qMGuWUvaAaA42yJu9og== X-Received: by 2002:a17:90a:4481:: with SMTP id t1mr38069017pjg.232.1628697748797; Wed, 11 Aug 2021 09:02:28 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:28 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 17/60] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime Date: Thu, 12 Aug 2021 02:00:51 +1000 Message-Id: <20210811160134.904987-18-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This register controls supervisor SPR modifications, and as such is only relevant for KVM. KVM always sets AMOR to ~0 on guest entry, and never restores it coming back out to the host, so it can be kept constant and avoid the mtSPR in KVM guest entry. 
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/cpu_setup_power.c | 8 ++++++++ arch/powerpc/kernel/dt_cpu_ftrs.c | 2 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 2 -- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 -- arch/powerpc/mm/book3s64/radix_pgtable.c | 15 --------------- arch/powerpc/platforms/powernv/idle.c | 8 +++----- 6 files changed, 13 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c index 3cca88ee96d7..a29dc8326622 100644 --- a/arch/powerpc/kernel/cpu_setup_power.c +++ b/arch/powerpc/kernel/cpu_setup_power.c @@ -137,6 +137,7 @@ void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH); } @@ -150,6 +151,7 @@ void __restore_cpu_power7(void) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH); } @@ -164,6 +166,7 @@ void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */ init_HFSCR(); @@ -184,6 +187,7 @@ void __restore_cpu_power8(void) return; mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */ init_HFSCR(); @@ -202,6 +206,7 @@ void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -223,6 +228,7 @@ void __restore_cpu_power9(void) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -242,6 +248,7 @@ void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); @@ -264,6 +271,7 @@ void __restore_cpu_power10(void) mtspr(SPRN_PSSCR, 0); mtspr(SPRN_LPID, 0); mtspr(SPRN_PID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_PCR, PCR_MASK); init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\ LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0); diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index af95f337e54b..38ea20fadc4a 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -80,6 +80,7 @@ static void __restore_cpu_cpufeatures(void) mtspr(SPRN_LPCR, system_registers.lpcr); if (hv_mode) { mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_HFSCR, system_registers.hfscr); mtspr(SPRN_PCR, system_registers.pcr); } @@ -216,6 +217,7 @@ static int __init feat_enable_hv(struct dt_cpu_feature *f) } mtspr(SPRN_LPID, 0); + mtspr(SPRN_AMOR, ~0); lpcr = mfspr(SPRN_LPCR); lpcr &= ~LPCR_LPES0; /* HV external interrupts */ diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index bd8cf0a65ce8..a7f63082b4e3 100644 --- 
a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -286,8 +286,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); - mtspr(SPRN_AMOR, ~0UL); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; /* diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 75079397c2a5..9021052f1579 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -772,10 +772,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) /* Restore AMR and UAMOR, set AMOR to all 1s */ ld r5,VCPU_AMR(r4) ld r6,VCPU_UAMOR(r4) - li r7,-1 mtspr SPRN_AMR,r5 mtspr SPRN_UAMOR,r6 - mtspr SPRN_AMOR,r7 /* Restore state of CTRL run bit; assume 1 on entry */ lwz r5,VCPU_CTRL(r4) diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index e50ddf129c15..5aebd70ef66a 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -572,18 +572,6 @@ void __init radix__early_init_devtree(void) return; } -static void radix_init_amor(void) -{ - /* - * In HV mode, we init AMOR (Authority Mask Override Register) so that - * the hypervisor and guest can setup IAMR (Instruction Authority Mask - * Register), enable key 0 and set it to 1. - * - * AMOR = 0b1100 .... 0000 (Mask for key 0 is 11) - */ - mtspr(SPRN_AMOR, (3ul << 62)); -} - void __init radix__early_init_mmu(void) { unsigned long lpcr; @@ -644,7 +632,6 @@ void __init radix__early_init_mmu(void) lpcr = mfspr(SPRN_LPCR); mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR); radix_init_partition_table(); - radix_init_amor(); } else { radix_init_pseries(); } @@ -668,8 +655,6 @@ void radix__early_init_mmu_secondary(void) set_ptcr_when_no_uv(__pa(partition_tb) | (PATB_SIZE_SHIFT - 12)); - - radix_init_amor(); } radix__switch_mmu_context(NULL, &init_mm); diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c index df19e2ff9d3c..721ac4f7e2d1 100644 --- a/arch/powerpc/platforms/powernv/idle.c +++ b/arch/powerpc/platforms/powernv/idle.c @@ -306,8 +306,8 @@ struct p7_sprs { /* per thread SPRs that get lost in shallow states */ u64 amr; u64 iamr; - u64 amor; u64 uamor; + /* amor is restored to constant ~0 */ }; static unsigned long power7_idle_insn(unsigned long type) @@ -378,7 +378,6 @@ static unsigned long power7_idle_insn(unsigned long type) if (cpu_has_feature(CPU_FTR_ARCH_207S)) { sprs.amr = mfspr(SPRN_AMR); sprs.iamr = mfspr(SPRN_IAMR); - sprs.amor = mfspr(SPRN_AMOR); sprs.uamor = mfspr(SPRN_UAMOR); } @@ -397,7 +396,7 @@ static unsigned long power7_idle_insn(unsigned long type) */ mtspr(SPRN_AMR, sprs.amr); mtspr(SPRN_IAMR, sprs.iamr); - mtspr(SPRN_AMOR, sprs.amor); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_UAMOR, sprs.uamor); } } @@ -687,7 +686,6 @@ static unsigned long power9_idle_stop(unsigned long psscr) sprs.amr = mfspr(SPRN_AMR); sprs.iamr = mfspr(SPRN_IAMR); - sprs.amor = mfspr(SPRN_AMOR); sprs.uamor = mfspr(SPRN_UAMOR); srr1 = isa300_idle_stop_mayloss(psscr); /* go idle */ @@ -708,7 +706,7 @@ static unsigned long power9_idle_stop(unsigned long psscr) */ mtspr(SPRN_AMR, sprs.amr); mtspr(SPRN_IAMR, sprs.iamr); - mtspr(SPRN_AMOR, sprs.amor); + mtspr(SPRN_AMOR, ~0); mtspr(SPRN_UAMOR, sprs.uamor); /* From patchwork Wed Aug 11 16:00:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Nicholas Piggin X-Patchwork-Id: 1515891 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Qe0/FtQK; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2c3dctz9t4b for ; Thu, 12 Aug 2021 02:02:32 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233609AbhHKQCz (ORCPT ); Wed, 11 Aug 2021 12:02:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51866 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQCz (ORCPT ); Wed, 11 Aug 2021 12:02:55 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AD195C061765 for ; Wed, 11 Aug 2021 09:02:31 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id d1so3308564pll.1 for ; Wed, 11 Aug 2021 09:02:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cKT4fTx7oGwRWzvpaHxdjqzbrtTqZjyygbqKOYWpOIc=; b=Qe0/FtQKhVwQV1cLcmplgCvwntQW+FjPNErVOqKGPL31gn6QeXoMl/3bNoON4C0gUH lv9QcHtJJ3G2J+FqJa/m5968lJbIzxGJpv7evVhhQxlXwCLdWs7AvWmbPtMCskfBnl7N gRkSo158HYGVTJmIYk3Ncr7DXnyIGqgVJrqgkJJBORe+PSItVe2DXoFTLqNje9qfV6Qs iiKuF5Q/24Pz19o+Ce0yqfyRbDGNL4PZ6fzEGoTkBe/6ueiAJfwGJUj5uEcKKAFBkOwR fd6+FgVRk74o3w7HRbJNBjkquh7y3Cev1tdKMFzEaASR9YqCj9u9Oyj8oVqIyx3J8POs 7Law== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cKT4fTx7oGwRWzvpaHxdjqzbrtTqZjyygbqKOYWpOIc=; b=QgBJTfXRJIDBSBGR1S4MwAhDvB4/D1CGF/bcvZw9dg/dZDq6ofBUSYq51UeXCML85L EmujTGPn9afun7tbpRCTcH42ekYkJk0vhbOu1QGsa3Q0CMIbiZFeIXDR0zP1NOPzyeUL jvRzmTk1mFuuuk7HNvWoiXubkNAXODn79R3Frj4P+o44Sjiq/asPNUjQzn1/cPM8uecx kClAezbfIPSiQU3oex5rQow19V1qyQIaNavePQY/qQ2uhNo1Jt4zbh4X8XFBe6BFL3/n oinaQYgXqI0/x+DgIOhSN1IRR8IGoAt9CKVJ0Uqummqi/N1fL57F1eTfAa+P1dsPnF2c H6rw== X-Gm-Message-State: AOAM532hlLvtErS+zQDhpIvucpZwpLrKlF9bxt1YHRokJqU1Y5d/zscr HgGfTNMwKOQAy10xlJX43EkZet+QEe0= X-Google-Smtp-Source: ABdhPJwQ5ivMxxa21BFrRKDX9Rhm0Iomm4tEiMybappdPSnvp0DlNEF3LTosHm27OJKOt4BGfREdVQ== X-Received: by 2002:a17:902:ecc6:b029:12c:44b:40bd with SMTP id a6-20020a170902ecc6b029012c044b40bdmr7605549plh.33.1628697751009; Wed, 11 Aug 2021 09:02:31 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:30 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 18/60] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Date: Thu, 12 Aug 2021 02:00:52 +1000 Message-Id: <20210811160134.904987-19-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: 
<20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Provide a config option that controls the the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu for guest capable of nesting"). The option defaults to y for now, but is expected to go away within a few releases. Nested capable guests running with the earlier commit ("KVM: PPC: Book3S HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU in-use status of their guests, which means the parent does not need to unconditionally save the PMU for nested capable guests. After this latest round of performance optimisations, this option costs about 540 cycles or 10% entry/exit performance on a POWER9 nested-capable guest. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/Kconfig | 15 +++++++++++++++ arch/powerpc/kvm/book3s_hv.c | 10 ++++++++-- 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig index e45644657d49..d57dcbc4eb0f 100644 --- a/arch/powerpc/kvm/Kconfig +++ b/arch/powerpc/kvm/Kconfig @@ -131,6 +131,21 @@ config KVM_BOOK3S_HV_EXIT_TIMING If unsure, say N. +config KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND + bool "Nested L0 host workaround for L1 KVM host PMU handling bug" if EXPERT + depends on KVM_BOOK3S_HV_POSSIBLE + default !EXPERT + help + Old nested HV capable Linux guests have a bug where the don't + reflect the PMU in-use status of their L2 guest to the L0 host + while the L2 PMU registers are live. This can result in loss + of L2 PMU register state, causing perf to not work correctly in + L2 guests. + + Selecting this option for the L0 host implements a workaround for + those buggy L1s which saves the L2 state, at the cost of performance + in all nested-capable guest entry/exit. + config KVM_BOOKE_HV bool diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2fe01dc2062f..197665c1a1cd 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4033,8 +4033,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; save_pmu = lp->pmcregs_in_use; } - /* Must save pmu if this guest is capable of running nested guests */ - save_pmu |= nesting_enabled(vcpu->kvm); + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This is option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2. 
+ */ + save_pmu |= nesting_enabled(vcpu->kvm); + } kvmhv_save_guest_pmu(vcpu, save_pmu); #ifdef CONFIG_PPC_PSERIES From patchwork Wed Aug 11 16:00:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515893 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=m38fx7Mj; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2h28Lhz9t4b for ; Thu, 12 Aug 2021 02:02:36 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233613AbhHKQC7 (ORCPT ); Wed, 11 Aug 2021 12:02:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51878 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQC6 (ORCPT ); Wed, 11 Aug 2021 12:02:58 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73409C061765 for ; Wed, 11 Aug 2021 09:02:34 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id lw7-20020a17090b1807b029017881cc80b7so10333942pjb.3 for ; Wed, 11 Aug 2021 09:02:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=yZjufjuNadV1tghNNfY89lJSLDdECYjetCLWMrMMZtk=; b=m38fx7MjF9GLTn7G7CX6ptbf2MEFMOR2BN6eFsf1rwSxK69YYE8lrvFFLtplE02onm IPjG29oDa7wt9QpiY+qKMVdGYBFLR33agxko6jN0LnT89z/ofi/Akpm0GYxsBJxddWZB 9XcrTmgU8kHiX5r5UgqmGo2MqPqGm5Dzv5RtKBuI8A8L7HLn6/4P2QrkZfNjZXpBLBmW FNDy6iwz5wi4ffmJOLWTn6T0preI7q7ajs6L0/JGSxZnp8hXzElv4pY0pWqqfz7MglDS I95hg+me3j2HIJR1mANo0yWA5ubJK+euNanTae/bhAWSmz23hhBmncuTNlQepPY56PlJ H/Gw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=yZjufjuNadV1tghNNfY89lJSLDdECYjetCLWMrMMZtk=; b=De6vAu1RfTqcQ/CtCV0fJKGwsopl+Y5jonGudBEbcIUeWgEIbsuedbDKHzXNA+3KIg 6MzCEdNgjMEieZou6P2AnKM9u/C9fUyaeQqsns9ZNPynj5OPVyLGHDutmNhWn9xmZ1b7 XmMLmBovGo1GBLLIe6qI+nBDctUNN9ToSiotWqt04T27KZCBRLMBHTC3jn+NNw8NqgMa jyB5f00spcA9TGnja4f6Xalkxo3khvd61zAyptKv1W+8ZKOGw4tw8qKmqGseZ3CylkTH 74hHavRy3Fod5IPyPO4R9mKNTVTqg366k452yy7IVersoXiI71H9pXUw+7Khr6fkrOAl r8zg== X-Gm-Message-State: AOAM533ZqXs8yvMsj0wEPYHmXjgrZtyZNYXXI4E5bIgpikcpTv8VitNh sjF6BLngTgU+pCGA5TiXBEGmzDOlbyU= X-Google-Smtp-Source: ABdhPJwHhdLxVG/f4hmuVWZtbivuXl7M2kIpPYqbLjFGsM/0UhkIb8Xix9zFZbjXXh0r4cI7gBnAkQ== X-Received: by 2002:a17:902:d2c8:b029:12b:84f8:d916 with SMTP id n8-20020a170902d2c8b029012b84f8d916mr1293181plc.75.1628697753915; Wed, 11 Aug 2021 09:02:33 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:33 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , 
linuxppc-dev@lists.ozlabs.org, Athira Jajeev , Madhavan Srinivasan Subject: [PATCH v2 19/60] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Date: Thu, 12 Aug 2021 02:00:53 +1000 Message-Id: <20210811160134.904987-20-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org KVM PMU management code looks for particular frozen/disabled bits in the PMU registers so it knows whether it must clear them when coming out of a guest or not. Setting this up helps KVM make these optimisations without getting confused. Longer term the better approach might be to move guest/host PMU switching to the perf subsystem. Cc: Athira Jajeev Cc: Madhavan Srinivasan Signed-off-by: Nicholas Piggin --- arch/powerpc/kernel/cpu_setup_power.c | 4 ++-- arch/powerpc/kernel/dt_cpu_ftrs.c | 6 +++--- arch/powerpc/kvm/book3s_hv.c | 5 +++++ 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c index a29dc8326622..3dc61e203f37 100644 --- a/arch/powerpc/kernel/cpu_setup_power.c +++ b/arch/powerpc/kernel/cpu_setup_power.c @@ -109,7 +109,7 @@ static void init_PMU_HV_ISA207(void) static void init_PMU(void) { mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -123,7 +123,7 @@ static void init_PMU_ISA31(void) { mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } /* diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c index 38ea20fadc4a..a6bb0ee179cd 100644 --- a/arch/powerpc/kernel/dt_cpu_ftrs.c +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c @@ -353,7 +353,7 @@ static void init_pmu_power8(void) } mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); mtspr(SPRN_MMCRS, 0); @@ -392,7 +392,7 @@ static void init_pmu_power9(void) mtspr(SPRN_MMCRC, 0); mtspr(SPRN_MMCRA, 0); - mtspr(SPRN_MMCR0, 0); + mtspr(SPRN_MMCR0, MMCR0_FC); mtspr(SPRN_MMCR1, 0); mtspr(SPRN_MMCR2, 0); } @@ -428,7 +428,7 @@ static void init_pmu_power10(void) mtspr(SPRN_MMCR3, 0); mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE); - mtspr(SPRN_MMCR0, MMCR0_PMCCEXT); + mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT); } static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 197665c1a1cd..e6cb923f91ff 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2718,6 +2718,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) #endif #endif vcpu->arch.mmcr[0] = MMCR0_FC; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[0] |= MMCR0_PMCCEXT; + vcpu->arch.mmcra = MMCRA_BHRB_DISABLE; + } + vcpu->arch.ctrl = CTRL_RUNLATCH; /* default to host PVR, since we can't spoof it */ kvmppc_set_pvr_hv(vcpu, mfspr(SPRN_PVR)); From patchwork Wed Aug 11 16:00:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515894 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) 
smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Rd+1J8yi; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2k2BJCz9t2b for ; Thu, 12 Aug 2021 02:02:38 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233612AbhHKQDB (ORCPT ); Wed, 11 Aug 2021 12:03:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDB (ORCPT ); Wed, 11 Aug 2021 12:03:01 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 705C1C061765 for ; Wed, 11 Aug 2021 09:02:37 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id lw7-20020a17090b1807b029017881cc80b7so10334187pjb.3 for ; Wed, 11 Aug 2021 09:02:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=P8Q3KwZ/CYYBaLeek32FHX+IEOZLMmBQsPsW8ZydE6I=; b=Rd+1J8yiTcTvQ57f+VFxDmknStxCe5TXGMpFuMGgEPf+SWha0OfA3fYMSflmDVRIOT Hn0xLZp02j6pUnZ/3MDDGR0uWN/QGBUZlmr8D0aHwqBFKkSTgQl/+I1gM0NlJ6/eVHWt mwr8HiILeF0T92kLSqarvJpa2vbuN7F7xh5jzNwRZnfKozu5WKmsy5DcbMSK13Y4v9GE Ur3VbGXxyeLiTTdHkHI/KI7IH8QIqQEKUqFsQ9t7JRqcMzQpHZSm37m+VnVJJ60I1sb3 +usEL2Uq2V0x/PfBqsqkUc5HEGB+2rf6/eBdB8SwRiuzUV6/BqROI+edpP5IXFiwpk84 tvzA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=P8Q3KwZ/CYYBaLeek32FHX+IEOZLMmBQsPsW8ZydE6I=; b=KYRmxgww5FnlHTqo3ylemoxb7XeB/WGzGGQFdqP35me+F1fq4C29X+Dr1F4LLEU6n+ Qe1Ukk1k9FBZnbknI1SSkSAJQCPANXYhsX8eLpjQRI/FIkOQGrPwCin/IgcT+nAp2UE6 Ang5mTSFvFZxsRVyzB4I769RshrlElb+gpAHnBLPDNL2nSpMHwCw6jC4AZhRUxoHiKzk ubqx8fCSZepOtvSSOxaMfTHjL12ZryoMIozkRrQdx3j/7UwymTcKSXiOU5Q3EM6hTraa /os+EwYkCelIEIhalmuBh1og7P07BoAfbnxKhBEnX/x2U9/5jSjNj2kHkFfFK+LzsWKU RbOQ== X-Gm-Message-State: AOAM532/W/30vsZYQLm2lKnnelckViaQEaE0ac1jg6px6Tp3MWE70Har exjiDdBvy8QJVBcR1RcaA6LGkbv6nBQ= X-Google-Smtp-Source: ABdhPJzPBN/9GCDk1LaiEjr3yjtRuKnH01sbJ+ZIjey6T/vO2JeP5QhBaS/ecfYcsuHD8PL3+4zH0A== X-Received: by 2002:a62:b504:0:b029:3c7:aa49:85a3 with SMTP id y4-20020a62b5040000b02903c7aa4985a3mr28650510pfe.60.1628697756888; Wed, 11 Aug 2021 09:02:36 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:36 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Athira Jajeev , Madhavan Srinivasan Subject: [PATCH v2 20/60] powerpc/64s: Implement PMU override command line option Date: Thu, 12 Aug 2021 02:00:54 +1000 Message-Id: <20210811160134.904987-21-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org It can be useful in simulators 
(with very constrained environments) to allow some PMCs to run from boot so they can be sampled directly by a test harness, rather than having to run perf. A previous change freezes counters at boot by default, so provide a boot time option to un-freeze (plus a bit more flexibility). Cc: Athira Jajeev Cc: Madhavan Srinivasan Signed-off-by: Nicholas Piggin --- .../admin-guide/kernel-parameters.txt | 8 +++++ arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++ 2 files changed, 43 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index bdb22006f713..f8ef1fc2ee9e 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4089,6 +4089,14 @@ Override pmtimer IOPort with a hex value. e.g. pmtmr=0x508 + pmu_override= [PPC] Override the PMU. + This option takes over the PMU facility, so it is no + longer usable by perf. Setting this option starts the + PMU counters by setting MMCR0 to 0 (the FC bit is + cleared). If a number is given, then MMCR1 is set to + that number, otherwise (e.g., 'pmu_override=on'), MMCR1 + remains 0. + pm_debug_messages [SUSPEND,KNL] Enable suspend/resume debug messages during boot up. diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c index 65795cadb475..45d894d85966 100644 --- a/arch/powerpc/perf/core-book3s.c +++ b/arch/powerpc/perf/core-book3s.c @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu) } #ifdef CONFIG_PPC64 +static bool pmu_override = false; +static unsigned long pmu_override_val; +static void do_pmu_override(void *data) +{ + ppc_set_pmu_inuse(1); + if (pmu_override_val) + mtspr(SPRN_MMCR1, pmu_override_val); + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC); +} + static int __init init_ppc64_pmu(void) { + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) { + printk(KERN_WARNING "perf: disabling perf due to pmu_override= command line option.\n"); + on_each_cpu(do_pmu_override, NULL, 1); + return 0; + } + /* run through all the pmu drivers one at a time */ if (!init_power5_pmu()) return 0; @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void) return init_generic_compat_pmu(); } early_initcall(init_ppc64_pmu); + +static int __init pmu_setup(char *str) +{ + unsigned long val; + + if (!early_cpu_has_feature(CPU_FTR_HVMODE)) + return 0; + + pmu_override = true; + + if (kstrtoul(str, 0, &val)) + val = 0; + + pmu_override_val = val; + + return 1; +} +__setup("pmu_override=", pmu_setup); + #endif From patchwork Wed Aug 11 16:00:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515895 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Zbky1Vc2; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2n6pw6z9t1s for ; Thu, 12 Aug 2021 02:02:41 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233616AbhHKQDE (ORCPT ); Wed, 11 Aug 2021 12:03:04 -0400 
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51908 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDE (ORCPT ); Wed, 11 Aug 2021 12:03:04 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE9C1C061765 for ; Wed, 11 Aug 2021 09:02:40 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id u21-20020a17090a8915b02901782c36f543so10326404pjn.4 for ; Wed, 11 Aug 2021 09:02:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=S1letzOe5iDfS793f8g8Ji5wVT7TIiYGRaMo0G4LjUg=; b=Zbky1Vc2DUHA6u+SjYmxtlBeYVYB/EKfCkDnrtDFmVS3Om5s+LqHomtjijF0aRfzQ4 E60tokuuVx1fFn9qvexYIlN6DEFUV7ktpYQZp59PcSEoOg1JKjFmVKBJOcvu4ZWNiahX abGWKCqOZ14D1nvP8tQR+0dC2N17YQbkwjGR+RWG2Fm/6XcLHKyMCOYhH7nQi3U3szha f0CaJuNm54Q6nuhlrcSbrD8tGbcB2v6lzb1MSSMgUMrEtEEeLquK7JjyT+x2MJ16TV6c zNOJwQpdCKgaCuxHdx+vwTVFKZbAG6hSdESKbCHu8/nEdxaRWniHiEhHW2PDDfGVocx3 00FA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=S1letzOe5iDfS793f8g8Ji5wVT7TIiYGRaMo0G4LjUg=; b=DTrOFz4oFZOStYBRmDww80aO9fP3WaCoq59rRKup8Oq8Rcozno5G8AzdvnnTvyZO5S ayffN0BNwHI20ePpIMnJGHWcy+SU81KIZY//q552gwP4GyhuCmdJm9e47JMbtaTtG345 EjoiNZ4aapzMfX1qoVxahVaDnOWApCgygQ25blrE9/UJ2lT9iNvCr7MEf9SPWXSc02Qs QZZQiIrppVT1asiC+oFaL1phWFVkoaEA79L3VSPbHlgigpC04Flx9Eky3g478Q8FG3PE pq0YgTDB7tlfyi/LOitRHSpb02qzJ6N9bbuleXerI3GrYJZrlYUY/cJ/sD7MoAsiBf/O Ougg== X-Gm-Message-State: AOAM531+oocrzkIixoB4gl2SZ6yEN45lCPWndmp+v0KqLXKSBP0HQUMC VrzdGZNOQ2xFXhYt7flg4POVl+CPhpQ= X-Google-Smtp-Source: ABdhPJzxzxGMhEij6h+U5PIMhs3TKYvjqe69WVqGlR6K9XY6OP2w6aFYPVnDp10CqEGGAtaOw1ndkA== X-Received: by 2002:a63:4a55:: with SMTP id j21mr88364pgl.187.1628697760163; Wed, 11 Aug 2021 09:02:40 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:39 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Athira Jajeev , Madhavan Srinivasan Subject: [PATCH v2 21/60] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Date: Thu, 12 Aug 2021 02:00:55 +1000 Message-Id: <20210811160134.904987-22-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Implement the P9 path PMU save/restore code in C, and remove the POWER9/10 code from the P7/8 path assembly. 
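For reference, the ordering the new C save path relies on is freeze-then-read: MMCR0[FC] is set (followed by an isync) before the counters are sampled, so the values cannot move underneath the reads. A minimal sketch of that ordering, assuming the kernel's usual mfspr()/mtspr()/isync() accessors and the SPRN_*/MMCR0_FC definitions are in scope (illustrative only, not code from this patch):

	static void sketch_pmu_freeze_then_save(unsigned long *mmcr0, unsigned long *pmc1)
	{
		*mmcr0 = mfspr(SPRN_MMCR0);	/* remember the current control value */
		mtspr(SPRN_MMCR0, MMCR0_FC);	/* freeze all counters */
		isync();			/* order the freeze against the reads */
		*pmc1 = mfspr(SPRN_PMC1);	/* counter is now stable; PMC2..6 likewise */
	}

The real freeze_pmu() in the diff below additionally handles MMCRA and, on ISA v3.1, the MMCR0_PMCCEXT and MMCRA_BHRB_DISABLE bits.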
Cc: Athira Jajeev Cc: Madhavan Srinivasan Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/asm-prototypes.h | 5 - arch/powerpc/kvm/book3s_hv.c | 221 +++++++++++++++++++--- arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +---- 4 files changed, 208 insertions(+), 74 deletions(-) diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h index 222823861a67..41b8a1e1144a 100644 --- a/arch/powerpc/include/asm/asm-prototypes.h +++ b/arch/powerpc/include/asm/asm-prototypes.h @@ -141,11 +141,6 @@ static inline void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 msr, bool preserve_nv) { } #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ -void kvmhv_save_host_pmu(void); -void kvmhv_load_host_pmu(void); -void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use); -void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu); - void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu); long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr); diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e6cb923f91ff..c3fe05fe3c11 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3765,6 +3765,196 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } +/* + * Privileged (non-hypervisor) host registers to save. + */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. 
+ */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } +} + +static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ +} + +static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This is option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2. 
+ */ + save_pmu |= nesting_enabled(vcpu->kvm); + } + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + } else { + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + } +} + +static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +{ + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} + static void load_spr_state(struct kvm_vcpu *vcpu) { mtspr(SPRN_DSCR, vcpu->arch.dscr); @@ -3807,17 +3997,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.dscr = mfspr(SPRN_DSCR); } -/* - * Privileged (non-hypervisor) host registers to save. 
- */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; -}; - static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { host_os_sprs->dscr = mfspr(SPRN_DSCR); @@ -3865,7 +4044,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; - int trap, save_pmu; + int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -3878,7 +4057,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */ + save_p9_host_pmu(&host_os_sprs); kvmppc_subcore_enter_guest(); @@ -3908,7 +4087,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, barrier(); } #endif - kvmhv_load_guest_pmu(vcpu); + load_p9_guest_pmu(vcpu); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4030,24 +4209,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - save_pmu = 1; if (vcpu->arch.vpa.pinned_addr) { struct lppaca *lp = vcpu->arch.vpa.pinned_addr; u32 yield_count = be32_to_cpu(lp->yield_count) + 1; lp->yield_count = cpu_to_be32(yield_count); vcpu->arch.vpa.dirty = 1; - save_pmu = lp->pmcregs_in_use; - } - if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { - /* - * Save pmu if this guest is capable of running nested guests. - * This is option is for old L1s that do not set their - * lppaca->pmcregs_in_use properly when entering their L2. - */ - save_pmu |= nesting_enabled(vcpu->kvm); } - kvmhv_save_guest_pmu(vcpu, save_pmu); + save_p9_guest_pmu(vcpu); #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { barrier(); @@ -4063,7 +4232,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmhv_load_host_pmu(); + load_p9_host_pmu(&host_os_sprs); kvmppc_subcore_exit_guest(); diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S index 4444f83cb133..59d89e4b154a 100644 --- a/arch/powerpc/kvm/book3s_hv_interrupts.S +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S @@ -104,7 +104,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtlr r0 blr -_GLOBAL(kvmhv_save_host_pmu) +/* + * void kvmhv_save_host_pmu(void) + */ +kvmhv_save_host_pmu: BEGIN_FTR_SECTION /* Work around P8 PMAE bug */ li r3, -1 @@ -138,14 +141,6 @@ BEGIN_FTR_SECTION std r8, HSTATE_MMCR2(r13) std r9, HSTATE_SIER(r13) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, HSTATE_MMCR3(r13) - std r6, HSTATE_SIER2(r13) - std r7, HSTATE_SIER3(r13) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mfspr r3, SPRN_PMC1 mfspr r5, SPRN_PMC2 mfspr r6, SPRN_PMC3 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 9021052f1579..551ce223b40c 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -2738,10 +2738,11 @@ kvmppc_msr_interrupt: blr /* + * void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu) + * * Load up guest PMU state. R3 points to the vcpu struct. 
*/ -_GLOBAL(kvmhv_load_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_guest_pmu) +kvmhv_load_guest_pmu: mr r4, r3 mflr r0 li r3, 1 @@ -2775,27 +2776,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG) mtspr SPRN_MMCRA, r6 mtspr SPRN_SIAR, r7 mtspr SPRN_SDAR, r8 -BEGIN_FTR_SECTION - ld r5, VCPU_MMCR + 24(r4) - ld r6, VCPU_SIER + 8(r4) - ld r7, VCPU_SIER + 16(r4) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION ld r5, VCPU_MMCR + 16(r4) ld r6, VCPU_SIER(r4) mtspr SPRN_MMCR2, r5 mtspr SPRN_SIER, r6 -BEGIN_FTR_SECTION_NESTED(96) lwz r7, VCPU_PMC + 24(r4) lwz r8, VCPU_PMC + 28(r4) ld r9, VCPU_MMCRS(r4) mtspr SPRN_SPMC1, r7 mtspr SPRN_SPMC2, r8 mtspr SPRN_MMCRS, r9 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) mtspr SPRN_MMCR0, r3 isync @@ -2803,10 +2794,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) blr /* + * void kvmhv_load_host_pmu(void) + * * Reload host PMU state saved in the PACA by kvmhv_save_host_pmu. */ -_GLOBAL(kvmhv_load_host_pmu) -EXPORT_SYMBOL_GPL(kvmhv_load_host_pmu) +kvmhv_load_host_pmu: mflr r0 lbz r4, PACA_PMCINUSE(r13) /* is the host using the PMU? */ cmpwi r4, 0 @@ -2844,25 +2836,18 @@ BEGIN_FTR_SECTION mtspr SPRN_MMCR2, r8 mtspr SPRN_SIER, r9 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - ld r5, HSTATE_MMCR3(r13) - ld r6, HSTATE_SIER2(r13) - ld r7, HSTATE_SIER3(r13) - mtspr SPRN_MMCR3, r5 - mtspr SPRN_SIER2, r6 - mtspr SPRN_SIER3, r7 -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) mtspr SPRN_MMCR0, r3 isync mtlr r0 23: blr /* + * void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use) + * * Save guest PMU state into the vcpu struct. * r3 = vcpu, r4 = full save flag (PMU in use flag set in VPA) */ -_GLOBAL(kvmhv_save_guest_pmu) -EXPORT_SYMBOL_GPL(kvmhv_save_guest_pmu) +kvmhv_save_guest_pmu: mr r9, r3 mr r8, r4 BEGIN_FTR_SECTION @@ -2911,14 +2896,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) BEGIN_FTR_SECTION std r10, VCPU_MMCR + 16(r9) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) -BEGIN_FTR_SECTION - mfspr r5, SPRN_MMCR3 - mfspr r6, SPRN_SIER2 - mfspr r7, SPRN_SIER3 - std r5, VCPU_MMCR + 24(r9) - std r6, VCPU_SIER + 8(r9) - std r7, VCPU_SIER + 16(r9) -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) std r7, VCPU_SIAR(r9) std r8, VCPU_SDAR(r9) mfspr r3, SPRN_PMC1 @@ -2936,7 +2913,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) BEGIN_FTR_SECTION mfspr r5, SPRN_SIER std r5, VCPU_SIER(r9) -BEGIN_FTR_SECTION_NESTED(96) mfspr r6, SPRN_SPMC1 mfspr r7, SPRN_SPMC2 mfspr r8, SPRN_MMCRS @@ -2945,7 +2921,6 @@ BEGIN_FTR_SECTION_NESTED(96) std r8, VCPU_MMCRS(r9) lis r4, 0x8000 mtspr SPRN_MMCRS, r4 -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96) END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 22: blr From patchwork Wed Aug 11 16:00:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515896 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=eiiAc8pp; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 
4GlF2s5McBz9sXV for ; Thu, 12 Aug 2021 02:02:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233619AbhHKQDI (ORCPT ); Wed, 11 Aug 2021 12:03:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51920 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDH (ORCPT ); Wed, 11 Aug 2021 12:03:07 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7EF58C061765 for ; Wed, 11 Aug 2021 09:02:43 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id u13-20020a17090abb0db0290177e1d9b3f7so10368359pjr.1 for ; Wed, 11 Aug 2021 09:02:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dqMVvfuBOIM1K0b+cDSRuoKajK+MjiiA6DyuNM0QnsE=; b=eiiAc8ppPoKhBj0tm3THpd47ZLZMfgcxnFjqGAwR+dPo4xncGcQ09+EN0puUW7wtU4 aZrVmjsF88DO4tuK45/Gr8SRcUwz+EiXoMQEwt2CWCXilte0y6Gj7htLq1/iY818foxC Dtz6txdtO243NQiIwFRocwPG9AQjO3wC51dhrODfrx9MpYpQ3vyhqauU0FLIQ58MP4tM ALnohnyrANMAU+NF4msjPA3opzuG1jqmNbpEcQPAAn76k6IrYPBoai4Fb4219K1nZHZ8 igu/BR8v+OjsiXlx6E5Ly0Fx5yJex1ps19qYhlO8PN/jXTNP1TUHmw2VNt4l3bDOqYy4 a4Vg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dqMVvfuBOIM1K0b+cDSRuoKajK+MjiiA6DyuNM0QnsE=; b=VN0rKg3sVH4poZZ5ne2/NpVkCvRYIasUItbSTFpkkGoVP3sYQGTWW/rUMxLWUiLy9F d5EEb/utdSCEkqpnAr+OeZmYGidSIM7FTf5nUSRnnQoj+OizCQBecbc9xrTyI7LCvyno NacQre7niILq6x7xmzeUAb9dOiAZodhIs1SCMO69pREG0VgpgYZX++070XT7xDo0Z/rU Y5/Tm84ZPl66WuUAQJvLYI43VcGVBZk2EKA9unNHvpM/Yv6n1v4lzYI1UyBqsJa1bHNk PoWSHoNQlajS+nl/2I2lLmuSJm7L+AyTlo88+Yh5OLm2Y+Vx3f2/9lIAkZLvao02g+tE hCuA== X-Gm-Message-State: AOAM532Va/ys1QLcmq1Zsn9EfzY5kWUomVnP55xxECAz9u2SNWpXQ/bS YgL1IKdkXwy1XBpnW0vJN9ztBt28Kic= X-Google-Smtp-Source: ABdhPJw+QAiNZFHHj/MSqmKfte2qSj5ICYK5lnjSKrK0y25y0sEvWQeFMzb7HL9ExPuBINqLVivM5Q== X-Received: by 2002:a05:6a00:721:b029:3c7:623e:a871 with SMTP id 1-20020a056a000721b02903c7623ea871mr29208773pfm.9.1628697762938; Wed, 11 Aug 2021 09:02:42 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:42 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Athira Jajeev Subject: [PATCH v2 22/60] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Date: Thu, 12 Aug 2021 02:00:56 +1000 Message-Id: <20210811160134.904987-23-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rather than guest/host save/retsore functions, implement context switch functions that take care of details like the VPA update for nested. The reason to split these kind of helpers into explicit save/load functions is mainly to schedule SPR access nicely, but PMU is a special case where the load requires mtSPR (to stop counters) and other difficulties, so there's less possibility to schedule those nicely. 
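From the caller's point of view the result is simply a pair of context-switch calls bracketing the guest run, mirroring the call sites in the diff below (a sketch only, eliding the actual guest entry):

	switch_pmu_to_guest(vcpu, &host_os_sprs);
	/* ... enter and run the guest ... */
	switch_pmu_to_host(vcpu, &host_os_sprs);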
The SPR accesses also have side-effects if the PMU is running, and in later changes we keep the host PMU running as long as possible so this code can be better profiled, which also complicates scheduling. Cc: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++------------------- 1 file changed, 28 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c3fe05fe3c11..0ceb0bd28e13 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3820,7 +3820,8 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) isync(); } -static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) +static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { if (ppc_get_pmu_inuse()) { /* @@ -3854,10 +3855,21 @@ static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) host_os_sprs->sier3 = mfspr(SPRN_SIER3); } } -} -static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + if (vcpu->arch.vpa.pinned_addr) { + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; + } else { + get_lppaca()->pmcregs_in_use = 1; + } + barrier(); + } +#endif + + /* load guest */ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); @@ -3882,7 +3894,8 @@ static void load_p9_guest_pmu(struct kvm_vcpu *vcpu) /* No isync necessary because we're starting counters */ } -static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) +static void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { struct lppaca *lp; int save_pmu = 1; @@ -3925,10 +3938,15 @@ static void save_p9_guest_pmu(struct kvm_vcpu *vcpu) } else { freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); } -} -static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs) -{ +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + if (ppc_get_pmu_inuse()) { mtspr(SPRN_PMC1, host_os_sprs->pmc1); mtspr(SPRN_PMC2, host_os_sprs->pmc2); @@ -4057,8 +4075,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); - save_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -4075,19 +4091,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } - barrier(); - } -#endif - load_p9_guest_pmu(vcpu); + switch_pmu_to_guest(vcpu, &host_os_sprs); msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); @@ -4216,14 +4220,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.vpa.dirty = 1; } - save_p9_guest_pmu(vcpu); -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif + switch_pmu_to_host(vcpu, &host_os_sprs); vc->entry_exit_map = 0x101; vc->in_guest = 0; @@ -4232,8 +4229,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu 
*vcpu, u64 time_limit, mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - load_p9_host_pmu(&host_os_sprs); - kvmppc_subcore_exit_guest(); return trap; From patchwork Wed Aug 11 16:00:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515897 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=agJvXZ1X; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2y3FTYz9t1s for ; Thu, 12 Aug 2021 02:02:50 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233620AbhHKQDK (ORCPT ); Wed, 11 Aug 2021 12:03:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51932 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDK (ORCPT ); Wed, 11 Aug 2021 12:03:10 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5531EC061765 for ; Wed, 11 Aug 2021 09:02:46 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id q2so3256615plr.11 for ; Wed, 11 Aug 2021 09:02:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=yvgSohBs0lz6ePDfuRJAnsWvsoMCmdvVs4zUHcGUoII=; b=agJvXZ1XlDr9WbqXPgqs2E44Rwk7BzWuiMA7pr24MBgFVtUY8P7ffJOJzQ+j6RUwWk sbeIiM1mJ1l59xPT3/jKD8WbhYPffXdWDZtHMdCjqVh8NWISleSgO7AUKIuP262yd57q GxKrUWb28jtgw4op4AwwdUsWqb1gw5udaRFdb65CRRLT/6k98i6Crd5uZWsS/Ng+x0wu QLAeMirLBlCcvEWL0cCFMAD7kNw57YEzu5XzrwP5ybSIHNJgpgZxF5Ieqg29U+dNdYGv llUZxrG6llfj97ePmVBhdnz+A32iLss3z1KpIr1RbryP+SoPioHimV1Bh6qBOg5sME9+ w6kg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=yvgSohBs0lz6ePDfuRJAnsWvsoMCmdvVs4zUHcGUoII=; b=OOHQKk205bxmPjgfo4WAdhv3eus7AW8nDBK6558Mf+ebe8NxWEEcRPnjoDXXdfh20V wMVJqj/518NQcD7s1HTOXPRXxirBa+Wf4sfdk15nDzA4FpuxtlWzLV0id/Mn1dQeMxbZ PXmHzuZALPIVDizBRueDxjcV4cf3W+n0IqZyJzOCU1rCpD3+4nZgtohmFy5xSm+wKOLm sovjPisWrAasdwvOFDSAAvQwgVgbrDUZQVgm/fcpGnyaYSEui8fEyPoSlUq9mZJCrHzG Cy1RuigRwUrNvY0/NcYF4Azy6J5ezCersiPFvLsCe4fJjZu60005WsdhM2XL5WW72PO3 BAVw== X-Gm-Message-State: AOAM5335ykOxRPk9zuIUaPKkOeNQEyblkQCO8T7v9cUd9yieAbJN1df3 OH+vqZnN2K2+2hKFW4MSajR1GMbf0bQ= X-Google-Smtp-Source: ABdhPJw542yZIMZfAuFOMn/nPt3pHOZjVUhP8Q+mhuQbFjKj4N+mT65E3Nzh2pBr5yTG8xD7a9yNow== X-Received: by 2002:a17:90a:d245:: with SMTP id o5mr15625471pjw.57.1628697765701; Wed, 11 Aug 2021 09:02:45 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:45 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Athira Jajeev Subject: [PATCH v2 
23/60] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Date: Thu, 12 Aug 2021 02:00:57 +1000 Message-Id: <20210811160134.904987-24-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The pmcregs_in_use field in the guest VPA can not be trusted to reflect what the guest is doing with PMU SPRs, so the PMU must always be managed (stopped) when exiting the guest, and SPR values set when entering the guest to ensure it can't cause a covert channel or otherwise cause other guests or the host to misbehave. So prevent guest access to the PMU with HFSCR[PM] if pmcregs_in_use is clear, and avoid the PMU SPR access on every partition switch. Guests that set pmcregs_in_use incorrectly or when first setting it and using the PMU will take a hypervisor facility unavailable interrupt that will bring in the PMU SPRs. Cc: Athira Jajeev Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 131 ++++++++++++++++++++++++++--------- 1 file changed, 98 insertions(+), 33 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0ceb0bd28e13..9ff7e3ea70f9 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1421,6 +1421,23 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +/* + * If the lppaca had pmcregs_in_use clear when we exited the guest, then + * HFSCR_PM is cleared for next entry. If the guest then tries to access + * the PMU SPRs, we get this facility unavailable interrupt. Putting HFSCR_PM + * back in the guest HFSCR will cause the next entry to load the PMU SPRs and + * allow the guest access to continue. + */ +static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_PM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_PM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1705,16 +1722,22 @@ XXX benchmark guest exits * to emulate. * Otherwise, we just generate a program interrupt to the guest. */ - case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: { + u64 cause = vcpu->arch.hfscr >> 56; + r = EMULATE_FAIL; - if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) && - cpu_has_feature(CPU_FTR_ARCH_300)) - r = kvmppc_emulate_doorbell_instr(vcpu); + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + if (cause == FSCR_MSGP_LG) + r = kvmppc_emulate_doorbell_instr(vcpu); + if (cause == FSCR_PM_LG) + r = kvmppc_pmu_unavailable(vcpu); + } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); r = RESUME_GUEST; } break; + } case BOOK3S_INTERRUPT_HV_RM_HARD: r = RESUME_PASSTHROUGH; @@ -2753,6 +2776,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; + /* + * PM is demand-faulted so start with it clear. 
+ */ + vcpu->arch.hfscr &= ~HFSCR_PM; + kvmppc_mmu_book3s_hv_init(vcpu); vcpu->arch.state = KVMPPC_VCPU_NOTREADY; @@ -3823,6 +3851,14 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ if (ppc_get_pmu_inuse()) { /* * It might be better to put PMU handling (at least for the @@ -3857,41 +3893,47 @@ static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, } #ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ if (kvmhv_on_pseries()) { barrier(); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; - } else { - get_lppaca()->pmcregs_in_use = 1; - } + get_lppaca()->pmcregs_in_use = load_pmu; barrier(); } #endif - /* load guest */ - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); + /* + * Load guest. If the VPA said the PMCs are not in use but the guest + * tried to access them anyway, HFSCR[PM] will be set by the HFAC + * fault so we can make forward progress. + */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } } static void switch_pmu_to_host(struct kvm_vcpu *vcpu, @@ -3935,9 +3977,32 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, vcpu->arch.sier[1] = mfspr(SPRN_SIER2); vcpu->arch.sier[2] = mfspr(SPRN_SIER3); } - } else { + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - } + + /* + * Demand-fault PMU register access in the guest. 
+ * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit to avoid side-channels when. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time. + */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ #ifdef CONFIG_PPC_PSERIES if (kvmhv_on_pseries()) { From patchwork Wed Aug 11 16:00:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515898 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Oga2Xv/c; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF2z2KZFz9shx for ; Thu, 12 Aug 2021 02:02:51 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233622AbhHKQDN (ORCPT ); Wed, 11 Aug 2021 12:03:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51948 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDM (ORCPT ); Wed, 11 Aug 2021 12:03:12 -0400 Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com [IPv6:2607:f8b0:4864:20::629]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F1951C061765 for ; Wed, 11 Aug 2021 09:02:48 -0700 (PDT) Received: by mail-pl1-x629.google.com with SMTP id b7so3282505plh.7 for ; Wed, 11 Aug 2021 09:02:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QqPmgr/sGr5j1zDNMnH53gsmq1i0Fa0vOmXvGCg74L4=; b=Oga2Xv/cYKmKt+7qubqKM3fMNcJU24OXsL9KjiqLqi0tF9gDQJqkR5UCarU6Ys2Asl Yd+RkuDpGdHGrj4WtMGCQ1UjNAeb9KvlFQYybusL0jtuDX1pY+/0Kfp9FOpFV8OjuEjH g4njjLDRuvVhPEHu+K/8DqgAze4cTvbZKJ2yYutvIce3+hKlJpCa7SugTq0fGHqUtRHf 9qU8ypjya/shZM3RMKgwDiLzCKgl98gH/CNd9Qj1O1xYW29Gg06AvCoS9Bghksz7bfHI TWA9e9Jn0UOpasAlC530YmDUgNDxxYMDtgThop6GLcy1TUzLoso1yKZD23MijgLiQcwu SjJQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QqPmgr/sGr5j1zDNMnH53gsmq1i0Fa0vOmXvGCg74L4=; b=L/pX0A4iNoEbf5e/aQPiOWDOrGPiIndDzxEO42HT1V4kYtk5Jw4w812YRUjrqTjVAt zAbzZK5/vs1ZEimLfqZEfcsTzXNlnNbdwiZe/Ez2qGwSBTyeDlZGCvBzei9BN9ngjspU qRx8SROs+E1e2MRgmnMjllYYU7c3O8+78jLjvnu1R4OzLaUw10SIMQcI1bdcOv0aGDQv rSDnJNbpLuDc1jN6NmLmOBuOwqY3d1SV1XwqvbpNVe0uy1a4guwq9xDPyyH93RUx4Oky n7Fw+9bTuJmZjVqpfj0R2IWYBWX3BGvRtdwFmJL6k7sNAlWkkphuehzMPF3IAEy8Z8X9 98rA== X-Gm-Message-State: AOAM533h0eh9xECI3PJYpkEQDDAZSRrMJoGRMr7pQWUhTDwt6LWy8zw2 n6tU9mY3u5yYL7DETAvM90vwmITe8FA= X-Google-Smtp-Source: ABdhPJyEilrPW7asJOvFeaFex2/BT3EDl9ATGVLj6BS67i90hGQjqkAIxNbcmDCowWQj7LwcesqaMg== X-Received: by 2002:a17:90a:db44:: with SMTP id 
u4mr11278276pjx.180.1628697768491; Wed, 11 Aug 2021 09:02:48 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:48 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 24/60] KVM: PPC: Book3S HV P9: Factor out yield_count increment Date: Thu, 12 Aug 2021 02:00:58 +1000 Message-Id: <20210811160134.904987-25-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Factor duplicated code into a helper function. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 9ff7e3ea70f9..fa12a3efeeb2 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4117,6 +4117,16 @@ static inline bool hcall_is_xics(unsigned long req) req == H_IPOLL || req == H_XIRR || req == H_XIRR_X; } +static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) +{ + struct lppaca *lp = vcpu->arch.vpa.pinned_addr; + if (lp) { + u32 yield_count = be32_to_cpu(lp->yield_count) + 1; + lp->yield_count = cpu_to_be32(yield_count); + vcpu->arch.vpa.dirty = 1; + } +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -4145,12 +4155,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 1; vc->in_guest = 1; - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) @@ -4278,12 +4283,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - if (vcpu->arch.vpa.pinned_addr) { - struct lppaca *lp = vcpu->arch.vpa.pinned_addr; - u32 yield_count = be32_to_cpu(lp->yield_count) + 1; - lp->yield_count = cpu_to_be32(yield_count); - vcpu->arch.vpa.dirty = 1; - } + vcpu_vpa_increment_dispatch(vcpu); switch_pmu_to_host(vcpu, &host_os_sprs); From patchwork Wed Aug 11 16:00:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515899 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=tEdIQaqs; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF301PJXz9t1s for ; Thu, 12 Aug 2021 02:02:52 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S233628AbhHKQDP (ORCPT ); Wed, 11 Aug 2021 12:03:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDP (ORCPT ); Wed, 11 Aug 2021 12:03:15 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A72EC061765 for ; Wed, 11 Aug 2021 09:02:51 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id 28-20020a17090a031cb0290178dcd8a4d1so5350257pje.0 for ; Wed, 11 Aug 2021 09:02:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Q0d0iaopUkL57P/YF2EiZzRzVhFFGYIegIp/ZQI3dO8=; b=tEdIQaqs2HRZCjge8fh1ibXHwlZC+5KChx8ndqotUyE69jWlERwDq7ecjhtfEBxaBq K9O8G/N6GpS9Yc6P16JFFjYuO0HTfWNOe1PWBe3+7Wt1nfPFR+K5M80NCIAqLtz979kP /i2b+qbrFPNc0NgluKbYwJG5PHIAVKQxwOMgPepDqWo/STuhyDM0vLLizY6OeqTClPMU ugyDxaqDn6bYXm6dmnwEeGyNRKePgbpwtSSQ0Tyb07jyo0k7OWn/BDhKBjSks58uB/8n G6ToDuGVYYcnT2gswCIC9QbM/wYQYvg6b3ReZxeJ2xkpjY4E3zLaoqH8tt5iAdXL4liF GlOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Q0d0iaopUkL57P/YF2EiZzRzVhFFGYIegIp/ZQI3dO8=; b=cOaXIGNpakDHbiDkeqEJjC/71ry/XLZCUBtWzgQX33KDhEZwSV7EHpBox26g4lbSjH tXwEHgrwAtRZvsdghxP5SjrdbQVKXwne5W9PSyiccn3UuvQ0aaCDF73A+OsnITdx5GCh a6szu6m1oNIV3buJkCHeMHIzPam/B+u9N/myGIBxFe6i27mApQ+x8svU2IYXSGux26kB 7/fN6qIa0Meor8dBgqWpI3D2Z7zBQPjpDMXmdc3ENMj37HCnBwCjn481Lfv4SFKRXD6r lXmiLOj9pPg50F4LbXP46Yo8d2qD6LbmjAYkYHipx29spMAf7CeytAOJk1QN0IEJXftX wqBg== X-Gm-Message-State: AOAM530yLAXNjVEnk5qaK0S4HQspURPuP3jn0PXMYs7TzBTdnyhJhPMO WQ7yKTz5KbHmMD/OkXCwiTaQ0fbu5KY= X-Google-Smtp-Source: ABdhPJzd+cZm7lc8lSMZdvmG9Mg0irEwwXBZrOzKs7aJnrs7cThmGx00AGnzug/cEOiJRet08RQRpQ== X-Received: by 2002:a17:90a:a091:: with SMTP id r17mr4239723pjp.56.1628697770805; Wed, 11 Aug 2021 09:02:50 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:50 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 25/60] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Date: Thu, 12 Aug 2021 02:00:59 +1000 Message-Id: <20210811160134.904987-26-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Processors that support KVM HV do not require read-modify-write of the CTRL SPR to set/clear their thread's runlatch. Just write 1 or 0 to it. 
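Illustratively, the change is the following pattern (a sketch that mirrors the C hunk in the diff below rather than adding anything new):

    /* before: read-modify-write to preserve the other CTRL bits */
    if (!(vcpu->arch.ctrl & 1))
        mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1);

    /* after: a direct write of 0 (or 1) is sufficient on these CPUs,
     * which drops the mfspr from the path */
    if (!(vcpu->arch.ctrl & 1))
        mtspr(SPRN_CTRLT, 0);

The assembly paths in book3s_hv_rmhandlers.S get the equivalent treatment, replacing the mfspr/clrrdi (or ori) sequences with a plain li before the mtspr.
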
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 2 +- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++++++--------- 2 files changed, 7 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index fa12a3efeeb2..f9942eeaefd3 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4060,7 +4060,7 @@ static void load_spr_state(struct kvm_vcpu *vcpu) */ if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1); + mtspr(SPRN_CTRLT, 0); } static void store_spr_state(struct kvm_vcpu *vcpu) diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 551ce223b40c..05be8648937d 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -775,12 +775,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) mtspr SPRN_AMR,r5 mtspr SPRN_UAMOR,r6 - /* Restore state of CTRL run bit; assume 1 on entry */ + /* Restore state of CTRL run bit; the host currently has it set to 1 */ lwz r5,VCPU_CTRL(r4) andi. r5,r5,1 bne 4f - mfspr r6,SPRN_CTRLF - clrrdi r6,r6,1 + li r6,0 mtspr SPRN_CTRLT,r6 4: /* Secondary threads wait for primary to have done partition switch */ @@ -1203,12 +1202,12 @@ guest_bypass: stw r0, VCPU_CPU(r9) stw r0, VCPU_THREAD_CPU(r9) - /* Save guest CTRL register, set runlatch to 1 */ + /* Save guest CTRL register, set runlatch to 1 if it was clear */ mfspr r6,SPRN_CTRLF stw r6,VCPU_CTRL(r9) andi. r0,r6,1 bne 4f - ori r6,r6,1 + li r6,1 mtspr SPRN_CTRLT,r6 4: /* @@ -2178,8 +2177,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) * Also clear the runlatch bit before napping. */ kvm_do_nap: - mfspr r0, SPRN_CTRLF - clrrdi r0, r0, 1 + li r0,0 mtspr SPRN_CTRLT, r0 li r0,1 @@ -2198,8 +2196,7 @@ kvm_nap_sequence: /* desired LPCR value in r5 */ bl isa206_idle_insn_mayloss - mfspr r0, SPRN_CTRLF - ori r0, r0, 1 + li r0,1 mtspr SPRN_CTRLT, r0 mtspr SPRN_SRR1, r3 From patchwork Wed Aug 11 16:01:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515900 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Pd2qjHpS; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3440v8z9shx for ; Thu, 12 Aug 2021 02:02:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233633AbhHKQDS (ORCPT ); Wed, 11 Aug 2021 12:03:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51968 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDS (ORCPT ); Wed, 11 Aug 2021 12:03:18 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77582C061765 for ; Wed, 11 Aug 2021 09:02:54 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id hv22-20020a17090ae416b0290178c579e424so5721018pjb.3 for ; Wed, 11 Aug 2021 09:02:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HcmzAMOblnFfBcnkC6E3lPv/DD88a8O4ehbqoWdwBuU=; b=Pd2qjHpS1RhttRymMp3KomvQpWIMnZCD8zlv5xyohfqa1X9sxUgMjoSHZFUuNRHmqR DYFRkVtXj84gCL+eBV17X5oanb6ZRkK16ElOGMcpaU9unbLSleHC2JWSDPuQkhAxLume xfceMY0v73942AMbAug3Bq9th6xSs+/6qra+F+NtHpf4W4+f28DOgW3vkQQQjSMpdNr0 ZMYFRWqhIG6qz6B9ngqXihLJdCUnEifog/C4zq0aFxEjkoqnQatTr0TIM5hC10935xWU gRkPzOk8dqF3aMRbC13LfpHTN+4dsgkuTT2xDPH5KqlMeNWZOF25lOGK15Y3yhexr0WD TVbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HcmzAMOblnFfBcnkC6E3lPv/DD88a8O4ehbqoWdwBuU=; b=mdMxT4Oal5ygguKTFPbRC8N6GaVQ22fwfQcFUrRsTByq9ItCSDWe0/3pS58dU2cOZ4 VhieasKrs+9pXYgdeNtD5hwEFIVJDjktnGIF5hDZmVUv6ZsHc9nvBKUDomqnBJXXtjHd qCctkA8bbZ36qdmAC77r6y9AOY4C4NljvL0H25Xki45lmupPWm+Tfb8HJ82dXzaFgBaD SRbcevicGbBF4MtsJQrl+s4nq0i9JUj04phl91xuOM3R2d2ppeOgsf42Gzej0owgJGZs wIbCswFPjHiKW07vJ20fXPWCGeLIoqzu7MRlNWuP6rszBQlAeAxZFBiXpiRP5kcrpAOl RrpQ== X-Gm-Message-State: AOAM533Mi5XLBnAcwSJ3CkQ6OCPyShXNumW3DNrvpYBGMW2yZYOdOqy9 y7kGJELITa5QzyvKDeNR8V25JcSHUno= X-Google-Smtp-Source: ABdhPJyPrzD2/ExAIrif/iw8efaaUMRHK8sWQ01m3JyACJBG7euhxP5u3TlZybgCnr2Sq0hBG1XMuw== X-Received: by 2002:a17:902:7611:b029:12b:e55e:6ee8 with SMTP id k17-20020a1709027611b029012be55e6ee8mr4874557pll.4.1628697773972; Wed, 11 Aug 2021 09:02:53 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:53 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 26/60] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Date: Thu, 12 Aug 2021 02:01:00 +1000 Message-Id: <20210811160134.904987-27-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the SPR update into its relevant helper function. This will help with SPR scheduling improvements in later changes. 
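For clarity, this is the shape the helper takes after the move (abridged sketch reconstructed from the diff below; elided lines are unchanged):

    static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
                                        struct p9_host_os_sprs *host_os_sprs)
    {
        mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);

        mtspr(SPRN_PSPB, 0);
        mtspr(SPRN_UAMOR, 0);
        /* ... remaining host SPR restores are unchanged ... */
    }

The call site in kvmhv_p9_guest_entry() correspondingly loses its standalone SPRG write.
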
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index f9942eeaefd3..ef8c41396883 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4093,6 +4093,8 @@ static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + mtspr(SPRN_PSPB, 0); mtspr(SPRN_UAMOR, 0); @@ -4292,8 +4294,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, timer_rearm_host_dec(tb); - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - kvmppc_subcore_exit_guest(); return trap; From patchwork Wed Aug 11 16:01:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515901 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Qds6GQ4V; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF354Wjqz9t1r for ; Thu, 12 Aug 2021 02:02:57 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233635AbhHKQDU (ORCPT ); Wed, 11 Aug 2021 12:03:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51978 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233543AbhHKQDU (ORCPT ); Wed, 11 Aug 2021 12:03:20 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E21D7C061765 for ; Wed, 11 Aug 2021 09:02:56 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id mq2-20020a17090b3802b0290178911d298bso5762337pjb.1 for ; Wed, 11 Aug 2021 09:02:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vOYytrCJ6etB/rtWXurkGvbOwyzFTjkoqzC6hk3RudU=; b=Qds6GQ4VZK6qKTfDJpbhNMFLZb0ZycAu2nPz93zZGG6tsPk3A1snnLhNOOI44FyMgu wVrHMKTBA1VPii8c9QHJ+QOSavO1F1NnuRkTx6RwBQMT0zSVh+NSmh1AqgHO/MAAK+06 v596Kk4LhxY2tgdG1+I6cf23we04T9C51ziJKmkN0BzWnO7LGMwE7hnQf9sH6EXkzUQU 8Yq4fwgYAMBsvDNbT13X7dtT0HsJl3hgU2j4QDsCLwx5KTfFdgipOQcqIFOZwXbc+AWX RH8TInwJcmATMEhJsY8DvM2fXrHN8zDJsimbthnpwUIYZPxgY2uhq4RIfsynU5uaWmDx 7Dpg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vOYytrCJ6etB/rtWXurkGvbOwyzFTjkoqzC6hk3RudU=; b=sl9VaYjA9TpdFKArTLi37ckp7Jh4Bi+4ByFOwtsvivSA3DKpZb8tX5KxHHYrqaA6W+ QZoyJDY0So/p+dR6yzuCkvzk9R6uYKckuLRnfrYW7bdj/kcbrno/hdoej0H7YBNmojnS jOJuMLvaAGxwadN8mbr6MXptx86mwxgOUYKe1RpNqijY65DBVgAP+Ba1otgLlAFpLo9c GY+wp8C0poUabZO9osEgjz9uZU3vu4wfRdZxMBx6CqGcQ+unoAv1D5FVdq8uGUBwLBdz /SfxRgWlGWDTKoMW7xbdrnWw7Zy3ahIdBHq+Y/H/MvQfsJhm+3wyI7+nBIrAQjpWW9Bn 
wzcw== X-Gm-Message-State: AOAM530Q3FOBUxL9/CL+im0eKpQELERa44fj3vc2wQtkvsDoa+sNZWI0 kFia3I+zbZzj/GkVZsBdDjVR5ahLsHQ= X-Google-Smtp-Source: ABdhPJwGQLjOLpxyb4u9wI3p17orX+nP2Zla2bNnncDdN3sWpcB0dKpJx9EfK8D9zOPyxSaDuvguXw== X-Received: by 2002:a63:4a41:: with SMTP id j1mr320271pgl.227.1628697776358; Wed, 11 Aug 2021 09:02:56 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:56 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 27/60] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Date: Thu, 12 Aug 2021 02:01:01 +1000 Message-Id: <20210811160134.904987-28-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This reduces the number of mtmsrd required to enable facility bits when saving/restoring registers, by having the KVM code set all bits up front rather than using individual facility functions that set their particular MSR bits. Signed-off-by: Nicholas Piggin Reported-by: kernel test robot Reported-by: kernel test robot --- arch/powerpc/kernel/process.c | 26 ++++++++++++ arch/powerpc/kvm/book3s_hv.c | 61 ++++++++++++++++++--------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 1 + 3 files changed, 69 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 185beb290580..4d4e99b1756e 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -593,6 +593,32 @@ static void save_all(struct task_struct *tsk) msr_check_and_clear(msr_all_available); } +void save_user_regs_kvm(void) +{ + unsigned long usermsr; + + if (!current->thread.regs) + return; + + usermsr = current->thread.regs->msr; + + if (usermsr & MSR_FP) + save_fpu(current); + + if (usermsr & MSR_VEC) + save_altivec(current); + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM + if (usermsr & MSR_TM) { + current->thread.tm_tfhar = mfspr(SPRN_TFHAR); + current->thread.tm_tfiar = mfspr(SPRN_TFIAR); + current->thread.tm_texasr = mfspr(SPRN_TEXASR); + current->thread.regs->msr &= ~MSR_TM; + } +#endif +} +EXPORT_SYMBOL_GPL(save_user_regs_kvm); + void flush_all_to_thread(struct task_struct *tsk) { if (tsk->thread.regs) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index ef8c41396883..77a4138732af 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4139,6 +4139,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, struct p9_host_os_sprs host_os_sprs; s64 dec; u64 tb, next_timer; + unsigned long msr; int trap; WARN_ON_ONCE(vcpu->arch.ceded); @@ -4150,8 +4151,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, if (next_timer < time_limit) time_limit = next_timer; + vcpu->arch.ceded = 0; + save_p9_host_os_sprs(&host_os_sprs); + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + kvmppc_subcore_enter_guest(); 
vc->entry_exit_map = 1; @@ -4160,12 +4176,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + msr = mfmsr(); /* TM restore can update msr */ + } switch_pmu_to_guest(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); @@ -4274,7 +4291,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, restore_p9_host_os_sprs(vcpu, &host_os_sprs); - msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4803,6 +4819,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto done; } +void save_user_regs_kvm(void); + static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) { struct kvm_run *run = vcpu->run; @@ -4812,19 +4830,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) unsigned long user_tar = 0; unsigned int user_vrsave; struct kvm *kvm; + unsigned long msr; if (!vcpu->arch.sane) { run->exit_reason = KVM_EXIT_INTERNAL_ERROR; return -EINVAL; } + /* No need to go into the guest when all we'll do is come back out */ + if (signal_pending(current)) { + run->exit_reason = KVM_EXIT_INTR; + return -EINTR; + } + +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM /* * Don't allow entry with a suspended transaction, because * the guest entry/exit code will lose it. - * If the guest has TM enabled, save away their TM-related SPRs - * (they will get restored by the TM unavailable interrupt). */ -#ifdef CONFIG_PPC_TRANSACTIONAL_MEM if (cpu_has_feature(CPU_FTR_TM) && current->thread.regs && (current->thread.regs->msr & MSR_TM)) { if (MSR_TM_ACTIVE(current->thread.regs->msr)) { @@ -4832,12 +4855,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) run->fail_entry.hardware_entry_failure_reason = 0; return -EINVAL; } - /* Enable TM so we can read the TM SPRs */ - mtmsr(mfmsr() | MSR_TM); - current->thread.tm_tfhar = mfspr(SPRN_TFHAR); - current->thread.tm_tfiar = mfspr(SPRN_TFIAR); - current->thread.tm_texasr = mfspr(SPRN_TEXASR); - current->thread.regs->msr &= ~MSR_TM; } #endif @@ -4852,18 +4869,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_core_prepare_to_enter(vcpu); - /* No need to go into the guest when all we'll do is come back out */ - if (signal_pending(current)) { - run->exit_reason = KVM_EXIT_INTR; - return -EINTR; - } - kvm = vcpu->kvm; atomic_inc(&kvm->arch.vcpus_running); /* Order vcpus_running vs. 
mmu_ready, see kvmppc_alloc_reset_hpt */ smp_mb(); - flush_all_to_thread(current); + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + save_user_regs_kvm(); /* Save userspace EBB and other register values */ if (cpu_has_feature(CPU_FTR_ARCH_207S)) { diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index a7f63082b4e3..fb9cb34445ea 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -224,6 +224,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } + /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); host_hfscr = mfspr(SPRN_HFSCR); From patchwork Wed Aug 11 16:01:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515902 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=A9uYqJaR; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF382SFBz9t1r for ; Thu, 12 Aug 2021 02:03:00 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233543AbhHKQDX (ORCPT ); Wed, 11 Aug 2021 12:03:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51990 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbhHKQDX (ORCPT ); Wed, 11 Aug 2021 12:03:23 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 948A2C061765 for ; Wed, 11 Aug 2021 09:02:59 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id s22-20020a17090a1c16b0290177caeba067so10431304pjs.0 for ; Wed, 11 Aug 2021 09:02:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=B1FzzEpWRWY6cyBzpZZ3791Kwi4/Y0fUuP+n4T9dwEg=; b=A9uYqJaRiCkqI8Cb3FkasbSgEl3gK5mwm5eqEJmExODe/P//TGj+1FTMJseEzS0fqc zAWshLkHMARXKwONch0fIYzJfAMDa+TjfNHDW12dPlExNUInelsRUrdWIgOdHBdRHJc1 gBI8wj2zM1QYPwnIAVjrtDtHSsU2Q5ygmHRQtOP/0M1IOCNo3lt29gLdGH12kLvdJP03 fnFc8IhlHHqo/oTCdONAMPdJFFlDHWxRIOAa6bm4veqFCYdNtER7TfyWEy3iTFIKaduX FqS0PGqjsWEFbypa7+mNhSCdK/ccLLJs+mIUS3qhcUYYbZlwTS+ysaeNV6HuM1ELOZGq ZWuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=B1FzzEpWRWY6cyBzpZZ3791Kwi4/Y0fUuP+n4T9dwEg=; b=IpTUWoS99r4xb/e75DFhy3fQ9LMc7VJDpdGxIUC+ls+axv07ziEAGshpFROf5Sg3rP DbdwY8G1MRLxj6OZv5n5mJ/66zzZFEkmmEiOp4PYZ3lnYM7HTHn9IBWh6swBlw8twAXg 
eVwo0Z2wEDr6VFjOd0NeKVxMYvQgH8PFbjzQp9mdPsYl2Vw/9CF+bAmzrzhz82s/PNIp uNxGWzFfl2+LGHxUplebETWYsuBuHy7Ob/nGsceF7cs8sTUyt79fUaiCVdo8dkTycW9a z8NggsWy63x0ZxPB5qSZd7CXtVnPejm/NVcS46W0AcO/68svpsr+XZOWncp0YF2RETfG TRYg== X-Gm-Message-State: AOAM530o4Um6RMZK7AUHBE2cT/2gjNvnGo2KjawPptszMHl7HCyQ9MaP lbl7Ae9vRqN0HuyZ78BNiz70AJ2Z7ck= X-Google-Smtp-Source: ABdhPJyN2EA7DwVy9t1bAH9Q7tLIwAsGs34tgc8S/b0Te5GkFuudS97nzvmnpFCRgNmjiSPTv3hh1A== X-Received: by 2002:a05:6a00:1c6d:b029:3cd:da92:7296 with SMTP id s45-20020a056a001c6db02903cdda927296mr9159427pfw.33.1628697778762; Wed, 11 Aug 2021 09:02:58 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:02:58 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 28/60] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Date: Thu, 12 Aug 2021 02:01:02 +1000 Message-Id: <20210811160134.904987-29-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Moving the mtmsrd after the host SPRs are saved and before the guest SPRs start to be loaded can prevent an SPR scoreboard stall (because the mtmsrd is L=1 type which does not cause context synchronisation. This is also now more convenient to combined with the mtmsrd L=0 instruction to enable facilities just below, but that is not done yet. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 77a4138732af..376687286fef 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4155,6 +4155,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, save_p9_host_os_sprs(&host_os_sprs); + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. 
+ */ + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + /* MSR bits may have been cleared by context switch */ msr = 0; if (IS_ENABLED(CONFIG_PPC_FPU)) @@ -4654,6 +4666,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvmppc_vcore *vc; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; + unsigned long flags; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4697,11 +4710,11 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); - local_irq_disable(); - hard_irq_disable(); + /* flags save not required, but irq_pmu has no disable/enable API */ + powerpc_local_irq_pmu_save(flags); if (signal_pending(current)) goto sigpend; - if (lazy_irq_pending() || need_resched() || !kvm->arch.mmu_ready) + if (need_resched() || !kvm->arch.mmu_ready) goto out; if (!nested) { @@ -4756,7 +4769,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); @@ -4814,7 +4827,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; out: - local_irq_enable(); + powerpc_local_irq_pmu_restore(flags); preempt_enable(); goto done; } From patchwork Wed Aug 11 16:01:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515903 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=hSIw1Uer; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3B595Nz9t23 for ; Thu, 12 Aug 2021 02:03:02 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229634AbhHKQDZ (ORCPT ); Wed, 11 Aug 2021 12:03:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52002 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233639AbhHKQDZ (ORCPT ); Wed, 11 Aug 2021 12:03:25 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AEBFC061765 for ; Wed, 11 Aug 2021 09:03:02 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id oa17so4225327pjb.1 for ; Wed, 11 Aug 2021 09:03:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YG43b2W+WoOa/mX7jhMUWZGR+grphI4ggOCmMv7FBAk=; b=hSIw1UerUdjTWk5TlQspyucFFJ7sfWEWHYe2rpzn7bFT6wos1pT94pr7kBjamtyrrr XfjceckV3iOU71r5CJ4muwZbxfDR9hPDvtYzmd4PFqHGiojsQkq0t5KlMv0aNwyMj6Ko 7iMHvVuIRTywrqtHTuTsmZLa7nqcEvRfhpGBejnVKVoMKyScGBuSl1TQ3aeFl6PVScdu YcaGCVvztOij/RKZ0Jsd0tZuG/Jz4Sec8Ow/1SkdYRMpOXJA7jy0K90Wj47fEF3lKMPb wBuO0KmKN94M9/z/pX58jgLrD1ffc+DCVkx5jTtTYkzCaNkkhGnoMYpMkXYPNrz5Nm1D eJSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YG43b2W+WoOa/mX7jhMUWZGR+grphI4ggOCmMv7FBAk=; b=jpSjDdGUQ0QRFvJzD82nyx4813C61IrQ9G83ueiIBysio4EwxN3ixze5XyKPiCzf5v mgcx+WmNfLN5wmT196zMIFRYVWHseqk7VkZSjwy7gzjJ3c55DHL8FWFNYP90j0ueQjBV FG6zoFhIVtLXIeTHVAQnCl9QwE38T6WLy2l8uHkxacnCSlhYgswUmOP5zi9oRgXQ5srW WcvMIzruiW7LapwSpi9Xc5yE5a+CHQFHiBSWYgIP0Y1nJYzkdv1ZyHzpZwHOIwTiXZl/ OoD3OIapnQO9fAh8TfdhHVxTH+v1/4qtyU2rlythwfcW/+4F1A5Hx0pS0D1ESyZz1/rQ R4qQ== X-Gm-Message-State: AOAM5322SDdCFw5xxZBh86VKdI0MNlBLWRWoWDUKDXd1+ToO1aHWxUOJ WWxdpF9D/9E34dRnJtkX/mF1Qosxb4Q= X-Google-Smtp-Source: ABdhPJzJYNPIqFkRKx0725GrTNxMOhNWPKDF8yupEL2kKHem8uRusSiWI92yY6h2mOTEeZMGuZr+Pg== X-Received: by 2002:a65:448a:: with SMTP id l10mr718362pgq.313.1628697781369; Wed, 11 Aug 2021 09:03:01 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.02.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:01 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 29/60] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Date: Thu, 12 Aug 2021 02:01:03 +1000 Message-Id: <20210811160134.904987-30-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Small cleanup makes it a bit easier to match up entry and exit operations. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 376687286fef..e25eccfe1501 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3073,6 +3073,13 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } +/* Old path does this in asm */ +static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) +{ + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; +} + static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -4296,8 +4303,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = (s32) dec; tb = mftb(); vcpu->arch.dec_expires = dec + tb; - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; store_spr_state(vcpu); @@ -4769,6 +4774,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, guest_exit_irqoff(); + kvmppc_stop_thread(vcpu); + powerpc_local_irq_pmu_restore(flags); cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); From patchwork Wed Aug 11 16:01:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515904 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=vYQtGput; dkim-atps=neutral Received: from 
vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3F48d2z9t0G for ; Thu, 12 Aug 2021 02:03:05 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233640AbhHKQD2 (ORCPT ); Wed, 11 Aug 2021 12:03:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233639AbhHKQD2 (ORCPT ); Wed, 11 Aug 2021 12:03:28 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 887D5C061765 for ; Wed, 11 Aug 2021 09:03:04 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id f3so3298831plg.3 for ; Wed, 11 Aug 2021 09:03:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Tqi0zbyzPJ0VguohP2pBEXBrA7DtvBIu6X3xbhJzi5U=; b=vYQtGputncVGoyWp5VcnypAufbYXhjOZh6luw4kRw7mQtcHeB5Pn1HoRuy84UCXKRM jUwwd0KaSTmVRKq6W9p383y2TBfDS1VQA4NAioJsFLNsOgXoe7J2t9n1qXJhFw0E1F/o DCvYgatqznb2DBalJIwcOB/7Q3aybO8CzktVoyAcQ0udaFMT15P7tLDuQeupQoBDV7L+ kD01SVXOygGc8NuFMDU6bQwS0JDvgnSem2nNCJVoeVeHLCLCt+MqkraMT6h7eX8bL1N8 jXkkKYFjjfxn8pBG0wWipj+I5v+Z0PvyU2SkvqVsrQDsuD31JGvWV/jDa+yYayv6Il/7 QQQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Tqi0zbyzPJ0VguohP2pBEXBrA7DtvBIu6X3xbhJzi5U=; b=CasZCcYAnVoS5dVN4UeS7EXZLFKyHIaoY71t/laDX+LnmBZNrYUoKnQIY5MPjg+bSH JCKfV80CVNePssVogpoUDABH7qHNwxzv2XVx7gjxs1bxgpn5Iri8H/RxvJs2/rBrCb5j btUENYaT521u+R1su7ZtR7l6mwPfByVOeIaXeprMQ9+UDRe8cePBL2NZihuRrKB1HIbQ t5dM42/RUQtiSeuXiI4Ohrn0M0vaz7Z4xfM2rnPV1yrti6SeX9/CRZrasx7HoOnYJxSf Qe8AiMsnEO8YzMCPBL13AiDCRiSIRt+y7wERJtbL1Ncav7uXFMJ8K+Ew4NbcwKW2f8gE JFog== X-Gm-Message-State: AOAM532XJKTNy1yEMidWTGGV3Wj4UMbQ8Ro3uEJGdS6qaBqxjZ2Tg8Z3 HKEQCdjo4boi/hhu4tLvZCGFtV7+Db8= X-Google-Smtp-Source: ABdhPJzgAHWg7lvxv0R57aySk5+nId2yCBSPQK7MLLJtnGvVA34I0/EeRyf3+AZxL+bxu/799XeoMA== X-Received: by 2002:a65:438c:: with SMTP id m12mr1017106pgp.163.1628697783847; Wed, 11 Aug 2021 09:03:03 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:03 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 30/60] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Date: Thu, 12 Aug 2021 02:01:04 +1000 Message-Id: <20210811160134.904987-31-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Change dec_expires to be relative to the guest timebase, and allow it to be moved into low level P9 guest entry functions, to improve SPR access scheduling. 
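In short, with guest timebase = host timebase + vc->tb_offset, dec_expires now lives in guest-TB units. A compressed sketch using the names from the diff below:

    /* Host-side users convert back before comparing with the host TB: */
    static inline u64 kvmppc_dec_expires_host_tb(struct kvm_vcpu *vcpu)
    {
        return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset;
    }

    /* ...while the low level P9 entry path, where the guest offset has
     * already been applied to the timebase, can program DEC directly: */
    mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
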
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s.h | 6 +++ arch/powerpc/include/asm/kvm_host.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 58 +++++++++++++------------ arch/powerpc/kvm/book3s_hv_nested.c | 3 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 10 ++++- arch/powerpc/kvm/book3s_hv_rmhandlers.S | 13 ------ 6 files changed, 49 insertions(+), 43 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h index caaa0f592d8e..15b573671f99 100644 --- a/arch/powerpc/include/asm/kvm_book3s.h +++ b/arch/powerpc/include/asm/kvm_book3s.h @@ -406,6 +406,12 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu) return vcpu->arch.fault_dar; } +/* Expiry time of vcpu DEC relative to host TB */ +static inline u64 kvmppc_dec_expires_host_tb(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset; +} + static inline bool is_kvmppc_resume_guest(int r) { return (r == RESUME_GUEST || r == RESUME_GUEST_NV); diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index a779f7849cfb..89216e247e8b 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -742,7 +742,7 @@ struct kvm_vcpu_arch { struct hrtimer dec_timer; u64 dec_jiffies; - u64 dec_expires; + u64 dec_expires; /* Relative to guest timebase. */ unsigned long pending_exceptions; u8 ceded; u8 prodded; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e25eccfe1501..967fa81063aa 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -2264,8 +2264,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, *val = get_reg_val(id, vcpu->arch.vcore->arch_compat); break; case KVM_REG_PPC_DEC_EXPIRY: - *val = get_reg_val(id, vcpu->arch.dec_expires + - vcpu->arch.vcore->tb_offset); + *val = get_reg_val(id, vcpu->arch.dec_expires); break; case KVM_REG_PPC_ONLINE: *val = get_reg_val(id, vcpu->arch.online); @@ -2517,8 +2516,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val)); break; case KVM_REG_PPC_DEC_EXPIRY: - vcpu->arch.dec_expires = set_reg_val(id, *val) - - vcpu->arch.vcore->tb_offset; + vcpu->arch.dec_expires = set_reg_val(id, *val); break; case KVM_REG_PPC_ONLINE: i = set_reg_val(id, *val); @@ -2905,13 +2903,13 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) unsigned long dec_nsec, now; now = get_tb(); - if (now > vcpu->arch.dec_expires) { + if (now > kvmppc_dec_expires_host_tb(vcpu)) { /* decrementer has already gone negative */ kvmppc_core_queue_dec(vcpu); kvmppc_core_prepare_to_enter(vcpu); return; } - dec_nsec = tb_to_ns(vcpu->arch.dec_expires - now); + dec_nsec = tb_to_ns(kvmppc_dec_expires_host_tb(vcpu) - now); hrtimer_start(&vcpu->arch.dec_timer, dec_nsec, HRTIMER_MODE_REL); vcpu->arch.timer_running = 1; } @@ -3383,7 +3381,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) */ spin_unlock(&vc->lock); /* cancel pending dec exception if dec is positive */ - if (now < vcpu->arch.dec_expires && + if (now < kvmppc_dec_expires_host_tb(vcpu) && kvmppc_core_pending_dec(vcpu)) kvmppc_core_dequeue_dec(vcpu); @@ -4210,20 +4208,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, load_spr_state(vcpu); - /* - * When setting DEC, we must always deal with irq_work_raise via NMI vs - * setting DEC. 
The problem occurs right as we switch into guest mode - * if a NMI hits and sets pending work and sets DEC, then that will - * apply to the guest and not bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] for - * example) and set HDEC to 1? That wouldn't solve the nested hv - * case which needs to abort the hcall or zero the time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); - if (kvmhv_on_pseries()) { /* * We need to save and restore the guest visible part of the @@ -4249,6 +4233,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, hvregs.vcpu_token = vcpu->vcpu_id; } hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), @@ -4260,6 +4261,12 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && kvmppc_get_gpr(vcpu, 3) == H_CEDE) { @@ -4267,6 +4274,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_set_gpr(vcpu, 3, 0); trap = 0; } + } else { kvmppc_xive_push_vcpu(vcpu); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); @@ -4298,12 +4306,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; - store_spr_state(vcpu); restore_p9_host_os_sprs(vcpu, &host_os_sprs); @@ -4788,7 +4790,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < vcpu->arch.dec_expires) || + ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index ed8a2c9f5629..7bed0b91245e 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -358,6 +358,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) /* convert TB values/offsets to host (L0) values */ hdec_exp = l2_hv.hdec_expiry - vc->tb_offset; vc->tb_offset += l2_hv.tb_offset; + vcpu->arch.dec_expires += l2_hv.tb_offset; /* set L1 state to L2 state */ vcpu->arch.nested = l2; @@ -399,6 +400,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) if (l2_regs.msr & 
MSR_TS_MASK) vcpu->arch.shregs.msr |= MSR_TS_S; vc->tb_offset = saved_l1_hv.tb_offset; + /* XXX: is this always the same delta as saved_l1_hv.tb_offset? */ + vcpu->arch.dec_expires -= l2_hv.tb_offset; restore_hv_regs(vcpu, &saved_l1_hv); vcpu->arch.purr += delta_purr; vcpu->arch.spurr += delta_spurr; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index fb9cb34445ea..814b0dfd590f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -188,7 +188,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; - s64 hdec; + s64 hdec, dec; u64 tb, purr, spurr; u64 *exsave; bool ri_set; @@ -317,6 +317,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: #endif @@ -461,6 +463,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + tb = mftb(); + vcpu->arch.dec_expires = dec + tb; + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 05be8648937d..f3489584bcfc 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -808,10 +808,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) * Set the decrementer to the guest decrementer. 
*/ ld r8,VCPU_DEC_EXPIRES(r4) - /* r8 is a host timebase value here, convert to guest TB */ - ld r5,HSTATE_KVM_VCORE(r13) - ld r6,VCORE_TB_OFFSET_APPL(r5) - add r8,r8,r6 mftb r7 subf r3,r7,r8 mtspr SPRN_DEC,r3 @@ -1186,9 +1182,6 @@ guest_bypass: mftb r6 extsw r5,r5 16: add r5,r5,r6 - /* r5 is a guest timebase value here, convert to host TB */ - ld r4,VCORE_TB_OFFSET_APPL(r3) - subf r5,r4,r5 std r5,VCPU_DEC_EXPIRES(r9) /* Increment exit count, poke other threads to exit */ @@ -2154,9 +2147,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) /* save expiry time of guest decrementer */ add r3, r3, r5 ld r4, HSTATE_KVM_VCPU(r13) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - subf r3, r6, r3 /* convert to host TB value */ std r3, VCPU_DEC_EXPIRES(r4) #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING @@ -2253,9 +2243,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM) /* Restore guest decrementer */ ld r3, VCPU_DEC_EXPIRES(r4) - ld r5, HSTATE_KVM_VCORE(r13) - ld r6, VCORE_TB_OFFSET_APPL(r5) - add r3, r3, r6 /* convert host TB to guest TB value */ mftb r7 subf r3, r7, r3 mtspr SPRN_DEC, r3 From patchwork Wed Aug 11 16:01:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515906 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=SLs5Sp2K; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3H6T4zz9sjJ for ; Thu, 12 Aug 2021 02:03:07 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233641AbhHKQDb (ORCPT ); Wed, 11 Aug 2021 12:03:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233639AbhHKQDa (ORCPT ); Wed, 11 Aug 2021 12:03:30 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EF6CAC061765 for ; Wed, 11 Aug 2021 09:03:06 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id fa24-20020a17090af0d8b0290178bfa69d97so5813166pjb.0 for ; Wed, 11 Aug 2021 09:03:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=SLs5Sp2KGYzaSjW8FHaSs2/pqJVjreblwmivnXGPwMI10Y4p95FmUWnDySm/Xf69mH /KoL/C6fVQ5bSIKms4NGM1g9dJVY0Rw+8c/LzS5JsceonBefq6sGfn8larYR+Hee6ugs u9TBS3pUCwTxG4Iw7GbN00blokO9E0VXBw2QMn6ExtGj0sY+vEUaUDixf17jzQojQIMK VTDy4kau7brqPDmmjlg+SIvewMkmL8VjQikT+T16tzphnhLUv3W5xx7GJVpbHIwYP2qR aphBom2Fwy3spcSlUA+EfTmY+N3+RyX6cB3qdteLZtvU4wM1E/3SGD5kRskB7uxp4Gra AcVA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=InnIngdCZKWqNjgAIB+VqHmGiblNFRvsyL7J2HN5wII=; b=kZjjplSwDnEB0o1ARZcBon+SjHLMf1jI5RVCakcVOvN2Nm9SzHG4ewRPvkIMpWPOFV 
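The block being moved is the TBU40 offset application; an annotated copy for reference (the comments are added here for explanation, the logic is exactly what the hunks below relocate):

    if (vc->tb_offset) {
        u64 new_tb = tb + vc->tb_offset;

        /* TBU40 writes only the upper 40 bits of the timebase; the
         * low 24 bits keep running from their current value. */
        mtspr(SPRN_TBU40, new_tb);
        tb = mftb();

        /* If the low 24 bits carried over since tb was sampled, the
         * upper bits just written are one carry (1 << 24 = 0x1000000)
         * short, so rewrite them with the carry included. */
        if ((tb & 0xffffff) < (new_tb & 0xffffff))
            mtspr(SPRN_TBU40, new_tb + 0x1000000);

        vc->tb_offset_applied = vc->tb_offset;
    }
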
qE1JCXyZJVOnwO6CTPHY4a9zhL6cMY+XqR6Lxy87Fz1AXi3NwTvRFTFJaYVDoqWRkx7W trjN/r/8tvCl3fvwfxHmYAL9seZcpbqHT2D+o4OjLX6smW2d4eBg2FK9EaPurkNglaIb 1jM2MmJgyCEBjtOhelC/IC8gQF68ZIe9zsT+gPIhjGoGd8W/QOsRgbbjGY6WB6BKk4tb YrQ8+6jGk6jXM2QEoSl4Gqi9smDIiCCFmgqDx+ISsEujefikDW2FA5Dixao2S4sUhkpD eDaQ== X-Gm-Message-State: AOAM5328w/q6+uJPCW8sMpPCHdN2FcKxUv9WgQvzxtsrVChleT/NiZCG ITx0k5FGdtEFRE+IF1chicDlQNMfswA= X-Google-Smtp-Source: ABdhPJz4HafK0CG2RxLwDc1U5+eNUabmA0f5ORXEhQfO5vZsMmODLog8hZ8tYln9uMIWG01fQP9joQ== X-Received: by 2002:a63:214b:: with SMTP id s11mr401813pgm.87.1628697786435; Wed, 11 Aug 2021 09:03:06 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:06 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 31/60] KVM: PPC: Book3S HV P9: Move TB updates Date: Thu, 12 Aug 2021 02:01:05 +1000 Message-Id: <20210811160134.904987-32-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the TB updates between saving and loading guest and host SPRs, to improve scheduling by keeping issue-NTC operations together as much as possible. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 36 +++++++++++++-------------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 814b0dfd590f..e7793bb806eb 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -215,15 +215,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; - mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = vc->tb_offset; - } - /* Could avoid mfmsr by passing around, but probably no big deal */ msr = mfmsr(); @@ -238,6 +229,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + if (vc->tb_offset) { + u64 new_tb = tb + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = vc->tb_offset; + } + if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); mtspr(SPRN_DPDES, vc->dpdes); @@ -469,6 +469,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc tb = mftb(); vcpu->arch.dec_expires = dec + tb; + if (vc->tb_offset_applied) { + u64 new_tb = tb - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + tb = mftb(); + if ((tb & 0xffffff) < (new_tb & 0xffffff)) + mtspr(SPRN_TBU40, new_tb + 0x1000000); + vc->tb_offset_applied = 0; + } + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -503,15 +512,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); - if (vc->tb_offset_applied) { - u64 new_tb = mftb() - vc->tb_offset_applied; - mtspr(SPRN_TBU40, new_tb); - tb 
= mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); - vc->tb_offset_applied = 0; - } - /* HDEC must be at least as large as DEC, so decrementer_max fits */ mtspr(SPRN_HDEC, decrementer_max); From patchwork Wed Aug 11 16:01:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515907 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=O/PYhnOi; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3L5R5yz9sXS for ; Thu, 12 Aug 2021 02:03:10 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233569AbhHKQDd (ORCPT ); Wed, 11 Aug 2021 12:03:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52034 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233450AbhHKQDd (ORCPT ); Wed, 11 Aug 2021 12:03:33 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C63ABC061765 for ; Wed, 11 Aug 2021 09:03:09 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id u13-20020a17090abb0db0290177e1d9b3f7so10370600pjr.1 for ; Wed, 11 Aug 2021 09:03:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Hhm8D0ijfuqb2YJw4nLV1d2c636q460BzD6QJo13ySI=; b=O/PYhnOi+cROeaJ0evj8S0R+kAQEMfMEOWx6vb192jX8LgWuy31mjhrVtmt4x/vQAb ER3D9nWF3HoALZvdYIBz8qLDwpdQ/M95YOXTlBQWVzP31tiF1p06T5l3FZI6YqB4N4QS iYEUIH5PtSfTUMY1rjyqV/XMFqQjPI5qniRQseLc6/gWJ4aP1MhKIKIm75FRDKS7MdHe SR/N8iL9Q8cj7p2UQkqMjbqkb4/tzFPlVJBCy4rgp2OYDOp7umjGXUGgMjfjIOJrBKm0 9oQrBmvADPduA0Vf2N06vZHPwQCXYy4MUtH7MmILsjAGg7awbaVFsIXo/nRaepwh8WcO 5CiQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Hhm8D0ijfuqb2YJw4nLV1d2c636q460BzD6QJo13ySI=; b=SBc8PMCBxpmwFwv+ytWWaF84htvaIxc0gx2p1XtxESFZkU/Kdz0lj+U6s1mH+2m/Vb pyEWE81Z0p/0d66arIRLt05BYThhxcVrBHj5RK2O9MmVWHEJnfcJACDCsp62KBtn3EfW TP+02H6u2vW2QQysFC8erX+mw4F0L38N0lALQGK4DZJj+bP3C5ZQBSprhK8bGtAfaS7W d0tURd+O4In+pFWrujyUTVt7s8MBZkiy1iHI1x+ZeHGwDZKQskeE1cEaYwXbiSfGicXB z2vkRg4bXX4sag0MeOLV0zxaSQ9Ra/8I3ve3R7t9HGhZVYNw5Ghuo74Texc8fIER0Qr8 ttzg== X-Gm-Message-State: AOAM531e/q9awaaImcnk9ajhFUCBNuoBUI8kJbwtBukZ5t1V4ahGI6Gd WWr1gg7kgYpmoCN9533v0j+XZe6SgzM= X-Google-Smtp-Source: ABdhPJyNNT65iYW/ogwMSKcnjIxXR0QfTKyF/WcNkSahgKOfyXGqI6VeBZvns19utMDdm4LsvoOelQ== X-Received: by 2002:a65:5bc9:: with SMTP id o9mr403620pgr.24.1628697789128; Wed, 11 Aug 2021 09:03:09 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:08 -0700 (PDT) From: Nicholas Piggin 
To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 32/60] KVM: PPC: Book3S HV P9: Optimise timebase reads Date: Thu, 12 Aug 2021 02:01:06 +1000 Message-Id: <20210811160134.904987-33-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Reduce the number of mfTB executed by passing the current timebase around entry and exit code rather than read it multiple times. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 +- arch/powerpc/kvm/book3s_hv.c | 88 +++++++++++++----------- arch/powerpc/kvm/book3s_hv_p9_entry.c | 33 +++++---- 3 files changed, 65 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index 19b6942c6969..f721bb1a5eab 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,7 +154,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } -int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr); +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #endif diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 967fa81063aa..a390980b42d8 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -276,22 +276,22 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu) * they should never fail.) */ -static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); - vc->preempt_tb = mftb(); + vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); } -static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc) +static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { - vc->stolen_tb += mftb() - vc->preempt_tb; + vc->stolen_tb += tb - vc->preempt_tb; vc->preempt_tb = TB_NIL; } spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -301,6 +301,7 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -309,12 +310,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) * ever sets it to NULL. 
*/ if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST && vcpu->arch.busy_preempt != TB_NIL) { - vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt; + vcpu->arch.busy_stolen += now - vcpu->arch.busy_preempt; vcpu->arch.busy_preempt = TB_NIL; } spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); @@ -324,13 +325,14 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; + u64 now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, now); spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST) - vcpu->arch.busy_preempt = mftb(); + vcpu->arch.busy_preempt = now; spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); } @@ -685,7 +687,7 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) } static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc) + struct kvmppc_vcore *vc, u64 tb) { struct dtl_entry *dt; struct lppaca *vpa; @@ -696,7 +698,7 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = mftb(); + now = tb; core_stolen = vcore_stolen_time(vc, now); stolen = core_stolen - vcpu->arch.stolen_logged; vcpu->arch.stolen_logged = core_stolen; @@ -2917,14 +2919,14 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu) extern int __kvmppc_vcore_entry(void); static void kvmppc_remove_runnable(struct kvmppc_vcore *vc, - struct kvm_vcpu *vcpu) + struct kvm_vcpu *vcpu, u64 tb) { u64 now; if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE) return; spin_lock_irq(&vcpu->arch.tbacct_lock); - now = mftb(); + now = tb; vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) - vcpu->arch.stolen_logged; vcpu->arch.busy_preempt = now; @@ -3175,14 +3177,14 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) } /* Start accumulating stolen time */ - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; - kvmppc_core_end_stolen(vc); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); spin_lock(&lp->lock); @@ -3309,7 +3311,7 @@ static void prepare_threads(struct kvmppc_vcore *vc) vcpu->arch.ret = RESUME_GUEST; else continue; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } } @@ -3328,7 +3330,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) list_del_init(&pvc->preempt_list); if (pvc->runner == NULL) { pvc->vcore_state = VCORE_INACTIVE; - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); } spin_unlock(&pvc->lock); continue; @@ -3337,7 +3339,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads) spin_unlock(&pvc->lock); continue; } - kvmppc_core_end_stolen(pvc); + kvmppc_core_end_stolen(pvc, mftb()); pvc->vcore_state = VCORE_PIGGYBACK; if (cip->total_threads >= target_threads) break; @@ -3404,7 +3406,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) else ++still_running; } else { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); 
} } @@ -3413,7 +3415,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master) kvmppc_vcore_preempt(vc); } else if (vc->runner) { vc->vcore_state = VCORE_PREEMPT; - kvmppc_core_start_stolen(vc); + kvmppc_core_start_stolen(vc, mftb()); } else { vc->vcore_state = VCORE_INACTIVE; } @@ -3544,7 +3546,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) { for_each_runnable_thread(i, vcpu, vc) { vcpu->arch.ret = -EBUSY; - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); wake_up(&vcpu->arch.cpu_run); } goto out; @@ -3676,7 +3678,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc); + kvmppc_create_dtl_entry(vcpu, pvc, mftb()); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4138,20 +4140,17 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) * Guest entry for POWER9 and later CPUs. */ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, - unsigned long lpcr) + unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; s64 dec; - u64 tb, next_timer; + u64 next_timer; unsigned long msr; int trap; - WARN_ON_ONCE(vcpu->arch.ceded); - - tb = mftb(); next_timer = timer_get_next_tb(); - if (tb >= next_timer) + if (*tb >= next_timer) return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; @@ -4248,7 +4247,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, * * XXX: Another day's problem. */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb); + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); @@ -4264,8 +4263,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + (tb + vc->tb_offset); + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && @@ -4277,7 +4276,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } else { kvmppc_xive_push_vcpu(vcpu); - trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr); + trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4308,6 +4307,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); + timer_rearm_host_dec(*tb); + restore_p9_host_os_sprs(vcpu, &host_os_sprs); store_fp_state(&vcpu->arch.fp); @@ -4327,8 +4328,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vc->entry_exit_map = 0x101; vc->in_guest = 0; - timer_rearm_host_dec(tb); - kvmppc_subcore_exit_guest(); return trap; @@ -4570,7 +4569,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, mftb()); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if 
(vc->vcore_state == VCORE_SLEEPING) { @@ -4605,7 +4604,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) for_each_runnable_thread(i, v, vc) { kvmppc_core_prepare_to_enter(v); if (signal_pending(v->arch.run_task)) { - kvmppc_remove_runnable(vc, v); + kvmppc_remove_runnable(vc, v, mftb()); v->stat.signal_exits++; v->run->exit_reason = KVM_EXIT_INTR; v->arch.ret = -EINTR; @@ -4646,7 +4645,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu) kvmppc_vcore_end_preempt(vc); if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) { - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, mftb()); vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; @@ -4674,6 +4673,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; unsigned long flags; + u64 tb; trace_kvmppc_run_vcpu_enter(vcpu); @@ -4684,7 +4684,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vc = vcpu->arch.vcore; vcpu->arch.ceded = 0; vcpu->arch.run_task = current; - vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb()); vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; @@ -4709,7 +4708,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); init_vcore_to_run(vc); - vc->preempt_tb = TB_NIL; preempt_disable(); pcpu = smp_processor_id(); @@ -4719,6 +4717,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* flags save not required, but irq_pmu has no disable/enable API */ powerpc_local_irq_pmu_save(flags); + if (signal_pending(current)) goto sigpend; if (need_resched() || !kvm->arch.mmu_ready) @@ -4741,12 +4740,17 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } + tb = mftb(); + + vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); + vc->preempt_tb = TB_NIL; + kvmppc_clear_host_core(pcpu); local_paca->kvm_hstate.napping = 0; local_paca->kvm_hstate.kvm_split_mode = NULL; kvmppc_start_thread(vcpu, vc); - kvmppc_create_dtl_entry(vcpu, vc); + kvmppc_create_dtl_entry(vcpu, vc, tb); trace_kvm_guest_enter(vcpu); vc->vcore_state = VCORE_RUNNING; @@ -4761,7 +4765,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, /* Tell lockdep that we're about to enable interrupts */ trace_hardirqs_on(); - trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr); + trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr, &tb); vcpu->arch.trap = trap; trace_hardirqs_off(); @@ -4790,7 +4794,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, * by L2 and the L1 decrementer is provided in hdec_expires */ if (kvmppc_core_pending_dec(vcpu) && - ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) || + ((tb < kvmppc_dec_expires_host_tb(vcpu)) || (trap == BOOK3S_INTERRUPT_SYSCALL && kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED))) kvmppc_core_dequeue_dec(vcpu); @@ -4826,7 +4830,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, trace_kvmppc_run_core(vc, 1); done: - kvmppc_remove_runnable(vc, vcpu); + kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index e7793bb806eb..2bd96d8256d1 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -183,13 +183,13 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) } } -int 
kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr) +int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; s64 hdec, dec; - u64 tb, purr, spurr; + u64 purr, spurr; u64 *exsave; bool ri_set; int trap; @@ -203,8 +203,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr1; unsigned long host_dawrx1; - tb = mftb(); - hdec = time_limit - tb; + hdec = time_limit - *tb; if (hdec < 0) return BOOK3S_INTERRUPT_HV_DECREMENTER; @@ -230,11 +229,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } if (vc->tb_offset) { - u64 new_tb = tb + vc->tb_offset; + u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = vc->tb_offset; } @@ -317,7 +318,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HDEC, hdec); - mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb); + mtspr(SPRN_DEC, vcpu->arch.dec_expires - *tb); #ifdef CONFIG_PPC_TRANSACTIONAL_MEM tm_return_to_guest: @@ -466,15 +467,17 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; - tb = mftb(); - vcpu->arch.dec_expires = dec + tb; + *tb = mftb(); + vcpu->arch.dec_expires = dec + *tb; if (vc->tb_offset_applied) { - u64 new_tb = tb - vc->tb_offset_applied; + u64 new_tb = *tb - vc->tb_offset_applied; mtspr(SPRN_TBU40, new_tb); - tb = mftb(); - if ((tb & 0xffffff) < (new_tb & 0xffffff)) - mtspr(SPRN_TBU40, new_tb + 0x1000000); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + *tb = new_tb; vc->tb_offset_applied = 0; } From patchwork Wed Aug 11 16:01:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515908 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=PhC1DPPW; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3P519qz9t1s for ; Thu, 12 Aug 2021 02:03:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233450AbhHKQDg (ORCPT ); Wed, 11 Aug 2021 12:03:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233639AbhHKQDf (ORCPT ); Wed, 11 Aug 2021 12:03:35 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 31FA0C061765 for ; Wed, 11 Aug 2021 
09:03:12 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id fa24-20020a17090af0d8b0290178bfa69d97so5813646pjb.0 for ; Wed, 11 Aug 2021 09:03:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=PqyRTscdsTGKYKfnYwkZD3pK5phryJeQLedqJYFt8j0=; b=PhC1DPPWo48cbqfOKin9V+yRb719XqP2DuQleOU/qTymiiG4BB5mXjW3h2pzN3B6vu 7RJTZeDBYmVMdyyxA19TLPdFxGd2kYqJlZOEtUOqcT4LgKG/riKxdPdWLBMA+k6/ntjl YCgFDFlP3RalSNaJtDjIV6tT7+B/+R1pWnsCCa4hW7CRGVZj9SEUBo+ZarWl8g8BcgJ4 ppVw9EYO8vWpNkdnt+qKPIJ/KJUJ1TdzLQ7iin2RbPE9Bsyz5tWSlWV/zMre9LTzioPm VhZCljNK8nnO/MlKCP1TjtmtsieJXc4ZWage10+Xmz9dMw0IYlDnGyREH46/k4LnIZzD PU6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=PqyRTscdsTGKYKfnYwkZD3pK5phryJeQLedqJYFt8j0=; b=BAUz37FfNGBsHWd5lmr7zj7KCKAYupg6x/FIapc96Z4adEh5sOVjvIAp8bEJULLU4i Ce9qidNvL20fCVm9l2pso8BDJDbmWClq8N1jBRRu4Il7KU6f3zdIm7u27s609oT7bh5a zFfDBhl8KVztYgbeN3rB9PIr0fKXXx69WMgH7NEhEGzh25ZrGzbNnbvJmiq/NMFlEBzV tcA8+hl3ZAmmKgp2jh+j6MhUfHj/lmHOGMSezpWUVWjFN9CQizu9a8lEDXIJVoCL0/cR 8Qf3ChNivhqlVui1aFYwL8ms2RSRGmGH54zVHSxfznQLqWF6t/9eWXNq9/Dso1C9NQCC b+XA== X-Gm-Message-State: AOAM531hXG7G7q0Uv+9Y1eOvgb9cK4ii5RmKDG0lSAhGydwiJCEZb+1d aCY/WwaZEiHaEm3kO3NhB9pofvNjxHw= X-Google-Smtp-Source: ABdhPJy0edYN5bugMMus7taMQ6SUWGtuhtTDc9nixUI473SX0KeZBhtYqPW/tpqpOtZ+mLKaRaA0+Q== X-Received: by 2002:a63:4a55:: with SMTP id j21mr90731pgl.187.1628697791656; Wed, 11 Aug 2021 09:03:11 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:11 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 33/60] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Date: Thu, 12 Aug 2021 02:01:07 +1000 Message-Id: <20210811160134.904987-34-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Avoid interleaving mfSPR and mtSPR to reduce SPR scoreboard stalls. 
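To illustrate the scheduling idea (a simplified sketch only, using hypothetical SPRs A and B and made-up locals, not the registers actually rearranged by this patch): an interleaved save/load sequence makes each mtSPR wait behind the mfSPR issued just before it, whereas grouping the reads together and the writes together lets the independent operations issue back to back.

	/* Hypothetical sketch -- interleaved form, each pair serialises: */
	host_a = mfspr(SPRN_A);
	mtspr(SPRN_A, guest_a);
	host_b = mfspr(SPRN_B);
	mtspr(SPRN_B, guest_b);

	/* Batched form this patch moves towards -- reads first, then writes: */
	host_a = mfspr(SPRN_A);
	host_b = mfspr(SPRN_B);
	mtspr(SPRN_A, guest_a);
	mtspr(SPRN_B, guest_b);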
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 19 +++++++++++-------- 2 files changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index a390980b42d8..823e501c5ebe 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4307,10 +4307,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, store_spr_state(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - store_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); @@ -4325,6 +4321,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 2bd96d8256d1..bd0021cd3a67 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -228,6 +228,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_dawrx1 = mfspr(SPRN_DAWRX1); } + local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); + local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -244,8 +247,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); - local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); @@ -448,10 +449,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc /* Advance host PURR/SPURR by the amount used by guest */ purr = mfspr(SPRN_PURR); spurr = mfspr(SPRN_SPURR); - mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr + - purr - vcpu->arch.purr); - mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr + - spurr - vcpu->arch.spurr); + local_paca->kvm_hstate.host_purr += purr - vcpu->arch.purr; + local_paca->kvm_hstate.host_spurr += spurr - vcpu->arch.spurr; vcpu->arch.purr = purr; vcpu->arch.spurr = spurr; @@ -464,6 +463,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); + vc->dpdes = mfspr(SPRN_DPDES); + vc->vtb = mfspr(SPRN_VTB); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -481,6 +483,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); + mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -509,8 +514,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - vc->dpdes = mfspr(SPRN_DPDES); - vc->vtb = mfspr(SPRN_VTB); mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Wed Aug 11 16:01:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515909 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=bNwhrFma; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3R34CKz9sWl for ; Thu, 12 Aug 2021 02:03:15 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233645AbhHKQDi (ORCPT ); Wed, 11 Aug 2021 12:03:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233276AbhHKQDi (ORCPT ); Wed, 11 Aug 2021 12:03:38 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98148C061765 for ; Wed, 11 Aug 2021 09:03:14 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id a8so4187813pjk.4 for ; Wed, 11 Aug 2021 09:03:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lgFzwfPPUELBCc+2phoJnC6npoZNQp0x3MsFLsf+uAE=; b=bNwhrFmakUBPDLkDNtpxinox1H38Xkd2WP8CUhuif7wxt6tgrasrSwSb9rdqVf05i9 p78XevjWvfpKzhPrZ1DxDgO0VwVncAqFSIGCXsMBuYsbt9XHVOBuPZ4JBQwwg+Je3V4w bB6FMcaxFhegAW+VSgRtoj01huxnPAC4kwN5nfyvrJUorgiTSNgsIJKt2t6skwhGrNan BJNsDaGtj5HdIGiwd3AlI2CQtQzrzddPi1qweBJUxvuZr2jq3Pi7UJ8xrJu8v9YFcnWs xGc1lkwK9b9770SdjDDHAZvST+6JKLMAz5fliTm+FSU5IeEtckilY57O/9+4A2jstYkt n+xw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lgFzwfPPUELBCc+2phoJnC6npoZNQp0x3MsFLsf+uAE=; b=bQTgt0+N3F0mGzV6wo+jHlqzxHVgqkwXMQtIwkLODzDil661NnK9sKPYMqB5HHmuGh w7NTYWe0T/fWyui3JtcsrHq/U6JKg0K5ZvCRH/UN5+g6jrig1TH7DRCGeDEOBruWg+vb 9OzF5q1FgtgO3bOxszyz1X4btlIOZ/RvXm/ITJxLMm/4oTDihgFTV44Oirk+jcjFOQbb hxirp5MmAbklwH60kzGwcKxjr+CuGXg1Yp3B7Pbvqp5lSVpVcx5hNkwHv3diCuuDPGXO 0CWORaqr4mOsQbgdM0rg8UW47ti1Xmzwrpbitu7Yrx9umwm/L0kBwbYDI+VXAO6Yxuca pNSA== X-Gm-Message-State: AOAM530J/h1enkEuXsYIAusyMaSK+YyAXJ1SkRd6eC4z53XZHZcjHZ74 vPLUZ17tmKJo8OC271XNCXeAkq/x1f0= X-Google-Smtp-Source: ABdhPJx82QDgyDUc+fNWk0gzv6fLU3PtFqnsDlSQzERROUTDRVYYznRkp80XIBZyA7/DvhcQxHdwuw== X-Received: by 2002:a17:902:b188:b029:11b:1549:da31 with SMTP id s8-20020a170902b188b029011b1549da31mr4733685plr.7.1628697794080; Wed, 11 Aug 2021 09:03:14 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:13 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 34/60] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Date: Thu, 12 Aug 2021 02:01:08 +1000 Message-Id: <20210811160134.904987-35-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: 
<20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Keep better track of the current SPR value in places where they are to be loaded with a new context, to reduce expensive mtSPR operations. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 64 ++++++++++++++++++++++-------------- 1 file changed, 39 insertions(+), 25 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 823e501c5ebe..85f441d9ce63 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4045,19 +4045,28 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu, } } -static void load_spr_state(struct kvm_vcpu *vcpu) +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) { - mtspr(SPRN_DSCR, vcpu->arch.dscr); - mtspr(SPRN_IAMR, vcpu->arch.iamr); - mtspr(SPRN_PSPB, vcpu->arch.pspb); - mtspr(SPRN_FSCR, vcpu->arch.fscr); mtspr(SPRN_TAR, vcpu->arch.tar); mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); mtspr(SPRN_BESCR, vcpu->arch.bescr); - mtspr(SPRN_TIDR, vcpu->arch.tid); - mtspr(SPRN_AMR, vcpu->arch.amr); - mtspr(SPRN_UAMOR, vcpu->arch.uamor); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, vcpu->arch.tid); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); /* * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] @@ -4072,28 +4081,31 @@ static void load_spr_state(struct kvm_vcpu *vcpu) static void store_spr_state(struct kvm_vcpu *vcpu) { - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); - - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.tar = mfspr(SPRN_TAR); vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - vcpu->arch.tid = mfspr(SPRN_TIDR); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); vcpu->arch.amr = mfspr(SPRN_AMR); vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { - host_os_sprs->dscr = mfspr(SPRN_DSCR); - host_os_sprs->tidr = mfspr(SPRN_TIDR); + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + host_os_sprs->tidr = mfspr(SPRN_TIDR); host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); } /* vcpu guest regs must already be saved */ @@ -4102,18 +4114,20 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, { mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - mtspr(SPRN_PSPB, 0); - mtspr(SPRN_UAMOR, 0); - - mtspr(SPRN_DSCR, host_os_sprs->dscr); - mtspr(SPRN_TIDR, host_os_sprs->tidr); - mtspr(SPRN_IAMR, host_os_sprs->iamr); - + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, host_os_sprs->tidr); + if 
(host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) @@ -4205,7 +4219,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu); + load_spr_state(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* From patchwork Wed Aug 11 16:01:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515910 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=dJC/kinx; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3W0TVYz9sjJ for ; Thu, 12 Aug 2021 02:03:19 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233650AbhHKQDm (ORCPT ); Wed, 11 Aug 2021 12:03:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52068 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233276AbhHKQDk (ORCPT ); Wed, 11 Aug 2021 12:03:40 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F907C061765 for ; Wed, 11 Aug 2021 09:03:17 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id u21-20020a17090a8915b02901782c36f543so10329502pjn.4 for ; Wed, 11 Aug 2021 09:03:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XEm+DUqj4PUtiyOis8qtw0OTEw1uAY3Zp3uaXAMjEQ4=; b=dJC/kinxyhAdc5J8a4tDG79b2oCr39+gJuy90PWsrsw9a0ps6Mca8mQcPrqYaOV6U5 fF1PAfH9PVxfLfLX1aIbZrZqilYHQtksGV/c7MathOCCZSA3kpCo11yJ6zS0uKFSeGjK zk6gQJn/RW7fZUO6cbGOvoqOS/EksCwdpezAp5UjSLmQX/u5dAhyzqp2exkbU+X0T5qv w1IDeiyumd5IwBauDOGe9E0SvOFkIyaSxfUXNV4L1MRGhoEQq7g5t/lBcbMIwMHTYr0N MdOLrqjuKUguerPpxnVUKy6Io/eKxH0QkGIlKwOHF1gi89j8kA2JordYjPw6dJCIwpuG yPyg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XEm+DUqj4PUtiyOis8qtw0OTEw1uAY3Zp3uaXAMjEQ4=; b=Kyk4Q+jqkTZ+LoPfjThQcxpB0wzwHhhE5ohUbt2HfXQTOfgDShhOaILs2RJSbiKBbf k+9NBKokgSvTy9/Xaho+BPALbxdTLQ7E1/OLvbs9pG4695LNf2U2t80JMrk+ovfLFTt7 e/nIkDdOF/VY5vtZk7RiVfVEadYCWQrSJQDXoXv+QekpbaCK96PC9eggOLx5C4HMuioZ yQMc8uMU3go+iXK5JMBCC6O6GzgirgtxPbCwtd1JjiQVuDFt7bT2wNz1lbRrpGS5uyyC J33CY6fg6aAfPhN+N/00ihD4aMDEQ1z8JiJYK8UWK20P0lT0omNN6K+iwx1CJLkPzH7H UAqg== X-Gm-Message-State: 
AOAM533J/2B7iW2Va53OayJacHNnA7J5ncHGMGrsrZGfZ1IThlI82dCR r2rb2cDInpIWRSzgR/a6jPUY8HtCemI= X-Google-Smtp-Source: ABdhPJwbukVAGKjLO2gZJLa3Iwc+KZUCF6MrCW4j/yqRZ8x8u2AzScv9ZLLuH3hXgM/MnFAfSTZUnQ== X-Received: by 2002:a17:90a:c006:: with SMTP id p6mr6424106pjt.144.1628697796538; Wed, 11 Aug 2021 09:03:16 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:16 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 35/60] KVM: PPC: Book3S HV P9: Juggle SPR switching around Date: Thu, 12 Aug 2021 02:01:09 +1000 Message-Id: <20210811160134.904987-36-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This juggles SPR switching on the entry and exit sides to be more symmetric, which makes the next refactoring patch possible with no functional change. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 85f441d9ce63..7867d6793b3e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4211,7 +4211,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, msr = mfmsr(); /* TM restore can update msr */ } - switch_pmu_to_guest(vcpu, &host_os_sprs); + load_spr_state(vcpu, &host_os_sprs); load_fp_state(&vcpu->arch.fp); #ifdef CONFIG_ALTIVEC @@ -4219,7 +4219,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, #endif mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - load_spr_state(vcpu, &host_os_sprs); + switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { /* @@ -4319,6 +4319,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } + switch_pmu_to_host(vcpu, &host_os_sprs); + store_spr_state(vcpu); store_fp_state(&vcpu->arch.fp); @@ -4333,8 +4335,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - switch_pmu_to_host(vcpu, &host_os_sprs); - timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); From patchwork Wed Aug 11 16:01:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515911 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XJqwDEXZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3X3w9Kz9shx for ; Thu, 12 Aug 2021 02:03:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233639AbhHKQDn (ORCPT ); Wed, 11 Aug 2021 12:03:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52076 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233276AbhHKQDn (ORCPT ); Wed, 11 Aug 2021 12:03:43 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A65B6C061765 for ; Wed, 11 Aug 2021 09:03:19 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id b7so3284641plh.7 for ; Wed, 11 Aug 2021 09:03:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nSYAtI8P17mLu9fXMsfGStnZ2i1gV8qJNG1zXcMJXYc=; b=XJqwDEXZ1TmJVgaAZbsPYwhy/zHNIJw9dyq5b9vepuoqGa9FonsxE15xQixMwOIPSA 0eQ8VuWbHvQRflYSzYFDDpzigE5OJ06n1AHujAibRT84kZtsguujccvLpbNXqR0vkLVq L797vhzmblb1JFS/yZwoIstySRBaJQijH/477RDJcUEgKvgnKYdS2Mw7dL06H1VDFX/r zmXZcoVpnUPgBruMljhKY5hyvQzo2t6au0SCGouXi/pZQcO+4sIp/SvZYTadel9RRkZr elyfai5Jf9PxXEeebBJzPgmhuHkKQDybAVxPU8BOOt1WRu1l4/2EFXLYtH95z5/xXKoQ 9Srw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nSYAtI8P17mLu9fXMsfGStnZ2i1gV8qJNG1zXcMJXYc=; b=Wh9W/9HEW+0jrzvtxxdm7Tk0HEHTamzabKnHQ0/Y8h2742WK2rZwu1LQSwKVMRDAXU Cy7hYXD9gS1XbgaLQJj2hIMR3u1MWb8jEF3unQ2a208kV7dPv+7Wr6O/1GSgGARy0Lio Ub6ZJ01mU+N70EX4I+Bd7WDcYOdy0kQaNncn+Pvn76icebTi43B5P8ItqiEQvOjz/YcA TzgK8ZSeI1XXG5sYS9ZHD+YGLy73O2ulQh8+wVLYGyNWv+ttFPS40MRdvJXQw+Fv2rUN 0k0x2QAXaf2N2SDgtLpCFHUTp2j4ijfXMy9UH94vvxsB1YrOAADK9ds0ZohhLQ0V4PXe gjng== X-Gm-Message-State: AOAM5332wlNQ2LFwq3bGObgpCtYJkyG6tbXmpw1cIgxpzRGPRGGJW+b8 9BNHt3cAUG3SfqCfMmZc/X63zoQ0kho= X-Google-Smtp-Source: ABdhPJzYHkcy92be5dnLULlLQ58hc/tVRRBzsBWSWBMxdO9FX9NIlvnt7dZL9vptNvxz3XXxEvbLfA== X-Received: by 2002:a17:90a:9cf:: with SMTP id 73mr11357005pjo.136.1628697799168; Wed, 11 Aug 2021 09:03:19 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:18 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 36/60] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Date: Thu, 12 Aug 2021 02:01:10 +1000 Message-Id: <20210811160134.904987-37-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This should be no functional difference but makes the caller easier to read. 
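In a condensed sketch (the actual hunks follow the diffstat below), the entry and exit paths of kvmhv_p9_guest_entry() reduce to a matched helper pair:

	/* entry: TM restore, SPRs, FP/VSX/Altivec behind one helper */
	if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
		msr = mfmsr();	/* MSR may have been updated by TM restore */

	/* exit: the mirror-image saves */
	store_vcpu_state(vcpu);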
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 65 +++++++++++++++++++++++------------- 1 file changed, 41 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7867d6793b3e..0f86b65efc10 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4098,6 +4098,44 @@ static void store_spr_state(struct kvm_vcpu *vcpu) vcpu->arch.ctrl = mfspr(SPRN_CTRLF); } +/* Returns true if current MSR and/or guest MSR may have changed */ +static bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} + +static void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} + static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { if (!cpu_has_feature(CPU_FTR_ARCH_31)) @@ -4205,19 +4243,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - msr = mfmsr(); /* TM restore can update msr */ - } - - load_spr_state(vcpu, &host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ switch_pmu_to_guest(vcpu, &host_os_sprs); @@ -4321,17 +4348,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_host(vcpu, &host_os_sprs); - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + store_vcpu_state(vcpu); vcpu_vpa_increment_dispatch(vcpu); From patchwork Wed Aug 11 16:01:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515912 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=KSNabyNP; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3b52Xhz9sXV for ; Thu, 12 Aug 2021 02:03:23 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S233652AbhHKQDq (ORCPT ); Wed, 11 Aug 2021 12:03:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQDq (ORCPT ); Wed, 11 Aug 2021 12:03:46 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB760C061765 for ; Wed, 11 Aug 2021 09:03:22 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id cp15-20020a17090afb8fb029017891959dcbso10355314pjb.2 for ; Wed, 11 Aug 2021 09:03:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gHJtvy8D8pzVJv1skr/QVAtKtm3D/iEKFrXFvPMWe6A=; b=KSNabyNP8q4CdQuZNqIyskP+/6zKm7RcYsBFk8aGEsa4H1w0PvY4g9RgXdOU2B5DLd LEopCxpdhKOXPTu2d+PpCTjz2KDMpxHa/htrHg8f+lUUgMBUx29omhq+e1+h/iXXqNiC PjwX11mEylQMWxpwbpPYjZ9MI2E6vyhInc1xg4ZJslHACtuwPIMCmp1yZOXJ3VV7rQHb IaoZPlCmP5iv0Y2Mo6r31gw3XZ1HT3rdpeHCVlCFp3hxDwY+5A0fNNeU2bMF0WLeywXu 4yD2bqLLbkUQeuVO4tY3sgg7EN8efRETmKJEZ2JYOsz3BA81Z3K7sjlQtROQbU7hvDGT clQw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gHJtvy8D8pzVJv1skr/QVAtKtm3D/iEKFrXFvPMWe6A=; b=MQMlFrXJfdK0oW4Bt+wsnp0wJXs46bdEALTHPUrJ1orNFM/fbeX5Z+CWeD16ddyDYW Pn7LdBTdglwJ6dK0Ev1oh/NlqhKypmk5RseEoHePATL8uakr8niYYCNX7NugEKi9RT41 yOGSJWM7+8czf61y4GtWYPvhe5HsH8BW3Npe3f4ELdmy+uoUIKAnESrkP1rFqO+CciSs bloVlyd74Zx7vjI0YvcM1O8eF2lARu39Lvkt/D2Gf4yHTuqDCmH2v/Ex8ZbPeNynQ2GL QUE3xJGvuBnSP0NkCff/oKorfvW2XZBBxej2r09kUzjDhPSeyV0ldvWJ3Z+rrg9QbX1v 1/mg== X-Gm-Message-State: AOAM532C0wwQuec0yOgQvq1a+6zi81xpkpUTpCgL7VmjJxBr0JXOWJyP 2UsXBZXeCFmfhpIL998CNGbdfHtiVeI= X-Google-Smtp-Source: ABdhPJxFm2c7+E40+NqDKC7xpwQ0xA30h7TBVIyl/Ars2ueam4ANQeb9rfSvui6HBNwQ8/baHvWtuw== X-Received: by 2002:a17:90a:9511:: with SMTP id t17mr11408323pjo.194.1628697801828; Wed, 11 Aug 2021 09:03:21 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:21 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 37/60] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in Date: Thu, 12 Aug 2021 02:01:11 +1000 Message-Id: <20210811160134.904987-38-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the P9 guest/host register switching functions to the built-in P9 entry code, and export it for nested to use as well. This allows more flexibility in scheduling these supervisor privileged SPR accesses with the HV privileged and PR SPR accesses in the low level entry code. 
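Condensed, the mechanical shape of the move (the full declarations land in the new arch/powerpc/kvm/book3s_hv.h and the definitions in book3s_hv_p9_entry.c, as the diff below shows): the helpers lose their static qualifier, gain prototypes in the shared header, and are exported so the modular kvm-hv code can keep calling them.

	/* arch/powerpc/kvm/book3s_hv.h (new) -- shared prototypes */
	bool load_vcpu_state(struct kvm_vcpu *vcpu,
			     struct p9_host_os_sprs *host_os_sprs);
	void store_vcpu_state(struct kvm_vcpu *vcpu);
	void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
				 struct p9_host_os_sprs *host_os_sprs);

	/* arch/powerpc/kvm/book3s_hv_p9_entry.c (built-in) -- exported for modules */
	EXPORT_SYMBOL_GPL(switch_pmu_to_guest);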
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 378 +------------------------- arch/powerpc/kvm/book3s_hv.h | 44 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 353 ++++++++++++++++++++++++ 3 files changed, 398 insertions(+), 377 deletions(-) create mode 100644 arch/powerpc/kvm/book3s_hv.h diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 0f86b65efc10..2731d3d39000 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -80,6 +80,7 @@ #include #include "book3s.h" +#include "book3s_hv.h" #define CREATE_TRACE_POINTS #include "trace_hv.h" @@ -127,11 +128,6 @@ static bool nested = true; module_param(nested, bool, S_IRUGO | S_IWUSR); MODULE_PARM_DESC(nested, "Enable nested virtualization (only on POWER9)"); -static inline bool nesting_enabled(struct kvm *kvm) -{ - return kvm->arch.nested_enable && kvm_is_radix(kvm); -} - static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu); /* @@ -3800,378 +3796,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) trace_kvmppc_run_core(vc, 1); } -/* - * Privileged (non-hypervisor) host registers to save. - */ -struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; - unsigned long iamr; - unsigned long amr; - unsigned long fscr; - - unsigned int pmc1; - unsigned int pmc2; - unsigned int pmc3; - unsigned int pmc4; - unsigned int pmc5; - unsigned int pmc6; - unsigned long mmcr0; - unsigned long mmcr1; - unsigned long mmcr2; - unsigned long mmcr3; - unsigned long mmcra; - unsigned long siar; - unsigned long sier1; - unsigned long sier2; - unsigned long sier3; - unsigned long sdar; -}; - -static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) -{ - if (!(mmcr0 & MMCR0_FC)) - goto do_freeze; - if (mmcra & MMCRA_SAMPLE_ENABLE) - goto do_freeze; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - if (!(mmcr0 & MMCR0_PMCCEXT)) - goto do_freeze; - if (!(mmcra & MMCRA_BHRB_DISABLE)) - goto do_freeze; - } - return; - -do_freeze: - mmcr0 = MMCR0_FC; - mmcra = 0; - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mmcr0 |= MMCR0_PMCCEXT; - mmcra = MMCRA_BHRB_DISABLE; - } - - mtspr(SPRN_MMCR0, mmcr0); - mtspr(SPRN_MMCRA, mmcra); - isync(); -} - -static void switch_pmu_to_guest(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int load_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - load_pmu = lp->pmcregs_in_use; - - /* Save host */ - if (ppc_get_pmu_inuse()) { - /* - * It might be better to put PMU handling (at least for the - * host) in the perf subsystem because it knows more about what - * is being used. 
- */ - - /* POWER9, POWER10 do not implement HPMC or SPMC */ - - host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); - host_os_sprs->mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); - - host_os_sprs->pmc1 = mfspr(SPRN_PMC1); - host_os_sprs->pmc2 = mfspr(SPRN_PMC2); - host_os_sprs->pmc3 = mfspr(SPRN_PMC3); - host_os_sprs->pmc4 = mfspr(SPRN_PMC4); - host_os_sprs->pmc5 = mfspr(SPRN_PMC5); - host_os_sprs->pmc6 = mfspr(SPRN_PMC6); - host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); - host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); - host_os_sprs->sdar = mfspr(SPRN_SDAR); - host_os_sprs->siar = mfspr(SPRN_SIAR); - host_os_sprs->sier1 = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); - host_os_sprs->sier2 = mfspr(SPRN_SIER2); - host_os_sprs->sier3 = mfspr(SPRN_SIER3); - } - } - -#ifdef CONFIG_PPC_PSERIES - /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = load_pmu; - barrier(); - } -#endif - - /* - * Load guest. If the VPA said the PMCs are not in use but the guest - * tried to access them anyway, HFSCR[PM] will be set by the HFAC - * fault so we can make forward progress. - */ - if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { - mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); - mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); - mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); - mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); - mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); - mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); - mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); - mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); - mtspr(SPRN_SDAR, vcpu->arch.sdar); - mtspr(SPRN_SIAR, vcpu->arch.siar); - mtspr(SPRN_SIER, vcpu->arch.sier[0]); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); - mtspr(SPRN_SIER2, vcpu->arch.sier[1]); - mtspr(SPRN_SIER3, vcpu->arch.sier[2]); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, vcpu->arch.mmcra); - mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); - /* No isync necessary because we're starting counters */ - - if (!vcpu->arch.nested && - (vcpu->arch.hfscr_permitted & HFSCR_PM)) - vcpu->arch.hfscr |= HFSCR_PM; - } -} - -static void switch_pmu_to_host(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - struct lppaca *lp; - int save_pmu = 1; - - lp = vcpu->arch.vpa.pinned_addr; - if (lp) - save_pmu = lp->pmcregs_in_use; - if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { - /* - * Save pmu if this guest is capable of running nested guests. - * This is option is for old L1s that do not set their - * lppaca->pmcregs_in_use properly when entering their L2. 
- */ - save_pmu |= nesting_enabled(vcpu->kvm); - } - - if (save_pmu) { - vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); - vcpu->arch.mmcra = mfspr(SPRN_MMCRA); - - freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); - - vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); - vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); - vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); - vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); - vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); - vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); - vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); - vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); - vcpu->arch.sdar = mfspr(SPRN_SDAR); - vcpu->arch.siar = mfspr(SPRN_SIAR); - vcpu->arch.sier[0] = mfspr(SPRN_SIER); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); - vcpu->arch.sier[1] = mfspr(SPRN_SIER2); - vcpu->arch.sier[2] = mfspr(SPRN_SIER3); - } - - } else if (vcpu->arch.hfscr & HFSCR_PM) { - /* - * The guest accessed PMC SPRs without specifying they should - * be preserved, or it cleared pmcregs_in_use after the last - * access. Just ensure they are frozen. - */ - freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); - - /* - * Demand-fault PMU register access in the guest. - * - * This is used to grab the guest's VPA pmcregs_in_use value - * and reflect it into the host's VPA in the case of a nested - * hypervisor. - * - * It also avoids having to zero-out SPRs after each guest - * exit to avoid side-channels when. - * - * This is cleared here when we exit the guest, so later HFSCR - * interrupt handling can add it back to run the guest with - * PM enabled next time. - */ - if (!vcpu->arch.nested) - vcpu->arch.hfscr &= ~HFSCR_PM; - } /* otherwise the PMU should still be frozen */ - -#ifdef CONFIG_PPC_PSERIES - if (kvmhv_on_pseries()) { - barrier(); - get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); - barrier(); - } -#endif - - if (ppc_get_pmu_inuse()) { - mtspr(SPRN_PMC1, host_os_sprs->pmc1); - mtspr(SPRN_PMC2, host_os_sprs->pmc2); - mtspr(SPRN_PMC3, host_os_sprs->pmc3); - mtspr(SPRN_PMC4, host_os_sprs->pmc4); - mtspr(SPRN_PMC5, host_os_sprs->pmc5); - mtspr(SPRN_PMC6, host_os_sprs->pmc6); - mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); - mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); - mtspr(SPRN_SDAR, host_os_sprs->sdar); - mtspr(SPRN_SIAR, host_os_sprs->siar); - mtspr(SPRN_SIER, host_os_sprs->sier1); - - if (cpu_has_feature(CPU_FTR_ARCH_31)) { - mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); - mtspr(SPRN_SIER2, host_os_sprs->sier2); - mtspr(SPRN_SIER3, host_os_sprs->sier3); - } - - /* Set MMCRA then MMCR0 last */ - mtspr(SPRN_MMCRA, host_os_sprs->mmcra); - mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); - isync(); - } -} - -static void load_spr_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, vcpu->arch.tid); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, vcpu->arch.iamr); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, vcpu->arch.amr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, vcpu->arch.dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, vcpu->arch.pspb); - - /* - * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] - * clear (or hstate set appropriately to catch those registers - * being clobbered if 
we take a MCE or SRESET), so those are done - * later. - */ - - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 0); -} - -static void store_spr_state(struct kvm_vcpu *vcpu) -{ - vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - vcpu->arch.tid = mfspr(SPRN_TIDR); - vcpu->arch.iamr = mfspr(SPRN_IAMR); - vcpu->arch.amr = mfspr(SPRN_AMR); - vcpu->arch.uamor = mfspr(SPRN_UAMOR); - vcpu->arch.fscr = mfspr(SPRN_FSCR); - vcpu->arch.dscr = mfspr(SPRN_DSCR); - vcpu->arch.pspb = mfspr(SPRN_PSPB); - - vcpu->arch.ctrl = mfspr(SPRN_CTRLF); -} - -/* Returns true if current MSR and/or guest MSR may have changed */ -static bool load_vcpu_state(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - bool ret = false; - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; - } - - load_spr_state(vcpu, host_os_sprs); - - load_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - load_vr_state(&vcpu->arch.vr); -#endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); - - return ret; -} - -static void store_vcpu_state(struct kvm_vcpu *vcpu) -{ - store_spr_state(vcpu); - - store_fp_state(&vcpu->arch.fp); -#ifdef CONFIG_ALTIVEC - store_vr_state(&vcpu->arch.vr); -#endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); - - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); -} - -static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) -{ - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); - host_os_sprs->iamr = mfspr(SPRN_IAMR); - host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); -} - -/* vcpu guest regs must already be saved */ -static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, - struct p9_host_os_sprs *host_os_sprs) -{ - mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); - if (host_os_sprs->iamr != vcpu->arch.iamr) - mtspr(SPRN_IAMR, host_os_sprs->iamr); - if (vcpu->arch.uamor != 0) - mtspr(SPRN_UAMOR, 0); - if (host_os_sprs->amr != vcpu->arch.amr) - mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); - if (vcpu->arch.pspb != 0) - mtspr(SPRN_PSPB, 0); - - /* Save guest CTRL register, set runlatch to 1 */ - if (!(vcpu->arch.ctrl & 1)) - mtspr(SPRN_CTRLT, 1); -} - static inline bool hcall_is_xics(unsigned long req) { return req == H_EOI || req == H_CPPR || req == H_IPI || diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h new file mode 100644 index 000000000000..d3ea4c89f4fa --- /dev/null +++ b/arch/powerpc/kvm/book3s_hv.h @@ -0,0 +1,44 @@ + +/* + * Privileged (non-hypervisor) host registers to save. 
+ */ +struct p9_host_os_sprs { + unsigned long dscr; + unsigned long tidr; + unsigned long iamr; + unsigned long amr; + unsigned long fscr; + + unsigned int pmc1; + unsigned int pmc2; + unsigned int pmc3; + unsigned int pmc4; + unsigned int pmc5; + unsigned int pmc6; + unsigned long mmcr0; + unsigned long mmcr1; + unsigned long mmcr2; + unsigned long mmcr3; + unsigned long mmcra; + unsigned long siar; + unsigned long sier1; + unsigned long sier2; + unsigned long sier3; + unsigned long sdar; +}; + +static inline bool nesting_enabled(struct kvm *kvm) +{ + return kvm->arch.nested_enable && kvm_is_radix(kvm); +} + +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void store_vcpu_state(struct kvm_vcpu *vcpu); +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs); +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index bd0021cd3a67..2fac612356a0 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -4,8 +4,361 @@ #include #include #include +#include #include +#include "book3s_hv.h" + +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra) +{ + if (!(mmcr0 & MMCR0_FC)) + goto do_freeze; + if (mmcra & MMCRA_SAMPLE_ENABLE) + goto do_freeze; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + if (!(mmcr0 & MMCR0_PMCCEXT)) + goto do_freeze; + if (!(mmcra & MMCRA_BHRB_DISABLE)) + goto do_freeze; + } + return; + +do_freeze: + mmcr0 = MMCR0_FC; + mmcra = 0; + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mmcr0 |= MMCR0_PMCCEXT; + mmcra = MMCRA_BHRB_DISABLE; + } + + mtspr(SPRN_MMCR0, mmcr0); + mtspr(SPRN_MMCRA, mmcra); + isync(); +} + +void switch_pmu_to_guest(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int load_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + load_pmu = lp->pmcregs_in_use; + + /* Save host */ + if (ppc_get_pmu_inuse()) { + /* + * It might be better to put PMU handling (at least for the + * host) in the perf subsystem because it knows more about what + * is being used. + */ + + /* POWER9, POWER10 do not implement HPMC or SPMC */ + + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0); + host_os_sprs->mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra); + + host_os_sprs->pmc1 = mfspr(SPRN_PMC1); + host_os_sprs->pmc2 = mfspr(SPRN_PMC2); + host_os_sprs->pmc3 = mfspr(SPRN_PMC3); + host_os_sprs->pmc4 = mfspr(SPRN_PMC4); + host_os_sprs->pmc5 = mfspr(SPRN_PMC5); + host_os_sprs->pmc6 = mfspr(SPRN_PMC6); + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1); + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2); + host_os_sprs->sdar = mfspr(SPRN_SDAR); + host_os_sprs->siar = mfspr(SPRN_SIAR); + host_os_sprs->sier1 = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3); + host_os_sprs->sier2 = mfspr(SPRN_SIER2); + host_os_sprs->sier3 = mfspr(SPRN_SIER3); + } + } + +#ifdef CONFIG_PPC_PSERIES + /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */ + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = load_pmu; + barrier(); + } +#endif + + /* + * Load guest. 
If the VPA said the PMCs are not in use but the guest + * tried to access them anyway, HFSCR[PM] will be set by the HFAC + * fault so we can make forward progress. + */ + if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) { + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]); + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]); + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]); + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]); + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]); + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]); + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]); + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]); + mtspr(SPRN_SDAR, vcpu->arch.sdar); + mtspr(SPRN_SIAR, vcpu->arch.siar); + mtspr(SPRN_SIER, vcpu->arch.sier[0]); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]); + mtspr(SPRN_SIER2, vcpu->arch.sier[1]); + mtspr(SPRN_SIER3, vcpu->arch.sier[2]); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, vcpu->arch.mmcra); + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]); + /* No isync necessary because we're starting counters */ + + if (!vcpu->arch.nested && + (vcpu->arch.hfscr_permitted & HFSCR_PM)) + vcpu->arch.hfscr |= HFSCR_PM; + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_guest); + +void switch_pmu_to_host(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + struct lppaca *lp; + int save_pmu = 1; + + lp = vcpu->arch.vpa.pinned_addr; + if (lp) + save_pmu = lp->pmcregs_in_use; + if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND)) { + /* + * Save pmu if this guest is capable of running nested guests. + * This is option is for old L1s that do not set their + * lppaca->pmcregs_in_use properly when entering their L2. + */ + save_pmu |= nesting_enabled(vcpu->kvm); + } + + if (save_pmu) { + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0); + vcpu->arch.mmcra = mfspr(SPRN_MMCRA); + + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra); + + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1); + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2); + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3); + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4); + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5); + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6); + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1); + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2); + vcpu->arch.sdar = mfspr(SPRN_SDAR); + vcpu->arch.siar = mfspr(SPRN_SIAR); + vcpu->arch.sier[0] = mfspr(SPRN_SIER); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3); + vcpu->arch.sier[1] = mfspr(SPRN_SIER2); + vcpu->arch.sier[2] = mfspr(SPRN_SIER3); + } + + } else if (vcpu->arch.hfscr & HFSCR_PM) { + /* + * The guest accessed PMC SPRs without specifying they should + * be preserved, or it cleared pmcregs_in_use after the last + * access. Just ensure they are frozen. + */ + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA)); + + /* + * Demand-fault PMU register access in the guest. + * + * This is used to grab the guest's VPA pmcregs_in_use value + * and reflect it into the host's VPA in the case of a nested + * hypervisor. + * + * It also avoids having to zero-out SPRs after each guest + * exit to avoid side-channels when. + * + * This is cleared here when we exit the guest, so later HFSCR + * interrupt handling can add it back to run the guest with + * PM enabled next time. 
+ */ + if (!vcpu->arch.nested) + vcpu->arch.hfscr &= ~HFSCR_PM; + } /* otherwise the PMU should still be frozen */ + +#ifdef CONFIG_PPC_PSERIES + if (kvmhv_on_pseries()) { + barrier(); + get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); + barrier(); + } +#endif + + if (ppc_get_pmu_inuse()) { + mtspr(SPRN_PMC1, host_os_sprs->pmc1); + mtspr(SPRN_PMC2, host_os_sprs->pmc2); + mtspr(SPRN_PMC3, host_os_sprs->pmc3); + mtspr(SPRN_PMC4, host_os_sprs->pmc4); + mtspr(SPRN_PMC5, host_os_sprs->pmc5); + mtspr(SPRN_PMC6, host_os_sprs->pmc6); + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1); + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2); + mtspr(SPRN_SDAR, host_os_sprs->sdar); + mtspr(SPRN_SIAR, host_os_sprs->siar); + mtspr(SPRN_SIER, host_os_sprs->sier1); + + if (cpu_has_feature(CPU_FTR_ARCH_31)) { + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3); + mtspr(SPRN_SIER2, host_os_sprs->sier2); + mtspr(SPRN_SIER3, host_os_sprs->sier3); + } + + /* Set MMCRA then MMCR0 last */ + mtspr(SPRN_MMCRA, host_os_sprs->mmcra); + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0); + isync(); + } +} +EXPORT_SYMBOL_GPL(switch_pmu_to_host); + +static void load_spr_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_TAR, vcpu->arch.tar); + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, vcpu->arch.tid); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, vcpu->arch.iamr); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, vcpu->arch.amr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, vcpu->arch.uamor); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, vcpu->arch.fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, vcpu->arch.dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, vcpu->arch.pspb); + + /* + * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI] + * clear (or hstate set appropriately to catch those registers + * being clobbered if we take a MCE or SRESET), so those are done + * later. 
+ */ + + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 0); +} + +static void store_spr_state(struct kvm_vcpu *vcpu) +{ + vcpu->arch.tar = mfspr(SPRN_TAR); + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + vcpu->arch.tid = mfspr(SPRN_TIDR); + vcpu->arch.iamr = mfspr(SPRN_IAMR); + vcpu->arch.amr = mfspr(SPRN_AMR); + vcpu->arch.uamor = mfspr(SPRN_UAMOR); + vcpu->arch.fscr = mfspr(SPRN_FSCR); + vcpu->arch.dscr = mfspr(SPRN_DSCR); + vcpu->arch.pspb = mfspr(SPRN_PSPB); + + vcpu->arch.ctrl = mfspr(SPRN_CTRLF); +} + +/* Returns true if current MSR and/or guest MSR may have changed */ +bool load_vcpu_state(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + bool ret = false; + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ret = true; + } + + load_spr_state(vcpu, host_os_sprs); + + load_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + load_vr_state(&vcpu->arch.vr); +#endif + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); + + return ret; +} +EXPORT_SYMBOL_GPL(load_vcpu_state); + +void store_vcpu_state(struct kvm_vcpu *vcpu) +{ + store_spr_state(vcpu); + + store_fp_state(&vcpu->arch.fp); +#ifdef CONFIG_ALTIVEC + store_vr_state(&vcpu->arch.vr); +#endif + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); + + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); +} +EXPORT_SYMBOL_GPL(store_vcpu_state); + +void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) +{ + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + host_os_sprs->tidr = mfspr(SPRN_TIDR); + host_os_sprs->iamr = mfspr(SPRN_IAMR); + host_os_sprs->amr = mfspr(SPRN_AMR); + host_os_sprs->fscr = mfspr(SPRN_FSCR); + host_os_sprs->dscr = mfspr(SPRN_DSCR); +} +EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); + +/* vcpu guest regs must already be saved */ +void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, + struct p9_host_os_sprs *host_os_sprs) +{ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); + + if (!cpu_has_feature(CPU_FTR_ARCH_31)) + mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (host_os_sprs->iamr != vcpu->arch.iamr) + mtspr(SPRN_IAMR, host_os_sprs->iamr); + if (vcpu->arch.uamor != 0) + mtspr(SPRN_UAMOR, 0); + if (host_os_sprs->amr != vcpu->arch.amr) + mtspr(SPRN_AMR, host_os_sprs->amr); + if (host_os_sprs->fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, host_os_sprs->fscr); + if (host_os_sprs->dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (vcpu->arch.pspb != 0) + mtspr(SPRN_PSPB, 0); + + /* Save guest CTRL register, set runlatch to 1 */ + if (!(vcpu->arch.ctrl & 1)) + mtspr(SPRN_CTRLT, 1); +} +EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); + #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next) { From patchwork Wed Aug 11 16:01:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515914 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 38/60] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function Date: Thu, 12 Aug 2021 02:01:12 +1000 Message-Id: <20210811160134.904987-39-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move the part of the guest entry which is specific to nested HV into its own function. This is just refactoring.
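As a rough sketch (simplified from the diff below, with locals, VPA handling and error paths omitted), the split leaves the top-level entry dispatching between the two paths:

	static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
					unsigned long lpcr, u64 *tb)
	{
		int trap;

		if (kvmhv_on_pseries())
			/* nested HV: pack hvregs and H_ENTER_NESTED to the real hypervisor */
			trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
		else
			/* bare metal: switch SPRs/MMU locally and enter the guest directly */
			trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);

		return trap;
	}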
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 125 +++++++++++++++++++---------------- 1 file changed, 67 insertions(+), 58 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2731d3d39000..eaa4628b9b2a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3812,6 +3812,72 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu) } } +/* call our hypervisor to load up HV regs and go */ +static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + unsigned long host_psscr; + struct hv_guest_state hvregs; + int trap; + s64 dec; + + /* + * We need to save and restore the guest visible part of the + * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor + * doesn't do this for us. Note only required if pseries since + * this is done in kvmhv_vcpu_entry_p9() below otherwise. + */ + host_psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); + hvregs.lpcr = lpcr; + vcpu->arch.regs.msr = vcpu->arch.shregs.msr; + hvregs.version = HV_GUEST_STATE_VERSION; + if (vcpu->arch.nested) { + hvregs.lpid = vcpu->arch.nested->shadow_lpid; + hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; + } else { + hvregs.lpid = vcpu->kvm->arch.lpid; + hvregs.vcpu_token = vcpu->vcpu_id; + } + hvregs.hdec_expiry = time_limit; + + /* + * When setting DEC, we must always deal with irq_work_raise + * via NMI vs setting DEC. The problem occurs right as we + * switch into guest mode if a NMI hits and sets pending work + * and sets DEC, then that will apply to the guest and not + * bring us back to the host. + * + * irq_work_raise could check a flag (or possibly LPCR[HDICE] + * for example) and set HDEC to 1? That wouldn't solve the + * nested hv case which needs to abort the hcall or zero the + * time limit. + * + * XXX: Another day's problem. + */ + mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); + + mtspr(SPRN_DAR, vcpu->arch.shregs.dar); + mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), + __pa(&vcpu->arch.regs)); + kvmhv_restore_hv_return_state(vcpu, &hvregs); + vcpu->arch.shregs.msr = vcpu->arch.regs.msr; + vcpu->arch.shregs.dar = mfspr(SPRN_DAR); + vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); + mtspr(SPRN_PSSCR_PR, host_psscr); + + dec = mfspr(SPRN_DEC); + if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ + dec = (s32) dec; + *tb = mftb(); + vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + + return trap; +} + /* * Guest entry for POWER9 and later CPUs. */ @@ -3820,7 +3886,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, { struct kvmppc_vcore *vc = vcpu->arch.vcore; struct p9_host_os_sprs host_os_sprs; - s64 dec; u64 next_timer; unsigned long msr; int trap; @@ -3873,63 +3938,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, switch_pmu_to_guest(vcpu, &host_os_sprs); if (kvmhv_on_pseries()) { - /* - * We need to save and restore the guest visible part of the - * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor - * doesn't do this for us. Note only required if pseries since - * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
- */ - unsigned long host_psscr; - /* call our hypervisor to load up HV regs and go */ - struct hv_guest_state hvregs; - - host_psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); - kvmhv_save_hv_regs(vcpu, &hvregs); - hvregs.lpcr = lpcr; - vcpu->arch.regs.msr = vcpu->arch.shregs.msr; - hvregs.version = HV_GUEST_STATE_VERSION; - if (vcpu->arch.nested) { - hvregs.lpid = vcpu->arch.nested->shadow_lpid; - hvregs.vcpu_token = vcpu->arch.nested_vcpu_id; - } else { - hvregs.lpid = vcpu->kvm->arch.lpid; - hvregs.vcpu_token = vcpu->vcpu_id; - } - hvregs.hdec_expiry = time_limit; - - /* - * When setting DEC, we must always deal with irq_work_raise - * via NMI vs setting DEC. The problem occurs right as we - * switch into guest mode if a NMI hits and sets pending work - * and sets DEC, then that will apply to the guest and not - * bring us back to the host. - * - * irq_work_raise could check a flag (or possibly LPCR[HDICE] - * for example) and set HDEC to 1? That wouldn't solve the - * nested hv case which needs to abort the hcall or zero the - * time limit. - * - * XXX: Another day's problem. - */ - mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb); - - mtspr(SPRN_DAR, vcpu->arch.shregs.dar); - mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); - trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), - __pa(&vcpu->arch.regs)); - kvmhv_restore_hv_return_state(vcpu, &hvregs); - vcpu->arch.shregs.msr = vcpu->arch.regs.msr; - vcpu->arch.shregs.dar = mfspr(SPRN_DAR); - vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); - vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); - - dec = mfspr(SPRN_DEC); - if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ - dec = (s32) dec; - *tb = mftb(); - vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); /* H_CEDE has to be handled now, not later */ if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && From patchwork Wed Aug 11 16:01:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515915 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=mZuwTfz8; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3h3Zkrz9sXV for ; Thu, 12 Aug 2021 02:03:28 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233507AbhHKQDv (ORCPT ); Wed, 11 Aug 2021 12:03:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52114 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQDv (ORCPT ); Wed, 11 Aug 2021 12:03:51 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9A6BDC061765 for ; Wed, 11 Aug 2021 09:03:27 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id e19so3263337pla.10 for ; Wed, 11 Aug 2021 09:03:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=X3uVGRLZbiMQVJlywAixlblT51s/vzLhU6aD2+low3E=; b=mZuwTfz8/hRHf0XI/RQly22W1uP2u0bqD/lhOvxPspHvn0t6ONfGfMkoqwW8ujk1Eu 5kEQDeXGqwjw1Wt0Vij0GQmNszPdZq1hSUojOxBn1tjVgFvRNypEifz+ygX9dtc75tcZ f0tVVF3Ih3NlWKEDP3yZJ4ADSAQGGGydsaIGzf27AIBSAk1Lln+ekBRkIeDu2e66mEN7 3Q1ykX26nFj3fVsqbrjl5JDJDLVZJkqBU2O1reUh380bICNp3JM/3RhqeTKtzfKBFKED 96C+QQiY6/bzFDiWt4dXsjZ5FHEs2FKiiNzeWQpD0aX7ZzAQcfEAyjK5enFVednF7uf2 OL3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=X3uVGRLZbiMQVJlywAixlblT51s/vzLhU6aD2+low3E=; b=TiwJ+uF+wVqWo7CFD/Uaun/xygBJjqdBa0TuEZkf4lrRq27j/VUhpI8YKKLLTIMH3N EqYUFRyWcJYrWmNmiNoEG9QnWgYrVPFoXRWI7Nln0+mcsshXpI4JdtGC7CZMHhGdji/E g9XTKaZdCjDL/vcwph60T9l3+DRCcwlC/1vYeB9fyehsMg7k0OuEf9BIuDefh6RCOs4R RoO8fJX6YH8GzeApi03IKToPfQ7u94jQhrca4zpMZ3b84M7p1ipo4stpPcPQ8/XTC3qp a0aDJJvZcfPP81RInVWh59tgnpphUDQZXYME7AVwdLlou4PjrC4zScf5e9nOouEigGJZ GejQ== X-Gm-Message-State: AOAM532Nu8FrKgxQrKNaTVWTQUlnAPgz/FRGcg/Yperk51DXyvdJ4eS7 PHF7uDiLfk4c8NQghuzVPzov/pA/AlQ= X-Google-Smtp-Source: ABdhPJzPmqsGohtVRFIglLZ8yytjxI3Q/jCwUP+PqQTbED0TesEcpEP+o/3yqBaw92sRYjTfSe1/tw== X-Received: by 2002:a05:6a00:2346:b029:3c6:9ff4:c5e6 with SMTP id j6-20020a056a002346b02903c69ff4c5e6mr34491410pfj.23.1628697807027; Wed, 11 Aug 2021 09:03:27 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:26 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 39/60] KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low level entry Date: Thu, 12 Aug 2021 02:01:13 +1000 Message-Id: <20210811160134.904987-40-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Move register saving and loading from kvmhv_p9_guest_entry() into the HV and nested entry handlers. Accesses are scheduled to reduce mtSPR / mfSPR interleaving which reduces SPR scoreboard stalls. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 79 ++++++++++------------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 96 ++++++++++++++++++++------- 2 files changed, 109 insertions(+), 66 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index eaa4628b9b2a..26872a4993fd 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3817,9 +3817,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long host_psscr; + unsigned long msr; struct hv_guest_state hvregs; - int trap; + struct p9_host_os_sprs host_os_sprs; s64 dec; + int trap; + + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); /* * We need to save and restore the guest visible part of the @@ -3828,6 +3834,27 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns * this is done in kvmhv_vcpu_entry_p9() below otherwise. 
*/ host_psscr = mfspr(SPRN_PSSCR_PR); + + hard_irq_disable(); + if (lazy_irq_pending()) + return 0; + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* TM restore can update msr */ + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; @@ -3869,12 +3896,20 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); mtspr(SPRN_PSSCR_PR, host_psscr); + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; *tb = mftb(); vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset); + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + switch_pmu_to_host(vcpu, &host_os_sprs); + return trap; } @@ -3885,9 +3920,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct kvmppc_vcore *vc = vcpu->arch.vcore; - struct p9_host_os_sprs host_os_sprs; u64 next_timer; - unsigned long msr; int trap; next_timer = timer_get_next_tb(); @@ -3898,33 +3931,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - save_p9_host_os_sprs(&host_os_sprs); - - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. 
- */ - hard_irq_disable(); - if (lazy_irq_pending()) - return 0; - - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - kvmppc_subcore_enter_guest(); vc->entry_exit_map = 1; @@ -3932,11 +3938,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) - msr = mfmsr(); /* MSR may have been updated */ - - switch_pmu_to_guest(vcpu, &host_os_sprs); - if (kvmhv_on_pseries()) { trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb); @@ -3979,16 +3980,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.slb_max = 0; } - switch_pmu_to_host(vcpu, &host_os_sprs); - - store_vcpu_state(vcpu); - vcpu_vpa_increment_dispatch(vcpu); - timer_rearm_host_dec(*tb); - - restore_p9_host_os_sprs(vcpu, &host_os_sprs); - vc->entry_exit_map = 0x101; vc->in_guest = 0; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 2fac612356a0..9ea70736f3d7 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -538,6 +538,7 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu) int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { + struct p9_host_os_sprs host_os_sprs; struct kvm *kvm = vcpu->kvm; struct kvm_nested_guest *nested = vcpu->arch.nested; struct kvmppc_vcore *vc = vcpu->arch.vcore; @@ -567,9 +568,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; - /* Could avoid mfmsr by passing around, but probably no big deal */ - msr = mfmsr(); - host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_dawr0 = mfspr(SPRN_DAWR0); @@ -584,6 +582,41 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); + switch_pmu_to_guest(vcpu, &host_os_sprs); + + save_p9_host_os_sprs(&host_os_sprs); + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. + */ + hard_irq_disable(); + if (lazy_irq_pending()) { + trap = 0; + goto out; + } + + /* MSR bits may have been cleared by context switch */ + msr = 0; + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr |= MSR_VSX; + if (cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + msr |= MSR_TM; + msr = msr_check_and_set(msr); + /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ + + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) + msr = mfmsr(); /* MSR may have been updated */ + if (vc->tb_offset) { u64 new_tb = *tb + vc->tb_offset; mtspr(SPRN_TBU40, new_tb); @@ -642,6 +675,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2); mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3); + /* + * It might be preferable to load_vcpu_state here, in order to get the + * GPR/FP register loads executing in parallel with the previous mtSPR + * instructions, but for now that can't be done because the TM handling + * in load_vcpu_state can change some SPRs and vcpu state (nip, msr). + * But TM could be split out if this would be a significant benefit. + */ + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; /* @@ -819,6 +860,20 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * If we are in real mode, only switch MMU on after the MMU is + * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. + */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -851,6 +906,19 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_DAWRX1, host_dawrx1); } + mtspr(SPRN_DPDES, 0); + if (vc->pcr) + mtspr(SPRN_PCR, PCR_MASK); + + /* HDEC must be at least as large as DEC, so decrementer_max fits */ + mtspr(SPRN_HDEC, decrementer_max); + + timer_rearm_host_dec(*tb); + + restore_p9_host_os_sprs(vcpu, &host_os_sprs); + + local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + if (kvm_is_radix(kvm)) { /* * Since this is radix, do a eieio; tlbsync; ptesync sequence @@ -867,26 +935,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (cpu_has_feature(CPU_FTR_ARCH_31)) asm volatile(PPC_CP_ABORT); - mtspr(SPRN_DPDES, 0); - if (vc->pcr) - mtspr(SPRN_PCR, PCR_MASK); - - /* HDEC must be at least as large as DEC, so decrementer_max fits */ - mtspr(SPRN_HDEC, decrementer_max); - - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, host_pidr); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. 
- */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - - __mtmsrd(msr, 0); +out: + switch_pmu_to_host(vcpu, &host_os_sprs); end_timing(vcpu); From patchwork Wed Aug 11 16:01:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515916
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 40/60] KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit Date: Thu, 12 Aug 2021 02:01:14 +1000 Message-Id: <20210811160134.904987-41-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org If TM is not active, only TM register state needs to be saved and restored, avoiding several mfmsr/mtmsrd instructions and improving performance. Signed-off-by: Nicholas Piggin Reported-by: kernel test robot --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 9ea70736f3d7..e52d8b040970 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -289,8 +289,15 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); - ret = true; + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_restore_tm_hv(vcpu, guest_msr, true); + ret = true; + } else { + mtspr(SPRN_TEXASR, vcpu->arch.texasr); + mtspr(SPRN_TFHAR, vcpu->arch.tfhar); + mtspr(SPRN_TFIAR, vcpu->arch.tfiar); + } } load_spr_state(vcpu, host_os_sprs); @@ -316,8 +323,16 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + unsigned long guest_msr = vcpu->arch.shregs.msr; + if (MSR_TM_ACTIVE(guest_msr)) { + kvmppc_save_tm_hv(vcpu, guest_msr, true); + } else { + vcpu->arch.texasr = mfspr(SPRN_TEXASR); + vcpu->arch.tfhar = mfspr(SPRN_TFHAR); + vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + } + } } EXPORT_SYMBOL_GPL(store_vcpu_state); From patchwork Wed Aug 11 16:01:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515917 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=PSPAlFat; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3n2jyNz9t0p for ; Thu, 12 Aug 2021 02:03:33 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233523AbhHKQD4 (ORCPT ); Wed, 11 Aug 2021 12:03:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233442AbhHKQD4 (ORCPT ); Wed, 11 Aug 2021 12:03:56 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 41/60] KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible Date: Thu, 12 Aug 2021 02:01:15 +1000 Message-Id: <20210811160134.904987-42-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This moves the PMU switch to guest as late as possible in entry, and the switch back to host as early as possible at exit. This gives the host as much perf coverage of the KVM entry/exit code as possible. This is slightly suboptimal from an SPR scheduling point of view when the PMU is enabled, but when perf is disabled there is no real difference.
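Schematically, the ordering this aims for on the nested (pseries) path is roughly the following (a simplified sketch using the helper names from this series; timebase, VPA and interrupt handling are elided):

	save_p9_host_os_sprs(&host_os_sprs);
	load_vcpu_state(vcpu, &host_os_sprs);
	/* host perf keeps counting up to this point ... */
	switch_pmu_to_guest(vcpu, &host_os_sprs);
	trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs),
				  __pa(&vcpu->arch.regs));
	kvmhv_restore_hv_return_state(vcpu, &hvregs);
	switch_pmu_to_host(vcpu, &host_os_sprs);
	/* ... and resumes counting for the rest of the exit path */
	store_vcpu_state(vcpu);
	restore_p9_host_os_sprs(vcpu, &host_os_sprs);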
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 6 ++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 26872a4993fd..c76deb3de3e9 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3823,8 +3823,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -3887,9 +3885,11 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns mtspr(SPRN_DAR, vcpu->arch.shregs.dar); mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr); + switch_pmu_to_guest(vcpu, &host_os_sprs); trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs), __pa(&vcpu->arch.regs)); kvmhv_restore_hv_return_state(vcpu, &hvregs); + switch_pmu_to_host(vcpu, &host_os_sprs); vcpu->arch.shregs.msr = vcpu->arch.regs.msr; vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); @@ -3908,8 +3908,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns restore_p9_host_os_sprs(vcpu, &host_os_sprs); - switch_pmu_to_host(vcpu, &host_os_sprs); - return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index e52d8b040970..48cc94f3d642 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -597,8 +597,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR); - switch_pmu_to_guest(vcpu, &host_os_sprs); - save_p9_host_os_sprs(&host_os_sprs); /* @@ -740,7 +738,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc accumulate_time(vcpu, &vcpu->arch.guest_time); + switch_pmu_to_guest(vcpu, &host_os_sprs); kvmppc_p9_enter_guest(vcpu); + switch_pmu_to_host(vcpu, &host_os_sprs); accumulate_time(vcpu, &vcpu->arch.rm_intr); @@ -951,8 +951,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc asm volatile(PPC_CP_ABORT); out: - switch_pmu_to_host(vcpu, &host_os_sprs); - end_timing(vcpu); return trap; From patchwork Wed Aug 11 16:01:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515919 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=dgY37+ZO; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3t0LpPz9sjJ for ; Thu, 12 Aug 2021 02:03:38 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233276AbhHKQEB (ORCPT ); Wed, 11 Aug 2021 12:04:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEA (ORCPT ); Wed, 11 Aug 2021 12:04:00 -0400 Received: from 
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 42/60] KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors that require it Date: Thu, 12 Aug 2021 02:01:16 +1000 Message-Id: <20210811160134.904987-43-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use CPU_FTR_P9_RADIX_PREFETCH_BUG to apply the workaround, to test for DD2.1 and below processors. This saves a mtSPR in guest entry.
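In sketch form, the entry-side change (mirroring the hunk below) is just a feature-gated mtSPR, with the HDSI exit path applying the same gate before treating the canary specially:

	/*
	 * Only DD2.1 and earlier parts have the radix prefetch bug, so only
	 * they need the HDSISR canary loaded before entering the guest.
	 */
	if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG))
		mtspr(SPRN_HDSISR, HDSISR_CANARY);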
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 3 ++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c76deb3de3e9..8ca081a32d91 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1593,7 +1593,8 @@ XXX benchmark guest exits unsigned long vsid; long err; - if (vcpu->arch.fault_dsisr == HDSISR_CANARY) { + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) && + unlikely(vcpu->arch.fault_dsisr == HDSISR_CANARY)) { r = RESUME_GUEST; /* Just retry if it's the canary */ break; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 48cc94f3d642..3ec0d825b7d4 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -679,9 +679,11 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc * HDSI which should correctly update the HDSISR the second time HDSI * entry. * - * Just do this on all p9 processors for now. + * The "radix prefetch bug" test can be used to test for this bug, as + * it also exists fo DD2.1 and below. */ - mtspr(SPRN_HDSISR, HDSISR_CANARY); + if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) + mtspr(SPRN_HDSISR, HDSISR_CANARY); mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0); mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1); From patchwork Wed Aug 11 16:01:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515920 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XduEEjrt; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3t6k8Xz9sxS for ; Thu, 12 Aug 2021 02:03:38 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233442AbhHKQEC (ORCPT ); Wed, 11 Aug 2021 12:04:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEB (ORCPT ); Wed, 11 Aug 2021 12:04:01 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEB67C061765 for ; Wed, 11 Aug 2021 09:03:37 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id n12so2622875plf.4 for ; Wed, 11 Aug 2021 09:03:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xNyhyn9o9sjROWgk4AR4JdF/Vj/eQyiarg1I3tTsDvI=; b=XduEEjrt6nwrkp0eFqRiPW4ekoa2vSMpNAsWm5LCt4orVexDdZXGI7jL84cdeZ/OOR 39ZCZwVt6xRBvMMjbubBCI6D3uq1Jy+DEWZS6xArSa/E/Vr1cFSC5CWdFzkJsGcFgVKb vA1/Z5lUb8N/boMUOzO3PIo8NKG5JAmQf6BtJqdIYDrp+MV9C/UxhqNzMWV0ndk0y2jc WW2cVb8yU7+OioIh4Ok3+Le+kda3Uh9UWQcYGH79L1nMmPUawHiN14zq4VyXd6KR8nZk ogSLqdrn077FoHrjQlb6+Er5RR5KPSqFXbY7vqfkg3OzMB4VymPTGLgPJ+HTeB+jeO65 2/Pw== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xNyhyn9o9sjROWgk4AR4JdF/Vj/eQyiarg1I3tTsDvI=; b=pPpSi12Lh0eq4v8zJcL+NIr8GjzE3Uu8U3xq55gcjJe0KwJ43DIwjr2uDu6uPupjD8 3JOTXxNNYXg6BAM1FmXOqN5+z/vKiLsvgft0GNu1rne4fujwYDlCe9jGiQyD413mgLEj wkROhQeI//OET0jL32ynHfZrzFv1EMzgMXbJGORv4MkT73Y3O0VzKbdYihZnbERTHcKt UFs/nvK7zSw08JHuXo2bpdzk3xadNJ23CLikWily1jBk9jUys8Vqbkmkpv1+BzY5K4Ni 5lnKao7uZjm8wfmyLwOey1WaNmQXBJ5cxagjfjDTh7twDINoXY/NaUQzt+xiGXIn71CR sodA== X-Gm-Message-State: AOAM532KHvGDATCsD66N543G41xTQCfinvMnf7fv9q/I0geitZFaH54U 19ymUeX0ARlcoRFp6COJMHXDr4xzQOg= X-Google-Smtp-Source: ABdhPJzzin++SEaqNuZrxWZr9qz4anW6MMz1Dm01T/thKRmtUWTLujv0LGNLYCMGuRPw0rtE4r29Mw== X-Received: by 2002:a17:90a:c006:: with SMTP id p6mr6425812pjt.144.1628697817159; Wed, 11 Aug 2021 09:03:37 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:36 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 43/60] KVM: PPC: Book3S HV P9: More SPR speed improvements Date: Thu, 12 Aug 2021 02:01:17 +1000 Message-Id: <20210811160134.904987-44-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This avoids more scoreboard stalls and reduces mtSPRs. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 73 ++++++++++++++++----------- 1 file changed, 43 insertions(+), 30 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 3ec0d825b7d4..3a7e242887aa 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -641,24 +641,29 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = vc->tb_offset; } - if (vc->pcr) - mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - mtspr(SPRN_DPDES, vc->dpdes); mtspr(SPRN_VTB, vc->vtb); - mtspr(SPRN_PURR, vcpu->arch.purr); mtspr(SPRN_SPURR, vcpu->arch.spurr); + if (vc->pcr) + mtspr(SPRN_PCR, vc->pcr | PCR_MASK); + if (vc->dpdes) + mtspr(SPRN_DPDES, vc->dpdes); + if (dawr_enabled()) { - mtspr(SPRN_DAWR0, vcpu->arch.dawr0); - mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, vcpu->arch.dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, vcpu->arch.dawr1); - mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, vcpu->arch.dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1); } } - mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_IC, vcpu->arch.ic); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, vcpu->arch.ciabr); mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); @@ -877,20 +882,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->dpdes = mfspr(SPRN_DPDES); vc->vtb = mfspr(SPRN_VTB); - save_clear_guest_mmu(kvm, vcpu); - switch_mmu_to_host(kvm, 
host_pidr); - - /* - * If we are in real mode, only switch MMU on after the MMU is - * switched to host, to avoid the P9_RADIX_PREFETCH_BUG. - */ - if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && - vcpu->arch.shregs.msr & MSR_TS_MASK) - msr |= MSR_TS_S; - __mtmsrd(msr, 0); - - store_vcpu_state(vcpu); - dec = mfspr(SPRN_DEC); if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */ dec = (s32) dec; @@ -908,6 +899,22 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vc->tb_offset_applied = 0; } + save_clear_guest_mmu(kvm, vcpu); + switch_mmu_to_host(kvm, host_pidr); + + /* + * Enable MSR here in order to have facilities enabled to save + * guest registers. This enables MMU (if we were in realmode), so + * only switch MMU on after the MMU is switched to host, to avoid + * the P9_RADIX_PREFETCH_BUG or hash guest context. + */ + if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && + vcpu->arch.shregs.msr & MSR_TS_MASK) + msr |= MSR_TS_S; + __mtmsrd(msr, 0); + + store_vcpu_state(vcpu); + mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); @@ -915,15 +922,21 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PSSCR, host_psscr | (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); mtspr(SPRN_HFSCR, host_hfscr); - mtspr(SPRN_CIABR, host_ciabr); - mtspr(SPRN_DAWR0, host_dawr0); - mtspr(SPRN_DAWRX0, host_dawrx0); + if (vcpu->arch.ciabr != host_ciabr) + mtspr(SPRN_CIABR, host_ciabr); + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); if (cpu_has_feature(CPU_FTR_DAWR1)) { - mtspr(SPRN_DAWR1, host_dawr1); - mtspr(SPRN_DAWRX1, host_dawrx1); + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); } - mtspr(SPRN_DPDES, 0); + if (vc->dpdes) + mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Wed Aug 11 16:01:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515921 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=k9M8CEXV; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF3x456Nz9sxS for ; Thu, 12 Aug 2021 02:03:41 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233656AbhHKQEE (ORCPT ); Wed, 11 Aug 2021 12:04:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEE (ORCPT ); Wed, 11 Aug 2021 12:04:04 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BFE68C061765 for ; Wed, 11 Aug 2021 09:03:40 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id n12so2623091plf.4 for ; Wed, 11 Aug 2021 
09:03:40 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 44/60] KVM: PPC: Book3S HV P9: Demand fault EBB facility registers Date: Thu, 12 Aug 2021 02:01:18 +1000 Message-Id: <20210811160134.904987-45-npiggin@gmail.com> In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for EBB, with a hysteresis counter similar to the load_fp etc. counters used in context switching, which implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities.
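As a rough illustration of the hysteresis scheme described above (a sketch only, not the kernel code: struct fac_state, FAC_BIT, handle_fac_unavailable() and account_exit() are invented names), the facility bit is granted again on a facility-unavailable fault, and dropped once a wrapping 8-bit exit counter overflows, so an idle facility's registers stop being switched:

/*
 * Illustrative sketch of HFSCR-style demand faulting with a wrapping
 * hysteresis counter. Not the kernel implementation; all names here
 * are invented for the example.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define FAC_BIT (1u << 3) /* stand-in for one HFSCR facility enable bit */

struct fac_state {
	uint32_t hfscr;           /* facility bits currently granted to the guest */
	uint32_t hfscr_permitted; /* facility bits the host permits at all */
	uint8_t  hysteresis;      /* counts exits while the facility is enabled */
};

/* Facility-unavailable interrupt: grant the facility if it is permitted. */
static bool handle_fac_unavailable(struct fac_state *s)
{
	if (!(s->hfscr_permitted & FAC_BIT))
		return false;     /* genuinely disallowed: reflect the fault */
	s->hfscr |= FAC_BIT;      /* from now on its registers are switched */
	return true;
}

/* Guest exit path: after ~256 exits with the facility on, turn it off again. */
static void account_exit(struct fac_state *s)
{
	if (s->hfscr & FAC_BIT) {
		s->hysteresis++;
		if (s->hysteresis == 0)   /* u8 wrapped: assume the facility is idle */
			s->hfscr &= ~FAC_BIT;
	}
}

int main(void)
{
	struct fac_state s = { .hfscr = 0, .hfscr_permitted = FAC_BIT };

	handle_fac_unavailable(&s);    /* guest touched the facility: grant it */
	for (int i = 0; i < 256; i++)  /* guest then never uses it: counter wraps */
		account_exit(&s);
	printf("facility enabled: %d\n", !!(s.hfscr & FAC_BIT)); /* prints 0 */
	return 0;
}

A guest that really uses the facility just takes one cheap fault roughly every 256 exits to turn it back on; an idle guest stops paying for the register save/restore entirely.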
Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 16 +++++++++++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 28 +++++++++++++++++++++------ 3 files changed, 37 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index 89216e247e8b..c8d8bd90bf14 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -580,6 +580,7 @@ struct kvm_vcpu_arch { ulong cfar; ulong ppr; u32 pspb; + u8 load_ebb; ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 8ca081a32d91..99d4fd84f0b2 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1436,6 +1436,16 @@ static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_EBB)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_EBB; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1730,6 +1740,8 @@ XXX benchmark guest exits r = kvmppc_emulate_doorbell_instr(vcpu); if (cause == FSCR_PM_LG) r = kvmppc_pmu_unavailable(vcpu); + if (cause == FSCR_EBB_LG) + r = kvmppc_ebb_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2774,9 +2786,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM is demand-faulted so start with it clear. + * PM, EBB is demand-faulted so start with it clear. */ - vcpu->arch.hfscr &= ~HFSCR_PM; + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); kvmppc_mmu_book3s_hv_init(vcpu); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 3a7e242887aa..d196642061aa 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -232,9 +232,12 @@ static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { mtspr(SPRN_TAR, vcpu->arch.tar); - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + mtspr(SPRN_BESCR, vcpu->arch.bescr); + } if (!cpu_has_feature(CPU_FTR_ARCH_31)) mtspr(SPRN_TIDR, vcpu->arch.tid); @@ -265,9 +268,22 @@ static void load_spr_state(struct kvm_vcpu *vcpu, static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); - vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); - vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); - vcpu->arch.bescr = mfspr(SPRN_BESCR); + + if (vcpu->arch.hfscr & HFSCR_EBB) { + vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); + vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); + vcpu->arch.bescr = mfspr(SPRN_BESCR); + /* + * This is like load_fp in context switching, turn off the + * facility after it wraps the u8 to try avoiding saving + * and restoring the registers each partition switch. 
+ */ + if (!vcpu->arch.nested) { + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } if (!cpu_has_feature(CPU_FTR_ARCH_31)) vcpu->arch.tid = mfspr(SPRN_TIDR); From patchwork Wed Aug 11 16:01:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515922 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ssTJNqw5; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4023Nqz9sXS for ; Thu, 12 Aug 2021 02:03:44 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233657AbhHKQEH (ORCPT ); Wed, 11 Aug 2021 12:04:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233658AbhHKQEH (ORCPT ); Wed, 11 Aug 2021 12:04:07 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7CC6FC061765 for ; Wed, 11 Aug 2021 09:03:43 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id hv22-20020a17090ae416b0290178c579e424so5725741pjb.3 for ; Wed, 11 Aug 2021 09:03:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QS45qVH5Cx5QBJ0EQfJvzDyNJmrwCSYLhKs8WHEt8g8=; b=ssTJNqw5srUEVdEsF1m6NneCoE8YvHq7t49t5hK6sJZb38rdf8ISB3TN0yPgMoPevW 6HwlL4yF2sOfEJCn+wcJkKYOHYfwBu4DFWxFDt0YgTzo6o69ABNdJTkUpMpAaOyU3D/9 kssbHa5L83lUxA5AsRA3D3Hh8apV+4R+9Dg438sNTtgn9W/Ot6Q/t3c6co3JVFm/aSj2 nrMWicdYERFxAkoZplNSQttPidCqPIc5QWF+Dsl3a7SuTlZxiuu4N0cc87FQ5ug/hMS2 BZpBvhEFszzbPvNNJ4GaQ7LZHfC6Gy6OFCFtphL2+QW/DF8bJa1bSa0EKGjSL1n7yi7c We+A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QS45qVH5Cx5QBJ0EQfJvzDyNJmrwCSYLhKs8WHEt8g8=; b=i/DXeB8jGB/GWG52ISxolRTALKIGmCwm2NqtA2wjOqhbDeQgdONawRV4wJWvQ/ofb5 +5uz3Tp4Q7O0TEXCFJh3LKIQNekpZI1FtT4/w7C5qvjPnWpk0AqHjQz2s2BDA3diM6rg NQoX6rCDzRbB9pbsTrebQk2CkJ0R01U1SyOlN6O8wc928H5RMgVXXj0uIbiTOQM1CDNO le8LHCMfpPIFPnFass9l+LlzEX0OTWgmAPGNVfLa5AdXdo38OFfwmHCDzs60918miJZf jRYPBXbcss2UID5Si0cHN5m/D7ROAsSZ3exAVP06h4WO+C5s4E7S+niMQv/UJbNTiE4q 2iRA== X-Gm-Message-State: AOAM532q+I0A3lKZptLIomy9cmgBtGcyGP0KnSIJa9uMa6KxhpVf+dGY 5/OrbE3k9ELPvxCKrkDTG87dlu5+zhs= X-Google-Smtp-Source: ABdhPJxFDlEH9pXyjRpwdQGyZRmCCT0u19ga5wLqxsbA3mHkdsK5n+ORLKmBvrC46c3lz8b9lhYHiw== X-Received: by 2002:a63:fd54:: with SMTP id m20mr332686pgj.104.1628697822917; Wed, 11 Aug 2021 09:03:42 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:42 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: 
Nicholas Piggin , linuxppc-dev@lists.ozlabs.org, Fabiano Rosas Subject: [PATCH v2 45/60] KVM: PPC: Book3S HV P9: Demand fault TM facility registers Date: Thu, 12 Aug 2021 02:01:19 +1000 Message-Id: <20210811160134.904987-46-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use HFSCR facility disabling to implement demand faulting for TM, with a hysteresis counter similar to the load_fp etc counters in context switching that implement the equivalent demand faulting for userspace facilities. This speeds up guest entry/exit by avoiding the register save/restore when a guest is not frequently using them. When a guest does use them often, there will be some additional demand fault overhead, but these are not commonly used facilities. Reviewed-by: Fabiano Rosas Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_host.h | 1 + arch/powerpc/kvm/book3s_hv.c | 26 ++++++++++++++++++++------ arch/powerpc/kvm/book3s_hv_p9_entry.c | 15 +++++++++++---- 3 files changed, 32 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index c8d8bd90bf14..ef60f5cce251 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -581,6 +581,7 @@ struct kvm_vcpu_arch { ulong ppr; u32 pspb; u8 load_ebb; + u8 load_tm; ulong fscr; ulong shadow_fscr; ulong ebbhr; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 99d4fd84f0b2..2254101a7760 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1446,6 +1446,16 @@ static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu) return RESUME_GUEST; } +static int kvmppc_tm_unavailable(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.hfscr_permitted & HFSCR_TM)) + return EMULATE_FAIL; + + vcpu->arch.hfscr |= HFSCR_TM; + + return RESUME_GUEST; +} + static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, struct task_struct *tsk) { @@ -1742,6 +1752,8 @@ XXX benchmark guest exits r = kvmppc_pmu_unavailable(vcpu); if (cause == FSCR_EBB_LG) r = kvmppc_ebb_unavailable(vcpu); + if (cause == FSCR_TM_LG) + r = kvmppc_tm_unavailable(vcpu); } if (r == EMULATE_FAIL) { kvmppc_core_queue_program(vcpu, SRR1_PROGILL); @@ -2786,9 +2798,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu) vcpu->arch.hfscr_permitted = vcpu->arch.hfscr; /* - * PM, EBB is demand-faulted so start with it clear. + * PM, EBB, TM are demand-faulted so start with it clear. 
*/ - vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB); + vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB | HFSCR_TM); kvmppc_mmu_book3s_hv_init(vcpu); @@ -3858,8 +3870,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); @@ -4575,8 +4588,9 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index d196642061aa..ae5832a0836f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -309,7 +309,7 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, if (MSR_TM_ACTIVE(guest_msr)) { kvmppc_restore_tm_hv(vcpu, guest_msr, true); ret = true; - } else { + } else if (vcpu->arch.hfscr & HFSCR_TM) { mtspr(SPRN_TEXASR, vcpu->arch.texasr); mtspr(SPRN_TFHAR, vcpu->arch.tfhar); mtspr(SPRN_TFIAR, vcpu->arch.tfiar); @@ -343,10 +343,16 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) unsigned long guest_msr = vcpu->arch.shregs.msr; if (MSR_TM_ACTIVE(guest_msr)) { kvmppc_save_tm_hv(vcpu, guest_msr, true); - } else { + } else if (vcpu->arch.hfscr & HFSCR_TM) { vcpu->arch.texasr = mfspr(SPRN_TEXASR); vcpu->arch.tfhar = mfspr(SPRN_TFHAR); vcpu->arch.tfiar = mfspr(SPRN_TFIAR); + + if (!vcpu->arch.nested) { + vcpu->arch.load_tm++; /* see load_ebb comment */ + if (!vcpu->arch.load_tm) + vcpu->arch.hfscr &= ~HFSCR_TM; + } } } } @@ -637,8 +643,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc msr |= MSR_VEC; if (cpu_has_feature(CPU_FTR_VSX)) msr |= MSR_VSX; - if (cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) msr |= MSR_TM; msr = msr_check_and_set(msr); /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ From patchwork Wed Aug 11 16:01:20 2021 X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515923 From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 46/60] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs Date: Thu, 12 Aug 2021 02:01:20 +1000 Message-Id: 
<20210811160134.904987-47-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Linux implements SPR save/restore including storage space for registers in the task struct for process context switching. Make use of this similarly to the way we make use of the context switching fp/vec save restore. This improves code reuse, allows some stack space to be saved, and helps with avoiding VRSAVE updates if they are not required. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/switch_to.h | 2 + arch/powerpc/kernel/process.c | 6 ++ arch/powerpc/kvm/book3s_hv.c | 21 +----- arch/powerpc/kvm/book3s_hv.h | 3 - arch/powerpc/kvm/book3s_hv_p9_entry.c | 93 +++++++++++++++++++-------- 5 files changed, 74 insertions(+), 51 deletions(-) diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h index 9d1fbd8be1c7..de17c45314bc 100644 --- a/arch/powerpc/include/asm/switch_to.h +++ b/arch/powerpc/include/asm/switch_to.h @@ -112,6 +112,8 @@ static inline void clear_task_ebb(struct task_struct *t) #endif } +void kvmppc_save_current_sprs(void); + extern int set_thread_tidr(struct task_struct *t); #endif /* _ASM_POWERPC_SWITCH_TO_H */ diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 4d4e99b1756e..2225d5172e3c 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1182,6 +1182,12 @@ static inline void save_sprs(struct thread_struct *t) #endif } +void kvmppc_save_current_sprs(void) +{ + save_sprs(¤t->thread); +} +EXPORT_SYMBOL_GPL(kvmppc_save_current_sprs); + static inline void restore_sprs(struct thread_struct *old_thread, struct thread_struct *new_thread) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 2254101a7760..719d8bec4436 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4533,9 +4533,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) struct kvm_run *run = vcpu->run; int r; int srcu_idx; - unsigned long ebb_regs[3] = {}; /* shut up GCC */ - unsigned long user_tar = 0; - unsigned int user_vrsave; struct kvm *kvm; unsigned long msr; @@ -4596,14 +4593,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) save_user_regs_kvm(); - /* Save userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - ebb_regs[0] = mfspr(SPRN_EBBHR); - ebb_regs[1] = mfspr(SPRN_EBBRR); - ebb_regs[2] = mfspr(SPRN_BESCR); - user_tar = mfspr(SPRN_TAR); - } - user_vrsave = mfspr(SPRN_VRSAVE); + kvmppc_save_current_sprs(); vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; @@ -4644,15 +4634,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) } } while (is_kvmppc_resume_guest(r)); - /* Restore userspace EBB and other register values */ - if (cpu_has_feature(CPU_FTR_ARCH_207S)) { - mtspr(SPRN_EBBHR, ebb_regs[0]); - mtspr(SPRN_EBBRR, ebb_regs[1]); - mtspr(SPRN_BESCR, ebb_regs[2]); - mtspr(SPRN_TAR, user_tar); - } - mtspr(SPRN_VRSAVE, user_vrsave); - vcpu->arch.state = KVMPPC_VCPU_NOTREADY; atomic_dec(&kvm->arch.vcpus_running); diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h index d3ea4c89f4fa..028dec70bdac 100644 --- a/arch/powerpc/kvm/book3s_hv.h +++ b/arch/powerpc/kvm/book3s_hv.h @@ -3,11 +3,8 @@ * Privileged (non-hypervisor) host registers to save. 
*/ struct p9_host_os_sprs { - unsigned long dscr; - unsigned long tidr; unsigned long iamr; unsigned long amr; - unsigned long fscr; unsigned int pmc1; unsigned int pmc2; diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index ae5832a0836f..7770d08a2875 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -231,15 +231,26 @@ EXPORT_SYMBOL_GPL(switch_pmu_to_host); static void load_spr_state(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* TAR is very fast */ mtspr(SPRN_TAR, vcpu->arch.tar); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + current->thread.vrsave != vcpu->arch.vrsave) + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { - mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); - mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); - mtspr(SPRN_BESCR, vcpu->arch.bescr); + if (current->thread.ebbhr != vcpu->arch.ebbhr) + mtspr(SPRN_EBBHR, vcpu->arch.ebbhr); + if (current->thread.ebbrr != vcpu->arch.ebbrr) + mtspr(SPRN_EBBRR, vcpu->arch.ebbrr); + if (current->thread.bescr != vcpu->arch.bescr) + mtspr(SPRN_BESCR, vcpu->arch.bescr); } - if (!cpu_has_feature(CPU_FTR_ARCH_31)) + if (!cpu_has_feature(CPU_FTR_ARCH_31) && + current->thread.tidr != vcpu->arch.tid) mtspr(SPRN_TIDR, vcpu->arch.tid); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, vcpu->arch.iamr); @@ -247,9 +258,9 @@ static void load_spr_state(struct kvm_vcpu *vcpu, mtspr(SPRN_AMR, vcpu->arch.amr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, vcpu->arch.uamor); - if (host_os_sprs->fscr != vcpu->arch.fscr) + if (current->thread.fscr != vcpu->arch.fscr) mtspr(SPRN_FSCR, vcpu->arch.fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) + if (current->thread.dscr != vcpu->arch.dscr) mtspr(SPRN_DSCR, vcpu->arch.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, vcpu->arch.pspb); @@ -269,20 +280,15 @@ static void store_spr_state(struct kvm_vcpu *vcpu) { vcpu->arch.tar = mfspr(SPRN_TAR); +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { vcpu->arch.ebbhr = mfspr(SPRN_EBBHR); vcpu->arch.ebbrr = mfspr(SPRN_EBBRR); vcpu->arch.bescr = mfspr(SPRN_BESCR); - /* - * This is like load_fp in context switching, turn off the - * facility after it wraps the u8 to try avoiding saving - * and restoring the registers each partition switch. 
- */ - if (!vcpu->arch.nested) { - vcpu->arch.load_ebb++; - if (!vcpu->arch.load_ebb) - vcpu->arch.hfscr &= ~HFSCR_EBB; - } } if (!cpu_has_feature(CPU_FTR_ARCH_31)) @@ -322,7 +328,6 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu, #ifdef CONFIG_ALTIVEC load_vr_state(&vcpu->arch.vr); #endif - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave); return ret; } @@ -336,7 +341,6 @@ void store_vcpu_state(struct kvm_vcpu *vcpu) #ifdef CONFIG_ALTIVEC store_vr_state(&vcpu->arch.vr); #endif - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE); if (cpu_has_feature(CPU_FTR_TM) || cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { @@ -360,12 +364,8 @@ EXPORT_SYMBOL_GPL(store_vcpu_state); void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs) { - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - host_os_sprs->tidr = mfspr(SPRN_TIDR); host_os_sprs->iamr = mfspr(SPRN_IAMR); host_os_sprs->amr = mfspr(SPRN_AMR); - host_os_sprs->fscr = mfspr(SPRN_FSCR); - host_os_sprs->dscr = mfspr(SPRN_DSCR); } EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); @@ -373,26 +373,63 @@ EXPORT_SYMBOL_GPL(save_p9_host_os_sprs); void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu, struct p9_host_os_sprs *host_os_sprs) { + /* + * current->thread.xxx registers must all be restored to host + * values before a potential context switch, othrewise the context + * switch itself will overwrite current->thread.xxx with the values + * from the guest SPRs. + */ + mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso); - if (!cpu_has_feature(CPU_FTR_ARCH_31)) - mtspr(SPRN_TIDR, host_os_sprs->tidr); + if (!cpu_has_feature(CPU_FTR_ARCH_31) && + current->thread.tidr != vcpu->arch.tid) + mtspr(SPRN_TIDR, current->thread.tidr); if (host_os_sprs->iamr != vcpu->arch.iamr) mtspr(SPRN_IAMR, host_os_sprs->iamr); if (vcpu->arch.uamor != 0) mtspr(SPRN_UAMOR, 0); if (host_os_sprs->amr != vcpu->arch.amr) mtspr(SPRN_AMR, host_os_sprs->amr); - if (host_os_sprs->fscr != vcpu->arch.fscr) - mtspr(SPRN_FSCR, host_os_sprs->fscr); - if (host_os_sprs->dscr != vcpu->arch.dscr) - mtspr(SPRN_DSCR, host_os_sprs->dscr); + if (current->thread.fscr != vcpu->arch.fscr) + mtspr(SPRN_FSCR, current->thread.fscr); + if (current->thread.dscr != vcpu->arch.dscr) + mtspr(SPRN_DSCR, current->thread.dscr); if (vcpu->arch.pspb != 0) mtspr(SPRN_PSPB, 0); /* Save guest CTRL register, set runlatch to 1 */ if (!(vcpu->arch.ctrl & 1)) mtspr(SPRN_CTRLT, 1); + +#ifdef CONFIG_ALTIVEC + if (cpu_has_feature(CPU_FTR_ALTIVEC) && + vcpu->arch.vrsave != current->thread.vrsave) + mtspr(SPRN_VRSAVE, current->thread.vrsave); +#endif + if (vcpu->arch.hfscr & HFSCR_EBB) { + if (vcpu->arch.bescr != current->thread.bescr) + mtspr(SPRN_BESCR, current->thread.bescr); + if (vcpu->arch.ebbhr != current->thread.ebbhr) + mtspr(SPRN_EBBHR, current->thread.ebbhr); + if (vcpu->arch.ebbrr != current->thread.ebbrr) + mtspr(SPRN_EBBRR, current->thread.ebbrr); + + if (!vcpu->arch.nested) { + /* + * This is like load_fp in context switching, turn off + * the facility after it wraps the u8 to try avoiding + * saving and restoring the registers each partition + * switch. 
+ */ + vcpu->arch.load_ebb++; + if (!vcpu->arch.load_ebb) + vcpu->arch.hfscr &= ~HFSCR_EBB; + } + } + + if (vcpu->arch.tar != current->thread.tar) + mtspr(SPRN_TAR, current->thread.tar); } EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs); From patchwork Wed Aug 11 16:01:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515924 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=GR31Gg+0; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF456RYXz9sXN for ; Thu, 12 Aug 2021 02:03:49 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233671AbhHKQEM (ORCPT ); Wed, 11 Aug 2021 12:04:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEM (ORCPT ); Wed, 11 Aug 2021 12:04:12 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 640EBC061765 for ; Wed, 11 Aug 2021 09:03:48 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id w13-20020a17090aea0db029017897a5f7bcso5721624pjy.5 for ; Wed, 11 Aug 2021 09:03:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TqhGZGtGscDqIbc0byNYHBzevuRj5llr+tnYsdqeEDA=; b=GR31Gg+0XHUcd6MDMFcIqgEfanzfWovuzGrFtz2O6rf06KsNR571kTAAu11PBvQEwW YOCCF7qiOBA3oZ3ftvIvzfs6HJcWZxWL2VJa2aLQIjdOhyr5/A3lDgsY6WeTYqbkoI9R 3Tac98cva9lFHZu+POKVEsnYvnBmQEUjJj9KvGSFn2ldtH/YA70YGBLjOHvqlypO2Eki H75hTOCv7ycZmBtdgtIoorq/rN/6sl8wI7GRRp+FOR5leqhs0zGex1IE/ZQZ8yiBLmam lGx6jvwLxskPqQABHSbqD25XEiUy2SMaKTD6sPwB/c/BXYLVMwFM9ttwsI0qLzOcQWPX rfmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TqhGZGtGscDqIbc0byNYHBzevuRj5llr+tnYsdqeEDA=; b=qmiCTXKcY9SBuVQlWUa7j316j4R5TWOWOBSrU/9QGJoP8A1zQZdCwT8Ob6bP9gcJm8 kxg7M1/a3/wWxOQo+g+DGS3r6G3oUs6y7wCZRxG0lVqk+ZLhqdw/7YgZOF43Uc//N8xM knS10EKxqLo7x1OLQeIfWl3uIPH4zXR7V5uW+kM5oQpxNGICtQg2QfX37OdkF5RtKpv6 GOQW2XyDKPkvlMiEKGA8TSn+X54hgteKu6pBONXqAwhrJ9tnm3EdMo+WxzlEKkOGn2uA 0XtqRf+ImXm3kbFfM2Ia0vSV+kfa+t0IbIjVksP815TpSdWie8ufxOswOtwH4+odw6LP +wDA== X-Gm-Message-State: AOAM532abOQBvDQAPLrauxR2rv+vdMF2FR7NpsQAvqHBeKyoIEp2vcXE H/mIsJc5vBZUrV8ScaLDu/0SxaZTmKU= X-Google-Smtp-Source: ABdhPJze+EVZejZMZkhILi1gALN+vDnTDAyZrizRi22b/dTJtN3GJNoZtmgaRZdmbIZqo8Jt+VQyeA== X-Received: by 2002:a17:902:e891:b029:12d:6824:9d28 with SMTP id w17-20020a170902e891b029012d68249d28mr2956541plg.23.1628697827797; Wed, 11 Aug 2021 09:03:47 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 
Aug 2021 09:03:47 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 47/60] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Date: Thu, 12 Aug 2021 02:01:21 +1000 Message-Id: <20210811160134.904987-48-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Tighten up partition switching code synchronisation and comments. In particular, hwsync ; isync is required after the last access that is performed in the context of a partition, before the partition is switched away from. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 +++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 40 +++++++++++++++++++------- 2 files changed, 33 insertions(+), 11 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index b5905ae4377c..c5508744e14c 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -54,6 +54,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, preempt_disable(); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the lpid first to avoid running host with unallocated pid */ old_lpid = mfspr(SPRN_LPID); if (old_lpid != lpid) @@ -70,6 +72,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid, else ret = copy_to_user_nofault((void __user *)to, from, n); + asm volatile("hwsync" ::: "memory"); + isync(); /* switch the pid first to avoid running host with unallocated pid */ if (quadrant == 1 && pid != old_pid) mtspr(SPRN_PID, old_pid); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 7770d08a2875..7fc391ecf39f 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -527,17 +527,19 @@ static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u6 lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; /* - * All the isync()s are overkill but trivially follow the ISA - * requirements. Some can likely be replaced with justification - * comment for why they are not needed. + * Prior memory accesses to host PID Q3 must be completed before we + * start switching, and stores must be drained to avoid not-my-LPAR + * logic (see switch_mmu_to_host). */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_LPID, lpid); - isync(); mtspr(SPRN_LPCR, lpcr); - isync(); mtspr(SPRN_PID, vcpu->arch.pid); - isync(); + /* + * isync not required here because we are HRFID'ing to guest before + * any guest context access, which is context synchronising. + */ } static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) @@ -547,25 +549,41 @@ static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpid = kvm->arch.lpid; + /* + * See switch_mmu_to_guest_radix. ptesync should not be required here + * even if the host is in HPT mode because speculative accesses would + * not cause RC updates (we are in real mode). + */ + asm volatile("hwsync" ::: "memory"); + isync(); mtspr(SPRN_LPID, lpid); mtspr(SPRN_LPCR, lpcr); mtspr(SPRN_PID, vcpu->arch.pid); for (i = 0; i < vcpu->arch.slb_max; i++) mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv); - - isync(); + /* + * isync not required here, see switch_mmu_to_guest_radix. 
+ */ } static void switch_mmu_to_host(struct kvm *kvm, u32 pid) { + /* + * The guest has exited, so guest MMU context is no longer being + * non-speculatively accessed, but a hwsync is needed before the + * mtLPIDR / mtPIDR switch, in order to ensure all stores are drained, + * so the not-my-LPAR tlbie logic does not overlook them. + */ + asm volatile("hwsync" ::: "memory"); isync(); mtspr(SPRN_PID, pid); - isync(); mtspr(SPRN_LPID, kvm->arch.host_lpid); - isync(); mtspr(SPRN_LPCR, kvm->arch.host_lpcr); - isync(); + /* + * isync is not required after the switch, because mtmsrd with L=0 + * is performed after this switch, which is context synchronising. + */ if (!radix_enabled()) slb_restore_bolted_realmode(); From patchwork Wed Aug 11 16:01:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515925 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=ZjnFCn5g; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF474Txnz9sXN for ; Thu, 12 Aug 2021 02:03:51 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233676AbhHKQEO (ORCPT ); Wed, 11 Aug 2021 12:04:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52212 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEO (ORCPT ); Wed, 11 Aug 2021 12:04:14 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD876C061765 for ; Wed, 11 Aug 2021 09:03:50 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id u13-20020a17090abb0db0290177e1d9b3f7so10373771pjr.1 for ; Wed, 11 Aug 2021 09:03:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=2UkDeEQqH1oXKRQpz834yLkjuDuYrb2VyZuIaA4gH9A=; b=ZjnFCn5gGqX88WJn9BGjTMhZSskOyCntplbH/g6S0hEaleZr5BCBPSqqYr1hdRxyiZ C6I3o8HCzQY0NaGk7WVLDrm47eHb0a2Bu+/9pAFOwRiC30NtmvnOIRC+PMIzauiTnKTf HefkIhnb2JATZJOhQq6J3vWZSPS63ISjWNZ0LcDyDFACwG5VD2Suh3/bE2qbEiKB6Nvf bxdqhnOgwFLJeaxsF50Gx1ZdsocRqN/ZkT38REHgh4udrwYQ9HA+M+XUyP50bBxN3no+ PcmgSj6140/6vJhbajLO4rkbCqLqobhPjiGUF6FoWbOnbXOKpK+GVAZU8iebYyKi4EpP t8/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=2UkDeEQqH1oXKRQpz834yLkjuDuYrb2VyZuIaA4gH9A=; b=FZ0x8MGTWc8s51z167FA3TM/ZJ2xhT+uBpld3l8i+1YtKNHIVa+cov4cJtNiUJwSA2 7G1m3bHQ92MiyLuWC5KaWcC7uOCSzgl9aH/1g9ja+zLdf7YWZOYTGA3DxhXIr48XoDhI dxD2vI2JLknv7KTSupnNFEO6V7YVF/im8FfZ+YivZfQ2jSyTPIr+28khrna90BmA2+Ig d/nY+YSeRGokwh0SddqOosnkj6zNmNqoapaAoE+k0304kbwJpctUn4y5jggV8ACOi4wK u1qZeHgtNgcKbCHVMLPg1t2tXFmFqAl5GZkNVjcYcO20/rxUY4YPnrjrV+WYXuVOcEwG c3uA== X-Gm-Message-State: AOAM533AtCwNR40PG/1ea8JwRCOYLjPhbemJzICa291F9WP5XaP4lwfW 
UalU+NFhTgxcKb8I4xAFEqN3Ty8UrO4= X-Google-Smtp-Source: ABdhPJxsjXX0Vn4evla/BCdzjDSDlmdyzl8fdfZgP73VPVWsj/bdlMQYLg81IYfh764HEpdllLZ6Kg== X-Received: by 2002:a17:902:bcc2:b029:12c:de9d:3473 with SMTP id o2-20020a170902bcc2b029012cde9d3473mr4814824pls.6.1628697830226; Wed, 11 Aug 2021 09:03:50 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:50 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 48/60] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Date: Thu, 12 Aug 2021 02:01:22 +1000 Message-Id: <20210811160134.904987-49-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Some of the DAWR SPR access is already predicated on dawr_enabled(), apply this to the remainder of the accesses. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 34 ++++++++++++++++----------- 1 file changed, 20 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 7fc391ecf39f..f8599e6f75fc 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -662,13 +662,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_dawr0 = mfspr(SPRN_DAWR0); - host_dawrx0 = mfspr(SPRN_DAWRX0); host_psscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - host_dawr1 = mfspr(SPRN_DAWR1); - host_dawrx1 = mfspr(SPRN_DAWRX1); + + if (dawr_enabled()) { + host_dawr0 = mfspr(SPRN_DAWR0); + host_dawrx0 = mfspr(SPRN_DAWRX0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + host_dawr1 = mfspr(SPRN_DAWR1); + host_dawrx1 = mfspr(SPRN_DAWRX1); + } } local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR); @@ -1002,15 +1005,18 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); - if (vcpu->arch.dawr0 != host_dawr0) - mtspr(SPRN_DAWR0, host_dawr0); - if (vcpu->arch.dawrx0 != host_dawrx0) - mtspr(SPRN_DAWRX0, host_dawrx0); - if (cpu_has_feature(CPU_FTR_DAWR1)) { - if (vcpu->arch.dawr1 != host_dawr1) - mtspr(SPRN_DAWR1, host_dawr1); - if (vcpu->arch.dawrx1 != host_dawrx1) - mtspr(SPRN_DAWRX1, host_dawrx1); + + if (dawr_enabled()) { + if (vcpu->arch.dawr0 != host_dawr0) + mtspr(SPRN_DAWR0, host_dawr0); + if (vcpu->arch.dawrx0 != host_dawrx0) + mtspr(SPRN_DAWRX0, host_dawrx0); + if (cpu_has_feature(CPU_FTR_DAWR1)) { + if (vcpu->arch.dawr1 != host_dawr1) + mtspr(SPRN_DAWR1, host_dawr1); + if (vcpu->arch.dawrx1 != host_dawrx1) + mtspr(SPRN_DAWRX1, host_dawrx1); + } } if (vc->dpdes) From patchwork Wed Aug 11 16:01:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515926 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; 
helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 49/60] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Date: Thu, 12 Aug 2021 02:01:23 +1000 Message-Id: <20210811160134.904987-50-npiggin@gmail.com> In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This also moves the PSSCR update in the nested entry path to avoid an SPR scoreboard stall.
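The guard this relies on is the same pattern used throughout the series: skip the serialising SPR write when the value to be restored already matches what is in the register. A minimal sketch follows (spr_write() and struct spr_shadow are invented stand-ins for mtspr() and the saved host/guest values, not kernel interfaces):

/*
 * Minimal sketch of conditional SPR restore. In the kernel this is
 * mtspr(SPRN_xxx, ...) guarded by a guest-vs-host value comparison;
 * the helper and struct below are made up for illustration.
 */
#include <stdint.h>
#include <stdio.h>

struct spr_shadow {
	uint64_t psscr;
	uint64_t ciabr;
};

static void spr_write(const char *name, uint64_t val)
{
	/* stand-in for a slow, serialising special-register write */
	printf("mtspr %s <- %llx\n", name, (unsigned long long)val);
}

static void restore_host_sprs(const struct spr_shadow *guest,
			      const struct spr_shadow *host)
{
	if (guest->psscr != host->psscr)   /* skip the write in the common case */
		spr_write("PSSCR", host->psscr);
	if (guest->ciabr != host->ciabr)
		spr_write("CIABR", host->ciabr);
}

int main(void)
{
	struct spr_shadow host = { .psscr = 0x1, .ciabr = 0x0 };
	struct spr_shadow guest = host;    /* guest left the values untouched */

	restore_host_sprs(&guest, &host);  /* issues no writes at all */
	return 0;
}

In the sketch's common case, where the guest never changed the values, no write is issued at all, which is where the saved cycles come from.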
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 7 +++++-- arch/powerpc/kvm/book3s_hv_p9_entry.c | 26 +++++++++++++++++++------- 2 files changed, 24 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 719d8bec4436..3983c5fa065a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3879,7 +3879,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ - mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + kvmhv_save_hv_regs(vcpu, &hvregs); hvregs.lpcr = lpcr; vcpu->arch.regs.msr = vcpu->arch.shregs.msr; @@ -3920,7 +3922,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns vcpu->arch.shregs.dar = mfspr(SPRN_DAR); vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR); vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); - mtspr(SPRN_PSSCR_PR, host_psscr); store_vcpu_state(vcpu); @@ -3933,6 +3934,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns timer_rearm_host_dec(*tb); restore_p9_host_os_sprs(vcpu, &host_os_sprs); + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, host_psscr); return trap; } diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index f8599e6f75fc..183d5884e362 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -645,6 +645,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_dawr0; unsigned long host_dawrx0; unsigned long host_psscr; + unsigned long host_hpsscr; unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; @@ -662,7 +663,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); - host_psscr = mfspr(SPRN_PSSCR); + host_psscr = mfspr(SPRN_PSSCR_PR); + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + host_hpsscr = mfspr(SPRN_PSSCR); host_pidr = mfspr(SPRN_PID); if (dawr_enabled()) { @@ -746,8 +749,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, vcpu->arch.ciabr); - mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | - (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } else { + if (vcpu->arch.psscr != host_psscr) + mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr); + } mtspr(SPRN_HFSCR, vcpu->arch.hfscr); @@ -953,7 +962,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ic = mfspr(SPRN_IC); vcpu->arch.pid = mfspr(SPRN_PID); - vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS; + vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR); vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0); vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1); @@ -999,9 +1008,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr); mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr); - /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ - mtspr(SPRN_PSSCR, host_psscr | - (local_paca->kvm_hstate.fake_suspend << 
PSSCR_FAKE_SUSPEND_LG)); + if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) { + /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */ + mtspr(SPRN_PSSCR, host_hpsscr | + (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG)); + } + mtspr(SPRN_HFSCR, host_hfscr); if (vcpu->arch.ciabr != host_ciabr) mtspr(SPRN_CIABR, host_ciabr); From patchwork Wed Aug 11 16:01:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515927 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=oD0jskWx; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4D3CLCz9t5K for ; Thu, 12 Aug 2021 02:03:56 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233678AbhHKQET (ORCPT ); Wed, 11 Aug 2021 12:04:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52234 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQET (ORCPT ); Wed, 11 Aug 2021 12:04:19 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 943D5C061765 for ; Wed, 11 Aug 2021 09:03:55 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id d17so3263638plr.12 for ; Wed, 11 Aug 2021 09:03:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ln6cdzeucykGbF0vTkTvQZAkd6bVcCqA4wzaEbEDif0=; b=oD0jskWxqr7aS/KUJbVu8G2XpHq5H/isMbATGUXHSULeArmjmPzb1JFu64rUrLP4t8 MpxKHSQsLhCAgfQFPWZ9v1RJjw5u5cuRFYBX+tCk58d/ApJCgh4x747a+NPYeEFIi9QF EnYTp0504WSqmFDwomy+g94kBiTsVmFcZDKxWI5aKGdUYNuKOpyOsr5HH4PqO9oQWbmS INs5ellFjuo1T9cPWv5kSb7b1cxkGcNhFYwVXFyBkmI8EKFNcNn5bbyyC+u/POMtlHMy XZkWaGjRFoeoNuANV0BJDVLR8YqoH/3uEsWHVtQOLl3UDRqnp6HZmcVIEBQwqfFZz7A/ KX9w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ln6cdzeucykGbF0vTkTvQZAkd6bVcCqA4wzaEbEDif0=; b=pBx5s2Ka1e3HPI4ivxHO+A5QpZhZcGReNkJ4LClRDpYMXDnI+k+6gLUeO0tKyOQAj/ Vzj+HMffUGLzVXU5rG3xXhMIWnG7FA9NokH+sScBC16mBk5gwR2ed7peQR/NuN33HY3S 41egfdYUVG96fDDK6T9DsZXEv00MB6zJaPOZipCscW2R41MrFMF4DHzg34/jIbUe9Vl7 Qxdti7nRfYeAVosSBd0h21BueG9h+ZHFwRlWkTOx3hVdVZazfFiAFtdr/GWje82qTNEm clR7RkekeWK1KQ8GrYM7XiPQZNln4WD7P/wQEY4zpDt/Ov6uPpbjJJDCZuR5EgB/pHPX 2Gxw== X-Gm-Message-State: AOAM530w/sx6v6bpTxjSBS8FmhM/2lsOnybgFPukIxZyopfdgHO4msTX /jYUO0AA/DeZCeQPI4wAYvlnnZH64mg= X-Google-Smtp-Source: ABdhPJwlcB+1WRxjCluA6sdnTpIqSepiajUMT2RVFM4FcH9PPRIkQXbUMBapQGtwW7FJc/YoF46OIw== X-Received: by 2002:a17:90a:ba8e:: with SMTP id t14mr38079862pjr.176.1628697835026; Wed, 11 Aug 2021 09:03:55 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.52 (version=TLS1_3 
cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:54 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 50/60] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Date: Thu, 12 Aug 2021 02:01:24 +1000 Message-Id: <20210811160134.904987-51-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Use the existing TLB flushing logic to IPI the previous CPU and run the necessary barriers before running a guest vCPU on a new physical CPU, to do the necessary radix GTSE barriers for handling the case of an interrupted guest tlbie sequence. This results in more IPIs than the TLB flush logic requires, but it's a significant win for common case scheduling when the vCPU remains on the same physical CPU. This saves about 520 cycles (nearly 10%) on a guest entry+exit micro benchmark on a POWER9. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 31 +++++++++++++++++++++++---- arch/powerpc/kvm/book3s_hv_p9_entry.c | 9 -------- 2 files changed, 27 insertions(+), 13 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3983c5fa065a..7d08b826d355 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3028,6 +3028,25 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) smp_call_function_single(i, do_nothing, NULL, 1); } +static void do_migrate_away_vcpu(void *arg) +{ + struct kvm_vcpu *vcpu = arg; + struct kvm *kvm = vcpu->kvm; + + /* + * If the guest has GTSE, it may execute tlbie, so do a eieio; tlbsync; + * ptesync sequence on the old CPU before migrating to a new one, in + * case we interrupted the guest between a tlbie ; eieio ; + * tlbsync; ptesync sequence. + * + * Otherwise, ptesync is sufficient. + */ + if (kvm->arch.lpcr & LPCR_GTSE) + asm volatile("eieio; tlbsync; ptesync"); + else + asm volatile("ptesync"); +} + static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; @@ -3055,10 +3074,14 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu) * so we use a single bit in .need_tlb_flush for all 4 threads. */ if (prev_cpu != pcpu) { - if (prev_cpu >= 0 && - cpu_first_tlb_thread_sibling(prev_cpu) != - cpu_first_tlb_thread_sibling(pcpu)) - radix_flush_cpu(kvm, prev_cpu, vcpu); + if (prev_cpu >= 0) { + if (cpu_first_tlb_thread_sibling(prev_cpu) != + cpu_first_tlb_thread_sibling(pcpu)) + radix_flush_cpu(kvm, prev_cpu, vcpu); + + smp_call_function_single(prev_cpu, + do_migrate_away_vcpu, vcpu, 1); + } if (nested) nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu; else diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 183d5884e362..94b15294a388 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -1045,15 +1045,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; - if (kvm_is_radix(kvm)) { - /* - * Since this is radix, do a eieio; tlbsync; ptesync sequence - * in case we interrupted the guest between a tlbie and a - * ptesync. 
- */ - asm volatile("eieio; tlbsync; ptesync"); - } - /* * cp_abort is required if the processor supports local copy-paste * to clear the copy buffer that was under control of the guest. From patchwork Wed Aug 11 16:01:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515928 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=XpfG/oBD; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4G6lF3z9sjJ for ; Thu, 12 Aug 2021 02:03:58 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233681AbhHKQEW (ORCPT ); Wed, 11 Aug 2021 12:04:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52246 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233215AbhHKQEV (ORCPT ); Wed, 11 Aug 2021 12:04:21 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 116E6C061765 for ; Wed, 11 Aug 2021 09:03:58 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id a20so3349532plm.0 for ; Wed, 11 Aug 2021 09:03:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YPJZZMuZFYpGZBXtppoHRcw+wdz7vh9/1f9Siez1f5U=; b=XpfG/oBD/a2Of074nJtLK0nL3jxrc8SSKSvketXMz4tho9Dvh/lmWNL63BCD9CAvKN /XydlZXVqvakb4ONKYLsrTAsRx0mfFBGQ+UD3MRqxcgQ9vJpx0+wZZGbKMubM2Bsw2ky XLcMWNInfU66kcJxbu9nqBvXviXq5tYnWgBvdvqpwNvNwbKzAU9QMRvblIEyi1x01RdT g7MdZ+JZJTHKNxkxY2lf6842EI/GuW7GqrQc2YSwzK0ALL048t7ZVqSN7Ku6xFZgjZ0n MJXn3n6ddYQzJkK1twTOSMFpLbEjeTdbiG/E6ZVJ/PcvTGCaq0NKIZGSWGN3ULmwa8Be QrMw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YPJZZMuZFYpGZBXtppoHRcw+wdz7vh9/1f9Siez1f5U=; b=ErefpUBxwWVURZj9QU4w6sL3pKJndAsCV+7KOp/y4FA9BsUSzgV2yBPHR/BQLY0jdd RdO2SfFaZ3m22lM6WsVcGBBJi/c2tdvQD2/YSHAXDpeH5b1a55vEDFedcm6hixoGLYFu nadveyr+s+teHnHhSelSU+cziDJ1WHbKaK9XWycKCeIJT641dzDXlr2QoVKGkXdPey1H ktFT+O5wacHkgoxNOZBsSe43z5IcIu8YS3GbebZcPUlWC8tWTjDI/w3y8vPXs4mbb147 FjmbgZ4OwLOu0SHIDCjHxod8Z0Zh9CgstuxR86o0k1j4T8fQLgD+x9ANXz97NYhs1Aq0 HyRA== X-Gm-Message-State: AOAM530EvZUFC0OI/adDQCGWrJVNpaRNacyWkLVzNykpqRCBRT9d/qlR vvfvk7PMhrq2NEceKHUaqhIaETRZArk= X-Google-Smtp-Source: ABdhPJwmzGFeCI5I2sRo2QfWGQYCu8NZ4sdLy7ZTEWbY5glAA6hY6+suelHJzEopfOWKpWdtSDu5uA== X-Received: by 2002:a17:902:7611:b029:12b:e55e:6ee8 with SMTP id k17-20020a1709027611b029012be55e6ee8mr4878885pll.4.1628697837491; Wed, 11 Aug 2021 09:03:57 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:57 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: 
Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 51/60] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Date: Thu, 12 Aug 2021 02:01:25 +1000 Message-Id: <20210811160134.904987-52-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org mftb() is expensive and one can be avoided on nested guest dispatch. If the time checking code distinguishes between the L0 timer and the nested HV timer, then both can be tested in the same place with the same mftb() value. This also nicely illustrates the relationship between the L0 and nested HV timers. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_asm.h | 1 + arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++ arch/powerpc/kvm/book3s_hv_nested.c | 5 ----- 3 files changed, 13 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h index fbbf3cec92e9..d68d71987d5c 100644 --- a/arch/powerpc/include/asm/kvm_asm.h +++ b/arch/powerpc/include/asm/kvm_asm.h @@ -79,6 +79,7 @@ #define BOOK3S_INTERRUPT_FP_UNAVAIL 0x800 #define BOOK3S_INTERRUPT_DECREMENTER 0x900 #define BOOK3S_INTERRUPT_HV_DECREMENTER 0x980 +#define BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER 0x1980 #define BOOK3S_INTERRUPT_DOORBELL 0xa00 #define BOOK3S_INTERRUPT_SYSCALL 0xc00 #define BOOK3S_INTERRUPT_TRACE 0xd00 diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7d08b826d355..7337c0ca94c6 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -1486,6 +1486,10 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu, run->ready_for_interrupt_injection = 1; switch (vcpu->arch.trap) { /* We're good on these - the host merely wanted to get our attention */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + WARN_ON_ONCE(1); /* Should never happen */ + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + fallthrough; case BOOK3S_INTERRUPT_HV_DECREMENTER: vcpu->stat.dec_exits++; r = RESUME_GUEST; @@ -1817,6 +1821,12 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu) vcpu->stat.ext_intr_exits++; r = RESUME_GUEST; break; + /* These need to go to the nested HV */ + case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER: + vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; + vcpu->stat.dec_exits++; + r = RESUME_HOST; + break; /* SR/HMI/PMI are HV interrupts that host has handled. 
Resume guest.*/ case BOOK3S_INTERRUPT_HMI: case BOOK3S_INTERRUPT_PERFMON: @@ -3978,6 +3988,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, return BOOK3S_INTERRUPT_HV_DECREMENTER; if (next_timer < time_limit) time_limit = next_timer; + else if (*tb >= time_limit) /* nested time limit */ + return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER; vcpu->arch.ceded = 0; diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c index 7bed0b91245e..e57c08b968c0 100644 --- a/arch/powerpc/kvm/book3s_hv_nested.c +++ b/arch/powerpc/kvm/book3s_hv_nested.c @@ -375,11 +375,6 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu) vcpu->arch.ret = RESUME_GUEST; vcpu->arch.trap = 0; do { - if (mftb() >= hdec_exp) { - vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER; - r = RESUME_HOST; - break; - } r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr); } while (is_kvmppc_resume_guest(r)); From patchwork Wed Aug 11 16:01:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515929 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=jxQwq2T7; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4K42vLz9sxS for ; Thu, 12 Aug 2021 02:04:01 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233215AbhHKQEY (ORCPT ); Wed, 11 Aug 2021 12:04:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233589AbhHKQEY (ORCPT ); Wed, 11 Aug 2021 12:04:24 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 87597C061765 for ; Wed, 11 Aug 2021 09:04:00 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id w13-20020a17090aea0db029017897a5f7bcso5722637pjy.5 for ; Wed, 11 Aug 2021 09:04:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=f7oBgAIHr3iz+YKgq9uJP4FTDyy08kLk6U1mSRH5irA=; b=jxQwq2T7WC7YROi7+OZ7M+dao1KyeAUQ075fReInC6El8usSkIMRgCX16fjms6CRMm exCRBhKYrw8cvlibk392a5RIXeeogqf+ej7kLRhlmJASwScfaXY1D6qXR8+8nH7mmSod gawmMPSYpSb0aDwjIQ1iOC3rfHd48G7QrCfMS3636A4cwDwuqWQ3yWB6/m/MIS7033PZ eshbDCZTmso6AE/RNzs7QV9WsmPt6joE4OSjgWexsEhNal6RJlc4JJIkgi/whj8GYXea iMzFVPb2TtFxq5JgLUHnhFeRwnY5clafZs1zXzoC2EQthANtMivh8HRw5DfQI+BU2Rx1 U/8g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=f7oBgAIHr3iz+YKgq9uJP4FTDyy08kLk6U1mSRH5irA=; b=l7FYcv8nak/3dUArnMIQHzzPL/OASNv6+Gv7QgPfwRMryb1gcHfhh6fo1Mgs2iK2yq J0/L0LlGUrS/s/SInOtHnIqpbLtVY/+THnoqDhn/1KhGkW93DMmmcdEt2xtH/aUx02AC tuXMscEIcqTsgA9Ui6Hyj9imBCoJSThcAWxBfAUVdxVFamP15HkT1TPIuQRRhIIVoeqW 
XxfSIx7Iz8+oI6FjBtvKUripOjkZSJi1zkvLyMhCN+LAq5uty/7TJ/u0RAewCT7GEoDI 7OXq9r5ay8AXa4yO/lW8Au1gFsrW+WSjI5efmQfKW9rPTK2te6vKzCClGRwMCVpjMb3F 6/Qw== X-Gm-Message-State: AOAM531zhIOH+qy9Ha5BMc1eJD2Fzl4P7o6y1x2bz4FlREQd1Fx9dXNo XkbtXecRvns4d+893V2QWGXeKuhsueg= X-Google-Smtp-Source: ABdhPJwvBlzVT3tAQ86PfoQp6xob1AMq1vberonr4Ji92d2MowZKK4CXgDYZTwt7LDk9RZCxB7OB7g== X-Received: by 2002:a65:5bc9:: with SMTP id o9mr407397pgr.24.1628697839973; Wed, 11 Aug 2021 09:03:59 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.03.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:03:59 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 52/60] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Date: Thu, 12 Aug 2021 02:01:26 +1000 Message-Id: <20210811160134.904987-53-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org Rearrange the MSR saving on entry so it does not follow the mtmsrd to disable interrupts, avoiding a possible RAW scoreboard stall. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 2 + arch/powerpc/kvm/book3s_hv.c | 18 ++----- arch/powerpc/kvm/book3s_hv_p9_entry.c | 66 +++++++++++++++--------- 3 files changed, 47 insertions(+), 39 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index f721bb1a5eab..ee25e93febe6 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -154,6 +154,8 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu) return radix; } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb); #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 7337c0ca94c6..979223018c8e 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3881,6 +3881,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns s64 dec; int trap; + msr = mfmsr(); + save_p9_host_os_sprs(&host_os_sprs); /* @@ -3891,24 +3893,10 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns */ host_psscr = mfspr(SPRN_PSSCR_PR); - hard_irq_disable(); + kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) return 0; - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* TM restore can update msr */ diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 94b15294a388..fa6ac153c0f9 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -628,6 +628,44 @@ static void save_clear_guest_mmu(struct kvm 
*kvm, struct kvm_vcpu *vcpu) } } +unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr) +{ + unsigned long msr_needed = 0; + + msr &= ~MSR_EE; + + /* MSR bits may have been cleared by context switch so must recheck */ + if (IS_ENABLED(CONFIG_PPC_FPU)) + msr_needed |= MSR_FP; + if (cpu_has_feature(CPU_FTR_ALTIVEC)) + msr_needed |= MSR_VEC; + if (cpu_has_feature(CPU_FTR_VSX)) + msr_needed |= MSR_VSX; + if ((cpu_has_feature(CPU_FTR_TM) || + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && + (vcpu->arch.hfscr & HFSCR_TM)) + msr_needed |= MSR_TM; + + /* + * This could be combined with MSR[RI] clearing, but that expands + * the unrecoverable window. It would be better to cover unrecoverable + * with KVM bad interrupt handling rather than use MSR[RI] at all. + * + * Much more difficult and less worthwhile to combine with IR/DR + * disable. + */ + if ((msr & msr_needed) != msr_needed) { + msr |= msr_needed; + __mtmsrd(msr, 0); + } else { + __hard_irq_disable(); + } + local_paca->irq_happened |= PACA_IRQ_HARD_DIS; + + return msr; +} +EXPORT_SYMBOL_GPL(kvmppc_msr_hard_disable_set_facilities); + int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { struct p9_host_os_sprs host_os_sprs; @@ -661,6 +699,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.ceded = 0; + /* Save MSR for restore, with EE clear. */ + msr = mfmsr() & ~MSR_EE; + host_hfscr = mfspr(SPRN_HFSCR); host_ciabr = mfspr(SPRN_CIABR); host_psscr = mfspr(SPRN_PSSCR_PR); @@ -682,35 +723,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc save_p9_host_os_sprs(&host_os_sprs); - /* - * This could be combined with MSR[RI] clearing, but that expands - * the unrecoverable window. It would be better to cover unrecoverable - * with KVM bad interrupt handling rather than use MSR[RI] at all. - * - * Much more difficult and less worthwhile to combine with IR/DR - * disable. - */ - hard_irq_disable(); + msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr); if (lazy_irq_pending()) { trap = 0; goto out; } - /* MSR bits may have been cleared by context switch */ - msr = 0; - if (IS_ENABLED(CONFIG_PPC_FPU)) - msr |= MSR_FP; - if (cpu_has_feature(CPU_FTR_ALTIVEC)) - msr |= MSR_VEC; - if (cpu_has_feature(CPU_FTR_VSX)) - msr |= MSR_VSX; - if ((cpu_has_feature(CPU_FTR_TM) || - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) && - (vcpu->arch.hfscr & HFSCR_TM)) - msr |= MSR_TM; - msr = msr_check_and_set(msr); - /* Save MSR for restore. This is after hard disable, so EE is clear. 
*/ - if (unlikely(load_vcpu_state(vcpu, &host_os_sprs))) msr = mfmsr(); /* MSR may have been updated */ From patchwork Wed Aug 11 16:01:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515930 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=OZaXx3cU; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4M5Ztfz9sjJ for ; Thu, 12 Aug 2021 02:04:03 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233682AbhHKQE0 (ORCPT ); Wed, 11 Aug 2021 12:04:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52272 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233589AbhHKQE0 (ORCPT ); Wed, 11 Aug 2021 12:04:26 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0D89C061765 for ; Wed, 11 Aug 2021 09:04:02 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id oa17so4230230pjb.1 for ; Wed, 11 Aug 2021 09:04:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=tuny/0eGXiUI7uo2gVucclOBaSeF7WVbv1eeHSQKFaE=; b=OZaXx3cUvijxkwm0x1eN2JTWJWgtsEwynt9e6yW/0x1n+/J7npACl0cjOAhtGMMY2D jO9aVROyUQIK/xurGp7CDQ6HuaFzIw3qKn0DjI+5a2psfC3kMCAmLfPKcXe4xcUb89sP iE/rLcnZvQ6w+Gs9xFOCZGDGGHfXHVIML9eZMJi2NcoR+yQuk9WX4548JZ+nl3FyKRuB J+G1e01iLqS6USBfWB1F2064lPFWqClXm27wtp8UD6xkUf4TnTAv6tQQJygSlgYvBm6H cR6DQCr2Z+/m1KvxbroiBxKJaPTZ9pxDsEbNWJrHZSSBm3e3a91giiW15vW+VflDndfa U1tA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=tuny/0eGXiUI7uo2gVucclOBaSeF7WVbv1eeHSQKFaE=; b=g02LGl217lDy1sgMjxqHTJPTCPacfxqpRXiQJR1K2QUDCCF/oPDzkkuLNIIgQ4YYBJ KUS6hyZGbsK64bVYUdd1mRpb/m/HMMQ0kNYwjGxvAHvqE0Z/lgfNNtiLRwTuGaF4KG9g YJzal+22dW9YBkDEjbPryixvE0qnLAFdoskx5rCBbbd9An56dhUNcAjACvxLWHIHaFya XynxWY1y6T9xwzIvRTuAmFbHc/QqIzIiQYeNeSQ5SyVcAO9YUIclC1utryshUyg2Pj61 iut0ns5t6py8e5b+EA0jNZdcdc0rmWGqvcC7/YhnG+B8mDcY1fxYKtYNn2Uu6pg1YGTK HGcA== X-Gm-Message-State: AOAM5314kYIXrv7sq0k76AeFsc1mQdnP+qCSaosj9qqJux/hYqE5zvjc TGJHZgnpm5awubViHT0QgpB0hzWegeA= X-Google-Smtp-Source: ABdhPJy2Iq6P+yIa9UFj5axzhHa+x+EubqFC3tvbbYbE9FTipuyNemwAnPdcJOW0VYk4zQFHOfAaTA== X-Received: by 2002:a17:90a:7185:: with SMTP id i5mr37588461pjk.236.1628697842379; Wed, 11 Aug 2021 09:04:02 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.04.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:04:02 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 53/60] KVM: PPC: Book3S HV P9: Optimise hash guest SLB 
saving
Date: Thu, 12 Aug 2021 02:01:27 +1000
Message-Id: <20210811160134.904987-54-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>
References: <20210811160134.904987-1-npiggin@gmail.com>

slbmfee/slbmfev instructions are very expensive, more so than a regular
mfspr instruction, so minimising them significantly improves hash guest
exit performance. The slbmfev is only required if slbmfee found a valid
SLB entry.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv_p9_entry.c | 22 ++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index fa6ac153c0f9..cb865fe2580d 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -483,10 +483,22 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator
 #define accumulate_time(vcpu, next) do {} while (0)
 #endif
 
-static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev)
+static inline u64 mfslbv(unsigned int idx)
 {
-	asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx));
-	asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx));
+	u64 slbev;
+
+	asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (idx));
+
+	return slbev;
+}
+
+static inline u64 mfslbe(unsigned int idx)
+{
+	u64 slbee;
+
+	asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (idx));
+
+	return slbee;
 }
 
 static inline void mtslb(u64 slbee, u64 slbev)
@@ -616,8 +628,10 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
 	 */
 	for (i = 0; i < vcpu->arch.slb_nr; i++) {
 		u64 slbee, slbev;
-		mfslb(i, &slbee, &slbev);
+
+		slbee = mfslbe(i);
 		if (slbee & SLB_ESID_V) {
+			slbev = mfslbv(i);
 			vcpu->arch.slb[nr].orige = slbee | i;
 			vcpu->arch.slb[nr].origv = slbev;
 			nr++;

From patchwork Wed Aug 11 16:01:28 2021
c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lGFwm9eLMc818K8Y7HbXY46Jvms/PHQTfB9Xnbg0In4=; b=UKOkMobR524dzmwACOuDHce1apQN8B3xOUPuWeZ+dEAbTPZ7xWGBwUrDeOFqQwhIKf vu90SPr2ao5wIranB8XSk/XoEbm9hR7SXqSZtmKj3m/W/w/kdrvnnvGY8/5YtC8mz1sN LHWh0E7qj9ZGIoCEYJlg3UHkPpblLCp9wPtW/nygqC5qqdxVWOoP1q0ty94crB2uVxwk qZPVCbo7M3ZWNftVYOMOTIzOXZ2KXsIUPtbk3C5AWDwCRVIuBCBkq1rZKVqz+6aH6VMr 6P8lde9ikwC3rG6R5vECx2XzO3oAAZS9ytowh3BfTBDik3PixzZPIhO14DtYMY0P1tDY M8Lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lGFwm9eLMc818K8Y7HbXY46Jvms/PHQTfB9Xnbg0In4=; b=I6rcn5wOLrpH3q+qHJXOeeANtMHQKePSjHaF2oGSTjZDKMVzCyxf6tBmmSe6Wcg5C3 SVU4Gxj2MDdW1QrkEp7jSE6gAu1VOySKL70EUPQ1nl5sKmRVADMGstWW0MDh+SzIoLLf +yKTk1JqJgk8HDS/xyJ/PaXIJnNlfwyopcxCVayC4oape7icqLO/dnfuH/7w026PANV5 Tw7yg7BwUyD4+j+zc6E6keHkal6/1IweT0kCjWY4xR6YmLqRaiyj1C4YwMck7mmES+7L cZmYzQ90IUX7o7UsIsJpLbxtXJ7f6QNVZniSK7R5KNitDdEs8eyPjUk80y4kjfB6H0+j n7gA== X-Gm-Message-State: AOAM531MEaN+JhKFtWQ9j93TamsaYtadRJvHf/YbuITNaC2CwOgYDM3V 78RmwZ/44TR6CKN0lBaIkI/urD0PwCc= X-Google-Smtp-Source: ABdhPJz0cgfxKnR+wqNYHrawFiUMpNNJFs8wHnfFYJOWIj1C27SkzJRTWtm2J0AqLhTROVRE3ixSKw== X-Received: by 2002:a17:90a:1d44:: with SMTP id u4mr11329555pju.119.1628697844869; Wed, 11 Aug 2021 09:04:04 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.04.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:04:04 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 54/60] KVM: PPC: Book3S HV P9: Avoid changing MSR[RI] in entry and exit Date: Thu, 12 Aug 2021 02:01:28 +1000 Message-Id: <20210811160134.904987-55-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org kvm_hstate.in_guest provides the equivalent of MSR[RI]=0 protection, and it covers the existing MSR[RI]=0 section in late entry and early exit, so clearing and setting MSR[RI] in those cases does not actually do anything useful. Remove the RI manipulation and replace it with comments. Make the in_guest memory accesses a bit closer to a proper critical section pattern. This speeds up guest entry/exit performance. This also removes the MSR[RI] warnings which aren't very interesting and would cause crashes if they hit due to causing an interrupt in non-recoverable code. From: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv_p9_entry.c | 50 ++++++++++++--------------- 1 file changed, 23 insertions(+), 27 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index cb865fe2580d..5745a49021c3 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -825,7 +825,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc * But TM could be split out if this would be a significant benefit. */ - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9; + /* + * MSR[RI] does not need to be cleared (and is not, for radix guests + * with no prefetch bug), because in_guest is set. 
If we take a SRESET + * or MCE with in_guest set but still in HV mode, then + * kvmppc_p9_bad_interrupt handles the interrupt, which effectively + * clears MSR[RI] and doesn't return. + */ + WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_HV_P9); + barrier(); /* Open in_guest critical section */ /* * Hash host, hash guest, or radix guest with prefetch bug, all have @@ -837,14 +845,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc save_clear_host_mmu(kvm); - if (kvm_is_radix(kvm)) { + if (kvm_is_radix(kvm)) switch_mmu_to_guest_radix(kvm, vcpu, lpcr); - if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) - __mtmsrd(0, 1); /* clear RI */ - - } else { + else switch_mmu_to_guest_hpt(kvm, vcpu, lpcr); - } /* TLBIEL uses LPID=LPIDR, so run this after setting guest LPID */ kvmppc_check_need_tlb_flush(kvm, vc->pcpu, nested); @@ -899,19 +903,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.regs.gpr[3] = local_paca->kvm_hstate.scratch2; /* - * Only set RI after reading machine check regs (DAR, DSISR, SRR0/1) - * and hstate scratch (which we need to move into exsave to make - * re-entrant vs SRESET/MCE) + * After reading machine check regs (DAR, DSISR, SRR0/1) and hstate + * scratch (which we need to move into exsave to make re-entrant vs + * SRESET/MCE), register state is protected from reentrancy. However + * timebase, MMU, among other state is still set to guest, so don't + * enable MSR[RI] here. It gets enabled at the end, after in_guest + * is cleared. + * + * It is possible an NMI could come in here, which is why it is + * important to save the above state early so it can be debugged. */ - if (ri_set) { - if (unlikely(!(mfmsr() & MSR_RI))) { - __mtmsrd(MSR_RI, 1); - WARN_ON_ONCE(1); - } - } else { - WARN_ON_ONCE(mfmsr() & MSR_RI); - __mtmsrd(MSR_RI, 1); - } vcpu->arch.regs.gpr[9] = exsave[EX_R9/sizeof(u64)]; vcpu->arch.regs.gpr[10] = exsave[EX_R10/sizeof(u64)]; @@ -969,13 +970,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc */ mtspr(SPRN_HSRR0, vcpu->arch.regs.nip); mtspr(SPRN_HSRR1, vcpu->arch.shregs.msr); - - /* - * tm_return_to_guest re-loads SRR0/1, DAR, - * DSISR after RI is cleared, in case they had - * been clobbered by a MCE. 
- */ - __mtmsrd(0, 1); /* clear RI */ goto tm_return_to_guest; } } @@ -1075,7 +1069,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc restore_p9_host_os_sprs(vcpu, &host_os_sprs); - local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE; + barrier(); /* Close in_guest critical section */ + WRITE_ONCE(local_paca->kvm_hstate.in_guest, KVM_GUEST_MODE_NONE); + /* Interrupts are recoverable at this point */ /* * cp_abort is required if the processor supports local copy-paste From patchwork Wed Aug 11 16:01:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515932 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=CZuo5gIZ; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4T67rYz9sX1 for ; Thu, 12 Aug 2021 02:04:09 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233684AbhHKQEd (ORCPT ); Wed, 11 Aug 2021 12:04:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233589AbhHKQEc (ORCPT ); Wed, 11 Aug 2021 12:04:32 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6370C061765 for ; Wed, 11 Aug 2021 09:04:07 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id hv22-20020a17090ae416b0290178c579e424so5727888pjb.3 for ; Wed, 11 Aug 2021 09:04:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=2i616JlEKtPLceUUCFltWk30KazQ8LPnIgBLdHO5H14=; b=CZuo5gIZ+d9XJdBV2l4Eqoug+9wAhSYuKmHY9WQYzDdKB0iVAK7tJdg/WjNzobpLVH o9ZjSEHGEMobaee8lcbVew7w4ShhA3c1A0TPjc79BjBh8SVs4zb72gnuxCye8BHq9mxX gCNJrJjI2cmW+PJ3akMB9EHgZ8skP14p7PXSL0wCsAKZasVZOYC/LrtgKuFE6YNvJUmZ 6Am+tNx0Qo3yim22R8paWEG9OfATSJS4nPiK/37KTTzxsbV1yxjNXM6Y2sgssWmWRm6R JV4vowG+WO7RQfQm862l4s8RUO+dnxYR01owbdms+rdKg74XEtjJG3DfPA+dzgXjLCxQ w9TQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=2i616JlEKtPLceUUCFltWk30KazQ8LPnIgBLdHO5H14=; b=b5b641mViJitjJwSINKf69aSuSAtnx5ZNNCnkRk/c0NcqVY/eiEWa179e7voTQyaCp cKaUV+kF7ncCEd59TtTU9yF1MTbAA8vvT/fm3p7LTqY3BCwQud7jECOjrY+Wg38V7E1R q4Fe4S0zsvG0H0RjfP9pTZuceO8EK8kmW+xKAf/HUQRqiTifAZXaoYbA9lEEsriZLKoO pwzhnNemKWYNXfGm7kFaij9DlIZ8t6E9n8O21WabEyFxD9s4/i64ptlIwQ0a1acA2+pI +p7l82WsQHeSaSLQOB1rvpbdA5uPHfRGBoPIrXFTumbvNh0lGTX3lT9wAF4hBZFPKrSF tr9g== X-Gm-Message-State: AOAM531GoVmAIWAL4ReN569jZ2Oiv625JEPfzFzg09wKL1FKmDk58Nse ROTeLmA/s7DOhoSvtxJsR0Xcvye2jFE= X-Google-Smtp-Source: ABdhPJwWHrCBicsMfJQly5fMPBDZpgAcu/r8DOPamVr3DJWzXGqzOze/rDmlNx1fL1MvoZPVKTSJrg== X-Received: by 2002:a17:90b:1d82:: with SMTP id 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 55/60] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready
Date: Thu, 12 Aug 2021 02:01:29 +1000
Message-Id: <20210811160134.904987-56-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>
References: <20210811160134.904987-1-npiggin@gmail.com>

The mmu will almost always be ready.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/kvm/book3s_hv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 979223018c8e..d3fc486a4817 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4399,7 +4399,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
 	vc->runner = vcpu;
 
 	/* See if the MMU is ready to go */
-	if (!kvm->arch.mmu_ready) {
+	if (unlikely(!kvm->arch.mmu_ready)) {
 		r = kvmhv_setup_mmu(vcpu);
 		if (r) {
 			run->exit_reason = KVM_EXIT_FAIL_ENTRY;

From patchwork Wed Aug 11 16:01:30 2021

From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 56/60] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit
Date: Thu, 12 Aug 2021 02:01:30 +1000
Message-Id: <20210811160134.904987-57-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>
References: <20210811160134.904987-1-npiggin@gmail.com>

cpu_in_guest is set to determine if a CPU needs to be IPI'ed to exit the
guest and notice the need_tlb_flush bit. This can be implemented as a
global per-CPU pointer to the currently running guest instead of
per-guest cpumasks, saving 2 atomics per entry/exit. P7/8 doesn't
require cpu_in_guest, nor does a nested HV (only the L0 does), so move
it to the P9 HV path.
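The barrier pairing this relies on can be seen in a small standalone C
model: the entry side publishes the running guest and only then tests the
flush request, while the flush side sets the request and only then looks
at who is running, so at least one of the two must observe the other. The
per-CPU flush flag, the function names and the printf placeholders are
illustrative simplifications, not the kernel code (which uses a per-guest
cpumask, hwsync/smp_mb() and smp_call_function_single()):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct kvm { int lpid; };

/* One "guest currently running here" slot per CPU, instead of a
 * per-guest cpumask that needs atomic set/clear on every entry/exit. */
static _Atomic(struct kvm *) cpu_in_guest[NR_CPUS];
static atomic_bool need_tlb_flush[NR_CPUS];

/* Entry path: publish the guest, full fence, then test the flush flag. */
static void model_guest_entry(struct kvm *kvm, int pcpu)
{
        atomic_store_explicit(&cpu_in_guest[pcpu], kvm, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);  /* models the hwsync at MMU switch */
        if (atomic_exchange_explicit(&need_tlb_flush[pcpu], false,
                                     memory_order_relaxed))
                printf("cpu%d: flush TLB before running the guest\n", pcpu);
        /* ... run the guest ... */
}

static void model_guest_exit(int pcpu)
{
        atomic_store_explicit(&cpu_in_guest[pcpu], NULL, memory_order_relaxed);
}

/* Flush request: set the flag, full fence, then IPI only a CPU seen
 * running this guest; a CPU that is not IPI'ed must instead see the
 * flag when it next enters. */
static void model_radix_flush_cpu(struct kvm *kvm, int pcpu)
{
        atomic_store_explicit(&need_tlb_flush[pcpu], true, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);  /* models smp_mb() */
        if (atomic_load_explicit(&cpu_in_guest[pcpu], memory_order_relaxed) == kvm)
                printf("cpu%d: in guest %d, send IPI\n", pcpu, kvm->lpid);
}

int main(void)
{
        struct kvm g = { .lpid = 1 };

        model_guest_entry(&g, 0);
        model_radix_flush_cpu(&g, 0);   /* sees cpu_in_guest[0] == &g: IPI */
        model_guest_exit(0);
        model_radix_flush_cpu(&g, 0);   /* not in guest: flag waits for next entry */
        model_guest_entry(&g, 0);       /* sees the flag: flushes before entry */
        return 0;
}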
Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_book3s_64.h | 1 - arch/powerpc/include/asm/kvm_host.h | 1 - arch/powerpc/kvm/book3s_hv.c | 38 +++++++++++++----------- 3 files changed, 21 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h index ee25e93febe6..3109b41865b2 100644 --- a/arch/powerpc/include/asm/kvm_book3s_64.h +++ b/arch/powerpc/include/asm/kvm_book3s_64.h @@ -44,7 +44,6 @@ struct kvm_nested_guest { struct mutex tlb_lock; /* serialize page faults and tlbies */ struct kvm_nested_guest *next; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; short prev_cpu[NR_CPUS]; u8 radix; /* is this nested guest radix */ }; diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h index ef60f5cce251..2bcac6da0a4b 100644 --- a/arch/powerpc/include/asm/kvm_host.h +++ b/arch/powerpc/include/asm/kvm_host.h @@ -288,7 +288,6 @@ struct kvm_arch { u32 online_vcores; atomic_t hpte_mod_interest; cpumask_t need_tlb_flush; - cpumask_t cpu_in_guest; u8 radix; u8 fwnmi_enabled; u8 secure_guest; diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d3fc486a4817..d4df2add81ae 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3012,30 +3012,33 @@ static void kvmppc_release_hwthread(int cpu) tpaca->kvm_hstate.kvm_split_mode = NULL; } +static DEFINE_PER_CPU(struct kvm *, cpu_in_guest); + static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu) { struct kvm_nested_guest *nested = vcpu->arch.nested; - cpumask_t *cpu_in_guest; int i; cpu = cpu_first_tlb_thread_sibling(cpu); - if (nested) { + if (nested) cpumask_set_cpu(cpu, &nested->need_tlb_flush); - cpu_in_guest = &nested->cpu_in_guest; - } else { + else cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush); - cpu_in_guest = &kvm->arch.cpu_in_guest; - } /* - * Make sure setting of bit in need_tlb_flush precedes - * testing of cpu_in_guest bits. The matching barrier on - * the other side is the first smp_mb() in kvmppc_run_core(). + * Make sure setting of bit in need_tlb_flush precedes testing of + * cpu_in_guest. The matching barrier on the other side is hwsync + * when switching to guest MMU mode, which happens between + * cpu_in_guest being set to the guest kvm, and need_tlb_flush bit + * being tested. 
*/ smp_mb(); for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu); - i += cpu_tlb_thread_sibling_step()) - if (cpumask_test_cpu(i, cpu_in_guest)) + i += cpu_tlb_thread_sibling_step()) { + struct kvm *running = *per_cpu_ptr(&cpu_in_guest, i); + + if (running == kvm) smp_call_function_single(i, do_nothing, NULL, 1); + } } static void do_migrate_away_vcpu(void *arg) @@ -3103,7 +3106,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) { int cpu; struct paca_struct *tpaca; - struct kvm *kvm = vc->kvm; cpu = vc->pcpu; if (vcpu) { @@ -3114,7 +3116,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) cpu += vcpu->arch.ptid; vcpu->cpu = vc->pcpu; vcpu->arch.thread_cpu = cpu; - cpumask_set_cpu(cpu, &kvm->arch.cpu_in_guest); } tpaca = paca_ptrs[cpu]; tpaca->kvm_hstate.kvm_vcpu = vcpu; @@ -3832,7 +3833,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) kvmppc_release_hwthread(pcpu + i); if (sip && sip->napped[i]) kvmppc_ipi_thread(pcpu + i); - cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest); } spin_unlock(&vc->lock); @@ -4000,8 +4000,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } } else { + struct kvm *kvm = vcpu->kvm; + kvmppc_xive_push_vcpu(vcpu); + + __this_cpu_write(cpu_in_guest, kvm); trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb); + __this_cpu_write(cpu_in_guest, NULL); + if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested && !(vcpu->arch.shregs.msr & MSR_PR)) { unsigned long req = kvmppc_get_gpr(vcpu, 3); @@ -4026,7 +4032,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, } kvmppc_xive_pull_vcpu(vcpu); - if (kvm_is_radix(vcpu->kvm)) + if (kvm_is_radix(kvm)) vcpu->arch.slb_max = 0; } @@ -4491,8 +4497,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, powerpc_local_irq_pmu_restore(flags); - cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest); - preempt_enable(); /* From patchwork Wed Aug 11 16:01:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515935 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=i85A2td1; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4Y2tHCz9sXN for ; Thu, 12 Aug 2021 02:04:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233693AbhHKQEg (ORCPT ); Wed, 11 Aug 2021 12:04:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233688AbhHKQEg (ORCPT ); Wed, 11 Aug 2021 12:04:36 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 93A80C061765 for ; Wed, 11 Aug 2021 09:04:12 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id w13-20020a17090aea0db029017897a5f7bcso5723811pjy.5 for ; Wed, 11 Aug 2021 09:04:12 -0700 (PDT) DKIM-Signature: v=1; 
From: Nicholas Piggin
To: kvm-ppc@vger.kernel.org
Cc: Nicholas Piggin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 57/60] KVM: PPC: Book3S HV P9: Remove most of the vcore logic
Date: Thu, 12 Aug 2021 02:01:31 +1000
Message-Id: <20210811160134.904987-58-npiggin@gmail.com>
In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com>
References: <20210811160134.904987-1-npiggin@gmail.com>

The P9 path always uses one vcpu per vcore, so none of the vcore, locks,
stolen time, blocking logic, shared waitq, etc., is required. Remove
most of it.
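With one vcpu per vcore, the cede path at the end of
kvmhv_run_single_vcpu() reduces to a plain per-vcpu check-then-sleep loop
around kvmppc_vcpu_check_block(), with no vcore lock or runnable-threads
list. A rough userspace analogue of that shape, with illustrative names
(the kernel side uses rcuwait, TASK_INTERRUPTIBLE and signal_pending()
rather than a mutex and condition variable):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* One waiter per vcpu: nothing is shared with sibling vcpus. */
struct vcpu {
        pthread_mutex_t lock;
        pthread_cond_t wait;
        bool ceded;         /* guest called H_CEDE */
        bool irq_pending;   /* an interrupt arrived for this vcpu */
};

/* Analogue of kvmppc_vcpu_check_block(): should the vcpu run again? */
static bool vcpu_should_run(struct vcpu *v)
{
        return !v->ceded || v->irq_pending;
}

/* Analogue of the per-vcpu cede loop: sleep until there is work. */
static void vcpu_block(struct vcpu *v)
{
        pthread_mutex_lock(&v->lock);
        while (!vcpu_should_run(v))
                pthread_cond_wait(&v->wait, &v->lock);
        v->ceded = false;
        pthread_mutex_unlock(&v->lock);
}

/* Analogue of kicking a single vcpu: no vcore-wide wakeup needed. */
static void vcpu_kick(struct vcpu *v)
{
        pthread_mutex_lock(&v->lock);
        v->irq_pending = true;
        pthread_cond_signal(&v->wait);
        pthread_mutex_unlock(&v->lock);
}

static void *interrupt_source(void *arg)
{
        sleep(1);
        vcpu_kick(arg);
        return NULL;
}

int main(void)
{
        struct vcpu v = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .wait = PTHREAD_COND_INITIALIZER,
                .ceded = true,
        };
        pthread_t t;

        pthread_create(&t, NULL, interrupt_source, &v);
        vcpu_block(&v);         /* returns once the interrupt is delivered */
        printf("vcpu resumes\n");
        pthread_join(t, NULL);
        return 0;
}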
Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 147 ++++++++++++++++++++--------------- 1 file changed, 85 insertions(+), 62 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d4df2add81ae..c8ea430d1955 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -276,6 +276,8 @@ static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); vc->preempt_tb = tb; spin_unlock_irqrestore(&vc->stoltb_lock, flags); @@ -285,6 +287,8 @@ static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb) { unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); if (vc->preempt_tb != TB_NIL) { vc->stolen_tb += tb - vc->preempt_tb; @@ -297,7 +301,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); /* * We can test vc->runner without taking the vcore lock, @@ -321,7 +330,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu) { struct kvmppc_vcore *vc = vcpu->arch.vcore; unsigned long flags; - u64 now = mftb(); + u64 now; + + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + + now = mftb(); if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING) kvmppc_core_start_stolen(vc, now); @@ -673,6 +687,8 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) u64 p; unsigned long flags; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + spin_lock_irqsave(&vc->stoltb_lock, flags); p = vc->stolen_tb; if (vc->vcore_state != VCORE_INACTIVE && @@ -695,13 +711,19 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; now = tb; - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + if (cpu_has_feature(CPU_FTR_ARCH_300)) { + stolen = 0; + } else { + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + } + if (!dt || !vpa) return; memset(dt, 0, sizeof(struct dtl_entry)); @@ -898,13 +920,14 @@ static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target) * mode handler is not called but no other threads are in the * source vcore. 
*/ - - spin_lock(&vcore->lock); - if (target->arch.state == KVMPPC_VCPU_RUNNABLE && - vcore->vcore_state != VCORE_INACTIVE && - vcore->runner) - target = vcore->runner; - spin_unlock(&vcore->lock); + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + spin_lock(&vcore->lock); + if (target->arch.state == KVMPPC_VCPU_RUNNABLE && + vcore->vcore_state != VCORE_INACTIVE && + vcore->runner) + target = vcore->runner; + spin_unlock(&vcore->lock); + } return kvm_vcpu_yield_to(target); } @@ -3128,13 +3151,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc) kvmppc_ipi_thread(cpu); } -/* Old path does this in asm */ -static void kvmppc_stop_thread(struct kvm_vcpu *vcpu) -{ - vcpu->cpu = -1; - vcpu->arch.thread_cpu = -1; -} - static void kvmppc_wait_for_nap(int n_threads) { int cpu = smp_processor_id(); @@ -3223,6 +3239,8 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp = this_cpu_ptr(&preempted_vcores); + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + vc->vcore_state = VCORE_PREEMPT; vc->pcpu = smp_processor_id(); if (vc->num_threads < threads_per_vcore(vc->kvm)) { @@ -3239,6 +3257,8 @@ static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc) { struct preempted_vcore_list *lp; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + kvmppc_core_end_stolen(vc, mftb()); if (!list_empty(&vc->preempt_list)) { lp = &per_cpu(preempted_vcores, vc->pcpu); @@ -3967,7 +3987,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb) { - struct kvmppc_vcore *vc = vcpu->arch.vcore; u64 next_timer; int trap; @@ -3983,9 +4002,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_subcore_enter_guest(); - vc->entry_exit_map = 1; - vc->in_guest = 1; - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4038,9 +4054,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - vc->entry_exit_map = 0x101; - vc->in_guest = 0; - kvmppc_subcore_exit_guest(); return trap; @@ -4106,6 +4119,13 @@ static bool kvmppc_vcpu_woken(struct kvm_vcpu *vcpu) return false; } +static bool kvmppc_vcpu_check_block(struct kvm_vcpu *vcpu) +{ + if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + return true; + return false; +} + /* * Check to see if any of the runnable vcpus on the vcore have pending * exceptions or are no longer ceded @@ -4116,7 +4136,7 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc) int i; for_each_runnable_thread(i, vcpu, vc) { - if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu)) + if (kvmppc_vcpu_check_block(vcpu)) return 1; } @@ -4133,6 +4153,8 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc) int do_sleep = 1; u64 block_ns; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Poll for pending exceptions and ceded state */ cur = start_poll = ktime_get(); if (vc->halt_poll_ns) { @@ -4398,11 +4420,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; vcpu->arch.run_task = current; vcpu->arch.state = KVMPPC_VCPU_RUNNABLE; - vcpu->arch.busy_preempt = TB_NIL; vcpu->arch.last_inst = KVM_INST_FETCH_FAILED; - vc->runnable_threads[0] = vcpu; - vc->n_runnable = 1; - vc->runner = vcpu; /* See if the MMU is ready to go */ if (unlikely(!kvm->arch.mmu_ready)) { @@ -4420,11 +4438,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, kvmppc_update_vpas(vcpu); - 
init_vcore_to_run(vc); - preempt_disable(); pcpu = smp_processor_id(); - vc->pcpu = pcpu; if (kvm_is_radix(kvm)) kvmppc_prepare_radix_vcpu(vcpu, pcpu); @@ -4453,21 +4468,23 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, goto out; } - tb = mftb(); + if (vcpu->arch.timer_running) { + hrtimer_try_to_cancel(&vcpu->arch.dec_timer); + vcpu->arch.timer_running = 0; + } - vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb); - vc->preempt_tb = TB_NIL; + tb = mftb(); - kvmppc_clear_host_core(pcpu); + vcpu->cpu = pcpu; + vcpu->arch.thread_cpu = pcpu; + local_paca->kvm_hstate.kvm_vcpu = vcpu; + local_paca->kvm_hstate.ptid = 0; + local_paca->kvm_hstate.fake_suspend = 0; - local_paca->kvm_hstate.napping = 0; - local_paca->kvm_hstate.kvm_split_mode = NULL; - kvmppc_start_thread(vcpu, vc); + vc->pcpu = pcpu; // for kvmppc_create_dtl_entry kvmppc_create_dtl_entry(vcpu, vc, tb); - trace_kvm_guest_enter(vcpu); - vc->vcore_state = VCORE_RUNNING; - trace_kvmppc_run_core(vc, 0); + trace_kvm_guest_enter(vcpu); guest_enter_irqoff(); @@ -4489,11 +4506,10 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, set_irq_happened(trap); - kvmppc_set_host_core(pcpu); - guest_exit_irqoff(); - kvmppc_stop_thread(vcpu); + vcpu->cpu = -1; + vcpu->arch.thread_cpu = -1; powerpc_local_irq_pmu_restore(flags); @@ -4520,28 +4536,31 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, } vcpu->arch.ret = r; - if (is_kvmppc_resume_guest(r) && vcpu->arch.ceded && - !kvmppc_vcpu_woken(vcpu)) { + if (is_kvmppc_resume_guest(r) && !kvmppc_vcpu_check_block(vcpu)) { kvmppc_set_timer(vcpu); - while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) { + + prepare_to_rcuwait(&vcpu->wait); + for (;;) { + set_current_state(TASK_INTERRUPTIBLE); if (signal_pending(current)) { vcpu->stat.signal_exits++; run->exit_reason = KVM_EXIT_INTR; vcpu->arch.ret = -EINTR; break; } - spin_lock(&vc->lock); - kvmppc_vcore_blocked(vc); - spin_unlock(&vc->lock); + + if (kvmppc_vcpu_check_block(vcpu)) + break; + + trace_kvmppc_vcore_blocked(vc, 0); + schedule(); + trace_kvmppc_vcore_blocked(vc, 1); } + finish_rcuwait(&vcpu->wait); } vcpu->arch.ceded = 0; - vc->vcore_state = VCORE_INACTIVE; - trace_kvmppc_run_core(vc, 1); - done: - kvmppc_remove_runnable(vc, vcpu, tb); trace_kvmppc_run_vcpu_exit(vcpu); return vcpu->arch.ret; @@ -4625,7 +4644,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu) kvmppc_save_current_sprs(); - vcpu->arch.waitp = &vcpu->arch.vcore->wait; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.waitp = &vcpu->arch.vcore->wait; vcpu->arch.pgdir = kvm->mm->pgd; vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; @@ -5087,6 +5107,9 @@ void kvmppc_alloc_host_rm_ops(void) int cpu, core; int size; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return; + /* Not the first time here ? 
*/ if (kvmppc_host_rm_ops_hv != NULL) return; From patchwork Wed Aug 11 16:01:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515936 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=YD2K34I0; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4b5pf6z9sWl for ; Thu, 12 Aug 2021 02:04:15 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233680AbhHKQEj (ORCPT ); Wed, 11 Aug 2021 12:04:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233589AbhHKQEi (ORCPT ); Wed, 11 Aug 2021 12:04:38 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 037F7C061765 for ; Wed, 11 Aug 2021 09:04:15 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id t7-20020a17090a5d87b029017807007f23so10315743pji.5 for ; Wed, 11 Aug 2021 09:04:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dyeznd7fI3Ttr5fjCtFBb+9Yq8CmDhEuQG11aiCoAXM=; b=YD2K34I094rIr+N9jGMAzNglTlN0rbrNyRVt5azEbS9j2f/TbsvU1kBqmx0uxbfHZD DH35XhINHZdSiV26yb2+slG/pxwJXn7yyZCB66/CCKTF4X+unfPAEXy1MCoLKKhYFkjr 7WvB/zdYS9g04Eq1f/U8PTllisZvOmkdjhI+s7wGFx4VRQkG55T/dfScEVPZp04dK0OH eU+ypA9ANP99yPetQV5dzYbUaNyRwBfmDd5qO5baIMNFqFxrO6S++dmjjsg/NOx/zjLI gRtDsmcudwqZRKHFAX8hk/id3FYmQAdhy50Appf6ZxKz7N55Eg6+bwN/yQDE6K8gmjwB Oe1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dyeznd7fI3Ttr5fjCtFBb+9Yq8CmDhEuQG11aiCoAXM=; b=H5/V0WTj2H944g+WBWI+aK2nm7oZxaXRKkJvmkEpGBUAA9dztGgcERc9SV/Dr4LHPE vSyDiEpThkj+hbmkZvyxUSb2MxHQQ6WH2ih6De4uvotQnCq3XyWqhTag3AfzFxI9WIRA 1TYQFWKL47YnkKpPiUGPyYVLil2YRIQktpniKBl9M9ZT1+FUCViq/lJKTDi5pbcE+zeO ++vq3FRgNFndINoDiC4GyotFQcJUTB2a4mONiU1C+h4WGT8ekSHTX9u1W5y30Y/GgEs9 nQ924x3fdFJRaU6mL8tKkIIXL5S6RgTBLtscNI7EndoJUnHxPOtgD6go6+nsSzywEVhb G51Q== X-Gm-Message-State: AOAM530SxNVLU/CfVaOwdI6t/Fdvyj/NAuwEDn5tFisOUKKFre0Jfapb HPA25Wb8vSfGz8AyZgIWn/Km+cqULN4= X-Google-Smtp-Source: ABdhPJyEAS4DeIsBBaSu60Jd/s1YbdXwUcLMuOGyVqPs22LFWEoaz/lDh8ekRrwMpolxGdS+O/vznQ== X-Received: by 2002:a17:902:bc84:b029:12c:f9b9:db98 with SMTP id bb4-20020a170902bc84b029012cf9b9db98mr4783798plb.19.1628697854428; Wed, 11 Aug 2021 09:04:14 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.04.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:04:14 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 58/60] KVM: PPC: Book3S HV P9: Tidy 
kvmppc_create_dtl_entry Date: Thu, 12 Aug 2021 02:01:32 +1000 Message-Id: <20210811160134.904987-59-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org This goes further to removing vcores from the P9 path. Also avoid the memset in favour of explicitly initialising all fields. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++++++--------------- 1 file changed, 35 insertions(+), 26 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index c8ea430d1955..1dc98e553997 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -698,41 +698,30 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now) return p; } -static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, - struct kvmppc_vcore *vc, u64 tb) +static void __kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + unsigned int pcpu, u64 now, + unsigned long stolen) { struct dtl_entry *dt; struct lppaca *vpa; - unsigned long stolen; - unsigned long core_stolen; - u64 now; - unsigned long flags; dt = vcpu->arch.dtl_ptr; vpa = vcpu->arch.vpa.pinned_addr; - now = tb; - - if (cpu_has_feature(CPU_FTR_ARCH_300)) { - stolen = 0; - } else { - core_stolen = vcore_stolen_time(vc, now); - stolen = core_stolen - vcpu->arch.stolen_logged; - vcpu->arch.stolen_logged = core_stolen; - spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); - stolen += vcpu->arch.busy_stolen; - vcpu->arch.busy_stolen = 0; - spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); - } if (!dt || !vpa) return; - memset(dt, 0, sizeof(struct dtl_entry)); + dt->dispatch_reason = 7; - dt->processor_id = cpu_to_be16(vc->pcpu + vcpu->arch.ptid); - dt->timebase = cpu_to_be64(now + vc->tb_offset); + dt->preempt_reason = 0; + dt->processor_id = cpu_to_be16(pcpu + vcpu->arch.ptid); dt->enqueue_to_dispatch_time = cpu_to_be32(stolen); + dt->ready_to_enqueue_time = 0; + dt->waiting_to_ready_time = 0; + dt->timebase = cpu_to_be64(now); + dt->fault_addr = 0; dt->srr0 = cpu_to_be64(kvmppc_get_pc(vcpu)); dt->srr1 = cpu_to_be64(vcpu->arch.shregs.msr); + ++dt; if (dt == vcpu->arch.dtl.pinned_end) dt = vcpu->arch.dtl.pinned_addr; @@ -743,6 +732,27 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, vcpu->arch.dtl.dirty = true; } +static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu, + struct kvmppc_vcore *vc) +{ + unsigned long stolen; + unsigned long core_stolen; + u64 now; + unsigned long flags; + + now = mftb(); + + core_stolen = vcore_stolen_time(vc, now); + stolen = core_stolen - vcpu->arch.stolen_logged; + vcpu->arch.stolen_logged = core_stolen; + spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags); + stolen += vcpu->arch.busy_stolen; + vcpu->arch.busy_stolen = 0; + spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags); + + __kvmppc_create_dtl_entry(vcpu, vc->pcpu, now + vc->tb_offset, stolen); +} + /* See if there is a doorbell interrupt pending for a vcpu */ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) { @@ -3753,7 +3763,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc) pvc->pcpu = pcpu + thr; for_each_runnable_thread(i, vcpu, pvc) { kvmppc_start_thread(vcpu, pvc); - kvmppc_create_dtl_entry(vcpu, pvc, mftb()); + kvmppc_create_dtl_entry(vcpu, pvc); trace_kvm_guest_enter(vcpu); if (!vcpu->arch.ptid) thr0_done = true; @@ -4304,7 +4314,7 @@ static int 
kvmppc_run_vcpu(struct kvm_vcpu *vcpu) if ((vc->vcore_state == VCORE_PIGGYBACK || vc->vcore_state == VCORE_RUNNING) && !VCORE_IS_EXITING(vc)) { - kvmppc_create_dtl_entry(vcpu, vc, mftb()); + kvmppc_create_dtl_entry(vcpu, vc); kvmppc_start_thread(vcpu, vc); trace_kvm_guest_enter(vcpu); } else if (vc->vcore_state == VCORE_SLEEPING) { @@ -4481,8 +4491,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, local_paca->kvm_hstate.ptid = 0; local_paca->kvm_hstate.fake_suspend = 0; - vc->pcpu = pcpu; // for kvmppc_create_dtl_entry - kvmppc_create_dtl_entry(vcpu, vc, tb); + __kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0); trace_kvm_guest_enter(vcpu); From patchwork Wed Aug 11 16:01:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515937 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=lyFnjPCI; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4GlF4f1YXgz9sX1 for ; Thu, 12 Aug 2021 02:04:18 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233589AbhHKQEl (ORCPT ); Wed, 11 Aug 2021 12:04:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52338 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233509AbhHKQEl (ORCPT ); Wed, 11 Aug 2021 12:04:41 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73238C061765 for ; Wed, 11 Aug 2021 09:04:17 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id l11so3293656plk.6 for ; Wed, 11 Aug 2021 09:04:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=/vEG4Ho/dXWgP2kOVkY6K1MeaFf4fxSaH5Q3DcWC6fo=; b=lyFnjPCIWjCgcAy4FU02Pz39Xgd7UZhZMBQls7yYaJojByausWz+UWnYBVL7AW93p9 d5QdXCVFCrfuHHj02x4NPuksDPBAOmaa8PcFrYRagJ3Wc/cgwPZnvY3lxiZP4+CpYM2s M4iayd2LMugrPto+I3s+GEcgRqdOy3RX2G1k0egyqDrd1gZbnuwCZMUgkvDx/+e8I5ly NWgJ0SN18CMDcFaYKyEEF9krE6DTbCKaq0zq36PPv0TQOSECJ+l7tvhQf6DsF/ZchVoa QIDrTuijE6sPsZ9YmI4LP9ylG4vrPyfl33iRZo4PNbcaYScA5Hg7+PTgkjPdCPywXmfB FmlA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=/vEG4Ho/dXWgP2kOVkY6K1MeaFf4fxSaH5Q3DcWC6fo=; b=muxCStpv/s7vhJJTKNPf/Vf/xXyMZ9BaJCaML0Bmdi5CcFAkrAsMxFn86tZaXI2/L7 UF8Tn0NG0RoxBkxLKR1ZYZJFMYyFECNRmEkLM/nwxNF9EQRUInG05yZq6aYgW7Hc3o8G jIC9jdfNv/TvJW7ed7R8PZJhSruol52862gPWXvy3p4O0i4qCoEo9Q/7BrRaGirwVJgE z3UqKYxT9S/yEq8bziw8QOtEs5WxuN17JKNAE9zsZuFrUhSg1cehRJSVVH1FWGZilxiJ obowqefxA9MMcyyaBR9rrOhFJG+lV25BMhdRez2C00GSR8keSDwgB84KmQpzWeP6Xi+1 0PmA== X-Gm-Message-State: AOAM533ksMyPsTIeVr91NOSdQYb5INOoQmredJc5FPbDp7rqTbuCOFic iuYFMCAkP8SnE8X6IK37Uns4Rk1WjWQ= X-Google-Smtp-Source: 
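As an aside on the kvmppc_create_dtl_entry tidy-up in patch 58/60 above: after the split, stolen-time accounting stays in the vcore-based wrapper, while the P9 path calls the low-level helper directly with a zero stolen value and the timebase offset applied at the call site. A minimal editorial sketch of the two call shapes, taken from the hunks above rather than adding anything new:

	/* pre-P9 vcore paths: the wrapper computes stolen time and applies vc->tb_offset */
	kvmppc_create_dtl_entry(vcpu, vc);

	/* P9 single-vcpu path: no vcore stolen-time accounting */
	__kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0);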
ABdhPJyfef0O9LtmxDiXnuEztUDy3opwF7xkqm9urqSAaKsAfRJY5aApc+RC30EPYW43U4cZAKiYFA== X-Received: by 2002:a17:90a:1d44:: with SMTP id u4mr11330552pju.119.1628697856867; Wed, 11 Aug 2021 09:04:16 -0700 (PDT) Received: from bobo.ibm.com ([118.210.97.79]) by smtp.gmail.com with ESMTPSA id k19sm6596494pff.28.2021.08.11.09.04.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 11 Aug 2021 09:04:16 -0700 (PDT) From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 59/60] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Date: Thu, 12 Aug 2021 02:01:33 +1000 Message-Id: <20210811160134.904987-60-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org The P9 path uses vc->dpdes only for msgsndp / SMT emulation. This adds an ordering requirement between vcpu->doorbell_request and vc->dpdes for no real benefit. Use vcpu->doorbell_request directly. Signed-off-by: Nicholas Piggin --- arch/powerpc/kvm/book3s_hv.c | 18 ++++++++++-------- arch/powerpc/kvm/book3s_hv_builtin.c | 2 ++ arch/powerpc/kvm/book3s_hv_p9_entry.c | 14 ++++++++++---- 3 files changed, 22 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 1dc98e553997..e7f4525f2a74 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -761,6 +761,8 @@ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu) if (vcpu->arch.doorbell_request) return true; + if (cpu_has_feature(CPU_FTR_ARCH_300)) + return false; /* * Ensure that the read of vcore->dpdes comes after the read * of vcpu->doorbell_request. This barrier matches the @@ -2188,8 +2190,10 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, * either vcore->dpdes or doorbell_request. * On POWER8, doorbell_request is 0. 
*/ - *val = get_reg_val(id, vcpu->arch.vcore->dpdes | - vcpu->arch.doorbell_request); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + *val = get_reg_val(id, vcpu->arch.doorbell_request); + else + *val = get_reg_val(id, vcpu->arch.vcore->dpdes); break; case KVM_REG_PPC_VTB: *val = get_reg_val(id, vcpu->arch.vcore->vtb); @@ -2426,7 +2430,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id, vcpu->arch.pspb = set_reg_val(id, *val); break; case KVM_REG_PPC_DPDES: - vcpu->arch.vcore->dpdes = set_reg_val(id, *val); + if (cpu_has_feature(CPU_FTR_ARCH_300)) + vcpu->arch.doorbell_request = set_reg_val(id, *val) & 1; + else + vcpu->arch.vcore->dpdes = set_reg_val(id, *val); break; case KVM_REG_PPC_VTB: vcpu->arch.vcore->vtb = set_reg_val(id, *val); @@ -4463,11 +4470,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit, if (!nested) { kvmppc_core_prepare_to_enter(vcpu); - if (vcpu->arch.doorbell_request) { - vc->dpdes = 1; - smp_wmb(); - vcpu->arch.doorbell_request = 0; - } if (test_bit(BOOK3S_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions)) lpcr |= LPCR_MER; diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c index a10bf93054ca..3ed90149ed2e 100644 --- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -660,6 +660,8 @@ void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu) int ext; unsigned long lpcr; + WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300)); + /* Insert EXTERNAL bit into LPCR at the MER bit position */ ext = (vcpu->arch.pending_exceptions >> BOOK3S_IRQPRIO_EXTERNAL) & 1; lpcr = mfspr(SPRN_LPCR); diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 5745a49021c3..1e18c089478e 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -701,6 +701,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc unsigned long host_pidr; unsigned long host_dawr1; unsigned long host_dawrx1; + unsigned long dpdes; hdec = time_limit - *tb; if (hdec < 0) @@ -763,8 +764,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc if (vc->pcr) mtspr(SPRN_PCR, vc->pcr | PCR_MASK); - if (vc->dpdes) - mtspr(SPRN_DPDES, vc->dpdes); + if (vcpu->arch.doorbell_request) { + vcpu->arch.doorbell_request = 0; + mtspr(SPRN_DPDES, 1); + } if (dawr_enabled()) { if (vcpu->arch.dawr0 != host_dawr0) @@ -995,7 +998,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2); vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3); - vc->dpdes = mfspr(SPRN_DPDES); + dpdes = mfspr(SPRN_DPDES); + if (dpdes) + vcpu->arch.doorbell_request = 1; + vc->vtb = mfspr(SPRN_VTB); dec = mfspr(SPRN_DEC); @@ -1057,7 +1063,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc } } - if (vc->dpdes) + if (dpdes) mtspr(SPRN_DPDES, 0); if (vc->pcr) mtspr(SPRN_PCR, PCR_MASK); From patchwork Wed Aug 11 16:01:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicholas Piggin X-Patchwork-Id: 1515938 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=kvm-ppc-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dkim=pass 
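As an aside on the vc->dpdes patch above (59/60): on the P9 path the guest's pending msgsndp doorbell is now tracked only in vcpu->arch.doorbell_request and is moved into and out of the DPDES register around the guest run. A minimal sketch of that flow, with hypothetical helper names (the real code sits inline in kvmhv_vcpu_entry_p9):

	/* before entry: deliver a pending per-vcpu doorbell to the thread */
	static void p9_push_doorbell(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.doorbell_request) {
			vcpu->arch.doorbell_request = 0;
			mtspr(SPRN_DPDES, 1);
		}
	}

	/* after exit: remember an undelivered doorbell and clear DPDES */
	static void p9_pull_doorbell(struct kvm_vcpu *vcpu)
	{
		unsigned long dpdes = mfspr(SPRN_DPDES);

		if (dpdes) {
			vcpu->arch.doorbell_request = 1;
			mtspr(SPRN_DPDES, 0);
		}
	}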
From: Nicholas Piggin To: kvm-ppc@vger.kernel.org Cc: Nicholas Piggin , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 60/60] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Date: Thu, 12 Aug 2021 02:01:34 +1000 Message-Id: <20210811160134.904987-61-npiggin@gmail.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20210811160134.904987-1-npiggin@gmail.com> References: <20210811160134.904987-1-npiggin@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org On POWER9 and newer, rather than the complex HMI synchronisation and subcore state, have each thread un-apply the guest TB offset before calling into the early HMI handler.
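The timebase adjustment this relies on is the SPRN_TBU40 pattern used later in kvmppc_p9_realmode_hmi_handler(); a minimal sketch with an illustrative helper name that the patch itself does not introduce:

	/*
	 * SPRN_TBU40 writes only the upper 40 bits of the timebase; the low
	 * 24 bits keep running from their old value.  If they compare below
	 * the target's low bits, the result landed behind the intended
	 * timebase, so bump the upper bits by one unit of the low field
	 * (1 << 24) so the timebase ends up at or after the target.
	 */
	static void write_tb_upper40(u64 new_tb)
	{
		mtspr(SPRN_TBU40, new_tb);
		if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
			new_tb += 0x1000000;
			mtspr(SPRN_TBU40, new_tb);
		}
	}

The handler uses this to subtract vc->tb_offset_applied before running the early HMI code and to add vc->tb_offset back on the way out.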
This allows the subcore state to be avoided, including subcore enter / exit guest, which includes an expensive divide that shows up slightly in profiles. Signed-off-by: Nicholas Piggin --- arch/powerpc/include/asm/kvm_ppc.h | 1 + arch/powerpc/kvm/book3s_hv.c | 12 +++--- arch/powerpc/kvm/book3s_hv_hmi.c | 7 +++- arch/powerpc/kvm/book3s_hv_p9_entry.c | 2 +- arch/powerpc/kvm/book3s_hv_ras.c | 54 +++++++++++++++++++++++++++ 5 files changed, 67 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 2d88944f9f34..6355a6980ccf 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -760,6 +760,7 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu); void kvmppc_subcore_enter_guest(void); void kvmppc_subcore_exit_guest(void); long kvmppc_realmode_hmi_handler(void); +long kvmppc_p9_realmode_hmi_handler(struct kvm_vcpu *vcpu); long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags, long pte_index, unsigned long pteh, unsigned long ptel); long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags, diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index e7f4525f2a74..f1f343307578 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4017,8 +4017,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu->arch.ceded = 0; - kvmppc_subcore_enter_guest(); - vcpu_vpa_increment_dispatch(vcpu); if (kvmhv_on_pseries()) { @@ -4071,8 +4069,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, vcpu_vpa_increment_dispatch(vcpu); - kvmppc_subcore_exit_guest(); - return trap; } @@ -6054,9 +6050,11 @@ static int kvmppc_book3s_init_hv(void) if (r) return r; - r = kvm_init_subcore_bitmap(); - if (r) - return r; + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { + r = kvm_init_subcore_bitmap(); + if (r) + return r; + } /* * We need a way of accessing the XICS interrupt controller, diff --git a/arch/powerpc/kvm/book3s_hv_hmi.c b/arch/powerpc/kvm/book3s_hv_hmi.c index 9af660476314..1ec50c69678b 100644 --- a/arch/powerpc/kvm/book3s_hv_hmi.c +++ b/arch/powerpc/kvm/book3s_hv_hmi.c @@ -20,10 +20,15 @@ void wait_for_subcore_guest_exit(void) /* * NULL bitmap pointer indicates that KVM module hasn't - * been loaded yet and hence no guests are running. + * been loaded yet and hence no guests are running, or running + * on POWER9 or newer CPU. + * * If no KVM is in use, no need to co-ordinate among threads * as all of them will always be in host and no one is going * to modify TB other than the opal hmi handler. + * + * POWER9 and newer don't need this synchronisation. + * * Hence, just return from here. 
*/ if (!local_paca->sibling_subcore_state) diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c index 1e18c089478e..7d31ad3de723 100644 --- a/arch/powerpc/kvm/book3s_hv_p9_entry.c +++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c @@ -934,7 +934,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc kvmppc_realmode_machine_check(vcpu); } else if (unlikely(trap == BOOK3S_INTERRUPT_HMI)) { - kvmppc_realmode_hmi_handler(); + kvmppc_p9_realmode_hmi_handler(vcpu); } else if (trap == BOOK3S_INTERRUPT_H_EMUL_ASSIST) { vcpu->arch.emul_inst = mfspr(SPRN_HEIR); diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c index d4bca93b79f6..3f94f4080d04 100644 --- a/arch/powerpc/kvm/book3s_hv_ras.c +++ b/arch/powerpc/kvm/book3s_hv_ras.c @@ -136,6 +136,60 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu) vcpu->arch.mce_evt = mce_evt; } + +long kvmppc_p9_realmode_hmi_handler(struct kvm_vcpu *vcpu) +{ + struct kvmppc_vcore *vc = vcpu->arch.vcore; + long ret = 0; + + /* + * Unapply and clear the offset first. That way, if the TB was not + * resynced then it will remain in host-offset, and if it was resynced + * then it is brought into host-offset. Then the tb offset is + * re-applied before continuing with the KVM exit. + * + * This way, we don't need to actualy know whether not OPAL resynced + * the timebase or do any of the complicated dance that the P7/8 + * path requires. + */ + if (vc->tb_offset_applied) { + u64 new_tb = mftb() - vc->tb_offset_applied; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = 0; + } + + local_paca->hmi_irqs++; + + if (hmi_handle_debugtrig(NULL) >= 0) { + ret = 1; + goto out; + } + + if (ppc_md.hmi_exception_early) + ppc_md.hmi_exception_early(NULL); + +out: + if (vc->tb_offset) { + u64 new_tb = mftb() + vc->tb_offset; + mtspr(SPRN_TBU40, new_tb); + if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) { + new_tb += 0x1000000; + mtspr(SPRN_TBU40, new_tb); + } + vc->tb_offset_applied = vc->tb_offset; + } + + return ret; +} + +/* + * The following subcore HMI handling is all only for pre-POWER9 CPUs. + */ + /* Check if dynamic split is in force and return subcore size accordingly. */ static inline int kvmppc_cur_subcore_size(void) {