| Message ID | 20210619133415.20016-1-npiggin@gmail.com |
|---|---|
| State | New |
| Series | KVM: PPC: Book3S HV Nested: Reflect L2 PMU in-use to L0 when L2 SPRs are live |
Nicholas Piggin <npiggin@gmail.com> writes:

> After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs,
> switch the pmcregs_in_use field in the L1 lppaca to the value advertised
> by the L2 in its VPA. On the way out of the L2, set it back after saving
> the L2 PMU registers (if they were in-use).
>
> This transfers the PMU liveness indication between the L1 and L2 at the
> points where the registers are not live.
>
> This fixes the nested HV bug for which a workaround was added to the L0
> HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu
> for guest capable of nesting"), which explains the problem in detail.
> That workaround is no longer required for guests that include this bug
> fix.
>
> Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

I don't know much about the performance monitor facility, but the patch
seems sane overall.

Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>

> ---
> I have a later performance patch that reverts the workaround, but it
> would be good to fix the nested HV first so there is some lead time for
> the fix to percolate.
>
> Thanks,
> Nick
>
>  arch/powerpc/include/asm/pmc.h |  7 +++++++
>  arch/powerpc/kvm/book3s_hv.c   | 15 +++++++++++++++
>  2 files changed, 22 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
> index c6bbe9778d3c..3c09109e708e 100644
> --- a/arch/powerpc/include/asm/pmc.h
> +++ b/arch/powerpc/include/asm/pmc.h
> @@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
>  #endif
>  }
>
> +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> +static inline int ppc_get_pmu_inuse(void)
> +{
> +	return get_paca()->pmcregs_in_use;
> +}
> +#endif
> +
>  extern void power4_enable_pmcs(void);
>
>  #else /* CONFIG_PPC64 */
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0d6edb136bd4..e66f96fb6eed 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -59,6 +59,7 @@
>  #include <asm/kvm_book3s.h>
>  #include <asm/mmu_context.h>
>  #include <asm/lppaca.h>
> +#include <asm/pmc.h>
>  #include <asm/processor.h>
>  #include <asm/cputhreads.h>
>  #include <asm/page.h>
> @@ -3761,6 +3762,16 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>  	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
>  		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
>
> +#ifdef CONFIG_PPC_PSERIES
> +	if (kvmhv_on_pseries()) {
> +		if (vcpu->arch.vpa.pinned_addr) {
> +			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
> +			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
> +		} else {
> +			get_lppaca()->pmcregs_in_use = 1;
> +		}
> +	}
> +#endif
>  	kvmhv_load_guest_pmu(vcpu);
>
>  	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
> @@ -3895,6 +3906,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>  		save_pmu |= nesting_enabled(vcpu->kvm);
>
>  	kvmhv_save_guest_pmu(vcpu, save_pmu);
> +#ifdef CONFIG_PPC_PSERIES
> +	if (kvmhv_on_pseries())
> +		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
> +#endif
>
>  	vc->entry_exit_map = 0x101;
>  	vc->in_guest = 0;
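For readers who, like the reviewer, are less familiar with the performance monitor facility: pmcregs_in_use is a paravirtualized hint in the lppaca that tells the hypervisor underneath whether this guest's PMU SPRs hold live state, so it can skip the save/restore when they do not. Below is a minimal userspace sketch of the consumer side only; all names are invented here for illustration (the real consumer is the L0's guest entry/exit path, and the real structure is struct lppaca):

```c
#include <stdio.h>

/* toy view of the relevant lppaca field (illustrative, not the kernel type) */
struct lppaca_view {
	unsigned char pmcregs_in_use;	/* 1: this guest's PMU SPRs are live */
};

/* stand-in for the real mfspr/mtspr save sequence */
static void save_guest_pmu_sprs(void)
{
	puts("saving guest PMU SPRs");
}

static void hv_guest_exit_pmu(const struct lppaca_view *lp)
{
	if (lp->pmcregs_in_use)
		save_guest_pmu_sprs();		 /* live state must survive     */
	else
		puts("PMU idle, skipping save"); /* the fast path the hint buys */
}

int main(void)
{
	struct lppaca_view lp = { .pmcregs_in_use = 1 };

	hv_guest_exit_pmu(&lp);	/* saves */
	lp.pmcregs_in_use = 0;
	hv_guest_exit_pmu(&lp);	/* skips */
	return 0;
}
```

The bug this patch fixes is on the producer side: an L1 that was not using the PMU itself could leave 0 in this field while running an L2 that was, so the L0 skipped a save it needed to do. That is why commit 63279eeb7f93a made the L0 unconditionally save the PMU for nesting-capable guests as a workaround.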
```diff
diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
index c6bbe9778d3c..3c09109e708e 100644
--- a/arch/powerpc/include/asm/pmc.h
+++ b/arch/powerpc/include/asm/pmc.h
@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
 #endif
 }
 
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+static inline int ppc_get_pmu_inuse(void)
+{
+	return get_paca()->pmcregs_in_use;
+}
+#endif
+
 extern void power4_enable_pmcs(void);
 
 #else /* CONFIG_PPC64 */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0d6edb136bd4..e66f96fb6eed 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -59,6 +59,7 @@
 #include <asm/kvm_book3s.h>
 #include <asm/mmu_context.h>
 #include <asm/lppaca.h>
+#include <asm/pmc.h>
 #include <asm/processor.h>
 #include <asm/cputhreads.h>
 #include <asm/page.h>
@@ -3761,6 +3762,16 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
 		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
 
+#ifdef CONFIG_PPC_PSERIES
+	if (kvmhv_on_pseries()) {
+		if (vcpu->arch.vpa.pinned_addr) {
+			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
+			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
+		} else {
+			get_lppaca()->pmcregs_in_use = 1;
+		}
+	}
+#endif
 	kvmhv_load_guest_pmu(vcpu);
 
 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
@@ -3895,6 +3906,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 		save_pmu |= nesting_enabled(vcpu->kvm);
 
 	kvmhv_save_guest_pmu(vcpu, save_pmu);
+#ifdef CONFIG_PPC_PSERIES
+	if (kvmhv_on_pseries())
+		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
+#endif
 
 	vc->entry_exit_map = 0x101;
 	vc->in_guest = 0;
```
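Two details of the hunks above are worth spelling out. On entry, the no-VPA case deliberately defaults to 1, so the L0 can only ever over-save, never skip a save it needed. On exit, the L1 restores its own value from the paca copy via the new ppc_get_pmu_inuse() rather than stashing the old lppaca value; as the pmc.h context suggests, ppc_set_pmu_inuse() keeps that paca copy in step with the L1's own PMU use. A toy model of the handoff, in plain userspace C with invented names:

```c
#include <stdio.h>

/* toy stand-ins: the real fields live in the L1's lppaca and paca */
static unsigned char l1_lppaca_pmcregs;	/* what the L0 reads            */
static unsigned char l1_own_pmcregs;	/* L1's own setting (paca copy) */

/* entry side: L1 PMU SPRs already saved, L2's not yet loaded */
static void l2_entry(const unsigned char *l2_vpa_hint)
{
	/* no VPA registered: assume 1 so the L0 never skips a needed save */
	l1_lppaca_pmcregs = l2_vpa_hint ? *l2_vpa_hint : 1;
}

/* exit side: L2 PMU SPRs have just been saved */
static void l2_exit(void)
{
	l1_lppaca_pmcregs = l1_own_pmcregs;	/* restore from the paca copy */
}

int main(void)
{
	unsigned char l2_hint = 1;	/* the L2 advertises a live PMU  */

	l1_own_pmcregs = 0;		/* the L1 itself is not counting */
	l2_entry(&l2_hint);
	printf("while L2 runs, L0 sees %d\n", l1_lppaca_pmcregs);	/* 1 */
	l2_exit();
	printf("back in L1,   L0 sees %d\n", l1_lppaca_pmcregs);	/* 0 */
	return 0;
}
```

Running it prints the value the L0 would observe at each point of the transition.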
After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs, switch the pmcregs_in_use field in the L1 lppaca to the value advertised by the L2 in its VPA. On the way out of the L2, set it back after saving the L2 PMU registers (if they were in-use).

This transfers the PMU liveness indication between the L1 and L2 at the points where the registers are not live.

This fixes the nested HV bug for which a workaround was added to the L0 HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu for guest capable of nesting"), which explains the problem in detail. That workaround is no longer required for guests that include this bug fix.

Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

---
I have a later performance patch that reverts the workaround, but it would be good to fix the nested HV first so there is some lead time for the fix to percolate.

Thanks,
Nick

 arch/powerpc/include/asm/pmc.h |  7 +++++++
 arch/powerpc/kvm/book3s_hv.c   | 15 +++++++++++++++
 2 files changed, 22 insertions(+)
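Putting the pieces in order, the sequence the commit message describes looks roughly like the sketch below. Every function is an illustrative stub rather than a kernel symbol (only kvmhv_load_guest_pmu()/kvmhv_save_guest_pmu() appear in the diff itself); the point is the ordering.

```c
/* Timeline sketch of one L1 -> L2 transition with this fix applied. */
static void save_l1_pmu_sprs(void)       { /* L1 counters -> memory               */ }
static void publish_l2_hint_to_l0(void)  { /* lppaca field := L2's VPA value      */ }
static void load_l2_pmu_sprs(void)       { /* kvmhv_load_guest_pmu() in the diff  */ }
static void enter_nested_guest(void)     { /* L2 runs until it exits              */ }
static void save_l2_pmu_sprs(void)       { /* kvmhv_save_guest_pmu() in the diff  */ }
static void publish_l1_value_to_l0(void) { /* lppaca field := ppc_get_pmu_inuse() */ }

void run_l2_once_sketch(void)
{
	save_l1_pmu_sprs();		/* PMU SPRs no longer live    */
	publish_l2_hint_to_l0();	/* safe: nothing live to lose */
	load_l2_pmu_sprs();		/* L2 counters become live    */

	enter_nested_guest();

	save_l2_pmu_sprs();		/* PMU SPRs no longer live    */
	publish_l1_value_to_l0();	/* safe: nothing live to lose */
	/* L1's own PMU state is restored later on the way out */
}
```

The invariant is that both "publish" steps sit strictly inside the windows where neither vCPU's counters are live in hardware, so whenever the L0 consults the field it sees a value that matches what is actually in the registers.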