| Message ID | 20220311002528.2230172-11-dmatlack@google.com |
|---|---|
| State | Changes Requested |
| Series | Extend Eager Page Splitting to the shadow MMU |
On Fri, Mar 11, 2022 at 12:25:12AM +0000, David Matlack wrote:
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> -	free_page((unsigned long)sp->spt);
> -	kmem_cache_free(mmu_page_header_cache, sp);
> +	kvm_mmu_free_shadow_page(sp);
>  }

Perhaps tdp_mmu_free_sp() can be dropped altogether with this?

Reviewed-by: Peter Xu <peterx@redhat.com>
On Tue, Mar 15, 2022 at 3:23 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Mar 11, 2022 at 12:25:12AM +0000, David Matlack wrote:
> >  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
> >  {
> > -	free_page((unsigned long)sp->spt);
> > -	kmem_cache_free(mmu_page_header_cache, sp);
> > +	kvm_mmu_free_shadow_page(sp);
> >  }
>
> Perhaps tdp_mmu_free_sp() can be dropped altogether with this?

It certainly can, but I prefer to keep it for two reasons:

 - Smaller diff.
 - It mirrors tdp_mmu_alloc_sp(), which I prefer to keep as well, but
   I'll explain that in the next patch.

> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> --
> Peter Xu
>
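For context on the symmetry David mentions, the alloc/free pair in tdp_mmu.c ends up looking roughly like the sketch below. The tdp_mmu_alloc_sp() body is reproduced approximately from the kernel around this series' base and is illustrative, not verbatim; only tdp_mmu_free_sp() is taken directly from this patch.

```c
/* Allocation side (approximate body): pulls the kvm_mmu_page header and
 * the shadow page table page from the vCPU's per-MMU memory caches. */
static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
{
	struct kvm_mmu_page *sp;

	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);

	return sp;
}

/* Free side after this patch: a one-line wrapper, kept so the alloc/free
 * pair stays visually symmetric, as discussed in the reply above. */
static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
{
	kvm_mmu_free_shadow_page(sp);
}
```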
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c12d5016f6dc..2dcafbef5ffc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1669,11 +1669,8 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
-	hlist_del(&sp->hash_link);
-	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
@@ -2521,6 +2518,9 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
 
+		MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+		hlist_del(&sp->hash_link);
+		list_del(&sp->link);
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a0648e7ddd33..5f91e4d07a95 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -173,4 +173,6 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
 
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1a43f908d508..184874a82a1b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -64,8 +64,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 
 static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
 {
-	free_page((unsigned long)sp->spt);
-	kmem_cache_free(mmu_page_header_cache, sp);
+	kvm_mmu_free_shadow_page(sp);
 }
 
 /*
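Pieced together from the hunks above, the shared helper after this patch should look like the sketch below. The final kmem_cache_free() call falls outside the quoted hunk context, so it is inferred from the two lines removed from tdp_mmu_free_sp(); treat this as a reading aid, not the verbatim result.

```c
/* Common free path shared by the shadow MMU and the TDP MMU: releases
 * the shadow page table page, the gfns array for indirect pages, and
 * the kvm_mmu_page header itself. */
void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
{
	free_page((unsigned long)sp->spt);
	if (!sp->role.direct)
		free_page((unsigned long)sp->gfns);
	kmem_cache_free(mmu_page_header_cache, sp);	/* inferred, see note above */
}
```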
Use a common function to free kvm_mmu_page structs in the TDP MMU and
the shadow MMU. This reduces the amount of duplicate code and is
needed in subsequent commits that allocate and free kvm_mmu_pages for
eager page splitting.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h | 2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 +--
 3 files changed, 7 insertions(+), 6 deletions(-)
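One way to read the mmu.c change: the hash-list and active-list unlinking is specific to the shadow MMU's zap path, so the patch moves it into the caller, kvm_mmu_commit_zap_page(), leaving a freeing helper with no shadow-MMU-only side effects that the TDP MMU can safely call as well. A sketch of the resulting loop, taken directly from the diff above with the surrounding function elided:

```c
/* Inside kvm_mmu_commit_zap_page() after this patch: unlink the page
 * from the hash list and active list here (shadow-MMU-specific work),
 * then hand it to the now-common freeing helper. */
list_for_each_entry_safe(sp, nsp, invalid_list, link) {
	WARN_ON(!sp->role.invalid || sp->root_count);

	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
	hlist_del(&sp->hash_link);
	list_del(&sp->link);
	kvm_mmu_free_shadow_page(sp);
}
```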