Message ID | 20131106221138.GA27956@iris.ozlabs.ibm.com |
---|---|
State | New, archived |
On Thu, Nov 07, 2013 at 09:11:38AM +1100, Paul Mackerras wrote:
> This fixes a bug in kvmppc_do_h_enter() where the physical address
> for a page can be calculated incorrectly if transparent huge pages
> (THP) are active. Until THP came along, it was true that if we
> encountered a large (16M) page in kvmppc_do_h_enter(), then the
> associated memslot must be 16M aligned for both its guest physical
> address and the userspace address, and the physical address
> calculations in kvmppc_do_h_enter() assumed that. With THP, that
> is no longer true.

BTW, it looks like kvmppc_book3s_hv_page_fault() has a similar bug.
I'll do a v2 of the patch to fix both, since it's essentially the
same problem in both places.

Paul.
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 9c51544..fddbf98 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -225,6 +225,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		is_io = pa & (HPTE_R_I | HPTE_R_W);
 		pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK);
 		pa &= PAGE_MASK;
+		pa |= gpa & ~PAGE_MASK;
 	} else {
 		/* Translate to host virtual address */
 		hva = __gfn_to_hva_memslot(memslot, gfn);
@@ -238,13 +239,12 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 				ptel = hpte_make_readonly(ptel);
 			is_io = hpte_cache_bits(pte_val(pte));
 			pa = pte_pfn(pte) << PAGE_SHIFT;
+			pa |= hva & (pte_size - 1);
 		}
 	}
 
 	if (pte_size < psize)
 		return H_PARAMETER;
-	if (pa && pte_size > psize)
-		pa |= gpa & (pte_size - 1);
 
 	ptel &= ~(HPTE_R_PP0 - psize);
 	ptel |= pa;
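
To make the alignment issue concrete, here is a small standalone userspace sketch (not part of the patch; the memslot bases, huge-page real address, and offsets are invented for illustration) of the offset arithmetic the patch changes. When the memslot's guest physical base is not congruent with its userspace base modulo the 16M page size, the low bits of the gpa no longer give the page's offset within the backing THP page, so the offset has to come from the hva instead.

```c
/*
 * Illustrative sketch only -- not kernel code.  All addresses and the
 * memslot layout are made up.  It shows why "pa |= gpa & (pte_size - 1)"
 * can pick the wrong 64K page inside a 16M THP page when the memslot's
 * guest physical base and userspace base are not congruent modulo 16M.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_16M	(16UL << 20)

int main(void)
{
	/* Hypothetical memslot: guest physical base only 1M aligned... */
	uint64_t slot_gpa_base = 0x01300000UL;
	/* ...while the backing userspace mapping happens to be 16M aligned. */
	uint64_t slot_hva_base = 0x7fff20000000UL;

	/* The guest H_ENTERs a page 5M into the memslot. */
	uint64_t gpa = slot_gpa_base + 0x00500000UL;
	uint64_t hva = slot_hva_base + 0x00500000UL;

	/* Pretend the 16M THP page backing hva starts at this real address. */
	uint64_t huge_pa  = 0x20000000UL;
	uint64_t pte_size = SZ_16M;

	/* Old calculation: offset taken from the guest physical address. */
	uint64_t pa_old = huge_pa | (gpa & (pte_size - 1));
	/* Patched calculation: offset taken from the host virtual address. */
	uint64_t pa_new = huge_pa | (hva & (pte_size - 1));

	printf("old pa = 0x%llx\n", (unsigned long long)pa_old);
	printf("new pa = 0x%llx\n", (unsigned long long)pa_new);
	/*
	 * Here pa_old is 0x20800000 and pa_new is 0x20500000: the old
	 * arithmetic points 3M past the page the guest actually mapped.
	 */
	return 0;
}
```

The first hunk of the patch handles the other branch similarly, but there only the offset within a base page is taken from the gpa, which stays valid because guest physical addresses within a memslot are base-page aligned.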