From patchwork Thu Jun 21 08:52:38 2012
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 166263
Date: Thu, 21 Jun 2012 17:52:38 +0900
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: agraf@suse.de, paulus@samba.org, aarcange@redhat.com, kvm@vger.kernel.org,
    kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org,
    takuya.yoshikawa@gmail.com
Subject: [PATCH 6/6] KVM: MMU: Avoid handling same rmap_pde in kvm_handle_hva_range()
Message-Id: <20120621175238.705e6188.yoshikawa.takuya@oss.ntt.co.jp>
In-Reply-To: <20120621174842.22779780.yoshikawa.takuya@oss.ntt.co.jp>
References: <20120621174842.22779780.yoshikawa.takuya@oss.ntt.co.jp>
X-Mailing-List: kvm-ppc@vger.kernel.org

When we invalidate a THP page, we call the handler with the same
rmap_pde argument 512 times in the following loop:

  for each guest page in the range
    for each level
      unmap using rmap

This patch avoids these extra handler calls by changing the loop order
like this:

  for each level
    for each rmap in the range
      unmap using rmap

Together with the preceding patches in this series, this made THP page
invalidation more than 5 times faster on our x86 host: the host became
more responsive while the guest's memory was being swapped out.

Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the tracepoint from kvm_handle_hva_range().
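For illustration only (not taken from the patch or the kernel tree): the
tiny stand-alone user-space program below mimics the two loop orders for a
single 2MB THP. The names handle_2m_slot() and PAGES_PER_THP are made up
for this sketch and are not KVM interfaces.

/* Stand-alone illustration, not kernel code: compare how many times the
 * 2MB-level rmap slot is handled under the old and new loop orders for
 * one 2MB THP.  The 4KB level touches 512 distinct slots either way, so
 * only the 2MB level is modeled here.
 */
#include <stdio.h>

#define PAGES_PER_THP	512	/* one 2MB THP backs 512 4KB guest pages */

static int calls_2m;

/* Stands in for handler(kvm, rmapp, data) on the 2MB-level rmap slot. */
static void handle_2m_slot(long slot)
{
	(void)slot;
	calls_2m++;
}

int main(void)
{
	long gfn;

	/* Old order: the gfn loop is outermost, so every one of the 512
	 * gfns covered by the THP revisits the same 2MB slot. */
	calls_2m = 0;
	for (gfn = 0; gfn < PAGES_PER_THP; gfn++)
		handle_2m_slot(gfn / PAGES_PER_THP);
	printf("old order: 2MB slot handled %d time(s)\n", calls_2m);

	/* New order: the level loop is outermost and the inner loop walks
	 * rmap slots directly, so the single 2MB slot is handled once. */
	calls_2m = 0;
	handle_2m_slot(0);
	printf("new order: 2MB slot handled %d time(s)\n", calls_2m);

	return 0;
}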
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c |   39 ++++++++++++++++++++-------------------
 1 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 306711a..462becb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1192,16 +1192,15 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 					       unsigned long data))
 {
 	int j;
-	int ret;
-	int retval = 0;
+	int ret = 0;
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 
 	slots = kvm_memslots(kvm);
 
 	kvm_for_each_memslot(memslot, slots) {
-		gfn_t gfn;
 		unsigned long hva_start, hva_end;
+		gfn_t gfn_start, gfn_end;
 
 		hva_start = max(start, memslot->userspace_addr);
 		hva_end = min(end, memslot->userspace_addr +
@@ -1209,25 +1208,27 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 		if (hva_start >= hva_end)
 			continue;
 
-		for (gfn = hva_to_gfn_memslot(hva_start, memslot);
-		     gfn < hva_to_gfn_memslot(hva_end, memslot); gfn++) {
-			ret = 0;
+		gfn_start = hva_to_gfn_memslot(hva_start, memslot);
+		gfn_end = hva_to_gfn_memslot(hva_end, memslot);
 
-			for (j = PT_PAGE_TABLE_LEVEL;
-			     j < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++j) {
-				unsigned long *rmapp;
+		for (j = PT_PAGE_TABLE_LEVEL;
+		     j < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++j) {
+			unsigned long idx, idx_end;
+			unsigned long *rmapp;
 
-				rmapp = __gfn_to_rmap(gfn, j, memslot);
-				ret |= handler(kvm, rmapp, data);
-			}
-			trace_kvm_age_page(memslot->userspace_addr +
-					(gfn - memslot->base_gfn) * PAGE_SIZE,
-					memslot, ret);
-			retval |= ret;
+			idx = gfn_to_index(gfn_start, memslot->base_gfn, j);
+			idx_end = gfn_to_index(gfn_end, memslot->base_gfn, j);
+
+			rmapp = __gfn_to_rmap(gfn_start, j, memslot);
+
+			/* Handle the first one even if idx == idx_end. */
+			do {
+				ret |= handler(kvm, rmapp++, data);
+			} while (++idx < idx_end);
 		}
 	}
 
-	return retval;
+	return ret;
 }
 
 static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
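The do/while at the end of the new inner loop matters when the hva range
lies entirely within one large page: at the higher levels gfn_start and
gfn_end then map to the same index, so a plain for loop would skip that
rmap slot entirely. Below is a stand-alone sketch of that corner case;
the shift value and the helper name gfn_to_index_2m() are simplifications
for this example, not the real KVM definitions.

/* Stand-alone sketch, not kernel code: the idx/idx_end corner case that
 * the "Handle the first one even if idx == idx_end." comment refers to.
 * Only the 2MB level is modeled, with a made-up shift constant.
 */
#include <stdio.h>

#define HPAGE_GFN_SHIFT	9	/* 512 gfns share one 2MB-level rmap slot */

static long gfn_to_index_2m(long gfn, long base_gfn)
{
	return (gfn >> HPAGE_GFN_SHIFT) - (base_gfn >> HPAGE_GFN_SHIFT);
}

int main(void)
{
	long base_gfn = 0;
	long gfn_start = 10, gfn_end = 20;	/* both inside one 2MB page */
	long idx = gfn_to_index_2m(gfn_start, base_gfn);
	long idx_end = gfn_to_index_2m(gfn_end, base_gfn);
	int handler_calls = 0;

	/* A plain "for (; idx < idx_end; idx++)" would run zero times here,
	 * never touching the large-page rmap slot. */
	do {
		handler_calls++;	/* stands for handler(kvm, rmapp++, data) */
	} while (++idx < idx_end);

	printf("idx == idx_end at the 2MB level, handler still called %d time(s)\n",
	       handler_calls);
	return 0;
}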