From patchwork Fri Apr 29 17:36:30 2011
X-Patchwork-Submitter: Alex Williamson
X-Patchwork-Id: 93468
From: Alex Williamson
To: qemu-devel@nongnu.org
Cc: jan.kiszka@siemens.com, alex.williamson@redhat.com, mst@redhat.com
Date: Fri, 29 Apr 2011 11:36:30 -0600
Message-ID: <20110429173553.12032.94926.stgit@s20.home>
User-Agent: StGIT/0.14.3
Subject: [Qemu-devel] [PATCH] CPUPhysMemoryClient: Batch contiguous addresses when playing catchup

When a phys memory client registers and we play catchup by walking the
page tables, we can drastically reduce the number of times the
set_memory callback is invoked by batching contiguous pages together.
With a 4G guest, this reduces the number of callbacks at registration
from 1048866 to 296.
Signed-off-by: Alex Williamson
---
 exec.c |   38 ++++++++++++++++++++++++++++++++------
 1 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/exec.c b/exec.c
index e670929..a0f2954 100644
--- a/exec.c
+++ b/exec.c
@@ -1741,8 +1741,15 @@ static int cpu_notify_migration_log(int enable)
     return 0;
 }
 
-static void phys_page_for_each_1(CPUPhysMemoryClient *client,
-                                 int level, void **lp, target_phys_addr_t addr)
+struct last_map {
+    target_phys_addr_t start_addr;
+    ram_addr_t size;
+    ram_addr_t phys_offset;
+};
+
+static void phys_page_for_each_1(CPUPhysMemoryClient *client, int level,
+                                 void **lp, target_phys_addr_t addr,
+                                 struct last_map *map)
 {
     int i;
 
@@ -1754,15 +1761,29 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
         addr <<= L2_BITS + TARGET_PAGE_BITS;
         for (i = 0; i < L2_SIZE; ++i) {
             if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
-                client->set_memory(client, addr | i << TARGET_PAGE_BITS,
-                                   TARGET_PAGE_SIZE, pd[i].phys_offset);
+                target_phys_addr_t start_addr = addr | i << TARGET_PAGE_BITS;
+
+                if (map->size &&
+                    start_addr == map->start_addr + map->size &&
+                    pd[i].phys_offset == map->phys_offset + map->size) {
+
+                    map->size += TARGET_PAGE_SIZE;
+                    continue;
+                } else if (map->size) {
+                    client->set_memory(client, map->start_addr,
+                                       map->size, map->phys_offset);
+                }
+
+                map->start_addr = start_addr;
+                map->size = TARGET_PAGE_SIZE;
+                map->phys_offset = pd[i].phys_offset;
             }
         }
     } else {
         void **pp = *lp;
         for (i = 0; i < L2_SIZE; ++i) {
             phys_page_for_each_1(client, level - 1, pp + i,
-                                 (addr << L2_BITS) | i);
+                                 (addr << L2_BITS) | i, map);
         }
     }
 }
@@ -1770,9 +1791,14 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
 
 static void phys_page_for_each(CPUPhysMemoryClient *client)
 {
     int i;
+    struct last_map map = { 0 };
+
     for (i = 0; i < P_L1_SIZE; ++i) {
         phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
-                             l1_phys_map + i, i);
+                             l1_phys_map + i, i, &map);
+    }
+    if (map.size) {
+        client->set_memory(client, map.start_addr, map.size, map.phys_offset);
     }
 }
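
For readers outside the QEMU tree, the range-coalescing idea in the patch can
be exercised on its own. What follows is a minimal standalone sketch, not part
of the patch: it stands in uint64_t for target_phys_addr_t/ram_addr_t, uses a
hypothetical set_memory() stub that merely counts and prints ranges, and feeds
it a synthetic page layout to show contiguous pages collapsing into a handful
of callbacks.

/*
 * Standalone illustration (not QEMU code): mimics the last_map coalescing
 * from the patch with plain uint64_t addresses and a counting stub in place
 * of client->set_memory().
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

struct last_map {
    uint64_t start_addr;
    uint64_t size;
    uint64_t phys_offset;
};

static unsigned callbacks;

/* Stand-in for client->set_memory(): counts calls and prints each range. */
static void set_memory(uint64_t start, uint64_t size, uint64_t phys_offset)
{
    callbacks++;
    printf("range: start=0x%llx size=0x%llx offset=0x%llx\n",
           (unsigned long long)start, (unsigned long long)size,
           (unsigned long long)phys_offset);
}

/* Feed one page into the coalescer; flush when contiguity breaks. */
static void add_page(struct last_map *map, uint64_t addr, uint64_t phys_offset)
{
    if (map->size &&
        addr == map->start_addr + map->size &&
        phys_offset == map->phys_offset + map->size) {
        map->size += PAGE_SIZE;     /* page extends the current range */
        return;
    }
    if (map->size) {                /* emit the previous range */
        set_memory(map->start_addr, map->size, map->phys_offset);
    }
    map->start_addr = addr;         /* start a new range at this page */
    map->size = PAGE_SIZE;
    map->phys_offset = phys_offset;
}

int main(void)
{
    struct last_map map = { 0 };
    uint64_t addr;

    /* 1024 contiguous pages, then a hole, then 512 more contiguous pages. */
    for (addr = 0; addr < 1024 * PAGE_SIZE; addr += PAGE_SIZE) {
        add_page(&map, addr, addr);
    }
    for (addr = 2048 * PAGE_SIZE; addr < 2560 * PAGE_SIZE; addr += PAGE_SIZE) {
        add_page(&map, addr, addr);
    }
    if (map.size) {                 /* final flush, as in phys_page_for_each() */
        set_memory(map.start_addr, map.size, map.phys_offset);
    }
    printf("%u set_memory calls for 1536 pages\n", callbacks);
    return 0;
}

Built with e.g. "gcc -o coalesce coalesce.c", the two loops cover 1536 pages
but produce only two set_memory calls, mirroring (in miniature) the
1048866 -> 296 reduction reported above.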