From patchwork Tue May 3 18:36:46 2011
X-Patchwork-Submitter: Alex Williamson
X-Patchwork-Id: 93880
From: Alex Williamson
To: qemu-devel@nongnu.org, anthony@codemonkey.ws
Cc: jan.kiszka@siemens.com, alex.williamson@redhat.com, armbru@redhat.com, mst@redhat.com
Date: Tue, 03 May 2011 12:36:46 -0600
Message-ID: <20110503183638.28430.11842.stgit@s20.home>
In-Reply-To: <20110503182039.28430.26530.stgit@s20.home>
References: <20110503182039.28430.26530.stgit@s20.home>
User-Agent: StGIT/0.14.3
Subject: [Qemu-devel] [PATCH v2 2/3] CPUPhysMemoryClient: Pass guest physical address not region offset

When we're trying to get a newly registered phys memory client updated
with the current page mappings, we end up passing the region offset
(a ram_addr_t) as the start address rather than the actual guest
physical memory address (target_phys_addr_t).  If your guest has less
than 3.5G of memory, these are coincidentally the same thing.  If
there's more, the region offset for the memory above 4G starts over at
0, so the set_memory client will overwrite its lower memory entries.

Instead, keep track of the guest physical address as we're walking the
tables and pass that to the set_memory client.

Signed-off-by: Alex Williamson
Acked-by: Michael S. Tsirkin
---
 exec.c |   16 ++++++++++++----
 1 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/exec.c b/exec.c
index 8790ad8..bbd5c86 100644
--- a/exec.c
+++ b/exec.c
@@ -1741,8 +1741,14 @@ static int cpu_notify_migration_log(int enable)
     return 0;
 }
 
+/* The l1_phys_map provides the upper P_L1_BITs of the guest physical
+ * address.  Each intermediate table provides the next L2_BITs of guest
+ * physical address space.  The number of levels vary based on host and
+ * guest configuration, making it efficient to build the final guest
+ * physical address by seeding the L1 offset and shifting and adding in
+ * each L2 offset as we recurse through them. */
 static void phys_page_for_each_1(CPUPhysMemoryClient *client,
-                                 int level, void **lp)
+                                 int level, void **lp, target_phys_addr_t addr)
 {
     int i;
 
@@ -1751,16 +1757,18 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
     }
     if (level == 0) {
         PhysPageDesc *pd = *lp;
+        addr <<= L2_BITS + TARGET_PAGE_BITS;
         for (i = 0; i < L2_SIZE; ++i) {
             if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
-                client->set_memory(client, pd[i].region_offset,
+                client->set_memory(client, addr | i << TARGET_PAGE_BITS,
                                    TARGET_PAGE_SIZE, pd[i].phys_offset);
             }
         }
     } else {
         void **pp = *lp;
         for (i = 0; i < L2_SIZE; ++i) {
-            phys_page_for_each_1(client, level - 1, pp + i);
+            phys_page_for_each_1(client, level - 1, pp + i,
+                                 (addr << L2_BITS) | i);
         }
     }
 }
@@ -1770,7 +1778,7 @@ static void phys_page_for_each(CPUPhysMemoryClient *client)
     int i;
     for (i = 0; i < P_L1_SIZE; ++i) {
         phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
-                             l1_phys_map + i);
+                             l1_phys_map + i, i);
     }
 }
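
For readers following the shift-and-or reconstruction above, here is a
minimal standalone sketch (not part of the patch) of how the guest
physical address gets rebuilt from the L1 seed, one intermediate L2
index, and the leaf index.  The L2_BITS/TARGET_PAGE_BITS values and the
sample indices below are illustrative placeholders, not QEMU's actual
configuration-dependent values.

#include <inttypes.h>
#include <stdio.h>

/* Illustrative placeholders; the real values depend on the host/guest
 * build configuration and are not taken from this patch. */
#define L2_BITS          10
#define TARGET_PAGE_BITS 12

int main(void)
{
    uint64_t addr = 3;   /* seeded with the l1_phys_map index (L1 offset) */
    int mid_index = 5;   /* index into an intermediate table */
    int leaf_index = 7;  /* index into the leaf PhysPageDesc table */

    /* each intermediate level shifts in the next L2_BITS of the address */
    addr = (addr << L2_BITS) | mid_index;

    /* at level 0, make room for the final table index plus the page
     * offset, then OR in the leaf index shifted to its page boundary,
     * mirroring what the patched phys_page_for_each_1() does */
    addr <<= L2_BITS + TARGET_PAGE_BITS;
    addr |= (uint64_t)leaf_index << TARGET_PAGE_BITS;

    printf("guest physical address: 0x%" PRIx64 "\n", addr);
    return 0;
}

With these placeholder values the walk yields 0x301407000, i.e. the L1,
intermediate, and leaf indices each occupy their own bit field above the
page offset, which is exactly why passing pd[i].region_offset instead
collapsed distinct guest addresses onto the same low range.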