From patchwork Fri Mar 6 03:41:09 2020
X-Patchwork-Submitter: Keqian Zhu
X-Patchwork-Id: 1250012
From: Keqian Zhu
Subject: [PATCH] migration: do not require length alignment when choosing the fast dirty sync path
Date: Fri, 6 Mar 2020 11:41:09 +0800
Message-ID: <20200306034109.19992-1-zhukeqian1@huawei.com>
Cc: wanghaibin.wang@huawei.com, qemu-arm@nongnu.org, Keqian Zhu, "Dr. David Alan Gilbert", Paolo Bonzini

Since commit aa777e297c840, the RAMBlock length is required to be aligned
to a whole word of pages (BITS_PER_LONG pages) before the fast dirty sync
path is taken. The reason given was: "If the Ramblock is less than 64 pages
in length that long can contain bits representing two different RAMBlocks,
but the code will update the bmap belonging to the 1st RAMBlock only while
having updated the total dirty page count for both."

That was true before commit 801110ab22be1ef2, which aligns ram_addr_t
allocation on long boundaries. So currently we can no longer "update the
total dirty page count for both". Remove the alignment constraint on the
length so that the fast dirty sync path can always be used.

Signed-off-by: Keqian Zhu
---
Cc: Paolo Bonzini
Cc: "Dr. David Alan Gilbert"
Cc: qemu-devel@nongnu.org
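To illustrate the reasoning above, here is a minimal standalone sketch (it
is not QEMU code; dirty_bits[], bmap[] and count_and_sync() are made-up
stand-ins for the real dirty-memory structures): the word-based fast path
only needs the *start* to be word aligned, because rounding the length up
to whole longs can only touch bits that still belong to the same RAMBlock
once ram_addr_t allocation is long aligned.

#include <stdio.h>
#include <limits.h>

#define PAGES_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

/* One bit per page: a global dirty bitmap and a per-block bitmap. */
static unsigned long dirty_bits[4];
static unsigned long bmap[4];

/*
 * Fast path: 'start_page' is word aligned, so whole longs can be moved.
 * The number of longs is rounded up, which may cover pages past
 * 'length_pages' -- harmless as long as blocks are allocated on long
 * boundaries, because those extra bits still belong to this block.
 */
static unsigned count_and_sync(unsigned long start_page,
                               unsigned long length_pages)
{
    unsigned long word = start_page / PAGES_PER_WORD;
    unsigned long nr = (length_pages + PAGES_PER_WORD - 1) / PAGES_PER_WORD;
    unsigned num_dirty = 0;

    for (unsigned long k = 0; k < nr; k++) {
        unsigned long bits = dirty_bits[word + k];
        if (bits) {
            num_dirty += __builtin_popcountl(bits);
            bmap[word + k] |= bits;
            dirty_bits[word + k] = 0;
        }
    }
    return num_dirty;
}

int main(void)
{
    dirty_bits[0] = 0x5;    /* pages 0 and 2 start out dirty */
    /* length (10 pages) is deliberately not a multiple of PAGES_PER_WORD */
    printf("%u pages were dirty\n", count_and_sync(0, 10));
    return 0;
}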
---
 include/exec/ram_addr.h | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..40fd89e1cd 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -445,15 +445,13 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t length,
                                                uint64_t *real_dirty_pages)
 {
-    ram_addr_t addr;
-    unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
+    ram_addr_t start_global = start + rb->offset;
+    unsigned long word = BIT_WORD(start_global >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;
 
-    /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
-         (start + rb->offset) &&
-        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
+    /* start address is aligned at the start of a word? */
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global) {
         int k;
         int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
         unsigned long * const *src;
@@ -495,11 +493,10 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             memory_region_clear_dirty_bitmap(rb->mr, start, length);
         }
     } else {
-        ram_addr_t offset = rb->offset;
-
+        ram_addr_t addr;
         for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
             if (cpu_physical_memory_test_and_clear_dirty(
-                        start + addr + offset,
+                        start_global + addr,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
                 *real_dirty_pages += 1;