From patchwork Tue Mar 10 09:17:04 2020
X-Patchwork-Submitter: Keqian Zhu
X-Patchwork-Id: 1252045
From: Keqian Zhu
Subject: [PATCH v2 2/2] migration: don't require length alignment when choosing the fast dirty sync path
Date: Tue, 10 Mar 2020 17:17:04 +0800
Message-ID: <20200310091704.42340-3-zhukeqian1@huawei.com>
In-Reply-To: <20200310091704.42340-1-zhukeqian1@huawei.com>
References: <20200310091704.42340-1-zhukeqian1@huawei.com>
Cc: Paolo Bonzini, qemu-arm@nongnu.org, Keqian Zhu, "Dr . David Alan Gilbert", wanghaibin.wang@huawei.com

In commit aa777e297c84 ("cpu_physical_memory_sync_dirty_bitmap: Another
alignment fix"), the ramblock length was required to be aligned to a word
of pages before the fast dirty sync path could be chosen. The stated
reason: "If the Ramblock is less than 64 pages in length that long can
contain bits representing two different RAMBlocks, but the code will
update the bmap belonging to the 1st RAMBlock only while having updated
the total dirty page count for both."

That was true before commit 801110ab22be ("find_ram_offset: Align
ram_addr_t allocation on long boundaries"), which aligns ram_addr_t
allocation on long boundaries. With that in place, a bitmap word can no
longer cover two RAMBlocks, so we won't "update the total dirty page
count for both". By removing the length alignment constraint from the
fast path, we can always take the fast dirty sync path whenever
start_global is aligned to a word of pages.
Signed-off-by: Keqian Zhu
---
 include/exec/ram_addr.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 8311efb7bc..57b3edf376 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -450,9 +450,8 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;

-    /* start address and length is aligned at the start of a word? */
-    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global &&
-        !(length & ((BITS_PER_LONG << TARGET_PAGE_BITS) - 1))) {
+    /* start address is aligned at the start of a word? */
+    if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == start_global) {
         int k;
         int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
         unsigned long * const *src;