From patchwork Mon Sep 3 09:26:42 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 965320
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Date: Mon, 3 Sep 2018 17:26:42 +0800
Message-Id: <20180903092644.25812-3-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To:
<20180903092644.25812-1-xiaoguangrong@tencent.com>
References: <20180903092644.25812-1-xiaoguangrong@tencent.com>
Subject: [Qemu-devel] [PATCH v5 2/4] migration: fix calculating xbzrle_counters.cache_miss_rate
Cc: kvm@vger.kernel.org, quintela@redhat.com, Xiao Guangrong, qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn

From: Xiao Guangrong

As Peter pointed out:

| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
|   per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
|   it's per-host-page granularity
|
| An example is that when we migrate a 2M huge page in the guest, we
| will only increase the RAMState.iterations by 1 (since
| ram_find_and_save_block() will be called once), but we might increase
| xbzrle_counters.cache_miss for 2M/4K=512 times (we'll call
| save_xbzrle_page() that many times) if all the pages got cache miss.
| Then IMHO the cache miss rate will be 512/1=51200% (while it should
| actually be just 100% cache miss).
He also suggested that, since xbzrle_counters.cache_miss_rate is the only user
of rs->iterations, we can adapt it to count target guest pages instead.

After that, rename 'iterations' to 'target_page_count' to better reflect
its meaning.

Suggested-by: Peter Xu
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
Reviewed-by: Juan Quintela
---
 migration/ram.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2ad07b5e15..25af797c0a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -301,10 +301,10 @@ struct RAMState {
     uint64_t num_dirty_pages_period;
     /* xbzrle misses since the beginning of the period */
     uint64_t xbzrle_cache_miss_prev;
-    /* number of iterations at the beginning of period */
-    uint64_t iterations_prev;
-    /* Iterations since start */
-    uint64_t iterations;
+    /* total handled target pages at the beginning of period */
+    uint64_t target_page_count_prev;
+    /* total handled target pages since start */
+    uint64_t target_page_count;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
     /* last dirty_sync_count we have seen */
@@ -1594,19 +1594,19 @@ uint64_t ram_pagesize_summary(void)

 static void migration_update_rates(RAMState *rs, int64_t end_time)
 {
-    uint64_t iter_count = rs->iterations - rs->iterations_prev;
+    uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;

     /* calculate period counters */
     ram_counters.dirty_pages_rate = rs->num_dirty_pages_period * 1000
                 / (end_time - rs->time_last_bitmap_sync);

-    if (!iter_count) {
+    if (!page_count) {
         return;
     }

     if (migrate_use_xbzrle()) {
         xbzrle_counters.cache_miss_rate = (double)(xbzrle_counters.cache_miss -
-            rs->xbzrle_cache_miss_prev) / iter_count;
+            rs->xbzrle_cache_miss_prev) / page_count;
         rs->xbzrle_cache_miss_prev = xbzrle_counters.cache_miss;
     }
 }
@@ -1664,7 +1664,7 @@ static void migration_bitmap_sync(RAMState *rs)

     migration_update_rates(rs, end_time);

-    rs->iterations_prev = rs->iterations;
+    rs->target_page_count_prev = rs->target_page_count;

     /* reset period counters */
     rs->time_last_bitmap_sync = end_time;
@@ -3209,7 +3209,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
             done = 1;
             break;
         }
-        rs->iterations++;
+        rs->target_page_count += pages;

         /* we want to check in the 1st loop, just in case it was the 1st time
            and we had to sync the dirty bitmap.