From patchwork Wed Oct 7 06:20:42 2015
X-Patchwork-Submitter: "Denis V. Lunev"
X-Patchwork-Id: 527150
From: "Denis V. Lunev"
Date: Wed, 7 Oct 2015 09:20:42 +0300
Message-Id: <1444198846-5383-5-git-send-email-den@openvz.org>
In-Reply-To: <1444198846-5383-1-git-send-email-den@openvz.org>
References: <5614531B.5080107@redhat.com> <1444198846-5383-1-git-send-email-den@openvz.org>
Cc: "Denis V. Lunev", Igor Redko, jsnow@redhat.com, qemu-devel@nongnu.org, annam@virtuozzo.com
Subject: [Qemu-devel] [PATCH 4/8] migration: add function for resetting migration bitmap

From: Igor Redko

Add ram_migration_bitmap_reset(), which resets migration_bitmap and
returns the number of pages dirtied since the last call.

While estimating the dirty-bytes rate and the expected migration
downtime, we must avoid copying or transferring any data, but we still
need to obtain the number of dirtied bytes and pass it to the
estimator. Even more importantly, we MUST NOT stop the virtual machine
during the test. Therefore we perform only the "begin", "pending" and
"iterate" stages of migration.

> if ((pending_size && pending_size >= max_size)
>     || (migrate_is_test())) {
>     qemu_savevm_state_iterate(s->file);
> } else {

If we did not explicitly check whether this migration is a test one, we
would have to make the expression "(pending_size && pending_size >=
max_size)" always evaluate to true during the test. For example, we
could set max_downtime to 0 so that max_size would also be 0, but the
check for pending_size would still remain.

That check was added by commit
https://github.com/qemu/qemu/commit/b22ff1fbed9d7f1f677804cbaa9ee03ca17d0013
If we removed it, migration might hang at the "iterate" stage in
ram_find_and_save_block() (this happens under rare circumstances). If
we left it as is, the VM would be stopped during the test (this would
also happen only under a rare condition: no pages dirtied during the
test).
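Note: migrate_is_test() itself is introduced earlier in this series.
Purely as an illustration, and not part of this patch, such a helper
might be wired to a migration capability roughly as sketched below;
the MIGRATION_CAPABILITY_TEST name is an assumption made only for the
example.

/* Illustrative sketch only (not from this patch): assuming a "test"
 * migration capability named MIGRATION_CAPABILITY_TEST (hypothetical),
 * the helper used above could look along these lines:
 */
#include "migration/migration.h"

bool migrate_is_test(void)
{
    MigrationState *s = migrate_get_current();

    /* true only when the (assumed) "test" capability is enabled */
    return s->enabled_capabilities[MIGRATION_CAPABILITY_TEST];
}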
Signed-off-by: Igor Redko
Reviewed-by: Anna Melekhova
Signed-off-by: Denis V. Lunev
---
 migration/migration.c |  3 ++-
 migration/ram.c       | 26 +++++++++++++++++++++++++-
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index e0cad54..d6cb3e2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1043,7 +1043,8 @@ static void *migration_thread(void *opaque)
         if (!qemu_file_rate_limit(s->file)) {
             pending_size = qemu_savevm_state_pending(s->file, max_size);
             trace_migrate_pending(pending_size, max_size);
-            if (pending_size && pending_size >= max_size) {
+            if ((pending_size && pending_size >= max_size)
+                || (migrate_is_test())) {
                 qemu_savevm_state_iterate(s->file);
             } else {
                 trace_migration_thread_low_pending(pending_size);
diff --git a/migration/ram.c b/migration/ram.c
index 2d1d0b9..fbf0b7a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1120,6 +1120,25 @@ static void ram_migration_cancel(void *opaque)
     migration_end();
 }
 
+static uint64_t ram_migration_bitmap_reset(void)
+{
+    uint64_t dirty_pages_remaining;
+    int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
+    /* TODO think about more locks?
+     * For now only using for prediction so the only another writer
+     * is migration_bitmap_sync_range()
+     */
+    qemu_mutex_lock(&migration_bitmap_mutex);
+    rcu_read_lock();
+    ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+    dirty_pages_remaining = migration_dirty_pages;
+    bitmap_zero(migration_bitmap, ram_bitmap_pages);
+    migration_dirty_pages = 0;
+    rcu_read_unlock();
+    qemu_mutex_unlock(&migration_bitmap_mutex);
+    return dirty_pages_remaining;
+}
+
 static void reset_ram_globals(void)
 {
     last_seen_block = NULL;
@@ -1249,6 +1268,10 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     int64_t t0;
     int pages_sent = 0;
 
+    if (migrate_is_test()) {
+        return ram_migration_bitmap_reset();
+    }
+
     rcu_read_lock();
     if (ram_list.version != last_version) {
         reset_ram_globals();
@@ -1346,13 +1369,14 @@ static uint64_t ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
 
     remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
 
-    if (remaining_size < max_size) {
+    if ((remaining_size < max_size) || (migrate_is_test())) {
         qemu_mutex_lock_iothread();
         rcu_read_lock();
         migration_bitmap_sync();
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
         remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
+        ram_control_sync_hook(f, RAM_CONTROL_HOOK, &remaining_size);
     }
     return remaining_size;
 }
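
For context, here is a rough sketch (not part of this series) of how the
dirtied-page count returned by ram_migration_bitmap_reset() could be turned
into a dirty-rate and downtime estimate on the consumer side; the function
names and the fixed page size below are assumptions made only for
illustration.

/* Hypothetical example, not from this series: estimate the dirty rate and
 * the downtime needed to send the remaining dirty data, given the number of
 * pages dirtied over a sampling interval.
 */
#include <stdint.h>

#define EXAMPLE_PAGE_SIZE 4096      /* assumed target page size */

/* bytes dirtied per second over the sampling interval */
static double example_dirty_rate(uint64_t dirty_pages, double interval_sec)
{
    return (dirty_pages * (double)EXAMPLE_PAGE_SIZE) / interval_sec;
}

/* seconds needed to transfer the currently dirty data at a given bandwidth */
static double example_downtime(uint64_t dirty_pages, double bw_bytes_per_sec)
{
    return (dirty_pages * (double)EXAMPLE_PAGE_SIZE) / bw_bytes_per_sec;
}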