From patchwork Wed Sep 26 03:16:30 2018
X-Patchwork-Submitter: Jason Wang
X-Patchwork-Id: 974817
From: Jason Wang
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, zhanghailiang, Li Zhijian, Jason Wang,
    Zhang Chen, Zhang Chen
Date: Wed, 26 Sep 2018 11:16:30 +0800
Message-Id: <20180926031650.8892-6-jasowang@redhat.com>
In-Reply-To: <20180926031650.8892-1-jasowang@redhat.com>
References: <20180926031650.8892-1-jasowang@redhat.com>
Subject: [Qemu-devel] [PULL 05/25] COLO: Add block replication into colo process

From: Zhang Chen

Make sure the master starts block replication only after the slave's block
replication has started. Besides, we need to activate the VM's block devices
before going into the COLO state.
Signed-off-by: zhanghailiang
Signed-off-by: Li Zhijian
Signed-off-by: Zhang Chen
Signed-off-by: Zhang Chen
Signed-off-by: Jason Wang
---
 migration/colo.c      | 43 +++++++++++++++++++++++++++++++++++++++++++
 migration/migration.c | 10 ++++++++++
 2 files changed, 53 insertions(+)

diff --git a/migration/colo.c b/migration/colo.c
index f4bdfde170..af04010061 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -27,6 +27,7 @@
 #include "replication.h"
 #include "net/colo-compare.h"
 #include "net/colo.h"
+#include "block/block.h"

 static bool vmstate_loading;
 static Notifier packets_compare_notifier;
@@ -56,6 +57,7 @@ static void secondary_vm_do_failover(void)
 {
     int old_state;
     MigrationIncomingState *mis = migration_incoming_get_current();
+    Error *local_err = NULL;

     /* Can not do failover during the process of VM's loading VMstate, Or
      * it will break the secondary VM.
@@ -73,6 +75,11 @@ static void secondary_vm_do_failover(void)
     migrate_set_state(&mis->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);

+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+    }
+
     if (!autostart) {
         error_report("\"-S\" qemu option will be ignored in secondary side");
         /* recover runstate to normal migration finish state */
@@ -110,6 +117,7 @@ static void primary_vm_do_failover(void)
 {
     MigrationState *s = migrate_get_current();
     int old_state;
+    Error *local_err = NULL;

     migrate_set_state(&s->state, MIGRATION_STATUS_COLO,
                       MIGRATION_STATUS_COMPLETED);
@@ -133,6 +141,13 @@ static void primary_vm_do_failover(void)
                      FailoverStatus_str(old_state));
         return;
     }
+
+    replication_stop_all(true, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        local_err = NULL;
+    }
+
     /* Notify COLO thread that failover work is finished */
     qemu_sem_post(&s->colo_exit_sem);
 }
@@ -356,6 +371,11 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
     qemu_savevm_state_header(fb);
     qemu_savevm_state_setup(fb);
     qemu_mutex_lock_iothread();
+    replication_do_checkpoint_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     qemu_savevm_state_complete_precopy(fb, false, false);
     qemu_mutex_unlock_iothread();

@@ -446,6 +466,12 @@ static void colo_process_checkpoint(MigrationState *s)
     object_unref(OBJECT(bioc));

     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_PRIMARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+
     vm_start();
     qemu_mutex_unlock_iothread();
     trace_colo_vm_state_change("stop", "run");
@@ -586,6 +612,11 @@ void *colo_process_incoming_thread(void *opaque)
     object_unref(OBJECT(bioc));

     qemu_mutex_lock_iothread();
+    replication_start_all(REPLICATION_MODE_SECONDARY, &local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
     vm_start();
     trace_colo_vm_state_change("stop", "run");
     qemu_mutex_unlock_iothread();
@@ -666,6 +697,18 @@ void *colo_process_incoming_thread(void *opaque)
         goto out;
     }

+    replication_get_error_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+    /* discard colo disk buffer */
+    replication_do_checkpoint_all(&local_err);
+    if (local_err) {
+        qemu_mutex_unlock_iothread();
+        goto out;
+    }
+
     vmstate_loading = false;
     vm_start();
     trace_colo_vm_state_change("stop", "run");
diff --git a/migration/migration.c b/migration/migration.c
index 6e637d3997..c93b11127e 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -386,6 +386,7 @@ static void process_incoming_migration_co(void *opaque)
     MigrationIncomingState *mis = migration_incoming_get_current();
     PostcopyState ps;
     int ret;
+    Error *local_err = NULL;

     assert(mis->from_src_file);
     mis->migration_incoming_co = qemu_coroutine_self();
@@ -418,6 +419,15 @@ static void process_incoming_migration_co(void *opaque)

     /* we get COLO info, and know if we are in COLO mode */
     if (!ret && migration_incoming_enable_colo()) {
+        /* Make sure all file formats flush their mutable metadata */
+        bdrv_invalidate_cache_all(&local_err);
+        if (local_err) {
+            migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
+                              MIGRATION_STATUS_FAILED);
+            error_report_err(local_err);
+            exit(EXIT_FAILURE);
+        }
+
         qemu_thread_create(&mis->colo_incoming_thread, "COLO incoming",
             colo_process_incoming_thread, mis, QEMU_THREAD_JOINABLE);
         mis->have_colo_incoming_thread = true;
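
For context, the ordering described in the commit message maps onto the three
replication entry points this patch calls: replication_start_all(),
replication_do_checkpoint_all() and replication_stop_all(), each taking an
Error ** out-parameter. Below is a minimal, self-contained C sketch of that
lifecycle. It is not QEMU code: the Error type and the repl_* helpers are
simplified local stand-ins used purely for illustration.

/*
 * Sketch of the call ordering the patch wires up: the secondary starts
 * block replication first, the primary starts afterwards, each checkpoint
 * discards the secondary's disk buffer, and failover on either side stops
 * replication while only reporting (not aborting on) errors.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct Error { const char *msg; } Error;

typedef enum { REPL_MODE_PRIMARY, REPL_MODE_SECONDARY } ReplMode;

/* Stand-in: report and clear a pending error, mirroring error_report_err(). */
static void report_err(Error **errp)
{
    if (*errp) {
        fprintf(stderr, "error: %s\n", (*errp)->msg);
        *errp = NULL;
    }
}

/* Stand-ins for the replication entry points used by the patch. */
static void repl_start(ReplMode mode, Error **errp)
{
    printf("start block replication on %s\n",
           mode == REPL_MODE_PRIMARY ? "primary" : "secondary");
}

static void repl_checkpoint(Error **errp)
{
    printf("checkpoint: flush primary writes, discard secondary buffer\n");
}

static void repl_stop(bool failover, Error **errp)
{
    printf("stop block replication (failover=%d)\n", failover);
}

int main(void)
{
    Error *err = NULL;

    /* Ordering constraint from the commit message: secondary before primary. */
    repl_start(REPL_MODE_SECONDARY, &err);
    repl_start(REPL_MODE_PRIMARY, &err);

    /* Periodic COLO checkpoint while both sides run. */
    repl_checkpoint(&err);

    /* Failover path: stop replication, report any error, keep going. */
    repl_stop(true, &err);
    report_err(&err);

    return 0;
}

Compiled standalone, this just prints the start/checkpoint/stop sequence; in
the actual patch each step additionally unlocks the iothread and jumps to the
error path whenever local_err is set.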