From patchwork Tue Nov 15 04:14:51 2016
X-Patchwork-Submitter: Jeff Cody <jcody@redhat.com>
X-Patchwork-Id: 694837
From: Jeff Cody <jcody@redhat.com>
To: qemu-block@nongnu.org
Cc: peter.maydell@linaro.org, jcody@redhat.com, qemu-devel@nongnu.org,
    stefanha@redhat.com
Date: Mon, 14 Nov 2016 23:14:51 -0500
Message-Id: <1479183291-14086-14-git-send-email-jcody@redhat.com>
In-Reply-To: <1479183291-14086-1-git-send-email-jcody@redhat.com>
References: <1479183291-14086-1-git-send-email-jcody@redhat.com>
Subject: [Qemu-devel] [PULL for-2.8 13/13] mirror: do not flush every time the disks are synced

From: Paolo Bonzini <pbonzini@redhat.com>

This puts a huge strain on the disks when there are many concurrent
migrations.  With this patch we only flush twice: just before issuing
the event, and just before pivoting to the destination.  If management
will complete the job close to the BLOCK_JOB_READY event, the cost of
the second flush should be small anyway.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161109162008.27287-2-pbonzini@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
---
 block/mirror.c | 40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 62ac87f..301ba92 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -615,6 +615,20 @@ static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
     return 0;
 }
 
+/* Called when going out of the streaming phase to flush the bulk of the
+ * data to the medium, or just before completing.
+ */
+static int mirror_flush(MirrorBlockJob *s)
+{
+    int ret = blk_flush(s->target);
+    if (ret < 0) {
+        if (mirror_error_action(s, false, -ret) == BLOCK_ERROR_ACTION_REPORT) {
+            s->ret = ret;
+        }
+    }
+    return ret;
+}
+
 static void coroutine_fn mirror_run(void *opaque)
 {
     MirrorBlockJob *s = opaque;
@@ -727,27 +741,23 @@ static void coroutine_fn mirror_run(void *opaque)
         should_complete = false;
         if (s->in_flight == 0 && cnt == 0) {
             trace_mirror_before_flush(s);
-            ret = blk_flush(s->target);
-            if (ret < 0) {
-                if (mirror_error_action(s, false, -ret) ==
-                    BLOCK_ERROR_ACTION_REPORT) {
-                    goto immediate_exit;
+            if (!s->synced) {
+                if (mirror_flush(s) < 0) {
+                    /* Go check s->ret. */
+                    continue;
                 }
-            } else {
                 /* We're out of the streaming phase.  From now on, if the job
                  * is cancelled we will actually complete all pending I/O and
                  * report completion.  This way, block-job-cancel will leave
                  * the target in a consistent state.
                  */
-                if (!s->synced) {
-                    block_job_event_ready(&s->common);
-                    s->synced = true;
-                }
-
-                should_complete = s->should_complete ||
-                    block_job_is_cancelled(&s->common);
-                cnt = bdrv_get_dirty_count(s->dirty_bitmap);
+                block_job_event_ready(&s->common);
+                s->synced = true;
             }
+
+            should_complete = s->should_complete ||
+                block_job_is_cancelled(&s->common);
+            cnt = bdrv_get_dirty_count(s->dirty_bitmap);
         }
 
         if (cnt == 0 && should_complete) {
@@ -765,7 +775,7 @@ static void coroutine_fn mirror_run(void *opaque)
 
             bdrv_drained_begin(bs);
             cnt = bdrv_get_dirty_count(s->dirty_bitmap);
-            if (cnt > 0) {
+            if (cnt > 0 || mirror_flush(s) < 0) {
                 bdrv_drained_end(bs);
                 continue;
            }
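
To see the policy change in isolation from mirror_run()'s control flow, here
is a minimal, self-contained sketch (not QEMU code: sim_flush, dirty_counts
and the loop are hypothetical stand-ins for blk_flush() and the job's
iterations) that only models what the commit message describes: flush once
when the job first becomes synced, and once more before pivoting, instead of
on every pass where the dirty count reaches zero.

#include <stdbool.h>
#include <stdio.h>

static int old_flushes, new_flushes;

/* Stand-in for blk_flush(): just count how often it would be called. */
static void sim_flush(int *counter)
{
    (*counter)++;
}

int main(void)
{
    /* Dirty-chunk counts seen at the top of successive iterations: the
     * guest keeps dirtying a little data, so the job reaches cnt == 0
     * many times before management finally completes it. */
    int dirty_counts[] = { 5, 0, 2, 0, 1, 0, 0, 0 };
    int iterations = sizeof(dirty_counts) / sizeof(dirty_counts[0]);
    bool synced = false;

    for (int i = 0; i < iterations; i++) {
        int cnt = dirty_counts[i];
        if (cnt != 0) {
            continue;                   /* still copying, nobody flushes */
        }

        /* Old policy: flush on every iteration where the disks are in
         * sync (cnt == 0). */
        sim_flush(&old_flushes);

        /* New policy: flush only on the transition into the synced state,
         * just before BLOCK_JOB_READY would be emitted. */
        if (!synced) {
            sim_flush(&new_flushes);
            synced = true;
        }
    }

    /* The new policy issues one more flush right before pivoting to the
     * destination; the old policy already flushed in its final iteration
     * above. */
    sim_flush(&new_flushes);

    printf("old policy: %d flushes, new policy: %d flushes\n",
           old_flushes, new_flushes);
    return 0;
}

For this particular sequence the sketch prints "old policy: 5 flushes, new
policy: 2 flushes"; the number of flushes no longer grows with how often the
job catches up with the guest, which is the point of the patch.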