From patchwork Mon Jul  1 12:18:33 2013
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 1 Jul 2013 14:18:33 +0200
Message-Id: <1372681129-18528-3-git-send-email-stefanha@redhat.com>
In-Reply-To: <1372681129-18528-1-git-send-email-stefanha@redhat.com>
References: <1372681129-18528-1-git-send-email-stefanha@redhat.com>
Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi
Subject: [Qemu-devel] [PATCH v5 02/18] block: stop relying on io_flush() in bdrv_drain_all()

If a block driver has no file descriptors to monitor but there are still
active requests, it can return 1 from .io_flush().  This is used to spin
during synchronous I/O.

Stop relying on .io_flush() and instead check
QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
requests.

This is the first step in removing .io_flush() so that event loops no
longer need to have the concept of synchronous I/O.  Eventually we may be
able to kill synchronous I/O completely by running everything in a
coroutine, but that is future work.

Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
that bdrv_requests_pending(bs) can safely access it.  In practice bs is
g_malloc0() so the memory is already zeroed but it's safer to initialize
the queue properly.
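
To make the new stopping condition concrete, here is a minimal sketch
(illustrative only, not part of this change; the helper name
drain_one_example is made up, but the calls are the same ones this patch
uses) of draining a single BlockDriverState:

    /* Illustrative only: spin the event loop until no request on this
     * BDS is in flight.  bdrv_drain_all() below generalizes this across
     * all BlockDriverStates and also restarts throttled requests.
     */
    static void drain_one_example(BlockDriverState *bs)
    {
        while (!QLIST_EMPTY(&bs->tracked_requests)) {
            aio_poll(qemu_get_aio_context(), true); /* blocking poll */
        }
    }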
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 46 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/block.c b/block.c
index c26a9f8..a67b9b6 100644
--- a/block.c
+++ b/block.c
@@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
 
 void bdrv_io_limits_enable(BlockDriverState *bs)
 {
-    qemu_co_queue_init(&bs->throttled_reqs);
     bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
     bs->io_limits_enabled = true;
 }
@@ -306,6 +305,7 @@ BlockDriverState *bdrv_new(const char *device_name)
     bdrv_iostatus_disable(bs);
     notifier_list_init(&bs->close_notifiers);
     notifier_with_return_list_init(&bs->before_write_notifiers);
+    qemu_co_queue_init(&bs->throttled_reqs);
 
     return bs;
 }
@@ -1413,6 +1413,35 @@ void bdrv_close_all(void)
     }
 }
 
+/* Check if any requests are in-flight (including throttled requests) */
+static bool bdrv_requests_pending(BlockDriverState *bs)
+{
+    if (!QLIST_EMPTY(&bs->tracked_requests)) {
+        return true;
+    }
+    if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+        return true;
+    }
+    if (bs->file && bdrv_requests_pending(bs->file)) {
+        return true;
+    }
+    if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
+        return true;
+    }
+    return false;
+}
+
+static bool bdrv_requests_pending_all(void)
+{
+    BlockDriverState *bs;
+
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (bdrv_requests_pending(bs)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1427,27 +1456,22 @@ void bdrv_close_all(void)
  */
 void bdrv_drain_all(void)
 {
+    /* Always run first iteration so any pending completion BHs run */
+    bool busy = true;
     BlockDriverState *bs;
-    bool busy;
-
-    do {
-        busy = qemu_aio_wait();
 
+    while (busy) {
         /* FIXME: We do not have timer support here, so this is effectively
          * a busy wait.
          */
        QTAILQ_FOREACH(bs, &bdrv_states, list) {
             if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                 qemu_co_queue_restart_all(&bs->throttled_reqs);
-                busy = true;
             }
         }
-    } while (busy);
 
-    /* If requests are still pending there is a bug somewhere */
-    QTAILQ_FOREACH(bs, &bdrv_states, list) {
-        assert(QLIST_EMPTY(&bs->tracked_requests));
-        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+        busy = bdrv_requests_pending_all();
+        busy |= aio_poll(qemu_get_aio_context(), busy);
     }
 }
 
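
As background on the direction this series takes: once .io_flush() is
gone, a synchronous helper only needs a completion flag plus aio_poll().
A hypothetical sketch (SyncWaiter, sync_read_cb, and sync_read_example
are made-up names for illustration; bdrv_aio_readv() and aio_poll() are
the existing APIs):

    /* Hypothetical synchronous wrapper: submit an AIO request, then spin
     * the event loop until its completion callback has run.  No driver
     * cooperation via .io_flush() is needed to know when to stop.
     */
    typedef struct {
        bool done;
        int ret;
    } SyncWaiter;

    static void sync_read_cb(void *opaque, int ret)
    {
        SyncWaiter *w = opaque;
        w->ret = ret;
        w->done = true;
    }

    static int sync_read_example(BlockDriverState *bs, int64_t sector_num,
                                 QEMUIOVector *qiov, int nb_sectors)
    {
        SyncWaiter w = { .done = false, .ret = 0 };

        bdrv_aio_readv(bs, sector_num, qiov, nb_sectors, sync_read_cb, &w);
        while (!w.done) {
            aio_poll(qemu_get_aio_context(), true); /* block until progress */
        }
        return w.ret;
    }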