From patchwork Thu Apr 11 15:44:35 2013
X-Patchwork-Id: 235812
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank@linux.vnet.ibm.com
Date: Thu, 11 Apr 2013 17:44:35 +0200
Message-Id: <1365695085-27970-4-git-send-email-stefanha@redhat.com>
In-Reply-To: <1365695085-27970-1-git-send-email-stefanha@redhat.com>
References: <1365695085-27970-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [RFC 03/13] aio: stop using .io_flush()

Now that bdrv_drain_all() checks that requests are pending before calling
qemu_aio_wait(), it is no longer necessary to call .io_flush() handlers.

Behavior of aio_poll() changes as follows:

.io_flush() is no longer invoked and file descriptors are *always*
monitored.  Previously, returning 0 from .io_flush() would cause that file
descriptor to be skipped.

Due to these changes it is essential to check that requests are pending
before calling qemu_aio_wait().  Failure to do so means we block, for
example, waiting for an idle iSCSI socket to become readable when there
are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
before calling.

The next patches will remove .io_flush() handler code until we can
finally drop the io_flush arguments to aio_set_fd_handler() and friends.
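As an illustration (not part of this patch): a minimal caller-side sketch
of the new contract, where requests_pending() is a hypothetical stand-in
for whatever pending-request check a caller such as bdrv_drain_all()
performs:

    /* Sketch only.  requests_pending() is a hypothetical placeholder;
     * real callers consult their own request lists.
     */
    static void drain_example(void)
    {
        while (requests_pending()) {
            /* Safe to block: at least one request is in flight, so some
             * handler will eventually fire and make progress.
             */
            qemu_aio_wait();
        }

        /* Calling qemu_aio_wait() here, with nothing pending, could block
         * forever (e.g. on an idle iSCSI socket) because aio_poll() no
         * longer consults .io_flush() to skip idle file descriptors.
         */
    }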
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c | 24 ++----------------------
 aio-win32.c | 23 ++---------------------
 2 files changed, 4 insertions(+), 43 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..569e603 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -23,7 +23,6 @@ struct AioHandler
     GPollFD pfd;
     IOHandler *io_read;
     IOHandler *io_write;
-    AioFlushHandler *io_flush;
     int deleted;
     int pollfds_idx;
     void *opaque;
@@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
         /* Update handler with latest information */
         node->io_read = io_read;
         node->io_write = io_write;
-        node->io_flush = io_flush;
         node->opaque = opaque;
         node->pollfds_idx = -1;
 
@@ -173,7 +171,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int ret;
-    bool busy, progress;
+    bool progress;
 
     progress = false;
 
@@ -200,20 +198,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     g_array_set_size(ctx->pollfds, 0);
 
     /* fill pollfds */
-    busy = false;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         node->pollfds_idx = -1;
-
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->opaque) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->pfd.events) {
             GPollFD pfd = {
                 .fd = node->pfd.fd,
@@ -226,11 +212,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
-        return progress;
-    }
-
     /* wait until next event */
     ret = g_poll((GPollFD *)ctx->pollfds->data,
                  ctx->pollfds->len,
@@ -250,6 +231,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..0980d08 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -23,7 +23,6 @@
 struct AioHandler {
     EventNotifier *e;
     EventNotifierHandler *io_notify;
-    AioFlushEventNotifierHandler *io_flush;
     GPollFD pfd;
     int deleted;
     QLIST_ENTRY(AioHandler) node;
@@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
         }
         /* Update handler with latest information */
         node->io_notify = io_notify;
-        node->io_flush = io_flush;
     }
 
     aio_notify(ctx);
@@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool busy, progress;
+    bool progress;
     int count;
 
     progress = false;
 
@@ -147,19 +145,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers++;
 
     /* fill fd sets */
-    busy = false;
     count = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->e) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->io_notify) {
             events[count++] = event_notifier_get_handle(node->e);
         }
@@ -167,11 +154,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
-        return progress;
-    }
-
     /* wait until next event */
     while (count > 0) {
         int timeout = blocking ? INFINITE : 0;
@@ -214,6 +196,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
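As an aside (not part of this patch): the io_flush arguments survive in
the registration functions for now, so during the transition a handler
would simply stop supplying a flush callback.  A hedged sketch, assuming
the aio_set_fd_handler() signature of this era and hypothetical handler
names:

    /* Sketch: register read/write handlers with no .io_flush().  Passing
     * NULL is the transitional style; aio_poll() no longer calls it
     * either way.
     */
    aio_set_fd_handler(ctx, fd,
                       my_read_handler,   /* io_read */
                       my_write_handler,  /* io_write */
                       NULL,              /* io_flush: no longer used */
                       opaque);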