From patchwork Wed Feb 20 10:28:31 2013
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Date: Wed, 20 Feb 2013 11:28:31 +0100
Message-Id: <1361356113-11049-9-git-send-email-stefanha@redhat.com>
In-Reply-To: <1361356113-11049-1-git-send-email-stefanha@redhat.com>
References: <1361356113-11049-1-git-send-email-stefanha@redhat.com>
Cc: Paolo Bonzini, Anthony Liguori, Laszlo Ersek, Stefan Hajnoczi
Subject: [Qemu-devel] [PATCH v4 08/10] aio: extract aio_dispatch() from aio_poll()
X-Patchwork-Id: 222075

We will need to loop over AioHandlers calling ->io_read()/->io_write()
when aio_poll() is converted from select(2) to g_poll(2).

Luckily the code for this already exists; extract it into the new
aio_dispatch() function.

Two small changes:

 * aio_poll() checks !node->deleted to avoid calling handlers that have
   been deleted.

 * Fix typo 'then' -> 'them' in aio_poll() comment.
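To make the new !node->deleted check concrete, here is a minimal,
self-contained sketch of the deferred-deletion walk that aio_dispatch()
performs. This is illustration, not QEMU code: Handler, add_handler(),
delete_handler(), dispatch(), and the demo callbacks are invented names,
and the list here is deliberately simpler than aio-posix.c. Only the
pattern mirrors the real code: removal during a walk merely marks the
node, the walk re-checks the flag before invoking a callback, and dead
nodes are reaped once the walker has stepped past them.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct Handler Handler;
struct Handler {
    void (*io_read)(void *opaque);
    void *opaque;
    bool deleted;
    Handler *next;
};

static Handler *handlers;
static int walking_handlers;

/* Prepend a new handler to the list. */
static Handler *add_handler(void (*io_read)(void *), void *opaque)
{
    Handler *h = calloc(1, sizeof(*h));
    h->io_read = io_read;
    h->opaque = opaque;
    h->next = handlers;
    handlers = h;
    return h;
}

/* Unlink and free h immediately, unless a walk is in progress; in
 * that case only mark it so the walker can skip and reap it. */
static void delete_handler(Handler *h)
{
    if (walking_handlers > 0) {
        h->deleted = true;
        return;
    }
    for (Handler **p = &handlers; *p; p = &(*p)->next) {
        if (*p == h) {
            *p = h->next;
            free(h);
            return;
        }
    }
}

/* The aio_dispatch()-style walk: call only live handlers, reap dead
 * nodes after stepping past them. */
static bool dispatch(void)
{
    bool progress = false;

    walking_handlers++;
    Handler *node = handlers;
    while (node) {
        Handler *tmp = node;

        /* The check the patch adds: a callback earlier in this same
         * walk may have marked this node deleted. */
        if (!node->deleted && node->io_read) {
            node->io_read(node->opaque);
            progress = true;
        }

        node = node->next;

        if (walking_handlers == 1 && tmp->deleted) {
            /* We are the only walker and have moved past tmp. */
            for (Handler **p = &handlers; *p; p = &(*p)->next) {
                if (*p == tmp) {
                    *p = tmp->next;
                    break;
                }
            }
            free(tmp);
        }
    }
    walking_handlers--;
    return progress;
}

static Handler *victim;

static void read_a(void *opaque)
{
    (void)opaque;
    printf("A fired; deleting B during the walk\n");
    delete_handler(victim);  /* only marks ->deleted */
}

static void read_b(void *opaque)
{
    (void)opaque;
    printf("B fired (must not happen)\n");
}

int main(void)
{
    victim = add_handler(read_b, NULL);  /* B added first... */
    add_handler(read_a, NULL);           /* ...A prepended, runs first */
    dispatch();
    return 0;
}

Running this prints only "A fired; deleting B during the walk": A marks
B deleted mid-walk, so the !deleted check prevents B's callback from
firing, and B is freed once the walker has passed it.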
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c | 57 +++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 35 insertions(+), 22 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index fe4dbb4..35131a3 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -129,30 +129,12 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+static bool aio_dispatch(AioContext *ctx)
 {
-    static struct timeval tv0;
     AioHandler *node;
-    fd_set rdfds, wrfds;
-    int max_fd = -1;
-    int ret;
-    bool busy, progress;
-
-    progress = false;
-
-    /*
-     * If there are callbacks left that have been queued, we need to call then.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for qemu_aio_wait loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
+    bool progress = false;
 
     /*
-     * Then dispatch any pending callbacks from the GSource.
-     *
      * We have to walk very carefully in case qemu_aio_set_fd_handler is
      * called while we're walking.
      */
@@ -167,11 +149,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         node->pfd.revents = 0;
 
         /* See comment in aio_pending.  */
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
+        if (!node->deleted &&
+            (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
+            node->io_read) {
             node->io_read(node->opaque);
             progress = true;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
+        if (!node->deleted &&
+            (revents & (G_IO_OUT | G_IO_ERR)) &&
+            node->io_write) {
             node->io_write(node->opaque);
             progress = true;
         }
@@ -186,6 +172,33 @@ bool aio_poll(AioContext *ctx, bool blocking)
             g_free(tmp);
         }
     }
+    return progress;
+}
+
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    static struct timeval tv0;
+    AioHandler *node;
+    fd_set rdfds, wrfds;
+    int max_fd = -1;
+    int ret;
+    bool busy, progress;
+
+    progress = false;
+
+    /*
+     * If there are callbacks left that have been queued, we need to call them.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for qemu_aio_wait loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        blocking = false;
+        progress = true;
+    }
+
+    if (aio_dispatch(ctx)) {
+        progress = true;
+    }
 
     if (progress && !blocking) {
         return true;
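For context on where the conversion is headed, below is a tiny
self-contained demo of the g_poll(2) fill/dispatch split that this
extraction prepares for. Again this is not QEMU code: the pipe setup,
main(), and the build command are invented for illustration; only
g_poll(), GPollFD, and the G_IO_* condition flags are real GLib API.
The revents walk marked "dispatch phase" is the job that the new
aio_dispatch() owns in the real code.

/* Build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) != 0) {
        return 1;
    }
    /* Make the read end readable so g_poll() reports G_IO_IN. */
    if (write(pipefd[1], "x", 1) != 1) {
        return 1;
    }

    /* Fill phase: aio_poll() would build one GPollFD per AioHandler. */
    GPollFD pfd = {
        .fd = pipefd[0],
        .events = G_IO_IN | G_IO_HUP | G_IO_ERR,
    };

    int ret = g_poll(&pfd, 1, 1000 /* timeout in ms */);
    if (ret > 0 && (pfd.revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) {
        /* Dispatch phase: here aio_dispatch() would invoke
         * node->io_read(node->opaque) for each ready handler. */
        char buf[1];
        ssize_t n = read(pipefd[0], buf, sizeof(buf));
        printf("readable, read %zd byte(s)\n", n);
    }

    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}

One design note: g_poll() takes a plain millisecond timeout, which is
what eventually replaces the select(2)-era struct timeval tv0 still
visible in aio_poll() above.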