From patchwork Tue Nov 15 20:48:52 2016
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 695263
Date: Tue, 15 Nov 2016 20:48:52 +0000
From: Stefan Hajnoczi
To: ashish mittal
Cc: Kevin Wolf, Venkatesha.Mg@veritas.com, Ashish Mittal, Jeff Cody,
    qemu-devel, Rakesh Ranjan, Markus Armbruster, Ketan Nilangekar,
    Abhijit Dey, Paolo Bonzini, Buddhi.Madhav@veritas.com
Subject: Re: [Qemu-devel] [PATCH v3 RFC] block/vxhs: Initial commit to add
    Veritas HyperScale VxHS block device support
Message-ID: <20161115204852.GA6784@stefanha-x1.localdomain>
References: <1471149312-28148-1-git-send-email-ashish.mittal@veritas.com>
    <20160815102046.GC13261@redhat.com>
    <23ea8348-7ed9-7e45-1079-235614dc90c2@redhat.com>
    <20160823215803.GA24259@stefanha-x1.localdomain>
User-Agent: Mutt/1.7.1 (2016-10-04)

On Tue, Nov 15, 2016 at 11:02:34AM -0800, ashish mittal wrote:
> I had replied to the QEMUBH suggestion in the email below.
>
> Regards,
> Ashish
>
> On Tue, Aug 23, 2016 at 3:22 PM, ashish mittal wrote:
> > Thanks Stefan, I will look at block/quorum.c.
> >
> > I have sent V4 of the patch with reworked parsing logic for both
> > JSON and URI. Both are quite compact now.
> >
> > URI parsing now follows the suggestion given by Kevin:
> > /================/
> > However, you should use the proper interface to implement this, which
> > is .bdrv_parse_filename(). This is a function that gets a string and
> > converts it into a QDict, which is then passed to .bdrv_open(). So it
> > uses exactly the same code path in .bdrv_open() as if it were used
> > directly with QAPI.
> > /================/
> >
> > Additionally, I have fixed all the issues you pointed out on V1 of
> > the patch. The only change I haven't made is to replace the pipes with
> > a QEMUBH. I am hoping this will not hold up the patch from being
> > accepted, and that I can make this transition later with proper
> > development and testing.

Sorry, I forgot about this email. I still think the QEMUBH approach
makes sense. Please try the following patch; I have compiled it but not
run it. You are welcome to squash it into your next patch.

Signed-off-by: Stefan Hajnoczi
---8<---
diff --git a/block/vxhs.c b/block/vxhs.c
index 8913e8f..22fd989 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -60,9 +60,6 @@ typedef struct VXHSvDiskHostsInfo {
  * Structure per vDisk maintained for state
  */
 typedef struct BDRVVXHSState {
-    int fds[2];
-    int event_reader_pos;
-    VXHSAIOCB *qnio_event_acb;
     VXHSvDiskHostsInfo vdisk_hostinfo; /* Per host info */
     char *vdisk_guid;
 } BDRVVXHSState;
@@ -73,12 +70,33 @@ static QNIOLibState qniolib;
 /* vdisk prefix to pass to qnio */
 static const char vdisk_prefix[] = "/dev/of/vdisk";
 
+static void vxhs_complete_aio_bh(void *opaque)
+{
+    VXHSAIOCB *acb = opaque;
+    BlockCompletionFunc *cb = acb->common.cb;
+    void *cb_opaque = acb->common.opaque;
+    int ret = 0;
+
+    if (acb->err != 0) {
+        trace_vxhs_complete_aio(acb, acb->err);
+        /*
+         * We mask all the IO errors generically as EIO for upper layers
+         * Right now our IO Manager uses non standard error codes. Instead
+         * of confusing upper layers with incorrect interpretation we are
+         * doing this workaround.
+         */
+        ret = (-EIO);
+    }
+
+    qemu_aio_unref(acb);
+    cb(cb_opaque, ret);
+}
+
+/* Called from a libqnio thread */
 static void vxhs_iio_callback(int32_t rfd, uint32_t reason, void *ctx,
                               uint32_t error, uint32_t opcode)
 {
     VXHSAIOCB *acb = NULL;
-    BDRVVXHSState *s = NULL;
-    ssize_t ret;
 
     switch (opcode) {
     case IRP_READ_REQUEST:
@@ -91,7 +109,6 @@ static void vxhs_iio_callback(int32_t rfd, uint32_t reason, void *ctx,
          */
         if (ctx) {
             acb = ctx;
-            s = acb->common.bs->opaque;
         } else {
             trace_vxhs_iio_callback(error, reason);
             goto out;
@@ -104,8 +121,8 @@ static void vxhs_iio_callback(int32_t rfd, uint32_t reason, void *ctx,
             trace_vxhs_iio_callback(error, reason);
         }
 
-        ret = qemu_write_full(s->fds[VDISK_FD_WRITE], &acb, sizeof(acb));
-        g_assert(ret == sizeof(acb));
+        aio_bh_schedule_oneshot(bdrv_get_aio_context(acb->common.bs),
+                                vxhs_complete_aio_bh, acb);
         break;
 
     default:
@@ -223,53 +240,6 @@ static void vxhs_qnio_iio_close(BDRVVXHSState *s)
     vxhs_qnio_close();
 }
 
-static void vxhs_complete_aio(VXHSAIOCB *acb, BDRVVXHSState *s)
-{
-    BlockCompletionFunc *cb = acb->common.cb;
-    void *opaque = acb->common.opaque;
-    int ret = 0;
-
-    if (acb->err != 0) {
-        trace_vxhs_complete_aio(acb, acb->err);
-        /*
-         * We mask all the IO errors generically as EIO for upper layers
-         * Right now our IO Manager uses non standard error codes. Instead
-         * of confusing upper layers with incorrect interpretation we are
-         * doing this workaround.
-         */
-        ret = (-EIO);
-    }
-
-    qemu_aio_unref(acb);
-    cb(opaque, ret);
-}
-
-/*
- * This is the HyperScale event handler registered to QEMU.
- * It is invoked when any IO gets completed and written on pipe
- * by callback called from QNIO thread context. Then it marks
- * the AIO as completed, and releases HyperScale AIO callbacks.
- */
-static void vxhs_aio_event_reader(void *opaque)
-{
-    BDRVVXHSState *s = opaque;
-    char *p;
-    ssize_t ret;
-
-    do {
-        p = (char *)&s->qnio_event_acb;
-        ret = read(s->fds[VDISK_FD_READ], p + s->event_reader_pos,
-                   sizeof(s->qnio_event_acb) - s->event_reader_pos);
-        if (ret > 0) {
-            s->event_reader_pos += ret;
-            if (s->event_reader_pos == sizeof(s->qnio_event_acb)) {
-                s->event_reader_pos = 0;
-                vxhs_complete_aio(s->qnio_event_acb, s);
-            }
-        }
-    } while (ret < 0 && errno == EINTR);
-}
-
 static QemuOptsList runtime_opts = {
     .name = "vxhs",
     .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
@@ -468,7 +438,6 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
                      int bdrv_flags, Error **errp)
 {
     BDRVVXHSState *s = bs->opaque;
-    AioContext *aio_context;
     int qemu_qnio_cfd = -1;
     int qemu_rfd = -1;
     int ret = 0;
@@ -481,32 +450,7 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
 
     s->vdisk_hostinfo.qnio_cfd = qemu_qnio_cfd;
     s->vdisk_hostinfo.vdisk_rfd = qemu_rfd;
-
-    /*
-     * Create a pipe for communicating between two threads in different
-     * context. Set handler for read event, which gets triggered when
-     * IO completion is done by non-QEMU context.
-     */
-    ret = qemu_pipe(s->fds);
-    if (ret < 0) {
-        trace_vxhs_open_epipe(ret);
-        ret = -errno;
-        goto errout;
-    }
-    fcntl(s->fds[VDISK_FD_READ], F_SETFL, O_NONBLOCK);
-
-    aio_context = bdrv_get_aio_context(bs);
-    aio_set_fd_handler(aio_context, s->fds[VDISK_FD_READ],
-                       false, vxhs_aio_event_reader, NULL, s);
     return 0;
-
-errout:
-    /*
-     * Close remote vDisk device if it was opened earlier
-     */
-    vxhs_qnio_iio_close(s);
-    trace_vxhs_open_fail(ret);
-    return ret;
 }
 
 static const AIOCBInfo vxhs_aiocb_info = {
@@ -596,14 +540,7 @@ static void vxhs_close(BlockDriverState *bs)
     BDRVVXHSState *s = bs->opaque;
 
     trace_vxhs_close(s->vdisk_guid);
-    close(s->fds[VDISK_FD_READ]);
-    close(s->fds[VDISK_FD_WRITE]);
 
-    /*
-     * Clearing all the event handlers for oflame registered to QEMU
-     */
-    aio_set_fd_handler(bdrv_get_aio_context(bs), s->fds[VDISK_FD_READ],
-                       false, NULL, NULL, NULL);
     g_free(s->vdisk_guid);
     s->vdisk_guid = NULL;
     vxhs_qnio_iio_close(s);
@@ -650,23 +587,6 @@ static int64_t vxhs_getlength(BlockDriverState *bs)
     return vdisk_size;
 }
 
-static void vxhs_detach_aio_context(BlockDriverState *bs)
-{
-    BDRVVXHSState *s = bs->opaque;
-
-    aio_set_fd_handler(bdrv_get_aio_context(bs), s->fds[VDISK_FD_READ],
-                       false, NULL, NULL, NULL);
-}
-
-static void vxhs_attach_aio_context(BlockDriverState *bs,
-                                    AioContext *new_context)
-{
-    BDRVVXHSState *s = bs->opaque;
-
-    aio_set_fd_handler(new_context, s->fds[VDISK_FD_READ],
-                       false, vxhs_aio_event_reader, NULL, s);
-}
-
 static BlockDriver bdrv_vxhs = {
     .format_name = "vxhs",
     .protocol_name = "vxhs",
@@ -677,8 +597,6 @@ static BlockDriver bdrv_vxhs = {
     .bdrv_getlength = vxhs_getlength,
     .bdrv_aio_readv = vxhs_aio_readv,
     .bdrv_aio_writev = vxhs_aio_writev,
-    .bdrv_detach_aio_context = vxhs_detach_aio_context,
-    .bdrv_attach_aio_context = vxhs_attach_aio_context,
 };
 
 static void bdrv_vxhs_init(void)
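
For readers following the thread, the core of the change is that the libqnio
completion callback no longer writes the AIOCB pointer into a pipe drained by
an fd handler; it schedules a one-shot bottom half in the BlockDriverState's
current AioContext instead. Below is a minimal sketch of that pattern outside
the vxhs specifics; the MyAIOCB type and the two function names are
illustrative only, while aio_bh_schedule_oneshot(), bdrv_get_aio_context() and
qemu_aio_unref() are the same QEMU APIs used in the patch above:

    #include "qemu/osdep.h"
    #include "block/block_int.h"    /* BlockAIOCB, AioContext, block driver APIs */

    /* Illustrative request type: the driver's AIOCB embeds BlockAIOCB first. */
    typedef struct MyAIOCB {
        BlockAIOCB common;          /* carries bs, cb and the caller's opaque */
        int err;                    /* error reported by the external library */
    } MyAIOCB;

    /* Runs in the QEMU AioContext as a one-shot BH: here it is safe to call
     * the block layer completion callback and drop the AIOCB reference. */
    static void my_complete_aio_bh(void *opaque)
    {
        MyAIOCB *acb = opaque;
        BlockCompletionFunc *cb = acb->common.cb;
        void *cb_opaque = acb->common.opaque;
        int ret = acb->err ? -EIO : 0;

        qemu_aio_unref(acb);
        cb(cb_opaque, ret);
    }

    /* Runs in the external library's thread: do not touch QEMU state here,
     * only bounce the completion into the AioContext that owns the BDS. */
    static void my_library_callback(MyAIOCB *acb, int err)
    {
        acb->err = err;
        aio_bh_schedule_oneshot(bdrv_get_aio_context(acb->common.bs),
                                my_complete_aio_bh, acb);
    }

Compared with the pipe-based approach, this saves two file descriptors per
disk and lets the driver drop its .bdrv_attach_aio_context/.bdrv_detach_aio_context
handlers, since the BH is always scheduled on whatever AioContext the BDS is
attached to at completion time.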