From patchwork Mon Oct 12 12:27:24 2015
X-Patchwork-Submitter: Peter Lieven
X-Patchwork-Id: 529089
From: Peter Lieven
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Date: Mon, 12 Oct 2015 14:27:24 +0200
Message-Id: <1444652845-20642-4-git-send-email-pl@kamp.de>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1444652845-20642-1-git-send-email-pl@kamp.de>
References: <1444652845-20642-1-git-send-email-pl@kamp.de>
Cc: kwolf@redhat.com, stefanha@gmail.com, jcody@redhat.com, jsnow@redhat.com,
    Peter Lieven
Subject: [Qemu-devel] [PATCH 3/4] ide: add support for cancelable read requests

This patch adds a new aio readv compatible function which copies all data
through a bounce buffer. The benefit is that these requests can be flagged
as canceled to avoid guest memory corruption when a canceled request is
completed by the backend at a later stage.

If an IDE protocol wants to use this function, it has to pipe all read
requests through ide_readv_cancelable and may then enable
requests_cancelable in the IDEState.
If this state is enabled we can avoid the blocking blk_drain_all in case of
a BMDMA reset. Currently only read operations are cancelable, thus we can
only use this logic for read-only devices.

Signed-off-by: Peter Lieven
---
 hw/ide/core.c     | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/ide/internal.h | 16 ++++++++++++++++
 hw/ide/pci.c      | 42 ++++++++++++++++++++++++++++--------------
 3 files changed, 98 insertions(+), 14 deletions(-)

diff --git a/hw/ide/core.c b/hw/ide/core.c
index 317406d..24547ce 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -561,6 +561,59 @@ static bool ide_sect_range_ok(IDEState *s,
     return true;
 }
 
+static void ide_readv_cancelable_cb(void *opaque, int ret)
+{
+    IDECancelableRequest *req = opaque;
+    if (!req->canceled) {
+        if (!ret) {
+            qemu_iovec_from_buf(req->org_qiov, 0, req->buf, req->org_qiov->size);
+        }
+        req->org_cb(req->org_opaque, ret);
+    }
+    QLIST_REMOVE(req, list);
+    qemu_vfree(req->buf);
+    qemu_iovec_destroy(&req->qiov);
+    g_free(req);
+}
+
+#define MAX_CANCELABLE_REQS 16
+
+BlockAIOCB *ide_readv_cancelable(IDEState *s, int64_t sector_num,
+                                 QEMUIOVector *iov, int nb_sectors,
+                                 BlockCompletionFunc *cb, void *opaque)
+{
+    BlockAIOCB *aioreq;
+    IDECancelableRequest *req;
+    int c = 0;
+
+    QLIST_FOREACH(req, &s->cancelable_requests, list) {
+        c++;
+    }
+    if (c > MAX_CANCELABLE_REQS) {
+        return NULL;
+    }
+
+    req = g_new0(IDECancelableRequest, 1);
+    qemu_iovec_init(&req->qiov, 1);
+    req->buf = qemu_blockalign(blk_bs(s->blk), iov->size);
+    qemu_iovec_add(&req->qiov, req->buf, iov->size);
+    req->org_qiov = iov;
+    req->org_cb = cb;
+    req->org_opaque = opaque;
+
+    aioreq = blk_aio_readv(s->blk, sector_num, &req->qiov, nb_sectors,
+                           ide_readv_cancelable_cb, req);
+    if (aioreq == NULL) {
+        qemu_vfree(req->buf);
+        qemu_iovec_destroy(&req->qiov);
+        g_free(req);
+    } else {
+        QLIST_INSERT_HEAD(&s->cancelable_requests, req, list);
+    }
+
+    return aioreq;
+}
+
 static void ide_sector_read(IDEState *s);
 
 static void ide_sector_read_cb(void *opaque, int ret)
@@ -805,6 +858,7 @@ void ide_start_dma(IDEState *s, BlockCompletionFunc *cb)
     s->bus->retry_unit = s->unit;
     s->bus->retry_sector_num = ide_get_sector(s);
     s->bus->retry_nsector = s->nsector;
+    s->bus->s = s;
     if (s->bus->dma->ops->start_dma) {
         s->bus->dma->ops->start_dma(s->bus->dma, s, cb);
     }
diff --git a/hw/ide/internal.h b/hw/ide/internal.h
index 05e93ff..ad188c2 100644
--- a/hw/ide/internal.h
+++ b/hw/ide/internal.h
@@ -343,6 +343,16 @@ enum ide_dma_cmd {
 #define ide_cmd_is_read(s) \
     ((s)->dma_cmd == IDE_DMA_READ)
 
+typedef struct IDECancelableRequest {
+    QLIST_ENTRY(IDECancelableRequest) list;
+    QEMUIOVector qiov;
+    uint8_t *buf;
+    QEMUIOVector *org_qiov;
+    BlockCompletionFunc *org_cb;
+    void *org_opaque;
+    bool canceled;
+} IDECancelableRequest;
+
 /* NOTE: IDEState represents in fact one drive */
 struct IDEState {
     IDEBus *bus;
@@ -396,6 +406,8 @@ struct IDEState {
     BlockAIOCB *pio_aiocb;
     struct iovec iov;
     QEMUIOVector qiov;
+    QLIST_HEAD(, IDECancelableRequest) cancelable_requests;
+    bool requests_cancelable;
     /* ATA DMA state */
     int32_t io_buffer_offset;
     int32_t io_buffer_size;
@@ -468,6 +480,7 @@ struct IDEBus {
     uint8_t retry_unit;
     int64_t retry_sector_num;
     uint32_t retry_nsector;
+    IDEState *s;
 };
 
 #define TYPE_IDE_DEVICE "ide-device"
@@ -572,6 +585,9 @@ void ide_set_inactive(IDEState *s, bool more);
 BlockAIOCB *ide_issue_trim(BlockBackend *blk, int64_t sector_num,
                            QEMUIOVector *qiov, int nb_sectors,
                            BlockCompletionFunc *cb, void *opaque);
+BlockAIOCB *ide_readv_cancelable(IDEState *s, int64_t sector_num,
+                                 QEMUIOVector *iov, int nb_sectors,
+                                 BlockCompletionFunc *cb, void *opaque);
 
 /* hw/ide/atapi.c */
 void ide_atapi_cmd(IDEState *s);
diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index d31ff88..5587183 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -240,21 +240,35 @@ void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val)
     /* Ignore writes to SSBM if it keeps the old value */
     if ((val & BM_CMD_START) != (bm->cmd & BM_CMD_START)) {
         if (!(val & BM_CMD_START)) {
-            /*
-             * We can't cancel Scatter Gather DMA in the middle of the
-             * operation or a partial (not full) DMA transfer would reach
-             * the storage so we wait for completion instead (we beahve
-             * like if the DMA was completed by the time the guest trying
-             * to cancel dma with bmdma_cmd_writeb with BM_CMD_START not
-             * set).
-             *
-             * In the future we'll be able to safely cancel the I/O if the
-             * whole DMA operation will be submitted to disk with a single
-             * aio operation with preadv/pwritev.
-             */
             if (bm->bus->dma->aiocb) {
-                blk_drain_all();
-                assert(bm->bus->dma->aiocb == NULL);
+                if (bm->bus->s && bm->bus->s->requests_cancelable) {
+                    /*
+                     * If the IDE protocol in use supports request cancellation
+                     * we can flag requests as canceled here and disable DMA.
+                     * The IDE protocol used MUST route all read operations
+                     * through ide_readv_cancelable and may then enable this
+                     * code path. Currently this is only supported for
+                     * read-only devices.
+                     */
+                    IDECancelableRequest *req;
+                    QLIST_FOREACH(req, &bm->bus->s->cancelable_requests, list) {
+                        if (!req->canceled) {
+                            req->org_cb(req->org_opaque, -ECANCELED);
+                        }
+                        req->canceled = true;
+                    }
+                } else {
+                    /*
+                     * We can't cancel Scatter Gather DMA in the middle of the
+                     * operation or a partial (not full) DMA transfer would
+                     * reach the storage, so we wait for completion instead
+                     * (we behave as if the DMA was completed by the time the
+                     * guest tried to cancel DMA with bmdma_cmd_writeb with
+                     * BM_CMD_START not set).
+                     */
+                    blk_drain_all();
+                    assert(bm->bus->dma->aiocb == NULL);
+                }
             }
             bm->status &= ~BM_STATUS_DMAING;
         } else {
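
For reference, a minimal sketch of how a read-only device model could opt in
to the interfaces added above. The cd_* function names are hypothetical and
not part of this patch; only ide_readv_cancelable() and the
requests_cancelable flag come from this series:

    /* Hypothetical illustration (not part of this patch): a read-only
     * device model that routes every read through the bounce-buffered
     * helper and then declares its requests cancelable. */
    static void cd_start_read(IDEState *s, int64_t sector_num,
                              QEMUIOVector *qiov, int nb_sectors,
                              BlockCompletionFunc *cb, void *opaque)
    {
        if (ide_readv_cancelable(s, sector_num, qiov, nb_sectors,
                                 cb, opaque) == NULL) {
            /* The helper refused (too many outstanding cancelable
             * requests); fail the request rather than falling back to a
             * path that cannot be canceled. */
            cb(opaque, -EIO);
        }
    }

    static void cd_init(IDEState *s)
    {
        /* Only valid once *all* reads go through ide_readv_cancelable();
         * bmdma_cmd_writeb() will then flag in-flight reads as canceled
         * instead of calling blk_drain_all(). */
        s->requests_cancelable = true;
    }

With the flag set, the BMDMA reset path above completes outstanding reads
with -ECANCELED, and the bounce buffer ensures that a late completion from
the backend never writes into guest memory.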