From patchwork Wed Jun 29 03:10:37 2016
X-Patchwork-Submitter: Jeff Cody <jcody@redhat.com>
X-Patchwork-Id: 641792
From: Jeff Cody <jcody@redhat.com>
To: qemu-block@nongnu.org
Cc: peter.maydell@linaro.org, jcody@redhat.com, qemu-devel@nongnu.org
Date: Tue, 28 Jun 2016 23:10:37 -0400
Message-Id: <1467169844-18111-4-git-send-email-jcody@redhat.com>
In-Reply-To: <1467169844-18111-1-git-send-email-jcody@redhat.com>
References: <1467169844-18111-1-git-send-email-jcody@redhat.com>
Subject: [Qemu-devel] [PULL 03/10] block/nfs: add support for libnfs pagecache

From: Peter Lieven <pl@kamp.de>

Upcoming libnfs will have support for a read cache that can significantly
help to speed up requests, since libnfs by design circumvents the kernel
cache.

Example:
  qemu -cdrom nfs://127.0.0.1/iso/my.iso?pagecache=1024

The pagecache parameter takes the maximum number of pages to cache. A page
in libnfs is always NFS_BLKSIZE, which is 4 KB.
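As a rough illustration (not part of the patch), here is what those numbers
work out to, assuming NFS_BLKSIZE is 4096 bytes (4 KB) as stated above; the
real constant comes from the libnfs headers:

#include <stdio.h>

/* Illustration only: NFS_BLKSIZE is assumed to be 4096 (4 KB) here. */
#define NFS_BLKSIZE 4096

int main(void)
{
    unsigned long pages = 1024;                   /* ?pagecache=1024        */
    unsigned long bytes = pages * NFS_BLKSIZE;    /* 4194304 bytes = 4 MB   */
    unsigned long cap   = 8388608 / NFS_BLKSIZE;  /* QEMU_NFS_MAX_PAGECACHE_SIZE:
                                                     2048 pages = 8 MB      */
    printf("pagecache=%lu -> %lu bytes, capped at %lu pages\n",
           pages, bytes, cap);
    return 0;
}

So with the cap introduced below, a request for more than 2048 pages is
truncated to 8 MB worth of cache.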
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1463662083-20814-3-git-send-email-pl@kamp.de
Signed-off-by: Jeff Cody <jcody@redhat.com>
---
 block/nfs.c | 37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/block/nfs.c b/block/nfs.c
index 60be45e..15d6832 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -38,6 +38,7 @@
 #include <nfsc/libnfs.h>
 
 #define QEMU_NFS_MAX_READAHEAD_SIZE 1048576
+#define QEMU_NFS_MAX_PAGECACHE_SIZE (8388608 / NFS_BLKSIZE)
 #define QEMU_NFS_MAX_DEBUG_LEVEL 2
 
 typedef struct NFSClient {
@@ -342,6 +343,26 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
                 val = QEMU_NFS_MAX_READAHEAD_SIZE;
             }
             nfs_set_readahead(client->context, val);
+#ifdef LIBNFS_FEATURE_PAGECACHE
+            nfs_set_pagecache_ttl(client->context, 0);
+#endif
+            client->cache_used = true;
+#endif
+#ifdef LIBNFS_FEATURE_PAGECACHE
+            nfs_set_pagecache_ttl(client->context, 0);
+        } else if (!strcmp(qp->p[i].name, "pagecache")) {
+            if (open_flags & BDRV_O_NOCACHE) {
+                error_setg(errp, "Cannot enable NFS pagecache "
+                                 "if cache.direct = on");
+                goto fail;
+            }
+            if (val > QEMU_NFS_MAX_PAGECACHE_SIZE) {
+                error_report("NFS Warning: Truncating NFS pagecache"
+                             " size to %d pages", QEMU_NFS_MAX_PAGECACHE_SIZE);
+                val = QEMU_NFS_MAX_PAGECACHE_SIZE;
+            }
+            nfs_set_pagecache(client->context, val);
+            nfs_set_pagecache_ttl(client->context, 0);
             client->cache_used = true;
 #endif
 #ifdef LIBNFS_FEATURE_DEBUG
@@ -524,7 +545,8 @@ static int nfs_reopen_prepare(BDRVReopenState *state,
     }
 
     if ((state->flags & BDRV_O_NOCACHE) && client->cache_used) {
-        error_setg(errp, "Cannot disable cache if libnfs readahead is enabled");
+        error_setg(errp, "Cannot disable cache if libnfs readahead or"
+                   " pagecache is enabled");
         return -EINVAL;
     }
 
@@ -542,6 +564,15 @@ static int nfs_reopen_prepare(BDRVReopenState *state,
     return 0;
 }
 
+#ifdef LIBNFS_FEATURE_PAGECACHE
+static void nfs_invalidate_cache(BlockDriverState *bs,
+                                 Error **errp)
+{
+    NFSClient *client = bs->opaque;
+    nfs_pagecache_invalidate(client->context, client->fh);
+}
+#endif
+
 static BlockDriver bdrv_nfs = {
     .format_name                    = "nfs",
     .protocol_name                  = "nfs",
@@ -565,6 +596,10 @@ static BlockDriver bdrv_nfs = {
 
     .bdrv_detach_aio_context        = nfs_detach_aio_context,
     .bdrv_attach_aio_context        = nfs_attach_aio_context,
+
+#ifdef LIBNFS_FEATURE_PAGECACHE
+    .bdrv_invalidate_cache          = nfs_invalidate_cache,
+#endif
 };
 
 static void nfs_block_init(void)
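For readers who want to try the new libnfs calls outside of QEMU, a minimal
standalone sketch follows. It is not part of the patch: it assumes a libnfs
build that defines LIBNFS_FEATURE_PAGECACHE and provides the
nfs_set_pagecache()/nfs_set_pagecache_ttl()/nfs_pagecache_invalidate() calls
used above; the server, export and file names are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <nfsc/libnfs.h>

int main(void)
{
    struct nfs_context *nfs = nfs_init_context();
    if (nfs == NULL) {
        fprintf(stderr, "failed to init libnfs context\n");
        return EXIT_FAILURE;
    }

#ifdef LIBNFS_FEATURE_PAGECACHE
    /* Same settings the QEMU driver applies for ?pagecache=1024 */
    nfs_set_pagecache(nfs, 1024);      /* cache up to 1024 pages (4 MB)  */
    nfs_set_pagecache_ttl(nfs, 0);     /* ttl 0, as the QEMU driver sets */
#endif

    if (nfs_mount(nfs, "127.0.0.1", "/iso") != 0) {
        fprintf(stderr, "mount failed: %s\n", nfs_get_error(nfs));
        nfs_destroy_context(nfs);
        return EXIT_FAILURE;
    }

    struct nfsfh *fh = NULL;
    if (nfs_open(nfs, "/my.iso", O_RDONLY, &fh) == 0) {
#ifdef LIBNFS_FEATURE_PAGECACHE
        /* Drop cached pages for this handle, as nfs_invalidate_cache() does */
        nfs_pagecache_invalidate(nfs, fh);
#endif
        nfs_close(nfs, fh);
    }

    nfs_destroy_context(nfs);
    return EXIT_SUCCESS;
}

The driver sets the pagecache TTL to 0 and wires up .bdrv_invalidate_cache,
which suggests cache validity is meant to be controlled from the QEMU side
rather than by a time-based expiry inside libnfs.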