From patchwork Wed Feb 27 15:22:23 2013
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 223631
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Anthony Liguori, qemu-stable@nongnu.org, Jason Wang,
    Stefan Hajnoczi, Edivaldo de Araujo Pereira
Date: Wed, 27 Feb 2013 16:22:23 +0100
Message-Id: <1361978545-13789-6-git-send-email-stefanha@redhat.com>
In-Reply-To: <1361978545-13789-1-git-send-email-stefanha@redhat.com>
References: <1361978545-13789-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH 5/7] net: reduce the unnecessary memory allocation of multiqueue

From: Jason Wang

Edivaldo reports a problem: the static array of NetClientState in
NICState is too large (MAX_QUEUE_NUM, i.e. 1024 entries), which wastes
memory even if multiqueue is not used.

Solve this issue by allocating the queues on demand instead of using
static arrays, for both the NetClientState array in NICState and the
VirtIONetQueue array in VirtIONet.

Tested by myself with a single virtio-net-pci device; memory allocation
is almost the same as before multiqueue support was merged.
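As a minimal standalone sketch of this allocation pattern (assuming
glib; the Device/Queue names and the device_new() helper are
illustrative only, not actual QEMU symbols), the NICState side of the
change boils down to carving one heap block into a fixed-size header
followed by its queue array:

#include <glib.h>

typedef struct Queue {
    int queue_index;
} Queue;

typedef struct Device {
    int num_queues;
    Queue *queues;              /* points just past the Device header */
} Device;

static Device *device_new(int num_queues)
{
    /* One block holds the Device followed by its Queue array, instead
     * of a fixed Queue queues[MAX_QUEUE_NUM] member inside Device. */
    Device *d = g_malloc0(sizeof(Device) + sizeof(Queue) * num_queues);
    int i;

    d->num_queues = num_queues;
    d->queues = (Queue *)((char *)d + sizeof(Device));
    for (i = 0; i < num_queues; i++) {
        d->queues[i].queue_index = i;
    }
    return d;
}

qemu_new_nic() below does essentially this, except that the header is
info->size bytes rather than sizeof(NICState), so device-specific
state that embeds a NICState still fits in front of the queue array.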
Cc: Edivaldo de Araujo Pereira
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang
Signed-off-by: Stefan Hajnoczi
---
 hw/virtio-net.c   |  6 ++++--
 include/net/net.h |  2 +-
 net/net.c         | 19 +++++++++----------
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 573c669..bb2c26c 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -44,7 +44,7 @@ typedef struct VirtIONet
     VirtIODevice vdev;
     uint8_t mac[ETH_ALEN];
     uint16_t status;
-    VirtIONetQueue vqs[MAX_QUEUE_NUM];
+    VirtIONetQueue *vqs;
     VirtQueue *ctrl_vq;
     NICState *nic;
     uint32_t tx_timeout;
@@ -1326,8 +1326,9 @@ VirtIODevice *virtio_net_init(DeviceState *dev, NICConf *conf,
     n->vdev.set_status = virtio_net_set_status;
     n->vdev.guest_notifier_mask = virtio_net_guest_notifier_mask;
     n->vdev.guest_notifier_pending = virtio_net_guest_notifier_pending;
+    n->max_queues = MAX(conf->queues, 1);
+    n->vqs = g_malloc0(sizeof(VirtIONetQueue) * n->max_queues);
     n->vqs[0].rx_vq = virtio_add_queue(&n->vdev, 256, virtio_net_handle_rx);
-    n->max_queues = conf->queues;
     n->curr_queues = 1;
     n->vqs[0].n = n;
     n->tx_timeout = net->txtimer;
@@ -1412,6 +1413,7 @@ void virtio_net_exit(VirtIODevice *vdev)
         }
     }

+    g_free(n->vqs);
     qemu_del_nic(n->nic);
     virtio_cleanup(&n->vdev);
 }
diff --git a/include/net/net.h b/include/net/net.h
index 43a045e..cb049a1 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -72,7 +72,7 @@ struct NetClientState {
 };

 typedef struct NICState {
-    NetClientState ncs[MAX_QUEUE_NUM];
+    NetClientState *ncs;
     NICConf *conf;
     void *opaque;
     bool peer_deleted;
diff --git a/net/net.c b/net/net.c
index a66aa02..f3d67f8 100644
--- a/net/net.c
+++ b/net/net.c
@@ -235,23 +235,20 @@ NICState *qemu_new_nic(NetClientInfo *info,
                        const char *name,
                        void *opaque)
 {
-    NetClientState *nc;
     NetClientState **peers = conf->peers.ncs;
     NICState *nic;
-    int i;
+    int i, queues = MAX(1, conf->queues);

     assert(info->type == NET_CLIENT_OPTIONS_KIND_NIC);
     assert(info->size >= sizeof(NICState));

-    nc = qemu_new_net_client(info, peers[0], model, name);
-    nc->queue_index = 0;
-
-    nic = qemu_get_nic(nc);
+    nic = g_malloc0(info->size + sizeof(NetClientState) * queues);
+    nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
     nic->opaque = opaque;

-    for (i = 1; i < conf->queues; i++) {
-        qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, nc->name,
+    for (i = 0; i < queues; i++) {
+        qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
                               NULL);
         nic->ncs[i].queue_index = i;
     }
@@ -261,7 +258,7 @@

 NetClientState *qemu_get_subqueue(NICState *nic, int queue_index)
 {
-    return &nic->ncs[queue_index];
+    return nic->ncs + queue_index;
 }

 NetClientState *qemu_get_queue(NICState *nic)
@@ -273,7 +270,7 @@ NICState *qemu_get_nic(NetClientState *nc)
 {
     NetClientState *nc0 = nc - nc->queue_index;

-    return DO_UPCAST(NICState, ncs[0], nc0);
+    return (NICState *)((void *)nc0 - nc->info->size);
 }

 void *qemu_get_nic_opaque(NetClientState *nc)
@@ -368,6 +365,8 @@ void qemu_del_nic(NICState *nic)
         qemu_cleanup_net_client(nc);
         qemu_free_net_client(nc);
     }
+
+    g_free(nic);
 }

 void qemu_foreach_nic(qemu_nic_foreach func, void *opaque)
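For completeness, the DO_UPCAST() replacement in qemu_get_nic() can be
illustrated by continuing the Device/Queue sketch from the commit
message (again illustrative names, not the actual QEMU symbols):
because the queue array sits immediately after the header in the same
allocation, any queue pointer can be walked back to queue 0 and then
over the header to the owning device.

/* Continues the Device/Queue sketch from the commit message. */
static Device *device_from_queue(Queue *q)
{
    /* Step back to queue 0, then back over the Device header; this is
     * the same arithmetic qemu_get_nic() now performs. */
    Queue *q0 = q - q->queue_index;

    return (Device *)((char *)q0 - sizeof(Device));
}

int main(void)
{
    Device *d = device_new(4);

    /* Any queue pointer recovers the owning device. */
    g_assert(device_from_queue(&d->queues[2]) == d);
    g_free(d);
    return 0;
}

The real code steps back by nc->info->size rather than sizeof(Device)
for the reason noted earlier: the NIC header may be larger than a bare
NICState (see the assert(info->size >= sizeof(NICState)) retained in
qemu_new_nic()).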