From patchwork Sun Jan 31 18:13:24 2016
X-Patchwork-Submitter: Wei Xu
X-Patchwork-Id: 576234
From: wexu@redhat.com
To: qemu-devel@nongnu.org
Date: Mon, 1 Feb 2016 02:13:24 +0800
Message-Id: <1454264009-24094-6-git-send-email-wexu@redhat.com>
In-Reply-To: <1454264009-24094-1-git-send-email-wexu@redhat.com>
References: <1454264009-24094-1-git-send-email-wexu@redhat.com>
Cc: Wei Xu , victork@redhat.com, mst@redhat.com, jasowang@redhat.com, yvugenfi@redhat.com, Wei Xu , marcel@redhat.com, dfleytma@redhat.com
Subject: [Qemu-devel] [RFC Patch v2 05/10] virtio-net rsc: Create timer to drain the packets from the cache pool

From: Wei Xu

The timer is only armed when the packet pool is not empty; when it
fires it drains all of the cached packets. This bounds the extra delay
the coalescing cache adds before packets reach the upper-layer
protocol stack.

Signed-off-by: Wei Xu
---
 hw/net/virtio-net.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 4f77fbe..93df0d5 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -48,12 +48,17 @@
 
 #define MAX_VIRTIO_IP_PAYLOAD (65535 + IP_OFFSET)
 
+/* Purge coalesced packets timer interval */
+#define RSC_TIMER_INTERVAL 500000
+
 /* Global statistics */
 static uint32_t rsc_chain_no_mem;
 
 /* Switcher to enable/disable rsc */
 static bool virtio_net_rsc_bypass;
 
+static uint32_t rsc_timeout = RSC_TIMER_INTERVAL;
+
 /* Coalesce callback for ipv4/6 */
 typedef int32_t (VirtioNetCoalesce) (NetRscChain *chain, NetRscSeg *seg,
                                      const uint8_t *buf, size_t size);
@@ -1625,6 +1630,35 @@ static int virtio_net_load_device(VirtIODevice *vdev, QEMUFile *f,
     return 0;
 }
 
+static void virtio_net_rsc_purge(void *opq)
+{
+    int ret = 0;
+    NetRscChain *chain = (NetRscChain *)opq;
+    NetRscSeg *seg, *rn;
+
+    QTAILQ_FOREACH_SAFE(seg, &chain->buffers, next, rn) {
+        if (!qemu_can_send_packet(seg->nc)) {
+            /* Unclear whether a send failure on one queue implies the
+             * others will fail too; keep going with the remaining
+             * queues for now. */
+            continue;
+        }
+
+        ret = virtio_net_do_receive(seg->nc, seg->buf, seg->size);
+        QTAILQ_REMOVE(&chain->buffers, seg, next);
+        g_free(seg->buf);
+        g_free(seg);
+
+        if (ret == 0) {
+            /* Try next queue */
+            continue;
+        }
+    }
+
+    if (!QTAILQ_EMPTY(&chain->buffers)) {
+        timer_mod(chain->drain_timer,
+                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + rsc_timeout);
+    }
+}
+
 static void virtio_net_rsc_cleanup(VirtIONet *n)
 {
@@ -1810,6 +1844,8 @@ static size_t virtio_net_rsc_callback(NetRscChain *chain, NetClientState *nc,
     if (!virtio_net_rsc_cache_buf(chain, nc, buf, size)) {
         return 0;
     } else {
+        timer_mod(chain->drain_timer,
+                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + rsc_timeout);
         return size;
     }
 }
@@ -1877,6 +1913,8 @@ static NetRscChain *virtio_net_rsc_lookup_chain(NetClientState *nc,
     }
 
     chain->proto = proto;
+    chain->drain_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                      virtio_net_rsc_purge, chain);
     chain->do_receive = virtio_net_rsc_receive4;
     QTAILQ_INIT(&chain->buffers);