From patchwork Mon Nov 5 07:48:38 2018
X-Patchwork-Submitter: jiangyiwen
X-Patchwork-Id: 992897
X-Patchwork-Delegate: davem@davemloft.net
From: jiangyiwen
To: , Jason Wang
Subject: [PATCH 5/5] VSOCK: batch sending rx buffer to increase bandwidth
Message-ID: <5BDFF5D6.2030802@huawei.com>
Date: Mon, 5 Nov 2018 15:48:38 +0800
X-Mailing-List: netdev@vger.kernel.org

Batching the sending of rx buffers can improve total bandwidth: used
buffers are now returned to the guest with vhost_add_used_and_signal_n()
once every VHOST_VSOCK_BATCH packets (with a final flush for the
remainder) instead of calling vhost_add_used_n() for each packet.

Signed-off-by: Yiwen Jiang
---
 drivers/vhost/vsock.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 648be39..a587ddc 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -148,10 +148,12 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 			    struct vhost_virtqueue *vq)
 {
 	struct vhost_virtqueue *tx_vq = &vsock->vqs[VSOCK_VQ_TX];
-	bool added = false;
 	bool restart_tx = false;
 	int mergeable;
 	size_t vsock_hlen;
+	int batch_count = 0;
+
+#define VHOST_VSOCK_BATCH 16
 
 	mutex_lock(&vq->mutex);
 
@@ -191,8 +193,9 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 		list_del_init(&pkt->list);
 		spin_unlock_bh(&vsock->send_pkt_list_lock);
 
-		headcount = get_rx_bufs(vq, vq->heads, vsock_hlen + pkt->len,
-					&in, likely(mergeable) ? UIO_MAXIOV : 1);
+		headcount = get_rx_bufs(vq, vq->heads + batch_count,
+					vsock_hlen + pkt->len, &in,
+					likely(mergeable) ? UIO_MAXIOV : 1);
 		if (headcount <= 0) {
 			spin_lock_bh(&vsock->send_pkt_list_lock);
 			list_add(&pkt->list, &vsock->send_pkt_list);
@@ -238,8 +241,12 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 			break;
 		}
 
-		vhost_add_used_n(vq, vq->heads, headcount);
-		added = true;
+		batch_count += headcount;
+		if (batch_count > VHOST_VSOCK_BATCH) {
+			vhost_add_used_and_signal_n(&vsock->dev, vq,
+						    vq->heads, batch_count);
+			batch_count = 0;
+		}
 
 		if (pkt->reply) {
 			int val;
@@ -258,8 +265,11 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
 		virtio_transport_free_pkt(pkt);
 	}
-	if (added)
-		vhost_signal(&vsock->dev, vq);
+
+	if (batch_count) {
+		vhost_add_used_and_signal_n(&vsock->dev, vq,
+					    vq->heads, batch_count);
+	}
 out:
 	mutex_unlock(&vq->mutex);
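
For readers not familiar with the vhost helpers, the change boils down to one
pattern: accumulate used descriptors in vq->heads and flush them (add used +
signal) once per batch, with a final flush for whatever is left when the loop
exits. Below is a minimal, self-contained userspace sketch of that pattern,
intended only as an illustration; every name in it (fake_ring, send_pkts,
add_used_and_signal_n, BATCH_LIMIT) is a hypothetical stand-in and not a vhost
API.

/*
 * Sketch of the batching pattern used by the patch: return buffers to the
 * "guest" once per batch instead of once per packet.  Hypothetical names,
 * not vhost code.
 */
#include <stdio.h>

#define BATCH_LIMIT 16			/* mirrors VHOST_VSOCK_BATCH */

struct fake_ring {
	int used;			/* buffers returned so far */
	int signals;			/* notifications raised so far */
};

/* Hypothetical stand-in for vhost_add_used_and_signal_n(). */
static void add_used_and_signal_n(struct fake_ring *r, int n)
{
	r->used += n;
	r->signals++;
}

static void send_pkts(struct fake_ring *r, int npkts)
{
	int batch_count = 0;

	for (int i = 0; i < npkts; i++) {
		int headcount = 1;	/* one descriptor per packet here */

		batch_count += headcount;
		if (batch_count > BATCH_LIMIT) {
			/* Flush: one notification covers the whole batch. */
			add_used_and_signal_n(r, batch_count);
			batch_count = 0;
		}
	}

	/* Final flush for a partially filled batch, as in the patch. */
	if (batch_count)
		add_used_and_signal_n(r, batch_count);
}

int main(void)
{
	struct fake_ring r = { 0, 0 };

	send_pkts(&r, 100);
	printf("returned %d buffers with %d notifications\n",
	       r.used, r.signals);
	return 0;
}

In this sketch the threshold trades a little extra latency before a buffer
becomes visible for fewer, larger flushes, and the final flush guarantees
nothing is left pending when the send loop ends.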