From patchwork Mon Nov 5 07:43:24 2018
X-Patchwork-Submitter: jiangyiwen
X-Patchwork-Id: 992892
From: jiangyiwen
To: Jason Wang
Subject: [PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
Message-ID: <5BDFF49C.3040603@huawei.com>
Date: Mon, 5 Nov 2018 15:43:24 +0800
X-Mailing-List: netdev@vger.kernel.org

Currently vsock only supports sending and receiving small packets, so it cannot achieve high throughput.
As previously discussed with Jason Wang, I revisited the mergeable rx buffer idea from vhost-net and implemented mergeable rx buffers in vhost-vsock. This allows a big packet to be scattered across multiple buffers and improves performance noticeably.

I wrote a tool to test vhost-vsock performance, mainly sending big packets (64K) in both the guest->host and host->guest directions. The results are as follows:

Before:
              Single socket    Multiple sockets (max bandwidth)
  Guest->Host   ~400MB/s         ~480MB/s
  Host->Guest   ~1450MB/s        ~1600MB/s

After:
              Single socket    Multiple sockets (max bandwidth)
  Guest->Host   ~1700MB/s        ~2900MB/s
  Host->Guest   ~1700MB/s        ~2900MB/s

The test results show that performance improves considerably, and guest memory is no longer wasted.

---
Yiwen Jiang (5):
  VSOCK: support fill mergeable rx buffer in guest
  VSOCK: support fill data to mergeable rx buffer in host
  VSOCK: support receive mergeable rx buffer in guest
  VSOCK: modify default rx buf size to improve performance
  VSOCK: batch sending rx buffer to increase bandwidth

 drivers/vhost/vsock.c                   | 135 +++++++++++++++++++++++------
 include/linux/virtio_vsock.h            |  15 +++-
 include/uapi/linux/virtio_vsock.h       |   5 ++
 net/vmw_vsock/virtio_transport.c        | 147 ++++++++++++++++++++++++++------
 net/vmw_vsock/virtio_transport_common.c |  59 +++++++++++--
 5 files changed, 300 insertions(+), 61 deletions(-)