From patchwork Mon May 11 22:39:46 2015
X-Patchwork-Submitter: KY Srinivasan
X-Patchwork-Id: 471044
X-Patchwork-Delegate: davem@davemloft.net
From: "K. Y. Srinivasan" <kys@microsoft.com>
To: davem@davemloft.net, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	devel@linuxdriverproject.org, olaf@aepfle.de, apw@canonical.com,
	jasowang@redhat.com
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Subject: [PATCH V6 net-next 1/1] hv_netvsc: Use the xmit_more skb flag to
	optimize signaling the host
Date: Mon, 11 May 2015 15:39:46 -0700
Message-Id: <1431383986-30850-1-git-send-email-kys@microsoft.com>
X-Mailer: git-send-email 1.7.4.1

Based on the information given to this driver (via the xmit_more skb flag),
we can defer signaling the host if more packets are on the way. This will
help make the host more efficient since it can potentially process a larger
batch of packets. Implement this optimization.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
---
v2: Fixed up indentation based on feedback from David Miller.
v3,v4: If the queue could be stopped, deal with that condition (Eric Dumazet).
v5: Fixed up indentation based on feedback from David Miller.
v6: Fixed additional indentation/formatting (Joe Perches).

 drivers/net/hyperv/netvsc.c |   43 +++++++++++++++++++++++++++----------------
 1 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index ea091bc..b024968 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -743,6 +743,7 @@ static inline int netvsc_send_pkt(
 	u64 req_id;
 	int ret;
 	struct hv_page_buffer *pgbuf;
+	u32 ring_avail = hv_ringbuf_avail_percent(&out_channel->outbound);
 
 	nvmsg.hdr.msg_type = NVSP_MSG1_TYPE_SEND_RNDIS_PKT;
 	if (packet->is_data_pkt) {
@@ -769,32 +770,42 @@ static inline int netvsc_send_pkt(
 	if (out_channel->rescind)
 		return -ENODEV;
 
+	/*
+	 * It is possible that once we successfully place this packet
+	 * on the ringbuffer, we may stop the queue. In that case, we want
+	 * to notify the host independent of the xmit_more flag. We don't
+	 * need to be precise here; in the worst case we may signal the host
+	 * unnecessarily.
+	 */
+	if (ring_avail < (RING_AVAIL_PERCENT_LOWATER + 1))
+		packet->xmit_more = false;
+
 	if (packet->page_buf_cnt) {
 		pgbuf = packet->cp_partial ? packet->page_buf +
 			packet->rmsg_pgcnt : packet->page_buf;
-		ret = vmbus_sendpacket_pagebuffer(out_channel,
-						  pgbuf,
-						  packet->page_buf_cnt,
-						  &nvmsg,
-						  sizeof(struct nvsp_message),
-						  req_id);
+		ret = vmbus_sendpacket_pagebuffer_ctl(out_channel,
+						      pgbuf,
+						      packet->page_buf_cnt,
+						      &nvmsg,
+						      sizeof(struct nvsp_message),
+						      req_id,
+						      VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED,
+						      !packet->xmit_more);
 	} else {
-		ret = vmbus_sendpacket(
-			out_channel, &nvmsg,
-			sizeof(struct nvsp_message),
-			req_id,
-			VM_PKT_DATA_INBAND,
-			VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+		ret = vmbus_sendpacket_ctl(out_channel, &nvmsg,
+					   sizeof(struct nvsp_message),
+					   req_id,
+					   VM_PKT_DATA_INBAND,
+					   VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED,
+					   !packet->xmit_more);
 	}
 
 	if (ret == 0) {
 		atomic_inc(&net_device->num_outstanding_sends);
 		atomic_inc(&net_device->queue_sends[q_idx]);
 
-		if (hv_ringbuf_avail_percent(&out_channel->outbound) <
-		    RING_AVAIL_PERCENT_LOWATER) {
-			netif_tx_stop_queue(netdev_get_tx_queue(
-					    ndev, q_idx));
+		if (ring_avail < RING_AVAIL_PERCENT_LOWATER) {
+			netif_tx_stop_queue(netdev_get_tx_queue(ndev, q_idx));
 
 			if (atomic_read(&net_device->
				queue_sends[q_idx]) < 1)
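
The batching idea in this patch reduces to a simple pattern: place the packet
on the ring buffer without signaling the host while the stack (via
skb->xmit_more) promises more packets, but force a signal whenever the send
may leave the ring below the low-water mark, since the queue could be stopped
right afterwards and must not wait on a signal that never comes. Below is a
minimal, self-contained sketch of that pattern for illustration only; the ring
model, the slot count, the low-water value, and the helpers
(ring_avail_percent, queue_packet, signal_host, send_pkt) are invented
stand-ins, not the driver's or the VMBus API's real functions.

/* batching_sketch.c - illustrative model of the xmit_more batching above.
 * Build and run: cc -o batching_sketch batching_sketch.c && ./batching_sketch
 */
#include <stdbool.h>
#include <stdio.h>

#define RING_SLOTS 100
#define RING_AVAIL_PERCENT_LOWATER 10	/* assumed low-water mark */

static unsigned int ring_used;		/* slots currently occupied */
static unsigned int host_signals;	/* how often the "host" was kicked */

static unsigned int ring_avail_percent(void)
{
	return (RING_SLOTS - ring_used) * 100 / RING_SLOTS;
}

static int queue_packet(void)
{
	if (ring_used == RING_SLOTS)
		return -1;		/* ring full */
	ring_used++;
	return 0;
}

static void signal_host(void)
{
	host_signals++;			/* stand-in for the channel interrupt */
}

/*
 * The pattern from the patch: defer the host signal while xmit_more says
 * more packets follow, but never defer when this send could take the ring
 * below the low-water mark, because the queue may be stopped afterwards.
 */
static int send_pkt(bool xmit_more)
{
	int ret;

	if (ring_avail_percent() < RING_AVAIL_PERCENT_LOWATER + 1)
		xmit_more = false;	/* imprecise; worst case, one extra signal */

	ret = queue_packet();
	if (ret == 0 && !xmit_more)
		signal_host();		/* kick only at the end of a batch */

	return ret;
}

int main(void)
{
	/* an 8-packet burst: only the last packet clears xmit_more */
	for (int i = 0; i < 8; i++)
		send_pkt(i < 7);

	printf("host signaled %u time(s) for 8 packets\n", host_signals);
	return 0;
}

Run as written, the sketch signals the simulated host once for the whole
8-packet burst rather than once per packet, which is the saving the patch is
after; the low-water check ensures that a batch which nearly fills the ring
still wakes the host even though more packets were promised.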