From patchwork Thu Mar 20 19:32:40 2014
X-Patchwork-Submitter: Zoltan Kiss
X-Patchwork-Id: 332370
X-Patchwork-Delegate: davem@davemloft.net
From: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [PATCH net-next] xen-netback: Stop using xenvif_tx_pending_slots_available
Date: Thu, 20 Mar 2014 19:32:40 +0000
Message-ID: <1395343961-4744-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: netdev@vger.kernel.org

Since the early days, TX stops if there aren't enough free pending slots to
consume a maximum sized (slot-wise) packet. Probably the reason for that was
to avoid the case when we don't have enough free pending slots in the ring
to finish the packet. But if we make sure that the pending ring has the same
size as the shared ring, that can't really happen: the frontend can only
post packets which fit into the free space of the shared ring, and if a
packet doesn't fit, the frontend has to stop, as it can only increase
req_prod once the whole packet fits onto the ring.

This patch removes that check, makes sure the two rings have the same size,
and removes a check from the zerocopy callback. As we no longer stop the
NAPI instance on this condition, we don't have to wake it up when pending
slots are freed up.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
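A minimal userspace sketch of the sizing argument above (illustrative only,
not netback code; RING_SIZE and the helper names are made up). It models the
free-running ring counters and asserts, at the point where the backend would
take a pending slot, the condition (minus the XEN_NETBK_LEGACY_SLOTS_MAX
headroom) that the removed xenvif_tx_pending_slots_available() used to test:

#include <assert.h>
#include <stdio.h>

#define RING_SIZE 256	/* stands in for XEN_NETIF_TX_RING_SIZE */

static unsigned int req_prod, req_cons;	/* request counters */
static unsigned int rsp_prod;		/* responses pushed back */

/* Frontend side: a slot stays occupied until the backend's response,
 * so a packet may only be posted if it fits into the free space.
 */
static int frontend_post(unsigned int slots)
{
	if (req_prod - rsp_prod + slots > RING_SIZE)
		return 0;	/* ring full: frontend must stop */
	req_prod += slots;
	return 1;
}

/* Backend side: consuming a request moves it to a pending slot. */
static void backend_consume(void)
{
	assert(req_prod != req_cons);	/* there is work to do */
	/* Analogue of the removed check: pending slots in use are the
	 * consumed-but-unresponded requests (req_cons - rsp_prod).
	 * Everything in flight obeys req_prod - rsp_prod <= RING_SIZE,
	 * and req_cons < req_prod here, so a free pending slot exists.
	 */
	assert(req_cons - rsp_prod < RING_SIZE);
	req_cons++;
}

/* Backend side: respond and free the pending slot, as the zerocopy
 * callback and the dealloc path eventually do.
 */
static void backend_release(void)
{
	assert(rsp_prod != req_cons);
	rsp_prod++;
}

int main(void)
{
	unsigned int i;

	/* Worst case: fill the ring and consume everything before any
	 * release; the pending ring becomes exactly full, never overfull,
	 * because the frontend is blocked by then.
	 */
	for (i = 0; i < RING_SIZE; i++) {
		if (!frontend_post(1))
			return 1;	/* can't happen: space was free */
		backend_consume();
	}
	if (frontend_post(1))
		return 1;	/* must fail: all slots await responses */
	backend_release();	/* one response pushed back */
	if (!frontend_post(1))
		return 1;	/* one response -> one more request fits */
	backend_consume();
	printf("pending slots in use: %u\n", req_cons - rsp_prod);
	return 0;
}

Compile with e.g. "gcc -Wall pending_demo.c" and run: all the checks hold,
including the worst case where every request has been consumed but no
zerocopy callback has fired yet.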
diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index bef37be..a800a8e 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -81,7 +81,7 @@ struct xenvif_rx_meta {
 
 #define MAX_BUFFER_OFFSET PAGE_SIZE
 
-#define MAX_PENDING_REQS 256
+#define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
 
 /* It's possible for an skb to have a maximal number of frags
  * but still be less than MAX_BUFFER_OFFSET in size. Thus the
@@ -253,12 +253,6 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 		vif->pending_prod + vif->pending_cons;
 }
 
-static inline bool xenvif_tx_pending_slots_available(struct xenvif *vif)
-{
-	return nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
-		< MAX_PENDING_REQS;
-}
-
 /* Callback from stack when TX packet can be released */
 void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 83a71ac..7bb7734 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -88,8 +88,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 		local_irq_save(flags);
 
 		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
-		if (!(more_to_do &&
-		      xenvif_tx_pending_slots_available(vif)))
+		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index bc94320..5df8d63 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1175,8 +1175,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 	struct sk_buff *skb;
 	int ret;
 
-	while (xenvif_tx_pending_slots_available(vif) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	while (skb_queue_len(&vif->tx_queue) < budget) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
@@ -1516,13 +1515,6 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
 		wake_up(&vif->dealloc_wq);
 	spin_unlock_irqrestore(&vif->callback_lock, flags);
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx) &&
-	    xenvif_tx_pending_slots_available(vif)) {
-		local_bh_disable();
-		napi_schedule(&vif->napi);
-		local_bh_enable();
-	}
-
 	if (likely(zerocopy_success))
 		vif->tx_zerocopy_success++;
 	else
@@ -1714,8 +1706,7 @@ static inline int rx_work_todo(struct xenvif *vif)
 
 static inline int tx_work_todo(struct xenvif *vif)
 {
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    xenvif_tx_pending_slots_available(vif))
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)))
 		return 1;
 
 	return 0;
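
A possible follow-up, not part of this patch: since correctness now rests on
MAX_PENDING_REQS staying equal to XEN_NETIF_TX_RING_SIZE, a compile-time
guard (sketch below, placed for example at the top of netback_init() in
netback.c) would catch a future change that decouples the two sizes again:

	/* hypothetical follow-up guard, not in this patch */
	BUILD_BUG_ON(MAX_PENDING_REQS != XEN_NETIF_TX_RING_SIZE);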