From patchwork Wed May 4 14:03:45 2011
X-Patchwork-Submitter: Krishna Kumar
X-Patchwork-Id: 94048
X-Patchwork-Delegate: davem@davemloft.net
From: Krishna Kumar
To: davem@davemloft.net
Cc: eric.dumazet@gmail.com, kvm@vger.kernel.org, mst@redhat.com,
    netdev@vger.kernel.org, rusty@rustcorp.com.au, Krishna Kumar
Date: Wed, 04 May 2011 19:33:45 +0530
Message-Id: <20110504140345.14817.85236.sendpatchset@krkumar2.in.ibm.com>
In-Reply-To: <20110504140258.14817.66596.sendpatchset@krkumar2.in.ibm.com>
References: <20110504140258.14817.66596.sendpatchset@krkumar2.in.ibm.com>
Subject: [PATCH 4/4] [RFC] sched: Changes to dequeue_skb
X-Mailing-List: netdev@vger.kernel.org

dequeue_skb() gains an additional check, made only for the first packet
that is requeued, to see whether the device has requested that xmits
restart after an interval. This is intended to leave the fast xmit path
unaffected and to add minimal overhead to the slow path. Drivers that
set the restart time should not stop/start their tx queues, so the
frozen/stopped check can be skipped for them.

Signed-off-by: Krishna Kumar
---
 net/sched/sch_generic.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff -ruNp org/net/sched/sch_generic.c new/net/sched/sch_generic.c
--- org/net/sched/sch_generic.c	2011-05-04 18:57:06.000000000 +0530
+++ new/net/sched/sch_generic.c	2011-05-04 18:57:09.000000000 +0530
@@ -50,17 +50,30 @@ static inline int dev_requeue_skb(struct
 	return 0;
 }
 
+/*
+ * This function can return a rare false positive for drivers setting
+ * xmit_restart_jiffies (e.g. virtio-net) when xmit_restart_jiffies is
+ * zero but the device may not be ready. That only leads to the skb
+ * being requeued again.
+ */
+static inline int can_restart_xmit(struct Qdisc *q, struct sk_buff *skb)
+{
+	struct net_device *dev = qdisc_dev(q);
+	struct netdev_queue *txq;
+
+	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
+	if (unlikely(txq->xmit_restart_jiffies))
+		return time_after_eq(jiffies, txq->xmit_restart_jiffies);
+	return !netif_tx_queue_frozen_or_stopped(txq);
+}
+
 static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
 {
 	struct sk_buff *skb = q->gso_skb;
 
 	if (unlikely(skb)) {
-		struct net_device *dev = qdisc_dev(q);
-		struct netdev_queue *txq;
-
 		/* check the reason of requeuing without tx lock first */
-		txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
-		if (!netif_tx_queue_frozen_or_stopped(txq)) {
+		if (can_restart_xmit(q, skb)) {
 			q->gso_skb = NULL;
 			q->q.qlen--;
 		} else