From patchwork Tue Nov 25 13:24:51 2014
From: Yang Yingliang <yangyingliang@huawei.com>
X-Patchwork-Id: 414666
X-Patchwork-Delegate: davem@davemloft.net
X-Mailing-List: netdev@vger.kernel.org
Subject: [PATCH net-next 2/3] sch_fq: add __fq_enqueue() helper
Date: Tue, 25 Nov 2014 21:24:51 +0800
Message-ID: <1416921892-4756-3-git-send-email-yangyingliang@huawei.com>
In-Reply-To: <1416921892-4756-1-git-send-email-yangyingliang@huawei.com>
References: <1416921892-4756-1-git-send-email-yangyingliang@huawei.com>

Move the original enqueue code into the new __fq_enqueue() helper.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 net/sched/sch_fq.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 36a22e0..ec6eff8 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -384,21 +384,9 @@ static void flow_queue_add(struct fq_flow *flow, struct sk_buff *skb)
 		prev->next = skb;
 	}
 }
-
-static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
+static int __fq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct fq_flow *f)
 {
 	struct fq_sched_data *q = qdisc_priv(sch);
-	struct fq_flow *f;
-
-	if (unlikely(sch->q.qlen >= sch->limit))
-		return qdisc_drop(skb, sch);
-
-	f = fq_classify(skb, q);
-	if (unlikely(f->qlen >= q->flow_plimit && f != &q->internal)) {
-		q->stat_flows_plimit++;
-		return qdisc_drop(skb, sch);
-	}
-
 	f->qlen++;
 	if (skb_is_retransmit(skb))
 		q->stat_tcp_retrans++;
@@ -421,6 +409,23 @@ static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	return NET_XMIT_SUCCESS;
 }
 
+static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
+{
+	struct fq_sched_data *q = qdisc_priv(sch);
+	struct fq_flow *f;
+
+	if (unlikely(sch->q.qlen >= sch->limit))
+		return qdisc_drop(skb, sch);
+
+	f = fq_classify(skb, q);
+	if (unlikely(f->qlen >= q->flow_plimit && f != &q->internal)) {
+		q->stat_flows_plimit++;
+		return qdisc_drop(skb, sch);
+	}
+
+	return __fq_enqueue(skb, sch, f);
+}
+
 static void fq_check_throttled(struct fq_sched_data *q, u64 now)
 {
 	struct rb_node *p;
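
The shape of the refactoring is easier to see outside the kernel: the
outer function keeps the admission checks (global queue limit, per-flow
limit) and the classification step, while a double-underscore helper does
the unconditional per-flow enqueue work. Below is a minimal userspace
sketch of that split. All names here (toy_sched, toy_flow, toy_enqueue,
__toy_enqueue) and the single-flow scheduler are hypothetical
simplifications for illustration, not the real sch_fq API.

/* Sketch of the pattern in this patch: checks in toy_enqueue(),
 * unconditional work in __toy_enqueue(). Hypothetical types, not
 * kernel code.
 */
#include <stdio.h>

#define XMIT_SUCCESS 0
#define XMIT_DROP    1

struct toy_flow {
	int qlen;               /* packets queued on this flow */
};

struct toy_sched {
	int qlen;               /* total packets queued (sch->q.qlen analogue) */
	int limit;              /* global queue limit (sch->limit analogue) */
	int flow_plimit;        /* per-flow packet limit */
	struct toy_flow flow;   /* one flow, for simplicity */
	int stat_flows_plimit;  /* drops caused by the per-flow limit */
};

/* Helper: the unconditional enqueue work, mirroring __fq_enqueue(). */
static int __toy_enqueue(struct toy_sched *q, struct toy_flow *f)
{
	f->qlen++;
	q->qlen++;
	return XMIT_SUCCESS;
}

/* Outer function: admission checks first, then delegate to the helper. */
static int toy_enqueue(struct toy_sched *q)
{
	struct toy_flow *f;

	if (q->qlen >= q->limit)        /* global limit check */
		return XMIT_DROP;

	f = &q->flow;                   /* stands in for fq_classify() */
	if (f->qlen >= q->flow_plimit) {
		q->stat_flows_plimit++; /* account the per-flow drop */
		return XMIT_DROP;
	}

	return __toy_enqueue(q, f);
}

int main(void)
{
	struct toy_sched q = { .limit = 4, .flow_plimit = 2 };
	int i;

	for (i = 0; i < 6; i++)
		printf("enqueue %d -> %s\n", i,
		       toy_enqueue(&q) == XMIT_SUCCESS ? "ok" : "drop");
	printf("per-flow limit drops: %d\n", q.stat_flows_plimit);
	return 0;
}

With limit = 4 and flow_plimit = 2, the first two enqueues succeed and
the rest are dropped by the per-flow check, so the drop accounting stays
in the outer function exactly as it does in fq_enqueue() after this
patch. Keeping __fq_enqueue() free of checks is what lets a later caller
enqueue into an already-validated flow without repeating them.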