From patchwork Tue May  6 09:37:36 2014
X-Patchwork-Submitter: Yang Yingliang
X-Patchwork-Id: 346093
X-Patchwork-Delegate: davem@davemloft.net
From: Yang Yingliang
Subject: [PATCH net-next] net_sched: increase drops and decrease backlog when packets are dropped
Date: Tue, 6 May 2014 17:37:36 +0800
Message-ID: <1399369056-1632-1-git-send-email-yangyingliang@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

When packets are dropped, the qdisc's backlog and drops statistics need to be updated. Replace kfree_skb() with qdisc_drop() so that the drops counter is incremented.
Signed-off-by: Yang Yingliang
---
 net/sched/sch_fq.c       | 2 +-
 net/sched/sch_fq_codel.c | 3 ++-
 net/sched/sch_hhf.c      | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 23c682b42f99..958ef7d4b825 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -714,7 +714,7 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt)
 		if (!skb)
 			break;
-		kfree_skb(skb);
+		qdisc_drop(skb, sch);
 		drop_count++;
 	}
 	qdisc_tree_decrease_qlen(sch, drop_count);
diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index 0bf432c782c1..83abd4cd10cd 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -344,7 +344,8 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt)
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = fq_codel_dequeue(sch);
-		kfree_skb(skb);
+		sch->qstats.backlog -= qdisc_pkt_len(skb);
+		qdisc_drop(skb, sch);
 		q->cstats.drop_count++;
 	}
 	qdisc_tree_decrease_qlen(sch, q->cstats.drop_count);
diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
index edee03d922e2..a9051d0fff52 100644
--- a/net/sched/sch_hhf.c
+++ b/net/sched/sch_hhf.c
@@ -592,7 +592,7 @@ static int hhf_change(struct Qdisc *sch, struct nlattr *opt)
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = hhf_dequeue(sch);
-		kfree_skb(skb);
+		qdisc_drop(skb, sch);
 	}
 	qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen);