
[net-next,1/2] Increase fq_codel count in the bulk dropper

Message ID 1564875449-12122-2-git-send-email-dave.taht@gmail.com
State Accepted
Delegated to: David Miller
Series: Two small fq_codel optimizations

Commit Message

Dave Taht Aug. 3, 2019, 11:37 p.m. UTC
In the field, fq_codel is often used with a smaller memory or
packet limit than the default. When the bulk dropper is hit,
the drop pattern bifurcates into one where the codel drop rate
increases more slowly than it should and the bulk dropper is hit
more often than it should be.

The scan through the 1024 queues happens more often than it needs to.

This patch increases the codel count in the bulk dropper, but
does not change the drop rate there. It relies on the next codel round
to deliver the next packet at the original drop rate
(after that burst of loss) and then escalate to a higher signaling rate.
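
For context, codel schedules its next drop at interval/sqrt(count), so a
larger count means the next congestion signal arrives sooner. Bumping
flow->cvars.count in the bulk dropper therefore lets the following codel
round signal harder without dropping extra packets immediately. The
standalone sketch below illustrates that relationship only; the constants
and helper names are illustrative assumptions, not the kernel's code.

/* Sketch of CoDel's control law: next drop time shrinks as sqrt(count)
 * grows. Build with: cc sketch.c -lm
 */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define CODEL_INTERVAL_US 100000	/* assumed 100 ms CoDel interval */

static uint64_t control_law(uint64_t now_us, uint32_t count)
{
	/* next drop time = now + interval / sqrt(count) */
	return now_us + (uint64_t)(CODEL_INTERVAL_US / sqrt((double)count));
}

int main(void)
{
	uint64_t now = 0;
	uint32_t count;

	/* A higher count brings the next drop forward, i.e. a stronger
	 * congestion signal on the following codel round.
	 */
	for (count = 1; count <= 64; count *= 4)
		printf("count=%2u -> next drop in %llu us\n", count,
		       (unsigned long long)(control_law(now, count) - now));
	return 0;
}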

Signed-off-by: Dave Taht <dave.taht@gmail.com>

---
 net/sched/sch_fq_codel.c | 2 ++
 1 file changed, 2 insertions(+)

Patch

diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
index d59fbcc745d1..d67b2c40e6e6 100644
--- a/net/sched/sch_fq_codel.c
+++ b/net/sched/sch_fq_codel.c
@@ -173,6 +173,8 @@  static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets,
 		__qdisc_drop(skb, to_free);
 	} while (++i < max_packets && len < threshold);
 
+	/* Tell codel to increase its signal strength also */
+	flow->cvars.count += i;
 	flow->dropped += i;
 	q->backlogs[idx] -= len;
 	q->memory_usage -= mem;