From patchwork Thu May 31 09:56:39 2018
From: Toke Høiland-Jørgensen <toke@toke.dk>
To: netdev@vger.kernel.org, cake@lists.bufferbloat.net
Subject: [PATCH net-next v18 8/8] sch_cake: Conditionally split GSO segments
Date: Thu, 31 May 2018 11:56:39 +0200
Message-ID: <152776059911.16347.9677347504134649516.stgit@alrua-kau>
In-Reply-To: <152776056580.16347.7216549227396672873.stgit@alrua-kau>
References: <152776056580.16347.7216549227396672873.stgit@alrua-kau>
X-Patchwork-Id: 923282
X-Patchwork-Delegate: davem@davemloft.net
X-Clacks-Overhead: GNU Terry Pratchett

At lower bandwidths, the transmission time of a single GSO segment can
add an unacceptable amount of latency due to head-of-line (HOL)
blocking. Furthermore, with a software shaper, any tuning mechanism
employed by the kernel to control the maximum size of GSO segments is
thrown off by the artificial limit on bandwidth.

For this reason, we split GSO segments into their individual packets
iff the shaper is active and configured to a bandwidth <= 1 Gbps.
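For illustration (not part of the patch): the HOL cost is easy to
quantify as the serialization delay of one maximally-sized GSO
super-packet. The standalone C program below, which assumes the common
64 KB GSO size cap, prints how long a single super-packet occupies the
link at a few shaped rates:

#include <stdio.h>

#define GSO_SUPER_PACKET_BYTES 65536ULL /* assumed 64 KB GSO cap */

int main(void)
{
	/* shaper rates in bits per second */
	unsigned long long rates[] = { 10000000ULL, 100000000ULL,
				       1000000000ULL };
	int i;

	for (i = 0; i < 3; i++) {
		double delay_ms = (double)GSO_SUPER_PACKET_BYTES * 8 * 1000
				  / rates[i];
		printf("%5llu Mbit/s: one 64 KB segment blocks the link for %6.2f ms\n",
		       rates[i] / 1000000, delay_ms);
	}
	return 0;
}

At 10 Mbit/s this comes out to roughly 52 ms, at 100 Mbit/s roughly
5.2 ms, and at 1 Gbit/s roughly 0.5 ms, which is about the point where
the per-segment delay stops dominating the latency budget and the CPU
cost of splitting stops being worth paying.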
Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
---
 net/sched/sch_cake.c | 99 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 73 insertions(+), 26 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 8b6bd91aa2f3..c2e9a840bfd0 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -80,6 +80,7 @@
 #define CAKE_QUEUES (1024)
 #define CAKE_FLOW_MASK 63
 #define CAKE_FLOW_NAT_FLAG 64
+#define CAKE_SPLIT_GSO_THRESHOLD (125000000) /* 1Gbps */
 
 /* struct cobalt_params - contains codel and blue parameters
  * @interval: codel initial drop rate
@@ -1649,36 +1650,73 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	if (unlikely(len > b->max_skblen))
 		b->max_skblen = len;
 
-	cobalt_set_enqueue_time(skb, now);
-	get_cobalt_cb(skb)->adjusted_len = cake_overhead(q, skb);
-	flow_queue_add(flow, skb);
-
-	if (q->ack_filter)
-		ack = cake_ack_filter(q, flow);
+	if (skb_is_gso(skb) && q->rate_flags & CAKE_FLAG_SPLIT_GSO) {
+		struct sk_buff *segs, *nskb;
+		netdev_features_t features = netif_skb_features(skb);
+		unsigned int slen = 0;
+
+		segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+		if (IS_ERR_OR_NULL(segs))
+			return qdisc_drop(skb, sch, to_free);
+
+		while (segs) {
+			nskb = segs->next;
+			segs->next = NULL;
+			qdisc_skb_cb(segs)->pkt_len = segs->len;
+			cobalt_set_enqueue_time(segs, now);
+			get_cobalt_cb(segs)->adjusted_len = cake_overhead(q,
+									  segs);
+			flow_queue_add(flow, segs);
+
+			sch->q.qlen++;
+			slen += segs->len;
+			q->buffer_used += segs->truesize;
+			b->packets++;
+			segs = nskb;
+		}
 
-	if (ack) {
-		b->ack_drops++;
-		sch->qstats.drops++;
-		b->bytes += qdisc_pkt_len(ack);
-		len -= qdisc_pkt_len(ack);
-		q->buffer_used += skb->truesize - ack->truesize;
-		if (q->rate_flags & CAKE_FLAG_INGRESS)
-			cake_advance_shaper(q, b, ack, now, true);
+		/* stats */
+		b->bytes += slen;
+		b->backlogs[idx] += slen;
+		b->tin_backlog += slen;
+		sch->qstats.backlog += slen;
+		q->avg_window_bytes += slen;
 
-		qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(ack));
-		consume_skb(ack);
+		qdisc_tree_reduce_backlog(sch, 1, len);
+		consume_skb(skb);
 	} else {
-		sch->q.qlen++;
-		q->buffer_used += skb->truesize;
-	}
+		/* not splitting */
+		cobalt_set_enqueue_time(skb, now);
+		get_cobalt_cb(skb)->adjusted_len = cake_overhead(q, skb);
+		flow_queue_add(flow, skb);
+
+		if (q->ack_filter)
+			ack = cake_ack_filter(q, flow);
+
+		if (ack) {
+			b->ack_drops++;
+			sch->qstats.drops++;
+			b->bytes += qdisc_pkt_len(ack);
+			len -= qdisc_pkt_len(ack);
+			q->buffer_used += skb->truesize - ack->truesize;
+			if (q->rate_flags & CAKE_FLAG_INGRESS)
+				cake_advance_shaper(q, b, ack, now, true);
+
+			qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(ack));
+			consume_skb(ack);
+		} else {
+			sch->q.qlen++;
+			q->buffer_used += skb->truesize;
+		}
 
-	/* stats */
-	b->packets++;
-	b->bytes += len;
-	b->backlogs[idx] += len;
-	b->tin_backlog += len;
-	sch->qstats.backlog += len;
-	q->avg_window_bytes += len;
+		/* stats */
+		b->packets++;
+		b->bytes += len;
+		b->backlogs[idx] += len;
+		b->tin_backlog += len;
+		sch->qstats.backlog += len;
+		q->avg_window_bytes += len;
+	}
 
 	if (q->overflow_timeout)
 		cake_heapify_up(q, b->overflow_idx[idx]);
@@ -2510,6 +2548,11 @@ static int cake_change(struct Qdisc *sch, struct nlattr *opt,
 	if (tb[TCA_CAKE_MEMORY])
 		q->buffer_config_limit = nla_get_u32(tb[TCA_CAKE_MEMORY]);
 
+	if (q->rate_bps && q->rate_bps <= CAKE_SPLIT_GSO_THRESHOLD)
+		q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
+	else
+		q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
+
 	if (q->tins) {
 		sch_tree_lock(sch);
 		cake_reconfigure(sch);
@@ -2665,6 +2708,10 @@ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
 	if (nla_put_u32(skb, TCA_CAKE_MPU, q->rate_mpu))
 		goto nla_put_failure;
 
+	if (nla_put_u32(skb, TCA_CAKE_SPLIT_GSO,
+			!!(q->rate_flags & CAKE_FLAG_SPLIT_GSO)))
+		goto nla_put_failure;
+
 	return nla_nest_end(skb, opts);
 
 nla_put_failure:
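A note on the threshold constant and the cake_change() hunk above:
cake's rate_bps appears to follow tc's bytes-per-second rate
convention (an assumption consistent with the "1Gbps" comment, since
125,000,000 B/s * 8 = 1 Gbit/s). The standalone sketch below
(illustration only, not part of the patch) mirrors the decision:
splitting engages only when a shaper rate is configured and that rate
is at or below the threshold:

#include <stdio.h>

#define CAKE_SPLIT_GSO_THRESHOLD 125000000ULL /* bytes/s, i.e. 1 Gbit/s */

/* mirrors the cake_change() test: rate 0 means unshaped -> never split */
static int should_split_gso(unsigned long long rate_bps)
{
	return rate_bps && rate_bps <= CAKE_SPLIT_GSO_THRESHOLD;
}

int main(void)
{
	unsigned long long rates[] = { 0ULL,          /* unshaped   */
				       1250000ULL,    /* 10 Mbit/s  */
				       125000000ULL,  /* 1 Gbit/s   */
				       1250000000ULL  /* 10 Gbit/s  */ };
	int i;

	for (i = 0; i < 4; i++)
		printf("rate %10llu B/s -> split_gso=%d\n",
		       rates[i], should_split_gso(rates[i]));
	return 0;
}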
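For completeness, the cake_dump() hunk simply exports the flag as a
u32 netlink attribute so userspace can see whether splitting is
active. The sketch below (illustration only) shows the TLV bytes a
call like the nla_put_u32() above produces; the attribute id 17 used
here for TCA_CAKE_SPLIT_GSO is an assumption for the example, the real
value comes from the UAPI pkt_sched.h enum:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* netlink attribute header: total length (incl. header) and id */
	struct { uint16_t nla_len; uint16_t nla_type; } hdr = { 8, 17 };
	uint32_t split_gso = 1; /* !!(rate_flags & CAKE_FLAG_SPLIT_GSO) */
	unsigned char buf[8];
	int i;

	memcpy(buf, &hdr, sizeof(hdr));
	memcpy(buf + sizeof(hdr), &split_gso, sizeof(split_gso));

	for (i = 0; i < 8; i++)
		printf("%02x ", buf[i]);
	printf("\n"); /* on little-endian: 08 00 11 00 01 00 00 00 */
	return 0;
}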