From patchwork Tue Nov 7 20:59:36 2017
X-Patchwork-Submitter: Dave Taht
X-Patchwork-Id: 835467
X-Patchwork-Delegate: davem@davemloft.net
From: Dave Taht
To: netdev@vger.kernel.org
Cc: Dave Taht
Subject: [PATCH net-next 3/3] netem: support delivering packets in delayed time slots
Date: Tue, 7 Nov 2017 12:59:36 -0800
Message-Id: <1510088376-5527-4-git-send-email-dave.taht@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1510088376-5527-1-git-send-email-dave.taht@gmail.com>
References: <1510088376-5527-1-git-send-email-dave.taht@gmail.com>

Slotting is a crude approximation of the behaviors of shared media such
as cable, wifi, and LTE, which gather up a bunch of packets within a
varying delay window and deliver them, relative to that, nearly all at
once.

It works within the existing loss, duplication, jitter and delay
parameters of netem. Some amount of inherent latency must be specified,
regardless.

The new "slot" parameter specifies a minimum and maximum delay between
transmission attempts.
The "bytes" and "packets" parameters can be used to limit the amount of
information transferred per slot.

Examples of use:

tc qdisc add dev eth0 root netem delay 200us \
	slot 800us 10ms bytes 64k packets 42

A more correct example, using stacked netem instances and a packet
limit to emulate a tail drop wifi queue with slots and variable packet
delivery, with a 200Mbit isochronous underlying rate, and 20ms path
delay:

tc qdisc add dev eth0 root handle 1: netem delay 20ms rate 200mbit \
	limit 10000
tc qdisc add dev eth0 parent 1:1 handle 10:1 netem delay 200us \
	slot 800us 10ms bytes 64k packets 42 limit 512

Signed-off-by: Dave Taht
---
 include/uapi/linux/pkt_sched.h |  8 +++++
 net/sched/sch_netem.c          | 76 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 20cfd64..37b5096 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -538,6 +538,7 @@ enum {
 	TCA_NETEM_PAD,
 	TCA_NETEM_LATENCY64,
 	TCA_NETEM_JITTER64,
+	TCA_NETEM_SLOT,
 	__TCA_NETEM_MAX,
 };
@@ -575,6 +576,13 @@ struct tc_netem_rate {
 	__s32	cell_overhead;
 };
 
+struct tc_netem_slot {
+	__s64	min_delay; /* nsec */
+	__s64	max_delay;
+	__s32	max_packets;
+	__s32	max_bytes;
+};
+
 enum {
 	NETEM_LOSS_UNSPEC,
 	NETEM_LOSS_GI,	/* General Intuitive - 4 state model */
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 16c4813..a7189f9 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -135,6 +135,13 @@ struct netem_sched_data {
 		u32 a5; /* p23 used only in 4-states */
 	} clg;
 
+	struct tc_netem_slot slot_config;
+	struct slotstate {
+		u64 slot_next;
+		s32 packets_left;
+		s32 bytes_left;
+	} slot;
+
 };
 
 /* Time stamp put into socket buffer control block
@@ -591,6 +598,20 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	return NET_XMIT_SUCCESS;
 }
 
+/* Delay the next round with a new future slot with a
+ * correct number of bytes and packets.
+ */
+
+static void get_slot_next(struct netem_sched_data *q, u64 now)
+{
+	q->slot.slot_next = now + q->slot_config.min_delay +
+		(prandom_u32() *
+		 (q->slot_config.max_delay -
+		  q->slot_config.min_delay) >> 32);
+	q->slot.packets_left = q->slot_config.max_packets;
+	q->slot.bytes_left = q->slot_config.max_bytes;
+}
+
 static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 {
 	struct netem_sched_data *q = qdisc_priv(sch);
@@ -608,14 +629,17 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 	p = rb_first(&q->t_root);
 	if (p) {
 		u64 time_to_send;
+		u64 now = ktime_get_ns();
 
 		skb = rb_to_skb(p);
 
 		/* if more time remaining? */
 		time_to_send = netem_skb_cb(skb)->time_to_send;
-		if (time_to_send <= ktime_get_ns()) {
-			rb_erase(p, &q->t_root);
+		if (q->slot.slot_next && q->slot.slot_next < time_to_send)
+			get_slot_next(q, now);
 
+		if (time_to_send <= now && q->slot.slot_next <= now) {
+			rb_erase(p, &q->t_root);
 			sch->q.qlen--;
 			qdisc_qstats_backlog_dec(sch, skb);
 			skb->next = NULL;
@@ -634,6 +658,14 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 				skb->tstamp = 0;
 #endif
 
+			if (q->slot.slot_next) {
+				q->slot.packets_left--;
+				q->slot.bytes_left -= qdisc_pkt_len(skb);
+				if (q->slot.packets_left <= 0 ||
+				    q->slot.bytes_left <= 0)
+					get_slot_next(q, now);
+			}
+
 			if (q->qdisc) {
 				unsigned int pkt_len = qdisc_pkt_len(skb);
 				struct sk_buff *to_free = NULL;
@@ -657,7 +689,12 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 				if (skb)
 					goto deliver;
 			}
-			qdisc_watchdog_schedule_ns(&q->watchdog, time_to_send);
+
+			if (q->slot.slot_next > now)
+				qdisc_watchdog_schedule_ns(&q->watchdog,
+							   q->slot.slot_next);
+			else
+				qdisc_watchdog_schedule_ns(&q->watchdog, time_to_send);
 		}
 
 		if (q->qdisc) {
@@ -688,6 +725,7 @@ static void dist_free(struct disttable *d)
  * Distribution data is a variable size payload containing
  * signed 16 bit values.
 */
+
static int get_dist_table(struct Qdisc *sch, const struct nlattr *attr)
{
	struct netem_sched_data *q = qdisc_priv(sch);
@@ -718,6 +756,23 @@ static int get_dist_table(struct Qdisc *sch, const struct nlattr *attr)
 	return 0;
 }
 
+static void get_slot(struct netem_sched_data *q, const struct nlattr *attr)
+{
+	const struct tc_netem_slot *c = nla_data(attr);
+
+	q->slot_config = *c;
+	if (q->slot_config.max_packets == 0)
+		q->slot_config.max_packets = INT_MAX;
+	if (q->slot_config.max_bytes == 0)
+		q->slot_config.max_bytes = INT_MAX;
+	q->slot.packets_left = q->slot_config.max_packets;
+	q->slot.bytes_left = q->slot_config.max_bytes;
+	if (q->slot_config.min_delay | q->slot_config.max_delay)
+		q->slot.slot_next = ktime_get_ns();
+	else
+		q->slot.slot_next = 0;
+}
+
 static void get_correlation(struct netem_sched_data *q, const struct nlattr *attr)
 {
 	const struct tc_netem_corr *c = nla_data(attr);
@@ -821,6 +876,7 @@ static const struct nla_policy netem_policy[TCA_NETEM_MAX + 1] = {
 	[TCA_NETEM_RATE64]	= { .type = NLA_U64 },
 	[TCA_NETEM_LATENCY64]	= { .type = NLA_S64 },
 	[TCA_NETEM_JITTER64]	= { .type = NLA_S64 },
+	[TCA_NETEM_SLOT]	= { .len = sizeof(struct tc_netem_slot) },
 };
 
 static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
@@ -927,6 +983,9 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt)
 	if (tb[TCA_NETEM_ECN])
 		q->ecn = nla_get_u32(tb[TCA_NETEM_ECN]);
 
+	if (tb[TCA_NETEM_SLOT])
+		get_slot(q, tb[TCA_NETEM_SLOT]);
+
 	return ret;
 }
 
@@ -1016,6 +1075,7 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
 	struct tc_netem_reorder reorder;
 	struct tc_netem_corrupt corrupt;
 	struct tc_netem_rate rate;
+	struct tc_netem_slot slot;
 
 	qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency),
 			     UINT_MAX);
@@ -1072,6 +1132,16 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
 	if (dump_loss_model(q, skb) != 0)
 		goto nla_put_failure;
 
+	if (q->slot_config.min_delay | q->slot_config.max_delay) {
+		slot = q->slot_config;
+		if (slot.max_packets == INT_MAX)
+			slot.max_packets = 0;
+		if (slot.max_bytes == INT_MAX)
+			slot.max_bytes = 0;
+		if (nla_put(skb, TCA_NETEM_SLOT, sizeof(slot), &slot))
+			goto nla_put_failure;
+	}
+
 	return nla_nest_end(skb, nla);
 
 nla_put_failure: