From patchwork Tue Mar  8 10:42:29 2016
X-Patchwork-Submitter: Amir Vadai
X-Patchwork-Id: 593998
X-Patchwork-Delegate: davem@davemloft.net
From: Amir Vadai
To: "David S. Miller"
Cc: netdev@vger.kernel.org, John Fastabend, Jiri Pirko, Or Gerlitz,
    Saeed Mahameed, Hadar Har-Zion, Rony Efraim, Amir Vadai
Subject: [PATCH net-next V3 01/10] net/flower: Introduce hardware offload support
Date: Tue, 8 Mar 2016 12:42:29 +0200
Message-Id: <1457433758-9193-2-git-send-email-amir@vadai.me>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1457433758-9193-1-git-send-email-amir@vadai.me>
References: <1457433758-9193-1-git-send-email-amir@vadai.me>

This patch is based on a patch made by John Fastabend.
It adds support for offloading cls_flower.

When NETIF_F_HW_TC is on:
  flags = 0       => Rule will be processed twice - by hardware, and if
                     still relevant, by software.
  flags = SKIP_HW => Rule will be processed by software only.

If the hardware fails or is not capable of applying the rule, the
operation will NOT fail; the filter will be processed by SW only.
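For reference, a minimal driver-side sketch (illustrative only, not part of
this patch): a driver advertising NETIF_F_HW_TC would dispatch the new
TC_SETUP_CLSFLOWER type from its existing ndo_setup_tc() callback roughly as
below. The foo_* names are hypothetical placeholders.

/* Illustrative sketch only -- foo_hw_rule_add()/foo_hw_rule_del() are
 * hypothetical driver helpers, not existing kernel APIs.
 */
static int foo_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
			struct tc_to_netdev *tc)
{
	struct tc_cls_flower_offload *f;

	if (tc->type != TC_SETUP_CLSFLOWER)
		return -EINVAL;

	f = tc->cls_flower;
	switch (f->command) {
	case TC_CLSFLOWER_REPLACE:
		/* Translate f->dissector/f->key/f->mask and f->exts into a
		 * hardware rule, remembered under the f->cookie handle.
		 */
		return foo_hw_rule_add(dev, f);
	case TC_CLSFLOWER_DESTROY:
		/* Remove the hardware rule installed under f->cookie. */
		return foo_hw_rule_del(dev, f->cookie);
	}
	return -EINVAL;
}

Note that cls_flower ignores the return value of ndo_setup_tc() here, which
is what gives the "operation will NOT fail" semantics described above.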
Acked-by: Jiri Pirko
Suggested-by: John Fastabend
Signed-off-by: Amir Vadai
---
 include/linux/netdevice.h    |  2 ++
 include/net/pkt_cls.h        | 14 ++++++++++
 include/uapi/linux/pkt_cls.h |  2 ++
 net/sched/cls_flower.c       | 64 +++++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index efe7cec..12db9d6 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -785,6 +785,7 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
 enum {
 	TC_SETUP_MQPRIO,
 	TC_SETUP_CLSU32,
+	TC_SETUP_CLSFLOWER,
 };
 
 struct tc_cls_u32_offload;
@@ -794,6 +795,7 @@ struct tc_to_netdev {
 	union {
 		u8 tc;
 		struct tc_cls_u32_offload *cls_u32;
+		struct tc_cls_flower_offload *cls_flower;
 	};
 };
 
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index bea14ee..5b4e8f0 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -409,4 +409,18 @@ static inline bool tc_should_offload(struct net_device *dev, u32 flags)
 	return true;
 }
 
+enum tc_fl_command {
+	TC_CLSFLOWER_REPLACE,
+	TC_CLSFLOWER_DESTROY,
+};
+
+struct tc_cls_flower_offload {
+	enum tc_fl_command command;
+	u64 cookie;
+	struct flow_dissector *dissector;
+	struct fl_flow_key *mask;
+	struct fl_flow_key *key;
+	struct tcf_exts *exts;
+};
+
 #endif
diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h
index 9874f568..c43c5f7 100644
--- a/include/uapi/linux/pkt_cls.h
+++ b/include/uapi/linux/pkt_cls.h
@@ -417,6 +417,8 @@ enum {
 	TCA_FLOWER_KEY_TCP_DST,	/* be16 */
 	TCA_FLOWER_KEY_UDP_SRC,	/* be16 */
 	TCA_FLOWER_KEY_UDP_DST,	/* be16 */
+
+	TCA_FLOWER_FLAGS,
 	__TCA_FLOWER_MAX,
 };
 
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 95b0212..25d8766 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -165,6 +165,51 @@ static void fl_destroy_filter(struct rcu_head *head)
 	kfree(f);
 }
 
+static void fl_hw_destroy_filter(struct tcf_proto *tp, u64 cookie)
+{
+	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tc_cls_flower_offload offload = {0};
+	struct tc_to_netdev tc;
+
+	if (!tc_should_offload(dev, 0))
+		return;
+
+	offload.command = TC_CLSFLOWER_DESTROY;
+	offload.cookie = cookie;
+
+	tc.type = TC_SETUP_CLSFLOWER;
+	tc.cls_flower = &offload;
+
+	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
+}
+
+static void fl_hw_replace_filter(struct tcf_proto *tp,
+				 struct flow_dissector *dissector,
+				 struct fl_flow_key *mask,
+				 struct fl_flow_key *key,
+				 struct tcf_exts *actions,
+				 u64 cookie, u32 flags)
+{
+	struct net_device *dev = tp->q->dev_queue->dev;
+	struct tc_cls_flower_offload offload = {0};
+	struct tc_to_netdev tc;
+
+	if (!tc_should_offload(dev, flags))
+		return;
+
+	offload.command = TC_CLSFLOWER_REPLACE;
+	offload.cookie = cookie;
+	offload.dissector = dissector;
+	offload.mask = mask;
+	offload.key = key;
+	offload.exts = actions;
+
+	tc.type = TC_SETUP_CLSFLOWER;
+	tc.cls_flower = &offload;
+
+	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
+}
+
 static bool fl_destroy(struct tcf_proto *tp, bool force)
 {
 	struct cls_fl_head *head = rtnl_dereference(tp->root);
@@ -174,6 +219,7 @@ static bool fl_destroy(struct tcf_proto *tp, bool force)
 		return false;
 
 	list_for_each_entry_safe(f, next, &head->filters, list) {
+		fl_hw_destroy_filter(tp, (u64)f);
 		list_del_rcu(&f->list);
 		call_rcu(&f->rcu, fl_destroy_filter);
 	}
@@ -459,6 +505,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	struct cls_fl_filter *fnew;
 	struct nlattr *tb[TCA_FLOWER_MAX + 1];
 	struct fl_flow_mask mask = {};
+	u32 flags = 0;
 	int err;
 
 	if (!tca[TCA_OPTIONS])
 		return -EINVAL;
@@ -486,6 +533,9 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	}
 	fnew->handle = handle;
 
+	if (tb[TCA_FLOWER_FLAGS])
+		flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+
 	err = fl_set_parms(net, tp, fnew, &mask, base, tb, tca[TCA_RATE], ovr);
 	if (err)
 		goto errout;
@@ -498,9 +548,20 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 				     head->ht_params);
 	if (err)
 		goto errout;
-	if (fold)
+
+	fl_hw_replace_filter(tp,
+			     &head->dissector,
+			     &mask.key,
+			     &fnew->key,
+			     &fnew->exts,
+			     (u64)fnew,
+			     flags);
+
+	if (fold) {
 		rhashtable_remove_fast(&head->ht, &fold->ht_node,
 				       head->ht_params);
+		fl_hw_destroy_filter(tp, (u64)fold);
+	}
 
 	*arg = (unsigned long) fnew;
 
@@ -527,6 +588,7 @@ static int fl_delete(struct tcf_proto *tp, unsigned long arg)
 	rhashtable_remove_fast(&head->ht, &f->ht_node,
 			       head->ht_params);
 	list_del_rcu(&f->list);
+	fl_hw_destroy_filter(tp, (u64)f);
 	tcf_unbind_filter(tp, &f->res);
 	call_rcu(&f->rcu, fl_destroy_filter);
 	return 0;