From patchwork Sun Jun 5 14:11:18 2016
X-Patchwork-Submitter: Amir Vadai
X-Patchwork-Id: 630435
X-Patchwork-Delegate: davem@davemloft.net
From: Amir Vadai
To: "David S. Miller"
Cc: netdev@vger.kernel.org, Jiri Pirko, Or Gerlitz, Hadar Har-Zion, Amir Vadai
Subject: [PATCH net-next] net/sched: cls_flower: Introduce support in SKIP SW flag
Date: Sun, 5 Jun 2016 17:11:18 +0300
Message-Id: <20160605141118.10986-1-amir@vadai.me>
X-Mailer: git-send-email 2.8.3

From: Amir Vadai

To have a filter processed only by hardware, the skip_sw flag should be
supplied. This complements the already existing skip_hw flag (the filter is
processed by software only). If neither flag is specified, the filter is
processed by both software and hardware.

If only hardware-offloaded filters exist, fl_classify() returns without doing
anything.

A follow-up userspace patch will be sent once the kernel patch is accepted.
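For reference, the flags attribute reuses the classifier flag bits that already
exist in the UAPI; a minimal sketch of the relevant definitions, assuming the
values from include/uapi/linux/pkt_cls.h of this era (shown here for context
only, not part of this patch):

#define TCA_CLS_FLAGS_SKIP_HW   (1 << 0) /* don't offload filter to HW */
#define TCA_CLS_FLAGS_SKIP_SW   (1 << 1) /* don't use filter in SW */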
Example:

tc filter add dev enp0s9 protocol ip prio 20 parent ffff: \
        flower \
                ip_proto 6 \
                indev enp0s9 \
                skip_sw \
        action skbedit mark 0x1234

Signed-off-by: Amir Vadai
Acked-by: Jiri Pirko
Acked-by: John Fastabend
---
 net/sched/cls_flower.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 730aaca..d737492 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -66,6 +66,7 @@ struct cls_fl_filter {
 	struct fl_flow_key key;
 	struct list_head list;
 	u32 handle;
+	u32 flags;
 	struct rcu_head rcu;
 };
 
@@ -123,6 +124,9 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 	struct fl_flow_key skb_key;
 	struct fl_flow_key skb_mkey;
 
+	if (!atomic_read(&head->ht.nelems))
+		return -1;
+
 	fl_clear_masked_range(&skb_key, &head->mask);
 	skb_key.indev_ifindex = skb->skb_iif;
 	/* skb_flow_dissect() does not set n_proto in case an unknown protocol,
@@ -136,7 +140,7 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 	f = rhashtable_lookup_fast(&head->ht,
 				   fl_key_get_start(&skb_mkey, &head->mask),
 				   head->ht_params);
-	if (f) {
+	if (f && !(f->flags & TCA_CLS_FLAGS_SKIP_SW)) {
 		*res = f->res;
 		return tcf_exts_exec(skb, &f->exts, res);
 	}
@@ -524,7 +528,6 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	struct cls_fl_filter *fnew;
 	struct nlattr *tb[TCA_FLOWER_MAX + 1];
 	struct fl_flow_mask mask = {};
-	u32 flags = 0;
 	int err;
 
 	if (!tca[TCA_OPTIONS])
@@ -552,8 +555,14 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	}
 	fnew->handle = handle;
 
-	if (tb[TCA_FLOWER_FLAGS])
-		flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+	if (tb[TCA_FLOWER_FLAGS]) {
+		fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+
+		if (!tc_flags_valid(fnew->flags)) {
+			err = -EINVAL;
+			goto errout;
+		}
+	}
 
 	err = fl_set_parms(net, tp, fnew, &mask, base, tb, tca[TCA_RATE], ovr);
 	if (err)
@@ -563,10 +572,12 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	if (err)
 		goto errout;
 
-	err = rhashtable_insert_fast(&head->ht, &fnew->ht_node,
-				     head->ht_params);
-	if (err)
-		goto errout;
+	if (!(fnew->flags & TCA_CLS_FLAGS_SKIP_SW)) {
+		err = rhashtable_insert_fast(&head->ht, &fnew->ht_node,
+					     head->ht_params);
+		if (err)
+			goto errout;
+	}
 
 	fl_hw_replace_filter(tp,
 			     &head->dissector,
@@ -574,7 +585,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 			     &fnew->key,
 			     &fnew->exts,
 			     (unsigned long)fnew,
-			     flags);
+			     fnew->flags);
 
 	if (fold) {
 		rhashtable_remove_fast(&head->ht, &fold->ht_node,
@@ -734,6 +745,8 @@ static int fl_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
 			    sizeof(key->tp.dst))))
 		goto nla_put_failure;
 
+	nla_put_u32(skb, TCA_FLOWER_FLAGS, f->flags);
+
 	if (tcf_exts_dump(skb, &f->exts))
 		goto nla_put_failure;
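For reference, tc_flags_valid(), called from fl_change() above, is not part of
this diff; it is the helper already available in include/net/pkt_cls.h. A rough
sketch of its behaviour, assuming the definition of this era: it rejects unknown
flag bits as well as the skip_sw + skip_hw combination, which would leave the
filter with nowhere to run.

static inline bool tc_flags_valid(u32 flags)
{
	/* only SKIP_HW and SKIP_SW are known flag bits */
	if (flags & ~(TCA_CLS_FLAGS_SKIP_HW | TCA_CLS_FLAGS_SKIP_SW))
		return false;

	/* setting both bits would skip the filter everywhere */
	if (!(flags ^ (TCA_CLS_FLAGS_SKIP_HW | TCA_CLS_FLAGS_SKIP_SW)))
		return false;

	return true;
}

This is why fl_change() above returns -EINVAL before touching the hash table or
the hardware when an invalid flag combination is supplied.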