From patchwork Mon Jun 13 09:06:39 2016
X-Patchwork-Submitter: Amir Vadai
X-Patchwork-Id: 634459
X-Patchwork-Delegate: davem@davemloft.net
From: Amir Vadai
To: "David S. Miller"
Cc: netdev@vger.kernel.org, Jiri Pirko, John Fastabend,
    Jakub Kicinski, Or Gerlitz, Hadar Har-Zion, Amir Vadai
Subject: [PATCH net-next] net/sched: flower: Return error when hw can't offload and skip_sw is set
Date: Mon, 13 Jun 2016 12:06:39 +0300
Message-Id: <20160613090639.31110-1-amir@vadai.me>
X-Mailer: git-send-email 2.8.3
X-Mailing-List: netdev@vger.kernel.org

From: Amir Vadai

When skip_sw is set and the hardware fails to apply the filter, return
an error to the user. This makes the error-propagation logic similar to
the one currently used in the u32 classifier.

Also, change the code to use the tc_skip_sw() utility function.
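[ Note for context: tc_skip_sw() is a small flag-test helper in
  include/net/pkt_cls.h. At the time of this series it reads roughly as
  below; this is a sketch reproduced for reference, not part of this
  patch, so check your tree for the authoritative definition: ]

static inline bool tc_skip_sw(u32 flags)
{
	/* true when the user requested a hardware-only filter */
	return (flags & TCA_CLS_FLAGS_SKIP_SW) ? true : false;
}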
Signed-off-by: Amir Vadai
Acked-by: Jiri Pirko
---
 net/sched/cls_flower.c | 42 +++++++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 1ea6f76..5060801 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -140,7 +140,7 @@ static int fl_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 	f = rhashtable_lookup_fast(&head->ht,
 				   fl_key_get_start(&skb_mkey, &head->mask),
 				   head->ht_params);
-	if (f && !(f->flags & TCA_CLS_FLAGS_SKIP_SW)) {
+	if (f && !tc_skip_sw(f->flags)) {
 		*res = f->res;
 		return tcf_exts_exec(skb, &f->exts, res);
 	}
@@ -187,19 +187,20 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, unsigned long cookie)
 	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
 }
 
-static void fl_hw_replace_filter(struct tcf_proto *tp,
-				 struct flow_dissector *dissector,
-				 struct fl_flow_key *mask,
-				 struct fl_flow_key *key,
-				 struct tcf_exts *actions,
-				 unsigned long cookie, u32 flags)
+static int fl_hw_replace_filter(struct tcf_proto *tp,
+				struct flow_dissector *dissector,
+				struct fl_flow_key *mask,
+				struct fl_flow_key *key,
+				struct tcf_exts *actions,
+				unsigned long cookie, u32 flags)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct tc_cls_flower_offload offload = {0};
 	struct tc_to_netdev tc;
+	int err;
 
 	if (!tc_should_offload(dev, tp, flags))
-		return;
+		return tc_skip_sw(flags) ? -EINVAL : 0;
 
 	offload.command = TC_CLSFLOWER_REPLACE;
 	offload.cookie = cookie;
@@ -211,7 +212,12 @@ static void fl_hw_replace_filter(struct tcf_proto *tp,
 	tc.type = TC_SETUP_CLSFLOWER;
 	tc.cls_flower = &offload;
 
-	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
+	err = dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
+
+	if (tc_skip_sw(flags))
+		return err;
+
+	return 0;
 }
 
 static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
@@ -572,20 +578,22 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	if (err)
 		goto errout;
 
-	if (!(fnew->flags & TCA_CLS_FLAGS_SKIP_SW)) {
+	if (!tc_skip_sw(fnew->flags)) {
 		err = rhashtable_insert_fast(&head->ht, &fnew->ht_node,
 					     head->ht_params);
 		if (err)
 			goto errout;
 	}
 
-	fl_hw_replace_filter(tp,
-			     &head->dissector,
-			     &mask.key,
-			     &fnew->key,
-			     &fnew->exts,
-			     (unsigned long)fnew,
-			     fnew->flags);
+	err = fl_hw_replace_filter(tp,
+				   &head->dissector,
+				   &mask.key,
+				   &fnew->key,
+				   &fnew->exts,
+				   (unsigned long)fnew,
+				   fnew->flags);
+	if (err)
+		goto errout;
 
 	if (fold) {
 		rhashtable_remove_fast(&head->ht, &fold->ht_node,
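[ To summarize the resulting behavior, here is the error policy
  condensed into a hypothetical helper, fl_hw_err_policy(). It is not
  part of the patch; it only restates what the fl_hw_replace_filter()
  changes above implement: ]

/*
 * A SKIP_SW filter can only live in hardware, so a failure from
 * ndo_setup_tc() (or a device that cannot offload at all) must reach
 * the user.  When the software path is still available, a hardware
 * failure is not fatal and is ignored.
 */
static int fl_hw_err_policy(int hw_err, u32 flags)
{
	return tc_skip_sw(flags) ? hw_err : 0;
}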