From patchwork Tue Mar 1 14:24:49 2016
X-Patchwork-Submitter: Amir Vadai
X-Patchwork-Id: 590550
X-Patchwork-Delegate: davem@davemloft.net
From: Amir Vadai
To: "David S. Miller"
Cc: netdev@vger.kernel.org, Or Gerlitz, John Fastabend, Saeed Mahameed,
 Hadar Har-Zion, Jiri Pirko, Amir Vadai
Subject: [PATCH net-next 7/8] net/mlx5e: Support offload cls_flower with drop action
Date: Tue, 1 Mar 2016 16:24:49 +0200
Message-Id: <1456842290-7844-8-git-send-email-amir@vadai.me>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1456842290-7844-1-git-send-email-amir@vadai.me>
References: <1456842290-7844-1-git-send-email-amir@vadai.me>
X-Mailing-List: netdev@vger.kernel.org

Parse tc_cls_flower_offload into device-specific commands and program
the hardware to classify and act accordingly.

For example, to drop ICMP (ip_proto 1) packets with a specific smac, dmac,
src_ip and dst_ip, arriving at interface ens9:

 # tc qdisc add dev ens9 ingress
 # tc filter add dev ens9 protocol ip parent ffff: \
     flower ip_proto 1 \
     dst_mac 7c:fe:90:69:81:62 src_mac 7c:fe:90:69:81:56 \
     dst_ip 11.11.11.11 src_ip 11.11.11.12 indev ens9 \
     action drop
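
A transport-port match takes the same general form; as an illustrative sketch
(assuming the flower src_port/dst_port options are available in the installed
iproute2), dropping TCP traffic to port 80 on the same interface would look
roughly like the following, exercising the FLOW_DISSECTOR_KEY_PORTS handling
added below:

 # tc filter add dev ens9 protocol ip parent ffff: \
     flower ip_proto tcp dst_port 80 indev ens9 \
     action drop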

Signed-off-by: Amir Vadai
Signed-off-by: Or Gerlitz
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |   9 +
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c   | 287 ++++++++++++++++++++++
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.h   |   5 +
 3 files changed, 301 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index cb02b4c..1d1da94 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1889,6 +1889,15 @@ static int mlx5e_ndo_setup_tc(struct net_device *dev, u32 handle,
 		goto mqprio;
 
 	switch (tc->type) {
+	case TC_SETUP_CLSFLOWER:
+		switch (tc->cls_flower->command) {
+		case TC_CLSFLOWER_REPLACE:
+			return mlx5e_configure_flower(priv, proto, tc->cls_flower);
+		case TC_CLSFLOWER_DESTROY:
+			return mlx5e_delete_flower(priv, tc->cls_flower);
+		default:
+			return -EINVAL;
+		}
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 7d1c0a3..8fee983 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -30,6 +30,9 @@
  * SOFTWARE.
  */
 
+#include <net/flow_dissector.h>
+#include <net/pkt_cls.h>
+#include <net/tc_act/tc_gact.h>
 #include <linux/mlx5/fs.h>
 #include <linux/mlx5/device.h>
 #include <linux/rhashtable.h>
@@ -94,6 +97,290 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
 	}
 }
 
+static int parse_cls_flower(struct mlx5e_priv *priv,
+			    u32 *match_c, u32 *match_v,
+			    struct tc_cls_flower_offload *f)
+{
+	void *headers_c = MLX5_ADDR_OF(fte_match_param, match_c, outer_headers);
+	void *headers_v = MLX5_ADDR_OF(fte_match_param, match_v, outer_headers);
+	u16 addr_type = 0;
+	u8 ip_proto = 0;
+
+	if (f->dissector->used_keys &
+	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
+	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_PORTS))) {
+		netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n",
+			    f->dissector->used_keys);
+		return -ENOTSUPP;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_dissector_key_control *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_CONTROL,
+						  f->key);
+		addr_type = key->addr_type;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_dissector_key_basic *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_BASIC,
+						  f->key);
+		struct flow_dissector_key_basic *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_BASIC,
+						  f->mask);
+		ip_proto = key->ip_proto;
+
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype,
+			 ntohs(mask->n_proto));
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
+			 ntohs(key->n_proto));
+
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_protocol,
+			 mask->ip_proto);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+			 key->ip_proto);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_dissector_key_eth_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->key);
+		struct flow_dissector_key_eth_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->mask);
+
+		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+					     dmac_47_16),
+				mask->dst);
+		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+					     dmac_47_16),
+				key->dst);
+
+		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+					     smac_47_16),
+				mask->src);
+		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+					     smac_47_16),
+				key->src);
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+		struct flow_dissector_key_ipv4_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv4_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->mask);
+
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+				    src_ipv4_src_ipv6.ipv4_layout.ipv4),
+		       &mask->src, sizeof(mask->src));
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+				    src_ipv4_src_ipv6.ipv4_layout.ipv4),
+		       &key->src, sizeof(key->src));
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+				    dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
+		       &mask->dst, sizeof(mask->dst));
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+				    dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
+		       &key->dst, sizeof(key->dst));
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
+		struct flow_dissector_key_ipv6_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv6_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->mask);
+
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
+		       &mask->src, sizeof(mask->src));
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
+		       &key->src, sizeof(key->src));
+
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
+				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+		       &mask->dst, sizeof(mask->dst));
+		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+		       &key->dst, sizeof(key->dst));
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_dissector_key_ports *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->key);
+		struct flow_dissector_key_ports *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->mask);
+		switch (ip_proto) {
+		case IPPROTO_TCP:
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+				 tcp_sport, ntohs(mask->src));
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 tcp_sport, ntohs(key->src));
+
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+				 tcp_dport, ntohs(mask->dst));
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 tcp_dport, ntohs(key->dst));
+			break;
+
+		case IPPROTO_UDP:
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+				 udp_sport, ntohs(mask->src));
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 udp_sport, ntohs(key->src));
+
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
+				 udp_dport, ntohs(mask->dst));
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 udp_dport, ntohs(key->dst));
+			break;
+		default:
+			netdev_err(priv->netdev,
+				   "Only UDP and TCP transport are supported\n");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+			    u32 *action, u32 *flow_tag)
+{
+#ifdef CONFIG_NET_CLS_ACT
+	const struct tc_action *a;
+
+	*flow_tag = MLX5_FS_DEFAULT_FLOW_TAG;
+	*action = 0;
+
+	if (list_empty(&exts->actions))
+		return -EINVAL;
+
+	list_for_each_entry(a, &exts->actions, list) {
+		/* Only support a single action per rule */
+		if (*action)
+			return -EINVAL;
+
+		if (is_tcf_gact_shot(a)) {
+			*action |= MLX5_FLOW_CONTEXT_ACTION_DROP;
+			continue;
+		}
+
+		return -EINVAL;
+	}
+
+	return 0;
+#else
+	return -EINVAL;
+#endif
+}
+
+int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
+			   struct tc_cls_flower_offload *f)
+{
+	struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+	u32 *match_c;
+	u32 *match_v;
+	int err = 0;
+	u32 flow_tag;
+	u32 action;
+	struct mlx5e_tc_flow *flow;
+	struct mlx5_flow_rule *old = NULL;
+
+	flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
+				      tc->ht_params);
+	if (flow)
+		old = flow->rule;
+	else
+		flow = kzalloc(sizeof(*flow), GFP_KERNEL);
+
+	match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+	match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+	if (!match_c || !match_v || !flow) {
+		err = -ENOMEM;
+		goto err_free;
+	}
+
+	flow->cookie = f->cookie;
+
+	err = parse_cls_flower(priv, match_c, match_v, f);
+	if (err < 0)
+		goto err_free;
+
+	err = parse_tc_actions(priv, f->exts, &action, &flow_tag);
+	if (err < 0)
+		goto err_free;
+
+	err = rhashtable_insert_fast(&tc->ht, &flow->node,
+				     tc->ht_params);
+	if (err)
+		goto err_free;
+
+	flow->rule = mlx5e_tc_add_flow(priv, match_c, match_v, action,
+				       flow_tag);
+	if (IS_ERR(flow->rule)) {
+		err = PTR_ERR(flow->rule);
+		goto err_hash_del;
+	}
+
+	if (old)
+		mlx5e_tc_del_flow(priv, old);
+
+	goto out;
+
+err_hash_del:
+	rhashtable_remove_fast(&tc->ht, &flow->node, tc->ht_params);
+
+err_free:
+	if (!old)
+		kfree(flow);
+out:
+	kfree(match_c);
+	kfree(match_v);
+	return err;
+}
+
+int mlx5e_delete_flower(struct mlx5e_priv *priv,
+			struct tc_cls_flower_offload *f)
+{
+	struct mlx5e_tc_flow *flow;
+	struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+
+	flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
+				      tc->ht_params);
+	if (!flow)
+		return -EINVAL;
+
+	rhashtable_remove_fast(&tc->ht, &flow->node, tc->ht_params);
+
+	mlx5e_tc_del_flow(priv, flow->rule);
+
+	kfree(flow);
+
+	return 0;
+}
+
 static const struct rhashtable_params mlx5e_tc_flow_ht_params = {
 	.head_offset = offsetof(struct mlx5e_tc_flow, node),
 	.key_offset = offsetof(struct mlx5e_tc_flow, cookie),
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 8a0dc0d..f1e7180 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -36,6 +36,11 @@
 void mlx5e_tc_init(struct mlx5e_priv *priv);
 void mlx5e_tc_cleanup(struct mlx5e_priv *priv);
 
+int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
+			   struct tc_cls_flower_offload *f);
+int mlx5e_delete_flower(struct mlx5e_priv *priv,
+			struct tc_cls_flower_offload *f);
+
 static inline int mlx5e_tc_num_filters(struct mlx5e_priv *priv)
 {
 	return atomic_read(&priv->fts.tc.ht.nelems);
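
As a usage sketch (assuming the ens9 setup from the changelog example), the
offloaded rule can be listed and torn down from userspace; deleting the filter
or the ingress qdisc should take the TC_CLSFLOWER_DESTROY branch above and thus
reach mlx5e_delete_flower():

 # tc filter show dev ens9 parent ffff:
 # tc qdisc del dev ens9 ingress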