From patchwork Tue Nov 1 14:53:26 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 689940
From: Paul Blakey <paulb@mellanox.com>
To: dev@openvswitch.org
Date: Tue, 1 Nov 2016 16:53:26 +0200
Message-Id: <1478012010-32494-6-git-send-email-paulb@mellanox.com>
In-Reply-To: <1478012010-32494-1-git-send-email-paulb@mellanox.com>
References: <1478012010-32494-1-git-send-email-paulb@mellanox.com>
Cc: Shahar Klein, Simon Horman, Rony Efraim, Jiri Pirko,
    Marcelo Ricardo Leitner, Or Gerlitz, Hadar Har-Zion, Andy Gospodarek
Subject: [ovs-dev] [PATCH ovs V1 5/9] dpif-hw-acc: converting a tc flow back
 to ovs flow

Converts a dumped tc flow back to a dpif flow, for use when dumping or
getting flows.
Signed-off-by: Paul Blakey
Signed-off-by: Shahar Klein
---
 lib/dpif-hw-acc.c | 200 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 200 insertions(+)

diff --git a/lib/dpif-hw-acc.c b/lib/dpif-hw-acc.c
index 16dd497..bf91d95 100644
--- a/lib/dpif-hw-acc.c
+++ b/lib/dpif-hw-acc.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -326,6 +327,205 @@ get_ovs_port(struct dpif_hw_acc *dpif, int ifindex)
     return -1;
 }
 
+static int
+dpif_hw_tc_flow_to_dpif_flow(struct dpif_hw_acc *dpif,
+                             struct tc_flow *tc_flow,
+                             struct dpif_flow *dpif_flow, odp_port_t inport,
+                             struct ofpbuf *outflow, struct netdev *indev)
+{
+    struct ofpbuf mask_d, *mask = &mask_d;
+
+    ofpbuf_init(mask, 512);
+
+    dpif_flow->pmd_id = PMD_ID_NULL;
+
+    size_t key_offset = nl_msg_start_nested(outflow, OVS_FLOW_ATTR_KEY);
+    size_t mask_offset = nl_msg_start_nested(mask, OVS_FLOW_ATTR_MASK);
+
+    nl_msg_put_u32(outflow, OVS_KEY_ATTR_IN_PORT, inport);
+    nl_msg_put_u32(mask, OVS_KEY_ATTR_IN_PORT, 0xFFFFFFFF);
+
+    /* OVS_KEY_ATTR_ETHERNET */
+    struct ovs_key_ethernet *eth_key =
+        nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_ETHERNET,
+                                 sizeof (*eth_key));
+    struct ovs_key_ethernet *eth_key_mask =
+        nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_ETHERNET,
+                                 sizeof (*eth_key_mask));
+
+    memset(eth_key_mask, 0xFF, sizeof (*eth_key_mask));
+    eth_key->eth_src = tc_flow->src_mac;
+    eth_key->eth_dst = tc_flow->dst_mac;
+    eth_key_mask->eth_src = tc_flow->src_mac_mask;
+    eth_key_mask->eth_dst = tc_flow->dst_mac_mask;
+
+    nl_msg_put_be16(outflow, OVS_KEY_ATTR_ETHERTYPE, tc_flow->eth_type);
+    nl_msg_put_be16(mask, OVS_KEY_ATTR_ETHERTYPE, 0xFFFF);
+
+    /* OVS_KEY_ATTR_IPV6 */
+    if (tc_flow->eth_type == ntohs(ETH_P_IPV6)) {
+        struct ovs_key_ipv6 *ipv6 =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_IPV6,
+                                     sizeof (*ipv6));
+        struct ovs_key_ipv6 *ipv6_mask =
+            nl_msg_put_unspec_zero(mask, OVS_KEY_ATTR_IPV6,
+                                   sizeof (*ipv6_mask));
+
+        memset(&ipv6_mask->ipv6_proto, 0xFF,
+               sizeof (ipv6_mask->ipv6_proto));
+        if (tc_flow->ip_proto)
+            ipv6->ipv6_proto = tc_flow->ip_proto;
+        else
+            ipv6_mask->ipv6_proto = 0;
+        ipv6_mask->ipv6_frag = 0;
+
+        memcpy(ipv6->ipv6_src, tc_flow->ipv6.ipv6_src,
+               sizeof (tc_flow->ipv6.ipv6_src));
+        memcpy(ipv6_mask->ipv6_src, tc_flow->ipv6.ipv6_src_mask,
+               sizeof (tc_flow->ipv6.ipv6_src_mask));
+        memcpy(ipv6->ipv6_dst, tc_flow->ipv6.ipv6_dst,
+               sizeof (tc_flow->ipv6.ipv6_dst));
+        memcpy(ipv6_mask->ipv6_dst, tc_flow->ipv6.ipv6_dst_mask,
+               sizeof (tc_flow->ipv6.ipv6_dst_mask));
+    }
+    /* OVS_KEY_ATTR_IPV4 */
+    if (tc_flow->eth_type == ntohs(ETH_P_IP)) {
+        struct ovs_key_ipv4 *ipv4 =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_IPV4,
+                                     sizeof (*ipv4));
+        struct ovs_key_ipv4 *ipv4_mask =
+            nl_msg_put_unspec_zero(mask, OVS_KEY_ATTR_IPV4,
+                                   sizeof (*ipv4_mask));
+
+        memset(&ipv4_mask->ipv4_proto, 0xFF, sizeof (ipv4_mask->ipv4_proto));
+        if (tc_flow->ip_proto)
+            ipv4->ipv4_proto = tc_flow->ip_proto;
+        else
+            ipv4_mask->ipv4_proto = 0;
+        ipv4_mask->ipv4_frag = 0;
+
+        if (tc_flow->ipv4.ipv4_src)
+            ipv4->ipv4_src = tc_flow->ipv4.ipv4_src;
+        if (tc_flow->ipv4.ipv4_src_mask)
+            ipv4_mask->ipv4_src = tc_flow->ipv4.ipv4_src_mask;
+        if (tc_flow->ipv4.ipv4_dst)
+            ipv4->ipv4_dst = tc_flow->ipv4.ipv4_dst;
+        if (tc_flow->ipv4.ipv4_dst_mask)
+            ipv4_mask->ipv4_dst = tc_flow->ipv4.ipv4_dst_mask;
+    }
+    if (tc_flow->ip_proto == IPPROTO_ICMPV6) {
+        /* putting a masked out icmp */
+        struct ovs_key_icmpv6 *icmp =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_ICMPV6,
+                                     sizeof (*icmp));
+        struct ovs_key_icmpv6 *icmp_mask =
+            nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_ICMPV6,
+                                     sizeof (*icmp_mask));
+
+        icmp->icmpv6_type = 0;
+        icmp->icmpv6_code = 0;
+        memset(icmp_mask, 0, sizeof (*icmp_mask));
+    }
+    if (tc_flow->ip_proto == IPPROTO_ICMP) {
+        /* putting a masked out icmp */
+        struct ovs_key_icmp *icmp =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_ICMP,
+                                     sizeof (*icmp));
+        struct ovs_key_icmp *icmp_mask =
+            nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_ICMP,
+                                     sizeof (*icmp_mask));
+
+        icmp->icmp_type = 0;
+        icmp->icmp_code = 0;
+        memset(icmp_mask, 0, sizeof (*icmp_mask));
+    }
+    if (tc_flow->ip_proto == IPPROTO_TCP) {
+        struct ovs_key_tcp *tcp =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_TCP,
+                                     sizeof (*tcp));
+        struct ovs_key_tcp *tcp_mask =
+            nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_TCP,
+                                     sizeof (*tcp_mask));
+
+        memset(tcp_mask, 0x00, sizeof (*tcp_mask));
+
+        tcp->tcp_src = tc_flow->src_port;
+        tcp_mask->tcp_src = tc_flow->src_port_mask;
+        tcp->tcp_dst = tc_flow->dst_port;
+        tcp_mask->tcp_dst = tc_flow->dst_port_mask;
+    }
+    if (tc_flow->ip_proto == IPPROTO_UDP) {
+        struct ovs_key_udp *udp =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_UDP,
+                                     sizeof (*udp));
+        struct ovs_key_udp *udp_mask =
+            nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_UDP,
+                                     sizeof (*udp_mask));
+
+        memset(udp_mask, 0xFF, sizeof (*udp_mask));
+
+        udp->udp_src = tc_flow->src_port;
+        udp_mask->udp_src = tc_flow->src_port_mask;
+        udp->udp_dst = tc_flow->dst_port;
+        udp_mask->udp_dst = tc_flow->dst_port_mask;
+    }
+    nl_msg_end_nested(outflow, key_offset);
+    nl_msg_end_nested(mask, mask_offset);
+
+    size_t actions_offset =
+        nl_msg_start_nested(outflow, OVS_FLOW_ATTR_ACTIONS);
+    if (tc_flow->ifindex_out) {
+        /* TODO: make this faster */
+        int ovsport = get_ovs_port(dpif, tc_flow->ifindex_out);
+
+        nl_msg_put_u32(outflow, OVS_ACTION_ATTR_OUTPUT, ovsport);
+    }
+    nl_msg_end_nested(outflow, actions_offset);
+
+    struct nlattr *mask_attr =
+        ofpbuf_at_assert(mask, mask_offset, sizeof *mask_attr);
+    void *mask_data = ofpbuf_put_uninit(outflow, mask_attr->nla_len);
+
+    memcpy(mask_data, mask_attr, mask_attr->nla_len);
+    mask_attr = mask_data;
+
+    struct nlattr *key_attr =
+        ofpbuf_at_assert(outflow, key_offset, sizeof *key_attr);
+    struct nlattr *actions_attr =
+        ofpbuf_at_assert(outflow, actions_offset, sizeof *actions_attr);
+
+    dpif_flow->key = nl_attr_get(key_attr);
+    dpif_flow->key_len = nl_attr_get_size(key_attr);
+    dpif_flow->mask = nl_attr_get(mask_attr);
+    dpif_flow->mask_len = nl_attr_get_size(mask_attr);
+    dpif_flow->actions = nl_attr_get(actions_attr);
+    dpif_flow->actions_len = nl_attr_get_size(actions_attr);
+
+    if (tc_flow->stats.n_packets.hi || tc_flow->stats.n_packets.lo) {
+        dpif_flow->stats.used = tc_flow->lastused ? tc_flow->lastused : 0;
+        dpif_flow->stats.n_packets =
+            get_32aligned_u64(&tc_flow->stats.n_packets);
+        dpif_flow->stats.n_bytes = get_32aligned_u64(&tc_flow->stats.n_bytes);
+    } else {
+        dpif_flow->stats.used = 0;
+        dpif_flow->stats.n_packets = 0;
+        dpif_flow->stats.n_bytes = 0;
+    }
+    dpif_flow->stats.tcp_flags = 0;
+
+    dpif_flow->ufid_present = false;
+
+    ovs_u128 *ovs_ufid =
+        findufid(dpif, inport, tc_flow->handle, tc_flow->prio);
+    if (ovs_ufid) {
+        VLOG_DBG("Found UFID!, handle: %d, ufid: %s\n", tc_flow->handle,
+                 printufid(ovs_ufid));
+        dpif_flow->ufid = *ovs_ufid;
+        dpif_flow->ufid_present = true;
+    } else {
+        VLOG_DBG("Creating new UFID\n");
+        ovs_assert(dpif_flow->key && dpif_flow->key_len);
+        dpif_flow_hash(&dpif->dpif, dpif_flow->key, dpif_flow->key_len,
+                       &dpif_flow->ufid);
+        dpif_flow->ufid_present = true;
+        puthandle(dpif, &dpif_flow->ufid, indev, inport, tc_flow->handle,
+                  tc_flow->prio);
+    }
+
+    return 0;
+}
+
 static struct dpif_hw_acc *
 dpif_hw_acc_cast(const struct dpif *dpif)
 {