From patchwork Tue Nov  1 14:53:24 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Blakey <paulb@mellanox.com>
X-Patchwork-Id: 689939
From: Paul Blakey <paulb@mellanox.com>
To: dev@openvswitch.org
Date: Tue, 1 Nov 2016 16:53:24 +0200
Message-Id: <1478012010-32494-4-git-send-email-paulb@mellanox.com>
X-Mailer: git-send-email 1.7.8.2
In-Reply-To: <1478012010-32494-1-git-send-email-paulb@mellanox.com>
References: <1478012010-32494-1-git-send-email-paulb@mellanox.com>
Cc: Shahar Klein, Simon Horman, Rony Efraim, Jiri Pirko,
 Marcelo Ricardo Leitner, Or Gerlitz, Hadar Har-Zion, Andy Gospodarek
Subject: [ovs-dev] [PATCH ovs V1 3/9] dpif-hw-acc: Add new hash maps
 ufid <-> tc flow and ovs port <-> netdev

A flow in ovs is identified by its ufid; in tc, it is identified by its
flow handle + priority. Since we use the flow's protocol number as the
priority when offloading to tc, we keep a map between the ovs ufid and
the tc flow handle + protocol.

Also add a map between an ovs in_port and its netdev, used when
offloading and dumping flows.

Signed-off-by: Paul Blakey
Signed-off-by: Shahar Klein
---
 lib/dpif-hw-acc.c | 292 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/dpif-hw-acc.h |  21 ++++
 2 files changed, 313 insertions(+)

diff --git a/lib/dpif-hw-acc.c b/lib/dpif-hw-acc.c
index 252a90f..5f65904 100644
--- a/lib/dpif-hw-acc.c
+++ b/lib/dpif-hw-acc.c
@@ -40,10 +40,290 @@
 #include "unaligned.h"
 #include "util.h"
 #include "openvswitch/vlog.h"
+#include "netdev-provider.h"
 #include "dpif-hw-acc.h"
 
 VLOG_DEFINE_THIS_MODULE(dpif_hw_acc);
 
+static char *
+printufid(const ovs_u128 *ovs_ufid)
+{
+    static char ufid[64];
+
+    if (ovs_ufid)
+        sprintf(ufid, "%x%x%x%x", ovs_ufid->u32[0], ovs_ufid->u32[1],
+                ovs_ufid->u32[2], ovs_ufid->u32[3]);
+    else
+        sprintf(ufid, "(missing_ufid)");
+    return ufid;
+}
+
+static inline size_t
+hash_ufid(const ovs_u128 *ovs_ufid)
+{
+    return hash_words64((const uint64_t *) ovs_ufid,
+                        sizeof *ovs_ufid / sizeof (uint64_t), 0);
+}
+
+static inline size_t
+hash_port(odp_port_t port)
+{
+    return hash_int((int) port, 0);
+}
+
+static inline size_t
+hash_handle_prio_port(int handle, uint16_t prio, odp_port_t port)
+{
+    /* TODO: fix uint cast */
+    return hash_int((uint32_t) prio, hash_int(handle, hash_port(port)));
+}
+
+static struct netdev *
+port_find(struct dpif_hw_acc *dpif, odp_port_t port)
+{
+    struct port_netdev_hash_data *data;
+    size_t hash = hash_port(port);
+    struct netdev *res = 0;
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH_WITH_HASH(data, node, hash, &dpif->port_to_netdev) {
+        if (data->port == port)
+            break;
+    }
+    if (data) {
+        res = data->netdev;
+        VLOG_DBG
+            ("found port mapping: port number %d -> port name %s (pointer: %p)\n",
+             port, res->name, res);
+    }
+    ovs_mutex_unlock(&dpif->hash_mutex);
+
+    return res;
+}
+
+static int
+port_del_name(struct dpif_hw_acc *dpif, char *name)
+{
+    struct port_netdev_hash_data *data;
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH(data, node, &dpif->port_to_netdev) {
+        if (!strcmp(data->netdev->name, name))
+            break;
+    }
+    if (data)
+        hmap_remove(&dpif->port_to_netdev, &data->node);
+    ovs_mutex_unlock(&dpif->hash_mutex);
+
+    free(data);
+    return data ? 1 : 0;
+}
+
+static int
+port_del(struct dpif_hw_acc *dpif, odp_port_t port)
+{
+    struct port_netdev_hash_data *data;
+    size_t hash = hash_port(port);
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH_WITH_HASH(data, node, hash, &dpif->port_to_netdev) {
+        if (data->port == port)
+            break;
+    }
+    if (data)
+        hmap_remove(&dpif->port_to_netdev, &data->node);
+    ovs_mutex_unlock(&dpif->hash_mutex);
+
+    free(data);
+    return data ? 1 : 0;
+}
+
+static int
+port_add(struct dpif_hw_acc *dpif, odp_port_t port, struct netdev *netdev)
+{
+    struct port_netdev_hash_data *data;
+    size_t hash = hash_port(port);
+    int ret = 0;
+
+    if (!netdev || !netdev->name || !port)
+        return -1;
+
+    if (netdev->netdev_class == &netdev_internal_class) {
+        if (!strcmp(netdev->name, "skip_hw")) {
+            tc_set_skip_hw(true);
+        }
+        return -1;
+    }
+    if (port_del(dpif, port) || port_del_name(dpif, netdev->name)) {
+        VLOG_DBG
+            ("%s %d %s (%p) port number %d name: %s, deleted to be replaced\n",
+             __FILE__, __LINE__, __func__, dpif, port, netdev->name);
+        ret = 1;
+    }
+
+    data = malloc(sizeof (struct port_netdev_hash_data));
+    data->netdev = netdev;
+    data->port = port;
+
+    VLOG_DBG
+        ("%s %d %s (%p): adding new port mapping: %d -> netdev %p name: %s, type: %s, ifindex: %d, hash: %lu\n",
+         __FILE__, __LINE__, __func__, dpif, port, netdev, netdev->name,
+         netdev->netdev_class->type, netdev_get_ifindex(netdev), hash);
+    ovs_mutex_lock(&dpif->hash_mutex);
+    hmap_insert(&dpif->port_to_netdev, &data->node, hash);
+    ovs_mutex_unlock(&dpif->hash_mutex);
+    return ret;
+}
+
+static int
+delhandle(struct dpif_hw_acc *dpif, const ovs_u128 *ovs_ufid)
+{
+    struct ufid_handle_hash_data *data;
+    size_t hash;
+
+    if (!ovs_ufid) {
+        VLOG_ERR("%s %d %s (%p) can't delete missing ufid\n", __FILE__,
+                 __LINE__, __func__, dpif);
+        return 0;
+    }
+    hash = hash_ufid(ovs_ufid);
+
+    VLOG_DBG("%s %d %s (%p): removing %s\n", __FILE__, __LINE__, __func__,
+             dpif, printufid(ovs_ufid));
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH_WITH_HASH(data, node_ufid, hash, &dpif->ufid_to_handle) {
+        if (!memcmp(&data->ovs_ufid, ovs_ufid, sizeof (*ovs_ufid)))
+            break;
+    }
+    if (data) {
+        VLOG_DBG("%s %d %s (%p) ufid %s found! handle: %d, removing it\n",
+                 __FILE__, __LINE__, __func__, dpif, printufid(ovs_ufid),
+                 data->handle);
+        hmap_remove(&dpif->ufid_to_handle, &data->node_ufid);
+        hmap_remove(&dpif->handle_to_ufid, &data->node_handle);
+        free(data);
+    }
+    ovs_mutex_unlock(&dpif->hash_mutex);
+    return data ? 1 : 0;
+}
+
+static int
+puthandle(struct dpif_hw_acc *dpif, const ovs_u128 *ovs_ufid,
+          struct netdev *in, odp_port_t port, int handle, uint16_t prio)
+{
+    int ret = 0;
+    size_t hash_to_ufid = hash_handle_prio_port(handle, prio, port);
+
+    if (!ovs_ufid) {
+        VLOG_ERR("%s %d %s (%p) missing UFID!\n", __FILE__, __LINE__, __func__,
+                 dpif);
+        return 0;
+    }
+
+    if (delhandle(dpif, ovs_ufid))
+        ret = 1;
+
+    struct ufid_handle_hash_data *data =
+        malloc(sizeof (struct ufid_handle_hash_data));
+    data->ovs_ufid = *ovs_ufid;
+    data->handle = handle;
+    data->netdev = in;
+    data->port = port;
+    data->prio = prio;
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    hmap_insert(&dpif->ufid_to_handle, &data->node_ufid, hash_ufid(ovs_ufid));
+    hmap_insert(&dpif->handle_to_ufid, &data->node_handle, hash_to_ufid);
+    VLOG_DBG
+        ("%s %d %s (%p) added mapping %s <-> (handle: %d, prio: %d, port: %d, indev: %p)\n",
+         __FILE__, __LINE__, __func__, dpif, printufid(ovs_ufid), handle,
+         prio, port, in);
+    ovs_mutex_unlock(&dpif->hash_mutex);
+    return ret;
+}
+
+static ovs_u128 *
+findufid(struct dpif_hw_acc *dpif, odp_port_t port, int handle,
+         uint16_t prio)
+{
+    struct ufid_handle_hash_data *data;
+    size_t hash = hash_handle_prio_port(handle, prio, port);
+
+    VLOG_DBG
+        ("%s %d %s (%p) finding ufid of (handle: %d, prio: %d, port: %d), hash: %lu\n",
+         __FILE__, __LINE__, __func__, dpif, handle, prio, port, hash);
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH_WITH_HASH(data, node_handle, hash, &dpif->handle_to_ufid) {
+        if (data->handle == handle && data->port == port
+            && data->prio == prio)
+            break;
+    }
+    ovs_mutex_unlock(&dpif->hash_mutex);
+
+    return data ? &data->ovs_ufid : 0;
+}
+
+static int
+gethandle(struct dpif_hw_acc *dpif, const ovs_u128 *ovs_ufid,
+          struct netdev **in, uint16_t *prio, const char *func, int print)
+{
+    struct ufid_handle_hash_data *data;
+    int handle = 0;
+    size_t hash = 0;
+
+    if (in)
+        *in = 0;
+
+    if (!ovs_ufid) {
+        VLOG_DBG("%s %d %s (%p) called by %s without a ufid.\n", __FILE__,
+                 __LINE__, __func__, dpif, func);
+        return 0;
+    } else
+        hash = hash_ufid(ovs_ufid);
+
+    if (print)
+        VLOG_DBG("%s %d %s (%p) called by %s to find ufid %s\n", __FILE__,
+                 __LINE__, __func__, dpif, func, printufid(ovs_ufid));
+
+    ovs_mutex_lock(&dpif->hash_mutex);
+    HMAP_FOR_EACH_WITH_HASH(data, node_ufid, hash, &dpif->ufid_to_handle) {
+        if (!memcmp(&data->ovs_ufid, ovs_ufid, sizeof (*ovs_ufid)))
+            break;
+    }
+    ovs_mutex_unlock(&dpif->hash_mutex);
+
+    if (data && (!data->handle || !data->netdev || !data->prio)) {
+        VLOG_ERR
+            ("missing handle/dev/prio for ufid: %s, handle: %d, netdev: %p, prio: %d\n",
+             printufid(ovs_ufid), data->handle, data->netdev, data->prio);
+        return 0;
+    }
+    handle = data ? data->handle : 0;
+    if (in)
+        *in = data ? data->netdev : 0;
+    if (prio)
+        *prio = data ? data->prio : 0;
+    if (print && handle)
+        VLOG_DBG("found ufid: %s, handle: %d, prio: %d, netdev: %p\n",
+                 printufid(ovs_ufid), handle, data->prio, data->netdev);
+    return handle;
+}
+
+static int
+get_ovs_port(struct dpif_hw_acc *dpif, int ifindex)
+{
+    struct port_netdev_hash_data *data;
+
+    HMAP_FOR_EACH(data, node, &dpif->port_to_netdev) {
+        if (netdev_get_ifindex(data->netdev) == ifindex) {
+            return data->port;
+        }
+    }
+    return -1;
+}
+
 static struct dpif_hw_acc *
 dpif_hw_acc_cast(const struct dpif *dpif)
 {
@@ -52,6 +332,16 @@ dpif_hw_acc_cast(const struct dpif *dpif)
 }
 
 static int
+initmaps(struct dpif_hw_acc *dpif)
+{
+    hmap_init(&dpif->port_to_netdev);
+    hmap_init(&dpif->ufid_to_handle);
+    hmap_init(&dpif->handle_to_ufid);
+    ovs_mutex_init(&dpif->hash_mutex);
+    return 0;
+}
+
+static int
 dpif_hw_acc_open(const struct dpif_class *class OVS_UNUSED,
                  const char *name, bool create, struct dpif **dpifp)
 {
@@ -75,6 +365,8 @@ dpif_hw_acc_open(const struct dpif_class *class OVS_UNUSED,
     }
 
     dpif = xzalloc(sizeof *dpif);
+    initmaps(dpif);
+
     *CONST_CAST(const char **, &dpif->name) = xstrdup(name);
 
     uint16_t netflow_id = hash_string(dpif->name, 0);
diff --git a/lib/dpif-hw-acc.h b/lib/dpif-hw-acc.h
index 5559b41..10467e3 100644
--- a/lib/dpif-hw-acc.h
+++ b/lib/dpif-hw-acc.h
@@ -2,6 +2,7 @@
 #define DPIF_HW_NETLINK_H 1
 
 #include "ovs-thread.h"
+#include "hmap.h"
 #include "dpif-provider.h"
 
 /* Datapath interface for the openvswitch Linux kernel module. */
@@ -9,6 +10,26 @@ struct dpif_hw_acc {
     struct dpif dpif;
     struct dpif *lp_dpif_netlink;
     const char *const name;
+    struct ovs_mutex hash_mutex;
+    struct hmap port_to_netdev;
+    struct hmap ufid_to_handle;
+    struct hmap handle_to_ufid;
+};
+
+struct port_netdev_hash_data {
+    struct hmap_node node;
+    struct netdev *netdev;
+    odp_port_t port;
+};
+
+struct ufid_handle_hash_data {
+    struct hmap_node node_ufid;
+    struct hmap_node node_handle;
+    ovs_u128 ovs_ufid;
+    int handle;
+    int prio;
+    struct netdev *netdev;
+    odp_port_t port;
 };
 
 #endif
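
Note on the mapping scheme: struct ufid_handle_hash_data above carries two
hmap_nodes, so one allocation is linked into both ufid_to_handle and
handle_to_ufid at once, which is what makes the ufid <-> (handle, prio, port)
mapping bidirectional without duplicating entries. The following standalone
sketch illustrates that same "one entry, two tables" technique in plain C with
toy chained buckets instead of OVS's struct hmap; it is not part of the patch,
and all names in it are invented for illustration.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

struct ufid { uint64_t hi, lo; };     /* stand-in for ovs_u128 */

/* One allocation, linked into two tables, like ufid_handle_hash_data. */
struct entry {
    struct ufid ufid;
    int handle, prio, port;
    struct entry *next_by_ufid;       /* chain in by_ufid[] */
    struct entry *next_by_handle;     /* chain in by_handle[] */
};

static struct entry *by_ufid[NBUCKETS];
static struct entry *by_handle[NBUCKETS];

static size_t hash_ufid(const struct ufid *u) {
    return (size_t) (u->hi ^ u->lo) % NBUCKETS;
}

static size_t hash_hpp(int handle, int prio, int port) {
    return ((size_t) handle * 31 + (size_t) prio * 17 + (size_t) port)
           % NBUCKETS;
}

/* Insert one entry into both tables, as puthandle() does with its two
 * hmap_insert() calls on node_ufid and node_handle. */
static void put(struct ufid u, int handle, int prio, int port) {
    struct entry *e = calloc(1, sizeof *e);

    e->ufid = u;
    e->handle = handle;
    e->prio = prio;
    e->port = port;

    size_t h = hash_ufid(&u);
    e->next_by_ufid = by_ufid[h];
    by_ufid[h] = e;

    h = hash_hpp(handle, prio, port);
    e->next_by_handle = by_handle[h];
    by_handle[h] = e;
}

/* ufid -> entry: the direction gethandle() resolves when offloading. */
static struct entry *get_by_ufid(const struct ufid *u) {
    struct entry *e;

    for (e = by_ufid[hash_ufid(u)]; e; e = e->next_by_ufid)
        if (!memcmp(&e->ufid, u, sizeof *u))
            return e;
    return NULL;
}

/* (handle, prio, port) -> entry: the direction findufid() resolves when
 * a flow dumped from tc must be reported back with its ovs ufid. */
static struct entry *get_by_handle(int handle, int prio, int port) {
    struct entry *e;

    for (e = by_handle[hash_hpp(handle, prio, port)]; e;
         e = e->next_by_handle)
        if (e->handle == handle && e->prio == prio && e->port == port)
            return e;
    return NULL;
}

int main(void) {
    struct ufid u = { 0x1234, 0x5678 };

    /* prio mirrors the patch's use of the flow protocol as tc priority
     * (0x0800 = IPv4); handle and port values are arbitrary. */
    put(u, 42 /* handle */, 0x0800 /* prio */, 3 /* port */);

    printf("ufid -> handle %d\n", get_by_ufid(&u)->handle);
    printf("handle -> ufid %" PRIx64 "%" PRIx64 "\n",
           get_by_handle(42, 0x0800, 3)->ufid.hi,
           get_by_handle(42, 0x0800, 3)->ufid.lo);
    return 0;
}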