From patchwork Fri Aug 12 14:35:07 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Moats <rmoats@us.ibm.com>
X-Patchwork-Id: 658669
From: Ryan Moats <rmoats@us.ibm.com>
To: dev@openvswitch.org
Date: Fri, 12 Aug 2016 09:35:07 -0500
Message-Id: <1471012507-57961-1-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 2.7.4 (Apple Git-66)
Subject: [ovs-dev] [PATCH v2] ovn-controller: Remove flows created for now deleted SB database rows.

Ensure that flows created for now-deleted port binding and multicast
group rows are cleared when doing full processing.

Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
---
v1->v2:
  - replace use of ssets for storing UUIDs as strings with hmaps for
    storing the UUID structs themselves
  - include an optimization suggested by Ben when reviewing v1

 ovn/controller/physical.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 55 insertions(+), 1 deletion(-)
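A note for reviewers on the v2 data structure (illustration only, not
part of the patch): v2 stores 'struct uuid' values directly in hmaps
keyed by uuid_hash(), where v1 kept UUID strings in ssets, so each pass
avoids uuid-to-string conversions.  A minimal standalone sketch of the
store/lookup pattern follows, assuming only the OVS lib headers;
add_uuid() and contains_uuid() are made-up names for the example:

#include <stdbool.h>
#include "openvswitch/hmap.h"
#include "util.h"
#include "uuid.h"

struct uuid_hash_node {
    struct hmap_node node;  /* Links this entry into an hmap. */
    struct uuid uuid;       /* The UUID, stored by value, not as a string. */
};

/* Inserts a copy of 'uuid' into 'set', hashed with uuid_hash(). */
static void
add_uuid(struct hmap *set, const struct uuid *uuid)
{
    struct uuid_hash_node *n = xzalloc(sizeof *n);
    n->uuid = *uuid;
    hmap_insert(set, &n->node, uuid_hash(&n->uuid));
}

/* Returns true if 'uuid' was previously added to 'set'. */
static bool
contains_uuid(struct hmap *set, const struct uuid *uuid)
{
    struct uuid_hash_node *n;
    HMAP_FOR_EACH_WITH_HASH (n, node, uuid_hash(uuid), set) {
        if (uuid_equals(uuid, &n->uuid)) {
            return true;
        }
    }
    return false;
}

Storing the struct by value also keeps each lookup to one hash probe
plus uuid_equals(), rather than a string hash plus strcmp().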
diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 4448756..a27c1ac 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -20,6 +20,7 @@
 #include "lflow.h"
 #include "lib/poll-loop.h"
 #include "ofctrl.h"
+#include "openvswitch/hmap.h"
 #include "openvswitch/match.h"
 #include "openvswitch/ofp-actions.h"
 #include "openvswitch/ofpbuf.h"
@@ -58,6 +59,15 @@ static struct simap localvif_to_ofport =
     SIMAP_INITIALIZER(&localvif_to_ofport);
 static struct hmap tunnels = HMAP_INITIALIZER(&tunnels);
 
+struct uuid_hash_node {
+    struct hmap_node node;
+    struct uuid uuid;
+};
+
+static struct hmap port_binding_uuids = HMAP_INITIALIZER(&port_binding_uuids);
+static struct hmap multicast_group_uuids =
+    HMAP_INITIALIZER(&multicast_group_uuids);
+
 /* UUID to identify OF flows not associated with ovsdb rows. */
 static struct uuid *hc_uuid = NULL;
 static bool full_binding_processing = false;
@@ -636,6 +646,32 @@ consider_mc_group(enum mf_field_id mff_ovn_geneve,
     sset_destroy(&remote_chassis);
 }
 
+static bool
+find_uuid_in_hmap(struct hmap *hmap_p, struct uuid *uuid) {
+    struct uuid_hash_node *candidate;
+    HMAP_FOR_EACH_WITH_HASH (candidate, node, uuid_hash(uuid), hmap_p) {
+        if (uuid_equals(uuid, &candidate->uuid)) {
+            return true;
+        }
+    }
+    return false;
+}
+
+/* Deletes the flows whose UUIDs are in 'old' but not 'new', and then replaces
+ * 'old' by 'new'. */
+static void
+rationalize_hmap_and_delete_flows(struct hmap *old, struct hmap *new)
+{
+    struct uuid_hash_node *uuid_node, *old_node;
+    HMAP_FOR_EACH_SAFE (uuid_node, old_node, node, old) {
+        if (!find_uuid_in_hmap(new, &uuid_node->uuid)) {
+            ofctrl_remove_flows(&uuid_node->uuid);
+        }
+    }
+    hmap_swap(old, new);
+    hmap_clear(new);
+}
+
 void
 physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
              const struct ovsrec_bridge *br_int, const char *this_chassis_id,
@@ -791,6 +827,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
      * 64 for logical-to-physical translation. */
     const struct sbrec_port_binding *binding;
     if (full_binding_processing) {
+        struct hmap new_port_binding_uuids =
+            HMAP_INITIALIZER(&new_port_binding_uuids);
         SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
             /* Because it is possible in the above code to enter this
              * for loop without having cleared the flow table first, we
@@ -798,7 +836,14 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
             ofctrl_remove_flows(&binding->header_.uuid);
             consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
                                   patched_datapaths, binding, &ofpacts);
-        }
+            struct uuid_hash_node *hash_node = xzalloc(sizeof *hash_node);
+            hash_node->uuid = binding->header_.uuid;
+            hmap_insert(&new_port_binding_uuids, &hash_node->node,
+                        uuid_hash(&hash_node->uuid));
+        }
+        rationalize_hmap_and_delete_flows(&port_binding_uuids,
+                                          &new_port_binding_uuids);
+        hmap_destroy(&new_port_binding_uuids);
         full_binding_processing = false;
     } else {
         SBREC_PORT_BINDING_FOR_EACH_TRACKED (binding, ctx->ovnsb_idl) {
@@ -818,6 +863,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     const struct sbrec_multicast_group *mc;
     struct ofpbuf remote_ofpacts;
     ofpbuf_init(&remote_ofpacts, 0);
+    struct hmap new_multicast_group_uuids =
+        HMAP_INITIALIZER(&new_multicast_group_uuids);
     SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
         /* As multicast groups are always reprocessed each time,
          * the first step is to clean the old flows for the group
@@ -825,7 +872,14 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
         ofctrl_remove_flows(&mc->header_.uuid);
         consider_mc_group(mff_ovn_geneve, ct_zones, local_datapaths, mc,
                           &ofpacts, &remote_ofpacts);
+        struct uuid_hash_node *hash_node = xzalloc(sizeof *hash_node);
+        hash_node->uuid = mc->header_.uuid;
+        hmap_insert(&new_multicast_group_uuids, &hash_node->node,
+                    uuid_hash(&hash_node->uuid));
     }
+    rationalize_hmap_and_delete_flows(&multicast_group_uuids,
+                                      &new_multicast_group_uuids);
+    hmap_destroy(&new_multicast_group_uuids);
     ofpbuf_uninit(&remote_ofpacts);
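
To make the intended lifecycle of rationalize_hmap_and_delete_flows()
concrete, here is a self-contained sketch (illustration only, not part
of the patch) of how the two hmaps evolve across consecutive
full-processing passes; ofctrl_remove_flows() is replaced by a printf
stub and all names are invented for the example:

/* rationalize-sketch.c: illustration only; builds against the OVS lib
 * code for hmap, uuid, and util. */
#include <stdbool.h>
#include <stdio.h>
#include "openvswitch/hmap.h"
#include "util.h"
#include "uuid.h"

struct uuid_hash_node {
    struct hmap_node node;
    struct uuid uuid;
};

/* Inserts a copy of 'uuid' into 'set', as the patch does per row. */
static void
add_uuid(struct hmap *set, const struct uuid *uuid)
{
    struct uuid_hash_node *n = xzalloc(sizeof *n);
    n->uuid = *uuid;
    hmap_insert(set, &n->node, uuid_hash(&n->uuid));
}

static bool
find_uuid_in_hmap(struct hmap *hmap_p, struct uuid *uuid)
{
    struct uuid_hash_node *candidate;
    HMAP_FOR_EACH_WITH_HASH (candidate, node, uuid_hash(uuid), hmap_p) {
        if (uuid_equals(uuid, &candidate->uuid)) {
            return true;
        }
    }
    return false;
}

/* Stand-in for ofctrl_remove_flows() so the sketch runs anywhere. */
static void
fake_remove_flows(const struct uuid *uuid)
{
    printf("would remove flows for "UUID_FMT"\n", UUID_ARGS(uuid));
}

/* Same shape as the patch's rationalize_hmap_and_delete_flows().
 * As in the patch, the abandoned nodes are not freed here. */
static void
rationalize(struct hmap *old, struct hmap *new)
{
    struct uuid_hash_node *uuid_node, *next;
    HMAP_FOR_EACH_SAFE (uuid_node, next, node, old) {
        if (!find_uuid_in_hmap(new, &uuid_node->uuid)) {
            fake_remove_flows(&uuid_node->uuid);
        }
    }
    hmap_swap(old, new);    /* 'old' now holds this pass's UUIDs. */
    hmap_clear(new);        /* 'new' is empty and ready for reuse. */
}

int
main(void)
{
    struct hmap old = HMAP_INITIALIZER(&old);
    struct hmap new = HMAP_INITIALIZER(&new);
    struct uuid a, b;
    uuid_generate(&a);
    uuid_generate(&b);

    add_uuid(&new, &a);         /* Pass 1: rows A and B exist. */
    add_uuid(&new, &b);
    rationalize(&old, &new);    /* Deletes nothing: 'old' was empty. */

    add_uuid(&new, &b);         /* Pass 2: row A has been deleted. */
    rationalize(&old, &new);    /* Prints A's UUID. */

    return 0;
}

The second rationalize() call is the case the patch targets: a port
binding or multicast group row that vanished from the SB database
between full-processing passes still has flows installed, and the
old-minus-new comparison is what finds and removes them.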