From patchwork Mon Jul 18 21:21:16 2016
X-Patchwork-Submitter: Ryan Moats
X-Patchwork-Id: 649769
From: Ryan Moats
To: dev@openvswitch.org
Date: Mon, 18 Jul 2016 16:21:16 -0500
Message-Id: <1468876877-5388-2-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1468876877-5388-1-git-send-email-rmoats@us.ibm.com>
References: <1468876877-5388-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v23 1/2] ovn-controller: Persist ovn flow tables

Ensure that ovn flow tables are persisted so that changes to them can
be applied incrementally - this is a prereq for making lflow_run and
physical_run incremental. As part of this change, add a one-to-many
hindex for finding desired flows by their parent's UUID. Also extend
the mapping by match from one-to-one to one-to-many.

Signed-off-by: Ryan Moats
---
 ovn/controller/lflow.c          |  39 +++---
 ovn/controller/lflow.h          |   3 +-
 ovn/controller/ofctrl.c         | 294 +++++++++++++++++++++++++++++------
 ovn/controller/ofctrl.h         |  22 ++-
 ovn/controller/ovn-controller.c |  10 +-
 ovn/controller/physical.c       |  67 +++++----
 ovn/controller/physical.h       |   2 +-
 7 files changed, 295 insertions(+), 142 deletions(-)

diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c index b77b364..b6b1765 100644 --- a/ovn/controller/lflow.c +++ b/ovn/controller/lflow.c @@ -326,8 +326,7 @@ static void consider_logical_flow(const struct lport_index *lports, struct group_table *group_table, const struct simap *ct_zones, struct hmap *dhcp_opts_p, - uint32_t *conj_id_ofs_p, - struct hmap *flow_table); + uint32_t *conj_id_ofs_p); static bool lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp) @@ -358,14 +357,14 @@ is_switch(const struct sbrec_datapath_binding *ldp) } -/* Adds the logical flows from the Logical_Flow table to 'flow_table'. */ +/* Adds the logical flows from the Logical_Flow table to flow tables. 
*/ static void add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports, const struct mcgroup_index *mcgroups, const struct hmap *local_datapaths, const struct hmap *patched_datapaths, struct group_table *group_table, - const struct simap *ct_zones, struct hmap *flow_table) + const struct simap *ct_zones) { uint32_t conj_id_ofs = 1; @@ -380,7 +379,7 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports, SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) { consider_logical_flow(lports, mcgroups, lflow, local_datapaths, patched_datapaths, group_table, ct_zones, - &dhcp_opts, &conj_id_ofs, flow_table); + &dhcp_opts, &conj_id_ofs); } dhcp_opts_destroy(&dhcp_opts); @@ -395,8 +394,7 @@ consider_logical_flow(const struct lport_index *lports, struct group_table *group_table, const struct simap *ct_zones, struct hmap *dhcp_opts_p, - uint32_t *conj_id_ofs_p, - struct hmap *flow_table) + uint32_t *conj_id_ofs_p) { /* Determine translation of logical table IDs to physical table IDs. 
*/ bool ingress = !strcmp(lflow->pipeline, "ingress"); @@ -526,8 +524,8 @@ consider_logical_flow(const struct lport_index *lports, m->match.flow.conj_id += *conj_id_ofs_p; } if (!m->n) { - ofctrl_add_flow(flow_table, ptable, lflow->priority, - &m->match, &ofpacts); + ofctrl_add_flow(ptable, lflow->priority, &m->match, &ofpacts, + &lflow->header_.uuid); } else { uint64_t conj_stubs[64 / 8]; struct ofpbuf conj; @@ -542,8 +540,9 @@ consider_logical_flow(const struct lport_index *lports, dst->clause = src->clause; dst->n_clauses = src->n_clauses; } - ofctrl_add_flow(flow_table, ptable, lflow->priority, - &m->match, &conj); + ofctrl_add_flow(ptable, lflow->priority, &m->match, &conj, + &lflow->header_.uuid); + ofpbuf_uninit(&conj); ofpbuf_uninit(&conj); } } @@ -568,8 +567,7 @@ put_load(const uint8_t *data, size_t len, } static void -consider_neighbor_flow(struct hmap *flow_table, - const struct lport_index *lports, +consider_neighbor_flow(const struct lport_index *lports, const struct sbrec_mac_binding *b, struct ofpbuf *ofpacts_p, struct match *match_p) @@ -601,15 +599,16 @@ consider_neighbor_flow(struct hmap *flow_table, ofpbuf_clear(ofpacts_p); put_load(mac.ea, sizeof mac.ea, MFF_ETH_DST, 0, 48, ofpacts_p); - ofctrl_add_flow(flow_table, OFTABLE_MAC_BINDING, 100, match_p, ofpacts_p); + ofctrl_add_flow(OFTABLE_MAC_BINDING, 100, match_p, ofpacts_p, + &b->header_.uuid); } -/* Adds an OpenFlow flow to 'flow_table' for each MAC binding in the OVN +/* Adds an OpenFlow flow to flow tables for each MAC binding in the OVN * southbound database, using 'lports' to resolve logical port names to * numbers. 
*/ static void add_neighbor_flows(struct controller_ctx *ctx, - const struct lport_index *lports, struct hmap *flow_table) + const struct lport_index *lports) { struct ofpbuf ofpacts; struct match match; @@ -618,7 +617,7 @@ add_neighbor_flows(struct controller_ctx *ctx, const struct sbrec_mac_binding *b; SBREC_MAC_BINDING_FOR_EACH (b, ctx->ovnsb_idl) { - consider_neighbor_flow(flow_table, lports, b, &ofpacts, &match); + consider_neighbor_flow(lports, b, &ofpacts, &match); } ofpbuf_uninit(&ofpacts); } @@ -631,12 +630,12 @@ lflow_run(struct controller_ctx *ctx, const struct lport_index *lports, const struct hmap *local_datapaths, const struct hmap *patched_datapaths, struct group_table *group_table, - const struct simap *ct_zones, struct hmap *flow_table) + const struct simap *ct_zones) { update_address_sets(ctx); add_logical_flows(ctx, lports, mcgroups, local_datapaths, - patched_datapaths, group_table, ct_zones, flow_table); - add_neighbor_flows(ctx, lports, flow_table); + patched_datapaths, group_table, ct_zones); + add_neighbor_flows(ctx, lports); } void diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h index e96a24b..4351fd0 100644 --- a/ovn/controller/lflow.h +++ b/ovn/controller/lflow.h @@ -65,8 +65,7 @@ void lflow_run(struct controller_ctx *, const struct lport_index *, const struct hmap *local_datapaths, const struct hmap *patched_datapaths, struct group_table *group_table, - const struct simap *ct_zones, - struct hmap *flow_table); + const struct simap *ct_zones); void lflow_destroy(void); #endif /* ovn/lflow.h */ diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c index b451453..40e49a1 100644 --- a/ovn/controller/ofctrl.c +++ b/ovn/controller/ofctrl.c @@ -17,11 +17,14 @@ #include "bitmap.h" #include "byte-order.h" #include "dirs.h" +#include "flow.h" #include "hash.h" +#include "hindex.h" #include "hmap.h" #include "ofctrl.h" #include "openflow/openflow.h" #include "openvswitch/dynamic-string.h" +#include "openvswitch/list.h" 
#include "openvswitch/match.h" #include "openvswitch/ofp-actions.h" #include "openvswitch/ofp-msgs.h" @@ -42,20 +45,30 @@ VLOG_DEFINE_THIS_MODULE(ofctrl); /* An OpenFlow flow. */ struct ovn_flow { + struct hmap_node match_hmap_node; /* For match based hashing. */ + struct hindex_node uuid_hindex_node; /* For uuid based hashing. */ + struct ovs_list list_node; /* For handling lists of flows. */ + /* Key. */ - struct hmap_node hmap_node; uint8_t table_id; uint16_t priority; struct match match; - /* Data. */ + /* Data. UUID is used for disambiguation. */ + struct uuid uuid; struct ofpact *ofpacts; size_t ofpacts_len; }; -static uint32_t ovn_flow_hash(const struct ovn_flow *); -static struct ovn_flow *ovn_flow_lookup(struct hmap *flow_table, - const struct ovn_flow *target); +static inline struct ovn_flow * +ovn_flow_from_node(const struct ovs_list *node) +{ + return CONTAINER_OF(node, struct ovn_flow, list_node); +} + +static uint32_t ovn_flow_match_hash(const struct ovn_flow *); +static void ovn_flow_lookup(struct hmap *, const struct ovn_flow *target, + struct ovs_list *answers); static char *ovn_flow_to_string(const struct ovn_flow *); static void ovn_flow_log(const struct ovn_flow *, const char *action); static void ovn_flow_destroy(struct ovn_flow *); @@ -108,14 +121,16 @@ static struct group_table *groups; * S_CLEAR_FLOWS or S_UPDATE_FLOWS, this is really the option we have. 
*/ static enum mf_field_id mff_ovn_geneve; -static void ovn_flow_table_clear(struct hmap *flow_table); -static void ovn_flow_table_destroy(struct hmap *flow_table); +static void ovn_flow_table_destroy(void); static void ovn_group_table_clear(struct group_table *group_table, bool existing); static void ofctrl_recv(const struct ofp_header *, enum ofptype); +static struct hmap match_flow_table = HMAP_INITIALIZER(&match_flow_table); +static struct hindex uuid_flow_table = HINDEX_INITIALIZER(&uuid_flow_table); + void ofctrl_init(void) { @@ -333,7 +348,7 @@ run_S_CLEAR_FLOWS(void) ofputil_bucket_list_destroy(&gm.buckets); /* Clear installed_flows, to match the state of the switch. */ - ovn_flow_table_clear(&installed_flows); + ovn_flow_table_clear(); /* Clear existing groups, to match the state of the switch. */ if (groups) { @@ -456,7 +471,7 @@ void ofctrl_destroy(void) { rconn_destroy(swconn); - ovn_flow_table_destroy(&installed_flows); + ovn_flow_table_destroy(); rconn_packet_counter_destroy(tx_counter); } @@ -498,71 +513,174 @@ ofctrl_recv(const struct ofp_header *oh, enum ofptype type) } } -/* Flow table interface to the rest of ovn-controller. */ +/* Flow table interfaces to the rest of ovn-controller. */ -/* Adds a flow to 'desired_flows' with the specified 'match' and 'actions' to +/* Adds a flow to flow tables with the specified 'match' and 'actions' to * the OpenFlow table numbered 'table_id' with the given 'priority'. The * caller retains ownership of 'match' and 'actions'. * - * This just assembles the desired flow table in memory. Nothing is actually - * sent to the switch until a later call to ofctrl_run(). + * Because it is possible for both actions and matches to change on a rule, + * and because the hmap struct only supports a single hash, this method + * uses a hash on the defined key of (table_id, priority, match criteria). + * Actions from different parents are supported, with the parent's UUID + * information stored for disambiguation. 
* - * The caller should initialize its own hmap to hold the flows. */ + * This just assembles the desired flow tables in memory. Nothing is actually + * sent to the switch until a later call to ofctrl_run(). */ + void -ofctrl_add_flow(struct hmap *desired_flows, - uint8_t table_id, uint16_t priority, - const struct match *match, const struct ofpbuf *actions) +ofctrl_add_flow(uint8_t table_id, uint16_t priority, + const struct match *match, const struct ofpbuf *actions, + const struct uuid *uuid) { + struct ovs_list existing; + + /* Structure that uses table_id+priority+various things as hashes. */ struct ovn_flow *f = xmalloc(sizeof *f); f->table_id = table_id; f->priority = priority; f->match = *match; f->ofpacts = xmemdup(actions->data, actions->size); f->ofpacts_len = actions->size; - f->hmap_node.hash = ovn_flow_hash(f); - - if (ovn_flow_lookup(desired_flows, f)) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5); - if (!VLOG_DROP_INFO(&rl)) { - char *s = ovn_flow_to_string(f); - VLOG_INFO("dropping duplicate flow: %s", s); - free(s); + memcpy(&f->uuid, uuid, sizeof f->uuid); + f->match_hmap_node.hash = ovn_flow_match_hash(f); + f->uuid_hindex_node.hash = uuid_hash(&f->uuid); + + /* Check to see if other flows exist with the same key (table_id + * priority, match criteria). If so, check if this flow's uuid + * exists already. If so, discard this flow and log a warning. + * If the uuid isn't present, then add the flow. + * + * If no other flows have this key, then add the flow. 
*/ + + ovn_flow_lookup(&match_flow_table, f, &existing); + + if (!ovs_list_is_empty(&existing)) { + struct ovn_flow *d; + LIST_FOR_EACH(d, list_node, &existing) { + if (f->table_id == d->table_id && f->priority == d->priority + && ofpacts_equal(f->ofpacts, f->ofpacts_len, + d->ofpacts, d->ofpacts_len)) { + if (uuid_equals(&f->uuid, &d->uuid)) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); + char *s = ovn_flow_to_string(f); + VLOG_WARN_RL(&rl, "found duplicate flow %s for parent "UUID_FMT, + s, UUID_ARGS(&f->uuid)); + ovn_flow_destroy(f); + free(s); + return; + } + } } + } + + hmap_insert(&match_flow_table, &f->match_hmap_node, + f->match_hmap_node.hash); + hindex_insert(&uuid_flow_table, &f->uuid_hindex_node, + f->uuid_hindex_node.hash); +} + +/* Removes a bundle of flows from the flow table. */ + +void +ofctrl_remove_flows(const struct uuid *uuid) +{ + /* Since there isn't an HINDEX_FOR_EACH_SAFE_WITH_HASH macro, + * first, process the hindex and put matching flows into a local list. + * Then, the local list is processed and the items deleted. */ + struct ovs_list delete_list; + ovs_list_init(&delete_list); + struct ovn_flow *f; + HINDEX_FOR_EACH_WITH_HASH(f, uuid_hindex_node, uuid_hash(uuid), + &uuid_flow_table) { + if (uuid_equals(&f->uuid, uuid)) { + ovs_list_push_back(&delete_list, &f->list_node); + } + } + LIST_FOR_EACH_POP(f, list_node, &delete_list) { + hmap_remove(&match_flow_table, &f->match_hmap_node); + hindex_remove(&uuid_flow_table, &f->uuid_hindex_node); ovn_flow_destroy(f); - return; } +} - hmap_insert(desired_flows, &f->hmap_node, f->hmap_node.hash); +/* Shortcut to remove all flows matching the supplied UUID and add this + * flow. */ +void +ofctrl_set_flow(uint8_t table_id, uint16_t priority, + const struct match *match, const struct ofpbuf *actions, + const struct uuid *uuid) +{ + ofctrl_remove_flows(uuid); + ofctrl_add_flow(table_id, priority, match, actions, uuid); } /* ovn_flow. */ -/* Returns a hash of the key in 'f'. 
*/ +/* Duplicate an ovn_flow structure. */ +struct ovn_flow * +ofctrl_dup_flow(struct ovn_flow *source) +{ + struct ovn_flow *answer = xmalloc(sizeof *answer); + answer->table_id = source->table_id; + answer->priority = source->priority; + answer->match = source->match; + answer->ofpacts = xmemdup(source->ofpacts, source->ofpacts_len); + answer->ofpacts_len = source->ofpacts_len; + answer->uuid = source->uuid; + answer->match_hmap_node.hash = ovn_flow_match_hash(answer); + answer->uuid_hindex_node.hash = uuid_hash(&source->uuid); + return answer; +} + +/* Returns a hash of the match key in 'f'. */ static uint32_t -ovn_flow_hash(const struct ovn_flow *f) +ovn_flow_match_hash(const struct ovn_flow *f) { return hash_2words((f->table_id << 16) | f->priority, match_hash(&f->match, 0)); +} +/* Compare two flows and return -1, 0, 1 based on whether a is less than, + * equal to or greater than b. */ +static int +ovn_flow_compare_flows(struct ovn_flow *a, struct ovn_flow *b) { + return uuid_compare_3way(&a->uuid, &b->uuid); } -/* Finds and returns an ovn_flow in 'flow_table' whose key is identical to - * 'target''s key, or NULL if there is none. */ +/* Given a list of ovn_flows, goes through the list and returns + * a single flow, based on a helper method. */ + static struct ovn_flow * -ovn_flow_lookup(struct hmap *flow_table, const struct ovn_flow *target) +ovn_flow_select_from_list(struct ovs_list *flows) { + struct ovn_flow *candidate; + struct ovn_flow *answer = ovn_flow_from_node(ovs_list_front(flows)); + LIST_FOR_EACH (candidate, list_node, flows) { + if (ovn_flow_compare_flows(candidate, answer) < 0) { + answer = candidate; + } + } + return answer; +} + +/* Initializes and fills in the supplied list with ovn_flows from 'flow_table' + * whose key is identical to 'target''s key. 
*/ +static void +ovn_flow_lookup(struct hmap* flow_table, const struct ovn_flow *target, + struct ovs_list *answer) { struct ovn_flow *f; - HMAP_FOR_EACH_WITH_HASH (f, hmap_node, target->hmap_node.hash, + ovs_list_init(answer); + HMAP_FOR_EACH_WITH_HASH (f, match_hmap_node, target->match_hmap_node.hash, flow_table) { if (f->table_id == target->table_id && f->priority == target->priority && match_equal(&f->match, &target->match)) { - return f; + ovs_list_push_back(answer, &f->list_node); } } - return NULL; } static char * @@ -592,26 +710,29 @@ ovn_flow_destroy(struct ovn_flow *f) { if (f) { free(f->ofpacts); - free(f); } + free(f); } /* Flow tables of struct ovn_flow. */ -static void -ovn_flow_table_clear(struct hmap *flow_table) +void +ovn_flow_table_clear(void) { - struct ovn_flow *f; - HMAP_FOR_EACH_POP (f, hmap_node, flow_table) { + struct ovn_flow *f, *next; + HMAP_FOR_EACH_SAFE (f, next, match_hmap_node, &match_flow_table) { + hmap_remove(&match_flow_table, &f->match_hmap_node); + hindex_remove(&uuid_flow_table, &f->uuid_hindex_node); ovn_flow_destroy(f); } } static void -ovn_flow_table_destroy(struct hmap *flow_table) +ovn_flow_table_destroy(void) { - ovn_flow_table_clear(flow_table); - hmap_destroy(flow_table); + ovn_flow_table_clear(); + hmap_destroy(&match_flow_table); + hindex_destroy(&uuid_flow_table); } /* Flow table update. */ @@ -683,7 +804,7 @@ queue_group_mod(struct ofputil_group_mod *gm) * * This should be called after ofctrl_run() within the main loop. */ void -ofctrl_put(struct hmap *flow_table, struct group_table *group_table) +ofctrl_put(struct group_table *group_table) { if (!groups) { groups = group_table; @@ -692,12 +813,9 @@ ofctrl_put(struct hmap *flow_table, struct group_table *group_table) /* The flow table can be updated if the connection to the switch is up and * in the correct state and not backlogged with existing flow_mods. 
(Our * criteria for being backlogged appear very conservative, but the socket - * between ovn-controller and OVS provides some buffering.) Otherwise, - * discard the flows. A solution to either of those problems will cause us - * to wake up and retry. */ + * between ovn-controller and OVS provides some buffering.) */ if (state != S_UPDATE_FLOWS || rconn_packet_counter_n_packets(tx_counter)) { - ovn_flow_table_clear(flow_table); ovn_group_table_clear(group_table, false); return; } @@ -735,9 +853,10 @@ ofctrl_put(struct hmap *flow_table, struct group_table *group_table) * longer desired, delete them; if any of them should have different * actions, update them. */ struct ovn_flow *i, *next; - HMAP_FOR_EACH_SAFE (i, next, hmap_node, &installed_flows) { - struct ovn_flow *d = ovn_flow_lookup(flow_table, i); - if (!d) { + HMAP_FOR_EACH_SAFE (i, next, match_hmap_node, &installed_flows) { + struct ovs_list matches; + ovn_flow_lookup(&match_flow_table, i, &matches); + if (ovs_list_is_empty(&matches)) { /* Installed flow is no longer desirable. Delete it from the * switch and from installed_flows. */ struct ofputil_flow_mod fm = { @@ -747,11 +866,21 @@ ofctrl_put(struct hmap *flow_table, struct group_table *group_table) .command = OFPFC_DELETE_STRICT, }; queue_flow_mod(&fm); - ovn_flow_log(i, "removing"); + ovn_flow_log(i, "removing installed"); - hmap_remove(&installed_flows, &i->hmap_node); + hmap_remove(&installed_flows, &i->match_hmap_node); ovn_flow_destroy(i); } else { + /* Since we still have desired flows that match this key, + * select one and compare both its actions and uuid. + * If the actions aren't the same, queue an update + * action for the installed flow. If the uuid has changed + * update that as well. */ + struct ovn_flow *d = ovn_flow_select_from_list(&matches); + if (!uuid_equals(&i->uuid, &d->uuid)) { + /* Update installed flow's UUID. 
*/ + memcpy(&i->uuid, &d->uuid, sizeof i->uuid); + } if (!ofpacts_equal(i->ofpacts, i->ofpacts_len, d->ofpacts, d->ofpacts_len)) { /* Update actions in installed flow. */ @@ -764,41 +893,46 @@ ofctrl_put(struct hmap *flow_table, struct group_table *group_table) .command = OFPFC_MODIFY_STRICT, }; queue_flow_mod(&fm); - ovn_flow_log(i, "updating"); + ovn_flow_log(i, "updating installed"); /* Replace 'i''s actions by 'd''s. */ free(i->ofpacts); - i->ofpacts = d->ofpacts; + i->ofpacts = xmemdup(d->ofpacts, d->ofpacts_len); i->ofpacts_len = d->ofpacts_len; - d->ofpacts = NULL; - d->ofpacts_len = 0; } - - hmap_remove(flow_table, &d->hmap_node); - ovn_flow_destroy(d); } } - /* The previous loop removed from 'flow_table' all of the flows that are - * already installed. Thus, any flows remaining in 'flow_table' need to - * be added to the flow table. */ - struct ovn_flow *d; - HMAP_FOR_EACH_SAFE (d, next, hmap_node, flow_table) { - /* Send flow_mod to add flow. */ - struct ofputil_flow_mod fm = { - .match = d->match, - .priority = d->priority, - .table_id = d->table_id, - .ofpacts = d->ofpacts, - .ofpacts_len = d->ofpacts_len, - .command = OFPFC_ADD, - }; - queue_flow_mod(&fm); - ovn_flow_log(d, "adding"); - - /* Move 'd' from 'flow_table' to installed_flows. */ - hmap_remove(flow_table, &d->hmap_node); - hmap_insert(&installed_flows, &d->hmap_node, d->hmap_node.hash); + /* Iterate through the desired flows and add those that aren't found + * in the installed flow table. */ + struct ovn_flow *c; + HMAP_FOR_EACH (c, match_hmap_node, &match_flow_table) { + struct ovs_list matches; + ovn_flow_lookup(&installed_flows, c, &matches); + if (ovs_list_is_empty(&matches)) { + /* We have a key that isn't in the installed flows, so + * look back into the desired flow list for all flows + * that match this key, and select the one to be installed. 
*/ + struct ovs_list candidates; + ovn_flow_lookup(&match_flow_table, c, &candidates); + struct ovn_flow *d = ovn_flow_select_from_list(&candidates); + /* Send flow_mod to add flow. */ + struct ofputil_flow_mod fm = { + .match = d->match, + .priority = d->priority, + .table_id = d->table_id, + .ofpacts = d->ofpacts, + .ofpacts_len = d->ofpacts_len, + .command = OFPFC_ADD, + }; + queue_flow_mod(&fm); + ovn_flow_log(d, "adding installed"); + + /* Copy 'd' from 'flow_table' to installed_flows. */ + struct ovn_flow *new_node = ofctrl_dup_flow(d); + hmap_insert(&installed_flows, &new_node->match_hmap_node, + new_node->match_hmap_node.hash); + } } /* Iterate through the installed groups from previous runs. If they diff --git a/ovn/controller/ofctrl.h b/ovn/controller/ofctrl.h index bf5dfd5..49b95b0 100644 --- a/ovn/controller/ofctrl.h +++ b/ovn/controller/ofctrl.h @@ -20,6 +20,7 @@ #include #include "openvswitch/meta-flow.h" +#include "ovsdb-idl.h" struct controller_ctx; struct hmap; @@ -31,12 +32,25 @@ struct group_table; /* Interface for OVN main loop. */ void ofctrl_init(void); enum mf_field_id ofctrl_run(const struct ovsrec_bridge *br_int); -void ofctrl_put(struct hmap *flows, struct group_table *group_table); +void ofctrl_put(struct group_table *group_table); void ofctrl_wait(void); void ofctrl_destroy(void); -/* Flow table interface to the rest of ovn-controller. */ -void ofctrl_add_flow(struct hmap *flows, uint8_t table_id, uint16_t priority, - const struct match *, const struct ofpbuf *ofpacts); +struct ovn_flow *ofctrl_dup_flow(struct ovn_flow *source); + +/* Flow table interfaces to the rest of ovn-controller. 
*/ +void ofctrl_add_flow(uint8_t table_id, uint16_t priority, + const struct match *, const struct ofpbuf *ofpacts, + const struct uuid *uuid); + +void ofctrl_remove_flows(const struct uuid *uuid); + +void ofctrl_set_flow(uint8_t table_id, uint16_t priority, + const struct match *, const struct ofpbuf *ofpacts, + const struct uuid *uuid); + +void ofctrl_flow_table_clear(void); + +void ovn_flow_table_clear(void); #endif /* ovn/ofctrl.h */ diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c index 28ee13e..04684b2 100644 --- a/ovn/controller/ovn-controller.c +++ b/ovn/controller/ovn-controller.c @@ -441,17 +441,15 @@ main(int argc, char *argv[]) update_ct_zones(&all_lports, &patched_datapaths, &ct_zones, ct_zone_bitmap); - struct hmap flow_table = HMAP_INITIALIZER(&flow_table); + ovn_flow_table_clear(); lflow_run(&ctx, &lports, &mcgroups, &local_datapaths, - &patched_datapaths, &group_table, &ct_zones, - &flow_table); + &patched_datapaths, &group_table, &ct_zones); if (chassis_id) { physical_run(&ctx, mff_ovn_geneve, - br_int, chassis_id, &ct_zones, &flow_table, + br_int, chassis_id, &ct_zones, &local_datapaths, &patched_datapaths); } - ofctrl_put(&flow_table, &group_table); - hmap_destroy(&flow_table); + ofctrl_put(&group_table); } sset_destroy(&all_lports); diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c index d1b40c2..63d6cb8 100644 --- a/ovn/controller/physical.c +++ b/ovn/controller/physical.c @@ -51,10 +51,14 @@ physical_register_ovs_idl(struct ovsdb_idl *ovs_idl) ovsdb_idl_add_column(ovs_idl, &ovsrec_interface_col_external_ids); } + static struct simap localvif_to_ofport = SIMAP_INITIALIZER(&localvif_to_ofport); static struct hmap tunnels = HMAP_INITIALIZER(&tunnels); +/* UUID to identify OF flows not associated with ovsdb rows. */ +static struct uuid *hc_uuid = NULL; + /* Maps from a chassis to the OpenFlow port number of the tunnel that can be * used to reach that chassis. 
*/ struct chassis_tunnel { @@ -151,8 +155,7 @@ get_localnet_port(struct hmap *local_datapaths, int64_t tunnel_key) } static void -consider_port_binding(struct hmap *flow_table, - enum mf_field_id mff_ovn_geneve, +consider_port_binding(enum mf_field_id mff_ovn_geneve, const struct simap *ct_zones, struct hmap *local_datapaths, struct hmap *patched_datapaths, @@ -321,8 +324,9 @@ consider_port_binding(struct hmap *flow_table, /* Resubmit to first logical ingress pipeline table. */ put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, ofpacts_p); - ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, - tag ? 150 : 100, &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_PHY_TO_LOG, + tag ? 150 : 100, &match, ofpacts_p, + &binding->header_.uuid); if (!tag && (!strcmp(binding->type, "localnet") || !strcmp(binding->type, "l2gateway"))) { @@ -332,7 +336,8 @@ consider_port_binding(struct hmap *flow_table, * action. */ ofpbuf_pull(ofpacts_p, ofpacts_orig_size); match_set_dl_tci_masked(&match, 0, htons(VLAN_CFI)); - ofctrl_add_flow(flow_table, 0, 100, &match, ofpacts_p); + ofctrl_add_flow(0, 100, &match, ofpacts_p, + &binding->header_.uuid); } /* Table 33, priority 100. @@ -362,8 +367,8 @@ consider_port_binding(struct hmap *flow_table, /* Resubmit to table 34. */ put_resubmit(OFTABLE_DROP_LOOPBACK, ofpacts_p); - ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100, + &match, ofpacts_p, &binding->header_.uuid); /* Table 34, Priority 100. * ======================= @@ -374,8 +379,8 @@ consider_port_binding(struct hmap *flow_table, match_set_metadata(&match, htonll(dp_key)); match_set_reg(&match, MFF_LOG_INPORT - MFF_REG0, port_key); match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, port_key); - ofctrl_add_flow(flow_table, OFTABLE_DROP_LOOPBACK, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_DROP_LOOPBACK, 100, + &match, ofpacts_p, &binding->header_.uuid); /* Table 64, Priority 100. 
* ======================= @@ -409,8 +414,8 @@ consider_port_binding(struct hmap *flow_table, ofpact_put_STRIP_VLAN(ofpacts_p); put_stack(MFF_IN_PORT, ofpact_put_STACK_POP(ofpacts_p)); } - ofctrl_add_flow(flow_table, OFTABLE_LOG_TO_PHY, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_LOG_TO_PHY, 100, + &match, ofpacts_p, &binding->header_.uuid); } else if (!tun) { /* Remote port connected by localnet port */ /* Table 33, priority 100. @@ -432,8 +437,8 @@ consider_port_binding(struct hmap *flow_table, /* Resubmit to table 33. */ put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p); - ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100, + &match, ofpacts_p, &binding->header_.uuid); } else { /* Remote port connected by tunnel */ /* Table 32, priority 100. @@ -456,14 +461,13 @@ consider_port_binding(struct hmap *flow_table, /* Output to tunnel. */ ofpact_put_OUTPUT(ofpacts_p)->port = ofport; - ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100, + &match, ofpacts_p, &binding->header_.uuid); } } static void -consider_mc_group(struct hmap *flow_table, - enum mf_field_id mff_ovn_geneve, +consider_mc_group(enum mf_field_id mff_ovn_geneve, const struct simap *ct_zones, struct hmap *local_datapaths, const struct sbrec_multicast_group *mc, @@ -536,8 +540,8 @@ consider_mc_group(struct hmap *flow_table, * group as the logical output port. */ put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p); - ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100, - &match, ofpacts_p); + ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100, + &match, ofpacts_p, &mc->header_.uuid); } /* Table 32, priority 100. 
@@ -574,8 +578,8 @@ consider_mc_group(struct hmap *flow_table, if (local_ports) { put_resubmit(OFTABLE_LOCAL_OUTPUT, remote_ofpacts_p); } - ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100, - &match, remote_ofpacts_p); + ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100, + &match, remote_ofpacts_p, &mc->header_.uuid); } } sset_destroy(&remote_chassis); @@ -584,9 +588,14 @@ consider_mc_group(struct hmap *flow_table, void physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, const struct ovsrec_bridge *br_int, const char *this_chassis_id, - const struct simap *ct_zones, struct hmap *flow_table, + const struct simap *ct_zones, struct hmap *local_datapaths, struct hmap *patched_datapaths) { + if (!hc_uuid) { + hc_uuid = xmalloc(sizeof(struct uuid)); + uuid_generate(hc_uuid); + } + for (int i = 0; i < br_int->n_ports; i++) { const struct ovsrec_port *port_rec = br_int->ports[i]; if (!strcmp(port_rec->name, br_int->name)) { @@ -672,7 +681,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, * 64 for logical-to-physical translation. 
*/ const struct sbrec_port_binding *binding; SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) { - consider_port_binding(flow_table, mff_ovn_geneve, ct_zones, + consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths, patched_datapaths, binding, &ofpacts); } @@ -682,7 +691,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, struct ofpbuf remote_ofpacts; ofpbuf_init(&remote_ofpacts, 0); SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) { - consider_mc_group(flow_table, mff_ovn_geneve, ct_zones, + consider_mc_group(mff_ovn_geneve, ct_zones, local_datapaths, mc, &ofpacts, &remote_ofpacts); } @@ -724,7 +733,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts); - ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts); + ofctrl_add_flow(OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts, + hc_uuid); } /* Add flows for VXLAN encapsulations. Due to the limited amount of @@ -757,8 +767,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, put_load(binding->tunnel_key, MFF_LOG_INPORT, 0, 15, &ofpacts); put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, &ofpacts); - ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 100, &match, - &ofpacts); + ofctrl_add_flow(OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts, hc_uuid); } } @@ -771,7 +780,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, match_init_catchall(&match); ofpbuf_clear(&ofpacts); put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts); - ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 0, &match, &ofpacts); + ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 0, &match, &ofpacts, hc_uuid); /* Table 34, Priority 0. 
* ======================= @@ -785,7 +794,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve, MFF_LOG_REGS; #undef MFF_LOG_REGS put_resubmit(OFTABLE_LOG_EGRESS_PIPELINE, &ofpacts); - ofctrl_add_flow(flow_table, OFTABLE_DROP_LOOPBACK, 0, &match, &ofpacts); + ofctrl_add_flow(OFTABLE_DROP_LOOPBACK, 0, &match, &ofpacts, hc_uuid); ofpbuf_uninit(&ofpacts); simap_clear(&localvif_to_ofport); diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h index 2f8b58a..1f98f71 100644 --- a/ovn/controller/physical.h +++ b/ovn/controller/physical.h @@ -43,7 +43,7 @@ struct simap; void physical_register_ovs_idl(struct ovsdb_idl *); void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve, const struct ovsrec_bridge *br_int, const char *chassis_id, - const struct simap *ct_zones, struct hmap *flow_table, + const struct simap *ct_zones, struct hmap *local_datapaths, struct hmap *patched_datapaths); #endif /* ovn/physical.h */