From patchwork Thu Jun 11 12:43:32 2020
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 1307520
From: numans@ovn.org
To: dev@openvswitch.org
Date: Thu, 11 Jun 2020 18:13:32 +0530
Message-Id: <20200611124332.1353713-1-numans@ovn.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200611124309.1353593-1-numans@ovn.org>
References: <20200611124309.1353593-1-numans@ovn.org>
Subject: [ovs-dev] [PATCH ovn v12 1/7] I-P engine: Provide the option for an engine to clear tracked engine data in every run.

From: Numan Siddique

A new engine node function, clear_tracked_data(), is added to clear any engine data that was tracked during the engine run. This tracked data has to be part of the engine 'data'.
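As a minimal sketch of the idea (the node name "example" and its single tracked flag are hypothetical, not taken from this series), an engine node opting into the new hook might look like this:

#include <stdbool.h>

/* Hypothetical engine data for an engine node "example" that records,
 * per engine run, whether its change handlers saw anything of interest.
 * Only tracked members are reset here; persistent engine data is kept. */
struct ed_type_example_data {
    bool items_changed;   /* Tracked: set by the node's change handlers. */
};

/* Called through the node's clear_tracked_data pointer from
 * engine_init_run() (and engine_cleanup()), so stale tracked data never
 * leaks into the next engine run. */
static void
en_example_clear_tracked_data(void *data_)
{
    struct ed_type_example_data *data = data_;
    data->items_changed = false;
}

Such a node is then registered with the ENGINE_NODE_WITH_CLEAR_TRACK_DATA() macro introduced below instead of plain ENGINE_NODE(), which is how the later patches in this series wire up the physical_flow_changes and runtime_data nodes.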
engine_init_run() calls clear_tracked_data() and each engine node interested in tracking the data needs to implement the en_clear_tracked_data() function. With this patch, an engine node can store any changes done to the engine data separately in the engine change handlers. The parent of this engine node can use this tracked data for incrementally processing the changes. Upcoming patches in the series will make use of this. Acked-by: Dumitru Ceara Signed-off-by: Numan Siddique --- lib/inc-proc-eng.c | 8 ++++++++ lib/inc-proc-eng.h | 9 +++++++++ 2 files changed, 17 insertions(+) diff --git a/lib/inc-proc-eng.c b/lib/inc-proc-eng.c index 9b1479a1c..8b56cbaec 100644 --- a/lib/inc-proc-eng.c +++ b/lib/inc-proc-eng.c @@ -121,6 +121,10 @@ void engine_cleanup(void) { for (size_t i = 0; i < engine_n_nodes; i++) { + if (engine_nodes[i]->clear_tracked_data) { + engine_nodes[i]->clear_tracked_data(engine_nodes[i]->data); + } + if (engine_nodes[i]->cleanup) { engine_nodes[i]->cleanup(engine_nodes[i]->data); } @@ -260,6 +264,10 @@ engine_init_run(void) VLOG_DBG("Initializing new run"); for (size_t i = 0; i < engine_n_nodes; i++) { engine_set_node_state(engine_nodes[i], EN_STALE); + + if (engine_nodes[i]->clear_tracked_data) { + engine_nodes[i]->clear_tracked_data(engine_nodes[i]->data); + } } } diff --git a/lib/inc-proc-eng.h b/lib/inc-proc-eng.h index 8606a360b..e25bcb29c 100644 --- a/lib/inc-proc-eng.h +++ b/lib/inc-proc-eng.h @@ -149,6 +149,10 @@ struct engine_node { * doesn't store pointers to DB records it's still safe to use). */ bool (*is_valid)(struct engine_node *); + + /* Method to clear up tracked data maintained by the engine node in the + * engine 'data'. It may be NULL. */ + void (*clear_tracked_data)(void *tracked_data); }; /* Initialize the data for the engine nodes. 
It calls each node's @@ -282,6 +286,7 @@ void engine_ovsdb_node_add_index(struct engine_node *, const char *name, .run = en_##NAME##_run, \ .cleanup = en_##NAME##_cleanup, \ .is_valid = en_##NAME##_is_valid, \ + .clear_tracked_data = NULL, \ }; #define ENGINE_NODE_CUSTOM_DATA(NAME, NAME_STR) \ @@ -291,6 +296,10 @@ void engine_ovsdb_node_add_index(struct engine_node *, const char *name, static bool (*en_##NAME##_is_valid)(struct engine_node *node) = NULL; \ ENGINE_NODE_DEF(NAME, NAME_STR) +#define ENGINE_NODE_WITH_CLEAR_TRACK_DATA(NAME, NAME_STR) \ + ENGINE_NODE(NAME, NAME_STR) \ + en_##NAME.clear_tracked_data = en_##NAME##_clear_tracked_data; + /* Macro to define member functions of an engine node which represents * a table of OVSDB */ #define ENGINE_FUNC_OVSDB(DB_NAME, TBL_NAME) \ From patchwork Thu Jun 11 12:43:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307521 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.133; helo=hemlock.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNpK4gmSz9sRN for ; Thu, 11 Jun 2020 22:44:09 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id B0B0689630; Thu, 11 Jun 2020 12:44:07 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id yFUwzYYfif1e; Thu, 11 Jun 2020 12:44:06 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by hemlock.osuosl.org (Postfix) with ESMTP id 8279989631; Thu, 11 Jun 2020 12:44:06 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id 6B367C0178; Thu, 11 Jun 2020 12:44:06 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) by lists.linuxfoundation.org (Postfix) with ESMTP id 91D6FC016F for ; Thu, 11 Jun 2020 12:44:05 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id 6A52426866 for ; Thu, 11 Jun 2020 12:44:05 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id d2PIKkO3upzo for ; Thu, 11 Jun 2020 12:44:01 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay6-d.mail.gandi.net (relay6-d.mail.gandi.net [217.70.183.198]) by silver.osuosl.org (Postfix) with ESMTPS id 301972695C for ; Thu, 11 Jun 2020 12:44:01 +0000 (UTC) X-Originating-IP: 27.7.184.71 Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay6-d.mail.gandi.net (Postfix) with ESMTPSA id DF005C001F; Thu, 11 Jun 2020 12:43:58 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 
18:13:38 +0530 Message-Id: <20200611124338.1353766-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v12 2/7] ovn-controller: I-P for ct zone and OVS interface changes in flow output stage. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique This patch handles ct zone changes and OVS interface changes incrementally in the flow output stage. Any changes to ct zone can be handled by running physical_run() instead of running flow_output_run(). And any changes to OVS interfaces can be either handled incrementally (for OVS interfaces representing VIFs) or just running physical_run() (for tunnel and other types of interfaces). To better handle this, a new engine node 'physical_flow_changes' is added which handles changes to ct zone and OVS interfaces. Signed-off-by: Numan Siddique Acked-by: Dumitru Ceara --- controller/binding.c | 23 +----- controller/binding.h | 24 ++++++- controller/ovn-controller.c | 136 +++++++++++++++++++++++++++++++++++- controller/physical.c | 51 ++++++++++++++ controller/physical.h | 5 +- 5 files changed, 214 insertions(+), 25 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 06ecb93fb..67fd2777f 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -535,7 +535,7 @@ remove_local_lport_ids(struct binding_ctx_out *b_ctx, * 'struct local_binding' is used. A shash of these local bindings is * maintained with the 'external_ids:iface-id' as the key to the shash. * - * struct local_binding has 3 main fields: + * struct local_binding (defined in binding.h) has 3 main fields: * - type * - OVS interface row object * - Port_Binding row object @@ -586,21 +586,6 @@ remove_local_lport_ids(struct binding_ctx_out *b_ctx, * - For each 'virtual' Port Binding (of type BT_VIRTUAL) provided its parent * is bound to this chassis. */ -enum local_binding_type { - BT_VIF, - BT_CONTAINER, - BT_VIRTUAL -}; - -struct local_binding { - char *name; - enum local_binding_type type; - const struct ovsrec_interface *iface; - const struct sbrec_port_binding *pb; - - /* shash of 'struct local_binding' representing children. 
*/ - struct shash children; -}; static struct local_binding * local_binding_create(const char *name, const struct ovsrec_interface *iface, @@ -622,12 +607,6 @@ local_binding_add(struct shash *local_bindings, struct local_binding *lbinding) shash_add(local_bindings, lbinding->name, lbinding); } -static struct local_binding * -local_binding_find(struct shash *local_bindings, const char *name) -{ - return shash_find_data(local_bindings, name); -} - static void local_binding_destroy(struct local_binding *lbinding) { diff --git a/controller/binding.h b/controller/binding.h index 9214a7479..161766d2f 100644 --- a/controller/binding.h +++ b/controller/binding.h @@ -18,6 +18,7 @@ #define OVN_BINDING_H 1 #include +#include "openvswitch/shash.h" struct hmap; struct ovsdb_idl; @@ -32,7 +33,6 @@ struct sbrec_chassis; struct sbrec_port_binding_table; struct sset; struct sbrec_port_binding; -struct shash; struct binding_ctx_in { struct ovsdb_idl_txn *ovnsb_idl_txn; @@ -77,6 +77,28 @@ struct binding_ctx_out { struct smap *local_iface_ids; }; +enum local_binding_type { + BT_VIF, + BT_CONTAINER, + BT_VIRTUAL +}; + +struct local_binding { + char *name; + enum local_binding_type type; + const struct ovsrec_interface *iface; + const struct sbrec_port_binding *pb; + + /* shash of 'struct local_binding' representing children. */ + struct shash children; +}; + +static inline struct local_binding * +local_binding_find(struct shash *local_bindings, const char *name) +{ + return shash_find_data(local_bindings, name); +} + void binding_register_ovs_idl(struct ovsdb_idl *); void binding_run(struct binding_ctx_in *, struct binding_ctx_out *); bool binding_cleanup(struct ovsdb_idl_txn *ovnsb_idl_txn, diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index bf7214e45..6e66705da 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -1368,6 +1368,10 @@ static void init_physical_ctx(struct engine_node *node, ovs_assert(br_int && chassis); + struct ovsrec_interface_table *iface_table = + (struct ovsrec_interface_table *)EN_OVSDB_GET( + engine_get_input("OVS_interface", node)); + struct ed_type_ct_zones *ct_zones_data = engine_get_input_data("ct_zones", node); struct simap *ct_zones = &ct_zones_data->current; @@ -1377,12 +1381,14 @@ static void init_physical_ctx(struct engine_node *node, p_ctx->mc_group_table = multicast_group_table; p_ctx->br_int = br_int; p_ctx->chassis_table = chassis_table; + p_ctx->iface_table = iface_table; p_ctx->chassis = chassis; p_ctx->active_tunnels = &rt_data->active_tunnels; p_ctx->local_datapaths = &rt_data->local_datapaths; p_ctx->local_lports = &rt_data->local_lports; p_ctx->ct_zones = ct_zones; p_ctx->mff_ovn_geneve = ed_mff_ovn_geneve->mff_ovn_geneve; + p_ctx->local_bindings = &rt_data->local_bindings; } static void init_lflow_ctx(struct engine_node *node, @@ -1790,6 +1796,118 @@ flow_output_port_groups_handler(struct engine_node *node, void *data) return _flow_output_resource_ref_handler(node, data, REF_TYPE_PORTGROUP); } +/* Engine node en_physical_flow_changes indicates whether + * there is a need to + * - recompute only physical flows or + * - we can incrementally process the physical flows. + * + * en_physical_flow_changes is an input to flow_output engine node. 
+ * If the engine node 'en_physical_flow_changes' gets updated during + * engine run, it means the handler for this - + * flow_output_physical_flow_changes_handler() will either + * - recompute the physical flows by calling 'physical_run() or + * - incrementlly process some of the changes for physical flow + * calculation. Right now we handle OVS interfaces changes + * for physical flow computation. + * + * When ever a port binding happens, the follow up + * activity is the zone id allocation for that port binding. + * With this intermediate engine node, we avoid full recomputation. + * Instead we do physical flow computation (either full recomputation + * by calling physical_run() or handling the changes incrementally. + * + * Hence this is an intermediate engine node to indicate the + * flow_output engine to recomputes/compute the physical flows. + * + * TODO 1. Ideally this engine node should recompute/compute the physical + * flows instead of relegating it to the flow_output node. + * But this requires splitting the flow_output node to + * logical_flow_output and physical_flow_output. + * + * TODO 2. We can further optimise the en_ct_zone changes to + * compute the phsyical flows for changed zone ids. + * + * TODO 3: physical.c has a global simap -localvif_to_ofport which stores the + * local OVS interfaces and the ofport numbers. Ideally this should be + * part of the engine data. + */ +struct ed_type_pfc_data { + /* Both these variables are tracked and set in each engine run. */ + bool recompute_physical_flows; + bool ovs_ifaces_changed; +}; + +static void +en_physical_flow_changes_clear_tracked_data(void *data_) +{ + struct ed_type_pfc_data *data = data_; + data->recompute_physical_flows = false; + data->ovs_ifaces_changed = false; +} + +static void * +en_physical_flow_changes_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + struct ed_type_pfc_data *data = xzalloc(sizeof *data); + return data; +} + +static void +en_physical_flow_changes_cleanup(void *data OVS_UNUSED) +{ +} + +/* Indicate to the flow_output engine that we need to recompute physical + * flows. */ +static void +en_physical_flow_changes_run(struct engine_node *node, void *data) +{ + struct ed_type_pfc_data *pfc_tdata = data; + pfc_tdata->recompute_physical_flows = true; + engine_set_node_state(node, EN_UPDATED); +} + +/* There are OVS interface changes. Indicate to the flow_output engine + * to handle these OVS interface changes for physical flow computations. */ +static bool +physical_flow_changes_ovs_iface_handler(struct engine_node *node, void *data) +{ + struct ed_type_pfc_data *pfc_tdata = data; + pfc_tdata->ovs_ifaces_changed = true; + engine_set_node_state(node, EN_UPDATED); + return true; +} + +static bool +flow_output_physical_flow_changes_handler(struct engine_node *node, void *data) +{ + struct ed_type_runtime_data *rt_data = + engine_get_input_data("runtime_data", node); + + struct ed_type_flow_output *fo = data; + struct physical_ctx p_ctx; + init_physical_ctx(node, rt_data, &p_ctx); + + engine_set_node_state(node, EN_UPDATED); + struct ed_type_pfc_data *pfc_data = + engine_get_input_data("physical_flow_changes", node); + + if (pfc_data->recompute_physical_flows) { + /* This indicates that we need to recompute the physical flows. */ + physical_run(&p_ctx, &fo->flow_table); + return true; + } + + if (pfc_data->ovs_ifaces_changed) { + /* There are OVS interface changes. Try to handle them + * incrementally. 
*/ + return physical_handle_ovs_iface_changes(&p_ctx, &fo->flow_table); + } + + return true; +} + struct ovn_controller_exit_args { bool *exiting; bool *restart; @@ -1914,6 +2032,8 @@ main(int argc, char *argv[]) ENGINE_NODE(runtime_data, "runtime_data"); ENGINE_NODE(mff_ovn_geneve, "mff_ovn_geneve"); ENGINE_NODE(ofctrl_is_connected, "ofctrl_is_connected"); + ENGINE_NODE_WITH_CLEAR_TRACK_DATA(physical_flow_changes, + "physical_flow_changes"); ENGINE_NODE(flow_output, "flow_output"); ENGINE_NODE(addr_sets, "addr_sets"); ENGINE_NODE(port_groups, "port_groups"); @@ -1933,13 +2053,27 @@ main(int argc, char *argv[]) engine_add_input(&en_port_groups, &en_sb_port_group, port_groups_sb_port_group_handler); + /* Engine node physical_flow_changes indicates whether + * we can recompute only physical flows or we can + * incrementally process the physical flows. + */ + engine_add_input(&en_physical_flow_changes, &en_ct_zones, + NULL); + engine_add_input(&en_physical_flow_changes, &en_ovs_interface, + physical_flow_changes_ovs_iface_handler); + engine_add_input(&en_flow_output, &en_addr_sets, flow_output_addr_sets_handler); engine_add_input(&en_flow_output, &en_port_groups, flow_output_port_groups_handler); engine_add_input(&en_flow_output, &en_runtime_data, NULL); - engine_add_input(&en_flow_output, &en_ct_zones, NULL); engine_add_input(&en_flow_output, &en_mff_ovn_geneve, NULL); + engine_add_input(&en_flow_output, &en_physical_flow_changes, + flow_output_physical_flow_changes_handler); + + /* We need this input nodes for only data. Hence the noop handler. */ + engine_add_input(&en_flow_output, &en_ct_zones, engine_noop_handler); + engine_add_input(&en_flow_output, &en_ovs_interface, engine_noop_handler); engine_add_input(&en_flow_output, &en_ovs_open_vswitch, NULL); engine_add_input(&en_flow_output, &en_ovs_bridge, NULL); diff --git a/controller/physical.c b/controller/physical.c index f06313b9d..240ec158e 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -1780,6 +1780,57 @@ physical_run(struct physical_ctx *p_ctx, simap_destroy(&new_tunnel_to_ofport); } +bool +physical_handle_ovs_iface_changes(struct physical_ctx *p_ctx, + struct ovn_desired_flow_table *flow_table) +{ + const struct ovsrec_interface *iface_rec; + OVSREC_INTERFACE_TABLE_FOR_EACH_TRACKED (iface_rec, p_ctx->iface_table) { + if (!strcmp(iface_rec->type, "geneve") || + !strcmp(iface_rec->type, "patch")) { + return false; + } + } + + struct ofpbuf ofpacts; + ofpbuf_init(&ofpacts, 0); + + OVSREC_INTERFACE_TABLE_FOR_EACH_TRACKED (iface_rec, p_ctx->iface_table) { + const char *iface_id = smap_get(&iface_rec->external_ids, "iface-id"); + if (!iface_id) { + continue; + } + + const struct local_binding *lb = + local_binding_find(p_ctx->local_bindings, iface_id); + + if (!lb || !lb->pb) { + continue; + } + + int64_t ofport = iface_rec->n_ofport ? 
*iface_rec->ofport : 0; + if (ovsrec_interface_is_deleted(iface_rec)) { + ofctrl_remove_flows(flow_table, &lb->pb->header_.uuid); + simap_find_and_delete(&localvif_to_ofport, iface_id); + } else { + if (!ovsrec_interface_is_new(iface_rec)) { + ofctrl_remove_flows(flow_table, &lb->pb->header_.uuid); + } + + simap_put(&localvif_to_ofport, iface_id, ofport); + consider_port_binding(p_ctx->sbrec_port_binding_by_name, + p_ctx->mff_ovn_geneve, p_ctx->ct_zones, + p_ctx->active_tunnels, + p_ctx->local_datapaths, + lb->pb, p_ctx->chassis, + flow_table, &ofpacts); + } + } + + ofpbuf_uninit(&ofpacts); + return true; +} + bool get_tunnel_ofport(const char *chassis_name, char *encap_ip, ofp_port_t *ofport) { diff --git a/controller/physical.h b/controller/physical.h index dadf84ea1..9ca34436a 100644 --- a/controller/physical.h +++ b/controller/physical.h @@ -49,11 +49,13 @@ struct physical_ctx { const struct ovsrec_bridge *br_int; const struct sbrec_chassis_table *chassis_table; const struct sbrec_chassis *chassis; + const struct ovsrec_interface_table *iface_table; const struct sset *active_tunnels; struct hmap *local_datapaths; struct sset *local_lports; const struct simap *ct_zones; enum mf_field_id mff_ovn_geneve; + struct shash *local_bindings; }; void physical_register_ovs_idl(struct ovsdb_idl *); @@ -63,7 +65,8 @@ void physical_handle_port_binding_changes(struct physical_ctx *, struct ovn_desired_flow_table *); void physical_handle_mc_group_changes(struct physical_ctx *, struct ovn_desired_flow_table *); - +bool physical_handle_ovs_iface_changes(struct physical_ctx *, + struct ovn_desired_flow_table *); bool get_tunnel_ofport(const char *chassis_name, char *encap_ip, ofp_port_t *ofport); #endif /* controller/physical.h */ From patchwork Thu Jun 11 12:44:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307525 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.138; helo=whitealder.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNpm4D0hz9sRf for ; Thu, 11 Jun 2020 22:44:32 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by whitealder.osuosl.org (Postfix) with ESMTP id E0358887F7; Thu, 11 Jun 2020 12:44:30 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from whitealder.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Pzr9RfaTva2O; Thu, 11 Jun 2020 12:44:14 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by whitealder.osuosl.org (Postfix) with ESMTP id BFFAF8877F; Thu, 11 Jun 2020 12:44:11 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id 9BEF1C016F; Thu, 11 Jun 2020 12:44:11 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) by lists.linuxfoundation.org (Postfix) with ESMTP id 
9AEF6C016F for ; Thu, 11 Jun 2020 12:44:10 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 94CE087A9E for ; Thu, 11 Jun 2020 12:44:10 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wYG53w6oV-IR for ; Thu, 11 Jun 2020 12:44:06 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay6-d.mail.gandi.net (relay6-d.mail.gandi.net [217.70.183.198]) by fraxinus.osuosl.org (Postfix) with ESMTPS id 1157387A2B for ; Thu, 11 Jun 2020 12:44:05 +0000 (UTC) X-Originating-IP: 27.7.184.71 Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay6-d.mail.gandi.net (Postfix) with ESMTPSA id 34901C000B; Thu, 11 Jun 2020 12:44:02 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 18:14:00 +0530 Message-Id: <20200611124400.1355727-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Cc: Venkata Anil Subject: [ovs-dev] [PATCH ovn v12 3/7] ovn-controller: Handle runtime data changes in flow output engine X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique In order to handle runtime data changes incrementally, the flow outut runtime data handle should know the changed runtime data. Runtime data now tracks the changed data for any OVS interface and SB port binding changes. The tracked data contains a hmap of tracked datapaths (which changed during runtime data processing. The flow outout runtime_data handler in this patch doesn't do much with the tracked data. It returns false if there is tracked data available so that flow_output run is called. If no tracked data is available then there is no need for flow computation and the handler returns true. Next patch in the series processes the tracked data incrementally. 
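As a rough sketch of how that will look (the function below is hypothetical; the real handling arrives in the next patch), a consumer of the tracked data walks the changed datapaths and their logical ports along these lines:

#include "openvswitch/hmap.h"
#include "openvswitch/shash.h"
#include "binding.h"   /* struct tracked_binding_datapath / _lport. */

/* Sketch only: iterate the tracked datapaths recorded by the runtime_data
 * engine and decide how much flow computation each entry requires. */
static void
example_process_tracked_bindings(struct hmap *tracked_dp_bindings)
{
    struct tracked_binding_datapath *t_dp;
    HMAP_FOR_EACH (t_dp, node, tracked_dp_bindings) {
        if (t_dp->is_new) {
            /* Datapath was just added to local_datapaths: all of its
             * logical flows need to be computed. */
            continue;
        }

        struct shash_node *snode;
        SHASH_FOR_EACH (snode, &t_dp->lports) {
            struct tracked_binding_lport *lport = snode->data;
            /* Only the flows referencing 'lport->pb' need updating. */
            (void) lport;
        }
    }
}

Keying on the datapath first lets a newly local datapath trigger a per-datapath flow computation, while an individual port binding change only needs to touch the flows that reference that port.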
Co-Authored-by: Venkata Anil Signed-off-by: Venkata Anil Signed-off-by: Numan Siddique Acked-by: Dumitru Ceara --- controller/binding.c | 198 ++++++++++++++++++++++++++++++------ controller/binding.h | 23 +++++ controller/ovn-controller.c | 143 +++++++++++++++++++++++++- tests/ovn-performance.at | 20 ++-- 4 files changed, 339 insertions(+), 45 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 67fd2777f..77ed7441a 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -69,13 +69,25 @@ binding_register_ovs_idl(struct ovsdb_idl *ovs_idl) ovsdb_idl_add_column(ovs_idl, &ovsrec_qos_col_type); } +static struct tracked_binding_datapath *tracked_binding_datapath_create( + const struct sbrec_datapath_binding *, + bool is_new, struct hmap *tracked_dps); +static struct tracked_binding_datapath *tracked_binding_datapath_find( + struct hmap *, const struct sbrec_datapath_binding *); +static void tracked_binding_datapath_lport_add( + const struct sbrec_port_binding *, struct hmap *tracked_datapaths); +static void update_lport_tracking(const struct sbrec_port_binding *pb, + bool old_claim, bool new_claim, + struct hmap *tracked_dp_bindings); + static void add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, struct ovsdb_idl_index *sbrec_port_binding_by_datapath, struct ovsdb_idl_index *sbrec_port_binding_by_name, const struct sbrec_datapath_binding *datapath, bool has_local_l3gateway, int depth, - struct hmap *local_datapaths) + struct hmap *local_datapaths, + struct hmap *tracked_datapaths) { uint32_t dp_key = datapath->tunnel_key; struct local_datapath *ld = get_local_datapath(local_datapaths, dp_key); @@ -92,6 +104,11 @@ add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, ld->localnet_port = NULL; ld->has_local_l3gateway = has_local_l3gateway; + if (tracked_datapaths && + !tracked_binding_datapath_find(tracked_datapaths, datapath)) { + tracked_binding_datapath_create(datapath, true, tracked_datapaths); + } + if (depth >= 100) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); VLOG_WARN_RL(&rl, "datapaths nested too deep"); @@ -124,7 +141,8 @@ add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, sbrec_port_binding_by_datapath, sbrec_port_binding_by_name, peer->datapath, false, - depth + 1, local_datapaths); + depth + 1, local_datapaths, + tracked_datapaths); } ld->n_peer_ports++; if (ld->n_peer_ports > ld->n_allocated_peer_ports) { @@ -147,12 +165,14 @@ add_local_datapath(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, struct ovsdb_idl_index *sbrec_port_binding_by_datapath, struct ovsdb_idl_index *sbrec_port_binding_by_name, const struct sbrec_datapath_binding *datapath, - bool has_local_l3gateway, struct hmap *local_datapaths) + bool has_local_l3gateway, struct hmap *local_datapaths, + struct hmap *tracked_datapaths) { add_local_datapath__(sbrec_datapath_binding_by_key, sbrec_port_binding_by_datapath, sbrec_port_binding_by_name, - datapath, has_local_l3gateway, 0, local_datapaths); + datapath, has_local_l3gateway, 0, local_datapaths, + tracked_datapaths); } static void @@ -506,6 +526,11 @@ update_local_lport_ids(struct binding_ctx_out *b_ctx, pb->datapath->tunnel_key, pb->tunnel_key); if (sset_add(b_ctx->local_lport_ids, buf) != NULL) { b_ctx->local_lport_ids_changed = true; + + if (b_ctx->tracked_dp_bindings) { + /* Add the 'pb' to the tracked_datapaths. 
*/ + tracked_binding_datapath_lport_add(pb, b_ctx->tracked_dp_bindings); + } } } @@ -521,6 +546,11 @@ remove_local_lport_ids(struct binding_ctx_out *b_ctx, pb->datapath->tunnel_key, pb->tunnel_key); if (sset_find_and_delete(b_ctx->local_lport_ids, buf)) { b_ctx->local_lport_ids_changed = true; + + if (b_ctx->tracked_dp_bindings) { + /* Add the 'pb' to the tracked_datapaths. */ + tracked_binding_datapath_lport_add(pb, b_ctx->tracked_dp_bindings); + } } } @@ -669,6 +699,75 @@ is_lport_container(const struct sbrec_port_binding *pb) return is_lport_vif(pb) && pb->parent_port && pb->parent_port[0]; } +static struct tracked_binding_datapath * +tracked_binding_datapath_create(const struct sbrec_datapath_binding *dp, + bool is_new, + struct hmap *tracked_datapaths) +{ + struct tracked_binding_datapath *t_dp = xzalloc(sizeof *t_dp); + t_dp->dp = dp; + t_dp->is_new = is_new; + shash_init(&t_dp->lports); + hmap_insert(tracked_datapaths, &t_dp->node, uuid_hash(&dp->header_.uuid)); + return t_dp; +} + +static struct tracked_binding_datapath * +tracked_binding_datapath_find(struct hmap *tracked_datapaths, + const struct sbrec_datapath_binding *dp) +{ + struct tracked_binding_datapath *t_dp; + size_t hash = uuid_hash(&dp->header_.uuid); + HMAP_FOR_EACH_WITH_HASH (t_dp, node, hash, tracked_datapaths) { + if (uuid_equals(&t_dp->dp->header_.uuid, &dp->header_.uuid)) { + return t_dp; + } + } + + return NULL; +} + +static void +tracked_binding_datapath_lport_add(const struct sbrec_port_binding *pb, + struct hmap *tracked_datapaths) +{ + if (!tracked_datapaths) { + return; + } + + struct tracked_binding_datapath *tracked_dp = + tracked_binding_datapath_find(tracked_datapaths, pb->datapath); + if (!tracked_dp) { + tracked_dp = tracked_binding_datapath_create(pb->datapath, false, + tracked_datapaths); + } + + /* Check if the lport is already present or not. + * If it is already present, then just update the 'pb' and 'deleted' + * fields. */ + struct tracked_binding_lport *lport = + shash_find_data(&tracked_dp->lports, pb->logical_port); + + if (!lport) { + lport = xmalloc(sizeof *lport); + shash_add(&tracked_dp->lports, pb->logical_port, lport); + } + + lport->pb = pb; +} + +void +binding_tracked_dp_destroy(struct hmap *tracked_datapaths) +{ + struct tracked_binding_datapath *t_dp; + HMAP_FOR_EACH_POP (t_dp, node, tracked_datapaths) { + shash_destroy_free_data(&t_dp->lports); + free(t_dp); + } + + hmap_destroy(tracked_datapaths); +} + /* Corresponds to each Port_Binding.type. */ enum en_lport_type { LP_UNKNOWN, @@ -722,7 +821,7 @@ static bool claim_lport(const struct sbrec_port_binding *pb, const struct sbrec_chassis *chassis_rec, const struct ovsrec_interface *iface_rec, - bool sb_readonly) + bool sb_readonly, struct hmap *tracked_datapaths) { if (pb->chassis != chassis_rec) { if (sb_readonly) { @@ -741,6 +840,10 @@ claim_lport(const struct sbrec_port_binding *pb, } sbrec_port_binding_set_chassis(pb, chassis_rec); + + if (tracked_datapaths) { + update_lport_tracking(pb, false, true, tracked_datapaths); + } } /* Check if the port encap binding, if any, has changed */ @@ -758,9 +861,14 @@ claim_lport(const struct sbrec_port_binding *pb, /* Returns false if lport is not released due to 'sb_readonly'. * Returns true otherwise. + * + * This function assumes that the the 'pb' was claimed + * earlier i.e port binding's chassis is set to this chassis. + * Caller should make sure that this is the case. 
*/ static bool -release_lport(const struct sbrec_port_binding *pb, bool sb_readonly) +release_lport(const struct sbrec_port_binding *pb, bool sb_readonly, + struct hmap *tracked_datapaths) { if (pb->encap) { if (sb_readonly) { @@ -783,6 +891,7 @@ release_lport(const struct sbrec_port_binding *pb, bool sb_readonly) sbrec_port_binding_set_virtual_parent(pb, NULL); } + update_lport_tracking(pb, true, false, tracked_datapaths); VLOG_INFO("Releasing lport %s from this chassis.", pb->logical_port); return true; } @@ -828,13 +937,14 @@ is_lbinding_container_parent(struct local_binding *lbinding) static bool release_local_binding_children(const struct sbrec_chassis *chassis_rec, struct local_binding *lbinding, - bool sb_readonly) + bool sb_readonly, + struct hmap *tracked_dp_bindings) { struct shash_node *node; SHASH_FOR_EACH (node, &lbinding->children) { struct local_binding *l = node->data; if (is_lbinding_this_chassis(l, chassis_rec)) { - if (!release_lport(l->pb, sb_readonly)) { + if (!release_lport(l->pb, sb_readonly, tracked_dp_bindings)) { return false; } } @@ -849,16 +959,17 @@ release_local_binding_children(const struct sbrec_chassis *chassis_rec, static bool release_local_binding(const struct sbrec_chassis *chassis_rec, - struct local_binding *lbinding, bool sb_readonly) + struct local_binding *lbinding, bool sb_readonly, + struct hmap *tracked_dp_bindings) { if (!release_local_binding_children(chassis_rec, lbinding, - sb_readonly)) { + sb_readonly, tracked_dp_bindings)) { return false; } bool retval = true; if (is_lbinding_this_chassis(lbinding, chassis_rec)) { - retval = release_lport(lbinding->pb, sb_readonly); + retval = release_lport(lbinding->pb, sb_readonly, tracked_dp_bindings); } lbinding->pb = NULL; @@ -879,7 +990,8 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, if (can_bind) { /* We can claim the lport. */ if (!claim_lport(pb, b_ctx_in->chassis_rec, lbinding->iface, - !b_ctx_in->ovnsb_idl_txn)){ + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)){ return false; } @@ -887,7 +999,8 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, false, - b_ctx_out->local_datapaths); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); update_local_lport_ids(b_ctx_out, pb); if (lbinding->iface && qos_map && b_ctx_in->ovs_idl_txn) { get_qos_params(pb, qos_map); @@ -907,7 +1020,8 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, if (pb->chassis == b_ctx_in->chassis_rec) { /* Release the lport if there is no lbinding. */ if (!lbinding_set || !can_bind) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } } @@ -994,7 +1108,8 @@ consider_container_lport(const struct sbrec_port_binding *pb, * release the container lport, if it was bound earlier. 
*/ if (is_lbinding_this_chassis(container_lbinding, b_ctx_in->chassis_rec)) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } return true; @@ -1074,13 +1189,16 @@ consider_nonvif_lport_(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, has_local_l3gateway, - b_ctx_out->local_datapaths); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); update_local_lport_ids(b_ctx_out, pb); return claim_lport(pb, b_ctx_in->chassis_rec, NULL, - !b_ctx_in->ovnsb_idl_txn); + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } else if (pb->chassis == b_ctx_in->chassis_rec) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } return true; @@ -1116,7 +1234,7 @@ consider_localnet_lport(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out, struct hmap *qos_map) { - /* Add all localnet ports to local_lports so that we allocate ct zones + /* Add all localnet ports to local_ifaces so that we allocate ct zones * for them. */ update_local_lports(b_ctx_out, pb->logical_port); @@ -1152,7 +1270,9 @@ consider_ha_lport(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, false, - b_ctx_out->local_datapaths); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); + update_local_lport_ids(b_ctx_out, pb); } return consider_nonvif_lport_(pb, our_chassis, false, b_ctx_in, b_ctx_out); @@ -1442,7 +1562,8 @@ add_local_datapath_peer_port(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, peer->datapath, false, - 1, b_ctx_out->local_datapaths); + 1, b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); return; } @@ -1522,6 +1643,18 @@ remove_pb_from_local_datapath(const struct sbrec_port_binding *pb, } } +static void +update_lport_tracking(const struct sbrec_port_binding *pb, + bool old_claim, bool new_claim, + struct hmap *tracked_dp_bindings) +{ + if (!tracked_dp_bindings || (old_claim == new_claim)) { + return; + } + + tracked_binding_datapath_lport_add(pb, tracked_dp_bindings); +} + /* Considers the ovs iface 'iface_rec' for claiming. * This function should be called if the external_ids:iface-id * and 'ofport' are set for the 'iface_rec'. @@ -1618,7 +1751,8 @@ consider_iface_release(const struct ovsrec_interface *iface_rec, * lbinding->iface. * Cannot access these members of lbinding after this call. */ if (!release_local_binding(b_ctx_in->chassis_rec, lbinding, - !b_ctx_in->ovnsb_idl_txn)) { + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)) { return false; } } @@ -1831,9 +1965,10 @@ handle_deleted_vif_lport(const struct sbrec_port_binding *pb, * clear the 'chassis' column of 'pb'. But we need to do * for the local_binding's children. */ if (lbinding->type == BT_VIF && - !release_local_binding_children(b_ctx_in->chassis_rec, - lbinding, - !b_ctx_in->ovnsb_idl_txn)) { + !release_local_binding_children( + b_ctx_in->chassis_rec, lbinding, + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)) { return false; } } @@ -1906,7 +2041,6 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, struct binding_ctx_out *b_ctx_out) { bool handled = true; - const struct sbrec_port_binding *pb; /* Run the tracked port binding loop twice. 
One to handle deleted @@ -1963,7 +2097,6 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, case LP_PATCH: case LP_LOCALPORT: case LP_VTEP: - b_ctx_out->non_vif_ports_changed = true; update_local_lport_ids(b_ctx_out, pb); if (lport_type == LP_PATCH) { /* Add the peer datapath to the local datapaths if it's @@ -1973,30 +2106,31 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, add_local_datapath_peer_port(pb, b_ctx_in, b_ctx_out, ld); } } + + if (lport_type == LP_VTEP) { + /* VTEP lports are claimed/released by ovn-controller-vteps. + * We are not sure what changed. */ + b_ctx_out->non_vif_ports_changed = true; + } break; case LP_L2GATEWAY: - b_ctx_out->non_vif_ports_changed = true; handled = consider_l2gw_lport(pb, b_ctx_in, b_ctx_out); break; case LP_L3GATEWAY: - b_ctx_out->non_vif_ports_changed = true; handled = consider_l3gw_lport(pb, b_ctx_in, b_ctx_out); break; case LP_CHASSISREDIRECT: - b_ctx_out->non_vif_ports_changed = true; handled = consider_cr_lport(pb, b_ctx_in, b_ctx_out); break; case LP_EXTERNAL: - b_ctx_out->non_vif_ports_changed = true; handled = consider_external_lport(pb, b_ctx_in, b_ctx_out); break; case LP_LOCALNET: { - b_ctx_out->non_vif_ports_changed = true; consider_localnet_lport(pb, b_ctx_in, b_ctx_out, qos_map_ptr); struct shash bridge_mappings = diff --git a/controller/binding.h b/controller/binding.h index 161766d2f..c9740560f 100644 --- a/controller/binding.h +++ b/controller/binding.h @@ -19,6 +19,9 @@ #include #include "openvswitch/shash.h" +#include "openvswitch/hmap.h" +#include "openvswitch/uuid.h" +#include "openvswitch/list.h" struct hmap; struct ovsdb_idl; @@ -75,6 +78,12 @@ struct binding_ctx_out { /* smap of OVS interface name as key and * OVS interface external_ids:iface-id as value. */ struct smap *local_iface_ids; + + /* hmap of 'struct tracked_binding_datapath' which the + * callee (binding_handle_ovs_interface_changes and + * binding_handle_port_binding_changes) fills in for + * the changed datapaths and port bindings. */ + struct hmap *tracked_dp_bindings; }; enum local_binding_type { @@ -99,6 +108,19 @@ local_binding_find(struct shash *local_bindings, const char *name) return shash_find_data(local_bindings, name); } +/* Represents a tracked binding logical port. */ +struct tracked_binding_lport { + const struct sbrec_port_binding *pb; +}; + +/* Represent a tracked binding datapath. */ +struct tracked_binding_datapath { + struct hmap_node node; + const struct sbrec_datapath_binding *dp; + bool is_new; + struct shash lports; /* shash of struct tracked_binding_lport. */ +}; + void binding_register_ovs_idl(struct ovsdb_idl *); void binding_run(struct binding_ctx_in *, struct binding_ctx_out *); bool binding_cleanup(struct ovsdb_idl_txn *ovnsb_idl_txn, @@ -111,4 +133,5 @@ bool binding_handle_ovs_interface_changes(struct binding_ctx_in *, struct binding_ctx_out *); bool binding_handle_port_binding_changes(struct binding_ctx_in *, struct binding_ctx_out *); +void binding_tracked_dp_destroy(struct hmap *tracked_datapaths); #endif /* controller/binding.h */ diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index 6e66705da..351fa32db 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -980,10 +980,88 @@ struct ed_type_runtime_data { struct sset local_lport_ids; struct sset active_tunnels; + /* runtime data engine private data. */ struct sset egress_ifaces; struct smap local_iface_ids; + + /* Tracked data. See below for more details and comments. 
*/ + bool tracked; + bool local_lports_changed; + struct hmap tracked_dp_bindings; }; +/* struct ed_type_runtime_data has the below members for tracking the + * changes done to the runtime_data engine by the runtime_data engine + * handlers. Since this engine is an input to the flow_output engine, + * the flow output runtime data handler will make use of this tracked data. + * + * ------------------------------------------------------------------------ + * | | This is a hmap of | + * | | 'struct tracked_binding_datapath' defined in | + * | | binding.h. Runtime data handlers for OVS | + * | | Interface and Port Binding changes store the | + * | @tracked_dp_bindings | changed datapaths (datapaths added/removed from | + * | | local_datapaths) and changed port bindings | + * | | (added/updated/deleted in 'local_bindings'). | + * | | So any changes to the runtime data - | + * | | local_datapaths and local_bindings is captured | + * | | here. | + * ------------------------------------------------------------------------ + * | | This is a bool which represents if the runtime | + * | | data 'local_lports' changed by the runtime data | + * | | handlers for OVS Interface and Port Binding | + * | | changes. If 'local_lports' is updated and also | + * | | results in any port binding updates, it is | + * |@local_lports_changed | captured in the @tracked_dp_bindings. So there | + * | | is no need to capture the changes in the | + * | | local_lports. If @local_lports_changed is true | + * | | but without anydata in the @tracked_dp_bindings,| + * | | it means we needto only update the SB monitor | + * | | clauses and there isno need for any flow | + * | | (re)computations. | + * ------------------------------------------------------------------------ + * | | This represents if the data was tracked or not | + * | | by the runtime data handlers during the engine | + * | @tracked | run. If the runtime data recompute is | + * | | triggered, it means there is no tracked data. | + * ------------------------------------------------------------------------ + * + * + * The changes to the following runtime_data variables are not tracked. + * + * --------------------------------------------------------------------- + * | local_datapaths | The changes to these runtime data is captured in | + * | local_bindings | the @tracked_dp_bindings indirectly and hence it | + * | local_lport_ids | is not tracked explicitly. | + * --------------------------------------------------------------------- + * | local_iface_ids | This is used internally within the runtime data | + * | egress_ifaces | engine (used only in binding.c) and hence there | + * | | there is no need to track. | + * --------------------------------------------------------------------- + * | | Active tunnels is built in the | + * | | bfd_calculate_active_tunnels() for the tunnel | + * | | OVS interfaces. Any changes to non VIF OVS | + * | | interfaces results in triggering the full | + * | active_tunnels | recompute of runtime data engine and hence there | + * | | the tracked data doesn't track it. When we | + * | | support handling changes to non VIF OVS | + * | | interfaces we need to track the changes to the | + * | | active tunnels. 
| + * --------------------------------------------------------------------- + * + */ + +static void +en_runtime_data_clear_tracked_data(void *data_) +{ + struct ed_type_runtime_data *data = data_; + + binding_tracked_dp_destroy(&data->tracked_dp_bindings); + hmap_init(&data->tracked_dp_bindings); + data->local_lports_changed = false; + data->tracked = false; +} + static void * en_runtime_data_init(struct engine_node *node OVS_UNUSED, struct engine_arg *arg OVS_UNUSED) @@ -997,6 +1075,10 @@ en_runtime_data_init(struct engine_node *node OVS_UNUSED, sset_init(&data->egress_ifaces); smap_init(&data->local_iface_ids); local_bindings_init(&data->local_bindings); + + /* init the tracked data. */ + hmap_init(&data->tracked_dp_bindings); + return data; } @@ -1019,6 +1101,8 @@ en_runtime_data_cleanup(void *data) } hmap_destroy(&rt_data->local_datapaths); local_bindings_destroy(&rt_data->local_bindings); + + en_runtime_data_clear_tracked_data(data); } static void @@ -1102,6 +1186,8 @@ init_binding_ctx(struct engine_node *node, b_ctx_out->egress_ifaces = &rt_data->egress_ifaces; b_ctx_out->local_bindings = &rt_data->local_bindings; b_ctx_out->local_iface_ids = &rt_data->local_iface_ids; + b_ctx_out->tracked_dp_bindings = NULL; + b_ctx_out->local_lports_changed = NULL; } static void @@ -1113,6 +1199,23 @@ en_runtime_data_run(struct engine_node *node, void *data) struct sset *local_lport_ids = &rt_data->local_lport_ids; struct sset *active_tunnels = &rt_data->active_tunnels; + /* Clear the (stale) tracked data if any. Even though the tracked data + * gets cleared in the beginning of engine_init_run(), + * any of the runtime data handler might have set some tracked + * data and later another runtime data handler might return false + * resulting in full recompute of runtime engine and rendering the tracked + * data stale. + * + * It's possible that engine framework can be enhanced to indicate + * the node handlers (in this case flow_output_runtime_data_handler) + * that its input node had a full recompute. However we would still + * need to clear the tracked data, because we don't want the + * stale tracked data to be accessed outside of the engine, since the + * tracked data is cleared in the engine_init_run() and not at the + * end of the engine run. 
+ * */ + en_runtime_data_clear_tracked_data(data); + static bool first_run = true; if (first_run) { /* don't cleanup since there is no data yet */ @@ -1168,6 +1271,8 @@ runtime_data_ovs_interface_handler(struct engine_node *node, void *data) struct binding_ctx_in b_ctx_in; struct binding_ctx_out b_ctx_out; init_binding_ctx(node, rt_data, &b_ctx_in, &b_ctx_out); + rt_data->tracked = true; + b_ctx_out.tracked_dp_bindings = &rt_data->tracked_dp_bindings; if (!binding_handle_ovs_interface_changes(&b_ctx_in, &b_ctx_out)) { return false; @@ -1175,6 +1280,7 @@ runtime_data_ovs_interface_handler(struct engine_node *node, void *data) if (b_ctx_out.local_lports_changed) { engine_set_node_state(node, EN_UPDATED); + rt_data->local_lports_changed = b_ctx_out.local_lports_changed; } return true; @@ -1191,12 +1297,16 @@ runtime_data_sb_port_binding_handler(struct engine_node *node, void *data) return false; } + rt_data->tracked = true; + b_ctx_out.tracked_dp_bindings = &rt_data->tracked_dp_bindings; + if (!binding_handle_port_binding_changes(&b_ctx_in, &b_ctx_out)) { return false; } if (b_ctx_out.local_lport_ids_changed || - b_ctx_out.non_vif_ports_changed) { + b_ctx_out.non_vif_ports_changed || + !hmap_is_empty(b_ctx_out.tracked_dp_bindings)) { engine_set_node_state(node, EN_UPDATED); } @@ -1796,6 +1906,32 @@ flow_output_port_groups_handler(struct engine_node *node, void *data) return _flow_output_resource_ref_handler(node, data, REF_TYPE_PORTGROUP); } +static bool +flow_output_runtime_data_handler(struct engine_node *node, + void *data OVS_UNUSED) +{ + struct ed_type_runtime_data *rt_data = + engine_get_input_data("runtime_data", node); + + /* There is no tracked data. Fall back to full recompute of + * flow_output. */ + if (!rt_data->tracked) { + return false; + } + + if (!hmap_is_empty(&rt_data->tracked_dp_bindings)) { + /* We are not yet handling the tracked datapath binding + * changes. Return false to trigger full recompute. */ + return false; + } + + if (rt_data->local_lports_changed) { + engine_set_node_state(node, EN_UPDATED); + } + + return true; +} + /* Engine node en_physical_flow_changes indicates whether * there is a need to * - recompute only physical flows or @@ -2029,7 +2165,7 @@ main(int argc, char *argv[]) /* Define inc-proc-engine nodes. 
*/ ENGINE_NODE_CUSTOM_DATA(ct_zones, "ct_zones"); - ENGINE_NODE(runtime_data, "runtime_data"); + ENGINE_NODE_WITH_CLEAR_TRACK_DATA(runtime_data, "runtime_data"); ENGINE_NODE(mff_ovn_geneve, "mff_ovn_geneve"); ENGINE_NODE(ofctrl_is_connected, "ofctrl_is_connected"); ENGINE_NODE_WITH_CLEAR_TRACK_DATA(physical_flow_changes, @@ -2066,7 +2202,8 @@ main(int argc, char *argv[]) flow_output_addr_sets_handler); engine_add_input(&en_flow_output, &en_port_groups, flow_output_port_groups_handler); - engine_add_input(&en_flow_output, &en_runtime_data, NULL); + engine_add_input(&en_flow_output, &en_runtime_data, + flow_output_runtime_data_handler); engine_add_input(&en_flow_output, &en_mff_ovn_geneve, NULL); engine_add_input(&en_flow_output, &en_physical_flow_changes, flow_output_physical_flow_changes_handler); diff --git a/tests/ovn-performance.at b/tests/ovn-performance.at index 5cc1960b6..08acd3bae 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -286,11 +286,11 @@ for i in 1 2; do [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-type $lsp router] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-options $lsp router-port=$lrp] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-addresses $lsp router] ) @@ -416,15 +416,15 @@ OVN_CONTROLLER_EXPECT_NO_HIT( [ovn-nbctl --wait=hv destroy Port_Group pg1] ) -for i in 1 2; do - ls=ls$i +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-del ls1] +) - # Delete switch $ls - OVN_CONTROLLER_EXPECT_HIT( - [hv1 hv2], [lflow_run], - [ovn-nbctl --wait=hv ls-del $ls] - ) -done +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-del ls2] +) # Delete router lr1 OVN_CONTROLLER_EXPECT_NO_HIT( From patchwork Thu Jun 11 12:44:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307523 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.133; helo=hemlock.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNpl20QFz9sRN for ; Thu, 11 Jun 2020 22:44:31 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id 9955589677; Thu, 11 Jun 2020 12:44:29 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id b05xFXYZSin3; Thu, 11 Jun 2020 12:44:28 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by hemlock.osuosl.org (Postfix) with ESMTP id 2984F89631; Thu, 11 Jun 2020 12:44:28 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id F27A6C0894; Thu, 11 Jun 2020 12:44:27 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from silver.osuosl.org 
(smtp3.osuosl.org [140.211.166.136]) by lists.linuxfoundation.org (Postfix) with ESMTP id 71B01C0178 for ; Thu, 11 Jun 2020 12:44:26 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id 5E4B926A1C for ; Thu, 11 Jun 2020 12:44:26 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id lu8hGCowy6vr for ; Thu, 11 Jun 2020 12:44:14 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay4-d.mail.gandi.net (relay4-d.mail.gandi.net [217.70.183.196]) by silver.osuosl.org (Postfix) with ESMTPS id 3E07826977 for ; Thu, 11 Jun 2020 12:44:12 +0000 (UTC) X-Originating-IP: 27.7.184.71 Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay4-d.mail.gandi.net (Postfix) with ESMTPSA id 0039BE0018; Thu, 11 Jun 2020 12:44:07 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 18:14:05 +0530 Message-Id: <20200611124405.1356377-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Cc: Venkata Anil Subject: [ovs-dev] [PATCH ovn v12 4/7] ovn-controller: Use the tracked runtime data changes for flow calculation. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Venkata Anil This patch processes the logical flows of tracked datapaths and tracked logical ports. To handle the tracked logical port changes, reference of logical flows to port bindings is maintained. Co-Authored-by: Numan Siddique Signed-off-by: Venkata Anil Signed-off-by: Numan Siddique Acked-by: Dumitru Ceara --- controller/lflow.c | 86 +++++++++++++++++++++++++++++ controller/lflow.h | 12 +++- controller/ovn-controller.c | 107 ++++++++++++++++++------------------ tests/ovn-performance.at | 12 ++-- 4 files changed, 156 insertions(+), 61 deletions(-) diff --git a/controller/lflow.c b/controller/lflow.c index 01214a3a6..eb6be0100 100644 --- a/controller/lflow.c +++ b/controller/lflow.c @@ -59,6 +59,10 @@ struct condition_aux { struct ovsdb_idl_index *sbrec_port_binding_by_name; const struct sbrec_chassis *chassis; const struct sset *active_tunnels; + const struct sbrec_logical_flow *lflow; + /* Resource reference to store the port name referenced + * in is_chassis_resident() to lhe logicl flow. */ + struct lflow_resource_ref *lfrr; }; static bool @@ -68,6 +72,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, struct controller_event_options *controller_event_opts, struct lflow_ctx_in *l_ctx_in, struct lflow_ctx_out *l_ctx_out); +static void lflow_resource_add(struct lflow_resource_ref *, enum ref_type, + const char *ref_name, const struct uuid *); static bool lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp) @@ -120,6 +126,14 @@ is_chassis_resident_cb(const void *c_aux_, const char *port_name) if (!pb) { return false; } + + /* Store the port_name to lflow reference. 
*/ + int64_t dp_id = pb->datapath->tunnel_key; + char buf[16]; + snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, pb->tunnel_key); + lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf, + &c_aux->lflow->header_.uuid); + if (strcmp(pb->type, "chassisredirect")) { /* for non-chassisredirect ports */ return pb->chassis && pb->chassis == c_aux->chassis; @@ -594,6 +608,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name, .chassis = l_ctx_in->chassis, .active_tunnels = l_ctx_in->active_tunnels, + .lflow = lflow, + .lfrr = l_ctx_out->lfrr }; expr = expr_simplify(expr, is_chassis_resident_cb, &cond_aux); expr = expr_normalize(expr); @@ -649,6 +665,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, int64_t dp_id = lflow->logical_datapath->tunnel_key; char buf[16]; snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, port_id); + lflow_resource_add(l_ctx_out->lfrr, REF_TYPE_PORTBINDING, buf, + &lflow->header_.uuid); if (!sset_contains(l_ctx_in->local_lport_ids, buf)) { VLOG_DBG("lflow "UUID_FMT " port %s in match is not local, skip", @@ -847,3 +865,71 @@ lflow_destroy(void) expr_symtab_destroy(&symtab); shash_destroy(&symtab); } + +bool +lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *dp, + struct lflow_ctx_in *l_ctx_in, + struct lflow_ctx_out *l_ctx_out) +{ + bool handled = true; + struct hmap dhcp_opts = HMAP_INITIALIZER(&dhcp_opts); + struct hmap dhcpv6_opts = HMAP_INITIALIZER(&dhcpv6_opts); + const struct sbrec_dhcp_options *dhcp_opt_row; + SBREC_DHCP_OPTIONS_TABLE_FOR_EACH (dhcp_opt_row, + l_ctx_in->dhcp_options_table) { + dhcp_opt_add(&dhcp_opts, dhcp_opt_row->name, dhcp_opt_row->code, + dhcp_opt_row->type); + } + + + const struct sbrec_dhcpv6_options *dhcpv6_opt_row; + SBREC_DHCPV6_OPTIONS_TABLE_FOR_EACH (dhcpv6_opt_row, + l_ctx_in->dhcpv6_options_table) { + dhcp_opt_add(&dhcpv6_opts, dhcpv6_opt_row->name, dhcpv6_opt_row->code, + dhcpv6_opt_row->type); + } + + struct hmap nd_ra_opts = HMAP_INITIALIZER(&nd_ra_opts); + nd_ra_opts_init(&nd_ra_opts); + + struct controller_event_options controller_event_opts; + controller_event_opts_init(&controller_event_opts); + + struct sbrec_logical_flow *lf_row = sbrec_logical_flow_index_init_row( + l_ctx_in->sbrec_logical_flow_by_logical_datapath); + sbrec_logical_flow_index_set_logical_datapath(lf_row, dp); + + const struct sbrec_logical_flow *lflow; + SBREC_LOGICAL_FLOW_FOR_EACH_EQUAL ( + lflow, lf_row, l_ctx_in->sbrec_logical_flow_by_logical_datapath) { + /* Remove the lflow from flow_table if present before processing it. 
*/ + ofctrl_remove_flows(l_ctx_out->flow_table, &lflow->header_.uuid); + + if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts, + &nd_ra_opts, &controller_event_opts, + l_ctx_in, l_ctx_out)) { + handled = false; + break; + } + } + + dhcp_opts_destroy(&dhcp_opts); + dhcp_opts_destroy(&dhcpv6_opts); + nd_ra_opts_destroy(&nd_ra_opts); + controller_event_opts_destroy(&controller_event_opts); + return handled; +} + +bool +lflow_handle_flows_for_lport(const struct sbrec_port_binding *pb, + struct lflow_ctx_in *l_ctx_in, + struct lflow_ctx_out *l_ctx_out) +{ + int64_t dp_id = pb->datapath->tunnel_key; + char pb_ref_name[16]; + snprintf(pb_ref_name, sizeof(pb_ref_name), "%"PRId64"_%"PRId64, + dp_id, pb->tunnel_key); + bool changed = true; + return lflow_handle_changed_ref(REF_TYPE_PORTBINDING, pb_ref_name, + l_ctx_in, l_ctx_out, &changed); +} diff --git a/controller/lflow.h b/controller/lflow.h index f02f709d7..ae02eaf5e 100644 --- a/controller/lflow.h +++ b/controller/lflow.h @@ -48,6 +48,8 @@ struct sbrec_dhcp_options_table; struct sbrec_dhcpv6_options_table; struct sbrec_logical_flow_table; struct sbrec_mac_binding_table; +struct sbrec_datapath_binding; +struct sbrec_port_binding; struct simap; struct sset; struct uuid; @@ -72,7 +74,8 @@ struct uuid; enum ref_type { REF_TYPE_ADDRSET, - REF_TYPE_PORTGROUP + REF_TYPE_PORTGROUP, + REF_TYPE_PORTBINDING }; /* Maintains the relationship for a pair of named resource and @@ -117,6 +120,7 @@ void lflow_resource_clear(struct lflow_resource_ref *); struct lflow_ctx_in { struct ovsdb_idl_index *sbrec_multicast_group_by_name_datapath; + struct ovsdb_idl_index *sbrec_logical_flow_by_logical_datapath; struct ovsdb_idl_index *sbrec_port_binding_by_name; const struct sbrec_dhcp_options_table *dhcp_options_table; const struct sbrec_dhcpv6_options_table *dhcpv6_options_table; @@ -157,4 +161,10 @@ void lflow_destroy(void); void lflow_expr_destroy(struct hmap *lflow_expr_cache); +bool lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *, + struct lflow_ctx_in *, + struct lflow_ctx_out *); +bool lflow_handle_flows_for_lport(const struct sbrec_port_binding *, + struct lflow_ctx_in *, + struct lflow_ctx_out *); #endif /* controller/lflow.h */ diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index 351fa32db..53c412927 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -1512,6 +1512,11 @@ static void init_lflow_ctx(struct engine_node *node, engine_get_input("SB_port_binding", node), "name"); + struct ovsdb_idl_index *sbrec_logical_flow_by_dp = + engine_ovsdb_node_get_index( + engine_get_input("SB_logical_flow", node), + "logical_datapath"); + struct ovsdb_idl_index *sbrec_mc_group_by_name_dp = engine_ovsdb_node_get_index( engine_get_input("SB_multicast_group", node), @@ -1559,6 +1564,8 @@ static void init_lflow_ctx(struct engine_node *node, l_ctx_in->sbrec_multicast_group_by_name_datapath = sbrec_mc_group_by_name_dp; + l_ctx_in->sbrec_logical_flow_by_logical_datapath = + sbrec_logical_flow_by_dp; l_ctx_in->sbrec_port_binding_by_name = sbrec_port_binding_by_name; l_ctx_in->dhcp_options_table = dhcp_table; l_ctx_in->dhcpv6_options_table = dhcpv6_table; @@ -1713,7 +1720,8 @@ flow_output_sb_mac_binding_handler(struct engine_node *node, void *data) } static bool -flow_output_sb_port_binding_handler(struct engine_node *node, void *data) +flow_output_sb_port_binding_handler(struct engine_node *node, + void *data) { struct ed_type_runtime_data *rt_data = engine_get_input_data("runtime_data", node); @@ 
-1721,56 +1729,13 @@ flow_output_sb_port_binding_handler(struct engine_node *node, void *data) struct ed_type_flow_output *fo = data; struct ovn_desired_flow_table *flow_table = &fo->flow_table; - /* XXX: now we handle port-binding changes for physical flow processing - * only, but port-binding change can have impact to logical flow - * processing, too, in below circumstances: - * - * - When a port-binding for a lport is inserted/deleted but the lflow - * using that lport doesn't change. - * - * This can happen only when the lport name is used by ACL match - * condition, which is specified by user. Even in that case, if the port - * is actually bound on the current chassis it will trigger recompute on - * that chassis since ovs interface would be updated. So the only - * situation this would have real impact is when user defines an ACL - * that includes lport that is not on current chassis, and there is a - * port-binding creation/deletion related to that lport.e.g.: an ACL is - * defined: - * - * to-lport 1000 'outport=="A" && inport=="B"' allow-related - * - * If "A" is on current chassis, but "B" is lport that hasn't been - * created yet. When a lport "B" is created and bound on another - * chassis, the ACL will not take effect on the current chassis until a - * recompute is triggered later. This case doesn't seem to be a problem - * for real world use cases because usually lport is created before - * being referenced by name in ACLs. - * - * - When is_chassis_resident() is used in lflow. In this case the - * port binding is not a regular VIF. It can be either "patch" or - * "external", with ha-chassis-group assigned. In current - * "runtime_data" handling, port-binding changes for these types always - * trigger recomputing. So it is fine even if we do not handle it here. - * (due to the ovsdb tracking support for referenced table changes, - * ha-chassis-group changes will appear as port-binding change). - * - * - When a mac-binding doesn't change but the port-binding related to - * that mac-binding is deleted. In this case the neighbor flow generated - * for the mac-binding should be deleted. This would not cause any real - * issue for now, since the port-binding related to mac-binding is - * always logical router port, and any change to logical router port - * would just trigger recompute. - * - * Although there is no correctness issue so far (except the unusual ACL - * use case, which doesn't seem to be a real problem), it might be better - * to handle this more gracefully, without the need to consider these - * tricky scenarios. One approach is to maintain a mapping between lport - * names and the lflows that uses them, and reprocess the related lflows - * when related port-bindings change. - */ struct physical_ctx p_ctx; init_physical_ctx(node, rt_data, &p_ctx); + /* We handle port-binding changes for physical flow processing + * only. flow_output runtime data handler takes care of processing + * logical flows for any port binding changes. + */ physical_handle_port_binding_changes(&p_ctx, flow_table); engine_set_node_state(node, EN_UPDATED); @@ -1858,6 +1823,9 @@ _flow_output_resource_ref_handler(struct engine_node *node, void *data, updated = &pg_data->updated; deleted = &pg_data->deleted; break; + + /* This ref type is handled in the flow_output_runtime_data_handler. 
*/ + case REF_TYPE_PORTBINDING: default: OVS_NOT_REACHED(); } @@ -1919,16 +1887,42 @@ flow_output_runtime_data_handler(struct engine_node *node, return false; } - if (!hmap_is_empty(&rt_data->tracked_dp_bindings)) { - /* We are not yet handling the tracked datapath binding - * changes. Return false to trigger full recompute. */ - return false; + struct hmap *tracked_dp_bindings = &rt_data->tracked_dp_bindings; + if (hmap_is_empty(tracked_dp_bindings)) { + if (rt_data->local_lports_changed) { + engine_set_node_state(node, EN_UPDATED); + } + return true; } - if (rt_data->local_lports_changed) { - engine_set_node_state(node, EN_UPDATED); + struct lflow_ctx_in l_ctx_in; + struct lflow_ctx_out l_ctx_out; + struct ed_type_flow_output *fo = data; + init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out); + + struct physical_ctx p_ctx; + init_physical_ctx(node, rt_data, &p_ctx); + + struct tracked_binding_datapath *tdp; + HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) { + if (tdp->is_new) { + if (!lflow_add_flows_for_datapath(tdp->dp, &l_ctx_in, + &l_ctx_out)) { + return false; + } + } else { + struct shash_node *shash_node; + SHASH_FOR_EACH (shash_node, &tdp->lports) { + struct tracked_binding_lport *lport = shash_node->data; + if (!lflow_handle_flows_for_lport(lport->pb, &l_ctx_in, + &l_ctx_out)) { + return false; + } + } + } } + engine_set_node_state(node, EN_UPDATED); return true; } @@ -2099,6 +2093,9 @@ main(int argc, char *argv[]) = chassis_index_create(ovnsb_idl_loop.idl); struct ovsdb_idl_index *sbrec_multicast_group_by_name_datapath = mcast_group_index_create(ovnsb_idl_loop.idl); + struct ovsdb_idl_index *sbrec_logical_flow_by_logical_datapath + = ovsdb_idl_index_create1(ovnsb_idl_loop.idl, + &sbrec_logical_flow_col_logical_datapath); struct ovsdb_idl_index *sbrec_port_binding_by_name = ovsdb_idl_index_create1(ovnsb_idl_loop.idl, &sbrec_port_binding_col_logical_port); @@ -2258,6 +2255,8 @@ main(int argc, char *argv[]) engine_ovsdb_node_add_index(&en_sb_chassis, "name", sbrec_chassis_by_name); engine_ovsdb_node_add_index(&en_sb_multicast_group, "name_datapath", sbrec_multicast_group_by_name_datapath); + engine_ovsdb_node_add_index(&en_sb_logical_flow, "logical_datapath", + sbrec_logical_flow_by_logical_datapath); engine_ovsdb_node_add_index(&en_sb_port_binding, "name", sbrec_port_binding_by_name); engine_ovsdb_node_add_index(&en_sb_port_binding, "key", diff --git a/tests/ovn-performance.at b/tests/ovn-performance.at index 08acd3bae..a12757e18 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -274,7 +274,7 @@ for i in 1 2; do ) # Add router port to $ls - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lrp-add lr1 $lrp 02:00:00:00:0$i:01 10.0.$i.1/24] ) @@ -282,7 +282,7 @@ for i in 1 2; do [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-add $ls $lsp] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-type $lsp router] ) @@ -353,8 +353,8 @@ for i in 1 2; do ) # Bind port $lp and wait for it to come up - OVN_CONTROLLER_EXPECT_HIT_COND( - [hv$i hv$j], [lflow_run], [>0 =0], + OVN_CONTROLLER_EXPECT_NO_HIT( + [hv$i hv$j], [lflow_run], [as hv$i ovs-vsctl add-port br-int $vif -- set Interface $vif external-ids:iface-id=$lp && ovn-nbctl wait-until Logical_Switch_Port $lp 'up=true' && ovn-nbctl --wait=hv sync] @@ -404,8 +404,8 @@ for i in 1 2; do lp=lp$i # Delete port $lp - OVN_CONTROLLER_EXPECT_HIT_COND( - [hv$i hv$j], [lflow_run], [>0 =0], + 
OVN_CONTROLLER_EXPECT_NO_HIT( + [hv$i hv$j], [lflow_run], [ovn-nbctl --wait=hv lsp-del $lp] ) done From patchwork Thu Jun 11 12:44:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307526 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNpw4TYpz9sRN for ; Thu, 11 Jun 2020 22:44:40 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id 2F21187AD8; Thu, 11 Jun 2020 12:44:39 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pscBgZvOyDFV; Thu, 11 Jun 2020 12:44:36 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by fraxinus.osuosl.org (Postfix) with ESMTP id 3291687AC5; Thu, 11 Jun 2020 12:44:36 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id 22D0EC0894; Thu, 11 Jun 2020 12:44:36 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138]) by lists.linuxfoundation.org (Postfix) with ESMTP id DD1E6C0178 for ; Thu, 11 Jun 2020 12:44:34 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by whitealder.osuosl.org (Postfix) with ESMTP id 69D3788829 for ; Thu, 11 Jun 2020 12:44:33 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from whitealder.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wDK-b0S7+g+J for ; Thu, 11 Jun 2020 12:44:28 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay5-d.mail.gandi.net (relay5-d.mail.gandi.net [217.70.183.197]) by whitealder.osuosl.org (Postfix) with ESMTPS id 0356C886B2 for ; Thu, 11 Jun 2020 12:44:16 +0000 (UTC) X-Originating-IP: 27.7.184.71 Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay5-d.mail.gandi.net (Postfix) with ESMTPSA id 74FED1C0007; Thu, 11 Jun 2020 12:44:12 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 18:14:09 +0530 Message-Id: <20200611124409.1356507-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v12 5/7] tests: Enhance ovn-performance testing by adding gw router port. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique This covers the scenario of setting up/deleting of BFD tunnels for HA. 
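For reference, a minimal sketch of the scenario the new test exercises (the
commands, port and chassis names are the ones used in the test hunks below;
the priorities are illustrative only):

  # Schedule the gateway port on three chassis with decreasing priority.
  # BFD sessions between the hypervisors and the gateway chassis are only
  # expected once more than one chassis is in the gateway-chassis group.
  ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw1 30
  ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw2 20
  ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw3 10

  # Dropping chassis from the group tears the BFD sessions down again;
  # once a single chassis is left, BFD is disabled entirely.
  ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw2
  ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw1

The test asserts, via the lflow_run coverage counter, which hypervisors and
gateway chassis recompute their logical flows at each of these steps.
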
Tested-by: Dumitru Ceara Signed-off-by: Numan Siddique --- tests/ovn-performance.at | 104 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 104 insertions(+) diff --git a/tests/ovn-performance.at b/tests/ovn-performance.at index a12757e18..2a15cb473 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -239,6 +239,16 @@ for i in 1 2; do ovn_attach n1 br-phys 192.168.0.$i done +for i in 1 2 3; do + sim_add gw$i + as gw$i + ovs-vsctl add-br br-phys + ovs-vsctl add-br br-ex + ovs-vsctl set open . external_ids:ovn-bridge-mappings="public:br-ex" + j=$((i + 2)) + ovn_attach n1 br-phys 192.168.0.$j +done + # Wait for the tunnel ports to be created and up. # Otherwise this may affect the lflow_run count. @@ -399,6 +409,100 @@ OVN_CONTROLLER_EXPECT_NO_HIT( [ovn-nbctl --wait=hv acl-add pg1 to-lport 1001 'outport == @pg1 && ip4.src == $pg1_ip4' allow] ) +# Create a public logical switch and attach the router to it. +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-add public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-add public public-lr1] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-type public-lr1 router] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-addresses public-lr1 router] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-options public-lr1 router-port=lr1-public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lrp-add lr1 lr1-public 00:00:20:20:12:13 172.168.0.100/24] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-add public ln-public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-type ln-public localnet] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-addresses ln-public unknown] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-options ln-public network_name=public] +) + +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [=0 =0 >0 =0 =0], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw1 30] +) + +# After this, BFD should be enabled from hv1 and hv2 to gw1. +# So there should be lflow_run hits in hv1, hv2, gw1 and gw2 +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [>0 >0 >0 >0 =0], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw2 20] +) + +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw3 10] +) + +# Make gw2 master. +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw2 40] +) + +# Delete gw2 from gateway chassis +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw2] +) + +# Delete gw1 from gateway chassis +# After this, the BFD should be disabled entirely as gw3 is the +# only gateway chassis. +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [>0 >0 >0 =0 >0], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw1] +) + +# Delete gw3 from gateway chassis. There should be no lflow_run. 
+OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw3] +) + for i in 1 2; do j=$((i%2 + 1)) lp=lp$i From patchwork Thu Jun 11 12:44:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307527 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.136; helo=silver.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNqH366Xz9sRN for ; Thu, 11 Jun 2020 22:44:59 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id 81CFD26866; Thu, 11 Jun 2020 12:44:57 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 1sxP4y636LTx; Thu, 11 Jun 2020 12:44:48 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by silver.osuosl.org (Postfix) with ESMTP id E9F8226C40; Thu, 11 Jun 2020 12:44:31 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id DED85C0178; Thu, 11 Jun 2020 12:44:31 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) by lists.linuxfoundation.org (Postfix) with ESMTP id E8D20C0894 for ; Thu, 11 Jun 2020 12:44:28 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by silver.osuosl.org (Postfix) with ESMTP id D1F6926BA2 for ; Thu, 11 Jun 2020 12:44:28 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from silver.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VIU6QTsQHCyd for ; Thu, 11 Jun 2020 12:44:25 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay4-d.mail.gandi.net (relay4-d.mail.gandi.net [217.70.183.196]) by silver.osuosl.org (Postfix) with ESMTPS id DEBF526B1E for ; Thu, 11 Jun 2020 12:44:19 +0000 (UTC) X-Originating-IP: 27.7.184.71 Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay4-d.mail.gandi.net (Postfix) with ESMTPSA id 6365EE000F; Thu, 11 Jun 2020 12:44:17 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 18:14:14 +0530 Message-Id: <20200611124414.1356557-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v12 6/7] Add an util function get_unique_lport_key() for generating unique lport key. 
X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique Suggested-by: Dumitru Ceara Signed-off-by: Numan Siddique --- controller/binding.c | 8 ++++---- controller/lflow.c | 8 ++++---- lib/ovn-util.h | 8 ++++++++ 3 files changed, 16 insertions(+), 8 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 77ed7441a..0a7803583 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -522,8 +522,8 @@ update_local_lport_ids(struct binding_ctx_out *b_ctx, const struct sbrec_port_binding *pb) { char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, - pb->datapath->tunnel_key, pb->tunnel_key); + get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, + buf, sizeof(buf)); if (sset_add(b_ctx->local_lport_ids, buf) != NULL) { b_ctx->local_lport_ids_changed = true; @@ -542,8 +542,8 @@ remove_local_lport_ids(struct binding_ctx_out *b_ctx, const struct sbrec_port_binding *pb) { char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, - pb->datapath->tunnel_key, pb->tunnel_key); + get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, + buf, sizeof(buf)); if (sset_find_and_delete(b_ctx->local_lport_ids, buf)) { b_ctx->local_lport_ids_changed = true; diff --git a/controller/lflow.c b/controller/lflow.c index eb6be0100..546658c60 100644 --- a/controller/lflow.c +++ b/controller/lflow.c @@ -130,7 +130,7 @@ is_chassis_resident_cb(const void *c_aux_, const char *port_name) /* Store the port_name to lflow reference. */ int64_t dp_id = pb->datapath->tunnel_key; char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, pb->tunnel_key); + get_unique_lport_key(dp_id, pb->tunnel_key, buf, sizeof(buf)); lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf, &c_aux->lflow->header_.uuid); @@ -664,7 +664,7 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, if (port_id) { int64_t dp_id = lflow->logical_datapath->tunnel_key; char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, port_id); + get_unique_lport_key(dp_id, port_id, buf, sizeof(buf)); lflow_resource_add(l_ctx_out->lfrr, REF_TYPE_PORTBINDING, buf, &lflow->header_.uuid); if (!sset_contains(l_ctx_in->local_lport_ids, buf)) { @@ -927,8 +927,8 @@ lflow_handle_flows_for_lport(const struct sbrec_port_binding *pb, { int64_t dp_id = pb->datapath->tunnel_key; char pb_ref_name[16]; - snprintf(pb_ref_name, sizeof(pb_ref_name), "%"PRId64"_%"PRId64, - dp_id, pb->tunnel_key); + get_unique_lport_key(dp_id, pb->tunnel_key, pb_ref_name, + sizeof(pb_ref_name)); bool changed = true; return lflow_handle_changed_ref(REF_TYPE_PORTBINDING, pb_ref_name, l_ctx_in, l_ctx_out, &changed); diff --git a/lib/ovn-util.h b/lib/ovn-util.h index e13cf4d78..eba2948ff 100644 --- a/lib/ovn-util.h +++ b/lib/ovn-util.h @@ -114,6 +114,14 @@ bool ovn_tnlid_in_use(const struct hmap *set, uint32_t tnlid); uint32_t ovn_allocate_tnlid(struct hmap *set, const char *name, uint32_t min, uint32_t max, uint32_t *hint); +static inline void +get_unique_lport_key(uint64_t dp_tunnel_key, uint64_t lport_tunnel_key, + char *buf, size_t buf_size) +{ + snprintf(buf, buf_size, "%"PRId64"_%"PRId64, dp_tunnel_key, + lport_tunnel_key); +} + char *ovn_chassis_redirect_name(const char *port_name); void ovn_set_pidfile(const char *name); From patchwork Thu Jun 11 12:44:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1307524 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49jNpm3G0hz9sRW for ; Thu, 11 Jun 2020 22:44:32 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id EC99E87A77; Thu, 11 Jun 2020 12:44:30 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 8g6Shw1Ubxzz; Thu, 11 Jun 2020 12:44:29 +0000 (UTC) Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56]) by fraxinus.osuosl.org (Postfix) with ESMTP id 3597787A89; Thu, 11 Jun 2020 12:44:29 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id 26186C0895; Thu, 11 Jun 2020 12:44:29 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133]) by lists.linuxfoundation.org (Postfix) with ESMTP id E8B28C0893 for ; Thu, 11 Jun 2020 12:44:27 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by hemlock.osuosl.org (Postfix) with ESMTP id DD6EC8966D for ; Thu, 11 Jun 2020 12:44:27 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from hemlock.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Cu3RzOKPQwAV for ; Thu, 11 Jun 2020 12:44:26 +0000 (UTC) X-Greylist: domain auto-whitelisted by SQLgrey-1.7.6 Received: from relay10.mail.gandi.net (relay10.mail.gandi.net [217.70.178.230]) by hemlock.osuosl.org (Postfix) with ESMTPS id 5ABA889631 for ; Thu, 11 Jun 2020 12:44:26 +0000 (UTC) Received: from nusiddiq.home.org.home.org (unknown [27.7.184.71]) (Authenticated sender: numans@ovn.org) by relay10.mail.gandi.net (Postfix) with ESMTPSA id 5CC70240008; Thu, 11 Jun 2020 12:44:21 +0000 (UTC) From: numans@ovn.org To: dev@openvswitch.org Date: Thu, 11 Jun 2020 18:14:19 +0530 Message-Id: <20200611124419.1356615-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611124309.1353593-1-numans@ovn.org> References: <20200611124309.1353593-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v12 7/7] binding.c: Reorder out params of some of the static functions. 
X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique Signed-off-by: Numan Siddique --- controller/binding.c | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 0a7803583..088f52775 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -496,7 +496,7 @@ update_ld_localnet_port(const struct sbrec_port_binding *binding_rec, * Also track if the set has changed. */ static void -update_local_lports(struct binding_ctx_out *b_ctx, const char *iface_id) +update_local_lports(const char *iface_id, struct binding_ctx_out *b_ctx) { if (sset_add(b_ctx->local_lports, iface_id) != NULL) { b_ctx->local_lports_changed = true; @@ -507,7 +507,7 @@ update_local_lports(struct binding_ctx_out *b_ctx, const char *iface_id) * set has changed. */ static void -remove_local_lports(struct binding_ctx_out *b_ctx, const char *iface_id) +remove_local_lports(const char *iface_id, struct binding_ctx_out *b_ctx) { if (sset_find_and_delete(b_ctx->local_lports, iface_id)) { b_ctx->local_lports_changed = true; @@ -518,8 +518,8 @@ remove_local_lports(struct binding_ctx_out *b_ctx, const char *iface_id) * lport IDs. Also track if the set has changed. */ static void -update_local_lport_ids(struct binding_ctx_out *b_ctx, - const struct sbrec_port_binding *pb) +update_local_lport_ids(const struct sbrec_port_binding *pb, + struct binding_ctx_out *b_ctx) { char buf[16]; get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, @@ -538,8 +538,8 @@ update_local_lport_ids(struct binding_ctx_out *b_ctx, * the set has changed. */ static void -remove_local_lport_ids(struct binding_ctx_out *b_ctx, - const struct sbrec_port_binding *pb) +remove_local_lport_ids(const struct sbrec_port_binding *pb, + struct binding_ctx_out *b_ctx) { char buf[16]; get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, @@ -1001,7 +1001,7 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, pb->datapath, false, b_ctx_out->local_datapaths, b_ctx_out->tracked_dp_bindings); - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); if (lbinding->iface && qos_map && b_ctx_in->ovs_idl_txn) { get_qos_params(pb, qos_map); } @@ -1184,7 +1184,7 @@ consider_nonvif_lport_(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out) { if (our_chassis) { - update_local_lports(b_ctx_out, pb->logical_port); + update_local_lports(pb->logical_port, b_ctx_out); add_local_datapath(b_ctx_in->sbrec_datapath_binding_by_key, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, @@ -1192,7 +1192,7 @@ consider_nonvif_lport_(const struct sbrec_port_binding *pb, b_ctx_out->local_datapaths, b_ctx_out->tracked_dp_bindings); - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); return claim_lport(pb, b_ctx_in->chassis_rec, NULL, !b_ctx_in->ovnsb_idl_txn, b_ctx_out->tracked_dp_bindings); @@ -1236,13 +1236,13 @@ consider_localnet_lport(const struct sbrec_port_binding *pb, { /* Add all localnet ports to local_ifaces so that we allocate ct zones * for them. 
*/ - update_local_lports(b_ctx_out, pb->logical_port); + update_local_lports(pb->logical_port, b_ctx_out); if (qos_map && b_ctx_in->ovs_idl_txn) { get_qos_params(pb, qos_map); } - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); } static bool @@ -1272,7 +1272,7 @@ consider_ha_lport(const struct sbrec_port_binding *pb, pb->datapath, false, b_ctx_out->local_datapaths, b_ctx_out->tracked_dp_bindings); - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); } return consider_nonvif_lport_(pb, our_chassis, false, b_ctx_in, b_ctx_out); @@ -1338,7 +1338,7 @@ build_local_bindings(struct binding_ctx_in *b_ctx_in, ovs_assert(lbinding->type == BT_VIF); } - update_local_lports(b_ctx_out, iface_id); + update_local_lports(iface_id, b_ctx_out); smap_replace(b_ctx_out->local_iface_ids, iface_rec->name, iface_id); } @@ -1394,7 +1394,7 @@ binding_run(struct binding_ctx_in *b_ctx_in, struct binding_ctx_out *b_ctx_out) case LP_PATCH: case LP_LOCALPORT: case LP_VTEP: - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); break; case LP_VIF: @@ -1625,7 +1625,7 @@ remove_pb_from_local_datapath(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out, struct local_datapath *ld) { - remove_local_lport_ids(b_ctx_out, pb); + remove_local_lport_ids(pb, b_ctx_out); if (!strcmp(pb->type, "patch") || !strcmp(pb->type, "l3gateway")) { remove_local_datapath_peer_port(pb, ld, b_ctx_out->local_datapaths); @@ -1669,7 +1669,7 @@ consider_iface_claim(const struct ovsrec_interface *iface_rec, struct binding_ctx_out *b_ctx_out, struct hmap *qos_map) { - update_local_lports(b_ctx_out, iface_id); + update_local_lports(iface_id, b_ctx_out); smap_replace(b_ctx_out->local_iface_ids, iface_rec->name, iface_id); struct local_binding *lbinding = @@ -1763,7 +1763,7 @@ consider_iface_release(const struct ovsrec_interface *iface_rec, local_binding_delete(b_ctx_out->local_bindings, lbinding); } - remove_local_lports(b_ctx_out, iface_id); + remove_local_lports(iface_id, b_ctx_out); smap_remove(b_ctx_out->local_iface_ids, iface_rec->name); return true; @@ -2097,7 +2097,7 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, case LP_PATCH: case LP_LOCALPORT: case LP_VTEP: - update_local_lport_ids(b_ctx_out, pb); + update_local_lport_ids(pb, b_ctx_out); if (lport_type == LP_PATCH) { /* Add the peer datapath to the local datapaths if it's * not present yet.