From patchwork Wed Jun 10 06:18:11 2020
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 1306575
From: numans@ovn.org
To: dev@openvswitch.org
Date: Wed, 10 Jun 2020 11:48:11 +0530
Message-Id: <20200610061811.1966230-1-numans@ovn.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200610061726.1937231-1-numans@ovn.org>
References: <20200610061726.1937231-1-numans@ovn.org>
Subject: [ovs-dev] [PATCH ovn v11 1/6] I-P engine: Provide the option for an engine to clear tracked engine data in every run.

From: Numan Siddique

A new function, clear_tracked_data(), is added to the engine node to clear any engine data that was tracked during the engine run. This tracked data has to be part of the engine 'data'. engine_init_run() calls clear_tracked_data(), and each engine node interested in tracking data needs to implement its en_<NAME>_clear_tracked_data() function.

With this patch, an engine node can store any changes done to the engine data separately in the engine change handlers. The parent of this engine node can then use this tracked data to process the changes incrementally. Upcoming patches in the series will make use of this.
Acked-by: Dumitru Ceara Signed-off-by: Numan Siddique --- lib/inc-proc-eng.c | 8 ++++++++ lib/inc-proc-eng.h | 9 +++++++++ 2 files changed, 17 insertions(+) diff --git a/lib/inc-proc-eng.c b/lib/inc-proc-eng.c index 9b1479a1c..8b56cbaec 100644 --- a/lib/inc-proc-eng.c +++ b/lib/inc-proc-eng.c @@ -121,6 +121,10 @@ void engine_cleanup(void) { for (size_t i = 0; i < engine_n_nodes; i++) { + if (engine_nodes[i]->clear_tracked_data) { + engine_nodes[i]->clear_tracked_data(engine_nodes[i]->data); + } + if (engine_nodes[i]->cleanup) { engine_nodes[i]->cleanup(engine_nodes[i]->data); } @@ -260,6 +264,10 @@ engine_init_run(void) VLOG_DBG("Initializing new run"); for (size_t i = 0; i < engine_n_nodes; i++) { engine_set_node_state(engine_nodes[i], EN_STALE); + + if (engine_nodes[i]->clear_tracked_data) { + engine_nodes[i]->clear_tracked_data(engine_nodes[i]->data); + } } } diff --git a/lib/inc-proc-eng.h b/lib/inc-proc-eng.h index 8606a360b..e25bcb29c 100644 --- a/lib/inc-proc-eng.h +++ b/lib/inc-proc-eng.h @@ -149,6 +149,10 @@ struct engine_node { * doesn't store pointers to DB records it's still safe to use). */ bool (*is_valid)(struct engine_node *); + + /* Method to clear up tracked data maintained by the engine node in the + * engine 'data'. It may be NULL. */ + void (*clear_tracked_data)(void *tracked_data); }; /* Initialize the data for the engine nodes. 
It calls each node's @@ -282,6 +286,7 @@ void engine_ovsdb_node_add_index(struct engine_node *, const char *name, .run = en_##NAME##_run, \ .cleanup = en_##NAME##_cleanup, \ .is_valid = en_##NAME##_is_valid, \ + .clear_tracked_data = NULL, \ }; #define ENGINE_NODE_CUSTOM_DATA(NAME, NAME_STR) \ @@ -291,6 +296,10 @@ void engine_ovsdb_node_add_index(struct engine_node *, const char *name, static bool (*en_##NAME##_is_valid)(struct engine_node *node) = NULL; \ ENGINE_NODE_DEF(NAME, NAME_STR) +#define ENGINE_NODE_WITH_CLEAR_TRACK_DATA(NAME, NAME_STR) \ + ENGINE_NODE(NAME, NAME_STR) \ + en_##NAME.clear_tracked_data = en_##NAME##_clear_tracked_data; + /* Macro to define member functions of an engine node which represents * a table of OVSDB */ #define ENGINE_FUNC_OVSDB(DB_NAME, TBL_NAME) \
From patchwork Wed Jun 10 06:18:19 2020
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 1306576
From: numans@ovn.org
To: dev@openvswitch.org
Date: Wed, 10 Jun 2020 11:48:19 +0530
Message-Id: <20200610061819.1973526-1-numans@ovn.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200610061726.1937231-1-numans@ovn.org>
References: <20200610061726.1937231-1-numans@ovn.org>
Subject: [ovs-dev] [PATCH ovn v11 2/6] ovn-controller: I-P for ct zone and OVS interface changes in flow output stage.
From: Numan Siddique

This patch handles ct zone changes and OVS interface changes incrementally in the flow output stage. Any changes to ct zones can be handled by running physical_run() instead of flow_output_run(). Any changes to OVS interfaces can either be handled incrementally (for OVS interfaces representing VIFs) or by running physical_run() (for tunnel and other types of interfaces). To better handle this, a new engine node, 'physical_flow_changes', is added which handles changes to ct zones and OVS interfaces.

Signed-off-by: Numan Siddique --- controller/binding.c | 23 +----- controller/binding.h | 24 +++++- controller/ovn-controller.c | 145 +++++++++++++++++++++++++++++++++++- controller/physical.c | 51 +++++++++++++ controller/physical.h | 5 +- 5 files changed, 223 insertions(+), 25 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index e79220ed5..61cdc8dbc 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -502,7 +502,7 @@ remove_local_lport_ids(const struct sbrec_port_binding *pb, * 'struct local_binding' is used. A shash of these local bindings is * maintained with the 'external_ids:iface-id' as the key to the shash. * - * struct local_binding has 3 main fields: + * struct local_binding (defined in binding.h) has 3 main fields: * - type * - OVS interface row object * - Port_Binding row object @@ -553,21 +553,6 @@ remove_local_lport_ids(const struct sbrec_port_binding *pb, * - For each 'virtual' Port Binding (of type BT_VIRTUAL) provided its parent * is bound to this chassis.
*/ -enum local_binding_type { - BT_VIF, - BT_CONTAINER, - BT_VIRTUAL -}; - -struct local_binding { - char *name; - enum local_binding_type type; - const struct ovsrec_interface *iface; - const struct sbrec_port_binding *pb; - - /* shash of 'struct local_binding' representing children. */ - struct shash children; -}; static struct local_binding * local_binding_create(const char *name, const struct ovsrec_interface *iface, @@ -589,12 +574,6 @@ local_binding_add(struct shash *local_bindings, struct local_binding *lbinding) shash_add(local_bindings, lbinding->name, lbinding); } -static struct local_binding * -local_binding_find(struct shash *local_bindings, const char *name) -{ - return shash_find_data(local_bindings, name); -} - static void local_binding_destroy(struct local_binding *lbinding) { diff --git a/controller/binding.h b/controller/binding.h index f10c92bf9..e3d1f07de 100644 --- a/controller/binding.h +++ b/controller/binding.h @@ -18,6 +18,7 @@ #define OVN_BINDING_H 1 #include +#include "openvswitch/shash.h" struct hmap; struct ovsdb_idl; @@ -32,7 +33,6 @@ struct sbrec_chassis; struct sbrec_port_binding_table; struct sset; struct sbrec_port_binding; -struct shash; struct binding_ctx_in { struct ovsdb_idl_txn *ovnsb_idl_txn; @@ -64,6 +64,28 @@ struct binding_ctx_out { struct smap *local_iface_ids; }; +enum local_binding_type { + BT_VIF, + BT_CONTAINER, + BT_VIRTUAL +}; + +struct local_binding { + char *name; + enum local_binding_type type; + const struct ovsrec_interface *iface; + const struct sbrec_port_binding *pb; + + /* shash of 'struct local_binding' representing children. 
*/ + struct shash children; +}; + +static inline struct local_binding * +local_binding_find(struct shash *local_bindings, const char *name) +{ + return shash_find_data(local_bindings, name); +} + void binding_register_ovs_idl(struct ovsdb_idl *); void binding_run(struct binding_ctx_in *, struct binding_ctx_out *); bool binding_cleanup(struct ovsdb_idl_txn *ovnsb_idl_txn, diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index bb82b15dc..a6bee1f76 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -1361,8 +1361,13 @@ static void init_physical_ctx(struct engine_node *node, ovs_assert(br_int && chassis); + struct ovsrec_interface_table *iface_table = + (struct ovsrec_interface_table *)EN_OVSDB_GET( + engine_get_input("OVS_interface", node)); + struct ed_type_ct_zones *ct_zones_data = engine_get_input_data("ct_zones", node); + struct simap *ct_zones = &ct_zones_data->current; p_ctx->sbrec_port_binding_by_name = sbrec_port_binding_by_name; @@ -1370,12 +1375,14 @@ static void init_physical_ctx(struct engine_node *node, p_ctx->mc_group_table = multicast_group_table; p_ctx->br_int = br_int; p_ctx->chassis_table = chassis_table; + p_ctx->iface_table = iface_table; p_ctx->chassis = chassis; p_ctx->active_tunnels = &rt_data->active_tunnels; p_ctx->local_datapaths = &rt_data->local_datapaths; p_ctx->local_lports = &rt_data->local_lports; p_ctx->ct_zones = ct_zones; p_ctx->mff_ovn_geneve = ed_mff_ovn_geneve->mff_ovn_geneve; + p_ctx->local_bindings = &rt_data->local_bindings; } static void init_lflow_ctx(struct engine_node *node, @@ -1783,6 +1790,125 @@ flow_output_port_groups_handler(struct engine_node *node, void *data) return _flow_output_resource_ref_handler(node, data, REF_TYPE_PORTGROUP); } +/* Engine node en_physical_flow_changes indicates whether + * there is a need to + * - recompute only physical flows or + * - we can incrementally process the physical flows. 
+ * + * en_physical_flow_changes is an input to the flow_output engine node. + * If the engine node 'en_physical_flow_changes' gets updated during + * an engine run, it means the handler for this - + * flow_output_physical_flow_changes_handler() - will either + * - recompute the physical flows by calling physical_run() or + * - incrementally process some of the changes for physical flow + * calculation. Right now we handle OVS interface changes + * for physical flow computation. + * + * Whenever a port binding happens, the follow-up + * activity is the zone id allocation for that port binding. + * With this intermediate engine node, we avoid full recomputation. + * Instead we do physical flow computation (either full recomputation + * by calling physical_run() or handling the changes incrementally). + * + * Hence this is an intermediate engine node to tell the + * flow_output engine to recompute/compute the physical flows. + * + * TODO 1. Ideally this engine node should recompute/compute the physical + * flows instead of relegating it to the flow_output node. + * But this requires splitting the flow_output node into + * logical_flow_output and physical_flow_output. + * + * TODO 2. We can further optimise the en_ct_zone changes to + * compute the physical flows for changed zone ids. + * + * TODO 3: physical.c has a global simap 'localvif_to_ofport' which stores the + * local OVS interfaces and the ofport numbers. Ideally this should be + * part of the engine data. + */ +struct ed_type_pfc_data { + /* Both these variables are tracked and set in each engine run.
*/ + bool recompute_physical_flows; + bool ovs_ifaces_changed; +}; + +static void +en_physical_flow_changes_clear_tracked_data(void *data_) +{ + struct ed_type_pfc_data *data = data_; + data->recompute_physical_flows = false; + data->ovs_ifaces_changed = false; +} + +static void * +en_physical_flow_changes_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + struct ed_type_pfc_data *data = xzalloc(sizeof *data); + return data; +} + +static void +en_physical_flow_changes_cleanup(void *data OVS_UNUSED) +{ +} + +/* Indicate to the flow_output engine that we need to recompute physical + * flows. */ +static void +en_physical_flow_changes_run(struct engine_node *node, void *data) +{ + struct ed_type_pfc_data *pfc_tdata = data; + pfc_tdata->recompute_physical_flows = true; + engine_set_node_state(node, EN_UPDATED); +} + +/* There are OVS interface changes. Indicate to the flow_output engine + * to handle these OVS interface changes for physical flow computations. */ +static bool +physical_flow_changes_ovs_iface_handler(struct engine_node *node, void *data) +{ + struct ed_type_pfc_data *pfc_tdata = data; + pfc_tdata->ovs_ifaces_changed = true; + engine_set_node_state(node, EN_UPDATED); + return true; +} + +static bool +flow_output_physical_flow_changes_handler(struct engine_node *node, void *data) +{ + struct ed_type_runtime_data *rt_data = + engine_get_input_data("runtime_data", node); + + struct ed_type_flow_output *fo = data; + struct physical_ctx p_ctx; + init_physical_ctx(node, rt_data, &p_ctx); + + engine_set_node_state(node, EN_UPDATED); + struct ed_type_pfc_data *pfc_data = + engine_get_input_data("physical_flow_changes", node); + + if (pfc_data->recompute_physical_flows) { + /* This indicates that we need to recompute the physical flows. */ + physical_run(&p_ctx, &fo->flow_table); + return true; + } + + if (pfc_data->ovs_ifaces_changed) { + /* There are OVS interface changes. Try to handle them + * incrementally. 
*/ + return physical_handle_ovs_iface_changes(&p_ctx, &fo->flow_table); + } + + return true; +} + +static bool +flow_output_noop_handler(struct engine_node *node OVS_UNUSED, + void *data OVS_UNUSED) +{ + return true; +} + struct ovn_controller_exit_args { bool *exiting; bool *restart; @@ -1907,6 +2033,8 @@ main(int argc, char *argv[]) ENGINE_NODE(runtime_data, "runtime_data"); ENGINE_NODE(mff_ovn_geneve, "mff_ovn_geneve"); ENGINE_NODE(ofctrl_is_connected, "ofctrl_is_connected"); + ENGINE_NODE_WITH_CLEAR_TRACK_DATA(physical_flow_changes, + "physical_flow_changes"); ENGINE_NODE(flow_output, "flow_output"); ENGINE_NODE(addr_sets, "addr_sets"); ENGINE_NODE(port_groups, "port_groups"); @@ -1926,13 +2054,28 @@ main(int argc, char *argv[]) engine_add_input(&en_port_groups, &en_sb_port_group, port_groups_sb_port_group_handler); + /* Engine node physical_flow_changes indicates whether + * we can recompute only physical flows or we can + * incrementally process the physical flows. + */ + engine_add_input(&en_physical_flow_changes, &en_ct_zones, + NULL); + engine_add_input(&en_physical_flow_changes, &en_ovs_interface, + physical_flow_changes_ovs_iface_handler); + engine_add_input(&en_flow_output, &en_addr_sets, flow_output_addr_sets_handler); engine_add_input(&en_flow_output, &en_port_groups, flow_output_port_groups_handler); engine_add_input(&en_flow_output, &en_runtime_data, NULL); - engine_add_input(&en_flow_output, &en_ct_zones, NULL); engine_add_input(&en_flow_output, &en_mff_ovn_geneve, NULL); + engine_add_input(&en_flow_output, &en_physical_flow_changes, + flow_output_physical_flow_changes_handler); + + /* We need this input nodes for only data. Hence the noop handler. 
*/ + engine_add_input(&en_flow_output, &en_ct_zones, flow_output_noop_handler); + engine_add_input(&en_flow_output, &en_ovs_interface, + flow_output_noop_handler); engine_add_input(&en_flow_output, &en_ovs_open_vswitch, NULL); engine_add_input(&en_flow_output, &en_ovs_bridge, NULL); diff --git a/controller/physical.c b/controller/physical.c index f06313b9d..240ec158e 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -1780,6 +1780,57 @@ physical_run(struct physical_ctx *p_ctx, simap_destroy(&new_tunnel_to_ofport); } +bool +physical_handle_ovs_iface_changes(struct physical_ctx *p_ctx, + struct ovn_desired_flow_table *flow_table) +{ + const struct ovsrec_interface *iface_rec; + OVSREC_INTERFACE_TABLE_FOR_EACH_TRACKED (iface_rec, p_ctx->iface_table) { + if (!strcmp(iface_rec->type, "geneve") || + !strcmp(iface_rec->type, "patch")) { + return false; + } + } + + struct ofpbuf ofpacts; + ofpbuf_init(&ofpacts, 0); + + OVSREC_INTERFACE_TABLE_FOR_EACH_TRACKED (iface_rec, p_ctx->iface_table) { + const char *iface_id = smap_get(&iface_rec->external_ids, "iface-id"); + if (!iface_id) { + continue; + } + + const struct local_binding *lb = + local_binding_find(p_ctx->local_bindings, iface_id); + + if (!lb || !lb->pb) { + continue; + } + + int64_t ofport = iface_rec->n_ofport ? 
*iface_rec->ofport : 0; + if (ovsrec_interface_is_deleted(iface_rec)) { + ofctrl_remove_flows(flow_table, &lb->pb->header_.uuid); + simap_find_and_delete(&localvif_to_ofport, iface_id); + } else { + if (!ovsrec_interface_is_new(iface_rec)) { + ofctrl_remove_flows(flow_table, &lb->pb->header_.uuid); + } + + simap_put(&localvif_to_ofport, iface_id, ofport); + consider_port_binding(p_ctx->sbrec_port_binding_by_name, + p_ctx->mff_ovn_geneve, p_ctx->ct_zones, + p_ctx->active_tunnels, + p_ctx->local_datapaths, + lb->pb, p_ctx->chassis, + flow_table, &ofpacts); + } + } + + ofpbuf_uninit(&ofpacts); + return true; +} + bool get_tunnel_ofport(const char *chassis_name, char *encap_ip, ofp_port_t *ofport) { diff --git a/controller/physical.h b/controller/physical.h index dadf84ea1..9ca34436a 100644 --- a/controller/physical.h +++ b/controller/physical.h @@ -49,11 +49,13 @@ struct physical_ctx { const struct ovsrec_bridge *br_int; const struct sbrec_chassis_table *chassis_table; const struct sbrec_chassis *chassis; + const struct ovsrec_interface_table *iface_table; const struct sset *active_tunnels; struct hmap *local_datapaths; struct sset *local_lports; const struct simap *ct_zones; enum mf_field_id mff_ovn_geneve; + struct shash *local_bindings; }; void physical_register_ovs_idl(struct ovsdb_idl *); @@ -63,7 +65,8 @@ void physical_handle_port_binding_changes(struct physical_ctx *, struct ovn_desired_flow_table *); void physical_handle_mc_group_changes(struct physical_ctx *, struct ovn_desired_flow_table *); - +bool physical_handle_ovs_iface_changes(struct physical_ctx *, + struct ovn_desired_flow_table *); bool get_tunnel_ofport(const char *chassis_name, char *encap_ip, ofp_port_t *ofport); #endif /* controller/physical.h */
From patchwork Wed Jun 10 06:18:25 2020
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 1306577
From: numans@ovn.org
To: dev@openvswitch.org
Date: Wed, 10 Jun 2020 11:48:25 +0530
Message-Id: <20200610061825.1977154-1-numans@ovn.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200610061726.1937231-1-numans@ovn.org>
References: <20200610061726.1937231-1-numans@ovn.org>
Cc: Venkata Anil
Subject: [ovs-dev] [PATCH ovn v11 3/6] ovn-controller: Handle runtime data changes in flow output engine

From: Numan Siddique

In order to handle runtime data changes incrementally, the flow output engine's runtime_data handler should know what runtime data changed. Runtime data now tracks the changed data for any OVS interface and SB port binding changes. The tracked data contains an hmap of tracked datapaths (datapaths which changed during runtime data processing).

The flow output runtime_data handler in this patch doesn't do much with the tracked data. It returns false if tracked data is available, so that flow_output_run() is called. If no tracked data is available, then there is no need for flow computation and the handler returns true. The next patch in the series processes the tracked data incrementally.
Co-Authored-by: Venkata Anil Signed-off-by: Venkata Anil Signed-off-by: Numan Siddique --- controller/binding.c | 291 ++++++++++++++++++++++++++++-------- controller/binding.h | 31 +++- controller/ovn-controller.c | 151 +++++++++++++++++-- tests/ovn-performance.at | 20 +-- 4 files changed, 406 insertions(+), 87 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 61cdc8dbc..3da19a219 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -69,13 +69,25 @@ binding_register_ovs_idl(struct ovsdb_idl *ovs_idl) ovsdb_idl_add_column(ovs_idl, &ovsrec_qos_col_type); } +static struct tracked_binding_datapath *tracked_binding_datapath_create( + const struct sbrec_datapath_binding *, + bool is_new, struct hmap *tracked_dps); +static struct tracked_binding_datapath *tracked_binding_datapath_find( + struct hmap *, const struct sbrec_datapath_binding *); +static void tracked_binding_datapath_lport_add( + const struct sbrec_port_binding *, struct hmap *tracked_datapaths); +static void update_lport_tracking(const struct sbrec_port_binding *pb, + bool old_claim, bool new_claim, + struct hmap *tracked_dp_bindings); + static void add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, struct ovsdb_idl_index *sbrec_port_binding_by_datapath, struct ovsdb_idl_index *sbrec_port_binding_by_name, const struct sbrec_datapath_binding *datapath, bool has_local_l3gateway, int depth, - struct hmap *local_datapaths) + struct hmap *local_datapaths, + struct hmap *tracked_datapaths) { uint32_t dp_key = datapath->tunnel_key; struct local_datapath *ld = get_local_datapath(local_datapaths, dp_key); @@ -92,6 +104,11 @@ add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, ld->localnet_port = NULL; ld->has_local_l3gateway = has_local_l3gateway; + if (tracked_datapaths && + !tracked_binding_datapath_find(tracked_datapaths, datapath)) { + tracked_binding_datapath_create(datapath, true, tracked_datapaths); + } + if (depth >= 100) { 
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); VLOG_WARN_RL(&rl, "datapaths nested too deep"); @@ -124,7 +141,8 @@ add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, sbrec_port_binding_by_datapath, sbrec_port_binding_by_name, peer->datapath, false, - depth + 1, local_datapaths); + depth + 1, local_datapaths, + tracked_datapaths); } ld->n_peer_ports++; if (ld->n_peer_ports > ld->n_allocated_peer_ports) { @@ -147,12 +165,14 @@ add_local_datapath(struct ovsdb_idl_index *sbrec_datapath_binding_by_key, struct ovsdb_idl_index *sbrec_port_binding_by_datapath, struct ovsdb_idl_index *sbrec_port_binding_by_name, const struct sbrec_datapath_binding *datapath, - bool has_local_l3gateway, struct hmap *local_datapaths) + bool has_local_l3gateway, struct hmap *local_datapaths, + struct hmap *tracked_datapaths) { add_local_datapath__(sbrec_datapath_binding_by_key, sbrec_port_binding_by_datapath, sbrec_port_binding_by_name, - datapath, has_local_l3gateway, 0, local_datapaths); + datapath, has_local_l3gateway, 0, local_datapaths, + tracked_datapaths); } static void @@ -473,22 +493,45 @@ update_ld_localnet_port(const struct sbrec_port_binding *binding_rec, static void update_local_lport_ids(struct sset *local_lport_ids, - const struct sbrec_port_binding *pb) + const struct sbrec_port_binding *pb, + struct hmap *tracked_datapaths) { char buf[16]; snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, pb->datapath->tunnel_key, pb->tunnel_key); - sset_add(local_lport_ids, buf); + bool added = !!sset_add(local_lport_ids, buf); + if (added && tracked_datapaths) { + /* Add the 'pb' to the tracked_datapaths. 
*/ + tracked_binding_datapath_lport_add(pb, tracked_datapaths); + } } static void remove_local_lport_ids(const struct sbrec_port_binding *pb, - struct sset *local_lport_ids) + struct sset *local_lport_ids, + struct hmap *tracked_datapaths) { char buf[16]; snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, pb->datapath->tunnel_key, pb->tunnel_key); - sset_find_and_delete(local_lport_ids, buf); + bool deleted = sset_find_and_delete(local_lport_ids, buf); + if (deleted && tracked_datapaths) { + /* Add the 'pb' to the tracked_datapaths. */ + tracked_binding_datapath_lport_add(pb, tracked_datapaths); + } +} + +static bool +add_lport_to_local_lports(struct sset *local_lports, const char *lport_name) +{ + return !!sset_add(local_lports, lport_name); +} + +static bool +remove_lport_from_local_lports(struct sset *local_lports, + const char *lport_name) +{ + return sset_find_and_delete(local_lports, lport_name); } /* Local bindings. binding.c module binds the logical port (represented by @@ -636,6 +679,75 @@ is_lport_container(const struct sbrec_port_binding *pb) return is_lport_vif(pb) && pb->parent_port && pb->parent_port[0]; } +static struct tracked_binding_datapath * +tracked_binding_datapath_create(const struct sbrec_datapath_binding *dp, + bool is_new, + struct hmap *tracked_datapaths) +{ + struct tracked_binding_datapath *t_dp = xzalloc(sizeof *t_dp); + t_dp->dp = dp; + t_dp->is_new = is_new; + shash_init(&t_dp->lports); + hmap_insert(tracked_datapaths, &t_dp->node, uuid_hash(&dp->header_.uuid)); + return t_dp; +} + +static struct tracked_binding_datapath * +tracked_binding_datapath_find(struct hmap *tracked_datapaths, + const struct sbrec_datapath_binding *dp) +{ + struct tracked_binding_datapath *t_dp; + size_t hash = uuid_hash(&dp->header_.uuid); + HMAP_FOR_EACH_WITH_HASH (t_dp, node, hash, tracked_datapaths) { + if (uuid_equals(&t_dp->dp->header_.uuid, &dp->header_.uuid)) { + return t_dp; + } + } + + return NULL; +} + +static void 
+tracked_binding_datapath_lport_add(const struct sbrec_port_binding *pb, + struct hmap *tracked_datapaths) +{ + if (!tracked_datapaths) { + return; + } + + struct tracked_binding_datapath *tracked_dp = + tracked_binding_datapath_find(tracked_datapaths, pb->datapath); + if (!tracked_dp) { + tracked_dp = tracked_binding_datapath_create(pb->datapath, false, + tracked_datapaths); + } + + /* Check if the lport is already present or not. + * If it is already present, then just update the 'pb' field. */ + struct tracked_binding_lport *lport = + shash_find_data(&tracked_dp->lports, pb->logical_port); + + if (!lport) { + lport = xmalloc(sizeof *lport); + shash_add(&tracked_dp->lports, pb->logical_port, lport); + } + + lport->pb = pb; +} + +void +binding_tracked_dp_destroy(struct hmap *tracked_datapaths) +{ + struct tracked_binding_datapath *t_dp; + HMAP_FOR_EACH_POP (t_dp, node, tracked_datapaths) { + shash_destroy_free_data(&t_dp->lports); + free(t_dp); + } + + hmap_destroy(tracked_datapaths); +} + /* Corresponds to each Port_Binding.type. */ enum en_lport_type { LP_UNKNOWN, @@ -689,7 +801,7 @@ static bool claim_lport(const struct sbrec_port_binding *pb, const struct sbrec_chassis *chassis_rec, const struct ovsrec_interface *iface_rec, - bool sb_readonly) + bool sb_readonly, struct hmap *tracked_datapaths) { if (pb->chassis != chassis_rec) { if (sb_readonly) { @@ -708,6 +820,10 @@ claim_lport(const struct sbrec_port_binding *pb, } sbrec_port_binding_set_chassis(pb, chassis_rec); + + if (tracked_datapaths) { + update_lport_tracking(pb, false, true, tracked_datapaths); + } } /* Check if the port encap binding, if any, has changed */ @@ -725,9 +841,14 @@ claim_lport(const struct sbrec_port_binding *pb, /* Returns false if lport is not released due to 'sb_readonly'. * Returns true otherwise. + * + * This function assumes that the 'pb' was claimed + * earlier, i.e., the port binding's chassis is set to this chassis.
+ * Caller should make sure that this is the case. */ static bool -release_lport(const struct sbrec_port_binding *pb, bool sb_readonly) +release_lport(const struct sbrec_port_binding *pb, bool sb_readonly, + struct hmap *tracked_datapaths) { if (pb->encap) { if (sb_readonly) { @@ -750,6 +871,7 @@ release_lport(const struct sbrec_port_binding *pb, bool sb_readonly) sbrec_port_binding_set_virtual_parent(pb, NULL); } + update_lport_tracking(pb, true, false, tracked_datapaths); VLOG_INFO("Releasing lport %s from this chassis.", pb->logical_port); return true; } @@ -795,13 +917,14 @@ is_lbinding_container_parent(struct local_binding *lbinding) static bool release_local_binding_children(const struct sbrec_chassis *chassis_rec, struct local_binding *lbinding, - bool sb_readonly) + bool sb_readonly, + struct hmap *tracked_dp_bindings) { struct shash_node *node; SHASH_FOR_EACH (node, &lbinding->children) { struct local_binding *l = node->data; if (is_lbinding_this_chassis(l, chassis_rec)) { - if (!release_lport(l->pb, sb_readonly)) { + if (!release_lport(l->pb, sb_readonly, tracked_dp_bindings)) { return false; } } @@ -816,16 +939,17 @@ release_local_binding_children(const struct sbrec_chassis *chassis_rec, static bool release_local_binding(const struct sbrec_chassis *chassis_rec, - struct local_binding *lbinding, bool sb_readonly) + struct local_binding *lbinding, bool sb_readonly, + struct hmap *tracked_dp_bindings) { if (!release_local_binding_children(chassis_rec, lbinding, - sb_readonly)) { + sb_readonly, tracked_dp_bindings)) { return false; } bool retval = true; if (is_lbinding_this_chassis(lbinding, chassis_rec)) { - retval = release_lport(lbinding->pb, sb_readonly); + retval = release_lport(lbinding->pb, sb_readonly, tracked_dp_bindings); } lbinding->pb = NULL; @@ -846,7 +970,8 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, if (can_bind) { /* We can claim the lport. 
*/ if (!claim_lport(pb, b_ctx_in->chassis_rec, lbinding->iface, - !b_ctx_in->ovnsb_idl_txn)){ + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)){ return false; } @@ -854,8 +979,10 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, false, - b_ctx_out->local_datapaths); - update_local_lport_ids(b_ctx_out->local_lport_ids, pb); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, + b_ctx_out->tracked_dp_bindings); if (lbinding->iface && qos_map && b_ctx_in->ovs_idl_txn) { get_qos_params(pb, qos_map); } @@ -874,7 +1001,8 @@ consider_vif_lport_(const struct sbrec_port_binding *pb, if (pb->chassis == b_ctx_in->chassis_rec) { /* Release the lport if there is no lbinding. */ if (!lbinding_set || !can_bind) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } } @@ -961,7 +1089,8 @@ consider_container_lport(const struct sbrec_port_binding *pb, * release the container lport, if it was bound earlier. 
*/ if (is_lbinding_this_chassis(container_lbinding, b_ctx_in->chassis_rec)) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } return true; @@ -1036,18 +1165,22 @@ consider_nonvif_lport_(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out) { if (our_chassis) { - sset_add(b_ctx_out->local_lports, pb->logical_port); + add_lport_to_local_lports(b_ctx_out->local_lports, pb->logical_port); add_local_datapath(b_ctx_in->sbrec_datapath_binding_by_key, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, has_local_l3gateway, - b_ctx_out->local_datapaths); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); - update_local_lport_ids(b_ctx_out->local_lport_ids, pb); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, + b_ctx_out->tracked_dp_bindings); return claim_lport(pb, b_ctx_in->chassis_rec, NULL, - !b_ctx_in->ovnsb_idl_txn); + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } else if (pb->chassis == b_ctx_in->chassis_rec) { - return release_lport(pb, !b_ctx_in->ovnsb_idl_txn); + return release_lport(pb, !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings); } return true; @@ -1083,14 +1216,15 @@ consider_localnet_lport(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out, struct hmap *qos_map) { - /* Add all localnet ports to local_lports so that we allocate ct zones + /* Add all localnet ports to local_ifaces so that we allocate ct zones * for them. 
*/ - sset_add(b_ctx_out->local_lports, pb->logical_port); + add_lport_to_local_lports(b_ctx_out->local_lports, pb->logical_port); if (qos_map && b_ctx_in->ovs_idl_txn) { get_qos_params(pb, qos_map); } - update_local_lport_ids(b_ctx_out->local_lport_ids, pb); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, + b_ctx_out->tracked_dp_bindings); } static bool @@ -1118,7 +1252,10 @@ consider_ha_lport(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, pb->datapath, false, - b_ctx_out->local_datapaths); + b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, + b_ctx_out->tracked_dp_bindings); } return consider_nonvif_lport_(pb, our_chassis, false, b_ctx_in, b_ctx_out); @@ -1184,7 +1321,7 @@ build_local_bindings(struct binding_ctx_in *b_ctx_in, ovs_assert(lbinding->type == BT_VIF); } - sset_add(b_ctx_out->local_lports, iface_id); + add_lport_to_local_lports(b_ctx_out->local_lports, iface_id); smap_replace(b_ctx_out->local_iface_ids, iface_rec->name, iface_id); } @@ -1240,7 +1377,7 @@ binding_run(struct binding_ctx_in *b_ctx_in, struct binding_ctx_out *b_ctx_out) case LP_PATCH: case LP_LOCALPORT: case LP_VTEP: - update_local_lport_ids(b_ctx_out->local_lport_ids, pb); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, NULL); break; case LP_VIF: @@ -1408,7 +1545,8 @@ add_local_datapath_peer_port(const struct sbrec_port_binding *pb, b_ctx_in->sbrec_port_binding_by_datapath, b_ctx_in->sbrec_port_binding_by_name, peer->datapath, false, - 1, b_ctx_out->local_datapaths); + 1, b_ctx_out->local_datapaths, + b_ctx_out->tracked_dp_bindings); return; } @@ -1470,7 +1608,8 @@ remove_pb_from_local_datapath(const struct sbrec_port_binding *pb, struct binding_ctx_out *b_ctx_out, struct local_datapath *ld) { - remove_local_lport_ids(pb, b_ctx_out->local_lport_ids); + remove_local_lport_ids(pb, b_ctx_out->local_lport_ids, + 
b_ctx_out->tracked_dp_bindings); if (!strcmp(pb->type, "patch") || !strcmp(pb->type, "l3gateway")) { remove_local_datapath_peer_port(pb, ld, b_ctx_out->local_datapaths); @@ -1488,6 +1627,18 @@ remove_pb_from_local_datapath(const struct sbrec_port_binding *pb, } } +static void +update_lport_tracking(const struct sbrec_port_binding *pb, + bool old_claim, bool new_claim, + struct hmap *tracked_dp_bindings) +{ + if (!tracked_dp_bindings || (old_claim == new_claim)) { + return; + } + + tracked_binding_datapath_lport_add(pb, tracked_dp_bindings); +} + /* Considers the ovs iface 'iface_rec' for claiming. * This function should be called if the external_ids:iface-id * and 'ofport' are set for the 'iface_rec'. @@ -1500,9 +1651,13 @@ consider_iface_claim(const struct ovsrec_interface *iface_rec, const char *iface_id, struct binding_ctx_in *b_ctx_in, struct binding_ctx_out *b_ctx_out, - struct hmap *qos_map) + struct hmap *qos_map, + bool *local_lports_changed) { - sset_add(b_ctx_out->local_lports, iface_id); + if (add_lport_to_local_lports(b_ctx_out->local_lports, iface_id)) { + *local_lports_changed = true; + } + smap_replace(b_ctx_out->local_iface_ids, iface_rec->name, iface_id); struct local_binding *lbinding = @@ -1565,7 +1720,8 @@ static bool consider_iface_release(const struct ovsrec_interface *iface_rec, const char *iface_id, struct binding_ctx_in *b_ctx_in, - struct binding_ctx_out *b_ctx_out, bool *changed) + struct binding_ctx_out *b_ctx_out, bool *changed, + bool *local_lports_changed) { struct local_binding *lbinding; lbinding = local_binding_find(b_ctx_out->local_bindings, @@ -1584,7 +1740,8 @@ consider_iface_release(const struct ovsrec_interface *iface_rec, * lbinding->iface. * Cannot access these members of lbinding after this call. 
*/ if (!release_local_binding(b_ctx_in->chassis_rec, lbinding, - !b_ctx_in->ovnsb_idl_txn)) { + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)) { return false; } @@ -1597,7 +1754,9 @@ consider_iface_release(const struct ovsrec_interface *iface_rec, local_binding_delete(b_ctx_out->local_bindings, lbinding); } - sset_find_and_delete(b_ctx_out->local_lports, iface_id); + if (remove_lport_from_local_lports(b_ctx_out->local_lports, iface_id)) { + *local_lports_changed = true; + } smap_remove(b_ctx_out->local_iface_ids, iface_rec->name); return true; @@ -1629,6 +1788,8 @@ binding_handle_ovs_interface_changes(struct binding_ctx_in *b_ctx_in, bool handled = true; *changed = false; + *b_ctx_out->local_lports_changed = false; + /* Run the tracked interfaces loop twice. One to handle deleted * changes. And another to handle add/update changes. * This will ensure correctness. @@ -1684,7 +1845,8 @@ binding_handle_ovs_interface_changes(struct binding_ctx_in *b_ctx_in, if (cleared_iface_id) { handled = consider_iface_release(iface_rec, cleared_iface_id, - b_ctx_in, b_ctx_out, changed); + b_ctx_in, b_ctx_out, changed, + b_ctx_out->local_lports_changed); } if (!handled) { @@ -1726,7 +1888,8 @@ binding_handle_ovs_interface_changes(struct binding_ctx_in *b_ctx_in, if (iface_id && ofport > 0) { *changed = true; handled = consider_iface_claim(iface_rec, iface_id, b_ctx_in, - b_ctx_out, qos_map_ptr); + b_ctx_out, qos_map_ptr, + b_ctx_out->local_lports_changed); if (!handled) { break; } @@ -1791,8 +1954,7 @@ static bool handle_deleted_vif_lport(const struct sbrec_port_binding *pb, enum en_lport_type lport_type, struct binding_ctx_in *b_ctx_in, - struct binding_ctx_out *b_ctx_out, - bool *changed) + struct binding_ctx_out *b_ctx_out) { struct local_binding *lbinding = get_lbinding_for_lport(pb, lport_type, b_ctx_out); @@ -1803,13 +1965,12 @@ handle_deleted_vif_lport(const struct sbrec_port_binding *pb, * clear the 'chassis' column of 'pb'. 
But we need to do * for the local_binding's children. */ if (lbinding->type == BT_VIF && - !release_local_binding_children(b_ctx_in->chassis_rec, - lbinding, - !b_ctx_in->ovnsb_idl_txn)) { + !release_local_binding_children( + b_ctx_in->chassis_rec, lbinding, + !b_ctx_in->ovnsb_idl_txn, + b_ctx_out->tracked_dp_bindings)) { return false; } - - *changed = true; } handle_deleted_lport(pb, b_ctx_in, b_ctx_out); @@ -1821,8 +1982,7 @@ handle_updated_vif_lport(const struct sbrec_port_binding *pb, enum en_lport_type lport_type, struct binding_ctx_in *b_ctx_in, struct binding_ctx_out *b_ctx_out, - struct hmap *qos_map, - bool *changed) + struct hmap *qos_map) { bool claimed = (pb->chassis == b_ctx_in->chassis_rec); bool handled = true; @@ -1840,14 +2000,10 @@ handle_updated_vif_lport(const struct sbrec_port_binding *pb, } bool now_claimed = (pb->chassis == b_ctx_in->chassis_rec); - bool changed_ = (claimed != now_claimed); - - if (changed_) { - *changed = true; - } + bool changed = (claimed != now_claimed); if (lport_type == LP_VIRTUAL || - (lport_type == LP_VIF && is_lport_container(pb)) || !changed_) { + (lport_type == LP_VIF && is_lport_container(pb)) || !changed) { return true; } @@ -1897,7 +2053,7 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, enum en_lport_type lport_type = get_lport_type(pb); if (lport_type == LP_VIF || lport_type == LP_VIRTUAL) { handled = handle_deleted_vif_lport(pb, lport_type, b_ctx_in, - b_ctx_out, changed); + b_ctx_out); } else { handle_deleted_lport(pb, b_ctx_in, b_ctx_out); } @@ -1932,15 +2088,14 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, case LP_VIF: case LP_VIRTUAL: handled = handle_updated_vif_lport(pb, lport_type, b_ctx_in, - b_ctx_out, qos_map_ptr, - changed); + b_ctx_out, qos_map_ptr); break; case LP_PATCH: case LP_LOCALPORT: case LP_VTEP: - *changed = true; - update_local_lport_ids(b_ctx_out->local_lport_ids, pb); + update_local_lport_ids(b_ctx_out->local_lport_ids, pb, + 
b_ctx_out->tracked_dp_bindings); if (lport_type == LP_PATCH) { /* Add the peer datapath to the local datapaths if it's * not present yet. @@ -1949,30 +2104,32 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, add_local_datapath_peer_port(pb, b_ctx_in, b_ctx_out, ld); } } + + if (lport_type == LP_VTEP) { + /* VTEP lports are claimed/released by ovn-controller-vteps. + * We are not sure what changed. So set *changed to true + * for any changes to vtep lports. */ + *changed = true; + } break; case LP_L2GATEWAY: - *changed = true; handled = consider_l2gw_lport(pb, b_ctx_in, b_ctx_out); break; case LP_L3GATEWAY: - *changed = true; handled = consider_l3gw_lport(pb, b_ctx_in, b_ctx_out); break; case LP_CHASSISREDIRECT: - *changed = true; handled = consider_cr_lport(pb, b_ctx_in, b_ctx_out); break; case LP_EXTERNAL: - *changed = true; handled = consider_external_lport(pb, b_ctx_in, b_ctx_out); break; case LP_LOCALNET: { - *changed = true; consider_localnet_lport(pb, b_ctx_in, b_ctx_out, qos_map_ptr); struct shash bridge_mappings = @@ -2007,6 +2164,10 @@ binding_handle_port_binding_changes(struct binding_ctx_in *b_ctx_in, } } + if (handled && !hmap_is_empty(b_ctx_out->tracked_dp_bindings)) { + *changed = true; + } + destroy_qos_map(&qos_map); return handled; } diff --git a/controller/binding.h b/controller/binding.h index e3d1f07de..44a476343 100644 --- a/controller/binding.h +++ b/controller/binding.h @@ -19,6 +19,9 @@ #include #include "openvswitch/shash.h" +#include "openvswitch/hmap.h" +#include "openvswitch/uuid.h" +#include "openvswitch/list.h" struct hmap; struct ovsdb_idl; @@ -54,14 +57,26 @@ struct binding_ctx_in { struct binding_ctx_out { struct hmap *local_datapaths; struct shash *local_bindings; + struct sset *local_lports; + + /* If the sset local_lports is modified, this is set to true by + * the callee. */ + bool *local_lports_changed; + /* sset of local lport ids in the format * _. 
*/ - struct sset *local_lports; struct sset *local_lport_ids; + struct sset *egress_ifaces; /* smap of OVS interface name as key and * OVS interface external_ids:iface-id as value. */ struct smap *local_iface_ids; + + /* hmap of 'struct tracked_binding_datapath' which the + * callee (binding_handle_ovs_interface_changes and + * binding_handle_port_binding_changes) fills in for + * the changed datapaths and port bindings. */ + struct hmap *tracked_dp_bindings; }; enum local_binding_type { @@ -86,6 +101,19 @@ local_binding_find(struct shash *local_bindings, const char *name) return shash_find_data(local_bindings, name); } +/* Represents a tracked binding logical port. */ +struct tracked_binding_lport { + const struct sbrec_port_binding *pb; +}; + +/* Represent a tracked binding datapath. */ +struct tracked_binding_datapath { + struct hmap_node node; + const struct sbrec_datapath_binding *dp; + bool is_new; + struct shash lports; /* shash of struct tracked_binding_lport. */ +}; + void binding_register_ovs_idl(struct ovsdb_idl *); void binding_run(struct binding_ctx_in *, struct binding_ctx_out *); bool binding_cleanup(struct ovsdb_idl_txn *ovnsb_idl_txn, @@ -100,4 +128,5 @@ bool binding_handle_ovs_interface_changes(struct binding_ctx_in *, bool binding_handle_port_binding_changes(struct binding_ctx_in *, struct binding_ctx_out *, bool *changed); +void binding_tracked_dp_destroy(struct hmap *tracked_datapaths); #endif /* controller/binding.h */ diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index a6bee1f76..9987ee601 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -973,10 +973,88 @@ struct ed_type_runtime_data { struct sset local_lport_ids; struct sset active_tunnels; + /* runtime data engine private data. */ struct sset egress_ifaces; struct smap local_iface_ids; + + /* Tracked data. See below for more details and comments. 
*/ + bool tracked; + bool local_lports_changed; + struct hmap tracked_dp_bindings; }; +/* struct ed_type_runtime_data has the below members for tracking the + * changes done to the runtime_data engine by the runtime_data engine + * handlers. Since this engine is an input to the flow_output engine, + * the flow output runtime data handler will make use of this tracked data. + * + * ------------------------------------------------------------------------ + * | | This is a hmap of | + * | | 'struct tracked_binding_datapath' defined in | + * | | binding.h. Runtime data handlers for OVS | + * | | Interface and Port Binding changes store the | + * | @tracked_dp_bindings | changed datapaths (datapaths added/removed from | + * | | local_datapaths) and changed port bindings | + * | | (added/updated/deleted in 'local_bindings'). | + * | | So any changes to the runtime data - | + * | | local_datapaths and local_bindings are captured | + * | | here. | + * ------------------------------------------------------------------------ + * | | This is a bool which represents if the runtime | + * | | data 'local_lports' changed by the runtime data | + * | | handlers for OVS Interface and Port Binding | + * | | changes. If 'local_lports' is updated and also | + * | | results in any port binding updates, it is | + * |@local_lports_changed | captured in the @tracked_dp_bindings. So there | + * | | is no need to capture the changes in the | + * | | local_lports. If @local_lports_changed is true | + * | | but without any data in @tracked_dp_bindings, | + * | | it means we need to only update the SB monitor | + * | | clauses and there is no need for any flow | + * | | (re)computations. | + * ------------------------------------------------------------------------ + * | | This represents if the data was tracked or not | + * | | by the runtime data handlers during the engine | + * | @tracked | run. If the runtime data recompute is | + * | | triggered, it means there is no tracked data.
| + * ------------------------------------------------------------------------ + * + * + * The changes to the following runtime_data variables are not tracked. + * + * --------------------------------------------------------------------- + * | local_datapaths | The changes to these runtime data is captured in | + * | local_bindings | the @tracked_dp_bindings indirectly and hence it | + * | local_lport_ids | is not tracked explicitly. | + * --------------------------------------------------------------------- + * | local_iface_ids | This is used internally within the runtime data | + * | egress_ifaces | engine (used only in binding.c) and hence there | + * | | there is no need to track. | + * --------------------------------------------------------------------- + * | | Active tunnels is built in the | + * | | bfd_calculate_active_tunnels() for the tunnel | + * | | OVS interfaces. Any changes to non VIF OVS | + * | | interfaces results in triggering the full | + * | active_tunnels | recompute of runtime data engine and hence there | + * | | the tracked data doesn't track it. When we | + * | | support handling changes to non VIF OVS | + * | | interfaces we need to track the changes to the | + * | | active tunnels. | + * --------------------------------------------------------------------- + * + */ + +static void +en_runtime_data_clear_tracked_data(void *data_) +{ + struct ed_type_runtime_data *data = data_; + + binding_tracked_dp_destroy(&data->tracked_dp_bindings); + hmap_init(&data->tracked_dp_bindings); + data->local_lports_changed = false; + data->tracked = false; +} + static void * en_runtime_data_init(struct engine_node *node OVS_UNUSED, struct engine_arg *arg OVS_UNUSED) @@ -990,6 +1068,10 @@ en_runtime_data_init(struct engine_node *node OVS_UNUSED, sset_init(&data->egress_ifaces); smap_init(&data->local_iface_ids); local_bindings_init(&data->local_bindings); + + /* init the tracked data. 
*/ + hmap_init(&data->tracked_dp_bindings); + return data; } @@ -1012,6 +1094,8 @@ en_runtime_data_cleanup(void *data) } hmap_destroy(&rt_data->local_datapaths); local_bindings_destroy(&rt_data->local_bindings); + + en_runtime_data_clear_tracked_data(data); } static void @@ -1092,6 +1176,8 @@ init_binding_ctx(struct engine_node *node, b_ctx_out->egress_ifaces = &rt_data->egress_ifaces; b_ctx_out->local_bindings = &rt_data->local_bindings; b_ctx_out->local_iface_ids = &rt_data->local_iface_ids; + b_ctx_out->tracked_dp_bindings = NULL; + b_ctx_out->local_lports_changed = NULL; } static void @@ -1103,6 +1189,23 @@ en_runtime_data_run(struct engine_node *node, void *data) struct sset *local_lport_ids = &rt_data->local_lport_ids; struct sset *active_tunnels = &rt_data->active_tunnels; + /* Clear the (stale) tracked data if any. Even though the tracked data + * gets cleared in the beginning of engine_init_run(), + * any of the runtime data handler might have set some tracked + * data and later another runtime data handler might return false + * resulting in full recompute of runtime engine and rendering the tracked + * data stale. + * + * It's possible that engine framework can be enhanced to indicate + * the node handlers (in this case flow_output_runtime_data_handler) + * that its input node had a full recompute. However we would still + * need to clear the tracked data, because we don't want the + * stale tracked data to be accessed outside of the engine, since the + * tracked data is cleared in the engine_init_run() and not at the + * end of the engine run. 
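The clear-before-run rule described above (stale tracked data left behind by a partially successful handler must be dropped before a full recompute) can be sketched as a small standalone C program. All names below are hypothetical stand-ins; the real code clears an OVS hmap via binding_tracked_dp_destroy(), not a fixed array.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the engine's tracked runtime data: a fixed
 * array instead of the OVS hmap used by the real code. */
#define MAX_TRACKED 8

struct tracked_data {
    bool tracked;             /* Did a change handler record tracked data? */
    int n_entries;            /* Number of tracked datapath entries. */
    int entries[MAX_TRACKED]; /* Hypothetical datapath keys. */
};

/* Mirrors the role of en_runtime_data_clear_tracked_data(): drop any
 * (possibly stale) tracked state. */
void clear_tracked_data(struct tracked_data *td)
{
    td->n_entries = 0;
    td->tracked = false;
}

/* An incremental handler records what changed and marks data as tracked. */
void handler_track(struct tracked_data *td, int dp_key)
{
    if (td->n_entries < MAX_TRACKED) {
        td->entries[td->n_entries++] = dp_key;
    }
    td->tracked = true;
}

/* A full recompute first clears tracked state, so a consumer never sees
 * entries recorded by a handler that later returned false. */
void full_recompute(struct tracked_data *td)
{
    clear_tracked_data(td);
    /* ... rebuild all runtime data from scratch here ... */
}
```

The point of the sketch is ordering: clearing happens at the start of the recompute path, not at the end of the engine run, exactly as the comment above argues.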
+ * */ + en_runtime_data_clear_tracked_data(data); + static bool first_run = true; if (first_run) { /* don't cleanup since there is no data yet */ @@ -1158,6 +1261,9 @@ runtime_data_ovs_interface_handler(struct engine_node *node, void *data) struct binding_ctx_in b_ctx_in; struct binding_ctx_out b_ctx_out; init_binding_ctx(node, rt_data, &b_ctx_in, &b_ctx_out); + rt_data->tracked = true; + b_ctx_out.tracked_dp_bindings = &rt_data->tracked_dp_bindings; + b_ctx_out.local_lports_changed = &rt_data->local_lports_changed; bool changed = false; if (!binding_handle_ovs_interface_changes(&b_ctx_in, &b_ctx_out, @@ -1183,6 +1289,9 @@ runtime_data_sb_port_binding_handler(struct engine_node *node, void *data) return false; } + rt_data->tracked = true; + b_ctx_out.tracked_dp_bindings = &rt_data->tracked_dp_bindings; + bool changed = false; if (!binding_handle_port_binding_changes(&b_ctx_in, &b_ctx_out, &changed)) { @@ -1790,6 +1899,32 @@ flow_output_port_groups_handler(struct engine_node *node, void *data) return _flow_output_resource_ref_handler(node, data, REF_TYPE_PORTGROUP); } +static bool +flow_output_runtime_data_handler(struct engine_node *node, + void *data OVS_UNUSED) +{ + struct ed_type_runtime_data *rt_data = + engine_get_input_data("runtime_data", node); + + /* There is no tracked data. Fall back to full recompute of + * flow_output. */ + if (!rt_data->tracked) { + return false; + } + + if (!hmap_is_empty(&rt_data->tracked_dp_bindings)) { + /* We are not yet handling the tracked datapath binding + * changes. Return false to trigger full recompute. 
*/ + return false; + } + + if (rt_data->local_lports_changed) { + engine_set_node_state(node, EN_UPDATED); + } + + return true; +} + /* Engine node en_physical_flow_changes indicates whether * there is a need to * - recompute only physical flows or @@ -1902,13 +2037,6 @@ flow_output_physical_flow_changes_handler(struct engine_node *node, void *data) return true; } -static bool -flow_output_noop_handler(struct engine_node *node OVS_UNUSED, - void *data OVS_UNUSED) -{ - return true; -} - struct ovn_controller_exit_args { bool *exiting; bool *restart; @@ -2030,7 +2158,7 @@ main(int argc, char *argv[]) /* Define inc-proc-engine nodes. */ ENGINE_NODE_CUSTOM_DATA(ct_zones, "ct_zones"); - ENGINE_NODE(runtime_data, "runtime_data"); + ENGINE_NODE_WITH_CLEAR_TRACK_DATA(runtime_data, "runtime_data"); ENGINE_NODE(mff_ovn_geneve, "mff_ovn_geneve"); ENGINE_NODE(ofctrl_is_connected, "ofctrl_is_connected"); ENGINE_NODE_WITH_CLEAR_TRACK_DATA(physical_flow_changes, "physical_flow_changes"); @@ -2067,15 +2195,16 @@ main(int argc, char *argv[]) flow_output_addr_sets_handler); engine_add_input(&en_flow_output, &en_port_groups, flow_output_port_groups_handler); - engine_add_input(&en_flow_output, &en_runtime_data, NULL); + engine_add_input(&en_flow_output, &en_runtime_data, + flow_output_runtime_data_handler); engine_add_input(&en_flow_output, &en_mff_ovn_geneve, NULL); engine_add_input(&en_flow_output, &en_physical_flow_changes, flow_output_physical_flow_changes_handler); /* We need these input nodes only for their data. Hence the noop handler.
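The handler contract that flow_output_runtime_data_handler follows above (return false to request a full recompute of the node, return true when the change was absorbed incrementally) can be sketched standalone. Everything below is a hypothetical stand-in for the inc-proc engine API, not the real engine code.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical handler signature: true means the input change was handled
 * incrementally; false means the engine must fall back to a full
 * recompute of the node. */
typedef bool (*change_handler)(bool have_tracked_data,
                               bool dp_bindings_changed);

/* Sketch of the decision made by flow_output_runtime_data_handler. */
bool runtime_data_handler(bool have_tracked_data, bool dp_bindings_changed)
{
    if (!have_tracked_data) {
        return false;   /* No tracked data: force a full recompute. */
    }
    if (dp_bindings_changed) {
        return false;   /* Tracked dp bindings not handled incrementally yet. */
    }
    return true;        /* Only local_lports changed: cheap path. */
}

/* A no-op handler for inputs whose data is consumed but whose changes
 * never require recomputation (the engine_noop_handler role). */
bool noop_handler(bool have_tracked_data, bool dp_bindings_changed)
{
    (void) have_tracked_data;
    (void) dp_bindings_changed;
    return true;
}

/* Minimal engine step: run a handler; on false, count a full recompute. */
int run_node(change_handler h, bool tracked, bool dp_changed,
             int *recomputes)
{
    if (!h(tracked, dp_changed)) {
        ++*recomputes;
    }
    return *recomputes;
}
```

This mirrors why the patch replaces flow_output_noop_handler with the shared engine_noop_handler: a handler that always returns true simply declares "this input never forces a recompute".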
*/ - engine_add_input(&en_flow_output, &en_ct_zones, flow_output_noop_handler); + engine_add_input(&en_flow_output, &en_ct_zones, engine_noop_handler); engine_add_input(&en_flow_output, &en_ovs_interface, - flow_output_noop_handler); + engine_noop_handler); engine_add_input(&en_flow_output, &en_ovs_open_vswitch, NULL); engine_add_input(&en_flow_output, &en_ovs_bridge, NULL); diff --git a/tests/ovn-performance.at b/tests/ovn-performance.at index 5cc1960b6..08acd3bae 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -286,11 +286,11 @@ for i in 1 2; do [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-type $lsp router] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-options $lsp router-port=$lrp] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-addresses $lsp router] ) @@ -416,15 +416,15 @@ OVN_CONTROLLER_EXPECT_NO_HIT( [ovn-nbctl --wait=hv destroy Port_Group pg1] ) -for i in 1 2; do - ls=ls$i +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-del ls1] +) - # Delete switch $ls - OVN_CONTROLLER_EXPECT_HIT( - [hv1 hv2], [lflow_run], - [ovn-nbctl --wait=hv ls-del $ls] - ) -done +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-del ls2] +) # Delete router lr1 OVN_CONTROLLER_EXPECT_NO_HIT( From patchwork Wed Jun 10 06:18:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1306578 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.133; helo=hemlock.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) 
header.from=ovn.org From: numans@ovn.org To: dev@openvswitch.org Date: Wed, 10 Jun 2020 11:48:35 +0530 Message-Id: <20200610061835.1981621-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200610061726.1937231-1-numans@ovn.org> References: <20200610061726.1937231-1-numans@ovn.org> Cc: Venkata Anil Subject: [ovs-dev] [PATCH ovn v11 4/6] ovn-controller: Use the tracked runtime data changes for flow calculation. From: Venkata Anil This patch processes the logical flows of tracked datapaths and tracked logical ports. To handle the tracked logical port changes, a reference from logical flows to port bindings is maintained. Co-Authored-by: Numan Siddique Signed-off-by: Venkata Anil Signed-off-by: Numan Siddique --- controller/lflow.c | 86 +++++++++++++++++++++++++++++ controller/lflow.h | 12 +++- controller/ovn-controller.c | 107 ++++++++++++++++++------------------ tests/ovn-performance.at | 12 ++-- 4 files changed, 156 insertions(+), 61 deletions(-) diff --git a/controller/lflow.c b/controller/lflow.c index 01214a3a6..eb6be0100 100644 --- a/controller/lflow.c +++ b/controller/lflow.c @@ -59,6 +59,10 @@ struct condition_aux { struct ovsdb_idl_index *sbrec_port_binding_by_name; const struct sbrec_chassis *chassis; const struct sset *active_tunnels; + const struct sbrec_logical_flow *lflow; + /* Resource reference to store the port name referenced + * in is_chassis_resident() to the logical flow.
*/ + struct lflow_resource_ref *lfrr; }; static bool @@ -68,6 +72,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, struct controller_event_options *controller_event_opts, struct lflow_ctx_in *l_ctx_in, struct lflow_ctx_out *l_ctx_out); +static void lflow_resource_add(struct lflow_resource_ref *, enum ref_type, + const char *ref_name, const struct uuid *); static bool lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp) @@ -120,6 +126,14 @@ is_chassis_resident_cb(const void *c_aux_, const char *port_name) if (!pb) { return false; } + + /* Store the port_name to lflow reference. */ + int64_t dp_id = pb->datapath->tunnel_key; + char buf[16]; + snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, pb->tunnel_key); + lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf, + &c_aux->lflow->header_.uuid); + if (strcmp(pb->type, "chassisredirect")) { /* for non-chassisredirect ports */ return pb->chassis && pb->chassis == c_aux->chassis; @@ -594,6 +608,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name, .chassis = l_ctx_in->chassis, .active_tunnels = l_ctx_in->active_tunnels, + .lflow = lflow, + .lfrr = l_ctx_out->lfrr }; expr = expr_simplify(expr, is_chassis_resident_cb, &cond_aux); expr = expr_normalize(expr); @@ -649,6 +665,8 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, int64_t dp_id = lflow->logical_datapath->tunnel_key; char buf[16]; snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, port_id); + lflow_resource_add(l_ctx_out->lfrr, REF_TYPE_PORTBINDING, buf, + &lflow->header_.uuid); if (!sset_contains(l_ctx_in->local_lport_ids, buf)) { VLOG_DBG("lflow "UUID_FMT " port %s in match is not local, skip", @@ -847,3 +865,71 @@ lflow_destroy(void) expr_symtab_destroy(&symtab); shash_destroy(&symtab); } + +bool +lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *dp, + struct lflow_ctx_in *l_ctx_in, + struct 
lflow_ctx_out *l_ctx_out) +{ + bool handled = true; + struct hmap dhcp_opts = HMAP_INITIALIZER(&dhcp_opts); + struct hmap dhcpv6_opts = HMAP_INITIALIZER(&dhcpv6_opts); + const struct sbrec_dhcp_options *dhcp_opt_row; + SBREC_DHCP_OPTIONS_TABLE_FOR_EACH (dhcp_opt_row, + l_ctx_in->dhcp_options_table) { + dhcp_opt_add(&dhcp_opts, dhcp_opt_row->name, dhcp_opt_row->code, + dhcp_opt_row->type); + } + + + const struct sbrec_dhcpv6_options *dhcpv6_opt_row; + SBREC_DHCPV6_OPTIONS_TABLE_FOR_EACH (dhcpv6_opt_row, + l_ctx_in->dhcpv6_options_table) { + dhcp_opt_add(&dhcpv6_opts, dhcpv6_opt_row->name, dhcpv6_opt_row->code, + dhcpv6_opt_row->type); + } + + struct hmap nd_ra_opts = HMAP_INITIALIZER(&nd_ra_opts); + nd_ra_opts_init(&nd_ra_opts); + + struct controller_event_options controller_event_opts; + controller_event_opts_init(&controller_event_opts); + + struct sbrec_logical_flow *lf_row = sbrec_logical_flow_index_init_row( + l_ctx_in->sbrec_logical_flow_by_logical_datapath); + sbrec_logical_flow_index_set_logical_datapath(lf_row, dp); + + const struct sbrec_logical_flow *lflow; + SBREC_LOGICAL_FLOW_FOR_EACH_EQUAL ( + lflow, lf_row, l_ctx_in->sbrec_logical_flow_by_logical_datapath) { + /* Remove the lflow from flow_table if present before processing it. 
*/ + ofctrl_remove_flows(l_ctx_out->flow_table, &lflow->header_.uuid); + + if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts, + &nd_ra_opts, &controller_event_opts, + l_ctx_in, l_ctx_out)) { + handled = false; + break; + } + } + + dhcp_opts_destroy(&dhcp_opts); + dhcp_opts_destroy(&dhcpv6_opts); + nd_ra_opts_destroy(&nd_ra_opts); + controller_event_opts_destroy(&controller_event_opts); + return handled; +} + +bool +lflow_handle_flows_for_lport(const struct sbrec_port_binding *pb, + struct lflow_ctx_in *l_ctx_in, + struct lflow_ctx_out *l_ctx_out) +{ + int64_t dp_id = pb->datapath->tunnel_key; + char pb_ref_name[16]; + snprintf(pb_ref_name, sizeof(pb_ref_name), "%"PRId64"_%"PRId64, + dp_id, pb->tunnel_key); + bool changed = true; + return lflow_handle_changed_ref(REF_TYPE_PORTBINDING, pb_ref_name, + l_ctx_in, l_ctx_out, &changed); +} diff --git a/controller/lflow.h b/controller/lflow.h index f02f709d7..ae02eaf5e 100644 --- a/controller/lflow.h +++ b/controller/lflow.h @@ -48,6 +48,8 @@ struct sbrec_dhcp_options_table; struct sbrec_dhcpv6_options_table; struct sbrec_logical_flow_table; struct sbrec_mac_binding_table; +struct sbrec_datapath_binding; +struct sbrec_port_binding; struct simap; struct sset; struct uuid; @@ -72,7 +74,8 @@ struct uuid; enum ref_type { REF_TYPE_ADDRSET, - REF_TYPE_PORTGROUP + REF_TYPE_PORTGROUP, + REF_TYPE_PORTBINDING }; /* Maintains the relationship for a pair of named resource and @@ -117,6 +120,7 @@ void lflow_resource_clear(struct lflow_resource_ref *); struct lflow_ctx_in { struct ovsdb_idl_index *sbrec_multicast_group_by_name_datapath; + struct ovsdb_idl_index *sbrec_logical_flow_by_logical_datapath; struct ovsdb_idl_index *sbrec_port_binding_by_name; const struct sbrec_dhcp_options_table *dhcp_options_table; const struct sbrec_dhcpv6_options_table *dhcpv6_options_table; @@ -157,4 +161,10 @@ void lflow_destroy(void); void lflow_expr_destroy(struct hmap *lflow_expr_cache); +bool lflow_add_flows_for_datapath(const struct 
sbrec_datapath_binding *, + struct lflow_ctx_in *, + struct lflow_ctx_out *); +bool lflow_handle_flows_for_lport(const struct sbrec_port_binding *, + struct lflow_ctx_in *, + struct lflow_ctx_out *); #endif /* controller/lflow.h */ diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c index 9987ee601..7f0c80877 100644 --- a/controller/ovn-controller.c +++ b/controller/ovn-controller.c @@ -1505,6 +1505,11 @@ static void init_lflow_ctx(struct engine_node *node, engine_get_input("SB_port_binding", node), "name"); + struct ovsdb_idl_index *sbrec_logical_flow_by_dp = + engine_ovsdb_node_get_index( + engine_get_input("SB_logical_flow", node), + "logical_datapath"); + struct ovsdb_idl_index *sbrec_mc_group_by_name_dp = engine_ovsdb_node_get_index( engine_get_input("SB_multicast_group", node), @@ -1552,6 +1557,8 @@ static void init_lflow_ctx(struct engine_node *node, l_ctx_in->sbrec_multicast_group_by_name_datapath = sbrec_mc_group_by_name_dp; + l_ctx_in->sbrec_logical_flow_by_logical_datapath = + sbrec_logical_flow_by_dp; l_ctx_in->sbrec_port_binding_by_name = sbrec_port_binding_by_name; l_ctx_in->dhcp_options_table = dhcp_table; l_ctx_in->dhcpv6_options_table = dhcpv6_table; @@ -1706,7 +1713,8 @@ flow_output_sb_mac_binding_handler(struct engine_node *node, void *data) } static bool -flow_output_sb_port_binding_handler(struct engine_node *node, void *data) +flow_output_sb_port_binding_handler(struct engine_node *node, + void *data) { struct ed_type_runtime_data *rt_data = engine_get_input_data("runtime_data", node); @@ -1714,56 +1722,13 @@ flow_output_sb_port_binding_handler(struct engine_node *node, void *data) struct ed_type_flow_output *fo = data; struct ovn_desired_flow_table *flow_table = &fo->flow_table; - /* XXX: now we handle port-binding changes for physical flow processing - * only, but port-binding change can have impact to logical flow - * processing, too, in below circumstances: - * - * - When a port-binding for a lport is inserted/deleted 
but the lflow - * using that lport doesn't change. - * - * This can happen only when the lport name is used by ACL match - * condition, which is specified by user. Even in that case, if the port - * is actually bound on the current chassis it will trigger recompute on - * that chassis since ovs interface would be updated. So the only - * situation this would have real impact is when user defines an ACL - * that includes lport that is not on current chassis, and there is a - * port-binding creation/deletion related to that lport.e.g.: an ACL is - * defined: - * - * to-lport 1000 'outport=="A" && inport=="B"' allow-related - * - * If "A" is on current chassis, but "B" is lport that hasn't been - * created yet. When a lport "B" is created and bound on another - * chassis, the ACL will not take effect on the current chassis until a - * recompute is triggered later. This case doesn't seem to be a problem - * for real world use cases because usually lport is created before - * being referenced by name in ACLs. - * - * - When is_chassis_resident() is used in lflow. In this case the - * port binding is not a regular VIF. It can be either "patch" or - * "external", with ha-chassis-group assigned. In current - * "runtime_data" handling, port-binding changes for these types always - * trigger recomputing. So it is fine even if we do not handle it here. - * (due to the ovsdb tracking support for referenced table changes, - * ha-chassis-group changes will appear as port-binding change). - * - * - When a mac-binding doesn't change but the port-binding related to - * that mac-binding is deleted. In this case the neighbor flow generated - * for the mac-binding should be deleted. This would not cause any real - * issue for now, since the port-binding related to mac-binding is - * always logical router port, and any change to logical router port - * would just trigger recompute. 
- * - * Although there is no correctness issue so far (except the unusual ACL - * use case, which doesn't seem to be a real problem), it might be better - * to handle this more gracefully, without the need to consider these - * tricky scenarios. One approach is to maintain a mapping between lport - * names and the lflows that uses them, and reprocess the related lflows - * when related port-bindings change. - */ struct physical_ctx p_ctx; init_physical_ctx(node, rt_data, &p_ctx); + /* We handle port-binding changes for physical flow processing + * only. flow_output runtime data handler takes care of processing + * logical flows for any port binding changes. + */ physical_handle_port_binding_changes(&p_ctx, flow_table); engine_set_node_state(node, EN_UPDATED); @@ -1851,6 +1816,9 @@ _flow_output_resource_ref_handler(struct engine_node *node, void *data, updated = &pg_data->updated; deleted = &pg_data->deleted; break; + + /* This ref type is handled in the flow_output_runtime_data_handler. */ + case REF_TYPE_PORTBINDING: default: OVS_NOT_REACHED(); } @@ -1912,16 +1880,42 @@ flow_output_runtime_data_handler(struct engine_node *node, return false; } - if (!hmap_is_empty(&rt_data->tracked_dp_bindings)) { - /* We are not yet handling the tracked datapath binding - * changes. Return false to trigger full recompute. 
*/ - return false; + struct hmap *tracked_dp_bindings = &rt_data->tracked_dp_bindings; + if (hmap_is_empty(tracked_dp_bindings)) { + if (rt_data->local_lports_changed) { + engine_set_node_state(node, EN_UPDATED); + } + return true; } - if (rt_data->local_lports_changed) { - engine_set_node_state(node, EN_UPDATED); + struct lflow_ctx_in l_ctx_in; + struct lflow_ctx_out l_ctx_out; + struct ed_type_flow_output *fo = data; + init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out); + + struct physical_ctx p_ctx; + init_physical_ctx(node, rt_data, &p_ctx); + + struct tracked_binding_datapath *tdp; + HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) { + if (tdp->is_new) { + if (!lflow_add_flows_for_datapath(tdp->dp, &l_ctx_in, + &l_ctx_out)) { + return false; + } + } else { + struct shash_node *shash_node; + SHASH_FOR_EACH (shash_node, &tdp->lports) { + struct tracked_binding_lport *lport = shash_node->data; + if (!lflow_handle_flows_for_lport(lport->pb, &l_ctx_in, + &l_ctx_out)) { + return false; + } + } + } } + engine_set_node_state(node, EN_UPDATED); return true; } @@ -2092,6 +2086,9 @@ main(int argc, char *argv[]) = chassis_index_create(ovnsb_idl_loop.idl); struct ovsdb_idl_index *sbrec_multicast_group_by_name_datapath = mcast_group_index_create(ovnsb_idl_loop.idl); + struct ovsdb_idl_index *sbrec_logical_flow_by_logical_datapath + = ovsdb_idl_index_create1(ovnsb_idl_loop.idl, + &sbrec_logical_flow_col_logical_datapath); struct ovsdb_idl_index *sbrec_port_binding_by_name = ovsdb_idl_index_create1(ovnsb_idl_loop.idl, &sbrec_port_binding_col_logical_port); @@ -2252,6 +2249,8 @@ main(int argc, char *argv[]) engine_ovsdb_node_add_index(&en_sb_chassis, "name", sbrec_chassis_by_name); engine_ovsdb_node_add_index(&en_sb_multicast_group, "name_datapath", sbrec_multicast_group_by_name_datapath); + engine_ovsdb_node_add_index(&en_sb_logical_flow, "logical_datapath", + sbrec_logical_flow_by_logical_datapath); engine_ovsdb_node_add_index(&en_sb_port_binding, "name", 
sbrec_port_binding_by_name); engine_ovsdb_node_add_index(&en_sb_port_binding, "key", diff --git a/tests/ovn-performance.at b/tests/ovn-performance.at index 08acd3bae..a12757e18 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -274,7 +274,7 @@ for i in 1 2; do ) # Add router port to $ls - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lrp-add lr1 $lrp 02:00:00:00:0$i:01 10.0.$i.1/24] ) @@ -282,7 +282,7 @@ for i in 1 2; do [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-add $ls $lsp] ) - OVN_CONTROLLER_EXPECT_HIT( + OVN_CONTROLLER_EXPECT_NO_HIT( [hv1 hv2], [lflow_run], [ovn-nbctl --wait=hv lsp-set-type $lsp router] ) @@ -353,8 +353,8 @@ for i in 1 2; do ) # Bind port $lp and wait for it to come up - OVN_CONTROLLER_EXPECT_HIT_COND( - [hv$i hv$j], [lflow_run], [>0 =0], + OVN_CONTROLLER_EXPECT_NO_HIT( + [hv$i hv$j], [lflow_run], [as hv$i ovs-vsctl add-port br-int $vif -- set Interface $vif external-ids:iface-id=$lp && ovn-nbctl wait-until Logical_Switch_Port $lp 'up=true' && ovn-nbctl --wait=hv sync] @@ -404,8 +404,8 @@ for i in 1 2; do lp=lp$i # Delete port $lp - OVN_CONTROLLER_EXPECT_HIT_COND( - [hv$i hv$j], [lflow_run], [>0 =0], + OVN_CONTROLLER_EXPECT_NO_HIT( + [hv$i hv$j], [lflow_run], [ovn-nbctl --wait=hv lsp-del $lp] ) done From patchwork Wed Jun 10 06:18:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1306580 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.136; helo=silver.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136]) (using 
From: numans@ovn.org To: dev@openvswitch.org Date: Wed, 10 Jun 2020 11:48:43 +0530 Message-Id: <20200610061843.1994116-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200610061726.1937231-1-numans@ovn.org> References: <20200610061726.1937231-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v11 5/6] tests: Enhance ovn-performance testing by adding gw router port. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique This covers the scenario of setting up and deleting BFD tunnels for HA. Tested-by: Dumitru Ceara Signed-off-by: Numan Siddique --- tests/ovn-performance.at | 104 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 104 insertions(+) diff --git a/tests/ovn-performance.at index a12757e18..2a15cb473 100644 --- a/tests/ovn-performance.at +++ b/tests/ovn-performance.at @@ -239,6 +239,16 @@ for i in 1 2; do ovn_attach n1 br-phys 192.168.0.$i done +for i in 1 2 3; do + sim_add gw$i + as gw$i + ovs-vsctl add-br br-phys + ovs-vsctl add-br br-ex + ovs-vsctl set open . external_ids:ovn-bridge-mappings="public:br-ex" + j=$((i + 2)) + ovn_attach n1 br-phys 192.168.0.$j +done + # Wait for the tunnel ports to be created and up. # Otherwise this may affect the lflow_run count. @@ -399,6 +409,100 @@ OVN_CONTROLLER_EXPECT_NO_HIT( [ovn-nbctl --wait=hv acl-add pg1 to-lport 1001 'outport == @pg1 && ip4.src == $pg1_ip4' allow] ) +# Create a public logical switch and attach the router to it. 
+OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv ls-add public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-add public public-lr1] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-type public-lr1 router] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-addresses public-lr1 router] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-options public-lr1 router-port=lr1-public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lrp-add lr1 lr1-public 00:00:20:20:12:13 172.168.0.100/24] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-add public ln-public] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-type ln-public localnet] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-addresses ln-public unknown] +) + +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2], [lflow_run], + [ovn-nbctl --wait=hv lsp-set-options ln-public network_name=public] +) + +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [=0 =0 >0 =0 =0], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw1 30] +) + +# After this, BFD should be enabled from hv1 and hv2 to gw1. +# So there should be lflow_run hits in hv1, hv2, gw1 and gw2 +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [>0 >0 >0 >0 =0], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw2 20] +) + +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw3 10] +) + +# Make gw2 master. 
+OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-set-gateway-chassis lr1-public gw2 40] +) + +# Delete gw2 from gateway chassis +OVN_CONTROLLER_EXPECT_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw2] +) + +# Delete gw1 from gateway chassis +# After this, the BFD should be disabled entirely as gw3 is the +# only gateway chassis. +OVN_CONTROLLER_EXPECT_HIT_COND( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], [>0 >0 >0 =0 >0], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw1] +) + +# Delete gw3 from gateway chassis. There should be no lflow_run. +OVN_CONTROLLER_EXPECT_NO_HIT( + [hv1 hv2 gw1 gw2 gw3], [lflow_run], + [ovn-nbctl --wait=hv lrp-del-gateway-chassis lr1-public gw3] +) + for i in 1 2; do j=$((i%2 + 1)) lp=lp$i From patchwork Wed Jun 10 06:18:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Numan Siddique X-Patchwork-Id: 1306579 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.137; helo=fraxinus.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=ovn.org Received: from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 49hcJc2zXMz9sRh for ; Wed, 10 Jun 2020 16:19:11 +1000 (AEST) Received: from localhost (localhost [127.0.0.1]) by fraxinus.osuosl.org (Postfix) with ESMTP id CA4E7870F4; Wed, 10 Jun 2020 06:19:09 +0000 (UTC) X-Virus-Scanned: amavisd-new at osuosl.org Received: from fraxinus.osuosl.org ([127.0.0.1]) by localhost (.osuosl.org [127.0.0.1]) (amavisd-new, 
From: numans@ovn.org To: dev@openvswitch.org Date: Wed, 10 Jun 2020 11:48:50 +0530 Message-Id: <20200610061850.2005397-1-numans@ovn.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200610061726.1937231-1-numans@ovn.org> References: <20200610061726.1937231-1-numans@ovn.org> MIME-Version: 1.0 Subject: [ovs-dev] [PATCH ovn v11 6/6] Add a util function get_unique_lport_key() for generating a unique lport key. 
X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" From: Numan Siddique Suggested-by: Dumitru Ceara Signed-off-by: Numan Siddique --- controller/binding.c | 8 ++++---- controller/lflow.c | 8 ++++---- lib/ovn-util.h | 8 ++++++++ 3 files changed, 16 insertions(+), 8 deletions(-) diff --git a/controller/binding.c b/controller/binding.c index 3da19a219..27f9dbdc3 100644 --- a/controller/binding.c +++ b/controller/binding.c @@ -497,8 +497,8 @@ update_local_lport_ids(struct sset *local_lport_ids, struct hmap *tracked_datapaths) { char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, - pb->datapath->tunnel_key, pb->tunnel_key); + get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, + buf, sizeof(buf)); bool added = !!sset_add(local_lport_ids, buf); if (added && tracked_datapaths) { /* Add the 'pb' to the tracked_datapaths. */ @@ -512,8 +512,8 @@ remove_local_lport_ids(const struct sbrec_port_binding *pb, struct hmap *tracked_datapaths) { char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, - pb->datapath->tunnel_key, pb->tunnel_key); + get_unique_lport_key(pb->datapath->tunnel_key, pb->tunnel_key, + buf, sizeof(buf)); bool deleted = sset_find_and_delete(local_lport_ids, buf); if (deleted && tracked_datapaths) { /* Add the 'pb' to the tracked_datapaths. */ diff --git a/controller/lflow.c b/controller/lflow.c index eb6be0100..546658c60 100644 --- a/controller/lflow.c +++ b/controller/lflow.c @@ -130,7 +130,7 @@ is_chassis_resident_cb(const void *c_aux_, const char *port_name) /* Store the port_name to lflow reference. 
*/ int64_t dp_id = pb->datapath->tunnel_key; char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, pb->tunnel_key); + get_unique_lport_key(dp_id, pb->tunnel_key, buf, sizeof(buf)); lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf, &c_aux->lflow->header_.uuid); @@ -664,7 +664,7 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow, if (port_id) { int64_t dp_id = lflow->logical_datapath->tunnel_key; char buf[16]; - snprintf(buf, sizeof(buf), "%"PRId64"_%"PRId64, dp_id, port_id); + get_unique_lport_key(dp_id, port_id, buf, sizeof(buf)); lflow_resource_add(l_ctx_out->lfrr, REF_TYPE_PORTBINDING, buf, &lflow->header_.uuid); if (!sset_contains(l_ctx_in->local_lport_ids, buf)) { @@ -927,8 +927,8 @@ lflow_handle_flows_for_lport(const struct sbrec_port_binding *pb, { int64_t dp_id = pb->datapath->tunnel_key; char pb_ref_name[16]; - snprintf(pb_ref_name, sizeof(pb_ref_name), "%"PRId64"_%"PRId64, - dp_id, pb->tunnel_key); + get_unique_lport_key(dp_id, pb->tunnel_key, pb_ref_name, + sizeof(pb_ref_name)); bool changed = true; return lflow_handle_changed_ref(REF_TYPE_PORTBINDING, pb_ref_name, l_ctx_in, l_ctx_out, &changed); diff --git a/lib/ovn-util.h b/lib/ovn-util.h index e13cf4d78..eba2948ff 100644 --- a/lib/ovn-util.h +++ b/lib/ovn-util.h @@ -114,6 +114,14 @@ bool ovn_tnlid_in_use(const struct hmap *set, uint32_t tnlid); uint32_t ovn_allocate_tnlid(struct hmap *set, const char *name, uint32_t min, uint32_t max, uint32_t *hint); +static inline void +get_unique_lport_key(uint64_t dp_tunnel_key, uint64_t lport_tunnel_key, + char *buf, size_t buf_size) +{ + snprintf(buf, buf_size, "%"PRId64"_%"PRId64, dp_tunnel_key, + lport_tunnel_key); +} + char *ovn_chassis_redirect_name(const char *port_name); void ovn_set_pidfile(const char *name);