From patchwork Sat Aug 20 18:32:38 2016
X-Patchwork-Submitter: Ryan Moats
X-Patchwork-Id: 661161
From: Ryan Moats
To: dev@openvswitch.org
Date: Sat, 20 Aug 2016 18:32:38 +0000
Subject: [ovs-dev] [PATCH] ovn-controller: add quiet mode
Message-Id: <1471717958-29421-1-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 2.7.4

As discussed in [1], what the incremental processing code actually
accomplished was that ovn-controller stayed "quiet" and did not burn CPU
when nothing was changing. This patch recreates that state by detecting
whether changes have occurred that would require a full recalculation.

This commit depends on f5d916cb ("ovn-controller: Back out incremental
processing").

[1] http://openvswitch.org/pipermail/dev/2016-August/078272.html

Signed-off-by: Ryan Moats
---
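Reader's note (not part of the commit, placed after the "---" so git-am
ignores it): a minimal, self-contained C sketch of the gating idea described
above -- recompute the flow table only when the physical map changed or the
southbound IDL sequence number advanced. The names last_seqno and
need_full_recompute are illustrative only; force_full_process() and the
seqno comparison mirror what the patch does in ovn-controller.c.

#include <stdbool.h>
#include <stdio.h>

/* Last southbound sequence number that was fully processed. */
static unsigned int last_seqno;

/* Forces the next pass to do a full recomputation: resetting to 0 makes the
 * next seqno comparison succeed, as in the patch. */
static void
force_full_process(void)
{
    last_seqno = 0;
}

/* Returns true when a full flow-table recomputation is needed, i.e. when the
 * physical map changed or new southbound updates arrived since the last
 * pass; otherwise the controller can stay "quiet". */
static bool
need_full_recompute(bool physical_change, unsigned int cur_seqno)
{
    if (physical_change || last_seqno < cur_seqno) {
        last_seqno = cur_seqno;
        return true;
    }
    return false;
}

int
main(void)
{
    printf("%d\n", need_full_recompute(false, 1)); /* 1: first pass recomputes. */
    printf("%d\n", need_full_recompute(false, 1)); /* 0: nothing changed, stay quiet. */
    force_full_process();
    printf("%d\n", need_full_recompute(false, 1)); /* 1: full processing was forced. */
    return 0;
}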
 ovn/controller/ofctrl.c         |  2 ++
 ovn/controller/ovn-controller.c | 77 +++++++++++++++++++++++++++++------------
 ovn/controller/ovn-controller.h |  1 +
 ovn/controller/patch.c          |  2 ++
 ovn/controller/physical.c       | 55 +++++++++++++++++++----------
 ovn/controller/physical.h       |  5 +++
 6 files changed, 101 insertions(+), 41 deletions(-)

diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c
index c20f44c..fbd8749 100644
--- a/ovn/controller/ofctrl.c
+++ b/ovn/controller/ofctrl.c
@@ -362,6 +362,8 @@ run_S_CLEAR_FLOWS(void)
     queue_msg(encode_group_mod(&gm));
     ofputil_uninit_group_mod(&gm);

+    force_full_process();
+
     /* Clear existing groups, to match the state of the switch. */
     if (groups) {
         ovn_group_table_clear(groups, true);
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 414e6b5..7acea2c 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -314,6 +314,27 @@ static struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
  * in binding.c, but consumed elsewhere. */
 static struct sset all_lports = SSET_INITIALIZER(&all_lports);

+/* Contains mapping of localvifs to OF ports for detecting if the physical
+ * configuration of the switch has changed. This is kept independently of
+ * the same variable name in the physical module as the two are used for
+ * different purposes. */
+static struct simap localvif_to_ofport =
+    SIMAP_INITIALIZER(&localvif_to_ofport);
+
+/* Contains the list of known tunnels. This is kept independently of
+ * the same variable name in the physical module as the two are used for
+ * different purposes. */
+static struct hmap tunnels = HMAP_INITIALIZER(&tunnels);
+
+/* The last sequence number seen from the southbound IDL. */
+static unsigned int seqno = 0;
+
+void
+force_full_process(void)
+{
+    seqno = 0;
+}
+
 int
 main(int argc, char *argv[])
 {
@@ -430,39 +451,51 @@ main(int argc, char *argv[])
                         &all_lports);
         }

+        enum mf_field_id mff_ovn_geneve;
+        bool physical_change = true;
         if (br_int && chassis_id) {
+            mff_ovn_geneve = ofctrl_run(br_int);
+            physical_change = detect_and_save_physical_changes(
+                &localvif_to_ofport, &tunnels, mff_ovn_geneve, br_int,
+                chassis_id);
+            unsigned int cur_seqno = ovsdb_idl_get_seqno(ovnsb_idl_loop.idl);
+
             patch_run(&ctx, br_int, chassis_id, &local_datapaths,
                       &patched_datapaths);

             static struct lport_index lports;
-            static struct mcgroup_index mcgroups;
             lport_index_init(&lports, ctx.ovnsb_idl);
-            mcgroup_index_init(&mcgroups, ctx.ovnsb_idl);
-
-            enum mf_field_id mff_ovn_geneve = ofctrl_run(br_int);
-
-            pinctrl_run(&ctx, &lports, br_int, chassis_id, &local_datapaths);
+            pinctrl_run(&ctx, &lports, br_int, chassis_id,
+                        &local_datapaths);
             update_ct_zones(&all_lports, &patched_datapaths, &ct_zones,
                             ct_zone_bitmap);

-            struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
-            lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
-                      &patched_datapaths, &group_table, &ct_zones,
-                      &flow_table);
-
-            physical_run(&ctx, mff_ovn_geneve,
-                         br_int, chassis_id, &ct_zones, &flow_table,
-                         &local_datapaths, &patched_datapaths);
-
-            ofctrl_put(&flow_table, get_nb_cfg(ctx.ovnsb_idl));
-            hmap_destroy(&flow_table);
-            if (ctx.ovnsb_idl_txn) {
-                int64_t cur_cfg = ofctrl_get_cur_cfg();
-                if (cur_cfg && cur_cfg != chassis->nb_cfg) {
-                    sbrec_chassis_set_nb_cfg(chassis, cur_cfg);
+
+            if (physical_change || seqno < cur_seqno) {
+                seqno = cur_seqno;
+
+                static struct mcgroup_index mcgroups;
+                mcgroup_index_init(&mcgroups, ctx.ovnsb_idl);
+
+                struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
+                lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
+                          &patched_datapaths, &group_table, &ct_zones,
+                          &flow_table);
+
+                physical_run(&ctx, mff_ovn_geneve,
+                             br_int, chassis_id, &ct_zones, &flow_table,
+                             &local_datapaths, &patched_datapaths);
+
+                ofctrl_put(&flow_table, get_nb_cfg(ctx.ovnsb_idl));
+                hmap_destroy(&flow_table);
+                if (ctx.ovnsb_idl_txn) {
+                    int64_t cur_cfg = ofctrl_get_cur_cfg();
+                    if (cur_cfg && cur_cfg != chassis->nb_cfg) {
+                        sbrec_chassis_set_nb_cfg(chassis, cur_cfg);
+                    }
                 }
+                mcgroup_index_destroy(&mcgroups);
             }
-            mcgroup_index_destroy(&mcgroups);
             lport_index_destroy(&lports);
         }

diff --git a/ovn/controller/ovn-controller.h b/ovn/controller/ovn-controller.h
index 470b1f5..14cf861 100644
--- a/ovn/controller/ovn-controller.h
+++ b/ovn/controller/ovn-controller.h
@@ -76,5 +76,6 @@ enum chassis_tunnel_type {

 uint32_t get_tunnel_type(const char *name);

+void force_full_process(void);

 #endif /* ovn/ovn-controller.h */
diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
index c8e47b4..7e11cf7 100644
--- a/ovn/controller/patch.c
+++ b/ovn/controller/patch.c
@@ -245,6 +245,7 @@ add_bridge_mappings(struct controller_ctx *ctx,
                           br_ln, name2, br_int, name1, existing_ports);
         free(name1);
         free(name2);
+        force_full_process();
     }

     shash_destroy(&bridge_mappings);
@@ -362,6 +363,7 @@ add_logical_patch_ports(struct controller_ctx *ctx,
             create_patch_port(ctx, patch_port_id, local,
                               br_int, src_name, br_int, dst_name,
                               existing_ports);
+            force_full_process();
             free(dst_name);
             free(src_name);
             add_patched_datapath(patched_datapaths, binding, local_port);
diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 1898a2a..f509a28 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -636,13 +636,16 @@ consider_mc_group(enum mf_field_id mff_ovn_geneve,
     sset_destroy(&remote_chassis);
 }

-void
-physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
-             const struct ovsrec_bridge *br_int, const char *this_chassis_id,
-             const struct simap *ct_zones, struct hmap *flow_table,
-             struct hmap *local_datapaths, struct hmap *patched_datapaths)
+/* Scans the bridge 'br_int' and compares it to the supplied expected state.
+ * Returns true and updates the expected state if changes are detected.
+ * Returns false if no changes are found. */
+bool
+detect_and_save_physical_changes(struct simap *localvif_to_ofport_p,
+                                 struct hmap *tunnels_p,
+                                 enum mf_field_id mff_ovn_geneve,
+                                 const struct ovsrec_bridge *br_int,
+                                 const char *this_chassis_id)
 {
-    /* This bool tracks physical mapping changes. */
     bool physical_map_changed = false;


@@ -650,6 +653,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
         SIMAP_INITIALIZER(&new_localvif_to_ofport);
     struct simap new_tunnel_to_ofport =
         SIMAP_INITIALIZER(&new_tunnel_to_ofport);
+
     for (int i = 0; i < br_int->n_ports; i++) {
         const struct ovsrec_port *port_rec = br_int->ports[i];
         if (!strcmp(port_rec->name, br_int->name)) {
@@ -729,7 +733,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
             }
         } else {
             tun = xmalloc(sizeof *tun);
-            hmap_insert(&tunnels, &tun->hmap_node,
+            hmap_insert(tunnels_p, &tun->hmap_node,
                         hash_string(chassis_id, 0));
             tun->chassis_id = chassis_id;
             tun->ofport = u16_to_ofp(ofport);
@@ -749,9 +753,9 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,

     /* Remove tunnels that are no longer here. */
     struct chassis_tunnel *tun, *tun_next;
-    HMAP_FOR_EACH_SAFE (tun, tun_next, hmap_node, &tunnels) {
+    HMAP_FOR_EACH_SAFE (tun, tun_next, hmap_node, tunnels_p) {
         if (!simap_find(&new_tunnel_to_ofport, tun->chassis_id)) {
-            hmap_remove(&tunnels, &tun->hmap_node);
+            hmap_remove(tunnels_p, &tun->hmap_node);
             physical_map_changed = true;
             free(tun);
         }
@@ -759,26 +763,41 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,

     /* Capture changed or removed openflow ports. */
     struct simap_node *vif_name, *vif_name_next;
-    SIMAP_FOR_EACH_SAFE (vif_name, vif_name_next, &localvif_to_ofport) {
+    SIMAP_FOR_EACH_SAFE (vif_name, vif_name_next, localvif_to_ofport_p) {
         int newport;
         if ((newport = simap_get(&new_localvif_to_ofport, vif_name->name))) {
-            if (newport != simap_get(&localvif_to_ofport, vif_name->name)) {
-                simap_put(&localvif_to_ofport, vif_name->name, newport);
+            if (newport != simap_get(localvif_to_ofport_p, vif_name->name)) {
+                simap_put(localvif_to_ofport_p, vif_name->name, newport);
                 physical_map_changed = true;
             }
         } else {
-            simap_find_and_delete(&localvif_to_ofport, vif_name->name);
+            simap_find_and_delete(localvif_to_ofport_p, vif_name->name);
             physical_map_changed = true;
         }
     }
     SIMAP_FOR_EACH (vif_name, &new_localvif_to_ofport) {
-        if (!simap_get(&localvif_to_ofport, vif_name->name)) {
-            simap_put(&localvif_to_ofport, vif_name->name,
+        if (!simap_get(localvif_to_ofport_p, vif_name->name)) {
+            simap_put(localvif_to_ofport_p, vif_name->name,
                       simap_get(&new_localvif_to_ofport, vif_name->name));
             physical_map_changed = true;
         }
     }
-    if (physical_map_changed) {
+
+    simap_destroy(&new_localvif_to_ofport);
+    simap_destroy(&new_tunnel_to_ofport);
+
+    return physical_map_changed;
+}
+
+void
+physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
+             const struct ovsrec_bridge *br_int, const char *this_chassis_id,
+             const struct simap *ct_zones, struct hmap *flow_table,
+             struct hmap *local_datapaths, struct hmap *patched_datapaths)
+{
+    if (detect_and_save_physical_changes(&localvif_to_ofport, &tunnels,
+                                         mff_ovn_geneve, br_int,
+                                         this_chassis_id)) {
         /* Reprocess logical flow table immediately. */
         poll_immediate_wake();
     }
@@ -828,6 +847,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
      * ports. We set MFF_LOG_DATAPATH, MFF_LOG_INPORT, and
      * MFF_LOG_OUTPORT from the tunnel key data, then resubmit to table
      * 33 to handle packets to the local hypervisor. */
+    struct chassis_tunnel *tun;
     HMAP_FOR_EACH (tun, hmap_node, &tunnels) {
         struct match match = MATCH_CATCHALL_INITIALIZER;
         match_set_in_port(&match, tun->ofport);
@@ -925,7 +945,4 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     ofctrl_add_flow(flow_table, OFTABLE_SAVE_INPORT, 0, &match, &ofpacts);

     ofpbuf_uninit(&ofpacts);
-
-    simap_destroy(&new_localvif_to_ofport);
-    simap_destroy(&new_tunnel_to_ofport);
 }
diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h
index 86ce93c..d11615e 100644
--- a/ovn/controller/physical.h
+++ b/ovn/controller/physical.h
@@ -41,6 +41,11 @@ struct simap;
 #define OVN_GENEVE_LEN 4

 void physical_register_ovs_idl(struct ovsdb_idl *);
+bool detect_and_save_physical_changes(struct simap *localvif_to_ofport_p,
+                                      struct hmap *tunnels_p,
+                                      enum mf_field_id mff_ovn_geneve,
+                                      const struct ovsrec_bridge *br_int,
+                                      const char *this_chassis_id);
 void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve,
                   const struct ovsrec_bridge *br_int, const char *chassis_id,
                   const struct simap *ct_zones, struct hmap *flow_table,