From patchwork Mon Apr 25 22:44:31 2016
X-Patchwork-Submitter: Justin Pettit
X-Patchwork-Id: 614734
From: Justin Pettit <jpettit@ovn.org>
To: Darrell Ball
Cc: dev@openvswitch.org
Date: Mon, 25 Apr 2016 15:44:31 -0700
In-Reply-To: <1459887220-12479-2-git-send-email-dlu998@gmail.com>
References: <1459887220-12479-1-git-send-email-dlu998@gmail.com>
 <1459887220-12479-2-git-send-email-dlu998@gmail.com>
Subject: Re: [ovs-dev] [PATCH] Patch v3: ovn-controller-vtep: Support BUM
 traffic for the VTEP Schema
> On Apr 5, 2016, at 1:13 PM, Darrell Ball wrote:
>
> This patch implements BUM support in the VTEP schema. This relates to BUM
> traffic flowing from a gateway towards HVs. This code would be relevant to HW
> gateways and the ovs-vtep simulator. In order to do this, the mcast macs remote
> table in the VTEP schema is populated based on the OVN SB port binding. For
> each logical switch, the SB port bindings are queried to find all the physical
> locators to send BUM traffic to and the VTEP DB is updated.
>
> Some test packets were enabled in the HW gateway test case to exercise the
> new code.

I told you to limit the line length to 80 columns, but when you run "git log",
it indents the message by four spaces, so it looks like better advice would be
75 or 76 columns.

The name of the patch shouldn't have "Patch v3:" as part of the title.

> diff --git a/ovn/controller-vtep/vtep.c b/ovn/controller-vtep/vtep.c
> index 016c2e0..aa988c0 100644
> --- a/ovn/controller-vtep/vtep.c
> +++ b/ovn/controller-vtep/vtep.c
> @@ -27,9 +27,20 @@
>  #include "openvswitch/vlog.h"
>  #include "ovn/lib/ovn-sb-idl.h"
>  #include "vtep/vtep-idl.h"
> +#include "openvswitch/util.h"

Is this additional #include needed?  It seems like it builds fine without it.

> +    if (n_locators_new) {
> +        int i = 0;
> +        struct vtep_rec_physical_locator_list_entry *ploc_entry;
> +        const struct vteprec_physical_locator *pl;
> +        LIST_FOR_EACH (ploc_entry, locators_node, locators_list) {
> +            locators[i] = (struct vteprec_physical_locator *)
> +                           ploc_entry->vteprec_ploc;
> +            if(mmr_ext) {
> +                pl = shash_find_data(&mmr_ext->physical_locators,
> +                                     locators[i]->dst_ip);
> +                if (!pl) {
> +                    locator_lists_differ = true;
> +                }

I think the code can be slightly simplified since "pl" is never used other than
to see if it's not null, like so:

        LIST_FOR_EACH (ploc_entry, locators_node, locators_list) {
            locators[i] = (struct vteprec_physical_locator *)
                           ploc_entry->vteprec_ploc;
            if (mmr_ext && !shash_find_data(&mmr_ext->physical_locators,
                                            locators[i]->dst_ip)) {
                locator_lists_differ = true;
            }

> +/* Creates a new 'Mcast_Macs_Remote' entry. if needed and also cleans up
> + * out-dated remote mcast mac entries as needed. */
> +static void
> +vtep_update_mmr(struct ovsdb_idl_txn *vtep_idl_txn,
> +                struct ovs_list *locators_list,
> +                const struct vteprec_logical_switch *vtep_ls,
> +                const struct mmr_hash_node_data *mmr_ext)
> +{
> +    struct vteprec_physical_locator **locators = NULL;
> +    size_t n_locators_new = ovs_list_size(locators_list);
> +    bool mmr_changed = false;
> +
> +    locators = xmalloc(n_locators_new * sizeof *locators);
> +
> +    if (vtep_process_pls(locators_list, mmr_ext, locators)) {
> +        mmr_changed = true;
> +    }

I don't think you need to initialize "mmr_changed" or have this "if" block,
since vtep_process_pls() always returns true or false.
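In other words, just assign the return value directly.  Only a sketch, but it
matches the corresponding hunk in the incremental diff at the end of this
message:

    /* vtep_process_pls() already reports whether the locator sets differ,
     * so its result can be assigned to "mmr_changed" directly. */
    bool mmr_changed = vtep_process_pls(locators_list, mmr_ext, locators);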
> -/* Updates the vtep 'Ucast_Macs_Remote' table based on non-vtep port
> - * bindings. */
> +/* Updates the vtep 'Ucast_Macs_Remote' and 'Mcast_Macs_Remote' tables based
> + * on non-vtep port bindings. */
>  static void
>  vtep_macs_run(struct ovsdb_idl_txn *vtep_idl_txn, struct shash *ucast_macs_rmts,
> -              struct shash *physical_locators, struct shash *vtep_lswitches,
> -              struct shash *non_vtep_pbs)
> +              struct shash *mcast_macs_rmts, struct shash *physical_locators,
> +              struct shash *vtep_lswitches, struct shash *non_vtep_pbs)
>  {
>      struct shash_node *node;
>      struct hmap ls_map;
> +    const struct vteprec_physical_locator *pl;

I believe this can be moved to a more specific code block.

> +    /* Collects 'Mcast_Macs_Remote's. */
> +    VTEPREC_MCAST_MACS_REMOTE_FOR_EACH (mmr, ctx->vtep_idl) {
> +        struct mmr_hash_node_data *mmr_ext = xmalloc(sizeof *mmr_ext);;
> +        char *mac_tnlkey =
> +            xasprintf("%s_%"PRId64, mmr->MAC,
> +                      mmr->logical_switch && mmr->logical_switch->n_tunnel_key
> +                      ? mmr->logical_switch->tunnel_key[0] : INT64_MAX);
> +
> +        shash_add(&mcast_macs_rmts, mac_tnlkey, mmr_ext);
> +        mmr_ext->mmr = mmr;
> +
> +        shash_init(&mmr_ext->physical_locators);
> +        for (size_t i = 0; i < mmr_ext->mmr->locator_set->n_locators; i++) {
> +            shash_add(&mmr_ext->physical_locators,
> +                      mmr_ext->mmr->locator_set->locators[i]->dst_ip,
> +                      mmr_ext->mmr->locator_set->locators[i]);

Minor, but I think it's clearer if you just access these through "mmr" rather
than "mmr_ext->mmr".
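E.g., a sketch of that loop; it is the same as the corresponding hunk near the
end of the diff below:

        shash_init(&mmr_ext->physical_locators);
        for (size_t i = 0; i < mmr->locator_set->n_locators; i++) {
            /* "mmr" and "mmr_ext->mmr" refer to the same record; the
             * shorter name is easier to read. */
            shash_add(&mmr_ext->physical_locators,
                      mmr->locator_set->locators[i]->dst_ip,
                      mmr->locator_set->locators[i]);
        }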
I made a few other minor style changes.  If you agree with all the changes at
the end of this message, I'll go ahead and apply the patch with those changes.

Thanks,

--Justin

-=-=-=-=-=-=-=-=-=-=-=-

diff --git a/ovn/controller-vtep/vtep.c b/ovn/controller-vtep/vtep.c
index aa988c0..9e11e28 100644
--- a/ovn/controller-vtep/vtep.c
+++ b/ovn/controller-vtep/vtep.c
@@ -27,7 +27,6 @@
 #include "openvswitch/vlog.h"
 #include "ovn/lib/ovn-sb-idl.h"
 #include "vtep/vtep-idl.h"
-#include "openvswitch/util.h"
 
 VLOG_DEFINE_THIS_MODULE(vtep);
 
@@ -93,9 +92,8 @@ create_pl(struct ovsdb_idl_txn *vtep_idl_txn, const char *chas
     return new_pl;
 }
 
-
 ^L
-/* Creates a new 'Mcast_Macs_Remote' entry. */
+/* Creates a new 'Mcast_Macs_Remote'. */
 static void
 vtep_create_mmr(struct ovsdb_idl_txn *vtep_idl_txn, const char *mac,
                 const struct vteprec_logical_switch *vtep_ls,
@@ -109,14 +107,13 @@ vtep_create_mmr(struct ovsdb_idl_txn *vtep_idl_txn, const
     vteprec_mcast_macs_remote_set_locator_set(new_mmr, ploc_set);
 }
 
-/* Compares prev and new mmr locator sets and return true if they differ and
- * false otherwise. This function also preps new locator set for database
- * write.
+/* Compares previous and new mmr locator sets and returns true if they
+ * differ and false otherwise. This function also preps a new locator
+ * set for database write.
  *
- * locators_list is the new set of locators for the associated
- * 'Mcast_Macs_Remote' entry passed in and is queried to generate the new set
- * of locators in vtep database format.
- */
+ * 'locators_list' is the new set of locators for the associated
+ * 'Mcast_Macs_Remote' entry passed in and is queried to generate the
+ * new set of locators in vtep database format. */
 static bool
 vtep_process_pls(const struct ovs_list *locators_list,
                  const struct mmr_hash_node_data *mmr_ext,
@@ -136,16 +133,12 @@ vtep_process_pls(const struct ovs_list *locators_list,
     if (n_locators_new) {
         int i = 0;
         struct vtep_rec_physical_locator_list_entry *ploc_entry;
-        const struct vteprec_physical_locator *pl;
         LIST_FOR_EACH (ploc_entry, locators_node, locators_list) {
             locators[i] = (struct vteprec_physical_locator *)
                            ploc_entry->vteprec_ploc;
-            if(mmr_ext) {
-                pl = shash_find_data(&mmr_ext->physical_locators,
-                                     locators[i]->dst_ip);
-                if (!pl) {
+            if (mmr_ext && !shash_find_data(&mmr_ext->physical_locators,
+                                            locators[i]->dst_ip)) {
                 locator_lists_differ = true;
-                }
             }
             i++;
         }
@@ -154,7 +147,7 @@ vtep_process_pls(const struct ovs_list *locators_list,
     return locator_lists_differ;
 }
 
-/* Creates a new 'Mcast_Macs_Remote' entry. if needed and also cleans up
+/* Creates a new 'Mcast_Macs_Remote' entry if needed and also cleans up
  * out-dated remote mcast mac entries as needed. */
 static void
 vtep_update_mmr(struct ovsdb_idl_txn *vtep_idl_txn,
@@ -164,13 +157,11 @@ vtep_update_mmr(struct ovsdb_idl_txn *vtep_idl_txn,
 {
     struct vteprec_physical_locator **locators = NULL;
     size_t n_locators_new = ovs_list_size(locators_list);
-    bool mmr_changed = false;
+    bool mmr_changed;
 
     locators = xmalloc(n_locators_new * sizeof *locators);
 
-    if (vtep_process_pls(locators_list, mmr_ext, locators)) {
-        mmr_changed = true;
-    }
+    mmr_changed = vtep_process_pls(locators_list, mmr_ext, locators);
 
     if (mmr_ext && !n_locators_new) {
         vteprec_mcast_macs_remote_delete(mmr_ext->mmr);
@@ -262,7 +253,6 @@ vtep_macs_run(struct ovsdb_idl_txn *vtep_idl_txn, struct sha
 {
     struct shash_node *node;
     struct hmap ls_map;
-    const struct vteprec_physical_locator *pl;
 
     /* Maps from ovn logical datapath tunnel key (which is also the vtep
      * logical switch tunnel key) to the corresponding vtep logical switch
@@ -338,11 +328,13 @@ vtep_macs_run(struct ovsdb_idl_txn *vtep_idl_txn, struct s
             continue;
         }
 
-        pl = shash_find_data(physical_locators, chassis_ip);
+        const struct vteprec_physical_locator *pl =
+            shash_find_data(physical_locators, chassis_ip);
         if (!pl) {
             pl = create_pl(vtep_idl_txn, chassis_ip);
             shash_add(physical_locators, chassis_ip, pl);
         }
+
         const struct vteprec_physical_locator *ls_pl =
             shash_find_data(&ls_node->physical_locators, chassis_ip);
         if (!ls_pl) {
@@ -420,7 +412,8 @@ vtep_macs_run(struct ovsdb_idl_txn *vtep_idl_txn, struct sha
         free(iter);
     }
     hmap_destroy(&ls_map);
-    /* Clean stale 'mcast_macs_remote' */
+
+    /* Clean stale 'Mcast_Macs_Remote' */
     struct mmr_hash_node_data *mmr_ext;
     SHASH_FOR_EACH (node, mcast_macs_rmts) {
         mmr_ext = node->data;
@@ -531,10 +524,10 @@ vtep_run(struct controller_vtep_ctx *ctx)
         mmr_ext->mmr = mmr;
 
         shash_init(&mmr_ext->physical_locators);
-        for (size_t i = 0; i < mmr_ext->mmr->locator_set->n_locators; i++) {
+        for (size_t i = 0; i < mmr->locator_set->n_locators; i++) {
             shash_add(&mmr_ext->physical_locators,
-                      mmr_ext->mmr->locator_set->locators[i]->dst_ip,
-                      mmr_ext->mmr->locator_set->locators[i]);
+                      mmr->locator_set->locators[i]->dst_ip,
+                      mmr->locator_set->locators[i]);
         }
 
         free(mac_tnlkey);