From patchwork Thu Jul 28 14:18:45 2016
X-Patchwork-Submitter: Ryan Moats
X-Patchwork-Id: 653736
From: Ryan Moats
To: dev@openvswitch.org
Date: Thu, 28 Jul 2016 14:18:45 +0000
Message-Id: <1469715525-8908-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v2] ovn-controller: Persist desired conntrack groups.
With incremental processing of logical flows, desired conntrack groups
are not being persisted.  This patch adds this capability, with the
side effect of adding a ds_clone method that this capability leverages.

Signed-off-by: Ryan Moats
Reported-by: Guru Shetty
Reported-at: http://openvswitch.org/pipermail/dev/2016-July/076320.html
Fixes: 70c7cfe ("ovn-controller: Add incremental processing to lflow_run and physical_run")
---
v1->v2:
  addressed review comments
  updated commit message
  changed name of ds_copy to ds_clone
  moved lflow uuid storage to action_params for cleaner code

 include/openvswitch/dynamic-string.h |  1 +
 include/ovn/actions.h                |  6 ++++++
 lib/dynamic-string.c                 |  9 +++++++++
 ovn/controller/lflow.c               |  2 ++
 ovn/controller/ofctrl.c              | 32 +++++++++++++++++++++++---------
 ovn/controller/ofctrl.h              |  3 +++
 ovn/lib/actions.c                    |  1 +
 7 files changed, 45 insertions(+), 9 deletions(-)

diff --git a/include/openvswitch/dynamic-string.h b/include/openvswitch/dynamic-string.h
index dfe2688..bf1f64a 100644
--- a/include/openvswitch/dynamic-string.h
+++ b/include/openvswitch/dynamic-string.h
@@ -73,6 +73,7 @@ void ds_swap(struct ds *, struct ds *);
 int ds_last(const struct ds *);
 bool ds_chomp(struct ds *, int c);
+void ds_clone(struct ds *, struct ds *);
 
 /* Inline functions. */
diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index 114c71e..55720ce 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -22,7 +22,9 @@
 #include "compiler.h"
 #include "openvswitch/hmap.h"
 #include "openvswitch/dynamic-string.h"
+#include "openvswitch/uuid.h"
 #include "util.h"
+#include "uuid.h"
 
 struct expr;
 struct lexer;
@@ -43,6 +45,7 @@ struct group_table {
 struct group_info {
     struct hmap_node hmap_node;
     struct ds group;
+    struct uuid lflow_uuid;
     uint32_t group_id;
 };
 
@@ -107,6 +110,9 @@ struct action_params {
     /* A struct to figure out the group_id for group actions. */
     struct group_table *group_table;
 
+    /* The logical flow uuid that drove this action. */
+    struct uuid lflow_uuid;
+
     /* OVN maps each logical flow table (ltable), one-to-one, onto a physical
      * OpenFlow flow table (ptable).  A number of parameters describe this
      * mapping and data related to flow tables:
diff --git a/lib/dynamic-string.c b/lib/dynamic-string.c
index 1f17a9f..6f7b610 100644
--- a/lib/dynamic-string.c
+++ b/lib/dynamic-string.c
@@ -456,3 +456,12 @@ ds_chomp(struct ds *ds, int c)
         return false;
     }
 }
+
+void
+ds_clone(struct ds *dst, struct ds *source)
+{
+    dst->length = source->length;
+    dst->allocated = dst->length;
+    dst->string = xmalloc(dst->allocated + 1);
+    memcpy(dst->string, source->string, dst->allocated + 1);
+}
diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index a4f3322..e38c32a 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -383,6 +383,7 @@ add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
     if (full_flow_processing) {
         ovn_flow_table_clear();
+        ovn_group_table_clear(group_table, false);
         full_logical_flow_processing = true;
         full_neighbor_flow_processing = true;
         full_flow_processing = false;
@@ -515,6 +516,7 @@ consider_logical_flow(const struct lport_index *lports,
         .aux = &aux,
         .ct_zones = ct_zones,
         .group_table = group_table,
+        .lflow_uuid = lflow->header_.uuid,
         .n_tables = LOG_PIPELINE_LEN,
         .first_ptable = first_ptable,
diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c
index dd9f5ec..de019b0 100644
--- a/ovn/controller/ofctrl.c
+++ b/ovn/controller/ofctrl.c
@@ -140,8 +140,6 @@ static ovs_be32 queue_msg(struct ofpbuf *);
 static void ovn_flow_table_destroy(void);
 static struct ofpbuf *encode_flow_mod(struct ofputil_flow_mod *);
-static void ovn_group_table_clear(struct group_table *group_table,
-                                  bool existing);
 static struct ofpbuf *encode_group_mod(const struct ofputil_group_mod *);
 static void ofctrl_recv(const struct ofp_header *, enum ofptype);
@@ -680,6 +678,16 @@ ofctrl_remove_flows(const struct uuid *uuid)
             ovn_flow_destroy(f);
         }
     }
+
+    /* Remove any group_info information created by this logical flow. */
+    struct group_info *g, *next_g;
+    HMAP_FOR_EACH_SAFE (g, next_g, hmap_node, &groups->desired_groups) {
+        if (uuid_equals(&g->lflow_uuid, uuid)) {
+            hmap_remove(&groups->desired_groups, &g->hmap_node);
+            ds_destroy(&g->group);
+            free(g);
+        }
+    }
 }
 
 /* Shortcut to remove all flows matching the supplied UUID and add this
@@ -833,6 +841,15 @@ add_flow_mod(struct ofputil_flow_mod *fm, struct ovs_list *msgs)
 
 /* group_table. */
 
+static struct group_info *
+group_info_clone(struct group_info *source) {
+    struct group_info *clone = xmalloc(sizeof *clone);
+    ds_clone(&clone->group, &source->group);
+    clone->group_id = source->group_id;
+    clone->hmap_node.hash = source->hmap_node.hash;
+    return clone;
+}
+
 /* Finds and returns a group_info in 'existing_groups' whose key is identical
  * to 'target''s key, or NULL if there is none. */
 static struct group_info *
@@ -851,7 +868,7 @@ ovn_group_lookup(struct hmap *exisiting_groups,
 }
 
 /* Clear either desired_groups or existing_groups in group_table. */
-static void
+void
 ovn_group_table_clear(struct group_table *group_table, bool existing)
 {
     struct group_info *g, *next;
@@ -1066,13 +1083,10 @@ ofctrl_put(struct group_table *group_table, int64_t nb_cfg)
     /* Move the contents of desired_groups to existing_groups. */
     HMAP_FOR_EACH_SAFE(desired, next_group, hmap_node,
                        &group_table->desired_groups) {
-        hmap_remove(&group_table->desired_groups, &desired->hmap_node);
         if (!ovn_group_lookup(&group_table->existing_groups, desired)) {
-            hmap_insert(&group_table->existing_groups, &desired->hmap_node,
-                        desired->hmap_node.hash);
-        } else {
-            ds_destroy(&desired->group);
-            free(desired);
+            struct group_info *clone = group_info_clone(desired);
+            hmap_insert(&group_table->existing_groups, &clone->hmap_node,
+                        clone->hmap_node.hash);
         }
     }
diff --git a/ovn/controller/ofctrl.h b/ovn/controller/ofctrl.h
index befae01..dd3f8f5 100644
--- a/ovn/controller/ofctrl.h
+++ b/ovn/controller/ofctrl.h
@@ -54,4 +54,7 @@ void ofctrl_flow_table_clear(void);
 void ovn_flow_table_clear(void);
 
+void ovn_group_table_clear(struct group_table *group_table,
+                           bool existing);
+
 #endif /* ovn/ofctrl.h */
diff --git a/ovn/lib/actions.c b/ovn/lib/actions.c
index fd5a867..aef5c75 100644
--- a/ovn/lib/actions.c
+++ b/ovn/lib/actions.c
@@ -761,6 +761,7 @@ parse_ct_lb_action(struct action_context *ctx)
     group_info = xmalloc(sizeof *group_info);
     group_info->group = ds;
     group_info->group_id = group_id;
+    group_info->lflow_uuid = ctx->ap->lflow_uuid;
     group_info->hmap_node.hash = hash;
     hmap_insert(&ctx->ap->group_table->desired_groups,