From patchwork Tue Dec 3 11:08:43 2024
X-Patchwork-Submitter: Ales Musil
X-Patchwork-Id: 2017703
From: Ales Musil <amusil@redhat.com>
To: dev@openvswitch.org
Date: Tue, 3 Dec 2024 12:08:43 +0100
Message-ID: <20241203110853.201377-2-amusil@redhat.com>
In-Reply-To: <20241203110853.201377-1-amusil@redhat.com>
References: <20241203110853.201377-1-amusil@redhat.com>
Subject: [ovs-dev] [PATCH ovn 1/6] physical: Use struct physical_ctx instead of passing args one by one.

The argument lists of certain functions were getting out of hand; pass struct physical_ctx instead of individual fields from that struct. This has the added benefit of making everything in struct physical_ctx available without adding further arguments.
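For illustration only, here is a minimal standalone C sketch of the "context struct" pattern this patch applies. The struct and function names below are simplified and hypothetical (the real struct physical_ctx in controller/physical.c carries many more fields); the point is that callers fill the context once and later code gains access to new fields without any signature changes.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, trimmed-down stand-in for struct physical_ctx. */
struct physical_ctx_sketch {
    uint32_t mff_ovn_geneve;   /* tunnel metadata field id */
    size_t n_encap_ips;        /* number of local encap IPs */
    const char **encap_ips;    /* local encap IPs */
    bool always_tunnel;        /* force tunneling to remote chassis */
};

/* Before: every value is a separate parameter, so adding one more
 * means touching every caller and every intermediate function. */
static void
consider_port_binding_old(uint32_t mff_ovn_geneve, size_t n_encap_ips,
                          const char **encap_ips, bool always_tunnel)
{
    printf("geneve=%u, %zu encap IPs (first %s), always_tunnel=%d\n",
           (unsigned) mff_ovn_geneve, n_encap_ips, encap_ips[0],
           always_tunnel);
}

/* After: one pointer provides everything that is in the context. */
static void
consider_port_binding_new(const struct physical_ctx_sketch *ctx)
{
    printf("geneve=%u, %zu encap IPs (first %s), always_tunnel=%d\n",
           (unsigned) ctx->mff_ovn_geneve, ctx->n_encap_ips,
           ctx->encap_ips[0], ctx->always_tunnel);
}

int
main(void)
{
    const char *ips[] = { "192.0.2.1" };
    struct physical_ctx_sketch ctx = {
        .mff_ovn_geneve = 41, .n_encap_ips = 1,
        .encap_ips = ips, .always_tunnel = false,
    };

    consider_port_binding_old(ctx.mff_ovn_geneve, ctx.n_encap_ips,
                              ctx.encap_ips, ctx.always_tunnel);
    consider_port_binding_new(&ctx);
    return 0;
}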
Signed-off-by: Ales Musil Acked-by: Lorenzo Bianconi --- controller/physical.c | 242 +++++++++++++++--------------------------- 1 file changed, 87 insertions(+), 155 deletions(-) diff --git a/controller/physical.c b/controller/physical.c index 3ca4e0783..b3da527ae 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -332,8 +332,7 @@ find_additional_encap_for_chassis(const struct sbrec_port_binding *pb, static struct ovs_list * get_remote_tunnels(const struct sbrec_port_binding *binding, - const struct sbrec_chassis *chassis, - const struct hmap *chassis_tunnels, + const struct physical_ctx *ctx, const char *local_encap_ip) { const struct chassis_tunnel *tun; @@ -341,9 +340,9 @@ get_remote_tunnels(const struct sbrec_port_binding *binding, struct ovs_list *tunnels = xmalloc(sizeof *tunnels); ovs_list_init(tunnels); - if (binding->chassis && binding->chassis != chassis) { + if (binding->chassis && binding->chassis != ctx->chassis) { tun = get_port_binding_tun(binding->encap, binding->chassis, - chassis_tunnels, local_encap_ip); + ctx->chassis_tunnels, local_encap_ip); if (!tun) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); VLOG_WARN_RL( @@ -358,14 +357,15 @@ get_remote_tunnels(const struct sbrec_port_binding *binding, } for (size_t i = 0; i < binding->n_additional_chassis; i++) { - if (binding->additional_chassis[i] == chassis) { + if (binding->additional_chassis[i] == ctx->chassis) { continue; } const struct sbrec_encap *additional_encap; - additional_encap = find_additional_encap_for_chassis(binding, chassis); + additional_encap = find_additional_encap_for_chassis(binding, + ctx->chassis); tun = get_port_binding_tun(additional_encap, binding->additional_chassis[i], - chassis_tunnels, local_encap_ip); + ctx->chassis_tunnels, local_encap_ip); if (!tun) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); VLOG_WARN_RL( @@ -383,25 +383,20 @@ get_remote_tunnels(const struct sbrec_port_binding *binding, static void put_remote_port_redirect_overlay(const struct sbrec_port_binding *binding, - enum mf_field_id mff_ovn_geneve, + const struct physical_ctx *ctx, uint32_t port_key, struct match *match, struct ofpbuf *ofpacts_p, - const struct sbrec_chassis *chassis, - const struct hmap *chassis_tunnels, - size_t n_encap_ips, - const char **encap_ips, struct ovn_desired_flow_table *flow_table) { /* Setup encapsulation */ - for (size_t i = 0; i < n_encap_ips; i++) { + for (size_t i = 0; i < ctx->n_encap_ips; i++) { + const char *encap_ip = ctx->encap_ips[i]; struct ofpbuf *ofpacts_clone = ofpbuf_clone(ofpacts_p); match_set_reg_masked(match, MFF_LOG_ENCAP_ID - MFF_REG0, i << 16, (uint32_t) 0xFFFF << 16); - struct ovs_list *tuns = get_remote_tunnels(binding, chassis, - chassis_tunnels, - encap_ips[i]); + struct ovs_list *tuns = get_remote_tunnels(binding, ctx, encap_ip); if (!ovs_list_is_empty(tuns)) { bool is_vtep_port = !strcmp(binding->type, "vtep"); /* rewrite MFF_IN_PORT to bypass OpenFlow loopback check for ARP/ND @@ -413,7 +408,7 @@ put_remote_port_redirect_overlay(const struct sbrec_port_binding *binding, struct tunnel *tun; LIST_FOR_EACH (tun, list_node, tuns) { - put_encapsulation(mff_ovn_geneve, tun->tun, + put_encapsulation(ctx->mff_ovn_geneve, tun->tun, binding->datapath, port_key, is_vtep_port, ofpacts_clone); ofpact_put_OUTPUT(ofpacts_clone)->port = tun->tun->ofport; @@ -763,18 +758,14 @@ ofpact_put_push_vlan(struct ofpbuf *ofpacts, const struct smap *options, int tag } static void -put_replace_router_port_mac_flows(struct ovsdb_idl_index - 
*sbrec_port_binding_by_name, +put_replace_router_port_mac_flows(const struct physical_ctx *ctx, const struct sbrec_port_binding *localnet_port, - const struct sbrec_chassis *chassis, - const struct sset *active_tunnels, - const struct hmap *local_datapaths, struct ofpbuf *ofpacts_p, ofp_port_t ofport, struct ovn_desired_flow_table *flow_table) { - struct local_datapath *ld = get_local_datapath(local_datapaths, + struct local_datapath *ld = get_local_datapath(ctx->local_datapaths, localnet_port->datapath-> tunnel_key); ovs_assert(ld); @@ -794,7 +785,7 @@ put_replace_router_port_mac_flows(struct ovsdb_idl_index } /* Get chassis mac */ - if (!chassis_get_mac(chassis, network, &chassis_mac)) { + if (!chassis_get_mac(ctx->chassis, network, &chassis_mac)) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); /* Keeping the log level low for backward compatibility. * Chassis mac is a new configuration. @@ -810,8 +801,8 @@ put_replace_router_port_mac_flows(struct ovsdb_idl_index struct match match; struct ofpact_mac *replace_mac; char *cr_peer_name = xasprintf("cr-%s", rport_binding->logical_port); - if (lport_is_chassis_resident(sbrec_port_binding_by_name, - chassis, active_tunnels, + if (lport_is_chassis_resident(ctx->sbrec_port_binding_by_name, + ctx->chassis, ctx->active_tunnels, cr_peer_name)) { /* If a router port's chassisredirect port is * resident on this chassis, then we need not do mac replace. */ @@ -1421,18 +1412,14 @@ static void enforce_tunneling_for_multichassis_ports( struct local_datapath *ld, const struct sbrec_port_binding *binding, - const struct sbrec_chassis *chassis, - const struct hmap *chassis_tunnels, - enum mf_field_id mff_ovn_geneve, - struct ovn_desired_flow_table *flow_table, - const struct if_status_mgr *if_mgr) + const struct physical_ctx *ctx, + struct ovn_desired_flow_table *flow_table) { if (shash_is_empty(&ld->multichassis_ports)) { return; } - struct ovs_list *tuns = get_remote_tunnels(binding, chassis, - chassis_tunnels, NULL); + struct ovs_list *tuns = get_remote_tunnels(binding, ctx, NULL); if (ovs_list_is_empty(tuns)) { free(tuns); return; @@ -1461,7 +1448,7 @@ enforce_tunneling_for_multichassis_ports( struct tunnel *tun; LIST_FOR_EACH (tun, list_node, tuns) { - put_encapsulation(mff_ovn_geneve, tun->tun, + put_encapsulation(ctx->mff_ovn_geneve, tun->tun, binding->datapath, port_key, is_vtep_port, &ofpacts); ofpact_put_OUTPUT(&ofpacts)->port = tun->tun->ofport; @@ -1471,7 +1458,7 @@ enforce_tunneling_for_multichassis_ports( &binding->header_.uuid); ofpbuf_uninit(&ofpacts); - handle_pkt_too_big(flow_table, tuns, binding, mcp, if_mgr); + handle_pkt_too_big(flow_table, tuns, binding, mcp, ctx->if_mgr); } struct tunnel *tun_elem; @@ -1482,28 +1469,15 @@ enforce_tunneling_for_multichassis_ports( } static void -consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, - enum mf_field_id mff_ovn_geneve, - const struct shash *ct_zones, - const struct sset *active_tunnels, - const struct hmap *local_datapaths, - const struct shash *local_bindings, - const struct simap *patch_ofports, - const struct hmap *chassis_tunnels, +consider_port_binding(const struct physical_ctx *ctx, const struct sbrec_port_binding *binding, - const struct sbrec_chassis *chassis, - const struct physical_debug *debug, - const struct if_status_mgr *if_mgr, - size_t n_encap_ips, - const char **encap_ips, - bool always_tunnel, struct ovn_desired_flow_table *flow_table, struct ofpbuf *ofpacts_p) { uint32_t dp_key = binding->datapath->tunnel_key; uint32_t port_key = 
binding->tunnel_key; struct local_datapath *ld; - if (!(ld = get_local_datapath(local_datapaths, dp_key))) { + if (!(ld = get_local_datapath(ctx->local_datapaths, dp_key))) { return; } @@ -1517,7 +1491,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, match_set_metadata(&match, htonll(dp_key)); match_set_reg(&match, MFF_LOG_INPORT - MFF_REG0, port_key); - struct zone_ids icmp_zone_ids = get_zone_ids(binding, ct_zones); + struct zone_ids icmp_zone_ids = get_zone_ids(binding, ctx->ct_zones); ofpbuf_clear(ofpacts_p); put_zones_ofpacts(&icmp_zone_ids, ofpacts_p); put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, ofpacts_p); @@ -1530,17 +1504,17 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, struct match match; if (!strcmp(binding->type, "patch") || (!strcmp(binding->type, "l3gateway") - && binding->chassis == chassis)) { + && binding->chassis == ctx->chassis)) { const struct sbrec_port_binding *peer = get_binding_peer( - sbrec_port_binding_by_name, binding); + ctx->sbrec_port_binding_by_name, binding); if (!peer) { return; } - struct zone_ids binding_zones = get_zone_ids(binding, ct_zones); + struct zone_ids binding_zones = get_zone_ids(binding, ctx->ct_zones); put_local_common_flows(dp_key, binding, NULL, &binding_zones, - debug, ofpacts_p, flow_table); + &ctx->debug, ofpacts_p, flow_table); ofpbuf_clear(ofpacts_p); match_outport_dp_and_port_keys(&match, dp_key, port_key); @@ -1551,9 +1525,9 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, put_load(0, MFF_LOG_DNAT_ZONE, 0, 32, ofpacts_p); put_load(0, MFF_LOG_SNAT_ZONE, 0, 32, ofpacts_p); put_load(0, MFF_LOG_CT_ZONE, 0, 16, ofpacts_p); - struct zone_ids peer_zones = get_zone_ids(peer, ct_zones); - load_logical_ingress_metadata(peer, &peer_zones, n_encap_ips, - encap_ips, ofpacts_p, false); + struct zone_ids peer_zones = get_zone_ids(peer, ctx->ct_zones); + load_logical_ingress_metadata(peer, &peer_zones, ctx->n_encap_ips, + ctx->encap_ips, ofpacts_p, false); put_load(0, MFF_LOG_FLAGS, 0, 32, ofpacts_p); put_load(0, MFF_LOG_OUTPORT, 0, 32, ofpacts_p); for (int i = 0; i < MFF_N_LOG_REGS; i++) { @@ -1570,9 +1544,9 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, return; } if (!strcmp(binding->type, "chassisredirect") - && (binding->chassis == chassis - || ha_chassis_group_is_active(binding->ha_chassis_group, - active_tunnels, chassis))) { + && (binding->chassis == ctx->chassis || + ha_chassis_group_is_active(binding->ha_chassis_group, + ctx->active_tunnels, ctx->chassis))) { /* Table 40, priority 100. * ======================= @@ -1589,7 +1563,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, const char *distributed_port = smap_get_def(&binding->options, "distributed-port", ""); const struct sbrec_port_binding *distributed_binding - = lport_lookup_by_name(sbrec_port_binding_by_name, + = lport_lookup_by_name(ctx->sbrec_port_binding_by_name, distributed_port); if (!distributed_binding) { @@ -1613,7 +1587,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, MFF_LOG_OUTPORT, 0, 32, ofpacts_p); struct zone_ids zone_ids = get_zone_ids(distributed_binding, - ct_zones); + ctx->ct_zones); put_zones_ofpacts(&zone_ids, ofpacts_p); /* Clear the MFF_INPORT. 
Its possible that the same packet may @@ -1655,16 +1629,16 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, if (!binding->tag) { return; } - ofport = local_binding_get_lport_ofport(local_bindings, + ofport = local_binding_get_lport_ofport(ctx->local_bindings, binding->parent_port); if (ofport) { tag = *binding->tag; nested_container = true; parent_port = lport_lookup_by_name( - sbrec_port_binding_by_name, binding->parent_port); + ctx->sbrec_port_binding_by_name, binding->parent_port); if (parent_port - && (lport_can_bind_on_this_chassis(chassis, + && (lport_can_bind_on_this_chassis(ctx->chassis, parent_port) != CAN_BIND_AS_MAIN)) { /* Even though there is an ofport for this container * parent port, it is requested on different chassis ignore @@ -1676,15 +1650,15 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, } else if (!strcmp(binding->type, "localnet") || !strcmp(binding->type, "l2gateway")) { - ofport = u16_to_ofp(simap_get(patch_ofports, + ofport = u16_to_ofp(simap_get(ctx->patch_ofports, binding->logical_port)); if (ofport && binding->tag) { tag = *binding->tag; } } else { - ofport = local_binding_get_lport_ofport(local_bindings, + ofport = local_binding_get_lport_ofport(ctx->local_bindings, binding->logical_port); - if (ofport && !lport_can_bind_on_this_chassis(chassis, binding)) { + if (ofport && !lport_can_bind_on_this_chassis(ctx->chassis, binding)) { /* Even though there is an ofport for this port_binding, it is * requested on different chassis. So ignore this ofport. */ @@ -1693,7 +1667,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, } const struct sbrec_port_binding *localnet_port = - get_localnet_port(local_datapaths, dp_key); + get_localnet_port(ctx->local_datapaths, dp_key); struct ha_chassis_ordered *ha_ch_ordered; ha_ch_ordered = ha_chassis_get_ordered(binding->ha_chassis_group); @@ -1704,7 +1678,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, /* Enforce tunneling while we clone packets to additional chassis b/c * otherwise upstream switch won't flood the packet to both chassis. */ if (localnet_port && !binding->additional_chassis) { - ofport = u16_to_ofp(simap_get(patch_ofports, + ofport = u16_to_ofp(simap_get(ctx->patch_ofports, localnet_port->logical_port)); if (!ofport) { goto out; @@ -1727,11 +1701,11 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, * arrive from containers have a tag (vlan) associated with them. */ - struct zone_ids zone_ids = get_zone_ids(binding, ct_zones); + struct zone_ids zone_ids = get_zone_ids(binding, ctx->ct_zones); /* Pass the parent port binding if the port is a nested * container. */ put_local_common_flows(dp_key, binding, parent_port, &zone_ids, - debug, ofpacts_p, flow_table); + &ctx->debug, ofpacts_p, flow_table); /* Table 0, Priority 150 and 100. * ============================== @@ -1776,15 +1750,15 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, } } - setup_activation_strategy(binding, chassis, dp_key, port_key, + setup_activation_strategy(binding, ctx->chassis, dp_key, port_key, ofport, &zone_ids, flow_table); /* Remember the size with just strip vlan added so far, * as we're going to remove this with ofpbuf_pull() later. 
*/ uint32_t ofpacts_orig_size = ofpacts_p->size; - load_logical_ingress_metadata(binding, &zone_ids, n_encap_ips, - encap_ips, ofpacts_p, true); + load_logical_ingress_metadata(binding, &zone_ids, ctx->n_encap_ips, + ctx->encap_ips, ofpacts_p, true); if (!strcmp(binding->type, "localport")) { /* mark the packet as incoming from a localport */ @@ -1811,8 +1785,9 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, } if (!strcmp(binding->type, "localnet")) { - put_replace_chassis_mac_flows(ct_zones, binding, local_datapaths, - ofpacts_p, ofport, flow_table); + put_replace_chassis_mac_flows(ctx->ct_zones, binding, + ctx->local_datapaths, ofpacts_p, + ofport, flow_table); } /* Table 65, Priority 100. @@ -1841,9 +1816,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, &match, ofpacts_p, &binding->header_.uuid); if (!strcmp(binding->type, "localnet")) { - put_replace_router_port_mac_flows(sbrec_port_binding_by_name, - binding, chassis, active_tunnels, - local_datapaths, ofpacts_p, + put_replace_router_port_mac_flows(ctx, binding, ofpacts_p, ofport, flow_table); } @@ -1855,7 +1828,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, if (!strcmp(binding->type, "localnet")) { /* do not forward traffic from localport to localnet port */ ofpbuf_clear(ofpacts_p); - put_drop(debug, OFTABLE_CHECK_LOOPBACK, ofpacts_p); + put_drop(&ctx->debug, OFTABLE_CHECK_LOOPBACK, ofpacts_p); match_outport_dp_and_port_keys(&match, dp_key, port_key); match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0, MLF_LOCALPORT, MLF_LOCALPORT); @@ -1865,7 +1838,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, /* Drop LOCAL_ONLY traffic leaking through localnet ports. */ ofpbuf_clear(ofpacts_p); - put_drop(debug, OFTABLE_CHECK_LOOPBACK, ofpacts_p); + put_drop(&ctx->debug, OFTABLE_CHECK_LOOPBACK, ofpacts_p); match_outport_dp_and_port_keys(&match, dp_key, port_key); match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0, MLF_LOCAL_ONLY, MLF_LOCAL_ONLY); @@ -1882,7 +1855,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, if (!pb->chassis) { continue; } - if (strcmp(pb->chassis->name, chassis->name)) { + if (strcmp(pb->chassis->name, ctx->chassis->name)) { continue; } @@ -1935,7 +1908,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, binding->header_.uuid.parts[0], &match, ofpacts_p, &binding->header_.uuid); } - } else if (access_type == PORT_LOCALNET && !always_tunnel) { + } else if (access_type == PORT_LOCALNET && !ctx->always_tunnel) { /* Remote port connected by localnet port */ /* Table 40, priority 100. * ======================= @@ -1963,10 +1936,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, binding->header_.uuid.parts[0], &match, ofpacts_p, &binding->header_.uuid); - enforce_tunneling_for_multichassis_ports(ld, binding, chassis, - chassis_tunnels, - mff_ovn_geneve, flow_table, - if_mgr); + enforce_tunneling_for_multichassis_ports(ld, binding, ctx, flow_table); /* No more tunneling to set up. 
*/ goto out; @@ -1988,15 +1958,14 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name, if (redirect_type && !strcasecmp(redirect_type, "bridged")) { put_remote_port_redirect_bridged( - binding, local_datapaths, ld, &match, ofpacts_p, flow_table); + binding, ctx->local_datapaths, ld, &match, ofpacts_p, flow_table); } else if (access_type == PORT_HA_REMOTE) { put_remote_port_redirect_overlay_ha_remote( - binding, ha_ch_ordered, mff_ovn_geneve, port_key, - &match, ofpacts_p, chassis_tunnels, flow_table); + binding, ha_ch_ordered, ctx->mff_ovn_geneve, port_key, + &match, ofpacts_p, ctx->chassis_tunnels, flow_table); } else { put_remote_port_redirect_overlay( - binding, mff_ovn_geneve, port_key, &match, ofpacts_p, - chassis, chassis_tunnels, n_encap_ips, encap_ips, flow_table); + binding, ctx, port_key, &match, ofpacts_p, flow_table); } out: if (ha_ch_ordered) { @@ -2126,19 +2095,13 @@ mc_ofctrl_add_flow(const struct sbrec_multicast_group *mc, } static void -consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, - enum mf_field_id mff_ovn_geneve, - const struct shash *ct_zones, - const struct hmap *local_datapaths, - struct shash *local_bindings, - struct simap *patch_ofports, - const struct sbrec_chassis *chassis, +consider_mc_group(const struct physical_ctx *ctx, const struct sbrec_multicast_group *mc, - const struct hmap *chassis_tunnels, struct ovn_desired_flow_table *flow_table) { uint32_t dp_key = mc->datapath->tunnel_key; - struct local_datapath *ldp = get_local_datapath(local_datapaths, dp_key); + struct local_datapath *ldp = get_local_datapath(ctx->local_datapaths, + dp_key); if (!ldp) { return; } @@ -2192,7 +2155,7 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, continue; } - int zone_id = ct_zone_find_zone(ct_zones, port->logical_port); + int zone_id = ct_zone_find_zone(ctx->ct_zones, port->logical_port); if (zone_id) { put_load(zone_id, MFF_LOG_CT_ZONE, 0, 16, &ofpacts); } @@ -2213,27 +2176,28 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, } } else if (!strcmp(port->type, "localport")) { remote_ports = true; - } else if ((port->chassis == chassis - || is_additional_chassis(port, chassis)) - && (local_binding_get_primary_pb(local_bindings, lport_name) + } else if ((port->chassis == ctx->chassis + || is_additional_chassis(port, ctx->chassis)) + && (local_binding_get_primary_pb(ctx->local_bindings, + lport_name) || !strcmp(port->type, "l3gateway"))) { local_output_pb(port->tunnel_key, &ofpacts); - } else if (simap_contains(patch_ofports, port->logical_port)) { + } else if (simap_contains(ctx->patch_ofports, port->logical_port)) { local_output_pb(port->tunnel_key, &ofpacts); } else if (!strcmp(port->type, "chassisredirect") - && port->chassis == chassis) { + && port->chassis == ctx->chassis) { const char *distributed_port = smap_get(&port->options, "distributed-port"); if (distributed_port) { const struct sbrec_port_binding *distributed_binding - = lport_lookup_by_name(sbrec_port_binding_by_name, + = lport_lookup_by_name(ctx->sbrec_port_binding_by_name, distributed_port); if (distributed_binding && port->datapath == distributed_binding->datapath) { local_output_pb(distributed_binding->tunnel_key, &ofpacts); } } - } else if (!get_localnet_port(local_datapaths, + } else if (!get_localnet_port(ctx->local_datapaths, mc->datapath->tunnel_key)) { /* Add remote chassis only when localnet port not exist, * otherwise multicast will reach remote ports through localnet @@ -2274,9 +2238,10 @@ 
consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts_last); } - fanout_to_chassis(mff_ovn_geneve, &remote_chassis, chassis_tunnels, - mc->datapath, mc->tunnel_key, false, &ofpacts_last); - fanout_to_chassis(mff_ovn_geneve, &vtep_chassis, chassis_tunnels, + fanout_to_chassis(ctx->mff_ovn_geneve, &remote_chassis, + ctx->chassis_tunnels, mc->datapath, mc->tunnel_key, + false, &ofpacts_last); + fanout_to_chassis(ctx->mff_ovn_geneve, &vtep_chassis, ctx->chassis_tunnels, mc->datapath, mc->tunnel_key, true, &ofpacts_last); remote_ports |= (ofpacts_last.size > 0); @@ -2284,7 +2249,8 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts_last); } - bool has_vtep = get_vtep_port(local_datapaths, mc->datapath->tunnel_key); + bool has_vtep = get_vtep_port(ctx->local_datapaths, + mc->datapath->tunnel_key); uint32_t reverse_ramp_flow_index = MC_BUF_START_ID; flow_index = MC_BUF_START_ID; @@ -2314,8 +2280,8 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name, if (port->chassis) { put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &remote_ofpacts); - tunnel_to_chassis(mff_ovn_geneve, port->chassis->name, - chassis_tunnels, mc->datapath, + tunnel_to_chassis(ctx->mff_ovn_geneve, port->chassis->name, + ctx->chassis_tunnels, mc->datapath, port->tunnel_key, &remote_ofpacts); } } else if (!strcmp(port->type, "localport")) { @@ -2362,19 +2328,7 @@ physical_eval_port_binding(struct physical_ctx *p_ctx, { struct ofpbuf ofpacts; ofpbuf_init(&ofpacts, 0); - consider_port_binding(p_ctx->sbrec_port_binding_by_name, - p_ctx->mff_ovn_geneve, p_ctx->ct_zones, - p_ctx->active_tunnels, - p_ctx->local_datapaths, - p_ctx->local_bindings, - p_ctx->patch_ofports, - p_ctx->chassis_tunnels, - pb, p_ctx->chassis, &p_ctx->debug, - p_ctx->if_mgr, - p_ctx->n_encap_ips, - p_ctx->encap_ips, - p_ctx->always_tunnel, - flow_table, &ofpacts); + consider_port_binding(p_ctx, pb, flow_table, &ofpacts); ofpbuf_uninit(&ofpacts); } @@ -2454,13 +2408,7 @@ physical_handle_mc_group_changes(struct physical_ctx *p_ctx, if (!sbrec_multicast_group_is_new(mc)) { ofctrl_remove_flows(flow_table, &mc->header_.uuid); } - consider_mc_group(p_ctx->sbrec_port_binding_by_name, - p_ctx->mff_ovn_geneve, p_ctx->ct_zones, - p_ctx->local_datapaths, p_ctx->local_bindings, - p_ctx->patch_ofports, - p_ctx->chassis, mc, - p_ctx->chassis_tunnels, - flow_table); + consider_mc_group(p_ctx, mc, flow_table); } } } @@ -2486,18 +2434,7 @@ physical_run(struct physical_ctx *p_ctx, * 64 for logical-to-physical translation. */ const struct sbrec_port_binding *binding; SBREC_PORT_BINDING_TABLE_FOR_EACH (binding, p_ctx->port_binding_table) { - consider_port_binding(p_ctx->sbrec_port_binding_by_name, - p_ctx->mff_ovn_geneve, p_ctx->ct_zones, - p_ctx->active_tunnels, p_ctx->local_datapaths, - p_ctx->local_bindings, - p_ctx->patch_ofports, - p_ctx->chassis_tunnels, binding, - p_ctx->chassis, &p_ctx->debug, - p_ctx->if_mgr, - p_ctx->n_encap_ips, - p_ctx->encap_ips, - p_ctx->always_tunnel, - flow_table, &ofpacts); + consider_port_binding(p_ctx, binding, flow_table, &ofpacts); } /* Default flow for CT_ZONE_LOOKUP Table. */ @@ -2511,12 +2448,7 @@ physical_run(struct physical_ctx *p_ctx, /* Handle output to multicast groups, in tables 40 and 41. 
*/ const struct sbrec_multicast_group *mc; SBREC_MULTICAST_GROUP_TABLE_FOR_EACH (mc, p_ctx->mc_group_table) { - consider_mc_group(p_ctx->sbrec_port_binding_by_name, - p_ctx->mff_ovn_geneve, p_ctx->ct_zones, - p_ctx->local_datapaths, p_ctx->local_bindings, - p_ctx->patch_ofports, p_ctx->chassis, - mc, p_ctx->chassis_tunnels, - flow_table); + consider_mc_group(p_ctx, mc, flow_table); } /* Table 0, priority 100.

From patchwork Tue Dec 3 11:08:44 2024
X-Patchwork-Submitter: Ales Musil
X-Patchwork-Id: 2017700
From: Ales Musil <amusil@redhat.com>
To: dev@openvswitch.org
Date: Tue, 3 Dec 2024 12:08:44 +0100
Message-ID: <20241203110853.201377-3-amusil@redhat.com>
In-Reply-To: <20241203110853.201377-1-amusil@redhat.com>
References: <20241203110853.201377-1-amusil@redhat.com>
Subject: [ovs-dev] [PATCH ovn 2/6] physical: Avoid most of strcmp for port binding type.

The port binding type was compared everywhere via strcmp(). That would be fine for simple if/else chains; however, in some instances the code performed this comparison multiple times per function call. Convert the type into an enum and use enum comparison instead.
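As a rough, standalone illustration of this change: classify the type string once into an enum, then make every later check an integer comparison. The enum values and helper below are invented for the sketch; the real code uses get_lport_type() and enum en_lport_type from the OVN controller, which cover more types.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for enum en_lport_type. */
enum lport_type_sketch {
    LPS_VIF,
    LPS_PATCH,
    LPS_L3GATEWAY,
    LPS_LOCALNET,
    LPS_VTEP,
    LPS_UNKNOWN,
};

/* One pass of string comparisons per port binding... */
static enum lport_type_sketch
classify_lport_type(const char *type)
{
    if (!type || !type[0]) {
        return LPS_VIF;             /* empty type behaves as a plain VIF */
    } else if (!strcmp(type, "patch")) {
        return LPS_PATCH;
    } else if (!strcmp(type, "l3gateway")) {
        return LPS_L3GATEWAY;
    } else if (!strcmp(type, "localnet")) {
        return LPS_LOCALNET;
    } else if (!strcmp(type, "vtep")) {
        return LPS_VTEP;
    }
    return LPS_UNKNOWN;
}

int
main(void)
{
    enum lport_type_sketch type = classify_lport_type("localnet");

    /* ...so each later check is a cheap enum comparison instead of
     * another strcmp() on the same string. */
    bool needs_vlan_handling = type == LPS_LOCALNET;
    bool is_vtep_port = type == LPS_VTEP;

    printf("localnet=%d vtep=%d\n", needs_vlan_handling, is_vtep_port);
    return 0;
}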
Signed-off-by: Ales Musil Acked-by: Lorenzo Bianconi --- controller/physical.c | 95 ++++++++++++++++++++++++------------------- 1 file changed, 53 insertions(+), 42 deletions(-) diff --git a/controller/physical.c b/controller/physical.c index b3da527ae..1a3e7e20b 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -242,13 +242,14 @@ get_zone_ids(const struct sbrec_port_binding *binding, static void put_remote_port_redirect_bridged(const struct sbrec_port_binding *binding, + const enum en_lport_type type, const struct hmap *local_datapaths, struct local_datapath *ld, struct match *match, struct ofpbuf *ofpacts_p, struct ovn_desired_flow_table *flow_table) { - if (strcmp(binding->type, "chassisredirect")) { + if (type != LP_CHASSISREDIRECT) { /* bridged based redirect is only supported for chassisredirect * type remote ports. */ return; @@ -383,6 +384,7 @@ get_remote_tunnels(const struct sbrec_port_binding *binding, static void put_remote_port_redirect_overlay(const struct sbrec_port_binding *binding, + const enum en_lport_type type, const struct physical_ctx *ctx, uint32_t port_key, struct match *match, @@ -398,7 +400,7 @@ put_remote_port_redirect_overlay(const struct sbrec_port_binding *binding, (uint32_t) 0xFFFF << 16); struct ovs_list *tuns = get_remote_tunnels(binding, ctx, encap_ip); if (!ovs_list_is_empty(tuns)) { - bool is_vtep_port = !strcmp(binding->type, "vtep"); + bool is_vtep_port = type == LP_VTEP; /* rewrite MFF_IN_PORT to bypass OpenFlow loopback check for ARP/ND * responder in L3 networks. */ if (is_vtep_port) { @@ -431,6 +433,7 @@ put_remote_port_redirect_overlay(const struct sbrec_port_binding *binding, static void put_remote_port_redirect_overlay_ha_remote( const struct sbrec_port_binding *binding, + const enum en_lport_type type, struct ha_chassis_ordered *ha_ch_ordered, enum mf_field_id mff_ovn_geneve, uint32_t port_key, struct match *match, struct ofpbuf *ofpacts_p, @@ -471,8 +474,7 @@ put_remote_port_redirect_overlay_ha_remote( } put_encapsulation(mff_ovn_geneve, tun, binding->datapath, port_key, - !strcmp(binding->type, "vtep"), - ofpacts_p); + type == LP_VTEP, ofpacts_p); /* Output to tunnels with active/backup */ struct ofpact_bundle *bundle = ofpact_put_BUNDLE(ofpacts_p); @@ -1412,6 +1414,7 @@ static void enforce_tunneling_for_multichassis_ports( struct local_datapath *ld, const struct sbrec_port_binding *binding, + const enum en_lport_type type, const struct physical_ctx *ctx, struct ovn_desired_flow_table *flow_table) { @@ -1435,7 +1438,7 @@ enforce_tunneling_for_multichassis_ports( struct ofpbuf ofpacts; ofpbuf_init(&ofpacts, 0); - bool is_vtep_port = !strcmp(binding->type, "vtep"); + bool is_vtep_port = type == LP_VTEP; /* rewrite MFF_IN_PORT to bypass OpenFlow loopback check for ARP/ND * responder in L3 networks. */ if (is_vtep_port) { @@ -1471,6 +1474,7 @@ enforce_tunneling_for_multichassis_ports( static void consider_port_binding(const struct physical_ctx *ctx, const struct sbrec_port_binding *binding, + const enum en_lport_type type, struct ovn_desired_flow_table *flow_table, struct ofpbuf *ofpacts_p) { @@ -1481,7 +1485,7 @@ consider_port_binding(const struct physical_ctx *ctx, return; } - if (get_lport_type(binding) == LP_VIF) { + if (type == LP_VIF) { /* Table 80, priority 100. 
* ======================= * @@ -1502,9 +1506,8 @@ consider_port_binding(const struct physical_ctx *ctx, } struct match match; - if (!strcmp(binding->type, "patch") - || (!strcmp(binding->type, "l3gateway") - && binding->chassis == ctx->chassis)) { + if (type == LP_PATCH || + (type == LP_L3GATEWAY && binding->chassis == ctx->chassis)) { const struct sbrec_port_binding *peer = get_binding_peer( ctx->sbrec_port_binding_by_name, binding); @@ -1543,7 +1546,7 @@ consider_port_binding(const struct physical_ctx *ctx, &match, ofpacts_p, &binding->header_.uuid); return; } - if (!strcmp(binding->type, "chassisredirect") + if (type == LP_CHASSISREDIRECT && (binding->chassis == ctx->chassis || ha_chassis_group_is_active(binding->ha_chassis_group, ctx->active_tunnels, ctx->chassis))) { @@ -1647,8 +1650,7 @@ consider_port_binding(const struct physical_ctx *ctx, return; } } - } else if (!strcmp(binding->type, "localnet") - || !strcmp(binding->type, "l2gateway")) { + } else if (type == LP_LOCALNET || type == LP_L2GATEWAY) { ofport = u16_to_ofp(simap_get(ctx->patch_ofports, binding->logical_port)); @@ -1728,8 +1730,7 @@ consider_port_binding(const struct physical_ctx *ctx, /* Match a VLAN tag and strip it, including stripping priority tags * (e.g. VLAN ID 0). In the latter case we'll add a second flow * for frames that lack any 802.1Q header later. */ - if (tag || !strcmp(binding->type, "localnet") - || !strcmp(binding->type, "l2gateway")) { + if (tag || type == LP_LOCALNET || type == LP_L2GATEWAY) { if (nested_container) { /* When a packet comes from a container sitting behind a * parent_port, we should let it loopback to other containers @@ -1760,7 +1761,7 @@ consider_port_binding(const struct physical_ctx *ctx, load_logical_ingress_metadata(binding, &zone_ids, ctx->n_encap_ips, ctx->encap_ips, ofpacts_p, true); - if (!strcmp(binding->type, "localport")) { + if (type == LP_LOCALPORT) { /* mark the packet as incoming from a localport */ put_load(1, MFF_LOG_FLAGS, MLF_LOCALPORT_BIT, 1, ofpacts_p); } @@ -1771,8 +1772,7 @@ consider_port_binding(const struct physical_ctx *ctx, tag ? 150 : 100, binding->header_.uuid.parts[0], &match, ofpacts_p, &binding->header_.uuid); - if (!tag && (!strcmp(binding->type, "localnet") - || !strcmp(binding->type, "l2gateway"))) { + if (!tag && (type == LP_LOCALNET || type == LP_L2GATEWAY)) { /* Add a second flow for frames that lack any 802.1Q * header. For these, drop the OFPACT_STRIP_VLAN @@ -1784,7 +1784,7 @@ consider_port_binding(const struct physical_ctx *ctx, &binding->header_.uuid); } - if (!strcmp(binding->type, "localnet")) { + if (type == LP_LOCALNET) { put_replace_chassis_mac_flows(ctx->ct_zones, binding, ctx->local_datapaths, ofpacts_p, ofport, flow_table); @@ -1815,7 +1815,7 @@ consider_port_binding(const struct physical_ctx *ctx, binding->header_.uuid.parts[0], &match, ofpacts_p, &binding->header_.uuid); - if (!strcmp(binding->type, "localnet")) { + if (type == LP_LOCALNET) { put_replace_router_port_mac_flows(ctx, binding, ofpacts_p, ofport, flow_table); } @@ -1825,7 +1825,7 @@ consider_port_binding(const struct physical_ctx *ctx, * * Do not forward local traffic from a localport to a localnet port. */ - if (!strcmp(binding->type, "localnet")) { + if (type == LP_LOCALNET) { /* do not forward traffic from localport to localnet port */ ofpbuf_clear(ofpacts_p); put_drop(&ctx->debug, OFTABLE_CHECK_LOOPBACK, ofpacts_p); @@ -1897,7 +1897,7 @@ consider_port_binding(const struct physical_ctx *ctx, * ports are present on every hypervisor. 
Traffic that originates at * one should never go over a tunnel to a remote hypervisor, * so resubmit them to table 40 for local delivery. */ - if (!strcmp(binding->type, "localport")) { + if (type == LP_LOCALPORT) { ofpbuf_clear(ofpacts_p); put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p); match_init_catchall(&match); @@ -1936,7 +1936,8 @@ consider_port_binding(const struct physical_ctx *ctx, binding->header_.uuid.parts[0], &match, ofpacts_p, &binding->header_.uuid); - enforce_tunneling_for_multichassis_ports(ld, binding, ctx, flow_table); + enforce_tunneling_for_multichassis_ports(ld, binding, type, + ctx, flow_table); /* No more tunneling to set up. */ goto out; @@ -1958,14 +1959,15 @@ consider_port_binding(const struct physical_ctx *ctx, if (redirect_type && !strcasecmp(redirect_type, "bridged")) { put_remote_port_redirect_bridged( - binding, ctx->local_datapaths, ld, &match, ofpacts_p, flow_table); + binding, type, ctx->local_datapaths, ld, + &match, ofpacts_p, flow_table); } else if (access_type == PORT_HA_REMOTE) { put_remote_port_redirect_overlay_ha_remote( - binding, ha_ch_ordered, ctx->mff_ovn_geneve, port_key, + binding, type, ha_ch_ordered, ctx->mff_ovn_geneve, port_key, &match, ofpacts_p, ctx->chassis_tunnels, flow_table); } else { put_remote_port_redirect_overlay( - binding, ctx, port_key, &match, ofpacts_p, flow_table); + binding, type, ctx, port_key, &match, ofpacts_p, flow_table); } out: if (ha_ch_ordered) { @@ -2146,6 +2148,7 @@ consider_mc_group(const struct physical_ctx *ctx, for (size_t i = 0; i < mc->n_ports; i++) { struct sbrec_port_binding *port = mc->ports[i]; + enum en_lport_type type = get_lport_type(port); if (port->datapath != mc->datapath) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); @@ -2163,28 +2166,28 @@ consider_mc_group(const struct physical_ctx *ctx, const char *lport_name = (port->parent_port && *port->parent_port) ? 
port->parent_port : port->logical_port; - if (!strcmp(port->type, "patch")) { + if (type == LP_PATCH) { if (ldp->is_transit_switch) { local_output_pb(port->tunnel_key, &ofpacts); } else { remote_ramp_ports = true; remote_ports = true; } - } else if (!strcmp(port->type, "remote")) { + } else if (type == LP_REMOTE) { if (port->chassis) { remote_ports = true; } - } else if (!strcmp(port->type, "localport")) { + } else if (type == LP_LOCALPORT) { remote_ports = true; } else if ((port->chassis == ctx->chassis || is_additional_chassis(port, ctx->chassis)) && (local_binding_get_primary_pb(ctx->local_bindings, lport_name) - || !strcmp(port->type, "l3gateway"))) { + || type == LP_L3GATEWAY)) { local_output_pb(port->tunnel_key, &ofpacts); } else if (simap_contains(ctx->patch_ofports, port->logical_port)) { local_output_pb(port->tunnel_key, &ofpacts); - } else if (!strcmp(port->type, "chassisredirect") + } else if (type == LP_CHASSISREDIRECT && port->chassis == ctx->chassis) { const char *distributed_port = smap_get(&port->options, "distributed-port"); @@ -2262,6 +2265,7 @@ consider_mc_group(const struct physical_ctx *ctx, for (size_t i = 0; remote_ports && i < mc->n_ports; i++) { struct sbrec_port_binding *port = mc->ports[i]; + enum en_lport_type type = get_lport_type(port); if (port->datapath != mc->datapath) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); @@ -2271,12 +2275,12 @@ consider_mc_group(const struct physical_ctx *ctx, continue; } - if (!strcmp(port->type, "patch")) { + if (type == LP_PATCH) { if (!ldp->is_transit_switch) { local_output_pb(port->tunnel_key, &remote_ofpacts); local_output_pb(port->tunnel_key, &remote_ofpacts_ramp); } - } if (!strcmp(port->type, "remote")) { + } if (type == LP_REMOTE) { if (port->chassis) { put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &remote_ofpacts); @@ -2284,7 +2288,7 @@ consider_mc_group(const struct physical_ctx *ctx, ctx->chassis_tunnels, mc->datapath, port->tunnel_key, &remote_ofpacts); } - } else if (!strcmp(port->type, "localport")) { + } else if (type == LP_LOCALPORT) { local_output_pb(port->tunnel_key, &remote_ofpacts); } @@ -2324,11 +2328,12 @@ consider_mc_group(const struct physical_ctx *ctx, static void physical_eval_port_binding(struct physical_ctx *p_ctx, const struct sbrec_port_binding *pb, + const enum en_lport_type type, struct ovn_desired_flow_table *flow_table) { struct ofpbuf ofpacts; ofpbuf_init(&ofpacts, 0); - consider_port_binding(p_ctx, pb, flow_table, &ofpacts); + consider_port_binding(p_ctx, pb, type, flow_table, &ofpacts); ofpbuf_uninit(&ofpacts); } @@ -2337,7 +2342,8 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb, bool removed, struct physical_ctx *p_ctx, struct ovn_desired_flow_table *flow_table) { - if (!strcmp(pb->type, "vtep")) { + enum en_lport_type type = get_lport_type(pb); + if (type == LP_VTEP) { /* Cannot handle changes to vtep lports (yet). */ return false; } @@ -2347,14 +2353,16 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb, struct local_datapath *ldp = get_local_datapath(p_ctx->local_datapaths, pb->datapath->tunnel_key); - if (!strcmp(pb->type, "external")) { + if (type == LP_EXTERNAL) { /* External lports have a dependency on the localnet port. * We need to remove the flows of the localnet port as well * and re-consider adding the flows for it. 
*/ if (ldp && ldp->localnet_port) { ofctrl_remove_flows(flow_table, &ldp->localnet_port->header_.uuid); - physical_eval_port_binding(p_ctx, ldp->localnet_port, flow_table); + physical_eval_port_binding(p_ctx, ldp->localnet_port, + get_lport_type(ldp->localnet_port), + flow_table); } } @@ -2364,12 +2372,13 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb, } if (!removed) { - physical_eval_port_binding(p_ctx, pb, flow_table); - if (!strcmp(pb->type, "patch")) { + physical_eval_port_binding(p_ctx, pb, type, flow_table); + if (type == LP_PATCH) { const struct sbrec_port_binding *peer = get_binding_peer(p_ctx->sbrec_port_binding_by_name, pb); if (peer) { - physical_eval_port_binding(p_ctx, peer, flow_table); + physical_eval_port_binding(p_ctx, peer, get_lport_type(peer), + flow_table); } } } @@ -2391,7 +2400,8 @@ physical_multichassis_reprocess(const struct sbrec_port_binding *pb, SBREC_PORT_BINDING_FOR_EACH_EQUAL (port, target, p_ctx->sbrec_port_binding_by_datapath) { ofctrl_remove_flows(flow_table, &port->header_.uuid); - physical_eval_port_binding(p_ctx, port, flow_table); + physical_eval_port_binding(p_ctx, port, get_lport_type(port), + flow_table); } sbrec_port_binding_index_destroy_row(target); } @@ -2434,7 +2444,8 @@ physical_run(struct physical_ctx *p_ctx, * 64 for logical-to-physical translation. */ const struct sbrec_port_binding *binding; SBREC_PORT_BINDING_TABLE_FOR_EACH (binding, p_ctx->port_binding_table) { - consider_port_binding(p_ctx, binding, flow_table, &ofpacts); + consider_port_binding(p_ctx, binding, get_lport_type(binding), + flow_table, &ofpacts); } /* Default flow for CT_ZONE_LOOKUP Table. */ From patchwork Tue Dec 3 11:08:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ales Musil X-Patchwork-Id: 2017701 X-Patchwork-Delegate: nusiddiq@redhat.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=fail reason="signature verification failed" (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=B0HFTdTp; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=2605:bc80:3010::133; helo=smtp2.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org) Received: from smtp2.osuosl.org (smtp2.osuosl.org [IPv6:2605:bc80:3010::133]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Y2dFk21tzz1yQZ for ; Tue, 3 Dec 2024 22:09:14 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by smtp2.osuosl.org (Postfix) with ESMTP id ECDB240AC9; Tue, 3 Dec 2024 11:09:11 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp2.osuosl.org ([127.0.0.1]) by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id GXVwSpL5KQ_K; Tue, 3 Dec 2024 11:09:08 +0000 (UTC) X-Comment: SPF check N/A for local connections - client-ip=2605:bc80:3010:104::8cd3:938; helo=lists.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver= DKIM-Filter: OpenDKIM Filter v2.11.0 smtp2.osuosl.org 0B2EA40A5E Authentication-Results: smtp2.osuosl.org; dkim=fail reason="signature verification failed" (1024-bit key) 
From: Ales Musil <amusil@redhat.com>
To: dev@openvswitch.org
Date: Tue, 3 Dec 2024 12:08:45 +0100
Message-ID:
<20241203110853.201377-4-amusil@redhat.com> In-Reply-To: <20241203110853.201377-1-amusil@redhat.com> References: <20241203110853.201377-1-amusil@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: QuiNxTV-xbHO-JVa3yBjYrWLwyfAllhHk0YzF6tsroY_1733224140 X-Mimecast-Originator: redhat.com Subject: [ovs-dev] [PATCH ovn 3/6] physical: Allow l3gateway and patch port to be peers. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" Allow the l3gateway and patch port to be peers for connected LRPs, previously it was only allowed to connect two ports of the same type. This allows the CMS to connect any router with GW router which is required for the concept of transit routers. Signed-off-by: Ales Musil --- controller/physical.c | 32 ++++++++++---- tests/ovn-controller.at | 92 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 116 insertions(+), 8 deletions(-) diff --git a/controller/physical.c b/controller/physical.c index 1a3e7e20b..adf37600d 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -1067,7 +1067,8 @@ load_logical_ingress_metadata(const struct sbrec_port_binding *binding, static const struct sbrec_port_binding * get_binding_peer(struct ovsdb_idl_index *sbrec_port_binding_by_name, - const struct sbrec_port_binding *binding) + const struct sbrec_port_binding *binding, + const enum en_lport_type type) { const char *peer_name = smap_get(&binding->options, "peer"); if (!peer_name) { @@ -1076,9 +1077,17 @@ get_binding_peer(struct ovsdb_idl_index *sbrec_port_binding_by_name, const struct sbrec_port_binding *peer = lport_lookup_by_name( sbrec_port_binding_by_name, peer_name); - if (!peer || strcmp(peer->type, binding->type)) { + if (!peer) { return NULL; } + + enum en_lport_type peer_type = get_lport_type(peer); + if (type != peer_type && + !((type == LP_L3GATEWAY && peer_type == LP_PATCH) || + (type == LP_PATCH && peer_type == LP_L3GATEWAY))) { + return NULL; + } + const char *peer_peer_name = smap_get(&peer->options, "peer"); if (!peer_peer_name || strcmp(peer_peer_name, binding->logical_port)) { return NULL; @@ -1087,6 +1096,15 @@ get_binding_peer(struct ovsdb_idl_index *sbrec_port_binding_by_name, return peer; } +static bool +physical_should_eval_peer_port(const struct sbrec_port_binding *binding, + const struct sbrec_chassis *chassis, + const enum en_lport_type type) +{ + return type == LP_PATCH || + (type == LP_L3GATEWAY && binding->chassis == chassis); +} + enum access_type { PORT_LOCAL = 0, PORT_LOCALNET, @@ -1506,11 +1524,9 @@ consider_port_binding(const struct physical_ctx *ctx, } struct match match; - if (type == LP_PATCH || - (type == LP_L3GATEWAY && binding->chassis == ctx->chassis)) { - + if (physical_should_eval_peer_port(binding, ctx->chassis, type)) { const struct sbrec_port_binding *peer = get_binding_peer( - ctx->sbrec_port_binding_by_name, binding); + ctx->sbrec_port_binding_by_name, binding, type); if (!peer) { return; } @@ -2373,9 +2389,9 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb, if (!removed) { physical_eval_port_binding(p_ctx, pb, type, flow_table); - if (type == LP_PATCH) { + if (physical_should_eval_peer_port(pb, p_ctx->chassis, type)) { const struct sbrec_port_binding *peer = - get_binding_peer(p_ctx->sbrec_port_binding_by_name, pb); + 
get_binding_peer(p_ctx->sbrec_port_binding_by_name, pb, type); if (peer) { physical_eval_port_binding(p_ctx, peer, get_lport_type(peer), flow_table); diff --git a/tests/ovn-controller.at b/tests/ovn-controller.at index fb5923d48..b2bb6e2d0 100644 --- a/tests/ovn-controller.at +++ b/tests/ovn-controller.at @@ -3444,3 +3444,95 @@ OVS_WAIT_FOR_OUTPUT([ovs-vsctl show | tail -n +2], [], [dnl OVS_APP_EXIT_AND_WAIT([ovn-controller]) OVS_APP_EXIT_AND_WAIT([ovsdb-server]) AT_CLEANUP + +AT_SETUP([ovn-controller - LR peer ports combination]) +AT_KEYWORDS([ovn]) +ovn_start + +net_add n1 +sim_add hv1 +ovs-vsctl add-br br-phys +ovn_attach n1 br-phys 192.168.0.20 + +check ovn-nbctl ls-add ls0 +check ovn-nbctl ls-add ls1 + +check ovn-nbctl lsp-add ls0 vif0 +check ovn-nbctl lsp-add ls0 vif1 + +check ovn-nbctl lr-add lr0 +check ovn-nbctl lr-add lr1 + +check ovn-nbctl lrp-add lr0 lr0-ls0 00:00:00:00:20:00 192.168.20.1/24 +check ovn-nbctl lsp-add ls0 ls0-lr0 \ + -- lsp-set-type ls0-lr0 router \ + -- lsp-set-addresses ls0-lr0 router \ + -- lsp-set-options ls0-lr0 router-port=lr0-ls0 + +check ovn-nbctl lrp-add lr0 lr1-ls1 00:00:00:00:30:00 192.168.30.1/24 +check ovn-nbctl lsp-add ls0 ls1-lr1 \ + -- lsp-set-type ls1-lr1 router \ + -- lsp-set-addresses ls1-lr1 router \ + -- lsp-set-options ls1-lr1 router-port=lr1-ls1 + +check ovs-vsctl -- add-port br-int vif0 \ + -- set Interface vif0 external-ids:iface-id=vif0 + +check ovs-vsctl -- add-port br-int vif1 \ + -- set Interface vif1 external-ids:iface-id=vif1 + +check ovn-nbctl lrp-add lr0 lr0-lr1 00:00:00:00:30:01 192.168.30.1/31 peer=lr1-lr0 +check ovn-nbctl lrp-add lr1 lr1-lr0 00:00:00:00:30:02 192.168.30.2/31 peer=lr0-lr1 + +wait_for_ports_up +check ovn-nbctl --wait=hv sync + +pb_cookie() { + name=$1 + fetch_column port_binding _uuid logical_port=$name |\ + cut -d '-' -f 1 | tr -d '\n' | sed 's/^0\{0,8\}//' +} + +lr0_peer_cookie="0x$(pb_cookie lr0-lr1)" +lr1_peer_cookie="0x$(pb_cookie lr1-lr0)" + +# Patch to patch +check_row_count Port_Binding 1 logical_port="lr0-lr1" type=patch +check_row_count Port_Binding 1 logical_port="lr1-lr0" type=patch +ovs-ofctl dump-flows br-int table=OFTABLE_LOG_TO_PHY > log_to_phy_flows +AT_CHECK([grep -c "cookie=$lr0_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) +AT_CHECK([grep -c "cookie=$lr1_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) + +# L3 gateway to patch +check ovn-nbctl --wait=hv set Logical_Router lr0 options:chassis=hv1 + +check_row_count Port_Binding 1 logical_port="lr0-lr1" type=l3gateway +check_row_count Port_Binding 1 logical_port="lr1-lr0" type=patch +ovs-ofctl dump-flows br-int table=OFTABLE_LOG_TO_PHY > log_to_phy_flows +AT_CHECK([grep -c "cookie=$lr0_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) +AT_CHECK([grep -c "cookie=$lr1_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) + +# Patch to L3 gateway +check ovn-nbctl remove Logical_Router lr0 options chassis +check ovn-nbctl --wait=hv set Logical_Router lr1 options:chassis=hv1 + +check_row_count Port_Binding 1 logical_port="lr0-lr1" type=patch +check_row_count Port_Binding 1 logical_port="lr1-lr0" type=l3gateway +ovs-ofctl dump-flows br-int table=OFTABLE_LOG_TO_PHY > log_to_phy_flows +AT_CHECK([grep -c "cookie=$lr0_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) +AT_CHECK([grep -c "cookie=$lr1_peer_cookie," log_to_phy_flows], [0], [dnl +1 +]) + +OVN_CLEANUP([hv1]) +AT_CLEANUP From patchwork Tue Dec 3 11:08:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ales Musil 
X-Patchwork-Id: 2017702 X-Patchwork-Delegate: nusiddiq@redhat.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=fail reason="signature verification failed" (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=XZArg9F8; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=2605:bc80:3010::137; helo=smtp4.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org) Received: from smtp4.osuosl.org (smtp4.osuosl.org [IPv6:2605:bc80:3010::137]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Y2dFm20FSz1yQZ for ; Tue, 3 Dec 2024 22:09:16 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by smtp4.osuosl.org (Postfix) with ESMTP id 3EAB440967; Tue, 3 Dec 2024 11:09:14 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp4.osuosl.org ([127.0.0.1]) by localhost (smtp4.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id NGUdI6o9TVP6; Tue, 3 Dec 2024 11:09:12 +0000 (UTC) X-Comment: SPF check N/A for local connections - client-ip=2605:bc80:3010:104::8cd3:938; helo=lists.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver= DKIM-Filter: OpenDKIM Filter v2.11.0 smtp4.osuosl.org 02E7C4092B Authentication-Results: smtp4.osuosl.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=XZArg9F8 Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [IPv6:2605:bc80:3010:104::8cd3:938]) by smtp4.osuosl.org (Postfix) with ESMTPS id 02E7C4092B; Tue, 3 Dec 2024 11:09:11 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id B0208C087D; Tue, 3 Dec 2024 11:09:11 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from smtp4.osuosl.org (smtp4.osuosl.org [140.211.166.137]) by lists.linuxfoundation.org (Postfix) with ESMTP id 2EDECC08BE for ; Tue, 3 Dec 2024 11:09:07 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp4.osuosl.org (Postfix) with ESMTP id 38F3D4083C for ; Tue, 3 Dec 2024 11:09:06 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp4.osuosl.org ([127.0.0.1]) by localhost (smtp4.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id u4u2T4J6YxUA for ; Tue, 3 Dec 2024 11:09:05 +0000 (UTC) Received-SPF: Pass (mailfrom) identity=mailfrom; client-ip=170.10.129.124; helo=us-smtp-delivery-124.mimecast.com; envelope-from=amusil@redhat.com; receiver= DMARC-Filter: OpenDMARC Filter v1.4.2 smtp4.osuosl.org D3DFF408C8 Authentication-Results: smtp4.osuosl.org; dmarc=pass (p=none dis=none) header.from=redhat.com DKIM-Filter: OpenDKIM Filter v2.11.0 smtp4.osuosl.org D3DFF408C8 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by smtp4.osuosl.org (Postfix) with ESMTPS id D3DFF408C8 for ; Tue, 3 Dec 2024 11:09:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1733224143; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=U2qGx305IxuB2ltCQB5how/G8eNTowA7qdFmD7erlns=; b=XZArg9F8QuBXLt93GxshPalLIJ4FylkmkCp/P0daEKAeBiXXkiYqKOMk1x150cIe2Z7p5Q eIjhL5gH0l9T06iGQt7L5QiA7z09qDg9sVDIku92U7YfLnEdIwDQAtLs/wVAuGtZEj2J3Q L50RIB8Ke2gh8SHCRT/aLlnBQsFmfiw= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-407-vVB5Xq3JPyOTNcpHt86HIg-1; Tue, 03 Dec 2024 06:09:02 -0500 X-MC-Unique: vVB5Xq3JPyOTNcpHt86HIg-1 X-Mimecast-MFC-AGG-ID: vVB5Xq3JPyOTNcpHt86HIg Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 9307D19560AB for ; Tue, 3 Dec 2024 11:09:01 +0000 (UTC) Received: from amusil.brq.redhat.com (unknown [10.43.17.32]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id AD9B81956054; Tue, 3 Dec 2024 11:09:00 +0000 (UTC) From: Ales Musil To: dev@openvswitch.org Date: Tue, 3 Dec 2024 12:08:46 +0100 Message-ID: <20241203110853.201377-5-amusil@redhat.com> In-Reply-To: <20241203110853.201377-1-amusil@redhat.com> References: <20241203110853.201377-1-amusil@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: 231mOKQYAG9mFKlmnnqdxyVyE_-OkaQqmtJZpYyVVBE_1733224141 X-Mimecast-Originator: redhat.com Subject: [ovs-dev] [PATCH ovn 4/6] northd: Introduce the concept of transit routers. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" In order to allow the concept of transit router set the port binding as "remote" and assign the chassis whenever the requested-chassis for LRP is remote chassis. The only supported requested-chassis for now is remote, it might be extended in the future if needed. The transit router has very similar behavior to distributed router that it will lose the CT state between chassis, in this case between AZs. Reported-at: https://issues.redhat.com/browse/FDP-872 Signed-off-by: Ales Musil Acked-by: Lorenzo Bianconi --- NEWS | 3 +++ lib/ovn-util.c | 2 +- northd/northd.c | 31 +++++++++++++++++++++++++-- northd/northd.h | 4 ++++ ovn-nb.xml | 43 ++++++++++++++++++++++++++++++++++++++ tests/ovn-northd.at | 51 +++++++++++++++++++++++++++++++++++++++++++++ 6 files changed, 131 insertions(+), 3 deletions(-) diff --git a/NEWS b/NEWS index da3aba739..0ef98a305 100644 --- a/NEWS +++ b/NEWS @@ -4,6 +4,9 @@ Post v24.09.0 hash (with specified hash fields) for ECMP routes while choosing nexthop. - ovn-ic: Add support for route tag to prevent route learning. + - Add concept of Transit Routers, the routers not allow CMS to specify + options:requested-chassis, if the chassis is remote the port will behave + as remote port. 
OVN v24.09.0 - 13 Sep 2024 -------------------------- diff --git a/lib/ovn-util.c b/lib/ovn-util.c index 1ad347419..a1c8d8fb0 100644 --- a/lib/ovn-util.c +++ b/lib/ovn-util.c @@ -1226,6 +1226,7 @@ is_pb_router_type(const struct sbrec_port_binding *pb) case LP_CHASSISREDIRECT: case LP_L3GATEWAY: case LP_L2GATEWAY: + case LP_REMOTE: return true; case LP_VIF: @@ -1233,7 +1234,6 @@ is_pb_router_type(const struct sbrec_port_binding *pb) case LP_VIRTUAL: case LP_LOCALNET: case LP_LOCALPORT: - case LP_REMOTE: case LP_VTEP: case LP_EXTERNAL: case LP_UNKNOWN: diff --git a/northd/northd.c b/northd/northd.c index 3a488ff3d..f3ef090f4 100644 --- a/northd/northd.c +++ b/northd/northd.c @@ -3082,12 +3082,29 @@ ovn_port_update_sbrec(struct ovsdb_idl_txn *ovnsb_txn, /* If the router is for l3 gateway, it resides on a chassis * and its port type is "l3gateway". */ - const char *chassis_name = smap_get(&op->od->nbr->options, "chassis"); + const char *lr_chassis = smap_get(&op->od->nbr->options, "chassis"); + + /* If the LRP has requested chassis, that is remote, set the type to + * remote and add the appropriate chassis. */ + const char *req_chassis = smap_get(&op->nbrp->options, + "requested-chassis"); if (is_cr_port(op)) { + sbrec_port_binding_set_requested_chassis(op->sb, NULL); sbrec_port_binding_set_type(op->sb, "chassisredirect"); - } else if (chassis_name) { + } else if (lr_chassis) { + sbrec_port_binding_set_requested_chassis(op->sb, NULL); sbrec_port_binding_set_type(op->sb, "l3gateway"); + } else if (req_chassis) { + const struct sbrec_chassis *tr_chassis = chassis_lookup( + sbrec_chassis_by_name, sbrec_chassis_by_hostname, req_chassis); + bool trp = tr_chassis && smap_get_bool(&tr_chassis->other_config, + "is-remote", false); + sbrec_port_binding_set_requested_chassis(op->sb, + trp ? tr_chassis : NULL); + sbrec_port_binding_set_chassis(op->sb, trp ? tr_chassis : NULL); + sbrec_port_binding_set_type(op->sb, trp ? "remote" : "patch"); } else { + sbrec_port_binding_set_requested_chassis(op->sb, NULL); sbrec_port_binding_set_type(op->sb, "patch"); } @@ -4149,6 +4166,14 @@ ovn_port_assign_requested_tnl_id(struct ovn_port *op) return true; } +static bool +ovn_port_is_trp(struct ovn_port *op) +{ + return op->nbrp && + op->sb->chassis && + smap_get_bool(&op->sb->chassis->other_config, "is-remote", false); +} + static bool ovn_port_allocate_key(struct ovn_port *op) { @@ -4254,6 +4279,7 @@ build_ports(struct ovsdb_idl_txn *ovnsb_txn, sbrec_mirror_table, op, queue_id_bitmap, &active_ha_chassis_grps); + op->od->is_transit_router |= ovn_port_is_trp(op); ovs_list_remove(&op->list); } @@ -4267,6 +4293,7 @@ build_ports(struct ovsdb_idl_txn *ovnsb_txn, op, queue_id_bitmap, &active_ha_chassis_grps); sbrec_port_binding_set_logical_port(op->sb, op->key); + op->od->is_transit_router |= ovn_port_is_trp(op); ovs_list_remove(&op->list); } diff --git a/northd/northd.h b/northd/northd.h index d60c944df..7df57ba19 100644 --- a/northd/northd.h +++ b/northd/northd.h @@ -356,6 +356,10 @@ struct ovn_datapath { * If this is true, then 'l3dgw_ports' will be ignored. */ bool is_gw_router; + /* Indicates whether the router should be considered a transit router. + * This is applicable only to routers with "remote" ports. */ + bool is_transit_router; + /* OVN northd only needs to know about logical router gateway ports for * NAT/LB on a distributed router. 
The "distributed gateway ports" are * populated only when there is a gateway chassis or ha chassis group diff --git a/ovn-nb.xml b/ovn-nb.xml index 5114bbc2e..3b208052b 100644 --- a/ovn-nb.xml +++ b/ovn-nb.xml @@ -3095,6 +3095,32 @@ or See External IDs at the beginning of this document. + + +

+ In order to achieve the status of Transit Router for + there needs to be at least one + that is considered remote. + The LRP can be remote only if it has + options:requested-chassis set to a chassis that is + considered remote. See for more + details. +

+ +

+ In order for the Transit Router to work properly, all the + tunnel keys for the Transit Router itself and for the remote + ports need to match in all AZs, e.g. the TR in AZ1 and AZ2 needs to + have the same tunnel key. The remote port for AZ2 in AZ1 needs to have the + same tunnel key as the local port in AZ2 and vice versa. +

+ +

+ The Transit Router behaves as a distributed router, which + means that it has the same limitations for stateful flows such as + NAT and LBs, and it will lose the CT state between AZs. +

+
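As a rough illustration of the tunnel-key requirement above, a hypothetical two-AZ setup could be wired up as follows; the router and port names (tr, tr-local, tr-az1, tr-az2) are invented for this sketch and simply mirror the multinode test later in this series:

    # AZ1: the local LRP uses tunnel key 10, the remote port that
    # represents AZ2 uses tunnel key 20.
    ovn-nbctl set logical_router tr options:requested-tnl-key=5
    ovn-nbctl set logical_router_port tr-local options:requested-tnl-key=10
    ovn-nbctl set logical_router_port tr-az2 options:requested-tnl-key=20

    # AZ2: the same keys mirrored, so the remote port for AZ1 matches
    # the local port in AZ1 and vice versa.
    ovn-nbctl set logical_router tr options:requested-tnl-key=5
    ovn-nbctl set logical_router_port tr-local options:requested-tnl-key=20
    ovn-nbctl set logical_router_port tr-az1 options:requested-tnl-key=10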
@@ -3705,6 +3731,23 @@ or learned by the ovn-ic daemon.

+ + +

+ If set, identifies a specific chassis (by name or hostname) that + is allowed to bind this port. This option is valid only for chassis + that have other_config:is-remote=true, in other words for + chassis that are in a different Availability Zone. The option accepts + only a single value. +

+ +

+ By assigning a remote chassis, the router gains the + status of Transit Router; see the + table for more details. +

+ +
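To make the relationship between requested-chassis and the remote chassis flag concrete, a minimal sketch of the expected CMS workflow follows; the chassis name hv2, encap IP, and port name tr-az2 are placeholders chosen to match the ovn-northd.at test below:

    # The peer AZ's gateway chassis is registered locally and marked
    # as remote in the Southbound database.
    ovn-sbctl chassis-add hv2 geneve 192.168.0.12
    ovn-sbctl set chassis hv2 other_config:is-remote=true

    # Pinning the LRP to that remote chassis makes northd set the
    # corresponding Port_Binding type to "remote", which gives the
    # router Transit Router status.
    ovn-nbctl set logical_router_port tr-az2 options:requested-chassis=hv2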
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at index 4eae1c67c..e34649f10 100644 --- a/tests/ovn-northd.at +++ b/tests/ovn-northd.at @@ -14253,3 +14253,54 @@ AT_CHECK([grep "lr_in_dnat" lr1flows | ovn_strip_lflows | grep "30.0.0.1"], [0], AT_CLEANUP ]) + +OVN_FOR_EACH_NORTHD_NO_HV([ +AT_SETUP([Transit router - remote ports]) +ovn_start NORTHD_TYPE + +check ovn-sbctl chassis-add hv2 geneve 192.168.0.12 \ + -- set chassis hv2 other_config:is-remote=true + +check ovn-sbctl chassis-add hv3 geneve 192.168.0.13 \ + -- set chassis hv3 other_config:is-remote=true + +check ovn-sbctl chassis-add hv4 geneve 192.168.0.14 + +check ovn-nbctl lr-add tr +ovn-nbctl lrp-add tr tr-local 00:00:00:00:30:1 192.168.100.1/31 + +ovn-nbctl lrp-add tr tr-az2 00:00:00:00:30:03 192.168.100.3/31 \ + -- set Logical_Router_Port tr-az2 options:requested-chassis=hv2 + +ovn-nbctl lrp-add tr tr-az3 00:00:00:00:30:04 192.168.100.4/31 \ + -- set Logical_Router_Port tr-az3 options:requested-chassis=hv3 + +ovn-nbctl lrp-add tr tr-az4 00:00:00:00:30:04 192.168.100.4/31 \ + -- set Logical_Router_Port tr-az4 options:requested-chassis=hv4 + +check ovn-nbctl --wait=sb sync + +hv2_uuid=$(fetch_column Chassis _uuid name=hv2) +hv3_uuid=$(fetch_column Chassis _uuid name=hv3) +hv4_uuid=$(fetch_column Chassis _uuid name=hv4) + +check_row_count Port_Binding 1 logical_port=tr-az2 type=remote chassis=$hv2_uuid requested-chassis=$hv2_uuid +check_row_count Port_Binding 1 logical_port=tr-az3 type=remote chassis=$hv3_uuid requested-chassis=$hv3_uuid + +# hv4 is not set as remote, so the port should remain as patch port +check_row_count Port_Binding 1 logical_port=tr-az4 type=patch + +check ovn-sbctl set chassis hv4 other_config:is-remote=true +check ovn-nbctl --wait=sb sync + +# Setting hv4 as remote should change the tr-az4 to remote too +check_row_count Port_Binding 1 logical_port=tr-az4 type=remote chassis=$hv4_uuid requested-chassis=$hv4_uuid + +check ovn-sbctl remove chassis hv4 other_config is-remote +check ovn-nbctl --wait=sb sync + +# Removing "is-remote" from hv4 should change the tr-az4 back to patch port +check_row_count Port_Binding 1 logical_port=tr-az4 type=patch + +AT_CLEANUP +]) From patchwork Tue Dec 3 11:08:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ales Musil X-Patchwork-Id: 2017704 X-Patchwork-Delegate: nusiddiq@redhat.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=fail reason="signature verification failed" (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=IkCn2Akw; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=140.211.166.137; helo=smtp4.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org) Received: from smtp4.osuosl.org (smtp4.osuosl.org [140.211.166.137]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Y2dFp41lWz1yR0 for ; Tue, 3 Dec 2024 22:09:18 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by smtp4.osuosl.org (Postfix) with ESMTP id 7BACC409EA; Tue, 3 Dec 2024 11:09:16 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from 
smtp4.osuosl.org ([127.0.0.1]) by localhost (smtp4.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id 98clRENVqw4C; Tue, 3 Dec 2024 11:09:14 +0000 (UTC) X-Comment: SPF check N/A for local connections - client-ip=2605:bc80:3010:104::8cd3:938; helo=lists.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver= DKIM-Filter: OpenDKIM Filter v2.11.0 smtp4.osuosl.org A2DB140955 Authentication-Results: smtp4.osuosl.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=IkCn2Akw Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [IPv6:2605:bc80:3010:104::8cd3:938]) by smtp4.osuosl.org (Postfix) with ESMTPS id A2DB140955; Tue, 3 Dec 2024 11:09:13 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id 3C801C087D; Tue, 3 Dec 2024 11:09:13 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136]) by lists.linuxfoundation.org (Postfix) with ESMTP id 57211C08C1 for ; Tue, 3 Dec 2024 11:09:08 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp3.osuosl.org (Postfix) with ESMTP id 177EB607C8 for ; Tue, 3 Dec 2024 11:09:07 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp3.osuosl.org ([127.0.0.1]) by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id sax-bWcSEGRk for ; Tue, 3 Dec 2024 11:09:06 +0000 (UTC) Received-SPF: Pass (mailfrom) identity=mailfrom; client-ip=170.10.133.124; helo=us-smtp-delivery-124.mimecast.com; envelope-from=amusil@redhat.com; receiver= DMARC-Filter: OpenDMARC Filter v1.4.2 smtp3.osuosl.org 261C3607CA Authentication-Results: smtp3.osuosl.org; dmarc=pass (p=none dis=none) header.from=redhat.com DKIM-Filter: OpenDKIM Filter v2.11.0 smtp3.osuosl.org 261C3607CA Authentication-Results: smtp3.osuosl.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=IkCn2Akw Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by smtp3.osuosl.org (Postfix) with ESMTPS id 261C3607CA for ; Tue, 3 Dec 2024 11:09:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1733224145; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=M9xUoONAX0Jj9ZI9v4bCJE8poBD7vPnd44UeJW8+NL0=; b=IkCn2AkwM+OGC5fbb7Jmx4QvhsK1hC00MfEkyvJdqr+MbK2xM5fhF9DbfelIteO9xjINTv 8HnAeOuJwMm6nNmYXfqg8CM809d7xJPeY9gfzDd/eZ9495dm2jsY1yncEFgTFcARbesMUZ Avmfgh95DrdvHc1rewCmYlRLqj8z5Hg= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-394-KRbs-g4kMRu9ZP5Y_K3tUQ-1; Tue, 03 Dec 2024 06:09:03 -0500 X-MC-Unique: KRbs-g4kMRu9ZP5Y_K3tUQ-1 X-Mimecast-MFC-AGG-ID: KRbs-g4kMRu9ZP5Y_K3tUQ Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature 
RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id DB5851955DA1 for ; Tue, 3 Dec 2024 11:09:02 +0000 (UTC) Received: from amusil.brq.redhat.com (unknown [10.43.17.32]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 030BE1956054; Tue, 3 Dec 2024 11:09:01 +0000 (UTC) From: Ales Musil To: dev@openvswitch.org Date: Tue, 3 Dec 2024 12:08:47 +0100 Message-ID: <20241203110853.201377-6-amusil@redhat.com> In-Reply-To: <20241203110853.201377-1-amusil@redhat.com> References: <20241203110853.201377-1-amusil@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: p9kMa8OVvO5eYfsUqwOXNrwSSMzoim-hAYRIrAExKrk_1733224143 X-Mimecast-Originator: redhat.com Subject: [ovs-dev] [PATCH ovn 5/6] actions, physical: Make the MC split action generic. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" The action used for multiple split buffer scenario, when the action wouldn't fit into the netlink buffer, can be used in different context than just the multicast groups. Change the naming and add generic helper so it is clearer that is indeed the case. Signed-off-by: Ales Musil Acked-by: Lorenzo Bianconi --- controller/physical.c | 36 ++++++++++++++++++++++-------------- controller/pinctrl.c | 21 +++++++++++---------- include/ovn/actions.h | 4 ++-- lib/actions.c | 2 +- 4 files changed, 36 insertions(+), 27 deletions(-) diff --git a/controller/physical.c b/controller/physical.c index adf37600d..1adc0a5f6 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -193,6 +193,27 @@ put_stack(enum mf_field_id field, struct ofpact_stack *stack) stack->subfield.n_bits = stack->subfield.field->n_bits; } +/* Split the ofpacts buffer to prevent overflow of the + * MAX_ACTIONS_BUFSIZE netlink buffer size supported by the kernel. + * In order to avoid all the action buffers to be squashed together by + * ovs, add a controller action for each configured openflow. + */ +static void +put_split_buf_function(uint32_t index, uint32_t outport, uint8_t stage, + struct ofpbuf *ofpacts) +{ + ovs_be32 values[2] = { + htonl(index), + htonl(outport) + }; + size_t oc_offset = + encode_start_controller_op(ACTION_OPCODE_SPLIT_BUF_ACTION, false, + NX_CTLR_NO_METER, ofpacts); + ofpbuf_put(ofpacts, values, sizeof values); + ofpbuf_put(ofpacts, &stage, sizeof stage); + encode_finish_controller_op(oc_offset, ofpacts); +} + static const struct sbrec_port_binding * get_localnet_port(const struct hmap *local_datapaths, int64_t tunnel_key) { @@ -2088,20 +2109,7 @@ mc_ofctrl_add_flow(const struct sbrec_multicast_group *mc, if (index == (mc->n_ports - 1)) { ofpbuf_put(ofpacts, ofpacts_last->data, ofpacts_last->size); } else { - /* Split multicast groups with size greater than - * MC_OFPACTS_MAX_MSG_SIZE in order to not overcome the - * MAX_ACTIONS_BUFSIZE netlink buffer size supported by the kernel. - * In order to avoid all the action buffers to be squashed together by - * ovs, add a controller action for each configured openflow. 
- */ - size_t oc_offset = encode_start_controller_op( - ACTION_OPCODE_MG_SPLIT_BUF, false, NX_CTLR_NO_METER, ofpacts); - ovs_be32 val = htonl(++flow_index); - ofpbuf_put(ofpacts, &val, sizeof val); - val = htonl(mc->tunnel_key); - ofpbuf_put(ofpacts, &val, sizeof val); - ofpbuf_put(ofpacts, &stage, sizeof stage); - encode_finish_controller_op(oc_offset, ofpacts); + put_split_buf_function(++flow_index, mc->tunnel_key, stage, ofpacts); } ofctrl_add_flow(flow_table, stage, prio, mc->header_.uuid.parts[0], diff --git a/controller/pinctrl.c b/controller/pinctrl.c index 3fb7e2fd7..01a77bf93 100644 --- a/controller/pinctrl.c +++ b/controller/pinctrl.c @@ -209,7 +209,7 @@ static void send_mac_binding_buffered_pkts(struct rconn *swconn) static void pinctrl_rarp_activation_strategy_handler(const struct match *md); -static void pinctrl_mg_split_buff_handler( +static void pinctrl_split_buf_action_handler( struct rconn *swconn, struct dp_packet *pkt, const struct match *md, struct ofpbuf *userdata); @@ -3772,9 +3772,9 @@ process_packet_in(struct rconn *swconn, const struct ofp_header *msg) ovs_mutex_unlock(&pinctrl_mutex); break; - case ACTION_OPCODE_MG_SPLIT_BUF: - pinctrl_mg_split_buff_handler(swconn, &packet, &pin.flow_metadata, - &userdata); + case ACTION_OPCODE_SPLIT_BUF_ACTION: + pinctrl_split_buf_action_handler(swconn, &packet, &pin.flow_metadata, + &userdata); break; default: @@ -8690,8 +8690,9 @@ pinctrl_rarp_activation_strategy_handler(const struct match *md) } static void -pinctrl_mg_split_buff_handler(struct rconn *swconn, struct dp_packet *pkt, - const struct match *md, struct ofpbuf *userdata) +pinctrl_split_buf_action_handler(struct rconn *swconn, struct dp_packet *pkt, + const struct match *md, + struct ofpbuf *userdata) { ovs_be32 *index = ofpbuf_try_pull(userdata, sizeof *index); if (!index) { @@ -8700,10 +8701,10 @@ pinctrl_mg_split_buff_handler(struct rconn *swconn, struct dp_packet *pkt, return; } - ovs_be32 *mg = ofpbuf_try_pull(userdata, sizeof *mg); - if (!mg) { + ovs_be32 *outport = ofpbuf_try_pull(userdata, sizeof *outport); + if (!outport) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "%s: missing multicast group field", __func__); + VLOG_WARN_RL(&rl, "%s: missing outport tunnel_key field", __func__); return; } @@ -8725,7 +8726,7 @@ pinctrl_mg_split_buff_handler(struct rconn *swconn, struct dp_packet *pkt, ofpact_put_set_field(&ofpacts, pkt_mark_field, &pkt_mark_value, NULL); put_load(ntohl(*index), MFF_REG6, 0, 32, &ofpacts); - put_load(ntohl(*mg), MFF_LOG_OUTPORT, 0, 32, &ofpacts); + put_load(ntohl(*outport), MFF_LOG_OUTPORT, 0, 32, &ofpacts); struct ofpact_resubmit *resubmit = ofpact_put_RESUBMIT(&ofpacts); resubmit->in_port = OFPP_CONTROLLER; diff --git a/include/ovn/actions.h b/include/ovn/actions.h index db7342f1d..7e0670a11 100644 --- a/include/ovn/actions.h +++ b/include/ovn/actions.h @@ -783,8 +783,8 @@ enum action_opcode { /* activation_strategy_rarp() */ ACTION_OPCODE_ACTIVATION_STRATEGY_RARP, - /* multicast group split buffer action. */ - ACTION_OPCODE_MG_SPLIT_BUF, + /* split buffer action. */ + ACTION_OPCODE_SPLIT_BUF_ACTION, /* "dhcp_relay_req_chk(relay_ip, server_ip)". 
* diff --git a/lib/actions.c b/lib/actions.c index d5fc30b27..4a328b03d 100644 --- a/lib/actions.c +++ b/lib/actions.c @@ -1968,7 +1968,7 @@ is_paused_nested_action(enum action_opcode opcode) case ACTION_OPCODE_HANDLE_SVC_CHECK: case ACTION_OPCODE_BFD_MSG: case ACTION_OPCODE_ACTIVATION_STRATEGY_RARP: - case ACTION_OPCODE_MG_SPLIT_BUF: + case ACTION_OPCODE_SPLIT_BUF_ACTION: case ACTION_OPCODE_DHCP_RELAY_REQ_CHK: case ACTION_OPCODE_DHCP_RELAY_RESP_CHK: default: From patchwork Tue Dec 3 11:08:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ales Musil X-Patchwork-Id: 2017705 X-Patchwork-Delegate: nusiddiq@redhat.com Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=fail reason="signature verification failed" (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=EG4r3Ln5; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org (client-ip=2605:bc80:3010::137; helo=smtp4.osuosl.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org) Received: from smtp4.osuosl.org (smtp4.osuosl.org [IPv6:2605:bc80:3010::137]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Y2dG204DNz1yQZ for ; Tue, 3 Dec 2024 22:09:30 +1100 (AEDT) Received: from localhost (localhost [127.0.0.1]) by smtp4.osuosl.org (Postfix) with ESMTP id 6B19E40D57; Tue, 3 Dec 2024 11:09:28 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp4.osuosl.org ([127.0.0.1]) by localhost (smtp4.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id NzOG3cI8EGWy; Tue, 3 Dec 2024 11:09:23 +0000 (UTC) X-Comment: SPF check N/A for local connections - client-ip=2605:bc80:3010:104::8cd3:938; helo=lists.linuxfoundation.org; envelope-from=ovs-dev-bounces@openvswitch.org; receiver= DKIM-Filter: OpenDKIM Filter v2.11.0 smtp4.osuosl.org 61F0F40EC1 Authentication-Results: smtp4.osuosl.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=EG4r3Ln5 Received: from lists.linuxfoundation.org (lf-lists.osuosl.org [IPv6:2605:bc80:3010:104::8cd3:938]) by smtp4.osuosl.org (Postfix) with ESMTPS id 61F0F40EC1; Tue, 3 Dec 2024 11:09:22 +0000 (UTC) Received: from lf-lists.osuosl.org (localhost [127.0.0.1]) by lists.linuxfoundation.org (Postfix) with ESMTP id AABDBC0889; Tue, 3 Dec 2024 11:09:22 +0000 (UTC) X-Original-To: dev@openvswitch.org Delivered-To: ovs-dev@lists.linuxfoundation.org Received: from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136]) by lists.linuxfoundation.org (Postfix) with ESMTP id 1EDE0C08AA for ; Tue, 3 Dec 2024 11:09:21 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by smtp3.osuosl.org (Postfix) with ESMTP id BFB2060826 for ; Tue, 3 Dec 2024 11:09:19 +0000 (UTC) X-Virus-Scanned: amavis at osuosl.org Received: from smtp3.osuosl.org ([127.0.0.1]) by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP id 36omf2R2tDDa for ; Tue, 3 Dec 2024 11:09:08 +0000 (UTC) Received-SPF: Pass (mailfrom) identity=mailfrom; client-ip=170.10.129.124; helo=us-smtp-delivery-124.mimecast.com; 
envelope-from=amusil@redhat.com; receiver= DMARC-Filter: OpenDMARC Filter v1.4.2 smtp3.osuosl.org B77CD607F5 Authentication-Results: smtp3.osuosl.org; dmarc=pass (p=none dis=none) header.from=redhat.com DKIM-Filter: OpenDKIM Filter v2.11.0 smtp3.osuosl.org B77CD607F5 Authentication-Results: smtp3.osuosl.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=EG4r3Ln5 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by smtp3.osuosl.org (Postfix) with ESMTPS id B77CD607F5 for ; Tue, 3 Dec 2024 11:09:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1733224146; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=eTVOcEoSCKmJJcnjCtt7dco445aGxUYTgIQg0iiCdkc=; b=EG4r3Ln563sWmwO5T4yhKNvVXq8OFgvd0QFgedvGN7gFAStB7gVTPWSIIioyUaBavzCzs1 WPXSK93xzAl6vHqxdvbEbGn2MGzVzWlq57X4Tebj1qmALDTxTTiyZNGmy6IE2WR/LG2UVe aVdqKkUIOc5SEusOmbcXpNvRXsPrxB4= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-47-nBH-lgzOPjSVFjuqAETccA-1; Tue, 03 Dec 2024 06:09:05 -0500 X-MC-Unique: nBH-lgzOPjSVFjuqAETccA-1 X-Mimecast-MFC-AGG-ID: nBH-lgzOPjSVFjuqAETccA Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 690C31955BF9 for ; Tue, 3 Dec 2024 11:09:04 +0000 (UTC) Received: from amusil.brq.redhat.com (unknown [10.43.17.32]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 4CFF11955EA7; Tue, 3 Dec 2024 11:09:03 +0000 (UTC) From: Ales Musil To: dev@openvswitch.org Date: Tue, 3 Dec 2024 12:08:48 +0100 Message-ID: <20241203110853.201377-7-amusil@redhat.com> In-Reply-To: <20241203110853.201377-1-amusil@redhat.com> References: <20241203110853.201377-1-amusil@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: M5WMJfpV0mEN0D6iTLXGkIIanrul2uMUZJZ1bJVbQfs_1733224144 X-Mimecast-Originator: redhat.com Subject: [ovs-dev] [PATCH ovn 6/6] northd, controller: Flood ARP and NA packet on transit router. X-BeenThere: ovs-dev@openvswitch.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ovs-dev-bounces@openvswitch.org Sender: "dev" When packets goes between AZs through transit router for the first time there isn't any MAC binding for the remote port equivalent. The TR will properly generate ARP/ND NS packet that will arrive to the remote AZ, however the response would never leave the remote AZ as a consequence the local AZ would never learn this MAC binding. To prevent the described behavior add a new table that will contain all remote chassis and corresponding encapsulations that allow us to just flood all chassis with any packet that will be sent to this table. 
At the same time add a new action that sends the packet to this table. In order to properly generate MAC binding we need to redirect the ARP into ingress instead of egress as usual for reception from tunnels. Add flows that will match on ARP and ND NA with combination of 0 outport which should indicate that this is the remote flood flow. Only exception is VXLAN which doesn't have enough space for outport encoding, in that case we need to send the packet to both ingress and egress as we cannot determine if it was part of the remote flood or regular packet that arrived from another chassis in the same AZ. Signed-off-by: Ales Musil --- controller/lflow.c | 1 + controller/lflow.h | 4 + controller/physical.c | 188 ++++++++++++++++++++++++++++++++---- include/ovn/actions.h | 3 + lib/actions.c | 17 ++++ northd/northd.c | 12 ++- tests/multinode-macros.at | 48 ++++++++++ tests/multinode.at | 196 ++++++++++++++++++++++++++++++++++++++ tests/ovn-controller.at | 76 +++++++++++++++ tests/ovn-macros.at | 1 + tests/ovn.at | 10 +- tests/test-ovn.c | 1 + utilities/ovn-trace.c | 3 + 13 files changed, 536 insertions(+), 24 deletions(-) diff --git a/controller/lflow.c b/controller/lflow.c index 6c49484f1..c94e68c3f 100644 --- a/controller/lflow.c +++ b/controller/lflow.c @@ -889,6 +889,7 @@ add_matches_to_flow_table(const struct sbrec_logical_flow *lflow, .ct_nw_dst_load_table = OFTABLE_CT_ORIG_NW_DST_LOAD, .ct_ip6_dst_load_table = OFTABLE_CT_ORIG_IP6_DST_LOAD, .ct_tp_dst_load_table = OFTABLE_CT_ORIG_TP_DST_LOAD, + .flood_remote_table = OFTABLE_FLOOD_REMOTE_CHASSIS, .ctrl_meter_id = ctrl_meter_id, .common_nat_ct_zone = get_common_nat_zone(ldp), }; diff --git a/controller/lflow.h b/controller/lflow.h index 206328f9e..b27721baa 100644 --- a/controller/lflow.h +++ b/controller/lflow.h @@ -98,6 +98,10 @@ struct uuid; #define OFTABLE_CT_ORIG_NW_DST_LOAD 81 #define OFTABLE_CT_ORIG_IP6_DST_LOAD 82 #define OFTABLE_CT_ORIG_TP_DST_LOAD 83 +#define OFTABLE_FLOOD_REMOTE_CHASSIS 84 + +/* Common defines shared between some controller components. */ +#define CHASSIS_FLOOD_INDEX_START 0x8000 struct lflow_ctx_in { diff --git a/controller/physical.c b/controller/physical.c index 1adc0a5f6..b6a8ba396 100644 --- a/controller/physical.c +++ b/controller/physical.c @@ -185,6 +185,73 @@ put_encapsulation(enum mf_field_id mff_ovn_geneve, } } +static void +put_decapsulation(enum mf_field_id mff_ovn_geneve, + const struct chassis_tunnel *tun, + struct ofpbuf *ofpacts) +{ + if (tun->type == GENEVE) { + put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 24, ofpacts); + put_move(mff_ovn_geneve, 16, MFF_LOG_INPORT, 0, 15, ofpacts); + put_move(mff_ovn_geneve, 0, MFF_LOG_OUTPORT, 0, 16, ofpacts); + } else if (tun->type == STT) { + put_move(MFF_TUN_ID, 40, MFF_LOG_INPORT, 0, 15, ofpacts); + put_move(MFF_TUN_ID, 24, MFF_LOG_OUTPORT, 0, 16, ofpacts); + put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 24, ofpacts); + } else if (tun->type == VXLAN) { + /* Add flows for non-VTEP tunnels. Split VNI into two 12-bit + * sections and use them for datapath and outport IDs. 
*/ + put_move(MFF_TUN_ID, 12, MFF_LOG_OUTPORT, 0, 12, ofpacts); + put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 12, ofpacts); + } else { + OVS_NOT_REACHED(); + } +} + + +static void +put_remote_chassis_flood_encap(struct ofpbuf *ofpacts, + enum chassis_tunnel_type type, + enum mf_field_id mff_ovn_geneve) +{ + if (type == GENEVE) { + put_move(MFF_LOG_DATAPATH, 0, MFF_TUN_ID, 0, 24, ofpacts); + put_load(0, mff_ovn_geneve, 0, 32, ofpacts); + put_move(MFF_LOG_INPORT, 0, mff_ovn_geneve, 16, 15, ofpacts); + } else if (type == STT) { + put_move(MFF_LOG_INPORT, 0, MFF_TUN_ID, 40, 15, ofpacts); + put_load(0, MFF_TUN_ID, 24, 16, ofpacts); + put_move(MFF_LOG_DATAPATH, 0, MFF_TUN_ID, 0, 24, ofpacts); + } else if (type == VXLAN) { + put_move(MFF_LOG_INPORT, 0, MFF_TUN_ID, 12, 12, ofpacts); + put_move(MFF_LOG_DATAPATH, 0, MFF_TUN_ID, 0, 12, ofpacts); + } else { + OVS_NOT_REACHED(); + } +} + +static void +match_set_chassis_flood_outport(struct match *match, + enum chassis_tunnel_type type, + enum mf_field_id mff_ovn_geneve) +{ + if (type == GENEVE) { + /* Outport occupies the lower half of tunnel metadata (0-15). */ + union mf_value value, mask; + memset(&value, 0, sizeof value); + memset(&mask, 0, sizeof mask); + + const struct mf_field *mf_ovn_geneve = mf_from_id(mff_ovn_geneve); + memset(&mask.tun_metadata[mf_ovn_geneve->n_bytes - 2], 0xff, 2); + + tun_metadata_set_match(mf_ovn_geneve, &value, &mask, match, NULL); + } else if (type == STT) { + /* Outport occupies bits 24-39. */ + match_set_tun_id_masked(match, 0, htonll(UINT64_C(0xffff) << 24)); + } +} + + static void put_stack(enum mf_field_id field, struct ofpact_stack *stack) { @@ -2349,6 +2416,106 @@ consider_mc_group(const struct physical_ctx *ctx, sset_destroy(&vtep_chassis); } +#define CHASSIS_FLOOD_MAX_MSG_SIZE MC_OFPACTS_MAX_MSG_SIZE + +static void +physical_eval_remote_chassis_flows(const struct physical_ctx *ctx, + struct ofpbuf *egress_ofpacts, + struct ovn_desired_flow_table *flow_table) +{ + struct match match = MATCH_CATCHALL_INITIALIZER; + uint32_t index = CHASSIS_FLOOD_INDEX_START; + struct chassis_tunnel *prev = NULL; + + uint8_t actions_stub[256]; + struct ofpbuf ingress_ofpacts; + ofpbuf_use_stub(&ingress_ofpacts, actions_stub, sizeof(actions_stub)); + + ofpbuf_clear(egress_ofpacts); + + const struct sbrec_chassis *chassis; + SBREC_CHASSIS_TABLE_FOR_EACH (chassis, ctx->chassis_table) { + if (!smap_get_bool(&chassis->other_config, "is-remote", false)) { + continue; + } + + struct chassis_tunnel *tun = + chassis_tunnel_find(ctx->chassis_tunnels, chassis->name, + NULL, NULL); + if (!tun) { + continue; + } + + if (!(prev && prev->type == tun->type)) { + put_remote_chassis_flood_encap(egress_ofpacts, tun->type, + ctx->mff_ovn_geneve); + } + + ofpact_put_OUTPUT(egress_ofpacts)->port = tun->ofport; + prev = tun; + + if (egress_ofpacts->size > CHASSIS_FLOOD_MAX_MSG_SIZE) { + match_init_catchall(&match); + match_set_reg(&match, MFF_REG6 - MFF_REG0, index++); + + put_split_buf_function(index, 0, OFTABLE_FLOOD_REMOTE_CHASSIS, + egress_ofpacts); + + ofctrl_add_flow(flow_table, OFTABLE_FLOOD_REMOTE_CHASSIS, 100, 0, + &match, egress_ofpacts, hc_uuid); + + ofpbuf_clear(egress_ofpacts); + prev = NULL; + } + + + ofpbuf_clear(&ingress_ofpacts); + put_decapsulation(ctx->mff_ovn_geneve, tun, &ingress_ofpacts); + put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, &ingress_ofpacts); + if (tun->type == VXLAN) { + /* VXLAN doesn't carry the inport information, we cannot set + * the outport to 0 then and match on it. 
*/ + put_resubmit(OFTABLE_LOCAL_OUTPUT, &ingress_ofpacts); + } + + /* Add match on ARP response coming from remote chassis. */ + match_init_catchall(&match); + match_set_in_port(&match, tun->ofport); + match_set_dl_type(&match, htons(ETH_TYPE_ARP)); + match_set_arp_opcode_masked(&match, 2, UINT8_MAX); + match_set_chassis_flood_outport(&match, tun->type, + ctx->mff_ovn_geneve); + + ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 120, + chassis->header_.uuid.parts[0], + &match, &ingress_ofpacts, hc_uuid); + + /* Add match on ND NA coming from remote chassis. */ + match_init_catchall(&match); + match_set_in_port(&match, tun->ofport); + match_set_dl_type(&match, htons(ETH_TYPE_IPV6)); + match_set_nw_proto(&match, IPPROTO_ICMPV6); + match_set_icmp_type(&match, 136); + match_set_icmp_code(&match, 0); + match_set_chassis_flood_outport(&match, tun->type, + ctx->mff_ovn_geneve); + + ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 120, + chassis->header_.uuid.parts[0], + &match, &ingress_ofpacts, hc_uuid); + } + + if (egress_ofpacts->size > 0) { + match_init_catchall(&match); + match_set_reg(&match, MFF_REG6 - MFF_REG0, index); + + ofctrl_add_flow(flow_table, OFTABLE_FLOOD_REMOTE_CHASSIS, 100, 0, + &match, egress_ofpacts, hc_uuid); + } + + ofpbuf_uninit(&ingress_ofpacts); +} + static void physical_eval_port_binding(struct physical_ctx *p_ctx, const struct sbrec_port_binding *pb, @@ -2504,24 +2671,7 @@ physical_run(struct physical_ctx *p_ctx, match_set_in_port(&match, tun->ofport); ofpbuf_clear(&ofpacts); - if (tun->type == GENEVE) { - put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 24, &ofpacts); - put_move(p_ctx->mff_ovn_geneve, 16, MFF_LOG_INPORT, 0, 15, - &ofpacts); - put_move(p_ctx->mff_ovn_geneve, 0, MFF_LOG_OUTPORT, 0, 16, - &ofpacts); - } else if (tun->type == STT) { - put_move(MFF_TUN_ID, 40, MFF_LOG_INPORT, 0, 15, &ofpacts); - put_move(MFF_TUN_ID, 24, MFF_LOG_OUTPORT, 0, 16, &ofpacts); - put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 24, &ofpacts); - } else if (tun->type == VXLAN) { - /* Add flows for non-VTEP tunnels. Split VNI into two 12-bit - * sections and use them for datapath and outport IDs. */ - put_move(MFF_TUN_ID, 12, MFF_LOG_OUTPORT, 0, 12, &ofpacts); - put_move(MFF_TUN_ID, 0, MFF_LOG_DATAPATH, 0, 12, &ofpacts); - } else { - OVS_NOT_REACHED(); - } + put_decapsulation(p_ctx->mff_ovn_geneve, tun, &ofpacts); put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts); ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 100, 0, &match, @@ -2773,5 +2923,7 @@ physical_run(struct physical_ctx *p_ctx, ofctrl_add_flow(flow_table, OFTABLE_CT_ORIG_IP6_DST_LOAD, 100, 0, &match, &ofpacts, hc_uuid); + physical_eval_remote_chassis_flows(p_ctx, &ofpacts, flow_table); + ofpbuf_uninit(&ofpacts); } diff --git a/include/ovn/actions.h b/include/ovn/actions.h index 7e0670a11..73beeeee9 100644 --- a/include/ovn/actions.h +++ b/include/ovn/actions.h @@ -134,6 +134,7 @@ struct collector_set_ids; OVNACT(CT_ORIG_NW_DST, ovnact_result) \ OVNACT(CT_ORIG_IP6_DST, ovnact_result) \ OVNACT(CT_ORIG_TP_DST, ovnact_result) \ + OVNACT(FLOOD_REMOTE, ovnact_null) \ /* enum ovnact_type, with a member OVNACT_ for each action. */ enum OVS_PACKED_ENUM ovnact_type { @@ -945,6 +946,8 @@ struct ovnact_encode_params { * to resubmit. */ uint32_t ct_tp_dst_load_table; /* OpenFlow table for 'ct_tp_dst' * to resubmit. */ + uint32_t flood_remote_table; /* OpenFlow table for 'chassis_flood' + * to resubmit. 
*/ }; void ovnacts_encode(const struct ovnact[], size_t ovnacts_len, diff --git a/lib/actions.c b/lib/actions.c index 4a328b03d..3973b7346 100644 --- a/lib/actions.c +++ b/lib/actions.c @@ -5531,6 +5531,21 @@ format_CT_ORIG_TP_DST(const struct ovnact_result *res, struct ds *s) ds_put_cstr(s, " = ct_tp_dst();"); } +static void +format_FLOOD_REMOTE(const struct ovnact_null *null OVS_UNUSED, struct ds *s) +{ + ds_put_cstr(s, "flood_remote;"); +} + +static void +encode_FLOOD_REMOTE(const struct ovnact_null *null OVS_UNUSED, + const struct ovnact_encode_params *ep, + struct ofpbuf *ofpacts) +{ + put_load(CHASSIS_FLOOD_INDEX_START, MFF_REG6, 0, 32, ofpacts); + emit_resubmit(ofpacts, ep->flood_remote_table); +} + /* Parses an assignment or exchange or put_dhcp_opts action. */ static void parse_set_action(struct action_context *ctx) @@ -5758,6 +5773,8 @@ parse_action(struct action_context *ctx) parse_sample(ctx); } else if (lexer_match_id(ctx->lexer, "mac_cache_use")) { ovnact_put_MAC_CACHE_USE(ctx->ovnacts); + } else if (lexer_match_id(ctx->lexer, "flood_remote")) { + ovnact_put_FLOOD_REMOTE(ctx->ovnacts); } else { lexer_syntax_error(ctx->lexer, "expecting action"); } diff --git a/northd/northd.c b/northd/northd.c index f3ef090f4..e3bcf22cb 100644 --- a/northd/northd.c +++ b/northd/northd.c @@ -13390,21 +13390,22 @@ build_neigh_learning_flows_for_lrouter( * */ /* Flows for LOOKUP_NEIGHBOR. */ + const char *flood = od->is_transit_router ? "flood_remote; " : ""; bool learn_from_arp_request = smap_get_bool(&od->nbr->options, "always_learn_from_arp_request", true); ds_clear(actions); ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT - " = lookup_arp(inport, arp.spa, arp.sha); %snext;", + " = lookup_arp(inport, arp.spa, arp.sha); %s%snext;", learn_from_arp_request ? "" : - REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; "); + REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ", flood); ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "arp.op == 2", ds_cstr(actions), lflow_ref); ds_clear(actions); ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT - " = lookup_nd(inport, nd.target, nd.tll); %snext;", + " = lookup_nd(inport, nd.target, nd.tll); %s%snext;", learn_from_arp_request ? "" : - REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; "); + REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ", flood); ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "nd_na", ds_cstr(actions), lflow_ref); @@ -13420,7 +13421,8 @@ build_neigh_learning_flows_for_lrouter( ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT " = lookup_nd(inport, nd.target, nd.tll); " REGBIT_LOOKUP_NEIGHBOR_IP_RESULT - " = lookup_nd_ip(inport, nd.target); next;"); + " = lookup_nd_ip(inport, nd.target); %snext;", + flood); ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 110, "nd_na && ip6.src == fe80::/10 && ip6.dst == ff00::/8", ds_cstr(actions), lflow_ref); diff --git a/tests/multinode-macros.at b/tests/multinode-macros.at index 698d2c625..29f0711e6 100644 --- a/tests/multinode-macros.at +++ b/tests/multinode-macros.at @@ -112,6 +112,54 @@ cleanup_multinode_resources_by_nodes() { done } +# multinode_cleanup_northd NODE +# +# Removes previously set nothd on specified node +multinode_cleanup_northd() { + c=$1 + # Cleanup existing one + m_as $c /usr/share/ovn/scripts/ovn-ctl stop_northd + m_as $c rm -f /etc/ovn/*.db +} + +# multinode_setup_northd NODE +# +# Sets up northd on specified node. 
+multinode_setup_northd() { + c=$1 + + multinode_cleanup_northd $c + + m_as $c /usr/share/ovn/scripts/ovn-ctl start_northd + m_as $c ovn-nbctl set-connection ptcp:6641 + m_as $c ovn-sbctl set-connection ptcp:6642 +} + +# multinode_setup_controller NODE ENCAP_IP REMOTE_IP [ENCAP_TYPE] +# +# Sets up controller on specified node. +multinode_setup_controller() { + c=$1 + encap_ip=$3 + remote_ip=$4 + encap_type=${5:-"geneve"} + + # Cleanup existing one + m_as $c /usr/share/openvswitch/scripts/ovs-ctl stop + m_as $c /usr/share/ovn/scripts/ovn-ctl stop_controller + m_as $c rm -f /etc/openvswitch/*.db + + m_as $c /usr/share/openvswitch/scripts/ovs-ctl start --system-id=$c + m_as $c /usr/share/ovn/scripts/ovn-ctl start_controller + + m_as $c ovs-vsctl set open . external_ids:ovn-encap-ip=$encap_ip + m_as $c ovs-vsctl set open . external-ids:ovn-encap-type=$encap_type + m_as $c ovs-vsctl set open . external-ids:ovn-remote=tcp:$remote_ip:6642 + m_as $c ovs-vsctl set open . external-ids:ovn-openflow-probe-interval=60 + m_as $c ovs-vsctl set open . external-ids:ovn-remote-probe-interval=180000 + m_as $c ovs-vsctl set open . external-ids:ovn-bridge-datapath-type=system +} + # m_count_rows TABLE [CONDITION...] # # Prints the number of rows in TABLE (that satisfy CONDITION). diff --git a/tests/multinode.at b/tests/multinode.at index a45dc55cc..2962f54d0 100644 --- a/tests/multinode.at +++ b/tests/multinode.at @@ -2582,3 +2582,199 @@ Connected to 10.0.2.4 (10.0.2.4) port 8080 fi AT_CLEANUP + +AT_SETUP([ovn multinode - Transit Router basic functionality]) + +# Check that ovn-fake-multinode setup is up and running +check_fake_multinode_setup + +# Delete the multinode NB and OVS resources before starting the test. +cleanup_multinode_resources + +# Network topology +# ┌─────────────────────────────────� ┌────────────────────────────────� +# │ │ │ │ +# │ ┌───────────────────� AZ1 │ │ AZ2 ┌───────────────────� │ +# │ │ external │ │ │ │ │ │ +# │ │ │ │ │ │ │ │ +# │ │ 192.168.100.10/24 │ │ │ │ ................. 
│ │ +# │ │ 1000::10/64 │ │ │ │ │ │ +# │ └─────────┬─────────┘ │ │ └─────────┬─────────┘ │ +# │ │ │ │ │ │ +# │ │ │ │ │ │ +# │ ┌─────────┴─────────� │ │ ┌─────────┴─────────� │ +# │ │ 192.168.100.1/24 │ │ │ │ 192.168.100.1/24 │ │ +# │ │ 1000::1/64 │ │ │ │ 1000::1/64 │ │ +# │ │ │ │ │ │ │ │ +# │ │ GW │ │ │ │ GW │ │ +# │ │ │ │ │ │ │ │ +# │ │ 100.65.0.1/30 │ │ │ │ 100.65.0.5/30 │ │ +# │ │ 100:65::1/126 │ │ │ │ 100:65::5/126 │ │ +# │ └─────────┬─────────┘ │ │ └───────────────────┘ │ +# │ │ │ │ │ │ +# │ │ Peer ports │ │ │ Peer ports │ +# │ │ │ │ │ │ +# │ ┌─────────┴──────────────────│─────│──────────────────┴─────────� │ +# │ │ 100.65.0.2/30 │ │ 100.65.0.6/30 │ │ +# │ │ 100:65::2/126 │ │ 100:65::6/126 │ │ +# │ │ │ │ │ │ +# │ │ │ TR │ │ │ +# │ │ │ │ │ │ +# │ │ 10.100.200.1/24 │ │ 10.100.200.1/24 │ │ +# │ │ 10:200::1/64 │ │ 10:200::1/64 │ │ +# │ └─────────┬──────────────────│─────│────────────────────────────┘ │ +# │ │ │ │ │ │ +# │ │ │ │ │ │ +# │ │ │ │ │ │ +# │ ┌─────────┴──────────────────│─────│────────────────────────────� │ +# │ │ │ TS │ │ │ +# │ └─────────┬──────────────────│─────│────────────────────────────┘ │ +# │ │ │ │ │ │ +# │ │ │ │ │ │ +# │ │ │ │ │ │ +# │ ┌─────────┴─────────� │ │ ┌─────────┴─────────� │ +# │ │ pod10 │ │ │ │ pod20 │ │ +# │ │ │ │ │ │ │ │ +# │ │ 10.100.200.10/24 │ │ │ │ 10.100.200.20/24 │ │ +# │ │ 10:200::10/64 │ │ │ │ 10:200::20/64 │ │ +# │ └───────────────────┘ │ │ └───────────────────┘ │ +# └─────────────────────────────────┘ └────────────────────────────────┘ + +for i in 1 2; do + chassis="ovn-chassis-$i" + ip=$(m_as $chassis ip -4 addr show eth1 | grep inet | awk '{print $2}' | cut -d'/' -f1) + + multinode_setup_northd $chassis + multinode_setup_controller $chassis $chassis $ip $ip + + check m_as $chassis ovs-vsctl set open . external_ids:ovn-monitor-all=true + check m_as $chassis ovs-vsctl set open . 
external_ids:ovn-is-interconn=true + + check m_as $chassis ovn-nbctl ls-add public + + check m_as $chassis ovn-nbctl lsp-add public public-gw + check m_as $chassis ovn-nbctl lsp-set-type public-gw router + check m_as $chassis ovn-nbctl lsp-set-addresses public-gw router + check m_as $chassis ovn-nbctl lsp-set-options public-gw router-port=gw-public + + check m_as $chassis ovn-nbctl lr-add gw + check m_as $chassis ovn-nbctl lrp-add gw gw-public 00:00:00:00:20:00 192.168.100.1/24 1000::1/64 + + check m_as $chassis ovn-nbctl set logical_router gw options:chassis=$chassis + + # Add TR and set the same tunnel key for both chassis + check m_as $chassis ovn-nbctl ls-add ts + check m_as $chassis ovn-nbctl set logical_switch ts other_config:requested-tnl-key=10 + + check m_as $chassis ovn-nbctl lsp-add ts ts-tr + check m_as $chassis ovn-nbctl lsp-set-type ts-tr router + check m_as $chassis ovn-nbctl lsp-set-addresses ts-tr router + check m_as $chassis ovn-nbctl lsp-set-options ts-tr router-port=tr-ts + + check m_as $chassis ovn-nbctl lr-add tr + check m_as $chassis ovn-nbctl lrp-add tr tr-ts 00:00:00:00:10:00 10.100.200.1/24 10:200::1/64 + check m_as $chassis ovn-nbctl set logical_router tr options:requested-tnl-key=20 + + # Add TS pods, with the same tunnel keys on both sides + check m_as $chassis ovn-nbctl lsp-add ts pod10 + check m_as $chassis ovn-nbctl lsp-set-addresses pod10 "00:00:00:00:10:10 10.100.200.10 10:200::10" + check m_as $chassis ovn-nbctl set logical_switch_port pod10 options:requested-tnl-key=10 + + check m_as $chassis ovn-nbctl lsp-add ts pod20 + check m_as $chassis ovn-nbctl lsp-set-addresses pod20 "00:00:00:00:10:20 10.100.200.20 10:200::20" + check m_as $chassis ovn-nbctl set logical_switch_port pod20 options:requested-tnl-key=20 +done + +# Add SNAT for the GW router that corresponds to "gw-tr" LRP IP +check m_as ovn-chassis-1 ovn-nbctl lr-nat-add gw snat 100.65.0.1 192.168.100.0/24 +check m_as ovn-chassis-1 ovn-nbctl lr-nat-add gw snat 100:65::1 1000::/64 +check m_as ovn-chassis-2 ovn-nbctl lr-nat-add gw snat 100.65.0.5 192.168.100.0/24 +check m_as ovn-chassis-2 ovn-nbctl lr-nat-add gw snat 100:65::5 1000::/64 + +# Add peer ports between GW and TR +check m_as ovn-chassis-1 ovn-nbctl lrp-add gw gw-tr 00:00:00:00:30:01 100.65.0.1/30 100:65::1/126 peer=tr-gw +check m_as ovn-chassis-1 ovn-nbctl lrp-add tr tr-gw 00:00:00:00:30:02 100.65.0.2/30 100:65::2/126 peer=gw-tr + +check m_as ovn-chassis-2 ovn-nbctl lrp-add gw gw-tr 00:00:00:00:30:05 100.65.0.5/30 100:65::5/126 peer=tr-gw +check m_as ovn-chassis-2 ovn-nbctl lrp-add tr tr-gw 00:00:00:00:30:06 100.65.0.6/30 100:65::6/126 peer=gw-tr + +# Add routes for the TS subnet +check m_as ovn-chassis-1 ovn-nbctl lr-route-add gw 10.100.200.0/24 100.65.0.2 +check m_as ovn-chassis-1 ovn-nbctl lr-route-add gw 10:200::/64 100:65::2 +check m_as ovn-chassis-2 ovn-nbctl lr-route-add gw 10.100.200.0/24 100.65.0.6 +check m_as ovn-chassis-2 ovn-nbctl lr-route-add gw 10:200::/64 100:65::6 + +# Add mutual remote ports +check m_as ovn-chassis-1 ovn-nbctl lrp-add tr tr-az2 00:00:00:00:30:06 100.65.0.6/30 100:65::6/126 +check m_as ovn-chassis-1 ovn-nbctl set logical_router_port tr-az2 options:requested-chassis=ovn-chassis-2 + +check m_as ovn-chassis-2 ovn-nbctl lrp-add tr tr-az1 00:00:00:00:30:02 100.65.0.2/30 100:65::2/126 +check m_as ovn-chassis-2 ovn-nbctl set logical_router_port tr-az1 options:requested-chassis=ovn-chassis-1 + +# Important set the proper tunnel keys +check m_as ovn-chassis-1 ovn-nbctl set logical_router_port tr-gw 
+check m_as ovn-chassis-1 ovn-nbctl set logical_router_port tr-az2 options:requested-tnl-key=20
+
+check m_as ovn-chassis-2 ovn-nbctl set logical_router_port tr-gw options:requested-tnl-key=20
+check m_as ovn-chassis-2 ovn-nbctl set logical_router_port tr-az1 options:requested-tnl-key=10
+
+check m_as ovn-chassis-1 ovn-nbctl lsp-add public external
+check m_as ovn-chassis-1 ovn-nbctl lsp-set-addresses external "00:00:00:00:20:10 192.168.100.10 1000::10"
+
+# Add the other chassis on each side as a remote chassis
+check m_as ovn-chassis-1 ovn-sbctl chassis-add ovn-chassis-2 geneve $(m_as ovn-chassis-2 ip -4 addr show eth1 | grep inet | awk '{print $2}' | cut -d'/' -f1)
+check m_as ovn-chassis-1 ovn-sbctl set chassis ovn-chassis-2 other_config:is-remote=true
+
+check m_as ovn-chassis-2 ovn-sbctl chassis-add ovn-chassis-1 geneve $(m_as ovn-chassis-1 ip -4 addr show eth1 | grep inet | awk '{print $2}' | cut -d'/' -f1)
+check m_as ovn-chassis-2 ovn-sbctl set chassis ovn-chassis-1 other_config:is-remote=true
+
+# Configure ports on the transit switch as remotes
+check m_as ovn-chassis-1 ovn-nbctl lsp-set-type pod20 remote
+check m_as ovn-chassis-1 ovn-nbctl lsp-set-options pod10 requested-chassis=ovn-chassis-1
+check m_as ovn-chassis-1 ovn-nbctl lsp-set-options pod20 requested-chassis=ovn-chassis-2
+
+check m_as ovn-chassis-2 ovn-nbctl lsp-set-type pod10 remote
+check m_as ovn-chassis-2 ovn-nbctl lsp-set-options pod10 requested-chassis=ovn-chassis-1
+check m_as ovn-chassis-2 ovn-nbctl lsp-set-options pod20 requested-chassis=ovn-chassis-2
+
+m_as ovn-chassis-1 /data/create_fake_vm.sh external external 00:00:00:00:20:10 1500 192.168.100.10 24 192.168.100.1 1000::10/64 1000::1
+m_as ovn-chassis-1 /data/create_fake_vm.sh pod10 pod10 00:00:00:00:10:10 1500 10.100.200.10 24 10.100.200.1 10:200::10/64 10:200::1
+m_as ovn-chassis-2 /data/create_fake_vm.sh pod20 pod20 00:00:00:00:10:20 1500 10.100.200.20 24 10.100.200.1 10:200::20/64 10:200::1
+
+# We cannot use any of the helpers as they assume that there is only a single ovn-northd instance running
+check m_as ovn-chassis-1 ovn-nbctl --wait=hv sync
+OVS_WAIT_UNTIL([test -n "$(m_as ovn-chassis-1 ovn-sbctl --bare --columns _uuid find Port_Binding logical_port=external up=true)"])
+OVS_WAIT_UNTIL([test -n "$(m_as ovn-chassis-1 ovn-sbctl --bare --columns _uuid find Port_Binding logical_port=pod10 up=true)"])
+check m_as ovn-chassis-2 ovn-nbctl --wait=hv sync
+OVS_WAIT_UNTIL([test -n "$(m_as ovn-chassis-2 ovn-sbctl --bare --columns _uuid find Port_Binding logical_port=pod20 up=true)"])
+
+M_NS_CHECK_EXEC([ovn-chassis-1], [external], [ping -q -c 5 -i 0.3 -w 2 10.100.200.20 | FORMAT_PING], \
+[0], [dnl
+5 packets transmitted, 5 received, 0% packet loss, time 0ms
+])
+
+M_NS_CHECK_EXEC([ovn-chassis-1], [external], [ping -q -c 5 -i 0.3 -w 2 10:200::20 | FORMAT_PING], \
+[0], [dnl
+5 packets transmitted, 5 received, 0% packet loss, time 0ms
+])
+
+echo "Chassis1"
+m_as ovn-chassis-1 ovn-sbctl show
+m_as ovn-chassis-1 ovn-nbctl show
+m_as ovn-chassis-1 ovs-vsctl show
+
+echo "Chassis2"
+m_as ovn-chassis-2 ovn-sbctl show
+m_as ovn-chassis-2 ovn-nbctl show
+m_as ovn-chassis-2 ovs-vsctl show
+
+# Connect the chassis back to the original northd and remove the per-chassis northd.
+for i in 1 2; do
+    chassis="ovn-chassis-$i"
+    ip=$(m_as $chassis ip -4 addr show eth1 | grep inet | awk '{print $2}' | cut -d'/' -f1)
+
+    multinode_setup_controller $chassis $chassis $ip "170.168.0.2"
+    multinode_cleanup_northd $chassis
+done
+
+AT_CLEANUP
diff --git a/tests/ovn-controller.at b/tests/ovn-controller.at
index b2bb6e2d0..7c6f69975 100644
--- a/tests/ovn-controller.at
+++ b/tests/ovn-controller.at
@@ -3536,3 +3536,79 @@ AT_CHECK([grep -c "cookie=$lr1_peer_cookie," log_to_phy_flows], [0], [dnl
 OVN_CLEANUP([hv1])
 AT_CLEANUP
+
+AT_SETUP([Remote chassis flood flows])
+ovn_start
+
+net_add n1
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.11 24 geneve,vxlan,stt
+
+check ovs-vsctl set open . external_ids:ovn-is-interconn=true
+
+check ovn-sbctl chassis-add hv2 geneve 192.168.0.12 \
+    -- set chassis hv2 other_config:is-remote=true
+
+check ovn-sbctl chassis-add hv3 stt 192.168.0.13 \
+    -- set chassis hv3 other_config:is-remote=true
+
+check ovn-sbctl chassis-add hv4 vxlan 192.168.0.14 \
+    -- set chassis hv4 other_config:is-remote=true
+
+check ovn-nbctl --wait=hv sync
+
+chassis_cookie() {
+    name=$1
+    fetch_column chassis _uuid name=$name |\
+    cut -d '-' -f 1 | tr -d '\n' | sed 's/^0\{0,8\}//'
+}
+
+ovs-ofctl dump-flows --names --no-stats br-int table=OFTABLE_PHY_TO_LOG > phy_to_log_flows
+ovs-ofctl dump-flows --names --no-stats br-int table=OFTABLE_FLOOD_REMOTE_CHASSIS > flood_flows
+
+# Check that we have all encap + output actions, one by one, because the order can change
+# Geneve
+AT_CHECK([grep -c 'move:OXM_OF_METADATA\[[0..23\]]->NXM_NX_TUN_ID\[[0..23\]],set_field:0->tun_metadata0,move:NXM_NX_REG14\[[0..14\]]->NXM_NX_TUN_METADATA0\[[16..30\]],output:"ovn-hv2-0"' flood_flows], [0], [dnl
+1
+])
+
+# STT
+AT_CHECK([grep -c 'move:NXM_NX_REG14\[[0..14\]]->NXM_NX_TUN_ID\[[40..54\]],load:0->NXM_NX_TUN_ID\[[24..39\]],move:OXM_OF_METADATA\[[0..23\]]->NXM_NX_TUN_ID\[[0..23\]],output:"ovn-hv3-0"' flood_flows], [0], [dnl
+1
+])
+
+# VXLAN
+AT_CHECK([grep -c 'move:NXM_NX_REG14\[[0..11\]]->NXM_NX_TUN_ID\[[12..23\]],move:OXM_OF_METADATA\[[0..11\]]->NXM_NX_TUN_ID\[[0..11\]],output:"ovn-hv4-0"' flood_flows], [0], [dnl
+1
+])
+
+AT_CHECK([grep -c "reg6=0x8000" flood_flows], [0], [dnl
+1
+])
+
+# Check ingress flows for ARP and ND NA
+# Geneve
+hv2_cookie="0x$(chassis_cookie hv2)"
+AT_CHECK_UNQUOTED([grep "cookie=$hv2_cookie," phy_to_log_flows], [0], [dnl
+ cookie=$hv2_cookie, priority=120,arp,tun_metadata0=0,in_port="ovn-hv2-0",arp_op=2 actions=move:NXM_NX_TUN_ID[[0..23]]->OXM_OF_METADATA[[0..23]],move:NXM_NX_TUN_METADATA0[[16..30]]->NXM_NX_REG14[[0..14]],move:NXM_NX_TUN_METADATA0[[0..15]]->NXM_NX_REG15[[0..15]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE)
+ cookie=$hv2_cookie, priority=120,icmp6,tun_metadata0=0,in_port="ovn-hv2-0",icmp_type=136,icmp_code=0 actions=move:NXM_NX_TUN_ID[[0..23]]->OXM_OF_METADATA[[0..23]],move:NXM_NX_TUN_METADATA0[[16..30]]->NXM_NX_REG14[[0..14]],move:NXM_NX_TUN_METADATA0[[0..15]]->NXM_NX_REG15[[0..15]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE)
+])
+
+# STT
+hv3_cookie="0x$(chassis_cookie hv3)"
+AT_CHECK_UNQUOTED([grep "cookie=$hv3_cookie," phy_to_log_flows], [0], [dnl
+ cookie=$hv3_cookie, priority=120,icmp6,tun_id=0/0xffff000000,in_port="ovn-hv3-0",icmp_type=136,icmp_code=0 actions=move:NXM_NX_TUN_ID[[40..54]]->NXM_NX_REG14[[0..14]],move:NXM_NX_TUN_ID[[24..39]]->NXM_NX_REG15[[0..15]],move:NXM_NX_TUN_ID[[0..23]]->OXM_OF_METADATA[[0..23]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE)
+ cookie=$hv3_cookie, priority=120,arp,tun_id=0/0xffff000000,in_port="ovn-hv3-0",arp_op=2 actions=move:NXM_NX_TUN_ID[[40..54]]->NXM_NX_REG14[[0..14]],move:NXM_NX_TUN_ID[[24..39]]->NXM_NX_REG15[[0..15]],move:NXM_NX_TUN_ID[[0..23]]->OXM_OF_METADATA[[0..23]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE)
+])
+
+# VXLAN
+hv4_cookie="0x$(chassis_cookie hv4)"
+AT_CHECK_UNQUOTED([grep "cookie=$hv4_cookie," phy_to_log_flows], [0], [dnl
+ cookie=$hv4_cookie, priority=120,icmp6,in_port="ovn-hv4-0",icmp_type=136,icmp_code=0 actions=move:NXM_NX_TUN_ID[[12..23]]->NXM_NX_REG15[[0..11]],move:NXM_NX_TUN_ID[[0..11]]->OXM_OF_METADATA[[0..11]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE),resubmit(,OFTABLE_LOCAL_OUTPUT)
+ cookie=$hv4_cookie, priority=120,arp,in_port="ovn-hv4-0",arp_op=2 actions=move:NXM_NX_TUN_ID[[12..23]]->NXM_NX_REG15[[0..11]],move:NXM_NX_TUN_ID[[0..11]]->OXM_OF_METADATA[[0..11]],resubmit(,OFTABLE_LOG_INGRESS_PIPELINE),resubmit(,OFTABLE_LOCAL_OUTPUT)
+])
+
+OVN_CLEANUP([hv1])
+AT_CLEANUP
diff --git a/tests/ovn-macros.at b/tests/ovn-macros.at
index efb333a47..dfe6240d8 100644
--- a/tests/ovn-macros.at
+++ b/tests/ovn-macros.at
@@ -1409,5 +1409,6 @@ m4_define([OFTABLE_CT_ZONE_LOOKUP], [80])
 m4_define([OFTABLE_CT_ORIG_NW_DST_LOAD], [81])
 m4_define([OFTABLE_CT_ORIG_IP6_DST_LOAD], [82])
 m4_define([OFTABLE_CT_ORIG_TP_DST_LOAD], [83])
+m4_define([OFTABLE_FLOOD_REMOTE_CHASSIS], [84])
 
 m4_define([OFTABLE_SAVE_INPORT_HEX], [m4_eval(OFTABLE_SAVE_INPORT, 16)])
diff --git a/tests/ovn.at b/tests/ovn.at
index 2fdf1a88c..d1c317a8b 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -2279,6 +2279,12 @@ ct_tp_dst;
 ct_tp_dst();
     Syntax error at `ct_tp_dst' expecting action.
 
+flood_remote;
+    encodes as set_field:0x8000->reg6,resubmit(,OFTABLE_FLOOD_REMOTE_CHASSIS)
+
+flood_remote();
+    Syntax error at `(' expecting `;'.
+
 # Miscellaneous negative tests.
 ;
     Syntax error at `;'.
@@ -35636,7 +35642,9 @@ check_default_flows() {
     # respectively and it's OK if they don't have a default action.
     # Tables 81, 82 and 83 are part of ct_nw_dst(), ct_ip6_dst() and ct_tp_dst()
     # actions respectively and its OK for them to not have default flows.
-    if test ${table} -eq 68 -o ${table} -eq 70 -o ${table} -eq 81 -o ${table} -eq 82 -o ${table} -eq 83; then
+    # Table 84 is part of the flood_remote action and it's OK for
+    # it to not have default flows.
+    if test ${table} -eq 68 -o ${table} -eq 70 -o ${table} -eq 81 -o ${table} -eq 82 -o ${table} -eq 83 -o ${table} -eq 84; then
         continue;
     fi
     AT_CHECK([grep -qe "table=$table.* priority=0\(,metadata=0x\w*\)\? actions" oflows], [0], [ignore], [ignore], [echo "Table $table does not contain a default action"])
diff --git a/tests/test-ovn.c b/tests/test-ovn.c
index 7954bb98a..c3463e4cf 100644
--- a/tests/test-ovn.c
+++ b/tests/test-ovn.c
@@ -1379,6 +1379,7 @@ test_parse_actions(struct ovs_cmdl_context *ctx OVS_UNUSED)
         .ct_nw_dst_load_table = OFTABLE_CT_ORIG_NW_DST_LOAD,
         .ct_ip6_dst_load_table = OFTABLE_CT_ORIG_IP6_DST_LOAD,
         .ct_tp_dst_load_table = OFTABLE_CT_ORIG_TP_DST_LOAD,
+        .flood_remote_table = OFTABLE_FLOOD_REMOTE_CHASSIS,
         .lflow_uuid.parts = {
             0xaaaaaaaa, 0xbbbbbbbb, 0xcccccccc, 0xdddddddd},
         .dp_key = 0xabcdef,
diff --git a/utilities/ovn-trace.c b/utilities/ovn-trace.c
index 806bdf3d9..423245f3d 100644
--- a/utilities/ovn-trace.c
+++ b/utilities/ovn-trace.c
@@ -3453,6 +3453,9 @@ trace_actions(const struct ovnact *ovnacts, size_t ovnacts_len,
             break;
         case OVNACT_CT_ORIG_TP_DST:
             break;
+        case OVNACT_FLOOD_REMOTE:
+            ovntrace_node_append(super, OVNTRACE_NODE_OUTPUT,
+                                 "/* Flood to all remote chassis */");
         }
     }
     ofpbuf_uninit(&stack);