From patchwork Tue Jul 23 08:05:28 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 1963650
From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: ovs-dev@openvswitch.org
Cc: dceara@redhat.com
Date: Tue, 23 Jul 2024 10:05:28 +0200
Message-ID: <0da4bb3e24e4b611c9e504a80dc2f2da814a2bac.1721721638.git.lorenzo.bianconi@redhat.com>
X-Mailer: git-send-email 2.45.2
Subject: [ovs-dev] [PATCH v8 ovn] northd: Add bfd, static_routes, route_policies and bfd_sync nodes to I-P engine.

Introduce bfd, static_routes, route_policies and bfd_sync nodes in the
northd I-P engine to track BFD connections and northd
static_route/policy_route changes.
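For reference, the dependency wiring added in inc-proc-northd.c
(condensed from the diff below) looks roughly like this:

    NB_bfd, SB_bfd, northd                        -> bfd
    bfd, northd                                   -> static_routes
    bfd, northd                                   -> route_policies
    bfd, NB_bfd, static_routes, route_policies    -> bfd_sync
    bfd, bfd_sync, static_routes, route_policies
    (plus the existing inputs)                    -> lflow

The bfd node builds the bfd_connections map from the NB/SB BFD tables,
the static_routes and route_policies nodes consume that map and record
the BFD sessions they actually reference, and bfd_sync then writes the
resulting status back to the NB BFD table and creates/removes SB BFD
records before lflow generation runs.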
Reported-at: https://issues.redhat.com/browse/FDP-600 Signed-off-by: Lorenzo Bianconi --- Changes since v7: - rework node dependencies and move all NB-BFD table update in bfd_sync node - fix some more comments Changes since v6: - remove unnecessary node inputs - fix comment Changes since v5: - remove bfd_mutex Changes since v4: - introduce bfd_nb_sync node and ger rid of bfd_consumer routine Changes since v3: - fix bfd_northd_change_handler logic - fix route_policies_northd_change_handler logic - fix static_routes_northd_change_handler logic - add unit-tests - cosmetics Changes since v2: - add incremental processing routines - split bfd_consumer node in static_routes and route_policies nodes Changes since v1: - add incremental processing logic for bfd_consumer node to avoid a full recompute --- northd/en-lflow.c | 25 +- northd/en-northd.c | 225 +++++++++++++ northd/en-northd.h | 23 ++ northd/inc-proc-northd.c | 27 +- northd/northd.c | 702 +++++++++++++++++++++++++++------------ northd/northd.h | 73 +++- tests/ovn-northd.at | 56 ++++ 7 files changed, 895 insertions(+), 236 deletions(-) diff --git a/northd/en-lflow.c b/northd/en-lflow.c index c4b927fb8..3dba5034b 100644 --- a/northd/en-lflow.c +++ b/northd/en-lflow.c @@ -41,6 +41,11 @@ lflow_get_input_data(struct engine_node *node, struct lflow_input *lflow_input) { struct northd_data *northd_data = engine_get_input_data("northd", node); + struct bfd_data *bfd_data = engine_get_input_data("bfd", node); + struct static_routes_data *static_routes_data = + engine_get_input_data("static_routes", node); + struct route_policies_data *route_policies_data = + engine_get_input_data("route_policies", node); struct port_group_data *pg_data = engine_get_input_data("port_group", node); struct sync_meters_data *sync_meters_data = @@ -50,10 +55,6 @@ lflow_get_input_data(struct engine_node *node, struct ed_type_ls_stateful *ls_stateful_data = engine_get_input_data("ls_stateful", node); - lflow_input->nbrec_bfd_table = - EN_OVSDB_GET(engine_get_input("NB_bfd", node)); - lflow_input->sbrec_bfd_table = - EN_OVSDB_GET(engine_get_input("SB_bfd", node)); lflow_input->sbrec_logical_flow_table = EN_OVSDB_GET(engine_get_input("SB_logical_flow", node)); lflow_input->sbrec_multicast_group_table = @@ -78,7 +79,10 @@ lflow_get_input_data(struct engine_node *node, lflow_input->meter_groups = &sync_meters_data->meter_groups; lflow_input->lb_datapaths_map = &northd_data->lb_datapaths_map; lflow_input->svc_monitor_map = &northd_data->svc_monitor_map; - lflow_input->bfd_connections = NULL; + lflow_input->bfd_connections = &bfd_data->bfd_connections; + lflow_input->parsed_routes = &static_routes_data->parsed_routes; + lflow_input->route_tables = &static_routes_data->route_tables; + lflow_input->route_policies = &route_policies_data->route_policies; struct ed_type_global_config *global_config = engine_get_input_data("global_config", node); @@ -95,25 +99,14 @@ void en_lflow_run(struct engine_node *node, void *data) struct lflow_input lflow_input; lflow_get_input_data(node, &lflow_input); - struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections); - lflow_input.bfd_connections = &bfd_connections; - stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec()); struct lflow_data *lflow_data = data; lflow_table_clear(lflow_data->lflow_table); lflow_reset_northd_refs(&lflow_input); - build_bfd_table(eng_ctx->ovnsb_idl_txn, - lflow_input.nbrec_bfd_table, - lflow_input.sbrec_bfd_table, - lflow_input.lr_ports, - &bfd_connections); build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, 
lflow_data->lflow_table); - bfd_cleanup_connections(lflow_input.nbrec_bfd_table, - &bfd_connections); - hmap_destroy(&bfd_connections); stopwatch_stop(BUILD_LFLOWS_STOPWATCH_NAME, time_msec()); engine_set_node_state(node, EN_UPDATED); diff --git a/northd/en-northd.c b/northd/en-northd.c index 4479b4aff..817a4736c 100644 --- a/northd/en-northd.c +++ b/northd/en-northd.c @@ -236,6 +236,172 @@ northd_global_config_handler(struct engine_node *node, void *data OVS_UNUSED) return true; } +bool +route_policies_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + if (!northd_has_tracked_data(&northd_data->trk_data)) { + return false; + } + + /* This node uses the below data from the en_northd engine node. + * See (lr_stateful_get_input_data()) + * 1. northd_data->lr_datapaths + * 2. northd_data->lr_ports + * This data gets updated when a logical router or logical router port + * is created or deleted. + * Northd engine node presently falls back to full recompute when + * this happens and so does this node. + * Note: When we add I-P to the created/deleted logical routers or + * logical router ports, we need to revisit this handler. + * + * This node also accesses the route policies of the logical router. + * When these route policies get updated, en_northd engine recomputes + * and so does this node. + * Note: When we add I-P to handle route policies changes, we need + * to revisit this handler. + */ + return true; +} + +void +en_route_policies_run(struct engine_node *node, void *data) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + struct bfd_data *bfd_data = engine_get_input_data("bfd", node); + struct route_policies_data *route_policies_data = data; + + route_policies_destroy(data); + route_policies_init(data); + + struct ovn_datapath *od; + HMAP_FOR_EACH (od, key_node, &northd_data->lr_datapaths.datapaths) { + build_route_policies(od, &northd_data->lr_ports, + &route_policies_data->route_policies, + &route_policies_data->bfd_active_connections, + &bfd_data->bfd_connections); + } + + engine_set_node_state(node, EN_UPDATED); +} + +bool +static_routes_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + if (!northd_has_tracked_data(&northd_data->trk_data)) { + return false; + } + + /* This node uses the below data from the en_northd engine node. + * See (lr_stateful_get_input_data()) + * 1. northd_data->lr_datapaths + * 2. northd_data->lr_ports + * This data gets updated when a logical router or logical router port + * is created or deleted. + * Northd engine node presently falls back to full recompute when + * this happens and so does this node. + * Note: When we add I-P to the created/deleted logical routers or + * logical router ports, we need to revisit this handler. + * + * This node also accesses the static routes of the logical router. + * When these static routes gets updated, en_northd engine recomputes + * and so does this node. + * Note: When we add I-P to handle static routes changes, we need + * to revisit this handler. 
+ */ + return true; +} + +void +en_static_routes_run(struct engine_node *node, void *data) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + struct bfd_data *bfd_data = engine_get_input_data("bfd", node); + struct static_routes_data *static_routes_data = data; + + static_routes_destroy(data); + static_routes_init(data); + + struct ovn_datapath *od; + HMAP_FOR_EACH (od, key_node, &northd_data->lr_datapaths.datapaths) { + for (int i = 0; i < od->nbr->n_ports; i++) { + const char *route_table_name = + smap_get(&od->nbr->ports[i]->options, "route_table"); + get_route_table_id(&static_routes_data->route_tables, + route_table_name); + } + + build_parsed_routes(od, &northd_data->lr_ports, + &static_routes_data->parsed_routes, + &static_routes_data->route_tables, + &static_routes_data->bfd_active_connections, + &bfd_data->bfd_connections); + } + + engine_set_node_state(node, EN_UPDATED); +} + +bool +bfd_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + if (!northd_has_tracked_data(&northd_data->trk_data)) { + return false; + } + + /* This node uses the below data from the en_northd engine node. + * See (lr_stateful_get_input_data()) + * 1. northd_data->lr_ports + * This data gets updated when a logical router port is created or + * deleted. Northd engine node presently falls back to full recompute + * when this happens and so does this node. + * Note: When we add I-P to the created/deleted logical router ports, + * we need to revisit this handler. + */ + return true; +} + +void +en_bfd_run(struct engine_node *node, void *data) +{ + struct northd_data *northd_data = engine_get_input_data("northd", node); + struct bfd_data *bfd_data = data; + const struct nbrec_bfd_table *nbrec_bfd_table = + EN_OVSDB_GET(engine_get_input("NB_bfd", node)); + const struct sbrec_bfd_table *sbrec_bfd_table = + EN_OVSDB_GET(engine_get_input("SB_bfd", node)); + + bfd_destroy(data); + bfd_init(data); + build_bfd_map(nbrec_bfd_table, sbrec_bfd_table, + &northd_data->lr_ports, &bfd_data->bfd_connections); + engine_set_node_state(node, EN_UPDATED); +} + +void +en_bfd_sync_run(struct engine_node *node OVS_UNUSED, + void *data OVS_UNUSED) +{ + const struct engine_context *eng_ctx = engine_get_context(); + struct bfd_data *bfd_data = engine_get_input_data("bfd", node); + struct route_policies_data *route_policies_data + = engine_get_input_data("route_policies", node); + struct static_routes_data *static_routes_data + = engine_get_input_data("static_routes", node); + const struct nbrec_bfd_table *nbrec_bfd_table = + EN_OVSDB_GET(engine_get_input("NB_bfd", node)); + + bfd_table_sync(eng_ctx->ovnsb_idl_txn, nbrec_bfd_table, + &bfd_data->bfd_connections, + &route_policies_data->bfd_active_connections, + &static_routes_data->bfd_active_connections); + engine_set_node_state(node, EN_UPDATED); +} + void *en_northd_init(struct engine_node *node OVS_UNUSED, struct engine_arg *arg OVS_UNUSED) @@ -247,6 +413,42 @@ void return data; } +void +*en_route_policies_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + struct route_policies_data *data = xzalloc(sizeof *data); + + route_policies_init(data); + return data; +} + +void +*en_static_routes_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + struct static_routes_data *data = xzalloc(sizeof *data); + + static_routes_init(data); + return data; +} + +void +*en_bfd_init(struct engine_node *node 
OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + struct bfd_data *data = xzalloc(sizeof *data); + + bfd_init(data); + return data; +} + +void *en_bfd_sync_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED) +{ + return NULL; +} + void en_northd_cleanup(void *data) { @@ -259,3 +461,26 @@ en_northd_clear_tracked_data(void *data_) struct northd_data *data = data_; destroy_northd_data_tracked_changes(data); } + +void +en_route_policies_cleanup(void *data) +{ + route_policies_destroy(data); +} + +void +en_static_routes_cleanup(void *data) +{ + static_routes_destroy(data); +} + +void +en_bfd_cleanup(void *data) +{ + bfd_destroy(data); +} + +void +en_bfd_sync_cleanup(void *data OVS_UNUSED) +{ +} diff --git a/northd/en-northd.h b/northd/en-northd.h index 9b7bda32a..68f7755f8 100644 --- a/northd/en-northd.h +++ b/northd/en-northd.h @@ -19,5 +19,28 @@ bool northd_nb_logical_switch_handler(struct engine_node *, void *data); bool northd_nb_logical_router_handler(struct engine_node *, void *data); bool northd_sb_port_binding_handler(struct engine_node *, void *data); bool northd_lb_data_handler(struct engine_node *, void *data); +void *en_static_routes_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED); +void en_route_policies_cleanup(void *data); +bool route_policies_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED); +void en_route_policies_run(struct engine_node *node, void *data); +void *en_route_policies_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED); +void en_static_routes_cleanup(void *data); +bool static_routes_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED); +void en_static_routes_run(struct engine_node *node, void *data); +void *en_bfd_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED); +void en_bfd_cleanup(void *data); +bool bfd_northd_change_handler(struct engine_node *node, + void *data OVS_UNUSED); +void en_bfd_run(struct engine_node *node, void *data); +void *en_bfd_sync_init(struct engine_node *node OVS_UNUSED, + struct engine_arg *arg OVS_UNUSED); +void en_bfd_sync_run(struct engine_node *node OVS_UNUSED, + void *data OVS_UNUSED); +void en_bfd_sync_cleanup(void *data OVS_UNUSED); #endif /* EN_NORTHD_H */ diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c index d56e9783a..b3baecaff 100644 --- a/northd/inc-proc-northd.c +++ b/northd/inc-proc-northd.c @@ -155,6 +155,10 @@ static ENGINE_NODE_WITH_CLEAR_TRACK_DATA(lb_data, "lb_data"); static ENGINE_NODE_WITH_CLEAR_TRACK_DATA(lr_nat, "lr_nat"); static ENGINE_NODE_WITH_CLEAR_TRACK_DATA(lr_stateful, "lr_stateful"); static ENGINE_NODE_WITH_CLEAR_TRACK_DATA(ls_stateful, "ls_stateful"); +static ENGINE_NODE(route_policies, "route_policies"); +static ENGINE_NODE(static_routes, "static_routes"); +static ENGINE_NODE(bfd, "bfd"); +static ENGINE_NODE(bfd_sync, "bfd_sync"); void inc_proc_northd_init(struct ovsdb_idl_loop *nb, struct ovsdb_idl_loop *sb) @@ -237,18 +241,37 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb, engine_add_input(&en_fdb_aging, &en_global_config, node_global_config_handler); + engine_add_input(&en_bfd, &en_nb_bfd, NULL); + engine_add_input(&en_bfd, &en_sb_bfd, NULL); + engine_add_input(&en_bfd, &en_northd, bfd_northd_change_handler); + + engine_add_input(&en_route_policies, &en_bfd, NULL); + engine_add_input(&en_route_policies, &en_northd, + route_policies_northd_change_handler); + + engine_add_input(&en_static_routes, &en_bfd, NULL); + 
engine_add_input(&en_static_routes, &en_northd, + static_routes_northd_change_handler); + + engine_add_input(&en_bfd_sync, &en_bfd, NULL); + engine_add_input(&en_bfd_sync, &en_nb_bfd, NULL); + engine_add_input(&en_bfd_sync, &en_static_routes, NULL); + engine_add_input(&en_bfd_sync, &en_route_policies, NULL); + engine_add_input(&en_sync_meters, &en_nb_acl, NULL); engine_add_input(&en_sync_meters, &en_nb_meter, NULL); engine_add_input(&en_sync_meters, &en_sb_meter, NULL); - engine_add_input(&en_lflow, &en_nb_bfd, NULL); engine_add_input(&en_lflow, &en_nb_acl, NULL); engine_add_input(&en_lflow, &en_sync_meters, NULL); - engine_add_input(&en_lflow, &en_sb_bfd, NULL); engine_add_input(&en_lflow, &en_sb_logical_flow, NULL); engine_add_input(&en_lflow, &en_sb_multicast_group, NULL); engine_add_input(&en_lflow, &en_sb_igmp_group, NULL); engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL); + engine_add_input(&en_lflow, &en_bfd, NULL); + engine_add_input(&en_lflow, &en_bfd_sync, NULL); + engine_add_input(&en_lflow, &en_route_policies, NULL); + engine_add_input(&en_lflow, &en_static_routes, NULL); engine_add_input(&en_lflow, &en_global_config, node_global_config_handler); engine_add_input(&en_lflow, &en_northd, lflow_northd_handler); diff --git a/northd/northd.c b/northd/northd.c index fe7753f82..70bfb144e 100644 --- a/northd/northd.c +++ b/northd/northd.c @@ -9729,11 +9729,49 @@ build_lswitch_ip_unicast_lookup_for_nats( struct bfd_entry { struct hmap_node hmap_node; + const struct nbrec_bfd *nb_bt; const struct sbrec_bfd *sb_bt; - bool ref; + struct ovn_port *op; + + char *logical_port; + char *dst_ip; + char *status; + bool stale; }; +static struct bfd_entry * +bfd_alloc_entry(struct hmap *bfd_connections, + const char *logical_port, const char *dst_ip, + const char *status) +{ + struct bfd_entry *bfd_e = xzalloc(sizeof *bfd_e); + bfd_e->logical_port = xstrdup(logical_port); + bfd_e->dst_ip = xstrdup(dst_ip); + bfd_e->status = xstrdup(status); + uint32_t hash = hash_string(dst_ip, 0); + hash = hash_string(logical_port, hash); + hmap_insert(bfd_connections, &bfd_e->hmap_node, hash); + + return bfd_e; +} + +static void +bfd_erase_entry(struct bfd_entry *bfd_e) +{ + free(bfd_e->logical_port); + free(bfd_e->dst_ip); + free(bfd_e->status); + free(bfd_e); +} + +static void +bfd_set_status(struct bfd_entry *bfd_e, const char *status) +{ + free(bfd_e->status); + bfd_e->status = xstrdup(status); +} + static struct bfd_entry * bfd_port_lookup(const struct hmap *bfd_map, const char *logical_port, const char *dst_ip) @@ -9744,36 +9782,13 @@ bfd_port_lookup(const struct hmap *bfd_map, const char *logical_port, hash = hash_string(dst_ip, 0); hash = hash_string(logical_port, hash); HMAP_FOR_EACH_WITH_HASH (bfd_e, hmap_node, hash, bfd_map) { - if (!strcmp(bfd_e->sb_bt->logical_port, logical_port) && - !strcmp(bfd_e->sb_bt->dst_ip, dst_ip)) { + if (!strcmp(bfd_e->logical_port, logical_port) && + !strcmp(bfd_e->dst_ip, dst_ip)) { return bfd_e; } } - return NULL; -} -void -bfd_cleanup_connections(const struct nbrec_bfd_table *nbrec_bfd_table, - struct hmap *bfd_map) -{ - const struct nbrec_bfd *nb_bt; - struct bfd_entry *bfd_e; - - NBREC_BFD_TABLE_FOR_EACH (nb_bt, nbrec_bfd_table) { - bfd_e = bfd_port_lookup(bfd_map, nb_bt->logical_port, nb_bt->dst_ip); - if (!bfd_e) { - continue; - } - - if (!bfd_e->ref && strcmp(nb_bt->status, "admin_down")) { - /* no user for this bfd connection */ - nbrec_bfd_set_status(nb_bt, "admin_down"); - } - } - - HMAP_FOR_EACH_POP (bfd_e, hmap_node, bfd_map) { - free(bfd_e); - } + 
return NULL; } #define BFD_DEF_MINTX 1000 /* 1s */ @@ -9837,50 +9852,64 @@ static int bfd_get_unused_port(unsigned long *bfd_src_ports) } void -build_bfd_table(struct ovsdb_idl_txn *ovnsb_txn, - const struct nbrec_bfd_table *nbrec_bfd_table, - const struct sbrec_bfd_table *sbrec_bfd_table, - const struct hmap *lr_ports, struct hmap *bfd_connections) +bfd_table_sync(struct ovsdb_idl_txn *ovnsb_txn, + const struct nbrec_bfd_table *nbrec_bfd_table, + struct hmap *bfd_connections, + struct hmap *rp_bfd_connections, + struct hmap *sr_bfd_connections) { - struct hmap sb_only = HMAP_INITIALIZER(&sb_only); - const struct sbrec_bfd *sb_bt; - unsigned long *bfd_src_ports; - struct bfd_entry *bfd_e; - uint32_t hash; + if (!ovnsb_txn) { + return; + } - bfd_src_ports = bitmap_allocate(BFD_UDP_SRC_PORT_LEN); + unsigned long *bfd_src_ports = bitmap_allocate(BFD_UDP_SRC_PORT_LEN); - SBREC_BFD_TABLE_FOR_EACH (sb_bt, sbrec_bfd_table) { - bfd_e = xmalloc(sizeof *bfd_e); - bfd_e->sb_bt = sb_bt; - hash = hash_string(sb_bt->dst_ip, 0); - hash = hash_string(sb_bt->logical_port, hash); - hmap_insert(&sb_only, &bfd_e->hmap_node, hash); - bitmap_set1(bfd_src_ports, sb_bt->src_port - BFD_UDP_SRC_PORT_START); + struct bfd_entry *bfd_e; + HMAP_FOR_EACH (bfd_e, hmap_node, bfd_connections) { + bfd_e->stale = true; + /* we need to check if this entry is even in the BFD nb db table */ + if (bfd_e->sb_bt) { + bitmap_set1(bfd_src_ports, + bfd_e->sb_bt->src_port - BFD_UDP_SRC_PORT_START); + } } const struct nbrec_bfd *nb_bt; NBREC_BFD_TABLE_FOR_EACH (nb_bt, nbrec_bfd_table) { - if (!nb_bt->status) { - /* default state is admin_down */ - nbrec_bfd_set_status(nb_bt, "admin_down"); + bfd_e = bfd_port_lookup(bfd_connections, nb_bt->logical_port, + nb_bt->dst_ip); + if (!bfd_e) { + continue; } - struct ovn_port *op = ovn_port_find(lr_ports, nb_bt->logical_port); - bfd_e = bfd_port_lookup(&sb_only, nb_bt->logical_port, nb_bt->dst_ip); - if (!bfd_e) { + struct ovn_port *op = bfd_e->op; + if (!op) { + continue; + } + + if (!bfd_port_lookup(rp_bfd_connections, nb_bt->logical_port, + nb_bt->dst_ip) && + !bfd_port_lookup(sr_bfd_connections, nb_bt->logical_port, + nb_bt->dst_ip)) { + /* no consumer for this bfd connection, mark it as admin_down. */ + bfd_set_status(bfd_e, "admin_down"); + } + nbrec_bfd_set_status(nb_bt, bfd_e->status); + + if (!bfd_e->sb_bt) { int udp_src = bfd_get_unused_port(bfd_src_ports); if (udp_src < 0) { continue; } - sb_bt = sbrec_bfd_insert(ovnsb_txn); + /* Add entry to bfd sb table. 
*/ + const struct sbrec_bfd *sb_bt = sbrec_bfd_insert(ovnsb_txn); sbrec_bfd_set_logical_port(sb_bt, nb_bt->logical_port); sbrec_bfd_set_dst_ip(sb_bt, nb_bt->dst_ip); sbrec_bfd_set_disc(sb_bt, 1 + random_uint32()); sbrec_bfd_set_src_port(sb_bt, udp_src); sbrec_bfd_set_status(sb_bt, nb_bt->status); - if (op && op->sb && op->sb->chassis) { + if (op->sb->chassis) { sbrec_bfd_set_chassis_name(sb_bt, op->sb->chassis->name); } @@ -9898,41 +9927,77 @@ build_bfd_table(struct ovsdb_idl_txn *ovnsb_txn, sbrec_bfd_set_status(bfd_e->sb_bt, nb_bt->status); } else { nbrec_bfd_set_status(nb_bt, bfd_e->sb_bt->status); + bfd_set_status(bfd_e, bfd_e->sb_bt->status); } } + build_bfd_update_sb_conf(nb_bt, bfd_e->sb_bt); - if (op && op->sb && op->sb->chassis && - strcmp(op->sb->chassis->name, bfd_e->sb_bt->chassis_name)) { + if (op->sb->chassis && !strcmp(op->sb->chassis->name, + bfd_e->sb_bt->chassis_name)) { sbrec_bfd_set_chassis_name(bfd_e->sb_bt, op->sb->chassis->name); } - - hmap_remove(&sb_only, &bfd_e->hmap_node); - bfd_e->ref = false; - hash = hash_string(bfd_e->sb_bt->dst_ip, 0); - hash = hash_string(bfd_e->sb_bt->logical_port, hash); - hmap_insert(bfd_connections, &bfd_e->hmap_node, hash); } + bfd_e->stale = false; + } - if (op) { - op->has_bfd = true; + HMAP_FOR_EACH (bfd_e, hmap_node, bfd_connections) { + if (!bfd_e->stale) { + continue; } - } - HMAP_FOR_EACH_POP (bfd_e, hmap_node, &sb_only) { - struct ovn_port *op = ovn_port_find(lr_ports, - bfd_e->sb_bt->logical_port); - if (op) { - op->has_bfd = false; + if (bfd_e->op) { + bfd_e->op->has_bfd = false; } sbrec_bfd_delete(bfd_e->sb_bt); - free(bfd_e); } - hmap_destroy(&sb_only); bitmap_free(bfd_src_ports); } +void +build_bfd_map(const struct nbrec_bfd_table *nbrec_bfd_table, + const struct sbrec_bfd_table *sbrec_bfd_table, + const struct hmap *lr_ports, struct hmap *bfd_connections) +{ + struct bfd_entry *bfd_e; + + /* align bfd_map to sb db */ + const struct sbrec_bfd *sb_bt; + SBREC_BFD_TABLE_FOR_EACH (sb_bt, sbrec_bfd_table) { + bfd_e = bfd_port_lookup(bfd_connections, sb_bt->logical_port, + sb_bt->dst_ip); + if (!bfd_e) { + bfd_e = bfd_alloc_entry(bfd_connections, sb_bt->logical_port, + sb_bt->dst_ip, sb_bt->status); + } + bfd_e->sb_bt = sb_bt; + } + + const struct nbrec_bfd *nb_bt; + NBREC_BFD_TABLE_FOR_EACH (nb_bt, nbrec_bfd_table) { + struct ovn_port *op = ovn_port_find(lr_ports, nb_bt->logical_port); + if (!op || !op->sb) { + /* skip not bounded ports */ + continue; + } + + bfd_e = bfd_port_lookup(bfd_connections, nb_bt->logical_port, + nb_bt->dst_ip); + if (!bfd_e) { + /* brand new entry. */ + bfd_e = bfd_alloc_entry(bfd_connections, nb_bt->logical_port, + nb_bt->dst_ip, "admin_down"); + } + + bfd_e->nb_bt = nb_bt; + bfd_e->op = op; + if (op) { + op->has_bfd = true; + } + } +} + /* Returns a string of the IP address of the router port 'op' that * overlaps with 'ip_s". If one is not found, returns NULL. 
* @@ -9968,17 +10033,13 @@ get_outport_for_routing_policy_nexthop(struct ovn_datapath *od, return NULL; } -static struct ovs_mutex bfd_lock = OVS_MUTEX_INITIALIZER; - -static bool check_bfd_state( - const struct nbrec_logical_router_policy *rule, - const struct hmap *bfd_connections, - struct ovn_port *out_port, - const char *nexthop) +static bool check_bfd_state(const struct nbrec_logical_router_policy *rule, + struct ovn_port *out_port, const char *nexthop, + struct hmap *bfd_active_connections, + struct hmap *bfd_connections) { struct in6_addr nexthop_v6; bool is_nexthop_v6 = ipv6_parse(nexthop, &nexthop_v6); - bool ret = true; for (size_t i = 0; i < rule->n_bfd_sessions; i++) { /* Check if there is a BFD session associated to the reroute @@ -10003,39 +10064,45 @@ static bool check_bfd_state( struct bfd_entry *bfd_e = bfd_port_lookup(bfd_connections, nb_bt->logical_port, nb_bt->dst_ip); - ovs_mutex_lock(&bfd_lock); - if (bfd_e) { - bfd_e->ref = true; + if (!bfd_e) { + continue; } - if (!strcmp(nb_bt->status, "admin_down")) { - nbrec_bfd_set_status(nb_bt, "down"); + if (!strcmp(bfd_e->status, "admin_down")) { + bfd_set_status(bfd_e, "down"); } - ret = strcmp(nb_bt->status, "down"); - ovs_mutex_unlock(&bfd_lock); - break; + /* This route policy is linked to an active bfd session. */ + if (!bfd_port_lookup(bfd_active_connections, nb_bt->logical_port, + nb_bt->dst_ip)) { + bfd_alloc_entry(bfd_active_connections, nb_bt->logical_port, + nb_bt->dst_ip, bfd_e->status); + } + + return strcmp(bfd_e->status, "down"); } - return ret; + return true; } static void build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od, - const struct hmap *lr_ports, - const struct nbrec_logical_router_policy *rule, - const struct hmap *bfd_connections, + const struct hmap *lr_ports, struct route_policy *rp, const struct ovsdb_idl_row *stage_hint, struct lflow_ref *lflow_ref) { + const struct nbrec_logical_router_policy *rule = rp->rule; struct ds match = DS_EMPTY_INITIALIZER; struct ds actions = DS_EMPTY_INITIALIZER; if (!strcmp(rule->action, "reroute")) { ovs_assert(rule->n_nexthops <= 1); - char *nexthop = - (rule->n_nexthops == 1 ? rule->nexthops[0] : rule->nexthop); + if (!rp->n_valid_nexthops) { + return; + } + + char *nexthop = rp->valid_nexthops[0]; struct ovn_port *out_port = get_outport_for_routing_policy_nexthop( od, lr_ports, rule->priority, nexthop); if (!out_port) { @@ -10051,10 +10118,6 @@ build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od, return; } - if (!check_bfd_state(rule, bfd_connections, out_port, nexthop)) { - return; - } - uint32_t pkt_mark = smap_get_uint(&rule->options, "pkt_mark", 0); if (pkt_mark) { ds_put_format(&actions, "pkt.mark = %u; ", pkt_mark); @@ -10097,19 +10160,18 @@ static void build_ecmp_routing_policy_flows(struct lflow_table *lflows, struct ovn_datapath *od, const struct hmap *lr_ports, - const struct nbrec_logical_router_policy *rule, - const struct hmap *bfd_connections, + struct route_policy *rp, uint16_t ecmp_group_id, struct lflow_ref *lflow_ref) { - ovs_assert(rule->n_nexthops > 1); - bool nexthops_is_ipv4 = true; + const struct nbrec_logical_router_policy *rule = rp->rule; + ovs_assert(rule->n_nexthops > 1); /* Check that all the nexthops belong to the same addr family before * adding logical flows. */ - for (uint16_t i = 0; i < rule->n_nexthops; i++) { - bool is_ipv4 = strchr(rule->nexthops[i], '.') ? 
true : false; + for (uint16_t i = 0; i < rp->n_valid_nexthops; i++) { + bool is_ipv4 = strchr(rp->valid_nexthops[i], '.') ? true : false; if (i == 0) { nexthops_is_ipv4 = is_ipv4; @@ -10120,7 +10182,7 @@ build_ecmp_routing_policy_flows(struct lflow_table *lflows, VLOG_WARN_RL(&rl, "nexthop [%s] of the router policy with " "the match [%s] do not belong to the same address " "family as other next hops", - rule->nexthops[i], rule->match); + rp->valid_nexthops[i], rule->match); return; } } @@ -10128,40 +10190,30 @@ build_ecmp_routing_policy_flows(struct lflow_table *lflows, struct ds match = DS_EMPTY_INITIALIZER; struct ds actions = DS_EMPTY_INITIALIZER; - size_t *valid_nexthops = xcalloc(rule->n_nexthops, sizeof *valid_nexthops); - size_t n_valid_nexthops = 0; - - for (size_t i = 0; i < rule->n_nexthops; i++) { + for (size_t i = 0; i < rp->n_valid_nexthops; i++) { struct ovn_port *out_port = get_outport_for_routing_policy_nexthop( - od, lr_ports, rule->priority, rule->nexthops[i]); + od, lr_ports, rule->priority, rp->valid_nexthops[i]); if (!out_port) { goto cleanup; } const char *lrp_addr_s = - find_lrp_member_ip(out_port, rule->nexthops[i]); + find_lrp_member_ip(out_port, rp->valid_nexthops[i]); if (!lrp_addr_s) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); VLOG_WARN_RL(&rl, "lrp_addr not found for routing policy " " priority %"PRId64" nexthop %s", - rule->priority, rule->nexthops[i]); + rule->priority, rp->valid_nexthops[i]); goto cleanup; } - if (!check_bfd_state(rule, bfd_connections, out_port, - rule->nexthops[i])) { - continue; - } - - valid_nexthops[n_valid_nexthops++] = i + 1; - ds_clear(&actions); uint32_t pkt_mark = smap_get_uint(&rule->options, "pkt_mark", 0); if (pkt_mark) { ds_put_format(&actions, "pkt.mark = %u; ", pkt_mark); } - bool is_ipv4 = strchr(rule->nexthops[i], '.') ? true : false; + bool is_ipv4 = strchr(rp->valid_nexthops[i], '.') ? true : false; ds_put_format(&actions, "%s = %s; " "%s = %s; " @@ -10170,7 +10222,7 @@ build_ecmp_routing_policy_flows(struct lflow_table *lflows, "flags.loopback = 1; " "next;", is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6, - rule->nexthops[i], + rp->valid_nexthops[i], is_ipv4 ? 
REG_SRC_IPV4 : REG_SRC_IPV6, lrp_addr_s, out_port->lrp_networks.ea_s, @@ -10186,37 +10238,30 @@ build_ecmp_routing_policy_flows(struct lflow_table *lflows, lflow_ref); } - if (!n_valid_nexthops) { - goto cleanup; - } - ds_clear(&actions); - if (n_valid_nexthops > 1) { + if (rp->n_valid_nexthops > 1) { ds_put_format(&actions, "%s = %"PRIu16 "; %s = select(", REG_ECMP_GROUP_ID, ecmp_group_id, REG_ECMP_MEMBER_ID); - for (size_t i = 0; i < n_valid_nexthops; i++) { + for (size_t i = 0; i < rp->n_valid_nexthops; i++) { if (i > 0) { ds_put_cstr(&actions, ", "); } - ds_put_format(&actions, "%"PRIuSIZE, valid_nexthops[i]); + ds_put_format(&actions, "%"PRIuSIZE, i + 1); } ds_put_cstr(&actions, ");"); } else { ds_put_format(&actions, "%s = %"PRIu16 - "; %s = %"PRIuSIZE"; next;", REG_ECMP_GROUP_ID, - ecmp_group_id, REG_ECMP_MEMBER_ID, - valid_nexthops[0]); + "; %s = 1; next;", REG_ECMP_GROUP_ID, + ecmp_group_id, REG_ECMP_MEMBER_ID); } ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_POLICY, rule->priority, rule->match, ds_cstr(&actions), &rule->header_, lflow_ref); - cleanup: - free(valid_nexthops); ds_destroy(&match); ds_destroy(&actions); } @@ -10241,7 +10286,7 @@ route_table_add(struct simap *route_tables, const char *route_table_name) return rtb_id; } -static uint32_t +uint32_t get_route_table_id(struct simap *route_tables, const char *route_table_name) { if (!route_table_name || !route_table_name[0]) { @@ -10282,18 +10327,6 @@ build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows, ds_destroy(&actions); } -struct parsed_route { - struct ovs_list list_node; - struct in6_addr prefix; - unsigned int plen; - bool is_src_route; - uint32_t route_table_id; - uint32_t hash; - const struct nbrec_logical_router_static_route *route; - bool ecmp_symmetric_reply; - bool is_discard_route; -}; - static uint32_t route_hash(struct parsed_route *route) { @@ -10308,11 +10341,53 @@ find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports, /* Parse and validate the route. Return the parsed route if successful. * Otherwise return NULL. */ + static struct parsed_route * +parsed_route_lookup(struct hmap *routes, size_t hash, + struct parsed_route *new_pr) +{ + struct parsed_route *pr; + HMAP_FOR_EACH_WITH_HASH (pr, key_node, hash, routes) { + if (pr->plen != new_pr->plen) { + continue; + } + + if (memcmp(&pr->prefix, &new_pr->prefix, sizeof(struct in6_addr))) { + continue; + } + + if (pr->is_src_route != new_pr->is_src_route) { + continue; + } + + if (pr->route_table_id != new_pr->route_table_id) { + continue; + } + + if (pr->route != new_pr->route) { + continue; + } + + if (pr->ecmp_symmetric_reply != new_pr->ecmp_symmetric_reply) { + continue; + } + + if (pr->is_discard_route != new_pr->is_discard_route) { + continue; + } + + return pr; + } + + return NULL; +} + +static void parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, - struct ovs_list *routes, struct simap *route_tables, + struct hmap *routes, struct simap *route_tables, const struct nbrec_logical_router_static_route *route, - const struct hmap *bfd_connections) + struct hmap *bfd_active_connections, + struct hmap *bfd_connections) { /* Verify that the next hop is an IP address with an all-ones mask. 
*/ struct in6_addr nexthop; @@ -10325,7 +10400,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, VLOG_WARN_RL(&rl, "bad 'nexthop' %s in static route " UUID_FMT, route->nexthop, UUID_ARGS(&route->header_.uuid)); - return NULL; + return; } if ((IN6_IS_ADDR_V4MAPPED(&nexthop) && plen != 32) || (!IN6_IS_ADDR_V4MAPPED(&nexthop) && plen != 128)) { @@ -10333,7 +10408,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, VLOG_WARN_RL(&rl, "bad next hop mask %s in static route " UUID_FMT, route->nexthop, UUID_ARGS(&route->header_.uuid)); - return NULL; + return; } } @@ -10344,7 +10419,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, VLOG_WARN_RL(&rl, "bad 'ip_prefix' %s in static route " UUID_FMT, route->ip_prefix, UUID_ARGS(&route->header_.uuid)); - return NULL; + return; } /* Verify that ip_prefix and nexthop have same address familiy. */ @@ -10355,7 +10430,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, " %s and 'nexthop' %s in static route "UUID_FMT, route->ip_prefix, route->nexthop, UUID_ARGS(&route->header_.uuid)); - return NULL; + return; } } @@ -10364,52 +10439,85 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, !find_static_route_outport(od, lr_ports, route, IN6_IS_ADDR_V4MAPPED(&prefix), NULL, NULL)) { - return NULL; + return; } const struct nbrec_bfd *nb_bt = route->bfd; if (nb_bt && !strcmp(nb_bt->dst_ip, route->nexthop)) { - struct bfd_entry *bfd_e; + struct bfd_entry *bfd_e = bfd_port_lookup(bfd_connections, + nb_bt->logical_port, + nb_bt->dst_ip); + if (!bfd_e) { + return; + } - bfd_e = bfd_port_lookup(bfd_connections, nb_bt->logical_port, - nb_bt->dst_ip); - ovs_mutex_lock(&bfd_lock); - if (bfd_e) { - bfd_e->ref = true; + if (!strcmp(bfd_e->status, "admin_down")) { + bfd_set_status(bfd_e, "down"); } - if (!strcmp(nb_bt->status, "admin_down")) { - nbrec_bfd_set_status(nb_bt, "down"); + /* This static route is linked to an active bfd session. 
*/ + if (!bfd_port_lookup(bfd_active_connections, nb_bt->logical_port, + nb_bt->dst_ip)) { + bfd_alloc_entry(bfd_active_connections, nb_bt->logical_port, + nb_bt->dst_ip, bfd_e->status); } - if (!strcmp(nb_bt->status, "down")) { - ovs_mutex_unlock(&bfd_lock); - return NULL; + if (!strcmp(bfd_e->status, "down")) { + return; } - ovs_mutex_unlock(&bfd_lock); } - struct parsed_route *pr = xzalloc(sizeof *pr); - pr->prefix = prefix; - pr->plen = plen; - pr->route_table_id = get_route_table_id(route_tables, route->route_table); - pr->is_src_route = (route->policy && !strcmp(route->policy, - "src-ip")); - pr->hash = route_hash(pr); - pr->route = route; - pr->ecmp_symmetric_reply = smap_get_bool(&route->options, - "ecmp_symmetric_reply", false); - pr->is_discard_route = is_discard_route; - ovs_list_insert(routes, &pr->list_node); - return pr; + struct parsed_route *new_pr = xzalloc(sizeof *new_pr); + new_pr->prefix = prefix; + new_pr->plen = plen; + new_pr->route_table_id = get_route_table_id(route_tables, + route->route_table); + new_pr->is_src_route = (route->policy && + !strcmp(route->policy, "src-ip")); + new_pr->hash = route_hash(new_pr); + new_pr->route = route; + new_pr->nbr = od->nbr; + new_pr->ecmp_symmetric_reply = smap_get_bool(&route->options, + "ecmp_symmetric_reply", + false); + new_pr->is_discard_route = is_discard_route; + + size_t hash = uuid_hash(&od->key); + struct parsed_route *pr = parsed_route_lookup(routes, hash, new_pr); + if (!pr) { + hmap_insert(routes, &new_pr->key_node, hash); + } else { + pr->stale = false; + free(new_pr); + } } -static void -parsed_routes_destroy(struct ovs_list *routes) +void +build_parsed_routes(struct ovn_datapath *od, const struct hmap *lr_ports, + struct hmap *routes, struct simap *route_tables, + struct hmap *bfd_active_connections, + struct hmap *bfd_connections) { struct parsed_route *pr; - LIST_FOR_EACH_SAFE (pr, list_node, routes) { - ovs_list_remove(&pr->list_node); + HMAP_FOR_EACH (pr, key_node, routes) { + if (pr->nbr == od->nbr) { + pr->stale = true; + } + } + + for (int i = 0; i < od->nbr->n_static_routes; i++) { + parsed_routes_add(od, lr_ports, routes, route_tables, + od->nbr->static_routes[i], + bfd_active_connections, + bfd_connections); + } + + HMAP_FOR_EACH_SAFE (pr, key_node, routes) { + if (!pr->stale) { + continue; + } + + hmap_remove(routes, &pr->key_node); free(pr); } } @@ -12722,7 +12830,8 @@ static void build_static_route_flows_for_lrouter( struct ovn_datapath *od, const struct chassis_features *features, struct lflow_table *lflows, const struct hmap *lr_ports, - const struct hmap *bfd_connections, + struct hmap *parsed_routes, + struct simap *route_tables, struct lflow_ref *lflow_ref) { ovs_assert(od->nbr); @@ -12736,22 +12845,16 @@ build_static_route_flows_for_lrouter( struct hmap ecmp_groups = HMAP_INITIALIZER(&ecmp_groups); struct hmap unique_routes = HMAP_INITIALIZER(&unique_routes); - struct ovs_list parsed_routes = OVS_LIST_INITIALIZER(&parsed_routes); - struct simap route_tables = SIMAP_INITIALIZER(&route_tables); struct ecmp_groups_node *group; for (int i = 0; i < od->nbr->n_ports; i++) { build_route_table_lflow(od, lflows, od->nbr->ports[i], - &route_tables, lflow_ref); + route_tables, lflow_ref); } - for (int i = 0; i < od->nbr->n_static_routes; i++) { - struct parsed_route *route = - parsed_routes_add(od, lr_ports, &parsed_routes, &route_tables, - od->nbr->static_routes[i], bfd_connections); - if (!route) { - continue; - } + struct parsed_route *route; + HMAP_FOR_EACH_WITH_HASH (route, key_node, 
uuid_hash(&od->key), + parsed_routes) { group = ecmp_groups_find(&ecmp_groups, route); if (group) { ecmp_groups_add_route(group, route); @@ -12780,8 +12883,6 @@ build_static_route_flows_for_lrouter( } ecmp_groups_destroy(&ecmp_groups); unique_routes_destroy(&unique_routes); - parsed_routes_destroy(&parsed_routes); - simap_destroy(&route_tables); } /* IP Multicast lookup. Here we set the output port, adjust TTL and @@ -12888,6 +12989,115 @@ build_mcast_lookup_flows_for_lrouter( } } +static struct route_policy * +route_policies_lookup(struct hmap *route_policies, size_t hash, + struct route_policy *new_rp) +{ + struct route_policy *rp; + HMAP_FOR_EACH_WITH_HASH (rp, key_node, hash, route_policies) { + if (rp->rule != new_rp->rule) { + continue; + } + + if (rp->n_valid_nexthops != new_rp->n_valid_nexthops) { + continue; + } + + size_t i; + for (i = 0; i < new_rp->n_valid_nexthops; i++) { + size_t j; + + for (j = 0; j < rp->n_valid_nexthops; j++) { + if (!strcmp(new_rp->valid_nexthops[i], + rp->valid_nexthops[j])) { + break; + } + } + + if (j == rp->n_valid_nexthops) { + break; + } + } + + if (i == new_rp->n_valid_nexthops) { + return rp; + } + } + + return NULL; +} + +void +build_route_policies(struct ovn_datapath *od, struct hmap *lr_ports, + struct hmap *route_policies, + struct hmap *bfd_active_connections, + struct hmap *bfd_connections) +{ + struct route_policy *rp; + + HMAP_FOR_EACH (rp, key_node, route_policies) { + if (rp->nbr == od->nbr) { + rp->stale = true; + } + } + + for (int i = 0; i < od->nbr->n_policies; i++) { + const struct nbrec_logical_router_policy *rule = od->nbr->policies[i]; + size_t n_valid_nexthops = 0; + char **valid_nexthops = NULL; + + if (!strcmp(rule->action, "reroute")) { + size_t n_nexthops = rule->n_nexthops ? rule->n_nexthops : 1; + + valid_nexthops = xcalloc(n_nexthops, sizeof *valid_nexthops); + for (size_t j = 0; j < n_nexthops; j++) { + char *nexthop = rule->n_nexthops + ? rule->nexthops[j] : rule->nexthop; + struct ovn_port *out_port = + get_outport_for_routing_policy_nexthop( + od, lr_ports, rule->priority, nexthop); + if (!out_port || !check_bfd_state(rule, out_port, nexthop, + bfd_active_connections, + bfd_connections)) { + continue; + } + valid_nexthops[n_valid_nexthops++] = nexthop; + } + + if (!n_valid_nexthops) { + free(valid_nexthops); + continue; + } + } + + struct route_policy *new_rp = xzalloc(sizeof *new_rp); + new_rp->rule = rule; + new_rp->n_valid_nexthops = n_valid_nexthops; + new_rp->valid_nexthops = valid_nexthops; + new_rp->nbr = od->nbr; + + size_t hash = uuid_hash(&od->key); + rp = route_policies_lookup(route_policies, hash, new_rp); + if (!rp) { + hmap_insert(route_policies, &new_rp->key_node, hash); + } else { + rp->stale = false; + free(valid_nexthops); + free(new_rp); + } + } + + HMAP_FOR_EACH_SAFE (rp, key_node, route_policies) { + if (!rp->stale) { + continue; + } + + hmap_remove(route_policies, &rp->key_node); + free(rp->valid_nexthops); + free(rp); + } +} + /* Logical router ingress table POLICY: Policy. * * A packet that arrives at this table is an IP packet that should be @@ -12901,7 +13111,7 @@ static void build_ingress_policy_flows_for_lrouter( struct ovn_datapath *od, struct lflow_table *lflows, const struct hmap *lr_ports, - const struct hmap *bfd_connections, + struct hmap *route_policies, struct lflow_ref *lflow_ref) { ovs_assert(od->nbr); @@ -12918,21 +13128,20 @@ build_ingress_policy_flows_for_lrouter( /* Convert routing policies to flows. 
*/ uint16_t ecmp_group_id = 1; - for (int i = 0; i < od->nbr->n_policies; i++) { - const struct nbrec_logical_router_policy *rule - = od->nbr->policies[i]; + struct route_policy *rp; + HMAP_FOR_EACH_WITH_HASH (rp, key_node, uuid_hash(&od->key), + route_policies) { + const struct nbrec_logical_router_policy *rule = rp->rule; bool is_ecmp_reroute = (!strcmp(rule->action, "reroute") && rule->n_nexthops > 1); if (is_ecmp_reroute) { - build_ecmp_routing_policy_flows(lflows, od, lr_ports, rule, - bfd_connections, ecmp_group_id, - lflow_ref); + build_ecmp_routing_policy_flows(lflows, od, lr_ports, rp, + ecmp_group_id, lflow_ref); ecmp_group_id++; } else { - build_routing_policy_flow(lflows, od, lr_ports, rule, - bfd_connections, &rule->header_, - lflow_ref); + build_routing_policy_flow(lflows, od, lr_ports, rp, + &rule->header_, lflow_ref); } } } @@ -15867,6 +16076,9 @@ struct lswitch_flow_build_info { struct ds actions; size_t thread_lflow_counter; const char *svc_monitor_mac; + struct hmap *parsed_routes; + struct hmap *route_policies; + struct simap *route_tables; }; /* Helper function to combine all lflow generation which is iterated by @@ -15913,12 +16125,13 @@ build_lswitch_and_lrouter_iterate_by_lr(struct ovn_datapath *od, build_ip_routing_pre_flows_for_lrouter(od, lsi->lflows, NULL); build_static_route_flows_for_lrouter(od, lsi->features, lsi->lflows, lsi->lr_ports, - lsi->bfd_connections, + lsi->parsed_routes, + lsi->route_tables, NULL); build_mcast_lookup_flows_for_lrouter(od, lsi->lflows, &lsi->match, &lsi->actions, NULL); build_ingress_policy_flows_for_lrouter(od, lsi->lflows, lsi->lr_ports, - lsi->bfd_connections, NULL); + lsi->route_policies, NULL); build_arp_resolve_flows_for_lrouter(od, lsi->lflows, NULL); build_check_pkt_len_flows_for_lrouter(od, lsi->lflows, lsi->lr_ports, &lsi->match, &lsi->actions, @@ -16234,7 +16447,10 @@ build_lswitch_and_lrouter_flows( const struct hmap *svc_monitor_map, const struct hmap *bfd_connections, const struct chassis_features *features, - const char *svc_monitor_mac) + const char *svc_monitor_mac, + struct hmap *parsed_routes, + struct hmap *route_policies, + struct simap *route_tables) { char *svc_check_match = xasprintf("eth.dst == %s", svc_monitor_mac); @@ -16268,6 +16484,9 @@ build_lswitch_and_lrouter_flows( lsiv[index].svc_check_match = svc_check_match; lsiv[index].thread_lflow_counter = 0; lsiv[index].svc_monitor_mac = svc_monitor_mac; + lsiv[index].parsed_routes = parsed_routes; + lsiv[index].route_tables = route_tables; + lsiv[index].route_policies = route_policies; ds_init(&lsiv[index].match); ds_init(&lsiv[index].actions); @@ -16308,6 +16527,9 @@ build_lswitch_and_lrouter_flows( .features = features, .svc_check_match = svc_check_match, .svc_monitor_mac = svc_monitor_mac, + .parsed_routes = parsed_routes, + .route_tables = route_tables, + .route_policies = route_policies, .match = DS_EMPTY_INITIALIZER, .actions = DS_EMPTY_INITIALIZER, }; @@ -16469,7 +16691,10 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn, input_data->svc_monitor_map, input_data->bfd_connections, input_data->features, - input_data->svc_monitor_mac); + input_data->svc_monitor_mac, + input_data->parsed_routes, + input_data->route_policies, + input_data->route_tables); if (parallelization_state == STATE_INIT_HASH_SIZES) { parallelization_state = STATE_USE_PARALLELIZATION; @@ -17545,6 +17770,27 @@ northd_init(struct northd_data *data) init_northd_tracked_data(data); } +void +route_policies_init(struct route_policies_data *data) +{ + hmap_init(&data->route_policies); + 
hmap_init(&data->bfd_active_connections); +} + +void +static_routes_init(struct static_routes_data *data) +{ + hmap_init(&data->parsed_routes); + simap_init(&data->route_tables); + hmap_init(&data->bfd_active_connections); +} + +void +bfd_init(struct bfd_data *data) +{ + hmap_init(&data->bfd_connections); +} + void northd_destroy(struct northd_data *data) { @@ -17584,6 +17830,48 @@ northd_destroy(struct northd_data *data) destroy_northd_tracked_data(data); } +static void +__bfd_destroy(struct hmap *bfd_connections) +{ + struct bfd_entry *bfd_e; + + HMAP_FOR_EACH_POP (bfd_e, hmap_node, bfd_connections) { + bfd_erase_entry(bfd_e); + } + hmap_destroy(bfd_connections); +} + +void +bfd_destroy(struct bfd_data *data) +{ + __bfd_destroy(&data->bfd_connections); +} + +void +route_policies_destroy(struct route_policies_data *data) +{ + struct route_policy *rp; + HMAP_FOR_EACH_POP (rp, key_node, &data->route_policies) { + free(rp->valid_nexthops); + free(rp); + }; + hmap_destroy(&data->route_policies); + __bfd_destroy(&data->bfd_active_connections); +} + +void +static_routes_destroy(struct static_routes_data *data) +{ + struct parsed_route *r; + HMAP_FOR_EACH_POP (r, key_node, &data->parsed_routes) { + free(r); + } + hmap_destroy(&data->parsed_routes); + + simap_destroy(&data->route_tables); + __bfd_destroy(&data->bfd_active_connections); +} + void ovnnb_db_run(struct northd_input *input_data, struct northd_data *data, diff --git a/northd/northd.h b/northd/northd.h index d4a8d75ab..7ddb72eb7 100644 --- a/northd/northd.h +++ b/northd/northd.h @@ -23,6 +23,7 @@ #include "northd/en-port-group.h" #include "northd/ipam.h" #include "openvswitch/hmap.h" +#include "simap.h" #include "ovs-thread.h" struct northd_input { @@ -160,14 +161,34 @@ struct northd_data { struct northd_tracked_data trk_data; }; +struct route_policy { + struct hmap_node key_node; + const struct nbrec_logical_router_policy *rule; + size_t n_valid_nexthops; + char **valid_nexthops; + const struct nbrec_logical_router *nbr; + bool stale; +}; + +struct static_routes_data { + struct hmap parsed_routes; + struct simap route_tables; + struct hmap bfd_active_connections; +}; + +struct route_policies_data { + struct hmap route_policies; + struct hmap bfd_active_connections; +}; + +struct bfd_data { + struct hmap bfd_connections; +}; + struct lr_nat_table; struct lflow_input { - /* Northbound table references */ - const struct nbrec_bfd_table *nbrec_bfd_table; - /* Southbound table references */ - const struct sbrec_bfd_table *sbrec_bfd_table; const struct sbrec_logical_flow_table *sbrec_logical_flow_table; const struct sbrec_multicast_group_table *sbrec_multicast_group_table; const struct sbrec_igmp_group_table *sbrec_igmp_group_table; @@ -190,6 +211,9 @@ struct lflow_input { const struct hmap *svc_monitor_map; bool ovn_internal_version_changed; const char *svc_monitor_mac; + struct hmap *parsed_routes; + struct hmap *route_policies; + struct simap *route_tables; }; extern int parallelization_state; @@ -653,6 +677,20 @@ struct ovn_port { struct lflow_ref *stateful_lflow_ref; }; +struct parsed_route { + struct hmap_node key_node; + struct in6_addr prefix; + unsigned int plen; + bool is_src_route; + uint32_t route_table_id; + uint32_t hash; + const struct nbrec_logical_router_static_route *route; + bool ecmp_symmetric_reply; + bool is_discard_route; + const struct nbrec_logical_router *nbr; + bool stale; +}; + void ovnnb_db_run(struct northd_input *input_data, struct northd_data *data, struct ovsdb_idl_txn *ovnnb_txn, @@ -674,6 +712,18 @@ void 
northd_init(struct northd_data *data); void northd_indices_create(struct northd_data *data, struct ovsdb_idl *ovnsb_idl); +void route_policies_init(struct route_policies_data *); +void route_policies_destroy(struct route_policies_data *); +void build_parsed_routes(struct ovn_datapath *, const struct hmap *, + struct hmap *, struct simap *, struct hmap *, + struct hmap *); +uint32_t get_route_table_id(struct simap *, const char *); +void static_routes_init(struct static_routes_data *); +void static_routes_destroy(struct static_routes_data *); + +void bfd_init(struct bfd_data *); +void bfd_destroy(struct bfd_data *); + struct lflow_table; struct lr_stateful_tracked_data; struct ls_stateful_tracked_data; @@ -711,13 +761,14 @@ bool northd_handle_lb_data_changes(struct tracked_lb_data *, struct hmap *lbgrp_datapaths_map, struct northd_tracked_data *); -void build_bfd_table(struct ovsdb_idl_txn *ovnsb_txn, - const struct nbrec_bfd_table *, - const struct sbrec_bfd_table *, - const struct hmap *lr_ports, - struct hmap *bfd_connections); -void bfd_cleanup_connections(const struct nbrec_bfd_table *, - struct hmap *bfd_map); +void build_route_policies(struct ovn_datapath *, struct hmap *, struct hmap *, + struct hmap *, struct hmap *); +void bfd_table_sync(struct ovsdb_idl_txn *, const struct nbrec_bfd_table *, + struct hmap *, struct hmap *, struct hmap *); +void build_bfd_map(const struct nbrec_bfd_table *, + const struct sbrec_bfd_table *, + const struct hmap *lr_ports, + struct hmap *bfd_connections); void run_update_worker_pool(int n_threads); const struct ovn_datapath *northd_get_datapath_for_port( diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at index a389d1988..a84fb6fab 100644 --- a/tests/ovn-northd.at +++ b/tests/ovn-northd.at @@ -12,6 +12,7 @@ m4_define([_DUMP_DB_TABLES], [ ovn-sbctl list meter >> $1 ovn-sbctl list meter_band >> $1 ovn-sbctl list port_group >> $1 + ovn-sbctl list bfd >> $1 ovn-sbctl dump-flows > lflows_$1 ]) @@ -3858,6 +3859,7 @@ for i in $(seq 1 9); do check ovn-nbctl --wait=sb lsp-set-addresses sw$i-r0 00:00:00:00:00:0$i done +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats uuid=$(ovn-nbctl create bfd logical_port=r0-sw1 dst_ip=192.168.1.2 status=down min_tx=250 min_rx=250 detect_mult=10) ovn-nbctl create bfd logical_port=r0-sw2 dst_ip=192.168.2.2 status=down min_tx=500 min_rx=500 detect_mult=20 ovn-nbctl create bfd logical_port=r0-sw3 dst_ip=192.168.3.2 status=down @@ -3870,6 +3872,13 @@ wait_row_count bfd 1 logical_port=r0-sw2 detect_mult=20 dst_ip=192.168.2.2 \ wait_row_count bfd 1 logical_port=r0-sw3 detect_mult=5 dst_ip=192.168.3.2 \ min_rx=1000 min_tx=1000 status=admin_down +check_engine_stats northd norecompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + uuid=$(fetch_column nb:bfd _uuid logical_port=r0-sw1) check ovn-nbctl set bfd $uuid min_tx=1000 min_rx=1000 detect_mult=100 @@ -3878,10 +3887,25 @@ check ovn-nbctl clear bfd $uuid_2 min_rx wait_row_count bfd 1 logical_port=r0-sw2 min_rx=1000 wait_row_count bfd 1 logical_port=r0-sw1 min_rx=1000 min_tx=1000 detect_mult=100 +check_engine_stats northd norecompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd 
inc-engine/clear-stats + check ovn-nbctl --bfd=$uuid lr-route-add r0 100.0.0.0/8 192.168.1.2 wait_column down bfd status logical_port=r0-sw1 AT_CHECK([ovn-nbctl lr-route-list r0 | grep 192.168.1.2 | grep -q bfd],[0]) +check_engine_stats northd recompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats static_routes recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + check ovn-nbctl --bfd lr-route-add r0 200.0.0.0/8 192.168.2.2 wait_column down bfd status logical_port=r0-sw2 AT_CHECK([ovn-nbctl lr-route-list r0 | grep 192.168.2.2 | grep -q bfd],[0]) @@ -3890,10 +3914,26 @@ check ovn-nbctl --bfd lr-route-add r0 240.0.0.0/8 192.168.5.2 r0-sw5 wait_column down bfd status logical_port=r0-sw5 AT_CHECK([ovn-nbctl lr-route-list r0 | grep 192.168.5.2 | grep -q bfd],[0]) +check_engine_stats northd recompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats static_routes recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + check ovn-nbctl --bfd --policy=src-ip lr-route-add r0 192.168.6.1/32 192.168.10.10 r0-sw6 wait_column down bfd status logical_port=r0-sw6 AT_CHECK([ovn-nbctl lr-route-list r0 | grep 192.168.6.1 | grep -q bfd],[0]) +check_engine_stats northd recompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats route_policies recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + check ovn-nbctl --bfd --policy=src-ip lr-route-add r0 192.168.7.1/32 192.168.10.10 r0-sw7 wait_column down bfd status logical_port=r0-sw7 AT_CHECK([ovn-nbctl lr-route-list r0 | grep 192.168.7.1 | grep -q bfd],[0]) @@ -3921,6 +3961,14 @@ wait_column down bfd status logical_port=r0-sw8 bfd_route_policy_uuid=$(fetch_column nb:bfd _uuid logical_port=r0-sw8) AT_CHECK([ovn-nbctl list logical_router_policy | grep -q $bfd_route_policy_uuid]) +check_engine_stats northd recompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats static_routes recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + check ovn-nbctl lr-policy-del r0 check ovn-nbctl --bfd lr-policy-add r0 100 "ip4.src == 2.3.4.0/24" reroute 192.168.9.2,192.168.9.3,192.168.9.4 @@ -3928,6 +3976,14 @@ wait_column down bfd status dst_ip=192.168.9.2 wait_column down bfd status dst_ip=192.168.9.3 wait_column down bfd status dst_ip=192.168.9.4 +check_engine_stats northd recompute nocompute +check_engine_stats bfd recompute nocompute +check_engine_stats route_policies recompute nocompute +check_engine_stats lflow recompute nocompute +check_engine_stats northd_output norecompute compute +CHECK_NO_CHANGE_AFTER_RECOMPUTE +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats + bfd_route_policy_uuid=$(fetch_column nb:bfd _uuid logical_port=r0-sw9) AT_CHECK([ovn-nbctl list logical_router_policy | sed s/,//g | grep -q "$bfd_route_policy_uuid"])