Message ID | 20231207111159.3524479-1-dceara@redhat.com |
---|---|
State | Accepted |
Series | [ovs-dev] ovn-northd-ddlog: Remove. |
Context | Check | Description |
---|---|---|
ovsrobot/apply-robot | success | apply and check: success |
ovsrobot/github-robot-_Build_and_Test | success | github build: passed |
ovsrobot/github-robot-_ovn-kubernetes | success | github build: passed |
On 12/7/23 12:11, Dumitru Ceara wrote: > From: Ben Pfaff <blp@ovn.org> > > Originally posted at: > https://patchwork.ozlabs.org/project/ovn/patch/20211007194649.4014965-1-blp@ovn.org/ > > Signed-off-by: Ben Pfaff <blp@ovn.org> > --- CC: Han, Mark, Numan > NOTE: I (Dumitru) only rebased this on the latest version of the main > branch. I didn't mark myself as co-author/didn't sign off (yet) because > I didn't really add anything new to the patch. I will however sign off > when pushing this to main if it gets Acked by other maintainers. On a > personal note, I think it's very unfortunate to see the DDlog code go > but it's not used for a while.. > > Changes in V1: > - rebased on top of latest main branch > --- > Documentation/automake.mk | 2 - > Documentation/intro/install/general.rst | 51 - > Documentation/topics/debugging-ddlog.rst | 280 - > Documentation/topics/index.rst | 1 - > Documentation/tutorials/ddlog-new-feature.rst | 393 - > Documentation/tutorials/index.rst | 1 - > NEWS | 1 + > TODO.rst | 6 - > acinclude.m4 | 79 - > configure.ac | 5 - > lib/ovn-util.c | 37 +- > lib/ovn-util.h | 5 - > lib/stopwatch-names.h | 3 - > m4/ovn.m4 | 16 - > northd/.gitignore | 2 - > northd/automake.mk | 111 - > northd/bitwise.dl | 272 - > northd/bitwise.rs | 133 - > northd/copp.dl | 30 - > northd/helpers.dl | 60 - > northd/ipam.dl | 499 - > northd/lrouter.dl | 947 -- > northd/lswitch.dl | 824 -- > northd/multicast.dl | 273 - > northd/ovn-nb.dlopts | 27 - > northd/ovn-northd-ddlog.c | 1368 --- > northd/ovn-northd.8.xml | 56 +- > northd/ovn-sb.dlopts | 34 - > northd/ovn.dl | 387 - > northd/ovn.rs | 750 -- > northd/ovn.toml | 2 - > northd/ovn_northd.dl | 9105 ----------------- > northd/ovsdb2ddlog2c | 131 - > tests/ovn-macros.at | 37 +- > tests/ovn-northd.at | 352 +- > tests/ovn.at | 24 +- > tests/ovs-macros.at | 6 +- > tests/perf-northd.at | 2 +- > tests/system-common-macros.at | 2 +- > tests/system-ovn-kmod.at | 10 +- > tests/system-ovn.at | 152 +- > tutorial/ovs-sandbox | 19 +- > 
utilities/ovn-ctl | 10 +- > 43 files changed, 296 insertions(+), 16209 deletions(-) > delete mode 100644 Documentation/topics/debugging-ddlog.rst > delete mode 100644 Documentation/tutorials/ddlog-new-feature.rst > delete mode 100644 northd/bitwise.dl > delete mode 100644 northd/bitwise.rs > delete mode 100644 northd/copp.dl > delete mode 100644 northd/helpers.dl > delete mode 100644 northd/ipam.dl > delete mode 100644 northd/lrouter.dl > delete mode 100644 northd/lswitch.dl > delete mode 100644 northd/multicast.dl > delete mode 100644 northd/ovn-nb.dlopts > delete mode 100644 northd/ovn-northd-ddlog.c > delete mode 100644 northd/ovn-sb.dlopts > delete mode 100644 northd/ovn.dl > delete mode 100644 northd/ovn.rs > delete mode 100644 northd/ovn.toml > delete mode 100644 northd/ovn_northd.dl > delete mode 100755 northd/ovsdb2ddlog2c > > diff --git a/Documentation/automake.mk b/Documentation/automake.mk > index 7fcd186cac..b00876737b 100644 > --- a/Documentation/automake.mk > +++ b/Documentation/automake.mk > @@ -24,14 +24,12 @@ DOC_SOURCE = \ > Documentation/tutorials/images/ovsdb-relay-3.png \ > Documentation/tutorials/ovn-rbac.rst \ > Documentation/tutorials/ovn-interconnection.rst \ > - Documentation/tutorials/ddlog-new-feature.rst \ > Documentation/topics/index.rst \ > Documentation/topics/testing.rst \ > Documentation/topics/high-availability.rst \ > Documentation/topics/integration.rst \ > Documentation/topics/ovn-news-2.8.rst \ > Documentation/topics/role-based-access-control.rst \ > - Documentation/topics/debugging-ddlog.rst \ > Documentation/topics/vif-plug-providers/index.rst \ > Documentation/topics/vif-plug-providers/vif-plug-providers.rst \ > Documentation/howto/index.rst \ > diff --git a/Documentation/intro/install/general.rst b/Documentation/intro/install/general.rst > index dd8bf5c2c0..ab62094828 100644 > --- a/Documentation/intro/install/general.rst > +++ b/Documentation/intro/install/general.rst > @@ -102,13 +102,6 @@ need the following software: > 
The environment variable OVS_RESOLV_CONF can be used to specify DNS server > configuration file (the default file on Linux is /etc/resolv.conf). > > -- `DDlog <https://github.com/vmware/differential-datalog>`, if you > - want to build ``ovn-northd-ddlog``, an alternate implementation of > - ``ovn-northd`` that scales better to large deployments. The NEWS > - file specifies the right version of DDlog to use with this release. > - Building with DDlog support requires Rust to be installed (see > - https://www.rust-lang.org/tools/install). > - > If you are working from a Git tree or snapshot (instead of from a distribution > tarball), or if you modify the OVN build system or the database > schema, you will also need the following software: > @@ -210,37 +203,6 @@ the default database directory, add options as shown here:: > ``yum install`` or ``rpm -ivh``) and .deb (e.g. via > ``apt-get install`` or ``dpkg -i``) use the above configure options. > > -Use ``--with-ddlog`` to build with DDlog support. To build with > -DDlog, the build system needs to be able to find the ``ddlog`` and > -``ovsdb2ddlog`` binaries and the DDlog library directory (the > -directory that contains ``ddlog_std.dl``). This option supports a > -few ways to do that: > - > - * If binaries are in $PATH, use the library directory as argument, > - e.g. ``--with-ddlog=$HOME/differential-datalog/lib``. This is > - suitable if DDlog was installed from source via ``stack install`` or > - from (hypothetical) distribution packaging. > - > - The DDlog documentation recommends pointing $DDLOG_HOME to the > - DDlog source directory. If you did this, so that $DDLOG_HOME/lib > - is the library directory, you may use ``--with-ddlog`` without an > - argument. > - > - * If the binaries and libraries are in the ``bin`` and ``lib`` > - subdirectories of an installation directory, use the installation > - directory as the argument. 
This is suitable if DDlog was > - installed from one of the binary tarballs published by the DDlog > - developers. > - > -.. note:: > - > - Building with DDLog adds a few minutes to the build because the > - Rust compiler is slow. Add ``--enable-ddlog-fast-build`` to make > - this about 2x faster. This disables some Rust compiler > - optimizations, making a much slower ``ovn-northd-ddlog`` > - executable, so it should not be used for production builds or for > - profiling. > - > By default, static libraries are built and linked against. If you want to use > shared libraries instead:: > > @@ -418,14 +380,6 @@ An example after install might be:: > $ ovn-ctl start_northd > $ ovn-ctl start_controller > > -If you built with DDlog support, then you can start > -``ovn-northd-ddlog`` instead of ``ovn-northd`` by adding > -``--ovn-northd-ddlog=yes``, e.g.:: > - > - $ export PATH=$PATH:/usr/local/share/ovn/scripts > - $ ovn-ctl --ovn-northd-ddlog=yes start_northd > - $ ovn-ctl start_controller > - > Starting OVN Central services > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > @@ -481,11 +435,6 @@ Unix domain socket:: > > $ ovn-northd --pidfile --detach --log-file > > -If you built with DDlog support, you can start ``ovn-northd-ddlog`` > -instead, the same way:: > - > - $ ovn-northd-ddlog --pidfile --detach --log-file > - > Starting OVN Central services in containers > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > diff --git a/Documentation/topics/debugging-ddlog.rst b/Documentation/topics/debugging-ddlog.rst > deleted file mode 100644 > index 2ae72a38ea..0000000000 > --- a/Documentation/topics/debugging-ddlog.rst > +++ /dev/null > @@ -1,280 +0,0 @@ > -.. > - Licensed under the Apache License, Version 2.0 (the "License"); you may > - not use this file except in compliance with the License. 
You may obtain > - a copy of the License at > - > - http://www.apache.org/licenses/LICENSE-2.0 > - > - Unless required by applicable law or agreed to in writing, software > - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT > - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the > - License for the specific language governing permissions and limitations > - under the License. > - > - Convention for heading levels in OVN documentation: > - > - ======= Heading 0 (reserved for the title in a document) > - ------- Heading 1 > - ~~~~~~~ Heading 2 > - +++++++ Heading 3 > - ''''''' Heading 4 > - > - Avoid deeper levels because they do not render well. > - > -========================================= > -Debugging the DDlog version of ovn-northd > -========================================= > - > -This document gives some tips for debugging correctness issues in the > -DDlog implementation of ``ovn-northd``. To keep things concrete, we > -assume here that a failure occurred in one of the test cases in > -``ovn-e2e.at``, but the same methodology applies in any other > -environment. If none of these methods helps, ask for assistance or > -submit a bug report. > - > -Before trying these methods, you may want to check the northd log > -file, ``tests/testsuite.dir/<test_number>/northd/ovn-northd.log``, for > -error messages that might explain the failure. > - > -Compare OVSDB tables generated by DDlog vs C > --------------------------------------------- > - > -The first thing I typically want to check when ``ovn-northd-ddlog`` > -does not behave as expected is how the OVSDB tables computed by DDlog > -differ from what the C implementation produces. Fortunately, all the > -infrastructure needed to do this already exists in OVN. > - > -First, let's modify the test script, e.g., ``ovn.at`` to dump the > -contents of OVSDB right before the failure. 
The most common issue is > -a difference between the logical flows generated by the two > -implementations. To make it easy to compare the generated flows, make > -sure that the test contains something like this in the right place:: > - > - ovn-sbctl dump-flows > sbflows > - AT_CAPTURE_FILE([sbflows]) > - > -The first line above dumps the OVN logical flow table to a file named > -``sbflows``. The second line ensures that, if the test fails, > -``sbflows`` gets logged to ``testsuite.log``. That is not particularly > -useful for us right now, but it means that if someone later submits a > -bug report, that's one more piece of data that we don't have to ask > -for them to submit along with it. > - > -Next, we want to run the test twice, with the C and DDlog versions of > -northd, e.g., ``make check -j6 TESTSUITEFLAGS="-d 111 112"`` if 111 > -and 112 are the C and DDlog versions of the same test. The ``-d`` in > -this command line makes the test driver keep test directories around > -even for tests that succeed, since by default it deletes them. > - > -Now you can look at ``sbflows`` in each test log directory. The > -``ovn-northd-ddlog`` developers have gone to some trouble to make the > -DDlog flows as similar as possible to the C ones, right down to white > -space and other formatting. Thus, the DDlog output is often identical > -to C aside from logical datapath UUIDs. > - > -Usually, this means that one can get informative results by running > -``diff``, e.g.:: > - > - diff -u tests/testsuite.dir/111/sbflows tests/testsuite.dir/112/sbflows > - > -Running the input through the ``uuidfilt`` utility from OVS will > -generally get rid of the logical datapath UUID differences as well:: > - > - diff -u <(uuidfilt tests/testsuite.dir/111/sbflows) <(uuidfilt tests/testsuite.dir/112/sbflows) > - > -If there are nontrivial differences, this often identifies your bug. 
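(Aside for readers less familiar with the test helpers: ``uuidfilt`` replaces each distinct UUID with a placeholder numbered by first appearance, ``<0>``, ``<1>``, ..., so two dumps that differ only in datapath UUIDs compare equal. A rough Rust sketch of the idea, not the actual OVS implementation:)

```rust
use std::collections::HashMap;

/// True if `tok` looks like an OVSDB UUID: 8-4-4-4-12 hex groups.
fn is_uuid(tok: &str) -> bool {
    let dashes = [8, 13, 18, 23];
    tok.len() == 36
        && tok.char_indices().all(|(i, c)| {
            if dashes.contains(&i) {
                c == '-'
            } else {
                c.is_ascii_hexdigit()
            }
        })
}

/// Replace every UUID with `<N>`, numbering UUIDs by first appearance,
/// so dumps that differ only in UUIDs become textually identical.
fn uuidfilt(input: &str) -> String {
    let mut seen: HashMap<String, usize> = HashMap::new();
    input
        .split_inclusive('\n')
        .map(|line| {
            line.split(' ')
                .map(|tok| {
                    // Strip a trailing newline for matching, keep it in output.
                    let t = tok.trim_end_matches('\n');
                    if is_uuid(t) {
                        let n = seen.len();
                        let id = *seen.entry(t.to_string()).or_insert(n);
                        format!("<{}>{}", id, &tok[t.len()..])
                    } else {
                        tok.to_string()
                    }
                })
                .collect::<Vec<_>>()
                .join(" ")
        })
        .collect()
}
```

Because the numbering depends only on order of first appearance, the same logical flow table dumped under two different sets of datapath UUIDs filters to byte-identical text, which is what makes the ``diff`` above clean.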
> - > -Often, once you have identified the difference between the two OVSDB > -dumps, this will immediately lead you to the root cause of the bug, > -but if you are not this lucky then the next method may help. > - > -Record and replay DDlog execution > ---------------------------------- > - > -DDlog offers a way to record all input table updates throughout the > -execution of northd and replay them against DDlog running as a > -standalone executable without all other OVN components. This has two > -advantages. First, this allows one to easily tweak the inputs, e.g. > -to simplify the test scenario. Second, the recorded execution can be > -easily replayed anywhere without having to reproduce your OVN setup. > - > -Use the ``--ddlog-record`` option to record updates, > -e.g. ``--ddlog-record=replay.dat`` to record to ``replay.dat``. > -(OVN's built-in tests automatically do this.) The file contains the > -log of transactions in the DDlog command format (see > -https://github.com/vmware/differential-datalog/blob/master/doc/command_reference/command_reference.md). > - > -To replay the log, you will need the standalone DDlog executable. By > -default, the build system does not compile this program, because it > -increases the already long Rust compilation time. To build it, add > -``NORTHD_CLI=1`` to the ``make`` command line, e.g. ``make > -NORTHD_CLI=1``. > - > -You can modify the log before replaying it, e.g., adding ``dump > -<table>`` commands to dump the contents of relations at various points > -during execution. The <table> name must be fully qualified based on > -the file in which it is declared, e.g. ``OVN_Southbound::<table>`` for > -southbound tables or ``lrouter::<table>.`` for ``lrouter.dl``. You > -can also use ``dump`` without an argument to dump the contents of all > -tables. 
> - > -The following command replays the log generated by OVN test number > -112 and dumps the output of DDlog to ``replay.dump``:: > - > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/112/northd/replay.dat > replay.dump > - > -Or, to dump just the table contents following the run, without having > -to edit ``replay.dat``:: > - > - (cat tests/testsuite.dir/112/northd/replay.dat; echo 'dump;') | northd/ovn_northd_ddlog/target/release/ovn_northd_cli --no-delta --no-init-snapshot > replay.dump > - > -Depending on whether and how you installed OVS and OVN, you might need > -to point ``LD_LIBRARY_PATH`` to library build directories to get the > -CLI to run, e.g.:: > - > - export LD_LIBRARY_PATH=$HOME/ovn/_build/lib/.libs:$HOME/ovs/_build/lib/.libs > - > -.. note:: > - > - The replay output may be less informative than you expect because > - DDlog does not, by default, keep around enough information to > - include input relation and intermediate relations in the output. > - These relations are often critical to understanding what is going > - on. To include them, add the options > - ``--output-internal-relations --output-input-relations=In_`` to > - ``DDLOG_EXTRA_FLAGS`` for building ``ovn-northd-ddlog``. For > - example, ``configure`` as:: > - > - ./configure DDLOG_EXTRA_FLAGS='--output-internal-relations --output-input-relations=In_' > - > -Debugging by Logging > --------------------- > - > -One limitation of the previous method is that it allows one to inspect > -inputs and outputs of a rule, but not the (sometimes fairly > -complicated) computation that goes on inside the rule. You can of > -course break up the rule into several rules and dump the intermediate > -outputs. > - > -There are at least two alternatives for generating log messages. > -First, you can write rules to add strings to the Warning relation > -declared in ``ovn_north.dl``. 
Code in ``ovn-northd-ddlog.c`` will log > -any given string in this relation just once, when it is first added to > -the relation. (If it is removed from the relation and then added back > -later, it will be logged again.) > - > -Second, you can call the ``warn()`` function declared in > -``ovn.dl`` from a DDlog rule. It's not straightforward to know > -exactly when this function will be called, like it would be in an > -imperative language like C, since DDlog is a declarative language > -where the user doesn't directly control when rules are triggered. You > -might, for example, see the rule being triggered multiple times with > -the same input. Nevertheless, this debugging technique is useful in > -practice. > - > -You will find many examples of the use of Warning and ``warn`` in > -``ovn_northd.dl``, where it is frequently used to report non-critical > -errors. > - > -Debugging panics > ----------------- > - > -**TODO**: update these instructions as DDlog's internal handling of panics > -is improved. > - > -DDlog is a safe language, so DDlog programs normally do not crash, > -except for the following three cases: > - > -- A panic in a Rust function imported to DDlog as ``extern function``. > - > -- A panic in a C function imported to DDlog as ``extern function``. > - > -- A bug in the DDlog runtime or libraries. > - > -Below we walk through the steps involved in debugging such failures. > -In this scenario, there is an array-index-out-of-bounds error in the > -``ovn_scan_static_dynamic_ip6()`` function, which is written in Rust > -and imported to DDlog as an ``extern function``. When invoked from a > -DDlog rule, this function causes a panic in one of the DDlog worker > -threads. > - > -**Step 1: Check for error messages in the northd log.** A panic can > -generally lead to unpredictable outcomes, so one cannot count on a > -clean error message showing up in the log (Other outcomes include > -crashing the entire process and even deadlocks. 
We are working to > -eliminate the latter possibility). In this case we are lucky to > -observe a bunch of error messages like the following in the ``northd`` > -log: > - > - ``2019-09-23T16:23:24.549Z|00011|ovn_northd|ERR|ddlog_transaction_commit(): > - error: failed to receive flush ack message from timely dataflow > - thread`` > - > -These messages are telling us that something is broken inside the > -DDlog runtime. > - > -**Step 2: Record and replay the failing scenario.** We use DDlog's > -record/replay capabilities (see above) to capture the faulty scenario. > -We replay the recorded trace:: > - > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat > - > -This generates a bunch of output ending with:: > - > - thread 'worker thread 2' panicked at 'index out of bounds: the len is 1 but the index is 1', /rustc/eae3437dfe991621e8afdc82734f4a172d7ddf9b/src/libcore/slice/mod.rs:2681:10 > - note: run with RUST_BACKTRACE=1 environment variable to display a backtrace. 
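(Side note: DDlog extern functions are ordinary Rust, so the panic above is plain Rust bounds checking. A minimal sketch of the failure mode and the usual fix; ``second_field`` is a made-up helper for illustration, not the actual ``ovn_scan_static_dynamic_ip6()`` logic:)

```rust
/// Hypothetical helper, not the real OVN code: fetch the second
/// whitespace-separated field of a line, if there is one.
fn second_field(line: &str) -> Option<&str> {
    let f: Vec<&str> = line.split_whitespace().collect();
    // The buggy extern function did the moral equivalent of `f[1]`,
    // which panics when `f` has a single element with exactly the
    // message seen above:
    //     "index out of bounds: the len is 1 but the index is 1"
    // `.get(1)` returns None instead of panicking.
    f.get(1).copied()
}
```

Returning ``Option`` pushes the "field missing" case back into the DDlog rule, where it can be handled declaratively instead of killing a worker thread.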
> - > -We run the CLI again with backtrace enabled (as suggested by the > -error message):: > - > - RUST_BACKTRACE=1 northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat > - > -This finally yields the following stack trace, which suggests an array > -bounds violation in ``ovn_scan_static_dynamic_ip6``:: > - > - 0: backtrace::backtrace::libunwind::trace > - at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29 10: core::panicking::panic_bounds_check > - at src/libcore/panicking.rs:61 > - [SKIPPED] > - 11: ovn_northd_ddlog::__ovn::ovn_scan_static_dynamic_ip6 > - 12: ovn_northd_ddlog::prog::__f > - [SKIPPED] > - > -Finally, looking at the source code of > -``ovn_scan_static_dynamic_ip6``, we identify the following line, > -containing an unsafe array indexing operator, as the culprit:: > - > - ovn_ipv6_parse(&f[1].to_string()) > - > -Clean build > -~~~~~~~~~~~ > - > -Occasionally it's desirable to do a full and complete build of the > -DDlog-generated code. To trigger that, delete the generated > -``ovn_northd_ddlog`` directory and the ``ddlog.stamp`` witness file, > -like this:: > - > - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp > - > -or:: > - > - make clean-ddlog > - > -Submitting a bug report > ------------------------ > - > -If you are having trouble with DDlog and the above methods do not > -help, please submit a bug report to ``bugs@openvswitch.org``, CC > -``ryzhyk@gmail.com``. > - > -In addition to the problem description, please provide as many of the > -following as possible: > - > -- Are you running with the right DDlog for the version of OVN? OVN > - and DDlog are both evolving and OVN needs to build against a > - specific version of DDlog. 
> - > -- ``replay.dat`` file generated as described above > - > -- Logs: ``ovn-northd.log`` and ``testsuite.log``, if you are running > - the OVN test suite > diff --git a/Documentation/topics/index.rst b/Documentation/topics/index.rst > index e9e49c7426..55bb919c09 100644 > --- a/Documentation/topics/index.rst > +++ b/Documentation/topics/index.rst > @@ -36,7 +36,6 @@ OVN > .. toctree:: > :maxdepth: 2 > > - debugging-ddlog > integration.rst > high-availability > role-based-access-control > diff --git a/Documentation/tutorials/ddlog-new-feature.rst b/Documentation/tutorials/ddlog-new-feature.rst > deleted file mode 100644 > index de66ca5ada..0000000000 > --- a/Documentation/tutorials/ddlog-new-feature.rst > +++ /dev/null > @@ -1,393 +0,0 @@ > -.. > - Licensed under the Apache License, Version 2.0 (the "License"); you may > - not use this file except in compliance with the License. You may obtain > - a copy of the License at > - > - http://www.apache.org/licenses/LICENSE-2.0 > - > - Unless required by applicable law or agreed to in writing, software > - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT > - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the > - License for the specific language governing permissions and limitations > - under the License. > - > - Convention for heading levels in OVN documentation: > - > - ======= Heading 0 (reserved for the title in a document) > - ------- Heading 1 > - ~~~~~~~ Heading 2 > - +++++++ Heading 3 > - ''''''' Heading 4 > - > - Avoid deeper levels because they do not render well. > - > -=========================================================== > -Adding a new OVN feature to the DDlog version of ovn-northd > -=========================================================== > - > -This document describes the usual steps an OVN developer should go > -through when adding a new feature to ``ovn-northd-ddlog``. 
In order to > -make things less abstract we will use the IP Multicast > -``ovn-northd-ddlog`` implementation as an example. Even though the > -document is structured as a tutorial, there may still be > -feature-specific aspects that are not covered here. > - > -Overview > --------- > - > -DDlog is a dataflow system: it receives data from a data source (a set > -of "input relations"), processes it through "intermediate relations" > -according to the rules specified in the DDlog program, and sends the > -processed "output relations" to a data sink. In OVN, the input > -relations primarily come from the OVN Northbound database and the > -output relations primarily go to the OVN Southbound database. The > -process looks like this:: > - > - from NBDB +----------+ +-----------------+ +-----------+ to SBDB > - ---------->|Input rels|-->|Intermediate rels|-->|Output rels|----------> > - +----------+ +-----------------+ +-----------+ > - > -Adding a new feature to ``ovn-northd-ddlog`` usually involves the > -following steps: > - > -1. Update northbound and/or southbound OVSDB schemas. > - > -2. Configure DDlog/OVSDB bindings. > - > -3. Define intermediate DDlog relations and rules to compute them. > - > -4. Write rules to update output relations. > - > -5. Generate ``Logical_Flow``s and/or other forwarding records (e.g., > - ``Multicast_Group``) that will control the dataplane operations. > - > -Update NB and/or SB OVSDB schemas > ---------------------------------- > - > -This step is no different from the normal development flow in C. > - > -Most of the time a developer chooses between two ways of configuring > -a new feature: > - > -1. Adding a set of columns to tables in the NB and/or SB database (or > - adding key-value pairs to existing columns). > - > -2. Adding new tables to the NB and/or SB database. 
> - > -Looking at IP Multicast, there are two ``OVN Northbound`` tables where > -configuration information is stored: > - > -- ``Logical_Switch``, column ``other_config``, keys ``mcast_*``. > - > -- ``Logical_Router``, column ``options``, keys ``mcast_*``. > - > -These tables become inputs to the DDlog pipeline. > - > -In addition we add a new table ``IP_Multicast`` to the SB database. > -DDlog will update this table, that is, ``IP_Multicast`` receives > -output from the above pipeline. > - > -Configuring DDlog/OVSDB bindings > --------------------------------- > - > -Configuring ``northd/automake.mk`` > -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > - > -The OVN build process uses DDlog's ``ovsdb2ddlog`` utility to parse > -``ovn-nb.ovsschema`` and ``ovn-sb.ovsschema`` and then automatically > -populate ``OVN_Northbound.dl`` and ``OVN_Southbound.dl``. For each > -OVN Northbound and Southbound table, it generates one or more > -corresponding DDlog relations. > - > -We need to supply ``ovsdb2ddlog`` with some information that it can't > -infer from the OVSDB schemas. This information must be specified as > -``ovsdb2ddlog`` arguments, which are read from > -``northd/ovn-nb.dlopts`` and ``northd/ovn-sb.dlopts``. > - > -The main choice for each new table is whether it is used for output. > -Output tables can also be used for input, but the converse is not > -true. If the table is used for output at all, we add ``-o <table>`` > -to the option file. Our new table ``IP_Multicast`` is an output > -table, so we add ``-o IP_Multicast`` to ``ovn-sb.dlopts``. > - > -For input-only tables, ``ovsdb2ddlog`` generates a DDlog input > -relation with the same name. For output tables, it generates this > -table plus an output relation named ``Out_<table>``. 
Thus, > -``OVN_Southbound.dl`` has two relations for ``IP_Multicast``:: > - > - input relation IP_Multicast ( > - _uuid: uuid, > - datapath: string, > - enabled: Set<bool>, > - querier: Set<bool> > - ) > - output relation Out_IP_Multicast ( > - _uuid: uuid, > - datapath: string, > - enabled: Set<bool>, > - querier: Set<bool> > - ) > - > -For an output table, consider whether only some of the columns are > -used for output, that is, some of the columns are effectively > -input-only. This is common in OVN for OVSDB columns that are managed > -externally (e.g. by a CMS). For each input-only column, we add ``--ro > -<table>.<column>``. Alternatively, if most of the columns are > -input-only but a few are output columns, add ``--rw <table>.<column>`` > -for each of the output columns. In our case, all of the columns are > -used for output, so we do not need to add anything. > - > -Finally, in some cases ``ovn-northd-ddlog`` shouldn't change values in > -a column at all. One such case is the ``seq_no`` column in the > -``IP_Multicast`` table. To do that, we need to instruct ``ovsdb2ddlog`` > -to treat the column as read-only by using the ``--ro`` switch. > - > -``ovsdb2ddlog`` generates a number of additional DDlog relations, for > -use by auto-generated OVSDB adapter logic. These are irrelevant to > -most DDlog developers, although sometimes they can be handy for > -debugging. See the appendix_ for details. > - > -Define intermediate DDlog relations and rules to compute them. > --------------------------------------------------------------- > - > -Obviously there will be a one-to-one relationship between logical > -switches/routers and IP multicast configuration. One way to represent > -this relationship is to create multicast configuration DDlog relations > -to be referenced by ``&Switch`` and ``&Router`` DDlog records:: > - > - /* IP Multicast per switch configuration. 
*/ > - relation &McastSwitchCfg( > - datapath : uuid, > - enabled : bool, > - querier : bool > - ) > - > - &McastSwitchCfg( > - .datapath = ls_uuid, > - .enabled = map_get_bool_def(other_config, "mcast_snoop", false), > - .querier = map_get_bool_def(other_config, "mcast_querier", true)) :- > - nb.Logical_Switch(._uuid = ls_uuid, > - .other_config = other_config). > - > -Then reference these relations in ``&Switch`` and ``&Router``. For > -example, in ``lswitch.dl``, the ``&Switch`` relation definition now > -contains:: > - > - relation &Switch( > - ls: nb.Logical_Switch, > - [...] > - mcast_cfg: Ref<McastSwitchCfg> > - ) > - > -It is populated by the following rule, which references the correct > -``McastSwitchCfg`` based on the logical switch uuid:: > - > - &Switch(.ls = ls, > - [...] > - .mcast_cfg = mcast_cfg) :- > - nb.Logical_Switch[ls], > - [...] > - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid). > - > -Build state based on information dynamically updated by ``ovn-controller`` > -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > - > -Some OVN features rely on information learned by ``ovn-controller`` to > -generate ``Logical_Flow`` or other records that control the dataplane. > -In the case of IP Multicast, ``ovn-controller`` uses IGMP to learn > -multicast groups that are joined by hosts. > - > -Each ``ovn-controller`` maintains its own set of records to avoid > -ownership and concurrency issues with other controllers. If two hosts that > -are connected to the same logical switch but reside on different > -hypervisors (different ``ovn-controller`` processes) join the same > -multicast group G, each of the controllers will create an > -``IGMP_Group`` record in the ``OVN Southbound`` database which will > -contain a set of ports to which the interested hosts are connected. > - > -At this point ``ovn-northd-ddlog`` needs to aggregate the per-chassis > -IGMP records to generate a single ``Logical_Flow`` for group G. 
Moreover, the ports on which the hosts are connected are represented > -as references to ``Port_Binding`` records in the database. These also > -need to be translated to ``&SwitchPort`` DDlog relations. The > -corresponding DDlog operations that need to be performed are: > - > -- Flatten the ``<IGMP group, ports>`` mapping in order to be able to > - do the translation from ``Port_Binding`` to ``&SwitchPort``. For > - each ``IGMP_Group`` record in the ``OVN Southbound`` database > - generate an individual record of type ``IgmpSwitchGroupPort`` for > - each ``Port_Binding`` in the set of ports that joined the > - group. Also, translate the ``Port_Binding`` uuid to the > - corresponding ``Logical_Switch_Port`` uuid:: > - > - relation IgmpSwitchGroupPort( > - address: string, > - switch : Ref<Switch>, > - port : uuid > - ) > - > - IgmpSwitchGroupPort(address, switch, lsp_uuid) :- > - sb::IGMP_Group(.address = address, .datapath = igmp_dp_set, > - .ports = pb_ports), > - var pb_port_uuid = FlatMap(pb_ports), > - sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name), > - &SwitchPort( > - .lsp = nb.Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name}, > - .sw = switch). > - > -- Aggregate the flattened IgmpSwitchGroupPort (implicitly from all > - ``ovn-controller`` instances) grouping by address and logical > - switch:: > - > - relation IgmpSwitchMulticastGroup( > - address: string, > - switch : Ref<Switch>, > - ports : Set<uuid> > - ) > - > - IgmpSwitchMulticastGroup(address, switch, ports) :- > - IgmpSwitchGroupPort(address, switch, port), > - var ports = port.group_by((address, switch)).to_set(). > - > -At this point we have all the configuration information relevant to the > -feature stored in DDlog relations in ``ovn-northd-ddlog`` memory. 
> -When the fields that are used have duplicates, the result can be many > -"copies" of a record, which DDlog represents internally with an > -integer "weight" that counts the number of copies. We don't have a > -projection with duplicates in this example, but `lswitch.dl` has many > -of them, such as this one:: > - > - relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > - > - LogicalSwitchHasACLs(ls, true) :- > - LogicalSwitchACL(ls, _). > - > - LogicalSwitchHasACLs(ls, false) :- > - nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchACL(ls, _). > - > -When multiple projections get joined together, the weights can > -overflow, which causes DDlog to malfunction. The solution is to make > -the relation an output relation, which causes DDlog to filter it > -through a "distinct" operator that reduces the weights to 1. Thus, > -`LogicalSwitchHasACLs` is actually implemented this way:: > - > - output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > - > -For more information, see `Avoiding weight overflow > -<https://github.com/vmware/differential-datalog/blob/master/doc/tutorial/tutorial.md#avoiding-weight-overflow>`_ > -in the DDlog tutorial. > - > -Write rules to update output relations > --------------------------------------- > - > -The developer updates output tables by writing rules that generate > -``Out_*`` relations. For IP Multicast this means:: > - > - /* IP_Multicast table (only applicable for Switches). */ > - sb::Out_IP_Multicast(._uuid = hash128(cfg.datapath), > - .datapath = cfg.datapath, > - .enabled = set_singleton(cfg.enabled), > - .querier = set_singleton(cfg.querier)) :- > - &McastSwitchCfg[cfg]. > - > -.. note:: ``OVN_Southbound.dl`` also contains an ``IP_Multicast`` > - relation with ``input`` qualifier. This relation stores the > - current snapshot of the OVSDB table and cannot be written to. 
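(Side note on the weight mechanics described under "Pitfalls of projections" above: they can be modeled outside DDlog. The sketch below is an illustrative toy model, not DDlog's actual implementation: a relation is a map from record to multiplicity, a projection sums multiplicities, a join multiplies them, and the "distinct" pass applied to output relations clamps them back to 1:)

```rust
use std::collections::HashMap;

// Toy model of a DDlog relation: each record carries an integer weight
// (its multiplicity in the collection).
type Relation<T> = HashMap<T, i64>;

/// A projection keeps only part of each record, so duplicates pile up
/// as weight on the surviving record.
fn project<T, U, F>(rel: &Relation<T>, f: F) -> Relation<U>
where
    U: std::hash::Hash + Eq,
    F: Fn(&T) -> U,
{
    let mut out = Relation::new();
    for (rec, w) in rel {
        *out.entry(f(rec)).or_insert(0) += w;
    }
    out
}

/// Joining on equal records multiplies weights; chaining projections
/// through joins is what can overflow the weight counter.
fn join_weights<T>(a: &Relation<T>, b: &Relation<T>) -> Relation<T>
where
    T: std::hash::Hash + Eq + Clone,
{
    a.iter()
        .filter_map(|(rec, wa)| b.get(rec).map(|wb| (rec.clone(), wa * wb)))
        .collect()
}

/// The "distinct" pass that DDlog applies to output relations:
/// clamp every weight back to 1.
fn distinct<T>(rel: &Relation<T>) -> Relation<T>
where
    T: std::hash::Hash + Eq + Clone,
{
    rel.keys().map(|rec| (rec.clone(), 1)).collect()
}
```

In this model, projecting three ``(ls, acl)`` rows of one switch down to just ``ls`` yields that record with weight 3; a join of two such projections has weight 9; ``distinct`` brings it back to 1, which is why ``LogicalSwitchHasACLs`` is declared ``output``.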
> - > -Generate ``Logical_Flow`` and/or other forwarding records > ---------------------------------------------------------- > - > -At this point we have defined all DDlog relations required to generate > -``Logical_Flow``s. All we have to do is write the rules to do so. > -For each ``IgmpSwitchMulticastGroup`` we generate a ``Flow`` that has > -as action ``"outport = <Multicast_Group>; output;"``:: > - > - /* Ingress table 17: Add IP multicast flows learnt from IGMP (priority 90). */ > - for (IgmpSwitchMulticastGroup(.address = address, .switch = &sw)) { > - Flow(.logical_datapath = sw.dpname, > - .stage = switch_stage(IN, L2_LKUP), > - .priority = 90, > - .__match = "eth.mcast && ip4 && ip4.dst == ${address}", > - .actions = "outport = \"${address}\"; output;", > - .external_ids = map_empty()) > - } > - > -In some cases generating a logical flow is not enough. For IGMP we > -also need to maintain OVN southbound ``Multicast_Group`` records, > -one per IGMP group storing the corresponding ``Port_Binding`` uuids of > -ports where multicast traffic should be sent. This is also relatively > -straightforward:: > - > - /* Create a multicast group for each IGMP group learned by a Switch. > - * 'tunnel_key' == 0 triggers an ID allocation later. > - */ > - sb::Out_Multicast_Group (.datapath = switch.dpname, > - .name = address, > - .tunnel_key = 0, > - .ports = set_map_uuid2name(port_ids)) :- > - IgmpSwitchMulticastGroup(address, &switch, port_ids). > - > -We must also define DDlog relations that will allocate ``tunnel_key`` > -values. There are two cases: tunnel keys for records that already > -existed in the database are preserved to implement stable id > -allocation; new multicast groups need new keys. This kind of > -allocation can be tricky, especially to new users of DDlog. OVN > -contains multiple instances of allocation, so it's probably worth > -reading through the existing cases and following their pattern, and, > -if it's still tricky, asking for assistance. 
> - > -Appendix A. Additional relations generated by ``ovsdb2ddlog`` > -------------------------------------------------------------- > - > -.. _appendix: > - > -ovsdb2ddlog generates some extra relations to manage communication > -with the OVSDB server. It generates records in the following > -relations when rows in OVSDB output tables need to be added or deleted > -or updated. > - > -In the steady state, when everything is working well, a given record > -stays in any one of these relations only briefly: just long enough for > -``ovn-northd-ddlog`` to send a transaction to the OVSDB server. When > -the OVSDB server applies the update and sends an acknowledgement, this > -ordinarily means that these relations become empty, because there are > -no longer any further changes to send. > - > -Thus, records that persist in one of these relations are a sign of a > -problem. One example of such a problem is the database server > -rejecting the transactions sent by ``ovn-northd-ddlog``, which might > -happen if, for example, a bug in a ``.dl`` file would cause some OVSDB > -constraint or relational integrity rule to be violated. (Such a > -problem can often be diagnosed by looking in the OVSDB server's log.)
> - > -- ``DeltaPlus_IP_Multicast`` used by the DDlog program to track new > - records that are not yet added to the database:: > - > - output relation DeltaPlus_IP_Multicast ( > - datapath: uuid_or_string_t, > - enabled: Set<bool>, > - querier: Set<bool> > - ) > - > -- ``DeltaMinus_IP_Multicast`` used by the DDlog program to track > - records that are no longer needed in the database and need to be > - removed:: > - > - output relation DeltaMinus_IP_Multicast ( > - _uuid: uuid > - ) > - > -- ``Update_IP_Multicast`` used by the DDlog program to track records > - whose fields need to be updated in the database:: > - > - output relation Update_IP_Multicast ( > - _uuid: uuid, > - enabled: Set<bool>, > - querier: Set<bool> > - ) > diff --git a/Documentation/tutorials/index.rst b/Documentation/tutorials/index.rst > index de90b780a5..0b7f52d0b7 100644 > --- a/Documentation/tutorials/index.rst > +++ b/Documentation/tutorials/index.rst > @@ -45,4 +45,3 @@ vSwitch. > ovn-ovsdb-relay > ovn-ipsec > ovn-interconnection > - ddlog-new-feature > diff --git a/NEWS b/NEWS > index acb3b854fb..b3f40f4f91 100644 > --- a/NEWS > +++ b/NEWS > @@ -10,6 +10,7 @@ Post v23.09.0 > external_ids:ovn-openflow-probe-interval configuration option for > ovn-controller no longer matters and is ignored. > - Enable PMTU discovery on geneve tunnels for E/W traffic. > + - ovn-northd-ddlog has been removed. > > OVN v23.09.0 - 15 Sep 2023 > -------------------------- > diff --git a/TODO.rst b/TODO.rst > index b790a9fadf..09b4be54d3 100644 > --- a/TODO.rst > +++ b/TODO.rst > @@ -86,12 +86,6 @@ OVN To-do List > > * Packaging for Debian. > > -* ovn-northd-ddlog: Calls to warn() and err() from DDlog code would be > - better refactored to use the Warning[] relation (and introduce an > - Error[] relation once we want to issue some errors that way). This > - would be easier with some improvements in DDlog to more easily > - output to multiple relations from a single production. 
> - > * IP Multicast Relay > > * When connecting bridged logical switches (localnet) to logical routers > diff --git a/acinclude.m4 b/acinclude.m4 > index ad3ee9fdf4..1a80dfaa78 100644 > --- a/acinclude.m4 > +++ b/acinclude.m4 > @@ -42,85 +42,6 @@ AC_DEFUN([OVS_ENABLE_WERROR], > fi > AC_SUBST([SPARSE_WERROR])]) > > -dnl OVS_CHECK_DDLOG([VERSION]) > -dnl > -dnl Configure ddlog source tree, checking for the given DDlog VERSION. > -dnl VERSION should be a major and minor, e.g. 0.36, which will accept > -dnl 0.36.0, 0.36.1, and so on. Omit VERSION to accept any version of > -dnl ddlog (which is probably only useful for developers who are trying > -dnl different versions, since OVN is currently bound to a particular > -dnl DDlog version). > -AC_DEFUN([OVS_CHECK_DDLOG], [ > - AC_ARG_VAR([DDLOG_HOME], [Root of the DDlog installation]) > - AC_ARG_WITH( > - [ddlog], > - [AS_HELP_STRING([--with-ddlog[[=INSTALLDIR|LIBDIR]]], [Enables DDlog])], > - [DDLOG_PATH=$PATH > - if test "$withval" = yes; then > - # --with-ddlog: $DDLOG_HOME must be set > - if test -z "$DDLOG_HOME"; then > - AC_MSG_ERROR([To build with DDlog, specify the DDlog install or library directory on --with-ddlog or in \$DDLOG_HOME]) > - fi > - DDLOGLIBDIR=$DDLOG_HOME/lib > - test -d "$DDLOG_HOME/bin" && DDLOG_PATH=$DDLOG_HOME/bin > - elif test -f "$withval/lib/ddlog_std.dl"; then > - # --with-ddlog=INSTALLDIR > - DDLOGLIBDIR=$withval/lib > - test -d "$withval/bin" && DDLOG_PATH=$withval/bin > - elif test -f "$withval/ddlog_std.dl"; then > - # --with-ddlog=LIBDIR > - DDLOGLIBDIR=$withval/lib > - else > - AC_MSG_ERROR([$withval does not contain ddlog_std.dl or lib/ddlog_std.dl]) > - fi], > - [DDLOGLIBDIR=no > - DDLOG_PATH=no]) > - > - AC_MSG_CHECKING([for DDlog library directory]) > - AC_MSG_RESULT([$DDLOGLIBDIR]) > - if test "$DDLOGLIBDIR" != no; then > - AC_ARG_VAR([DDLOG], [path to ddlog binary]) > - AC_PATH_PROGS([DDLOG], [ddlog], [none], [$DDLOG_PATH]) > - if test X"$DDLOG" = X"none"; then > - 
AC_MSG_ERROR([ddlog is required to build with DDlog]) > - fi > - > - AC_ARG_VAR([OVSDB2DDLOG], [path to ovsdb2ddlog binary]) > - AC_PATH_PROGS([OVSDB2DDLOG], [ovsdb2ddlog], [none], [$DDLOG_PATH]) > - if test X"$OVSDB2DDLOG" = X"none"; then > - AC_MSG_ERROR([ovsdb2ddlog is required to build with DDlog]) > - fi > - > - for tool in "$DDLOG" "$OVSDB2DDLOG"; do > - AC_MSG_CHECKING([$tool version]) > - $tool --version >&AS_MESSAGE_LOG_FD 2>&1 > - tool_version=$($tool --version | sed -n 's/^.* v\([[0-9]][[^ ]]*\).*/\1/p') > - AC_MSG_RESULT([$tool_version]) > - m4_if([$1], [], [], [ > - AS_CASE([$tool_version], > - [$1 | $1.*], [], > - [*], [AC_MSG_ERROR([DDlog version $1.x is required, but $tool is version $tool_version])])]) > - done > - > - AC_ARG_VAR([CARGO]) > - AC_CHECK_PROGS([CARGO], [cargo], [none]) > - if test X"$CARGO" = X"none"; then > - AC_MSG_ERROR([cargo is required to build with DDlog]) > - fi > - > - AC_ARG_VAR([RUSTC]) > - AC_CHECK_PROGS([RUSTC], [rustc], [none]) > - if test X"$RUSTC" = X"none"; then > - AC_MSG_ERROR([rustc is required to build with DDlog]) > - fi > - > - AC_SUBST([DDLOGLIBDIR]) > - AC_DEFINE([DDLOG], [1], [Build OVN daemons with ddlog.]) > - fi > - > - AM_CONDITIONAL([DDLOG], [test "$DDLOGLIBDIR" != no]) > -]) > - > dnl Checks for net/if_dl.h. 
> dnl > dnl (We use this as a proxy for checking whether we're building on FreeBSD > diff --git a/configure.ac b/configure.ac > index e8a5edbb25..8284800e74 100644 > --- a/configure.ac > +++ b/configure.ac > @@ -132,7 +132,6 @@ OVS_LIBTOOL_VERSIONS > OVS_CHECK_CXX > AX_FUNC_POSIX_MEMALIGN > OVN_CHECK_UNBOUND > -OVS_CHECK_DDLOG_FAST_BUILD > > OVS_CHECK_INCLUDE_NEXT([stdio.h string.h]) > AC_CONFIG_FILES([lib/libovn.sym]) > @@ -169,7 +168,6 @@ OVS_CONDITIONAL_CC_OPTION([-Wno-unused-parameter], [HAVE_WNO_UNUSED_PARAMETER]) > OVS_ENABLE_WERROR > OVS_ENABLE_SPARSE > > -OVS_CHECK_DDLOG([0.47]) > OVS_CHECK_PRAGMA_MESSAGE > OVN_CHECK_OVS > OVN_CHECK_VIF_PLUG_PROVIDER > @@ -177,9 +175,6 @@ OVN_ENABLE_VIF_PLUG > OVS_CTAGS_IDENTIFIERS > AC_SUBST([OVS_CFLAGS]) > AC_SUBST([OVS_LDFLAGS]) > -AC_SUBST([DDLOG_EXTRA_FLAGS]) > -AC_SUBST([DDLOG_EXTRA_RUSTFLAGS]) > -AC_SUBST([DDLOG_NORTHD_LIB_ONLY]) > > AC_SUBST([ovs_srcdir], ['${OVSDIR}']) > AC_SUBST([ovs_builddir], ['${OVSBUILDDIR}']) > diff --git a/lib/ovn-util.c b/lib/ovn-util.c > index 33105202f2..6ef9cac7f2 100644 > --- a/lib/ovn-util.c > +++ b/lib/ovn-util.c > @@ -304,37 +304,27 @@ bool extract_ip_address(const char *address, struct lport_addresses *laddrs) > bool > extract_lrp_networks(const struct nbrec_logical_router_port *lrp, > struct lport_addresses *laddrs) > -{ > - return extract_lrp_networks__(lrp->mac, lrp->networks, lrp->n_networks, > - laddrs); > -} > - > -/* Separate out the body of 'extract_lrp_networks()' for use from DDlog, > - * which does not know the 'nbrec_logical_router_port' type. 
*/ > -bool > -extract_lrp_networks__(char *mac, char **networks, size_t n_networks, > - struct lport_addresses *laddrs) > { > memset(laddrs, 0, sizeof *laddrs); > > - if (!eth_addr_from_string(mac, &laddrs->ea)) { > + if (!eth_addr_from_string(lrp->mac, &laddrs->ea)) { > laddrs->ea = eth_addr_zero; > return false; > } > snprintf(laddrs->ea_s, sizeof laddrs->ea_s, ETH_ADDR_FMT, > ETH_ADDR_ARGS(laddrs->ea)); > > - for (int i = 0; i < n_networks; i++) { > + for (int i = 0; i < lrp->n_networks; i++) { > ovs_be32 ip4; > struct in6_addr ip6; > unsigned int plen; > char *error; > > - error = ip_parse_cidr(networks[i], &ip4, &plen); > + error = ip_parse_cidr(lrp->networks[i], &ip4, &plen); > if (!error) { > if (!ip4) { > static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); > - VLOG_WARN_RL(&rl, "bad 'networks' %s", networks[i]); > + VLOG_WARN_RL(&rl, "bad 'networks' %s", lrp->networks[i]); > continue; > } > > @@ -343,13 +333,13 @@ extract_lrp_networks__(char *mac, char **networks, size_t n_networks, > } > free(error); > > - error = ipv6_parse_cidr(networks[i], &ip6, &plen); > + error = ipv6_parse_cidr(lrp->networks[i], &ip6, &plen); > if (!error) { > add_ipv6_netaddr(laddrs, ip6, plen); > } else { > static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); > VLOG_INFO_RL(&rl, "invalid syntax '%s' in networks", > - networks[i]); > + lrp->networks[i]); > free(error); > } > } > @@ -889,21 +879,6 @@ ovn_get_internal_version(void) > N_OVNACTS, OVN_INTERNAL_MINOR_VER); > } > > -#ifdef DDLOG > -/* Callbacks used by the ddlog northd code to print warnings and errors. 
*/ > -void > -ddlog_warn(const char *msg) > -{ > - VLOG_WARN("%s", msg); > -} > - > -void > -ddlog_err(const char *msg) > -{ > - VLOG_ERR("%s", msg); > -} > -#endif > - > uint32_t > get_tunnel_type(const char *name) > { > diff --git a/lib/ovn-util.h b/lib/ovn-util.h > index bff50dbde9..aa0b3b2fb4 100644 > --- a/lib/ovn-util.h > +++ b/lib/ovn-util.h > @@ -311,11 +311,6 @@ BUILD_ASSERT_DECL( > /* The number of tables for the ingress and egress pipelines. */ > #define LOG_PIPELINE_LEN 29 > > -#ifdef DDLOG > -void ddlog_warn(const char *msg); > -void ddlog_err(const char *msg); > -#endif > - > static inline uint32_t > hash_add_in6_addr(uint32_t hash, const struct in6_addr *addr) > { > diff --git a/lib/stopwatch-names.h b/lib/stopwatch-names.h > index 3452cc71cf..4e93c1dc14 100644 > --- a/lib/stopwatch-names.h > +++ b/lib/stopwatch-names.h > @@ -15,9 +15,6 @@ > #ifndef STOPWATCH_NAMES_H > #define STOPWATCH_NAMES_H 1 > > -/* In order to not duplicate names for stopwatches between ddlog and non-ddlog > - * we define them in a common header file. > - */ > #define NORTHD_LOOP_STOPWATCH_NAME "ovn-northd-loop" > #define OVNNB_DB_RUN_STOPWATCH_NAME "ovnnb_db_run" > #define OVNSB_DB_RUN_STOPWATCH_NAME "ovnsb_db_run" > diff --git a/m4/ovn.m4 b/m4/ovn.m4 > index 902865c585..ebe4c96123 100644 > --- a/m4/ovn.m4 > +++ b/m4/ovn.m4 > @@ -576,19 +576,3 @@ AC_DEFUN([OVN_CHECK_UNBOUND], > fi > AM_CONDITIONAL([HAVE_UNBOUND], [test "$HAVE_UNBOUND" = yes]) > AC_SUBST([HAVE_UNBOUND])]) > - > -dnl Checks for --enable-ddlog-fast-build and updates DDLOG_EXTRA_RUSTFLAGS. 
> -AC_DEFUN([OVS_CHECK_DDLOG_FAST_BUILD], > - [AC_ARG_ENABLE( > - [ddlog_fast_build], > - [AS_HELP_STRING([--enable-ddlog-fast-build], > - [Build ddlog programs faster, but generate slower code])], > - [case "${enableval}" in > - (yes) ddlog_fast_build=true ;; > - (no) ddlog_fast_build=false ;; > - (*) AC_MSG_ERROR([bad value ${enableval} for --enable-ddlog-fast-build]) ;; > - esac], > - [ddlog_fast_build=false]) > - if $ddlog_fast_build; then > - DDLOG_EXTRA_RUSTFLAGS="-C opt-level=z" > - fi]) > diff --git a/northd/.gitignore b/northd/.gitignore > index 0f2b33ae7d..39a6f79887 100644 > --- a/northd/.gitignore > +++ b/northd/.gitignore > @@ -1,6 +1,4 @@ > /ovn-northd > -/ovn-northd-ddlog > /ovn-northd.8 > /OVN_Northbound.dl > /OVN_Southbound.dl > -/ovn_northd_ddlog/ > diff --git a/northd/automake.mk b/northd/automake.mk > index cf622fc3c9..5d77ca67b7 100644 > --- a/northd/automake.mk > +++ b/northd/automake.mk > @@ -35,114 +35,3 @@ northd_ovn_northd_LDADD = \ > man_MANS += northd/ovn-northd.8 > EXTRA_DIST += northd/ovn-northd.8.xml > CLEANFILES += northd/ovn-northd.8 > - > -EXTRA_DIST += \ > - northd/ovn-nb.dlopts \ > - northd/ovn-sb.dlopts \ > - northd/ovn.toml \ > - northd/ovn.rs \ > - northd/bitwise.rs \ > - northd/ovsdb2ddlog2c \ > - $(ddlog_sources) > - > -ddlog_sources = \ > - northd/ovn_northd.dl \ > - northd/lswitch.dl \ > - northd/lrouter.dl \ > - northd/ipam.dl \ > - northd/multicast.dl \ > - northd/ovn.dl \ > - northd/ovn.rs \ > - northd/helpers.dl \ > - northd/bitwise.dl \ > - northd/copp.dl > -ddlog_nodist_sources = \ > - northd/OVN_Northbound.dl \ > - northd/OVN_Southbound.dl > - > -if DDLOG > -bin_PROGRAMS += northd/ovn-northd-ddlog > -northd_ovn_northd_ddlog_SOURCES = northd/ovn-northd-ddlog.c > -nodist_northd_ovn_northd_ddlog_SOURCES = \ > - northd/ovn-northd-ddlog-sb.inc \ > - northd/ovn-northd-ddlog-nb.inc \ > - northd/ovn_northd_ddlog/ddlog.h > -northd_ovn_northd_ddlog_LDADD = \ > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ 
> - lib/libovn.la \ > - $(OVSDB_LIBDIR)/libovsdb.la \ > - $(OVS_LIBDIR)/libopenvswitch.la > - > -nb_opts = $$(cat $(srcdir)/northd/ovn-nb.dlopts) > -northd/OVN_Northbound.dl: ovn-nb.ovsschema northd/ovn-nb.dlopts > - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(nb_opts) > -northd/ovn-northd-ddlog-nb.inc: ovn-nb.ovsschema northd/ovn-nb.dlopts northd/ovsdb2ddlog2c > - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p nb_ -f $< --output-file $@ $(nb_opts) > - > -sb_opts = $$(cat $(srcdir)/northd/ovn-sb.dlopts) > -northd/OVN_Southbound.dl: ovn-sb.ovsschema northd/ovn-sb.dlopts > - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(sb_opts) > -northd/ovn-northd-ddlog-sb.inc: ovn-sb.ovsschema northd/ovn-sb.dlopts northd/ovsdb2ddlog2c > - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p sb_ -f $< --output-file $@ $(sb_opts) > - > -BUILT_SOURCES += \ > - northd/ovn-northd-ddlog-sb.inc \ > - northd/ovn-northd-ddlog-nb.inc \ > - northd/ovn_northd_ddlog/ddlog.h > - > -northd/ovn_northd_ddlog/ddlog.h: northd/ddlog.stamp > - > -CARGO_VERBOSE = $(cargo_verbose_$(V)) > -cargo_verbose_ = $(cargo_verbose_$(AM_DEFAULT_VERBOSITY)) > -cargo_verbose_0 = > -cargo_verbose_1 = --verbose > - > -DDLOGFLAGS = -L $(DDLOGLIBDIR) -L $(builddir)/northd $(DDLOG_EXTRA_FLAGS) > - > -RUSTFLAGS = \ > - -L ../../lib/.libs \ > - -L $(OVS_LIBDIR)/.libs \ > - $$LIBOPENVSWITCH_DEPS \ > - $$LIBOVN_DEPS \ > - -Awarnings $(DDLOG_EXTRA_RUSTFLAGS) > - > -northd/ddlog.stamp: $(ddlog_sources) $(ddlog_nodist_sources) > - $(AM_V_GEN)$(DDLOG) -i $< -o $(builddir)/northd $(DDLOGFLAGS) > - $(AM_V_at)touch $@ > - > -NORTHD_LIB = 1 > -NORTHD_CLI = 0 > - > -ddlog_targets = $(northd_lib_$(NORTHD_LIB)) $(northd_cli_$(NORTHD_CLI)) > -northd_lib_1 = northd/ovn_northd_ddlog/target/release/libovn_%_ddlog.la > -northd_cli_1 = northd/ovn_northd_ddlog/target/release/ovn_%_cli > -EXTRA_northd_ovn_northd_DEPENDENCIES = $(northd_cli_$(NORTHD_CLI)) > - > -cargo_build = 
$(cargo_build_$(NORTHD_LIB)$(NORTHD_CLI)) > -cargo_build_01 = --features command-line --bin ovn_northd_cli > -cargo_build_10 = --lib > -cargo_build_11 = --features command-line > - > -libtool_deps = $(srcdir)/build-aux/libtool-deps > -$(ddlog_targets): northd/ddlog.stamp lib/libovn.la $(OVS_LIBDIR)/libopenvswitch.la > - $(AM_V_GEN)LIBOVN_DEPS=`$(libtool_deps) lib/libovn.la` && \ > - LIBOPENVSWITCH_DEPS=`$(libtool_deps) $(OVS_LIBDIR)/libopenvswitch.la` && \ > - cd northd/ovn_northd_ddlog && \ > - RUSTC='$(RUSTC)' RUSTFLAGS="$(RUSTFLAGS)" \ > - cargo build --release $(CARGO_VERBOSE) $(cargo_build) --no-default-features --features ovsdb,c_api > -endif > - > -CLEAN_LOCAL += clean-ddlog > -clean-ddlog: > - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp > - > -CLEANFILES += \ > - northd/ddlog.stamp \ > - northd/ovn_northd_ddlog/ddlog.h \ > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.a \ > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli \ > - northd/OVN_Northbound.dl \ > - northd/OVN_Southbound.dl \ > - northd/ovn-northd-ddlog-nb.inc \ > - northd/ovn-northd-ddlog-sb.inc > diff --git a/northd/bitwise.dl b/northd/bitwise.dl > deleted file mode 100644 > index 877d155a23..0000000000 > --- a/northd/bitwise.dl > +++ /dev/null > @@ -1,272 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. > - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. 
> - */ > - > -/* > - * Returns true if and only if 'x' is a power of 2. > - */ > -function is_power_of_two(x: u8): bool { x != 0 and (x & (x - 1)) == 0 } > -function is_power_of_two(x: u16): bool { x != 0 and (x & (x - 1)) == 0 } > -function is_power_of_two(x: u32): bool { x != 0 and (x & (x - 1)) == 0 } > -function is_power_of_two(x: u64): bool { x != 0 and (x & (x - 1)) == 0 } > -function is_power_of_two(x: u128): bool { x != 0 and (x & (x - 1)) == 0 } > - > -/* > - * Returns the next power of 2 greater than 'x', or None if that's bigger than the > - * type's maximum value. > - */ > -function next_power_of_two(x: u8): Option<u8> { u8_next_power_of_two(x) } > -function next_power_of_two(x: u16): Option<u16> { u16_next_power_of_two(x) } > -function next_power_of_two(x: u32): Option<u32> { u32_next_power_of_two(x) } > -function next_power_of_two(x: u64): Option<u64> { u64_next_power_of_two(x) } > -function next_power_of_two(x: u128): Option<u128> { u128_next_power_of_two(x) } > - > -extern function u8_next_power_of_two(x: u8): Option<u8> > -extern function u16_next_power_of_two(x: u16): Option<u16> > -extern function u32_next_power_of_two(x: u32): Option<u32> > -extern function u64_next_power_of_two(x: u64): Option<u64> > -extern function u128_next_power_of_two(x: u128): Option<u128> > - > -/* > - * Returns the next power of 2 greater than 'x', or 0 if that's bigger than the > - * type's maximum value. 
> - */ > -function wrapping_next_power_of_two(x: u8): u8 { u8_wrapping_next_power_of_two(x) } > -function wrapping_next_power_of_two(x: u16): u16 { u16_wrapping_next_power_of_two(x) } > -function wrapping_next_power_of_two(x: u32): u32 { u32_wrapping_next_power_of_two(x) } > -function wrapping_next_power_of_two(x: u64): u64 { u64_wrapping_next_power_of_two(x) } > -function wrapping_next_power_of_two(x: u128): u128 { u128_wrapping_next_power_of_two(x) } > - > -extern function u8_wrapping_next_power_of_two(x: u8): u8 > -extern function u16_wrapping_next_power_of_two(x: u16): u16 > -extern function u32_wrapping_next_power_of_two(x: u32): u32 > -extern function u64_wrapping_next_power_of_two(x: u64): u64 > -extern function u128_wrapping_next_power_of_two(x: u128): u128 > - > -/* > - * Number of 1-bits in the binary representation of 'x'. > - */ > -function count_ones(x: u8): u32 { u8_count_ones(x) } > -function count_ones(x: u16): u32 { u16_count_ones(x) } > -function count_ones(x: u32): u32 { u32_count_ones(x) } > -function count_ones(x: u64): u32 { u64_count_ones(x) } > -function count_ones(x: u128): u32 { u128_count_ones(x) } > - > -extern function u8_count_ones(x: u8): u32 > -extern function u16_count_ones(x: u16): u32 > -extern function u32_count_ones(x: u32): u32 > -extern function u64_count_ones(x: u64): u32 > -extern function u128_count_ones(x: u128): u32 > - > -/* > - * Number of 0-bits in the binary representation of 'x'. 
> - */ > -function count_zeros(x: u8): u32 { u8_count_zeros(x) } > -function count_zeros(x: u16): u32 { u16_count_zeros(x) } > -function count_zeros(x: u32): u32 { u32_count_zeros(x) } > -function count_zeros(x: u64): u32 { u64_count_zeros(x) } > -function count_zeros(x: u128): u32 { u128_count_zeros(x) } > - > -extern function u8_count_zeros(x: u8): u32 > -extern function u16_count_zeros(x: u16): u32 > -extern function u32_count_zeros(x: u32): u32 > -extern function u64_count_zeros(x: u64): u32 > -extern function u128_count_zeros(x: u128): u32 > - > -/* > - * Number of leading 0-bits in the binary representation of 'x'. > - */ > -function leading_zeros(x: u8): u32 { u8_leading_zeros(x) } > -function leading_zeros(x: u16): u32 { u16_leading_zeros(x) } > -function leading_zeros(x: u32): u32 { u32_leading_zeros(x) } > -function leading_zeros(x: u64): u32 { u64_leading_zeros(x) } > -function leading_zeros(x: u128): u32 { u128_leading_zeros(x) } > - > -extern function u8_leading_zeros(x: u8): u32 > -extern function u16_leading_zeros(x: u16): u32 > -extern function u32_leading_zeros(x: u32): u32 > -extern function u64_leading_zeros(x: u64): u32 > -extern function u128_leading_zeros(x: u128): u32 > - > -/* > - * Number of leading 1-bits in the binary representation of 'x'. > - */ > -function leading_ones(x: u8): u32 { u8_leading_ones(x) } > -function leading_ones(x: u16): u32 { u16_leading_ones(x) } > -function leading_ones(x: u32): u32 { u32_leading_ones(x) } > -function leading_ones(x: u64): u32 { u64_leading_ones(x) } > -function leading_ones(x: u128): u32 { u128_leading_ones(x) } > - > -extern function u8_leading_ones(x: u8): u32 > -extern function u16_leading_ones(x: u16): u32 > -extern function u32_leading_ones(x: u32): u32 > -extern function u64_leading_ones(x: u64): u32 > -extern function u128_leading_ones(x: u128): u32 > - > -/* > - * Number of trailing 0-bits in the binary representation of 'x'. 
> - */ > -function trailing_zeros(x: u8): u32 { u8_trailing_zeros(x) } > -function trailing_zeros(x: u16): u32 { u16_trailing_zeros(x) } > -function trailing_zeros(x: u32): u32 { u32_trailing_zeros(x) } > -function trailing_zeros(x: u64): u32 { u64_trailing_zeros(x) } > -function trailing_zeros(x: u128): u32 { u128_trailing_zeros(x) } > - > -extern function u8_trailing_zeros(x: u8): u32 > -extern function u16_trailing_zeros(x: u16): u32 > -extern function u32_trailing_zeros(x: u32): u32 > -extern function u64_trailing_zeros(x: u64): u32 > -extern function u128_trailing_zeros(x: u128): u32 > - > -/* > - * Number of trailing 1-bits in the binary representation of 'x'. > - */ > -function trailing_ones(x: u8): u32 { u8_trailing_ones(x) } > -function trailing_ones(x: u16): u32 { u16_trailing_ones(x) } > -function trailing_ones(x: u32): u32 { u32_trailing_ones(x) } > -function trailing_ones(x: u64): u32 { u64_trailing_ones(x) } > -function trailing_ones(x: u128): u32 { u128_trailing_ones(x) } > - > -extern function u8_trailing_ones(x: u8): u32 > -extern function u16_trailing_ones(x: u16): u32 > -extern function u32_trailing_ones(x: u32): u32 > -extern function u64_trailing_ones(x: u64): u32 > -extern function u128_trailing_ones(x: u128): u32 > - > -/* > - * Reverses the order of bits in 'x'. > - */ > -function reverse_bits(x: u8): u8 { u8_reverse_bits(x) } > -function reverse_bits(x: u16): u16 { u16_reverse_bits(x) } > -function reverse_bits(x: u32): u32 { u32_reverse_bits(x) } > -function reverse_bits(x: u64): u64 { u64_reverse_bits(x) } > -function reverse_bits(x: u128): u128 { u128_reverse_bits(x) } > - > -extern function u8_reverse_bits(x: u8): u8 > -extern function u16_reverse_bits(x: u16): u16 > -extern function u32_reverse_bits(x: u32): u32 > -extern function u64_reverse_bits(x: u64): u64 > -extern function u128_reverse_bits(x: u128): u128 > - > -/* > - * Reverses the order of bytes in 'x'.
> - */ > -function swap_bytes(x: u8): u8 { u8_swap_bytes(x) } > -function swap_bytes(x: u16): u16 { u16_swap_bytes(x) } > -function swap_bytes(x: u32): u32 { u32_swap_bytes(x) } > -function swap_bytes(x: u64): u64 { u64_swap_bytes(x) } > -function swap_bytes(x: u128): u128 { u128_swap_bytes(x) } > - > -extern function u8_swap_bytes(x: u8): u8 > -extern function u16_swap_bytes(x: u16): u16 > -extern function u32_swap_bytes(x: u32): u32 > -extern function u64_swap_bytes(x: u64): u64 > -extern function u128_swap_bytes(x: u128): u128 > - > -/* > - * Converts 'x' from big endian to the machine's native endianness. > - * On a big-endian machine it is a no-op. > - * On a little-endian machine it is equivalent to swap_bytes(). > - */ > -function from_be(x: u8): u8 { u8_from_be(x) } > -function from_be(x: u16): u16 { u16_from_be(x) } > -function from_be(x: u32): u32 { u32_from_be(x) } > -function from_be(x: u64): u64 { u64_from_be(x) } > -function from_be(x: u128): u128 { u128_from_be(x) } > - > -extern function u8_from_be(x: u8): u8 > -extern function u16_from_be(x: u16): u16 > -extern function u32_from_be(x: u32): u32 > -extern function u64_from_be(x: u64): u64 > -extern function u128_from_be(x: u128): u128 > - > -/* > - * Converts 'x' from the machine's native endianness to big endian. > - * On a big-endian machine it is a no-op. > - * On a little-endian machine it is equivalent to swap_bytes(). > - */ > -function to_be(x: u8): u8 { u8_to_be(x) } > -function to_be(x: u16): u16 { u16_to_be(x) } > -function to_be(x: u32): u32 { u32_to_be(x) } > -function to_be(x: u64): u64 { u64_to_be(x) } > -function to_be(x: u128): u128 { u128_to_be(x) } > - > -extern function u8_to_be(x: u8): u8 > -extern function u16_to_be(x: u16): u16 > -extern function u32_to_be(x: u32): u32 > -extern function u64_to_be(x: u64): u64 > -extern function u128_to_be(x: u128): u128 > - > -/* > - * Converts 'x' from little endian to the machine's native endianness.
> - * On a little-endian machine it is a no-op. > - * On a big-endian machine it is equivalent to swap_bytes(). > - */ > -function from_le(x: u8): u8 { u8_from_le(x) } > -function from_le(x: u16): u16 { u16_from_le(x) } > -function from_le(x: u32): u32 { u32_from_le(x) } > -function from_le(x: u64): u64 { u64_from_le(x) } > -function from_le(x: u128): u128 { u128_from_le(x) } > - > -extern function u8_from_le(x: u8): u8 > -extern function u16_from_le(x: u16): u16 > -extern function u32_from_le(x: u32): u32 > -extern function u64_from_le(x: u64): u64 > -extern function u128_from_le(x: u128): u128 > - > -/* > - * Converts 'x' from the machine's native endianness to little endian. > - * On a little-endian machine it is a no-op. > - * On a big-endian machine it is equivalent to swap_bytes(). > - */ > -function to_le(x: u8): u8 { u8_to_le(x) } > -function to_le(x: u16): u16 { u16_to_le(x) } > -function to_le(x: u32): u32 { u32_to_le(x) } > -function to_le(x: u64): u64 { u64_to_le(x) } > -function to_le(x: u128): u128 { u128_to_le(x) } > - > -extern function u8_to_le(x: u8): u8 > -extern function u16_to_le(x: u16): u16 > -extern function u32_to_le(x: u32): u32 > -extern function u64_to_le(x: u64): u64 > -extern function u128_to_le(x: u128): u128 > - > -/* > - * Rotates the bits in 'x' left by 'n' positions. 
> - */ > -function rotate_left(x: u8, n: u32): u8 { u8_rotate_left(x, n) } > -function rotate_left(x: u16, n: u32): u16 { u16_rotate_left(x, n) } > -function rotate_left(x: u32, n: u32): u32 { u32_rotate_left(x, n) } > -function rotate_left(x: u64, n: u32): u64 { u64_rotate_left(x, n) } > -function rotate_left(x: u128, n: u32): u128 { u128_rotate_left(x, n) } > - > -extern function u8_rotate_left(x: u8, n: u32): u8 > -extern function u16_rotate_left(x: u16, n: u32): u16 > -extern function u32_rotate_left(x: u32, n: u32): u32 > -extern function u64_rotate_left(x: u64, n: u32): u64 > -extern function u128_rotate_left(x: u128, n: u32): u128 > - > -/* > - * Rotates the bits in 'x' right by 'n' positions. > - */ > -function rotate_right(x: u8, n: u32): u8 { u8_rotate_right(x, n) } > -function rotate_right(x: u16, n: u32): u16 { u16_rotate_right(x, n) } > -function rotate_right(x: u32, n: u32): u32 { u32_rotate_right(x, n) } > -function rotate_right(x: u64, n: u32): u64 { u64_rotate_right(x, n) } > -function rotate_right(x: u128, n: u32): u128 { u128_rotate_right(x, n) } > - > -extern function u8_rotate_right(x: u8, n: u32): u8 > -extern function u16_rotate_right(x: u16, n: u32): u16 > -extern function u32_rotate_right(x: u32, n: u32): u32 > -extern function u64_rotate_right(x: u64, n: u32): u64 > -extern function u128_rotate_right(x: u128, n: u32): u128 > diff --git a/northd/bitwise.rs b/northd/bitwise.rs > deleted file mode 100644 > index 97c0ecfa36..0000000000 > --- a/northd/bitwise.rs > +++ /dev/null > @@ -1,133 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. 
> - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. > - */ > - > -use ddlog_std::option2std; > - > -pub fn u8_next_power_of_two(x: &u8) -> ddlog_std::Option<u8> { > - option2std(x.checked_next_power_of_two()) > -} > -pub fn u16_next_power_of_two(x: &u16) -> ddlog_std::Option<u16> { > - option2std(x.checked_next_power_of_two()) > -} > -pub fn u32_next_power_of_two(x: &u32) -> ddlog_std::Option<u32> { > - option2std(x.checked_next_power_of_two()) > -} > -pub fn u64_next_power_of_two(x: &u64) -> ddlog_std::Option<u64> { > - option2std(x.checked_next_power_of_two()) > -} > -pub fn u128_next_power_of_two(x: &u128) -> ddlog_std::Option<u128> { > - option2std(x.checked_next_power_of_two()) > -} > - > -// Rust has wrapping_next_power_of_two() in nightly. We implement it > -// ourselves to avoid the dependency. 
> -pub fn u8_wrapping_next_power_of_two(x: &u8) -> u8 { > - x.checked_next_power_of_two().unwrap_or(0) > -} > -pub fn u16_wrapping_next_power_of_two(x: &u16) -> u16 { > - x.checked_next_power_of_two().unwrap_or(0) > -} > -pub fn u32_wrapping_next_power_of_two(x: &u32) -> u32 { > - x.checked_next_power_of_two().unwrap_or(0) > -} > -pub fn u64_wrapping_next_power_of_two(x: &u64) -> u64 { > - x.checked_next_power_of_two().unwrap_or(0) > -} > -pub fn u128_wrapping_next_power_of_two(x: &u128) -> u128 { > - x.checked_next_power_of_two().unwrap_or(0) > -} > - > -pub fn u8_count_ones(x: &u8) -> u32 { x.count_ones() } > -pub fn u16_count_ones(x: &u16) -> u32 { x.count_ones() } > -pub fn u32_count_ones(x: &u32) -> u32 { x.count_ones() } > -pub fn u64_count_ones(x: &u64) -> u32 { x.count_ones() } > -pub fn u128_count_ones(x: &u128) -> u32 { x.count_ones() } > - > -pub fn u8_count_zeros(x: &u8) -> u32 { x.count_zeros() } > -pub fn u16_count_zeros(x: &u16) -> u32 { x.count_zeros() } > -pub fn u32_count_zeros(x: &u32) -> u32 { x.count_zeros() } > -pub fn u64_count_zeros(x: &u64) -> u32 { x.count_zeros() } > -pub fn u128_count_zeros(x: &u128) -> u32 { x.count_zeros() } > - > -pub fn u8_leading_ones(x: &u8) -> u32 { x.leading_ones() } > -pub fn u16_leading_ones(x: &u16) -> u32 { x.leading_ones() } > -pub fn u32_leading_ones(x: &u32) -> u32 { x.leading_ones() } > -pub fn u64_leading_ones(x: &u64) -> u32 { x.leading_ones() } > -pub fn u128_leading_ones(x: &u128) -> u32 { x.leading_ones() } > - > -pub fn u8_leading_zeros(x: &u8) -> u32 { x.leading_zeros() } > -pub fn u16_leading_zeros(x: &u16) -> u32 { x.leading_zeros() } > -pub fn u32_leading_zeros(x: &u32) -> u32 { x.leading_zeros() } > -pub fn u64_leading_zeros(x: &u64) -> u32 { x.leading_zeros() } > -pub fn u128_leading_zeros(x: &u128) -> u32 { x.leading_zeros() } > - > -pub fn u8_trailing_ones(x: &u8) -> u32 { x.trailing_ones() } > -pub fn u16_trailing_ones(x: &u16) -> u32 { x.trailing_ones() } > -pub fn u32_trailing_ones(x: 
&u32) -> u32 { x.trailing_ones() } > -pub fn u64_trailing_ones(x: &u64) -> u32 { x.trailing_ones() } > -pub fn u128_trailing_ones(x: &u128) -> u32 { x.trailing_ones() } > - > -pub fn u8_trailing_zeros(x: &u8) -> u32 { x.trailing_zeros() } > -pub fn u16_trailing_zeros(x: &u16) -> u32 { x.trailing_zeros() } > -pub fn u32_trailing_zeros(x: &u32) -> u32 { x.trailing_zeros() } > -pub fn u64_trailing_zeros(x: &u64) -> u32 { x.trailing_zeros() } > -pub fn u128_trailing_zeros(x: &u128) -> u32 { x.trailing_zeros() } > - > -pub fn u8_reverse_bits(x: &u8) -> u8 { x.reverse_bits() } > -pub fn u16_reverse_bits(x: &u16) -> u16 { x.reverse_bits() } > -pub fn u32_reverse_bits(x: &u32) -> u32 { x.reverse_bits() } > -pub fn u64_reverse_bits(x: &u64) -> u64 { x.reverse_bits() } > -pub fn u128_reverse_bits(x: &u128) -> u128 { x.reverse_bits() } > - > -pub fn u8_swap_bytes(x: &u8) -> u8 { x.swap_bytes() } > -pub fn u16_swap_bytes(x: &u16) -> u16 { x.swap_bytes() } > -pub fn u32_swap_bytes(x: &u32) -> u32 { x.swap_bytes() } > -pub fn u64_swap_bytes(x: &u64) -> u64 { x.swap_bytes() } > -pub fn u128_swap_bytes(x: &u128) -> u128 { x.swap_bytes() } > - > -pub fn u8_from_be(x: &u8) -> u8 { u8::from_be(*x) } > -pub fn u16_from_be(x: &u16) -> u16 { u16::from_be(*x) } > -pub fn u32_from_be(x: &u32) -> u32 { u32::from_be(*x) } > -pub fn u64_from_be(x: &u64) -> u64 { u64::from_be(*x) } > -pub fn u128_from_be(x: &u128) -> u128 { u128::from_be(*x) } > - > -pub fn u8_to_be(x: &u8) -> u8 { x.to_be() } > -pub fn u16_to_be(x: &u16) -> u16 { x.to_be() } > -pub fn u32_to_be(x: &u32) -> u32 { x.to_be() } > -pub fn u64_to_be(x: &u64) -> u64 { x.to_be() } > -pub fn u128_to_be(x: &u128) -> u128 { x.to_be() } > - > -pub fn u8_from_le(x: &u8) -> u8 { u8::from_le(*x) } > -pub fn u16_from_le(x: &u16) -> u16 { u16::from_le(*x) } > -pub fn u32_from_le(x: &u32) -> u32 { u32::from_le(*x) } > -pub fn u64_from_le(x: &u64) -> u64 { u64::from_le(*x) } > -pub fn u128_from_le(x: &u128) -> u128 { u128::from_le(*x) } > - > 
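[Side note on the bitwise.rs helpers above: `wrapping_next_power_of_two()` is still nightly-only on current Rust, so the `unwrap_or(0)` pattern the file used remains the usual stable stand-in. A minimal sketch of the same idea, for one width:]

```rust
// Sketch: wrapping next-power-of-two on stable Rust, mirroring the
// pattern the removed bitwise.rs used for every unsigned width.
pub fn u8_wrapping_next_power_of_two(x: u8) -> u8 {
    // checked_next_power_of_two() returns None when no power of two
    // fits in the type; wrapping semantics map that overflow to 0.
    x.checked_next_power_of_two().unwrap_or(0)
}
```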
-pub fn u8_to_le(x: &u8) -> u8 { x.to_le() } > -pub fn u16_to_le(x: &u16) -> u16 { x.to_le() } > -pub fn u32_to_le(x: &u32) -> u32 { x.to_le() } > -pub fn u64_to_le(x: &u64) -> u64 { x.to_le() } > -pub fn u128_to_le(x: &u128) -> u128 { x.to_le() } > - > -pub fn u8_rotate_left(x: &u8, n: &u32) -> u8 { x.rotate_left(*n) } > -pub fn u16_rotate_left(x: &u16, n: &u32) -> u16 { x.rotate_left(*n) } > -pub fn u32_rotate_left(x: &u32, n: &u32) -> u32 { x.rotate_left(*n) } > -pub fn u64_rotate_left(x: &u64, n: &u32) -> u64 { x.rotate_left(*n) } > -pub fn u128_rotate_left(x: &u128, n: &u32) -> u128 { x.rotate_left(*n) } > - > -pub fn u8_rotate_right(x: &u8, n: &u32) -> u8 { x.rotate_right(*n) } > -pub fn u16_rotate_right(x: &u16, n: &u32) -> u16 { x.rotate_right(*n) } > -pub fn u32_rotate_right(x: &u32, n: &u32) -> u32 { x.rotate_right(*n) } > -pub fn u64_rotate_right(x: &u64, n: &u32) -> u64 { x.rotate_right(*n) } > -pub fn u128_rotate_right(x: &u128, n: &u32) -> u128 { x.rotate_right(*n) } > diff --git a/northd/copp.dl b/northd/copp.dl > deleted file mode 100644 > index c4f3b7e70c..0000000000 > --- a/northd/copp.dl > +++ /dev/null > @@ -1,30 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. > - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. 
> - */ > - > -function cOPP_ARP() : istring { i"arp" } > -function cOPP_ARP_RESOLVE() : istring { i"arp-resolve" } > -function cOPP_DHCPV4_OPTS() : istring { i"dhcpv4-opts" } > -function cOPP_DHCPV6_OPTS() : istring { i"dhcpv6-opts" } > -function cOPP_DNS() : istring { i"dns" } > -function cOPP_EVENT_ELB() : istring { i"event-elb" } > -function cOPP_ICMP4_ERR() : istring { i"icmp4-error" } > -function cOPP_ICMP6_ERR() : istring { i"icmp6-error" } > -function cOPP_IGMP() : istring { i"igmp" } > -function cOPP_ND_NA() : istring { i"nd-na" } > -function cOPP_ND_NS() : istring { i"nd-ns" } > -function cOPP_ND_NS_RESOLVE() : istring { i"nd-ns-resolve" } > -function cOPP_ND_RA_OPTS() : istring { i"nd-ra-opts" } > -function cOPP_TCP_RESET() : istring { i"tcp-reset" } > -function cOPP_REJECT() : istring { i"reject" } > -function cOPP_BFD() : istring { i"bfd" } > diff --git a/northd/helpers.dl b/northd/helpers.dl > deleted file mode 100644 > index 50e137d99e..0000000000 > --- a/northd/helpers.dl > +++ /dev/null > @@ -1,60 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. > - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. 
> - */ > - > -import OVN_Northbound as nb > -import OVN_Southbound as sb > -import ovsdb > -import ovn > - > - > -output relation Warning[string] > - > -/* Switch-to-router logical port connections */ > -relation SwitchRouterPeer(lsp: uuid, lsp_name: istring, lrp: uuid) > -SwitchRouterPeer(lsp, lsp_name, lrp) :- > - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = i"router", .options = options), > - Some{var router_port} = options.get(i"router-port"), > - &nb::Logical_Router_Port(.name = router_port, ._uuid = lrp). > - > -function get_bool_def(m: Map<istring, istring>, k: istring, def: bool): bool = { > - m.get(k) > - .and_then(|x| match (x.to_lowercase()) { > - "false" -> Some{false}, > - "true" -> Some{true}, > - _ -> None > - }) > - .unwrap_or(def) > -} > - > -function get_int_def(m: Map<istring, istring>, k: istring, def: integer): integer = { > - m.get(k).and_then(|v| v.ival().parse_dec_u64()).unwrap_or(def) > -} > - > -function clamp(x: 'A, range: ('A, 'A)): 'A { > - (var min, var max) = range; > - if (x < min) { > - min > - } else if (x > max) { > - max > - } else { > - x > - } > -} > - > -function ha_chassis_group_uuid(uuid: uuid): uuid { hash128("hacg" ++ uuid) } > -function ha_chassis_uuid(chassis_name: string, nb_chassis_uuid: uuid): uuid { hash128("hac" ++ chassis_name ++ nb_chassis_uuid) } > - > -/* Dummy relation with one empty row, useful for putting into antijoins. */ > -relation Unit() > -Unit(). > diff --git a/northd/ipam.dl b/northd/ipam.dl > deleted file mode 100644 > index 600c55f5c8..0000000000 > --- a/northd/ipam.dl > +++ /dev/null > @@ -1,499 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. 
> - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. > - */ > - > -/* > - * IPAM (IP address management) and MACAM (MAC address management) > - * > - * IPAM generally stands for IP address management. In non-virtualized > - * world, MAC addresses come with the hardware. But, with virtualized > - * workloads, they need to be assigned and managed. This function > - * does both IP address management (ipam) and MAC address management > - * (macam). > - */ > - > -import OVN_Northbound as nb > -import ovsdb > -import allocate > -import helpers > -import ovn > -import lswitch > -import lrouter > - > -function mAC_ADDR_SPACE(): bit<48> = 48'hffffff > - > -/* > - * IPv4 dynamic address allocation. > - */ > - > -/* > - * The fixed portions of a request for a dynamic LSP address. 
> - */ > -typedef dynamic_address_request = DynamicAddressRequest{ > - mac: Option<eth_addr>, > - ip4: Option<in_addr>, > - ip6: Option<in6_addr> > -} > -function parse_dynamic_address_request(s: string): Option<dynamic_address_request> { > - var tokens = string_split(s, " "); > - var n = tokens.len(); > - if (n < 1 or n > 3) { > - return None > - }; > - > - var t0 = tokens.nth(0).unwrap_or(""); > - var t1 = tokens.nth(1).unwrap_or(""); > - var t2 = tokens.nth(2).unwrap_or(""); > - if (t0 == "dynamic") { > - if (n == 1) { > - Some{DynamicAddressRequest{None, None, None}} > - } else if (n == 2) { > - match (ip46_parse(t1)) { > - Some{IPv4{ipv4}} -> Some{DynamicAddressRequest{None, Some{ipv4}, None}}, > - Some{IPv6{ipv6}} -> Some{DynamicAddressRequest{None, None, Some{ipv6}}}, > - _ -> None > - } > - } else if (n == 3) { > - match ((ip_parse(t1), ipv6_parse(t2))) { > - (Some{ipv4}, Some{ipv6}) -> Some{DynamicAddressRequest{None, Some{ipv4}, Some{ipv6}}}, > - _ -> None > - } > - } else { > - None > - } > - } else if (n == 2 and t1 == "dynamic") { > - match (eth_addr_from_string(t0)) { > - Some{mac} -> Some{DynamicAddressRequest{Some{mac}, None, None}}, > - _ -> None > - } > - } else { > - None > - } > -} > - > -/* SwitchIPv4ReservedAddress - keeps track of statically reserved IPv4 addresses > - * for each switch whose subnet option is set, including: > - * (1) first and last (multicast) address in the subnet range > - * (2) addresses from `other_config.exclude_ips` > - * (3) port addresses in lsp.addresses, except "unknown" addresses, addresses of > - * "router" ports, dynamic addresses > - * (4) addresses associated with router ports peered with the switch. > - * (5) static IP component of "dynamic" `lsp.addresses`. > - * > - * Addresses are kept in host-endian format (i.e., bit<32> vs in_addr). > - */ > -relation SwitchIPv4ReservedAddress(lswitch: uuid, addr: bit<32>) > - > -/* Add reserved address groups (1) and (2). 
*/ > -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, > - .addr = addr) :- > - sw in &Switch(.subnet = Some{(_, _, start_ipv4, total_ipv4s)}), > - var exclude_ips = { > - var exclude_ips = set_singleton(start_ipv4); > - exclude_ips.insert(start_ipv4 + total_ipv4s - 1); > - match (map_get(sw.other_config, i"exclude_ips")) { > - None -> exclude_ips, > - Some{exclude_ip_list} -> match (parse_ip_list(exclude_ip_list.ival())) { > - Left{err} -> { > - warn("logical switch ${uuid2str(sw._uuid)}: bad exclude_ips (${err})"); > - exclude_ips > - }, > - Right{ranges} -> { > - for (rng in ranges) { > - (var ip_start, var ip_end) = rng; > - var start = ip_start.a; > - var end = match (ip_end) { > - None -> start, > - Some{ip} -> ip.a > - }; > - start = max(start_ipv4, start); > - end = min(start_ipv4 + total_ipv4s - 1, end); > - if (end >= start) { > - for (addr in range_vec(start, end+1, 1)) { > - exclude_ips.insert(addr) > - } > - } else { > - warn("logical switch ${uuid2str(sw._uuid)}: excluded addresses not in subnet") > - } > - }; > - exclude_ips > - } > - } > - } > - }, > - var addr = FlatMap(exclude_ips). > - > -/* Add reserved address group (3). */ > -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, > - .addr = addr) :- > - SwitchPortStaticAddresses( > - .port = &SwitchPort{ > - .sw = &Switch{._uuid = ls_uuid, > - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, > - .peer = None}, > - .addrs = lport_addrs > - ), > - var addrs = { > - var addrs = set_empty(); > - for (addr in lport_addrs.ipv4_addrs) { > - var addr_host_endian = addr.addr.a; > - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { > - addrs.insert(addr_host_endian) > - } else () > - }; > - addrs > - }, > - var addr = FlatMap(addrs). 
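[For readers following the rule above: each `exclude_ips` range is intersected with the subnet's address span before being reserved, and a range that falls entirely outside the subnet triggers the warning instead. A standalone sketch of that clamping step (the function name and host-endian u32 representation are mine, matching how the rule stores addresses):]

```rust
// Sketch: clamp an exclude_ips range to the switch subnet, as the
// DDlog rule above does. Addresses are host-endian IPv4 values.
// Returns None when the range lies entirely outside the subnet.
fn clamp_exclude_range(
    start_ipv4: u32,
    total_ipv4s: u32,
    range: (u32, u32),
) -> Option<(u32, u32)> {
    let subnet_last = start_ipv4 + total_ipv4s - 1;
    let lo = range.0.max(start_ipv4);
    let hi = range.1.min(subnet_last);
    if hi >= lo { Some((lo, hi)) } else { None }
}
```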
> - > -/* Add reserved address group (4) */ > -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, > - .addr = addr) :- > - &SwitchPort( > - .sw = &Switch{._uuid = ls_uuid, > - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, > - .peer = Some{rport}), > - var addrs = { > - var addrs = set_empty(); > - for (addr in rport.networks.ipv4_addrs) { > - var addr_host_endian = addr.addr.a; > - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { > - addrs.insert(addr_host_endian) > - } else () > - }; > - addrs > - }, > - var addr = FlatMap(addrs). > - > -/* Add reserved address group (5) */ > -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, > - .addr = ip_addr.a) :- > - &SwitchPort(.sw = sw, .lsp = lsp, .static_dynamic_ipv4 = Some{ip_addr}). > - > -/* Aggregate all reserved addresses for each switch. */ > -relation SwitchIPv4ReservedAddresses(lswitch: uuid, addrs: Set<bit<32>>) > - > -SwitchIPv4ReservedAddresses(lswitch, addrs) :- > - SwitchIPv4ReservedAddress(lswitch, addr), > - var addrs = addr.group_by(lswitch).to_set(). > - > -SwitchIPv4ReservedAddresses(lswitch_uuid, set_empty()) :- > - &nb::Logical_Switch(._uuid = lswitch_uuid), > - not SwitchIPv4ReservedAddress(lswitch_uuid, _). > - > -/* Allocate dynamic IP addresses for ports that require them: > - */ > -relation SwitchPortAllocatedIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) > - > -SwitchPortAllocatedIPv4DynAddress(lsport, dyn_addr) :- > - /* Aggregate all ports of a switch that need a dynamic IP address */ > - port in &SwitchPort(.needs_dynamic_ipv4address = true, > - .sw = sw), > - var switch_id = sw._uuid, > - var ports = port.group_by(switch_id).to_vec(), > - SwitchIPv4ReservedAddresses(switch_id, reserved_addrs), > - /* Allocate dynamic addresses only for ports that don't have a dynamic address > - * or have one that is no longer valid. 
*/ > - var dyn_addresses = { > - var used_addrs = reserved_addrs; > - var assigned_addrs = vec_empty(); > - var need_addr = vec_empty(); > - (var start_ipv4, var total_ipv4s) = match (ports.nth(0)) { > - None -> { (0, 0) } /* no ports with dynamic addresses */, > - Some{port0} -> { > - match (port0.sw.subnet) { > - None -> { > - abort("needs_dynamic_ipv4address is true, but subnet is undefined in port ${uuid2str(port0.lsp._uuid)}"); > - (0, 0) > - }, > - Some{(_, _, start_ipv4, total_ipv4s)} -> (start_ipv4, total_ipv4s) > - } > - } > - }; > - for (port in ports) { > - //warn("port(${port.lsp._uuid})"); > - match (port.dynamic_address) { > - None -> { > - /* no dynamic address yet -- allocate one now */ > - //warn("need_addr(${port.lsp._uuid})"); > - need_addr.push(port.lsp._uuid) > - }, > - Some{dynaddr} -> { > - match (dynaddr.ipv4_addrs.nth(0)) { > - None -> { > - /* dynamic address does not have IPv4 component -- allocate one now */ > - //warn("need_addr(${port.lsp._uuid})"); > - need_addr.push(port.lsp._uuid) > - }, > - Some{addr} -> { > - var haddr = addr.addr.a; > - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { > - need_addr.push(port.lsp._uuid) > - } else if (used_addrs.contains(haddr)) { > - need_addr.push(port.lsp._uuid); > - warn("Duplicate IP set on switch ${port.lsp.name}: ${addr.addr}") > - } else { > - /* has valid dynamic address -- record it in used_addrs */ > - used_addrs.insert(haddr); > - assigned_addrs.push((port.lsp._uuid, Some{haddr})) > - } > - } > - } > - } > - } > - }; > - assigned_addrs.append(allocate_opt(used_addrs, need_addr, start_ipv4, start_ipv4 + total_ipv4s - 1)); > - assigned_addrs > - }, > - var port_address = FlatMap(dyn_addresses), > - (var lsport, var dyn_addr_bits) = port_address, > - var dyn_addr = dyn_addr_bits.map(|x| InAddr{x}). 
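[The core decision in the allocation rule above is: keep an existing dynamic address only if it is inside the subnet's range and not already claimed, otherwise queue the port for a fresh allocation. A compact sketch of that keep-or-reallocate test (function name is mine, not from the patch):]

```rust
use std::collections::HashSet;

// Sketch of the keep-or-reallocate decision in the rule above: an
// existing dynamic address survives only if it falls inside the
// subnet's dynamic range and is not already claimed by another port.
fn keep_existing(
    used: &mut HashSet<u32>,
    existing: Option<u32>,
    start_ipv4: u32,
    total_ipv4s: u32,
) -> bool {
    match existing {
        Some(addr)
            if addr >= start_ipv4
                && addr < start_ipv4 + total_ipv4s
                && !used.contains(&addr) =>
        {
            used.insert(addr); // record it so later ports see it as taken
            true
        }
        // No address, out of range, or duplicate: port needs a new one.
        _ => false,
    }
}
```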
> - > -/* Compute new dynamic IPv4 address assignment: > - * - port does not need dynamic IP - use static_dynamic_ip if any > - * - a new address has been allocated for port - use this address > - * - otherwise, use existing dynamic IP > - */ > -relation SwitchPortNewIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) > - > -SwitchPortNewIPv4DynAddress(lsp._uuid, ip_addr) :- > - &SwitchPort(.sw = sw, > - .needs_dynamic_ipv4address = false, > - .static_dynamic_ipv4 = static_dynamic_ipv4, > - .lsp = lsp), > - var ip_addr = { > - match (static_dynamic_ipv4) { > - None -> { None }, > - Some{addr} -> { > - match (sw.subnet) { > - None -> { None }, > - Some{(_, _, start_ipv4, total_ipv4s)} -> { > - var haddr = addr.a; > - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { > - /* new static ip is not valid */ > - None > - } else { > - Some{addr} > - } > - } > - } > - } > - } > - }. > - > -SwitchPortNewIPv4DynAddress(lsport, addr) :- > - SwitchPortAllocatedIPv4DynAddress(lsport, addr). > - > -/* > - * Dynamic MAC address allocation. > - */ > - > -function get_mac_prefix(options: Map<istring,istring>, uuid: uuid) : bit<48> > -{ > - match (map_get(options, i"mac_prefix").and_then(|pref| pref.ival().scan_eth_addr_prefix())) { > - Some{prefix} -> prefix.ha, > - None -> eth_addr_pseudorandom(uuid, 16'h1234).ha & 48'hffffff000000 > - } > -} > -function put_mac_prefix(options: mut Map<istring,istring>, mac_prefix: bit<48>) > -{ > - map_insert(options, i"mac_prefix", > - string_substr(to_string(EthAddr{mac_prefix}), 0, 8).intern()) > -} > -relation MacPrefix(mac_prefix: bit<48>) > -MacPrefix(get_mac_prefix(options, uuid)) :- > - nb::NB_Global(._uuid = uuid, .options = options). > - > -/* ReservedMACAddress - keeps track of statically reserved MAC addresses. > - * (1) static addresses in `lsp.addresses` > - * (2) static MAC component of "dynamic" `lsp.addresses`. > - * (3) addresses associated with router ports peered with the switch. 
> - * > - * Addresses are kept in host-endian format. > - */ > -relation ReservedMACAddress(addr: bit<48>) > - > -/* Add reserved address group (1). */ > -ReservedMACAddress(.addr = lport_addrs.ea.ha) :- > - SwitchPortStaticAddresses(.addrs = lport_addrs). > - > -/* Add reserved address group (2). */ > -ReservedMACAddress(.addr = mac_addr.ha) :- > - &SwitchPort(.lsp = lsp, .static_dynamic_mac = Some{mac_addr}). > - > -/* Add reserved address group (3). */ > -ReservedMACAddress(.addr = rport.networks.ea.ha) :- > - &SwitchPort(.peer = Some{rport}). > - > -/* Aggregate all reserved MAC addresses. */ > -relation ReservedMACAddresses(addrs: Set<bit<48>>) > - > -ReservedMACAddresses(addrs) :- > - ReservedMACAddress(addr), > - var addrs = addr.group_by(()).to_set(). > - > -/* Handle case when `ReservedMACAddress` is empty */ > -ReservedMACAddresses(set_empty()) :- > - // NB_Global should have exactly one record, so we can > - // use it as a base for antijoin. > - nb::NB_Global(), > - not ReservedMACAddress(_). > - > -/* Allocate dynamic MAC addresses for ports that require them: > - * Case 1: port doesn't need dynamic MAC (i.e., does not have dynamic address or > - * has a dynamic address with a static MAC). 
> - * Case 2: needs dynamic MAC, has dynamic MAC, has existing dynamic MAC with the right prefix > - * needs dynamic MAC, does not have fixed dynamic MAC, doesn't have existing dynamic MAC with correct prefix > - */ > -relation SwitchPortAllocatedMACDynAddress(lsport: uuid, dyn_addr: bit<48>) > - > -SwitchPortAllocatedMACDynAddress(lsport, dyn_addr), > -SwitchPortDuplicateMACAddress(dup_addrs) :- > - /* Group all ports that need a dynamic IP address */ > - port in &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp), > - SwitchPortNewIPv4DynAddress(lsp._uuid, ipv4_addr), > - var ports = (port, ipv4_addr).group_by(()).to_vec(), > - ReservedMACAddresses(reserved_addrs), > - MacPrefix(mac_prefix), > - (var dyn_addresses, var dup_addrs) = { > - var used_addrs = reserved_addrs; > - var need_addr = vec_empty(); > - var dup_addrs = set_empty(); > - for (port_with_addr in ports) { > - (var port, var ipv4_addr) = port_with_addr; > - var hint = match (ipv4_addr) { > - None -> Some { mac_prefix | 1 }, > - Some{addr} -> { > - /* The tentative MAC's suffix will be in the interval (1, 0xfffffe). */ > - var mac_suffix: bit<24> = addr.a[23:0] % ((mAC_ADDR_SPACE() - 1)[23:0]) + 1; > - Some{ mac_prefix | (24'd0 ++ mac_suffix) } > - } > - }; > - match (port.dynamic_address) { > - None -> { > - /* no dynamic address yet -- allocate one now */ > - need_addr.push((port.lsp._uuid, hint)) > - }, > - Some{dynaddr} -> { > - var haddr = dynaddr.ea.ha; > - if ((haddr ^ mac_prefix) >> 24 != 0) { > - /* existing dynamic address is no longer valid */ > - need_addr.push((port.lsp._uuid, hint)) > - } else if (used_addrs.contains(haddr)) { > - dup_addrs.insert(dynaddr.ea); > - } else { > - /* has valid dynamic address -- record it in used_addrs */ > - used_addrs.insert(haddr) > - } > - } > - } > - }; > - // FIXME: if a port has a dynamic address that is no longer valid, and > - // we are unable to allocate a new address, the current behavior is to > - // keep the old invalid address. 
It should probably be changed to > - // removing the old address. > - // FIXME: OVN allocates MAC addresses by seeding them with IPv4 address. > - // Implement a custom allocation function that simulates this behavior. > - var res = allocate_with_hint(used_addrs, need_addr, mac_prefix + 1, mac_prefix + mAC_ADDR_SPACE() - 1); > - var res_strs = vec_empty(); > - for (x in res) { > - (var uuid, var addr) = x; > - res_strs.push("${uuid2str(uuid)}: ${EthAddr{addr}}") > - }; > - (res, dup_addrs) > - }, > - var port_address = FlatMap(dyn_addresses), > - (var lsport, var dyn_addr) = port_address. > - > -relation SwitchPortDuplicateMACAddress(dup_addrs: Set<eth_addr>) > -Warning["Duplicate MAC set: ${ea}"] :- > - SwitchPortDuplicateMACAddress(dup_addrs), > - var ea = FlatMap(dup_addrs). > - > -/* Compute new dynamic MAC address assignment: > - * - port does not need dynamic MAC - use `static_dynamic_mac` > - * - a new address has been allocated for port - use this address > - * - otherwise, use existing dynamic MAC > - */ > -relation SwitchPortNewMACDynAddress(lsport: uuid, dyn_addr: Option<eth_addr>) > - > -SwitchPortNewMACDynAddress(lsp._uuid, mac_addr) :- > - &SwitchPort(.needs_dynamic_macaddress = false, > - .lsp = lsp, > - .sw = sw, > - .static_dynamic_mac = static_dynamic_mac), > - var mac_addr = match (static_dynamic_mac) { > - None -> None, > - Some{addr} -> { > - if (sw.subnet.is_some() or sw.ipv6_prefix.is_some() or > - map_get(sw.other_config, i"mac_only") == Some{i"true"}) { > - Some{addr} > - } else { > - None > - } > - } > - }. > - > -SwitchPortNewMACDynAddress(lsport, Some{EthAddr{addr}}) :- > - SwitchPortAllocatedMACDynAddress(lsport, addr). > - > -SwitchPortNewMACDynAddress(lsp._uuid, addr) :- > - &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp, .dynamic_address = cur_address), > - not SwitchPortAllocatedMACDynAddress(lsp._uuid, _), > - var addr = match (cur_address) { > - None -> None, > - Some{dynaddr} -> Some{dynaddr.ea} > - }. 
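[The hint computation in the MAC-allocation rule above seeds the tentative suffix from the port's IPv4 address, with the `% (space - 1) + 1` trick keeping it in 1..=0xfffffe so the all-zeros and all-ones suffixes are never produced. A sketch of just that arithmetic (the standalone function is mine):]

```rust
const MAC_ADDR_SPACE: u64 = 0xff_ffff;

// Sketch: derive the tentative MAC from an IPv4 address, as the
// allocation rule above does. The suffix is always in 1..=0xfffffe.
fn mac_hint(mac_prefix: u64, ipv4_host_endian: u32) -> u64 {
    let suffix = (ipv4_host_endian as u64 & 0xff_ffff) % (MAC_ADDR_SPACE - 1) + 1;
    mac_prefix | suffix
}
```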
> - > -/* > - * Dynamic IPv6 address allocation. > - * `needs_dynamic_ipv6address` -> mac.to_ipv6_eui64(ipv6_prefix) > - */ > -relation SwitchPortNewDynamicAddress(port: Intern<SwitchPort>, address: Option<lport_addresses>) > - > -SwitchPortNewDynamicAddress(port, None) :- > - port in &SwitchPort(.lsp = lsp), > - SwitchPortNewMACDynAddress(lsp._uuid, None). > - > -SwitchPortNewDynamicAddress(port, lport_address) :- > - port in &SwitchPort(.lsp = lsp, > - .sw = sw, > - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, > - .static_dynamic_ipv6 = static_dynamic_ipv6), > - SwitchPortNewMACDynAddress(lsp._uuid, Some{mac_addr}), > - SwitchPortNewIPv4DynAddress(lsp._uuid, opt_ip4_addr), > - var ip6_addr = match ((static_dynamic_ipv6, needs_dynamic_ipv6address, sw.ipv6_prefix)) { > - (Some{ipv6}, _, _) -> " ${ipv6}", > - (_, true, Some{prefix}) -> " ${mac_addr.to_ipv6_eui64(prefix)}", > - _ -> "" > - }, > - var ip4_addr = match (opt_ip4_addr) { > - None -> "", > - Some{ip4} -> " ${ip4}" > - }, > - var addr_string = "${mac_addr}${ip6_addr}${ip4_addr}", > - var lport_address = extract_addresses(addr_string). > - > - > -///* If there's more than one dynamic addresses in port->addresses, log a warning > -// and only allocate the first dynamic address */ > -// > -// VLOG_WARN_RL(&rl, "More than one dynamic address " > -// "configured for logical switch port '%s'", > -// nbsp->name); > -// > -////>> * MAC addresses suffixes in OUIs managed by OVN"s MACAM (MAC Address > -////>> Management) system, in the range 1...0xfffffe. > -////>> * IPv4 addresses in ranges managed by OVN's IPAM (IP Address Management) > -////>> system. The range varies depending on the size of the subnet. > -////>> > -////>> Are these `dynamic_addresses` in OVN_Northbound.Logical_Switch_Port`? 
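[The IPv6 rule above relies on `mac.to_ipv6_eui64(prefix)`, i.e. the standard modified EUI-64 construction: flip the universal/local bit of the MAC's first octet and splice `0xff, 0xfe` between the OUI and the NIC-specific bytes. A sketch of the interface-identifier half (the prefix is then placed in the upper 64 bits):]

```rust
// Sketch: modified EUI-64 interface identifier from a MAC address,
// which is what to_ipv6_eui64() combines with the switch ipv6_prefix.
fn eui64_interface_id(mac: [u8; 6]) -> [u8; 8] {
    [
        mac[0] ^ 0x02, // flip the universal/local bit
        mac[1], mac[2],
        0xff, 0xfe,    // spliced-in EUI-64 marker bytes
        mac[3], mac[4], mac[5],
    ]
}
```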
> diff --git a/northd/lrouter.dl b/northd/lrouter.dl > deleted file mode 100644 > index 0e4308eb5e..0000000000 > --- a/northd/lrouter.dl > +++ /dev/null > @@ -1,947 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. > - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. > - */ > - > -import OVN_Northbound as nb > -import OVN_Southbound as sb > -import graph as graph > -import multicast > -import ovsdb > -import ovn > -import helpers > -import lswitch > -import set > - > -function is_enabled(lr: nb::Logical_Router): bool { is_enabled(lr.enabled) } > -function is_enabled(lrp: Intern<nb::Logical_Router_Port>): bool { is_enabled(lrp.enabled) } > -function is_enabled(rp: RouterPort): bool { rp.lrp.is_enabled() } > -function is_enabled(rp: Intern<RouterPort>): bool { rp.lrp.is_enabled() } > - > -/* default logical flow prioriry for distributed routes */ > -function dROUTE_PRIO(): bit<32> = 400 > - > -/* LogicalRouterPortCandidate. > - * > - * Each row pairs a logical router port with its logical router, but without > - * checking that the logical router port is on only one logical router. > - * > - * (Use LogicalRouterPort instead, which guarantees uniqueness.) */ > -relation LogicalRouterPortCandidate(lrp_uuid: uuid, lr_uuid: uuid) > -LogicalRouterPortCandidate(lrp_uuid, lr_uuid) :- > - nb::Logical_Router(._uuid = lr_uuid, .ports = ports), > - var lrp_uuid = FlatMap(ports). 
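[The `LogicalRouterPortCandidate` rule above, together with the warning that follows, implements a group-and-count uniqueness check: a port is accepted into `LogicalRouterPort` only if exactly one router claims it. The equivalent check outside DDlog, sketched with hypothetical names:]

```rust
use std::collections::HashMap;

// Sketch: the uniqueness check behind LogicalRouterPort above — group
// the routers claiming each port, and flag any port claimed by more
// than one router as misconfigured.
fn misconfigured_ports(pairs: &[(&str, &str)]) -> Vec<String> {
    let mut by_port: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(lrp, lr) in pairs {
        by_port.entry(lrp).or_default().push(lr);
    }
    let mut bad: Vec<String> = by_port
        .into_iter()
        .filter(|(_, lrs)| lrs.len() > 1)
        .map(|(lrp, _)| lrp.to_string())
        .collect();
    bad.sort();
    bad
}
```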
> -Warning[message] :- > - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), > - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), > - lrs.size() > 1, > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > - var message = "Bad configuration: logical router port ${lrp.name} belongs " > - "to more than one logical router". > - > -/* Each row means 'lport' is in 'lrouter' (and only that lrouter). */ > -relation LogicalRouterPort(lport: uuid, lrouter: uuid) > -LogicalRouterPort(lrp_uuid, lr_uuid) :- > - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), > - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), > - lrs.size() == 1, > - Some{var lr_uuid} = lrs.nth(0). > - > -/* > - * Peer routers. > - * > - * Each row in the relation indicates that routers 'a' and 'b' can reach > - * each other directly through router ports. > - * > - * This relation is symmetric: if (a,b) then (b,a). > - * This relation is antireflexive: if (a,b) then a != b. > - * > - * Routers aren't peers if they can reach each other only through logical > - * switch ports (that's the ReachableLogicalRouter table). > - */ > -relation PeerLogicalRouter(a: uuid, b: uuid) > -PeerLogicalRouter(lrp_uuid, peer._uuid) :- > - LogicalRouterPort(lrp_uuid, _), > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > - Some{var peer_name} = lrp.peer, > - peer in &nb::Logical_Router_Port(.name = peer_name), > - peer.peer == Some{lrp.name}, // 'peer' must point back to 'lrp' > - lrp_uuid != peer._uuid. // No reflexive pointers. > - > -/* > - * First-hop routers. > - * > - * Each row indicates that 'lrouter' is a first-hop logical router for > - * 'lswitch', that is, that a "cable" directly connects 'lrouter' and > - * 'lswitch'. > - * > - * A switch can have multiple first-hop routers. 
*/ > -relation FirstHopLogicalRouter(lrouter: uuid, lswitch: uuid) > -FirstHopLogicalRouter(lrouter, lswitch) :- > - LogicalRouterPort(lrp_uuid, lrouter), > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid, .peer = None), > - LogicalSwitchRouterPort(lsp_uuid, lrp.name, lswitch). > - > -relation LogicalSwitchRouterPort(lsp: uuid, lsp_router_port: istring, ls: uuid) > -LogicalSwitchRouterPort(lsp, lsp_router_port, ls) :- > - LogicalSwitchPort(lsp, ls), > - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router", .options = options), > - Some{var lsp_router_port} = options.get(i"router-port"). > - > -/* Undirected edges connecting one router and another. > - * This is a building block for ConnectedLogicalRouter. */ > -relation LogicalRouterEdge(a: uuid, b: uuid) > -LogicalRouterEdge(a, b) :- > - FirstHopLogicalRouter(a, ls), > - FirstHopLogicalRouter(b, ls), > - a <= b. > -LogicalRouterEdge(a, b) :- PeerLogicalRouter(a, b). > -function edge_from(e: LogicalRouterEdge): uuid = { e.a } > -function edge_to(e: LogicalRouterEdge): uuid = { e.b } > - > -/* > - * Sets of routers such that packets can transit directly or indirectly among > - * any of the routers in a set. Any given router is in exactly one set. > - * > - * Each row (set, elem) identifes the membership of router with UUID 'elem' in > - * set 'set', where 'set' is the minimum UUID across all its elements. > - * > - * We implement this using the graph transformer because there is no > - * way to implement "connected components" in raw DDlog that avoids O(n**2) > - * blowup in the number of nodes in a component. > - */ > -relation ConnectedLogicalRouter[(uuid, uuid)] > -apply graph::ConnectedComponents(LogicalRouterEdge, edge_from, edge_to) > - -> (ConnectedLogicalRouter) > - > -// ha_chassis_group and gateway_chassis may not both be present. 
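[The `ConnectedLogicalRouter` relation defined just below delegates to the graph transformer's `ConnectedComponents` over `LogicalRouterEdge`, with the minimum UUID acting as each component's representative. The same contract, sketched as a small union-find over integer node ids (a stand-in for the UUIDs):]

```rust
// Sketch: connected components with the minimum element as the
// representative, matching what ConnectedLogicalRouter gets from the
// graph transformer. Nodes are 0..n; edges are undirected.
fn components(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    fn find(parent: &mut Vec<usize>, mut x: usize) -> usize {
        while parent[x] != x {
            parent[x] = parent[parent[x]]; // path halving
            x = parent[x];
        }
        x
    }
    let mut parent: Vec<usize> = (0..n).collect();
    for &(a, b) in edges {
        let (ra, rb) = (find(&mut parent, a), find(&mut parent, b));
        // Always attach the larger root under the smaller one, so the
        // final representative is the minimum member of the component.
        if ra < rb {
            parent[rb] = ra;
        } else if rb < ra {
            parent[ra] = rb;
        }
    }
    (0..n).map(|i| find(&mut parent, i)).collect()
}
```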
> -Warning[message] :- > - lrp in &nb::Logical_Router_Port(), > - lrp.ha_chassis_group.is_some(), > - not lrp.gateway_chassis.is_empty(), > - var message = "Both ha_chassis_group and gateway_chassis configured on " > - "port ${lrp.name}; ignoring the latter". > - > -// A distributed gateway port cannot also be an L3 gateway router. > -Warning[message] :- > - lrp in &nb::Logical_Router_Port(), > - lrp.ha_chassis_group.is_some() or not lrp.gateway_chassis.is_empty(), > - lrp.options.contains_key(i"chassis"), > - var message = "Bad configuration: distributed gateway port configured on " > - "port ${lrp.name} on L3 gateway router". > - > -/* Distributed gateway ports. > - * > - * Each row means 'lrp' is a distributed gateway port on 'lr_uuid'. > - * > - * A logical router can have multiple distributed gateway ports. */ > -relation DistributedGatewayPort(lrp: Intern<nb::Logical_Router_Port>, > - lr_uuid: uuid, cr_lrp_uuid: uuid) > - > -// lrp._uuid is already in use; generate a new UUID by hashing it. > -DistributedGatewayPort(lrp, lr_uuid, hash128(lrp_uuid)) :- > - lr in nb::Logical_Router(._uuid = lr_uuid), > - LogicalRouterPort(lrp_uuid, lr._uuid), > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > - not lrp.options.contains_key(i"chassis"), > - var has_hcg = lrp.ha_chassis_group.is_some(), > - var has_gc = not lrp.gateway_chassis.is_empty(), > - has_hcg or has_gc. > - > -/* HAChassis is an abstraction over nb::Gateway_Chassis and nb::HA_Chassis, which > - * are different ways to represent the same configuration. Each row is > - * effectively one HA_Chassis record. (Usually, we could associate each > - * row with a particular 'lr_uuid', but it's permissible for more than one > - * logical router to use a HA chassis group, so we omit it so that multiple > - * references get merged.) > - * > - * nb::Gateway_Chassis has an "options" column that this omits because > - * nb::HA_Chassis doesn't have anything similar. 
That's OK because no options > - * were ever defined. */ > -relation HAChassis(hacg_uuid: uuid, > - hac_uuid: uuid, > - chassis_name: istring, > - priority: integer, > - external_ids: Map<istring,istring>) > -HAChassis(ha_chassis_group_uuid(lrp._uuid), gw_chassis_uuid, > - chassis_name, priority, external_ids) :- > - DistributedGatewayPort(.lrp = lrp), > - lrp.ha_chassis_group == None, > - var gw_chassis_uuid = FlatMap(lrp.gateway_chassis), > - nb::Gateway_Chassis(._uuid = gw_chassis_uuid, > - .chassis_name = chassis_name, > - .priority = priority, > - .external_ids = eids), > - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). > -HAChassis(ha_chassis_group_uuid(ha_chassis_group._uuid), ha_chassis_uuid, > - chassis_name, priority, external_ids) :- > - DistributedGatewayPort(.lrp = lrp), > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), > - var ha_chassis_uuid = FlatMap(ha_chassis_group.ha_chassis), > - nb::HA_Chassis(._uuid = ha_chassis_uuid, > - .chassis_name = chassis_name, > - .priority = priority, > - .external_ids = eids), > - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). > - > -/* HAChassisGroup is an abstraction for sb::HA_Chassis_Group that papers over > - * the two southbound ways to configure it via nb::Gateway_Chassis and > - * nb::HA_Chassis. The former configuration method does not provide a name or > - * external_ids for the group (only for individual chassis), so we generate > - * them. > - * > - * (Usually, we could associated each row with a particular 'lr_uuid', but it's > - * permissible for more than one logical router to use a HA chassis group, so > - * we omit it so that multiple references get merged.) 
> - */ > -relation HAChassisGroup(uuid: uuid, > - name: istring, > - external_ids: Map<istring,istring>) > -HAChassisGroup(ha_chassis_group_uuid(lrp._uuid), lrp.name, map_empty()) :- > - DistributedGatewayPort(.lrp = lrp), > - lrp.ha_chassis_group == None, > - not lrp.gateway_chassis.is_empty(). > -HAChassisGroup(ha_chassis_group_uuid(hac_group_uuid), > - name, external_ids) :- > - DistributedGatewayPort(.lrp = lrp), > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > - nb::HA_Chassis_Group(._uuid = hacg_uuid, > - .name = name, > - .external_ids = external_ids). > - > -/* Each row maps from a distributed gateway logical router port to the name of > - * its HAChassisGroup. > - * This level of indirection is needed because multiple distributed gateway > - * logical router ports are allowed to reference a given HAChassisGroup. */ > -relation DistributedGatewayPortHAChassisGroup( > - lrp: Intern<nb::Logical_Router_Port>, > - hacg_uuid: uuid) > -DistributedGatewayPortHAChassisGroup(lrp, ha_chassis_group_uuid(lrp._uuid)) :- > - DistributedGatewayPort(.lrp = lrp), > - lrp.ha_chassis_group == None, > - lrp.gateway_chassis.size() > 0. > -DistributedGatewayPortHAChassisGroup(lrp, > - ha_chassis_group_uuid(hac_group_uuid)) :- > - DistributedGatewayPort(.lrp = lrp), > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > - nb::HA_Chassis_Group(._uuid = hac_group_uuid). > - > - > -/* For each router port, tracks whether it's a redirect port of its router */ > -relation RouterPortIsRedirect(lrp: uuid, is_redirect: bool) > -RouterPortIsRedirect(lrp, true) :- DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). > -RouterPortIsRedirect(lrp, false) :- > - &nb::Logical_Router_Port(._uuid = lrp), > - not DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). > - > -/* > - * LogicalRouterDGWPorts maps from each logical router UUID > - * to the logical router's set of distributed gateway (or redirect) ports. 
*/ > -relation LogicalRouterDGWPorts( > - lr_uuid: uuid, > - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>) > -LogicalRouterDGWPorts(lr_uuid, l3dgw_ports) :- > - DistributedGatewayPort(lrp, lr_uuid, _), > - var l3dgw_ports = lrp.group_by(lr_uuid).to_vec(). > -LogicalRouterDGWPorts(lr_uuid, vec_empty()) :- > - lr in nb::Logical_Router(), > - var lr_uuid = lr._uuid, > - not DistributedGatewayPort(_, lr_uuid, _). > - > -typedef ExceptionalExtIps = AllowedExtIps{ips: Intern<nb::Address_Set>} > - | ExemptedExtIps{ips: Intern<nb::Address_Set>} > - > -typedef NAT = NAT{ > - nat: Intern<nb::NAT>, > - external_ip: v46_ip, > - external_mac: Option<eth_addr>, > - exceptional_ext_ips: Option<ExceptionalExtIps> > -} > - > -relation LogicalRouterNAT0( > - lr: uuid, > - nat: Intern<nb::NAT>, > - external_ip: v46_ip, > - external_mac: Option<eth_addr>) > -LogicalRouterNAT0(lr, nat, external_ip, external_mac) :- > - nb::Logical_Router(._uuid = lr, .nat = nats), > - var nat_uuid = FlatMap(nats), > - nat in &nb::NAT(._uuid = nat_uuid), > - Some{var external_ip} = ip46_parse(nat.external_ip.ival()), > - var external_mac = match (nat.external_mac) { > - Some{s} -> eth_addr_from_string(s.ival()), > - None -> None > - }. > -Warning["Bad ip address ${nat.external_ip} in nat configuration for router ${lr_name}."] :- > - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), > - var nat_uuid = FlatMap(nats), > - nat in &nb::NAT(._uuid = nat_uuid), > - None = ip46_parse(nat.external_ip.ival()). > -Warning["Bad MAC address ${s} in nat configuration for router ${lr_name}."] :- > - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), > - var nat_uuid = FlatMap(nats), > - nat in &nb::NAT(._uuid = nat_uuid), > - Some{var s} = nat.external_mac, > - None = eth_addr_from_string(s.ival()). 
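The `LogicalRouterNAT0` rule and its two `Warning` companions above implement a parse-or-warn pattern: a NAT row only joins the relation if its addresses parse, and a warning row is emitted otherwise. A hedged Python sketch of that validation (the MAC check is my own simplified stand-in for `eth_addr_from_string`, not the original parser):

```python
import ipaddress

def parse_nat(external_ip, external_mac=None):
    """Mirror of the removed LogicalRouterNAT0 validation: the rule only
    fires when external_ip parses as an IPv4/IPv6 address and, if set,
    external_mac parses as an Ethernet address; otherwise a Warning row
    is produced instead of a NAT row."""
    try:
        ip = ipaddress.ip_address(external_ip)
    except ValueError:
        return None, f"Bad ip address {external_ip} in nat configuration."
    mac = None
    if external_mac is not None:
        parts = external_mac.split(":")
        hexdigits = "0123456789abcdefABCDEF"
        if len(parts) != 6 or not all(
                len(p) == 2 and all(c in hexdigits for c in p) for p in parts):
            return None, f"Bad MAC address {external_mac} in nat configuration."
        mac = external_mac.lower()
    return (ip, mac), None
```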
> - > -relation LogicalRouterNAT(lr: uuid, nat: NAT) > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, None}) :- > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > - nat.allowed_ext_ips == None, > - nat.exempted_ext_ips == None. > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{AllowedExtIps{__as}}}) :- > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > - nat.exempted_ext_ips == None, > - Some{var __as_uuid} = nat.allowed_ext_ips, > - __as in &nb::Address_Set(._uuid = __as_uuid). > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{ExemptedExtIps{__as}}}) :- > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > - nat.allowed_ext_ips == None, > - Some{var __as_uuid} = nat.exempted_ext_ips, > - __as in &nb::Address_Set(._uuid = __as_uuid). > -Warning["NAT rule: ${nat._uuid} not applied, since" > - "both allowed and exempt external ips set"] :- > - LogicalRouterNAT0(lr, nat, _, _), > - nat.allowed_ext_ips.is_some() and nat.exempted_ext_ips.is_some(). > - > -relation LogicalRouterNATs(lr: uuid, nat: Vec<NAT>) > - > -LogicalRouterNATs(lr, nats) :- > - LogicalRouterNAT(lr, nat), > - var nats = nat.group_by(lr).to_vec(). > - > -LogicalRouterNATs(lr, vec_empty()) :- > - nb::Logical_Router(._uuid = lr), > - not LogicalRouterNAT(lr, _). 
> - > -function get_force_snat_ip(options: Map<istring,istring>, key_type: istring): Set<v46_ip> = > -{ > - var ips = set_empty(); > - match (options.get(i"${key_type}_force_snat_ip")) { > - None -> (), > - Some{s} -> { > - for (token in s.split(" ")) { > - match (ip46_parse(token)) { > - Some{ip} -> ips.insert(ip), > - _ -> () // XXX warn > - } > - }; > - } > - }; > - ips > -} > - > -function has_force_snat_ip(options: Map<istring, istring>, key_type: istring): bool { > - not get_force_snat_ip(options, key_type).is_empty() > -} > - > -function lb_force_snat_router_ip(lr_options: Map<istring, istring>): bool { > - lr_options.get(i"lb_force_snat_ip") == Some{i"router_ip"} and > - lr_options.contains_key(i"chassis") > -} > - > -typedef LBForceSNAT = NoForceSNAT > - | ForceSNAT > - | SkipSNAT > - > -function snat_for_lb(lr_options: Map<istring, istring>, lb: Intern<nb::Load_Balancer>): LBForceSNAT { > - if (lb.options.get_bool_def(i"skip_snat", false)) { > - return SkipSNAT > - }; > - if (not get_force_snat_ip(lr_options, i"lb").is_empty() or lb_force_snat_router_ip(lr_options)) { > - return ForceSNAT > - }; > - return NoForceSNAT > -} > - > -/* For each router, collect the set of IPv4 and IPv6 addresses used for SNAT, > - * which includes: > - * > - * - dnat_force_snat_addrs > - * - lb_force_snat_addrs > - * - IP addresses used in the router's attached NAT rules > - * > - * This is like init_nat_entries() in northd.c. */ > -relation LogicalRouterSnatIP(lr: uuid, snat_ip: v46_ip, nat: Option<NAT>) > -LogicalRouterSnatIP(lr._uuid, force_snat_ip, None) :- > - lr in nb::Logical_Router(), > - var dnat_force_snat_ips = get_force_snat_ip(lr.options, i"dnat"), > - var lb_force_snat_ips = if (lb_force_snat_router_ip(lr.options)) { > - set_empty() > - } else { > - get_force_snat_ip(lr.options, i"lb") > - }, > - var force_snat_ip = FlatMap(dnat_force_snat_ips.union(lb_force_snat_ips)). 
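The `get_force_snat_ip()` function above (with its `// XXX warn` note on unparseable tokens) can be sketched in Python as follows — a stand-in for the DDlog original, using the stdlib `ipaddress` module in place of `ip46_parse`:

```python
import ipaddress

def get_force_snat_ip(options, key_type):
    """Sketch of the removed get_force_snat_ip(): read the
    '<key_type>_force_snat_ip' option, split on spaces, and keep the
    tokens that parse as IP addresses.  Bad tokens are silently skipped,
    matching the original (which carried an 'XXX warn' note there)."""
    ips = set()
    value = options.get(f"{key_type}_force_snat_ip")
    if value is None:
        return ips
    for token in value.split(" "):
        try:
            ips.add(ipaddress.ip_address(token))
        except ValueError:
            pass  # original DDlog code noted "XXX warn" here
    return ips
```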
> -LogicalRouterSnatIP(lr, snat_ip, Some{nat}) :- > - LogicalRouterNAT(lr, nat@NAT{.nat = &nb::NAT{.__type = i"snat"}, .external_ip = snat_ip}). > - > -function group_to_setunionmap(g: Group<'K1, ('K2,Set<'V>)>): Map<'K2,Set<'V>> { > - var map = map_empty(); > - for ((entry, _) in g) { > - (var key, var value) = entry; > - match (map.get(key)) { > - None -> map.insert(key, value), > - Some{old_value} -> map.insert(key, old_value.union(value)) > - } > - }; > - map > -} > -relation LogicalRouterSnatIPs(lr: uuid, snat_ips: Map<v46_ip, Set<NAT>>) > -LogicalRouterSnatIPs(lr, snat_ips) :- > - LogicalRouterSnatIP(lr, snat_ip, nat), > - var snat_ips = (snat_ip, nat.to_set()).group_by(lr).group_to_setunionmap(). > -LogicalRouterSnatIPs(lr._uuid, map_empty()) :- > - lr in nb::Logical_Router(), > - not LogicalRouterSnatIP(.lr = lr._uuid). > - > -relation LogicalRouterLB(lr: uuid, lb: Intern<LoadBalancer>) > -LogicalRouterLB(lr, lb) :- > - nb::Logical_Router(._uuid = lr, .load_balancer = lbs), > - var lb_uuid = FlatMap(lbs), > - lb in &LoadBalancer(.lb = &nb::Load_Balancer{._uuid = lb_uuid}). > - > -relation LogicalRouterLBs(lr: uuid, lb: Vec<Intern<LoadBalancer>>) > - > -LogicalRouterLBs(lr, lbs) :- > - LogicalRouterLB(lr, lb), > - var lbs = lb.group_by(lr).to_vec(). > - > -LogicalRouterLBs(lr, vec_empty()) :- > - nb::Logical_Router(._uuid = lr), > - not LogicalRouterLB(lr, _). > - > -// LogicalRouterCopp maps from each LR to its collection of Copp meters, > -// dropping any Copp meter whose meter name doesn't exist. > -relation LogicalRouterCopp(lr: uuid, meters: Map<istring,istring>) > -LogicalRouterCopp(lr, meters) :- LogicalRouterCopp0(lr, meters). > -LogicalRouterCopp(lr, map_empty()) :- > - nb::Logical_Router(._uuid = lr), > - not LogicalRouterCopp0(lr, _). 
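The `group_to_setunionmap()` helper above is the aggregation `LogicalRouterSnatIPs` relies on to merge NAT rows that share an SNAT IP. Its semantics translate directly to Python (a sketch, with plain tuples standing in for the DDlog group):

```python
def group_to_setunionmap(pairs):
    """Sketch of the removed group_to_setunionmap(): fold (key, set)
    pairs into one map, taking the union of the sets that share a key."""
    out = {}
    for key, value in pairs:
        if key in out:
            out[key] = out[key] | value
        else:
            out[key] = set(value)
    return out
```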
> - > -relation LogicalRouterCopp0(lr: uuid, meters: Map<istring,istring>) > -LogicalRouterCopp0(lr, meters) :- > - nb::Logical_Router(._uuid = lr, .copp = Some{copp_uuid}), > - nb::Copp(._uuid = copp_uuid, .meters = meters), > - var entry = FlatMap(meters), > - (var copp_id, var meter_name) = entry, > - &nb::Meter(.name = meter_name), > - var meters = (copp_id, meter_name).group_by(lr).to_map(). > - > -/* Router relation collects all attributes of a logical router. > - * > - * `l3dgw_ports` - optional redirect ports (see `DistributedGatewayPort`) > - * `is_gateway` - true iff the router is a gateway router. Together with > - * `l3dgw_port`, this flag affects the generation of various flows > - * related to NAT and load balancing. > - * `learn_from_arp_request` - whether ARP requests to addresses on the router > - * should always be learned > - */ > - > -function chassis_redirect_name(port_name: istring): string = "cr-${port_name}" > - > -typedef LoadBalancer = LoadBalancer { > - lb: Intern<nb::Load_Balancer>, > - ipv4s: Set<istring>, > - ipv6s: Set<istring>, > - routable: bool > -} > - > -relation LoadBalancer[Intern<LoadBalancer>] > -LoadBalancer[LoadBalancer{lb, ipv4s, ipv6s, routable}.intern()] :- > - nb::Load_Balancer[lb], > - var routable = lb.options.get_bool_def(i"add_route", false), > - (var ipv4s, var ipv6s) = { > - var ipv4s = set_empty(); > - var ipv6s = set_empty(); > - for ((vip, _) in lb.vips) { > - /* node->key contains IP:port or just IP. */ > - match (ip_address_and_port_from_lb_key(vip.ival())) { > - None -> (), > - Some{(IPv4{ipv4}, _)} -> ipv4s.insert(i"${ipv4}"), > - Some{(IPv6{ipv6}, _)} -> ipv6s.insert(i"${ipv6}"), > - } > - }; > - (ipv4s, ipv6s) > - }. > - > -typedef Router = Router { > - /* Fields copied from nb::Logical_Router. 
*/ > - _uuid: uuid, > - name: istring, > - policies: Set<uuid>, > - enabled: Option<bool>, > - nat: Set<uuid>, > - options: Map<istring,istring>, > - external_ids: Map<istring,istring>, > - > - /* Additional computed fields. */ > - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>, > - is_gateway: bool, > - nats: Vec<NAT>, > - snat_ips: Map<v46_ip, Set<NAT>>, > - mcast_cfg: Intern<McastRouterCfg>, > - learn_from_arp_request: bool, > - force_lb_snat: bool, > - copp: Map<istring, istring>, > -} > - > -relation Router[Intern<Router>] > - > -Router[Router{ > - ._uuid = lr._uuid, > - .name = lr.name, > - .policies = lr.policies, > - .enabled = lr.enabled, > - .nat = lr.nat, > - .options = lr.options, > - .external_ids = lr.external_ids, > - > - .l3dgw_ports = l3dgw_ports, > - .is_gateway = lr.options.contains_key(i"chassis"), > - .nats = nats, > - .snat_ips = snat_ips, > - .mcast_cfg = mcast_cfg, > - .learn_from_arp_request = learn_from_arp_request, > - .force_lb_snat = force_lb_snat, > - .copp = copp}.intern()] :- > - lr in nb::Logical_Router(), > - lr.is_enabled(), > - LogicalRouterDGWPorts(lr._uuid, l3dgw_ports), > - LogicalRouterNATs(lr._uuid, nats), > - LogicalRouterSnatIPs(lr._uuid, snat_ips), > - LogicalRouterCopp(lr._uuid, copp), > - mcast_cfg in &McastRouterCfg(.datapath = lr._uuid), > - var learn_from_arp_request = lr.options.get_bool_def(i"always_learn_from_arp_request", true), > - var force_lb_snat = lb_force_snat_router_ip(lr.options). 
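The `LoadBalancer` rule above sorts each VIP key ("IP:port or just IP", per its comment) into IPv4 and IPv6 sets via `ip_address_and_port_from_lb_key()`. A hedged Python sketch of that classification — the bracketed-IPv6 handling is my assumption about the key format, not taken from this diff:

```python
import ipaddress

def classify_vips(vips):
    """Sketch of the VIP handling in the removed LoadBalancer rule: each
    key is 'IP:port' or a bare IP; sort addresses into IPv4 and IPv6
    sets, skipping keys that do not parse.  Assumes IPv6 keys with a
    port use the bracketed '[addr]:port' form."""
    ipv4s, ipv6s = set(), set()
    for vip in vips:
        if vip.startswith("["):            # "[v6addr]:port"
            addr = vip[1:vip.find("]")]
        elif vip.count(":") == 1:          # "v4addr:port"
            addr = vip.split(":")[0]
        else:                              # bare v4 or v6 address
            addr = vip
        try:
            ip = ipaddress.ip_address(addr)
        except ValueError:
            continue
        (ipv4s if ip.version == 4 else ipv6s).add(str(ip))
    return ipv4s, ipv6s
```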
> - > -typedef LogicalRouterLBIPs = LogicalRouterLBIPs { > - lr: uuid, > - lb_ipv4s_routable: Set<istring>, > - lb_ipv4s_unroutable: Set<istring>, > - lb_ipv6s_routable: Set<istring>, > - lb_ipv6s_unroutable: Set<istring>, > -} > - > -relation LogicalRouterLBIPs[Intern<LogicalRouterLBIPs>] > - > -LogicalRouterLBIPs[LogicalRouterLBIPs{ > - .lr = lr_uuid, > - .lb_ipv4s_routable = lb_ipv4s_routable, > - .lb_ipv4s_unroutable = lb_ipv4s_unroutable, > - .lb_ipv6s_routable = lb_ipv6s_routable, > - .lb_ipv6s_unroutable = lb_ipv6s_unroutable > - }.intern() > -] :- > - LogicalRouterLBs(lr_uuid, lbs), > - (var lb_ipv4s_routable, var lb_ipv4s_unroutable, > - var lb_ipv6s_routable, var lb_ipv6s_unroutable) = { > - var lb_ipv4s_routable = set_empty(); > - var lb_ipv4s_unroutable = set_empty(); > - var lb_ipv6s_routable = set_empty(); > - var lb_ipv6s_unroutable = set_empty(); > - for (lb in lbs) { > - if (lb.routable) { > - lb_ipv4s_routable = lb_ipv4s_routable.union(lb.ipv4s); > - lb_ipv6s_routable = lb_ipv6s_routable.union(lb.ipv6s); > - } else { > - lb_ipv4s_unroutable = lb_ipv4s_unroutable.union(lb.ipv4s); > - lb_ipv6s_unroutable = lb_ipv6s_unroutable.union(lb.ipv6s); > - } > - }; > - (lb_ipv4s_routable, lb_ipv4s_unroutable, > - lb_ipv6s_routable, lb_ipv6s_unroutable) > - }. > - > -/* Router - to - LB-uuid */ > -relation RouterLB(router: Intern<Router>, lb_uuid: uuid) > - > -RouterLB(router, lb.lb._uuid) :- > - LogicalRouterLB(lr_uuid, lb), > - router in &Router(._uuid = lr_uuid). > - > -/* Like RouterLB, but only includes gateway routers. */ > -relation GWRouterLB(router: Intern<Router>, lb_uuid: uuid) > - > -GWRouterLB(router, lb_uuid) :- > - RouterLB(router, lb_uuid), > - router.l3dgw_ports.len() > 0 or router.is_gateway. 
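The `LogicalRouterLBIPs` rule above partitions each router's load-balancer VIPs into routable and unroutable buckets, per address family, based on the LB's `add_route` option. A minimal Python sketch of that fold (tuples stand in for `LoadBalancer` rows):

```python
def partition_lb_ips(lbs):
    """Sketch of the removed LogicalRouterLBIPs aggregation: union each
    load balancer's VIPs into routable/unroutable buckets per family.
    'lbs' is a list of (routable, ipv4s, ipv6s) tuples."""
    v4_routable, v4_unroutable = set(), set()
    v6_routable, v6_unroutable = set(), set()
    for routable, ipv4s, ipv6s in lbs:
        if routable:
            v4_routable |= ipv4s
            v6_routable |= ipv6s
        else:
            v4_unroutable |= ipv4s
            v6_unroutable |= ipv6s
    return v4_routable, v4_unroutable, v6_routable, v6_unroutable
```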
> - > -/* Router-to-router logical port connections */ > -relation RouterRouterPeer(rport1: uuid, rport2: uuid, rport2_name: istring) > - > -RouterRouterPeer(rport1, rport2, peer_name) :- > - &nb::Logical_Router_Port(._uuid = rport1, .peer = peer), > - Some{var peer_name} = peer, > - &nb::Logical_Router_Port(._uuid = rport2, .name = peer_name). > - > -/* Router port can peer with anothe router port, a switch port or have > - * no peer. > - */ > -typedef RouterPeer = PeerRouter{rport: uuid, name: istring} > - | PeerSwitch{sport: uuid, name: istring} > - | PeerNone > - > -function router_peer_name(peer: RouterPeer): Option<istring> = { > - match (peer) { > - PeerRouter{_, n} -> Some{n}, > - PeerSwitch{_, n} -> Some{n}, > - PeerNone -> None > - } > -} > - > -relation RouterPortPeer(rport: uuid, peer: RouterPeer) > - > -/* Router-to-router logical port connections */ > -RouterPortPeer(rport, PeerSwitch{sport, sport_name}) :- > - SwitchRouterPeer(sport, sport_name, rport). > - > -RouterPortPeer(rport1, PeerRouter{rport2, rport2_name}) :- > - RouterRouterPeer(rport1, rport2, rport2_name). > - > -RouterPortPeer(rport, PeerNone) :- > - &nb::Logical_Router_Port(._uuid = rport), > - not SwitchRouterPeer(_, _, rport), > - not RouterRouterPeer(rport, _, _). > - > -/* Each row maps from a Logical_Router port to the input options in its > - * corresponding Port_Binding (if any). This is because northd preserves > - * most of the options in that column. (northd unconditionally sets the > - * ipv6_prefix_delegation and ipv6_prefix options, so we remove them for > - * faster convergence.) */ > -relation RouterPortSbOptions(lrp_uuid: uuid, options: Map<istring,istring>) > -RouterPortSbOptions(lrp._uuid, options) :- > - lrp in &nb::Logical_Router_Port(), > - pb in sb::Port_Binding(._uuid = lrp._uuid), > - var options = { > - var options = pb.options; > - options.remove(i"ipv6_prefix"); > - options.remove(i"ipv6_prefix_delegation"); > - options > - }. 
> -RouterPortSbOptions(lrp._uuid, map_empty()) :- > - lrp in &nb::Logical_Router_Port(), > - not sb::Port_Binding(._uuid = lrp._uuid). > - > -relation RouterPortHasBfd(lrp_uuid: uuid, has_bfd: bool) > -RouterPortHasBfd(lrp_uuid, true) :- > - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), > - nb::BFD(.logical_port = logical_port). > -RouterPortHasBfd(lrp_uuid, false) :- > - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), > - not nb::BFD(.logical_port = logical_port). > - > -/* FIXME: what should happen when extract_lrp_networks fails? */ > -/* RouterPort relation collects all attributes of a logical router port */ > -typedef RouterPort = RouterPort { > - lrp: Intern<nb::Logical_Router_Port>, > - json_name: string, > - networks: lport_addresses, > - router: Intern<Router>, > - is_redirect: bool, > - peer: RouterPeer, > - mcast_cfg: Intern<McastPortCfg>, > - sb_options: Map<istring,istring>, > - has_bfd: bool, > - enabled: bool > -} > - > -relation RouterPort[Intern<RouterPort>] > - > -RouterPort[RouterPort{ > - .lrp = lrp, > - .json_name = json_escape(lrp.name), > - .networks = networks, > - .router = router, > - .is_redirect = is_redirect, > - .peer = peer, > - .mcast_cfg = mcast_cfg, > - .sb_options = sb_options, > - .has_bfd = has_bfd, > - .enabled = lrp.is_enabled() > - }.intern()] :- > - lrp in &nb::Logical_Router_Port(), > - Some{var networks} = extract_lrp_networks(lrp.mac.ival(), lrp.networks.map(ival)), > - LogicalRouterPort(lrp._uuid, lrouter_uuid), > - router in &Router(._uuid = lrouter_uuid), > - RouterPortIsRedirect(lrp._uuid, is_redirect), > - RouterPortPeer(lrp._uuid, peer), > - mcast_cfg in &McastPortCfg(.port = lrp._uuid, .router_port = true), > - RouterPortSbOptions(lrp._uuid, sb_options), > - RouterPortHasBfd(lrp._uuid, has_bfd). 
> - > -relation RouterPortNetworksIPv4Addr(port: Intern<RouterPort>, addr: ipv4_netaddr) > - > -RouterPortNetworksIPv4Addr(port, addr) :- > - port in &RouterPort(.networks = networks), > - var addr = FlatMap(networks.ipv4_addrs). > - > -relation RouterPortNetworksIPv6Addr(port: Intern<RouterPort>, addr: ipv6_netaddr) > - > -RouterPortNetworksIPv6Addr(port, addr) :- > - port in &RouterPort(.networks = networks), > - var addr = FlatMap(networks.ipv6_addrs). > - > -/* StaticRoute: Collects and parses attributes of a static route. */ > -typedef route_policy = SrcIp | DstIp > -function route_policy_from_string(s: Option<istring>): route_policy = { > - if (s == Some{i"src-ip"}) { SrcIp } else { DstIp } > -} > -function to_string(policy: route_policy): string = { > - match (policy) { > - SrcIp -> "src-ip", > - DstIp -> "dst-ip" > - } > -} > - > -typedef route_key = RouteKey { > - policy: route_policy, > - ip_prefix: v46_ip, > - plen: bit<32> > -} > - > -/* StaticRouteDown contains the UUID of all the static routes that are down. > - * A static route is down if it has a BFD whose dst_ip matches it nexthop and > - * that BFD is down or admin_down. */ > -relation StaticRouteDown(lrsr_uuid: uuid) > -StaticRouteDown(lrsr_uuid) :- > - nb::Logical_Router_Static_Route(._uuid = lrsr_uuid, .bfd = Some{bfd_uuid}, .nexthop = nexthop), > - bfd in nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop), > - match (bfd.status) { > - None -> true, > - Some{status} -> (status == i"admin_down" or status == i"down") > - }. 
> - > -relation &StaticRoute(lrsr: nb::Logical_Router_Static_Route, > - key: route_key, > - nexthop: v46_ip, > - output_port: Option<istring>, > - ecmp_symmetric_reply: bool) > - > -&StaticRoute(.lrsr = lrsr, > - .key = RouteKey{policy, ip_prefix, plen}, > - .nexthop = nexthop, > - .output_port = lrsr.output_port, > - .ecmp_symmetric_reply = esr) :- > - lrsr in nb::Logical_Router_Static_Route(), > - not StaticRouteDown(lrsr._uuid), > - var policy = route_policy_from_string(lrsr.policy), > - Some{(var nexthop, var nexthop_plen)} = ip46_parse_cidr(lrsr.nexthop.ival()), > - match (nexthop) { > - IPv4{_} -> nexthop_plen == 32, > - IPv6{_} -> nexthop_plen == 128 > - }, > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()), > - match ((nexthop, ip_prefix)) { > - (IPv4{_}, IPv4{_}) -> true, > - (IPv6{_}, IPv6{_}) -> true, > - _ -> false > - }, > - var esr = lrsr.options.get_bool_def(i"ecmp_symmetric_reply", false). > - > -relation &StaticRouteEmptyNextHop(lrsr: nb::Logical_Router_Static_Route, > - key: route_key, > - output_port: Option<istring>) > -&StaticRouteEmptyNextHop(.lrsr = lrsr, > - .key = RouteKey{policy, ip_prefix, plen}, > - .output_port = lrsr.output_port) :- > - lrsr in nb::Logical_Router_Static_Route(.nexthop = i""), > - not StaticRouteDown(lrsr._uuid), > - var policy = route_policy_from_string(lrsr.policy), > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). > - > -/* Returns the IP address of the router port 'op' that > - * overlaps with 'ip'. If one is not found, returns None. */ > -function find_lrp_member_ip(networks: lport_addresses, ip: v46_ip): Option<v46_ip> = > -{ > - match (ip) { > - IPv4{ip4} -> { > - for (na in networks.ipv4_addrs) { > - if ((na.addr, ip4).same_network(na.netmask())) { > - /* There should be only 1 interface that matches the > - * supplied IP. Otherwise, it's a configuration error, > - * because subnets of a router's interfaces should NOT > - * overlap. 
*/ > - return Some{IPv4{na.addr}} > - } > - }; > - return None > - }, > - IPv6{ip6} -> { > - for (na in networks.ipv6_addrs) { > - if ((na.addr, ip6).same_network(na.netmask())) { > - /* There should be only 1 interface that matches the > - * supplied IP. Otherwise, it's a configuration error, > - * because subnets of a router's interfaces should NOT > - * overlap. */ > - return Some{IPv6{na.addr}} > - } > - }; > - return None > - } > - } > -} > - > - > -/* Step 1: compute router-route pairs */ > -relation RouterStaticRoute_( > - router : Intern<Router>, > - key : route_key, > - nexthop : v46_ip, > - output_port : Option<istring>, > - ecmp_symmetric_reply : bool) > - > -RouterStaticRoute_(.router = router, > - .key = route.key, > - .nexthop = route.nexthop, > - .output_port = route.output_port, > - .ecmp_symmetric_reply = route.ecmp_symmetric_reply) :- > - router in &Router(), > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > - var route_id = FlatMap(routes), > - route in &StaticRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). > - > -relation RouterStaticRouteEmptyNextHop_( > - router : Intern<Router>, > - key : route_key, > - output_port : Option<istring>) > - > -RouterStaticRouteEmptyNextHop_(.router = router, > - .key = route.key, > - .output_port = route.output_port) :- > - router in &Router(), > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > - var route_id = FlatMap(routes), > - route in &StaticRouteEmptyNextHop(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). 
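The `find_lrp_member_ip()` function above (used by the static-route rules that follow it) can be sketched with the stdlib `ipaddress` module — a stand-in for the DDlog original, with CIDR strings replacing `lport_addresses`:

```python
import ipaddress

def find_lrp_member_ip(networks, ip):
    """Sketch of the removed find_lrp_member_ip(): return the router
    port address whose subnet contains 'ip', or None.  'networks' is a
    list of CIDR strings (e.g. "10.0.0.1/24").  Only the first match
    matters, since a router's interface subnets should not overlap."""
    ip = ipaddress.ip_address(ip)
    for cidr in networks:
        iface = ipaddress.ip_interface(cidr)
        if ip.version == iface.version and ip in iface.network:
            return str(iface.ip)
    return None
```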
> - > -/* Step-2: compute output_port for each pair */ > -typedef route_dst = RouteDst { > - nexthop: v46_ip, > - src_ip: v46_ip, > - port: Intern<RouterPort>, > - ecmp_symmetric_reply: bool > -} > - > -relation RouterStaticRoute( > - router : Intern<Router>, > - key : route_key, > - dsts : Set<route_dst>) > - > -RouterStaticRoute(router, key, dsts) :- > - rsr in RouterStaticRoute_(.router = router, .output_port = None), > - /* output_port is not specified, find the > - * router port matching the next hop. */ > - port in &RouterPort(.router = &Router{._uuid = router._uuid}, > - .networks = networks), > - Some{var src_ip} = find_lrp_member_ip(networks, rsr.nexthop), > - var dst = RouteDst{rsr.nexthop, src_ip, port, rsr.ecmp_symmetric_reply}, > - var key = rsr.key, > - var dsts = dst.group_by((router, key)).to_set(). > - > -RouterStaticRoute(router, key, dsts) :- > - RouterStaticRoute_(.router = router, > - .key = key, > - .nexthop = nexthop, > - .output_port = Some{oport}, > - .ecmp_symmetric_reply = ecmp_symmetric_reply), > - /* output_port specified */ > - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, > - .networks = networks), > - Some{var src_ip} = match (find_lrp_member_ip(networks, nexthop)) { > - Some{src_ip} -> Some{src_ip}, > - None -> { > - /* There are no IP networks configured on the router's port via > - * which 'route->nexthop' is theoretically reachable. But since > - * 'out_port' has been specified, we honor it by trying to reach > - * 'route->nexthop' via the first IP address of 'out_port'. > - * (There are cases, e.g in GCE, where each VM gets a /32 IP > - * address and the default gateway is still reachable from it.) 
*/ > - match (key.ip_prefix) { > - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { > - Some{addr} -> Some{IPv4{addr.addr}}, > - None -> { > - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); > - None > - } > - }, > - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { > - Some{addr} -> Some{IPv6{addr.addr}}, > - None -> { > - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); > - None > - } > - } > - } > - } > - }, > - var dsts = set_singleton(RouteDst{nexthop, src_ip, port, ecmp_symmetric_reply}). > - > -relation RouterStaticRouteEmptyNextHop( > - router : Intern<Router>, > - key : route_key, > - dsts : Set<route_dst>) > - > -RouterStaticRouteEmptyNextHop(router, key, dsts) :- > - RouterStaticRouteEmptyNextHop_(.router = router, > - .key = key, > - .output_port = Some{oport}), > - /* output_port specified */ > - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, > - .networks = networks), > - /* There are no IP networks configured on the router's port via > - * which 'route->nexthop' is theoretically reachable. But since > - * 'out_port' has been specified, we honor it by trying to reach > - * 'route->nexthop' via the first IP address of 'out_port'. > - * (There are cases, e.g in GCE, where each VM gets a /32 IP > - * address and the default gateway is still reachable from it.) */ > - Some{var src_ip} = match (key.ip_prefix) { > - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { > - Some{addr} -> Some{IPv4{addr.addr}}, > - None -> { > - warn("No path for static route ${key.ip_prefix}"); > - None > - } > - }, > - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { > - Some{addr} -> Some{IPv6{addr.addr}}, > - None -> { > - warn("No path for static route ${key.ip_prefix}"); > - None > - } > - } > - }, > - var dsts = set_singleton(RouteDst{src_ip, src_ip, port, false}). 
> - > -/* compute route-route pairs for nexthop = "discard" routes */ > -relation &DiscardRoute(lrsr: nb::Logical_Router_Static_Route, > - key: route_key) > -&DiscardRoute(.lrsr = lrsr, > - .key = RouteKey{policy, ip_prefix, plen}) :- > - lrsr in nb::Logical_Router_Static_Route(.nexthop = i"discard"), > - var policy = route_policy_from_string(lrsr.policy), > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). > - > -relation RouterDiscardRoute_( > - router : Intern<Router>, > - key : route_key) > - > -RouterDiscardRoute_(.router = router, > - .key = route.key) :- > - router in &Router(), > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > - var route_id = FlatMap(routes), > - route in &DiscardRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). > - > -Warning[message] :- > - RouterStaticRoute_(.router = router, .key = key, .nexthop = nexthop), > - not RouterStaticRoute(.router = router, .key = key), > - var message = "No path for ${key.policy} static route ${key.ip_prefix}/${key.plen} with next hop ${nexthop}". > diff --git a/northd/lswitch.dl b/northd/lswitch.dl > deleted file mode 100644 > index 33c5c706b3..0000000000 > --- a/northd/lswitch.dl > +++ /dev/null > @@ -1,824 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. > - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. 
> - */ > - > -import OVN_Northbound as nb > -import OVN_Southbound as sb > -import ovsdb > -import ovn > -import lrouter > -import multicast > -import helpers > -import ipam > -import vec > -import set > - > -function is_enabled(lsp: Intern<nb::Logical_Switch_Port>): bool { is_enabled(lsp.enabled) } > -function is_enabled(sp: SwitchPort): bool { sp.lsp.is_enabled() } > -function is_enabled(sp: Intern<SwitchPort>): bool { sp.lsp.is_enabled() } > - > -relation SwitchRouterPeerRef(lsp: uuid, rport: Option<Intern<RouterPort>>) > - > -SwitchRouterPeerRef(lsp, Some{rport}) :- > - SwitchRouterPeer(lsp, _, lrp), > - rport in &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp}). > - > -SwitchRouterPeerRef(lsp, None) :- > - &nb::Logical_Switch_Port(._uuid = lsp), > - not SwitchRouterPeer(lsp, _, _). > - > -/* LogicalSwitchPortCandidate. > - * > - * Each row pairs a logical switch port with its logical switch, but without > - * checking that the logical switch port is on only one logical switch. > - * > - * (Use LogicalSwitchPort instead, which guarantees uniqueness.) */ > -relation LogicalSwitchPortCandidate(lsp_uuid: uuid, ls_uuid: uuid) > -LogicalSwitchPortCandidate(lsp_uuid, ls_uuid) :- > - &nb::Logical_Switch(._uuid = ls_uuid, .ports = ports), > - var lsp_uuid = FlatMap(ports). > -Warning[message] :- > - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), > - var lss = ls_uuid.group_by(lsp_uuid).to_set(), > - lss.size() > 1, > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - var message = "Bad configuration: logical switch port ${lsp.name} belongs " > - "to more than one logical switch". > - > -/* Each row means 'lport' is in 'lswitch' (and only that lswitch). */ > -relation LogicalSwitchPort(lport: uuid, lswitch: uuid) > -LogicalSwitchPort(lsp_uuid, ls_uuid) :- > - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), > - var lss = ls_uuid.group_by(lsp_uuid).to_set(), > - lss.size() == 1, > - Some{var ls_uuid} = lss.nth(0). 
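The `LogicalSwitchPortCandidate` / `LogicalSwitchPort` pair above (mirroring the earlier `LogicalRouterPort` rules in lrouter.dl) enforces that a port belongs to exactly one switch, warning on duplicates. A Python sketch of that group-and-filter, with a hypothetical helper name:

```python
def port_memberships(switches):
    """Sketch of the LogicalSwitchPortCandidate -> LogicalSwitchPort
    uniqueness check: group candidate (port, switch) pairs by port, keep
    ports that appear on exactly one switch, and report the rest (which
    would produce Warning rows).  'switches' maps switch id -> port ids."""
    candidates = {}
    for ls, ports in switches.items():
        for lsp in ports:
            candidates.setdefault(lsp, set()).add(ls)
    unique = {lsp: next(iter(lss))
              for lsp, lss in candidates.items() if len(lss) == 1}
    duplicated = sorted(lsp for lsp, lss in candidates.items() if len(lss) > 1)
    return unique, duplicated
```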
> - > -/* Each logical switch port with an "unknown" address (with its logical switch). */ > -relation LogicalSwitchPortWithUnknownAddress(ls: uuid, lsp: uuid) > -LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid) :- > - LogicalSwitchPort(lsp_uuid, ls_uuid), > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - lsp.is_enabled() and lsp.addresses.contains(i"unknown"). > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasUnknownPorts(ls: uuid, has_unknown: bool) > -LogicalSwitchHasUnknownPorts(ls, true) :- LogicalSwitchPortWithUnknownAddress(ls, _). > -LogicalSwitchHasUnknownPorts(ls, false) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchPortWithUnknownAddress(ls, _). > - > -/* PortStaticAddresses: static IP addresses associated with each Logical_Switch_Port */ > -relation PortStaticAddresses(lsport: uuid, ip4addrs: Set<istring>, ip6addrs: Set<istring>) > - > -PortStaticAddresses(.lsport = port_uuid, > - .ip4addrs = ip4_addrs.union().map(intern), > - .ip6addrs = ip6_addrs.union().map(intern)) :- > - &nb::Logical_Switch_Port(._uuid = port_uuid, .addresses = addresses), > - var address = FlatMap(if (addresses.is_empty()) { set_singleton(i"") } else { addresses }), > - (var ip4addrs, var ip6addrs) = if (not is_dynamic_lsp_address(address.ival())) { > - split_addresses(address.ival()) > - } else { (set_empty(), set_empty()) }, > - (var ip4_addrs, var ip6_addrs) = (ip4addrs, ip6addrs).group_by(port_uuid).group_unzip(). > - > -relation PortInGroup(port: uuid, group: uuid) > - > -PortInGroup(port, group) :- > - nb::Port_Group(._uuid = group, .ports = ports), > - var port = FlatMap(ports). > - > -/* All ACLs associated with logical switch */ > -relation LogicalSwitchACL(ls: uuid, acl: uuid) > - > -LogicalSwitchACL(ls, acl) :- > - &nb::Logical_Switch(._uuid = ls, .acls = acls), > - var acl = FlatMap(acls). 
> - > -LogicalSwitchACL(ls, acl) :- > - &nb::Logical_Switch(._uuid = ls, .ports = ports), > - var port_id = FlatMap(ports), > - PortInGroup(port_id, group_id), > - nb::Port_Group(._uuid = group_id, .acls = acls), > - var acl = FlatMap(acls). > - > -relation LogicalSwitchStatefulACL(ls: uuid, acl: uuid) > - > -LogicalSwitchStatefulACL(ls, acl) :- > - LogicalSwitchACL(ls, acl), > - &nb::ACL(._uuid = acl, .action = i"allow-related"). > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasStatefulACL(ls: uuid, has_stateful_acl: bool) > - > -LogicalSwitchHasStatefulACL(ls, true) :- > - LogicalSwitchStatefulACL(ls, _). > - > -LogicalSwitchHasStatefulACL(ls, false) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchStatefulACL(ls, _). > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > - > -LogicalSwitchHasACLs(ls, true) :- > - LogicalSwitchACL(ls, _). > - > -LogicalSwitchHasACLs(ls, false) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchACL(ls, _). > - > -/* > - * LogicalSwitchLocalnetPorts maps from each logical switch UUID > - * to the logical switch's set of localnet ports. Each localnet > - * port is expressed as a tuple of its UUID and its name. > - */ > -relation LogicalSwitchLocalnetPort0(ls_uuid: uuid, lsp: (uuid, istring)) > -LogicalSwitchLocalnetPort0(ls_uuid, (lsp_uuid, lsp.name)) :- > - ls in &nb::Logical_Switch(._uuid = ls_uuid), > - var lsp_uuid = FlatMap(ls.ports), > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - lsp.__type == i"localnet". > - > -relation LogicalSwitchLocalnetPorts(ls_uuid: uuid, localnet_ports: Vec<(uuid, istring)>) > -LogicalSwitchLocalnetPorts(ls_uuid, localnet_ports) :- > - LogicalSwitchLocalnetPort0(ls_uuid, lsp), > - var localnet_ports = lsp.group_by(ls_uuid).to_vec(). 
> -LogicalSwitchLocalnetPorts(ls_uuid, vec_empty()) :- > - ls in &nb::Logical_Switch(), > - var ls_uuid = ls._uuid, > - not LogicalSwitchLocalnetPort0(ls_uuid, _). > - > -/* Flatten the list of dns_records in Logical_Switch */ > -relation LogicalSwitchDNS(ls_uuid: uuid, dns_uuid: uuid) > - > -LogicalSwitchDNS(ls._uuid, dns_uuid) :- > - &nb::Logical_Switch[ls], > - var dns_uuid = FlatMap(ls.dns_records), > - nb::DNS(._uuid = dns_uuid). > - > -relation LogicalSwitchWithDNSRecords(ls: uuid) > - > -LogicalSwitchWithDNSRecords(ls) :- > - LogicalSwitchDNS(ls, dns_uuid), > - nb::DNS(._uuid = dns_uuid, .records = records), > - not records.is_empty(). > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasDNSRecords(ls: uuid, has_dns_records: bool) > - > -LogicalSwitchHasDNSRecords(ls, true) :- > - LogicalSwitchWithDNSRecords(ls). > - > -LogicalSwitchHasDNSRecords(ls, false) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchWithDNSRecords(ls). > - > -relation LogicalSwitchHasNonRouterPort0(ls: uuid) > -LogicalSwitchHasNonRouterPort0(ls_uuid) :- > - ls in &nb::Logical_Switch(._uuid = ls_uuid), > - var lsp_uuid = FlatMap(ls.ports), > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - lsp.__type != i"router". > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasNonRouterPort(ls: uuid, has_non_router_port: bool) > -LogicalSwitchHasNonRouterPort(ls, true) :- > - LogicalSwitchHasNonRouterPort0(ls). > -LogicalSwitchHasNonRouterPort(ls, false) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchHasNonRouterPort0(ls). > - > -// LogicalSwitchCopp maps from each LS to its collection of Copp meters, > -// dropping any Copp meter whose meter name doesn't exist. 
> -relation LogicalSwitchCopp(ls: uuid, meters: Map<istring,istring>) > -LogicalSwitchCopp(ls, meters) :- LogicalSwitchCopp0(ls, meters). > -LogicalSwitchCopp(ls, map_empty()) :- > - &nb::Logical_Switch(._uuid = ls), > - not LogicalSwitchCopp0(ls, _). > - > -relation LogicalSwitchCopp0(ls: uuid, meters: Map<istring,istring>) > -LogicalSwitchCopp0(ls, meters) :- > - &nb::Logical_Switch(._uuid = ls, .copp = Some{copp_uuid}), > - nb::Copp(._uuid = copp_uuid, .meters = meters), > - var entry = FlatMap(meters), > - (var copp_id, var meter_name) = entry, > - &nb::Meter(.name = meter_name), > - var meters = (copp_id, meter_name).group_by(ls).to_map(). > - > -/* Switch relation collects all attributes of a logical switch */ > - > -typedef Switch = Switch { > - /* Fields copied from nb::Logical_Switch_Port. */ > - _uuid: uuid, > - name: istring, > - other_config: Map<istring,istring>, > - external_ids: Map<istring,istring>, > - > - /* Additional computed fields. */ > - has_stateful_acl: bool, > - has_acls: bool, > - has_lb_vip: bool, > - has_dns_records: bool, > - has_unknown_ports: bool, > - localnet_ports: Vec<(uuid, istring)>, // UUID and name of each localnet port. > - subnet: Option<(in_addr/*subnet*/, in_addr/*mask*/, bit<32>/*start_ipv4*/, bit<32>/*total_ipv4s*/)>, > - ipv6_prefix: Option<in6_addr>, > - mcast_cfg: Intern<McastSwitchCfg>, > - is_vlan_transparent: bool, > - copp: Map<istring, istring>, > - > - /* Does this switch have at least one port with type != "router"? 
*/ > - has_non_router_port: bool > -} > - > - > -relation Switch[Intern<Switch>] > - > -function ipv6_parse_prefix(s: string): Option<in6_addr> { > - if (string_contains(s, "/")) { > - match (ipv6_parse_cidr(s)) { > - Right{(addr, 64)} -> Some{addr}, > - _ -> None > - } > - } else { > - ipv6_parse(s) > - } > -} > - > -Switch[Switch{ > - ._uuid = ls._uuid, > - .name = ls.name, > - .other_config = ls.other_config, > - .external_ids = ls.external_ids, > - > - .has_stateful_acl = has_stateful_acl, > - .has_acls = has_acls, > - .has_lb_vip = has_lb_vip, > - .has_dns_records = has_dns_records, > - .has_unknown_ports = has_unknown_ports, > - .localnet_ports = localnet_ports, > - .subnet = subnet, > - .ipv6_prefix = ipv6_prefix, > - .mcast_cfg = mcast_cfg, > - .has_non_router_port = has_non_router_port, > - .copp = copp, > - .is_vlan_transparent = is_vlan_transparent > - }.intern()] :- > - &nb::Logical_Switch[ls], > - LogicalSwitchHasStatefulACL(ls._uuid, has_stateful_acl), > - LogicalSwitchHasACLs(ls._uuid, has_acls), > - LogicalSwitchHasLBVIP(ls._uuid, has_lb_vip), > - LogicalSwitchHasDNSRecords(ls._uuid, has_dns_records), > - LogicalSwitchHasUnknownPorts(ls._uuid, has_unknown_ports), > - LogicalSwitchLocalnetPorts(ls._uuid, localnet_ports), > - LogicalSwitchHasNonRouterPort(ls._uuid, has_non_router_port), > - LogicalSwitchCopp(ls._uuid, copp), > - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid), > - var subnet = > - match (ls.other_config.get(i"subnet")) { > - None -> None, > - Some{subnet_str} -> { > - match (ip_parse_masked(subnet_str.ival())) { > - Left{err} -> { > - warn("bad 'subnet' ${subnet_str}"); > - None > - }, > - Right{(subnet, mask)} -> { > - if (mask.cidr_bits() == Some{32} or not mask.is_cidr()) { > - warn("bad 'subnet' ${subnet_str}"); > - None > - } else { > - Some{(subnet, mask, (subnet.a & mask.a) + 1, ~mask.a)} > - } > - } > - } > - } > - }, > - var ipv6_prefix = > - match (ls.other_config.get(i"ipv6_prefix")) { > - None -> None, > - Some{prefix} 
-> ipv6_parse_prefix(prefix.ival()) > - }, > - var is_vlan_transparent = ls.other_config.get_bool_def(i"vlan-passthru", false). > - > -/* LogicalSwitchLB: many-to-many relation between logical switches and nb::LB */ > -relation LogicalSwitchLB(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>) > -LogicalSwitchLB(sw_uuid, lb) :- > - &nb::Logical_Switch(._uuid = sw_uuid, .load_balancer = lb_ids), > - var lb_id = FlatMap(lb_ids), > - lb in &nb::Load_Balancer(._uuid = lb_id). > - > - > -relation SwitchLB(sw: Intern<Switch>, lb_uuid: uuid) > - > -SwitchLB(sw, lb._uuid) :- > - LogicalSwitchLB(sw_uuid, lb), > - sw in &Switch(._uuid = sw_uuid). > - > -/* Load balancer VIPs associated with switch */ > -relation SwitchLBVIP(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>, vip: istring, backends: istring) > -SwitchLBVIP(sw_uuid, lb, vip, backends) :- > - LogicalSwitchLB(sw_uuid, lb@(&nb::Load_Balancer{.vips = vips})), > - var kv = FlatMap(vips), > - (var vip, var backends) = kv. > - > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > -// is an output relation: > -output relation LogicalSwitchHasLBVIP(sw_uuid: uuid, has_lb_vip: bool) > -LogicalSwitchHasLBVIP(sw_uuid, true) :- > - SwitchLBVIP(.sw_uuid = sw_uuid). > -LogicalSwitchHasLBVIP(sw_uuid, false) :- > - &nb::Logical_Switch(._uuid = sw_uuid), > - not SwitchLBVIP(.sw_uuid = sw_uuid). > - > -/* Load balancer virtual IPs. > - * > - * Three levels: > - * - LBVIP0 is load balancer virtual IPs with health checks. > - * - LBVIP1 also includes virtual IPs without health checks. > - * - LBVIP parses the IP address and port (and drops VIPs where those are invalid). 
> - */ > -relation LBVIP0( > - lb: Intern<nb::Load_Balancer>, > - vip_key: istring, > - backend_ips: istring, > - health_check: Intern<nb::Load_Balancer_Health_Check>) > -LBVIP0(lb, vip_key, backend_ips, health_check) :- > - lb in &nb::Load_Balancer(), > - var vip = FlatMap(lb.vips), > - (var vip_key, var backend_ips) = vip, > - health_check in &nb::Load_Balancer_Health_Check(.vip = vip_key), > - lb.health_check.contains(health_check._uuid). > - > -relation LBVIP1( > - lb: Intern<nb::Load_Balancer>, > - vip_key: istring, > - backend_ips: istring, > - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>) > -LBVIP1(lb, vip_key, backend_ips, Some{health_check}) :- > - LBVIP0(lb, vip_key, backend_ips, health_check). > -LBVIP1(lb, vip_key, backend_ips, None) :- > - lb in &nb::Load_Balancer(), > - var vip = FlatMap(lb.vips), > - (var vip_key, var backend_ips) = vip, > - not LBVIP0(lb, vip_key, backend_ips, _). > - > -typedef LBVIP = LBVIP { > - lb: Intern<nb::Load_Balancer>, > - vip_key: istring, > - backend_ips: istring, > - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>, > - vip_addr: v46_ip, > - vip_port: bit<16>, > - backends: Vec<lb_vip_backend> > -} > - > -relation LBVIP[Intern<LBVIP>] > - > -LBVIP[LBVIP{lb, vip_key, backend_ips, health_check, vip_addr, vip_port, backends}.intern()] :- > - LBVIP1(lb, vip_key, backend_ips, health_check), > - Some{(var vip_addr, var vip_port)} = ip_address_and_port_from_lb_key(vip_key.ival()), > - var backends = backend_ips.split(",").filter_map( > - |ip| parse_vip_backend(ip, lb.ip_port_mappings)). > - > -typedef svc_monitor = SvcMonitor{ > - port_name: istring, // Might name a switch or router port. > - src_ip: istring > -} > - > -/* Backends for load balancer virtual IPs. > - * > - * Use caution with this table, because load balancer virtual IPs > - * sometimes have no backends and there is some significance to that. 
> - * In cases that are really per-LBVIP, instead of per-LBVIPBackend, > - * process the LBVIPs directly. */ > -typedef lb_vip_backend = LBVIPBackend{ > - ip: v46_ip, > - port: bit<16>, > - svc_monitor: Option<svc_monitor>} > - > -function parse_vip_backend(backend_ip: string, > - mappings: Map<istring,istring>): Option<lb_vip_backend> { > - match (ip_address_and_port_from_lb_key(backend_ip)) { > - Some{(ip, port)} -> Some{LBVIPBackend{ip, port, parse_ip_port_mapping(mappings, ip)}}, > - _ -> None > - } > -} > - > -function parse_ip_port_mapping(mappings: Map<istring,istring>, ip: v46_ip) > - : Option<svc_monitor> { > - for ((key, value) in mappings) { > - if (ip46_parse(key.ival()) == Some{ip}) { > - var strs = value.split(":"); > - if (strs.len() != 2) { > - return None > - }; > - > - return match ((strs.nth(0), strs.nth(1))) { > - (Some{port_name}, Some{src_ip}) -> Some{SvcMonitor{port_name.intern(), src_ip.intern()}}, > - _ -> None > - } > - } > - }; > - return None > -} > - > -function is_online(status: Option<istring>): bool = { > - match (status) { > - Some{s} -> s == i"online", > - _ -> true > - } > -} > -function default_protocol(protocol: Option<istring>): istring = { > - match (protocol) { > - Some{x} -> x, > - None -> i"tcp" > - } > -} > - > -relation LBVIPWithStatus( > - lbvip: Intern<LBVIP>, > - up_backends: istring) > -LBVIPWithStatus(lbvip, i"") :- > - lbvip in &LBVIP(.backends = vec_empty()). > -LBVIPWithStatus(lbvip, up_backends) :- > - LBVIPBackendStatus(lbvip, backend, up), > - var up_backends = ((backend, up)).group_by(lbvip).to_vec().filter_map(|x| { > - (LBVIPBackend{var ip, var port, _}, var up) = x; > - match ((up, port)) { > - (true, 0) -> Some{"${ip.to_bracketed_string()}"}, > - (true, _) -> Some{"${ip.to_bracketed_string()}:${port}"}, > - _ -> None > - } > - }).join(",").intern(). > - > -/* Maps from a load-balancer virtual IP backend to whether it's up or not. > - * > - * Only some backends have health checking enabled. 
The ones that don't
> - * are always considered to be up. */
> -relation LBVIPBackendStatus0(
> - lbvip: Intern<LBVIP>,
> - backend: lb_vip_backend,
> - up: bool)
> -LBVIPBackendStatus0(lbvip, backend, is_online(sm.status)) :-
> - LBVIP[lbvip@&LBVIP{.lb = lb}],
> - var backend = FlatMap(lbvip.backends),
> - Some{var svc_monitor} = backend.svc_monitor,
> - sm in &sb::Service_Monitor(.port = backend.port as integer),
> - ip46_parse(sm.ip.ival()) == Some{backend.ip},
> - svc_monitor.port_name == sm.logical_port,
> - default_protocol(lb.protocol) == default_protocol(sm.protocol).
> -
> -relation LBVIPBackendStatus(
> - lbvip: Intern<LBVIP>,
> - backend: lb_vip_backend,
> - up: bool)
> -LBVIPBackendStatus(lbvip, backend, up) :- LBVIPBackendStatus0(lbvip, backend, up).
> -LBVIPBackendStatus(lbvip, backend, true) :-
> - LBVIP[lbvip@&LBVIP{.lb = lb}],
> - var backend = FlatMap(lbvip.backends),
> - not LBVIPBackendStatus0(lbvip, backend, _).
> -
> -/* SwitchPortDHCPv4Options: many-to-one relation between logical switches and DHCPv4 options */
> -relation SwitchPortDHCPv4Options(
> - port: Intern<SwitchPort>,
> - dhcpv4_options: Intern<nb::DHCP_Options>)
> -
> -SwitchPortDHCPv4Options(port, options) :-
> - port in &SwitchPort(.lsp = lsp),
> - port.lsp.__type != i"external",
> - Some{var dhcpv4_uuid} = lsp.dhcpv4_options,
> - options in &nb::DHCP_Options(._uuid = dhcpv4_uuid).
> -
> -/* SwitchPortDHCPv6Options: many-to-one relation between logical switches and DHCPv6 options */
> -relation SwitchPortDHCPv6Options(
> - port: Intern<SwitchPort>,
> - dhcpv6_options: Intern<nb::DHCP_Options>)
> -
> -SwitchPortDHCPv6Options(port, options) :-
> - port in &SwitchPort(.lsp = lsp),
> - port.lsp.__type != i"external",
> - Some{var dhcpv6_uuid} = lsp.dhcpv6_options,
> - options in &nb::DHCP_Options(._uuid = dhcpv6_uuid).
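[Editor's aside, not part of the patch: LBVIP above drops any virtual IP whose key cannot be parsed by ip_address_and_port_from_lb_key(). Assuming the usual OVN key forms 'IP', 'IP:port' and '[IPv6]:port', the parsing step might look like the Python sketch below; the real helper lives in OVN's C/DDlog libraries.]

```python
import ipaddress

def parse_lb_key(key):
    """Parse a load-balancer VIP or backend key into (ip, port).
    Accepted forms (an assumption based on the rules above):
    'IPv4', 'IPv6', 'IPv4:port', '[IPv6]:port'.
    The port defaults to 0 when absent; unparsable keys return None,
    which corresponds to the VIP being dropped."""
    ip_str, port_str = key, None
    if key.startswith('['):            # '[IPv6]:port'
        addr, sep, rest = key[1:].partition(']')
        if not sep or not rest.startswith(':'):
            return None
        ip_str, port_str = addr, rest[1:]
    elif key.count(':') == 1:          # 'IPv4:port'
        ip_str, port_str = key.split(':')
    # A bare 'IPv4', or a bare 'IPv6' (two or more ':'), falls through.
    try:
        ip = ipaddress.ip_address(ip_str)
        port = int(port_str) if port_str is not None else 0
    except ValueError:
        return None
    if not 0 <= port <= 65535:
        return None
    return ip, port
```

Backends that fail this parse are filtered out by `filter_map`, just as in the `LBVIP` rule above.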
> -
> -/* SwitchQoS: many-to-one relation between logical switches and nb::QoS */
> -relation SwitchQoS(sw: Intern<Switch>, qos: Intern<nb::QoS>)
> -
> -SwitchQoS(sw, qos) :-
> - sw in &Switch(),
> - &nb::Logical_Switch(._uuid = sw._uuid, .qos_rules = qos_rules),
> - var qos_rule = FlatMap(qos_rules),
> - qos in &nb::QoS(._uuid = qos_rule).
> -
> -/* Reports whether a given ACL is associated with a fair meter.
> - * 'has_fair_meter' is false if 'acl' has no meter, or if it has a meter
> - * that isn't a fair meter. (The latter case has two subcases: the
> - * case where the meter that the ACL names corresponds to an nb::Meter
> - * with that name, and the case where it doesn't.) */
> -relation ACLHasFairMeter(acl: Intern<nb::ACL>, has_fair_meter: bool)
> -ACLHasFairMeter(acl, true) :-
> - ACLWithFairMeter(acl, _).
> -ACLHasFairMeter(acl, false) :-
> - acl in &nb::ACL(),
> - not ACLWithFairMeter(acl, _).
> -
> -/* All the ACLs associated with a fair meter, with their fair meters. */
> -relation ACLWithFairMeter(acl: Intern<nb::ACL>, meter: Intern<nb::Meter>)
> -ACLWithFairMeter(acl, meter) :-
> - acl in &nb::ACL(.meter = Some{meter_name}),
> - meter in &nb::Meter(.name = meter_name, .fair = Some{true}).
> -
> -/* SwitchACL: many-to-many relation between logical switches and ACLs */
> -relation &SwitchACL(sw: Intern<Switch>,
> - acl: Intern<nb::ACL>,
> - has_fair_meter: bool)
> -
> -&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = has_fair_meter) :-
> - LogicalSwitchACL(sw_uuid, acl_uuid),
> - sw in &Switch(._uuid = sw_uuid),
> - acl in &nb::ACL(._uuid = acl_uuid),
> - ACLHasFairMeter(acl, has_fair_meter).
> -
> -function oVN_FEATURE_PORT_UP_NOTIF(): istring { i"port-up-notif" }
> -relation SwitchPortUp0(lsp: uuid)
> -SwitchPortUp0(lsp) :-
> - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router").
> -SwitchPortUp0(lsp) :- > - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = __type), > - sb::Port_Binding(.logical_port = lsp_name, .up = up, .chassis = Some{chassis_uuid}), > - sb::Chassis(._uuid = chassis_uuid, .other_config = other_config), > - if (other_config.get_bool_def(oVN_FEATURE_PORT_UP_NOTIF(), false)) { > - up == Some{true} > - } else { > - true > - }. > - > -relation SwitchPortUp(lsp: uuid, up: bool) > -SwitchPortUp(lsp, true) :- SwitchPortUp0(lsp). > -SwitchPortUp(lsp, false) :- &nb::Logical_Switch_Port(._uuid = lsp), not SwitchPortUp0(lsp). > - > -relation SwitchPortHAChassisGroup0(lsp_uuid: uuid, hac_group_uuid: uuid) > -SwitchPortHAChassisGroup0(lsp_uuid, ha_chassis_group_uuid(ls_uuid)) :- > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - lsp.__type == i"external", > - Some{var hac_group_uuid} = lsp.ha_chassis_group, > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), > - /* If the group is empty, then HA_Chassis_Group record will not be created in SB, > - * and so we should not create a reference to the group in Port_Binding table, > - * to avoid integrity violation. */ > - not ha_chassis_group.ha_chassis.is_empty(), > - LogicalSwitchPort(.lport = lsp_uuid, .lswitch = ls_uuid). > -relation SwitchPortHAChassisGroup(lsp_uuid: uuid, hac_group_uuid: Option<uuid>) > -SwitchPortHAChassisGroup(lsp_uuid, Some{hac_group_uuid}) :- > - SwitchPortHAChassisGroup0(lsp_uuid, hac_group_uuid). > -SwitchPortHAChassisGroup(lsp_uuid, None) :- > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > - not SwitchPortHAChassisGroup0(lsp_uuid, _). 
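[Editor's aside, not part of the patch: the SwitchPortUp0/SwitchPortUp rules above encode a three-way decision. A port of type "router" is always up; any other port must be bound to a chassis; and when the bound chassis advertises "port-up-notif" in its other_config, the Port_Binding.up flag is authoritative. A compact Python restatement, with hypothetical data shapes chosen only to clarify the rule:]

```python
def switch_port_up(port_type, binding):
    """Decide 'up' for a logical switch port, mirroring SwitchPortUp.

    binding is None when the port is unbound, otherwise a pair
    (chassis_supports_notif, up_flag) taken from the chassis
    other_config and the Port_Binding.up column (which may be None)."""
    if port_type == "router":
        return True                    # router ports are always up
    if binding is None:
        return False                   # unbound: SwitchPortUp0 has no row
    supports_notif, up_flag = binding
    if supports_notif:
        return up_flag is True         # chassis reports port state explicitly
    return True                        # legacy chassis: bound implies up
```

This matches the `if (other_config.get_bool_def(...))` branch in the rule: the extra `up == Some{true}` condition only applies on chassis with the feature.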
> - > -/* SwitchPort relation collects all attributes of a logical switch port > - * - `peer` - peer router port, if any > - * - `static_dynamic_mac` - port has a "dynamic" address that contains a static MAC, > - * e.g., "80:fa:5b:06:72:b7 dynamic" > - * - `static_dynamic_ipv4`, `static_dynamic_ipv6` - port has a "dynamic" address that contains a static IP, > - * e.g., "dynamic 192.168.1.2" > - * - `needs_dynamic_ipv4address` - port requires a dynamically allocated IPv4 address > - * - `needs_dynamic_macaddress` - port requires a dynamically allocated MAC address > - * - `needs_dynamic_tag` - port requires a dynamically allocated tag > - * - `up` - true if the port is bound to a chassis or has type "" > - * - 'hac_group_uuid' - uuid of sb::HA_Chassis_Group, only for "external" ports > - */ > -typedef SwitchPort = SwitchPort { > - lsp: Intern<nb::Logical_Switch_Port>, > - json_name: istring, > - sw: Intern<Switch>, > - peer: Option<Intern<RouterPort>>, > - static_addresses: Vec<lport_addresses>, > - dynamic_address: Option<lport_addresses>, > - static_dynamic_mac: Option<eth_addr>, > - static_dynamic_ipv4: Option<in_addr>, > - static_dynamic_ipv6: Option<in6_addr>, > - ps_addresses: Vec<lport_addresses>, > - ps_eth_addresses: Vec<istring>, > - parent_name: Option<istring>, > - needs_dynamic_ipv4address: bool, > - needs_dynamic_macaddress: bool, > - needs_dynamic_ipv6address: bool, > - needs_dynamic_tag: bool, > - up: bool, > - mcast_cfg: Intern<McastPortCfg>, > - hac_group_uuid: Option<uuid> > -} > - > -relation SwitchPort[Intern<SwitchPort>] > - > -SwitchPort[SwitchPort{ > - .lsp = lsp, > - .json_name = lsp.name.json_escape().intern(), > - .sw = sw, > - .peer = peer, > - .static_addresses = static_addresses, > - .dynamic_address = dynamic_address, > - .static_dynamic_mac = static_dynamic_mac, > - .static_dynamic_ipv4 = static_dynamic_ipv4, > - .static_dynamic_ipv6 = static_dynamic_ipv6, > - .ps_addresses = ps_addresses, > - .ps_eth_addresses = ps_eth_addresses, > - 
.parent_name = parent_name, > - .needs_dynamic_ipv4address = needs_dynamic_ipv4address, > - .needs_dynamic_macaddress = needs_dynamic_macaddress, > - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, > - .needs_dynamic_tag = needs_dynamic_tag, > - .up = up, > - .mcast_cfg = mcast_cfg, > - .hac_group_uuid = hac_group_uuid > - }.intern()] :- > - lsp in &nb::Logical_Switch_Port(), > - LogicalSwitchPort(lsp._uuid, lswitch_uuid), > - sw in &Switch(._uuid = lswitch_uuid, > - .other_config = other_config, > - .subnet = subnet, > - .ipv6_prefix = ipv6_prefix), > - SwitchRouterPeerRef(lsp._uuid, peer), > - SwitchPortUp(lsp._uuid, up), > - mcast_cfg in &McastPortCfg(.port = lsp._uuid, .router_port = false), > - var static_addresses = { > - var static_addresses = vec_empty(); > - for (addr in lsp.addresses) { > - if ((addr != i"router") and (not is_dynamic_lsp_address(addr.ival()))) { > - match (extract_lsp_addresses(addr.ival())) { > - None -> (), > - Some{lport_addr} -> static_addresses.push(lport_addr) > - } > - } else () > - }; > - static_addresses > - }, > - var ps_addresses = { > - var ps_addresses = vec_empty(); > - for (addr in lsp.port_security) { > - match (extract_lsp_addresses(addr.ival())) { > - None -> (), > - Some{lport_addr} -> ps_addresses.push(lport_addr) > - } > - }; > - ps_addresses > - }, > - var ps_eth_addresses = { > - var ps_eth_addresses = vec_empty(); > - for (ps_addr in ps_addresses) { > - ps_eth_addresses.push(i"${ps_addr.ea}") > - }; > - ps_eth_addresses > - }, > - var dynamic_address = match (lsp.dynamic_addresses) { > - None -> None, > - Some{lport_addr} -> extract_lsp_addresses(lport_addr.ival()) > - }, > - (var static_dynamic_mac, > - var static_dynamic_ipv4, > - var static_dynamic_ipv6, > - var has_dyn_lsp_addr) = { > - var dynamic_address_request = None; > - for (addr in lsp.addresses) { > - dynamic_address_request = parse_dynamic_address_request(addr.ival()); > - if (dynamic_address_request.is_some()) { > - break > - } > - }; > - > - 
match (dynamic_address_request) { > - Some{DynamicAddressRequest{mac, ipv4, ipv6}} -> (mac, ipv4, ipv6, true), > - None -> (None, None, None, false) > - } > - }, > - var needs_dynamic_ipv4address = has_dyn_lsp_addr and peer == None and subnet.is_some() and > - static_dynamic_ipv4 == None, > - var needs_dynamic_macaddress = has_dyn_lsp_addr and peer == None and static_dynamic_mac == None and > - (subnet.is_some() or ipv6_prefix.is_some() or > - other_config.get(i"mac_only") == Some{i"true"}), > - var needs_dynamic_ipv6address = has_dyn_lsp_addr and peer == None and ipv6_prefix.is_some() and static_dynamic_ipv6 == None, > - var parent_name = match (lsp.parent_name) { > - None -> None, > - Some{pname} -> if (pname == i"") { None } else { Some{pname} } > - }, > - /* Port needs dynamic tag if it has a parent and its `tag_request` is 0. */ > - var needs_dynamic_tag = parent_name.is_some() and lsp.tag_request == Some{0}, > - SwitchPortHAChassisGroup(.lsp_uuid = lsp._uuid, > - .hac_group_uuid = hac_group_uuid). > - > -/* Switch port port security addresses */ > -relation SwitchPortPSAddresses(port: Intern<SwitchPort>, > - ps_addrs: lport_addresses) > - > -SwitchPortPSAddresses(port, ps_addrs) :- > - port in &SwitchPort(.ps_addresses = ps_addresses), > - var ps_addrs = FlatMap(ps_addresses). > - > -/* All static addresses associated with a port parsed into > - * the lport_addresses data structure */ > -relation SwitchPortStaticAddresses(port: Intern<SwitchPort>, > - addrs: lport_addresses) > -SwitchPortStaticAddresses(port, addrs) :- > - port in &SwitchPort(.static_addresses = static_addresses), > - var addrs = FlatMap(static_addresses). > - > -/* All static and dynamic addresses associated with a port parsed into > - * the lport_addresses data structure */ > -relation SwitchPortAddresses(port: Intern<SwitchPort>, > - addrs: lport_addresses) > - > -SwitchPortAddresses(port, addrs) :- SwitchPortStaticAddresses(port, addrs). 
> -
> -SwitchPortAddresses(port, dynamic_address) :-
> - SwitchPortNewDynamicAddress(port, Some{dynamic_address}).
> -
> -/* "router" is a special Logical_Switch_Port address value that indicates that the Ethernet, IPv4, and IPv6
> - * addresses for this port should be obtained from the connected logical router port, as specified by router-port in
> - * options.
> - *
> - * The resulting addresses are used to populate the logical switch’s destination lookup, and also for the
> - * logical switch to generate ARP and ND replies.
> - *
> - * If the connected logical router port is a distributed gateway port and the logical router has rules
> - * specified in nat with external_mac, then those addresses are also used to populate the switch’s destination
> - * lookup. */
> -SwitchPortAddresses(port, addrs) :-
> - port in &SwitchPort(.lsp = lsp, .peer = Some{&rport}),
> - Some{var addrs} = {
> - var opt_addrs = None;
> - for (addr in lsp.addresses) {
> - if (addr == i"router") {
> - opt_addrs = Some{rport.networks}
> - } else ()
> - };
> - opt_addrs
> - }.
> -
> -/* All static and dynamic IPv4 addresses associated with a port */
> -relation SwitchPortIPv4Address(port: Intern<SwitchPort>,
> - ea: eth_addr,
> - addr: ipv4_netaddr)
> -
> -SwitchPortIPv4Address(port, ea, addr) :-
> - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv4_addrs = addrs}),
> - var addr = FlatMap(addrs).
> -
> -/* All static and dynamic IPv6 addresses associated with a port */
> -relation SwitchPortIPv6Address(port: Intern<SwitchPort>,
> - ea: eth_addr,
> - addr: ipv6_netaddr)
> -
> -SwitchPortIPv6Address(port, ea, addr) :-
> - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv6_addrs = addrs}),
> - var addr = FlatMap(addrs).
> -
> -/* Service monitoring. */
> -
> -/* MAC allocated for service monitor usage.
Just one mac is allocated
> - * for this purpose and the ovn-controller on each chassis will make use
> - * of this mac when sending out the packets to monitor the services
> - * defined in Service_Monitor Southbound table. Since these packets are
> - * all locally handled, having just one mac is good enough. */
> -function get_svc_monitor_mac(options: Map<istring,istring>, uuid: uuid)
> - : eth_addr =
> -{
> - var existing_mac = match (
> - options.get(i"svc_monitor_mac"))
> - {
> - Some{mac} -> scan_eth_addr(mac.ival()),
> - None -> None
> - };
> - match (existing_mac) {
> - Some{mac} -> mac,
> - None -> eth_addr_pseudorandom(uuid, 'h5678)
> - }
> -}
> -function put_svc_monitor_mac(options: mut Map<istring,istring>,
> - svc_monitor_mac: eth_addr)
> -{
> - options.insert(i"svc_monitor_mac", svc_monitor_mac.to_string().intern());
> -}
> -relation SvcMonitorMac(mac: eth_addr)
> -SvcMonitorMac(get_svc_monitor_mac(options, uuid)) :-
> - nb::NB_Global(._uuid = uuid, .options = options).
> -
> -relation UseCtInvMatch[bool]
> -UseCtInvMatch[options.get_bool_def(i"use_ct_inv_match", true)] :-
> - nb::NB_Global(.options = options).
> -UseCtInvMatch[true] :-
> - Unit(),
> - not nb in nb::NB_Global().
> diff --git a/northd/multicast.dl b/northd/multicast.dl
> deleted file mode 100644
> index 56bfa0c637..0000000000
> --- a/northd/multicast.dl
> +++ /dev/null
> @@ -1,273 +0,0 @@
> -/*
> - * Licensed under the Apache License, Version 2.0 (the "License");
> - * you may not use this file except in compliance with the License.
> - * You may obtain a copy of the License at:
> - *
> - * http://www.apache.org/licenses/LICENSE-2.0
> - *
> - * Unless required by applicable law or agreed to in writing, software
> - * distributed under the License is distributed on an "AS IS" BASIS,
> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> - * See the License for the specific language governing permissions and
> - * limitations under the License.
> - */ > - > -import OVN_Northbound as nb > -import OVN_Southbound as sb > -import ovn > -import ovsdb > -import helpers > -import lswitch > -import lrouter > - > -function mCAST_DEFAULT_MAX_ENTRIES(): integer = 2048 > - > -function mCAST_DEFAULT_IDLE_TIMEOUT_S(): integer = 300 > -function mCAST_IDLE_TIMEOUT_S_RANGE(): (integer, integer) = (15, 3600) > - > -function mCAST_DEFAULT_QUERY_INTERVAL_S(): integer = 1 > -function mCAST_QUERY_INTERVAL_S_RANGE(): (integer, integer) = (1, 3600) > - > -function mCAST_DEFAULT_QUERY_MAX_RESPONSE_S(): integer = 1 > - > -/* IP Multicast per switch configuration. */ > -typedef McastSwitchCfg = McastSwitchCfg { > - datapath : uuid, > - enabled : bool, > - querier : bool, > - flood_unreg : bool, > - eth_src : istring, > - ip4_src : istring, > - ip6_src : istring, > - table_size : integer, > - idle_timeout : integer, > - query_interval: integer, > - query_max_resp: integer > -} > - > -relation McastSwitchCfg[Intern<McastSwitchCfg>] > - > - /* FIXME: Right now table_size is enforced only in ovn-controller but in > - * the ovn-northd C version we enforce it on the aggregate groups too. 
> - */
> -
> -McastSwitchCfg[McastSwitchCfg {
> -        .datapath       = ls_uuid,
> -        .enabled        = other_config.get_bool_def(i"mcast_snoop", false),
> -        .querier        = other_config.get_bool_def(i"mcast_querier", true),
> -        .flood_unreg    = other_config.get_bool_def(i"mcast_flood_unregistered", false),
> -        .eth_src        = other_config.get(i"mcast_eth_src").unwrap_or(i""),
> -        .ip4_src        = other_config.get(i"mcast_ip4_src").unwrap_or(i""),
> -        .ip6_src        = other_config.get(i"mcast_ip6_src").unwrap_or(i""),
> -        .table_size     = other_config.get_int_def(i"mcast_table_size", mCAST_DEFAULT_MAX_ENTRIES()),
> -        .idle_timeout   = idle_timeout,
> -        .query_interval = query_interval,
> -        .query_max_resp = query_max_resp
> -    }.intern()] :-
> -    &nb::Logical_Switch(._uuid = ls_uuid,
> -                        .other_config = other_config),
> -    var idle_timeout = other_config.get_int_def(i"mcast_idle_timeout", mCAST_DEFAULT_IDLE_TIMEOUT_S())
> -                       .clamp(mCAST_IDLE_TIMEOUT_S_RANGE()),
> -    var query_interval = other_config.get_int_def(i"mcast_query_interval", idle_timeout / 2)
> -                         .clamp(mCAST_QUERY_INTERVAL_S_RANGE()),
> -    var query_max_resp = other_config.get_int_def(i"mcast_query_max_response",
> -                                                  mCAST_DEFAULT_QUERY_MAX_RESPONSE_S()).
> -
> -/* IP Multicast per router configuration. */
> -typedef McastRouterCfg = McastRouterCfg {
> -    datapath: uuid,
> -    relay   : bool
> -}
> -
> -relation McastRouterCfg[Intern<McastRouterCfg>]
> -
> -McastRouterCfg[McastRouterCfg{lr_uuid, mcast_relay}.intern()] :-
> -    nb::Logical_Router(._uuid = lr_uuid, .options = options),
> -    var mcast_relay = options.get_bool_def(i"mcast_relay", false).
> -
> -/* IP Multicast port configuration. */
> -typedef McastPortCfg = McastPortCfg {
> -    port          : uuid,
> -    router_port   : bool,
> -    flood         : bool,
> -    flood_reports : bool
> -}
> -
> -relation McastPortCfg[Intern<McastPortCfg>]
> -
> -McastPortCfg[McastPortCfg{lsp_uuid, false, flood, flood_reports}.intern()] :-
> -    &nb::Logical_Switch_Port(._uuid = lsp_uuid, .options = options),
> -    var flood = options.get_bool_def(i"mcast_flood", false),
> -    var flood_reports = options.get_bool_def(i"mcast_flood_reports", false).
> -
> -McastPortCfg[McastPortCfg{lrp_uuid, true, flood, flood}.intern()] :-
> -    &nb::Logical_Router_Port(._uuid = lrp_uuid, .options = options),
> -    var flood = options.get_bool_def(i"mcast_flood", false).
> -
> -/* Mapping between Switch and the set of router port uuids on which to flood
> - * IP multicast for relay.
> - */
> -relation SwitchMcastFloodRelayPorts(sw: Intern<Switch>, ports: Set<uuid>)
> -
> -SwitchMcastFloodRelayPorts(switch, relay_ports) :-
> -    &SwitchPort(
> -        .lsp = lsp,
> -        .sw = switch,
> -        .peer = Some{&RouterPort{.router = &Router{.mcast_cfg = mcast_cfg}}}
> -    ), mcast_cfg.relay,
> -    var relay_ports = lsp._uuid.group_by(switch).to_set().
> -
> -SwitchMcastFloodRelayPorts(switch, set_empty()) :-
> -    Switch[switch],
> -    not &SwitchPort(
> -        .sw = switch,
> -        .peer = Some{
> -            &RouterPort{
> -                .router = &Router{.mcast_cfg = &McastRouterCfg{.relay=true}}
> -            }
> -        }
> -    ).
> -
> -/* Mapping between Switch and the set of port uuids on which to
> - * flood IP multicast statically.
> - */
> -relation SwitchMcastFloodPorts(sw: Intern<Switch>, ports: Set<uuid>)
> -
> -SwitchMcastFloodPorts(switch, flood_ports) :-
> -    &SwitchPort(
> -        .lsp = lsp,
> -        .sw = switch,
> -        .mcast_cfg = &McastPortCfg{.flood = true}),
> -    var flood_ports = lsp._uuid.group_by(switch).to_set().
> -
> -SwitchMcastFloodPorts(switch, set_empty()) :-
> -    Switch[switch],
> -    not &SwitchPort(
> -        .sw = switch,
> -        .mcast_cfg = &McastPortCfg{.flood = true}).
> -
> -/* Mapping between Switch and the set of port uuids on which to
> - * flood IP multicast reports statically.
> - */
> -relation SwitchMcastFloodReportPorts(sw: Intern<Switch>, ports: Set<uuid>)
> -
> -SwitchMcastFloodReportPorts(switch, flood_ports) :-
> -    &SwitchPort(
> -        .lsp = lsp,
> -        .sw = switch,
> -        .mcast_cfg = &McastPortCfg{.flood_reports = true}),
> -    var flood_ports = lsp._uuid.group_by(switch).to_set().
> -
> -SwitchMcastFloodReportPorts(switch, set_empty()) :-
> -    Switch[switch],
> -    not &SwitchPort(
> -        .sw = switch,
> -        .mcast_cfg = &McastPortCfg{.flood_reports = true}).
> -
> -/* Mapping between Router and the set of port uuids on which to
> - * flood IP multicast reports statically.
> - */
> -relation RouterMcastFloodPorts(sw: Intern<Router>, ports: Set<uuid>)
> -
> -RouterMcastFloodPorts(router, flood_ports) :-
> -    &RouterPort(
> -        .lrp = lrp,
> -        .router = router,
> -        .mcast_cfg = &McastPortCfg{.flood = true}
> -    ),
> -    var flood_ports = lrp._uuid.group_by(router).to_set().
> -
> -RouterMcastFloodPorts(router, set_empty()) :-
> -    Router[router],
> -    not &RouterPort(
> -        .router = router,
> -        .mcast_cfg = &McastPortCfg{.flood = true}).
> -
> -/* Flattened IGMP group. One record per address-port tuple. */
> -relation IgmpSwitchGroupPort(
> -    address: istring,
> -    switch : Intern<Switch>,
> -    port   : uuid
> -)
> -
> -IgmpSwitchGroupPort(address, switch, lsp_uuid) :-
> -    sb::IGMP_Group(.address = address, .ports = pb_ports),
> -    var pb_port_uuid = FlatMap(pb_ports),
> -    sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name),
> -    &SwitchPort(
> -        .lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name},
> -        .sw = switch).
> -IgmpSwitchGroupPort(address, switch, localnet_port.0) :-
> -    IgmpSwitchGroupPort(address, switch, _),
> -    var localnet_port = FlatMap(switch.localnet_ports).
> -
> -/* Aggregated IGMP group: merges all IgmpSwitchGroupPort for a given
> - * address-switch tuple from all chassis.
> - */
> -relation IgmpSwitchMulticastGroup(
> -    address: istring,
> -    switch : Intern<Switch>,
> -    ports  : Set<uuid>
> -)
> -
> -IgmpSwitchMulticastGroup(address, switch, ports) :-
> -    IgmpSwitchGroupPort(address, switch, port),
> -    var ports = port.group_by((address, switch)).to_set().
> -
> -/* Flattened IGMP group representation for routers with relay enabled. One
> - * record per address-port tuple for all IGMP groups learned by switches
> - * connected to the router.
> - */
> -relation IgmpRouterGroupPort(
> -    address: istring,
> -    router : Intern<Router>,
> -    port   : uuid
> -)
> -
> -IgmpRouterGroupPort(address, rtr_port.router, rtr_port.lrp._uuid) :-
> -    SwitchMcastFloodRelayPorts(switch, sw_flood_ports),
> -    IgmpSwitchMulticastGroup(address, switch, _),
> -    /* For IPv6 only relay routable multicast groups
> -     * (RFC 4291 2.7).
> -     */
> -    match (ipv6_parse(address.ival())) {
> -        Some{ipv6} -> ipv6.is_routable_multicast(),
> -        None -> true
> -    },
> -    var flood_port = FlatMap(sw_flood_ports),
> -    &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port},
> -                .peer = Some{rtr_port}),
> -    RouterPortIsRedirect(rtr_port.lrp._uuid, false).
> -
> -/* Store the chassis redirect port uuid for redirect ports, otherwise traffic
> - * will not be tunneled properly. This will be translated back to the patch
> - * port on the remote hypervisor.
> - */
> -IgmpRouterGroupPort(address, rtr_port.router, cr_lrp_uuid) :-
> -    SwitchMcastFloodRelayPorts(switch, sw_flood_ports),
> -    IgmpSwitchMulticastGroup(address, switch, _),
> -    /* For IPv6 only relay routable multicast groups
> -     * (RFC 4291 2.7).
> -     */
> -    match (ipv6_parse(address.ival())) {
> -        Some{ipv6} -> ipv6.is_routable_multicast(),
> -        None -> true
> -    },
> -    var flood_port = FlatMap(sw_flood_ports),
> -    &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port},
> -                .peer = Some{rtr_port}),
> -    RouterPortIsRedirect(rtr_port.lrp._uuid, true),
> -    DistributedGatewayPort(rtr_port.lrp, _, cr_lrp_uuid).
> -
> -/* Aggregated IGMP group for routers: merges all IgmpRouterGroupPort for
> - * a given address-router tuple from all connected switches.
> - */
> -relation IgmpRouterMulticastGroup(
> -    address: istring,
> -    router : Intern<Router>,
> -    ports  : Set<uuid>
> -)
> -
> -IgmpRouterMulticastGroup(address, router, ports) :-
> -    IgmpRouterGroupPort(address, router, port),
> -    var ports = port.group_by((address, router)).to_set().
> diff --git a/northd/ovn-nb.dlopts b/northd/ovn-nb.dlopts
> deleted file mode 100644
> index 9a460adef4..0000000000
> --- a/northd/ovn-nb.dlopts
> +++ /dev/null
> @@ -1,27 +0,0 @@
> ---intern-strings
> --o BFD
> ---rw BFD.status
> --o Logical_Router_Port
> ---rw Logical_Router_Port.ipv6_prefix
> --o Logical_Switch_Port
> ---rw Logical_Switch_Port.tag
> ---rw Logical_Switch_Port.dynamic_addresses
> ---rw Logical_Switch_Port.up
> --o NB_Global
> ---rw NB_Global.sb_cfg
> ---rw NB_Global.hv_cfg
> ---rw NB_Global.options
> ---rw NB_Global.ipsec
> ---rw NB_Global.nb_cfg_timestamp
> ---rw NB_Global.hv_cfg_timestamp
> ---intern-table DHCP_Options
> ---intern-table ACL
> ---intern-table QoS
> ---intern-table Load_Balancer
> ---intern-table Logical_Switch
> ---intern-table Load_Balancer_Health_Check
> ---intern-table Meter
> ---intern-table NAT
> ---intern-table Address_Set
> ---intern-table Logical_Router_Port
> ---intern-table Logical_Switch_Port
> diff --git a/northd/ovn-northd-ddlog.c b/northd/ovn-northd-ddlog.c
> deleted file mode 100644
> index 1c06bd0028..0000000000
> --- a/northd/ovn-northd-ddlog.c
> +++ /dev/null
> @@ -1,1368 +0,0 @@
> -/*
> - * Licensed under the Apache License, Version 2.0 (the "License");
> - * you may not use this file except in compliance with the License.
> - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. > - */ > - > -#include <config.h> > - > -#include <getopt.h> > -#include <stdlib.h> > -#include <stdio.h> > -#include <fcntl.h> > -#include <unistd.h> > - > -#include "command-line.h" > -#include "daemon.h" > -#include "fatal-signal.h" > -#include "hash.h" > -#include "jsonrpc.h" > -#include "lib/ovn-util.h" > -#include "memory.h" > -#include "openvswitch/json.h" > -#include "openvswitch/poll-loop.h" > -#include "openvswitch/vlog.h" > -#include "ovs-numa.h" > -#include "ovsdb-cs.h" > -#include "ovsdb-data.h" > -#include "ovsdb-error.h" > -#include "ovsdb-parser.h" > -#include "ovsdb-types.h" > -#include "simap.h" > -#include "stopwatch.h" > -#include "lib/stopwatch-names.h" > -#include "lib/uuidset.h" > -#include "stream-ssl.h" > -#include "stream.h" > -#include "unixctl.h" > -#include "util.h" > -#include "uuid.h" > - > -#include "northd/ovn_northd_ddlog/ddlog.h" > - > -VLOG_DEFINE_THIS_MODULE(ovn_northd); > - > -#include "northd/ovn-northd-ddlog-nb.inc" > -#include "northd/ovn-northd-ddlog-sb.inc" > - > -struct northd_status { > - bool locked; > - bool pause; > -}; > - > -static unixctl_cb_func ovn_northd_exit; > -static unixctl_cb_func ovn_northd_pause; > -static unixctl_cb_func ovn_northd_resume; > -static unixctl_cb_func ovn_northd_is_paused; > -static unixctl_cb_func ovn_northd_status; > - > -static unixctl_cb_func ovn_northd_enable_cpu_profiling; > -static unixctl_cb_func ovn_northd_disable_cpu_profiling; > -static unixctl_cb_func ovn_northd_profile; > - > -/* --ddlog-record: The name of a file to which to record DDlog 
commands for > - * later replay. Useful for debugging. If null (by default), DDlog commands > - * are not recorded. */ > -static const char *record_file; > - > -static const char *ovnnb_db; > -static const char *ovnsb_db; > -static const char *unixctl_path; > - > -/* SSL options */ > -static const char *ssl_private_key_file; > -static const char *ssl_certificate_file; > -static const char *ssl_ca_cert_file; > - > -/* Frequently used table ids. */ > -static table_id WARNING_TABLE_ID; > -static table_id NB_CFG_TIMESTAMP_ID; > - > -/* Initialize frequently used table ids. */ > -static void > -init_table_ids(ddlog_prog ddlog) > -{ > - WARNING_TABLE_ID = ddlog_get_table_id(ddlog, "helpers::Warning"); > - NB_CFG_TIMESTAMP_ID = ddlog_get_table_id(ddlog, "NbCfgTimestamp"); > -} > - > -struct northd_ctx { > - /* Shared between NB and SB database contexts. */ > - ddlog_prog ddlog; > - ddlog_delta *delta; /* Accumulated delta to send to OVSDB. */ > - > - /* Database info. > - * > - * The '*_relations' vectors are arrays of strings that contain DDlog > - * relation names, terminated by a null pointer. 'prefix' is the prefix > - * for the DDlog module that contains the relations. */ > - char *prefix; /* e.g. "OVN_Northbound::" */ > - const char **input_relations; > - const char **output_relations; > - const char **output_only_relations; > - > - /* Whether this is the database that has the 'nb_cfg_timestamp' and > - * 'sb_cfg_timestamp' columns in NB_Global. True for the northbound > - * database, false for the southbound database. */ > - bool has_timestamp_columns; > - > - /* OVSDB connection. */ > - struct ovsdb_cs *cs; > - struct json *request_id; /* JSON request ID for outstanding txn if any. */ > - enum { > - /* Initial state, before the output-only data (if any) has been > - * requested. */ > - S_INITIAL, > - > - /* Output-only data has been requested. Waiting for reply. */ > - S_OUTPUT_ONLY_DATA_REQUESTED, > - > - /* Output-only data (if any) has been received. 
Any request sent out > - * now would be to update data. */ > - S_UPDATE, > - } state; > - > - /* Database info. */ > - const char *db_name; /* e.g. "OVN_Northbound". */ > - struct json *output_only_data; > - const char *lock_name; /* Name of lock we need, NULL if none. */ > - bool paused; > -}; > - > -static struct ovsdb_cs_ops northd_cs_ops; > - > -static struct json *get_database_ops(struct northd_ctx *); > -static int ddlog_clear(struct northd_ctx *); > - > -static void > -northd_ctx_connection_status(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *ctx_) > -{ > - const struct northd_ctx *ctx = ctx_; > - bool connected = ovsdb_cs_is_connected(ctx->cs); > - unixctl_command_reply(conn, connected ? "connected" : "not connected"); > -} > - > -static void > -northd_ctx_cluster_state_reset(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *ctx_) > -{ > - const struct northd_ctx *ctx = ctx_; > - VLOG_INFO("Resetting %s database cluster state", ctx->db_name); > - ovsdb_cs_reset_min_index(ctx->cs); > - unixctl_command_reply(conn, NULL); > -} > - > -static struct northd_ctx * > -northd_ctx_create(const char *server, const char *database, > - const char *unixctl_command_prefix, > - const char *lock_name, > - ddlog_prog ddlog, ddlog_delta *delta, > - const char **input_relations, > - const char **output_relations, > - const char **output_only_relations, > - bool paused) > -{ > - struct northd_ctx *ctx = xmalloc(sizeof *ctx); > - *ctx = (struct northd_ctx) { > - .ddlog = ddlog, > - .delta = delta, > - .prefix = xasprintf("%s::", database), > - .input_relations = input_relations, > - .output_relations = output_relations, > - .output_only_relations = output_only_relations, > - /* 'has_timestamp_columns' will get filled in later. 
*/ > - .cs = ovsdb_cs_create(database, 1 /* XXX */, &northd_cs_ops, ctx), > - .state = S_INITIAL, > - .db_name = database, > - /* 'output_only_relations' will get filled in later. */ > - .lock_name = lock_name, > - .paused = paused, > - }; > - > - ovsdb_cs_set_remote(ctx->cs, server, true); > - ovsdb_cs_set_lock(ctx->cs, lock_name); > - > - char *cmd = xasprintf("%s-connection-status", unixctl_command_prefix); > - unixctl_command_register(cmd, "", 0, 0, > - northd_ctx_connection_status, ctx); > - free(cmd); > - > - cmd = xasprintf("%s-cluster-state-reset", unixctl_command_prefix); > - unixctl_command_register(cmd, "", 0, 0, > - northd_ctx_cluster_state_reset, ctx); > - free(cmd); > - > - return ctx; > -} > - > -static void > -northd_ctx_destroy(struct northd_ctx *ctx) > -{ > - if (ctx) { > - ovsdb_cs_destroy(ctx->cs); > - json_destroy(ctx->request_id); > - json_destroy(ctx->output_only_data); > - free(ctx->prefix); > - free(ctx); > - } > -} > - > -static struct json * > -northd_compose_monitor_request(const struct json *schema_json, void *ctx_) > -{ > - struct northd_ctx *ctx = ctx_; > - > - struct shash *schema = ovsdb_cs_parse_schema(schema_json); > - > - const struct sset *nb_global = shash_find_data( > - schema, "NB_Global"); > - ctx->has_timestamp_columns > - = (nb_global > - && sset_contains(nb_global, "nb_cfg_timestamp") > - && sset_contains(nb_global, "sb_cfg_timestamp")); > - > - struct json *monitor_requests = json_object_create(); > - > - /* This should be smarter about ignoring not needed ones. There's a lot > - * more logic for this in ovsdb_idl_compose_monitor_request(). */ > - const struct shash_node *node; > - SHASH_FOR_EACH (node, schema) { > - const char *table_name = node->name; > - > - /* Only subscribe to input relations we care about. 
*/ > - for (const char **p = ctx->input_relations; *p; p++) { > - if (!strcmp(table_name, *p)) { > - const struct sset *schema_columns = node->data; > - struct json *subscribed_columns = json_array_create_empty(); > - > - const char *column; > - SSET_FOR_EACH (column, schema_columns) { > - if (strcmp(column, "_version")) { > - json_array_add(subscribed_columns, > - json_string_create(column)); > - } > - } > - > - struct json *monitor_request = json_object_create(); > - json_object_put(monitor_request, "columns", > - subscribed_columns); > - json_object_put(monitor_requests, table_name, > - json_array_create_1(monitor_request)); > - break; > - } > - } > - } > - ovsdb_cs_free_schema(schema); > - > - return monitor_requests; > -} > - > -static struct ovsdb_cs_ops northd_cs_ops = { northd_compose_monitor_request }; > - > -/* Sends the database server a request for all the row UUIDs in output-only > - * tables. */ > -static void > -northd_send_output_only_data_request(struct northd_ctx *ctx) > -{ > - if (ctx->output_only_relations[0]) { > - json_destroy(ctx->output_only_data); > - ctx->output_only_data = NULL; > - > - struct json *ops = json_array_create_1( > - json_string_create(ctx->db_name)); > - for (size_t i = 0; ctx->output_only_relations[i]; i++) { > - const char *table = ctx->output_only_relations[i]; > - struct json *op = json_object_create(); > - json_object_put_string(op, "op", "select"); > - json_object_put_string(op, "table", table); > - json_object_put(op, "columns", > - json_array_create_1(json_string_create("_uuid"))); > - json_object_put(op, "where", json_array_create_empty()); > - json_array_add(ops, op); > - } > - > - ctx->state = S_OUTPUT_ONLY_DATA_REQUESTED; > - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); > - } else { > - ctx->state = S_UPDATE; > - } > -} > - > -static void > -northd_pause(struct northd_ctx *ctx) > -{ > - if (!ctx->paused && ctx->lock_name) { > - ctx->paused = true; > - VLOG_INFO("This ovn-northd instance is now 
paused."); > - ovsdb_cs_set_lock(ctx->cs, NULL); > - } > -} > - > -static void > -northd_unpause(struct northd_ctx *ctx) > -{ > - if (ctx->paused) { > - ovsdb_cs_set_lock(ctx->cs, ctx->lock_name); > - ctx->paused = false; > - } > -} > - > -static void > -warning_cb(uintptr_t arg OVS_UNUSED, > - table_id table OVS_UNUSED, > - const ddlog_record *rec, > - ssize_t weight) > -{ > - size_t len; > - const char *s = ddlog_get_str_with_length(rec, &len); > - if (weight > 0) { > - VLOG_WARN("New warning: %.*s", (int)len, s); > - } else { > - VLOG_WARN("Warning cleared: %.*s", (int)len, s); > - } > -} > - > -static int > -ddlog_commit(ddlog_prog ddlog, ddlog_delta *delta) > -{ > - ddlog_delta *new_delta = ddlog_transaction_commit_dump_changes(ddlog); > - if (!new_delta) { > - VLOG_WARN("Transaction commit failed"); > - return -1; > - } > - > - /* Remove warnings from delta and output them straight away. */ > - ddlog_delta *warnings = ddlog_delta_remove_table(new_delta, WARNING_TABLE_ID); > - ddlog_delta_enumerate(warnings, warning_cb, 0); > - ddlog_free_delta(warnings); > - > - /* Merge changes into `delta`. */ > - ddlog_delta_union(delta, new_delta); > - ddlog_free_delta(new_delta); > - > - return 0; > -} > - > -static int > -ddlog_clear(struct northd_ctx *ctx) > -{ > - int n_failures = 0; > - for (int i = 0; ctx->input_relations[i]; i++) { > - char *table = xasprintf("%s%s", ctx->prefix, ctx->input_relations[i]); > - if (ddlog_clear_relation(ctx->ddlog, ddlog_get_table_id(ctx->ddlog, > - table))) { > - n_failures++; > - } > - free(table); > - } > - if (n_failures) { > - VLOG_WARN("failed to clear %d tables in %s database", > - n_failures, ctx->db_name); > - } > - return n_failures; > -} > - > -static const struct json * > -json_object_get(const struct json *json, const char *member_name) > -{ > - return (json && json->type == JSON_OBJECT > - ? 
shash_find_data(json_object(json), member_name) > - : NULL); > -} > - > -/* Stores into '*nb_cfgp' the new value of NB_Global::nb_cfg in the updates in > - * <table-updates> provided by the caller. Leaves '*nb_cfgp' alone if the > - * updates don't set NB_Global::nb_cfg. */ > -static void > -get_nb_cfg(const struct json *table_updates, int64_t *nb_cfgp) > -{ > - const struct json *nb_global = json_object_get(table_updates, "NB_Global"); > - if (nb_global) { > - struct shash_node *row; > - SHASH_FOR_EACH (row, json_object(nb_global)) { > - const struct json *value = row->data; > - const struct json *new = json_object_get(value, "new"); > - const struct json *nb_cfg = json_object_get(new, "nb_cfg"); > - if (nb_cfg && nb_cfg->type == JSON_INTEGER) { > - *nb_cfgp = json_integer(nb_cfg); > - return; > - } > - } > - } > -} > - > -static void > -northd_parse_updates(struct northd_ctx *ctx, struct ovs_list *updates) > -{ > - if (ovs_list_is_empty(updates)) { > - return; > - } > - > - if (ddlog_transaction_start(ctx->ddlog)) { > - VLOG_WARN("DDlog failed to start transaction"); > - return; > - } > - > - > - /* Whenever a new 'nb_cfg' value comes in, we take the current time and > - * push it into the NbCfgTimestamp relation for the DDlog program to put > - * into nb::NB_Global.nb_cfg_timestamp. > - * > - * The 'old_nb_cfg' variables track the state we've pushed into DDlog. > - * The 'new_nb_cfg' variables track what 'updates' sets (by default, > - * no change, so we initialize from the old variables). */ > - static int64_t old_nb_cfg = INT64_MIN; > - static int64_t old_nb_cfg_timestamp = INT64_MIN; > - int64_t new_nb_cfg = old_nb_cfg == INT64_MIN ? 
0 : old_nb_cfg; > - int64_t new_nb_cfg_timestamp = old_nb_cfg_timestamp; > - > - struct ovsdb_cs_event *event; > - LIST_FOR_EACH (event, list_node, updates) { > - ovs_assert(event->type == OVSDB_CS_EVENT_TYPE_UPDATE); > - struct ovsdb_cs_update_event *update = &event->update; > - if (update->clear && ddlog_clear(ctx)) { > - goto error; > - } > - > - char *updates_s = json_to_string(update->table_updates, 0); > - if (ddlog_apply_ovsdb_updates(ctx->ddlog, ctx->prefix, updates_s)) { > - VLOG_WARN("DDlog failed to apply updates %s", updates_s); > - free(updates_s); > - goto error; > - } > - free(updates_s); > - > - if (ctx->has_timestamp_columns) { > - get_nb_cfg(update->table_updates, &new_nb_cfg); > - } > - } > - > - if (ctx->has_timestamp_columns && new_nb_cfg != old_nb_cfg) { > - new_nb_cfg_timestamp = time_wall_msec(); > - > - ddlog_cmd *cmds[2]; > - int n_cmds = 0; > - if (old_nb_cfg_timestamp != INT64_MIN) { > - cmds[n_cmds++] = ddlog_delete_val_cmd( > - NB_CFG_TIMESTAMP_ID, ddlog_i64(old_nb_cfg_timestamp)); > - } > - cmds[n_cmds++] = ddlog_insert_cmd( > - NB_CFG_TIMESTAMP_ID, ddlog_i64(new_nb_cfg_timestamp)); > - if (ddlog_apply_updates(ctx->ddlog, cmds, n_cmds) < 0) { > - goto error; > - } > - } > - > - /* Commit changes to DDlog. */ > - if (ddlog_commit(ctx->ddlog, ctx->delta)) { > - goto error; > - } > - if (ctx->has_timestamp_columns) { > - old_nb_cfg = new_nb_cfg; > - old_nb_cfg_timestamp = new_nb_cfg_timestamp; > - } > - > - /* This update may have implications for the other side, so > - * immediately wake to check for more changes to be applied. 
*/ > - poll_immediate_wake(); > - > - return; > - > -error: > - ddlog_transaction_rollback(ctx->ddlog); > -} > - > -static void > -northd_process_txn_reply(struct northd_ctx *ctx, > - const struct jsonrpc_msg *reply) > -{ > - if (!json_equal(reply->id, ctx->request_id)) { > - VLOG_WARN("unexpected transaction reply"); > - return; > - } > - > - json_destroy(ctx->request_id); > - ctx->request_id = NULL; > - > - if (reply->type == JSONRPC_ERROR) { > - char *s = jsonrpc_msg_to_string(reply); > - VLOG_WARN("received database error: %s", s); > - free(s); > - > - ovsdb_cs_force_reconnect(ctx->cs); > - return; > - } > - > - switch (ctx->state) { > - case S_INITIAL: > - OVS_NOT_REACHED(); > - break; > - > - case S_OUTPUT_ONLY_DATA_REQUESTED: > - json_destroy(ctx->output_only_data); > - ctx->output_only_data = json_clone(reply->result); > - > - ctx->state = S_UPDATE; > - break; > - > - case S_UPDATE: > - /* Nothing to do. */ > - break; > - > - default: > - OVS_NOT_REACHED(); > - } > -} > - > -static void > -destroy_event_list(struct ovs_list *events) > -{ > - struct ovsdb_cs_event *event; > - LIST_FOR_EACH_POP (event, list_node, events) { > - ovsdb_cs_event_destroy(event); > - } > -} > - > -/* Processes a batch of messages from the database server on 'ctx'. 
*/ > -static void > -northd_run(struct northd_ctx *ctx) > -{ > - struct ovs_list events; > - ovsdb_cs_run(ctx->cs, &events); > - > - struct ovs_list updates = OVS_LIST_INITIALIZER(&updates); > - struct ovsdb_cs_event *event; > - LIST_FOR_EACH_POP (event, list_node, &events) { > - switch (event->type) { > - case OVSDB_CS_EVENT_TYPE_RECONNECT: > - json_destroy(ctx->request_id); > - ctx->state = S_INITIAL; > - break; > - > - case OVSDB_CS_EVENT_TYPE_LOCKED: > - break; > - > - case OVSDB_CS_EVENT_TYPE_UPDATE: > - if (event->update.clear) { > - destroy_event_list(&updates); > - } > - ovs_list_push_back(&updates, &event->list_node); > - continue; > - > - case OVSDB_CS_EVENT_TYPE_TXN_REPLY: > - northd_process_txn_reply(ctx, event->txn_reply); > - break; > - } > - ovsdb_cs_event_destroy(event); > - } > - > - northd_parse_updates(ctx, &updates); > - destroy_event_list(&updates); > - > - if (ctx->state == S_INITIAL && ovsdb_cs_may_send_transaction(ctx->cs)) { > - northd_send_output_only_data_request(ctx); > - } > -} > - > -/* Pass the changes for 'ctx' to its database server. */ > -static void > -northd_send_deltas(struct northd_ctx *ctx) > -{ > - if (ctx->request_id || !ovsdb_cs_may_send_transaction(ctx->cs)) { > - return; > - } > - > - struct json *ops = get_database_ops(ctx); > - if (!ops) { > - return; > - } > - > - struct json *comment = json_object_create(); > - json_object_put_string(comment, "op", "comment"); > - json_object_put_string(comment, "comment", "ovn-northd-ddlog"); > - json_array_add(ops, comment); > - > - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); > -} > - > -static void > -northd_update_probe_interval_cb( > - uintptr_t probe_intervalp_, > - table_id table OVS_UNUSED, > - const ddlog_record *rec, > - ssize_t weight OVS_UNUSED) > -{ > - int *probe_intervalp = (int *) probe_intervalp_; > - > - int64_t x = ddlog_get_i64(rec); > - *probe_intervalp = (x > 0 && x < 1000 ? 1000 > - : x > INT_MAX ? 
INT_MAX > - : x); > -} > - > -static void > -northd_update_probe_interval(struct northd_ctx *nb, struct northd_ctx *sb) > -{ > - /* 0 means that Northd_Probe_Interval is empty. That means that we haven't > - * connected to the database and retrieved an initial snapshot. Thus, we > - * set an infinite probe interval to allow for retrieving and stabilizing > - * an initial snapshot of the databse, which can take a long time. > - * > - * -1 means that Northd_Probe_Interval is nonempty but the database doesn't > - * set a probe interval. Thus, we use the default probe interval. > - * > - * Any other value is an explicit probe interval request from the > - * database. */ > - int probe_interval = 0; > - table_id tid = ddlog_get_table_id(nb->ddlog, "Northd_Probe_Interval"); > - ddlog_delta *probe_delta = ddlog_delta_remove_table(nb->delta, tid); > - ddlog_delta_enumerate(probe_delta, northd_update_probe_interval_cb, (uintptr_t) &probe_interval); > - ddlog_free_delta(probe_delta); > - > - ovsdb_cs_set_probe_interval(nb->cs, probe_interval); > - ovsdb_cs_set_probe_interval(sb->cs, probe_interval); > -} > - > -/* Arranges for poll_block() to wake up when northd_run() has something to > - * do or when activity occurs on a transaction on 'ctx'. */ > -static void > -northd_wait(struct northd_ctx *ctx) > -{ > - ovsdb_cs_wait(ctx->cs); > -} > - > -/* ddlog-specific actions. */ > - > -/* Generate OVSDB update command for delta-plus, delta-minus, and delta-update > - * tables. 
*/ > -static void > -ddlog_table_update_deltas(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, > - const char *db, const char *table) > -{ > - int error; > - char *updates; > - > - error = ddlog_dump_ovsdb_delta_tables(ddlog, delta, db, table, &updates); > - if (error) { > - VLOG_INFO("DDlog error %d dumping delta for table %s", error, table); > - return; > - } > - > - if (!updates[0]) { > - ddlog_free_json(updates); > - return; > - } > - > - ds_put_cstr(ds, updates); > - ds_put_char(ds, ','); > - ddlog_free_json(updates); > -} > - > -/* Generate OVSDB update command for a output-only table. */ > -static void > -ddlog_table_update_output(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, > - const char *db, const char *table) > -{ > - int error; > - char *updates; > - > - error = ddlog_dump_ovsdb_output_table(ddlog, delta, db, table, &updates); > - if (error) { > - VLOG_WARN("%s: failed to generate update commands for " > - "output-only table (error %d)", table, error); > - return; > - } > - char *table_name = xasprintf("%s::Out_%s", db, table); > - ddlog_delta_clear_table(delta, ddlog_get_table_id(ddlog, table_name)); > - free(table_name); > - > - if (!updates[0]) { > - ddlog_free_json(updates); > - return; > - } > - > - ds_put_cstr(ds, updates); > - ds_put_char(ds, ','); > - ddlog_free_json(updates); > -} > - > -static struct ovsdb_error * > -parse_output_only_data(const struct json *txn_result, size_t index, > - struct uuidset *uuidset) > -{ > - if (txn_result->type != JSON_ARRAY || txn_result->array.n <= index) { > - return ovsdb_syntax_error(txn_result, NULL, > - "transaction result missing for " > - "output-only relation %"PRIuSIZE, index); > - } > - > - struct ovsdb_parser p; > - ovsdb_parser_init(&p, txn_result->array.elems[0], "select result"); > - const struct json *rows = ovsdb_parser_member(&p, "rows", OP_ARRAY); > - struct ovsdb_error *error = ovsdb_parser_finish(&p); > - if (error) { > - return error; > - } > - > - for (size_t i = 0; i < 
rows->array.n; i++) { > - const struct json *row = rows->array.elems[i]; > - > - ovsdb_parser_init(&p, row, "row"); > - const struct json *uuid = ovsdb_parser_member(&p, "_uuid", OP_ARRAY); > - error = ovsdb_parser_finish(&p); > - if (error) { > - return error; > - } > - > - struct ovsdb_base_type base_type = OVSDB_BASE_UUID_INIT; > - union ovsdb_atom atom; > - error = ovsdb_atom_from_json(&atom, &base_type, uuid, NULL); > - if (error) { > - return error; > - } > - uuidset_insert(uuidset, &atom.uuid); > - } > - > - return NULL; > -} > - > -static bool > -get_ddlog_uuid(const ddlog_record *rec, struct uuid *uuid) > -{ > - if (!ddlog_is_int(rec)) { > - return false; > - } > - > - __uint128_t u128 = ddlog_get_u128(rec); > - uuid->parts[0] = u128 >> 96; > - uuid->parts[1] = u128 >> 64; > - uuid->parts[2] = u128 >> 32; > - uuid->parts[3] = u128; > - return true; > -} > - > -struct dump_index_data { > - ddlog_prog prog; > - struct uuidset *rows_present; > - const char *table; > - struct ds *ops_s; > -}; > - > -static void OVS_UNUSED > -index_cb(uintptr_t data_, const ddlog_record *rec) > -{ > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5); > - struct dump_index_data *data = (struct dump_index_data *) data_; > - > - /* Extract the rec's row UUID as 'uuid'. */ > - const ddlog_record *rec_uuid = ddlog_get_named_struct_field(rec, "_uuid"); > - if (!rec_uuid) { > - VLOG_WARN_RL(&rl, "%s: row has no _uuid column", data->table); > - return; > - } > - struct uuid uuid; > - if (!get_ddlog_uuid(rec_uuid, &uuid)) { > - VLOG_WARN_RL(&rl, "%s: _uuid column has unexpected type", data->table); > - return; > - } > - > - /* If a row with the given UUID was already in the database, then > - * send a operation to update it; otherwise, send an operation to > - * insert it. 
*/ > - struct uuidset_node *node = uuidset_find(data->rows_present, &uuid); > - char *s = NULL; > - int ret; > - if (node) { > - uuidset_delete(data->rows_present, node); > - ret = ddlog_into_ovsdb_update_str(data->prog, data->table, rec, &s); > - } else { > - ret = ddlog_into_ovsdb_insert_str(data->prog, data->table, rec, &s); > - } > - if (ret) { > - VLOG_WARN_RL(&rl, "%s: ddlog could not convert row into database op", > - data->table); > - return; > - } > - ds_put_format(data->ops_s, "%s,", s); > - ddlog_free_json(s); > -} > - > -static struct json * > -where_uuid_equals(const struct uuid *uuid) > -{ > - return > - json_array_create_1( > - json_array_create_3( > - json_string_create("_uuid"), > - json_string_create("=="), > - json_array_create_2( > - json_string_create("uuid"), > - json_string_create_nocopy( > - xasprintf(UUID_FMT, UUID_ARGS(uuid)))))); > -} > - > -static void > -add_delete_row_op(const char *table, const struct uuid *uuid, struct ds *ops_s) > -{ > - struct json *op = json_object_create(); > - json_object_put_string(op, "op", "delete"); > - json_object_put_string(op, "table", table); > - json_object_put(op, "where", where_uuid_equals(uuid)); > - json_to_ds(op, 0, ops_s); > - json_destroy(op); > - ds_put_char(ops_s, ','); > -} > - > -static void > -northd_update_sb_cfg_cb( > - uintptr_t new_sb_cfgp_, > - table_id table OVS_UNUSED, > - const ddlog_record *rec, > - ssize_t weight) > -{ > - int64_t *new_sb_cfgp = (int64_t *) new_sb_cfgp_; > - > - if (weight < 0) { > - return; > - } > - > - if (ddlog_get_int(rec, NULL, 0) <= sizeof *new_sb_cfgp) { > - *new_sb_cfgp = ddlog_get_i64(rec); > - } > -} > - > -static struct json * > -get_database_ops(struct northd_ctx *ctx) > -{ > - struct ds ops_s = DS_EMPTY_INITIALIZER; > - ds_put_char(&ops_s, '['); > - json_string_escape(ctx->db_name, &ops_s); > - ds_put_char(&ops_s, ','); > - size_t start_len = ops_s.length; > - > - for (const char **p = ctx->output_relations; *p; p++) { > - 
ddlog_table_update_deltas(&ops_s, ctx->ddlog, ctx->delta, > - ctx->db_name, *p); > - } > - > - if (ctx->output_only_data) { > - /* > - * We just reconnected to the database (or connected for the first time > - * in this execution). We assume that the contents of the output-only > - * tables might have changed (this is especially true the first time we > - * connect to the database in a given execution, of course; we can't > - * assume that the tables have any particular contents in this case). > - * > - * ctx->output_only_data is a database reply that tells us the > - * UUIDs of the rows that exist in the database. Our strategy is to > - * compare these UUIDs to the UUIDs of the rows that exist in the DDlog > - * analogues of these tables, and then add, delete, or update rows as > - * necessary. > - * > - * (ctx->output_only_data only gives row UUIDs, not full row > - * contents. That means that for rows that exist in OVSDB and in > - * DDLog, we always send an update to set all the columns. It wouldn't > - * save bandwidth to do anything else, since we'd always have to send > - * the full row contents in one direction and if there were differences > - * we'd have to send the contents in both directions. With this > - * strategy we only send them in one direction even in the worst case.) > - * > - * (We can't just send an operation to delete all the rows and then > - * re-add them all in the same transaction, because ovsdb-server > - * rejects deleting a row with a given UUID and then adding the same > - * UUID back in a single transaction.) > - */ > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 2); > - > - for (size_t i = 0; ctx->output_only_relations[i]; i++) { > - const char *table = ctx->output_only_relations[i]; > - > - /* Parse the list of row UUIDs received from OVSDB. 
*/ > - struct uuidset rows_present = UUIDSET_INITIALIZER(&rows_present); > - struct ovsdb_error *error = parse_output_only_data( > - ctx->output_only_data, i, &rows_present); > - if (error) { > - char *s = ovsdb_error_to_string_free(error); > - VLOG_WARN_RL(&rl, "%s", s); > - free(s); > - uuidset_destroy(&rows_present); > - continue; > - } > - > - /* Get the index_id for the DDlog table. > - * > - * We require output-only tables to have an accompanying index > - * named <table>_Index. */ > - char *index = xasprintf("%s_Index", table); > - index_id idxid = ddlog_get_index_id(ctx->ddlog, index); > - if (idxid == -1) { > - VLOG_WARN_RL(&rl, "%s: unknown index", index); > - free(index); > - uuidset_destroy(&rows_present); > - continue; > - } > - free(index); > - > - /* For each row in the index, update a corresponding OVSDB row, if > - * there is one, otherwise insert a new row. */ > - struct dump_index_data cbdata = { > - ctx->ddlog, &rows_present, table, &ops_s > - }; > - ddlog_dump_index(ctx->ddlog, idxid, index_cb, (uintptr_t) &cbdata); > - > - /* Any uuids remaining in 'rows_present' are rows that are in OVSDB > - * but not DDlog. Delete them from OVSDB. */ > - struct uuidset_node *node; > - UUIDSET_FOR_EACH (node, &rows_present) { > - add_delete_row_op(table, &node->uuid, &ops_s); > - } > - uuidset_destroy(&rows_present); > - > - /* Discard any queued output to this table, since we just > - * did a full sync to it. */ > - struct ds tmp = DS_EMPTY_INITIALIZER; > - ddlog_table_update_output(&tmp, ctx->ddlog, ctx->delta, > - ctx->db_name, table); > - ds_destroy(&tmp); > - } > - > - json_destroy(ctx->output_only_data); > - ctx->output_only_data = NULL; > - } else { > - for (const char **p = ctx->output_only_relations; *p; p++) { > - ddlog_table_update_output(&ops_s, ctx->ddlog, ctx->delta, > - ctx->db_name, *p); > - } > - } > - > - /* If we're updating nb::NB_Global.sb_cfg, then also update > - * sb_cfg_timestamp. 
> - * > - * XXX If the transaction we're sending to the database fails, then > - * currently as written we'll never find out about it and sb_cfg_timestamp > - * will not be updated. > - */ > - static int64_t old_sb_cfg = INT64_MIN; > - static int64_t old_sb_cfg_timestamp = INT64_MIN; > - int64_t new_sb_cfg = old_sb_cfg; > - if (ctx->has_timestamp_columns) { > - table_id sb_cfg_tid = ddlog_get_table_id(ctx->ddlog, "SbCfg"); > - ddlog_delta *sb_cfg_delta = ddlog_delta_remove_table(ctx->delta, > - sb_cfg_tid); > - ddlog_delta_enumerate(sb_cfg_delta, northd_update_sb_cfg_cb, > - (uintptr_t) &new_sb_cfg); > - ddlog_free_delta(sb_cfg_delta); > - > - if (new_sb_cfg != old_sb_cfg) { > - old_sb_cfg = new_sb_cfg; > - old_sb_cfg_timestamp = time_wall_msec(); > - ds_put_format(&ops_s, "{\"op\":\"update\",\"table\":\"NB_Global\",\"where\":[]," > - "\"row\":{\"sb_cfg_timestamp\":%"PRId64"}},", old_sb_cfg_timestamp); > - } > - } > - > - struct json *ops; > - if (ops_s.length > start_len) { > - ds_chomp(&ops_s, ','); > - ds_put_char(&ops_s, ']'); > - ops = json_from_string(ds_cstr(&ops_s)); > - } else { > - ops = NULL; > - } > - > - ds_destroy(&ops_s); > - > - return ops; > -} > - > -/* Callback used by the ddlog engine to print error messages. Note that > - * this is only used by the ddlog runtime, as opposed to the application > - * code in ovn_northd.dl, which uses the vlog facility directly. 
*/ > -static void > -ddlog_print_error(const char *msg) > -{ > - VLOG_ERR("%s", msg); > -} > - > -static void > -usage(void) > -{ > - printf("\ > -%s: OVN northbound management daemon (DDlog version)\n\ > -usage: %s [OPTIONS]\n\ > -\n\ > -Options:\n\ > - --ovnnb-db=DATABASE connect to ovn-nb database at DATABASE\n\ > - (default: %s)\n\ > - --ovnsb-db=DATABASE connect to ovn-sb database at DATABASE\n\ > - (default: %s)\n\ > - --dry-run start in paused state (do not commit db changes)\n\ > - --ddlog-record=FILE.TXT record db changes to replay later for debugging\n\ > - --unixctl=SOCKET override default control socket name\n\ > - -h, --help display this help message\n\ > - -o, --options list available options\n\ > - -V, --version display version information\n\ > -", program_name, program_name, default_nb_db(), default_sb_db()); > - daemon_usage(); > - vlog_usage(); > - stream_usage("database", true, true, false); > -} > - > -static void > -parse_options(int argc OVS_UNUSED, char *argv[] OVS_UNUSED, > - bool *pause) > -{ > - enum { > - OVN_DAEMON_OPTION_ENUMS, > - VLOG_OPTION_ENUMS, > - SSL_OPTION_ENUMS, > - OPT_DRY_RUN, > - OPT_DDLOG_RECORD, > - }; > - static const struct option long_options[] = { > - {"ovnsb-db", required_argument, NULL, 'd'}, > - {"ovnnb-db", required_argument, NULL, 'D'}, > - {"unixctl", required_argument, NULL, 'u'}, > - {"help", no_argument, NULL, 'h'}, > - {"options", no_argument, NULL, 'o'}, > - {"version", no_argument, NULL, 'V'}, > - {"dry-run", no_argument, NULL, OPT_DRY_RUN}, > - {"ddlog-record", required_argument, NULL, OPT_DDLOG_RECORD}, > - OVN_DAEMON_LONG_OPTIONS, > - VLOG_LONG_OPTIONS, > - STREAM_SSL_LONG_OPTIONS, > - {NULL, 0, NULL, 0}, > - }; > - char *short_options = ovs_cmdl_long_options_to_short_options(long_options); > - > - for (;;) { > - int c; > - > - c = getopt_long(argc, argv, short_options, long_options, NULL); > - if (c == -1) { > - break; > - } > - > - switch (c) { > - OVN_DAEMON_OPTION_HANDLERS; > - VLOG_OPTION_HANDLERS; 
> - > - case 'p': > - ssl_private_key_file = optarg; > - break; > - > - case 'c': > - ssl_certificate_file = optarg; > - break; > - > - case 'C': > - ssl_ca_cert_file = optarg; > - break; > - > - case 'd': > - ovnsb_db = optarg; > - break; > - > - case 'D': > - ovnnb_db = optarg; > - break; > - > - case 'u': > - unixctl_path = optarg; > - break; > - > - case 'h': > - usage(); > - exit(EXIT_SUCCESS); > - > - case 'o': > - ovs_cmdl_print_options(long_options); > - exit(EXIT_SUCCESS); > - > - case 'V': > - ovs_print_version(0, 0); > - exit(EXIT_SUCCESS); > - > - case OPT_DRY_RUN: > - *pause = true; > - break; > - > - case OPT_DDLOG_RECORD: > - record_file = optarg; > - break; > - > - default: > - break; > - } > - } > - > - if (!ovnsb_db || !ovnsb_db[0]) { > - ovnsb_db = default_sb_db(); > - } > - > - if (!ovnnb_db || !ovnnb_db[0]) { > - ovnnb_db = default_nb_db(); > - } > - > - free(short_options); > -} > - > -static void > -update_ssl_config(void) > -{ > - if (ssl_private_key_file && ssl_certificate_file) { > - stream_ssl_set_key_and_cert(ssl_private_key_file, > - ssl_certificate_file); > - } > - if (ssl_ca_cert_file) { > - stream_ssl_set_ca_cert_file(ssl_ca_cert_file, false); > - } > -} > - > -int > -main(int argc, char *argv[]) > -{ > - int res = EXIT_SUCCESS; > - struct unixctl_server *unixctl; > - int retval; > - bool exiting; > - struct northd_status status = { > - .locked = false, > - .pause = false, > - }; > - > - fatal_ignore_sigpipe(); > - ovs_cmdl_proctitle_init(argc, argv); > - set_program_name(argv[0]); > - service_start(&argc, &argv); > - parse_options(argc, argv, &status.pause); > - > - daemonize_start(false); > - > - char *abs_unixctl_path = get_abs_unix_ctl_path(unixctl_path); > - retval = unixctl_server_create(abs_unixctl_path, &unixctl); > - free(abs_unixctl_path); > - > - if (retval) { > - exit(EXIT_FAILURE); > - } > - > - unixctl_command_register("exit", "", 0, 0, ovn_northd_exit, &exiting); > - unixctl_command_register("status", "", 0, 0, 
ovn_northd_status, &status); > - > - ddlog_prog ddlog; > - ddlog_delta *delta; > - ddlog = ddlog_run(1, false, ddlog_print_error, &delta); > - if (!ddlog) { > - ovs_fatal(0, "DDlog instance could not be created"); > - } > - init_table_ids(ddlog); > - > - unixctl_command_register("enable-cpu-profiling", "", 0, 0, > - ovn_northd_enable_cpu_profiling, ddlog); > - unixctl_command_register("disable-cpu-profiling", "", 0, 0, > - ovn_northd_disable_cpu_profiling, ddlog); > - unixctl_command_register("profile", "", 0, 0, ovn_northd_profile, ddlog); > - > - int replay_fd = -1; > - if (record_file) { > - replay_fd = open(record_file, O_CREAT | O_WRONLY | O_TRUNC, 0666); > - if (replay_fd < 0) { > - ovs_fatal(errno, "%s: could not create DDlog record file", > - record_file); > - } > - > - if (ddlog_record_commands(ddlog, replay_fd)) { > - ovs_fatal(0, "could not enable DDlog command recording"); > - } > - } > - > - struct northd_ctx *nb_ctx = northd_ctx_create( > - ovnnb_db, "OVN_Northbound", "nb", NULL, ddlog, delta, > - nb_input_relations, nb_output_relations, nb_output_only_relations, > - status.pause); > - struct northd_ctx *sb_ctx = northd_ctx_create( > - ovnsb_db, "OVN_Southbound", "sb", "ovn_northd", ddlog, delta, > - sb_input_relations, sb_output_relations, sb_output_only_relations, > - status.pause); > - > - unixctl_command_register("pause", "", 0, 0, ovn_northd_pause, sb_ctx); > - unixctl_command_register("resume", "", 0, 0, ovn_northd_resume, sb_ctx); > - unixctl_command_register("is-paused", "", 0, 0, ovn_northd_is_paused, > - sb_ctx); > - > - char *ovn_internal_version = ovn_get_internal_version(); > - VLOG_INFO("OVN internal version is : [%s]", ovn_internal_version); > - free(ovn_internal_version); > - > - daemonize_complete(); > - > - stopwatch_create(NORTHD_LOOP_STOPWATCH_NAME, SW_MS); > - stopwatch_create(OVNNB_DB_RUN_STOPWATCH_NAME, SW_MS); > - stopwatch_create(OVNSB_DB_RUN_STOPWATCH_NAME, SW_MS); > - > - /* Main loop. 
*/ > - exiting = false; > - while (!exiting) { > - update_ssl_config(); > - memory_run(); > - if (memory_should_report()) { > - struct simap usage = SIMAP_INITIALIZER(&usage); > - > - /* Nothing special to report yet. */ > - memory_report(&usage); > - simap_destroy(&usage); > - } > - > - bool has_lock = ovsdb_cs_has_lock(sb_ctx->cs); > - if (!sb_ctx->paused) { > - if (has_lock && !status.locked) { > - VLOG_INFO("ovn-northd lock acquired. " > - "This ovn-northd instance is now active."); > - } else if (!has_lock && status.locked) { > - VLOG_INFO("ovn-northd lock lost. " > - "This ovn-northd instance is now on standby."); > - } > - } > - status.locked = has_lock; > - status.pause = sb_ctx->paused; > - > - stopwatch_start(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); > - northd_run(nb_ctx); > - stopwatch_stop(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); > - stopwatch_start(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); > - northd_run(sb_ctx); > - stopwatch_stop(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); > - northd_update_probe_interval(nb_ctx, sb_ctx); > - if (ovsdb_cs_has_lock(sb_ctx->cs) && > - sb_ctx->state == S_UPDATE && > - nb_ctx->state == S_UPDATE && > - ovsdb_cs_may_send_transaction(sb_ctx->cs) && > - ovsdb_cs_may_send_transaction(nb_ctx->cs)) { > - northd_send_deltas(nb_ctx); > - northd_send_deltas(sb_ctx); > - } > - > - stopwatch_stop(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); > - stopwatch_start(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); > - unixctl_server_run(unixctl); > - > - northd_wait(nb_ctx); > - northd_wait(sb_ctx); > - unixctl_server_wait(unixctl); > - memory_wait(); > - if (exiting) { > - poll_immediate_wake(); > - } > - poll_block(); > - if (should_service_stop()) { > - exiting = true; > - } > - } > - > - northd_ctx_destroy(nb_ctx); > - northd_ctx_destroy(sb_ctx); > - > - ddlog_free_delta(delta); > - ddlog_stop(ddlog); > - > - if (replay_fd >= 0) { > - fsync(replay_fd); > - close(replay_fd); > - } > - > - unixctl_server_destroy(unixctl); > - service_stop(); 
> - > - exit(res); > -} > - > -static void > -ovn_northd_exit(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *exiting_) > -{ > - bool *exiting = exiting_; > - *exiting = true; > - > - unixctl_command_reply(conn, NULL); > -} > - > -static void > -ovn_northd_pause(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > -{ > - struct northd_ctx *sb_ctx = sb_ctx_; > - northd_pause(sb_ctx); > - unixctl_command_reply(conn, NULL); > -} > - > -static void > -ovn_northd_resume(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > -{ > - struct northd_ctx *sb_ctx = sb_ctx_; > - northd_unpause(sb_ctx); > - unixctl_command_reply(conn, NULL); > -} > - > -static void > -ovn_northd_is_paused(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > -{ > - struct northd_ctx *sb_ctx = sb_ctx_; > - unixctl_command_reply(conn, sb_ctx->paused ? "true" : "false"); > -} > - > -static void > -ovn_northd_status(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *status_) > -{ > - struct northd_status *status = status_; > - > - /* Use a labeled formatted output so we can add more to the status command > - * later without breaking any consuming scripts. */ > - char *s = xasprintf("Status: %s\n", > - status->pause ? "paused" > - : status->locked ? 
"active" > - : "standby"); > - unixctl_command_reply(conn, s); > - free(s); > -} > - > -static void > -ovn_northd_enable_cpu_profiling(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *prog_) > -{ > - ddlog_prog prog = prog_; > - ddlog_enable_cpu_profiling(prog, true); > - unixctl_command_reply(conn, NULL); > -} > - > -static void > -ovn_northd_disable_cpu_profiling(struct unixctl_conn *conn, > - int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *prog_) > -{ > - ddlog_prog prog = prog_; > - ddlog_enable_cpu_profiling(prog, false); > - unixctl_command_reply(conn, NULL); > -} > - > -static void > -ovn_northd_profile(struct unixctl_conn *conn, int argc OVS_UNUSED, > - const char *argv[] OVS_UNUSED, void *prog_) > -{ > - ddlog_prog prog = prog_; > - char *profile = ddlog_profile(prog); > - unixctl_command_reply(conn, profile); > - free(profile); > -} > diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml > index a77bd719e8..bf12ed5cd8 100644 > --- a/northd/ovn-northd.8.xml > +++ b/northd/ovn-northd.8.xml > @@ -1,7 +1,7 @@ > <?xml version="1.0" encoding="utf-8"?> > <manpage program="ovn-northd" section="8" title="ovn-northd"> > <h1>Name</h1> > - <p>ovn-northd and ovn-northd-ddlog -- Open Virtual Network central control daemon</p> > + <p>ovn-northd -- Open Virtual Network central control daemon</p> > > <h1>Synopsis</h1> > <p><code>ovn-northd</code> [<var>options</var>]</p> > @@ -18,14 +18,6 @@ > <code>ovn-sb</code>(5)) below it. > </p> > > - <p> > - <code>ovn-northd</code> is implemented in C. > - <code>ovn-northd-ddlog</code> is a compatible implementation written in > - DDlog, a language for incremental database processing. This > - documentation applies to both implementations, with differences indicated > - where relevant. > - </p> > - > <h1>Options</h1> > <dl> > <dt><code>--ovnnb-db=<var>database</var></code></dt> > @@ -42,16 +34,6 @@ > as the default. 
Otherwise, the default is > <code>unix:@RUNDIR@/ovnsb_db.sock</code>. > </dd> > - <dt><code>--ddlog-record=<var>file</var></code></dt> > - <dd> > - This option is for <code>ovn-north-ddlog</code> only. It causes the > - daemon to record the initial database state and later changes to > - <var>file</var> in the text-based DDlog command format. The > - <code>ovn_northd_cli</code> program can later replay these changes for > - debugging purposes. This option has a performance impact. See > - <code>debugging-ddlog.rst</code> in the OVN documentation for more > - details. > - </dd> > <dt><code>--dry-run</code></dt> > <dd> > <p> > @@ -61,12 +43,6 @@ > <code>pause</code> command, under <code>Runtime Management > Commands</code> below. > </p> > - > - <p> > - For <code>ovn-northd-ddlog</code>, one could use this option with > - <code>--ddlog-record</code> to generate a replay log without > - restarting a process or disturbing a running system. > - </p> > </dd> > <dt><code>n-threads N</code></dt> > <dd> > @@ -85,10 +61,6 @@ > If N is more than 256, then N is set to 256, parallelization is > enabled (with 256 threads) and a warning is logged. > </p> > - > - <p> > - ovn-northd-ddlog does not support this option. > - </p> > </dd> > </dl> > <p> > @@ -241,32 +213,6 @@ > </dl> > </p> > > - <p> > - Only <code>ovn-northd-ddlog</code> supports the following commands: > - </p> > - > - <dl> > - <dt><code>enable-cpu-profiling</code></dt> > - <dt><code>disable-cpu-profiling</code></dt> > - <dd> > - Enables or disables profiling of CPU time used by the DDlog engine. > - When CPU profiling is enabled, the <code>profile</code> command (see > - below) will include DDlog CPU usage statistics in its output. Enabling > - CPU profiling will slow <code>ovn-northd-ddlog</code>. Disabling CPU > - profiling does not clear any previously recorded statistics. > - </dd> > - > - <dt><code>profile</code></dt> > - <dd> > - Outputs a profile of the current and peak sizes of arrangements inside > - DDlog. 
This profiling data can be useful for optimizing DDlog code. > - If CPU profiling was previously enabled (even if it was later > - disabled), the output also includes a CPU time profile. See > - <code>Profiling</code> inside the tutorial in the DDlog repository for > - an introduction to profiling DDlog. > - </dd> > - </dl> > - > <h1>Active-Standby for High Availability</h1> > <p> > You may run <code>ovn-northd</code> more than once in an OVN deployment. > diff --git a/northd/ovn-sb.dlopts b/northd/ovn-sb.dlopts > deleted file mode 100644 > index 99b65f1019..0000000000 > --- a/northd/ovn-sb.dlopts > +++ /dev/null > @@ -1,34 +0,0 @@ > ---intern-strings > --o Address_Set > --o BFD > --o DHCP_Options > --o DHCPv6_Options > --o DNS > --o Datapath_Binding > --o FDB > --o Gateway_Chassis > --o HA_Chassis > --o HA_Chassis_Group > --o IP_Multicast > --o Load_Balancer > --o Logical_DP_Group > --o MAC_Binding > --o Meter > --o Meter_Band > --o Multicast_Group > --o Port_Binding > --o Port_Group > --o RBAC_Permission > --o RBAC_Role > --o SB_Global > --o Service_Monitor > ---output-only Logical_Flow > ---ro IP_Multicast.seq_no > ---ro Port_Binding.chassis > ---ro Port_Binding.encap > ---ro Port_Binding.virtual_parent > ---ro SB_Global.connections > ---ro SB_Global.external_ids > ---ro SB_Global.ssl > ---ro Service_Monitor.status > ---intern-table Service_Monitor > diff --git a/northd/ovn.dl b/northd/ovn.dl > deleted file mode 100644 > index 3585eb3dc2..0000000000 > --- a/northd/ovn.dl > +++ /dev/null > @@ -1,387 +0,0 @@ > -/* > - * Licensed under the Apache License, Version 2.0 (the "License"); > - * you may not use this file except in compliance with the License. 
> - * You may obtain a copy of the License at: > - * > - * http://www.apache.org/licenses/LICENSE-2.0 > - * > - * Unless required by applicable law or agreed to in writing, software > - * distributed under the License is distributed on an "AS IS" BASIS, > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > - * See the License for the specific language governing permissions and > - * limitations under the License. > - */ > - > -import ovsdb > -import bitwise > - > -/* Logical port is enabled if it does not have an enabled flag or the flag is true */ > -function is_enabled(s: Option<bool>): bool = { > - s != Some{false} > -} > - > -/* > - * Ethernet addresses > - */ > -typedef eth_addr = EthAddr { > - ha: bit<48> // In host byte order, e.g. ha[40] is the multicast bit. > -} > - > -function to_string(addr: eth_addr): string { > - eth_addr2string(addr) > -} > -extern function eth_addr_from_string(s: string): Option<eth_addr> > -extern function scan_eth_addr(s: string): Option<eth_addr> > -extern function scan_eth_addr_prefix(s: string): Option<eth_addr> > -function eth_addr_zero(): eth_addr { EthAddr{0} } > -function eth_addr_pseudorandom(seed: uuid, variant: bit<16>) : eth_addr { > - EthAddr{hash64(seed ++ variant) as bit<48>}.mark_random() > -} > -function mark_random(ea: eth_addr): eth_addr { EthAddr{ea.ha & ~(1 << 40) | (1 << 41)} } > - > -function to_eui64(ea: eth_addr): bit<64> { > - var ha = ea.ha as u64; > - (((ha & 64'hffffff000000) << 16) | 64'hfffe000000 | (ha & 64'hffffff)) ^ (1 << 57) > -} > - > -extern function eth_addr2string(addr: eth_addr): string > - > -/* > - * IPv4 addresses > - */ > - > -typedef in_addr = InAddr { > - a: bit<32> // In host byte order. 
> -} > - > -extern function ip_parse(s: string): Option<in_addr> > -extern function ip_parse_masked(s: string): Either<string/*err*/, (in_addr/*host_ip*/, in_addr/*mask*/)> > -extern function ip_parse_cidr(s: string): Either<string/*err*/, (in_addr/*ip*/, bit<32>/*plen*/)> > -extern function scan_static_dynamic_ip(s: string): Option<in_addr> > -function ip_create_mask(plen: bit<32>): in_addr { InAddr{(64'hffffffff << (32 - plen))[31:0]} } > - > -function to_string(ip: in_addr): string = { > - "${ip.a >> 24}.${(ip.a >> 16) & 'hff}.${(ip.a >> 8) & 'hff}.${ip.a & 'hff}" > -} > - > -function is_cidr(netmask: in_addr): bool { var x = ~netmask.a; (x & (x + 1)) == 0 } > -function is_local_multicast(ip: in_addr): bool { (ip.a & 32'hffffff00) == 32'he0000000 } > -function is_zero(a: in_addr): bool { a.a == 0 } > -function is_all_ones(a: in_addr): bool { a.a == 32'hffffffff } > -function cidr_bits(ip: in_addr): Option<bit<8>> { > - if (ip.is_cidr()) { > - Some{32 - ip.a.trailing_zeros() as u8} > - } else { > - None > - } > -} > - > -function network(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a & mask.a} } > -function host(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a & ~mask.a} } > -function bcast(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a | ~mask.a} } > - > -/* True if both 'ips' are in the same network as defined by netmask 'mask', > - * false otherwise. */ > -function same_network(ips: (in_addr, in_addr), mask: in_addr): bool { > - ((ips.0.a ^ ips.1.a) & mask.a) == 0 > -} > - > -/* > - * parse IPv4 address list of the form: > - * "10.0.0.4 10.0.0.10 10.0.0.20..10.0.0.50 10.0.0.100..10.0.0.110" > - */ > -extern function parse_ip_list(ips: string): Either<string, Vec<(in_addr, Option<in_addr>)>> > - > -/* > - * IPv6 addresses > - */ > -typedef in6_addr = In6Addr { > - aaaa: bit<128> // In host byte order. 
> -} > - > -extern function ipv6_parse(s: string): Option<in6_addr> > -extern function ipv6_parse_masked(s: string): Either<string/*err*/, (in6_addr/*ip*/, in6_addr/*mask*/)> > -extern function ipv6_parse_cidr(s: string): Either<string/*err*/, (in6_addr/*ip*/, bit<32>/*plen*/)> > - > -// Return IPv6 link local address for the given 'ea'. > -function to_ipv6_lla(ea: eth_addr): in6_addr { > - In6Addr{(128'hfe80 << 112) | (ea.to_eui64() as u128)} > -} > - > -// Returns IPv6 EUI64 address for 'ea' with the given 'prefix'. > -function to_ipv6_eui64(ea: eth_addr, prefix: in6_addr): in6_addr { > - In6Addr{(prefix.aaaa & ~128'hffffffffffffffff) | (ea.to_eui64() as u128)} > -} > - > -function ipv6_create_mask(plen: bit<32>): in6_addr { > - if (plen == 0) { > - In6Addr{0} > - } else { > - var shift = max(0, 128 - plen); > - In6Addr{128'hffffffffffffffffffffffffffffffff << shift} > - } > -} > - > -function is_zero(a: in6_addr): bool { a.aaaa == 0 } > -function is_all_ones(a: in6_addr): bool { a.aaaa == 128'hffffffffffffffffffffffffffffffff } > -function is_lla(a: in6_addr): bool { (a.aaaa >> 64) == 128'hfe80000000000000 } > -function is_all_hosts(a: in6_addr): bool { a.aaaa == 128'hff020000000000000000000000000001 } > -function is_cidr(netmask: in6_addr): bool { var x = ~netmask.aaaa; (x & (x + 1)) == 0 } > -function is_multicast(a: in6_addr): bool { (a.aaaa >> 120) == 128'hff } > -function is_routable_multicast(a: in6_addr): bool { > - a.is_multicast() and match ((a.aaaa >> 112) as u8 & 8'hf) { > - 0 -> false, > - 1 -> false, > - 2 -> false, > - 3 -> false, > - 15 -> false, > - _ -> true > - } > -} > - > -extern function string_mapped(addr: in6_addr): string > - > -function network(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & mask.aaaa} } > -function host(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & ~mask.aaaa} } > -function solicited_node(ip6: in6_addr): in6_addr { > - In6Addr{(ip6.aaaa & 128'hffffff) | 
128'hff02_0000_0000_0000_0000_0001_ff00_0000} > -} > - > -/* True if both 'ips' are in the same network as defined by netmask 'mask', > - * false otherwise. */ > -function same_network(ips: (in6_addr, in6_addr), mask: in6_addr): bool { > - ips.0.network(mask) == ips.1.network(mask) > -} > - > -function multicast_to_ethernet(ip6: in6_addr): eth_addr { > - EthAddr{48'h333300000000 | (ip6.aaaa as bit<48> & 48'hffffffff)} > -} > - > -function cidr_bits(ip6: in6_addr): Option<bit<8>> { > - if (ip6.is_cidr()) { > - Some{128 - ip6.aaaa.trailing_zeros() as u8} > - } else { > - None > - } > -} > - > -function to_string(addr: in6_addr): string { inet6_ntop(addr) } > -extern function inet6_ntop(addr: in6_addr): string > - > -/* > - * IPv4 | IPv6 addresses > - */ > - > -typedef v46_ip = IPv4 { ipv4: in_addr } | IPv6 { ipv6: in6_addr } > - > -function ip46_parse_cidr(s: string) : Option<(v46_ip, bit<32>)> = { > - match (ip_parse_cidr(s)) { > - Right{(ipv4, plen)} -> return Some{(IPv4{ipv4}, plen)}, > - _ -> () > - }; > - match (ipv6_parse_cidr(s)) { > - Right{(ipv6, plen)} -> return Some{(IPv6{ipv6}, plen)}, > - _ -> () > - }; > - None > -} > -function ip46_parse_masked(s: string) : Option<(v46_ip, v46_ip)> = { > - match (ip_parse_masked(s)) { > - Right{(ipv4, mask)} -> return Some{(IPv4{ipv4}, IPv4{mask})}, > - _ -> () > - }; > - match (ipv6_parse_masked(s)) { > - Right{(ipv6, mask)} -> return Some{(IPv6{ipv6}, IPv6{mask})}, > - _ -> () > - }; > - None > -} > -function ip46_parse(s: string) : Option<v46_ip> = { > - match (ip_parse(s)) { > - Some{ipv4} -> return Some{IPv4{ipv4}}, > - _ -> () > - }; > - match (ipv6_parse(s)) { > - Some{ipv6} -> return Some{IPv6{ipv6}}, > - _ -> () > - }; > - None > -} > -function to_string(ip46: v46_ip) : string = { > - match (ip46) { > - IPv4{ipv4} -> "${ipv4}", > - IPv6{ipv6} -> "${ipv6}" > - } > -} > -function to_bracketed_string(ip46: v46_ip) : string = { > - match (ip46) { > - IPv4{ipv4} -> "${ipv4}", > - IPv6{ipv6} -> "[${ipv6}]" > - } > 
-} > - > -function network(ip46: v46_ip, plen: bit<32>) : v46_ip { > - match (ip46) { > - IPv4{ipv4} -> IPv4{InAddr{ipv4.a & ip_create_mask(plen).a}}, > - IPv6{ipv6} -> IPv6{In6Addr{ipv6.aaaa & ipv6_create_mask(plen).aaaa}} > - } > -} > - > -function is_all_ones(ip46: v46_ip) : bool { > - match (ip46) { > - IPv4{ipv4} -> ipv4.is_all_ones(), > - IPv6{ipv6} -> ipv6.is_all_ones() > - } > -} > - > -function cidr_bits(ip46: v46_ip) : Option<bit<8>> { > - match (ip46) { > - IPv4{ipv4} -> ipv4.cidr_bits(), > - IPv6{ipv6} -> ipv6.cidr_bits() > - } > -} > - > -function ipX(ip46: v46_ip) : string { > - match (ip46) { > - IPv4{_} -> "ip4", > - IPv6{_} -> "ip6" > - } > -} > - > -function xxreg(ip46: v46_ip) : string { > - match (ip46) { > - IPv4{_} -> "", > - IPv6{_} -> "xx" > - } > -} > - > -/* > - * CIDR-masked IPv4 address > - */ > - > -typedef ipv4_netaddr = IPV4NetAddr { > - addr: in_addr, /* 192.168.10.123 */ > - plen: bit<32> /* CIDR Prefix: 24. */ > -} > - > -function netmask(na: ipv4_netaddr): in_addr { ip_create_mask(na.plen) } > -function bcast(na: ipv4_netaddr): in_addr { na.addr.bcast(na.netmask()) } > - > -/* Returns the network (with the host bits zeroed) > - * or the host (with the network bits zeroed). */ > -function network(na: ipv4_netaddr): in_addr { na.addr.network(na.netmask()) } > -function host(na: ipv4_netaddr): in_addr { na.addr.host(na.netmask()) } > - > -/* Match on the host, if the host part is nonzero, or on the network > - * otherwise. */ > -function match_host_or_network(na: ipv4_netaddr): string { > - if (na.plen < 32 and na.host().is_zero()) { > - "${na.addr}/${na.plen}" > - } else { > - "${na.addr}" > - } > -} > - > -/* Match on the network. 
*/ > -function match_network(na: ipv4_netaddr): string { > - if (na.plen < 32) { > - "${na.network()}/${na.plen}" > - } else { > - "${na.addr}" > - } > -} > - > -/* > - * CIDR-masked IPv6 address > - */ > - > -typedef ipv6_netaddr = IPV6NetAddr { > - addr: in6_addr, /* fc00::1 */ > - plen: bit<32> /* CIDR Prefix: 64 */ > -} > - > -function netmask(na: ipv6_netaddr): in6_addr { ipv6_create_mask(na.plen) } > - > -/* Returns the network (with the host bits zeroed). > - * or the host (with the network bits zeroed). */ > -function network(na: ipv6_netaddr): in6_addr { na.addr.network(na.netmask()) } > -function host(na: ipv6_netaddr): in6_addr { na.addr.host(na.netmask()) } > - > -function solicited_node(na: ipv6_netaddr): in6_addr { na.addr.solicited_node() } > - > -function is_lla(na: ipv6_netaddr): bool { na.network().is_lla() } > - > -/* Match on the network. */ > -function match_network(na: ipv6_netaddr): string { > - if (na.plen < 128) { > - "${na.network()}/${na.plen}" > - } else { > - "${na.addr}" > - } > -} > - > -/* > - * Set of addresses associated with a logical port. > - * > - * A logical port always has one Ethernet address, plus any number of IPv4 and IPv6 addresses. 
> - */
> -typedef lport_addresses = LPortAddress {
> -    ea: eth_addr,
> -    ipv4_addrs: Vec<ipv4_netaddr>,
> -    ipv6_addrs: Vec<ipv6_netaddr>
> -}
> -
> -function to_string(addr: lport_addresses): string = {
> -    var addrs = ["${addr.ea}"];
> -    for (ip4 in addr.ipv4_addrs) {
> -        addrs.push("${ip4.addr}")
> -    };
> -
> -    for (ip6 in addr.ipv6_addrs) {
> -        addrs.push("${ip6.addr}")
> -    };
> -
> -    string_join(addrs, " ")
> -}
> -
> -/*
> - * Packet header lengths
> - */
> -function eTH_HEADER_LEN(): integer = 14
> -function vLAN_HEADER_LEN(): integer = 4
> -function vLAN_ETH_HEADER_LEN(): integer = eTH_HEADER_LEN() + vLAN_HEADER_LEN()
> -
> -/*
> - * Logging
> - */
> -extern function warn(msg: string): ()
> -extern function abort(msg: string): ()
> -
> -/*
> - * C functions imported from OVN
> - */
> -extern function is_dynamic_lsp_address(addr: string): bool
> -extern function extract_lsp_addresses(address: string): Option<lport_addresses>
> -extern function extract_addresses(address: string): Option<lport_addresses>
> -extern function extract_lrp_networks(mac: string, networks: Set<string>): Option<lport_addresses>
> -extern function extract_ip_addresses(address: string): Option<lport_addresses>
> -
> -extern function split_addresses(addr: string): (Set<string>, Set<string>)
> -
> -extern function ovn_internal_version(): string
> -
> -/*
> - * C functions imported from OVS
> - */
> -extern function json_string_escape(s: string): string
> -
> -function json_escape(s: string): string {
> -    s.json_string_escape()
> -}
> -function json_escape(s: istring): string {
> -    s.ival().json_string_escape()
> -}
> -
> -/* For a 'key' of the form "IP:port" or just "IP", returns
> - * (v46_ip, port) tuple.
> - */
> -extern function ip_address_and_port_from_lb_key(k: string): Option<(v46_ip, bit<16>)>
> diff --git a/northd/ovn.rs b/northd/ovn.rs
> deleted file mode 100644
> index 746884071e..0000000000
> --- a/northd/ovn.rs
> +++ /dev/null
> @@ -1,750 +0,0 @@
> -/*
> - * Licensed under the Apache License, Version 2.0 (the "License");
> - * you may not use this file except in compliance with the License.
> - * You may obtain a copy of the License at:
> - *
> - *     http://www.apache.org/licenses/LICENSE-2.0
> - *
> - * Unless required by applicable law or agreed to in writing, software
> - * distributed under the License is distributed on an "AS IS" BASIS,
> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> - * See the License for the specific language governing permissions and
> - * limitations under the License.
> - */
> -
> -use ::nom::*;
> -use ::differential_datalog::record;
> -use ::std::ffi;
> -use ::std::ptr;
> -use ::std::default;
> -use ::std::process;
> -use ::std::os::raw;
> -use ::libc;
> -
> -pub fn warn(msg: &String) {
> -    warn_(msg.as_str())
> -}
> -
> -pub fn warn_(msg: &str) {
> -    unsafe {
> -        ddlog_warn(ffi::CString::new(msg).unwrap().as_ptr());
> -    }
> -}
> -
> -pub fn err_(msg: &str) {
> -    unsafe {
> -        ddlog_err(ffi::CString::new(msg).unwrap().as_ptr());
> -    }
> -}
> -
> -pub fn abort(msg: &String) {
> -    abort_(msg.as_str())
> -}
> -
> -fn abort_(msg: &str) {
> -    err_(format!("DDlog error: {}.", msg).as_ref());
> -    process::abort();
> -}
> -
> -const ETH_ADDR_SIZE: usize = 6;
> -const IN6_ADDR_SIZE: usize = 16;
> -const INET6_ADDRSTRLEN: usize = 46;
> -const INET_ADDRSTRLEN: usize = 16;
> -const ETH_ADDR_STRLEN: usize = 17;
> -
> -const AF_INET: usize = 2;
> -const AF_INET6: usize = 10;
> -
> -/* Implementation for externs declared in ovn.dl */
> -
> -#[repr(C)]
> -#[derive(Default, PartialEq, Eq, PartialOrd, Ord, Clone, Hash, Serialize, Deserialize, Debug, IntoRecord, Mutator)]
> -pub struct eth_addr_c {
> -    x: [u8; ETH_ADDR_SIZE]
> -}
> -
> -impl eth_addr_c {
> -    pub fn from_ddlog(d: &eth_addr) -> Self {
> -        eth_addr_c {
> -            x: [(d.ha >> 40) as u8,
> -                (d.ha >> 32) as u8,
> -                (d.ha >> 24) as u8,
> -                (d.ha >> 16) as u8,
> -                (d.ha >> 8) as u8,
> -                d.ha as u8]
> -        }
> -    }
> -    pub fn to_ddlog(&self) -> eth_addr {
> -        let ea0 = u16::from_be_bytes([self.x[0], self.x[1]]) as u64;
> -        let ea1 = u16::from_be_bytes([self.x[2], self.x[3]]) as u64;
> -        let ea2 = u16::from_be_bytes([self.x[4], self.x[5]]) as u64;
> -        eth_addr { ha: (ea0 << 32) | (ea1 << 16) | ea2 }
> -    }
> -}
> -
> -pub fn eth_addr2string(addr: &eth_addr) -> String {
> -    let c = eth_addr_c::from_ddlog(addr);
> -    format!("{:02x}:{:02x}:{:02x}:{:02x}:{:02x}:{:02x}",
> -            c.x[0], c.x[1], c.x[2], c.x[3], c.x[4], c.x[5])
> -}
> -
> -pub fn eth_addr_from_string(s: &String) -> ddlog_std::Option<eth_addr> {
> -    let mut ea: eth_addr_c = Default::default();
> -    unsafe {
> -        if ovs::eth_addr_from_string(string2cstr(s).as_ptr(), &mut ea as *mut eth_addr_c) {
> -            ddlog_std::Option::Some{x: ea.to_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -#[repr(C)]
> -struct in6_addr_c {
> -    bytes: [u8; 16]
> -}
> -
> -impl Default for in6_addr_c {
> -    fn default() -> Self {
> -        in6_addr_c {
> -            bytes: [0; 16]
> -        }
> -    }
> -}
> -
> -impl in6_addr_c {
> -    pub fn from_ddlog(d: &in6_addr) -> Self {
> -        in6_addr_c{bytes: d.aaaa.to_be_bytes()}
> -    }
> -    pub fn to_ddlog(&self) -> in6_addr {
> -        in6_addr{aaaa: u128::from_be_bytes(self.bytes)}
> -    }
> -}
> -
> -pub fn string_mapped(addr: &in6_addr) -> String {
> -    let addr = in6_addr_c::from_ddlog(addr);
> -    let mut addr_str = [0 as i8; INET6_ADDRSTRLEN];
> -    unsafe {
> -        ovs::ipv6_string_mapped(&mut addr_str[0] as *mut raw::c_char, &addr as *const in6_addr_c);
> -        cstr2string(&addr_str as *const raw::c_char)
> -    }
> -}
> -
> -pub fn json_string_escape(s: &String) -> String {
> -    let mut ds = ovs_ds::new();
> -    unsafe {
> -        ovs::json_string_escape(ffi::CString::new(s.as_str()).unwrap().as_ptr() as *const raw::c_char,
> -                                &mut ds as *mut ovs_ds);
> -    };
> -    unsafe{ds.into_string()}
> -}
> -
> -pub fn extract_lsp_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
> -    unsafe {
> -        let mut laddrs: lport_addresses_c = Default::default();
> -        if ovn_c::extract_lsp_addresses(string2cstr(address).as_ptr(),
> -                                        &mut laddrs as *mut lport_addresses_c) {
> -            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn extract_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
> -    unsafe {
> -        let mut laddrs: lport_addresses_c = Default::default();
> -        let mut ofs: raw::c_int = 0;
> -        if ovn_c::extract_addresses(string2cstr(address).as_ptr(),
> -                                    &mut laddrs as *mut lport_addresses_c,
> -                                    &mut ofs as *mut raw::c_int) {
> -            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn extract_lrp_networks(mac: &String, networks: &ddlog_std::Set<String>) -> ddlog_std::Option<lport_addresses>
> -{
> -    unsafe {
> -        let mut laddrs: lport_addresses_c = Default::default();
> -        let mut networks_cstrs = Vec::with_capacity(networks.x.len());
> -        let mut networks_ptrs = Vec::with_capacity(networks.x.len());
> -        for net in networks.x.iter() {
> -            networks_cstrs.push(string2cstr(net));
> -            networks_ptrs.push(networks_cstrs.last().unwrap().as_ptr());
> -        };
> -        if ovn_c::extract_lrp_networks__(string2cstr(mac).as_ptr(), networks_ptrs.as_ptr() as *const *const raw::c_char,
> -                                         networks_ptrs.len(), &mut laddrs as *mut lport_addresses_c) {
> -            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn extract_ip_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
> -    unsafe {
> -        let mut laddrs: lport_addresses_c = Default::default();
> -        if ovn_c::extract_ip_addresses(string2cstr(address).as_ptr(),
> -                                       &mut laddrs as *mut lport_addresses_c) {
> -            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn ovn_internal_version() -> String {
> -    unsafe {
> -        let s = ovn_c::ovn_get_internal_version();
> -        let retval = cstr2string(s);
> -        free(s as *mut raw::c_void);
> -        retval
> -    }
> -}
> -
> -pub fn ipv6_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, in6_addr>>
> -{
> -    unsafe {
> -        let mut ip: in6_addr_c = Default::default();
> -        let mut mask: in6_addr_c = Default::default();
> -        let err = ovs::ipv6_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut mask as *mut in6_addr_c);
> -        if (err != ptr::null_mut()) {
> -            let errstr = cstr2string(err);
> -            free(err as *mut raw::c_void);
> -            ddlog_std::Either::Left{l: errstr}
> -        } else {
> -            ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), mask.to_ddlog())}
> -        }
> -    }
> -}
> -
> -pub fn ipv6_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, u32>>
> -{
> -    unsafe {
> -        let mut ip: in6_addr_c = Default::default();
> -        let mut plen: raw::c_uint = 0;
> -        let err = ovs::ipv6_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut plen as *mut raw::c_uint);
> -        if (err != ptr::null_mut()) {
> -            let errstr = cstr2string(err);
> -            free(err as *mut raw::c_void);
> -            ddlog_std::Either::Left{l: errstr}
> -        } else {
> -            ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), plen as u32)}
> -        }
> -    }
> -}
> -
> -pub fn ipv6_parse(s: &String) -> ddlog_std::Option<in6_addr>
> -{
> -    unsafe {
> -        let mut ip: in6_addr_c = Default::default();
> -        let res = ovs::ipv6_parse(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c);
> -        if (res) {
> -            ddlog_std::Option::Some{x: ip.to_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub type ovs_be32 = u32;
> -
> -impl in_addr {
> -    pub fn from_be32(nl: ovs_be32) -> in_addr {
> -        in_addr{a: ddlog_std::ntohl(&nl)}
> -    }
> -    pub fn to_be32(&self) -> ovs_be32 {
> -        ddlog_std::htonl(&self.a)
> -    }
> -}
> -
> -pub fn ip_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, in_addr>>
> -{
> -    unsafe {
> -        let mut ip: ovs_be32 = 0;
> -        let mut mask: ovs_be32 = 0;
> -        let err = ovs::ip_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut mask as *mut ovs_be32);
> -        if (err != ptr::null_mut()) {
> -            let errstr = cstr2string(err);
> -            free(err as *mut raw::c_void);
> -            ddlog_std::Either::Left{l: errstr}
> -        } else {
> -            ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip),
> -                                                          in_addr::from_be32(mask))}
> -        }
> -    }
> -}
> -
> -pub fn ip_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, u32>>
> -{
> -    unsafe {
> -        let mut ip: ovs_be32 = 0;
> -        let mut plen: raw::c_uint = 0;
> -        let err = ovs::ip_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut plen as *mut raw::c_uint);
> -        if (err != ptr::null_mut()) {
> -            let errstr = cstr2string(err);
> -            free(err as *mut raw::c_void);
> -            ddlog_std::Either::Left{l: errstr}
> -        } else {
> -            ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip), plen as u32)}
> -        }
> -    }
> -}
> -
> -pub fn ip_parse(s: &String) -> ddlog_std::Option<in_addr>
> -{
> -    unsafe {
> -        let mut ip: ovs_be32 = 0;
> -        if (ovs::ip_parse(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32)) {
> -            ddlog_std::Option::Some{x: in_addr::from_be32(ip)}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn is_dynamic_lsp_address(address: &String) -> bool {
> -    unsafe {
> -        ovn_c::is_dynamic_lsp_address(string2cstr(address).as_ptr())
> -    }
> -}
> -
> -pub fn split_addresses(addresses: &String) -> ddlog_std::tuple2<ddlog_std::Set<String>, ddlog_std::Set<String>> {
> -    let mut ip4_addrs = ovs_svec::new();
> -    let mut ip6_addrs = ovs_svec::new();
> -    unsafe {
> -        ovn_c::split_addresses(string2cstr(addresses).as_ptr(), &mut ip4_addrs as *mut ovs_svec, &mut ip6_addrs as *mut ovs_svec);
> -        ddlog_std::tuple2(ip4_addrs.into_strings(), ip6_addrs.into_strings())
> -    }
> -}
> -
> -pub fn scan_eth_addr(s: &String) -> ddlog_std::Option<eth_addr> {
> -    let mut ea: eth_addr_c = Default::default();
> -    unsafe {
> -        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx:%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char,
> -                         &mut ea.x[0] as *mut u8, &mut ea.x[1] as *mut u8,
> -                         &mut ea.x[2] as *mut u8, &mut ea.x[3] as *mut u8,
> -                         &mut ea.x[4] as *mut u8, &mut ea.x[5] as *mut u8)
> -        {
> -            ddlog_std::Option::Some{x: ea.to_ddlog()}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn scan_eth_addr_prefix(s: &String) -> ddlog_std::Option<eth_addr> {
> -    let mut b2: u8 = 0;
> -    let mut b1: u8 = 0;
> -    let mut b0: u8 = 0;
> -    unsafe {
> -        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char,
> -                         &mut b2 as *mut u8, &mut b1 as *mut u8, &mut b0 as *mut u8)
> -        {
> -            ddlog_std::Option::Some{x: eth_addr{ha: ((b2 as u64) << 40) | ((b1 as u64) << 32) | ((b0 as u64) << 24)} }
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn scan_static_dynamic_ip(s: &String) -> ddlog_std::Option<in_addr> {
> -    let mut ip0: u8 = 0;
> -    let mut ip1: u8 = 0;
> -    let mut ip2: u8 = 0;
> -    let mut ip3: u8 = 0;
> -    let mut n: raw::c_uint = 0;
> -    unsafe {
> -        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"dynamic %hhu.%hhu.%hhu.%hhu%n\0".as_ptr() as *const raw::c_char,
> -                         &mut ip0 as *mut u8,
> -                         &mut ip1 as *mut u8,
> -                         &mut ip2 as *mut u8,
> -                         &mut ip3 as *mut u8,
> -                         &mut n) && s.len() == (n as usize)
> -        {
> -            let a0 = (ip0 as u32) << 24;
> -            let a1 = (ip1 as u32) << 16;
> -            let a2 = (ip2 as u32) << 8;
> -            let a3 = ip3 as u32;
> -            ddlog_std::Option::Some{x: in_addr{a: a0 | a1 | a2 | a3}}
> -        } else {
> -            ddlog_std::Option::None
> -        }
> -    }
> -}
> -
> -pub fn ip_address_and_port_from_lb_key(k: &String) ->
> -    ddlog_std::Option<ddlog_std::tuple2<v46_ip, u16>>
> -{
> -    unsafe {
> -        let mut ip_address: *mut raw::c_char = ptr::null_mut();
> -        let mut port: libc::uint16_t = 0;
> -        let mut addr_family: raw::c_int = 0;
> -
> -        ovn_c::ip_address_and_port_from_lb_key(string2cstr(k).as_ptr(), &mut ip_address as *mut *mut raw::c_char,
> -                                               &mut port as *mut libc::uint16_t, &mut addr_family as *mut raw::c_int);
> -        if (ip_address != ptr::null_mut()) {
> -            match (ip46_parse(&cstr2string(ip_address))) {
> -                ddlog_std::Option::Some{x: ip46} => {
> -                    let res = ddlog_std::tuple2(ip46, port as u16);
> -                    free(ip_address as *mut raw::c_void);
> -                    return ddlog_std::Option::Some{x: res}
> -                },
> -                _ => ()
> -            }
> -        }
> -        ddlog_std::Option::None
> -    }
> -}
> -
> -pub fn str_to_int(s: &String, base: &u16) -> ddlog_std::Option<u64> {
> -    let mut i: raw::c_int = 0;
> -    let ok = unsafe {
> -        ovs::str_to_int(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_int)
> -    };
> -    if ok {
> -        ddlog_std::Option::Some{x: i as u64}
> -    } else {
> -        ddlog_std::Option::None
> -    }
> -}
> -
> -pub fn str_to_uint(s: &String, base: &u16) -> ddlog_std::Option<u64> {
> -    let mut i: raw::c_uint = 0;
> -    let ok = unsafe {
> -        ovs::str_to_uint(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_uint)
> -    };
> -    if ok {
> -        ddlog_std::Option::Some{x: i as u64}
> -    } else {
> -        ddlog_std::Option::None
> -    }
> -}
> -
> -pub fn inet6_ntop(addr: &in6_addr) -> String {
> -    let addr_c = in6_addr_c::from_ddlog(addr);
> -    let mut buf = [0 as i8; INET6_ADDRSTRLEN];
> -    unsafe {
> -        let res = inet_ntop(AF_INET6 as raw::c_int, &addr_c as *const in6_addr_c as *const raw::c_void,
> -                            &mut buf[0] as *mut raw::c_char, INET6_ADDRSTRLEN as libc::socklen_t);
> -        if res == ptr::null() {
> -            warn(&format!("inet_ntop({:?}) failed", *addr));
> -            "".to_owned()
> -        } else {
> -            cstr2string(&buf as *const raw::c_char)
> -        }
> -    }
> -}
> -
> -/* Internals */
> -
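The `eth_addr_c` conversions in the hunk above pack a six-byte MAC into the low 48 bits of a `u64` and back. A self-contained round-trip sketch of that packing (the helper names are mine; only the bit layout follows the removed file):

```rust
// Pack six MAC bytes into the low 48 bits of a u64 (big-endian order),
// mirroring eth_addr_c::to_ddlog from the removed ovn.rs.
fn mac_to_u64(x: [u8; 6]) -> u64 {
    let ea0 = u16::from_be_bytes([x[0], x[1]]) as u64;
    let ea1 = u16::from_be_bytes([x[2], x[3]]) as u64;
    let ea2 = u16::from_be_bytes([x[4], x[5]]) as u64;
    (ea0 << 32) | (ea1 << 16) | ea2
}

// Unpack again, mirroring eth_addr_c::from_ddlog.
fn u64_to_mac(ha: u64) -> [u8; 6] {
    [(ha >> 40) as u8, (ha >> 32) as u8, (ha >> 24) as u8,
     (ha >> 16) as u8, (ha >> 8) as u8, ha as u8]
}

fn main() {
    let mac = [0x00, 0x16, 0x3e, 0x01, 0x02, 0x03];
    let ha = mac_to_u64(mac);
    // The two conversions are inverses of each other.
    assert_eq!(u64_to_mac(ha), mac);
    println!("{:012x}", ha);
}
```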
> -unsafe fn cstr2string(s: *const raw::c_char) -> String {
> -    ffi::CStr::from_ptr(s).to_owned().into_string().
> -        unwrap_or_else(|e|{ warn(&format!("cstr2string: {}", e)); "".to_owned() })
> -}
> -
> -fn string2cstr(s: &String) -> ffi::CString {
> -    ffi::CString::new(s.as_str()).unwrap()
> -}
> -
> -/* OVS dynamic string type */
> -#[repr(C)]
> -struct ovs_ds {
> -    s: *mut raw::c_char,       /* Null-terminated string. */
> -    length: libc::size_t,      /* Bytes used, not including null terminator. */
> -    allocated: libc::size_t    /* Bytes allocated, not including null terminator. */
> -}
> -
> -impl ovs_ds {
> -    pub fn new() -> ovs_ds {
> -        ovs_ds{s: ptr::null_mut(), length: 0, allocated: 0}
> -    }
> -
> -    pub unsafe fn into_string(mut self) -> String {
> -        let res = cstr2string(ovs::ds_cstr(&self as *const ovs_ds));
> -        ovs::ds_destroy(&mut self as *mut ovs_ds);
> -        res
> -    }
> -}
> -
> -/* OVS string vector type */
> -#[repr(C)]
> -struct ovs_svec {
> -    names: *mut *mut raw::c_char,
> -    n: libc::size_t,
> -    allocated: libc::size_t
> -}
> -
> -impl ovs_svec {
> -    pub fn new() -> ovs_svec {
> -        ovs_svec{names: ptr::null_mut(), n: 0, allocated: 0}
> -    }
> -
> -    pub unsafe fn into_strings(mut self) -> ddlog_std::Set<String> {
> -        let mut res: ddlog_std::Set<String> = ddlog_std::Set::new();
> -        unsafe {
> -            for i in 0..self.n {
> -                res.insert(cstr2string(*self.names.offset(i as isize)));
> -            }
> -            ovs::svec_destroy(&mut self as *mut ovs_svec);
> -        }
> -        res
> -    }
> -}
> -
> -
> -// ovn/lib/ovn-util.h
> -#[repr(C)]
> -struct ipv4_netaddr_c {
> -    addr: libc::uint32_t,
> -    mask: libc::uint32_t,
> -    network: libc::uint32_t,
> -    plen: raw::c_uint,
> -
> -    addr_s: [raw::c_char; INET_ADDRSTRLEN + 1],    /* "192.168.10.123" */
> -    network_s: [raw::c_char; INET_ADDRSTRLEN + 1], /* "192.168.10.0" */
> -    bcast_s: [raw::c_char; INET_ADDRSTRLEN + 1]    /* "192.168.10.255" */
> -}
> -
> -impl Default for ipv4_netaddr_c {
> -    fn default() -> Self {
> -        ipv4_netaddr_c {
> -            addr: 0,
> -            mask: 0,
> -            network: 0,
> -            plen: 0,
> -            addr_s: [0; INET_ADDRSTRLEN + 1],
> -            network_s: [0; INET_ADDRSTRLEN + 1],
> -            bcast_s: [0; INET_ADDRSTRLEN + 1]
> -        }
> -    }
> -}
> -
> -impl ipv4_netaddr_c {
> -    pub fn to_ddlog(&self) -> ipv4_netaddr {
> -        ipv4_netaddr{
> -            addr: in_addr::from_be32(self.addr),
> -            plen: self.plen,
> -        }
> -    }
> -}
> -
> -#[repr(C)]
> -struct ipv6_netaddr_c {
> -    addr: in6_addr_c,      /* fc00::1 */
> -    mask: in6_addr_c,      /* ffff:ffff:ffff:ffff:: */
> -    sn_addr: in6_addr_c,   /* ff02:1:ff00::1 */
> -    network: in6_addr_c,   /* fc00:: */
> -    plen: raw::c_uint,     /* CIDR Prefix: 64 */
> -
> -    addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1],    /* "fc00::1" */
> -    sn_addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1], /* "ff02:1:ff00::1" */
> -    network_s: [raw::c_char; INET6_ADDRSTRLEN + 1]  /* "fc00::" */
> -}
> -
> -impl Default for ipv6_netaddr_c {
> -    fn default() -> Self {
> -        ipv6_netaddr_c {
> -            addr: Default::default(),
> -            mask: Default::default(),
> -            sn_addr: Default::default(),
> -            network: Default::default(),
> -            plen: 0,
> -            addr_s: [0; INET6_ADDRSTRLEN + 1],
> -            sn_addr_s: [0; INET6_ADDRSTRLEN + 1],
> -            network_s: [0; INET6_ADDRSTRLEN + 1]
> -        }
> -    }
> -}
> -
> -impl ipv6_netaddr_c {
> -    pub unsafe fn to_ddlog(&self) -> ipv6_netaddr {
> -        ipv6_netaddr{
> -            addr: in6_addr_c::to_ddlog(&self.addr),
> -            plen: self.plen
> -        }
> -    }
> -}
> -
> -
> -// ovn-util.h
> -#[repr(C)]
> -struct lport_addresses_c {
> -    ea_s: [raw::c_char; ETH_ADDR_STRLEN + 1],
> -    ea: eth_addr_c,
> -    n_ipv4_addrs: libc::size_t,
> -    ipv4_addrs: *mut ipv4_netaddr_c,
> -    n_ipv6_addrs: libc::size_t,
> -    ipv6_addrs: *mut ipv6_netaddr_c
> -}
> -
> -impl Default for lport_addresses_c {
> -    fn default() -> Self {
> -        lport_addresses_c {
> -            ea_s: [0; ETH_ADDR_STRLEN + 1],
> -            ea: Default::default(),
> -            n_ipv4_addrs: 0,
> -            ipv4_addrs: ptr::null_mut(),
> -            n_ipv6_addrs: 0,
> -            ipv6_addrs: ptr::null_mut()
> -        }
> -    }
> -}
> -
> -impl lport_addresses_c {
> -    pub unsafe fn into_ddlog(mut self) -> lport_addresses {
> -        let mut ipv4_addrs = ddlog_std::Vec::with_capacity(self.n_ipv4_addrs);
> -        for i in 0..self.n_ipv4_addrs {
> -            ipv4_addrs.push((&*self.ipv4_addrs.offset(i as isize)).to_ddlog())
> -        }
> -        let mut ipv6_addrs = ddlog_std::Vec::with_capacity(self.n_ipv6_addrs);
> -        for i in 0..self.n_ipv6_addrs {
> -            ipv6_addrs.push((&*self.ipv6_addrs.offset(i as isize)).to_ddlog())
> -        }
> -        let res = lport_addresses {
> -            ea: self.ea.to_ddlog(),
> -            ipv4_addrs: ipv4_addrs,
> -            ipv6_addrs: ipv6_addrs
> -        };
> -        ovn_c::destroy_lport_addresses(&mut self as *mut lport_addresses_c);
> -        res
> -    }
> -}
> -
> -/* functions imported from northd.c */
> -extern "C" {
> -    fn ddlog_warn(msg: *const raw::c_char);
> -    fn ddlog_err(msg: *const raw::c_char);
> -}
> -
> -/* functions imported from libovn */
> -mod ovn_c {
> -    use ::std::os::raw;
> -    use ::libc;
> -    use super::lport_addresses_c;
> -    use super::ovs_svec;
> -    use super::in6_addr_c;
> -
> -    #[link(name = "ovn")]
> -    extern "C" {
> -        // ovn/lib/ovn-util.h
> -        pub fn extract_lsp_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool;
> -        pub fn extract_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c, ofs: *mut raw::c_int) -> bool;
> -        pub fn extract_lrp_networks__(mac: *const raw::c_char, networks: *const *const raw::c_char,
> -                                      n_networks: libc::size_t, laddrs: *mut lport_addresses_c) -> bool;
> -        pub fn extract_ip_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool;
> -        pub fn destroy_lport_addresses(addrs: *mut lport_addresses_c);
> -        pub fn is_dynamic_lsp_address(address: *const raw::c_char) -> bool;
> -        pub fn split_addresses(addresses: *const raw::c_char, ip4_addrs: *mut ovs_svec, ipv6_addrs: *mut ovs_svec);
> -        pub fn ip_address_and_port_from_lb_key(key: *const raw::c_char, ip_address: *mut *mut raw::c_char,
> -                                               port: *mut libc::uint16_t, addr_family: *mut raw::c_int);
> -        pub fn ovn_get_internal_version() -> *mut raw::c_char;
> -    }
> -}
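`into_ddlog` above walks a C-owned (pointer, length) array one element at a time with `offset()`; in standalone Rust the same conversion is usually written with `slice::from_raw_parts`. A minimal sketch with a plain `u32` element type standing in for the OVN structs:

```rust
/// Copy a C-style (pointer, length) array into an owned Vec — the
/// idiomatic form of the offset() loop used by into_ddlog above.
unsafe fn c_array_to_vec(ptr: *const u32, len: usize) -> Vec<u32> {
    if ptr.is_null() || len == 0 {
        return Vec::new();
    }
    // Safety: caller guarantees ptr points at len valid, initialized u32s.
    std::slice::from_raw_parts(ptr, len).to_vec()
}

fn main() {
    let backing = [10u32, 20, 30];
    let v = unsafe { c_array_to_vec(backing.as_ptr(), backing.len()) };
    assert_eq!(v, vec![10, 20, 30]);
    assert!(unsafe { c_array_to_vec(std::ptr::null(), 0) }.is_empty());
    println!("{:?}", v);
}
```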
> -
> -mod ovs {
> -    use ::std::os::raw;
> -    use ::libc;
> -    use super::in6_addr_c;
> -    use super::ovs_be32;
> -    use super::ovs_ds;
> -    use super::eth_addr_c;
> -    use super::ovs_svec;
> -
> -    /* functions imported from libopenvswitch */
> -    #[link(name = "openvswitch")]
> -    extern "C" {
> -        // lib/packets.h
> -        pub fn ipv6_string_mapped(addr_str: *mut raw::c_char, addr: *const in6_addr_c) -> *const raw::c_char;
> -        pub fn ipv6_parse_masked(s: *const raw::c_char, ip: *mut in6_addr_c, mask: *mut in6_addr_c) -> *mut raw::c_char;
> -        pub fn ipv6_parse_cidr(s: *const raw::c_char, ip: *mut in6_addr_c, plen: *mut raw::c_uint) -> *mut raw::c_char;
> -        pub fn ipv6_parse(s: *const raw::c_char, ip: *mut in6_addr_c) -> bool;
> -        pub fn ip_parse_masked(s: *const raw::c_char, ip: *mut ovs_be32, mask: *mut ovs_be32) -> *mut raw::c_char;
> -        pub fn ip_parse_cidr(s: *const raw::c_char, ip: *mut ovs_be32, plen: *mut raw::c_uint) -> *mut raw::c_char;
> -        pub fn ip_parse(s: *const raw::c_char, ip: *mut ovs_be32) -> bool;
> -        pub fn eth_addr_from_string(s: *const raw::c_char, ea: *mut eth_addr_c) -> bool;
> -
> -        // include/openvswitch/json.h
> -        pub fn json_string_escape(str: *const raw::c_char, out: *mut ovs_ds);
> -        // openvswitch/dynamic-string.h
> -        pub fn ds_destroy(ds: *mut ovs_ds);
> -        pub fn ds_cstr(ds: *const ovs_ds) -> *const raw::c_char;
> -        pub fn svec_destroy(v: *mut ovs_svec);
> -        pub fn ovs_scan(s: *const raw::c_char, format: *const raw::c_char, ...) -> bool;
> -        pub fn str_to_int(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_int) -> bool;
> -        pub fn str_to_uint(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_uint) -> bool;
> -    }
> -}
> -
> -/* functions imported from libc */
> -#[link(name = "c")]
> -extern "C" {
> -    fn free(ptr: *mut raw::c_void);
> -}
> -
> -/* functions imported from arp/inet6 */
> -extern "C" {
> -    fn inet_ntop(af: raw::c_int, cp: *const raw::c_void,
> -                 buf: *mut raw::c_char, len: libc::socklen_t) -> *const raw::c_char;
> -}
> -
> -/*
> - * Parse IPv4 address list.
> - */
> -
> -named!(parse_spaces<nom::types::CompleteStr, ()>,
> -    do_parse!(many1!(one_of!(&" \t\n\r\x0c\x0b")) >> (()) )
> -);
> -
> -named!(parse_opt_spaces<nom::types::CompleteStr, ()>,
> -    do_parse!(opt!(parse_spaces) >> (()))
> -);
> -
> -named!(parse_ipv4_range<nom::types::CompleteStr, (String, Option<String>)>,
> -    do_parse!(addr1: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (nom::types::CompleteStr(""))) | peek!(tag!("..")) | tag!(" ") )) >>
> -              parse_opt_spaces >>
> -              addr2: opt!(do_parse!(tag!("..") >>
> -                                    parse_opt_spaces >>
> -                                    addr2: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (' ')) | char!(' ')) ) >>
> -                                    (addr2) )) >>
> -              parse_opt_spaces >>
> -              (addr1.0.into_iter().collect(), addr2.map(|x|x.0.into_iter().collect())) )
> -);
> -
> -named!(parse_ipv4_address_list<nom::types::CompleteStr, Vec<(String, Option<String>)>>,
> -    do_parse!(parse_opt_spaces >>
> -              ranges: many0!(parse_ipv4_range) >>
> -              (ranges)));
> -
> -pub fn parse_ip_list(ips: &String) -> ddlog_std::Either<String, ddlog_std::Vec<ddlog_std::tuple2<in_addr, ddlog_std::Option<in_addr>>>>
> -{
> -    match parse_ipv4_address_list(nom::types::CompleteStr(ips.as_str())) {
> -        Err(e) => {
> -            ddlog_std::Either::Left{l: format!("invalid IP list format: \"{}\"", ips.as_str())}
> -        },
> -        Ok((nom::types::CompleteStr(""), ranges)) => {
> -            let mut res = vec![];
> -            for (ip1, ip2) in ranges.iter() {
> -                let start = match ip_parse(&ip1) {
> -                    ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip1)},
> -                    ddlog_std::Option::Some{x: ip} => ip
> -                };
> -                let end = match ip2 {
> -                    None => ddlog_std::Option::None,
> -                    Some(ip_str) => match ip_parse(&ip_str.clone()) {
> -                        ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip_str)},
> -                        x => x
> -                    }
> -                };
> -                res.push(ddlog_std::tuple2(start, end));
> -            };
> -            ddlog_std::Either::Right{r: ddlog_std::Vec::from(res)}
> -        },
> -        Ok((suffix, _)) => {
> -            ddlog_std::Either::Left{l: format!("IP address list contains trailing characters: \"{}\"", suffix)}
> -        }
> -    }
> -}
> diff --git a/northd/ovn.toml b/northd/ovn.toml
> deleted file mode 100644
> index 64108996ed..0000000000
> --- a/northd/ovn.toml
> +++ /dev/null
> @@ -1,2 +0,0 @@
> -[dependencies.nom]
> -version = "4.0"
> diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl
> deleted file mode 100644
> index 2fe73959c6..0000000000
> --- a/northd/ovn_northd.dl
> +++ /dev/null
> @@ -1,9105 +0,0 @@
> -/*
> - * Licensed under the Apache License, Version 2.0 (the "License");
> - * you may not use this file except in compliance with the License.
> - * You may obtain a copy of the License at:
> - *
> - *     http://www.apache.org/licenses/LICENSE-2.0
> - *
> - * Unless required by applicable law or agreed to in writing, software
> - * distributed under the License is distributed on an "AS IS" BASIS,
> - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> - * See the License for the specific language governing permissions and
> - * limitations under the License.
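The `parse_ip_list` hunk above (the last code in ovn.rs) builds a `"start[..end]"` IPv4 range grammar out of nom 4 macro combinators. The same grammar, simplified to disallow whitespace around `..`, can be sketched without a parser library — everything below is my restatement, not code from the tree:

```rust
use std::net::Ipv4Addr;

// Parse a whitespace-separated list of "start[..end]" IPv4 ranges,
// roughly the grammar the removed nom parsers recognized.
fn parse_ip_list(ips: &str) -> Result<Vec<(Ipv4Addr, Option<Ipv4Addr>)>, String> {
    let mut res = Vec::new();
    for range in ips.split_whitespace() {
        // "a.b.c.d..e.f.g.h" is a range; a bare address is an open range.
        let (start, end) = match range.split_once("..") {
            Some((a, b)) => (a, Some(b)),
            None => (range, None),
        };
        let start: Ipv4Addr = start
            .parse()
            .map_err(|_| format!("invalid IP address: \"{}\"", start))?;
        let end = match end {
            Some(e) => Some(
                e.parse()
                    .map_err(|_| format!("invalid IP address: \"{}\"", e))?,
            ),
            None => None,
        };
        res.push((start, end));
    }
    Ok(res)
}

fn main() {
    let r = parse_ip_list("10.0.0.1 10.0.0.5..10.0.0.9").unwrap();
    assert_eq!(r.len(), 2);
    assert_eq!(r[0], (Ipv4Addr::new(10, 0, 0, 1), None));
    assert_eq!(r[1].1, Some(Ipv4Addr::new(10, 0, 0, 9)));
    assert!(parse_ip_list("not-an-ip").is_err());
    println!("ok");
}
```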
> - */
> -
> -import OVN_Northbound as nb
> -import OVN_Southbound as sb
> -import copp
> -import ovsdb
> -import allocate
> -import ovn
> -import lswitch
> -import lrouter
> -import multicast
> -import helpers
> -import ipam
> -import vec
> -import set
> -
> -index Logical_Flow_Index() on sb::Out_Logical_Flow()
> -
> -/* Meter_Band table */
> -for (mb in nb::Meter_Band) {
> -    sb::Out_Meter_Band(._uuid = mb._uuid,
> -                       .action = mb.action,
> -                       .rate = mb.rate,
> -                       .burst_size = mb.burst_size)
> -}
> -
> -/* Meter table */
> -for (meter in &nb::Meter) {
> -    sb::Out_Meter(._uuid = meter._uuid,
> -                  .name = meter.name,
> -                  .unit = meter.unit,
> -                  .bands = meter.bands)
> -}
> -sb::Out_Meter(._uuid = hash128(name),
> -              .name = name,
> -              .unit = meter.unit,
> -              .bands = meter.bands) :-
> -    ACLWithFairMeter(acl, meter),
> -    var name = acl_log_meter_name(meter.name, acl._uuid).intern().
> -
> -/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields,
> - * except tunnel id, which is allocated separately (see TunKeyAllocation). */
> -relation OutProxy_Datapath_Binding (
> -    _uuid: uuid,
> -    external_ids: Map<istring,istring>
> -)
> -
> -/* Datapath_Binding table */
> -OutProxy_Datapath_Binding(uuid, external_ids) :-
> -    &nb::Logical_Switch(._uuid = uuid, .name = name, .external_ids = ids,
> -                        .other_config = other_config),
> -    var uuid_str = uuid2str(uuid).intern(),
> -    var external_ids = {
> -        var eids = [i"logical-switch" -> uuid_str, i"name" -> name];
> -        match (ids.get(i"neutron:network_name")) {
> -            None -> (),
> -            Some{nnn} -> eids.insert(i"name2", nnn)
> -        };
> -        match (other_config.get(i"interconn-ts")) {
> -            None -> (),
> -            Some{value} -> eids.insert(i"interconn-ts", value)
> -        };
> -        eids
> -    }.
> -
> -OutProxy_Datapath_Binding(uuid, external_ids) :-
> -    lr in nb::Logical_Router(._uuid = uuid, .name = name, .external_ids = ids,
> -                             .options = options),
> -    lr.is_enabled(),
> -    var uuid_str = uuid2str(uuid).intern(),
> -    var external_ids = {
> -        var eids = [i"logical-router" -> uuid_str, i"name" -> name];
> -        match (ids.get(i"neutron:router_name")) {
> -            None -> (),
> -            Some{nnn} -> eids.insert(i"name2", nnn)
> -        };
> -        match (options.get(i"snat-ct-zone").and_then(parse_dec_u64)) {
> -            None -> (),
> -            Some{zone} -> eids.insert(i"snat-ct-zone", i"${zone}")
> -        };
> -        var learn_from_arp_request = options.get_bool_def(i"always_learn_from_arp_request", true);
> -        if (not learn_from_arp_request) {
> -            eids.insert(i"always_learn_from_arp_request", i"false")
> -        };
> -        eids
> -    }.
> -
> -sb::Out_Datapath_Binding(uuid, tunkey, load_balancers, external_ids) :-
> -    OutProxy_Datapath_Binding(uuid, external_ids),
> -    TunKeyAllocation(uuid, tunkey),
> -    /* Datapath_Binding.load_balancers is not used anymore, it's still in the
> -     * schema for compatibility reasons. Reset it to empty, just in case.
> -     */
> -    var load_balancers = set_empty().
> -
> -function get_requested_chassis(options: Map<istring,istring>) : istring = {
> -    var requested_chassis = match(options.get(i"requested-chassis")) {
> -        None -> i"",
> -        Some{requested_chassis} -> requested_chassis,
> -    };
> -    requested_chassis
> -}
> -
> -relation RequestedChassis(
> -    name: istring,
> -    chassis: uuid,
> -)
> -RequestedChassis(name, chassis) :-
> -    sb::Chassis(._uuid = chassis, .name=name).
> -RequestedChassis(hostname, chassis) :-
> -    sb::Chassis(._uuid = chassis, .hostname=hostname).
> -
> -/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields,
> - * except tunnel id, which is allocated separately (see PortTunKeyAllocation).
> - */
> -relation OutProxy_Port_Binding (
> -    _uuid: uuid,
> -    logical_port: istring,
> -    __type: istring,
> -    gateway_chassis: Set<uuid>,
> -    ha_chassis_group: Option<uuid>,
> -    options: Map<istring,istring>,
> -    datapath: uuid,
> -    parent_port: Option<istring>,
> -    tag: Option<integer>,
> -    mac: Set<istring>,
> -    nat_addresses: Set<istring>,
> -    external_ids: Map<istring,istring>,
> -    requested_chassis: Option<uuid>
> -)
> -
> -/* Case 1a: Create a Port_Binding per logical switch port that is not of type
> - * "router" */
> -OutProxy_Port_Binding(._uuid = lsp._uuid,
> -                      .logical_port = lsp.name,
> -                      .__type = lsp.__type,
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = sp.hac_group_uuid,
> -                      .options = options,
> -                      .datapath = sw._uuid,
> -                      .parent_port = lsp.parent_name,
> -                      .tag = tag,
> -                      .mac = lsp.addresses,
> -                      .nat_addresses = set_empty(),
> -                      .external_ids = eids,
> -                      .requested_chassis = None) :-
> -    sp in &SwitchPort(.lsp = lsp, .sw = sw),
> -    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
> -    var tag = match (opt_tag) {
> -        None -> lsp.tag,
> -        Some{t} -> Some{t}
> -    },
> -    lsp.__type != i"router",
> -    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
> -    chassis_name_or_hostname == i"",
> -    var eids = {
> -        var eids = lsp.external_ids;
> -        match (lsp.external_ids.get(i"neutron:port_name")) {
> -            None -> (),
> -            Some{name} -> eids.insert(i"name", name)
> -        };
> -        eids
> -    },
> -    var options = {
> -        var options = lsp.options;
> -        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
> -            options.insert(i"vlan-passthru", i"true")
> -        };
> -        options
> -    }.
> -
> -/* Case 1b: Create a Port_Binding per logical switch port that is not of type
> - * "router" and has options "requested-chassis" pointing at chassis name or
> - * hostname.
> - */
> -OutProxy_Port_Binding(._uuid = lsp._uuid,
> -                      .logical_port = lsp.name,
> -                      .__type = lsp.__type,
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = sp.hac_group_uuid,
> -                      .options = options,
> -                      .datapath = sw._uuid,
> -                      .parent_port = lsp.parent_name,
> -                      .tag = tag,
> -                      .mac = lsp.addresses,
> -                      .nat_addresses = set_empty(),
> -                      .external_ids = eids,
> -                      .requested_chassis = Some{requested_chassis}) :-
> -    sp in &SwitchPort(.lsp = lsp, .sw = sw),
> -    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
> -    var tag = match (opt_tag) {
> -        None -> lsp.tag,
> -        Some{t} -> Some{t}
> -    },
> -    lsp.__type != i"router",
> -    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
> -    chassis_name_or_hostname != i"",
> -    RequestedChassis(chassis_name_or_hostname, requested_chassis),
> -    var eids = {
> -        var eids = lsp.external_ids;
> -        match (lsp.external_ids.get(i"neutron:port_name")) {
> -            None -> (),
> -            Some{name} -> eids.insert(i"name", name)
> -        };
> -        eids
> -    },
> -    var options = {
> -        var options = lsp.options;
> -        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
> -            options.insert(i"vlan-passthru", i"true")
> -        };
> -        options
> -    }.
> -
> -/* Case 1c: Create a Port_Binding per logical switch port that is not of type
> - * "router" and has options "requested-chassis" pointing at non-existent
> - * chassis name or hostname.
> - */
> -OutProxy_Port_Binding(._uuid = lsp._uuid,
> -                      .logical_port = lsp.name,
> -                      .__type = lsp.__type,
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = sp.hac_group_uuid,
> -                      .options = options,
> -                      .datapath = sw._uuid,
> -                      .parent_port = lsp.parent_name,
> -                      .tag = tag,
> -                      .mac = lsp.addresses,
> -                      .nat_addresses = set_empty(),
> -                      .external_ids = eids,
> -                      .requested_chassis = None) :-
> -    sp in &SwitchPort(.lsp = lsp, .sw = sw),
> -    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
> -    var tag = match (opt_tag) {
> -        None -> lsp.tag,
> -        Some{t} -> Some{t}
> -    },
> -    lsp.__type != i"router",
> -    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
> -    chassis_name_or_hostname != i"",
> -    not RequestedChassis(chassis_name_or_hostname, _),
> -    var eids = {
> -        var eids = lsp.external_ids;
> -        match (lsp.external_ids.get(i"neutron:port_name")) {
> -            None -> (),
> -            Some{name} -> eids.insert(i"name", name)
> -        };
> -        eids
> -    },
> -    var options = {
> -        var options = lsp.options;
> -        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
> -            options.insert(i"vlan-passthru", i"true")
> -        };
> -        options
> -    }.
> -
> -relation SwitchPortLBIPs(
> -    port: Intern<SwitchPort>,
> -    lbips: Option<Intern<LogicalRouterLBIPs>>)
> -
> -SwitchPortLBIPs(.port = port,
> -                .lbips = Some{lbips}) :-
> -    port in &SwitchPort(.peer = Some{peer}),
> -    port.lsp.options.get(i"router-port").is_some(),
> -    lbips in &LogicalRouterLBIPs(.lr = peer.router._uuid).
> -
> -SwitchPortLBIPs(.port = port,
> -                .lbips = None) :-
> -    port in &SwitchPort(.peer = peer),
> -    peer.is_none() or port.lsp.options.get(i"router-port").is_none().
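Cases 1a-1c above are three mutually exclusive rules keyed on the `requested-chassis` option: unset, resolvable against the `Chassis` table, and unresolvable. In ordinary code that is a single three-way branch; a sketch with a plain map standing in for the Chassis name/hostname lookup (all names here are mine):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum RequestedChassis {
    Unset,          // case 1a: option absent or empty
    Resolved(u64),  // case 1b: matches a chassis name or hostname
    Unresolved,     // case 1c: no such chassis
}

// `chassis` maps both names and hostnames to a chassis id, like the
// two RequestedChassis rules in the removed DDlog.
fn resolve(options: &HashMap<&str, &str>,
           chassis: &HashMap<&str, u64>) -> RequestedChassis {
    match options.get("requested-chassis") {
        None | Some(&"") => RequestedChassis::Unset,
        Some(name) => match chassis.get(name) {
            Some(&id) => RequestedChassis::Resolved(id),
            None => RequestedChassis::Unresolved,
        },
    }
}

fn main() {
    let chassis = HashMap::from([("hv1", 1u64)]);
    assert_eq!(resolve(&HashMap::new(), &chassis), RequestedChassis::Unset);
    assert_eq!(resolve(&HashMap::from([("requested-chassis", "hv1")]), &chassis),
               RequestedChassis::Resolved(1));
    assert_eq!(resolve(&HashMap::from([("requested-chassis", "hv9")]), &chassis),
               RequestedChassis::Unresolved);
    println!("ok");
}
```

The exhaustive enum makes the mutual exclusivity explicit, which in the DDlog had to be encoded as three separate rules with complementary guards.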
> -
> -/* Case 2: Create a Port_Binding per logical switch port of type "router" */
> -OutProxy_Port_Binding(._uuid = lsp._uuid,
> -                      .logical_port = lsp.name,
> -                      .__type = __type,
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = None,
> -                      .options = options,
> -                      .datapath = sw._uuid,
> -                      .parent_port = lsp.parent_name,
> -                      .tag = None,
> -                      .mac = lsp.addresses,
> -                      .nat_addresses = nat_addresses,
> -                      .external_ids = eids,
> -                      .requested_chassis = None) :-
> -    SwitchPortLBIPs(.port = &SwitchPort{.lsp = lsp, .sw = sw, .peer = peer},
> -                    .lbips = lbips),
> -    var eids = {
> -        var eids = lsp.external_ids;
> -        match (lsp.external_ids.get(i"neutron:port_name")) {
> -            None -> (),
> -            Some{name} -> eids.insert(i"name", name)
> -        };
> -        eids
> -    },
> -    Some{var router_port} = lsp.options.get(i"router-port"),
> -    var opt_chassis = peer.and_then(|p| p.router.options.get(i"chassis")),
> -    var l3dgw_port = peer.and_then(|p| p.router.l3dgw_ports.nth(0)),
> -    (var __type, var options) = {
> -        var options = [i"peer" -> router_port];
> -        match (opt_chassis) {
> -            None -> {
> -                (i"patch", options)
> -            },
> -            Some{chassis} -> {
> -                options.insert(i"l3gateway-chassis", chassis);
> -                (i"l3gateway", options)
> -            }
> -        }
> -    },
> -    var base_nat_addresses = {
> -        match (lsp.options.get(i"nat-addresses")) {
> -            None -> { set_empty() },
> -            Some{nat_addresses} -> {
> -                if (nat_addresses == i"router") {
> -                    match ((l3dgw_port, opt_chassis, peer)) {
> -                        (None, None, _) -> set_empty(),
> -                        (_, _, None) -> set_empty(),
> -                        (_, _, Some{rport}) -> get_nat_addresses(rport, lbips.unwrap_or_default(), false)
> -                    }
> -                } else {
> -                    /* Only accept manual specification of ethernet address
> -                     * followed by IPv4 addresses on type "l3gateway" ports. */
> -                    if (opt_chassis.is_some()) {
> -                        match (extract_lsp_addresses(nat_addresses.ival())) {
> -                            None -> {
> -                                warn("Error extracting nat-addresses.");
> -                                set_empty()
> -                            },
> -                            Some{_} -> { set_singleton(nat_addresses) }
> -                        }
> -                    } else { set_empty() }
> -                }
> -            }
> -        }
> -    },
> -    /* Add the router mac and IPv4 addresses to
> -     * Port_Binding.nat_addresses so that GARP is sent for these
> -     * IPs by the ovn-controller on which the distributed gateway
> -     * router port resides if:
> -     *
> -     * 1. The peer has 'reside-on-redirect-chassis' set and the
> -     *    the logical router datapath has distributed router port.
> -     *
> -     * 2. The peer is distributed gateway router port.
> -     *
> -     * 3. The peer's router is a gateway router and the port has a localnet
> -     *    port.
> -     *
> -     * Note: Port_Binding.nat_addresses column is also used for
> -     * sending the GARPs for the router port IPs.
> -     * */
> -    var garp_nat_addresses = match (peer) {
> -        Some{rport} -> match (
> -            (rport.lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)
> -             and l3dgw_port.is_some()) or
> -            rport.is_redirect or
> -            (rport.router.options.contains_key(i"chassis") and
> -             not sw.localnet_ports.is_empty())) {
> -            false -> set_empty(),
> -            true -> set_singleton(get_garp_nat_addresses(rport).intern())
> -        },
> -        None -> set_empty()
> -    },
> -    var nat_addresses = set_union(base_nat_addresses, garp_nat_addresses).
> -
> -/* Case 3: Port_Binding per logical router port */
> -OutProxy_Port_Binding(._uuid = lrp._uuid,
> -                      .logical_port = lrp.name,
> -                      .__type = __type,
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = None,
> -                      .options = options,
> -                      .datapath = router._uuid,
> -                      .parent_port = None,
> -                      .tag = None, // always empty for router ports
> -                      .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"),
> -                      .nat_addresses = set_empty(),
> -                      .external_ids = lrp.external_ids,
> -                      .requested_chassis = None) :-
> -    rp in &RouterPort(.lrp = lrp, .router = router, .peer = peer),
> -    RouterPortRAOptionsComplete(lrp._uuid, options0),
> -    (var __type, var options1) = match (router.options.get(i"chassis")) {
> -        /* TODO: derived ports */
> -        None -> (i"patch", map_empty()),
> -        Some{lrchassis} -> (i"l3gateway", [i"l3gateway-chassis" -> lrchassis])
> -    },
> -    var options2 = match (router_peer_name(peer)) {
> -        None -> map_empty(),
> -        Some{peer_name} -> [i"peer" -> peer_name]
> -    },
> -    var options3 = match ((peer, rp.networks.ipv6_addrs.is_empty())) {
> -        (PeerSwitch{_, _}, false) -> {
> -            var enabled = lrp.is_enabled();
> -            var pd = lrp.options.get_bool_def(i"prefix_delegation", false);
> -            var p = lrp.options.get_bool_def(i"prefix", false);
> -            [i"ipv6_prefix_delegation" -> i"${pd and enabled}",
> -             i"ipv6_prefix" -> i"${p and enabled}"]
> -        },
> -        _ -> map_empty()
> -    },
> -    PreserveIPv6RAPDList(lrp._uuid, ipv6_ra_pd_list),
> -    var options4 = match (ipv6_ra_pd_list) {
> -        None -> map_empty(),
> -        Some{value} -> [i"ipv6_ra_pd_list" -> value]
> -    },
> -    RouterPortIsRedirect(lrp._uuid, is_redirect),
> -    var options5 = match (is_redirect) {
> -        false -> map_empty(),
> -        true -> [i"chassis-redirect-port" -> chassis_redirect_name(lrp.name).intern()]
> -    },
> -    var options = options0.union(options1).union(options2).union(options3).union(options4).union(options5),
> -    var eids = {
> -        var eids = lrp.external_ids;
> -        match (lrp.external_ids.get(i"neutron:port_name")) {
> -            None -> (),
> -            Some{name} -> eids.insert(i"name", name)
> -        };
> -        eids
> -    }.
> -/*
> -*/
> -function get_router_load_balancer_ips(lbips: Intern<LogicalRouterLBIPs>,
> -                                      routable_only: bool) :
> -    (Set<istring>, Set<istring>) =
> -{
> -    if (routable_only) {
> -        (lbips.lb_ipv4s_routable, lbips.lb_ipv6s_routable)
> -    } else {
> -        (union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable),
> -         union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable))
> -    }
> -}
> -
> -/* Returns an array of strings, each consisting of a MAC address followed
> - * by one or more IP addresses, and if the port is a distributed gateway
> - * port, followed by 'is_chassis_resident("LPORT_NAME")', where the
> - * LPORT_NAME is the name of the L3 redirect port or the name of the
> - * logical_port specified in a NAT rule. These strings include the
> - * external IP addresses of all NAT rules defined on that router, and all
> - * of the IP addresses used in load balancer VIPs defined on that router.
> - */
> -function get_nat_addresses(rport: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>, routable_only: bool): Set<istring> =
> -{
> -    var addresses = set_empty();
> -    var has_redirect = not rport.router.l3dgw_ports.is_empty();
> -    match (eth_addr_from_string(rport.lrp.mac.ival())) {
> -        None -> addresses,
> -        Some{mac} -> {
> -            var c_addresses = "${mac}";
> -            var central_ip_address = false;
> -
> -            /* Get NAT IP addresses. */
> -            for (nat in rport.router.nats) {
> -                if (routable_only and
> -                    (nat.nat.__type == i"snat" or
> -                     not nat.nat.options.get_bool_def(i"add_route", false))) {
> -                    continue;
> -                };
> -                /* Determine whether this NAT rule satisfies the conditions for
> -                 * distributed NAT processing. */
> -                if (has_redirect and nat.nat.__type == i"dnat_and_snat" and
> -                    nat.nat.logical_port.is_some() and nat.external_mac.is_some()) {
> -                    /* Distributed NAT rule. */
> -                    var logical_port = nat.nat.logical_port.unwrap_or_default();
> -                    var external_mac = nat.external_mac.unwrap_or_default();
> -                    addresses.insert(i"${external_mac} ${nat.external_ip} "
> -                                     "is_chassis_resident(${json_escape(logical_port)})")
> -                } else {
> -                    /* Centralized NAT rule, either on gateway router or distributed
> -                     * router.
> -                     * Check if external_ip is same as router ip. If so, then there
> -                     * is no need to add this to the nat_addresses. The router IPs
> -                     * will be added separately. */
> -                    var is_router_ip = false;
> -                    match (nat.external_ip) {
> -                        IPv4{ei} -> {
> -                            for (ipv4 in rport.networks.ipv4_addrs) {
> -                                if (ei == ipv4.addr) {
> -                                    is_router_ip = true;
> -                                    break
> -                                }
> -                            }
> -                        },
> -                        IPv6{ei} -> {
> -                            for (ipv6 in rport.networks.ipv6_addrs) {
> -                                if (ei == ipv6.addr) {
> -                                    is_router_ip = true;
> -                                    break
> -                                }
> -                            }
> -                        }
> -                    };
> -                    if (not is_router_ip) {
> -                        c_addresses = c_addresses ++ " ${nat.external_ip}";
> -                        central_ip_address = true
> -                    }
> -                }
> -            };
> -
> -            /* A set to hold all load-balancer vips. */
> -            (var all_ips_v4, var all_ips_v6) = get_router_load_balancer_ips(lbips, routable_only);
> -
> -            for (ip_address in set_union(all_ips_v4, all_ips_v6)) {
> -                c_addresses = c_addresses ++ " ${ip_address}";
> -                central_ip_address = true
> -            };
> -
> -            if (central_ip_address) {
> -                /* Gratuitous ARP for centralized NAT rules on distributed gateway
> -                 * ports should be restricted to the gateway chassis. */
> -                if (has_redirect) {
> -                    c_addresses = c_addresses ++ match (rport.router.l3dgw_ports.nth(0)) {
> -                        None -> "",
> -                        Some {var gw_port} -> " is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})"
> -                    }
> -                } else ();
> -
> -                addresses.insert(c_addresses.intern())
> -            } else ();
> -            addresses
> -        }
> -    }
> -}
> -
> -function get_garp_nat_addresses(rport: Intern<RouterPort>): string = {
> -    var garp_info = ["${rport.networks.ea}"];
> -    for (ipv4_addr in rport.networks.ipv4_addrs) {
> -        garp_info.push("${ipv4_addr.addr}")
> -    };
> -    match (rport.router.l3dgw_ports.nth(0)) {
> -        None -> (),
> -        Some {var gw_port} -> garp_info.push(
> -            "is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})")
> -    };
> -    garp_info.join(" ")
> -}
> -
> -/* Extra options computed for router ports by the logical flow generation code */
> -relation RouterPortRAOptions(lrp: uuid, options: Map<istring, istring>)
> -
> -relation RouterPortRAOptionsComplete(lrp: uuid, options: Map<istring, istring>)
> -
> -RouterPortRAOptionsComplete(lrp, options) :-
> -    RouterPortRAOptions(lrp, options).
> -RouterPortRAOptionsComplete(lrp, map_empty()) :-
> -    &nb::Logical_Router_Port(._uuid = lrp),
> -    not RouterPortRAOptions(lrp, _).
> -
> -function has_distributed_nat(nats: Vec<NAT>): bool {
> -    for (nat in nats) {
> -        if (nat.nat.__type == i"dnat_and_snat") {
> -            return true
> -        }
> -    };
> -    return false
> -}
> -
> -/*
> - * Create derived port for Logical_Router_Ports with non-empty 'gateway_chassis' column.
> - */
> -
> -/* Create derived ports */
> -OutProxy_Port_Binding(._uuid = cr_lrp_uuid,
> -                      .logical_port = chassis_redirect_name(lrp.name).intern(),
> -                      .__type = i"chassisredirect",
> -                      .gateway_chassis = set_empty(),
> -                      .ha_chassis_group = Some{hacg_uuid},
> -                      .options = options,
> -                      .datapath = lr_uuid,
> -                      .parent_port = None,
> -                      .tag = None, //always empty for router ports
> -                      .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"),
> -                      .nat_addresses = set_empty(),
> -                      .external_ids = lrp.external_ids,
> -                      .requested_chassis = None) :-
> -    DistributedGatewayPort(lrp, lr_uuid, cr_lrp_uuid),
> -    DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid),
> -    var redirect_type = match (lrp.options.get(i"redirect-type")) {
> -        Some{var value} -> [i"redirect-type" -> value],
> -        _ -> map_empty()
> -    },
> -    LogicalRouterNATs(lr_uuid, nats),
> -    var always_redirect = if (has_distributed_nat(nats) or
> -                              lrp.options.get(i"redirect-type") == Some{i"bridged"}) {
> -        map_empty()
> -    } else {
> -        [i"always-redirect" -> i"true"]
> -    },
> -    var options = redirect_type.union(always_redirect).insert_imm(i"distributed-port", lrp.name).
> -
> -/*
> - * We want to preserve 'up' (set by ovn-controller) for Port_Binding rows.
> - * We need to set set 'up' in new rows to Some{false}; if we don't set
> - * it at all, ovn-controller will never update it.
> - */
> -relation PortBindingUp0(pb_uuid: uuid, up: bool)
> -PortBindingUp0(pb_uuid, up) :- sb::Port_Binding(._uuid = pb_uuid, .up = Some{up}).
> -
> -relation PortBindingUp(pb_uuid: uuid, up: bool)
> -PortBindingUp(pb_uuid, up) :- PortBindingUp0(pb_uuid, up).
> -PortBindingUp(pb_uuid, false) :-
> -    OutProxy_Port_Binding(._uuid = pb_uuid),
> -    not PortBindingUp0(pb_uuid, _).
> -
> -/* Add allocated qdisc_queue_id and tunnel key to Port_Binding.
> - */
> -sb::Out_Port_Binding(._uuid = pbinding._uuid,
> -                     .logical_port = pbinding.logical_port,
> -                     .__type = pbinding.__type,
> -                     .gateway_chassis = pbinding.gateway_chassis,
> -                     .ha_chassis_group = pbinding.ha_chassis_group,
> -                     .options = options0,
> -                     .datapath = pbinding.datapath,
> -                     .tunnel_key = tunkey,
> -                     .parent_port = pbinding.parent_port,
> -                     .tag = pbinding.tag,
> -                     .mac = pbinding.mac,
> -                     .nat_addresses = pbinding.nat_addresses,
> -                     .external_ids = pbinding.external_ids,
> -                     .up = Some{up},
> -                     .requested_chassis = pbinding.requested_chassis) :-
> -    pbinding in OutProxy_Port_Binding(),
> -    PortTunKeyAllocation(pbinding._uuid, tunkey),
> -    QueueIDAllocation(pbinding._uuid, qid),
> -    PortBindingUp(pbinding._uuid, up),
> -    var options0 = match (qid) {
> -        None -> pbinding.options,
> -        Some{id} -> pbinding.options.insert_imm(i"qdisc_queue_id", i"${id}")
> -    }.
> -
> -/* Referenced chassis.
> - *
> - * These tables track the sb::Chassis that a packet that traverses logical
> - * router 'lr_uuid' can end up at (or start from). This is used for
> - * sb::Out_HA_Chassis_Group's ref_chassis column.
> - *
> - * Only HA Chassis Groups with more than 1 chassis need to maintain the
> - * referenced chassis.
> - *
> - * RefChassisSet0 has a row for each logical router that actually references a
> - * chassis. RefChassisSet has a row for every logical router.
> - */
> -relation RefChassis(lr_uuid: uuid, chassis_uuid: uuid)
> -RefChassis(lr_uuid, chassis_uuid) :-
> -    DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid),
> -    HAChassis(.hacg_uuid = hacg_uuid, .hac_uuid = hac_uuid),
> -    var hacg_size = hac_uuid.group_by(hacg_uuid).to_set().size(),
> -    hacg_size > 1,
> -    DistributedGatewayPort(lrp, lr_uuid, _),
> -    ConnectedLogicalRouter[(lr_uuid, set_uuid)],
> -    ConnectedLogicalRouter[(lr2_uuid, set_uuid)],
> -    FirstHopLogicalRouter(lr2_uuid, ls_uuid),
> -    LogicalSwitchPort(lsp_uuid, ls_uuid),
> -    &nb::Logical_Switch_Port(._uuid = lsp_uuid, .name = lsp_name),
> -    sb::Port_Binding(.logical_port = lsp_name, .chassis = chassis_uuids),
> -    Some{var chassis_uuid} = chassis_uuids.
> -relation RefChassisSet0(lr_uuid: uuid, chassis_uuids: Set<uuid>)
> -RefChassisSet0(lr_uuid, chassis_uuids) :-
> -    RefChassis(lr_uuid, chassis_uuid),
> -    var chassis_uuids = chassis_uuid.group_by(lr_uuid).to_set().
> -relation RefChassisSet(lr_uuid: uuid, chassis_uuids: Set<uuid>)
> -RefChassisSet(lr_uuid, chassis_uuids) :-
> -    RefChassisSet0(lr_uuid, chassis_uuids).
> -RefChassisSet(lr_uuid, set_empty()) :-
> -    nb::Logical_Router(._uuid = lr_uuid),
> -    not RefChassisSet0(lr_uuid, _).
> -
> -/* Referenced chassis for an HA chassis group.
> - *
> - * Multiple logical routers can reference an HA chassis group so we merge the
> - * referenced chassis across all of them.
> - */
> -relation HAChassisGroupRefChassisSet(hacg_uuid: uuid,
> -                                     chassis_uuids: Set<uuid>)
> -HAChassisGroupRefChassisSet(hacg_uuid, chassis_uuids) :-
> -    DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid),
> -    DistributedGatewayPort(lrp, lr_uuid, _),
> -    RefChassisSet(lr_uuid, chassis_uuids),
> -    var chassis_uuids = chassis_uuids.group_by(hacg_uuid).union().
> -
> -/* HA_Chassis_Group and HA_Chassis. */
> -sb::Out_HA_Chassis_Group(hacg_uuid, hacg_name, ha_chassis, ref_chassis, eids) :-
> -    HAChassis(hacg_uuid, hac_uuid, chassis_name, _, _),
> -    var chassis_uuid = ha_chassis_uuid(chassis_name.ival(), hac_uuid),
> -    var ha_chassis = chassis_uuid.group_by(hacg_uuid).to_set(),
> -    HAChassisGroup(hacg_uuid, hacg_name, eids),
> -    HAChassisGroupRefChassisSet(hacg_uuid, ref_chassis).
> -
> -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), chassis, priority, eids) :-
> -    HAChassis(_, hac_uuid, chassis_name, priority, eids),
> -    chassis_rec in sb::Chassis(.name = chassis_name),
> -    var chassis = Some{chassis_rec._uuid}.
> -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), None, priority, eids) :-
> -    HAChassis(_, hac_uuid, chassis_name, priority, eids),
> -    not chassis_rec in sb::Chassis(.name = chassis_name).
> -
> -relation HAChassisToChassis(name: istring, chassis: Option<uuid>)
> -HAChassisToChassis(name, Some{chassis}) :-
> -    sb::Chassis(._uuid = chassis, .name = name).
> -HAChassisToChassis(name, None) :-
> -    nb::HA_Chassis(.chassis_name = name),
> -    not sb::Chassis(.name = name).
> -sb::Out_HA_Chassis(ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid), chassis, priority, eids) :-
> -    sp in &SwitchPort(),
> -    sp.lsp.__type == i"external",
> -    Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group,
> -    ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid),
> -    var hac_uuid = FlatMap(ha_chassis_group.ha_chassis),
> -    ha_chassis in nb::HA_Chassis(._uuid = hac_uuid, .priority = priority, .external_ids = eids),
> -    HAChassisToChassis(ha_chassis.chassis_name, chassis).
> -sb::Out_HA_Chassis_Group(_uuid, name, ha_chassis, set_empty() /* XXX? */, eids) :-
> -    sp in &SwitchPort(),
> -    sp.lsp.__type == i"external",
> -    var ls_uuid = sp.sw._uuid,
> -    Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group,
> -    ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid, .name = name,
> -                                             .external_ids = eids),
> -    var hac_uuid = FlatMap(ha_chassis_group.ha_chassis),
> -    ha_chassis in nb::HA_Chassis(._uuid = hac_uuid),
> -    var ha_chassis_uuid_name = ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid),
> -    var ha_chassis = ha_chassis_uuid_name.group_by((ls_uuid, name, eids)).to_set(),
> -    var _uuid = ha_chassis_group_uuid(ls_uuid).
> -
> -/*
> - * SB_Global: copy nb_cfg and options from NB.
> - * If NB_Global does not exist yet, just keep the current value of SB_Global,
> - * if any.
> - */
> -for (nb_global in nb::NB_Global) {
> -    sb::Out_SB_Global(._uuid = nb_global._uuid,
> -                      .nb_cfg = nb_global.nb_cfg,
> -                      .options = nb_global.options,
> -                      .ipsec = nb_global.ipsec)
> -}
> -
> -sb::Out_SB_Global(._uuid = sb_global._uuid,
> -                  .nb_cfg = sb_global.nb_cfg,
> -                  .options = sb_global.options,
> -                  .ipsec = sb_global.ipsec) :-
> -    sb_global in sb::SB_Global(),
> -    not nb::NB_Global().
> -
> -/* sb::Chassis_Private joined with is_remote from sb::Chassis,
> - * including a record even for a null Chassis ref. */
> -relation ChassisPrivate(
> -    cp: sb::Chassis_Private,
> -    is_remote: bool)
> -ChassisPrivate(cp, c.other_config.get_bool_def(i"is-remote", false)) :-
> -    cp in sb::Chassis_Private(.chassis = Some{uuid}),
> -    c in sb::Chassis(._uuid = uuid).
> -ChassisPrivate(cp, false),
> -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :-
> -    cp in sb::Chassis_Private(.chassis = Some{uuid}),
> -    not sb::Chassis(._uuid = uuid).
> -ChassisPrivate(cp, false),
> -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :-
> -    cp in sb::Chassis_Private(.chassis = None).
> -
> -/* Track minimum hv_cfg across all the (non-remote) chassis.
> - */
> -relation HvCfg0(hv_cfg: integer)
> -HvCfg0(hv_cfg) :-
> -    ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = chassis_cfg}, .is_remote = false),
> -    var hv_cfg = chassis_cfg.group_by(()).min().
> -relation HvCfg(hv_cfg: integer)
> -HvCfg(hv_cfg) :- HvCfg0(hv_cfg).
> -HvCfg(hv_cfg) :-
> -    nb::NB_Global(.nb_cfg = hv_cfg),
> -    not HvCfg0().
> -
> -/* Track maximum nb_cfg_timestamp among all the (non-remote) chassis
> - * that have the minimum nb_cfg. */
> -relation HvCfgTimestamp0(hv_cfg_timestamp: integer)
> -HvCfgTimestamp0(hv_cfg_timestamp) :-
> -    HvCfg(hv_cfg),
> -    ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = hv_cfg,
> -                                             .nb_cfg_timestamp = chassis_cfg_timestamp},
> -                   .is_remote = false),
> -    var hv_cfg_timestamp = chassis_cfg_timestamp.group_by(()).max().
> -relation HvCfgTimestamp(hv_cfg_timestamp: integer)
> -HvCfgTimestamp(hv_cfg_timestamp) :- HvCfgTimestamp0(hv_cfg_timestamp).
> -HvCfgTimestamp(hv_cfg_timestamp) :-
> -    nb::NB_Global(.hv_cfg_timestamp = hv_cfg_timestamp),
> -    not HvCfgTimestamp0().
> -
> -/*
> - * nb::Out_NB_Global.
> - *
> - * OutNBGlobal0 generates the new record in the common case.
> - * OutNBGlobal1 generates the new record as a copy of nb::NB_Global, if sb::SB_Global is missing.
> - * nb::Out_NB_Global makes sure we have only a single record in the relation.
> - *
> - * (We don't generate an NB_Global output record if there isn't
> - * one in the input. We don't have enough entropy available to
> - * generate a random _uuid. Doesn't seem like a big deal, because
> - * OVN probably hasn't really been initialized yet.)
> - */
> -relation OutNBGlobal0[nb::Out_NB_Global]
> -OutNBGlobal0[nb::Out_NB_Global{._uuid = _uuid,
> -                               .sb_cfg = sb_cfg,
> -                               .hv_cfg = hv_cfg,
> -                               .nb_cfg_timestamp = nb_cfg_timestamp,
> -                               .hv_cfg_timestamp = hv_cfg_timestamp,
> -                               .ipsec = ipsec,
> -                               .options = options}] :-
> -    NbCfgTimestamp[nb_cfg_timestamp],
> -    HvCfgTimestamp(hv_cfg_timestamp),
> -    nbg in nb::NB_Global(._uuid = _uuid, .ipsec = ipsec),
> -    sb::SB_Global(.nb_cfg = sb_cfg),
> -    HvCfg(hv_cfg),
> -    HvCfgTimestamp(hv_cfg_timestamp),
> -    MacPrefix(mac_prefix),
> -    SvcMonitorMac(svc_monitor_mac),
> -    OvnMaxDpKeyLocal[max_tunid],
> -    var options = {
> -        var options = nbg.options;
> -        options.put_mac_prefix(mac_prefix);
> -        options.put_svc_monitor_mac(svc_monitor_mac);
> -        options.insert(i"max_tunid", i"${max_tunid}");
> -        options.insert(i"northd_internal_version", ovn_internal_version().intern());
> -        options
> -    }.
> -
> -relation OutNBGlobal1[nb::Out_NB_Global]
> -OutNBGlobal1[x] :- OutNBGlobal0[x].
> -OutNBGlobal1[nb::Out_NB_Global{._uuid = nbg._uuid,
> -                               .sb_cfg = nbg.sb_cfg,
> -                               .hv_cfg = nbg.hv_cfg,
> -                               .ipsec = nbg.ipsec,
> -                               .options = nbg.options,
> -                               .nb_cfg_timestamp = nbg.nb_cfg_timestamp,
> -                               .hv_cfg_timestamp = nbg.hv_cfg_timestamp}] :-
> -    Unit(),
> -    not OutNBGlobal0[_],
> -    nbg in nb::NB_Global().
> -
> -nb::Out_NB_Global[y] :-
> -    OutNBGlobal1[x],
> -    var y = x.group_by(()).group_first().
> -
> -// Tracks the value that should go into NB_Global's 'nb_cfg_timestamp' column.
> -// ovn-northd-ddlog.c pushes the current time directly into this relation.
> -input relation NbCfgTimestamp[integer]
> -
> -output relation SbCfg[integer]
> -SbCfg[sb_cfg] :- nb::Out_NB_Global(.sb_cfg = sb_cfg).
> -
> -output relation Northd_Probe_Interval[s64]
> -Northd_Probe_Interval[interval] :-
> -    nb in nb::NB_Global(),
> -    var interval = nb.options.get(i"northd_probe_interval").and_then(parse_dec_i64).unwrap_or(-1).
> -
> -relation CheckLspIsUp[bool]
> -CheckLspIsUp[check_lsp_is_up] :-
> -    nb in nb::NB_Global(),
> -    var check_lsp_is_up = not nb.options.get_bool_def(i"ignore_lsp_down", true).
> -CheckLspIsUp[true] :-
> -    Unit(),
> -    not nb in nb::NB_Global().
> -
> -/*
> - * Address_Set: copy from NB + additional records generated from NB Port_Group (two records for each
> - * Port_Group for IPv4 and IPv6 addresses).
> - *
> - * There can be name collisions between the two types of Address_Set records. User-defined records
> - * take precedence.
> - */
> -sb::Out_Address_Set(._uuid = nb_as._uuid,
> -                    .name = nb_as.name,
> -                    .addresses = nb_as.addresses) :-
> -    nb_as in &nb::Address_Set().
> -
> -sb::Out_Address_Set(._uuid = hash128("svc_monitor_mac"),
> -                    .name = i"svc_monitor_mac",
> -                    .addresses = set_singleton(i"${svc_monitor_mac}")) :-
> -    SvcMonitorMac(svc_monitor_mac).
> -
> -sb::Out_Address_Set(hash128(as_name), as_name, pg_ip4addrs.union()) :-
> -    PortGroupPort(.pg_name = pg_name, .port = port_uuid),
> -    var as_name = i"${pg_name}_ip4",
> -    // avoid name collisions with user-defined Address_Sets
> -    not &nb::Address_Set(.name = as_name),
> -    PortStaticAddresses(.lsport = port_uuid, .ip4addrs = stat),
> -    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}},
> -                                dyn_addr),
> -    var dynamic = match (dyn_addr) {
> -        None -> set_empty(),
> -        Some{lpaddress} -> match (lpaddress.ipv4_addrs.nth(0)) {
> -            None -> set_empty(),
> -            Some{addr} -> set_singleton(i"${addr.addr}")
> -        }
> -    },
> -    //PortDynamicAddresses(.lsport = port_uuid, .ip4addrs = dynamic),
> -    var port_ip4addrs = stat.union(dynamic),
> -    var pg_ip4addrs = port_ip4addrs.group_by(as_name).to_vec().
> -
> -sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :-
> -    nb::Port_Group(.ports = set_empty(), .name = pg_name),
> -    var as_name = i"${pg_name}_ip4",
> -    // avoid name collisions with user-defined Address_Sets
> -    not &nb::Address_Set(.name = as_name).
> -
> -sb::Out_Address_Set(hash128(as_name), as_name, pg_ip6addrs.union()) :-
> -    PortGroupPort(.pg_name = pg_name, .port = port_uuid),
> -    var as_name = i"${pg_name}_ip6",
> -    // avoid name collisions with user-defined Address_Sets
> -    not &nb::Address_Set(.name = as_name),
> -    PortStaticAddresses(.lsport = port_uuid, .ip6addrs = stat),
> -    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}},
> -                                dyn_addr),
> -    var dynamic = match (dyn_addr) {
> -        None -> set_empty(),
> -        Some{lpaddress} -> match (lpaddress.ipv6_addrs.nth(0)) {
> -            None -> set_empty(),
> -            Some{addr} -> set_singleton(i"${addr.addr}")
> -        }
> -    },
> -    //PortDynamicAddresses(.lsport = port_uuid, .ip6addrs = dynamic),
> -    var port_ip6addrs = stat.union(dynamic),
> -    var pg_ip6addrs = port_ip6addrs.group_by(as_name).to_vec().
> -
> -sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :-
> -    nb::Port_Group(.ports = set_empty(), .name = pg_name),
> -    var as_name = i"${pg_name}_ip6",
> -    // avoid name collisions with user-defined Address_Sets
> -    not &nb::Address_Set(.name = as_name).
> -
> -/*
> - * Port_Group
> - *
> - * Create one SB Port_Group record for every datapath that has ports
> - * referenced by the NB Port_Group.ports field. In order to maintain the
> - * SB Port_Group.name uniqueness constraint, ovn-northd populates the field
> - * with the value: <SB.Logical_Datapath.tunnel_key>_<NB.Port_Group.name>.
> - */
> -
> -relation PortGroupPort(
> -    pg_uuid: uuid,
> -    pg_name: istring,
> -    port: uuid)
> -
> -PortGroupPort(pg_uuid, pg_name, port) :-
> -    nb::Port_Group(._uuid = pg_uuid, .name = pg_name, .ports = pg_ports),
> -    var port = FlatMap(pg_ports).
> -
> -sb::Out_Port_Group(._uuid = hash128(sb_name), .name = sb_name, .ports = port_names) :-
> -    PortGroupPort(.pg_uuid = _uuid, .pg_name = nb_name, .port = port_uuid),
> -    &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{._uuid = port_uuid,
> -                                                    .name = port_name},
> -                .sw = &Switch{._uuid = ls_uuid}),
> -    TunKeyAllocation(.datapath = ls_uuid, .tunkey = tunkey),
> -    var sb_name = i"${tunkey}_${nb_name}",
> -    var port_names = port_name.group_by((_uuid, sb_name)).to_set().
> -
> -/*
> - * Multicast_Group:
> - * - three static rows per logical switch: one for flooding, one for packets
> - *   with unknown destinations, one for flooding IP multicast known traffic to
> - *   mrouters.
> - * - dynamically created rows based on IGMP groups learned by controllers.
> - */
> -
> -function mC_FLOOD(): (istring, integer) =
> -    (i"_MC_flood", 32768)
> -
> -function mC_UNKNOWN(): (istring, integer) =
> -    (i"_MC_unknown", 32769)
> -
> -function mC_MROUTER_FLOOD(): (istring, integer) =
> -    (i"_MC_mrouter_flood", 32770)
> -
> -function mC_MROUTER_STATIC(): (istring, integer) =
> -    (i"_MC_mrouter_static", 32771)
> -
> -function mC_STATIC(): (istring, integer) =
> -    (i"_MC_static", 32772)
> -
> -function mC_FLOOD_L2(): (istring, integer) =
> -    (i"_MC_flood_l2", 32773)
> -
> -function mC_IP_MCAST_MIN(): (istring, integer) =
> -    (i"_MC_ip_mcast_min", 32774)
> -
> -function mC_IP_MCAST_MAX(): (istring, integer) =
> -    (i"_MC_ip_mcast_max", 65535)
> -
> -
> -// TODO: check that Multicast_Group.ports should not include derived ports
> -
> -/* Proxy table for Out_Multicast_Group: contains all Multicast_Group fields,
> - * except `_uuid`, which will be computed by hashing the remaining fields,
> - * and tunnel key, which case it is allocated separately (see
> - * MulticastGroupTunKeyAllocation).
> - */
> -relation OutProxy_Multicast_Group (
> -    datapath: uuid,
> -    name: istring,
> -    ports: Set<uuid>
> -)
> -
> -/* Only create flood group if the switch has enabled ports */
> -sb::Out_Multicast_Group (._uuid = hash128((datapath,name)),
> -                         .datapath = datapath,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    &SwitchPort(.lsp = lsp, .sw = sw),
> -    lsp.is_enabled(),
> -    var datapath = sw._uuid,
> -    var port_ids = lsp._uuid.group_by((datapath)).to_set(),
> -    (var name, var tunnel_key) = mC_FLOOD().
> -
> -/* Create a multicast group to flood to all switch ports except router ports.
> - */
> -sb::Out_Multicast_Group (._uuid = hash128((datapath,name)),
> -                         .datapath = datapath,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    &SwitchPort(.lsp = lsp, .sw = sw),
> -    lsp.is_enabled(),
> -    lsp.__type != i"router",
> -    var datapath = sw._uuid,
> -    var port_ids = lsp._uuid.group_by((datapath)).to_set(),
> -    (var name, var tunnel_key) = mC_FLOOD_L2().
> -
> -/* Only create unknown group if the switch has ports with "unknown" address */
> -sb::Out_Multicast_Group (._uuid = hash128((ls,name)),
> -                         .datapath = ls,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = ports) :-
> -    LogicalSwitchPortWithUnknownAddress(ls, lsp),
> -    var ports = lsp.group_by(ls).to_set(),
> -    (var name, var tunnel_key) = mC_UNKNOWN().
> -
> -/* Create a multicast group to flood multicast traffic to routers with
> - * multicast relay enabled.
> - */
> -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
> -                         .datapath = sw._uuid,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    SwitchMcastFloodRelayPorts(sw, port_ids),
> -    not port_ids.is_empty(),
> -    (var name, var tunnel_key) = mC_MROUTER_FLOOD().
> -
> -/* Create a multicast group to flood traffic (no reports) to ports with
> - * multicast flood enabled.
> - */
> -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
> -                         .datapath = sw._uuid,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    SwitchMcastFloodPorts(sw, port_ids),
> -    not port_ids.is_empty(),
> -    (var name, var tunnel_key) = mC_STATIC().
> -
> -/* Create a multicast group to flood reports to ports with
> - * multicast flood_reports enabled.
> - */
> -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
> -                         .datapath = sw._uuid,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    SwitchMcastFloodReportPorts(sw, port_ids),
> -    not port_ids.is_empty(),
> -    (var name, var tunnel_key) = mC_MROUTER_STATIC().
> -
> -/* Create a multicast group to flood traffic and reports to router ports with
> - * multicast flood enabled.
> - */
> -sb::Out_Multicast_Group (._uuid = hash128((rtr._uuid,name)),
> -                         .datapath = rtr._uuid,
> -                         .name = name,
> -                         .tunnel_key = tunnel_key,
> -                         .ports = port_ids) :-
> -    RouterMcastFloodPorts(rtr, port_ids),
> -    not port_ids.is_empty(),
> -    (var name, var tunnel_key) = mC_STATIC().
> -
> -/* Create a multicast group for each IGMP group learned by a Switch.
> - * 'tunnel_key' == 0 triggers an ID allocation later.
> - */
> -OutProxy_Multicast_Group (.datapath = switch._uuid,
> -                          .name = address,
> -                          .ports = port_ids) :-
> -    IgmpSwitchMulticastGroup(address, switch, port_ids).
> -
> -/* Create a multicast group for each IGMP group learned by a Router.
> - * 'tunnel_key' == 0 triggers an ID allocation later.
> - */
> -OutProxy_Multicast_Group (.datapath = router._uuid,
> -                          .name = address,
> -                          .ports = port_ids) :-
> -    IgmpRouterMulticastGroup(address, router, port_ids).
> -
> -/* Allocate a 'tunnel_key' for dynamic multicast groups.
*/ > -sb::Out_Multicast_Group(._uuid = hash128((mcgroup.datapath,mcgroup.name)), > - .datapath = mcgroup.datapath, > - .name = mcgroup.name, > - .tunnel_key = tunnel_key, > - .ports = mcgroup.ports) :- > - mcgroup in OutProxy_Multicast_Group(), > - MulticastGroupTunKeyAllocation(mcgroup.datapath, mcgroup.name, tunnel_key). > - > -/* > - * MAC binding: records inserted by hypervisors; northd removes records for deleted logical ports and datapaths. > - */ > -sb::Out_MAC_Binding (._uuid = mb._uuid, > - .logical_port = mb.logical_port, > - .ip = mb.ip, > - .mac = mb.mac, > - .datapath = mb.datapath) :- > - sb::MAC_Binding[mb], > - sb::Out_Port_Binding(.logical_port = mb.logical_port), > - sb::Out_Datapath_Binding(._uuid = mb.datapath). > - > -/* > - * DHCP options: fixed table > - */ > -sb::Out_DHCP_Options ( > - ._uuid = 128'h7d9d898a_179b_4898_8382_b73bec391f23, > - .name = i"offerip", > - .code = 0, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'hea5e7d14_fd97_491c_8004_a120bdbc4306, > - .name = i"netmask", > - .code = 1, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'hdab5e39b_6702_4245_9573_6c142aa3724c, > - .name = i"router", > - .code = 3, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'h340b4bc5_c5c3_43d1_ae77_564da69c8fcc, > - .name = i"dns_server", > - .code = 6, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'hcd1ab302_cbb2_4eab_9ec5_ec1c8541bd82, > - .name = i"log_server", > - .code = 7, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'h1c7ea6a0_fe6b_48c1_a920_302583c1ff08, > - .name = i"lpr_server", > - .code = 9, > - .__type = i"ipv4" > -). > - > -sb::Out_DHCP_Options ( > - ._uuid = 128'hae312373_2261_41b5_a2c4_186f426dd929, > - .name = i"hostname", > - .code = 12, > - .__type = i"str" > -). 
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hae35e575_226a_4ab5_a1c4_166f426dd999,
> -    .name = i"domain_name",
> -    .code = 15,
> -    .__type = i"str"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'had0ec3e0_8be9_4c77_bceb_f8954a34c7ba,
> -    .name = i"swap_server",
> -    .code = 16,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h884c2e02_6e99_4d12_aef7_8454ebf8a3b7,
> -    .name = i"policy_filter",
> -    .code = 21,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h57cc2c61_fd2a_41c6_b6b1_6ce9a8901f86,
> -    .name = i"router_solicitation",
> -    .code = 32,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h48249097_03f0_46c1_a32a_2dd57cd4d0f8,
> -    .name = i"nis_server",
> -    .code = 41,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h333fe07e_bdd1_4371_aa4f_a412bc60f3a2,
> -    .name = i"ntp_server",
> -    .code = 42,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h6207109c_49d0_4348_8238_dd92afb69bf0,
> -    .name = i"server_id",
> -    .code = 54,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h2090b783_26d3_4c1d_830c_54c1b6c5d846,
> -    .name = i"tftp_server",
> -    .code = 66,
> -    .__type = i"host_id"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'ha18ff399_caea_406e_af7e_321c6f74e581,
> -    .name = i"classless_static_route",
> -    .code = 121,
> -    .__type = i"static_routes"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hb81ad7b4_62f0_40c7_a9a3_f96677628767,
> -    .name = i"ms_classless_static_route",
> -    .code = 249,
> -    .__type = i"static_routes"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h0c2e144e_4b5f_4e21_8978_0e20bac9a6ea,
> -    .name = i"ip_forward_enable",
> -    .code = 19,
> -    .__type = i"bool"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h6feb1926_9469_4b40_bfbf_478b9888cd3a,
> -    .name = i"router_discovery",
> -    .code = 31,
> -    .__type = i"bool"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hcb776249_e8b1_4502_b33b_fa294d44077d,
> -    .name = i"ethernet_encap",
> -    .code = 36,
> -    .__type = i"bool"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'ha2df9eaa_aea9_497f_b339_0c8ec3e39a07,
> -    .name = i"default_ttl",
> -    .code = 23,
> -    .__type = i"uint8"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hb44b45a9_5004_4ef5_8e6a_aa8629e1afb1,
> -    .name = i"tcp_ttl",
> -    .code = 37,
> -    .__type = i"uint8"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h50f01ca7_c650_46f0_8f50_39a67ec657da,
> -    .name = i"mtu",
> -    .code = 26,
> -    .__type = i"uint16"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h9d31c057_6085_4810_96af_eeac7d3c5308,
> -    .name = i"lease_time",
> -    .code = 51,
> -    .__type = i"uint32"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hea1e2e7a_9585_46ee_ad49_adfdefc0c4ef,
> -    .name = i"T1",
> -    .code = 58,
> -    .__type = i"uint32"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hbc83a233_554b_453a_afca_1eadf76810d2,
> -    .name = i"T2",
> -    .code = 59,
> -    .__type = i"uint32"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h1ab3eeca_0523_4101_9076_eea77d0232f4,
> -    .name = i"bootfile_name",
> -    .code = 67,
> -    .__type = i"str"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'ha5c20b69_f7f3_4fa8_b550_8697aec6cbb7,
> -    .name = i"wpad",
> -    .code = 252,
> -    .__type = i"str"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h1516bcb6_cc93_4233_a63f_bd29c8601831,
> -    .name = i"path_prefix",
> -    .code = 210,
> -    .__type = i"str"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hc98e13cd_f653_473c_85c1_850dcad685fc,
> -    .name = i"tftp_server_address",
> -    .code = 150,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hfbe06e70_b43d_4dd9_9b21_2f27eb5da5df,
> -    .name = i"arp_cache_timeout",
> -    .code = 35,
> -    .__type = i"uint32"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h2af54a3c_545c_4104_ae1c_432caa3e085e,
> -    .name = i"tcp_keepalive_interval",
> -    .code = 38,
> -    .__type = i"uint32"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h4b2144e8_8d3f_4d96_9032_fe23c1866cd4,
> -    .name = i"domain_search_list",
> -    .code = 119,
> -    .__type = i"domains"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'hb7236164_eea4_4bf2_9306_8619a9e3ad1d,
> -    .name = i"broadcast_address",
> -    .code = 28,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h32224b72_1561_4279_b430_982423b62a69,
> -    .name = i"netbios_name_server",
> -    .code = 44,
> -    .__type = i"ipv4"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h691db4ae_624e_43e2_9f4a_5ed9de58f0e5,
> -    .name = i"netbios_node_type",
> -    .code = 46,
> -    .__type = i"uint8"
> -).
> -
> -sb::Out_DHCP_Options (
> -    ._uuid = 128'h2d738583_96f4_4a78_99a1_f8f7fe328f3f,
> -    .name = i"bootfile_name_alt",
> -    .code = 254,
> -    .__type = i"str"
> -).
> -
> -
> -/*
> - * DHCPv6 options: fixed table
> - */
> -sb::Out_DHCPv6_Options (
> -    ._uuid = 128'h100b2659_0ec0_4da7_9ec3_25997f92dc00,
> -    .name = i"server_id",
> -    .code = 2,
> -    .__type = i"mac"
> -).
> -
> -sb::Out_DHCPv6_Options (
> -    ._uuid = 128'h53f49b50_db75_4b0d_83df_50d31009ca9c,
> -    .name = i"ia_addr",
> -    .code = 5,
> -    .__type = i"ipv6"
> -).
> -
> -sb::Out_DHCPv6_Options (
> -    ._uuid = 128'he3619685_d4f7_42ad_936b_4f4440b7eeb4,
> -    .name = i"dns_server",
> -    .code = 23,
> -    .__type = i"ipv6"
> -).
> -
> -sb::Out_DHCPv6_Options (
> -    ._uuid = 128'hcb8a4e7f_a312_4cb1_a846_e474d9f0c531,
> -    .name = i"domain_search",
> -    .code = 24,
> -    .__type = i"str"
> -).
> -
> -
> -/*
> - * DNS: copied from NB + datapaths column pointer to LS datapaths that use the record
> - */
> -
> -function map_to_lowercase(m_in: Map<istring,istring>): Map<istring,istring> {
> -    var m_out = map_empty();
> -    for ((k, v) in m_in) {
> -        m_out.insert(k.to_lowercase().intern(), v.to_lowercase().intern())
> -    };
> -    m_out
> -}
> -
> -sb::Out_DNS(._uuid = hash128(nbdns._uuid),
> -            .records = map_to_lowercase(nbdns.records),
> -            .datapaths = datapaths,
> -            .external_ids = nbdns.external_ids.insert_imm(i"dns_id", uuid2str(nbdns._uuid).intern())) :-
> -    nb::DNS[nbdns],
> -    LogicalSwitchDNS(ls_uuid, nbdns._uuid),
> -    var datapaths = ls_uuid.group_by(nbdns).to_set().
> -
> -/*
> - * RBAC_Permission: fixed
> - */
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h7df3749a_1754_4a78_afa4_3abf526fe510,
> -    .table = i"Chassis",
> -    .authorization = set_singleton(i"name"),
> -    .insert_delete = true,
> -    .update = [i"nb_cfg", i"external_ids", i"encaps",
> -               i"vtep_logical_switches", i"other_config",
> -               i"transport_zones"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54,
> -    .table = i"Chassis_Private",
> -    .authorization = set_singleton(i"name"),
> -    .insert_delete = true,
> -    .update = [i"nb_cfg", i"nb_cfg_timestamp", i"chassis", i"external_ids"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h94bec860_431e_4d95_82e7_3b75d8997241,
> -    .table = i"Encap",
> -    .authorization = set_singleton(i"chassis_name"),
> -    .insert_delete = true,
> -    .update = [i"type", i"options", i"ip"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908,
> -    .table = i"Port_Binding",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = false,
> -    .update = [i"chassis", i"encap", i"up", i"virtual_parent"].to_set()
> -).
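[Editor's note: for readers who don't know DDlog, the removed `map_to_lowercase()` helper above simply canonicalizes DNS record maps before they are copied into the southbound `DNS` table. A rough Python sketch (illustrative only, not part of the patch; the function name mirrors the DDlog one):

```python
def map_to_lowercase(records):
    """Lowercase every key and value of a DNS records map, the way the
    removed DDlog map_to_lowercase() helper does before the records are
    copied into the southbound DNS table."""
    return {k.lower(): v.lower() for k, v in records.items()}

# Example: {"VM1.OVN.ORG": "10.0.0.4"} becomes {"vm1.ovn.org": "10.0.0.4"}
print(map_to_lowercase({"VM1.OVN.ORG": "10.0.0.4"}))
```
]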
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f,
> -    .table = i"MAC_Binding",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = true,
> -    .update = [i"logical_port", i"ip", i"mac", i"datapath"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3,
> -    .table = i"Service_Monitor",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = false,
> -    .update = set_singleton(i"status")
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h5256f48e_172c_4d85_8f04_e199fa817633,
> -    .table = i"IGMP_Group",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = true,
> -    .update = [i"address", i"chassis", i"datapath", i"ports"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f,
> -    .table = i"Controller_Event",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = true,
> -    .update = [i"chassis", i"event_info", i"event_type",
> -               i"seq_num"].to_set()
> -).
> -
> -sb::Out_RBAC_Permission (
> -    ._uuid = 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126,
> -    .table = i"FDB",
> -    .authorization = set_singleton(i""),
> -    .insert_delete = true,
> -    .update = [i"dp_key", i"mac", i"port_key"].to_set()
> -).
> -
> -/*
> - * RBAC_Role: fixed
> - */
> -sb::Out_RBAC_Role (
> -    ._uuid = 128'ha406b472_5de8_4456_9f38_bf344c911b22,
> -    .name = i"ovn-controller",
> -    .permissions = [
> -        i"Chassis" -> 128'h7df3749a_1754_4a78_afa4_3abf526fe510,
> -        i"Chassis_Private" -> 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54,
> -        i"Controller_Event" -> 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f,
> -        i"Encap" -> 128'h94bec860_431e_4d95_82e7_3b75d8997241,
> -        i"FDB" -> 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126,
> -        i"IGMP_Group" -> 128'h5256f48e_172c_4d85_8f04_e199fa817633,
> -        i"Port_Binding" -> 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908,
> -        i"MAC_Binding" -> 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f,
> -        i"Service_Monitor"-> 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3]
> -
> -).
> -
> -/* Output modified Logical_Switch_Port table with dynamic address updated */
> -nb::Out_Logical_Switch_Port(._uuid = lsp._uuid,
> -                            .tag = tag,
> -                            .dynamic_addresses = dynamic_addresses,
> -                            .up = Some{up}) :-
> -    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = lsp, .up = up}, opt_dyn_addr),
> -    var dynamic_addresses = opt_dyn_addr.and_then(|a| Some{i"${a}"}),
> -    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
> -    var tag = match (opt_tag) {
> -        None -> lsp.tag,
> -        Some{t} -> Some{t}
> -    }.
> -
> -relation LRPIPv6Prefix0(lrp_uuid: uuid, ipv6_prefix: istring)
> -LRPIPv6Prefix0(lrp._uuid, ipv6_prefix.intern()) :-
> -    lrp in &nb::Logical_Router_Port(),
> -    lrp.options.get_bool_def(i"prefix", false),
> -    sb::Port_Binding(.logical_port = lrp.name, .options = options),
> -    Some{var ipv6_ra_pd_list} = options.get(i"ipv6_ra_pd_list"),
> -    var parts = ipv6_ra_pd_list.split(","),
> -    Some{var ipv6_prefix} = parts.nth(1).
> -
> -relation LRPIPv6Prefix(lrp_uuid: uuid, ipv6_prefix: Option<istring>)
> -LRPIPv6Prefix(lrp_uuid, Some{ipv6_prefix}) :-
> -    LRPIPv6Prefix0(lrp_uuid, ipv6_prefix).
> -LRPIPv6Prefix(lrp_uuid, None) :-
> -    &nb::Logical_Router_Port(._uuid = lrp_uuid),
> -    not LRPIPv6Prefix0(lrp_uuid, _).
> -
> -nb::Out_Logical_Router_Port(._uuid = _uuid,
> -                            .ipv6_prefix = to_set(ipv6_prefix)) :-
> -    &nb::Logical_Router_Port(._uuid = _uuid, .name = name),
> -    LRPIPv6Prefix(_uuid, ipv6_prefix).
> -
> -typedef Pipeline = Ingress | Egress
> -
> -typedef Stage = Stage {
> -    pipeline   : Pipeline,
> -    table_id   : bit<8>,
> -    table_name : istring
> -}
> -
> -/* Logical switch ingress stages. */
> -function s_SWITCH_IN_PORT_SEC_L2(): Intern<Stage> { Stage{Ingress, 0, i"ls_in_port_sec_l2"}.intern() }
> -function s_SWITCH_IN_PORT_SEC_IP(): Intern<Stage> { Stage{Ingress, 1, i"ls_in_port_sec_ip"}.intern() }
> -function s_SWITCH_IN_PORT_SEC_ND(): Intern<Stage> { Stage{Ingress, 2, i"ls_in_port_sec_nd"}.intern() }
> -function s_SWITCH_IN_LOOKUP_FDB(): Intern<Stage> { Stage{Ingress, 3, i"ls_in_lookup_fdb"}.intern() }
> -function s_SWITCH_IN_PUT_FDB(): Intern<Stage> { Stage{Ingress, 4, i"ls_in_put_fdb"}.intern() }
> -function s_SWITCH_IN_PRE_ACL(): Intern<Stage> { Stage{Ingress, 5, i"ls_in_pre_acl"}.intern() }
> -function s_SWITCH_IN_PRE_LB(): Intern<Stage> { Stage{Ingress, 6, i"ls_in_pre_lb"}.intern() }
> -function s_SWITCH_IN_PRE_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"ls_in_pre_stateful"}.intern() }
> -function s_SWITCH_IN_ACL_HINT(): Intern<Stage> { Stage{Ingress, 8, i"ls_in_acl_hint"}.intern() }
> -function s_SWITCH_IN_ACL(): Intern<Stage> { Stage{Ingress, 9, i"ls_in_acl"}.intern() }
> -function s_SWITCH_IN_QOS_MARK(): Intern<Stage> { Stage{Ingress, 10, i"ls_in_qos_mark"}.intern() }
> -function s_SWITCH_IN_QOS_METER(): Intern<Stage> { Stage{Ingress, 11, i"ls_in_qos_meter"}.intern() }
> -function s_SWITCH_IN_STATEFUL(): Intern<Stage> { Stage{Ingress, 12, i"ls_in_stateful"}.intern() }
> -function s_SWITCH_IN_PRE_HAIRPIN(): Intern<Stage> { Stage{Ingress, 13, i"ls_in_pre_hairpin"}.intern() }
> -function s_SWITCH_IN_NAT_HAIRPIN(): Intern<Stage> { Stage{Ingress, 14, i"ls_in_nat_hairpin"}.intern() }
> -function s_SWITCH_IN_HAIRPIN(): Intern<Stage> { Stage{Ingress, 15, i"ls_in_hairpin"}.intern() }
> -function s_SWITCH_IN_ARP_ND_RSP(): Intern<Stage> { Stage{Ingress, 16, i"ls_in_arp_rsp"}.intern() }
> -function s_SWITCH_IN_DHCP_OPTIONS(): Intern<Stage> { Stage{Ingress, 17, i"ls_in_dhcp_options"}.intern() }
> -function s_SWITCH_IN_DHCP_RESPONSE(): Intern<Stage> { Stage{Ingress, 18, i"ls_in_dhcp_response"}.intern() }
> -function s_SWITCH_IN_DNS_LOOKUP(): Intern<Stage> { Stage{Ingress, 19, i"ls_in_dns_lookup"}.intern() }
> -function s_SWITCH_IN_DNS_RESPONSE(): Intern<Stage> { Stage{Ingress, 20, i"ls_in_dns_response"}.intern() }
> -function s_SWITCH_IN_EXTERNAL_PORT(): Intern<Stage> { Stage{Ingress, 21, i"ls_in_external_port"}.intern() }
> -function s_SWITCH_IN_L2_LKUP(): Intern<Stage> { Stage{Ingress, 22, i"ls_in_l2_lkup"}.intern() }
> -function s_SWITCH_IN_L2_UNKNOWN(): Intern<Stage> { Stage{Ingress, 23, i"ls_in_l2_unknown"}.intern() }
> -
> -/* Logical switch egress stages. */
> -function s_SWITCH_OUT_PRE_LB(): Intern<Stage> { Stage{ Egress, 0, i"ls_out_pre_lb"}.intern() }
> -function s_SWITCH_OUT_PRE_ACL(): Intern<Stage> { Stage{ Egress, 1, i"ls_out_pre_acl"}.intern() }
> -function s_SWITCH_OUT_PRE_STATEFUL(): Intern<Stage> { Stage{ Egress, 2, i"ls_out_pre_stateful"}.intern() }
> -function s_SWITCH_OUT_ACL_HINT(): Intern<Stage> { Stage{ Egress, 3, i"ls_out_acl_hint"}.intern() }
> -function s_SWITCH_OUT_ACL(): Intern<Stage> { Stage{ Egress, 4, i"ls_out_acl"}.intern() }
> -function s_SWITCH_OUT_QOS_MARK(): Intern<Stage> { Stage{ Egress, 5, i"ls_out_qos_mark"}.intern() }
> -function s_SWITCH_OUT_QOS_METER(): Intern<Stage> { Stage{ Egress, 6, i"ls_out_qos_meter"}.intern() }
> -function s_SWITCH_OUT_STATEFUL(): Intern<Stage> { Stage{ Egress, 7, i"ls_out_stateful"}.intern() }
> -function s_SWITCH_OUT_PORT_SEC_IP(): Intern<Stage> { Stage{ Egress, 8, i"ls_out_port_sec_ip"}.intern() }
> -function s_SWITCH_OUT_PORT_SEC_L2(): Intern<Stage> { Stage{ Egress, 9, i"ls_out_port_sec_l2"}.intern() }
> -
> -/* Logical router ingress stages. */
> -function s_ROUTER_IN_ADMISSION(): Intern<Stage> { Stage{Ingress, 0, i"lr_in_admission"}.intern() }
> -function s_ROUTER_IN_LOOKUP_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 1, i"lr_in_lookup_neighbor"}.intern() }
> -function s_ROUTER_IN_LEARN_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 2, i"lr_in_learn_neighbor"}.intern() }
> -function s_ROUTER_IN_IP_INPUT(): Intern<Stage> { Stage{Ingress, 3, i"lr_in_ip_input"}.intern() }
> -function s_ROUTER_IN_UNSNAT(): Intern<Stage> { Stage{Ingress, 4, i"lr_in_unsnat"}.intern() }
> -function s_ROUTER_IN_DEFRAG(): Intern<Stage> { Stage{Ingress, 5, i"lr_in_defrag"}.intern() }
> -function s_ROUTER_IN_DNAT(): Intern<Stage> { Stage{Ingress, 6, i"lr_in_dnat"}.intern() }
> -function s_ROUTER_IN_ECMP_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"lr_in_ecmp_stateful"}.intern() }
> -function s_ROUTER_IN_ND_RA_OPTIONS(): Intern<Stage> { Stage{Ingress, 8, i"lr_in_nd_ra_options"}.intern() }
> -function s_ROUTER_IN_ND_RA_RESPONSE(): Intern<Stage> { Stage{Ingress, 9, i"lr_in_nd_ra_response"}.intern() }
> -function s_ROUTER_IN_IP_ROUTING(): Intern<Stage> { Stage{Ingress, 10, i"lr_in_ip_routing"}.intern() }
> -function s_ROUTER_IN_IP_ROUTING_ECMP(): Intern<Stage> { Stage{Ingress, 11, i"lr_in_ip_routing_ecmp"}.intern() }
> -function s_ROUTER_IN_POLICY(): Intern<Stage> { Stage{Ingress, 12, i"lr_in_policy"}.intern() }
> -function s_ROUTER_IN_POLICY_ECMP(): Intern<Stage> { Stage{Ingress, 13, i"lr_in_policy_ecmp"}.intern() }
> -function s_ROUTER_IN_ARP_RESOLVE(): Intern<Stage> { Stage{Ingress, 14, i"lr_in_arp_resolve"}.intern() }
> -function s_ROUTER_IN_CHK_PKT_LEN(): Intern<Stage> { Stage{Ingress, 15, i"lr_in_chk_pkt_len"}.intern() }
> -function s_ROUTER_IN_LARGER_PKTS(): Intern<Stage> { Stage{Ingress, 16, i"lr_in_larger_pkts"}.intern() }
> -function s_ROUTER_IN_GW_REDIRECT(): Intern<Stage> { Stage{Ingress, 17, i"lr_in_gw_redirect"}.intern() }
> -function s_ROUTER_IN_ARP_REQUEST(): Intern<Stage> { Stage{Ingress, 18, i"lr_in_arp_request"}.intern() }
> -
> -/* Logical router egress stages. */
> -function s_ROUTER_OUT_UNDNAT(): Intern<Stage> { Stage{ Egress, 0, i"lr_out_undnat"}.intern() }
> -function s_ROUTER_OUT_POST_UNDNAT(): Intern<Stage> { Stage{ Egress, 1, i"lr_out_post_undnat"}.intern() }
> -function s_ROUTER_OUT_SNAT(): Intern<Stage> { Stage{ Egress, 2, i"lr_out_snat"}.intern() }
> -function s_ROUTER_OUT_EGR_LOOP(): Intern<Stage> { Stage{ Egress, 3, i"lr_out_egr_loop"}.intern() }
> -function s_ROUTER_OUT_DELIVERY(): Intern<Stage> { Stage{ Egress, 4, i"lr_out_delivery"}.intern() }
> -
> -/*
> - * OVS register usage:
> - *
> - * Logical Switch pipeline:
> - * +----+----------------------------------------------+---+------------------+
> - * | R0 | REGBIT_{CONNTRACK/DHCP/DNS}                  |   |                  |
> - * |    | REGBIT_{HAIRPIN/HAIRPIN_REPLY}               |   |                  |
> - * |    | REGBIT_ACL_LABEL                             | X |                  |
> - * +----+----------------------------------------------+ X |                  |
> - * | R1 | ORIG_DIP_IPV4 (>= IN_STATEFUL)               | R |                  |
> - * +----+----------------------------------------------+ E |                  |
> - * | R2 | ORIG_TP_DPORT (>= IN_STATEFUL)               | G |                  |
> - * +----+----------------------------------------------+ 0 |                  |
> - * | R3 | ACL_LABEL                                    |   |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R4 | UNUSED                                       |   |                  |
> - * +----+----------------------------------------------+ X | ORIG_DIP_IPV6    |
> - * | R5 | UNUSED                                       | X | (>= IN_STATEFUL) |
> - * +----+----------------------------------------------+ R |                  |
> - * | R6 | UNUSED                                       | E |                  |
> - * +----+----------------------------------------------+ G |                  |
> - * | R7 | UNUSED                                       | 1 |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R8 | UNUSED                                       |
> - * +----+----------------------------------------------+
> - * | R9 | UNUSED                                       |
> - * +----+----------------------------------------------+
> - *
> - * Logical Router pipeline:
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R0  | REGBIT_ND_RA_OPTS_RESULT |   |                 |   |               |
> - * |     | (= IN_ND_RA_OPTIONS)     | X |                 |   |               |
> - * |     | NEXT_HOP_IPV4            | R |                 |   |               |
> - * |     | (>= IP_INPUT)            | E | INPORT_ETH_ADDR | X |               |
> - * +-----+--------------------------+ G | (< IP_INPUT)    | X |               |
> - * | R1  | SRC_IPV4 for ARP-REQ     | 0 |                 | R |               |
> - * |     | (>= IP_INPUT)            |   |                 | E | NEXT_HOP_IPV6 |
> - * +-----+--------------------------+---+-----------------+ G | (>= DEFRAG)   |
> - * | R2  | UNUSED                   | X |                 | 0 |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E | UNUSED          |   |               |
> - * | R3  | UNUSED                   | G |                 |   |               |
> - * |     |                          | 1 |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R4  | UNUSED                   | X |                 |   |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E | UNUSED          | X |               |
> - * | R5  | UNUSED                   | G |                 | X |               |
> - * |     |                          | 2 |                 | R |SRC_IPV6 for NS|
> - * +-----+--------------------------+---+-----------------+ E | (>=           |
> - * | R6  | UNUSED                   | X |                 | G | IN_IP_ROUTING)|
> - * |     |                          | R |                 | 1 |               |
> - * +-----+--------------------------+ E | UNUSED          |   |               |
> - * | R7  | UNUSED                   | G |                 |   |               |
> - * |     |                          | 3 |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R8  | ECMP_GROUP_ID            |   |                 |
> - * |     | ECMP_MEMBER_ID           | X |                 |
> - * +-----+--------------------------+ R |                 |
> - * |     | REGBIT_{                 | E |                 |
> - * |     |   EGRESS_LOOPBACK/       | G | UNUSED          |
> - * | R9  |   PKT_LARGER/            | 4 |                 |
> - * |     |   LOOKUP_NEIGHBOR_RESULT/|   |                 |
> - * |     |   SKIP_LOOKUP_NEIGHBOR}  |   |                 |
> - * |     |                          |   |                 |
> - * |     | REG_ORIG_TP_DPORT_ROUTER |   |                 |
> - * |     |                          |   |                 |
> - * +-----+--------------------------+---+-----------------+
> - *
> - */
> -
> -/* Register definitions specific to routers. */
> -function rEG_NEXT_HOP(): istring = i"reg0" /* reg0 for IPv4, xxreg0 for IPv6 */
> -function rEG_SRC(): istring = i"reg1" /* reg1 for IPv4, xxreg1 for IPv6 */
> -
> -/* Register definitions specific to switches. */
> -function rEGBIT_CONNTRACK_DEFRAG() : istring = i"reg0[0]"
> -function rEGBIT_CONNTRACK_COMMIT() : istring = i"reg0[1]"
> -function rEGBIT_CONNTRACK_NAT() : istring = i"reg0[2]"
> -function rEGBIT_DHCP_OPTS_RESULT() : istring = i"reg0[3]"
> -function rEGBIT_DNS_LOOKUP_RESULT(): istring = i"reg0[4]"
> -function rEGBIT_ND_RA_OPTS_RESULT(): istring = i"reg0[5]"
> -function rEGBIT_HAIRPIN() : istring = i"reg0[6]"
> -function rEGBIT_ACL_HINT_ALLOW_NEW(): istring = i"reg0[7]"
> -function rEGBIT_ACL_HINT_ALLOW() : istring = i"reg0[8]"
> -function rEGBIT_ACL_HINT_DROP() : istring = i"reg0[9]"
> -function rEGBIT_ACL_HINT_BLOCK() : istring = i"reg0[10]"
> -function rEGBIT_LKUP_FDB() : istring = i"reg0[11]"
> -function rEGBIT_HAIRPIN_REPLY() : istring = i"reg0[12]"
> -function rEGBIT_ACL_LABEL() : istring = i"reg0[13]"
> -
> -function rEG_ORIG_DIP_IPV4() : istring = i"reg1"
> -function rEG_ORIG_DIP_IPV6() : istring = i"xxreg1"
> -function rEG_ORIG_TP_DPORT() : istring = i"reg2[0..15]"
> -
> -/* Register definitions for switches and routers. */
> -
> -/* Indicate that this packet has been recirculated using egress
> - * loopback. This allows certain checks to be bypassed, such as a
> - * logical router dropping packets with source IP address equals
> - * one of the logical router's own IP addresses. */
> -function rEGBIT_EGRESS_LOOPBACK() : istring = i"reg9[0]"
> -/* Register to store the result of check_pkt_larger action. */
> -function rEGBIT_PKT_LARGER() : istring = i"reg9[1]"
> -function rEGBIT_LOOKUP_NEIGHBOR_RESULT() : istring = i"reg9[2]"
> -function rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() : istring = i"reg9[3]"
> -
> -/* Register to store the eth address associated to a router port for packets
> - * received in S_ROUTER_IN_ADMISSION.
> - */
> -function rEG_INPORT_ETH_ADDR() : istring = i"xreg0[0..47]"
> -
> -/* Register for ECMP bucket selection. */
> -function rEG_ECMP_GROUP_ID() : istring = i"reg8[0..15]"
> -function rEG_ECMP_MEMBER_ID() : istring = i"reg8[16..31]"
> -
> -function rEG_ORIG_TP_DPORT_ROUTER() : string = "reg9[16..31]"
> -
> -/* Register used for setting a label for ACLs in a Logical Switch. */
> -function rEG_LABEL() : istring = i"reg3"
> -
> -function fLAGBIT_NOT_VXLAN() : istring = i"flags[1] == 0"
> -
> -function mFF_N_LOG_REGS() : bit<32> = 10
> -
> -/*
> - * Generating sb::Logical_Flow and sb::Logical_DP_Group.
> - *
> - * Some logical flows occur in multiple logical datapaths.  These can
> - * be represented two ways: either as multiple Logical_Flow records
> - * (each with logical_datapath set appropriately) or as a single
> - * Logical_Flow record that points to a Logical_DP_Group record that
> - * lists all the datapaths it's in.  (It would be possible to mix or
> - * duplicate these methods, but we don't do that.)  We have to support
> - * both:
> - *
> - *   - There's a setting "use_logical_dp_groups" that globally
> - *     enables or disables this feature.
> - */
> -
> -relation Flow(
> -    logical_datapath: uuid,
> -    stage: Intern<Stage>,
> -    priority: integer,
> -    __match: istring,
> -    actions: istring,
> -    io_port: Option<istring>,
> -    controller_meter: Option<istring>,
> -    stage_hint: bit<32>
> -)
> -
> -function stage_hint(_uuid: uuid): bit<32> {
> -    _uuid[127:96]
> -}
> -
> -/* If this option is 'true' northd will combine logical flows that differ by
> - * logical datapath only by creating a datapath group. */
> -relation UseLogicalDatapathGroups[bool]
> -UseLogicalDatapathGroups[use_logical_dp_groups] :-
> -    nb in nb::NB_Global(),
> -    var use_logical_dp_groups = nb.options.get_bool_def(i"use_logical_dp_groups", true).
> -UseLogicalDatapathGroups[false] :-
> -    Unit(),
> -    not nb in nb::NB_Global().
> -
> -relation AggregatedFlow (
> -    logical_datapaths: Set<uuid>,
> -    stage: Intern<Stage>,
> -    priority: integer,
> -    __match: istring,
> -    actions: istring,
> -    io_port: Option<istring>,
> -    controller_meter: Option<istring>,
> -    stage_hint: bit<32>
> -)
> -function make_flow_tags(io_port: Option<istring>): Map<istring,istring> {
> -    match (io_port) {
> -        None -> map_empty(),
> -        Some{s} -> [ i"in_out_port" -> s ]
> -    }
> -}
> -function make_flow_external_ids(stage_hint: bit<32>, stage: Intern<Stage>): Map<istring,istring> {
> -    if (stage_hint == 0) {
> -        [i"stage-name" -> stage.table_name]
> -    } else {
> -        [i"stage-name" -> stage.table_name,
> -         i"stage-hint" -> i"${hex(stage_hint)}"]
> -    }
> -}
> -AggregatedFlow(.logical_datapaths = g.to_set(),
> -               .stage = stage,
> -               .priority = priority,
> -               .__match = __match,
> -               .actions = actions,
> -               .io_port = io_port,
> -               .controller_meter = controller_meter,
> -               .stage_hint = stage_hint) :-
> -    UseLogicalDatapathGroups[true],
> -    Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint),
> -    var g = logical_datapath.group_by((stage, priority, __match, actions, io_port, controller_meter, stage_hint)).
> -
> -AggregatedFlow(.logical_datapaths = set_singleton(logical_datapath),
> -               .stage = stage,
> -               .priority = priority,
> -               .__match = __match,
> -               .actions = actions,
> -               .io_port = io_port,
> -               .controller_meter = controller_meter,
> -               .stage_hint = stage_hint) :-
> -    UseLogicalDatapathGroups[false],
> -    Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint).
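[Editor's note: the AggregatedFlow rules above are the DDlog form of logical-datapath-group aggregation. As a rough Python sketch of the same idea (illustrative only; the name `aggregate_flows` and the flow representation are made up for the example): flows identical in every field except the datapath are merged into one record carrying the whole set of datapaths, otherwise each flow keeps a singleton set.

```python
from collections import defaultdict

def aggregate_flows(flows, use_dp_groups):
    """Each flow is a (datapath, key) pair, where key stands for the
    remaining fields (stage, priority, match, actions, ...).  With
    datapath groups enabled, group datapaths by key; otherwise emit
    one singleton-set entry per input flow."""
    if not use_dp_groups:
        return [({dp}, key) for dp, key in flows]
    groups = defaultdict(set)
    for dp, key in flows:
        groups[key].add(dp)
    return [(dps, key) for key, dps in groups.items()]
```
]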
> -
> -
> -function to_istring(pipeline: Pipeline): istring {
> -    if (pipeline == Ingress) {
> -        i"ingress"
> -    } else {
> -        i"egress"
> -    }
> -}
> -
> -for (f in AggregatedFlow()) {
> -    if (f.logical_datapaths.size() == 1) {
> -        Some{var dp} = f.logical_datapaths.nth(0) in
> -        sb::Out_Logical_Flow(
> -            ._uuid = hash128((dp, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)),
> -            .logical_datapath = Some{dp},
> -            .logical_dp_group = None,
> -            .pipeline = f.stage.pipeline.to_istring(),
> -            .table_id = f.stage.table_id as integer,
> -            .priority = f.priority,
> -            .controller_meter = f.controller_meter,
> -            .__match = f.__match,
> -            .actions = f.actions,
> -            .tags = make_flow_tags(f.io_port),
> -            .external_ids = make_flow_external_ids(f.stage_hint, f.stage))
> -    } else {
> -        var group_uuid = hash128(f.logical_datapaths) in {
> -            sb::Out_Logical_Flow(
> -                ._uuid = hash128((group_uuid, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)),
> -                .logical_datapath = None,
> -                .logical_dp_group = Some{group_uuid},
> -                .pipeline = f.stage.pipeline.to_istring(),
> -                .table_id = f.stage.table_id as integer,
> -                .priority = f.priority,
> -                .controller_meter = f.controller_meter,
> -                .__match = f.__match,
> -                .actions = f.actions,
> -                .tags = make_flow_tags(f.io_port),
> -                .external_ids = make_flow_external_ids(f.stage_hint, f.stage));
> -            sb::Out_Logical_DP_Group(._uuid = group_uuid, .datapaths = f.logical_datapaths)
> -        }
> -    }
> -}
> -
> -/* Logical flows for forwarding groups. */
> -Flow(.logical_datapath = sw._uuid,
> -     .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -     .priority = 50,
> -     .__match = __match,
> -     .actions = actions,
> -     .stage_hint = stage_hint(fg_uuid),
> -     .io_port = None,
> -     .controller_meter = None) :-
> -    sw in &Switch(),
> -    &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups),
> -    var fg_uuid = FlatMap(forwarding_groups),
> -    fg in nb::Forwarding_Group(._uuid = fg_uuid),
> -    not fg.child_port.is_empty(),
> -    var __match = i"arp.tpa == ${fg.vip} && arp.op == 1",
> -    var actions = i"eth.dst = eth.src; "
> -                   "eth.src = ${fg.vmac}; "
> -                   "arp.op = 2; /* ARP reply */ "
> -                   "arp.tha = arp.sha; "
> -                   "arp.sha = ${fg.vmac}; "
> -                   "arp.tpa = arp.spa; "
> -                   "arp.spa = ${fg.vip}; "
> -                   "outport = inport; "
> -                   "flags.loopback = 1; "
> -                   "output;".
> -
> -function escape_child_ports(child_port: Set<istring>): string {
> -    var escaped = vec_with_capacity(child_port.size());
> -    for (s in child_port) {
> -        escaped.push(json_escape(s))
> -    };
> -    escaped.join(",")
> -}
> -Flow(.logical_datapath = sw._uuid,
> -     .stage = s_SWITCH_IN_L2_LKUP(),
> -     .priority = 50,
> -     .__match = __match,
> -     .actions = actions.intern(),
> -     .stage_hint = 0,
> -     .io_port = None,
> -     .controller_meter = None) :-
> -    sw in &Switch(),
> -    &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups),
> -    var fg_uuid = FlatMap(forwarding_groups),
> -    fg in nb::Forwarding_Group(._uuid = fg_uuid),
> -    not fg.child_port.is_empty(),
> -    var __match = i"eth.dst == ${fg.vmac}",
> -    var actions = "fwd_group(" ++
> -                  if (fg.liveness) { "liveness=\"true\"," } else { "" } ++
> -                  "childports=" ++ escape_child_ports(fg.child_port) ++ ");".
> -
> -/* Logical switch ingress table PORT_SEC_L2: admission control framework
> - * (priority 100) */
> -for (sw in &Switch()) {
> -    if (not sw.is_vlan_transparent) {
> -        /* Block logical VLANs. */
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_PORT_SEC_L2(),
> -             .priority = 100,
> -             .__match = i"vlan.present",
> -             .actions = i"drop;",
> -             .stage_hint = 0 /*TODO: check*/,
> -             .io_port = None,
> -             .controller_meter = None)
> -    };
> -
> -    /* Broadcast/multicast source address is invalid */
> -    Flow(.logical_datapath = sw._uuid,
> -         .stage = s_SWITCH_IN_PORT_SEC_L2(),
> -         .priority = 100,
> -         .__match = i"eth.src[40]",
> -         .actions = i"drop;",
> -         .stage_hint = 0 /*TODO: check*/,
> -         .io_port = None,
> -         .controller_meter = None)
> -    /* Port security flows have priority 50 (see below) and will continue
> -       to the next table if packet source is acceptable. */
> -}
> -
> -// space-separated set of strings
> -function join(strings: Set<string>, sep: string): string {
> -    strings.to_vec().join(sep)
> -}
> -
> -function build_port_security_ipv6_flow(
> -    pipeline: Pipeline,
> -    ea: eth_addr,
> -    ipv6_addrs: Vec<ipv6_netaddr>): string =
> -{
> -    var ip6_addrs = vec_empty();
> -
> -    /* Allow link-local address. */
> -    ip6_addrs.push(ea.to_ipv6_lla().string_mapped());
> -
> -    /* Allow ip6.dst=ff00::/8 for multicast packets */
> -    if (pipeline == Egress) {
> -        ip6_addrs.push("ff00::/8")
> -    };
> -    for (addr in ipv6_addrs) {
> -        ip6_addrs.push(addr.match_network())
> -    };
> -
> -    var dir = if (pipeline == Ingress) { "src" } else { "dst" };
> -    " && ip6.${dir} == {" ++ ip6_addrs.join(", ") ++ "}"
> -}
> -
> -function build_port_security_ipv6_nd_flow(
> -    ea: eth_addr,
> -    ipv6_addrs: Vec<ipv6_netaddr>): string =
> -{
> -    var __match = " && ip6 && nd && ((nd.sll == ${eth_addr_zero()} || "
> -                  "nd.sll == ${ea}) || ((nd.tll == ${eth_addr_zero()} || "
> -                  "nd.tll == ${ea})";
> -    if (ipv6_addrs.is_empty()) {
> -        __match ++ "))"
> -    } else {
> -        __match = __match ++ " && (nd.target == ${ea.to_ipv6_lla()}";
> -
> -        for(addr in ipv6_addrs) {
> -            __match = __match ++ " || nd.target == ${addr.match_network()}"
> -        };
> -        __match ++ ")))"
> -    }
> -}
> -
> -/* Pre-ACL */
> -for (&Switch(._uuid = ls_uuid)) {
> -    /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
> -     * allowed by default. */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_ACL(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_PRE_ACL(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"eth.dst == $svc_monitor_mac",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"eth.src == $svc_monitor_mac",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -/* stateless filters always take precedence over stateful ACLs. */
> -for (&SwitchACL(.sw = sw@&Switch{._uuid = ls_uuid}, .acl = acl, .has_fair_meter = fair_meter)) {
> -    if (sw.has_stateful_acl) {
> -        if (acl.action == i"allow-stateless") {
> -            if (acl.direction == i"from-lport") {
> -                Flow(.logical_datapath = ls_uuid,
> -                     .stage = s_SWITCH_IN_PRE_ACL(),
> -                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
> -                     .__match = acl.__match,
> -                     .actions = i"next;",
> -                     .stage_hint = stage_hint(acl._uuid),
> -                     .io_port = None,
> -                     .controller_meter = None)
> -            } else {
> -                Flow(.logical_datapath = ls_uuid,
> -                     .stage = s_SWITCH_OUT_PRE_ACL(),
> -                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
> -                     .__match = acl.__match,
> -                     .actions = i"next;",
> -                     .stage_hint = stage_hint(acl._uuid),
> -                     .io_port = None,
> -                     .controller_meter = None)
> -            }
> -        }
> -    }
> -}
> -
> -/* If there are any stateful ACL rules in this datapath, we must
> - * send all IP packets through the conntrack action, which handles
> - * defragmentation, in order to match L4 headers. */
> -
> -for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"router"},
> -                 .json_name = lsp_name,
> -                 .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) {
> -    /* Can't use ct() for router ports.  Consider the
> -     * following configuration: lp1(10.0.0.2) on
> -     * hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
> -     * ping from lp1 to lp2, First, the response will go
> -     * through ct() with a zone for lp2 in the ls2 ingress
> -     * pipeline on hostB.  That ct zone knows about this
> -     * connection.  Next, it goes through ct() with the zone
> -     * for the router port in the egress pipeline of ls2 on
> -     * hostB.  This zone does not know about the connection,
> -     * as the icmp request went through the logical router
> -     * on hostA, not hostB.  This would only work with
> -     * distributed conntrack state across all chassis. */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"ip && inport == ${lsp_name}",
> -         .actions = i"next;",
> -         .stage_hint = stage_hint(lsp._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"ip && outport == ${lsp_name}",
> -         .actions = i"next;",
> -         .stage_hint = stage_hint(lsp._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None)
> -}
> -
> -for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"localnet"},
> -                 .json_name = lsp_name,
> -                 .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) {
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"ip && inport == ${lsp_name}",
> -         .actions = i"next;",
> -         .stage_hint = stage_hint(lsp._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"ip && outport == ${lsp_name}",
> -         .actions = i"next;",
> -         .stage_hint = stage_hint(lsp._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None)
> -}
> -
> -for (&Switch(._uuid = ls_uuid, .has_stateful_acl = true)) {
> -    /* Ingress and Egress Pre-ACL Table (Priority 110).
> -     *
> -     * Not to do conntrack on ND and ICMP destination
> -     * unreachable packets. */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> -                    "(udp && udp.src == 546 && udp.dst == 547)",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_PRE_ACL(),
> -         .priority = 110,
> -         .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> -                    "(udp && udp.src == 546 && udp.dst == 547)",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* Ingress and Egress Pre-ACL Table (Priority 100).
> -     *
> -     * Regardless of whether the ACL is "from-lport" or "to-lport",
> -     * we need rules in both the ingress and egress table, because
> -     * the return traffic needs to be followed.
> -     *
> -     * 'REGBIT_CONNTRACK_DEFRAG' is set to let the pre-stateful table send
> -     * it to conntrack for tracking and defragmentation.
*/ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_ACL(), > - .priority = 100, > - .__match = i"ip", > - .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_ACL(), > - .priority = 100, > - .__match = i"ip", > - .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Pre-LB */ > -for (&Switch(._uuid = ls_uuid)) { > - /* Do not send ND packets to conntrack */ > - var __match = i"nd || nd_rs || nd_ra || mldv1 || mldv2" in { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 110, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_LB(), > - .priority = 110, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Do not send service monitor packets to conntrack. */ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 110, > - .__match = i"eth.dst == $svc_monitor_mac", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_LB(), > - .priority = 110, > - .__match = i"eth.src == $svc_monitor_mac", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Allow all packets to go to next tables by default. 
*/ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_LB(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -for (&SwitchPort(.lsp = lsp, .json_name = lsp_name, .sw = &Switch{._uuid = ls_uuid})) > -if (lsp.__type == i"router" or lsp.__type == i"localnet") { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 110, > - .__match = i"ip && inport == ${lsp_name}", > - .actions = i"next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_LB(), > - .priority = 110, > - .__match = i"ip && outport == ${lsp_name}", > - .actions = i"next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) > -} > - > -/* Empty LoadBalancer Controller event */ > -function build_empty_lb_event_flow(key: istring, lb: Intern<nb::Load_Balancer>): Option<(istring, istring)> { > - (var ip, var port) = match (ip_address_and_port_from_lb_key(key.ival())) { > - Some{(ip, port)} -> (ip, port), > - _ -> return None > - }; > - > - var protocol = if (lb.protocol == Some{i"tcp"}) { "tcp" } else { "udp" }; > - var vip = match (port) { > - 0 -> "${ip}", > - _ -> "${ip.to_bracketed_string()}:${port}" > - }; > - > - var __match = vec_with_capacity(2); > - __match.push("${ip.ipX()}.dst == ${ip}"); > - if (port != 0) { > - __match.push("${protocol}.dst == ${port}"); > - }; > - > - var action = i"trigger_event(" > - "event = \"empty_lb_backends\", " > - "vip = \"${vip}\", " > - "protocol = \"${protocol}\", " > - "load_balancer = \"${uuid2str(lb._uuid)}\");"; > - > - 
Some{(__match.join(" && ").intern(), action)} > -} > - > -/* Contains the load balancers for which an event should be sent each time it > - * runs out of backends. > - * > - * The preferred way to do this is by setting an individual Load_Balancer's > - * options:event=true. > - * > - * The deprecated way is to set nb::NB_Global options:controller_event=true, > - * which enables events for every load balancer. > - */ > -relation LoadBalancerEmptyEvents(lb_uuid: uuid) > -LoadBalancerEmptyEvents(lb_uuid) :- > - nb::NB_Global(.options = global_options), > - var global_events = global_options.get_bool_def(i"controller_event", false), > - &nb::Load_Balancer(._uuid = lb_uuid, .options = local_options), > - var local_events = local_options.get_bool_def(i"event", false), > - global_events or local_events. > - > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 130, > - .__match = __match, > - .actions = __action, > - .io_port = None, > - .controller_meter = sw.copp.get(cOPP_EVENT_ELB()), > - .stage_hint = stage_hint(lb._uuid)) :- > - SwitchLBVIP(.sw_uuid = sw_uuid, .lb = lb, .vip = vip, .backends = backends), > - LoadBalancerEmptyEvents(lb._uuid), > - not lb.options.get_bool_def(i"reject", false), > - sw in &Switch(._uuid = sw_uuid), > - backends == i"", > - Some {(var __match, var __action)} = build_empty_lb_event_flow(vip, lb). > - > -/* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send > - * packet to conntrack for defragmentation. > - * > - * Send all the packets to conntrack in the ingress pipeline if the > - * logical switch has a load balancer with VIP configured. Earlier > - * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress pipeline > - * if the IP destination matches the VIP. But this causes a few issues when > - * a logical switch has no ACLs configured with allow-related. > - * To understand the issue, let's take a TCP load balancer - > - * 10.0.0.10:80=10.0.0.3:80.
> - * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection with > - * the VIP - 10.0.0.10, then the packet in the ingress pipeline of 'p1' > - * is sent to the p1's conntrack zone id and the packet is load balanced > - * to the backend - 10.0.0.3. For the reply packet from the backend lport, > - * it is not sent to the conntrack of backend lport's zone id. This is fine > - * as long as the packet is valid. Suppose the backend lport sends an > - * invalid TCP packet (like incorrect sequence number), the packet gets > - * delivered to the lport 'p1' without unDNATing the packet to the > - * VIP - 10.0.0.10. And this causes the connection to be reset by the > - * lport p1's VIF. > - * > - * We can't fix this issue by adding a logical flow to drop ct.inv packets > - * in the egress pipeline since it will drop all other connections not > - * destined to the load balancers. > - * > - * To fix this issue, we send all the packets to the conntrack in the > - * ingress pipeline if a load balancer is configured. We can now > - * add a lflow to drop ct.inv packets. > - */ > -for (sw in &Switch(.has_lb_vip = true)) { > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_PRE_LB(), > - .priority = 100, > - .__match = i"ip", > - .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PRE_LB(), > - .priority = 100, > - .__match = i"ip", > - .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Pre-stateful */ > -relation LbProtocol[string] > -LbProtocol["tcp"]. > -LbProtocol["udp"]. > -LbProtocol["sctp"]. > -for (&Switch(._uuid = ls_uuid)) { > - /* Ingress and Egress pre-stateful Table (Priority 0): Packets are > - * allowed by default. 
*/ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* If rEGBIT_CONNTRACK_NAT() is set as 1, then packets should just be sent > - * through nat (without committing). > - * > - * rEGBIT_CONNTRACK_COMMIT() is set for new connections and > - * rEGBIT_CONNTRACK_NAT() is set for established connections. So they > - * don't overlap. > - * > - * In the ingress pipeline, also store the original destination IP and > - * transport port to be used when detecting hairpin packets. > - */ > - for (LbProtocol[protocol]) { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > - .priority = 120, > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip4 && ${protocol}", > - .actions = i"${rEG_ORIG_DIP_IPV4()} = ip4.dst; " > - "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > - .priority = 120, > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip6 && ${protocol}", > - .actions = i"${rEG_ORIG_DIP_IPV6()} = ip6.dst; " > - "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > - .priority = 110, > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1", > - .actions = i"ct_lb;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > - .priority = 110, > - .__match = 
i"${rEGBIT_CONNTRACK_NAT()} == 1", > - .actions = i"ct_lb;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* If rEGBIT_CONNTRACK_DEFRAG() is set as 1, then the packets should be > - * sent to conntrack for tracking and defragmentation. */ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > - .priority = 100, > - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", > - .actions = i"ct_next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > - .priority = 100, > - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", > - .actions = i"ct_next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -function acl_log_meter_name(meter_name: istring, acl_uuid: uuid): string = > -{ > - "${meter_name}__${uuid2str(acl_uuid)}" > -} > - > -function build_acl_log(acl: Intern<nb::ACL>, fair_meter: bool): string = > -{ > - if (not acl.log) { > - "" > - } else { > - var strs = vec_empty(); > - match (acl.name) { > - None -> (), > - Some{name} -> strs.push("name=${json_escape(name)}") > - }; > - /* If a severity level isn't specified, default to "info". 
*/ > - match (acl.severity) { > - None -> strs.push("severity=info"), > - Some{severity} -> strs.push("severity=${severity}") > - }; > - match (acl.action.ival()) { > - "drop" -> { > - strs.push("verdict=drop") > - }, > - "reject" -> { > - strs.push("verdict=reject") > - }, > - "allow" -> { > - strs.push("verdict=allow") > - }, > - "allow-related" -> { > - strs.push("verdict=allow") > - }, > - "allow-stateless" -> { > - strs.push("verdict=allow") > - }, > - _ -> () > - }; > - match (acl.meter) { > - Some{meter} -> { > - var name = match (fair_meter) { > - true -> acl_log_meter_name(meter, acl._uuid), > - false -> meter.ival() > - }; > - strs.push("meter=${json_escape(name)}") > - }, > - None -> () > - }; > - "log(${strs.join(\", \")}); " > - } > -} > - > -/* Due to various hard-coded priorities needed to implement ACLs, the > - * northbound database supports a smaller range of ACL priorities than > - * are available to logical flows. This value is added to an ACL > - * priority to determine the ACL's logical flow priority. */ > -function oVN_ACL_PRI_OFFSET(): integer = 1000 > - > -/* Intermediate relation that stores reject ACLs. > - * The following rules generate logical flows for these ACLs.
> - */ > -relation Reject( > - lsuuid: uuid, > - pipeline: Pipeline, > - stage: Intern<Stage>, > - acl: Intern<nb::ACL>, > - fair_meter: bool, > - controller_meter: Option<istring>, > - extra_match: istring, > - extra_actions: istring) > - > -/* build_reject_acl_rules() */ > -function next_to_stage(stage: Intern<Stage>): string { > - var pipeline = match (stage.pipeline) { > - Ingress -> "ingress", > - Egress -> "egress" > - }; > - "next(pipeline=${pipeline},table=${stage.table_id})" > -} > -for (Reject(lsuuid, pipeline, stage, acl, fair_meter, controller_meter, > - extra_match_, extra_actions_)) { > - var extra_match = if (extra_match_ == i"") { "" } else { "(${extra_match_}) && " } in > - var extra_actions = if (extra_actions_ == i"") { "" } else { "${extra_actions_} " } in > - var next_stage = match (pipeline) { > - Ingress -> s_SWITCH_OUT_QOS_MARK(), > - Egress -> s_SWITCH_IN_L2_LKUP() > - } in > - var acl_log = build_acl_log(acl, fair_meter) in > - var __match = extra_match ++ acl.__match in > - var actions = acl_log ++ extra_actions ++ "reg0 = 0; " > - "reject { " > - "/* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ " > - "outport <-> inport; ${next_to_stage(next_stage)}; };" in > - Flow(.logical_datapath = lsuuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = __match.intern(), > - .actions = actions.intern(), > - .io_port = None, > - .controller_meter = controller_meter, > - .stage_hint = stage_hint(acl._uuid)) > -} > - > -/* build_acls */ > -for (UseCtInvMatch[use_ct_inv_match]) { > - (var ct_inv_or, var and_not_ct_inv) = match (use_ct_inv_match) { > - true -> ("ct.inv || ", "&& !ct.inv "), > - false -> ("", ""), > - } in > - for (sw in &Switch(._uuid = ls_uuid)) > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > - { > - /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by > - * default. 
If the logical switch has no ACLs and no load balancers, > - * then add 65535-priority flow to advance the packet to next > - * stage. > - * > - * A related rule at priority 1 is added below if there > - * are any stateful ACLs in this datapath. */ > - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } > - in > - { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = priority, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = priority, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - if (has_stateful) { > - /* Ingress and Egress ACL Table (Priority 1). > - * > - * By default, traffic is allowed. This is partially handled by > - * the Priority 0 ACL flows added earlier, but we also need to > - * commit IP flows. This is because, while the initiator's > - * direction may not have any stateful rules, the server's may > - * and then its return traffic would not have an associated > - * conntrack entry and would return "+invalid". > - * > - * We use "ct_commit" for a connection that is not already known > - * by the connection tracker. Once a connection is committed, > - * subsequent packets will hit the flow at priority 0 that just > - * uses "next;" > - * > - * We also check for established connections that have ct_label.blocked > - * set on them. That's a connection that was disallowed, but is > - * now allowed by policy again since it hit this default-allow flow. > - * We need to set ct_label.blocked=0 to let the connection continue, > - * which will be done by ct_commit() in the "stateful" stage. > - * Subsequent packets will hit the flow at priority 0 that just > - * uses "next;".
*/ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 1, > - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", > - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 1, > - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", > - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Ingress and Egress ACL Table (Priority 65532). > - * > - * Always drop traffic that's in an invalid state. Also drop > - * reply direction packets for connections that have been marked > - * for deletion (bit 0 of ct_label is set). > - * > - * This is enforced at a higher priority than ACLs can be defined. */ > - var __match = (ct_inv_or ++ "(ct.est && ct.rpl && ct_label.blocked == 1)").intern() in { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Ingress and Egress ACL Table (Priority 65532). > - * > - * Allow reply traffic that is part of an established > - * conntrack entry that has not been marked for deletion > - * (bit 0 of ct_label). We only match traffic in the > - * reply direction because we want traffic in the request > - * direction to hit the currently defined policy from ACLs. > - * > - * This is enforced at a higher priority than ACLs can be defined. 
*/ > - var __match = ("ct.est && !ct.rel && !ct.new " ++ and_not_ct_inv ++ > - "&& ct.rpl && ct_label.blocked == 0").intern() in { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Ingress and Egress ACL Table (Priority 65532). > - * > - * Allow traffic that is related to an existing conntrack entry that > - * has not been marked for deletion (bit 0 of ct_label). > - * > - * This is enforced at a higher priority than ACLs can be defined. > - * > - * NOTE: This does not support related data sessions (eg, > - * a dynamically negotiated FTP data channel), but will allow > - * related traffic such as an ICMP Port Unreachable through > - * that's generated from a non-listening UDP port. */ > - var __match = ("!ct.est && ct.rel && !ct.new " ++ and_not_ct_inv ++ > - "&& ct_label.blocked == 0").intern() in { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 65532, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Ingress and Egress ACL Table (Priority 65532). > - * > - * Not to do conntrack on ND packets. 
*/ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 65532, > - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 65532, > - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Add a 34000 priority flow to advance the DNS reply from ovn-controller, > - * if the CMS has configured DNS records for the datapath. > - */ > - if (sw.has_dns_records) { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 34000, > - .__match = i"udp.src == 53", > - .actions = if has_stateful i"ct_commit; next;" else i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - }; > - > - if (sw.has_acls or sw.has_lb_vip) { > - /* Add a 34000 priority flow to advance the service monitor reply > - * packets to skip applying ingress ACLs. */ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_ACL(), > - .priority = 34000, > - .__match = i"eth.dst == $svc_monitor_mac", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_ACL(), > - .priority = 34000, > - .__match = i"eth.src == $svc_monitor_mac", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -/* This stage builds hints for the IN/OUT_ACL stage. Based on various > - * combinations of ct flags packets may hit only a subset of the logical > - * flows in the IN/OUT_ACL stage. > - * > - * Populating ACL hints first and storing them in registers simplifies > - * the logical flow match expressions in the IN/OUT_ACL stage and > - * generates less openflows. 
> - * > - * Certain combinations of ct flags might be valid matches for multiple > - * types of ACL logical flows (e.g., allow/drop). In such cases hints > - * corresponding to all potential matches are set. > - */ > -input relation AclHintStages[Intern<Stage>] > -AclHintStages[s_SWITCH_IN_ACL_HINT()]. > -AclHintStages[s_SWITCH_OUT_ACL_HINT()]. > -for (sw in &Switch(._uuid = ls_uuid)) { > - for (AclHintStages[stage]) { > - /* In any case, advance to the next stage. */ > - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } in > - Flow(ls_uuid, stage, priority, i"1", i"next;", None, None, 0) > - }; > - > - for (AclHintStages[stage]) > - if (sw.has_stateful_acl or sw.has_lb_vip) { > - /* New, not already established connections, may hit either allow > - * or drop ACLs. For allow ACLs, the connection must also be committed > - * to conntrack so we set REGBIT_ACL_HINT_ALLOW_NEW. > - */ > - Flow(ls_uuid, stage, 7, i"ct.new && !ct.est", > - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > - "next;", None, None, 0); > - > - /* Already established connections in the "request" direction that > - * are already marked as "blocked" may hit either: > - * - allow ACLs for connections that were previously allowed by a > - * policy that was deleted and is being readded now. In this case > - * the connection should be recommitted so we set > - * REGBIT_ACL_HINT_ALLOW_NEW. > - * - drop ACLs. > - */ > - Flow(ls_uuid, stage, 6, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 1", > - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > - "next;", None, None, 0); > - > - /* Not tracked traffic can either be allowed or dropped. 
*/ > - Flow(ls_uuid, stage, 5, i"!ct.trk", > - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > - "next;", None, None, 0); > - > - /* Already established connections in the "request" direction may hit > - * either: > - * - allow ACLs in which case the traffic should be allowed so we set > - * REGBIT_ACL_HINT_ALLOW. > - * - drop ACLs in which case the traffic should be blocked and the > - * connection must be committed with ct_label.blocked set so we set > - * REGBIT_ACL_HINT_BLOCK. > - */ > - Flow(ls_uuid, stage, 4, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 0", > - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " > - "${rEGBIT_ACL_HINT_BLOCK()} = 1; " > - "next;", None, None, 0); > - > - /* Not established or established and already blocked connections may > - * hit drop ACLs. > - */ > - Flow(ls_uuid, stage, 3, i"!ct.est", > - i"${rEGBIT_ACL_HINT_DROP()} = 1; " > - "next;", None, None, 0); > - Flow(ls_uuid, stage, 2, i"ct.est && ct_label.blocked == 1", > - i"${rEGBIT_ACL_HINT_DROP()} = 1; " > - "next;", None, None, 0); > - > - /* Established connections that were previously allowed might hit > - * drop ACLs in which case the connection must be committed with > - * ct_label.blocked set. > - */ > - Flow(ls_uuid, stage, 1, i"ct.est && ct_label.blocked == 0", > - i"${rEGBIT_ACL_HINT_BLOCK()} = 1; " > - "next;", None, None, 0) > - } > -} > - > -/* Ingress or Egress ACL Table (Various priorities). 
*/ > -for (&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = fair_meter)) { > - /* consider_acl */ > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > - var ingress = acl.direction == i"from-lport" in > - var stage = if (ingress) { s_SWITCH_IN_ACL() } else { s_SWITCH_OUT_ACL() } in > - var pipeline = if ingress Ingress else Egress in > - var stage_hint = stage_hint(acl._uuid) in > - var acl_log = build_acl_log(acl, fair_meter) in > - var acl_match = acl.__match.intern() in > - if (acl.action == i"allow" or acl.action == i"allow-related") { > - /* If there are any stateful flows, we must even commit "allow" > - * actions. This is because, while the initiator's > - * direction may not have any stateful rules, the server's > - * may and then its return traffic would not have an > - * associated conntrack entry and would return "+invalid". */ > - if (not has_stateful) { > - Flow(.logical_datapath = sw._uuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = acl.__match, > - .actions = i"${acl_log}next;", > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None) > - } else { > - /* Commit the connection tracking entry if it's a new > - * connection that matches this ACL. After this commit, > - * the reply traffic is allowed by a flow we create at > - * priority 65532, defined earlier. > - * > - * It's also possible that a known connection was marked for > - * deletion after a policy was deleted, but the policy was > - * re-added while that connection is still known. We catch > - * that case here and un-set ct_label.blocked (which will be done > - * by ct_commit in the "stateful" stage) to indicate that the > - * connection should be allowed to resume. > - * If the ACL has a label, then load REG_LABEL with the label and > - * set the REGBIT_ACL_LABEL field.
> - */ > - var __action = if (acl.label != 0) { > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " > - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" > - } else { > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${acl_log}next;" > - } in Flow(.logical_datapath = sw._uuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = i"${rEGBIT_ACL_HINT_ALLOW_NEW()} == 1 && (${acl.__match})", > - .actions = __action, > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None); > - > - /* Match on traffic in the request direction for an established > - * connection tracking entry that has not been marked for > - * deletion. We use this to ensure that this > - * connection is still allowed by the currently defined > - * policy. Match untracked packets too. > - * Commit the connection only if the ACL has a label. This is done to > - * update the connection tracking entry label in case the ACL > - * allowing the connection changes. > - */ > - var __action = if (acl.label != 0) { > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " > - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" > - } else { > - i"${acl_log}next;" > - } in Flow(.logical_datapath = sw._uuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = i"${rEGBIT_ACL_HINT_ALLOW()} == 1 && (${acl.__match})", > - .actions = __action, > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None) > - } > - } else if (acl.action == i"allow-stateless") { > - Flow(.logical_datapath = sw._uuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = acl.__match, > - .actions = i"${acl_log}next;", > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None) > - } else if (acl.action == i"drop" or acl.action == i"reject") { > - /* The implementation of "drop" differs if stateful ACLs are in > - * use for this datapath. 
In that case, the actions differ > - * depending on whether the connection was previously committed > - * to the connection tracker with ct_commit. */ > - var controller_meter = sw.copp.get(cOPP_REJECT()) in > - if (has_stateful) { > - /* If the packet is not tracked or not part of an established > - * connection, then we can simply reject/drop it. */ > - var __match = "${rEGBIT_ACL_HINT_DROP()} == 1" in > - if (acl.action == i"reject") { > - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), i"") > - } else { > - Flow(.logical_datapath = sw._uuid, > - .stage = stage, > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > - .__match = (__match ++ " && (${acl.__match})").intern(), > - .actions = i"${acl_log}/* drop */", > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None) > - }; > - /* For an existing connection without ct_label set, we've > - * encountered a policy change. ACLs previously allowed > - * this connection and we committed the connection tracking > - * entry. Current policy says that we should drop this > - * connection. First, we set bit 0 of ct_label to indicate > - * that this connection is set for deletion. By not > - * specifying "next;", we implicitly drop the packet after > - * updating conntrack state. We would normally defer > - * ct_commit() to the "stateful" stage, but since we're > - * rejecting/dropping the packet, we go ahead and do it here. 
> -             */
> -            var __match = "${rEGBIT_ACL_HINT_BLOCK()} == 1" in
> -            var actions = "ct_commit { ct_label.blocked = 1; }; " in
> -            if (acl.action == i"reject") {
> -                Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), actions.intern())
> -            } else {
> -                Flow(.logical_datapath = sw._uuid,
> -                     .stage = stage,
> -                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
> -                     .__match = (__match ++ " && (${acl.__match})").intern(),
> -                     .actions = i"${actions}${acl_log}/* drop */",
> -                     .stage_hint = stage_hint,
> -                     .io_port = None,
> -                     .controller_meter = None)
> -            }
> -        } else {
> -            /* There are no stateful ACLs in use on this datapath,
> -             * so a "reject/drop" ACL is simply the "reject/drop"
> -             * logical flow action in all cases. */
> -            if (acl.action == i"reject") {
> -                Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, i"", i"")
> -            } else {
> -                Flow(.logical_datapath = sw._uuid,
> -                     .stage = stage,
> -                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
> -                     .__match = acl.__match,
> -                     .actions = i"${acl_log}/* drop */",
> -                     .stage_hint = stage_hint,
> -                     .io_port = None,
> -                     .controller_meter = None)
> -            }
> -        }
> -    }
> -}
> -
> -/* Add 34000 priority flow to allow DHCP reply from ovn-controller to all
> - * logical ports of the datapath if the CMS has configured DHCPv4 options.
> - * */
> -for (SwitchPortDHCPv4Options(.port = &SwitchPort{.lsp = lsp, .sw = sw},
> -                             .dhcpv4_options = dhcpv4_options@&nb::DHCP_Options{.options = options})
> -     if lsp.__type != i"external") {
> -    (Some{var server_id}, Some{var server_mac}, Some{var lease_time}) =
> -        (options.get(i"server_id"), options.get(i"server_mac"), options.get(i"lease_time")) in
> -    var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in
> -    Flow(.logical_datapath = sw._uuid,
> -         .stage = s_SWITCH_OUT_ACL(),
> -         .priority = 34000,
> -         .__match = i"outport == ${json_escape(lsp.name)} "
> -                     "&& eth.src == ${server_mac} "
> -                     "&& ip4.src == ${server_id} && udp && udp.src == 67 "
> -                     "&& udp.dst == 68",
> -         .actions = if (has_stateful) i"ct_commit; next;" else i"next;",
> -         .stage_hint = stage_hint(dhcpv4_options._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None)
> -}
> -
> -for (SwitchPortDHCPv6Options(.port = &SwitchPort{.lsp = lsp, .sw = sw},
> -                             .dhcpv6_options = dhcpv6_options@&nb::DHCP_Options{.options=options} )
> -     if lsp.__type != i"external") {
> -    Some{var server_mac} = options.get(i"server_id") in
> -    Some{var ea} = eth_addr_from_string(server_mac.ival()) in
> -    var server_ip = ea.to_ipv6_lla() in
> -    /* Get the link local IP of the DHCPv6 server from the
> -     * server MAC.
> -     */
> -    var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in
> -    Flow(.logical_datapath = sw._uuid,
> -         .stage = s_SWITCH_OUT_ACL(),
> -         .priority = 34000,
> -         .__match = i"outport == ${json_escape(lsp.name)} "
> -                     "&& eth.src == ${server_mac} "
> -                     "&& ip6.src == ${server_ip} && udp && udp.src == 547 "
> -                     "&& udp.dst == 546",
> -         .actions = if (has_stateful) i"ct_commit; next;" else i"next;",
> -         .stage_hint = stage_hint(dhcpv6_options._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None)
> -}
> -
> -relation QoSAction(qos: uuid, key_action: istring, value_action: integer)
> -
> -QoSAction(qos, k, v) :-
> -    &nb::QoS(._uuid = qos, .action = actions),
> -    (var k, var v) = FlatMap(actions).
> -
> -/* QoS rules */
> -for (&Switch(._uuid = ls_uuid)) {
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_QOS_MARK(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_QOS_MARK(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_QOS_METER(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_QOS_METER(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -for (SwitchQoS(.sw = sw, .qos = qos)) {
> -    var ingress = if (qos.direction == i"from-lport") true else false in
> -    var pipeline = if ingress "ingress" else "egress" in {
> -        var stage = if (ingress) { s_SWITCH_IN_QOS_MARK() } else { s_SWITCH_OUT_QOS_MARK() } in
> -        /* FIXME: Can value_action be negative?
> -         */
> -        for (QoSAction(qos._uuid, key_action, value_action)) {
> -            if (key_action == i"dscp") {
> -                Flow(.logical_datapath = sw._uuid,
> -                     .stage = stage,
> -                     .priority = qos.priority,
> -                     .__match = qos.__match,
> -                     .actions = i"ip.dscp = ${value_action}; next;",
> -                     .stage_hint = stage_hint(qos._uuid),
> -                     .io_port = None,
> -                     .controller_meter = None)
> -            }
> -        };
> -
> -        (var burst, var rate) = {
> -            var rate = 0;
> -            var burst = 0;
> -            for ((key_bandwidth, value_bandwidth) in qos.bandwidth) {
> -                /* FIXME: Can value_bandwidth be negative? */
> -                if (key_bandwidth == i"rate") {
> -                    rate = value_bandwidth
> -                } else if (key_bandwidth == i"burst") {
> -                    burst = value_bandwidth
> -                } else ()
> -            };
> -            (burst, rate)
> -        } in
> -        if (rate != 0) {
> -            var stage = if (ingress) { s_SWITCH_IN_QOS_METER() } else { s_SWITCH_OUT_QOS_METER() } in
> -            var meter_action = if (burst != 0) {
> -                i"set_meter(${rate}, ${burst}); next;"
> -            } else {
> -                i"set_meter(${rate}); next;"
> -            } in
> -            /* Ingress and Egress QoS Meter Table.
> -             *
> -             * We limit the bandwidth of this flow by adding a meter table.
> -             */
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = stage,
> -                 .priority = qos.priority,
> -                 .__match = qos.__match,
> -                 .actions = meter_action,
> -                 .stage_hint = stage_hint(qos._uuid),
> -                 .io_port = None,
> -                 .controller_meter = None)
> -        }
> -    }
> -}
> -
> -/* stateful rules */
> -for (&Switch(._uuid = ls_uuid)) {
> -    /* Ingress and Egress stateful Table (Priority 0): Packets are
> -     * allowed by default.
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_STATEFUL(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_STATEFUL(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* If REGBIT_CONNTRACK_COMMIT is set as 1 and REGBIT_CONNTRACK_SET_LABEL
> -     * is set to 1, then the packets should be
> -     * committed to conntrack. We always set ct_label.blocked to 0 here as
> -     * any packet that makes it this far is part of a connection we
> -     * want to allow to continue. */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_STATEFUL(),
> -         .priority = 100,
> -         .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1",
> -         .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_STATEFUL(),
> -         .priority = 100,
> -         .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1",
> -         .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* If REGBIT_CONNTRACK_COMMIT is set as 1, then the packets should be
> -     * committed to conntrack. We always set ct_label.blocked to 0 here as
> -     * any packet that makes it this far is part of a connection we
> -     * want to allow to continue.
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_STATEFUL(),
> -         .priority = 100,
> -         .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0",
> -         .actions = i"ct_commit { ct_label.blocked = 0; }; next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_OUT_STATEFUL(),
> -         .priority = 100,
> -         .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0",
> -         .actions = i"ct_commit { ct_label.blocked = 0; }; next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -/* Load balancing rules for new connections get committed to conntrack
> - * table. So even if REGBIT_CONNTRACK_COMMIT is set in a previous table
> - * a higher priority rule for load balancing below also commits the
> - * connection, so it is okay if we do not hit the above match on
> - * REGBIT_CONNTRACK_COMMIT. */
> -function get_match_for_lb_key(ip_address: v46_ip,
> -                              port: bit<16>,
> -                              protocol: Option<istring>,
> -                              redundancy: bool,
> -                              use_nexthop_reg: bool,
> -                              use_dest_tp_reg: bool): string = {
> -    var port_match = if (port != 0) {
> -        var proto = if (protocol == Some{i"udp"}) {
> -            "udp"
> -        } else {
> -            "tcp"
> -        };
> -        if (redundancy) { " && ${proto}" } else { "" } ++
> -        if (use_dest_tp_reg) {
> -            " && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}"
> -        } else {
> -            " && ${proto}.dst == ${port}"
> -        }
> -    } else {
> -        ""
> -    };
> -
> -    var ip_match = match (ip_address) {
> -        IPv4{ipv4} ->
> -            if (use_nexthop_reg) {
> -                "${rEG_NEXT_HOP()} == ${ipv4}"
> -            } else {
> -                "ip4.dst == ${ipv4}"
> -            },
> -        IPv6{ipv6} ->
> -            if (use_nexthop_reg) {
> -                "xx${rEG_NEXT_HOP()} == ${ipv6}"
> -            } else {
> -                "ip6.dst == ${ipv6}"
> -            }
> -    };
> -
> -    var ipx = match (ip_address) {
> -        IPv4{ipv4} -> "ip4",
> -        IPv6{ipv6} -> "ip6",
> -    };
> -
> -    if (redundancy) { ipx ++ " && " } else { "" } ++ ip_match ++ port_match
> -}
> -/* New connections in
> -   Ingress table. */
> -
> -function ct_lb(backends: istring,
> -               selection_fields: Set<istring>, protocol: Option<istring>): string {
> -    var args = vec_with_capacity(2);
> -    args.push("backends=${backends}");
> -
> -    if (not selection_fields.is_empty()) {
> -        var hash_fields = vec_with_capacity(selection_fields.size());
> -        for (sf in selection_fields) {
> -            var hf = match ((sf.ival(), protocol)) {
> -                ("tp_src", Some{p}) -> "${p}_src",
> -                ("tp_dst", Some{p}) -> "${p}_dst",
> -                _ -> sf.ival()
> -            };
> -            hash_fields.push(hf);
> -        };
> -        hash_fields.sort();
> -        args.push("hash_fields=" ++ json_escape(hash_fields.join(",")));
> -    };
> -
> -    "ct_lb(" ++ args.join("; ") ++ ");"
> -}
> -function build_lb_vip_actions(lbvip: Intern<LBVIP>,
> -                              up_backends: istring,
> -                              stage: Intern<Stage>,
> -                              actions0: string): (string, bool) {
> -    if (up_backends == i"") {
> -        if (lbvip.lb.options.get_bool_def(i"reject", false)) {
> -            return ("reg0 = 0; reject { outport <-> inport; ${next_to_stage(stage)};};", true)
> -        } else if (lbvip.health_check.is_some()) {
> -            return ("drop;", false)
> -        } // else fall through
> -    };
> -
> -    var actions = ct_lb(up_backends, lbvip.lb.selection_fields, lbvip.lb.protocol);
> -    (actions0 ++ actions, false)
> -}
> -Flow(.logical_datapath = sw._uuid,
> -     .stage = s_SWITCH_IN_STATEFUL(),
> -     .priority = priority,
> -     .__match = __match,
> -     .actions = actions,
> -     .io_port = None,
> -     .controller_meter = meter,
> -     .stage_hint = 0) :-
> -    LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends),
> -    var priority = if (lbvip.vip_port != 0) { 120 } else { 110 },
> -    (var actions0, var reject) = {
> -        /* Store the original destination IP to be used when generating
> -         * hairpin flows.
> -         */
> -        var actions0 = match (lbvip.vip_addr) {
> -            IPv4{ipv4} -> "${rEG_ORIG_DIP_IPV4()} = ${ipv4}; ",
> -            IPv6{ipv6} -> "${rEG_ORIG_DIP_IPV6()} = ${ipv6}; "
> -        };
> -
> -        /* Store the original destination port to be used when generating
> -         * hairpin flows.
> -         */
> -        var actions1 = if (lbvip.vip_port != 0) {
> -            "${rEG_ORIG_TP_DPORT()} = ${lbvip.vip_port}; "
> -        } else {
> -            ""
> -        };
> -
> -        build_lb_vip_actions(lbvip, up_backends, s_SWITCH_OUT_QOS_MARK(), actions0 ++ actions1)
> -    },
> -    var actions = actions0.intern(),
> -    var __match = ("ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false, false, false)).intern(),
> -    SwitchLB(sw, lb._uuid),
> -    var meter = if (reject) {
> -        sw.copp.get(cOPP_REJECT())
> -    } else {
> -        None
> -    }.
> -
> -/* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0).
> - * Packets that don't need hairpinning should continue processing.
> - */
> -Flow(.logical_datapath = ls_uuid,
> -     .stage = stage,
> -     .priority = 0,
> -     .__match = i"1",
> -     .actions = i"next;",
> -     .stage_hint = 0,
> -     .io_port = None,
> -     .controller_meter = None) :-
> -    &Switch(._uuid = ls_uuid),
> -    var stages = [s_SWITCH_IN_PRE_HAIRPIN(),
> -                  s_SWITCH_IN_NAT_HAIRPIN(),
> -                  s_SWITCH_IN_HAIRPIN()],
> -    var stage = FlatMap(stages).
> -
> -for (&Switch(._uuid = ls_uuid, .has_lb_vip = true)) {
> -    /* Check if the packet needs to be hairpinned.
> -     * Set REGBIT_HAIRPIN in the original direction and
> -     * REGBIT_HAIRPIN_REPLY in the reply direction.
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PRE_HAIRPIN(),
> -         .priority = 100,
> -         .__match = i"ip && ct.trk",
> -         .actions = i"${rEGBIT_HAIRPIN()} = chk_lb_hairpin(); "
> -                     "${rEGBIT_HAIRPIN_REPLY()} = chk_lb_hairpin_reply(); "
> -                     "next;",
> -         .stage_hint = stage_hint(ls_uuid),
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* If packet needs to be hairpinned, snat the src ip with the VIP
> -     * for new sessions.
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_NAT_HAIRPIN(),
> -         .priority = 100,
> -         .__match = i"ip && ct.new && ct.trk && ${rEGBIT_HAIRPIN()} == 1",
> -         .actions = i"ct_snat_to_vip; next;",
> -         .stage_hint = stage_hint(ls_uuid),
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* If packet needs to be hairpinned, for established sessions there
> -     * should already be an SNAT conntrack entry.
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_NAT_HAIRPIN(),
> -         .priority = 100,
> -         .__match = i"ip && ct.est && ct.trk && ${rEGBIT_HAIRPIN()} == 1",
> -         .actions = i"ct_snat;",
> -         .stage_hint = stage_hint(ls_uuid),
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* For the reply of hairpinned traffic, snat the src ip to the VIP. */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_NAT_HAIRPIN(),
> -         .priority = 90,
> -         .__match = i"ip && ${rEGBIT_HAIRPIN_REPLY()} == 1",
> -         .actions = i"ct_snat;",
> -         .stage_hint = stage_hint(ls_uuid),
> -         .io_port = None,
> -         .controller_meter = None);
> -
> -    /* Ingress Hairpin table.
> -     * - Priority 1: Packets that were SNAT-ed for hairpinning should be
> -     *   looped back (i.e., swap ETH addresses and send back on inport).
> -     */
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_HAIRPIN(),
> -         .priority = 1,
> -         .__match = i"(${rEGBIT_HAIRPIN()} == 1 || ${rEGBIT_HAIRPIN_REPLY()} == 1)",
> -         .actions = i"eth.dst <-> eth.src; outport = inport; flags.loopback = 1; output;",
> -         .stage_hint = stage_hint(ls_uuid),
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -/* Logical switch ingress table PORT_SEC_L2: ingress port security - L2 (priority 50)
> -   ingress table PORT_SEC_IP: ingress port security - IP (priority 90 and 80)
> -   ingress table PORT_SEC_ND: ingress port security - ND (priority 90 and 80) */
> -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses)
> -     if lsp.is_enabled() and lsp.__type != i"external") {
> -    for (pbinding in sb::Out_Port_Binding(.logical_port = lsp.name)) {
> -        var __match = if (ps_eth_addresses.is_empty()) {
> -            i"inport == ${json_name}"
> -        } else {
> -            i"inport == ${json_name} && eth.src == {${ps_eth_addresses.join(\" \")}}"
> -        } in
> -
> -        var actions = {
> -            var queue = match (pbinding.options.get(i"qdisc_queue_id")) {
> -                None -> i"",
> -                Some{id} -> i"set_queue(${id});"
> -            };
> -            var ramp = if (lsp.__type == i"vtep") {
> -                i"next(pipeline=ingress, table=${s_SWITCH_IN_L2_LKUP().table_id});"
> -            } else {
> -                i"next;"
> -            };
> -            i"${queue}${ramp}"
> -        } in
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_PORT_SEC_L2(),
> -             .priority = 50,
> -             .__match = __match,
> -             .actions = actions,
> -             .stage_hint = stage_hint(lsp._uuid),
> -             .io_port = Some{lsp.name},
> -             .controller_meter = None)
> -    }
> -}
> -
> -/**
> -* Build port security constraints on IPv4 and IPv6 src and dst fields
> -* and add logical flows to S_SWITCH_(IN/OUT)_PORT_SEC_IP stage.
> -*
> -* For each port security of the logical port, following
> -* logical flows are added
> -* - If the port security has IPv4 addresses,
> -*     - Priority 90 flow to allow IPv4 packets for known IPv4 addresses
> -*
> -* - If the port security has IPv6 addresses,
> -*     - Priority 90 flow to allow IPv6 packets for known IPv6 addresses
> -*
> -* - If the port security has IPv4 addresses or IPv6 addresses or both
> -*     - Priority 80 flow to drop all IPv4 and IPv6 traffic
> -*/
> -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps)
> -     if port.is_enabled() and
> -        (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) and
> -        port.lsp.__type != i"external")
> -{
> -    if (ps.ipv4_addrs.len() > 0) {
> -        var dhcp_match = i"inport == ${port.json_name}"
> -                          " && eth.src == ${ps.ea}"
> -                          " && ip4.src == 0.0.0.0"
> -                          " && ip4.dst == 255.255.255.255"
> -                          " && udp.src == 68 && udp.dst == 67" in {
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -                 .priority = 90,
> -                 .__match = dhcp_match,
> -                 .actions = i"next;",
> -                 .stage_hint = stage_hint(port.lsp._uuid),
> -                 .io_port = Some{port.lsp.name},
> -                 .controller_meter = None)
> -        };
> -        var addrs = {
> -            var addrs = vec_empty();
> -            for (addr in ps.ipv4_addrs) {
> -                /* When the netmask is applied, if the host portion is
> -                 * non-zero, the host can only use the specified
> -                 * address. If zero, the host is allowed to use any
> -                 * address in the subnet.
> -                 */
> -                addrs.push(addr.match_host_or_network())
> -            };
> -            addrs
> -        } in
> -        var __match =
> -            "inport == ${port.json_name} && eth.src == ${ps.ea} && ip4.src == {" ++
> -            addrs.join(", ") ++ "}" in
> -        {
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -                 .priority = 90,
> -                 .__match = __match.intern(),
> -                 .actions = i"next;",
> -                 .stage_hint = stage_hint(port.lsp._uuid),
> -                 .io_port = Some{port.lsp.name},
> -                 .controller_meter = None)
> -        }
> -    };
> -    if (ps.ipv6_addrs.len() > 0) {
> -        var dad_match = i"inport == ${port.json_name}"
> -                         " && eth.src == ${ps.ea}"
> -                         " && ip6.src == ::"
> -                         " && ip6.dst == ff02::/16"
> -                         " && icmp6.type == {131, 135, 143}" in
> -        {
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -                 .priority = 90,
> -                 .__match = dad_match,
> -                 .actions = i"next;",
> -                 .stage_hint = stage_hint(port.lsp._uuid),
> -                 .io_port = None,
> -                 .controller_meter = None)
> -        };
> -        var __match = "inport == ${port.json_name} && eth.src == ${ps.ea}" ++
> -                      build_port_security_ipv6_flow(Ingress, ps.ea, ps.ipv6_addrs) in
> -        {
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -                 .priority = 90,
> -                 .__match = __match.intern(),
> -                 .actions = i"next;",
> -                 .stage_hint = stage_hint(port.lsp._uuid),
> -                 .io_port = Some{port.lsp.name},
> -                 .controller_meter = None)
> -        }
> -    };
> -    var __match = i"inport == ${port.json_name} && eth.src == ${ps.ea} && ip" in
> -    {
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -             .priority = 80,
> -             .__match = __match,
> -             .actions = i"drop;",
> -             .stage_hint = stage_hint(port.lsp._uuid),
> -             .io_port = Some{port.lsp.name},
> -             .controller_meter = None)
> -    }
> -}
> -
> -/**
> - * Build port security constraints on ARP and IPv6 ND fields
> - * and add logical flows to S_SWITCH_IN_PORT_SEC_ND stage.
> - *
> - * For each port security of the logical port, following
> - * logical flows are added
> - * - If the port security has no IP (both IPv4 and IPv6) or
> - *   if it has IPv4 address(es)
> - *   - Priority 90 flow to allow ARP packets for known MAC addresses
> - *     in the eth.src and arp.spa fields. If the port security
> - *     has IPv4 addresses, allow known IPv4 addresses in the arp.tpa field.
> - *
> - * - If the port security has no IP (both IPv4 and IPv6) or
> - *   if it has IPv6 address(es)
> - *   - Priority 90 flow to allow IPv6 ND packets for known MAC addresses
> - *     in the eth.src and nd.sll/nd.tll fields. If the port security
> - *     has IPv6 addresses, allow known IPv6 addresses in the nd.target field
> - *     for IPv6 Neighbor Advertisement packet.
> - *
> - * - Priority 80 flow to drop ARP and IPv6 ND packets.
> - */
> -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps)
> -     if port.is_enabled() and port.lsp.__type != i"external")
> -{
> -    var no_ip = ps.ipv4_addrs.is_empty() and ps.ipv6_addrs.is_empty() in
> -    {
> -        if (not ps.ipv4_addrs.is_empty() or no_ip) {
> -            var __match = {
> -                var prefix = "inport == ${port.json_name} && eth.src == ${ps.ea} && arp.sha == ${ps.ea}";
> -                if (not ps.ipv4_addrs.is_empty()) {
> -                    var spas = vec_empty();
> -                    for (addr in ps.ipv4_addrs) {
> -                        spas.push(addr.match_host_or_network())
> -                    };
> -                    prefix ++ " && arp.spa == {${spas.join(\", \")}}"
> -                } else {
> -                    prefix
> -                }
> -            } in {
> -                Flow(.logical_datapath = sw._uuid,
> -                     .stage = s_SWITCH_IN_PORT_SEC_ND(),
> -                     .priority = 90,
> -                     .__match = __match.intern(),
> -                     .actions = i"next;",
> -                     .stage_hint = stage_hint(port.lsp._uuid),
> -                     .io_port = Some{port.lsp.name},
> -                     .controller_meter = None)
> -            }
> -        };
> -        if (not ps.ipv6_addrs.is_empty() or no_ip) {
> -            var __match = "inport == ${port.json_name} && eth.src == ${ps.ea}" ++
> -                          build_port_security_ipv6_nd_flow(ps.ea, ps.ipv6_addrs) in
> -            {
> -                Flow(.logical_datapath = sw._uuid,
> -                     .stage = s_SWITCH_IN_PORT_SEC_ND(),
> -                     .priority = 90,
> -                     .__match = __match.intern(),
> -                     .actions = i"next;",
> -                     .stage_hint = stage_hint(port.lsp._uuid),
> -                     .io_port = Some{port.lsp.name},
> -                     .controller_meter = None)
> -            }
> -        };
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_PORT_SEC_ND(),
> -             .priority = 80,
> -             .__match = i"inport == ${port.json_name} && (arp || nd)",
> -             .actions = i"drop;",
> -             .stage_hint = stage_hint(port.lsp._uuid),
> -             .io_port = Some{port.lsp.name},
> -             .controller_meter = None)
> -    }
> -}
> -
> -/* Ingress table PORT_SEC_ND and PORT_SEC_IP: Port security - IP and ND, by
> - * default goto next. (priority 0)*/
> -for (&Switch(._uuid = ls_uuid)) {
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PORT_SEC_ND(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None);
> -    Flow(.logical_datapath = ls_uuid,
> -         .stage = s_SWITCH_IN_PORT_SEC_IP(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -/* Ingress table ARP_ND_RSP: ARP/ND responder, skip requests coming from
> - * localnet and vtep ports. (priority 100); see ovn-northd.8.xml for the
> - * rationale. */
> -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name)
> -     if lsp.is_enabled() and
> -        (lsp.__type == i"localnet" or lsp.__type == i"vtep"))
> -{
> -    Flow(.logical_datapath = sw._uuid,
> -         .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -         .priority = 100,
> -         .__match = i"inport == ${json_name}",
> -         .actions = i"next;",
> -         .stage_hint = stage_hint(lsp._uuid),
> -         .io_port = Some{lsp.name},
> -         .controller_meter = None)
> -}
> -
> -function lsp_is_up(lsp: Intern<nb::Logical_Switch_Port>): bool = {
> -    lsp.up == Some{true}
> -}
> -
> -/* Ingress table ARP_ND_RSP: ARP/ND responder, reply for known IPs.
> - * (priority 50).
> - */
> -/* Handle
> - *  - GARPs for virtual ip which belongs to a logical port
> - *    of type 'virtual' and bind that port.
> - *
> - *  - ARP reply from the virtual ip which belongs to a logical
> - *    port of type 'virtual' and bind that port.
> - * */
> - Flow(.logical_datapath = sp.sw._uuid,
> -      .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -      .priority = 100,
> -      .__match = i"inport == ${vp.json_name} && "
> -                  "((arp.op == 1 && arp.spa == ${virtual_ip} && arp.tpa == ${virtual_ip}) || "
> -                  "(arp.op == 2 && arp.spa == ${virtual_ip}))",
> -      .actions = i"bind_vport(${sp.json_name}, inport); next;",
> -      .stage_hint = stage_hint(lsp._uuid),
> -      .io_port = Some{vp.lsp.name},
> -      .controller_meter = None) :-
> -    sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}),
> -    Some{var virtual_ip} = lsp.options.get(i"virtual-ip"),
> -    Some{var virtual_parents} = lsp.options.get(i"virtual-parents"),
> -    Some{var ip} = ip_parse(virtual_ip.ival()),
> -    var vparent = FlatMap(virtual_parents.split(",")),
> -    vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = vparent.intern()}),
> -    vp.sw == sp.sw.
> -
> -/*
> - * Add ARP/ND reply flows if either the
> - *  - port is up and it doesn't have 'unknown' address defined or
> - *  - port type is router or
> - *  - port type is localport
> - */
> -for (CheckLspIsUp[check_lsp_is_up]) {
> -    for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = lsp, .sw = sw, .json_name = json_name},
> -                               .ea = ea, .addr = addr)
> -         if lsp.is_enabled() and
> -            ((lsp_is_up(lsp) or not check_lsp_is_up)
> -             or lsp.__type == i"router" or lsp.__type == i"localport") and
> -            lsp.__type != i"external" and lsp.__type != i"virtual" and
> -            not lsp.addresses.contains(i"unknown") and
> -            not sw.is_vlan_transparent)
> -    {
> -        var __match = "arp.tpa == ${addr.addr} && arp.op == 1" in
> -        {
> -            var actions = i"eth.dst = eth.src; "
> -                           "eth.src = ${ea}; "
> -                           "arp.op = 2; /* ARP reply */ "
> -                           "arp.tha = arp.sha; "
> -                           "arp.sha = ${ea}; "
> -                           "arp.tpa = arp.spa; "
> -                           "arp.spa = ${addr.addr}; "
> -                           "outport = inport; "
> -                           "flags.loopback = 1; "
> -                           "output;" in
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -                 .priority = 50,
> -                 .__match = __match.intern(),
> -                 .actions = actions,
> -                 .stage_hint = stage_hint(lsp._uuid),
> -                 .io_port = None,
> -                 .controller_meter = None);
> -
> -            /* Do not reply to an ARP request from the port that owns the
> -             * address (otherwise a DHCP client that ARPs to check for a
> -             * duplicate address will fail). Instead, forward it the usual
> -             * way.
> -             *
> -             * (Another alternative would be to simply drop the packet. If
> -             * everything is working as it is configured, then this would
> -             * produce equivalent results, since no one should reply to the
> -             * request. But ARPing for one's own IP address is intended to
> -             * detect situations where the network is not working as
> -             * configured, so dropping the request would frustrate that
> -             * intent.)
> -             */
> -            Flow(.logical_datapath = sw._uuid,
> -                 .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -                 .priority = 100,
> -                 .__match = i"${__match} && inport == ${json_name}",
> -                 .actions = i"next;",
> -                 .stage_hint = stage_hint(lsp._uuid),
> -                 .io_port = Some{lsp.name},
> -                 .controller_meter = None)
> -        }
> -    }
> -}
> -
> -Flow(.logical_datapath = sw._uuid,
> -     .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -     .priority = 50,
> -     .__match = __match.intern(),
> -     .actions = __actions,
> -     .stage_hint = stage_hint(sp.lsp._uuid),
> -     .io_port = None,
> -     .controller_meter = None) :-
> -
> -    sp in &SwitchPort(.sw = sw, .peer = Some{rp}),
> -    rp.is_enabled(),
> -    var proxy_ips = {
> -        match (sp.lsp.options.get(i"arp_proxy")) {
> -            None -> "",
> -            Some {addresses} -> {
> -                match (extract_ip_addresses(addresses.ival())) {
> -                    None -> "",
> -                    Some{addr} -> {
> -                        var ip4_addrs = vec_empty();
> -                        for (ip4 in addr.ipv4_addrs) {
> -                            ip4_addrs.push("${ip4.addr}")
> -                        };
> -                        string_join(ip4_addrs, ",")
> -                    }
> -                }
> -            }
> -        }
> -    },
> -    proxy_ips != "",
> -    var __match = "arp.op == 1 && arp.tpa == {" ++ proxy_ips ++ "}",
> -    var __actions = i"eth.dst = eth.src; "
> -                     "eth.src = ${rp.networks.ea}; "
> -                     "arp.op = 2; /* ARP reply */ "
> -                     "arp.tha = arp.sha; "
> -                     "arp.sha = ${rp.networks.ea}; "
> -                     "arp.tpa <-> arp.spa; "
> -                     "outport = inport; "
> -                     "flags.loopback = 1; "
> -                     "output;".
> -
> -/* For ND solicitations, we need to listen for both the
> - * unicast IPv6 address and its all-nodes multicast address,
> - * but always respond with the unicast IPv6 address.
> - */
> -for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw},
> -                           .ea = ea, .addr = addr)
> -     if lsp.is_enabled() and
> -        (lsp_is_up(lsp) or lsp.__type == i"router" or lsp.__type == i"localport") and
> -        lsp.__type != i"external" and lsp.__type != i"virtual" and
> -        not sw.is_vlan_transparent)
> -{
> -    var __match = "nd_ns && ip6.dst == {${addr.addr}, ${addr.solicited_node()}} && nd.target == ${addr.addr}" in
> -    var actions = i"${if (lsp.__type == i\"router\") \"nd_na_router\" else \"nd_na\"} { "
> -                   "eth.src = ${ea}; "
> -                   "ip6.src = ${addr.addr}; "
> -                   "nd.target = ${addr.addr}; "
> -                   "nd.tll = ${ea}; "
> -                   "outport = inport; "
> -                   "flags.loopback = 1; "
> -                   "output; "
> -                   "};" in
> -    {
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -             .priority = 50,
> -             .__match = __match.intern(),
> -             .actions = actions,
> -             .io_port = None,
> -             .controller_meter = sw.copp.get(cOPP_ND_NA()),
> -             .stage_hint = stage_hint(lsp._uuid));
> -
> -        /* Do not reply to a solicitation from the port that owns the
> -         * address (otherwise DAD detection will fail). */
> -        Flow(.logical_datapath = sw._uuid,
> -             .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -             .priority = 100,
> -             .__match = i"${__match} && inport == ${json_name}",
> -             .actions = i"next;",
> -             .stage_hint = stage_hint(lsp._uuid),
> -             .io_port = Some{lsp.name},
> -             .controller_meter = None)
> -    }
> -}
> -
> -/* Ingress table ARP_ND_RSP: ARP/ND responder, by default goto next.
> - * (priority 0)*/
> -for (ls in &nb::Logical_Switch) {
> -    Flow(.logical_datapath = ls._uuid,
> -         .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -         .priority = 0,
> -         .__match = i"1",
> -         .actions = i"next;",
> -         .stage_hint = 0,
> -         .io_port = None,
> -         .controller_meter = None)
> -}
> -
> -/* Ingress table ARP_ND_RSP: ARP/ND responder for service monitor source ip.
> - * (priority 110)*/
> -Flow(.logical_datapath = sp.sw._uuid,
> -     .stage = s_SWITCH_IN_ARP_ND_RSP(),
> -     .priority = 110,
> -     .__match = i"arp.tpa == ${svc_mon_src_ip} && arp.op == 1",
> -     .actions = i"eth.dst = eth.src; "
> -                 "eth.src = ${svc_monitor_mac}; "
> -                 "arp.op = 2; /* ARP reply */ "
> -                 "arp.tha = arp.sha; "
> -                 "arp.sha = ${svc_monitor_mac}; "
> -                 "arp.tpa = arp.spa; "
> -                 "arp.spa = ${svc_mon_src_ip}; "
> -                 "outport = inport; "
> -                 "flags.loopback = 1; "
> -                 "output;",
> -     .stage_hint = stage_hint(lbvip.lb._uuid),
> -     .io_port = None,
> -     .controller_meter = None) :-
> -    LBVIP[lbvip],
> -    var lbvipbackend = FlatMap(lbvip.backends),
> -    Some{var svc_monitor} = lbvipbackend.svc_monitor,
> -    sp in &SwitchPort(
> -        .lsp = &nb::Logical_Switch_Port{.name = svc_monitor.port_name}),
> -    var svc_mon_src_ip = svc_monitor.src_ip,
> -    SvcMonitorMac(svc_monitor_mac).
> -
> -function build_dhcpv4_action(
> -    lsp_json_key: string,
> -    dhcpv4_options: Intern<nb::DHCP_Options>,
> -    offer_ip: in_addr,
> -    lsp_options: Map<istring,istring>) : Option<(istring, istring, string)> =
> -{
> -    match (ip_parse_masked(dhcpv4_options.cidr.ival())) {
> -        Left{err} -> {
> -            /* cidr defined is invalid */
> -            None
> -        },
> -        Right{(var host_ip, var mask)} -> {
> -            if (not (offer_ip, host_ip).same_network(mask)) {
> -                /* the offer ip of the logical port doesn't belong to the cidr
> -                 * defined in the DHCPv4 options.
> -                 */
> -                None
> -            } else {
> -                match ((dhcpv4_options.options.get(i"server_id"),
> -                        dhcpv4_options.options.get(i"server_mac"),
> -                        dhcpv4_options.options.get(i"lease_time")))
> -                {
> -                    (Some{var server_ip}, Some{var server_mac}, Some{var lease_time}) -> {
> -                        var options_map = dhcpv4_options.options;
> -
> -                        /* server_mac is not DHCPv4 option, delete it from the smap.
*/ > - options_map.remove(i"server_mac"); > - options_map.insert(i"netmask", i"${mask}"); > - > - match (lsp_options.get(i"hostname")) { > - None -> (), > - Some{port_hostname} -> options_map.insert(i"hostname", port_hostname) > - }; > - > - var options = vec_empty(); > - for (node in options_map) { > - (var k, var v) = node; > - options.push("${k} = ${v}") > - }; > - options.sort(); > - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcp_opts(offerip = ${offer_ip}, " ++ > - options.join(", ") ++ "); next;"; > - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " > - "ip4.src = ${server_ip}; udp.src = 67; " > - "udp.dst = 68; outport = inport; flags.loopback = 1; " > - "output;"; > - > - var ipv4_addr_match = "ip4.src == ${offer_ip} && ip4.dst == {${server_ip}, 255.255.255.255}"; > - Some{(options_action.intern(), response_action, ipv4_addr_match)} > - }, > - _ -> { > - /* "server_id", "server_mac" and "lease_time" should be > - * present in the dhcp_options. */ > - //static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); > - warn("Required DHCPv4 options not defined for lport - ${lsp_json_key}"); > - None > - } > - } > - } > - } > - } > -} > - > -function build_dhcpv6_action( > - lsp_json_key: string, > - dhcpv6_options: Intern<nb::DHCP_Options>, > - offer_ip: in6_addr): Option<(istring, istring)> = > -{ > - match (ipv6_parse_masked(dhcpv6_options.cidr.ival())) { > - Left{err} -> { > - /* cidr defined is invalid */ > - //warn("cidr is invalid - ${err}"); > - None > - }, > - Right{(var host_ip, var mask)} -> { > - if (not (offer_ip, host_ip).same_network(mask)) { > - /* offer_ip doesn't belong to the cidr defined in lport's DHCPv6 > - * options.*/ > - //warn("ip does not belong to cidr"); > - None > - } else { > - /* "server_id" should be the MAC address. 
*/ > - match (dhcpv6_options.options.get(i"server_id")) { > - None -> { > - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); > - None > - }, > - Some{server_mac} -> { > - match (eth_addr_from_string(server_mac.ival())) { > - None -> { > - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); > - None > - }, > - Some{ea} -> { > - /* Get the link local IP of the DHCPv6 server from the server MAC. */ > - var server_ip = ea.to_ipv6_lla().string_mapped(); > - var ia_addr = offer_ip.string_mapped(); > - var options = vec_empty(); > - > - /* Check whether the dhcpv6 options should be configured as stateful. > - * Only reply with ia_addr option for dhcpv6 stateful address mode. */ > - if (not dhcpv6_options.options.get_bool_def(i"dhcpv6_stateless", false)) { > - options.push("ia_addr = ${ia_addr}") > - } else (); > - > - for ((k, v) in dhcpv6_options.options) { > - if (k != i"dhcpv6_stateless") { > - options.push("${k} = ${v}") > - } else () > - }; > - options.sort(); > - > - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcpv6_opts(" ++ > - options.join(", ") ++ > - "); next;"; > - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " > - "ip6.dst = ip6.src; ip6.src = ${server_ip}; udp.src = 547; " > - "udp.dst = 546; outport = inport; flags.loopback = 1; " > - "output;"; > - Some{(options_action.intern(), response_action)} > - } > - } > - } > - } > - } > - } > - } > -} > - > -/* If 'names' has one element, returns json_escape() for it. > - * Otherwise, returns json_escape() of all of its elements inside "{...}". 
> - */ > -function json_escape_vec(names: Vec<string>): string > -{ > - match ((names.len(), names.nth(0))) { > - (1, Some{name}) -> json_escape(name), > - _ -> { > - var json_names = vec_with_capacity(names.len()); > - for (name in names) { > - json_names.push(json_escape(name)); > - }; > - "{" ++ json_names.join(", ") ++ "}" > - } > - } > -} > - > -/* > - * Ordinarily, returns a single match against 'lsp'. > - * > - * If 'lsp' is an external port, returns a match against the localnet port(s) on > - * its switch along with a condition that it only operate if 'lsp' is > - * chassis-resident. This makes sense as a condition for sending DHCP replies > - * to external ports because only one chassis should send such a reply. > - * > - * Returns a prefix and a suffix string. There is no reason for this except > - * that it makes it possible to exactly mimic the format used by northd.c > - * so that text-based comparisons do not show differences. (This fails if > - * there's more than one localnet port since the C version uses multiple flows > - * in that case.) > - */ > -function match_dhcp_input(lsp: Intern<SwitchPort>): (string, string) = > -{ > - if (lsp.lsp.__type == i"external" and not lsp.sw.localnet_ports.is_empty()) { > - ("inport == " ++ json_escape_vec(lsp.sw.localnet_ports.map(|x| x.1.ival())) ++ " && ", > - " && is_chassis_resident(${lsp.json_name})") > - } else { > - ("inport == ${lsp.json_name} && ", "") > - } > -} > - > -/* Logical switch ingress tables DHCP_OPTIONS and DHCP_RESPONSE: DHCP options > - * and response priority 100 flows. */ > -for (lsp in &SwitchPort > - /* Don't add the DHCP flows if the port is not enabled or if the > - * port is a router port. */ > - if (lsp.is_enabled() and lsp.lsp.__type != i"router") > - /* If it's an external port and there is no localnet port > - * and if it doesn't belong to an HA chassis group ignore it. 
*/ > - and (lsp.lsp.__type != i"external" > - or (not lsp.sw.localnet_ports.is_empty() > - and lsp.lsp.ha_chassis_group.is_some()))) > -{ > - for (lps in LogicalSwitchPort(.lport = lsp.lsp._uuid, .lswitch = lsuuid)) { > - var json_key = json_escape(lsp.lsp.name) in > - (var pfx, var sfx) = match_dhcp_input(lsp) in > - { > - /* DHCPv4 options enabled for this port */ > - Some{var dhcpv4_options_uuid} = lsp.lsp.dhcpv4_options in > - { > - for (dhcpv4_options in &nb::DHCP_Options(._uuid = dhcpv4_options_uuid)) { > - for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { > - Some{(var options_action, var response_action, var ipv4_addr_match)} = > - build_dhcpv4_action(json_key, dhcpv4_options, addr.addr, lsp.lsp.options) in > - { > - var __match = > - (pfx ++ "eth.src == ${ea} && " > - "ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && " > - "udp.src == 68 && udp.dst == 67" ++ sfx).intern() > - in > - Flow(.logical_datapath = lsuuid, > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > - .priority = 100, > - .__match = __match, > - .actions = options_action, > - .io_port = None, > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), > - .stage_hint = stage_hint(lsp.lsp._uuid)); > - > - /* Allow ip4.src = OFFER_IP and > - * ip4.dst = {SERVER_IP, 255.255.255.255} for the below > - * cases > - * - When the client wants to renew the IP by sending > - * the DHCPREQUEST to the server ip. > - * - When the client wants to renew the IP by > - * broadcasting the DHCPREQUEST. 
> - */ > - var __match = pfx ++ "eth.src == ${ea} && " > - "${ipv4_addr_match} && udp.src == 68 && udp.dst == 67" ++ sfx in > - Flow(.logical_datapath = lsuuid, > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = options_action, > - .io_port = None, > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), > - .stage_hint = stage_hint(lsp.lsp._uuid)); > - > - /* If REGBIT_DHCP_OPTS_RESULT is set, it means the > - * put_dhcp_opts action is successful. */ > - var __match = pfx ++ "eth.src == ${ea} && " > - "ip4 && udp.src == 68 && udp.dst == 67 && " ++ > - rEGBIT_DHCP_OPTS_RESULT() ++ sfx in > - Flow(.logical_datapath = lsuuid, > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = response_action, > - .stage_hint = stage_hint(lsp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) > - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv4_action > - // returns Some() for at most 1 address in lsp_addrs? Otherwise, simulate this break > - // by computing an aggregate that returns the first element of a group. 
> - //break; > - } > - } > - } > - }; > - > - /* DHCPv6 options enabled for this port */ > - Some{var dhcpv6_options_uuid} = lsp.lsp.dhcpv6_options in > - { > - for (dhcpv6_options in &nb::DHCP_Options(._uuid = dhcpv6_options_uuid)) { > - for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { > - Some{(var options_action, var response_action)} = > - build_dhcpv6_action(json_key, dhcpv6_options, addr.addr) in > - { > - var __match = pfx ++ "eth.src == ${ea}" > - " && ip6.dst == ff02::1:2 && udp.src == 546 &&" > - " udp.dst == 547" ++ sfx in > - { > - Flow(.logical_datapath = lsuuid, > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = options_action, > - .io_port = None, > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV6_OPTS()), > - .stage_hint = stage_hint(lsp.lsp._uuid)); > - > - /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the > - * put_dhcpv6_opts action is successful */ > - Flow(.logical_datapath = lsuuid, > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > - .priority = 100, > - .__match = (__match ++ " && ${rEGBIT_DHCP_OPTS_RESULT()}").intern(), > - .actions = response_action, > - .stage_hint = stage_hint(lsp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) > - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv6_action > - // returns Some() for at most 1 address in lsp_addrs? Otherwise, simulate this break > - // by computing an aggregate that returns the first element of a group. > - //break; > - } > - } > - } > - } > - } > - } > - } > -} > - > -/* Logical switch ingress tables DNS_LOOKUP and DNS_RESPONSE: DNS lookup and > - * response priority 100 flows. 
> - */ > -for (LogicalSwitchHasDNSRecords(ls, true)) > -{ > - Flow(.logical_datapath = ls, > - .stage = s_SWITCH_IN_DNS_LOOKUP(), > - .priority = 100, > - .__match = i"udp.dst == 53", > - .actions = i"${rEGBIT_DNS_LOOKUP_RESULT()} = dns_lookup(); next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - var action = i"eth.dst <-> eth.src; ip4.src <-> ip4.dst; " > - "udp.dst = udp.src; udp.src = 53; outport = inport; " > - "flags.loopback = 1; output;" in > - Flow(.logical_datapath = ls, > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > - .priority = 100, > - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", > - .actions = action, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - var action = i"eth.dst <-> eth.src; ip6.src <-> ip6.dst; " > - "udp.dst = udp.src; udp.src = 53; outport = inport; " > - "flags.loopback = 1; output;" in > - Flow(.logical_datapath = ls, > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > - .priority = 100, > - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", > - .actions = action, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Ingress table DHCP_OPTIONS and DHCP_RESPONSE: DHCP options and response, by > - * default goto next. (priority 0). > - * > - * Ingress table DNS_LOOKUP and DNS_RESPONSE: DNS lookup and response, by > - * default goto next. (priority 0). > - > - * Ingress table EXTERNAL_PORT - External port handling, by default goto next. > - * (priority 0). 
*/ > -for (ls in &nb::Logical_Switch) { > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_DNS_LOOKUP(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 110, > - .__match = i"eth.dst == $svc_monitor_mac", > - .actions = i"handle_svc_check(inport);", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - sw in &Switch(). 
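(Side note for readers skimming the removed rules, not part of the patch: each stage above combines high-priority specific flows with a priority-0 `1` / `next;` catch-all, and within a stage the highest-priority matching flow wins. A minimal Python sketch of that selection rule, with made-up names — `select_flow` and the packet dict are illustrative only, not OVN code:)

```python
# Illustrative sketch of logical-flow priority resolution: among the
# flows of one stage, the highest-priority flow whose match condition
# holds supplies the actions; a priority-0 catch-all is the default.
def select_flow(flows, packet):
    best = None
    for flow in flows:
        if flow["match"](packet):
            if best is None or flow["priority"] > best["priority"]:
                best = flow
    return best["actions"] if best else None

stage = [
    {"priority": 110, "match": lambda p: p.get("arp_tpa") == "169.254.1.1",
     "actions": "arp-reply"},
    {"priority": 0, "match": lambda p: True, "actions": "next;"},
]

assert select_flow(stage, {"arp_tpa": "169.254.1.1"}) == "arp-reply"
assert select_flow(stage, {}) == "next;"
```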
> - > -for (sw in &Switch(._uuid = ls_uuid, .mcast_cfg = mcast_cfg) > - if (mcast_cfg.enabled)) { > - var controller_meter = sw.copp.get(cOPP_IGMP()) in > - for (SwitchMcastFloodRelayPorts(sw, relay_ports)) { > - for (SwitchMcastFloodReportPorts(sw, flood_report_ports)) { > - for (SwitchMcastFloodPorts(sw, flood_ports)) { > - var flood_relay = not relay_ports.is_empty() in > - var flood_reports = not flood_report_ports.is_empty() in > - var flood_static = not flood_ports.is_empty() in > - var igmp_act = { > - if (flood_reports) { > - var mrouter_static = json_escape(mC_MROUTER_STATIC().0); > - i"clone { " > - "outport = ${mrouter_static}; " > - "output; " > - "};igmp;" > - } else { > - i"igmp;" > - } > - } in { > - /* Punt IGMP traffic to controller. */ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 100, > - .__match = i"ip4 && ip.proto == 2", > - .actions = i"${igmp_act}", > - .io_port = None, > - .controller_meter = controller_meter, > - .stage_hint = 0); > - > - /* Punt MLD traffic to controller. */ > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 100, > - .__match = i"mldv1 || mldv2", > - .actions = igmp_act, > - .io_port = None, > - .controller_meter = controller_meter, > - .stage_hint = 0); > - > - /* Flood all IP multicast traffic destined to 224.0.0.X to > - * all ports - RFC 4541, section 2.1.2, item 2. > - */ > - var flood = json_escape(mC_FLOOD().0) in > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 85, > - .__match = i"ip4.mcast && ip4.dst == 224.0.0.0/24", > - .actions = i"outport = ${flood}; output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Flood all IPv6 multicast traffic destined to reserved > - * multicast IPs (RFC 4291, 2.7.1). 
> - */ > - var flood = json_escape(mC_FLOOD().0) in > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 85, > - .__match = i"ip6.mcast_flood", > - .actions = i"outport = ${flood}; output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Forward unregistered IP multicast to routers with relay > - * enabled and to any ports configured to flood IP > - * multicast traffic. If configured to flood unregistered > - * traffic this will be handled by the L2 multicast flow. > - */ > - if (not mcast_cfg.flood_unreg) { > - var relay_act = { > - if (flood_relay) { > - var rtr_flood = json_escape(mC_MROUTER_FLOOD().0); > - "clone { " > - "outport = ${rtr_flood}; " > - "output; " > - "}; " > - } else { > - "" > - } > - } in > - var static_act = { > - if (flood_static) { > - var mc_static = json_escape(mC_STATIC().0); > - "outport =${mc_static}; output;" > - } else { > - "" > - } > - } in > - var drop_act = { > - if (not flood_relay and not flood_static) { > - "drop;" > - } else { > - "" > - } > - } in > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 80, > - .__match = i"ip4.mcast || ip6.mcast", > - .actions = i"${relay_act}${static_act}${drop_act}", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - } > - } > - } > - } > - } > -} > - > -/* Ingress table L2_LKUP: Add IP multicast flows learnt from IGMP/MLD (priority > - * 90). */ > -for (IgmpSwitchMulticastGroup(.address = address, .switch = sw)) { > - /* RFC 4541, section 2.1.2, item 2: Skip groups in the 224.0.0.X > - * range. > - * > - * RFC 4291, section 2.7.1: Skip groups that correspond to all > - * hosts. 
> - */ > - Some{var ip} = ip46_parse(address.ival()) in > - (var skip_address) = match (ip) { > - IPv4{ipv4} -> ipv4.is_local_multicast(), > - IPv6{ipv6} -> ipv6.is_all_hosts() > - } in > - var ipX = ip.ipX() in > - for (SwitchMcastFloodRelayPorts(sw, relay_ports) if not skip_address) { > - for (SwitchMcastFloodPorts(sw, flood_ports)) { > - var flood_relay = not relay_ports.is_empty() in > - var flood_static = not flood_ports.is_empty() in > - var mc_rtr_flood = json_escape(mC_MROUTER_FLOOD().0) in > - var mc_static = json_escape(mC_STATIC().0) in > - var relay_act = { > - if (flood_relay) { > - "clone { " > - "outport = ${mc_rtr_flood}; output; " > - "};" > - } else { > - "" > - } > - } in > - var static_act = { > - if (flood_static) { > - "clone { " > - "outport =${mc_static}; " > - "output; " > - "};" > - } else { > - "" > - } > - } in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 90, > - .__match = i"eth.mcast && ${ipX} && ${ipX}.dst == ${address}", > - .actions = > - i"${relay_act} ${static_act} outport = \"${address}\"; " > - "output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -/* Table EXTERNAL_PORT: External port. Drop ARP request for router ips from > - * external ports on chassis not binding those ports. This makes the router > - * pipeline to be run only on the chassis binding the external ports. > - * > - * For an external port X on logical switch LS, if X is not resident on this > - * chassis, drop ARP requests arriving on localnet ports from X's Ethernet > - * address, if the ARP request is asking to translate the IP address of a > - * router port on LS. 
*/ > -Flow(.logical_datapath = sp.sw._uuid, > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > - .priority = 100, > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > - "eth.src == ${lp_addr.ea} && " > - "!is_chassis_resident(${sp.json_name}) && " > - "arp.tpa == ${rp_addr.addr} && arp.op == 1"), > - .actions = i"drop;", > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = Some{localnet_port.1}, > - .controller_meter = None) :- > - sp in &SwitchPort(), > - sp.lsp.__type == i"external", > - var localnet_port = FlatMap(sp.sw.localnet_ports), > - var lp_addr = FlatMap(sp.static_addresses), > - rp in &SwitchPort(.sw = sp.sw), > - rp.lsp.__type == i"router", > - SwitchPortIPv4Address(.port = rp, .addr = rp_addr). > -Flow(.logical_datapath = sp.sw._uuid, > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > - .priority = 100, > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > - "eth.src == ${lp_addr.ea} && " > - "!is_chassis_resident(${sp.json_name}) && " > - "nd_ns && ip6.dst == {${rp_addr.addr}, ${rp_addr.solicited_node()}} && " > - "nd.target == ${rp_addr.addr}"), > - .actions = i"drop;", > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = Some{localnet_port.1}, > - .controller_meter = None) :- > - sp in &SwitchPort(), > - sp.lsp.__type == i"external", > - var localnet_port = FlatMap(sp.sw.localnet_ports), > - var lp_addr = FlatMap(sp.static_addresses), > - rp in &SwitchPort(.sw = sp.sw), > - rp.lsp.__type == i"router", > - SwitchPortIPv6Address(.port = rp, .addr = rp_addr). 
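(Aside, not part of the patch: the `nd_ns` matches above compare against both the unicast address and `${rp_addr.solicited_node()}`. For reference, a sketch of how a solicited-node multicast address is derived per RFC 4291 — `solicited_node` here is a hypothetical Python helper, not the DDlog function:)

```python
import ipaddress

# RFC 4291: the solicited-node multicast address is ff02::1:ff00:0/104
# with the low 24 bits taken from the unicast address.
SN_BASE = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(addr: str) -> str:
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return str(ipaddress.IPv6Address(SN_BASE | low24))

assert solicited_node("fe80::1") == "ff02::1:ff00:1"
assert solicited_node("2001:db8::abcd:1234") == "ff02::1:ffcd:1234"
```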
> -Flow(.logical_datapath = sp.sw._uuid, > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > - .priority = 100, > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > - "eth.src == ${lp_addr.ea} && " > - "eth.dst == ${ea} && " > - "!is_chassis_resident(${sp.json_name})"), > - .actions = i"drop;", > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = Some{localnet_port.1}, > - .controller_meter = None) :- > - sp in &SwitchPort(), > - sp.lsp.__type == i"external", > - var localnet_port = FlatMap(sp.sw.localnet_ports), > - var lp_addr = FlatMap(sp.static_addresses), > - rp in &SwitchPort(.sw = sp.sw), > - rp.lsp.__type == i"router", > - SwitchPortAddresses(.port = rp, .addrs = LPortAddress{.ea = ea}). > - > -/* Ingress table L2_LKUP: Destination lookup, broadcast and multicast handling > - * (priority 100). */ > -for (ls in &nb::Logical_Switch) { > - var mc_flood = json_escape(mC_FLOOD().0) in > - Flow(.logical_datapath = ls._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 70, > - .__match = i"eth.mcast", > - .actions = i"outport = ${mc_flood}; output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Ingress table L2_LKUP: Destination lookup, unicast handling (priority 50). > -*/ > -for (SwitchPortStaticAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > - .addrs = addrs) > - if lsp.__type != i"external") { > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 50, > - .__match = i"eth.dst == ${addrs.ea}", > - .actions = i"outport = ${json_name}; output;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = None, > - .controller_meter = None) > -} > - > -/* > - * Ingress table L2_LKUP: Flows that flood self originated ARP/ND packets in the > - * switching domain. > - */ > -/* Self originated ARP requests/ND need to be flooded to the L2 domain > - * (except on router ports). 
Determine that packets are self originated > - * by also matching on source MAC. Matching on ingress port is not > - * reliable in case this is a VLAN-backed network. > - * Priority: 75. > - */ > - > -/* Returns 'true' if the IP 'addr' is on the same subnet with one of the > - * IPs configured on the router port. > - */ > -function lrouter_port_ip_reachable(rp: Intern<RouterPort>, addr: v46_ip): bool { > - match (addr) { > - IPv4{ipv4} -> { > - for (na in rp.networks.ipv4_addrs) { > - if ((ipv4, na.addr).same_network(na.netmask())) { > - return true > - } > - } > - }, > - IPv6{ipv6} -> { > - for (na in rp.networks.ipv6_addrs) { > - if ((ipv6, na.addr).same_network(na.netmask())) { > - return true > - } > - } > - } > - }; > - false > -} > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 75, > - .__match = __match, > - .actions = actions, > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - sp in &SwitchPort(.sw = sw@&Switch{.has_non_router_port = true}, .peer = Some{rp}), > - rp.is_enabled(), > - var eth_src_set = { > - var eth_src_set = set_singleton(i"${rp.networks.ea}"); > - for (nat in rp.router.nats) { > - match (nat.nat.external_mac) { > - Some{mac} -> > - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { > - eth_src_set.insert(mac) > - } else (), > - _ -> () > - } > - }; > - eth_src_set > - }, > - var eth_src = "{" ++ eth_src_set.to_vec().join(", ") ++ "}", > - var __match = i"eth.src == ${eth_src} && (arp.op == 1 || nd_ns)", > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > - var actions = i"outport = ${mc_flood_l2}; output;". > - > -/* Forward ARP requests for owned IP addresses (L3, VIP, NAT) only to this > - * router port. > - * Priority: 80. 
> - */ > -function get_arp_forward_ips(rp: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>): > - (Set<istring>, Set<istring>, Set<istring>, Set<istring>) = > -{ > - var reachable_ips_v4 = set_empty(); > - var reachable_ips_v6 = set_empty(); > - var unreachable_ips_v4 = set_empty(); > - var unreachable_ips_v6 = set_empty(); > - > - (var lb_ips_v4, var lb_ips_v6) > - = get_router_load_balancer_ips(lbips, false); > - for (a in lb_ips_v4) { > - /* Check if the ovn port has a network configured on which we could > - * expect ARP requests for the LB VIP. > - */ > - match (ip_parse(a.ival())) { > - Some{ipv4} -> if (lrouter_port_ip_reachable(rp, IPv4{ipv4})) { > - reachable_ips_v4.insert(a) > - } else { > - unreachable_ips_v4.insert(a) > - }, > - _ -> () > - } > - }; > - for (a in lb_ips_v6) { > - /* Check if the ovn port has a network configured on which we could > - * expect NS requests for the LB VIP. > - */ > - match (ipv6_parse(a.ival())) { > - Some{ipv6} -> if (lrouter_port_ip_reachable(rp, IPv6{ipv6})) { > - reachable_ips_v6.insert(a) > - } else { > - unreachable_ips_v6.insert(a) > - }, > - _ -> () > - } > - }; > - > - for (nat in rp.router.nats) { > - if (nat.nat.__type != i"snat") { > - /* Check if the ovn port has a network configured on which we could > - * expect ARP requests/NS for the DNAT external_ip. 
> - */ > - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { > - match (nat.external_ip) { > - IPv4{_} -> reachable_ips_v4.insert(nat.nat.external_ip), > - IPv6{_} -> reachable_ips_v6.insert(nat.nat.external_ip) > - } > - } else { > - match (nat.external_ip) { > - IPv4{_} -> unreachable_ips_v4.insert(nat.nat.external_ip), > - IPv6{_} -> unreachable_ips_v6.insert(nat.nat.external_ip), > - } > - } > - } > - }; > - > - for (a in rp.networks.ipv4_addrs) { > - reachable_ips_v4.insert(i"${a.addr}") > - }; > - for (a in rp.networks.ipv6_addrs) { > - reachable_ips_v6.insert(i"${a.addr}") > - }; > - > - (reachable_ips_v4, reachable_ips_v6, unreachable_ips_v4, unreachable_ips_v6) > -} > - > -relation &SwitchPortARPForwards( > - port: Intern<SwitchPort>, > - reachable_ips_v4: Set<istring>, > - reachable_ips_v6: Set<istring>, > - unreachable_ips_v4: Set<istring>, > - unreachable_ips_v6: Set<istring> > -) > - > -&SwitchPortARPForwards(.port = port, > - .reachable_ips_v4 = reachable_ips_v4, > - .reachable_ips_v6 = reachable_ips_v6, > - .unreachable_ips_v4 = unreachable_ips_v4, > - .unreachable_ips_v6 = unreachable_ips_v6) :- > - port in &SwitchPort(.peer = Some{rp@&RouterPort{.enabled = true}}), > - lbips in &LogicalRouterLBIPs(.lr = rp.router._uuid), > - (var reachable_ips_v4, var reachable_ips_v6, var unreachable_ips_v4, var unreachable_ips_v6) = get_arp_forward_ips(rp, lbips). > - > -/* Packets received from VXLAN tunnels have already been through the > - * router pipeline so we should skip them. Normally this is done by the > - * multicast_group implementation (VXLAN packets skip table 32 which > - * delivers to patch ports) but we're bypassing multicast_groups. > - * (This is why we match against fLAGBIT_NOT_VXLAN() here.) 
> - */ > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 80, > - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", > - .actions = if (sw.has_non_router_port) { > - i"clone {outport = ${sp.json_name}; output; }; " > - "outport = ${mc_flood_l2}; output;" > - } else { > - i"outport = ${sp.json_name}; output;" > - }, > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v4 = ips_v4), > - var ipv4 = FlatMap(ips_v4). > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 80, > - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", > - .actions = if (sw.has_non_router_port) { > - i"clone {outport = ${sp.json_name}; output; }; " > - "outport = ${mc_flood_l2}; output;" > - } else { > - i"outport = ${sp.json_name}; output;" > - }, > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v6 = ips_v6), > - var ipv6 = FlatMap(ips_v6). > - > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 90, > - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v4 = ips_v4), > - var ipv4 = FlatMap(ips_v4). 
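(Aside, not part of the patch: the rules above split the ARP/ND target IPs into "reachable" ones, on a subnet of the peer router port (forwarded only to that port, priority 80), and "unreachable" ones (flooded, priority 90). A sketch of that classification using the Python `ipaddress` module — `classify_vips` is an illustrative name, not OVN code:)

```python
import ipaddress

def classify_vips(port_networks, vips):
    """Split VIPs into those on one of the router port's subnets
    (ARP/ND forwarded to that port) and the rest (flooded)."""
    nets = [ipaddress.ip_network(n, strict=False) for n in port_networks]
    reachable, unreachable = [], []
    for vip in vips:
        ip = ipaddress.ip_address(vip)
        (reachable if any(ip in net for net in nets)
         else unreachable).append(vip)
    return reachable, unreachable

r, u = classify_vips(["10.0.0.1/24"], ["10.0.0.50", "192.168.1.10"])
assert r == ["10.0.0.50"]
assert u == ["192.168.1.10"]
```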
> -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 90, > - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", > - .actions = actions, > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v6 = ips_v6), > - var ipv6 = FlatMap(ips_v6). > - > -for (SwitchPortNewDynamicAddress(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > - .address = Some{addrs}) > - if lsp.__type != i"external") { > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 50, > - .__match = i"eth.dst == ${addrs.ea}", > - .actions = i"outport = ${json_name}; output;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = None, > - .controller_meter = None) > -} > - > -for (&SwitchPort(.lsp = lsp, > - .json_name = json_name, > - .sw = sw, > - .peer = Some{&RouterPort{.lrp = lrp, > - .is_redirect = is_redirect, > - .router = &Router{._uuid = lr_uuid, > - .l3dgw_ports = l3dgw_ports}}}) > - if (lsp.addresses.contains(i"router") and lsp.__type != i"external")) > -{ > - Some{var mac} = scan_eth_addr(lrp.mac.ival()) in { > - var add_chassis_resident_check = > - not sw.localnet_ports.is_empty() and > - (/* The peer of this port represents a distributed > - * gateway port. The destination lookup flow for the > - * router's distributed gateway port MAC address should > - * only be programmed on the "redirect-chassis". */ > - is_redirect or > - /* Check if the option 'reside-on-redirect-chassis' > - * is set to true on the peer port. If set to true > - * and if the logical switch has a localnet port, it > - * means the router pipeline for the packets from > - * this logical switch should be run on the chassis > - * hosting the gateway port. 
> - */ > - lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)) in > - var __match = if (add_chassis_resident_check) { > - var redirect_port_name = if (is_redirect) { > - json_escape(chassis_redirect_name(lrp.name)) > - } else { > - match (l3dgw_ports.nth(0)) { > - Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name)), > - None -> "" > - } > - }; > - /* The destination lookup flow for the router's > - * distributed gateway port MAC address should only be > - * programmed on the "redirect-chassis". */ > - i"eth.dst == ${mac} && is_chassis_resident(${redirect_port_name})" > - } else { > - i"eth.dst == ${mac}" > - } in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 50, > - .__match = __match, > - .actions = i"outport = ${json_name}; output;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = None, > - .controller_meter = None); > - > - /* Add ethernet addresses specified in NAT rules on > - * distributed logical routers. */ > - if (is_redirect) { > - for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) { > - if (nat.nat.__type == i"dnat_and_snat") { > - Some{var lport} = nat.nat.logical_port in > - Some{var emac} = nat.nat.external_mac in > - Some{var nat_mac} = eth_addr_from_string(emac.ival()) in > - var __match = i"eth.dst == ${nat_mac} && is_chassis_resident(${json_escape(lport)})" in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 50, > - .__match = __match, > - .actions = i"outport = ${json_name}; output;", > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - } > - } > - } > -} > -// FIXME: do we care about this? 
> -/* } else { > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); > - > - VLOG_INFO_RL(&rl, > - "%s: invalid syntax '%s' in addresses column", > - op->nbsp->name, op->nbsp->addresses[i]); > - }*/ > - > -/* Ingress table L2_LKUP and L2_UNKNOWN: Destination lookup for unknown MACs (priority 0). */ > -for (sw in &Switch(._uuid = ls_uuid)) { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_LKUP(), > - .priority = 0, > - .__match = i"1", > - .actions = i"outport = get_fdb(eth.dst); next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_UNKNOWN(), > - .priority = 50, > - .__match = i"outport == \"none\"", > - .actions = if (sw.has_unknown_ports) { > - var mc_unknown = json_escape(mC_UNKNOWN().0); > - i"outport = ${mc_unknown}; output;" > - } else { > - i"drop;" > - }, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_L2_UNKNOWN(), > - .priority = 0, > - .__match = i"1", > - .actions = i"output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Egress tables PORT_SEC_IP: Egress port security - IP (priority 0) > - * Egress table PORT_SEC_L2: Egress port security L2 - multicast/broadcast (priority 100). 
*/ > -for (&Switch(._uuid = ls_uuid)) { > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > - .priority = 100, > - .__match = i"eth.mcast", > - .actions = i"output;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > - .priority = 100, > - .__match = i"inport == ${sp.json_name}", > - .actions = i"${rEGBIT_LKUP_FDB()} = lookup_fdb(inport, eth.src); next;", > - .stage_hint = stage_hint(lsp_uuid), > - .io_port = Some{sp.lsp.name}, > - .controller_meter = None), > -Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > - .priority = 100, > - .__match = i"inport == ${sp.json_name} && ${rEGBIT_LKUP_FDB()} == 0", > - .actions = i"put_fdb(inport, eth.src); next;", > - .stage_hint = stage_hint(lsp_uuid), > - .io_port = Some{sp.lsp.name}, > - .controller_meter = None) :- > - LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid), > - sp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .__type = i""}, > - .ps_addresses = vec_empty()). > - > -Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None), > -Flow(.logical_datapath = ls_uuid, > - .stage = s_SWITCH_IN_PUT_FDB(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - &Switch(._uuid = ls_uuid). > - > -/* Egress table PORT_SEC_IP: Egress port security - IP (priorities 90 and 80) > - * if port security enabled. > - * > - * Egress table PORT_SEC_L2: Egress port security - L2 (priorities 50 and 150). 
> - * > - * Priority 50 rules implement port security for enabled logical port. > - * > - * Priority 150 rules drop packets to disabled logical ports, so that they > - * don't even receive multicast or broadcast packets. */ > -Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > - .priority = 50, > - .__match = __match, > - .actions = i"${queue_action}output;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) :- > - &SwitchPort(.sw = sw, .lsp = lsp, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses), > - lsp.is_enabled(), > - lsp.__type != i"external", > - var __match = if (ps_eth_addresses.is_empty()) { > - i"outport == ${json_name}" > - } else { > - i"outport == ${json_name} && eth.dst == {${ps_eth_addresses.join(\" \")}}" > - }, > - pbinding in sb::Out_Port_Binding(.logical_port = lsp.name), > - var queue_action = match ((lsp.__type.ival(), > - pbinding.options.get(i"qdisc_queue_id"))) { > - ("localnet", Some{queue_id}) -> "set_queue(${queue_id});", > - _ -> "" > - }. > - > -for (&SwitchPort(.lsp = lsp, .json_name = json_name, .sw = sw)) { > - if (not lsp.is_enabled() and lsp.__type != i"external") { > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > - .priority = 150, > - .__match = i"outport == ${json_name}", > - .actions = i"drop;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) > - } > -} > - > -for (SwitchPortPSAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > - .ps_addrs = ps) > - if (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) > - and lsp.__type != i"external") > -{ > - if (ps.ipv4_addrs.len() > 0) { > - var addrs = { > - var addrs = vec_empty(); > - for (addr in ps.ipv4_addrs) { > - /* When the netmask is applied, if the host portion is > - * non-zero, the host can only use the specified > - * address. 
If zero, the host is allowed to use any > - * address in the subnet. > - */ > - addrs.push(addr.match_host_or_network()); > - if (addr.plen < 32 and not addr.host().is_zero()) { > - addrs.push("${addr.bcast()}") > - } > - }; > - addrs > - } in > - var __match = > - "outport == ${json_name} && eth.dst == ${ps.ea} && ip4.dst == {255.255.255.255, 224.0.0.0/4, " ++ > - addrs.join(", ") ++ "}" in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > - .priority = 90, > - .__match = __match.intern(), > - .actions = i"next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) > - }; > - if (ps.ipv6_addrs.len() > 0) { > - var __match = "outport == ${json_name} && eth.dst == ${ps.ea}" ++ > - build_port_security_ipv6_flow(Egress, ps.ea, ps.ipv6_addrs) in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > - .priority = 90, > - .__match = __match.intern(), > - .actions = i"next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) > - }; > - var __match = i"outport == ${json_name} && eth.dst == ${ps.ea} && ip" in > - Flow(.logical_datapath = sw._uuid, > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > - .priority = 80, > - .__match = __match, > - .actions = i"drop;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = Some{lsp.name}, > - .controller_meter = None) > -} > - > -/* Logical router ingress table ADMISSION: Admission control framework. */ > -for (&Router(._uuid = lr_uuid)) { > - /* Logical VLANs not supported. > - * Broadcast/multicast source address is invalid. */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ADMISSION(), > - .priority = 100, > - .__match = i"vlan.present || eth.src[40]", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Logical router ingress table ADMISSION: match (priority 50). 
*/ > -for (&RouterPort(.lrp = lrp, > - .json_name = json_name, > - .networks = lrp_networks, > - .router = router, > - .is_redirect = is_redirect) > - /* Drop packets from disabled logical ports (since logical flow > - * tables are default-drop). */ > - if lrp.is_enabled()) > -{ > - //if (op->derived) { > - // /* No ingress packets should be received on a chassisredirect > - // * port. */ > - // continue; > - //} > - > - /* Store the ethernet address of the port receiving the packet. > - * This will save us from having to match on inport further down in > - * the pipeline. > - */ > - var gw_mtu = lrp.options.get_int_def(i"gateway_mtu", 0) in > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN() in > - var actions = if (gw_mtu > 0) { > - "${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " > - } else { > - "" > - } ++ "${rEG_INPORT_ETH_ADDR()} = ${lrp_networks.ea}; next;" in { > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ADMISSION(), > - .priority = 50, > - .__match = i"eth.mcast && inport == ${json_name}", > - .actions = actions.intern(), > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None); > - > - var __match = > - "eth.dst == ${lrp_networks.ea} && inport == ${json_name}" ++ > - if is_redirect { > - /* Traffic with eth.dst = l3dgw_port->lrp_networks.ea > - * should only be received on the "redirect-chassis". */ > - " && is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})" > - } else { "" } in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ADMISSION(), > - .priority = 50, > - .__match = __match.intern(), > - .actions = actions.intern(), > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > -} > - > - > -/* Logical router ingress table LOOKUP_NEIGHBOR and > - * table LEARN_NEIGHBOR. */ > -/* Learn MAC bindings from ARP/IPv6 ND. 
> - * > - * For ARP packets, table LOOKUP_NEIGHBOR does a lookup for the > - * (arp.spa, arp.sha) in the mac binding table using the 'lookup_arp' > - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_RESULT bit. > - * If "always_learn_from_arp_request" is set to false, it will also > - * lookup for the (arp.spa) in the mac binding table using the > - * "lookup_arp_ip" action for ARP request packets, and stores the > - * result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit; or set that bit > - * to "1" directly for ARP response packets. > - * > - * For IPv6 ND NA packets, table LOOKUP_NEIGHBOR does a lookup > - * for the (nd.target, nd.tll) in the mac binding table using the > - * 'lookup_nd' action and stores the result in > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If > - * "always_learn_from_arp_request" is set to false, > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit is set. > - * > - * For IPv6 ND NS packets, table LOOKUP_NEIGHBOR does a lookup > - * for the (ip6.src, nd.sll) in the mac binding table using the > - * 'lookup_nd' action and stores the result in > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If > - * "always_learn_from_arp_request" is set to false, it will also lookup > - * for the (ip6.src) in the mac binding table using the "lookup_nd_ip" > - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT > - * bit. > - * > - * Table LEARN_NEIGHBOR learns the mac-binding using the action > - * - 'put_arp/put_nd'. Learning mac-binding is skipped if > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit is set or > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT is not set. > - * > - * */ > - > -/* Flows for LOOKUP_NEIGHBOR. 
*/ > -for (&Router(._uuid = lr_uuid, > - .learn_from_arp_request = learn_from_arp_request, > - .copp = copp)) > -var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in > -var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in > -{ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 100, > - .__match = i"arp.op == 2", > - .actions = > - ("${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ > - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ > - "next;").intern(), > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 100, > - .__match = i"nd_na", > - .actions = > - ("${rLNR} = lookup_nd(inport, nd.target, nd.tll); " ++ > - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ > - "next;").intern(), > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 100, > - .__match = i"nd_ns", > - .actions = > - ("${rLNR} = lookup_nd(inport, ip6.src, nd.sll); " ++ > - { if (learn_from_arp_request) "" else > - "${rLNIR} = lookup_nd_ip(inport, ip6.src); " } ++ > - "next;").intern(), > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* For other packet types, we can skip neighbor learning. > - * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 0, > - .__match = i"1", > - .actions = i"${rLNR} = 1; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Flows for LEARN_NEIGHBOR. */ > - /* Skip Neighbor learning if not required. 
*/ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > - .priority = 100, > - .__match = > - ("${rLNR} == 1" ++ > - { if (learn_from_arp_request) "" else " || ${rLNIR} == 0" }).intern(), > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > - .priority = 90, > - .__match = i"arp", > - .actions = i"put_arp(inport, arp.spa, arp.sha); next;", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ARP()), > - .stage_hint = 0); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > - .priority = 90, > - .__match = i"nd_na", > - .actions = i"put_nd(inport, nd.target, nd.tll); next;", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ND_NA()), > - .stage_hint = 0); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > - .priority = 90, > - .__match = i"nd_ns", > - .actions = i"put_nd(inport, ip6.src, nd.sll); next;", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ND_NS()), > - .stage_hint = 0) > -} > - > -/* Check if we need to learn mac-binding from ARP requests. */ > -for (RouterPortNetworksIPv4Addr(rp@&RouterPort{.router = router}, addr)) { > - var chassis_residence = match (rp.is_redirect) { > - true -> " && is_chassis_resident(${json_escape(chassis_redirect_name(rp.lrp.name))})", > - false -> "" > - } in > - var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in > - var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in > - var match0 = "inport == ${rp.json_name} && " > - "arp.spa == ${addr.match_network()}" in > - var match1 = "arp.op == 1" ++ chassis_residence in > - var learn_from_arp_request = router.learn_from_arp_request in { > - if (not learn_from_arp_request) { > - /* ARP request to this address should always get learned, > - * so add a priority-110 flow to set > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT to 1. 
*/ > - var __match = [match0, "arp.tpa == ${addr.addr}", match1] in > - var actions = i"${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " > - "${rLNIR} = 1; " > - "next;" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 110, > - .__match = __match.join(" && ").intern(), > - .actions = actions, > - .stage_hint = stage_hint(rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - var actions = "${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ > - { if (learn_from_arp_request) "" else > - "${rLNIR} = lookup_arp_ip(inport, arp.spa); " } ++ > - "next;" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > - .priority = 100, > - .__match = i"${match0} && ${match1}", > - .actions = actions.intern(), > - .stage_hint = stage_hint(rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > -} > - > - > -/* Logical router ingress table IP_INPUT: IP Input. */ > -for (router in &Router(._uuid = lr_uuid, .mcast_cfg = mcast_cfg)) { > - /* L3 admission control: drop multicast and broadcast source, localhost > - * source or destination, and zero network source or destination > - * (priority 100). */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 100, > - .__match = i"ip4.src_mcast ||" > - "ip4.src == 255.255.255.255 || " > - "ip4.src == 127.0.0.0/8 || " > - "ip4.dst == 127.0.0.0/8 || " > - "ip4.src == 0.0.0.0/8 || " > - "ip4.dst == 0.0.0.0/8", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Drop ARP packets (priority 85). ARP request packets for router's own > - * IPs are handled with priority-90 flows. > - * Drop IPv6 ND packets (priority 85). ND NA packets for router's own > - * IPs are handled with priority-90 flows. 
> - */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 85, > - .__match = i"arp || nd", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Allow IPv6 multicast traffic that's supposed to reach the > - * router pipeline (e.g., router solicitations). > - */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 84, > - .__match = i"nd_rs || nd_ra", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Drop other reserved multicast. */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 83, > - .__match = i"ip6.mcast_rsvd", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Allow other multicast if relay enabled (priority 82). */ > - var mcast_action = { if (mcast_cfg.relay) { i"next;" } else { i"drop;" } } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 82, > - .__match = i"ip4.mcast || ip6.mcast", > - .actions = mcast_action, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Drop Ethernet local broadcast. By definition this traffic should > - * not be forwarded.*/ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 50, > - .__match = i"eth.bcast", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* TTL discard */ > - Flow( > - .logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 30, > - .__match = i"ip4 && ip.ttl == {0, 1}", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - /* Pass other traffic not already handled to the next table for > - * routing. 
*/ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -function format_v4_networks(networks: lport_addresses, add_bcast: bool): string = > -{ > - var addrs = vec_empty(); > - for (addr in networks.ipv4_addrs) { > - addrs.push("${addr.addr}"); > - if (add_bcast) { > - addrs.push("${addr.bcast()}") > - } else () > - }; > - if (addrs.len() == 1) { > - addrs.join(", ") > - } else { > - "{" ++ addrs.join(", ") ++ "}" > - } > -} > - > -function format_v6_networks(networks: lport_addresses): string = > -{ > - var addrs = vec_empty(); > - for (addr in networks.ipv6_addrs) { > - addrs.push("${addr.addr}") > - }; > - if (addrs.len() == 1) { > - addrs.join(", ") > - } else { > - "{" ++ addrs.join(", ") ++ "}" > - } > -} > - > -/* The following relation is used in ARP reply flow generation to determine whether > - * the is_chassis_resident check must be added to the flow. > - */ > -relation AddChassisResidentCheck_(lrp: uuid, add_check: bool) > - > -AddChassisResidentCheck_(lrp._uuid, res) :- > - &SwitchPort(.peer = Some{&RouterPort{.lrp = lrp, .router = router, .is_redirect = is_redirect}}, > - .sw = sw), > - not router.l3dgw_ports.is_empty(), > - not sw.localnet_ports.is_empty(), > - var res = if (is_redirect) { > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea > - * should only be sent from the "redirect-chassis", so that > - * upstream MAC learning points to the "redirect-chassis". > - * Also need to avoid generation of multiple ARP responses > - * from different chassis. */ > - true > - } else { > - /* Check if the option 'reside-on-redirect-chassis' > - * is set to true on the router port. 
If set to true > - * and if peer's logical switch has a localnet port, it > - * means the router pipeline for the packets from > - * peer's logical switch is be run on the chassis > - * hosting the gateway port and it should reply to the > - * ARP requests for the router port IPs. > - */ > - lrp.options.get_bool_def(i"reside-on-redirect-chassis", false) > - }. > - > - > -relation AddChassisResidentCheck(lrp: uuid, add_check: bool) > - > -AddChassisResidentCheck(lrp, add_check) :- > - AddChassisResidentCheck_(lrp, add_check). > - > -AddChassisResidentCheck(lrp, false) :- > - &nb::Logical_Router_Port(._uuid = lrp), > - not AddChassisResidentCheck_(lrp, _). > - > - > -/* Logical router ingress table IP_INPUT: IP Input for IPv4. */ > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp) > - if (not networks.ipv4_addrs.is_empty())) > -{ > - /* L3 admission control: drop packets that originate from an > - * IPv4 address owned by the router or a broadcast address > - * known to the router (priority 100). */ > - var __match = "ip4.src == " ++ > - format_v4_networks(networks, true) ++ > - " && ${rEGBIT_EGRESS_LOOPBACK()} == 0" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = i"drop;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None); > - > - /* ICMP echo reply. These flows reply to ICMP echo requests > - * received for the router's IP address. Since packets only > - * get here as part of the logical router datapath, the inport > - * (i.e. the incoming locally attached net) does not matter. 
> - * The ip.ttl also does not matter (RFC1812 section 4.2.2.9) */ > - var __match = "ip4.dst == " ++ > - format_v4_networks(networks, false) ++ > - " && icmp4.type == 8 && icmp4.code == 0" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 90, > - .__match = __match.intern(), > - .actions = i"ip4.dst <-> ip4.src; " > - "ip.ttl = 255; " > - "icmp4.type = 0; " > - "flags.loopback = 1; " > - "next; ", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Priority-90-92 flows handle ARP requests and ND packets. Most are > - * per logical port but DNAT addresses can be handled per datapath > - * for non gateway router ports. > - * > - * Priority 91 and 92 flows are added for each gateway router > - * port to handle the special cases. In case we get the packet > - * on a regular port, just reply with the port's ETH address. > - */ > -LogicalRouterNatArpNdFlow(router, nat) :- > - router in &Router(._uuid = lr), > - LogicalRouterNAT(.lr = lr, .nat = nat@NAT{.nat = &nb::NAT{.__type = __type}}), > - /* Skip SNAT entries for now, we handle unique SNAT IPs separately > - * below. > - */ > - __type != i"snat". > -/* Now handle SNAT entries too, one per unique SNAT IP. */ > -LogicalRouterNatArpNdFlow(router, nat) :- > - router in &Router(.snat_ips = snat_ips), > - var snat_ip = FlatMap(snat_ips), > - (var ip, var nats) = snat_ip, > - Some{var nat} = nats.nth(0). > - > -relation LogicalRouterNatArpNdFlow(router: Intern<Router>, nat: NAT) > -LogicalRouterArpNdFlow(router, nat, None, rEG_INPORT_ETH_ADDR(), None, false, 90) :- > - LogicalRouterNatArpNdFlow(router, nat). > - > -/* ARP / ND handling for external IP addresses. > - * > - * DNAT and SNAT IP addresses are external IP addresses that need ARP > - * handling. > - * > - * These are already taken care globally, per router. The only > - * exception is on the l3dgw_port where we might need to use a > - * different ETH address. 
> - */ > -LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :- > - router in &Router(._uuid = lr_uuid, .l3dgw_ports = l3dgw_ports), > - Some {var l3dgw_port} = l3dgw_ports.nth(0), > - LogicalRouterNAT(lr_uuid, nat), > - /* Skip SNAT entries for now, we handle unique SNAT IPs separately > - * below. > - */ > - nat.nat.__type != i"snat". > -/* Now handle SNAT entries too, one per unique SNAT IP. */ > -LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :- > - router in &Router(.l3dgw_ports = l3dgw_ports, .snat_ips = snat_ips), > - Some {var l3dgw_port} = l3dgw_ports.nth(0), > - var snat_ip = FlatMap(snat_ips), > - (var ip, var nats) = snat_ip, > - Some{var nat} = nats.nth(0). > - > -/* Respond to ARP/NS requests on the chassis that binds the gw > - * port. Drop the ARP/NS requests on other chassis. > - */ > -relation LogicalRouterPortNatArpNdFlow(router: Intern<Router>, nat: NAT, lrp: Intern<nb::Logical_Router_Port>) > -LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, Some{extra_match}, false, 92), > -LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, None, true, 91) :- > - LogicalRouterPortNatArpNdFlow(router, nat, lrp), > - (var mac, var extra_match) = match ((nat.external_mac, nat.nat.logical_port)) { > - (Some{external_mac}, Some{logical_port}) -> ( > - /* distributed NAT case, use nat->external_mac */ > - external_mac.to_string().intern(), > - /* Traffic with eth.src = nat->external_mac should only be > - * sent from the chassis where nat->logical_port is > - * resident, so that upstream MAC learning points to the > - * correct chassis. Also need to avoid generation of > - * multiple ARP responses from different chassis. */ > - i"is_chassis_resident(${json_escape(logical_port)})" > - ), > - _ -> ( > - rEG_INPORT_ETH_ADDR(), > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s > - * should only be sent from the gateway chassis, so that > - * upstream MAC learning points to the gateway chassis. 
> - * Also need to avoid generation of multiple ARP responses > - * from different chassis. */ > - match (router.l3dgw_ports.nth(0)) { > - None -> i"", > - Some {var gw_port} -> i"is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})" > - } > - ) > - }. > - > -/* Now divide the ARP/ND flows into ARP and ND. */ > -relation LogicalRouterArpNdFlow( > - router: Intern<Router>, > - nat: NAT, > - lrp: Option<Intern<nb::Logical_Router_Port>>, > - mac: istring, > - extra_match: Option<istring>, > - drop: bool, > - priority: integer) > -LogicalRouterArpFlow(router, lrp, i"${ipv4}", mac, extra_match, drop, priority, > - stage_hint(nat.nat._uuid)) :- > - LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv4{ipv4}}, lrp, > - mac, extra_match, drop, priority). > -LogicalRouterNdFlow(router, lrp, i"nd_na", ipv6, true, mac, extra_match, drop, priority, > - stage_hint(nat.nat._uuid)) :- > - LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv6{ipv6}}, lrp, > - mac, extra_match, drop, priority). 
> - > -relation LogicalRouterNdFlowLB( > - lr: Intern<Router>, > - lrp: Option<Intern<nb::Logical_Router_Port>>, > - ip: istring, > - mac: istring, > - extra_match: Option<istring>, > - stage_hint: bit<32>) > -Flow(.logical_datapath = lr._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 90, > - .__match = __match.intern(), > - .actions = actions, > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = lr.copp.get(cOPP_ND_NA())) :- > - LogicalRouterNdFlowLB(.lr = lr, .lrp = lrp, .ip = ip, > - .mac = mac, .extra_match = extra_match, > - .stage_hint = stage_hint), > - var __match = { > - var clauses = vec_with_capacity(4); > - match (lrp) { > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > - None -> () > - }; > - clauses.push(i"nd_ns && nd.target == ${ip}"); > - clauses.append(extra_match.to_vec()); > - clauses.join(" && ") > - }, > - var actions = > - i"nd_na { " > - "eth.src = ${mac}; " > - "ip6.src = nd.target; " > - "nd.tll = ${mac}; " > - "outport = inport; " > - "flags.loopback = 1; " > - "output; " > - "};". 
> - > -relation LogicalRouterArpFlow( > - lr: Intern<Router>, > - lrp: Option<Intern<nb::Logical_Router_Port>>, > - ip: istring, > - mac: istring, > - extra_match: Option<istring>, > - drop: bool, > - priority: integer, > - stage_hint: bit<32>) > -Flow(.logical_datapath = lr._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = priority, > - .__match = __match.intern(), > - .actions = actions, > - .stage_hint = stage_hint, > - .io_port = None, > - .controller_meter = None) :- > - LogicalRouterArpFlow(.lr = lr, .lrp = lrp, .ip = ip, .mac = mac, > - .extra_match = extra_match, .drop = drop, > - .priority = priority, .stage_hint = stage_hint), > - var __match = { > - var clauses = vec_with_capacity(3); > - match (lrp) { > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > - None -> () > - }; > - clauses.push(i"arp.op == 1 && arp.tpa == ${ip}"); > - clauses.append(extra_match.to_vec()); > - clauses.join(" && ") > - }, > - var actions = if (drop) { > - i"drop;" > - } else { > - i"eth.dst = eth.src; " > - "eth.src = ${mac}; " > - "arp.op = 2; /* ARP reply */ " > - "arp.tha = arp.sha; " > - "arp.sha = ${mac}; " > - "arp.tpa <-> arp.spa; " > - "outport = inport; " > - "flags.loopback = 1; " > - "output;" > - }. 
> - > -relation LogicalRouterNdFlow( > - lr: Intern<Router>, > - lrp: Option<Intern<nb::Logical_Router_Port>>, > - action: istring, > - ip: in6_addr, > - sn_ip: bool, > - mac: istring, > - extra_match: Option<istring>, > - drop: bool, > - priority: integer, > - stage_hint: bit<32>) > -Flow(.logical_datapath = lr._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = priority, > - .__match = __match.intern(), > - .actions = actions, > - .io_port = None, > - .controller_meter = controller_meter, > - .stage_hint = stage_hint) :- > - LogicalRouterNdFlow(.lr = lr, .lrp = lrp, .action = action, .ip = ip, > - .sn_ip = sn_ip, .mac = mac, .extra_match = extra_match, > - .drop = drop, .priority = priority, > - .stage_hint = stage_hint), > - var __match = { > - var clauses = vec_with_capacity(4); > - match (lrp) { > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > - None -> () > - }; > - if (sn_ip) { > - clauses.push(i"ip6.dst == {${ip}, ${ip.solicited_node()}}") > - }; > - clauses.push(i"nd_ns && nd.target == ${ip}"); > - clauses.append(extra_match.to_vec()); > - clauses.join(" && ") > - }, > - (var actions, var controller_meter) = if (drop) { > - (i"drop;", None) > - } else { > - (i"${action} { " > - "eth.src = ${mac}; " > - "ip6.src = nd.target; " > - "nd.tll = ${mac}; " > - "outport = inport; " > - "flags.loopback = 1; " > - "output; " > - "};", > - lr.copp.get(cOPP_ND_NA())) > - }. 
> - > -/* ICMP time exceeded */ > -for (RouterPortNetworksIPv4Addr(.port = &RouterPort{.lrp = lrp, > - .json_name = json_name, > - .router = router, > - .networks = networks, > - .is_redirect = is_redirect}, > - .addr = addr)) > -{ > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 40, > - .__match = i"inport == ${json_name} && ip4 && " > - "ip.ttl == {0, 1} && !ip.later_frag", > - .actions = i"icmp4 {" > - "eth.dst <-> eth.src; " > - "icmp4.type = 11; /* Time exceeded */ " > - "icmp4.code = 0; /* TTL exceeded in transit */ " > - "ip4.dst = ip4.src; " > - "ip4.src = ${addr.addr}; " > - "ip.ttl = 255; " > - "next; };", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None); > - > - /* ARP reply. These flows reply to ARP requests for the router's own > - * IP address. */ > - for (AddChassisResidentCheck(lrp._uuid, add_chassis_resident_check)) { > - var __match = > - "arp.spa == ${addr.match_network()}" ++ > - if (add_chassis_resident_check) { > - var redirect_port_name = if (is_redirect) { > - json_escape(chassis_redirect_name(lrp.name)) > - } else { > - match (router.l3dgw_ports.nth(0)) { > - None -> "", > - Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name)) > - } > - }; > - " && is_chassis_resident(${redirect_port_name})" > - } else "" in > - LogicalRouterArpFlow(.lr = router, > - .lrp = Some{lrp}, > - .ip = i"${addr.addr}", > - .mac = rEG_INPORT_ETH_ADDR(), > - .extra_match = Some{__match.intern()}, > - .drop = false, > - .priority = 90, > - .stage_hint = stage_hint(lrp._uuid)) > - } > -} > - > -LogicalRouterNdFlow(.lr = r, > - .lrp = Some{lrp}, > - .action = i"nd_na", > - .ip = ip, > - .sn_ip = false, > - .mac = rEG_INPORT_ETH_ADDR(), > - .extra_match = residence_check, > - .drop = false, > - .priority = 90, > - .stage_hint = 0) :- > - &LBVIP(.vip_addr = IPv6{ip}, .lb = lb), > - RouterLB(r, lb._uuid), > - &RouterPort(.router = r, .lrp = lrp, .is_redirect = 
is_redirect), > - var residence_check = match (is_redirect) { > - true -> Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"}, > - false -> None > - }. > - > -for (&RouterPort(.lrp = lrp, > - .router = router@&Router{._uuid = lr_uuid}, > - .json_name = json_name, > - .networks = networks, > - .is_redirect = is_redirect)) { > - for (lbips in &LogicalRouterLBIPs(.lr = lr_uuid)) { > - var residence_check = match (is_redirect) { > - true -> Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"}, > - false -> None > - } in { > - var all_ipv4s = union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable) in > - not all_ipv4s.is_empty() in > - LogicalRouterArpFlow(.lr = router, > - .lrp = Some{lrp}, > - .ip = i"{ ${all_ipv4s.to_vec().join(\", \")} }", > - .mac = rEG_INPORT_ETH_ADDR(), > - .extra_match = residence_check, > - .drop = false, > - .priority = 90, > - .stage_hint = 0); > - > - var all_ipv6s = union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable) in > - not all_ipv6s.is_empty() in > - LogicalRouterNdFlowLB(.lr = router, > - .lrp = Some{lrp}, > - .ip = ("{ " ++ all_ipv6s.to_vec().join(", ") ++ " }").intern(), > - .mac = rEG_INPORT_ETH_ADDR(), > - .extra_match = residence_check, > - .stage_hint = 0) > - } > - } > -} > - > -/* Drop IP traffic destined to router owned IPs except if the IP is > - * also a SNAT IP. Those are dropped later, in stage > - * "lr_in_arp_resolve", if unSNAT was unsuccessful. > - * > - * Priority 60. 
> - */ > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 60, > - .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > - .actions = i"drop;", > - .stage_hint = stage_hint(lrp_uuid), > - .io_port = None, > - .controller_meter = None) :- > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > - .router = &Router{.snat_ips = snat_ips, > - .force_lb_snat = false, > - ._uuid = lr_uuid}, > - .networks = networks), > - var addr = FlatMap(networks.ipv4_addrs), > - not snat_ips.contains_key(IPv4{addr.addr}), > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 60, > - .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > - .actions = i"drop;", > - .stage_hint = stage_hint(lrp_uuid), > - .io_port = None, > - .controller_meter = None) :- > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > - .router = &Router{.snat_ips = snat_ips, > - .force_lb_snat = false, > - ._uuid = lr_uuid}, > - .networks = networks), > - var addr = FlatMap(networks.ipv6_addrs), > - not snat_ips.contains_key(IPv6{addr.addr}), > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > - > -for (RouterPortNetworksIPv4Addr( > - .port = &RouterPort{ > - .router = &Router{._uuid = lr_uuid, > - .l3dgw_ports = vec_empty(), > - .is_gateway = false, > - .copp = copp}, > - .lrp = lrp}, > - .addr = addr)) > -{ > - /* UDP/TCP/SCTP port unreachable. 
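As context for the two priority-60 rules above: the `group_by` collapses every router-owned address of a datapath (minus the SNAT IPs) into a single drop match per address family. A rough Python model of that aggregation (function and argument names are illustrative, not from the patch):

```python
def build_drop_match(owned_addrs, snat_ips, ip_ver="ip4"):
    """Aggregate router-owned addresses that are not also SNAT IPs
    into one drop match, as the group_by in the rules above does."""
    match_ips = [a for a in owned_addrs if a not in snat_ips]
    if not match_ips:
        return None  # no flow is emitted for this datapath
    return "%s.dst == {%s}" % (ip_ver, ", ".join(match_ips))
```

Addresses that are SNAT IPs fall through and are dropped later in "lr_in_arp_resolve" if unSNAT fails, as the comment above notes.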
*/ > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && udp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"icmp4 {" > - "eth.dst <-> eth.src; " > - "ip4.dst <-> ip4.src; " > - "ip.ttl = 255; " > - "icmp4.type = 3; " > - "icmp4.code = 3; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ICMP4_ERR()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && tcp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"tcp_reset {" > - "eth.dst <-> eth.src; " > - "ip4.dst <-> ip4.src; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_TCP_RESET()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && sctp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"sctp_abort {" > - "eth.dst <-> eth.src; " > - "ip4.dst <-> ip4.src; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_TCP_RESET()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 70, > - .__match = __match, > - .actions = i"icmp4 {" > - "eth.dst <-> eth.src; " > - "ip4.dst <-> ip4.src; " > - "ip.ttl = 255; " > - "icmp4.type = 3; " > - "icmp4.code = 2; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ICMP4_ERR()), > - .stage_hint = stage_hint(lrp._uuid)) > -} > - > -/* DHCPv6 reply handling */ > -Flow(.logical_datapath = rp.router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 100, > - .__match = i"ip6.dst == ${ipv6_addr.addr} " > - "&& 
udp.src == 547 && udp.dst == 546", > - .actions = i"reg0 = 0; handle_dhcpv6_reply;", > - .stage_hint = stage_hint(rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - rp in &RouterPort(), > - var ipv6_addr = FlatMap(rp.networks.ipv6_addrs). > - > -/* Logical router ingress table IP_INPUT: IP Input for IPv6. */ > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp) > - if (not networks.ipv6_addrs.is_empty())) > -{ > - //if (op->derived) { > - // /* No ingress packets are accepted on a chassisredirect > - // * port, so no need to program flows for that port. */ > - // continue; > - //} > - > - /* ICMPv6 echo reply. These flows reply to echo requests > - * received for the router's IP address. */ > - var __match = "ip6.dst == " ++ > - format_v6_networks(networks) ++ > - " && icmp6.type == 128 && icmp6.code == 0" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 90, > - .__match = __match.intern(), > - .actions = i"ip6.dst <-> ip6.src; " > - "ip.ttl = 255; " > - "icmp6.type = 129; " > - "flags.loopback = 1; " > - "next; ", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > -} > - > -/* ND reply. These flows reply to ND solicitations for the > - * router's own IP address. */ > -for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.lrp = lrp, > - .is_redirect = is_redirect, > - .router = router, > - .networks = networks, > - .json_name = json_name}, > - .addr = addr)) > -{ > - var extra_match = if (is_redirect) { > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea > - * should only be sent from the gateway chassis, so that > - * upstream MAC learning points to the gateway chassis. > - * Also need to avoid generation of multiple ND replies > - * from different chassis. 
*/ > - Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"} > - } else None in > - LogicalRouterNdFlow(.lr = router, > - .lrp = Some{lrp}, > - .action = i"nd_na_router", > - .ip = addr.addr, > - .sn_ip = true, > - .mac = rEG_INPORT_ETH_ADDR(), > - .extra_match = extra_match, > - .drop = false, > - .priority = 90, > - .stage_hint = stage_hint(lrp._uuid)) > -} > - > -/* UDP/TCP/SCTP port unreachable */ > -for (RouterPortNetworksIPv6Addr( > - .port = &RouterPort{.router = &Router{._uuid = lr_uuid, > - .l3dgw_ports = vec_empty(), > - .is_gateway = false, > - .copp = copp}, > - .lrp = lrp, > - .json_name = json_name}, > - .addr = addr)) > -{ > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && tcp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"tcp_reset {" > - "eth.dst <-> eth.src; " > - "ip6.dst <-> ip6.src; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_TCP_RESET()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && sctp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"sctp_abort {" > - "eth.dst <-> eth.src; " > - "ip6.dst <-> ip6.src; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_TCP_RESET()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && udp" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 80, > - .__match = __match, > - .actions = i"icmp6 {" > - "eth.dst <-> eth.src; " > - "ip6.dst <-> ip6.src; " > - "ip.ttl = 255; " > - "icmp6.type = 1; " > - "icmp6.code = 4; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ICMP6_ERR()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - 
var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 70, > - .__match = __match, > - .actions = i"icmp6 {" > - "eth.dst <-> eth.src; " > - "ip6.dst <-> ip6.src; " > - "ip.ttl = 255; " > - "icmp6.type = 1; " > - "icmp6.code = 3; " > - "next; };", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ICMP6_ERR()), > - .stage_hint = stage_hint(lrp._uuid)) > -} > - > -/* ICMPv6 time exceeded */ > -for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.router = router, > - .lrp = lrp, > - .json_name = json_name}, > - .addr = addr) > - /* skip link-local address */ > - if (not addr.is_lla())) > -{ > - var __match = i"inport == ${json_name} && ip6 && " > - "ip6.src == ${addr.match_network()} && " > - "ip.ttl == {0, 1} && !ip.later_frag" in > - var actions = i"icmp6 {" > - "eth.dst <-> eth.src; " > - "ip6.dst = ip6.src; " > - "ip6.src = ${addr.addr}; " > - "ip.ttl = 255; " > - "icmp6.type = 3; /* Time exceeded */ " > - "icmp6.code = 0; /* TTL exceeded in transit */ " > - "next; };" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 40, > - .__match = __match, > - .actions = actions, > - .io_port = None, > - .controller_meter = router.copp.get(cOPP_ICMP6_ERR()), > - .stage_hint = stage_hint(lrp._uuid)) > -} > - > -/* NAT, Defrag and load balancing. */ > - > -function default_allow_flow(datapath: uuid, stage: Intern<Stage>): Flow { > - Flow{.logical_datapath = datapath, > - .stage = stage, > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .io_port = None, > - .controller_meter = None, > - .stage_hint = 0} > -} > -for (r in &Router(._uuid = lr_uuid)) { > - /* Packets are allowed by default. 
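The `default_allow_flow` helper above just emits the per-stage priority-0 catch-all; restated as a plain Python sketch (field names shortened relative to the DDlog record):

```python
def default_allow_flow(datapath, stage):
    """Priority-0 flow that matches everything ("1") and advances the
    packet to the next table."""
    return {"logical_datapath": datapath, "stage": stage,
            "priority": 0, "match": "1", "actions": "next;"}
```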
*/ > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DEFRAG())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_UNSNAT())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_SNAT())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DNAT())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_UNDNAT())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_POST_UNDNAT())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_EGR_LOOP())]; > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_ECMP_STATEFUL())]; > - > - /* Send the IPv6 NS packets to next table. When ovn-controller > - * generates IPv6 NS (for the action - nd_ns{}), the injected > - * packet would go through conntrack - which is not required. */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = 120, > - .__match = i"nd_ns", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -for (r in &Router(._uuid = lr_uuid, > - .l3dgw_ports = l3dgw_ports, > - .is_gateway = is_gateway, > - .nat = nat)) { > - for (LogicalRouterLBs(lr_uuid, lbs)) { > - if ((l3dgw_ports.len() > 0 or is_gateway) and (not is_empty(nat) or not is_empty(lbs))) { > - /* If the router has load balancer or DNAT rules, re-circulate every packet > - * through the DNAT zone so that packets that need to be unDNATed in the > - * reverse direction get unDNATed. > - * > - * We also commit newly initiated connections in the reply direction to the > - * DNAT zone. This ensures that these flows are tracked. If the flow was > - * not committed, it would produce ongoing datapath flows with the ct.new > - * flag set. Some NICs are unable to offload these flows. 
> - */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_POST_UNDNAT(), > - .priority = 50, > - .__match = i"ip && ct.new", > - .actions = i"ct_commit { } ; next; ", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_UNDNAT(), > - .priority = 50, > - .__match = i"ip", > - .actions = i"flags.loopback = 1; ct_dnat;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -Flow(.logical_datapath = lr, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = 120, > - .__match = i"flags.skip_snat_for_lb == 1 && ip", > - .actions = i"next;", > - .stage_hint = stage_hint(lb.lb._uuid), > - .io_port = None, > - .controller_meter = None) :- > - LogicalRouterLB(lr, lb), > - lb.lb.options.get_bool_def(i"skip_snat", false) > - . > - > -function lrouter_nat_is_stateless(nat: NAT): bool = { > - Some{i"true"} == nat.nat.options.get(i"stateless") > -} > - > -/* Handles the match criteria and actions in logical flow > - * based on external ip based NAT rule filter. > - * > - * For ALLOWED_EXT_IPs, we will add an additional match criteria > - * of comparing ip*.src/dst with the allowed external ip address set. > - * > - * For EXEMPTED_EXT_IPs, we will have an additional logical flow > - * where we compare ip*.src/dst with the exempted external ip address set > - * and action says "next" instead of ct*. > - */ > -function lrouter_nat_add_ext_ip_match( > - router: Intern<Router>, > - nat: NAT, > - __match: string, > - ipX: string, > - is_src: bool, > - mask: v46_ip): (string, Option<Flow>) > -{ > - var dir = if (is_src) "src" else "dst"; > - match (nat.exceptional_ext_ips) { > - None -> ("", None), > - Some{AllowedExtIps{__as}} -> (" && ${ipX}.${dir} == $${__as.name}", None), > - Some{ExemptedExtIps{__as}} -> { > - /* Priority of logical flows corresponding to exempted_ext_ips is > - * +1 of the corresponding regular NAT rule. 
> - * For example, if we have following NAT rule and we associate > - * exempted external ips to it: > - * "ovn-nbctl lr-nat-add router dnat_and_snat 10.15.24.139 50.0.0.11" > - * > - * And now we associate exempted external ip address set to it. > - * Now corresponding to above rule we will have following logical > - * flows: > - * lr_out_snat...priority=162, match=(..ip4.dst == $exempt_range), > - * action=(next;) > - * lr_out_snat...priority=161, match=(..), action=(ct_snat(....);) > - * > - */ > - var priority = match (is_src) { > - true -> { > - /* S_ROUTER_IN_DNAT uses priority 100 */ > - 100 + 1 > - }, > - false -> { > - /* S_ROUTER_OUT_SNAT uses priority (mask + 1 + 128 + 1) */ > - var is_gw_router = router.l3dgw_ports.is_empty(); > - var mask_1bits = mask.cidr_bits().unwrap_or(8'd0) as integer; > - mask_1bits + 2 + { if (not is_gw_router) 128 else 0 } > - } > - }; > - > - ("", > - Some{Flow{.logical_datapath = router._uuid, > - .stage = if (is_src) { s_ROUTER_IN_DNAT() } else { s_ROUTER_OUT_SNAT() }, > - .priority = priority, > - .__match = i"${__match} && ${ipX}.${dir} == $${__as.name}", > - .actions = i"next;", > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None}}) > - } > - } > -} > - > -relation LogicalRouterForceSnatFlows( > - logical_router: uuid, > - ips: Set<v46_ip>, > - context: string) > -Flow(.logical_datapath = logical_router, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 110, > - .__match = i"${ipX} && ${ipX}.dst == ${ip}", > - .actions = i"ct_snat;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None), > -/* Higher priority rules to force SNAT with the IP addresses > - * configured in the Gateway router. This only takes effect > - * when the packet has already been DNATed or load balanced once. 
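The priority chosen for the exempted-external-IP "next;" flow in `lrouter_nat_add_ext_ip_match` above is always one higher than the NAT rule it shadows, so the exemption wins. A Python restatement of that arithmetic (argument names are mine):

```python
def exempted_ext_ip_priority(is_src, mask_bits, is_gw_router):
    """Priority of the exemption flow: one above the regular NAT rule."""
    if is_src:
        # lr_in_dnat NAT rules use priority 100.
        return 100 + 1
    # lr_out_snat NAT rules use (mask bits + 1), plus 128 for
    # centralized rules on a distributed router.
    return mask_bits + 2 + (0 if is_gw_router else 128)
```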
*/ > -Flow(.logical_datapath = logical_router, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = 100, > - .__match = i"flags.force_snat_for_${context} == 1 && ${ipX}", > - .actions = i"ct_snat(${ip});", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - LogicalRouterForceSnatFlows(.logical_router = logical_router, > - .ips = ips, > - .context = context), > - var ip = FlatMap(ips), > - var ipX = ip.ipX(). > - > -/* Higher priority rules to force SNAT with the router port ip. > - * This only takes effect when the packet has already been > - * load balanced once. */ > -for (rp in &RouterPort(.router = &Router{._uuid = lr_uuid, .options = lr_options}, .lrp = lrp)) { > - if (lb_force_snat_router_ip(lr_options) and rp.peer != PeerNone) { > - Some{var ipv4} = rp.networks.ipv4_addrs.nth(0) in { > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 110, > - .__match = i"inport == ${rp.json_name} && ip4.dst == ${ipv4.addr}", > - .actions = i"ct_snat;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = 110, > - .__match = i"flags.force_snat_for_lb == 1 && ip4 && outport == ${rp.json_name}", > - .actions = i"ct_snat(${ipv4.addr});", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - if (rp.networks.ipv4_addrs.len() > 1) { > - Warning["Logical router port ${rp.json_name} is configured with multiple IPv4 " > - "addresses. Only the first IP [${ipv4.addr}] is considered as SNAT for " > - "load balancer"] > - } > - }; > - > - /* op->lrp_networks.ipv6_addrs will always have LLA and that will be > - * last in the list. So add the flows only if n_ipv6_addrs > 1. 
*/ > - if (rp.networks.ipv6_addrs.len() > 1) { > - Some{var ipv6} = rp.networks.ipv6_addrs.nth(0) in { > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 110, > - .__match = i"inport == ${rp.json_name} && ip6.dst == ${ipv6.addr}", > - .actions = i"ct_snat;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = 110, > - .__match = i"flags.force_snat_for_lb == 1 && ip6 && outport == ${rp.json_name}", > - .actions = i"ct_snat(${ipv6.addr});", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - if (rp.networks.ipv6_addrs.len() > 2) { > - Warning["Logical router port ${rp.json_name} is configured with multiple IPv6 " > - "addresses. Only the first IP [${ipv6.addr}] is considered as SNAT for " > - "load balancer"] > - } > - } > - } > - } > -} > - > -relation VirtualLogicalPort(logical_port: Option<istring>) > -VirtualLogicalPort(Some{logical_port}) :- > - lsp in &nb::Logical_Switch_Port(.name = logical_port, .__type = i"virtual"). > - > -/* NAT rules are only valid on Gateway routers and routers with > - * l3dgw_port (router has a port with "redirect-chassis" > - * specified). */ > -for (r in &Router(._uuid = lr_uuid, > - .l3dgw_ports = l3dgw_ports, > - .is_gateway = is_gateway) > - if not l3dgw_ports.is_empty() or is_gateway) > -{ > - for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) { > - var ipX = nat.external_ip.ipX() in > - var xx = nat.external_ip.xxreg() in > - /* Check the validity of nat->logical_ip. 'logical_ip' can > - * be a subnet when the type is "snat". 
*/ > - Some{(_, var mask)} = ip46_parse_masked(nat.nat.logical_ip.ival()) in > - true == match ((mask.is_all_ones(), nat.nat.__type.ival())) { > - (_, "snat") -> true, > - (false, _) -> { > - warn("bad ip ${nat.nat.logical_ip} for dnat in router ${uuid2str(lr_uuid)}"); > - false > - }, > - _ -> true > - } in > - /* For distributed router NAT, determine whether this NAT rule > - * satisfies the conditions for distributed NAT processing. */ > - var mac = match ((not l3dgw_ports.is_empty() and nat.nat.__type == i"dnat_and_snat", > - nat.nat.logical_port, nat.external_mac)) { > - (true, Some{_}, Some{mac}) -> Some{mac}, > - _ -> None > - } in > - var stateless = (lrouter_nat_is_stateless(nat) > - and nat.nat.__type == i"dnat_and_snat") in > - { > - /* Ingress UNSNAT table: It is for already established connections' > - * reverse traffic. i.e., SNAT has already been done in egress > - * pipeline and now the packet has entered the ingress pipeline as > - * part of a reply. We undo the SNAT here. > - * > - * Undoing SNAT has to happen before DNAT processing. This is > - * because when the packet was DNATed in ingress pipeline, it did > - * not know about the possibility of eventual additional SNAT in > - * egress pipeline. */ > - if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") { > - if (l3dgw_ports.is_empty()) { > - /* Gateway router. */ > - var actions = if (stateless) { > - i"${ipX}.dst=${nat.nat.logical_ip}; next;" > - } else { > - i"ct_snat;" > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 90, > - .__match = i"ip && ${ipX}.dst == ${nat.nat.external_ip}", > - .actions = actions, > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - Some {var gwport} = l3dgw_ports.nth(0) in { > - /* Distributed router. */ > - > - /* Traffic received on l3dgw_port is subject to NAT. 
*/ > - var __match = > - "ip && ${ipX}.dst == ${nat.nat.external_ip}" > - " && inport == ${json_escape(gwport.name)}" ++ > - if (mac == None) { > - /* Flows for NAT rules that are centralized are only > - * programmed on the "redirect-chassis". */ > - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" > - } else { "" } in > - var actions = if (stateless) { > - i"${ipX}.dst=${nat.nat.logical_ip}; next;" > - } else { > - i"ct_snat;" > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = actions, > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - }; > - > - /* Ingress DNAT table: Packets enter the pipeline with destination > - * IP address that needs to be DNATted from a external IP address > - * to a logical IP address. */ > - var ip_and_ports = "${nat.nat.logical_ip}" ++ > - if (nat.nat.external_port_range != i"") { > - " ${nat.nat.external_port_range}" > - } else { > - "" > - } in > - if (nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat") { > - l3dgw_ports.is_empty() in > - var __match = "ip && ${ipX}.dst == ${nat.nat.external_ip}" in > - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( > - r, nat, __match, ipX, true, mask) in > - { > - /* Gateway router. */ > - /* Packet when it goes from the initiator to destination. > - * We need to set flags.loopback because the router can > - * send the packet back through the same interface. */ > - Some{var f} = ext_flow in Flow[f]; > - > - var flag_action = > - if (has_force_snat_ip(r.options, i"dnat")) { > - /* Indicate to the future tables that a DNAT has taken > - * place and a force SNAT needs to be done in the > - * Egress SNAT table. 
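For the distributed-router UNSNAT/DNAT rules above, the match gains an `is_chassis_resident()` clause only when the NAT rule is centralized (no external MAC / logical port). A Python sketch, assuming the conventional `cr-` chassisredirect port naming:

```python
def distributed_nat_match(ip_x, external_ip, gw_port, centralized):
    """NAT match for traffic received on the distributed gateway port."""
    m = 'ip && %s.dst == %s && inport == "%s"' % (ip_x, external_ip, gw_port)
    if centralized:
        # Centralized rules only run on the chassis hosting cr-<port>.
        m += ' && is_chassis_resident("cr-%s")' % gw_port
    return m
```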
*/ > - "flags.force_snat_for_dnat = 1; " > - } else { "" } in > - var nat_actions = if (stateless) { > - "${ipX}.dst=${nat.nat.logical_ip}; next;" > - } else { > - "flags.loopback = 1; " > - "ct_dnat(${ip_and_ports});" > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_DNAT(), > - .priority = 100, > - .__match = (__match ++ ext_ip_match).intern(), > - .actions = (flag_action ++ nat_actions).intern(), > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - Some {var gwport} = l3dgw_ports.nth(0) in > - var __match = > - "ip && ${ipX}.dst == ${nat.nat.external_ip}" > - " && inport == ${json_escape(gwport.name)}" ++ > - if (mac == None) { > - /* Flows for NAT rules that are centralized are only > - * programmed on the "redirect-chassis". */ > - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" > - } else { "" } in > - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( > - r, nat, __match, ipX, true, mask) in > - { > - /* Distributed router. */ > - /* Traffic received on l3dgw_port is subject to NAT. */ > - Some{var f} = ext_flow in Flow[f]; > - > - var actions = if (stateless) { > - i"${ipX}.dst=${nat.nat.logical_ip}; next;" > - } else { > - i"ct_dnat(${ip_and_ports});" > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_DNAT(), > - .priority = 100, > - .__match = (__match ++ ext_ip_match).intern(), > - .actions = actions, > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - }; > - > - /* ARP resolve for NAT IPs. 
*/ > - Some {var gwport} = l3dgw_ports.nth(0) in { > - var gwport_name = json_escape(gwport.name) in { > - if (nat.nat.__type == i"snat") { > - var __match = i"inport == ${gwport_name} && " > - "${ipX}.src == ${nat.nat.external_ip}" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 120, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - var nexthop_reg = "${xx}${rEG_NEXT_HOP()}" in > - var __match = i"outport == ${gwport_name} && " > - "${nexthop_reg} == ${nat.nat.external_ip}" in > - var dst_mac = match (mac) { > - Some{value} -> i"${value}", > - None -> gwport.mac > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = __match, > - .actions = i"eth.dst = ${dst_mac}; next;", > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - }; > - > - /* Egress UNDNAT table: It is for already established connections' > - * reverse traffic. i.e., DNAT has already been done in ingress > - * pipeline and now the packet has entered the egress pipeline as > - * part of a reply. We undo the DNAT here. > - * > - * Note that this only applies for NAT on a distributed router. > - */ > - if ((nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat")) { > - Some {var gwport} = l3dgw_ports.nth(0) in > - var __match = > - "ip && ${ipX}.src == ${nat.nat.logical_ip}" > - " && outport == ${json_escape(gwport.name)}" ++ > - if (mac == None) { > - /* Flows for NAT rules that are centralized are only > - * programmed on the "redirect-chassis". 
*/ > - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" > - } else { "" } in > - var actions = > - match (mac) { > - Some{mac_addr} -> "eth.src = ${mac_addr}; ", > - None -> "" > - } ++ > - if (stateless) { > - "${ipX}.src=${nat.nat.external_ip}; next;" > - } else { > - "ct_dnat;" > - } in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_UNDNAT(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = actions.intern(), > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Egress SNAT table: Packets enter the egress pipeline with > - * source ip address that needs to be SNATted to a external ip > - * address. */ > - var ip_and_ports = "${nat.nat.external_ip}" ++ > - if (nat.nat.external_port_range != i"") { > - " ${nat.nat.external_port_range}" > - } else { > - "" > - } in > - if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") { > - l3dgw_ports.is_empty() in > - var __match = "ip && ${ipX}.src == ${nat.nat.logical_ip}" in > - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( > - r, nat, __match, ipX, false, mask) in > - { > - /* Gateway router. */ > - Some{var f} = ext_flow in Flow[f]; > - > - /* The priority here is calculated such that the > - * nat->logical_ip with the longest mask gets a higher > - * priority. 
*/ > - var actions = if (stateless) { > - i"${ipX}.src=${nat.nat.external_ip}; next;" > - } else { > - i"ct_snat(${ip_and_ports});" > - } in > - Some{var plen} = mask.cidr_bits() in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = plen as bit<64> + 1, > - .__match = (__match ++ ext_ip_match).intern(), > - .actions = actions, > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - Some {var gwport} = l3dgw_ports.nth(0) in > - var __match = > - "ip && ${ipX}.src == ${nat.nat.logical_ip}" > - " && outport == ${json_escape(gwport.name)}" ++ > - if (mac == None) { > - /* Flows for NAT rules that are centralized are only > - * programmed on the "redirect-chassis". */ > - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" > - } else { "" } in > - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( > - r, nat, __match, ipX, false, mask) in > - { > - /* Distributed router. */ > - Some{var f} = ext_flow in Flow[f]; > - > - var actions = > - match (mac) { > - Some{mac_addr} -> "eth.src = ${mac_addr}; ", > - _ -> "" > - } ++ if (stateless) { > - "${ipX}.src=${nat.nat.external_ip}; next;" > - } else { > - "ct_snat(${ip_and_ports});" > - } in > - /* The priority here is calculated such that the > - * nat->logical_ip with the longest mask gets a higher > - * priority. 
*/ > - Some{var plen} = mask.cidr_bits() in > - var priority = (plen as bit<64>) + 1 in > - var centralized_boost = if (mac == None) 128 else 0 in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_SNAT(), > - .priority = priority + centralized_boost, > - .__match = (__match ++ ext_ip_match).intern(), > - .actions = actions.intern(), > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - }; > - > - /* Logical router ingress table ADMISSION: > - * For NAT on a distributed router, add rules allowing > - * ingress traffic with eth.dst matching nat->external_mac > - * on the l3dgw_port instance where nat->logical_port is > - * resident. */ > - Some{var mac_addr} = mac in > - Some{var gwport} = l3dgw_ports.nth(0) in > - Some{var logical_port} = nat.nat.logical_port in > - var __match = > - i"eth.dst == ${mac_addr} && inport == ${json_escape(gwport.name)}" > - " && is_chassis_resident(${json_escape(logical_port)})" in > - /* Store the ethernet address of the port receiving the packet. > - * This will save us from having to match on inport further > - * down in the pipeline. > - */ > - var actions = i"${rEG_INPORT_ETH_ADDR()} = ${gwport.mac}; next;" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ADMISSION(), > - .priority = 50, > - .__match = __match, > - .actions = actions, > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None); > - > - /* Ingress Gateway Redirect Table: For NAT on a distributed > - * router, add flows that are specific to a NAT rule. These > - * flows indicate the presence of an applicable NAT rule that > - * can be applied in a distributed manner. 
> 
> - * In particular, the IP src register and eth.src are set to the NAT
> - * external IP and NAT external MAC so that the ARP request generated
> - * in the following stage is sent out with proper IP/MAC src addresses.
> - */
> - Some{var mac_addr} = mac in
> - Some{var gwport} = l3dgw_ports.nth(0) in
> - Some{var logical_port} = nat.nat.logical_port in
> - Some{var external_mac} = nat.nat.external_mac in
> - var __match =
> - i"${ipX}.src == ${nat.nat.logical_ip} && "
> - "outport == ${json_escape(gwport.name)} && "
> - "is_chassis_resident(${json_escape(logical_port)})" in
> - var actions =
> - i"eth.src = ${external_mac}; "
> - "${xx}${rEG_SRC()} = ${nat.nat.external_ip}; "
> - "next;" in
> - Flow(.logical_datapath = lr_uuid,
> - .stage = s_ROUTER_IN_GW_REDIRECT(),
> - .priority = 100,
> - .__match = __match,
> - .actions = actions,
> - .stage_hint = stage_hint(nat.nat._uuid),
> - .io_port = None,
> - .controller_meter = None);
> -
> - for (VirtualLogicalPort(nat.nat.logical_port)) {
> - Some{var gwport} = l3dgw_ports.nth(0) in
> - Flow(.logical_datapath = lr_uuid,
> - .stage = s_ROUTER_IN_GW_REDIRECT(),
> - .priority = 80,
> - .__match = i"${ipX}.src == ${nat.nat.logical_ip} && "
> - "outport == ${json_escape(gwport.name)}",
> - .actions = i"drop;",
> - .stage_hint = stage_hint(nat.nat._uuid),
> - .io_port = None,
> - .controller_meter = None)
> - };
> -
> - /* Egress Loopback table: For NAT on a distributed router.
> - * If packets in the egress pipeline on the distributed
> - * gateway port have ip.dst matching a NAT external IP, then
> - * loop a clone of the packet back to the beginning of the
> - * ingress pipeline with inport = outport. */
> - Some{var gwport} = l3dgw_ports.nth(0) in
> - /* Distributed router. 
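The egress-loopback actions assembled just below zero out every logical register before re-injecting the clone into the ingress pipeline. Roughly, in Python (the register count and the `REGBIT_EGRESS_LOOPBACK` name are placeholders for the expanded macros):

```python
def egress_loopback_actions(n_log_regs):
    """Clone the packet back to ingress table 0 with all per-packet
    state (conntrack, registers, flags) scrubbed."""
    regs = "".join("reg%d = 0; " % j for j in range(n_log_regs))
    return ("clone { ct_clear; "
            'inport = outport; outport = ""; '
            "eth.dst <-> eth.src; "
            "flags = 0; flags.loopback = 1; "
            + regs +
            "REGBIT_EGRESS_LOOPBACK = 1; "
            "next(pipeline=ingress, table=0); };")
```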
*/ > - Some{var port} = match (mac) { > - Some{_} -> match (nat.nat.logical_port) { > - Some{name} -> Some{json_escape(name)}, > - None -> None: Option<string> > - }, > - None -> Some{json_escape(chassis_redirect_name(gwport.name))} > - } in > - var __match = i"${ipX}.dst == ${nat.nat.external_ip} && outport == ${json_escape(gwport.name)} && is_chassis_resident(${port})" in > - var regs = { > - var regs = vec_empty(); > - for (j in range_vec(0, mFF_N_LOG_REGS(), 01)) { > - regs.push("reg${j} = 0; ") > - }; > - regs > - } in > - var actions = > - "clone { ct_clear; " > - "inport = outport; outport = \"\"; " > - "eth.dst <-> eth.src; " > - "flags = 0; flags.loopback = 1; " ++ > - regs.join("") ++ > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > - "next(pipeline=ingress, table=0); };" in > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_EGR_LOOP(), > - .priority = 100, > - .__match = __match, > - .actions = actions.intern(), > - .stage_hint = stage_hint(nat.nat._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - }; > - > - /* Handle force SNAT options set in the gateway router. */ > - if (l3dgw_ports.is_empty()) { > - var dnat_force_snat_ips = get_force_snat_ip(r.options, i"dnat") in > - if (not dnat_force_snat_ips.is_empty()) > - LogicalRouterForceSnatFlows(.logical_router = lr_uuid, > - .ips = dnat_force_snat_ips, > - .context = "dnat"); > - > - var lb_force_snat_ips = get_force_snat_ip(r.options, i"lb") in > - if (not lb_force_snat_ips.is_empty()) > - LogicalRouterForceSnatFlows(.logical_router = lr_uuid, > - .ips = lb_force_snat_ips, > - .context = "lb") > - } > -} > - > -function nats_contain_vip(nats: Vec<NAT>, vip: v46_ip): bool { > - for (nat in nats) { > - if (nat.external_ip == vip) { > - return true > - } > - }; > - return false > -} > - > -/* If there are any load balancing rules, we should send > - * the packet to conntrack for defragmentation and > - * tracking. This helps with two things. > - * > - * 1. 
With tracking, we can send only new connections to > - * pick a DNAT ip address from a group. > - * 2. If there are L4 ports in load balancing rules, we > - * need the defragmentation to match on L4 ports. > - * > - * One of these flows must be created for each unique LB VIP address. > - * We create one for each VIP:port pair; flows with the same IP and > - * different port numbers will produce identical flows that will > - * get merged by DDlog. */ > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_DEFRAG(), > - .priority = prio, > - .__match = __match, > - .actions = __actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), > - var prio = if (port != 0) { 110 } else { 100 }, > - var proto = match (lb.protocol) { > - Some{proto} -> proto, > - _ -> i"tcp" > - }, > - var proto_match = if (port != 0) { " && ${proto}" } else { "" }, > - var __match = ("ip && ${ip.ipX()}.dst == ${ip}" ++ proto_match).intern(), > - var actions1 = "${ip.xxreg()}${rEG_NEXT_HOP()} = ${ip}; ", > - var actions2 = > - if (port != 0) { > - "${rEG_ORIG_TP_DPORT_ROUTER()} = ${proto}.dst; ct_dnat;" > - } else { > - "ct_dnat;" > - }, > - var __actions = (actions1 ++ actions2).intern(), > - GWRouterLB(r, lb._uuid). > - > -/* Higher priority rules are added for load-balancing in DNAT > - * table. For every match (on a VIP[:port]), we add two flows > - * via add_router_lb_flow(). One flow is for specific matching > - * on ct.new with an action of "ct_lb($targets);". The other > - * flow is for ct.est with an action of "ct_dnat;". 
*/ > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_DNAT(), > - .priority = prio, > - .__match = __match, > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - lbvip in &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), > - var proto = match (lb.protocol) { > - Some{proto} -> proto, > - _ -> i"tcp" > - }, > - var match1 = "${ip.ipX()} && ${ip.xxreg()}${rEG_NEXT_HOP()} == ${ip}", > - var match2 = if (port != 0) { > - " && ${proto} && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}" > - } else { > - "" > - }, > - var prio = if (port != 0) { 120 } else { 110 }, > - var match0 = ("ct.est && " ++ match1 ++ match2 ++ " && ct_label.natted == 1").intern(), > - GWRouterLB(r, lb._uuid), > - var __match = match ((r.l3dgw_ports.nth(0), lbvip.backend_ips != i"" or lb.options.get_bool_def(i"reject", false))) { > - (Some {var gw_port}, true) -> i"${match0} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})", > - _ -> match0 > - }, > - var actions = match (snat_for_lb(r.options, lb)) { > - SkipSNAT -> i"flags.skip_snat_for_lb = 1; next;", > - ForceSNAT -> i"flags.force_snat_for_lb = 1; next;", > - _ -> i"next;" > - }. > - > -/* The load balancer vip is also present in the NAT entries. > - * So add a high priority lflow to advance the packet > - * destined to the vip (and the vip port if defined) > - * in the S_ROUTER_IN_UNSNAT stage. > - * There seems to be an issue with ovs-vswitchd. When the new > - * connection packet destined for the lb vip is received, > - * it is dnat'ed in the S_ROUTER_IN_DNAT stage in the dnat > - * conntrack zone. For the next packet, if it goes through > - * unsnat stage, the conntrack flags are not set properly, and > - * it doesn't hit the established state flows in > - * S_ROUTER_IN_DNAT stage. 
*/ > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_UNSNAT(), > - .priority = 120, > - .__match = __match, > - .actions = i"next;", > - .stage_hint = stage_hint(lb._uuid), > - .io_port = None, > - .controller_meter = None) :- > - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), > - var proto = match (lb.protocol) { > - Some{proto} -> proto, > - _ -> i"tcp" > - }, > - var port_match = if (port != 0) { " && ${proto}.dst == ${port}" } else { "" }, > - var __match = ("${ip.ipX()} && ${ip.ipX()}.dst == ${ip} && ${proto}" ++ > - port_match).intern(), > - GWRouterLB(r, lb._uuid), > - nats_contain_vip(r.nats, ip). > - > -/* Add logical flows to UNDNAT the load balanced reverse traffic in > - * the router egress pipeline stage - S_ROUTER_OUT_UNDNAT if the logical > - * router has a gateway router port associated. > - */ > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_OUT_UNDNAT(), > - .priority = 120, > - .__match = __match, > - .actions = action, > - .stage_hint = stage_hint(lb._uuid), > - .io_port = None, > - .controller_meter = None) :- > - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb, .backends = backends), > - var proto = match (lb.protocol) { > - Some{proto} -> proto, > - _ -> i"tcp" > - }, > - var conds = backends.map(|b| { > - var port_match = if (b.port != 0) { > - " && ${proto}.src == ${b.port}" > - } else { > - "" > - }; > - "(${b.ip.ipX()}.src == ${b.ip}" ++ port_match ++ ")" > - }).join(" || "), > - conds != "", > - RouterLB(r, lb._uuid), > - Some{var gwport} = r.l3dgw_ports.nth(0), > - var __match = > - i"${ip.ipX()} && (${conds}) && " > - "outport == ${json_escape(gwport.name)} && " > - "is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})", > - var action = match (snat_for_lb(r.options, lb)) { > - SkipSNAT -> i"flags.skip_snat_for_lb = 1; ct_dnat;", > - ForceSNAT -> i"flags.force_snat_for_lb = 1; ct_dnat;", > - _ -> i"ct_dnat;" > - }. 
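[Aside for reviewers unfamiliar with the DDlog string interpolation being removed: the UNDNAT rule above builds its match by OR-joining one "(ipX.src == backend_ip [&& proto.src == port])" clause per load-balancer backend, adding the L4 source-port check only when the backend specifies a port. A rough Python equivalent, where the function name and (ip, port) tuple encoding are illustrative and not part of the patch:]

```python
def build_undnat_backend_match(backends, proto="tcp"):
    """OR-join one source-address clause per LB backend, adding an
    L4 source-port check only when the backend has a nonzero port."""
    conds = []
    for ip, port in backends:
        ip_x = "ip6" if ":" in ip else "ip4"  # crude address-family pick
        port_match = f" && {proto}.src == {port}" if port != 0 else ""
        conds.append(f"({ip_x}.src == {ip}{port_match})")
    return " || ".join(conds)
```

[As in the DDlog rule, an empty backend list yields an empty string, which the rule filters out with `conds != ""`.]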
> - > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_DNAT(), > - .priority = 130, > - .__match = __match, > - .actions = __action, > - .io_port = None, > - .controller_meter = r.copp.get(cOPP_EVENT_ELB()), > - .stage_hint = stage_hint(lb._uuid)) :- > - &LBVIP(.vip_key = lb_key, .lb = lb, .backend_ips = i""), > - not lb.options.get_bool_def(i"reject", false), > - LoadBalancerEmptyEvents(lb._uuid), > - Some {(var __match, var __action)} = build_empty_lb_event_flow(lb_key, lb), > - GWRouterLB(r, lb._uuid). > - > -/* Higher priority rules are added for load-balancing in DNAT > - * table. For every match (on a VIP[:port]), we add two flows > - * via add_router_lb_flow(). One flow is for specific matching > - * on ct.new with an action of "ct_lb($targets);". The other > - * flow is for ct.est with an action of "ct_dnat;". */ > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_DNAT(), > - .priority = priority, > - .__match = __match, > - .actions = actions, > - .io_port = None, > - .controller_meter = meter, > - .stage_hint = 0) :- > - LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends), > - var priority = if (lbvip.vip_port != 0) 120 else 110, > - (var actions0, var reject) = build_lb_vip_actions(lbvip, up_backends, s_ROUTER_OUT_SNAT(), ""), > - var actions1 = actions0.intern(), > - var match0 = "ct.new && " ++ > - get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, true, true, true), > - var match1 = match0.intern(), > - GWRouterLB(r, lb._uuid), > - var actions = match ((reject, snat_for_lb(r.options, lb))) { > - (false, SkipSNAT) -> i"flags.skip_snat_for_lb = 1; ${actions1}", > - (false, ForceSNAT) -> i"flags.force_snat_for_lb = 1; ${actions1}", > - _ -> actions1 > - }, > - var __match = match (r.l3dgw_ports.nth(0)) { > - Some{gw_port} -> i"${match1} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})", > - _ -> match1 > - }, > - var meter = if (reject) { > - r.copp.get(cOPP_REJECT()) > - } else { > - None > 
- }. > - > - > -/* Defaults based on MaxRtrInterval and MinRtrInterval from RFC 4861 section > - * 6.2.1 > - */ > -function nD_RA_MAX_INTERVAL_DEFAULT(): integer = 600 > -function nD_RA_MAX_INTERVAL_RANGE(): (integer, integer) { (4, 1800) } > - > -function nd_ra_min_interval_default(max: integer): integer = > -{ > - if (max >= 9) { max / 3 } else { max * 3 / 4 } > -} > - > -function nD_RA_MIN_INTERVAL_RANGE(max: integer): (integer, integer) = (3, ((max * 3) / 4)) > - > -function nD_MTU_DEFAULT(): integer = 0 > - > -function copy_ra_to_sb(port: RouterPort, address_mode: istring): Map<istring, istring> = > -{ > - var options = port.sb_options; > - > - options.insert(i"ipv6_ra_send_periodic", i"true"); > - options.insert(i"ipv6_ra_address_mode", address_mode); > - > - var max_interval = port.lrp.ipv6_ra_configs > - .get_int_def(i"max_interval", nD_RA_MAX_INTERVAL_DEFAULT()) > - .clamp(nD_RA_MAX_INTERVAL_RANGE()); > - options.insert(i"ipv6_ra_max_interval", i"${max_interval}"); > - > - var min_interval = port.lrp.ipv6_ra_configs > - .get_int_def(i"min_interval", nd_ra_min_interval_default(max_interval)) > - .clamp(nD_RA_MIN_INTERVAL_RANGE(max_interval)); > - options.insert(i"ipv6_ra_min_interval", i"${min_interval}"); > - > - var mtu = port.lrp.ipv6_ra_configs.get_int_def(i"mtu", nD_MTU_DEFAULT()); > - > - /* RFC 2460 requires the MTU for IPv6 to be at least 1280 */ > - if (mtu != 0 and mtu >= 1280) { > - options.insert(i"ipv6_ra_mtu", i"${mtu}") > - }; > - > - var prefixes = vec_empty(); > - for (addr in port.networks.ipv6_addrs) { > - if (addr.is_lla()) { > - options.insert(i"ipv6_ra_src_addr", i"${addr.addr}") > - } else { > - prefixes.push(addr.match_network()) > - } > - }; > - match (port.sb_options.get(i"ipv6_ra_pd_list")) { > - Some{value} -> prefixes.push(value.ival()), > - _ -> () > - }; > - options.insert(i"ipv6_ra_prefixes", prefixes.join(" ").intern()); > - > - match (port.lrp.ipv6_ra_configs.get(i"rdnss")) { > - Some{value} -> 
options.insert(i"ipv6_ra_rdnss", value), > - _ -> () > - }; > - > - match (port.lrp.ipv6_ra_configs.get(i"dnssl")) { > - Some{value} -> options.insert(i"ipv6_ra_dnssl", value), > - _ -> () > - }; > - > - options.insert(i"ipv6_ra_src_eth", i"${port.networks.ea}"); > - > - var prf = match (port.lrp.ipv6_ra_configs.get(i"router_preference")) { > - Some{prf} -> if (prf == i"HIGH" or prf == i"LOW") prf else i"MEDIUM", > - _ -> i"MEDIUM" > - }; > - options.insert(i"ipv6_ra_prf", prf); > - > - match (port.lrp.ipv6_ra_configs.get(i"route_info")) { > - Some{s} -> options.insert(i"ipv6_ra_route_info", s), > - _ -> () > - }; > - > - options > -} > - > -/* Logical router ingress table ND_RA_OPTIONS and ND_RA_RESPONSE: IPv6 Router > - * Adv (RA) options and response. */ > -// FIXME: do these rules apply to derived ports? > -for (&RouterPort[port@RouterPort{.lrp = lrp@&nb::Logical_Router_Port{.peer = None}, > - .router = router, > - .json_name = json_name, > - .networks = networks, > - .peer = PeerSwitch{}}] > - if (not networks.ipv6_addrs.is_empty())) > -{ > - Some{var address_mode} = lrp.ipv6_ra_configs.get(i"address_mode") in > - /* FIXME: we need a nicer way to write this */ > - true == > - if ((address_mode != i"slaac") and > - (address_mode != i"dhcpv6_stateful") and > - (address_mode != i"dhcpv6_stateless")) { > - warn("Invalid address mode [${address_mode}] defined"); > - false > - } else { true } in > - { > - if (lrp.ipv6_ra_configs.get_bool_def(i"send_periodic", false)) { > - RouterPortRAOptions(lrp._uuid, copy_ra_to_sb(port, address_mode)) > - }; > - > - (true, var prefix) = > - { > - var add_rs_response_flow = false; > - var prefix = ""; > - for (addr in networks.ipv6_addrs) { > - if (not addr.is_lla()) { > - prefix = prefix ++ ", prefix = ${addr.match_network()}"; > - add_rs_response_flow = true > - } else () > - }; > - (add_rs_response_flow, prefix) > - } in > - { > - var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && nd_rs" in > - /* As per RFC 2460, 
1280 is minimum IPv6 MTU. */ > - var mtu = match(lrp.ipv6_ra_configs.get(i"mtu")) { > - Some{mtu_s} -> { > - match (parse_dec_u64(mtu_s)) { > - None -> 0, > - Some{mtu} -> if (mtu >= 1280) mtu else 0 > - } > - }, > - None -> 0 > - } in > - var actions0 = > - "${rEGBIT_ND_RA_OPTS_RESULT()} = put_nd_ra_opts(" > - "addr_mode = ${json_escape(address_mode)}, " > - "slla = ${networks.ea}" ++ > - if (mtu > 0) { ", mtu = ${mtu}" } else { "" } in > - var router_preference = match (lrp.ipv6_ra_configs.get(i"router_preference")) { > - Some{prf} -> if (prf == i"MEDIUM") { "" } else { ", router_preference = \"${prf}\"" }, > - None -> "" > - } in > - var actions = actions0 ++ router_preference ++ prefix ++ "); next;" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ND_RA_OPTIONS(), > - .priority = 50, > - .__match = __match, > - .actions = actions.intern(), > - .io_port = None, > - .controller_meter = router.copp.get(cOPP_ND_RA_OPTS()), > - .stage_hint = stage_hint(lrp._uuid)); > - > - var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && " > - "nd_ra && ${rEGBIT_ND_RA_OPTS_RESULT()}" in > - var ip6_str = networks.ea.to_ipv6_lla().string_mapped() in > - var actions = i"eth.dst = eth.src; eth.src = ${networks.ea}; " > - "ip6.dst = ip6.src; ip6.src = ${ip6_str}; " > - "outport = inport; flags.loopback = 1; " > - "output;" in > - Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ND_RA_RESPONSE(), > - .priority = 50, > - .__match = __match, > - .actions = actions, > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > - > -/* Logical router ingress table ND_RA_OPTIONS, ND_RA_RESPONSE: RS responder, by > - * default goto next. 
(priority 0)*/ > -for (&Router(._uuid = lr_uuid)) > -{ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ND_RA_OPTIONS(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ND_RA_RESPONSE(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Proxy table that stores per-port routes. > - * These routes get converted into logical flows by > - * the following rule. > - */ > -relation Route(key: route_key, // matching criteria > - port: Intern<RouterPort>, // output port > - src_ip: v46_ip, // source IP address for output > - gateway: Option<v46_ip>) // next hop (unless being delivered) > - > -function build_route_match(key: route_key) : (string, bit<32>) = > -{ > - var ipX = key.ip_prefix.ipX(); > - > - /* The priority here is calculated to implement longest-prefix-match > - * routing. */ > - (var dir, var priority) = match (key.policy) { > - SrcIp -> ("src", key.plen * 2), > - DstIp -> ("dst", (key.plen * 2) + 1) > - }; > - > - var network = key.ip_prefix.network(key.plen); > - var __match = "${ipX}.${dir} == ${network}/${key.plen}"; > - > - (__match, priority) > -} > -for (Route(.port = port, > - .key = key, > - .src_ip = src_ip, > - .gateway = gateway)) > -{ > - var ipX = key.ip_prefix.ipX() in > - var xx = key.ip_prefix.xxreg() in > - /* IPv6 link-local addresses must be scoped to the local router port. 
*/ > - var inport_match = match (key.ip_prefix) { > - IPv6{prefix} -> if (prefix.is_lla()) { > - "inport == ${port.json_name} && " > - } else "", > - _ -> "" > - } in > - (var ip_match, var priority) = build_route_match(key) in > - var __match = inport_match ++ ip_match in > - var nexthop = match (gateway) { > - Some{gw} -> "${gw}", > - None -> "${ipX}.dst" > - } in > - var actions = > - i"${rEG_ECMP_GROUP_ID()} = 0; " > - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " > - "${xx}${rEG_SRC()} = ${src_ip}; " > - "eth.src = ${port.networks.ea}; " > - "outport = ${port.json_name}; " > - "flags.loopback = 1; " > - "next;" in > - { > - Flow(.logical_datapath = port.router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = priority as integer, > - .__match = __match.intern(), > - .actions = i"ip.ttl--; ${actions}", > - .stage_hint = stage_hint(port.lrp._uuid), > - .io_port = None, > - .controller_meter = None); > - > - if (port.has_bfd) { > - Flow(.logical_datapath = port.router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = priority as integer + 1, > - .__match = i"${__match} && udp.dst == 3784", > - .actions = actions, > - .stage_hint = stage_hint(port.lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -/* Install drop routes for all the static routes with nexthop = "discard" */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = priority as integer, > - .__match = ip_match.intern(), > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - r in RouterDiscardRoute_(.router = router, .key = key), > - (var ip_match, var priority) = build_route_match(r.key). > - > -/* Logical router ingress table IP_ROUTING & IP_ROUTING_ECMP: IP Routing. > - * > - * A packet that arrives at this table is an IP packet that should be > - * routed to the address in 'ip[46].dst'. 
> - * > - * For regular routes without ECMP, table IP_ROUTING sets outport to the > - * correct output port, eth.src to the output port's MAC address, and > - * '[xx]${rEG_NEXT_HOP()}' to the next-hop IP address (leaving 'ip[46].dst', the > - * packet’s final destination, unchanged), and advances to the next table. > - * > - * For ECMP routes, i.e. multiple routes with same policy and prefix, table > - * IP_ROUTING remembers ECMP group id and selects a member id, and advances > - * to table IP_ROUTING_ECMP, which sets outport, eth.src, and the appropriate > - * next-hop register for the selected ECMP member. > - * */ > -Route(key, port, src_ip, None) :- > - RouterPortNetworksIPv4Addr(.port = port, .addr = addr), > - var key = RouteKey{DstIp, IPv4{addr.addr}, addr.plen}, > - var src_ip = IPv4{addr.addr}. > - > -Route(key, port, src_ip, None) :- > - RouterPortNetworksIPv6Addr(.port = port, .addr = addr), > - var key = RouteKey{DstIp, IPv6{addr.addr}, addr.plen}, > - var src_ip = IPv6{addr.addr}. > - > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING_ECMP(), > - .priority = 150, > - .__match = i"${rEG_ECMP_GROUP_ID()} == 0", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - r in &Router(). > - > -/* Convert the static routes to flows. */ > -Route(key, dst.port, dst.src_ip, Some{dst.nexthop}) :- > - RouterStaticRoute(.router = router, .key = key, .dsts = dsts), > - dsts.size() == 1, > - Some{var dst} = dsts.nth(0). > - > -Route(key, dst.port, dst.src_ip, None) :- > - RouterStaticRouteEmptyNextHop(.router = router, .key = key, .dsts = dsts), > - dsts.size() == 1, > - Some{var dst} = dsts.nth(0). 
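[Aside: the priority arithmetic in build_route_match above implements longest-prefix-match: a longer prefix always gets a higher flow priority, and for equal prefix lengths a dst-ip route outranks a src-ip route by one. A minimal Python sketch of the same computation, with illustrative names only:]

```python
def route_priority(policy, plen):
    """Mirror of build_route_match's priority: 2*plen for src-ip
    routes, 2*plen + 1 for dst-ip routes, so longer prefixes win
    and dst-ip beats src-ip at equal prefix length."""
    return plen * 2 + (1 if policy == "dst-ip" else 0)
```

[E.g. a /25 src-ip route (priority 50) still beats a /24 dst-ip route (priority 49), preserving longest-prefix-match across both policies.]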
> - > -/* Create routes from peer to port's routable addresses */ > -Route(key, peer, src_ip, None) :- > - RouterPortRoutableAddresses(port, addresses), > - FirstHopRouterPortRoutableAddresses(port, peer_uuid), > - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), > - peer in &RouterPort(.lrp = peer_lrp, .networks = networks), > - Some{var src} = networks.ipv4_addrs.first(), > - var src_ip = IPv4{src.addr}, > - var addr = FlatMap(addresses), > - var ip4_addr = FlatMap(addr.ipv4_addrs), > - var key = RouteKey{DstIp, IPv4{ip4_addr.addr}, ip4_addr.plen}. > - > -/* This relation indicates that logical router port "port" has routable > - * addresses (i.e. DNAT and Load Balancer VIPs) and that logical router > - * port "peer" is reachable via a hop across a single logical switch. > - */ > -relation FirstHopRouterPortRoutableAddresses( > - port: uuid, > - peer: uuid) > -FirstHopRouterPortRoutableAddresses(port_uuid, peer_uuid) :- > - FirstHopLogicalRouter(r1, ls), > - FirstHopLogicalRouter(r2, ls), > - r1 != r2, > - LogicalRouterPort(port_uuid, r1), > - LogicalRouterPort(peer_uuid, r2), > - RouterPortRoutableAddresses(.rport = port_uuid), > - lrp in &nb::Logical_Router_Port(._uuid = port_uuid), > - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), > - LogicalSwitchRouterPort(_, lrp.name, ls), > - LogicalSwitchRouterPort(_, peer_lrp.name, ls). > - > -relation RouterPortRoutableAddresses( > - rport: uuid, > - addresses: Set<lport_addresses>) > -RouterPortRoutableAddresses(port.lrp._uuid, addresses) :- > - port in &RouterPort(.is_redirect = true), > - lbips in &LogicalRouterLBIPs(.lr = port.router._uuid), > - var addresses = get_nat_addresses(port, lbips, true).filter_map(|addrs| addrs.ival().extract_addresses()), > - addresses != set_empty(). > - > -/* Return a vector of pairs (1, set[0]), ... (n, set[n - 1]). 
*/ > -function numbered_vec(set: Set<'A>) : Vec<(bit<16>, 'A)> = { > - var vec = vec_with_capacity(set.size()); > - var i = 1; > - for (x in set) { > - vec.push((i, x)); > - i = i + 1 > - }; > - vec > -} > - > -relation EcmpGroup( > - group_id: bit<16>, > - router: Intern<Router>, > - key: route_key, > - dsts: Set<route_dst>, > - route_match: string, // This is build_route_match(key).0 > - route_priority: integer) // This is build_route_match(key).1 > - > -EcmpGroup(group_id, router, key, dsts, route_match, route_priority) :- > - r in RouterStaticRoute(.router = router, .key = key, .dsts = dsts), > - dsts.size() > 1, > - var groups = (router, key, dsts).group_by(()).to_set(), > - var group_id_and_group = FlatMap(numbered_vec(groups)), > - (var group_id, (var router, var key, var dsts)) = group_id_and_group, > - (var route_match, var route_priority0) = build_route_match(key), > - var route_priority = route_priority0 as integer. > - > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = route_priority, > - .__match = route_match.intern(), > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - EcmpGroup(group_id, router, key, dsts, route_match, route_priority), > - var all_member_ids = { > - var member_ids = vec_with_capacity(dsts.size()); > - for (i in range_vec(1, dsts.size()+1, 1)) { > - member_ids.push("${i}") > - }; > - member_ids.join(", ") > - }, > - var actions = > - i"ip.ttl--; " > - "flags.loopback = 1; " > - "${rEG_ECMP_GROUP_ID()} = ${group_id}; " /* XXX */ > - "${rEG_ECMP_MEMBER_ID()} = select(${all_member_ids});". 
> - > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING_ECMP(), > - .priority = 100, > - .__match = __match, > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - EcmpGroup(group_id, router, key, dsts, _, _), > - var member_id_and_dst = FlatMap(numbered_vec(dsts)), > - (var member_id, var dst) = member_id_and_dst, > - var xx = dst.nexthop.xxreg(), > - var __match = i"${rEG_ECMP_GROUP_ID()} == ${group_id} && " > - "${rEG_ECMP_MEMBER_ID()} == ${member_id}", > - var actions = i"${xx}${rEG_NEXT_HOP()} = ${dst.nexthop}; " > - "${xx}${rEG_SRC()} = ${dst.src_ip}; " > - "eth.src = ${dst.port.networks.ea}; " > - "outport = ${dst.port.json_name}; " > - "next;". > - > -/* If symmetric ECMP replies are enabled, then packets that arrive over > - * an ECMP route need to go through conntrack. > - */ > -relation EcmpSymmetricReply( > - router: Intern<Router>, > - dst: route_dst, > - route_match: string, > - tunkey: integer) > -EcmpSymmetricReply(router, dst, route_match, tunkey) :- > - EcmpGroup(.router = router, .dsts = dsts, .route_match = route_match), > - router.is_gateway, > - var dst = FlatMap(dsts), > - dst.ecmp_symmetric_reply, > - PortTunKeyAllocation(.port = dst.port.lrp._uuid, .tunkey = tunkey). > - > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_DEFRAG(), > - .priority = 100, > - .__match = __match, > - .actions = i"ct_next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - EcmpSymmetricReply(router, dst, route_match, _), > - var __match = i"inport == ${dst.port.json_name} && ${route_match}". > - > -/* And packets that go out over an ECMP route need conntrack. > - XXX this seems to exactly duplicate the above flow? */ > - > -/* Save src eth and inport in ct_label for packets that arrive over > - * an ECMP route. 
> - */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ECMP_STATEFUL(), > - .priority = 100, > - .__match = __match, > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - EcmpSymmetricReply(router, dst, route_match, tunkey), > - var __match = i"inport == ${dst.port.json_name} && ${route_match} && " > - "(ct.new && !ct.est)", > - var actions = i"ct_commit { ct_label.ecmp_reply_eth = eth.src;" > - " ct_label.ecmp_reply_port = ${tunkey};}; next;". > - > -/* Bypass ECMP selection if we already have ct_label information > - * for where to route the packet. > - */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = 300, > - .__match = i"${ecmp_reply} && ${route_match}", > - .actions = i"ip.ttl--; " > - "flags.loopback = 1; " > - "eth.src = ${dst.port.networks.ea}; " > - "${xx}reg1 = ${dst.src_ip}; " > - "outport = ${dst.port.json_name}; " > - "next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None), > -/* Egress reply traffic for symmetric ECMP routes skips router policies. */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = 65535, > - .__match = ecmp_reply, > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None), > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 200, > - .__match = ecmp_reply, > - .actions = i"eth.dst = ct_label.ecmp_reply_eth; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - EcmpSymmetricReply(router, dst, route_match, tunkey), > - var ecmp_reply = i"ct.rpl && ct_label.ecmp_reply_port == ${tunkey}", > - var xx = dst.nexthop.xxreg(). > - > - > -/* IP Multicast lookup. Here we set the output port, adjust TTL and advance > - * to next table (priority 500). 
> - */ > -/* Drop IPv6 multicast traffic that shouldn't be forwarded, > - * i.e., router solicitation and router advertisement. > - */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = 550, > - .__match = i"nd_rs || nd_ra", > - .actions = i"drop;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - router in &Router(). > - > -for (IgmpRouterMulticastGroup(address, rtr, ports)) { > - for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { > - var flood_static = not flood_ports.is_empty() in > - var mc_static = json_escape(mC_STATIC().0) in > - var static_act = { > - if (flood_static) { > - "clone { " > - "outport = ${mc_static}; " > - "ip.ttl--; " > - "next; " > - "}; " > - } else { > - "" > - } > - } in > - Some{var ip} = ip46_parse(address.ival()) in > - var ipX = ip.ipX() in > - Flow(.logical_datapath = rtr._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = 500, > - .__match = i"${ipX} && ${ipX}.dst == ${address} ", > - .actions = > - i"${static_act}outport = ${json_escape(address)}; " > - "ip.ttl--; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > - } > -} > - > -/* If needed, flood unregistered multicast on statically configured ports. > - * Priority 450. Otherwise drop any multicast traffic. > - */ > -for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { > - var mc_static = json_escape(mC_STATIC().0) in > - var flood_static = not flood_ports.is_empty() in > - var actions = if (flood_static) { > - i"clone { " > - "outport = ${mc_static}; " > - "ip.ttl--; " > - "next; " > - "};" > - } else { > - i"drop;" > - } in > - Flow(.logical_datapath = rtr._uuid, > - .stage = s_ROUTER_IN_IP_ROUTING(), > - .priority = 450, > - .__match = i"ip4.mcast || ip6.mcast", > - .actions = actions, > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Logical router ingress table POLICY: Policy. 
> - * > - * A packet that arrives at this table is an IP packet that should be > - * permitted/denied/rerouted to the address in the rule's nexthop. > - * This table sets outport to the correct out_port, > - * eth.src to the output port's MAC address, > - * the appropriate register to the next-hop IP address (leaving > - * 'ip[46].dst', the packet’s final destination, unchanged), and > - * advances to the next table for ARP/ND resolution. */ > -for (&Router(._uuid = lr_uuid)) { > - /* This is a catch-all rule. It has the lowest priority (0) > - * does a match-all("1") and pass-through (next) */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = 0, > - .__match = i"1", > - .actions = i"${rEG_ECMP_GROUP_ID()} = 0; next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_POLICY_ECMP(), > - .priority = 150, > - .__match = i"${rEG_ECMP_GROUP_ID()} == 0", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Convert routing policies to flows. 
*/ > -function pkt_mark_policy(options: Map<istring,istring>): string { > - var pkt_mark = options.get(i"pkt_mark").and_then(parse_dec_u64).unwrap_or(0); > - if (pkt_mark > 0 and pkt_mark < (1 << 32)) { > - "pkt.mark = ${pkt_mark}; " > - } else { > - "" > - } > -} > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = policy.priority, > - .__match = policy.__match, > - .actions = actions.intern(), > - .stage_hint = stage_hint(policy._uuid), > - .io_port = None, > - .controller_meter = None) :- > - r in &Router(), > - var policy_uuid = FlatMap(r.policies), > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > - policy.action == i"reroute", > - Some{var nexthop_s} = match (policy.nexthops.size()) { > - 0 -> policy.nexthop, > - 1 -> policy.nexthops.nth(0), > - _ -> None /* >1 nexthops handled separately as ECMP. */ > - }, > - Some{var nexthop} = ip46_parse(nexthop_s.ival()), > - out_port in &RouterPort(.router = r), > - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), > - /* > - None: > - VLOG_WARN_RL(&rl, "lrp_addr not found for routing policy " > - " priority %"PRId64" nexthop %s", > - rule->priority, rule->nexthop); > - */ > - var xx = src_ip.xxreg(), > - var actions = (pkt_mark_policy(policy.options) ++ > - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " > - "${xx}${rEG_SRC()} = ${src_ip}; " > - "eth.src = ${out_port.networks.ea}; " > - "outport = ${out_port.json_name}; " > - "flags.loopback = 1; " > - "${rEG_ECMP_GROUP_ID()} = 0; " > - "next;"). > - > -/* Returns true if the addresses in 'addrs' are all IPv4 or all IPv6, > - false if they are a mix. 
*/ > -function all_same_addr_family(addrs: Set<istring>): bool { > - var addr_families = set_empty(); > - for (a in addrs) { > - addr_families.insert(a.contains(".")) > - }; > - addr_families.size() <= 1 > -} > - > -relation EcmpReroutePolicy( > - r: Intern<Router>, > - policy: nb::Logical_Router_Policy, > - ecmp_group_id: usize > -) > -EcmpReroutePolicy(r, policy, ecmp_group_id) :- > - r in &Router(), > - var policy_uuid = FlatMap(r.policies), > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > - policy.action == i"reroute", > - policy.nexthops.size() > 1, > - var policies = policy.group_by(r).to_vec().map(|x| (x.nexthop, x)).sort_imm().map(|x| x.1), > - var ecmp_group_ids = range_vec(1, policies.len() + 1, 1), > - var numbered_policies = policies.zip(ecmp_group_ids), > - var pair = FlatMap(numbered_policies), > - (var policy, var ecmp_group_id) = pair, > - all_same_addr_family(policy.nexthops). > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_POLICY_ECMP(), > - .priority = 100, > - .__match = __match, > - .actions = actions.intern(), > - .stage_hint = stage_hint(policy._uuid), > - .io_port = None, > - .controller_meter = None) :- > - EcmpReroutePolicy(r, policy, ecmp_group_id), > - var member_ids = range_vec(1, policy.nexthops.size() + 1, 1), > - var numbered_nexthops = policy.nexthops.map(ival).to_vec().zip(member_ids), > - var pair = FlatMap(numbered_nexthops), > - (var nexthop_s, var member_id) = pair, > - Some{var nexthop} = ip46_parse(nexthop_s), > - out_port in &RouterPort(.router = r), > - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), // or warn > - var xx = src_ip.xxreg(), > - var actions = (pkt_mark_policy(policy.options) ++ > - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " > - "${xx}${rEG_SRC()} = ${src_ip}; " > - "eth.src = ${out_port.networks.ea}; " > - "outport = ${out_port.json_name}; " > - "flags.loopback = 1; " > - "next;"), > - var __match = i"${rEG_ECMP_GROUP_ID()} == ${ecmp_group_id} && " > - 
"${rEG_ECMP_MEMBER_ID()} == ${member_id}". > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = policy.priority, > - .__match = policy.__match, > - .actions = actions, > - .stage_hint = stage_hint(policy._uuid), > - .io_port = None, > - .controller_meter = None) :- > - EcmpReroutePolicy(r, policy, ecmp_group_id), > - var member_ids = { > - var n = policy.nexthops.size(); > - var member_ids = vec_with_capacity(n); > - for (i in range_vec(1, n + 1, 1)) { > - member_ids.push("${i}") > - }; > - member_ids.join(", ") > - }, > - var actions = i"${rEG_ECMP_GROUP_ID()} = ${ecmp_group_id}; " > - "${rEG_ECMP_MEMBER_ID()} = select(${member_ids});". > - > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = policy.priority, > - .__match = policy.__match, > - .actions = i"drop;", > - .stage_hint = stage_hint(policy._uuid), > - .io_port = None, > - .controller_meter = None) :- > - r in &Router(), > - var policy_uuid = FlatMap(r.policies), > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > - policy.action == i"drop". > -Flow(.logical_datapath = r._uuid, > - .stage = s_ROUTER_IN_POLICY(), > - .priority = policy.priority, > - .__match = policy.__match, > - .actions = (pkt_mark_policy(policy.options) ++ "${rEG_ECMP_GROUP_ID()} = 0; next;").intern(), > - .stage_hint = stage_hint(policy._uuid), > - .io_port = None, > - .controller_meter = None) :- > - r in &Router(), > - var policy_uuid = FlatMap(r.policies), > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > - policy.action == i"allow". > - > - > -/* XXX destination unreachable */ > - > -/* Local router ingress table ARP_RESOLVE: ARP Resolution. > - * > - * Multicast packets already have the outport set so just advance to next > - * table (priority 500). 
> - */ > -for (&Router(._uuid = lr_uuid)) { > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 500, > - .__match = i"ip4.mcast || ip6.mcast", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Local router ingress table ARP_RESOLVE: ARP Resolution. > - * > - * Any packet that reaches this table is an IP packet whose next-hop IP > - * address is in the next-hop register. (ip4.dst is the final destination.) This table > - * resolves the IP address in the next-hop register into an output port in outport and an > - * Ethernet address in eth.dst. */ > -// FIXME: does this apply to redirect ports? > -for (rp in &RouterPort(.peer = PeerRouter{peer_port, _}, > - .router = router, > - .networks = networks)) > -{ > - for (&RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = peer_port}, > - .json_name = peer_json_name, > - .router = peer_router)) > - { > - /* This is a logical router port. If next-hop IP address in > - * the next-hop register matches IP address of this router port, then > - * the packet is intended to eventually be sent to this > - * logical port. Set the destination mac address using this > - * port's mac address. > - * > - * The packet is still in peer's logical pipeline. So the match > - * should be on peer's outport. 
*/ > - if (not networks.ipv4_addrs.is_empty()) { > - var __match = "outport == ${peer_json_name} && " > - "${rEG_NEXT_HOP()} == " ++ > - format_v4_networks(networks, false) in > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = i"eth.dst = ${networks.ea}; next;", > - .stage_hint = stage_hint(rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - if (not networks.ipv6_addrs.is_empty()) { > - var __match = "outport == ${peer_json_name} && " > - "xx${rEG_NEXT_HOP()} == " ++ > - format_v6_networks(networks) in > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = __match.intern(), > - .actions = i"eth.dst = ${networks.ea}; next;", > - .stage_hint = stage_hint(rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -/* Packet is on a non gateway chassis and > - * has an unresolved ARP on a network behind gateway > - * chassis attached router port. Since, redirect type > - * is "bridged", instead of calling "get_arp" > - * on this node, we will redirect the packet to gateway > - * chassis, by setting destination mac router port mac.*/ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 50, > - .__match = i"outport == ${rp.json_name} && " > - "!is_chassis_resident(${json_escape(chassis_redirect_name(l3dgw_port.name))})", > - .actions = i"eth.dst = ${rp.networks.ea}; next;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - rp in &RouterPort(.lrp = lrp, .router = router), > - Some{var l3dgw_port} = router.l3dgw_ports.nth(0), > - Some{i"bridged"} == lrp.options.get(i"redirect-type"). > - > - > -/* Drop IP traffic destined to router owned IPs. 
Part of it is dropped > - * in stage "lr_in_ip_input" but traffic that could have been unSNATed > - * but didn't match any existing session might still end up here. > - * > - * Priority 1. > - */ > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 1, > - .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > - .actions = i"drop;", > - .stage_hint = stage_hint(lrp_uuid), > - .io_port = None, > - .controller_meter = None) :- > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > - .router = &Router{.snat_ips = snat_ips, > - ._uuid = lr_uuid}, > - .networks = networks), > - var addr = FlatMap(networks.ipv4_addrs), > - snat_ips.contains_key(IPv4{addr.addr}), > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 1, > - .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > - .actions = i"drop;", > - .stage_hint = stage_hint(lrp_uuid), > - .io_port = None, > - .controller_meter = None) :- > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > - .router = &Router{.snat_ips = snat_ips, > - ._uuid = lr_uuid}, > - .networks = networks), > - var addr = FlatMap(networks.ipv6_addrs), > - snat_ips.contains_key(IPv6{addr.addr}), > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). 
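The two drop rules above collect, per router port, every IPv4 (resp. IPv6) address that also appears as an SNAT external IP, and emit a single flow whose match is a set expression like `ip4.dst == {a, b}`. A minimal Python sketch of that match construction (the helper name is hypothetical, not part of the DDlog sources):

```python
def build_drop_match(addrs, family="ip4"):
    """Group router-owned addresses into one logical-flow match,
    mirroring the '<family>.dst == {a, b}' set syntax used by the
    drop rules above. Returns None when there is nothing to drop."""
    if not addrs:
        return None
    return f"{family}.dst == {{{', '.join(addrs)}}}"
```

This mirrors what `group_by((lr_uuid, lrp_uuid)).to_vec()` followed by `join(", ")` achieves in the rules: one flow per port rather than one flow per address.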
> - > -/* Create ARP resolution flows for NAT and LB addresses for first hop > - * logical routers > - */ > -Flow(.logical_datapath = peer.router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = ("outport == ${peer.json_name} && " ++ rEG_NEXT_HOP() ++ " == {${ips}}").intern(), > - .actions = i"eth.dst = ${addr.ea}; next;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - RouterPortRoutableAddresses(port, addresses), > - FirstHopRouterPortRoutableAddresses(port, peer_uuid), > - peer in &RouterPort(.lrp = lrp), > - lrp._uuid == peer_uuid, > - not peer.router.options.get_bool_def(i"dynamic_neigh_routers", false), > - var addr = FlatMap(addresses), > - var ips = addr.ipv4_addrs.map(|a| a.addr.to_string()).join(", "). > - > -/* This is a logical switch port that backs a VM or a container. > - * Extract its addresses. For each of the address, go through all > - * the router ports attached to the switch (to which this port > - * connects) and if the address in question is reachable from the > - * router port, add an ARP/ND entry in that router's pipeline. 
*/ > -for (SwitchPortIPv4Address( > - .port = &SwitchPort{.lsp = lsp, .sw = sw}, > - .ea = ea, > - .addr = addr) > - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) > -{ > - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, > - .peer = Some{peer@&RouterPort{.router = peer_router}})) > - { > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{addr.addr}) in > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer.json_name} && " > - "${rEG_NEXT_HOP()} == ${addr.addr}", > - .actions = i"eth.dst = ${ea}; next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > -} > - > -for (SwitchPortIPv6Address( > - .port = &SwitchPort{.lsp = lsp, .sw = sw}, > - .ea = ea, > - .addr = addr) > - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) > -{ > - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, > - .peer = Some{peer@&RouterPort{.router = peer_router}})) > - { > - Some{_} = find_lrp_member_ip(peer.networks, IPv6{addr.addr}) in > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer.json_name} && " > - "xx${rEG_NEXT_HOP()} == ${addr.addr}", > - .actions = i"eth.dst = ${ea}; next;", > - .stage_hint = stage_hint(lsp._uuid), > - .io_port = None, > - .controller_meter = None) > - } > -} > - > -/* True if 's' is an empty set or a set that contains just an empty string, > - * false otherwise. > - * > - * This is meant for sets of 0 or 1 elements, like the OVSDB integration > - * with DDlog uses. */ > -function is_empty_set_or_string(s: Option<istring>): bool = { > - match (s) { > - None -> true, > - Some{s} -> s == i"" > - } > -} > - > -/* This is a virtual port. Add ARP replies for the virtual ip with > - * the mac of the present active virtual parent. 
> - * If the logical port doesn't have virtual parent set in > - * Port_Binding table, then add the flow to set eth.dst to > - * 00:00:00:00:00:00 and advance to next table so that ARP is > - * resolved by router pipeline using the arp{} action. > - * The MAC_Binding entry for the virtual ip might be invalid. */ > -Flow(.logical_datapath = peer.router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer.json_name} && " > - "${rEG_NEXT_HOP()} == ${virtual_ip}", > - .actions = i"eth.dst = 00:00:00:00:00:00; next;", > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), > - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), > - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), > - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), > - pb in sb::Port_Binding(.logical_port = sp.lsp.name), > - is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None, > - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). 
> -Flow(.logical_datapath = peer.router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer.json_name} && " > - "${rEG_NEXT_HOP()} == ${virtual_ip}", > - .actions = i"eth.dst = ${address.ea}; next;", > - .stage_hint = stage_hint(sp.lsp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), > - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), > - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), > - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), > - pb in sb::Port_Binding(.logical_port = sp.lsp.name), > - not (is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None), > - Some{var virtual_parent} = pb.virtual_parent, > - vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = virtual_parent}), > - var address = FlatMap(vp.static_addresses), > - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). > - > -/* This is a logical switch port that connects to a router. */ > - > -/* The peer of this switch port is the router port for which > - * we need to add logical flows such that it can resolve > - * ARP entries for all the other router ports connected to > - * the switch in question. */ > -for (&SwitchPort(.lsp = lsp1, > - .peer = Some{peer1@&RouterPort{.router = peer_router}}, > - .sw = sw) > - if lsp1.is_enabled() and > - not peer_router.options.get_bool_def(i"dynamic_neigh_routers", false)) > -{ > - for (&SwitchPort(.lsp = lsp2, .peer = Some{peer2}, > - .sw = &Switch{._uuid = sw._uuid}) > - /* Skip the router port under consideration. 
*/ > - if peer2.lrp._uuid != peer1.lrp._uuid) > - { > - if (not peer2.networks.ipv4_addrs.is_empty()) { > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer1.json_name} && " > - "${rEG_NEXT_HOP()} == ${format_v4_networks(peer2.networks, false)}", > - .actions = i"eth.dst = ${peer2.networks.ea}; next;", > - .stage_hint = stage_hint(lsp1._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - if (not peer2.networks.ipv6_addrs.is_empty()) { > - Flow(.logical_datapath = peer_router._uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 100, > - .__match = i"outport == ${peer1.json_name} && " > - "xx${rEG_NEXT_HOP()} == ${format_v6_networks(peer2.networks)}", > - .actions = i"eth.dst = ${peer2.networks.ea}; next;", > - .stage_hint = stage_hint(lsp1._uuid), > - .io_port = None, > - .controller_meter = None) > - } > - } > -} > - > -for (&Router(._uuid = lr_uuid)) > -{ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 0, > - .__match = i"ip4", > - .actions = i"get_arp(outport, ${rEG_NEXT_HOP()}); next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None); > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > - .priority = 0, > - .__match = i"ip6", > - .actions = i"get_nd(outport, xx${rEG_NEXT_HOP()}); next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Local router ingress table CHK_PKT_LEN: Check packet length. > - * > - * Any IPv4 or IPv6 packet with outport set to a router port that has > - * gateway_mtu > 0 configured, check the packet length and store the result in > - * the 'REGBIT_PKT_LARGER' register bit. > - * > - * Local router ingress table LARGER_PKTS: Handle larger packets. 
> - * > - * Any IPv4 or IPv6 packet with outport set to a router port that has > - * gateway_mtu > 0 configured and the 'REGBIT_PKT_LARGER' register bit is set, > - * generate an ICMPv4/ICMPv6 packet with type 3/2 (Destination > - * Unreachable/Packet Too Big) and code 4/0 (Fragmentation needed). > - */ > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_CHK_PKT_LEN(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - &Router(._uuid = lr_uuid). > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LARGER_PKTS(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) :- > - &Router(._uuid = lr_uuid). > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_CHK_PKT_LEN(), > - .priority = 50, > - .__match = i"outport == ${gw_mtu_rp.json_name}", > - .actions = i"${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " > - "next;", > - .stage_hint = stage_hint(gw_mtu_rp.lrp._uuid), > - .io_port = None, > - .controller_meter = None) :- > - r in &Router(._uuid = lr_uuid), > - gw_mtu_rp in &RouterPort(.router = r), > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > - gw_mtu > 0, > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(). > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LARGER_PKTS(), > - .priority = 150, > - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip4 && " > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > - .actions = i"icmp4_error {" > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > - "${rEGBIT_PKT_LARGER()} = 0; " > - "eth.dst = ${rp.networks.ea}; " > - "ip4.dst = ip4.src; " > - "ip4.src = ${first_ipv4.addr}; " > - "ip.ttl = 255; " > - "icmp4.type = 3; /* Destination Unreachable. */ " > - "icmp4.code = 4; /* Frag Needed and DF was Set. 
*/ " > - /* Set icmp4.frag_mtu to gw_mtu */ > - "icmp4.frag_mtu = ${gw_mtu}; " > - "next(pipeline=ingress, table=0); " > - "};", > - .io_port = None, > - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > - r in &Router(._uuid = lr_uuid), > - gw_mtu_rp in &RouterPort(.router = r), > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > - gw_mtu > 0, > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > - rp in &RouterPort(.router = r), > - rp.lrp != gw_mtu_rp.lrp, > - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). > - > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 150, > - .__match = i"inport == ${rp.json_name} && ip4 && " > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > - .actions = i"icmp4_error {" > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > - "${rEGBIT_PKT_LARGER()} = 0; " > - "eth.dst = ${rp.networks.ea}; " > - "ip4.dst = ip4.src; " > - "ip4.src = ${first_ipv4.addr}; " > - "ip.ttl = 255; " > - "icmp4.type = 3; /* Destination Unreachable. */ " > - "icmp4.code = 4; /* Frag Needed and DF was Set. */ " > - /* Set icmp4.frag_mtu to gw_mtu */ > - "icmp4.frag_mtu = ${gw_mtu}; " > - "next(pipeline=ingress, table=0); " > - "};", > - .io_port = None, > - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > - r in &Router(._uuid = lr_uuid), > - gw_mtu_rp in &RouterPort(.router = r), > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > - gw_mtu > 0, > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > - rp in &RouterPort(.router = r), > - rp.lrp == gw_mtu_rp.lrp, > - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). 
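The CHK_PKT_LEN rules above pass `check_pkt_larger()` a threshold of `gw_mtu + VLAN_ETH_HEADER_LEN()`, i.e. the configured L3 gateway MTU plus L2 framing overhead, and only when `gateway_mtu > 0` is set on the port. A sketch of that threshold computation, assuming the OVS convention of an 18-byte VLAN Ethernet header (14-byte Ethernet header plus 4-byte 802.1Q tag):

```python
VLAN_ETH_HEADER_LEN = 18  # 14-byte Ethernet header + 4-byte 802.1Q tag

def chk_pkt_len_threshold(gw_mtu):
    """check_pkt_larger() threshold used by the rules above: gateway MTU
    plus L2 overhead. Returns None when gateway_mtu is unset (<= 0),
    matching the 'gw_mtu > 0' guard, so no check flow is generated."""
    return gw_mtu + VLAN_ETH_HEADER_LEN if gw_mtu > 0 else None
```

Packets exceeding the threshold get `REGBIT_PKT_LARGER` set, and the LARGER_PKTS rules then answer with ICMPv4 type 3/code 4 (or ICMPv6 type 2/code 0) carrying `frag_mtu = gw_mtu`.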
> - > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_LARGER_PKTS(), > - .priority = 150, > - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip6 && " > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > - .actions = i"icmp6_error {" > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > - "${rEGBIT_PKT_LARGER()} = 0; " > - "eth.dst = ${rp.networks.ea}; " > - "ip6.dst = ip6.src; " > - "ip6.src = ${first_ipv6.addr}; " > - "ip.ttl = 255; " > - "icmp6.type = 2; /* Packet Too Big. */ " > - "icmp6.code = 0; " > - /* Set icmp6.frag_mtu to gw_mtu */ > - "icmp6.frag_mtu = ${gw_mtu}; " > - "next(pipeline=ingress, table=0); " > - "};", > - .io_port = None, > - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > - r in &Router(._uuid = lr_uuid), > - gw_mtu_rp in &RouterPort(.router = r), > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > - gw_mtu > 0, > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > - rp in &RouterPort(.router = r), > - rp.lrp != gw_mtu_rp.lrp, > - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). > - > -Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 150, > - .__match = i"inport == ${rp.json_name} && ip6 && " > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > - .actions = i"icmp6_error {" > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > - "${rEGBIT_PKT_LARGER()} = 0; " > - "eth.dst = ${rp.networks.ea}; " > - "ip6.dst = ip6.src; " > - "ip6.src = ${first_ipv6.addr}; " > - "ip.ttl = 255; " > - "icmp6.type = 2; /* Packet Too Big. 
*/ " > - "icmp6.code = 0; " > - /* Set icmp6.frag_mtu to gw_mtu */ > - "icmp6.frag_mtu = ${gw_mtu}; " > - "next(pipeline=ingress, table=0); " > - "};", > - .io_port = None, > - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > - r in &Router(._uuid = lr_uuid), > - gw_mtu_rp in &RouterPort(.router = r), > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > - gw_mtu > 0, > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > - rp in &RouterPort(.router = r), > - rp.lrp == gw_mtu_rp.lrp, > - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). > - > -/* Logical router ingress table GW_REDIRECT: Gateway redirect. > - * > - * For traffic with outport equal to the l3dgw_port > - * on a distributed router, this table redirects a subset > - * of the traffic to the l3redirect_port which represents > - * the central instance of the l3dgw_port. > - */ > -for (&Router(._uuid = lr_uuid)) > -{ > - /* For traffic with outport == l3dgw_port, if the > - * packet did not match any higher priority redirect > - * rule, then the traffic is redirected to the central > - * instance of the l3dgw_port. */ > - for (DistributedGatewayPort(lrp, lr_uuid, _)) { > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_GW_REDIRECT(), > - .priority = 50, > - .__match = i"outport == ${json_escape(lrp.name)}", > - .actions = i"outport = ${json_escape(chassis_redirect_name(lrp.name))}; next;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - > - /* Packets are allowed by default. */ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_GW_REDIRECT(), > - .priority = 0, > - .__match = i"1", > - .actions = i"next;", > - .stage_hint = 0, > - .io_port = None, > - .controller_meter = None) > -} > - > -/* Local router ingress table ARP_REQUEST: ARP request. 
> - * > - * In the common case where the Ethernet destination has been resolved, > - * this table outputs the packet (priority 0). Otherwise, it composes > - * and sends an ARP/IPv6 NA request (priority 100). */ > -Flow(.logical_datapath = router._uuid, > - .stage = s_ROUTER_IN_ARP_REQUEST(), > - .priority = 200, > - .__match = __match, > - .actions = actions, > - .io_port = None, > - .controller_meter = router.copp.get(cOPP_ND_NS_RESOLVE()), > - .stage_hint = 0) :- > - rsr in RouterStaticRoute(.router = router), > - var dst = FlatMap(rsr.dsts), > - IPv6{var gw_ip6} = dst.nexthop, > - var __match = i"eth.dst == 00:00:00:00:00:00 && " > - "ip6 && xx${rEG_NEXT_HOP()} == ${dst.nexthop}", > - var sn_addr = gw_ip6.solicited_node(), > - var eth_dst = sn_addr.multicast_to_ethernet(), > - var sn_addr_s = sn_addr.string_mapped(), > - var actions = i"nd_ns { " > - "eth.dst = ${eth_dst}; " > - "ip6.dst = ${sn_addr_s}; " > - "nd.target = ${dst.nexthop}; " > - "output; " > - "};". > - > -for (&Router(._uuid = lr_uuid, .copp = copp)) > -{ > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_REQUEST(), > - .priority = 100, > - .__match = i"eth.dst == 00:00:00:00:00:00 && ip4", > - .actions = i"arp { " > - "eth.dst = ff:ff:ff:ff:ff:ff; " > - "arp.spa = ${rEG_SRC()}; " > - "arp.tpa = ${rEG_NEXT_HOP()}; " > - "arp.op = 1; " /* ARP request */ > - "output; " > - "};", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ARP_RESOLVE()), > - .stage_hint = 0); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_REQUEST(), > - .priority = 100, > - .__match = i"eth.dst == 00:00:00:00:00:00 && ip6", > - .actions = i"nd_ns { " > - "nd.target = xx${rEG_NEXT_HOP()}; " > - "output; " > - "};", > - .io_port = None, > - .controller_meter = copp.get(cOPP_ND_NS_RESOLVE()), > - .stage_hint = 0); > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_ARP_REQUEST(), > - .priority = 0, > - .__match = i"1", > - .actions = i"output;", > - .stage_hint = 
0, > - .io_port = None, > - .controller_meter = None) > -} > - > - > -/* Logical router egress table DELIVERY: Delivery (priority 100). > - * > - * Priority 100 rules deliver packets to enabled logical ports. */ > -for (&RouterPort(.lrp = lrp, > - .json_name = json_name, > - .networks = lrp_networks, > - .router = &Router{._uuid = lr_uuid, .mcast_cfg = mcast_cfg}) > - /* Drop packets to disabled logical ports (since logical flow > - * tables are default-drop). */ > - if lrp.is_enabled()) > -{ > - /* If multicast relay is enabled then also adjust source mac for IP > - * multicast traffic. > - */ > - if (mcast_cfg.relay) { > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_DELIVERY(), > - .priority = 110, > - .__match = i"(ip4.mcast || ip6.mcast) && " > - "outport == ${json_name}", > - .actions = i"eth.src = ${lrp_networks.ea}; output;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > - }; > - /* No egress packets should be processed in the context of > - * a chassisredirect port. The chassisredirect port should > - * be replaced by the l3dgw port in the local output > - * pipeline stage before egress processing. */ > - > - Flow(.logical_datapath = lr_uuid, > - .stage = s_ROUTER_OUT_DELIVERY(), > - .priority = 100, > - .__match = i"outport == ${json_name}", > - .actions = i"output;", > - .stage_hint = stage_hint(lrp._uuid), > - .io_port = None, > - .controller_meter = None) > -} > - > -/* > - * Datapath tunnel key allocation: > - * > - * Allocates a globally unique tunnel id in the range 1...2**24-1 for > - * each Logical_Switch and Logical_Router. 
> - */ > - > -function oVN_MAX_DP_KEY(): integer { (64'd1 << 24) - 1 } > -function oVN_MAX_DP_GLOBAL_NUM(): integer { (64'd1 << 16) - 1 } > -function oVN_MIN_DP_KEY_LOCAL(): integer { 1 } > -function oVN_MAX_DP_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } > -function oVN_MIN_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY_LOCAL() + 1 } > -function oVN_MAX_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY() } > - > -function oVN_MAX_DP_VXLAN_KEY(): integer { (64'd1 << 12) - 1 } > -function oVN_MAX_DP_VXLAN_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } > - > -/* If any chassis uses VXLAN encapsulation, then the entire deployment is in VXLAN mode. */ > -relation IsVxlanMode0() > -IsVxlanMode0() :- > - sb::Chassis(.encaps = encaps), > - var encap_uuid = FlatMap(encaps), > - sb::Encap(._uuid = encap_uuid, .__type = i"vxlan"). > - > -relation IsVxlanMode[bool] > -IsVxlanMode[true] :- > - IsVxlanMode0(). > -IsVxlanMode[false] :- > - Unit(), > - not IsVxlanMode0(). > - > -/* The maximum datapath tunnel key that may be used. */ > -relation OvnMaxDpKeyLocal[integer] > -/* OVN_MAX_DP_GLOBAL_NUM doesn't apply for vxlan mode. */ > -OvnMaxDpKeyLocal[oVN_MAX_DP_VXLAN_KEY()] :- IsVxlanMode[true]. > -OvnMaxDpKeyLocal[oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM()] :- IsVxlanMode[false]. > - > -relation OvnPortKeyBits[bit<32>] > -OvnPortKeyBits[12] :- IsVxlanMode[true]. > -OvnPortKeyBits[16] :- IsVxlanMode[false]. > - > -relation OvnDpKeyBits[bit<32>] > -OvnDpKeyBits[12] :- IsVxlanMode[true]. > -OvnDpKeyBits[24] :- IsVxlanMode[false]. > - > -function get_dp_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> { > - map.get(key) > - .and_then(parse_dec_u64) > - .and_then(|x| if (x > 0 and x < (1<<bits)) { > - Some{x} > - } else { > - None > - }) > -} > - > -// Tunnel keys requested by datapaths. 
> -relation RequestedTunKey(datapath: uuid, tunkey: integer) > -RequestedTunKey(uuid, tunkey) :- > - OvnDpKeyBits[bits], > - ls in &nb::Logical_Switch(._uuid = uuid), > - Some{var tunkey} = get_dp_tunkey(ls.other_config, i"requested-tnl-key", bits). > -RequestedTunKey(uuid, tunkey) :- > - OvnDpKeyBits[bits], > - lr in nb::Logical_Router(._uuid = uuid), > - Some{var tunkey} = get_dp_tunkey(lr.options, i"requested-tnl-key", bits). > -Warning[message] :- > - RequestedTunKey(datapath, tunkey), > - var count = datapath.group_by((tunkey)).size(), > - count > 1, > - var message = "${count} logical switches or routers request " > - "datapath tunnel key ${tunkey}". > - > -// Assign tunnel keys: > -// - First priority to requested tunnel keys. > -// - Second priority to already assigned tunnel keys. > -// In either case, make an arbitrary choice in case of conflicts within a > -// priority level. > -relation AssignedTunKey(datapath: uuid, tunkey: integer) > -AssignedTunKey(datapath, tunkey) :- > - RequestedTunKey(datapath, tunkey), > - var datapath = datapath.group_by(tunkey).first(). > -AssignedTunKey(datapath, tunkey) :- > - sb::Datapath_Binding(._uuid = datapath, .tunnel_key = tunkey), > - not RequestedTunKey(_, tunkey), > - not RequestedTunKey(datapath, _), > - var datapath = datapath.group_by(tunkey).first(). > - > -// all tunnel keys already in use in the Realized table > -relation AllocatedTunKeys(keys: Set<integer>) > -AllocatedTunKeys(keys) :- > - AssignedTunKey(.tunkey = tunkey), > - var keys = tunkey.group_by(()).to_set(). > - > -// Datapath_Binding's not yet in the Realized table > -relation NotYetAllocatedTunKeys(datapaths: Vec<uuid>) > - > -NotYetAllocatedTunKeys(datapaths) :- > - OutProxy_Datapath_Binding(._uuid = datapath), > - not AssignedTunKey(datapath, _), > - var datapaths = datapath.group_by(()).to_vec(). 
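The rules above and below lean on an `allocate()` helper: given the set of keys already in use and the entities still needing one, pair each entity with a free key in the allowed range. This Python sketch captures the behavior implied by the call sites (lowest-free-key order and drop-on-exhaustion are assumptions; the actual Rust implementation is not reproduced here):

```python
def allocate(allocated, unallocated, min_key, max_key):
    """Pair each not-yet-allocated entity with the lowest free key in
    [min_key, max_key], skipping keys already in 'allocated'. Entities
    are dropped once the key space is exhausted."""
    out, used, key = [], set(allocated), min_key
    for entity in unallocated:
        while key <= max_key and key in used:
            key += 1
        if key > max_key:
            break  # key space exhausted; remaining entities get no key
        out.append((entity, key))
        used.add(key)
    return out
```

For datapaths the range is `1 .. OvnMaxDpKeyLocal`, i.e. `2**24 - 1 - 2**16 + 1` normally and `2**12 - 1` in VXLAN mode, per the relations above.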
> - > -// Perform the allocation > -relation TunKeyAllocation(datapath: uuid, tunkey: integer) > - > -TunKeyAllocation(datapath, tunkey) :- AssignedTunKey(datapath, tunkey). > - > -// Case 1: AllocatedTunKeys relation is not empty (i.e., contains > -// a single record that stores a set of allocated keys) > -TunKeyAllocation(datapath, tunkey) :- > - NotYetAllocatedTunKeys(unallocated), > - AllocatedTunKeys(allocated), > - OvnMaxDpKeyLocal[max_dp_key_local], > - var allocation = FlatMap(allocate(allocated, unallocated, 1, max_dp_key_local)), > - (var datapath, var tunkey) = allocation. > - > -// Case 2: AllocatedTunKeys relation is empty > -TunKeyAllocation(datapath, tunkey) :- > - NotYetAllocatedTunKeys(unallocated), > - not AllocatedTunKeys(_), > - OvnMaxDpKeyLocal[max_dp_key_local], > - var allocation = FlatMap(allocate(set_empty(), unallocated, 1, max_dp_key_local)), > - (var datapath, var tunkey) = allocation. > - > -/* > - * Port id allocation: > - * > - * Port IDs in a per-datapath space in the range 1...2**(bits-1)-1, where > - * bits is the number of bits available for port keys (default: 16, vxlan: 12) > - */ > - > -function get_port_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> { > - map.get(key) > - .and_then(parse_dec_u64) > - .and_then(|x| if (x > 0 and x < (1<<bits)) { > - Some{x} > - } else { > - None > - }) > -} > - > -// Tunnel keys requested by port bindings. > -relation RequestedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) > -RequestedPortTunKey(datapath, port, tunkey) :- > - OvnPortKeyBits[bits], > - sp in &SwitchPort(), > - var datapath = sp.sw._uuid, > - var port = sp.lsp._uuid, > - Some{var tunkey} = get_port_tunkey(sp.lsp.options, i"requested-tnl-key", bits). 
> -RequestedPortTunKey(datapath, port, tunkey) :- > - OvnPortKeyBits[bits], > - rp in &RouterPort(), > - var datapath = rp.router._uuid, > - var port = rp.lrp._uuid, > - Some{var tunkey} = get_port_tunkey(rp.lrp.options, i"requested-tnl-key", bits). > -Warning[message] :- > - RequestedPortTunKey(datapath, port, tunkey), > - var count = port.group_by((datapath, tunkey)).size(), > - count > 1, > - var message = "${count} logical ports in the same datapath " > - "request port tunnel key ${tunkey}". > - > -// Assign tunnel keys: > -// - First priority to requested tunnel keys. > -// - Second priority to already assigned tunnel keys. > -// In either case, make an arbitrary choice in case of conflicts within a > -// priority level. > -relation AssignedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) > -AssignedPortTunKey(datapath, port, tunkey) :- > - RequestedPortTunKey(datapath, port, tunkey), > - var port = port.group_by((datapath, tunkey)).first(). > -AssignedPortTunKey(datapath, port, tunkey) :- > - sb::Port_Binding(._uuid = port_uuid, > - .datapath = datapath, > - .tunnel_key = tunkey), > - not RequestedPortTunKey(datapath, _, tunkey), > - not RequestedPortTunKey(datapath, port_uuid, _), > - var port = port_uuid.group_by((datapath, tunkey)).first(). > - > -// all tunnel keys already in use in the Realized table > -relation AllocatedPortTunKeys(datapath: uuid, keys: Set<integer>) > - > -AllocatedPortTunKeys(datapath, keys) :- > - AssignedPortTunKey(datapath, port, tunkey), > - var keys = tunkey.group_by(datapath).to_set(). > - > -// Port_Binding's not yet in the Realized table > -relation NotYetAllocatedPortTunKeys(datapath: uuid, all_logical_ids: Vec<uuid>) > - > -NotYetAllocatedPortTunKeys(datapath, all_names) :- > - OutProxy_Port_Binding(._uuid = port_uuid, .datapath = datapath), > - not AssignedPortTunKey(datapath, port_uuid, _), > - var all_names = port_uuid.group_by(datapath).to_vec(). > - > -// Perform the allocation. 
> -relation PortTunKeyAllocation(port: uuid, tunkey: integer) > - > -// Transfer existing allocations from the realized table. > -PortTunKeyAllocation(port, tunkey) :- AssignedPortTunKey(_, port, tunkey). > - > -// Case 1: AllocatedPortTunKeys(datapath) is not empty (i.e., contains > -// a single record that stores a set of allocated keys). > -PortTunKeyAllocation(port, tunkey) :- > - AllocatedPortTunKeys(datapath, allocated), > - NotYetAllocatedPortTunKeys(datapath, unallocated), > - var allocation = FlatMap(allocate(allocated, unallocated, 1, 64'hffff)), > - (var port, var tunkey) = allocation. > - > -// Case 2: PortAllocatedTunKeys(datapath) relation is empty > -PortTunKeyAllocation(port, tunkey) :- > - NotYetAllocatedPortTunKeys(datapath, unallocated), > - not AllocatedPortTunKeys(datapath, _), > - var allocation = FlatMap(allocate(set_empty(), unallocated, 1, 64'hffff)), > - (var port, var tunkey) = allocation. > - > -/* > - * Multicast group tunnel_key allocation: > - * > - * Tunnel-keys in a per-datapath space in the range 32770...65535 > - */ > - > -// All tunnel keys already in use in the Realized table. > -relation AllocatedMulticastGroupTunKeys(datapath_uuid: uuid, keys: Set<integer>) > - > -AllocatedMulticastGroupTunKeys(datapath_uuid, keys) :- > - sb::Multicast_Group(.datapath = datapath_uuid, .tunnel_key = tunkey), > - //sb::UUIDMap_Datapath_Binding(datapath, Left{datapath_uuid}), > - var keys = tunkey.group_by(datapath_uuid).to_set(). > - > -// Multicast_Group's not yet in the Realized table. > -relation NotYetAllocatedMulticastGroupTunKeys(datapath_uuid: uuid, > - all_logical_ids: Vec<istring>) > - > -NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, all_names) :- > - OutProxy_Multicast_Group(.name = name, .datapath = datapath_uuid), > - not sb::Multicast_Group(.name = name, .datapath = datapath_uuid), > - var all_names = name.group_by(datapath_uuid).to_vec(). 
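Review aside: the datapath and port tunnel-key rules above, and the multicast-group rules that follow, all funnel into the same `allocate(allocated, unallocated, min, max)` helper. Its real implementation lived in the Rust support code this patch removes (northd/ovn.rs); the sketch below is a Python approximation of its contract, inferred only from how the DDlog rules call it (each waiting id gets the lowest free key in range, ids past exhaustion get nothing), not a copy of the removed code:

```python
def allocate(allocated, unallocated, min_key, max_key):
    """Assign each id in 'unallocated' the lowest key in
    [min_key, max_key] not already in 'allocated'.

    Returns a list of (id, key) pairs; ids left over after the
    key space is exhausted are simply not assigned.
    """
    used = set(allocated)
    result = []
    key = min_key
    for ident in unallocated:
        # Skip forward past keys that are already taken.
        while key <= max_key and key in used:
            key += 1
        if key > max_key:
            break  # key space exhausted
        result.append((ident, key))
        used.add(key)
        key += 1
    return result
```

This also shows why the rules are split into "Case 1" and "Case 2": DDlog joins fail on an empty `Allocated...` relation, so a separate rule passes `set_empty()` when nothing has been allocated yet.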
> - > -// Perform the allocation > -relation MulticastGroupTunKeyAllocation(datapath_uuid: uuid, group: istring, tunkey: integer) > - > -// transfer existing allocations from the realized table > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > - //sb::UUIDMap_Datapath_Binding(_, datapath_uuid), > - sb::Multicast_Group(.name = group, > - .datapath = datapath_uuid, > - .tunnel_key = tunkey). > - > -// Case 1: AllocatedMulticastGroupTunKeys(datapath) is not empty (i.e., > -// contains a single record that stores a set of allocated keys) > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > - AllocatedMulticastGroupTunKeys(datapath_uuid, allocated), > - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), > - (_, var min_key) = mC_IP_MCAST_MIN(), > - (_, var max_key) = mC_IP_MCAST_MAX(), > - var allocation = FlatMap(allocate(allocated, unallocated, > - min_key, max_key)), > - (var group, var tunkey) = allocation. > - > -// Case 2: AllocatedMulticastGroupTunKeys(datapath) relation is empty > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), > - not AllocatedMulticastGroupTunKeys(datapath_uuid, _), > - (_, var min_key) = mC_IP_MCAST_MIN(), > - (_, var max_key) = mC_IP_MCAST_MAX(), > - var allocation = FlatMap(allocate(set_empty(), unallocated, > - min_key, max_key)), > - (var group, var tunkey) = allocation. > - > -/* > - * Queue ID allocation > - * > - * Queue IDs on a chassis, for routers that have QoS enabled, in a per-chassis > - * space in the range 1...0xf000. It looks to me like there'd only be a small > - * number of these per chassis, and probably a small number overall, in case it > - * matters. > - * > - * Queue ID may also need to be deallocated if port loses QoS attributes > - * > - * This logic applies mainly to sb::Port_Binding records bound to a chassis > - * (i.e. 
with the chassis column nonempty) but "localnet" ports can also > - * have a queue ID. For those we use the port's own UUID as the chassis UUID. > - */ > - > -function port_has_qos_params(opts: Map<istring, istring>): bool = { > - opts.contains_key(i"qos_max_rate") or opts.contains_key(i"qos_burst") > -} > - > - > -// ports in Out_Port_Binding that require queue ID on chassis > -relation PortRequiresQID(port: uuid, chassis: uuid) > - > -PortRequiresQID(pb._uuid, chassis) :- > - pb in OutProxy_Port_Binding(), > - pb.__type != i"localnet", > - port_has_qos_params(pb.options), > - sb::Port_Binding(._uuid = pb._uuid, .chassis = chassis_set), > - Some{var chassis} = chassis_set. > -PortRequiresQID(pb._uuid, pb._uuid) :- > - pb in OutProxy_Port_Binding(), > - pb.__type == i"localnet", > - port_has_qos_params(pb.options), > - sb::Port_Binding(._uuid = pb._uuid). > - > -relation AggPortRequiresQID(chassis: uuid, ports: Vec<uuid>) > - > -AggPortRequiresQID(chassis, ports) :- > - PortRequiresQID(port, chassis), > - var ports = port.group_by(chassis).to_vec(). > - > -relation AllocatedQIDs(chassis: uuid, allocated_ids: Map<uuid, integer>) > - > -AllocatedQIDs(chassis, allocated_ids) :- > - pb in sb::Port_Binding(), > - pb.__type != i"localnet", > - Some{var chassis} = pb.chassis, > - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), > - Some{var qid} = parse_dec_u64(qid_str), > - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). > -AllocatedQIDs(chassis, allocated_ids) :- > - pb in sb::Port_Binding(), > - pb.__type == i"localnet", > - var chassis = pb._uuid, > - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), > - Some{var qid} = parse_dec_u64(qid_str), > - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). 
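Review aside: the queue-ID rules call `adjust_allocation(allocated_ids, ports, 1, 64'hf000)`, which differs from `allocate` in that it receives the existing port-to-ID map and preserves assignments that are still valid, allocating fresh IDs only for newcomers. Again the real implementation was in the removed Rust code; this is a hedged Python sketch of the behavior implied by the call sites:

```python
def adjust_allocation(allocated, ports, min_id, max_id):
    """Keep each port's existing queue ID from 'allocated' (a
    port -> id map) when present; give the remaining ports the
    lowest free IDs in [min_id, max_id]."""
    used = set(allocated.values())
    result = []
    nxt = min_id
    for port in ports:
        if port in allocated:
            result.append((port, allocated[port]))
            continue
        while nxt <= max_id and nxt in used:
            nxt += 1
        if nxt > max_id:
            break  # ID space exhausted
        result.append((port, nxt))
        used.add(nxt)
        nxt += 1
    return result
```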
> - > -// allocate queue IDs to ports > -relation QueueIDAllocation(port: uuid, qids: Option<integer>) > - > -// None for ports that do not require a queue > -QueueIDAllocation(port, None) :- > - OutProxy_Port_Binding(._uuid = port), > - not PortRequiresQID(port, _). > - > -QueueIDAllocation(port, Some{qid}) :- > - AggPortRequiresQID(chassis, ports), > - AllocatedQIDs(chassis, allocated_ids), > - var allocations = FlatMap(adjust_allocation(allocated_ids, ports, 1, 64'hf000)), > - (var port, var qid) = allocations. > - > -QueueIDAllocation(port, Some{qid}) :- > - AggPortRequiresQID(chassis, ports), > - not AllocatedQIDs(chassis, _), > - var allocations = FlatMap(adjust_allocation(map_empty(), ports, 1, 64'hf000)), > - (var port, var qid) = allocations. > - > -/* > - * This allows ovn-northd to preserve options:ipv6_ra_pd_list, which is set by > - * ovn-controller. > - */ > -relation PreserveIPv6RAPDList(lrp_uuid: uuid, ipv6_ra_pd_list: Option<istring>) > -PreserveIPv6RAPDList(lrp_uuid, ipv6_ra_pd_list) :- > - sb::Port_Binding(._uuid = lrp_uuid, .options = options), > - var ipv6_ra_pd_list = options.get(i"ipv6_ra_pd_list"). > -PreserveIPv6RAPDList(lrp_uuid, None) :- > - &nb::Logical_Router_Port(._uuid = lrp_uuid), > - not sb::Port_Binding(._uuid = lrp_uuid). > - > -/* > - * Tag allocation for nested containers. > - */ > - > -/* Reserved tags for each parent port, including: > - * 1. For ports that need a dynamically allocated tag, existing tag, if any, > - * 2. For ports that have a statically assigned tag (via `tag_request`), the > - * `tag_request` value. > - * 3. For ports that do not have a tag_request, but have a tag statically assigned > - * by directly setting the `tag` field, use this value. 
> - */ > -relation SwitchPortReservedTag(parent_name: istring, tags: integer) > - > -SwitchPortReservedTag(parent_name, tag) :- > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = needs_dynamic_tag, .parent_name = Some{parent_name}), > - Some{var tag} = if (needs_dynamic_tag) { > - lsp.tag > - } else { > - match (lsp.tag_request) { > - Some{req} -> Some{req}, > - None -> lsp.tag > - } > - }. > - > -relation SwitchPortReservedTags(parent_name: istring, tags: Set<integer>) > - > -SwitchPortReservedTags(parent_name, tags) :- > - SwitchPortReservedTag(parent_name, tag), > - var tags = tag.group_by(parent_name).to_set(). > - > -SwitchPortReservedTags(parent_name, set_empty()) :- > - &nb::Logical_Switch_Port(.name = parent_name), > - not SwitchPortReservedTag(.parent_name = parent_name). > - > -/* Allocate tags for ports that require dynamically allocated tags and do not > - * have any yet. > - */ > -relation SwitchPortAllocatedTags(lsp_uuid: uuid, tag: Option<integer>) > - > -SwitchPortAllocatedTags(lsp_uuid, tag) :- > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true, .parent_name = Some{parent_name}), > - lsp.tag == None, > - var lsps_need_tag = lsp._uuid.group_by(parent_name).to_vec(), > - SwitchPortReservedTags(parent_name, reserved), > - var dyn_tags = allocate_opt(reserved, > - lsps_need_tag, > - 1, /* Tag 0 is invalid for nested containers. */ > - 4095), > - var lsp_tag = FlatMap(dyn_tags), > - (var lsp_uuid, var tag) = lsp_tag. > - > -/* New tag-to-port assignment: > - * Case 1. Statically reserved tag (via `tag_request`), if any. > - * Case 2. Existing tag for ports that require a dynamically allocated tag and already have one. > - * Case 3. Use newly allocated tags (from `SwitchPortAllocatedTags`) for all other ports. 
> - */ > -relation SwitchPortNewDynamicTag(port: uuid, tag: Option<integer>) > - > -/* Case 1 */ > -SwitchPortNewDynamicTag(lsp._uuid, tag) :- > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = false), > - var tag = match (lsp.tag_request) { > - Some{0} -> None, > - treq -> treq > - }. > - > -/* Case 2 */ > -SwitchPortNewDynamicTag(lsp._uuid, Some{tag}) :- > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), > - Some{var tag} = lsp.tag. > - > -/* Case 3 */ > -SwitchPortNewDynamicTag(lsp._uuid, tag) :- > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), > - lsp.tag == None, > - SwitchPortAllocatedTags(lsp._uuid, tag). > - > -/* IP_Multicast table (only applicable for Switches). */ > -sb::Out_IP_Multicast(._uuid = cfg.datapath, > - .datapath = cfg.datapath, > - .enabled = Some{cfg.enabled}, > - .querier = Some{cfg.querier}, > - .eth_src = cfg.eth_src, > - .ip4_src = cfg.ip4_src, > - .ip6_src = cfg.ip6_src, > - .table_size = Some{cfg.table_size}, > - .idle_timeout = Some{cfg.idle_timeout}, > - .query_interval = Some{cfg.query_interval}, > - .query_max_resp = Some{cfg.query_max_resp}) :- > - McastSwitchCfg[cfg]. > - > - > -relation PortExists(name: istring) > -PortExists(name) :- &nb::Logical_Switch_Port(.name = name). > -PortExists(name) :- &nb::Logical_Router_Port(.name = name). > - > -sb::Out_Load_Balancer(._uuid = lb._uuid, > - .name = lb.name, > - .vips = lb.vips, > - .protocol = lb.protocol, > - .datapaths = datapaths, > - .external_ids = [i"lb_id" -> uuid2str(lb_uuid).intern()], > - .options = options) :- > - nb in &nb::Logical_Switch(._uuid = ls_uuid, .load_balancer = lb_uuids), > - var lb_uuid = FlatMap(lb_uuids), > - var datapaths = ls_uuid.group_by(lb_uuid).to_set(), > - lb in &nb::Load_Balancer(._uuid = lb_uuid), > - /* Store the fact that northd provides the original (destination IP + > - * transport port) tuple. > - */ > - var options = lb.options.insert_imm(i"hairpin_orig_tuple", i"true"). 
> - > -sb::Out_Service_Monitor(._uuid = hash128((svc_monitor.port_name, lbvipbackend.ip, lbvipbackend.port, protocol)), > - .ip = i"${lbvipbackend.ip}", > - .protocol = Some{protocol}, > - .port = lbvipbackend.port as integer, > - .logical_port = svc_monitor.port_name, > - .src_mac = i"${svc_monitor_mac}", > - .src_ip = svc_monitor.src_ip, > - .options = health_check.options, > - .external_ids = map_empty()) :- > - SvcMonitorMac(svc_monitor_mac), > - LBVIP[lbvip@&LBVIP{.lb = lb}], > - Some{var health_check} = lbvip.health_check, > - var lbvipbackend = FlatMap(lbvip.backends), > - Some{var svc_monitor} = lbvipbackend.svc_monitor, > - PortExists(svc_monitor.port_name), > - var protocol = default_protocol(lb.protocol), > - protocol != i"sctp". > - > -Warning["SCTP load balancers do not currently support " > - "health checks. Not creating health checks for " > - "load balancer ${uuid2str(lb._uuid)}"] :- > - LBVIP[lbvip@&LBVIP{.lb = lb}], > - default_protocol(lb.protocol) == i"sctp", > - Some{var health_check} = lbvip.health_check, > - var lbvipbackend = FlatMap(lbvip.backends), > - Some{var svc_monitor} = lbvipbackend.svc_monitor. > - > -/* > - * BFD table. > - */ > - > -/* > - * BFD source port allocation. > - * > - * We need to assign a unique source port to each (logical_port, dst_ip) pair. > - * RFC 5881 section 4 says: > - * > - * The source port MUST be in the range 49152 through 65535. > - * The same UDP source port number MUST be used for all BFD > - * Control packets associated with a particular session. > - * The source port number SHOULD be unique among all BFD > - * sessions on the system > - */ > -function bFD_UDP_SRC_PORT_MIN(): integer { 49152 } > -function bFD_UDP_SRC_PORT_MAX(): integer { 65535 } > - > -// Get already assigned BFD source ports. > -// If there's a conflict, make an arbitrary choice. 
> -relation AssignedSrcPort( > - logical_port: istring, > - dst_ip: istring, > - src_port: integer) > -AssignedSrcPort(logical_port, dst_ip, src_port) :- > - sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip, .src_port = src_port), > - var pair = (logical_port, dst_ip).group_by(src_port).first(), > - (var logical_port, var dst_ip) = pair. > - > -// All source ports already in use. > -relation AllocatedSrcPorts0(src_ports: Set<integer>) > -AllocatedSrcPorts0(src_ports) :- > - AssignedSrcPort(.src_port = src_port), > - var src_ports = src_port.group_by(()).to_set(). > - > -relation AllocatedSrcPorts(src_ports: Set<integer>) > -AllocatedSrcPorts(src_ports) :- AllocatedSrcPorts0(src_ports). > -AllocatedSrcPorts(set_empty()) :- Unit(), not AllocatedSrcPorts0(_). > - > -// (logical_port, dst_ip) pairs not yet in the Realized table > -relation NotYetAllocatedSrcPorts(pairs: Vec<(istring, istring)>) > -NotYetAllocatedSrcPorts(pairs) :- > - nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip), > - not AssignedSrcPort(logical_port, dst_ip, _), > - var pairs = (logical_port, dst_ip).group_by(()).to_vec(). > - > -// Perform the allocation > -relation SrcPortAllocation( > - logical_port: istring, > - dst_ip: istring, > - src_port: integer) > -SrcPortAllocation(logical_port, dst_ip, src_port) :- AssignedSrcPort(logical_port, dst_ip, src_port). > -SrcPortAllocation(logical_port, dst_ip, src_port) :- > - NotYetAllocatedSrcPorts(unallocated), > - AllocatedSrcPorts(allocated), > - var allocation = FlatMap(allocate(allocated, unallocated, > - bFD_UDP_SRC_PORT_MIN(), bFD_UDP_SRC_PORT_MAX())), > - ((var logical_port, var dst_ip), var src_port) = allocation. > - > -relation SouthboundBFDStatus( > - logical_port: istring, > - dst_ip: istring, > - status: Option<istring> > -) > -SouthboundBFDStatus(bfd.logical_port, bfd.dst_ip, Some{bfd.status}) :- bfd in sb::BFD(). 
> -SouthboundBFDStatus(logical_port, dst_ip, None) :- > - nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip), > - not sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip). > - > -function bFD_DEF_MINTX(): integer { 1000 } // 1 second > -function bFD_DEF_MINRX(): integer { 1000 } // 1 second > -function bFD_DEF_DETECT_MULT(): integer { 5 } > -sb::Out_BFD(._uuid = hash, > - .src_port = src_port, > - .disc = max(1, hash as u32) as integer, > - .logical_port = nb.logical_port, > - .dst_ip = nb.dst_ip, > - .min_tx = nb.min_tx.unwrap_or(bFD_DEF_MINTX()), > - .min_rx = nb.min_rx.unwrap_or(bFD_DEF_MINRX()), > - .detect_mult = nb.detect_mult.unwrap_or(bFD_DEF_DETECT_MULT()), > - .status = status, > - .external_ids = map_empty(), > - .options = [i"nb_status" -> nb.status.unwrap_or(i""), > - i"sb_status" -> sb_status.unwrap_or(i""), > - i"referenced" -> i"${referenced}"]) :- > - nb in nb::BFD(), > - SrcPortAllocation(nb.logical_port, nb.dst_ip, src_port), > - SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status), > - BFDReferenced(nb._uuid, referenced), > - var status = bfd_new_status(referenced, nb.status, sb_status).1, > - var hash = hash128((nb.logical_port, nb.dst_ip)). > - > -relation BFDReferenced0(bfd_uuid: uuid) > -BFDReferenced0(bfd_uuid) :- > - nb::Logical_Router_Static_Route(.bfd = Some{bfd_uuid}, .nexthop = nexthop), > - nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop). > - > -relation BFDReferenced(bfd_uuid: uuid, referenced: bool) > -BFDReferenced(bfd_uuid, true) :- BFDReferenced0(bfd_uuid). > -BFDReferenced(bfd_uuid, false) :- > - nb::BFD(._uuid = bfd_uuid), > - not BFDReferenced0(bfd_uuid). 
> - > -// Given the following: > -// - 'referenced': whether a BFD object is referenced by a route > -// - 'nb_status0': 'status' in the existing nb::BFD record > -// - 'sb_status0': 'status' in the existing sb::BFD record (None, if none exists yet) > -// computes and returns (nb_status, sb_status), which are the values to use next in these records > -function bfd_new_status(referenced: bool, > - nb_status0: Option<istring>, > - sb_status0: Option<istring>): (istring, istring) { > - var nb_status = nb_status0.unwrap_or(i"admin_down"); > - match (sb_status0) { > - Some{sb_status} -> if (nb_status != i"admin_down" and sb_status != i"admin_down") { > - nb_status = sb_status > - }, > - _ -> () > - }; > - var sb_status = nb_status; > - if (referenced) { > - if (nb_status == i"admin_down") { > - nb_status = i"down" > - } > - } else { > - nb_status = i"admin_down" > - }; > - warn("nb_status=${nb_status} sb_status=${sb_status} referenced=${referenced}"); > - (nb_status, sb_status) > -} > -nb::Out_BFD(bfd_uuid, Some{status}) :- > - nb in nb::BFD(._uuid = bfd_uuid), > - BFDReferenced(bfd_uuid, referenced), > - SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status), > - var status = bfd_new_status(referenced, nb.status, sb_status).0. 
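Review aside: `bfd_new_status` above is self-contained enough to port directly to Python, which makes the `admin_down`/`referenced` interaction easy to check in isolation. This is a faithful transcription of the removed DDlog function, minus the leftover debug `warn()` call (which looks like stray instrumentation); `None` stands in for DDlog's `None{}` option:

```python
def bfd_new_status(referenced, nb_status0, sb_status0):
    """Compute (nb_status, sb_status) to write next, given whether a
    route references this BFD entry and the current nb/sb statuses."""
    nb_status = nb_status0 if nb_status0 is not None else "admin_down"
    # Adopt the southbound status unless either side is admin_down.
    if sb_status0 is not None:
        if nb_status != "admin_down" and sb_status0 != "admin_down":
            nb_status = sb_status0
    sb_status = nb_status
    if referenced:
        # A referenced session must not stay admin_down northbound.
        if nb_status == "admin_down":
            nb_status = "down"
    else:
        nb_status = "admin_down"
    return (nb_status, sb_status)
```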
> - > -/* > - * Logical router BFD flows > - */ > - > -function lrouter_bfd_flows(lr_uuid: uuid, > - lrp_uuid: uuid, > - ipX: string, > - networks: string, > - controller_meter: Option<istring>) > - : (Flow, Flow) > -{ > - (Flow{.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 110, > - .__match = i"${ipX}.src == ${networks} && udp.dst == 3784", > - .actions = i"next; ", > - .stage_hint = stage_hint(lrp_uuid), > - .io_port = None, > - .controller_meter = None}, > - Flow{.logical_datapath = lr_uuid, > - .stage = s_ROUTER_IN_IP_INPUT(), > - .priority = 110, > - .__match = i"${ipX}.dst == ${networks} && udp.dst == 3784", > - .actions = i"handle_bfd_msg(); ", > - .io_port = None, > - .controller_meter = controller_meter, > - .stage_hint = stage_hint(lrp_uuid)}) > -} > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp, .has_bfd = true)) { > - var controller_meter = router.copp.get(cOPP_BFD()) in { > - if (not networks.ipv4_addrs.is_empty()) { > - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip4", > - format_v4_networks(networks, false), > - controller_meter) in { > - Flow[a]; > - Flow[b] > - } > - }; > - > - if (not networks.ipv6_addrs.is_empty()) { > - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip6", > - format_v6_networks(networks), > - controller_meter) in { > - Flow[a]; > - Flow[b] > - } > - } > - } > -} > - > -/* Clean up stale FDB entries. */ > -sb::Out_FDB(_uuid, mac, dp_key, port_key) :- > - sb::FDB(_uuid, mac, dp_key, port_key), > - sb::Out_Datapath_Binding(._uuid = dp_uuid, .tunnel_key = dp_key), > - sb::Out_Port_Binding(.datapath = dp_uuid, .tunnel_key = port_key). > diff --git a/northd/ovsdb2ddlog2c b/northd/ovsdb2ddlog2c > deleted file mode 100755 > index fa994c99e5..0000000000 > --- a/northd/ovsdb2ddlog2c > +++ /dev/null > @@ -1,131 +0,0 @@ > -#!/usr/bin/env python3 > -# Copyright (c) 2020 Nicira, Inc. 
> -# > -# Licensed under the Apache License, Version 2.0 (the "License"); > -# you may not use this file except in compliance with the License. > -# You may obtain a copy of the License at: > -# > -# http://www.apache.org/licenses/LICENSE-2.0 > -# > -# Unless required by applicable law or agreed to in writing, software > -# distributed under the License is distributed on an "AS IS" BASIS, > -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > -# See the License for the specific language governing permissions and > -# limitations under the License. > - > -import getopt > -import sys > - > -import ovs.json > -import ovs.db.error > -import ovs.db.schema > - > -argv0 = sys.argv[0] > - > -def usage(): > - print("""\ > -%(argv0)s: ovsdb schema compiler for northd > -usage: %(argv0)s [OPTIONS] > - > -The following option must be specified: > - -p, --prefix=PREFIX Prefix for declarations in output. > - > -The following ovsdb2ddlog options are supported: > - -f, --schema-file=FILE OVSDB schema file. > - -o, --output-table=TABLE Mark TABLE as output. > - --output-only-table=TABLE Mark TABLE as output-only. DDlog will send updates to this table directly to OVSDB without comparing it with current OVSDB state. > - --ro=TABLE.COLUMN Ignored. > - --rw=TABLE.COLUMN Ignored. > - --intern-table=TABLE Ignored. > - --intern-strings Ignored. > - --output-file=FILE.inc Write output to FILE.inc. If this option is not specified, output will be written to stdout. 
> - > -The following options are also available: > - -h, --help display this help message > - -V, --version display version information\ > -""" % {'argv0': argv0}) > - sys.exit(0) > - > -if __name__ == "__main__": > - try: > - try: > - options, args = getopt.gnu_getopt(sys.argv[1:], 'p:f:o:hV', > - ['prefix=', > - 'schema-file=', > - 'output-table=', > - 'output-only-table=', > - 'intern-table=', > - 'ro=', > - 'rw=', > - 'output-file=', > - 'intern-strings']) > - except getopt.GetoptError as geo: > - sys.stderr.write("%s: %s\n" % (argv0, geo.msg)) > - sys.exit(1) > - > - prefix = None > - schema_file = None > - output_tables = set() > - output_only_tables = set() > - output_file = None > - for key, value in options: > - if key in ['-h', '--help']: > - usage() > - elif key in ['-V', '--version']: > - print("ovsdb2ddlog2c (OVN) @VERSION@") > - elif key in ['-p', '--prefix']: > - prefix = value > - elif key in ['-f', '--schema-file']: > - schema_file = value > - elif key in ['-o', '--output-table']: > - output_tables.add(value) > - elif key == '--output-only-table': > - output_only_tables.add(value) > - elif key in ['--ro', '--rw', '--intern-table', '--intern-strings']: > - pass > - elif key == '--output-file': > - output_file = value > - else: > - assert False > - > - if schema_file is None: > - sys.stderr.write("%s: missing -f or --schema-file option\n" % argv0) > - sys.exit(1) > - if prefix is None: > - sys.stderr.write("%s: missing -p or --prefix option\n" % argv0) > - sys.exit(1) > - if not output_tables.isdisjoint(output_only_tables): > - example = next(iter(output_tables.intersect(output_only_tables))) > - sys.stderr.write("%s: %s may not be both an output table and " > - "an output-only table\n" % (argv0, example)) > - sys.exit(1) > - > - schema = ovs.db.schema.DbSchema.from_json(ovs.json.from_file( > - schema_file)) > - > - all_tables = set(schema.tables.keys()) > - missing_tables = (output_tables | output_only_tables) - all_tables > - if missing_tables: > - 
sys.stderr.write("%s: %s is not the name of a table\n" > - % (argv0, next(iter(missing_tables)))) > - sys.exit(1) > - > - f = sys.stdout if output_file is None else open(output_file, "w") > - for name, tables in ( > - ("input_relations", all_tables - output_only_tables), > - ("output_relations", output_tables), > - ("output_only_relations", output_only_tables)): > - f.write("static const char *%s%s[] = {\n" % (prefix, name)) > - for table in sorted(tables): > - f.write(" \"%s\",\n" % table) > - f.write(" NULL,\n") > - f.write("};\n\n") > - if schema_file is not None: > - f.close() > - except ovs.db.error.Error as e: > - sys.stderr.write("%s: %s\n" % (argv0, e)) > - sys.exit(1) > - > -# Local variables: > -# mode: python > -# End: > diff --git a/tests/ovn-macros.at b/tests/ovn-macros.at > index 1b693a22c3..a30b626ef1 100644 > --- a/tests/ovn-macros.at > +++ b/tests/ovn-macros.at > @@ -47,11 +47,11 @@ m4_define([OVN_CLEANUP],[ > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > if test -d northd-backup; then > as northd-backup > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > fi > > OVN_CLEANUP_VSWITCH([main]) > @@ -71,11 +71,11 @@ m4_define([OVN_CLEANUP_AZ],[ > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as $1/northd > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > if test -d $1/northd-backup; then > as $1/northd-backup > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > fi > > as $1/ic > @@ -165,11 +165,6 @@ ovn_start_northd() { > backup) suffix=-backup ;; > esac > > - case ${NORTHD_TYPE:=ovn-northd} in > - ovn-northd) ;; > - ovn-northd-ddlog) northd_args="$northd_args --ddlog-record=${AZ:+$AZ/}northd$suffix/replay.dat -v" ;; > - esac > - > if test X$NORTHD_USE_PARALLELIZATION = Xyes; then > northd_args="$northd_args --n-threads=4" > fi > @@ -177,7 +172,7 @@ 
ovn_start_northd() { > local name=${d_prefix}northd${suffix} > echo "${prefix}starting $name" > test -d "$ovs_base/$name" || mkdir "$ovs_base/$name" > - as $name start_daemon $NORTHD_TYPE $northd_args -vjsonrpc \ > + as $name start_daemon ovn-northd $northd_args -vjsonrpc \ > --ovnnb-db=$OVN_NB_DB --ovnsb-db=$OVN_SB_DB > } > > @@ -219,10 +214,7 @@ ovn_start () { > fi > > if test X$HAVE_OPENSSL = Xyes; then > - # Create the SB DB pssl+RBAC connection. Ideally we could pre-create > - # SB_Global and Connection with ovsdb-tool transact at DB creation > - # time, but unfortunately that does not work, northd-ddlog will replace > - # the SB_Global record on startup. > + # Create the SB DB pssl+RBAC connection. > ovn-sbctl \ > -- --id=@c create connection \ > target=\"pssl:0:127.0.0.1\" role=ovn-controller \ > @@ -945,26 +937,23 @@ m4_define([OVN_POPULATE_ARP], [AT_CHECK(ovn_populate_arp__, [0], [ignore])]) > # Defines versions of the test with all combinations of northd, > # parallelization enabled and conditional monitoring on/off. > m4_define([OVN_FOR_EACH_NORTHD], > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], > - [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1 > -])])])]) > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], > + [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1 > +])])]) > > # Defines versions of the test with all combinations of northd and > # parallelization enabled. To be used when the ovn-controller configuration > # is not relevant. > m4_define([OVN_FOR_EACH_NORTHD_NO_HV], > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1 > -])])]) > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1 > +])]) > > # Defines versions of the test with all combinations of northd and > # parallelization on/off. To be used when the ovn-controller configuration > # is not relevant and we want to test parallelization permutations. 
> m4_define([OVN_FOR_EACH_NORTHD_NO_HV_PARALLELIZATION], > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1 > -])])]) > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1 > +])]) > > # OVN_NBCTL(NBCTL_COMMAND) adds NBCTL_COMMAND to list of commands to be run by RUN_OVN_NBCTL(). > m4_define([OVN_NBCTL], [ > diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at > index e7910e83c6..3c2f93567a 100644 > --- a/tests/ovn-northd.at > +++ b/tests/ovn-northd.at > @@ -29,7 +29,7 @@ m4_define([CHECK_NO_CHANGE_AFTER_RECOMPUTE], [ > wait_for_ports_up > fi > _DUMP_DB_TABLES(before) > - check as northd ovn-appctl -t NORTHD_TYPE inc-engine/recompute > + check as northd ovn-appctl -t ovn-northd inc-engine/recompute > check ovn-nbctl --wait=sb sync > _DUMP_DB_TABLES(after) > AT_CHECK([as northd diff before after], [0], [dnl > @@ -357,7 +357,7 @@ ovn_init_db ovn-nb; ovn-nbctl init > > # test unixctl option > mkdir "$ovs_base"/northd > -as northd start_daemon NORTHD_TYPE --unixctl="$ovs_base"/northd/NORTHD_TYPE[].ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > +as northd start_daemon ovn-northd --unixctl="$ovs_base"/northd/ovn-northd.ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > ovn-nbctl ls-add sw > ovn-nbctl --wait=sb lsp-add sw p1 > # northd created with unixctl option successfully created port_binding entry > @@ -365,7 +365,7 @@ check_row_count Port_Binding 1 logical_port=p1 > AT_CHECK([ovn-nbctl --wait=sb lsp-del p1]) > > # ovs-appctl exit with unixctl option > -OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/]NORTHD_TYPE[.ctl], ["$ovs_base"/northd/]NORTHD_TYPE[.pid]) > +OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/ovn-northd.ctl], ["$ovs_base"/northd/ovn-northd.pid]) > > # Check no port_binding entry for new port as ovn-northd is not running > # > @@ -376,7 +376,7 @@ AT_CHECK([ovn-nbctl --timeout=10 
--wait=sb sync], [142], [], [ignore])
> check_row_count Port_Binding 0 logical_port=p2
>
> # test default unixctl path
> -as northd start_daemon NORTHD_TYPE --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock
> +as northd start_daemon ovn-northd --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock
> ovn-nbctl --wait=sb lsp-add sw p3
> # northd created with default unixctl path successfully created port_binding entry
> check_row_count Port_Binding 1 logical_port=p3
> @@ -386,7 +386,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> as ovn-nb
> OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> as northd
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
>
> AT_CLEANUP
> ])
> @@ -737,7 +737,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> as ovn-nb
> OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> as northd
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
>
> AT_CLEANUP
> ])
> @@ -750,10 +750,10 @@ AT_SETUP([ovn-northd pause and resume])
> ovn_start --backup-northd=paused
>
> get_northd_status() {
> - as northd ovn-appctl -t NORTHD_TYPE is-paused
> - as northd ovn-appctl -t NORTHD_TYPE status
> - as northd-backup ovn-appctl -t NORTHD_TYPE is-paused
> - as northd-backup ovn-appctl -t NORTHD_TYPE status
> + as northd ovn-appctl -t ovn-northd is-paused
> + as northd ovn-appctl -t ovn-northd status
> + as northd-backup ovn-appctl -t ovn-northd is-paused
> + as northd-backup ovn-appctl -t ovn-northd status
> }
>
> AS_BOX([Check that the backup is paused])
> @@ -764,7 +764,7 @@ Status: paused
> ])
>
> AS_BOX([Resume the backup])
> -check as northd-backup ovs-appctl -t NORTHD_TYPE resume
> +check as northd-backup ovs-appctl -t ovn-northd resume
> OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false
> Status: active
> false
> @@ -781,8 +781,8 @@ check ovn-nbctl --wait=sb ls-del sw0
> check_row_count Datapath_Binding 0
>
> AS_BOX([Pause the main northd])
> -check as northd ovs-appctl -t NORTHD_TYPE pause
> -check as northd-backup ovs-appctl -t NORTHD_TYPE pause
> +check as northd ovs-appctl -t ovn-northd pause
> +check as northd-backup ovs-appctl -t ovn-northd pause
> AT_CHECK([get_northd_status], [0], [true
> Status: paused
> true
> @@ -798,7 +798,7 @@ check_row_count Datapath_Binding 0
> # Do not resume both main and backup right after each other
> # as there would be no guarentee of which one would become active
> AS_BOX([Resume the main northd])
> -check as northd ovs-appctl -t NORTHD_TYPE resume
> +check as northd ovs-appctl -t ovn-northd resume
> OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false
> Status: active
> true
> @@ -806,7 +806,7 @@ Status: paused
> ])
>
> AS_BOX([Resume the backup northd])
> -check as northd-backup ovs-appctl -t NORTHD_TYPE resume
> +check as northd-backup ovs-appctl -t ovn-northd resume
> OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false
> Status: active
> false
> @@ -831,7 +831,7 @@ check_row_count Datapath_Binding 1
>
> # Kill northd.
> as northd
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
>
> # With ovn-northd gone, changes to nbdb won't be reflected into sbdb.
> # Make sure.
> @@ -1268,7 +1268,7 @@ AT_CLEANUP
>
> OVN_FOR_EACH_NORTHD_NO_HV([
> AT_SETUP([check Load balancer health check and Service Monitor sync])
> -ovn_start NORTHD_TYPE
> check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80,20.0.0.3:80
>
> check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1
> @@ -4624,7 +4624,7 @@ AT_SKIP_IF([expr "$PKIDIR" : ".*[[ '\"
> ovn_start
>
> as northd
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
>
> as ovn-sb
> OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> @@ -4652,7 +4652,7 @@ cp $PKIDIR/$key2 $key
> cp $PKIDIR/$cert3 $cert
> cp $PKIDIR/$cacert $cacert
> as northd
> -start_daemon ovn$NORTHD_TYPE -vjsonrpc \
> +start_daemon ovn-northd -vjsonrpc \
> --ovnnb-db=$OVN_NB_DB --ovnsb-db=ssl:127.0.0.1:$TCP_PORT \
> -p $key -c $cert -C $cacert
>
> @@ -4664,7 +4664,7 @@ cp $PKIDIR/$key $key
> cp $PKIDIR/$cert $cert
> check ovn-nbctl --wait=sb sync
>
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> AT_CLEANUP
> ])
>
> @@ -6506,7 +6506,7 @@ AT_CLEANUP
> OVN_FOR_EACH_NORTHD_NO_HV([
> AT_SETUP([ovn-northd -- lrp with chassis-redirect and ls with vtep lport])
> AT_KEYWORDS([multiple-l3dgw-ports])
> -ovn_start NORTHD_TYPE
> +ovn_start
> check ovn-sbctl chassis-add ch1 geneve 127.0.0.2
>
> check ovn-nbctl lr-add lr1
> @@ -6601,7 +6601,7 @@ AT_CLEANUP
>
> OVN_FOR_EACH_NORTHD_NO_HV([
> AT_SETUP([check options:requested-chassis fills requested_chassis col])
> -ovn_start NORTHD_TYPE
> +ovn_start
>
> # Add chassis ch1.
> check ovn-sbctl chassis-add ch1 geneve 127.0.0.2
> @@ -8190,39 +8190,39 @@ OVN_FOR_EACH_NORTHD_NO_HV([
> AT_SETUP([northd-parallelization unixctl])
> ovn_start
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1
> -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1
> +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1
> ])
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4
> -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [4
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4
> +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [4
> ])
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1
> -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1
> +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1
> ])
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 0], [2], [],
> +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 0], [2], [],
> [invalid n_threads: 0
> ovn-appctl: ovn-northd: server returned an error
> ])
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads -1], [2], [],
> +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads -1], [2], [],
> [invalid n_threads: -1
> ovn-appctl: ovn-northd: server returned an error
> ])
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 300], [2], [],
> +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 300], [2], [],
> [invalid n_threads: 300
> ovn-appctl: ovn-northd: server returned an error
> ])
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads], [2], [],
> +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads], [2], [],
> ["parallel-build/set-n-threads" command requires at least 1 arguments
> ovn-appctl: ovn-northd: server returned an error
> ])
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 2], [2], [],
> +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 2], [2], [],
> ["parallel-build/set-n-threads" command takes at most 1 arguments
> ovn-appctl: ovn-northd: server returned an error
> ])
> @@ -8262,16 +8262,16 @@ check ovn-nbctl lrp-add lr1 lrp0 "f0:00:00:01:00:01" 10.1.255.254/16
> check ovn-nbctl lr-nat-add lr1 snat 10.2.0.1 10.1.0.0/16
> add_switch_ports 1 50
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4
> add_switch_ports 51 100
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 8
> add_switch_ports 101 150
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4
> add_switch_ports 151 200
>
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1
> add_switch_ports 201 250
> check ovn-nbctl --wait=sb sync
>
> @@ -8288,7 +8288,7 @@ ovn-sbctl dump-flows | DUMP_FLOWS_SORTED > flows2
> AT_CHECK([diff flows1 flows2])
>
> # Restart with with 8 threads
> -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8
> +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 8
> delete_switch_ports 1 250
> add_switch_ports 1 250
> check ovn-nbctl --wait=sb sync
> @@ -8973,22 +8973,22 @@ p2_uuid=$(fetch_column nb:Logical_Switch_Port _uuid name=sw0-p2)
> echo "p1 uuid - $p1_uuid"
> ovn-nbctl --wait=sb sync
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> foo_as_uuid=$(ovn-nbctl create address_set name=foo addresses=\"1.1.1.1\",\"1.1.1.2\")
> wait_column '1.1.1.1 1.1.1.2' Address_Set addresses name=foo
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1
> ])
>
> rm -f northd/ovn-northd.log
> -check as northd ovn-appctl -t NORTHD_TYPE vlog/reopen
> -check as northd ovn-appctl -t NORTHD_TYPE vlog/set jsonrpc:dbg
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd vlog/reopen
> +check as northd ovn-appctl -t ovn-northd vlog/set jsonrpc:dbg
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.3 -- \
> add address_set $foo_as_uuid addresses 1.1.2.1/4
> wait_column '1.1.1.1 1.1.1.2 1.1.1.3 1.1.2.1/4' Address_Set addresses name=foo
>
> # There should be no recompute of the sync_to_sb_addr_set engine node .
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> ])
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> @@ -8996,14 +8996,14 @@ AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \
> grep -c mutate], [0], [1
> ])
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.4 -- \
> remove address_set $foo_as_uuid addresses 1.1.1.1 -- \
> remove address_set $foo_as_uuid addresses 1.1.2.1/4
> wait_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo
>
> # There should be no recompute of the sync_to_sb_addr_set engine node .
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> ])
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> @@ -9013,9 +9013,9 @@ grep -c mutate], [0], [2
> ])
>
> # Pause ovn-northd and add/remove few addresses. when it is resumed
> # it should use mutate for updating the address sets.
> -check as northd ovn-appctl -t NORTHD_TYPE pause
> +check as northd ovn-appctl -t ovn-northd pause
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.5
> check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.6
> check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2
> @@ -9023,10 +9023,10 @@ check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2
> check_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo
>
> # Resume northd now
> -check as northd ovn-appctl -t NORTHD_TYPE resume
> +check as northd ovn-appctl -t ovn-northd resume
> wait_column '1.1.1.3 1.1.1.4 1.1.1.5 1.1.1.6' Address_Set addresses name=foo
> # There should be recompute of the sync_to_sb_addr_set engine node .
> -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute)
> +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute)
> AT_CHECK([test $recompute_stat -ge 1])
>
> AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \
> @@ -9034,25 +9034,25 @@ grep -c mutate], [0], [3
> ])
>
> # Create a port group. This should result in recompute of sb_to_sync_addr_set engine node.
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl pg-add pg1
> wait_column '' Address_Set addresses name=pg1_ip4
> -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute)
> +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute)
> AT_CHECK([test $recompute_stat -ge 1])
>
> # Add sw0-p1 to port group pg1
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl add port_group pg1 ports ${p1_uuid}
> wait_column '20.0.0.4' Address_Set addresses name=pg1_ip4
>
> # There should be no recompute of the sync_to_sb_addr_set engine node.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> ])
>
> # No change, no recompute
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb sync
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> ])
>
> AT_CLEANUP
> @@ -9091,18 +9091,18 @@ $4
> }
>
> AS_BOX([Create new PG1 and PG2])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb -- pg-add pg1 -- pg-add pg2
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> - abort: 0
> ])
> dnl The port_group node recomputes every time a NB port group is added/deleted.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9110,7 +9110,7 @@ Node: port_group
> ])
> dnl The port_group node is an input for the lflow node. Port_group
> dnl recompute/compute triggers lflow recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9125,7 +9125,7 @@ check ovn-nbctl --wait=sb \
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Add one port from the two switches to PG1])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg1 sw1.1 sw2.1
> check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> @@ -9133,7 +9133,7 @@ check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1"
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9141,7 +9141,7 @@ Node: northd
> ])
> dnl The port_group node recomputes also every time a port from a new switch
> dnl is added to the group.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9149,7 +9149,7 @@ Node: port_group
> ])
> dnl The port_group node is an input for the lflow node. Port_group
> dnl recompute/compute triggers lflow recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9160,7 +9160,7 @@ check_acl_lflows 1 0 1 0
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Add one port from the two switches to PG2])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg2 sw1.2 sw2.2
> check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> @@ -9170,7 +9170,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9178,7 +9178,7 @@ Node: northd
> ])
> dnl The port_group node recomputes also every time a port from a new switch
> dnl is added to the group.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9186,7 +9186,7 @@ Node: port_group
> ])
> dnl The port_group node is an input for the lflow node. Port_group
> dnl recompute/compute triggers lflow recompute (for ACLs).
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9197,7 +9197,7 @@ check_acl_lflows 1 1 1 1
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Add one more port from the two switches to PG1 and PG2])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg1 sw1.1 sw2.1 sw1.3 sw2.3 \
> -- pg-set-ports pg2 sw1.2 sw2.2 sw1.3 sw2.3
> @@ -9208,7 +9208,7 @@ check_column "sw2.2 sw2.3" sb:Port_Group ports name="${sw2_key}_pg2"
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9216,7 +9216,7 @@ Node: northd
> ])
> dnl We did not change the set of switches a pg is applied to, there should be
> dnl no recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 0
> - compute: 1
> @@ -9224,7 +9224,7 @@ Node: port_group
> ])
> dnl We did not change the set of switches a pg is applied to, there should be
> dnl no recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 0
> - compute: 1
> @@ -9235,7 +9235,7 @@ check_acl_lflows 1 1 1 1
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Remove the last port from PG1 and PG2])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg1 sw1.1 sw2.1 \
> -- pg-set-ports pg2 sw1.2 sw2.2
> @@ -9246,7 +9246,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9254,7 +9254,7 @@ Node: northd
> ])
> dnl We did not change the set of switches a pg is applied to, there should be
> dnl no recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 0
> - compute: 1
> @@ -9262,7 +9262,7 @@ Node: port_group
> ])
> dnl We did not change the set of switches a pg is applied to, there should be
> dnl no recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 0
> - compute: 1
> @@ -9273,7 +9273,7 @@ check_acl_lflows 1 1 1 1
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Remove the second port from PG2])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb pg-set-ports pg2 sw1.2
> check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1"
> @@ -9283,7 +9283,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9291,7 +9291,7 @@ Node: northd
> ])
> dnl We changed the set of switches a pg is applied to, there should be
> dnl a recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9299,7 +9299,7 @@ Node: port_group
> ])
> dnl We changed the set of switches a pg is applied to, there should be
> dnl a recompute (for ACLs).
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9310,7 +9310,7 @@ check_acl_lflows 1 1 1 0
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Remove the second port from PG1])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb pg-set-ports pg1 sw1.1
> check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg1"], [0], [
> @@ -9321,7 +9321,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9329,7 +9329,7 @@ Node: northd
> ])
> dnl We changed the set of switches a pg is applied to, there should be
> dnl a recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9337,7 +9337,7 @@ Node: port_group
> ])
> dnl We changed the set of switches a pg is applied to, there should be
> dnl a recompute (for ACLs).
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9348,7 +9348,7 @@ check_acl_lflows 1 1 0 0
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Add second port to both PGs])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg1 sw1.1 sw2.1 \
> -- pg-set-ports pg2 sw1.2 sw2.2
> @@ -9359,7 +9359,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9367,7 +9367,7 @@ Node: northd
> ])
> dnl We changed the set of switches a pg is applied to, there should be a
> dnl recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9375,7 +9375,7 @@ Node: port_group
> ])
> dnl We changed the set of switches a pg is applied to, there should be a
> dnl recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9386,7 +9386,7 @@ check_acl_lflows 1 1 1 1
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> AS_BOX([Remove second port from both PGs])
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb \
> -- pg-set-ports pg1 sw1.1 \
> -- pg-set-ports pg2 sw1.2
> @@ -9399,7 +9399,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
>
> dnl The northd node should not recompute, it should handle nb_global update
> dnl though, therefore "compute: 1".
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> Node: northd
> - recompute: 0
> - compute: 1
> @@ -9407,7 +9407,7 @@ Node: northd
> ])
> dnl We changed the set of switches a pg is applied to, there should be a
> dnl recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> Node: port_group
> - recompute: 1
> - compute: 0
> @@ -9415,7 +9415,7 @@ Node: port_group
> ])
> dnl We changed the set of switches a pg is applied to, there should be a
> dnl recompute.
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> Node: lflow
> - recompute: 1
> - compute: 0
> @@ -9724,7 +9724,7 @@ check ovn-sbctl chassis-add hv1 geneve 127.0.0.1 \
>
> check ovn-nbctl --wait=sb sync
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
> ct_no_masked_label: true
> ct_lb_related: true
> mac_binding_timestamp: true
> @@ -9739,7 +9739,7 @@ check ovn-sbctl chassis-add hv2 geneve 127.0.0.2 \
>
> check ovn-nbctl --wait=sb sync
>
> -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
> +AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
> ct_no_masked_label: true
> ct_lb_related: true
> mac_binding_timestamp: true
> @@ -10012,14 +10012,14 @@ check ovn-nbctl ls-add sw
> check ovn-nbctl lsp-add sw p1
>
> check ovn-nbctl --wait=sb sync
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
>
> check ovn-nbctl --wait=sb lsp-set-options p1 foo=bar
> -sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
> +sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
> AT_CHECK([test x$sb_lb_recomp = x0])
>
> check ovn-nbctl --wait=sb lsp-set-type p1 external
> -sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
> +sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
> AT_CHECK([test x$sb_lb_recomp != x0])
>
> AT_CLEANUP
> @@ -10043,13 +10043,13 @@ check_recompute_counter() {
> sync_sb_pb_recomp_min=$5
> sync_sb_pb_recomp_max=$6
>
> - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> AT_CHECK([test $northd_recomp -ge $northd_recomp_min && test $northd_recomp -le $northd_recomp_max])
>
> - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> AT_CHECK([test $lflow_recomp -ge $lflow_recomp_min && test $lflow_recomp -le $lflow_recomp_max])
>
> - sync_sb_pb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_pb recompute)
> + sync_sb_pb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_pb recompute)
> AT_CHECK([test $sync_sb_pb_recomp -ge $sync_sb_pb_recomp_min && test $sync_sb_pb_recomp -le $sync_sb_pb_recomp_max])
> }
>
> @@ -10062,32 +10062,32 @@ ovs-vsctl add-port br-int lsp-pilot -- set interface lsp-pilot external_ids:ifac
> wait_for_ports_up
> check ovn-nbctl --wait=hv sync
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-add ls0 lsp0-0 -- lsp-set-addresses lsp0-0 "unknown"
> ovs-vsctl add-port br-int lsp0-0 -- set interface lsp0-0 external_ids:iface-id=lsp0-0
> wait_for_ports_up
> check ovn-nbctl --wait=hv sync
> check_recompute_counter 4 5 5 5 5 5
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
> ovs-vsctl add-port br-int lsp0-1 -- set interface lsp0-1 external_ids:iface-id=lsp0-1
> wait_for_ports_up
> check ovn-nbctl --wait=hv sync
> check_recompute_counter 0 0 0 0 0 0
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-add ls0 lsp0-2 -- lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:02 192.168.0.12"
> ovs-vsctl add-port br-int lsp0-2 -- set interface lsp0-2 external_ids:iface-id=lsp0-2
> wait_for_ports_up
> check ovn-nbctl --wait=hv sync
> check_recompute_counter 0 0 0 0 0 0
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-del lsp0-1
> check_recompute_counter 0 0 0 0 0 0
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:88 192.168.0.88"
> check_recompute_counter 0 0 0 0 0 0
>
> @@ -10106,7 +10106,7 @@ done
> check_recompute_counter 0 0 0 0 0 0
>
> # No change, no recompute
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb sync
> check_recompute_counter 0 0 0 0 0 0
>
> @@ -10118,7 +10118,7 @@ ovn-nbctl dhcp-options-create 192.168.0.0/24
> CIDR_UUID=$(ovn-nbctl --bare --columns=_uuid find dhcp_options cidr="192.168.0.0/24")
> ovn-nbctl dhcp-options-set-options $CIDR_UUID lease_time=3600 router=192.168.0.1 server_id=192.168.0.1 server_mac=c0:ff:ee:00:00:01 hostname="\"foo\""
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> ovn-nbctl --wait=sb lsp-set-dhcpv4-options lsp0-2 $CIDR_UUID
> check_recompute_counter 0 0 0 0 0 0
>
> @@ -10129,7 +10129,7 @@ check ovn-nbctl lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:01 192.168.0.11 aef0::4
> d1="$(ovn-nbctl create DHCP_Options cidr="aef0\:\:/64" \
> options="\"server_id\"=\"00:00:00:10:00:01\"")"
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> ovn-nbctl --wait=sb lsp-set-dhcpv6-options lsp0-2 ${d1}
> check_recompute_counter 0 0 0 0 0 0
>
> @@ -10146,10 +10146,10 @@ AT_SETUP([LSP incremental processing with only router ports before and after add
> ovn_start
>
> check_recompute_counter() {
> - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> AT_CHECK([test x$northd_recomp = x$1])
>
> - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> AT_CHECK([test x$lflow_recomp = x$2])
> }
>
> @@ -10166,7 +10166,7 @@ ovn-nbctl lb-add lb0 192.168.0.10:80 10.0.0.10:8080
> check ovn-nbctl --wait=sb ls-lb-add ls0 lb0
> CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> # Add a lsp. northd and lflow engine shouldn't recompute even though this is
> # the first lsp added after the router ports.
> check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
> @@ -10175,7 +10175,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
>
> # Delete the lsp. northd and lflow engine shouldn't recompute even though
> # the logical switch is now left with only router ports.
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=hv lsp-del lsp0-1
> check_recompute_counter 0 0
> CHECK_NO_CHANGE_AFTER_RECOMPUTE
> @@ -10213,7 +10213,7 @@ wait_row_count port_binding $(($n + 1))
>
> # Delete multiple ports, and one of them not incrementally processible. This is
> # to trigger partial I-P and then fall back to recompute.
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> args="--wait=hv lsp-del lsp0-foo"
> for i in $(seq $n); do
> args="$args -- lsp-del lsp0-$i"
> @@ -10221,7 +10221,7 @@ done
> check ovn-nbctl $args
>
> wait_row_count Port_Binding 0
> -northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> +northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> echo northd_recomp $northd_recomp
> AT_CHECK([test $northd_recomp -ge 1])
>
> @@ -10234,26 +10234,26 @@ AT_SETUP([ACL/Meter incremental processing - no northd recompute])
> ovn_start
>
> check_recompute_counter() {
> - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> AT_CHECK([test x$northd_recomp = x$1])
>
> - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> AT_CHECK([test x$lflow_recomp = x$2])
>
> - sync_meters_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_meters recompute)
> + sync_meters_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_meters recompute)
> AT_CHECK([test x$sync_meters_recomp = x$3])
> }
>
> check ovn-nbctl --wait=sb ls-add ls
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb meter-add m drop 1 pktps
> check ovn-nbctl --wait=sb acl-add ls from-lport 1 1 allow
> dnl Only triggers recompute of the sync_meters and lflow nodes.
> check_recompute_counter 0 2 2
> CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
>
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> check ovn-nbctl --wait=sb meter-del m
> check ovn-nbctl --wait=sb acl-del ls
> dnl Only triggers recompute of the sync_meters and lflow nodes.
> @@ -10325,7 +10325,7 @@ check ovn-sbctl chassis-add local-ch0 geneve 127.0.0.2
> wait_row_count Chassis 2
>
> remote_chassis_uuid=$(fetch_column Chassis _uuid name=remote-ch0)
> -as northd ovn-appctl -t NORTHD_TYPE vlog/set dbg
> +as northd ovn-appctl -t ovn-northd vlog/set dbg
>
> check ovn-nbctl ls-add sw0
> check ovn-nbctl lsp-add sw0 sw0-r1 -- lsp-set-type sw0-r1 remote
> @@ -10392,7 +10392,7 @@ check_engine_stats() {
> echo "__file__:__line__: Checking engine stats for node $node : recompute - \
> $recompute : compute - $compute"
>
> - node_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats $node)
> + node_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats $node)
> # node_stat will be of this format :
> # - Node: lflow - recompute: 3 - compute: 0 - abort: 0
> node_recompute_ct=$(echo $node_stat | cut -d '-' -f2 | cut -d ':' -f2)
> @@ -10420,7 +10420,7 @@ $recompute : compute - $compute"
> # Test I-P for load balancers.
> # Presently ovn-northd handles I-P for NB LBs in northd_lb_data engine node
> only.
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > > check_engine_stats lb_data norecompute compute > @@ -10429,7 +10429,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1:10.0.0.2 > check_engine_stats lb_data norecompute compute > @@ -10450,7 +10450,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb -- lb-del lb2 -- lb-del lb3 > check_engine_stats lb_data norecompute compute > @@ -10459,7 +10459,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > AT_CHECK([ovn-nbctl --wait=sb \ > -- --id=@hc create Load_Balancer_Health_Check vip="10.0.0.10\:80" \ > @@ -10473,7 +10473,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute > > # Any change to load balancer health check should also result in full recompute > # of northd node (but not northd_lb_data node) > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer_health_check . 
options:foo=bar1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10481,21 +10481,21 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > # Delete the health check from the load balancer. northd engine node should do a full recompute. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear Load_Balancer . health_check > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl ls-add sw0 > check ovn-nbctl --wait=sb lr-add lr0 > ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24 > ovn-nbctl lsp-add sw0 sw0-lr0 > ovn-nbctl lsp-set-type sw0-lr0 router > ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01 > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > ovn-nbctl --wait=sb lsp-set-options sw0-lr0 router-port=lr0-sw0 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10503,7 +10503,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > # Associate lb1 to sw0. 
There should be no recompute of northd engine node > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-add sw0 lb1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10515,7 +10515,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Modify the backend of the lb1 vip > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10524,7 +10524,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Cleanup the vip of lb1. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10533,7 +10533,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Set the vips of lb1 back > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10542,7 +10542,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Add another vip to lb1 > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 
10.0.0.20:80 10.0.0.30:8080 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10551,7 +10551,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Disassociate lb1 from sw0. There should be a full recompute of northd engine node. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-del sw0 lb1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10561,7 +10561,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE(1) > > # Associate lb1 to sw0 and also create a port sw0p1. This should not result in > # full recompute of northd, but should result in full recompute of lflow node. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-add sw0 lb1 -- lsp-add sw0 sw0p1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10569,10 +10569,10 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > # Disassociate lb1 from sw0. There should be a recompute of northd engine node. 
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-del sw0 lb1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10581,7 +10581,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Add lb1 to lr0 and then disassociate > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lr-lb-add lr0 lb1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10590,7 +10590,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Modify the backend of the lb1 vip > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10599,7 +10599,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Cleanup the vip of lb1. 
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10608,7 +10608,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Set the vips of lb1 back > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10617,7 +10617,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Add another vip to lb1 > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10625,7 +10625,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lr-lb-del lr0 lb1 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10634,7 +10634,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Test load balancer group now > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > lbg1_uuid=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg1) > check_engine_stats lb_data norecompute 
compute > check_engine_stats northd norecompute compute > @@ -10642,19 +10642,19 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > lb1_uuid=$(fetch_column nb:Load_Balancer _uuid) > > # Add lb to the lbg1 group > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb add load_balancer_group . load_Balancer $lb1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear load_balancer_group . load_Balancer > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10662,7 +10662,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > # Add back lb to the lbg1 group > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb add load_balancer_group . 
load_Balancer $lb1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10671,7 +10671,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10679,7 +10679,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > # Update lb and this should not result in northd recompute > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer . options:bar=foo > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10687,7 +10687,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > # Modify the backend of the lb1 vip > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10696,7 +10696,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Cleanup the vip of lb1. 
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10705,7 +10705,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Set the vips of lb1 back > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10714,7 +10714,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Add another vip to lb1 > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10722,14 +10722,14 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl add logical_router lr0 load_balancer_group $lbg1_uuid > check_engine_stats lb_data norecompute compute > 
check_engine_stats northd norecompute compute > @@ -10738,7 +10738,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Modify the backend of the lb1 vip > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10747,7 +10747,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Cleanup the vip of lb1. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10756,7 +10756,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Set the vips of lb1 back > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10765,7 +10765,7 @@ check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Add another vip to lb1 > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10773,7 +10773,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as 
northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear logical_router lr0 load_balancer_group > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10781,14 +10781,14 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > # Add back lb group to logical switch and then delete it. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group -- \ > destroy load_balancer_group $lbg1_uuid > check_engine_stats lb_data norecompute compute > @@ -10812,21 +10812,21 @@ lb2_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb2) > lb3_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb3) > lb4_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb4) > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > lbg1_uuid=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg1) > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set load_balancer_group 
. load_balancer="$lb2_uuid,$lb3_uuid,$lb4_uuid" > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set logical_switch sw0 load_balancer_group=$lbg1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10834,7 +10834,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb set logical_router lr1 load_balancer_group=$lbg1_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10842,7 +10842,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-add sw0 lb2 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10850,7 +10850,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-add sw0 lb3 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10858,7 +10858,7 @@ check_engine_stats lflow recompute nocompute > 
check_engine_stats sync_to_sb_lb recompute nocompute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lr-lb-add lr1 lb1 > check ovn-nbctl --wait=sb lr-lb-add lr1 lb2 > check_engine_stats lb_data norecompute compute > @@ -10867,7 +10867,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb ls-lb-del sw0 lb2 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10875,7 +10875,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute nocompute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lr-lb-del lr1 lb2 > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > @@ -10885,7 +10885,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Deleting lb4 should not result in lflow recompute as it is > # only associated with logical switch sw0. > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-del lb4 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10895,7 +10895,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE > > # Deleting lb2 should result in lflow recompute as it is > # associated with logical router lr1 through lb group. 
> -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb lb-del lb2 > check_engine_stats lb_data norecompute compute > check_engine_stats northd norecompute compute > @@ -10903,7 +10903,7 @@ check_engine_stats lflow recompute nocompute > check_engine_stats sync_to_sb_lb recompute compute > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > check ovn-nbctl --wait=sb remove load_balancer_group . load_balancer $lb3_uuid > check_engine_stats lb_data norecompute compute > check_engine_stats northd recompute nocompute > diff --git a/tests/ovn.at b/tests/ovn.at > index e8c79512b2..67c4ccd39b 100644 > --- a/tests/ovn.at > +++ b/tests/ovn.at > @@ -7248,7 +7248,7 @@ compare_dhcp_packets 1 > > # Stop ovn-northd so that we can modify the northd_version. > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > northd_version=$(ovn-sbctl get SB_Global . 
options:northd_internal_version | sed s/\"//g) > echo "northd version = $northd_version" > @@ -8805,7 +8805,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > AT_CLEANUP > ]) > > @@ -9611,8 +9611,8 @@ check test "$c6_tag" != "$c3_tag" > > AS_BOX([restart northd and make sure tag allocation is stable]) > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > -start_daemon NORTHD_TYPE \ > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > +start_daemon ovn-northd \ > --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock \ > --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > @@ -25547,7 +25547,7 @@ check_row_count Service_Monitor 0 > # Let's also be sure the warning message about SCTP load balancers is > # is in the ovn-northd log > > -AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/NORTHD_TYPE.log`]) > +AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/ovn-northd.log`]) > > AT_CLEANUP > ]) > @@ -30791,7 +30791,7 @@ check ovn-nbctl --wait=hv ls-lb-del sw0 lb-ipv6 > # original destination tuple. > # > # ovn-controller should fall back to matching on ct_nw_dst()/ct_tp_dst(). > -as northd ovn-appctl -t NORTHD_TYPE pause > +as northd ovn-appctl -t ovn-northd pause > > check ovn-sbctl \ > -- remove load_balancer lb-ipv4-tcp options hairpin_orig_tuple \ > @@ -30840,7 +30840,7 @@ OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a > ]) > > # Resume ovn-northd. > -as northd ovn-appctl -t NORTHD_TYPE resume > +as northd ovn-appctl -t ovn-northd resume > check ovn-nbctl --wait=hv sync > > as hv2 ovs-vsctl del-port hv2-vif1 > @@ -30959,7 +30959,7 @@ AT_CHECK([grep -c $northd_version hv1/ovn-controller.log], [0], [1 > > # Stop ovn-northd so that we can modify the northd_version. 
> as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > check ovn-sbctl set SB_Global . options:northd_internal_version=foo > > @@ -31101,7 +31101,7 @@ OVS_WAIT_UNTIL([test `ovs-vsctl get Interface lsp2 external_ids:ovn-installed` = > AS_BOX([ovn-controller should not reset Port_Binding.up without northd]) > # Pause northd and clear the "up" field to simulate older ovn-northd > # versions writing to the Southbound DB. > -as northd ovn-appctl -t NORTHD_TYPE pause > +as northd ovn-appctl -t ovn-northd pause > > as hv1 ovn-appctl -t ovn-controller debug/pause > check ovn-sbctl clear Port_Binding lsp1 up > @@ -31116,7 +31116,7 @@ check_column "" Port_Binding up logical_port=lsp1 > > # Once northd should explicitly set the Port_Binding.up field to 'false' and > # ovn-controller sets it to 'true' as soon as the update is processed. > -as northd ovn-appctl -t NORTHD_TYPE resume > +as northd ovn-appctl -t ovn-northd resume > wait_column "true" Port_Binding up logical_port=lsp1 > wait_column "true" nb:Logical_Switch_Port up name=lsp1 > > @@ -31270,7 +31270,7 @@ check ovn-nbctl lsp-set-addresses sw0-p4 "00:00:00:00:00:04 192.168.47.4" > # Pause ovn-northd. When it is resumed, all the below NB updates > # will be sent in one transaction. > > -check as northd ovn-appctl -t NORTHD_TYPE pause > +check as northd ovn-appctl -t ovn-northd pause > > check ovn-nbctl lsp-add sw0 sw0-p1 > check ovn-nbctl lsp-set-addresses sw0-p1 "00:00:00:00:00:01 192.168.47.1" > @@ -31282,7 +31282,7 @@ check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip4 && ip4.src == > > # resume ovn-northd now. This should result in a single update message > # from SB ovsdb-server to ovn-controller for all the above NB updates. 
-check as northd ovn-appctl -t NORTHD_TYPE resume > +check as northd ovn-appctl -t ovn-northd resume > > AS_BOX([Wait for sw0-p1 and sw0-p2 to be up]) > wait_for_ports_up sw0-p1 sw0-p2 > diff --git a/tests/ovs-macros.at b/tests/ovs-macros.at > index 5b8da2048a..35760cf06b 100644 > --- a/tests/ovs-macros.at > +++ b/tests/ovs-macros.at > @@ -5,13 +5,11 @@ m4_include([m4/compat.m4]) > > dnl Make AT_SETUP automatically do some things for us: > dnl - Run the ovs_init() shell function as the first step in every test. > -dnl - If NORTHD_TYPE is defined, then append it to the test name and > +dnl - If NORTHD_USE_PARALLELIZATION is defined, then append it to the test name and > dnl set it as a shell variable as well. > m4_rename([AT_SETUP], [OVS_AT_SETUP]) > m4_define([AT_SETUP], > - [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_TYPE], [ -- NORTHD_TYPE])[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) > -m4_ifdef([NORTHD_TYPE], [[NORTHD_TYPE]=NORTHD_TYPE > -])dnl > + [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) > m4_ifdef([NORTHD_USE_PARALLELIZATION], [[NORTHD_USE_PARALLELIZATION]=NORTHD_USE_PARALLELIZATION > ])dnl > m4_ifdef([NORTHD_DUMMY_NUMA], [[NORTHD_DUMMY_NUMA]=NORTHD_DUMMY_NUMA > diff --git a/tests/perf-northd.at b/tests/perf-northd.at > index ca115dadc2..18fb209146 100644 > --- a/tests/perf-northd.at > +++ b/tests/perf-northd.at > @@ -60,7 +60,7 @@ m4_define([PARSE_STOPWATCH], [ > # to performance results. 
> # > m4_define([PERF_RECORD_STOPWATCH], [ > - PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/NORTHD_TYPE stopwatch/show $1 | PARSE_STOPWATCH($2)`]) > + PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/ovn-northd stopwatch/show $1 | PARSE_STOPWATCH($2)`]) > ]) > > # PERF_RECORD() > diff --git a/tests/system-common-macros.at b/tests/system-common-macros.at > index 65c4884ea1..4bfc74582c 100644 > --- a/tests/system-common-macros.at > +++ b/tests/system-common-macros.at > @@ -494,7 +494,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > diff --git a/tests/system-ovn-kmod.at b/tests/system-ovn-kmod.at > index a81bae7133..93fd962004 100644 > --- a/tests/system-ovn-kmod.at > +++ b/tests/system-ovn-kmod.at > @@ -507,7 +507,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -801,7 +801,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -947,7 +947,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -1113,7 +1113,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -1263,7 +1263,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > 
OVS_TRAFFIC_VSWITCHD_STOP([" > diff --git a/tests/system-ovn.at b/tests/system-ovn.at > index c454526073..0abf2828a2 100644 > --- a/tests/system-ovn.at > +++ b/tests/system-ovn.at > @@ -171,7 +171,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -351,7 +351,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -463,7 +463,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -575,7 +575,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -797,7 +797,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -1023,7 +1023,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -1337,7 +1337,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -1620,7 +1620,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > 
as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > @@ -1841,7 +1841,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > @@ -1949,7 +1949,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > @@ -2059,7 +2059,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > @@ -2304,7 +2304,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -2399,7 +2399,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -2556,7 +2556,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -2727,7 +2727,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -2900,7 +2900,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3119,7 +3119,7 @@ 
as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3262,7 +3262,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3405,7 +3405,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3586,7 +3586,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3747,7 +3747,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -3930,7 +3930,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4098,7 +4098,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4175,7 +4175,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4341,7 +4341,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) 
> +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4565,7 +4565,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4748,7 +4748,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4851,7 +4851,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -4949,7 +4949,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5195,7 +5195,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5444,7 +5444,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5567,7 +5567,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5687,7 +5687,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port 
patch-.*/d > @@ -5796,7 +5796,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5905,7 +5905,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -5999,7 +5999,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -6173,7 +6173,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -6366,7 +6366,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -6415,7 +6415,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -6506,7 +6506,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > @@ -6742,7 +6742,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > @@ -6893,7 +6893,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > 
-OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > @@ -7022,7 +7022,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -7147,7 +7147,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > @@ -7264,7 +7264,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -7479,7 +7479,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > @@ -7622,7 +7622,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -7764,7 +7764,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -7865,7 +7865,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -7966,7 +7966,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query 
port patch-.*/d > @@ -8066,7 +8066,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8423,7 +8423,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8552,7 +8552,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8616,7 +8616,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8713,7 +8713,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8822,7 +8822,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -8896,7 +8896,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9097,7 +9097,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9247,7 +9247,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd 
> -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9398,7 +9398,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9526,7 +9526,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9677,7 +9677,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9822,7 +9822,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -9933,7 +9933,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10075,7 +10075,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10289,7 +10289,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10434,7 +10434,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > 
OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10599,7 +10599,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10753,7 +10753,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -10889,7 +10889,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11215,7 +11215,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11393,7 +11393,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11535,7 +11535,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11624,7 +11624,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11697,7 +11697,7 @@ as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > @@ -11811,7 +11811,7 @@ 
as ovn-nb > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > as > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > diff --git a/tutorial/ovs-sandbox b/tutorial/ovs-sandbox > index 2219b0e99e..787adb87a7 100755 > --- a/tutorial/ovs-sandbox > +++ b/tutorial/ovs-sandbox > @@ -71,8 +71,6 @@ ovssrcdir= > schema= > installed=false > built=false > -ddlog=false > -ddlog_record=true > ic_sb_schema= > ic_nb_schema= > ovn_rbac=true > @@ -147,8 +145,6 @@ General options: > -S, --schema=FILE use FILE as vswitch.ovsschema > > OVN options: > - --ddlog use ovn-northd-ddlog > - --no-ddlog-record do not record ddlog transactions (for performance) > --no-ovn-rbac disable role-based access control for OVN > --n-northds=NUMBER run NUMBER copies of northd (default: 1) > --n-ics=NUMBER run NUMBER copies of ic (default: 1) > @@ -242,12 +238,6 @@ EOF > --gdb-ovn-controller-vtep) > gdb_ovn_controller_vtep=true > ;; > - --ddlog) > - ddlog=true > - ;; > - --no-ddlog-record | --no-record-ddlog) > - ddlog_record=false > - ;; > --no-ovn-rbac) > ovn_rbac=false > ;; > @@ -680,17 +670,10 @@ for i in $(seq $n_ics); do > done > > northd_args= > -if $ddlog; then > - OVN_NORTHD=ovn-northd-ddlog > -else > - OVN_NORTHD=ovn-northd > -fi > +OVN_NORTHD=ovn-northd > > for i in $(seq $n_northds); do > if [ $i -eq 1 ]; then inst=""; else inst=$i; fi > - if $ddlog && $ddlog_record; then > - northd_args=--ddlog-record=replay$inst.txt > - fi > rungdb $gdb_ovn_northd $gdb_ovn_northd_ex $OVN_NORTHD --detach \ > --no-chdir --pidfile=$OVN_NORTHD$inst.pid -vconsole:off \ > --log-file=$OVN_NORTHD$inst.log -vsyslog:off \ > diff --git a/utilities/ovn-ctl b/utilities/ovn-ctl > index dc8865abf8..876565c801 100755 > --- a/utilities/ovn-ctl > +++ b/utilities/ovn-ctl > @@ -786,7 +786,6 @@ set_defaults () { > OVN_CONTROLLER_WRAPPER= > OVSDB_NB_WRAPPER= > OVSDB_SB_WRAPPER= > - OVN_NORTHD_DDLOG=no > > 
OVSDB_DISABLE_FILE_COLUMN_DIFF=no > > @@ -1031,9 +1030,6 @@ Options: > --db-sb-relay-remote Specifies upstream cluster/server remote for ovsdb relay > --db-sb-relay-use-remote-in-db=no|yes > OVN_Sorthbound db listen on target connection table (default: $DB_SB_RELAY_USE_REMOTE_IN_DB) > - > - --ovn-northd-ddlog=yes|no whether we should run the DDlog version > - of ovn-northd. The default is "no". > -h, --help display this help message > > File location options: > @@ -1212,11 +1208,7 @@ do > esac > done > > -if test X"$OVN_NORTHD_DDLOG" = Xyes; then > - OVN_NORTHD_BIN=ovn-northd-ddlog > -else > - OVN_NORTHD_BIN=ovn-northd > -fi > +OVN_NORTHD_BIN=ovn-northd > > case $command in > start_northd)
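[Editor's sketch, not part of the patch: the tutorial/ovs-sandbox and utilities/ovn-ctl hunks above make the same simplification — the conditional that picked between the two northd implementations collapses into an unconditional assignment. The helper function names below are illustrative only; neither script defines them.]

```shell
#!/bin/sh
# Illustrative model of the selection logic before and after this patch.
# These helper names do not exist in ovn-ctl or ovs-sandbox.

# Before: the binary depended on --ovn-northd-ddlog / $ddlog.
select_northd_before() {
    if test "X$1" = "Xyes"; then
        echo ovn-northd-ddlog
    else
        echo ovn-northd
    fi
}

# After: ovn-northd is the only implementation left, so no branch is needed.
select_northd_after() {
    echo ovn-northd
}

select_northd_before yes    # prints ovn-northd-ddlog
select_northd_before no     # prints ovn-northd
select_northd_after         # prints ovn-northd
```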
On Thu, Dec 7, 2023 at 6:17 AM Dumitru Ceara <dceara@redhat.com> wrote: > > On 12/7/23 12:11, Dumitru Ceara wrote: > > From: Ben Pfaff <blp@ovn.org> > > > > Originally posted at: > > https://patchwork.ozlabs.org/project/ovn/patch/20211007194649.4014965-1-blp@ovn.org/ > > > > Signed-off-by: Ben Pfaff <blp@ovn.org> > > --- > > CC: Han, Mark, Numan Acked-by: Numan Siddique <numans@ovn.org> Thanks Numan > > > NOTE: I (Dumitru) only rebased this on the latest version of the main > > branch. I didn't mark myself as co-author/didn't sign off (yet) because > > I didn't really add anything new to the patch. I will however sign off > > when pushing this to main if it gets Acked by other maintainers. On a > > personal note, I think it's very unfortunate to see the DDlog code go > > but it's not used for a while.. > > > > Changes in V1: > > - rebased on top of latest main branch > > --- > > Documentation/automake.mk | 2 - > > Documentation/intro/install/general.rst | 51 - > > Documentation/topics/debugging-ddlog.rst | 280 - > > Documentation/topics/index.rst | 1 - > > Documentation/tutorials/ddlog-new-feature.rst | 393 - > > Documentation/tutorials/index.rst | 1 - > > NEWS | 1 + > > TODO.rst | 6 - > > acinclude.m4 | 79 - > > configure.ac | 5 - > > lib/ovn-util.c | 37 +- > > lib/ovn-util.h | 5 - > > lib/stopwatch-names.h | 3 - > > m4/ovn.m4 | 16 - > > northd/.gitignore | 2 - > > northd/automake.mk | 111 - > > northd/bitwise.dl | 272 - > > northd/bitwise.rs | 133 - > > northd/copp.dl | 30 - > > northd/helpers.dl | 60 - > > northd/ipam.dl | 499 - > > northd/lrouter.dl | 947 -- > > northd/lswitch.dl | 824 -- > > northd/multicast.dl | 273 - > > northd/ovn-nb.dlopts | 27 - > > northd/ovn-northd-ddlog.c | 1368 --- > > northd/ovn-northd.8.xml | 56 +- > > northd/ovn-sb.dlopts | 34 - > > northd/ovn.dl | 387 - > > northd/ovn.rs | 750 -- > > northd/ovn.toml | 2 - > > northd/ovn_northd.dl | 9105 ----------------- > > northd/ovsdb2ddlog2c | 131 - > > tests/ovn-macros.at | 37 +- > > 
tests/ovn-northd.at | 352 +- > > tests/ovn.at | 24 +- > > tests/ovs-macros.at | 6 +- > > tests/perf-northd.at | 2 +- > > tests/system-common-macros.at | 2 +- > > tests/system-ovn-kmod.at | 10 +- > > tests/system-ovn.at | 152 +- > > tutorial/ovs-sandbox | 19 +- > > utilities/ovn-ctl | 10 +- > > 43 files changed, 296 insertions(+), 16209 deletions(-) > > delete mode 100644 Documentation/topics/debugging-ddlog.rst > > delete mode 100644 Documentation/tutorials/ddlog-new-feature.rst > > delete mode 100644 northd/bitwise.dl > > delete mode 100644 northd/bitwise.rs > > delete mode 100644 northd/copp.dl > > delete mode 100644 northd/helpers.dl > > delete mode 100644 northd/ipam.dl > > delete mode 100644 northd/lrouter.dl > > delete mode 100644 northd/lswitch.dl > > delete mode 100644 northd/multicast.dl > > delete mode 100644 northd/ovn-nb.dlopts > > delete mode 100644 northd/ovn-northd-ddlog.c > > delete mode 100644 northd/ovn-sb.dlopts > > delete mode 100644 northd/ovn.dl > > delete mode 100644 northd/ovn.rs > > delete mode 100644 northd/ovn.toml > > delete mode 100644 northd/ovn_northd.dl > > delete mode 100755 northd/ovsdb2ddlog2c > > > > diff --git a/Documentation/automake.mk b/Documentation/automake.mk > > index 7fcd186cac..b00876737b 100644 > > --- a/Documentation/automake.mk > > +++ b/Documentation/automake.mk > > @@ -24,14 +24,12 @@ DOC_SOURCE = \ > > Documentation/tutorials/images/ovsdb-relay-3.png \ > > Documentation/tutorials/ovn-rbac.rst \ > > Documentation/tutorials/ovn-interconnection.rst \ > > - Documentation/tutorials/ddlog-new-feature.rst \ > > Documentation/topics/index.rst \ > > Documentation/topics/testing.rst \ > > Documentation/topics/high-availability.rst \ > > Documentation/topics/integration.rst \ > > Documentation/topics/ovn-news-2.8.rst \ > > Documentation/topics/role-based-access-control.rst \ > > - Documentation/topics/debugging-ddlog.rst \ > > Documentation/topics/vif-plug-providers/index.rst \ > > 
Documentation/topics/vif-plug-providers/vif-plug-providers.rst \ > > Documentation/howto/index.rst \ > > diff --git a/Documentation/intro/install/general.rst b/Documentation/intro/install/general.rst > > index dd8bf5c2c0..ab62094828 100644 > > --- a/Documentation/intro/install/general.rst > > +++ b/Documentation/intro/install/general.rst > > @@ -102,13 +102,6 @@ need the following software: > > The environment variable OVS_RESOLV_CONF can be used to specify DNS server > > configuration file (the default file on Linux is /etc/resolv.conf). > > > > -- `DDlog <https://github.com/vmware/differential-datalog>`, if you > > - want to build ``ovn-northd-ddlog``, an alternate implementation of > > - ``ovn-northd`` that scales better to large deployments. The NEWS > > - file specifies the right version of DDlog to use with this release. > > - Building with DDlog supports requires Rust to be installed (see > > - https://www.rust-lang.org/tools/install). > > - > > If you are working from a Git tree or snapshot (instead of from a distribution > > tarball), or if you modify the OVN build system or the database > > schema, you will also need the following software: > > @@ -210,37 +203,6 @@ the default database directory, add options as shown here:: > > ``yum install`` or ``rpm -ivh``) and .deb (e.g. via > > ``apt-get install`` or ``dpkg -i``) use the above configure options. > > > > -Use ``--with-ddlog`` to build with DDlog support. To build with > > -DDlog, the build system needs to be able to find the ``ddlog`` and > > -``ovsdb2ddlog`` binaries and the DDlog library directory (the > > -directory that contains ``ddlog_std.dl``). This option supports a > > -few ways to do that: > > - > > - * If binaries are in $PATH, use the library directory as argument, > > - e.g. ``--with-ddlog=$HOME/differential-datalog/lib``. This is > > - suitable if DDlog was installed from source via ``stack install`` or > > - from (hypothetical) distribution packaging. 
> > - > > - The DDlog documentation recommends pointing $DDLOG_HOME to the > > - DDlog source directory. If you did this, so that $DDLOG_HOME/lib > > - is the library directory, you may use ``--with-ddlog`` without an > > - argument. > > - > > - * If the binaries and libraries are in the ``bin`` and ``lib`` > > - subdirectories of an installation directory, use the installation > > - directory as the argument. This is suitable if DDlog was > > - installed from one of the binary tarballs published by the DDlog > > - developers. > > - > > -.. note:: > > - > > - Building with DDLog adds a few minutes to the build because the > > - Rust compiler is slow. Add ``--enable-ddlog-fast-build`` to make > > - this about 2x faster. This disables some Rust compiler > > - optimizations, making a much slower ``ovn-northd-ddlog`` > > - executable, so it should not be used for production builds or for > > - profiling. > > - > > By default, static libraries are built and linked against. If you want to use > > shared libraries instead:: > > > > @@ -418,14 +380,6 @@ An example after install might be:: > > $ ovn-ctl start_northd > > $ ovn-ctl start_controller > > > > -If you built with DDlog support, then you can start > > -``ovn-northd-ddlog`` instead of ``ovn-northd`` by adding > > -``--ovn-northd-ddlog=yes``, e.g.:: > > - > > - $ export PATH=$PATH:/usr/local/share/ovn/scripts > > - $ ovn-ctl --ovn-northd-ddlog=yes start_northd > > - $ ovn-ctl start_controller > > - > > Starting OVN Central services > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > > > @@ -481,11 +435,6 @@ Unix domain socket:: > > > > $ ovn-northd --pidfile --detach --log-file > > > > -If you built with DDlog support, you can start ``ovn-northd-ddlog`` > > -instead, the same way:: > > - > > - $ ovn-northd-ddlog --pidfile --detach --log-file > > - > > Starting OVN Central services in containers > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > > > diff --git a/Documentation/topics/debugging-ddlog.rst 
b/Documentation/topics/debugging-ddlog.rst > > deleted file mode 100644 > > index 2ae72a38ea..0000000000 > > --- a/Documentation/topics/debugging-ddlog.rst > > +++ /dev/null > > @@ -1,280 +0,0 @@ > > -.. > > - Licensed under the Apache License, Version 2.0 (the "License"); you may > > - not use this file except in compliance with the License. You may obtain > > - a copy of the License at > > - > > - http://www.apache.org/licenses/LICENSE-2.0 > > - > > - Unless required by applicable law or agreed to in writing, software > > - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT > > - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the > > - License for the specific language governing permissions and limitations > > - under the License. > > - > > - Convention for heading levels in OVN documentation: > > - > > - ======= Heading 0 (reserved for the title in a document) > > - ------- Heading 1 > > - ~~~~~~~ Heading 2 > > - +++++++ Heading 3 > > - ''''''' Heading 4 > > - > > - Avoid deeper levels because they do not render well. > > - > > -========================================= > > -Debugging the DDlog version of ovn-northd > > -========================================= > > - > > -This document gives some tips for debugging correctness issues in the > > -DDlog implementation of ``ovn-northd``. To keep things conrete, we > > -assume here that a failure occurred in one of the test cases in > > -``ovn-e2e.at``, but the same methodology applies in any other > > -environment. If none of these methods helps, ask for assistance or > > -submit a bug report. > > - > > -Before trying these methods, you may want to check the northd log > > -file, ``tests/testsuite.dir/<test_number>/northd/ovn-northd.log`` for > > -error messages that might explain the failure. 
> > - > > -Compare OVSDB tables generated by DDlog vs C > > --------------------------------------------- > > - > > -The first thing I typically want to check when ``ovn-northd-ddlog`` > > -does not behave as expected is how the OVSDB tables computed by DDlog > > -differ from what the C implementation produces. Fortunately, all the > > -infrastructure needed to do this already exists in OVN. > > - > > -First, let's modify the test script, e.g., ``ovn.at``, to dump the > > -contents of OVSDB right before the failure. The most common issue is > > -a difference between the logical flows generated by the two > > -implementations. To make it easy to compare the generated flows, make > > -sure that the test contains something like this in the right place:: > > - > > - ovn-sbctl dump-flows > sbflows > > - AT_CAPTURE_FILE([sbflows]) > > - > > -The first line above dumps the OVN logical flow table to a file named > > -``sbflows``. The second line ensures that, if the test fails, > > -``sbflows`` gets logged to ``testsuite.log``. That is not particularly > > -useful for us right now, but it means that if someone later submits a > > -bug report, that's one more piece of data that we don't have to ask > > -them to submit along with it. > > - > > -Next, we want to run the test twice, with the C and DDlog versions of > > -northd, e.g., ``make check -j6 TESTSUITEFLAGS="-d 111 112"`` if 111 > > -and 112 are the C and DDlog versions of the same test. The ``-d`` in > > -this command line makes the test driver keep test directories around > > -even for tests that succeed, since by default it deletes them. > > - > > -Now you can look at ``sbflows`` in each test log directory. The > > -``ovn-northd-ddlog`` developers have gone to some trouble to make the > > -DDlog flows as similar as possible to the C ones, right down to white > > -space and other formatting. Thus, the DDlog output is often identical > > -to C aside from logical datapath UUIDs.
> > - > > -Usually, this means that one can get informative results by running > > -``diff``, e.g.:: > > - > > - diff -u tests/testsuite.dir/111/sbflows tests/testsuite.dir/112/sbflows > > - > > -Running the input through the ``uuidfilt`` utility from OVS will > > -generally get rid of the logical datapath UUID differences as well:: > > - > > - diff -u <(uuidfilt tests/testsuite.dir/111/sbflows) <(uuidfilt tests/testsuite.dir/112/sbflows) > > - > > -If there are nontrivial differences, this often identifies your bug. > > - > > -Often, once you have identified the difference between the two OVSDB > > -dumps, this will immediately lead you to the root cause of the bug, > > -but if you are not this lucky then the next method may help. > > - > > -Record and replay DDlog execution > > ---------------------------------- > > - > > -DDlog offers a way to record all input table updates throughout the > > -execution of northd and replay them against DDlog running as a > > -standalone executable without all other OVN components. This has two > > -advantages. First, this allows one to easily tweak the inputs, e.g. > > -to simplify the test scenario. Second, the recorded execution can be > > -easily replayed anywhere without having to reproduce your OVN setup. > > - > > -Use the ``--ddlog-record`` option to record updates, > > -e.g. ``--ddlog-record=replay.dat`` to record to ``replay.dat``. > > -(OVN's built-in tests automatically do this.) The file contains the > > -log of transactions in the DDlog command format (see > > -https://github.com/vmware/differential-datalog/blob/master/doc/command_reference/command_reference.md). > > - > > -To replay the log, you will need the standalone DDlog executable. By > > -default, the build system does not compile this program, because it > > -increases the already long Rust compilation time. To build it, add > > -``NORTHD_CLI=1`` to the ``make`` command line, e.g. ``make > > -NORTHD_CLI=1``.
> > - > > -You can modify the log before replaying it, e.g., adding ``dump > > -<table>`` commands to dump the contents of relations at various points > > -during execution. The <table> name must be fully qualified based on > > -the file in which it is declared, e.g. ``OVN_Southbound::<table>`` for > > -southbound tables or ``lrouter::<table>`` for ``lrouter.dl``. You > > -can also use ``dump`` without an argument to dump the contents of all > > -tables. > > - > > -The following command replays the log generated by OVN test number > > -112 and dumps the output of DDlog to ``replay.dump``:: > > - > > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/112/northd/replay.dat > replay.dump > > - > > -Or, to dump just the table contents following the run, without having > > -to edit ``replay.dat``:: > > - > > - (cat tests/testsuite.dir/112/northd/replay.dat; echo 'dump;') | northd/ovn_northd_ddlog/target/release/ovn_northd_cli --no-delta --no-init-snapshot > replay.dump > > - > > -Depending on whether and how you installed OVS and OVN, you might need > > -to point ``LD_LIBRARY_PATH`` to library build directories to get the > > -CLI to run, e.g.:: > > - > > - export LD_LIBRARY_PATH=$HOME/ovn/_build/lib/.libs:$HOME/ovs/_build/lib/.libs > > - > > -.. note:: > > - > > - The replay output may be less informative than you expect because > > - DDlog does not, by default, keep around enough information to > > - include input relations and intermediate relations in the output. > > - These relations are often critical to understanding what is going > > - on. To include them, add the options > > - ``--output-internal-relations --output-input-relations=In_`` to > > - ``DDLOG_EXTRA_FLAGS`` for building ``ovn-northd-ddlog``.
For > > - example, ``configure`` as:: > > - > > - ./configure DDLOG_EXTRA_FLAGS='--output-internal-relations --output-input-relations=In_' > > - > > -Debugging by Logging > > --------------------- > > - > > -One limitation of the previous method is that it allows one to inspect > > -inputs and outputs of a rule, but not the (sometimes fairly > > -complicated) computation that goes on inside the rule. You can of > > -course break up the rule into several rules and dump the intermediate > > -outputs. > > - > > -There are at least two alternatives for generating log messages. > > -First, you can write rules to add strings to the Warning relation > > -declared in ``ovn_northd.dl``. Code in ``ovn-northd-ddlog.c`` will log > > -any given string in this relation just once, when it is first added to > > -the relation. (If it is removed from the relation and then added back > > -later, it will be logged again.) > > - > > -Second, you can call the ``warn()`` function declared in > > -``ovn.dl`` from a DDlog rule. It's not straightforward to know > > -exactly when this function will be called, like it would be in an > > -imperative language like C, since DDlog is a declarative language > > -where the user doesn't directly control when rules are triggered. You > > -might, for example, see the rule being triggered multiple times with > > -the same input. Nevertheless, this debugging technique is useful in > > -practice. > > - > > -You will find many examples of the use of Warning and ``warn`` in > > -``ovn_northd.dl``, where it is frequently used to report non-critical > > -errors. > > - > > -Debugging panics > > ----------------- > > - > > -**TODO**: update these instructions as DDlog's internal handling of panics > > -is improved. > > - > > -DDlog is a safe language, so DDlog programs normally do not crash, > > -except for the following three cases: > > - > > -- A panic in a Rust function imported to DDlog as ``extern function``.
> > - > > -- A panic in a C function imported to DDlog as ``extern function``. > > - > > -- A bug in the DDlog runtime or libraries. > > - > > -Below we walk through the steps involved in debugging such failures. > > -In this scenario, there is an array-index-out-of-bounds error in the > > -``ovn_scan_static_dynamic_ip6()`` function, which is written in Rust > > -and imported to DDlog as an ``extern function``. When invoked from a > > -DDlog rule, this function causes a panic in one of DDlog worker > > -threads. > > - > > -**Step 1: Check for error messages in the northd log.** A panic can > > -generally lead to unpredictable outcomes, so one cannot count on a > > -clean error message showing up in the log (Other outcomes include > > -crashing the entire process and even deadlocks. We are working to > > -eliminate the latter possibility). In this case we are lucky to > > -observe a bunch of error messages like the following in the ``northd`` > > -log: > > - > > - ``2019-09-23T16:23:24.549Z|00011|ovn_northd|ERR|ddlog_transaction_commit(): > > - error: failed to receive flush ack message from timely dataflow > > - thread`` > > - > > -These messages are telling us that something is broken inside the > > -DDlog runtime. > > - > > -**Step 2: Record and replay the failing scenario.** We use DDlog's > > -record/replay capabilities (see above) to capture the faulty scenario. > > -We replay the recorded trace:: > > - > > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat > > - > > -This generates a bunch of output ending with:: > > - > > - thread 'worker thread 2' panicked at 'index out of bounds: the len is 1 but the index is 1', /rustc/eae3437dfe991621e8afdc82734f4a172d7ddf9b/src/libcore/slice/mod.rs:2681:10 > > - note: run with RUST_BACKTRACE=1 environment variable to display a backtrace. 
> > - > > -We re-run the CLI with backtrace enabled (as suggested by the > > -error message):: > > - > > - RUST_BACKTRACE=1 northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat > > - > > -This finally yields the following stack trace, which suggests an array > > -bounds violation in ``ovn_scan_static_dynamic_ip6``:: > > - > > - 0: backtrace::backtrace::libunwind::trace > > - at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29 10: core::panicking::panic_bounds_check > > - at src/libcore/panicking.rs:61 > > - [SKIPPED] > > - 11: ovn_northd_ddlog::__ovn::ovn_scan_static_dynamic_ip6 > > - 12: ovn_northd_ddlog::prog::__f > > - [SKIPPED] > > - > > -Finally, looking at the source code of > > -``ovn_scan_static_dynamic_ip6``, we identify the following line, > > -containing an unsafe array indexing operator, as the culprit:: > > - > > - ovn_ipv6_parse(&f[1].to_string()) > > - > > -Clean build > > -~~~~~~~~~~~ > > - > > -Occasionally it's desirable to do a full and complete build of the > > -DDlog-generated code. To trigger that, delete the generated > > -``ovn_northd_ddlog`` directory and the ``ddlog.stamp`` witness file, > > -like this:: > > - > > - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp > > - > > -or:: > > - > > - make clean-ddlog > > - > > -Submitting a bug report > > ------------------------ > > - > > -If you are having trouble with DDlog and the above methods do not > > -help, please submit a bug report to ``bugs@openvswitch.org``, CC > > -``ryzhyk@gmail.com``. > > - > > -In addition to the problem description, please provide as many of the > > -following as possible: > > - > > -- Are you running with the right DDlog for the version of OVN? OVN > > - and DDlog are both evolving and OVN needs to build against a > > - specific version of DDlog.
> > - > > -- ``replay.dat`` file generated as described above > > - > > -- Logs: ``ovn-northd.log`` and ``testsuite.log``, if you are running > > - the OVN test suite > > diff --git a/Documentation/topics/index.rst b/Documentation/topics/index.rst > > index e9e49c7426..55bb919c09 100644 > > --- a/Documentation/topics/index.rst > > +++ b/Documentation/topics/index.rst > > @@ -36,7 +36,6 @@ OVN > > .. toctree:: > > :maxdepth: 2 > > > > - debugging-ddlog > > integration.rst > > high-availability > > role-based-access-control > > diff --git a/Documentation/tutorials/ddlog-new-feature.rst b/Documentation/tutorials/ddlog-new-feature.rst > > deleted file mode 100644 > > index de66ca5ada..0000000000 > > --- a/Documentation/tutorials/ddlog-new-feature.rst > > +++ /dev/null > > @@ -1,393 +0,0 @@ > > -.. > > - Licensed under the Apache License, Version 2.0 (the "License"); you may > > - not use this file except in compliance with the License. You may obtain > > - a copy of the License at > > - > > - http://www.apache.org/licenses/LICENSE-2.0 > > - > > - Unless required by applicable law or agreed to in writing, software > > - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT > > - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the > > - License for the specific language governing permissions and limitations > > - under the License. > > - > > - Convention for heading levels in OVN documentation: > > - > > - ======= Heading 0 (reserved for the title in a document) > > - ------- Heading 1 > > - ~~~~~~~ Heading 2 > > - +++++++ Heading 3 > > - ''''''' Heading 4 > > - > > - Avoid deeper levels because they do not render well. 
> > - > > -=========================================================== > > -Adding a new OVN feature to the DDlog version of ovn-northd > > -=========================================================== > > - > > -This document describes the usual steps an OVN developer should go > > -through when adding a new feature to ``ovn-northd-ddlog``. In order to > > -make things less abstract we will use the IP Multicast > > -``ovn-northd-ddlog`` implementation as an example. Even though the > > -document is structured as a tutorial there might still exist > > -feature-specific aspects that are not covered here. > > - > > -Overview > > --------- > > - > > -DDlog is a dataflow system: it receives data from a data source (a set > > -of "input relations"), processes it through "intermediate relations" > > -according to the rules specified in the DDlog program, and sends the > > -processed "output relations" to a data sink. In OVN, the input > > -relations primarily come from the OVN Northbound database and the > > -output relations primarily go to the OVN Southbound database. The > > -process looks like this:: > > - > > - from NBDB +----------+ +-----------------+ +-----------+ to SBDB > > - ---------->|Input rels|-->|Intermediate rels|-->|Output rels|----------> > > - +----------+ +-----------------+ +-----------+ > > - > > -Adding a new feature to ``ovn-northd-ddlog`` usually involves the > > -following steps: > > - > > -1. Update northbound and/or southbound OVSDB schemas. > > - > > -2. Configure DDlog/OVSDB bindings. > > - > > -3. Define intermediate DDlog relations and rules to compute them. > > - > > -4. Write rules to update output relations. > > - > > -5. Generate ``Logical_Flow``s and/or other forwarding records (e.g., > > - ``Multicast_Group``) that will control the dataplane operations. > > - > > -Update NB and/or SB OVSDB schemas > > ---------------------------------- > > - > > -This step is no different from the normal development flow in C. 
> > - > > -Most of the time a developer chooses between two ways of configuring > > -a new feature: > > - > > -1. Adding a set of columns to tables in the NB and/or SB database (or > > - adding key-value pairs to existing columns). > > - > > -2. Adding new tables to the NB and/or SB database. > > - > > -Looking at IP Multicast, there are two ``OVN Northbound`` tables where > > -configuration information is stored: > > - > > -- ``Logical_Switch``, column ``other_config``, keys ``mcast_*``. > > - > > -- ``Logical_Router``, column ``options``, keys ``mcast_*``. > > - > > -These tables become inputs to the DDlog pipeline. > > - > > -In addition, we add a new table ``IP_Multicast`` to the SB database. > > -DDlog will update this table, that is, ``IP_Multicast`` receives > > -output from the above pipeline. > > - > > -Configuring DDlog/OVSDB bindings > > --------------------------------- > > - > > -Configuring ``northd/automake.mk`` > > -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > - > > -The OVN build process uses DDlog's ``ovsdb2ddlog`` utility to parse > > -``ovn-nb.ovsschema`` and ``ovn-sb.ovsschema`` and then automatically > > -populate ``OVN_Northbound.dl`` and ``OVN_Southbound.dl``. For each > > -OVN Northbound and Southbound table, it generates one or more > > -corresponding DDlog relations. > > - > > -We need to supply ``ovsdb2ddlog`` with some information that it can't > > -infer from the OVSDB schemas. This information must be specified as > > -``ovsdb2ddlog`` arguments, which are read from > > -``northd/ovn-nb.dlopts`` and ``northd/ovn-sb.dlopts``. > > - > > -The main choice for each new table is whether it is used for output. > > -Output tables can also be used for input, but the converse is not > > -true. If the table is used for output at all, we add ``-o <table>`` > > -to the option file. Our new table ``IP_Multicast`` is an output > > -table, so we add ``-o IP_Multicast`` to ``ovn-sb.dlopts``.
> > - > > -For input-only tables, ``ovsdb2ddlog`` generates a DDlog input > > -relation with the same name. For output tables, it generates this > > -relation plus an output relation named ``Out_<table>``. Thus, > > -``OVN_Southbound.dl`` has two relations for ``IP_Multicast``:: > > - > > - input relation IP_Multicast ( > > - _uuid: uuid, > > - datapath: string, > > - enabled: Set<bool>, > > - querier: Set<bool> > > - ) > > - output relation Out_IP_Multicast ( > > - _uuid: uuid, > > - datapath: string, > > - enabled: Set<bool>, > > - querier: Set<bool> > > - ) > > - > > -For an output table, consider whether only some of the columns are > > -used for output, that is, some of the columns are effectively > > -input-only. This is common in OVN for OVSDB columns that are managed > > -externally (e.g. by a CMS). For each input-only column, we add ``--ro > > -<table>.<column>``. Alternatively, if most of the columns are > > -input-only but a few are output columns, add ``--rw <table>.<column>`` > > -for each of the output columns. In our case, all of the columns are > > -used for output, so we do not need to add anything. > > - > > -Finally, in some cases ``ovn-northd-ddlog`` shouldn't change values in > > -a column. One such case is the ``seq_no`` column in the > > -``IP_Multicast`` table. To do that, we need to instruct ``ovsdb2ddlog`` > > -to treat the column as read-only by using the ``--ro`` switch. > > - > > -``ovsdb2ddlog`` generates a number of additional DDlog relations, for > > -use by auto-generated OVSDB adapter logic. These are irrelevant to > > -most DDlog developers, although sometimes they can be handy for > > -debugging. See the appendix_ for details. > > - > > -Define intermediate DDlog relations and rules to compute them. > > --------------------------------------------------------------- > > - > > -Obviously there will be a one-to-one relationship between logical > > -switches/routers and IP multicast configuration.
One way to represent > > -this relationship is to create multicast configuration DDlog relations > > -to be referenced by ``&Switch`` and ``&Router`` DDlog records:: > > - > > - /* IP Multicast per switch configuration. */ > > - relation &McastSwitchCfg( > > - datapath : uuid, > > - enabled : bool, > > - querier : bool > > - ) > > - > > - &McastSwitchCfg( > > - .datapath = ls_uuid, > > - .enabled = map_get_bool_def(other_config, "mcast_snoop", false), > > - .querier = map_get_bool_def(other_config, "mcast_querier", true)) :- > > - nb.Logical_Switch(._uuid = ls_uuid, > > - .other_config = other_config). > > - > > -Then reference these relations in ``&Switch`` and ``&Router``. For > > -example, in ``lswitch.dl``, the ``&Switch`` relation definition now > > -contains:: > > - > > - relation &Switch( > > - ls: nb.Logical_Switch, > > - [...] > > - mcast_cfg: Ref<McastSwitchCfg> > > - ) > > - > > -It is populated by the following rule, which references the correct > > -``McastSwitchCfg`` based on the logical switch uuid:: > > - > > - &Switch(.ls = ls, > > - [...] > > - .mcast_cfg = mcast_cfg) :- > > - nb.Logical_Switch[ls], > > - [...] > > - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid). > > - > > -Build state based on information dynamically updated by ``ovn-controller`` > > -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > - > > -Some OVN features rely on information learned by ``ovn-controller`` to > > -generate ``Logical_Flow`` or other records that control the dataplane. > > -In the case of IP Multicast, ``ovn-controller`` uses IGMP to learn > > -multicast groups that are joined by hosts. > > - > > -Each ``ovn-controller`` maintains its own set of records to avoid > > -ownership and concurrency issues with other controllers.
If two hosts that > > -are connected to the same logical switch but reside on different > > -hypervisors (different ``ovn-controller`` processes) join the same > > -multicast group G, each of the controllers will create an > > -``IGMP_Group`` record in the ``OVN Southbound`` database which will > > -contain a set of ports to which the interested hosts are connected. > > - > > -At this point ``ovn-northd-ddlog`` needs to aggregate the per-chassis > > -IGMP records to generate a single ``Logical_Flow`` for group G. > > -Moreover, the ports on which the hosts are connected are represented > > -as references to ``Port_Binding`` records in the database. These also > > -need to be translated to ``&SwitchPort`` DDlog relations. The > > -corresponding DDlog operations that need to be performed are: > > - > > -- Flatten the ``<IGMP group, ports>`` mapping in order to be able to > > - do the translation from ``Port_Binding`` to ``&SwitchPort``. For > > - each ``IGMP_Group`` record in the ``OVN Southbound`` database > > - generate an individual record of type ``IgmpSwitchGroupPort`` for > > - each ``Port_Binding`` in the set of ports that joined the > > - group. Also, translate the ``Port_Binding`` uuid to the > > - corresponding ``Logical_Switch_Port`` uuid:: > > - > > - relation IgmpSwitchGroupPort( > > - address: string, > > - switch : Ref<Switch>, > > - port : uuid > > - ) > > - > > - IgmpSwitchGroupPort(address, switch, lsp_uuid) :- > > - sb::IGMP_Group(.address = address, .datapath = igmp_dp_set, > > - .ports = pb_ports), > > - var pb_port_uuid = FlatMap(pb_ports), > > - sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name), > > - &SwitchPort( > > - .lsp = nb.Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name}, > > - .sw = switch). 
> > - > > -- Aggregate the flattened IgmpSwitchGroupPort (implicitly from all > > - ``ovn-controller`` instances) grouping by address and logical > > - switch:: > > - > > - relation IgmpSwitchMulticastGroup( > > - address: string, > > - switch : Ref<Switch>, > > - ports : Set<uuid> > > - ) > > - > > - IgmpSwitchMulticastGroup(address, switch, ports) :- > > - IgmpSwitchGroupPort(address, switch, port), > > - var ports = port.group_by((address, switch)).to_set(). > > - > > -At this point we have all the relevant feature configuration > > -information stored in DDlog relations in ``ovn-northd-ddlog`` memory. > > - > > -Pitfalls of projections > > -~~~~~~~~~~~~~~~~~~~~~~~ > > - > > -A projection is a join that uses only some of the data in a record. > > -When the fields that are used have duplicates, the result can be many > > -"copies" of a record, which DDlog represents internally with an > > -integer "weight" that counts the number of copies. We don't have a > > -projection with duplicates in this example, but `lswitch.dl` has many > > -of them, such as this one:: > > - > > - relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > > - > > - LogicalSwitchHasACLs(ls, true) :- > > - LogicalSwitchACL(ls, _). > > - > > - LogicalSwitchHasACLs(ls, false) :- > > - nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchACL(ls, _). > > - > > -When multiple projections get joined together, the weights can > > -overflow, which causes DDlog to malfunction. The solution is to make > > -the relation an output relation, which causes DDlog to filter it > > -through a "distinct" operator that reduces the weights to 1. Thus, > > -`LogicalSwitchHasACLs` is actually implemented this way:: > > - > > - output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > > - > > -For more information, see `Avoiding weight overflow > > -<https://github.com/vmware/differential-datalog/blob/master/doc/tutorial/tutorial.md#avoiding-weight-overflow>`_ > > -in the DDlog tutorial.
> > - > > -Write rules to update output relations > > --------------------------------------- > > - > > -The developer updates output tables by writing rules that generate > > -``Out_*`` relations. For IP Multicast this means:: > > - > > - /* IP_Multicast table (only applicable for Switches). */ > > - sb::Out_IP_Multicast(._uuid = hash128(cfg.datapath), > > - .datapath = cfg.datapath, > > - .enabled = set_singleton(cfg.enabled), > > - .querier = set_singleton(cfg.querier)) :- > > - &McastSwitchCfg[cfg]. > > - > > -.. note:: ``OVN_Southbound.dl`` also contains an ``IP_Multicast`` > > - relation with ``input`` qualifier. This relation stores the > > - current snapshot of the OVSDB table and cannot be written to. > > - > > -Generate ``Logical_Flow`` and/or other forwarding records > > ---------------------------------------------------------- > > - > > -At this point we have defined all DDlog relations required to generate > > -``Logical_Flow``s. All we have to do is write the rules to do so. > > -For each ``IgmpSwitchMulticastGroup`` we generate a ``Flow`` that has > > -as action ``"outport = <Multicast_Group>; output;"``:: > > - > > - /* Ingress table 17: Add IP multicast flows learnt from IGMP (priority 90). */ > > - for (IgmpSwitchMulticastGroup(.address = address, .switch = &sw)) { > > - Flow(.logical_datapath = sw.dpname, > > - .stage = switch_stage(IN, L2_LKUP), > > - .priority = 90, > > - .__match = "eth.mcast && ip4 && ip4.dst == ${address}", > > - .actions = "outport = \"${address}\"; output;", > > - .external_ids = map_empty()) > > - } > > - > > -In some cases generating a logical flow is not enough. For IGMP we > > -also need to maintain OVN southbound ``Multicast_Group`` records, > > -one per IGMP group storing the corresponding ``Port_Binding`` uuids of > > -ports where multicast traffic should be sent. This is also relatively > > -straightforward:: > > - > > - /* Create a multicast group for each IGMP group learned by a Switch. 
> > - * 'tunnel_key' == 0 triggers an ID allocation later. > > - */ > > - sb::Out_Multicast_Group (.datapath = switch.dpname, > > - .name = address, > > - .tunnel_key = 0, > > - .ports = set_map_uuid2name(port_ids)) :- > > - IgmpSwitchMulticastGroup(address, &switch, port_ids). > > - > > -We must also define DDlog relations that will allocate ``tunnel_key`` > > -values. There are two cases: tunnel keys for records that already > > -existed in the database are preserved to implement stable id > > -allocation; new multicast groups need new keys. This kind of > > -allocation can be tricky, especially for new users of DDlog. OVN > > -contains multiple instances of allocation, so it's probably worth > > -reading through the existing cases and following their pattern, and, > > -if it's still tricky, asking for assistance. > > - > > -Appendix A. Additional relations generated by ``ovsdb2ddlog`` > > -------------------------------------------------------------- > > - > > -.. _appendix: > > - > > -ovsdb2ddlog generates some extra relations to manage communication > > -with the OVSDB server. It generates records in the following > > -relations when rows in OVSDB output tables need to be added, deleted, > > -or updated. > > - > > -In the steady state, when everything is working well, a given record > > -stays in any one of these relations only briefly: just long enough for > > -``ovn-northd-ddlog`` to send a transaction to the OVSDB server. When > > -the OVSDB server applies the update and sends an acknowledgement, this > > -ordinarily means that these relations become empty, because there are > > -no longer any further changes to send. > > - > > -Thus, records that persist in one of these relations are a sign of a
One example of such a problem is the database server > > -rejecting the transactions sent by ``ovn-northd-ddlog``, which might > > -happen if, for example, a bug in a ``.dl`` file would cause some OVSDB > > -constraint or relational integrity rule to be violated. (Such a > > -problem can often be diagnosed by looking in the OVSDB server's log.) > > - > > -- ``DeltaPlus_IP_Multicast`` used by the DDlog program to track new > > - records that are not yet added to the database:: > > - > > - output relation DeltaPlus_IP_Multicast ( > > - datapath: uuid_or_string_t, > > - enabled: Set<bool>, > > - querier: Set<bool> > > - ) > > - > > -- ``DeltaMinus_IP_Multicast`` used by the DDlog program to track > > - records that are no longer needed in the database and need to be > > - removed:: > > - > > - output relation DeltaMinus_IP_Multicast ( > > - _uuid: uuid > > - ) > > - > > -- ``Update_IP_Multicast`` used by the DDlog program to track records > > - whose fields need to be updated in the database:: > > - > > - output relation Update_IP_Multicast ( > > - _uuid: uuid, > > - enabled: Set<bool>, > > - querier: Set<bool> > > - ) > > diff --git a/Documentation/tutorials/index.rst b/Documentation/tutorials/index.rst > > index de90b780a5..0b7f52d0b7 100644 > > --- a/Documentation/tutorials/index.rst > > +++ b/Documentation/tutorials/index.rst > > @@ -45,4 +45,3 @@ vSwitch. > > ovn-ovsdb-relay > > ovn-ipsec > > ovn-interconnection > > - ddlog-new-feature > > diff --git a/NEWS b/NEWS > > index acb3b854fb..b3f40f4f91 100644 > > --- a/NEWS > > +++ b/NEWS > > @@ -10,6 +10,7 @@ Post v23.09.0 > > external_ids:ovn-openflow-probe-interval configuration option for > > ovn-controller no longer matters and is ignored. > > - Enable PMTU discovery on geneve tunnels for E/W traffic. > > + - ovn-northd-ddlog has been removed. 
> > > > OVN v23.09.0 - 15 Sep 2023 > > -------------------------- > > diff --git a/TODO.rst b/TODO.rst > > index b790a9fadf..09b4be54d3 100644 > > --- a/TODO.rst > > +++ b/TODO.rst > > @@ -86,12 +86,6 @@ OVN To-do List > > > > * Packaging for Debian. > > > > -* ovn-northd-ddlog: Calls to warn() and err() from DDlog code would be > > - better refactored to use the Warning[] relation (and introduce an > > - Error[] relation once we want to issue some errors that way). This > > - would be easier with some improvements in DDlog to more easily > > - output to multiple relations from a single production. > > - > > * IP Multicast Relay > > > > * When connecting bridged logical switches (localnet) to logical routers > > diff --git a/acinclude.m4 b/acinclude.m4 > > index ad3ee9fdf4..1a80dfaa78 100644 > > --- a/acinclude.m4 > > +++ b/acinclude.m4 > > @@ -42,85 +42,6 @@ AC_DEFUN([OVS_ENABLE_WERROR], > > fi > > AC_SUBST([SPARSE_WERROR])]) > > > > -dnl OVS_CHECK_DDLOG([VERSION]) > > -dnl > > -dnl Configure ddlog source tree, checking for the given DDlog VERSION. > > -dnl VERSION should be a major and minor, e.g. 0.36, which will accept > > -dnl 0.36.0, 0.36.1, and so on. Omit VERSION to accept any version of > > -dnl ddlog (which is probably only useful for developers who are trying > > -dnl different versions, since OVN is currently bound to a particular > > -dnl DDlog version). 
> > -AC_DEFUN([OVS_CHECK_DDLOG], [ > > - AC_ARG_VAR([DDLOG_HOME], [Root of the DDlog installation]) > > - AC_ARG_WITH( > > - [ddlog], > > - [AS_HELP_STRING([--with-ddlog[[=INSTALLDIR|LIBDIR]]], [Enables DDlog])], > > - [DDLOG_PATH=$PATH > > - if test "$withval" = yes; then > > - # --with-ddlog: $DDLOG_HOME must be set > > - if test -z "$DDLOG_HOME"; then > > - AC_MSG_ERROR([To build with DDlog, specify the DDlog install or library directory on --with-ddlog or in \$DDLOG_HOME]) > > - fi > > - DDLOGLIBDIR=$DDLOG_HOME/lib > > - test -d "$DDLOG_HOME/bin" && DDLOG_PATH=$DDLOG_HOME/bin > > - elif test -f "$withval/lib/ddlog_std.dl"; then > > - # --with-ddlog=INSTALLDIR > > - DDLOGLIBDIR=$withval/lib > > - test -d "$withval/bin" && DDLOG_PATH=$withval/bin > > - elif test -f "$withval/ddlog_std.dl"; then > > - # --with-ddlog=LIBDIR > > - DDLOGLIBDIR=$withval/lib > > - else > > - AC_MSG_ERROR([$withval does not contain ddlog_std.dl or lib/ddlog_std.dl]) > > - fi], > > - [DDLOGLIBDIR=no > > - DDLOG_PATH=no]) > > - > > - AC_MSG_CHECKING([for DDlog library directory]) > > - AC_MSG_RESULT([$DDLOGLIBDIR]) > > - if test "$DDLOGLIBDIR" != no; then > > - AC_ARG_VAR([DDLOG], [path to ddlog binary]) > > - AC_PATH_PROGS([DDLOG], [ddlog], [none], [$DDLOG_PATH]) > > - if test X"$DDLOG" = X"none"; then > > - AC_MSG_ERROR([ddlog is required to build with DDlog]) > > - fi > > - > > - AC_ARG_VAR([OVSDB2DDLOG], [path to ovsdb2ddlog binary]) > > - AC_PATH_PROGS([OVSDB2DDLOG], [ovsdb2ddlog], [none], [$DDLOG_PATH]) > > - if test X"$OVSDB2DDLOG" = X"none"; then > > - AC_MSG_ERROR([ovsdb2ddlog is required to build with DDlog]) > > - fi > > - > > - for tool in "$DDLOG" "$OVSDB2DDLOG"; do > > - AC_MSG_CHECKING([$tool version]) > > - $tool --version >&AS_MESSAGE_LOG_FD 2>&1 > > - tool_version=$($tool --version | sed -n 's/^.* v\([[0-9]][[^ ]]*\).*/\1/p') > > - AC_MSG_RESULT([$tool_version]) > > - m4_if([$1], [], [], [ > > - AS_CASE([$tool_version], > > - [$1 | $1.*], [], > > - [*], 
[AC_MSG_ERROR([DDlog version $1.x is required, but $tool is version $tool_version])])]) > > - done > > - > > - AC_ARG_VAR([CARGO]) > > - AC_CHECK_PROGS([CARGO], [cargo], [none]) > > - if test X"$CARGO" = X"none"; then > > - AC_MSG_ERROR([cargo is required to build with DDlog]) > > - fi > > - > > - AC_ARG_VAR([RUSTC]) > > - AC_CHECK_PROGS([RUSTC], [rustc], [none]) > > - if test X"$RUSTC" = X"none"; then > > - AC_MSG_ERROR([rustc is required to build with DDlog]) > > - fi > > - > > - AC_SUBST([DDLOGLIBDIR]) > > - AC_DEFINE([DDLOG], [1], [Build OVN daemons with ddlog.]) > > - fi > > - > > - AM_CONDITIONAL([DDLOG], [test "$DDLOGLIBDIR" != no]) > > -]) > > - > > dnl Checks for net/if_dl.h. > > dnl > > dnl (We use this as a proxy for checking whether we're building on FreeBSD > > diff --git a/configure.ac b/configure.ac > > index e8a5edbb25..8284800e74 100644 > > --- a/configure.ac > > +++ b/configure.ac > > @@ -132,7 +132,6 @@ OVS_LIBTOOL_VERSIONS > > OVS_CHECK_CXX > > AX_FUNC_POSIX_MEMALIGN > > OVN_CHECK_UNBOUND > > -OVS_CHECK_DDLOG_FAST_BUILD > > > > OVS_CHECK_INCLUDE_NEXT([stdio.h string.h]) > > AC_CONFIG_FILES([lib/libovn.sym]) > > @@ -169,7 +168,6 @@ OVS_CONDITIONAL_CC_OPTION([-Wno-unused-parameter], [HAVE_WNO_UNUSED_PARAMETER]) > > OVS_ENABLE_WERROR > > OVS_ENABLE_SPARSE > > > > -OVS_CHECK_DDLOG([0.47]) > > OVS_CHECK_PRAGMA_MESSAGE > > OVN_CHECK_OVS > > OVN_CHECK_VIF_PLUG_PROVIDER > > @@ -177,9 +175,6 @@ OVN_ENABLE_VIF_PLUG > > OVS_CTAGS_IDENTIFIERS > > AC_SUBST([OVS_CFLAGS]) > > AC_SUBST([OVS_LDFLAGS]) > > -AC_SUBST([DDLOG_EXTRA_FLAGS]) > > -AC_SUBST([DDLOG_EXTRA_RUSTFLAGS]) > > -AC_SUBST([DDLOG_NORTHD_LIB_ONLY]) > > > > AC_SUBST([ovs_srcdir], ['${OVSDIR}']) > > AC_SUBST([ovs_builddir], ['${OVSBUILDDIR}']) > > diff --git a/lib/ovn-util.c b/lib/ovn-util.c > > index 33105202f2..6ef9cac7f2 100644 > > --- a/lib/ovn-util.c > > +++ b/lib/ovn-util.c > > @@ -304,37 +304,27 @@ bool extract_ip_address(const char *address, struct lport_addresses *laddrs) > > bool > > 
extract_lrp_networks(const struct nbrec_logical_router_port *lrp, > > struct lport_addresses *laddrs) > > -{ > > - return extract_lrp_networks__(lrp->mac, lrp->networks, lrp->n_networks, > > - laddrs); > > -} > > - > > -/* Separate out the body of 'extract_lrp_networks()' for use from DDlog, > > - * which does not know the 'nbrec_logical_router_port' type. */ > > -bool > > -extract_lrp_networks__(char *mac, char **networks, size_t n_networks, > > - struct lport_addresses *laddrs) > > { > > memset(laddrs, 0, sizeof *laddrs); > > > > - if (!eth_addr_from_string(mac, &laddrs->ea)) { > > + if (!eth_addr_from_string(lrp->mac, &laddrs->ea)) { > > laddrs->ea = eth_addr_zero; > > return false; > > } > > snprintf(laddrs->ea_s, sizeof laddrs->ea_s, ETH_ADDR_FMT, > > ETH_ADDR_ARGS(laddrs->ea)); > > > > - for (int i = 0; i < n_networks; i++) { > > + for (int i = 0; i < lrp->n_networks; i++) { > > ovs_be32 ip4; > > struct in6_addr ip6; > > unsigned int plen; > > char *error; > > > > - error = ip_parse_cidr(networks[i], &ip4, &plen); > > + error = ip_parse_cidr(lrp->networks[i], &ip4, &plen); > > if (!error) { > > if (!ip4) { > > static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); > > - VLOG_WARN_RL(&rl, "bad 'networks' %s", networks[i]); > > + VLOG_WARN_RL(&rl, "bad 'networks' %s", lrp->networks[i]); > > continue; > > } > > > > @@ -343,13 +333,13 @@ extract_lrp_networks__(char *mac, char **networks, size_t n_networks, > > } > > free(error); > > > > - error = ipv6_parse_cidr(networks[i], &ip6, &plen); > > + error = ipv6_parse_cidr(lrp->networks[i], &ip6, &plen); > > if (!error) { > > add_ipv6_netaddr(laddrs, ip6, plen); > > } else { > > static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); > > VLOG_INFO_RL(&rl, "invalid syntax '%s' in networks", > > - networks[i]); > > + lrp->networks[i]); > > free(error); > > } > > } > > @@ -889,21 +879,6 @@ ovn_get_internal_version(void) > > N_OVNACTS, OVN_INTERNAL_MINOR_VER); > > } > > > > -#ifdef DDLOG > > -/* Callbacks 
used by the ddlog northd code to print warnings and errors. */ > > -void > > -ddlog_warn(const char *msg) > > -{ > > - VLOG_WARN("%s", msg); > > -} > > - > > -void > > -ddlog_err(const char *msg) > > -{ > > - VLOG_ERR("%s", msg); > > -} > > -#endif > > - > > uint32_t > > get_tunnel_type(const char *name) > > { > > diff --git a/lib/ovn-util.h b/lib/ovn-util.h > > index bff50dbde9..aa0b3b2fb4 100644 > > --- a/lib/ovn-util.h > > +++ b/lib/ovn-util.h > > @@ -311,11 +311,6 @@ BUILD_ASSERT_DECL( > > /* The number of tables for the ingress and egress pipelines. */ > > #define LOG_PIPELINE_LEN 29 > > > > -#ifdef DDLOG > > -void ddlog_warn(const char *msg); > > -void ddlog_err(const char *msg); > > -#endif > > - > > static inline uint32_t > > hash_add_in6_addr(uint32_t hash, const struct in6_addr *addr) > > { > > diff --git a/lib/stopwatch-names.h b/lib/stopwatch-names.h > > index 3452cc71cf..4e93c1dc14 100644 > > --- a/lib/stopwatch-names.h > > +++ b/lib/stopwatch-names.h > > @@ -15,9 +15,6 @@ > > #ifndef STOPWATCH_NAMES_H > > #define STOPWATCH_NAMES_H 1 > > > > -/* In order to not duplicate names for stopwatches between ddlog and non-ddlog > > - * we define them in a common header file. > > - */ > > #define NORTHD_LOOP_STOPWATCH_NAME "ovn-northd-loop" > > #define OVNNB_DB_RUN_STOPWATCH_NAME "ovnnb_db_run" > > #define OVNSB_DB_RUN_STOPWATCH_NAME "ovnsb_db_run" > > diff --git a/m4/ovn.m4 b/m4/ovn.m4 > > index 902865c585..ebe4c96123 100644 > > --- a/m4/ovn.m4 > > +++ b/m4/ovn.m4 > > @@ -576,19 +576,3 @@ AC_DEFUN([OVN_CHECK_UNBOUND], > > fi > > AM_CONDITIONAL([HAVE_UNBOUND], [test "$HAVE_UNBOUND" = yes]) > > AC_SUBST([HAVE_UNBOUND])]) > > - > > -dnl Checks for --enable-ddlog-fast-build and updates DDLOG_EXTRA_RUSTFLAGS. 
> > -AC_DEFUN([OVS_CHECK_DDLOG_FAST_BUILD], > > - [AC_ARG_ENABLE( > > - [ddlog_fast_build], > > - [AS_HELP_STRING([--enable-ddlog-fast-build], > > - [Build ddlog programs faster, but generate slower code])], > > - [case "${enableval}" in > > - (yes) ddlog_fast_build=true ;; > > - (no) ddlog_fast_build=false ;; > > - (*) AC_MSG_ERROR([bad value ${enableval} for --enable-ddlog-fast-build]) ;; > > - esac], > > - [ddlog_fast_build=false]) > > - if $ddlog_fast_build; then > > - DDLOG_EXTRA_RUSTFLAGS="-C opt-level=z" > > - fi]) > > diff --git a/northd/.gitignore b/northd/.gitignore > > index 0f2b33ae7d..39a6f79887 100644 > > --- a/northd/.gitignore > > +++ b/northd/.gitignore > > @@ -1,6 +1,4 @@ > > /ovn-northd > > -/ovn-northd-ddlog > > /ovn-northd.8 > > /OVN_Northbound.dl > > /OVN_Southbound.dl > > -/ovn_northd_ddlog/ > > diff --git a/northd/automake.mk b/northd/automake.mk > > index cf622fc3c9..5d77ca67b7 100644 > > --- a/northd/automake.mk > > +++ b/northd/automake.mk > > @@ -35,114 +35,3 @@ northd_ovn_northd_LDADD = \ > > man_MANS += northd/ovn-northd.8 > > EXTRA_DIST += northd/ovn-northd.8.xml > > CLEANFILES += northd/ovn-northd.8 > > - > > -EXTRA_DIST += \ > > - northd/ovn-nb.dlopts \ > > - northd/ovn-sb.dlopts \ > > - northd/ovn.toml \ > > - northd/ovn.rs \ > > - northd/bitwise.rs \ > > - northd/ovsdb2ddlog2c \ > > - $(ddlog_sources) > > - > > -ddlog_sources = \ > > - northd/ovn_northd.dl \ > > - northd/lswitch.dl \ > > - northd/lrouter.dl \ > > - northd/ipam.dl \ > > - northd/multicast.dl \ > > - northd/ovn.dl \ > > - northd/ovn.rs \ > > - northd/helpers.dl \ > > - northd/bitwise.dl \ > > - northd/copp.dl > > -ddlog_nodist_sources = \ > > - northd/OVN_Northbound.dl \ > > - northd/OVN_Southbound.dl > > - > > -if DDLOG > > -bin_PROGRAMS += northd/ovn-northd-ddlog > > -northd_ovn_northd_ddlog_SOURCES = northd/ovn-northd-ddlog.c > > -nodist_northd_ovn_northd_ddlog_SOURCES = \ > > - northd/ovn-northd-ddlog-sb.inc \ > > - northd/ovn-northd-ddlog-nb.inc \ > > - 
northd/ovn_northd_ddlog/ddlog.h > > -northd_ovn_northd_ddlog_LDADD = \ > > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ > > - lib/libovn.la \ > > - $(OVSDB_LIBDIR)/libovsdb.la \ > > - $(OVS_LIBDIR)/libopenvswitch.la > > - > > -nb_opts = $$(cat $(srcdir)/northd/ovn-nb.dlopts) > > -northd/OVN_Northbound.dl: ovn-nb.ovsschema northd/ovn-nb.dlopts > > - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(nb_opts) > > -northd/ovn-northd-ddlog-nb.inc: ovn-nb.ovsschema northd/ovn-nb.dlopts northd/ovsdb2ddlog2c > > - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p nb_ -f $< --output-file $@ $(nb_opts) > > - > > -sb_opts = $$(cat $(srcdir)/northd/ovn-sb.dlopts) > > -northd/OVN_Southbound.dl: ovn-sb.ovsschema northd/ovn-sb.dlopts > > - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(sb_opts) > > -northd/ovn-northd-ddlog-sb.inc: ovn-sb.ovsschema northd/ovn-sb.dlopts northd/ovsdb2ddlog2c > > - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p sb_ -f $< --output-file $@ $(sb_opts) > > - > > -BUILT_SOURCES += \ > > - northd/ovn-northd-ddlog-sb.inc \ > > - northd/ovn-northd-ddlog-nb.inc \ > > - northd/ovn_northd_ddlog/ddlog.h > > - > > -northd/ovn_northd_ddlog/ddlog.h: northd/ddlog.stamp > > - > > -CARGO_VERBOSE = $(cargo_verbose_$(V)) > > -cargo_verbose_ = $(cargo_verbose_$(AM_DEFAULT_VERBOSITY)) > > -cargo_verbose_0 = > > -cargo_verbose_1 = --verbose > > - > > -DDLOGFLAGS = -L $(DDLOGLIBDIR) -L $(builddir)/northd $(DDLOG_EXTRA_FLAGS) > > - > > -RUSTFLAGS = \ > > - -L ../../lib/.libs \ > > - -L $(OVS_LIBDIR)/.libs \ > > - $$LIBOPENVSWITCH_DEPS \ > > - $$LIBOVN_DEPS \ > > - -Awarnings $(DDLOG_EXTRA_RUSTFLAGS) > > - > > -northd/ddlog.stamp: $(ddlog_sources) $(ddlog_nodist_sources) > > - $(AM_V_GEN)$(DDLOG) -i $< -o $(builddir)/northd $(DDLOGFLAGS) > > - $(AM_V_at)touch $@ > > - > > -NORTHD_LIB = 1 > > -NORTHD_CLI = 0 > > - > > -ddlog_targets = $(northd_lib_$(NORTHD_LIB)) $(northd_cli_$(NORTHD_CLI)) > > -northd_lib_1 = 
northd/ovn_northd_ddlog/target/release/libovn_%_ddlog.la > > -northd_cli_1 = northd/ovn_northd_ddlog/target/release/ovn_%_cli > > -EXTRA_northd_ovn_northd_DEPENDENCIES = $(northd_cli_$(NORTHD_CLI)) > > - > > -cargo_build = $(cargo_build_$(NORTHD_LIB)$(NORTHD_CLI)) > > -cargo_build_01 = --features command-line --bin ovn_northd_cli > > -cargo_build_10 = --lib > > -cargo_build_11 = --features command-line > > - > > -libtool_deps = $(srcdir)/build-aux/libtool-deps > > -$(ddlog_targets): northd/ddlog.stamp lib/libovn.la $(OVS_LIBDIR)/libopenvswitch.la > > - $(AM_V_GEN)LIBOVN_DEPS=`$(libtool_deps) lib/libovn.la` && \ > > - LIBOPENVSWITCH_DEPS=`$(libtool_deps) $(OVS_LIBDIR)/libopenvswitch.la` && \ > > - cd northd/ovn_northd_ddlog && \ > > - RUSTC='$(RUSTC)' RUSTFLAGS="$(RUSTFLAGS)" \ > > - cargo build --release $(CARGO_VERBOSE) $(cargo_build) --no-default-features --features ovsdb,c_api > > -endif > > - > > -CLEAN_LOCAL += clean-ddlog > > -clean-ddlog: > > - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp > > - > > -CLEANFILES += \ > > - northd/ddlog.stamp \ > > - northd/ovn_northd_ddlog/ddlog.h \ > > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.a \ > > - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ > > - northd/ovn_northd_ddlog/target/release/ovn_northd_cli \ > > - northd/OVN_Northbound.dl \ > > - northd/OVN_Southbound.dl \ > > - northd/ovn-northd-ddlog-nb.inc \ > > - northd/ovn-northd-ddlog-sb.inc > > diff --git a/northd/bitwise.dl b/northd/bitwise.dl > > deleted file mode 100644 > > index 877d155a23..0000000000 > > --- a/northd/bitwise.dl > > +++ /dev/null > > @@ -1,272 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. 
> > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -/* > > - * Returns true if and only if 'x' is a power of 2. > > - */ > > -function is_power_of_two(x: u8): bool { x != 0 and (x & (x - 1)) == 0 } > > -function is_power_of_two(x: u16): bool { x != 0 and (x & (x - 1)) == 0 } > > -function is_power_of_two(x: u32): bool { x != 0 and (x & (x - 1)) == 0 } > > -function is_power_of_two(x: u64): bool { x != 0 and (x & (x - 1)) == 0 } > > -function is_power_of_two(x: u128): bool { x != 0 and (x & (x - 1)) == 0 } > > - > > -/* > > - * Returns the next power of 2 greater than 'x', or None if that's bigger than the > > - * type's maximum value. > > - */ > > -function next_power_of_two(x: u8): Option<u8> { u8_next_power_of_two(x) } > > -function next_power_of_two(x: u16): Option<u16> { u16_next_power_of_two(x) } > > -function next_power_of_two(x: u32): Option<u32> { u32_next_power_of_two(x) } > > -function next_power_of_two(x: u64): Option<u64> { u64_next_power_of_two(x) } > > -function next_power_of_two(x: u128): Option<u128> { u128_next_power_of_two(x) } > > - > > -extern function u8_next_power_of_two(x: u8): Option<u8> > > -extern function u16_next_power_of_two(x: u16): Option<u16> > > -extern function u32_next_power_of_two(x: u32): Option<u32> > > -extern function u64_next_power_of_two(x: u64): Option<u64> > > -extern function u128_next_power_of_two(x: u128): Option<u128> > > - > > -/* > > - * Returns the next power of 2 greater than 'x', or 0 if that's bigger than the > > - * type's maximum value. 
> > - */ > > -function wrapping_next_power_of_two(x: u8): u8 { u8_wrapping_next_power_of_two(x) } > > -function wrapping_next_power_of_two(x: u16): u16 { u16_wrapping_next_power_of_two(x) } > > -function wrapping_next_power_of_two(x: u32): u32 { u32_wrapping_next_power_of_two(x) } > > -function wrapping_next_power_of_two(x: u64): u64 { u64_wrapping_next_power_of_two(x) } > > -function wrapping_next_power_of_two(x: u128): u128 { u128_wrapping_next_power_of_two(x) } > > - > > -extern function u8_wrapping_next_power_of_two(x: u8): u8 > > -extern function u16_wrapping_next_power_of_two(x: u16): u16 > > -extern function u32_wrapping_next_power_of_two(x: u32): u32 > > -extern function u64_wrapping_next_power_of_two(x: u64): u64 > > -extern function u128_wrapping_next_power_of_two(x: u128): u128 > > - > > -/* > > - * Number of 1-bits in the binary representation of 'x'. > > - */ > > -function count_ones(x: u8): u32 { u8_count_ones(x) } > > -function count_ones(x: u16): u32 { u16_count_ones(x) } > > -function count_ones(x: u32): u32 { u32_count_ones(x) } > > -function count_ones(x: u64): u32 { u64_count_ones(x) } > > -function count_ones(x: u128): u32 { u128_count_ones(x) } > > - > > -extern function u8_count_ones(x: u8): u32 > > -extern function u16_count_ones(x: u16): u32 > > -extern function u32_count_ones(x: u32): u32 > > -extern function u64_count_ones(x: u64): u32 > > -extern function u128_count_ones(x: u128): u32 > > - > > -/* > > - * Number of 0-bits in the binary representation of 'x'. 
> > - */ > > -function count_zeros(x: u8): u32 { u8_count_zeros(x) } > > -function count_zeros(x: u16): u32 { u16_count_zeros(x) } > > -function count_zeros(x: u32): u32 { u32_count_zeros(x) } > > -function count_zeros(x: u64): u32 { u64_count_zeros(x) } > > -function count_zeros(x: u128): u32 { u128_count_zeros(x) } > > - > > -extern function u8_count_zeros(x: u8): u32 > > -extern function u16_count_zeros(x: u16): u32 > > -extern function u32_count_zeros(x: u32): u32 > > -extern function u64_count_zeros(x: u64): u32 > > -extern function u128_count_zeros(x: u128): u32 > > - > > -/* > > - * Number of leading 0-bits in the binary representation of 'x'. > > - */ > > -function leading_zeros(x: u8): u32 { u8_leading_zeros(x) } > > -function leading_zeros(x: u16): u32 { u16_leading_zeros(x) } > > -function leading_zeros(x: u32): u32 { u32_leading_zeros(x) } > > -function leading_zeros(x: u64): u32 { u64_leading_zeros(x) } > > -function leading_zeros(x: u128): u32 { u128_leading_zeros(x) } > > - > > -extern function u8_leading_zeros(x: u8): u32 > > -extern function u16_leading_zeros(x: u16): u32 > > -extern function u32_leading_zeros(x: u32): u32 > > -extern function u64_leading_zeros(x: u64): u32 > > -extern function u128_leading_zeros(x: u128): u32 > > - > > -/* > > - * Number of leading 1-bits in the binary representation of 'x'. 
> > - */ > > -function leading_ones(x: u8): u32 { u8_leading_ones(x) } > > -function leading_ones(x: u16): u32 { u16_leading_ones(x) } > > -function leading_ones(x: u32): u32 { u32_leading_ones(x) } > > -function leading_ones(x: u64): u32 { u64_leading_ones(x) } > > -function leading_ones(x: u128): u32 { u128_leading_ones(x) } > > - > > -extern function u8_leading_ones(x: u8): u32 > > -extern function u16_leading_ones(x: u16): u32 > > -extern function u32_leading_ones(x: u32): u32 > > -extern function u64_leading_ones(x: u64): u32 > > -extern function u128_leading_ones(x: u128): u32 > > - > > -/* > > - * Number of trailing 0-bits in the binary representation of 'x'. > > - */ > > -function trailing_zeros(x: u8): u32 { u8_trailing_zeros(x) } > > -function trailing_zeros(x: u16): u32 { u16_trailing_zeros(x) } > > -function trailing_zeros(x: u32): u32 { u32_trailing_zeros(x) } > > -function trailing_zeros(x: u64): u32 { u64_trailing_zeros(x) } > > -function trailing_zeros(x: u128): u32 { u128_trailing_zeros(x) } > > - > > -extern function u8_trailing_zeros(x: u8): u32 > > -extern function u16_trailing_zeros(x: u16): u32 > > -extern function u32_trailing_zeros(x: u32): u32 > > -extern function u64_trailing_zeros(x: u64): u32 > > -extern function u128_trailing_zeros(x: u128): u32 > > - > > -/* > > - * Number of trailing 1-bits in the binary representation of 'x'. 
> > - */ > > -function trailing_ones(x: u8): u32 { u8_trailing_ones(x) } > > -function trailing_ones(x: u16): u32 { u16_trailing_ones(x) } > > -function trailing_ones(x: u32): u32 { u32_trailing_ones(x) } > > -function trailing_ones(x: u64): u32 { u64_trailing_ones(x) } > > -function trailing_ones(x: u128): u32 { u128_trailing_ones(x) } > > - > > -extern function u8_trailing_ones(x: u8): u32 > > -extern function u16_trailing_ones(x: u16): u32 > > -extern function u32_trailing_ones(x: u32): u32 > > -extern function u64_trailing_ones(x: u64): u32 > > -extern function u128_trailing_ones(x: u128): u32 > > - > > -/* > > - * Reverses the order of bits in 'x'. > > - */ > > -function reverse_bits(x: u8): u8 { u8_reverse_bits(x) } > > -function reverse_bits(x: u16): u16 { u16_reverse_bits(x) } > > -function reverse_bits(x: u32): u32 { u32_reverse_bits(x) } > > -function reverse_bits(x: u64): u64 { u64_reverse_bits(x) } > > -function reverse_bits(x: u128): u128 { u128_reverse_bits(x) } > > - > > -extern function u8_reverse_bits(x: u8): u8 > > -extern function u16_reverse_bits(x: u16): u16 > > -extern function u32_reverse_bits(x: u32): u32 > > -extern function u64_reverse_bits(x: u64): u64 > > -extern function u128_reverse_bits(x: u128): u128 > > - > > -/* > > - * Reverses the order of bytes in 'x'. > > - */ > > -function swap_bytes(x: u8): u8 { u8_swap_bytes(x) } > > -function swap_bytes(x: u16): u16 { u16_swap_bytes(x) } > > -function swap_bytes(x: u32): u32 { u32_swap_bytes(x) } > > -function swap_bytes(x: u64): u64 { u64_swap_bytes(x) } > > -function swap_bytes(x: u128): u128 { u128_swap_bytes(x) } > > - > > -extern function u8_swap_bytes(x: u8): u8 > > -extern function u16_swap_bytes(x: u16): u16 > > -extern function u32_swap_bytes(x: u32): u32 > > -extern function u64_swap_bytes(x: u64): u64 > > -extern function u128_swap_bytes(x: u128): u128 > > - > > -/* > > - * Converts 'x' from big endian to the machine's native endianness. 
> > - * On a big-endian machine it is a no-op. > > - * On a little-endian machine it is equivalent to swap_bytes(). > > - */ > > -function from_be(x: u8): u8 { u8_from_be(x) } > > -function from_be(x: u16): u16 { u16_from_be(x) } > > -function from_be(x: u32): u32 { u32_from_be(x) } > > -function from_be(x: u64): u64 { u64_from_be(x) } > > -function from_be(x: u128): u128 { u128_from_be(x) } > > - > > -extern function u8_from_be(x: u8): u8 > > -extern function u16_from_be(x: u16): u16 > > -extern function u32_from_be(x: u32): u32 > > -extern function u64_from_be(x: u64): u64 > > -extern function u128_from_be(x: u128): u128 > > - > > -/* > > - * Converts 'x' from the machine's native endianness to big endian. > > - * On a big-endian machine it is a no-op. > > - * On a little-endian machine it is equivalent to swap_bytes(). > > - */ > > -function to_be(x: u8): u8 { u8_to_be(x) } > > -function to_be(x: u16): u16 { u16_to_be(x) } > > -function to_be(x: u32): u32 { u32_to_be(x) } > > -function to_be(x: u64): u64 { u64_to_be(x) } > > -function to_be(x: u128): u128 { u128_to_be(x) } > > - > > -extern function u8_to_be(x: u8): u8 > > -extern function u16_to_be(x: u16): u16 > > -extern function u32_to_be(x: u32): u32 > > -extern function u64_to_be(x: u64): u64 > > -extern function u128_to_be(x: u128): u128 > > - > > -/* > > - * Converts 'x' from little endian to the machine's native endianness. > > - * On a little-endian machine it is a no-op. > > - * On a big-endian machine it is equivalent to swap_bytes(). 
> > - */ > > -function from_le(x: u8): u8 { u8_from_le(x) } > > -function from_le(x: u16): u16 { u16_from_le(x) } > > -function from_le(x: u32): u32 { u32_from_le(x) } > > -function from_le(x: u64): u64 { u64_from_le(x) } > > -function from_le(x: u128): u128 { u128_from_le(x) } > > - > > -extern function u8_from_le(x: u8): u8 > > -extern function u16_from_le(x: u16): u16 > > -extern function u32_from_le(x: u32): u32 > > -extern function u64_from_le(x: u64): u64 > > -extern function u128_from_le(x: u128): u128 > > - > > -/* > > - * Converts 'x' from the machine's native endianness to little endian. > > - * On a little-endian machine it is a no-op. > > - * On a big-endian machine it is equivalent to swap_bytes(). > > - */ > > -function to_le(x: u8): u8 { u8_to_le(x) } > > -function to_le(x: u16): u16 { u16_to_le(x) } > > -function to_le(x: u32): u32 { u32_to_le(x) } > > -function to_le(x: u64): u64 { u64_to_le(x) } > > -function to_le(x: u128): u128 { u128_to_le(x) } > > - > > -extern function u8_to_le(x: u8): u8 > > -extern function u16_to_le(x: u16): u16 > > -extern function u32_to_le(x: u32): u32 > > -extern function u64_to_le(x: u64): u64 > > -extern function u128_to_le(x: u128): u128 > > - > > -/* > > - * Rotates the bits in 'x' left by 'n' positions. 
> > - */ > > -function rotate_left(x: u8, n: u32): u8 { u8_rotate_left(x, n) } > > -function rotate_left(x: u16, n: u32): u16 { u16_rotate_left(x, n) } > > -function rotate_left(x: u32, n: u32): u32 { u32_rotate_left(x, n) } > > -function rotate_left(x: u64, n: u32): u64 { u64_rotate_left(x, n) } > > -function rotate_left(x: u128, n: u32): u128 { u128_rotate_left(x, n) } > > - > > -extern function u8_rotate_left(x: u8, n: u32): u8 > > -extern function u16_rotate_left(x: u16, n: u32): u16 > > -extern function u32_rotate_left(x: u32, n: u32): u32 > > -extern function u64_rotate_left(x: u64, n: u32): u64 > > -extern function u128_rotate_left(x: u128, n: u32): u128 > > - > > -/* > > - * Rotates the bits in 'x' right by 'n' positions. > > - */ > > -function rotate_right(x: u8, n: u32): u8 { u8_rotate_right(x, n) } > > -function rotate_right(x: u16, n: u32): u16 { u16_rotate_right(x, n) } > > -function rotate_right(x: u32, n: u32): u32 { u32_rotate_right(x, n) } > > -function rotate_right(x: u64, n: u32): u64 { u64_rotate_right(x, n) } > > -function rotate_right(x: u128, n: u32): u128 { u128_rotate_right(x, n) } > > - > > -extern function u8_rotate_right(x: u8, n: u32): u8 > > -extern function u16_rotate_right(x: u16, n: u32): u16 > > -extern function u32_rotate_right(x: u32, n: u32): u32 > > -extern function u64_rotate_right(x: u64, n: u32): u64 > > -extern function u128_rotate_right(x: u128, n: u32): u128 > > diff --git a/northd/bitwise.rs b/northd/bitwise.rs > > deleted file mode 100644 > > index 97c0ecfa36..0000000000 > > --- a/northd/bitwise.rs > > +++ /dev/null > > @@ -1,133 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. 
> > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -use ddlog_std::option2std; > > - > > -pub fn u8_next_power_of_two(x: &u8) -> ddlog_std::Option<u8> { > > - option2std(x.checked_next_power_of_two()) > > -} > > -pub fn u16_next_power_of_two(x: &u16) -> ddlog_std::Option<u16> { > > - option2std(x.checked_next_power_of_two()) > > -} > > -pub fn u32_next_power_of_two(x: &u32) -> ddlog_std::Option<u32> { > > - option2std(x.checked_next_power_of_two()) > > -} > > -pub fn u64_next_power_of_two(x: &u64) -> ddlog_std::Option<u64> { > > - option2std(x.checked_next_power_of_two()) > > -} > > -pub fn u128_next_power_of_two(x: &u128) -> ddlog_std::Option<u128> { > > - option2std(x.checked_next_power_of_two()) > > -} > > - > > -// Rust has wrapping_next_power_of_two() in nightly. We implement it > > -// ourselves to avoid the dependency. 
> > -pub fn u8_wrapping_next_power_of_two(x: &u8) -> u8 { > > - x.checked_next_power_of_two().unwrap_or(0) > > -} > > -pub fn u16_wrapping_next_power_of_two(x: &u16) -> u16 { > > - x.checked_next_power_of_two().unwrap_or(0) > > -} > > -pub fn u32_wrapping_next_power_of_two(x: &u32) -> u32 { > > - x.checked_next_power_of_two().unwrap_or(0) > > -} > > -pub fn u64_wrapping_next_power_of_two(x: &u64) -> u64 { > > - x.checked_next_power_of_two().unwrap_or(0) > > -} > > -pub fn u128_wrapping_next_power_of_two(x: &u128) -> u128 { > > - x.checked_next_power_of_two().unwrap_or(0) > > -} > > - > > -pub fn u8_count_ones(x: &u8) -> u32 { x.count_ones() } > > -pub fn u16_count_ones(x: &u16) -> u32 { x.count_ones() } > > -pub fn u32_count_ones(x: &u32) -> u32 { x.count_ones() } > > -pub fn u64_count_ones(x: &u64) -> u32 { x.count_ones() } > > -pub fn u128_count_ones(x: &u128) -> u32 { x.count_ones() } > > - > > -pub fn u8_count_zeros(x: &u8) -> u32 { x.count_zeros() } > > -pub fn u16_count_zeros(x: &u16) -> u32 { x.count_zeros() } > > -pub fn u32_count_zeros(x: &u32) -> u32 { x.count_zeros() } > > -pub fn u64_count_zeros(x: &u64) -> u32 { x.count_zeros() } > > -pub fn u128_count_zeros(x: &u128) -> u32 { x.count_zeros() } > > - > > -pub fn u8_leading_ones(x: &u8) -> u32 { x.leading_ones() } > > -pub fn u16_leading_ones(x: &u16) -> u32 { x.leading_ones() } > > -pub fn u32_leading_ones(x: &u32) -> u32 { x.leading_ones() } > > -pub fn u64_leading_ones(x: &u64) -> u32 { x.leading_ones() } > > -pub fn u128_leading_ones(x: &u128) -> u32 { x.leading_ones() } > > - > > -pub fn u8_leading_zeros(x: &u8) -> u32 { x.leading_zeros() } > > -pub fn u16_leading_zeros(x: &u16) -> u32 { x.leading_zeros() } > > -pub fn u32_leading_zeros(x: &u32) -> u32 { x.leading_zeros() } > > -pub fn u64_leading_zeros(x: &u64) -> u32 { x.leading_zeros() } > > -pub fn u128_leading_zeros(x: &u128) -> u32 { x.leading_zeros() } > > - > > -pub fn u8_trailing_ones(x: &u8) -> u32 { x.trailing_ones() } > > -pub fn 
u16_trailing_ones(x: &u16) -> u32 { x.trailing_ones() } > > -pub fn u32_trailing_ones(x: &u32) -> u32 { x.trailing_ones() } > > -pub fn u64_trailing_ones(x: &u64) -> u32 { x.trailing_ones() } > > -pub fn u128_trailing_ones(x: &u128) -> u32 { x.trailing_ones() } > > - > > -pub fn u8_trailing_zeros(x: &u8) -> u32 { x.trailing_zeros() } > > -pub fn u16_trailing_zeros(x: &u16) -> u32 { x.trailing_zeros() } > > -pub fn u32_trailing_zeros(x: &u32) -> u32 { x.trailing_zeros() } > > -pub fn u64_trailing_zeros(x: &u64) -> u32 { x.trailing_zeros() } > > -pub fn u128_trailing_zeros(x: &u128) -> u32 { x.trailing_zeros() } > > - > > -pub fn u8_reverse_bits(x: &u8) -> u8 { x.reverse_bits() } > > -pub fn u16_reverse_bits(x: &u16) -> u16 { x.reverse_bits() } > > -pub fn u32_reverse_bits(x: &u32) -> u32 { x.reverse_bits() } > > -pub fn u64_reverse_bits(x: &u64) -> u64 { x.reverse_bits() } > > -pub fn u128_reverse_bits(x: &u128) -> u128 { x.reverse_bits() } > > - > > -pub fn u8_swap_bytes(x: &u8) -> u8 { x.swap_bytes() } > > -pub fn u16_swap_bytes(x: &u16) -> u16 { x.swap_bytes() } > > -pub fn u32_swap_bytes(x: &u32) -> u32 { x.swap_bytes() } > > -pub fn u64_swap_bytes(x: &u64) -> u64 { x.swap_bytes() } > > -pub fn u128_swap_bytes(x: &u128) -> u128 { x.swap_bytes() } > > - > > -pub fn u8_from_be(x: &u8) -> u8 { u8::from_be(*x) } > > -pub fn u16_from_be(x: &u16) -> u16 { u16::from_be(*x) } > > -pub fn u32_from_be(x: &u32) -> u32 { u32::from_be(*x) } > > -pub fn u64_from_be(x: &u64) -> u64 { u64::from_be(*x) } > > -pub fn u128_from_be(x: &u128) -> u128 { u128::from_be(*x) } > > - > > -pub fn u8_to_be(x: &u8) -> u8 { x.to_be() } > > -pub fn u16_to_be(x: &u16) -> u16 { x.to_be() } > > -pub fn u32_to_be(x: &u32) -> u32 { x.to_be() } > > -pub fn u64_to_be(x: &u64) -> u64 { x.to_be() } > > -pub fn u128_to_be(x: &u128) -> u128 { x.to_be() } > > - > > -pub fn u8_from_le(x: &u8) -> u8 { u8::from_le(*x) } > > -pub fn u16_from_le(x: &u16) -> u16 { u16::from_le(*x) } > > -pub fn u32_from_le(x: 
&u32) -> u32 { u32::from_le(*x) } > > -pub fn u64_from_le(x: &u64) -> u64 { u64::from_le(*x) } > > -pub fn u128_from_le(x: &u128) -> u128 { u128::from_le(*x) } > > - > > -pub fn u8_to_le(x: &u8) -> u8 { x.to_le() } > > -pub fn u16_to_le(x: &u16) -> u16 { x.to_le() } > > -pub fn u32_to_le(x: &u32) -> u32 { x.to_le() } > > -pub fn u64_to_le(x: &u64) -> u64 { x.to_le() } > > -pub fn u128_to_le(x: &u128) -> u128 { x.to_le() } > > - > > -pub fn u8_rotate_left(x: &u8, n: &u32) -> u8 { x.rotate_left(*n) } > > -pub fn u16_rotate_left(x: &u16, n: &u32) -> u16 { x.rotate_left(*n) } > > -pub fn u32_rotate_left(x: &u32, n: &u32) -> u32 { x.rotate_left(*n) } > > -pub fn u64_rotate_left(x: &u64, n: &u32) -> u64 { x.rotate_left(*n) } > > -pub fn u128_rotate_left(x: &u128, n: &u32) -> u128 { x.rotate_left(*n) } > > - > > -pub fn u8_rotate_right(x: &u8, n: &u32) -> u8 { x.rotate_right(*n) } > > -pub fn u16_rotate_right(x: &u16, n: &u32) -> u16 { x.rotate_right(*n) } > > -pub fn u32_rotate_right(x: &u32, n: &u32) -> u32 { x.rotate_right(*n) } > > -pub fn u64_rotate_right(x: &u64, n: &u32) -> u64 { x.rotate_right(*n) } > > -pub fn u128_rotate_right(x: &u128, n: &u32) -> u128 { x.rotate_right(*n) } > > diff --git a/northd/copp.dl b/northd/copp.dl > > deleted file mode 100644 > > index c4f3b7e70c..0000000000 > > --- a/northd/copp.dl > > +++ /dev/null > > @@ -1,30 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. 
> > - */ > > - > > -function cOPP_ARP() : istring { i"arp" } > > -function cOPP_ARP_RESOLVE() : istring { i"arp-resolve" } > > -function cOPP_DHCPV4_OPTS() : istring { i"dhcpv4-opts" } > > -function cOPP_DHCPV6_OPTS() : istring { i"dhcpv6-opts" } > > -function cOPP_DNS() : istring { i"dns" } > > -function cOPP_EVENT_ELB() : istring { i"event-elb" } > > -function cOPP_ICMP4_ERR() : istring { i"icmp4-error" } > > -function cOPP_ICMP6_ERR() : istring { i"icmp6-error" } > > -function cOPP_IGMP() : istring { i"igmp" } > > -function cOPP_ND_NA() : istring { i"nd-na" } > > -function cOPP_ND_NS() : istring { i"nd-ns" } > > -function cOPP_ND_NS_RESOLVE() : istring { i"nd-ns-resolve" } > > -function cOPP_ND_RA_OPTS() : istring { i"nd-ra-opts" } > > -function cOPP_TCP_RESET() : istring { i"tcp-reset" } > > -function cOPP_REJECT() : istring { i"reject" } > > -function cOPP_BFD() : istring { i"bfd" } > > diff --git a/northd/helpers.dl b/northd/helpers.dl > > deleted file mode 100644 > > index 50e137d99e..0000000000 > > --- a/northd/helpers.dl > > +++ /dev/null > > @@ -1,60 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. 
> > - */ > > - > > -import OVN_Northbound as nb > > -import OVN_Southbound as sb > > -import ovsdb > > -import ovn > > - > > - > > -output relation Warning[string] > > - > > -/* Switch-to-router logical port connections */ > > -relation SwitchRouterPeer(lsp: uuid, lsp_name: istring, lrp: uuid) > > -SwitchRouterPeer(lsp, lsp_name, lrp) :- > > - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = i"router", .options = options), > > - Some{var router_port} = options.get(i"router-port"), > > - &nb::Logical_Router_Port(.name = router_port, ._uuid = lrp). > > - > > -function get_bool_def(m: Map<istring, istring>, k: istring, def: bool): bool = { > > - m.get(k) > > - .and_then(|x| match (x.to_lowercase()) { > > - "false" -> Some{false}, > > - "true" -> Some{true}, > > - _ -> None > > - }) > > - .unwrap_or(def) > > -} > > - > > -function get_int_def(m: Map<istring, istring>, k: istring, def: integer): integer = { > > - m.get(k).and_then(|v| v.ival().parse_dec_u64()).unwrap_or(def) > > -} > > - > > -function clamp(x: 'A, range: ('A, 'A)): 'A { > > - (var min, var max) = range; > > - if (x < min) { > > - min > > - } else if (x > max) { > > - max > > - } else { > > - x > > - } > > -} > > - > > -function ha_chassis_group_uuid(uuid: uuid): uuid { hash128("hacg" ++ uuid) } > > -function ha_chassis_uuid(chassis_name: string, nb_chassis_uuid: uuid): uuid { hash128("hac" ++ chassis_name ++ nb_chassis_uuid) } > > - > > -/* Dummy relation with one empty row, useful for putting into antijoins. */ > > -relation Unit() > > -Unit(). > > diff --git a/northd/ipam.dl b/northd/ipam.dl > > deleted file mode 100644 > > index 600c55f5c8..0000000000 > > --- a/northd/ipam.dl > > +++ /dev/null > > @@ -1,499 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. 
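The deleted `helpers.dl` above defines small utilities (`get_bool_def`, `get_int_def`, `clamp`) used throughout the DDlog northd. A plain-Rust sketch of two of them, with ordinary `&str` maps standing in for DDlog's interned strings:

```rust
// Sketch of two utilities from the deleted helpers.dl, transliterated to
// plain Rust. HashMap<&str, &str> stands in for Map<istring, istring>.
use std::collections::HashMap;

// Look up `k` and interpret it as a boolean; fall back to `def` when the
// key is absent or the value is anything other than "true"/"false"
// (case-insensitive), mirroring the original's match on to_lowercase().
fn get_bool_def(m: &HashMap<&str, &str>, k: &str, def: bool) -> bool {
    m.get(k)
        .and_then(|x| match x.to_lowercase().as_str() {
            "false" => Some(false),
            "true" => Some(true),
            _ => None,
        })
        .unwrap_or(def)
}

// Clamp `x` into the inclusive (min, max) range, as in the original.
fn clamp<T: PartialOrd>(x: T, range: (T, T)) -> T {
    let (min, max) = range;
    if x < min { min } else if x > max { max } else { x }
}

fn main() {
    let mut opts = HashMap::new();
    opts.insert("mcast_querier", "TRUE"); // illustrative key
    assert!(get_bool_def(&opts, "mcast_querier", false));
    assert!(!get_bool_def(&opts, "missing_key", false));
    assert_eq!(clamp(300, (0, 255)), 255);
    println!("ok");
}
```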
> > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -/* > > - * IPAM (IP address management) and MACAM (MAC address management) > > - * > > - * IPAM generally stands for IP address management. In non-virtualized > > - * world, MAC addresses come with the hardware. But, with virtualized > > - * workloads, they need to be assigned and managed. This function > > - * does both IP address management (ipam) and MAC address management > > - * (macam). > > - */ > > - > > -import OVN_Northbound as nb > > -import ovsdb > > -import allocate > > -import helpers > > -import ovn > > -import lswitch > > -import lrouter > > - > > -function mAC_ADDR_SPACE(): bit<48> = 48'hffffff > > - > > -/* > > - * IPv4 dynamic address allocation. > > - */ > > - > > -/* > > - * The fixed portions of a request for a dynamic LSP address. 
> > - */ > > -typedef dynamic_address_request = DynamicAddressRequest{ > > - mac: Option<eth_addr>, > > - ip4: Option<in_addr>, > > - ip6: Option<in6_addr> > > -} > > -function parse_dynamic_address_request(s: string): Option<dynamic_address_request> { > > - var tokens = string_split(s, " "); > > - var n = tokens.len(); > > - if (n < 1 or n > 3) { > > - return None > > - }; > > - > > - var t0 = tokens.nth(0).unwrap_or(""); > > - var t1 = tokens.nth(1).unwrap_or(""); > > - var t2 = tokens.nth(2).unwrap_or(""); > > - if (t0 == "dynamic") { > > - if (n == 1) { > > - Some{DynamicAddressRequest{None, None, None}} > > - } else if (n == 2) { > > - match (ip46_parse(t1)) { > > - Some{IPv4{ipv4}} -> Some{DynamicAddressRequest{None, Some{ipv4}, None}}, > > - Some{IPv6{ipv6}} -> Some{DynamicAddressRequest{None, None, Some{ipv6}}}, > > - _ -> None > > - } > > - } else if (n == 3) { > > - match ((ip_parse(t1), ipv6_parse(t2))) { > > - (Some{ipv4}, Some{ipv6}) -> Some{DynamicAddressRequest{None, Some{ipv4}, Some{ipv6}}}, > > - _ -> None > > - } > > - } else { > > - None > > - } > > - } else if (n == 2 and t1 == "dynamic") { > > - match (eth_addr_from_string(t0)) { > > - Some{mac} -> Some{DynamicAddressRequest{Some{mac}, None, None}}, > > - _ -> None > > - } > > - } else { > > - None > > - } > > -} > > - > > -/* SwitchIPv4ReservedAddress - keeps track of statically reserved IPv4 addresses > > - * for each switch whose subnet option is set, including: > > - * (1) first and last (multicast) address in the subnet range > > - * (2) addresses from `other_config.exclude_ips` > > - * (3) port addresses in lsp.addresses, except "unknown" addresses, addresses of > > - * "router" ports, dynamic addresses > > - * (4) addresses associated with router ports peered with the switch. > > - * (5) static IP component of "dynamic" `lsp.addresses`. > > - * > > - * Addresses are kept in host-endian format (i.e., bit<32> vs in_addr). 
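The deleted `parse_dynamic_address_request()` above accepts four shapes of LSP address entry: `"dynamic"`, `"dynamic IP"`, `"dynamic IP4 IP6"`, and `"MAC dynamic"`. A simplified sketch of that grammar in plain Rust (function name and string-based return type are illustrative; the original returned parsed `eth_addr`/`in_addr` values):

```rust
// Sketch of the grammar accepted by the deleted
// parse_dynamic_address_request(). Returns (optional static MAC,
// list of static IPs) as strings, or None for a malformed request.
fn parse_dynamic_request(s: &str) -> Option<(Option<String>, Vec<String>)> {
    let tokens: Vec<&str> = s.split(' ').collect();
    match tokens.as_slice() {
        ["dynamic"] => Some((None, vec![])),
        ["dynamic", ip] => Some((None, vec![ip.to_string()])),
        ["dynamic", ip4, ip6] => Some((None, vec![ip4.to_string(), ip6.to_string()])),
        [mac, "dynamic"] => Some((Some(mac.to_string()), vec![])),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_dynamic_request("dynamic"), Some((None, vec![])));
    assert_eq!(
        parse_dynamic_request("00:11:22:33:44:55 dynamic"),
        Some((Some("00:11:22:33:44:55".to_string()), vec![]))
    );
    assert_eq!(parse_dynamic_request("bogus"), None);
    println!("ok");
}
```

The original additionally validates each token (IPv4 vs. IPv6, MAC syntax); this sketch only captures the token-shape dispatch.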
> > - */ > > -relation SwitchIPv4ReservedAddress(lswitch: uuid, addr: bit<32>) > > - > > -/* Add reserved address groups (1) and (2). */ > > -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, > > - .addr = addr) :- > > - sw in &Switch(.subnet = Some{(_, _, start_ipv4, total_ipv4s)}), > > - var exclude_ips = { > > - var exclude_ips = set_singleton(start_ipv4); > > - exclude_ips.insert(start_ipv4 + total_ipv4s - 1); > > - match (map_get(sw.other_config, i"exclude_ips")) { > > - None -> exclude_ips, > > - Some{exclude_ip_list} -> match (parse_ip_list(exclude_ip_list.ival())) { > > - Left{err} -> { > > - warn("logical switch ${uuid2str(sw._uuid)}: bad exclude_ips (${err})"); > > - exclude_ips > > - }, > > - Right{ranges} -> { > > - for (rng in ranges) { > > - (var ip_start, var ip_end) = rng; > > - var start = ip_start.a; > > - var end = match (ip_end) { > > - None -> start, > > - Some{ip} -> ip.a > > - }; > > - start = max(start_ipv4, start); > > - end = min(start_ipv4 + total_ipv4s - 1, end); > > - if (end >= start) { > > - for (addr in range_vec(start, end+1, 1)) { > > - exclude_ips.insert(addr) > > - } > > - } else { > > - warn("logical switch ${uuid2str(sw._uuid)}: excluded addresses not in subnet") > > - } > > - }; > > - exclude_ips > > - } > > - } > > - } > > - }, > > - var addr = FlatMap(exclude_ips). > > - > > -/* Add reserved address group (3). 
*/ > > -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, > > - .addr = addr) :- > > - SwitchPortStaticAddresses( > > - .port = &SwitchPort{ > > - .sw = &Switch{._uuid = ls_uuid, > > - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, > > - .peer = None}, > > - .addrs = lport_addrs > > - ), > > - var addrs = { > > - var addrs = set_empty(); > > - for (addr in lport_addrs.ipv4_addrs) { > > - var addr_host_endian = addr.addr.a; > > - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { > > - addrs.insert(addr_host_endian) > > - } else () > > - }; > > - addrs > > - }, > > - var addr = FlatMap(addrs). > > - > > -/* Add reserved address group (4) */ > > -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, > > - .addr = addr) :- > > - &SwitchPort( > > - .sw = &Switch{._uuid = ls_uuid, > > - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, > > - .peer = Some{rport}), > > - var addrs = { > > - var addrs = set_empty(); > > - for (addr in rport.networks.ipv4_addrs) { > > - var addr_host_endian = addr.addr.a; > > - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { > > - addrs.insert(addr_host_endian) > > - } else () > > - }; > > - addrs > > - }, > > - var addr = FlatMap(addrs). > > - > > -/* Add reserved address group (5) */ > > -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, > > - .addr = ip_addr.a) :- > > - &SwitchPort(.sw = sw, .lsp = lsp, .static_dynamic_ipv4 = Some{ip_addr}). > > - > > -/* Aggregate all reserved addresses for each switch. */ > > -relation SwitchIPv4ReservedAddresses(lswitch: uuid, addrs: Set<bit<32>>) > > - > > -SwitchIPv4ReservedAddresses(lswitch, addrs) :- > > - SwitchIPv4ReservedAddress(lswitch, addr), > > - var addrs = addr.group_by(lswitch).to_set(). > > - > > -SwitchIPv4ReservedAddresses(lswitch_uuid, set_empty()) :- > > - &nb::Logical_Switch(._uuid = lswitch_uuid), > > - not SwitchIPv4ReservedAddress(lswitch_uuid, _). 
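The rules above seed each switch's reserved-IPv4 set: the first and last subnet addresses are always reserved, and every `exclude_ips` range is clamped to the subnet before its addresses are added. A small Rust sketch of that computation (host-endian `u32` addresses as in the original; names are illustrative):

```rust
// Sketch of how the deleted ipam.dl built the per-switch reserved-IPv4
// set from the subnet bounds and pre-parsed exclude_ips ranges.
use std::collections::HashSet;

// `start` is the host-endian network address, `total` the subnet size;
// `exclude` holds inclusive (lo, hi) ranges from exclude_ips.
fn reserved_ipv4(start: u32, total: u32, exclude: &[(u32, u32)]) -> HashSet<u32> {
    let mut set = HashSet::new();
    set.insert(start);              // network address
    set.insert(start + total - 1);  // last (broadcast) address
    for &(lo, hi) in exclude {
        // Clamp the excluded range to the subnet, as the original does
        // with max(start_ipv4, start) / min(start_ipv4 + total_ipv4s - 1, end).
        let lo = lo.max(start);
        let hi = hi.min(start + total - 1);
        for addr in lo..=hi {
            set.insert(addr);
        }
    }
    set
}

fn main() {
    // 192.168.1.0/24 with 192.168.1.10-192.168.1.12 excluded.
    let start = 0xc0a8_0100;
    let set = reserved_ipv4(start, 256, &[(start + 10, start + 12)]);
    assert_eq!(set.len(), 5); // .0, .255, .10, .11, .12
    assert!(set.contains(&(start + 11)));
    println!("{} reserved", set.len());
}
```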
> > - > > -/* Allocate dynamic IP addresses for ports that require them: > > - */ > > -relation SwitchPortAllocatedIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) > > - > > -SwitchPortAllocatedIPv4DynAddress(lsport, dyn_addr) :- > > - /* Aggregate all ports of a switch that need a dynamic IP address */ > > - port in &SwitchPort(.needs_dynamic_ipv4address = true, > > - .sw = sw), > > - var switch_id = sw._uuid, > > - var ports = port.group_by(switch_id).to_vec(), > > - SwitchIPv4ReservedAddresses(switch_id, reserved_addrs), > > - /* Allocate dynamic addresses only for ports that don't have a dynamic address > > - * or have one that is no longer valid. */ > > - var dyn_addresses = { > > - var used_addrs = reserved_addrs; > > - var assigned_addrs = vec_empty(); > > - var need_addr = vec_empty(); > > - (var start_ipv4, var total_ipv4s) = match (ports.nth(0)) { > > - None -> { (0, 0) } /* no ports with dynamic addresses */, > > - Some{port0} -> { > > - match (port0.sw.subnet) { > > - None -> { > > - abort("needs_dynamic_ipv4address is true, but subnet is undefined in port ${uuid2str(port0.lsp._uuid)}"); > > - (0, 0) > > - }, > > - Some{(_, _, start_ipv4, total_ipv4s)} -> (start_ipv4, total_ipv4s) > > - } > > - } > > - }; > > - for (port in ports) { > > - //warn("port(${port.lsp._uuid})"); > > - match (port.dynamic_address) { > > - None -> { > > - /* no dynamic address yet -- allocate one now */ > > - //warn("need_addr(${port.lsp._uuid})"); > > - need_addr.push(port.lsp._uuid) > > - }, > > - Some{dynaddr} -> { > > - match (dynaddr.ipv4_addrs.nth(0)) { > > - None -> { > > - /* dynamic address does not have IPv4 component -- allocate one now */ > > - //warn("need_addr(${port.lsp._uuid})"); > > - need_addr.push(port.lsp._uuid) > > - }, > > - Some{addr} -> { > > - var haddr = addr.addr.a; > > - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { > > - need_addr.push(port.lsp._uuid) > > - } else if (used_addrs.contains(haddr)) { > > - 
need_addr.push(port.lsp._uuid); > > - warn("Duplicate IP set on switch ${port.lsp.name}: ${addr.addr}") > > - } else { > > - /* has valid dynamic address -- record it in used_addrs */ > > - used_addrs.insert(haddr); > > - assigned_addrs.push((port.lsp._uuid, Some{haddr})) > > - } > > - } > > - } > > - } > > - } > > - }; > > - assigned_addrs.append(allocate_opt(used_addrs, need_addr, start_ipv4, start_ipv4 + total_ipv4s - 1)); > > - assigned_addrs > > - }, > > - var port_address = FlatMap(dyn_addresses), > > - (var lsport, var dyn_addr_bits) = port_address, > > - var dyn_addr = dyn_addr_bits.map(|x| InAddr{x}). > > - > > -/* Compute new dynamic IPv4 address assignment: > > - * - port does not need dynamic IP - use static_dynamic_ip if any > > - * - a new address has been allocated for port - use this address > > - * - otherwise, use existing dynamic IP > > - */ > > -relation SwitchPortNewIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) > > - > > -SwitchPortNewIPv4DynAddress(lsp._uuid, ip_addr) :- > > - &SwitchPort(.sw = sw, > > - .needs_dynamic_ipv4address = false, > > - .static_dynamic_ipv4 = static_dynamic_ipv4, > > - .lsp = lsp), > > - var ip_addr = { > > - match (static_dynamic_ipv4) { > > - None -> { None }, > > - Some{addr} -> { > > - match (sw.subnet) { > > - None -> { None }, > > - Some{(_, _, start_ipv4, total_ipv4s)} -> { > > - var haddr = addr.a; > > - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { > > - /* new static ip is not valid */ > > - None > > - } else { > > - Some{addr} > > - } > > - } > > - } > > - } > > - } > > - }. > > - > > -SwitchPortNewIPv4DynAddress(lsport, addr) :- > > - SwitchPortAllocatedIPv4DynAddress(lsport, addr). > > - > > -/* > > - * Dynamic MAC address allocation. 
> > - */ > > - > > -function get_mac_prefix(options: Map<istring,istring>, uuid: uuid) : bit<48> > > -{ > > - match (map_get(options, i"mac_prefix").and_then(|pref| pref.ival().scan_eth_addr_prefix())) { > > - Some{prefix} -> prefix.ha, > > - None -> eth_addr_pseudorandom(uuid, 16'h1234).ha & 48'hffffff000000 > > - } > > -} > > -function put_mac_prefix(options: mut Map<istring,istring>, mac_prefix: bit<48>) > > -{ > > - map_insert(options, i"mac_prefix", > > - string_substr(to_string(EthAddr{mac_prefix}), 0, 8).intern()) > > -} > > -relation MacPrefix(mac_prefix: bit<48>) > > -MacPrefix(get_mac_prefix(options, uuid)) :- > > - nb::NB_Global(._uuid = uuid, .options = options). > > - > > -/* ReservedMACAddress - keeps track of statically reserved MAC addresses. > > - * (1) static addresses in `lsp.addresses` > > - * (2) static MAC component of "dynamic" `lsp.addresses`. > > - * (3) addresses associated with router ports peered with the switch. > > - * > > - * Addresses are kept in host-endian format. > > - */ > > -relation ReservedMACAddress(addr: bit<48>) > > - > > -/* Add reserved address group (1). */ > > -ReservedMACAddress(.addr = lport_addrs.ea.ha) :- > > - SwitchPortStaticAddresses(.addrs = lport_addrs). > > - > > -/* Add reserved address group (2). */ > > -ReservedMACAddress(.addr = mac_addr.ha) :- > > - &SwitchPort(.lsp = lsp, .static_dynamic_mac = Some{mac_addr}). > > - > > -/* Add reserved address group (3). */ > > -ReservedMACAddress(.addr = rport.networks.ea.ha) :- > > - &SwitchPort(.peer = Some{rport}). > > - > > -/* Aggregate all reserved MAC addresses. */ > > -relation ReservedMACAddresses(addrs: Set<bit<48>>) > > - > > -ReservedMACAddresses(addrs) :- > > - ReservedMACAddress(addr), > > - var addrs = addr.group_by(()).to_set(). > > - > > -/* Handle case when `ReservedMACAddress` is empty */ > > -ReservedMACAddresses(set_empty()) :- > > - // NB_Global should have exactly one record, so we can > > - // use it as a base for antijoin. 
> > - nb::NB_Global(), > > - not ReservedMACAddress(_). > > - > > -/* Allocate dynamic MAC addresses for ports that require them: > > - * Case 1: port doesn't need dynamic MAC (i.e., does not have dynamic address or > > - * has a dynamic address with a static MAC). > > - * Case 2: needs dynamic MAC, has dynamic MAC, has existing dynamic MAC with the right prefix > > - * needs dynamic MAC, does not have fixed dynamic MAC, doesn't have existing dynamic MAC with correct prefix > > - */ > > -relation SwitchPortAllocatedMACDynAddress(lsport: uuid, dyn_addr: bit<48>) > > - > > -SwitchPortAllocatedMACDynAddress(lsport, dyn_addr), > > -SwitchPortDuplicateMACAddress(dup_addrs) :- > > - /* Group all ports that need a dynamic IP address */ > > - port in &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp), > > - SwitchPortNewIPv4DynAddress(lsp._uuid, ipv4_addr), > > - var ports = (port, ipv4_addr).group_by(()).to_vec(), > > - ReservedMACAddresses(reserved_addrs), > > - MacPrefix(mac_prefix), > > - (var dyn_addresses, var dup_addrs) = { > > - var used_addrs = reserved_addrs; > > - var need_addr = vec_empty(); > > - var dup_addrs = set_empty(); > > - for (port_with_addr in ports) { > > - (var port, var ipv4_addr) = port_with_addr; > > - var hint = match (ipv4_addr) { > > - None -> Some { mac_prefix | 1 }, > > - Some{addr} -> { > > - /* The tentative MAC's suffix will be in the interval (1, 0xfffffe). 
*/ > > - var mac_suffix: bit<24> = addr.a[23:0] % ((mAC_ADDR_SPACE() - 1)[23:0]) + 1; > > - Some{ mac_prefix | (24'd0 ++ mac_suffix) } > > - } > > - }; > > - match (port.dynamic_address) { > > - None -> { > > - /* no dynamic address yet -- allocate one now */ > > - need_addr.push((port.lsp._uuid, hint)) > > - }, > > - Some{dynaddr} -> { > > - var haddr = dynaddr.ea.ha; > > - if ((haddr ^ mac_prefix) >> 24 != 0) { > > - /* existing dynamic address is no longer valid */ > > - need_addr.push((port.lsp._uuid, hint)) > > - } else if (used_addrs.contains(haddr)) { > > - dup_addrs.insert(dynaddr.ea); > > - } else { > > - /* has valid dynamic address -- record it in used_addrs */ > > - used_addrs.insert(haddr) > > - } > > - } > > - } > > - }; > > - // FIXME: if a port has a dynamic address that is no longer valid, and > > - // we are unable to allocate a new address, the current behavior is to > > - // keep the old invalid address. It should probably be changed to > > - // removing the old address. > > - // FIXME: OVN allocates MAC addresses by seeding them with IPv4 address. > > - // Implement a custom allocation function that simulates this behavior. > > - var res = allocate_with_hint(used_addrs, need_addr, mac_prefix + 1, mac_prefix + mAC_ADDR_SPACE() - 1); > > - var res_strs = vec_empty(); > > - for (x in res) { > > - (var uuid, var addr) = x; > > - res_strs.push("${uuid2str(uuid)}: ${EthAddr{addr}}") > > - }; > > - (res, dup_addrs) > > - }, > > - var port_address = FlatMap(dyn_addresses), > > - (var lsport, var dyn_addr) = port_address. > > - > > -relation SwitchPortDuplicateMACAddress(dup_addrs: Set<eth_addr>) > > -Warning["Duplicate MAC set: ${ea}"] :- > > - SwitchPortDuplicateMACAddress(dup_addrs), > > - var ea = FlatMap(dup_addrs). 
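The MAC-allocation rule above seeds the tentative 24-bit MAC suffix from the port's IPv4 address, keeping it in the interval (1, 0xfffffe) so that ports with stable IPs tend to receive stable MACs. A sketch of that hint computation (function and constant names are illustrative):

```rust
// Sketch of the tentative-MAC hint from the deleted ipam.dl:
// suffix = (low 24 bits of IPv4) % (MAC_ADDR_SPACE - 1) + 1,
// which always lands in 1..=0xfffffe, avoiding the all-zeros and
// all-ones suffixes.
const MAC_ADDR_SPACE: u64 = 0xff_ffff;

// `mac_prefix` carries the 24-bit OUI in its high half of the 48-bit MAC.
fn mac_hint(mac_prefix: u64, ipv4: u32) -> u64 {
    let suffix = (ipv4 as u64 & 0xff_ffff) % (MAC_ADDR_SPACE - 1) + 1;
    mac_prefix | suffix
}

fn main() {
    let prefix = 0x0a_0000_000000u64; // hypothetical 0a:00:00 OUI prefix
    let hint = mac_hint(prefix, 0xc0a8_0105); // 192.168.1.5
    // Low 24 bits of 192.168.1.5 are 0xa80105, so the suffix is 0xa80106.
    assert_eq!(hint & 0xff_ffff, 0xa8_0106);
    assert!((hint & 0xff_ffff) >= 1 && (hint & 0xff_ffff) <= 0xff_fffe);
    println!("{:#014x}", hint);
}
```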
> > - > > -/* Compute new dynamic MAC address assignment: > > - * - port does not need dynamic MAC - use `static_dynamic_mac` > > - * - a new address has been allocated for port - use this address > > - * - otherwise, use existing dynamic MAC > > - */ > > -relation SwitchPortNewMACDynAddress(lsport: uuid, dyn_addr: Option<eth_addr>) > > - > > -SwitchPortNewMACDynAddress(lsp._uuid, mac_addr) :- > > - &SwitchPort(.needs_dynamic_macaddress = false, > > - .lsp = lsp, > > - .sw = sw, > > - .static_dynamic_mac = static_dynamic_mac), > > - var mac_addr = match (static_dynamic_mac) { > > - None -> None, > > - Some{addr} -> { > > - if (sw.subnet.is_some() or sw.ipv6_prefix.is_some() or > > - map_get(sw.other_config, i"mac_only") == Some{i"true"}) { > > - Some{addr} > > - } else { > > - None > > - } > > - } > > - }. > > - > > -SwitchPortNewMACDynAddress(lsport, Some{EthAddr{addr}}) :- > > - SwitchPortAllocatedMACDynAddress(lsport, addr). > > - > > -SwitchPortNewMACDynAddress(lsp._uuid, addr) :- > > - &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp, .dynamic_address = cur_address), > > - not SwitchPortAllocatedMACDynAddress(lsp._uuid, _), > > - var addr = match (cur_address) { > > - None -> None, > > - Some{dynaddr} -> Some{dynaddr.ea} > > - }. > > - > > -/* > > - * Dynamic IPv6 address allocation. > > - * `needs_dynamic_ipv6address` -> mac.to_ipv6_eui64(ipv6_prefix) > > - */ > > -relation SwitchPortNewDynamicAddress(port: Intern<SwitchPort>, address: Option<lport_addresses>) > > - > > -SwitchPortNewDynamicAddress(port, None) :- > > - port in &SwitchPort(.lsp = lsp), > > - SwitchPortNewMACDynAddress(lsp._uuid, None). 
> > - > > -SwitchPortNewDynamicAddress(port, lport_address) :- > > - port in &SwitchPort(.lsp = lsp, > > - .sw = sw, > > - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, > > - .static_dynamic_ipv6 = static_dynamic_ipv6), > > - SwitchPortNewMACDynAddress(lsp._uuid, Some{mac_addr}), > > - SwitchPortNewIPv4DynAddress(lsp._uuid, opt_ip4_addr), > > - var ip6_addr = match ((static_dynamic_ipv6, needs_dynamic_ipv6address, sw.ipv6_prefix)) { > > - (Some{ipv6}, _, _) -> " ${ipv6}", > > - (_, true, Some{prefix}) -> " ${mac_addr.to_ipv6_eui64(prefix)}", > > - _ -> "" > > - }, > > - var ip4_addr = match (opt_ip4_addr) { > > - None -> "", > > - Some{ip4} -> " ${ip4}" > > - }, > > - var addr_string = "${mac_addr}${ip6_addr}${ip4_addr}", > > - var lport_address = extract_addresses(addr_string). > > - > > - > > -///* If there's more than one dynamic addresses in port->addresses, log a warning > > -// and only allocate the first dynamic address */ > > -// > > -// VLOG_WARN_RL(&rl, "More than one dynamic address " > > -// "configured for logical switch port '%s'", > > -// nbsp->name); > > -// > > -////>> * MAC addresses suffixes in OUIs managed by OVN"s MACAM (MAC Address > > -////>> Management) system, in the range 1...0xfffffe. > > -////>> * IPv4 addresses in ranges managed by OVN's IPAM (IP Address Management) > > -////>> system. The range varies depending on the size of the subnet. > > -////>> > > -////>> Are these `dynamic_addresses` in OVN_Northbound.Logical_Switch_Port`? > > diff --git a/northd/lrouter.dl b/northd/lrouter.dl > > deleted file mode 100644 > > index 0e4308eb5e..0000000000 > > --- a/northd/lrouter.dl > > +++ /dev/null > > @@ -1,947 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. 
> > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -import OVN_Northbound as nb > > -import OVN_Southbound as sb > > -import graph as graph > > -import multicast > > -import ovsdb > > -import ovn > > -import helpers > > -import lswitch > > -import set > > - > > -function is_enabled(lr: nb::Logical_Router): bool { is_enabled(lr.enabled) } > > -function is_enabled(lrp: Intern<nb::Logical_Router_Port>): bool { is_enabled(lrp.enabled) } > > -function is_enabled(rp: RouterPort): bool { rp.lrp.is_enabled() } > > -function is_enabled(rp: Intern<RouterPort>): bool { rp.lrp.is_enabled() } > > - > > -/* default logical flow prioriry for distributed routes */ > > -function dROUTE_PRIO(): bit<32> = 400 > > - > > -/* LogicalRouterPortCandidate. > > - * > > - * Each row pairs a logical router port with its logical router, but without > > - * checking that the logical router port is on only one logical router. > > - * > > - * (Use LogicalRouterPort instead, which guarantees uniqueness.) */ > > -relation LogicalRouterPortCandidate(lrp_uuid: uuid, lr_uuid: uuid) > > -LogicalRouterPortCandidate(lrp_uuid, lr_uuid) :- > > - nb::Logical_Router(._uuid = lr_uuid, .ports = ports), > > - var lrp_uuid = FlatMap(ports). > > -Warning[message] :- > > - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), > > - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), > > - lrs.size() > 1, > > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > > - var message = "Bad configuration: logical router port ${lrp.name} belongs " > > - "to more than one logical router". 
> > - > > -/* Each row means 'lport' is in 'lrouter' (and only that lrouter). */ > > -relation LogicalRouterPort(lport: uuid, lrouter: uuid) > > -LogicalRouterPort(lrp_uuid, lr_uuid) :- > > - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), > > - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), > > - lrs.size() == 1, > > - Some{var lr_uuid} = lrs.nth(0). > > - > > -/* > > - * Peer routers. > > - * > > - * Each row in the relation indicates that routers 'a' and 'b' can reach > > - * each other directly through router ports. > > - * > > - * This relation is symmetric: if (a,b) then (b,a). > > - * This relation is antireflexive: if (a,b) then a != b. > > - * > > - * Routers aren't peers if they can reach each other only through logical > > - * switch ports (that's the ReachableLogicalRouter table). > > - */ > > -relation PeerLogicalRouter(a: uuid, b: uuid) > > -PeerLogicalRouter(lrp_uuid, peer._uuid) :- > > - LogicalRouterPort(lrp_uuid, _), > > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > > - Some{var peer_name} = lrp.peer, > > - peer in &nb::Logical_Router_Port(.name = peer_name), > > - peer.peer == Some{lrp.name}, // 'peer' must point back to 'lrp' > > - lrp_uuid != peer._uuid. // No reflexive pointers. > > - > > -/* > > - * First-hop routers. > > - * > > - * Each row indicates that 'lrouter' is a first-hop logical router for > > - * 'lswitch', that is, that a "cable" directly connects 'lrouter' and > > - * 'lswitch'. > > - * > > - * A switch can have multiple first-hop routers. */ > > -relation FirstHopLogicalRouter(lrouter: uuid, lswitch: uuid) > > -FirstHopLogicalRouter(lrouter, lswitch) :- > > - LogicalRouterPort(lrp_uuid, lrouter), > > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid, .peer = None), > > - LogicalSwitchRouterPort(lsp_uuid, lrp.name, lswitch). 
> > - > > -relation LogicalSwitchRouterPort(lsp: uuid, lsp_router_port: istring, ls: uuid) > > -LogicalSwitchRouterPort(lsp, lsp_router_port, ls) :- > > - LogicalSwitchPort(lsp, ls), > > - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router", .options = options), > > - Some{var lsp_router_port} = options.get(i"router-port"). > > - > > -/* Undirected edges connecting one router and another. > > - * This is a building block for ConnectedLogicalRouter. */ > > -relation LogicalRouterEdge(a: uuid, b: uuid) > > -LogicalRouterEdge(a, b) :- > > - FirstHopLogicalRouter(a, ls), > > - FirstHopLogicalRouter(b, ls), > > - a <= b. > > -LogicalRouterEdge(a, b) :- PeerLogicalRouter(a, b). > > -function edge_from(e: LogicalRouterEdge): uuid = { e.a } > > -function edge_to(e: LogicalRouterEdge): uuid = { e.b } > > - > > -/* > > - * Sets of routers such that packets can transit directly or indirectly among > > - * any of the routers in a set. Any given router is in exactly one set. > > - * > > - * Each row (set, elem) identifes the membership of router with UUID 'elem' in > > - * set 'set', where 'set' is the minimum UUID across all its elements. > > - * > > - * We implement this using the graph transformer because there is no > > - * way to implement "connected components" in raw DDlog that avoids O(n**2) > > - * blowup in the number of nodes in a component. > > - */ > > -relation ConnectedLogicalRouter[(uuid, uuid)] > > -apply graph::ConnectedComponents(LogicalRouterEdge, edge_from, edge_to) > > - -> (ConnectedLogicalRouter) > > - > > -// ha_chassis_group and gateway_chassis may not both be present. > > -Warning[message] :- > > - lrp in &nb::Logical_Router_Port(), > > - lrp.ha_chassis_group.is_some(), > > - not lrp.gateway_chassis.is_empty(), > > - var message = "Both ha_chassis_group and gateway_chassis configured on " > > - "port ${lrp.name}; ignoring the latter". > > - > > -// A distributed gateway port cannot also be an L3 gateway router. 
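As the comment on `ConnectedLogicalRouter` above notes, the deleted `lrouter.dl` delegated connected components to a graph transformer because a naive Datalog encoding blows up O(n²) in component size. Outside Datalog the same grouping is a small union-find; this sketch labels each component by its minimum element, matching the relation's "minimum UUID" convention (plain `u32` node ids stand in for router UUIDs):

```rust
// Union-find sketch of the connected-components grouping that the deleted
// lrouter.dl obtained from graph::ConnectedComponents.

// Find the root of x's tree, compressing the path as we go.
fn find(parent: &mut Vec<u32>, x: u32) -> u32 {
    let p = parent[x as usize];
    if p != x {
        let root = find(parent, p);
        parent[x as usize] = root; // path compression
        root
    } else {
        x
    }
}

// Return, for each node 0..n, the minimum node id in its component.
fn connected_components(n: u32, edges: &[(u32, u32)]) -> Vec<u32> {
    let mut parent: Vec<u32> = (0..n).collect();
    for &(a, b) in edges {
        let ra = find(&mut parent, a);
        let rb = find(&mut parent, b);
        // Union by value: the smaller id stays the root, so each tree's
        // root is also the component's minimum element.
        if ra < rb { parent[rb as usize] = ra; }
        else if rb < ra { parent[ra as usize] = rb; }
    }
    (0..n).map(|x| find(&mut parent, x)).collect()
}

fn main() {
    // Routers 0-1-2 are connected through edges; router 3 is isolated.
    let labels = connected_components(4, &[(1, 0), (1, 2)]);
    assert_eq!(labels, vec![0, 0, 0, 3]);
    println!("{:?}", labels);
}
```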
> > -Warning[message] :- > > - lrp in &nb::Logical_Router_Port(), > > - lrp.ha_chassis_group.is_some() or not lrp.gateway_chassis.is_empty(), > > - lrp.options.contains_key(i"chassis"), > > - var message = "Bad configuration: distributed gateway port configured on " > > - "port ${lrp.name} on L3 gateway router". > > - > > -/* Distributed gateway ports. > > - * > > - * Each row means 'lrp' is a distributed gateway port on 'lr_uuid'. > > - * > > - * A logical router can have multiple distributed gateway ports. */ > > -relation DistributedGatewayPort(lrp: Intern<nb::Logical_Router_Port>, > > - lr_uuid: uuid, cr_lrp_uuid: uuid) > > - > > -// lrp._uuid is already in use; generate a new UUID by hashing it. > > -DistributedGatewayPort(lrp, lr_uuid, hash128(lrp_uuid)) :- > > - lr in nb::Logical_Router(._uuid = lr_uuid), > > - LogicalRouterPort(lrp_uuid, lr._uuid), > > - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), > > - not lrp.options.contains_key(i"chassis"), > > - var has_hcg = lrp.ha_chassis_group.is_some(), > > - var has_gc = not lrp.gateway_chassis.is_empty(), > > - has_hcg or has_gc. > > - > > -/* HAChassis is an abstraction over nb::Gateway_Chassis and nb::HA_Chassis, which > > - * are different ways to represent the same configuration. Each row is > > - * effectively one HA_Chassis record. (Usually, we could associate each > > - * row with a particular 'lr_uuid', but it's permissible for more than one > > - * logical router to use a HA chassis group, so we omit it so that multiple > > - * references get merged.) > > - * > > - * nb::Gateway_Chassis has an "options" column that this omits because > > - * nb::HA_Chassis doesn't have anything similar. That's OK because no options > > - * were ever defined. 
*/ > > -relation HAChassis(hacg_uuid: uuid, > > - hac_uuid: uuid, > > - chassis_name: istring, > > - priority: integer, > > - external_ids: Map<istring,istring>) > > -HAChassis(ha_chassis_group_uuid(lrp._uuid), gw_chassis_uuid, > > - chassis_name, priority, external_ids) :- > > - DistributedGatewayPort(.lrp = lrp), > > - lrp.ha_chassis_group == None, > > - var gw_chassis_uuid = FlatMap(lrp.gateway_chassis), > > - nb::Gateway_Chassis(._uuid = gw_chassis_uuid, > > - .chassis_name = chassis_name, > > - .priority = priority, > > - .external_ids = eids), > > - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). > > -HAChassis(ha_chassis_group_uuid(ha_chassis_group._uuid), ha_chassis_uuid, > > - chassis_name, priority, external_ids) :- > > - DistributedGatewayPort(.lrp = lrp), > > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), > > - var ha_chassis_uuid = FlatMap(ha_chassis_group.ha_chassis), > > - nb::HA_Chassis(._uuid = ha_chassis_uuid, > > - .chassis_name = chassis_name, > > - .priority = priority, > > - .external_ids = eids), > > - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). > > - > > -/* HAChassisGroup is an abstraction for sb::HA_Chassis_Group that papers over > > - * the two southbound ways to configure it via nb::Gateway_Chassis and > > - * nb::HA_Chassis. The former configuration method does not provide a name or > > - * external_ids for the group (only for individual chassis), so we generate > > - * them. > > - * > > - * (Usually, we could associated each row with a particular 'lr_uuid', but it's > > - * permissible for more than one logical router to use a HA chassis group, so > > - * we omit it so that multiple references get merged.) 
> > - */ > > -relation HAChassisGroup(uuid: uuid, > > - name: istring, > > - external_ids: Map<istring,istring>) > > -HAChassisGroup(ha_chassis_group_uuid(lrp._uuid), lrp.name, map_empty()) :- > > - DistributedGatewayPort(.lrp = lrp), > > - lrp.ha_chassis_group == None, > > - not lrp.gateway_chassis.is_empty(). > > -HAChassisGroup(ha_chassis_group_uuid(hac_group_uuid), > > - name, external_ids) :- > > - DistributedGatewayPort(.lrp = lrp), > > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > > - nb::HA_Chassis_Group(._uuid = hacg_uuid, > > - .name = name, > > - .external_ids = external_ids). > > - > > -/* Each row maps from a distributed gateway logical router port to the name of > > - * its HAChassisGroup. > > - * This level of indirection is needed because multiple distributed gateway > > - * logical router ports are allowed to reference a given HAChassisGroup. */ > > -relation DistributedGatewayPortHAChassisGroup( > > - lrp: Intern<nb::Logical_Router_Port>, > > - hacg_uuid: uuid) > > -DistributedGatewayPortHAChassisGroup(lrp, ha_chassis_group_uuid(lrp._uuid)) :- > > - DistributedGatewayPort(.lrp = lrp), > > - lrp.ha_chassis_group == None, > > - lrp.gateway_chassis.size() > 0. > > -DistributedGatewayPortHAChassisGroup(lrp, > > - ha_chassis_group_uuid(hac_group_uuid)) :- > > - DistributedGatewayPort(.lrp = lrp), > > - Some{var hac_group_uuid} = lrp.ha_chassis_group, > > - nb::HA_Chassis_Group(._uuid = hac_group_uuid). > > - > > - > > -/* For each router port, tracks whether it's a redirect port of its router */ > > -relation RouterPortIsRedirect(lrp: uuid, is_redirect: bool) > > -RouterPortIsRedirect(lrp, true) :- DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). > > -RouterPortIsRedirect(lrp, false) :- > > - &nb::Logical_Router_Port(._uuid = lrp), > > - not DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). 
> > - > > -/* > > - * LogicalRouterDGWPorts maps from each logical router UUID > > - * to the logical router's set of distributed gateway (or redirect) ports. */ > > -relation LogicalRouterDGWPorts( > > - lr_uuid: uuid, > > - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>) > > -LogicalRouterDGWPorts(lr_uuid, l3dgw_ports) :- > > - DistributedGatewayPort(lrp, lr_uuid, _), > > - var l3dgw_ports = lrp.group_by(lr_uuid).to_vec(). > > -LogicalRouterDGWPorts(lr_uuid, vec_empty()) :- > > - lr in nb::Logical_Router(), > > - var lr_uuid = lr._uuid, > > - not DistributedGatewayPort(_, lr_uuid, _). > > - > > -typedef ExceptionalExtIps = AllowedExtIps{ips: Intern<nb::Address_Set>} > > - | ExemptedExtIps{ips: Intern<nb::Address_Set>} > > - > > -typedef NAT = NAT{ > > - nat: Intern<nb::NAT>, > > - external_ip: v46_ip, > > - external_mac: Option<eth_addr>, > > - exceptional_ext_ips: Option<ExceptionalExtIps> > > -} > > - > > -relation LogicalRouterNAT0( > > - lr: uuid, > > - nat: Intern<nb::NAT>, > > - external_ip: v46_ip, > > - external_mac: Option<eth_addr>) > > -LogicalRouterNAT0(lr, nat, external_ip, external_mac) :- > > - nb::Logical_Router(._uuid = lr, .nat = nats), > > - var nat_uuid = FlatMap(nats), > > - nat in &nb::NAT(._uuid = nat_uuid), > > - Some{var external_ip} = ip46_parse(nat.external_ip.ival()), > > - var external_mac = match (nat.external_mac) { > > - Some{s} -> eth_addr_from_string(s.ival()), > > - None -> None > > - }. > > -Warning["Bad ip address ${nat.external_ip} in nat configuration for router ${lr_name}."] :- > > - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), > > - var nat_uuid = FlatMap(nats), > > - nat in &nb::NAT(._uuid = nat_uuid), > > - None = ip46_parse(nat.external_ip.ival()). 
> > -Warning["Bad MAC address ${s} in nat configuration for router ${lr_name}."] :- > > - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), > > - var nat_uuid = FlatMap(nats), > > - nat in &nb::NAT(._uuid = nat_uuid), > > - Some{var s} = nat.external_mac, > > - None = eth_addr_from_string(s.ival()). > > - > > -relation LogicalRouterNAT(lr: uuid, nat: NAT) > > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, None}) :- > > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > > - nat.allowed_ext_ips == None, > > - nat.exempted_ext_ips == None. > > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{AllowedExtIps{__as}}}) :- > > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > > - nat.exempted_ext_ips == None, > > - Some{var __as_uuid} = nat.allowed_ext_ips, > > - __as in &nb::Address_Set(._uuid = __as_uuid). > > -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{ExemptedExtIps{__as}}}) :- > > - LogicalRouterNAT0(lr, nat, external_ip, external_mac), > > - nat.allowed_ext_ips == None, > > - Some{var __as_uuid} = nat.exempted_ext_ips, > > - __as in &nb::Address_Set(._uuid = __as_uuid). > > -Warning["NAT rule: ${nat._uuid} not applied, since" > > - "both allowed and exempt external ips set"] :- > > - LogicalRouterNAT0(lr, nat, _, _), > > - nat.allowed_ext_ips.is_some() and nat.exempted_ext_ips.is_some(). > > - > > -relation LogicalRouterNATs(lr: uuid, nat: Vec<NAT>) > > - > > -LogicalRouterNATs(lr, nats) :- > > - LogicalRouterNAT(lr, nat), > > - var nats = nat.group_by(lr).to_vec(). > > - > > -LogicalRouterNATs(lr, vec_empty()) :- > > - nb::Logical_Router(._uuid = lr), > > - not LogicalRouterNAT(lr, _). 
> > - > > -function get_force_snat_ip(options: Map<istring,istring>, key_type: istring): Set<v46_ip> = > > -{ > > - var ips = set_empty(); > > - match (options.get(i"${key_type}_force_snat_ip")) { > > - None -> (), > > - Some{s} -> { > > - for (token in s.split(" ")) { > > - match (ip46_parse(token)) { > > - Some{ip} -> ips.insert(ip), > > - _ -> () // XXX warn > > - } > > - }; > > - } > > - }; > > - ips > > -} > > - > > -function has_force_snat_ip(options: Map<istring, istring>, key_type: istring): bool { > > - not get_force_snat_ip(options, key_type).is_empty() > > -} > > - > > -function lb_force_snat_router_ip(lr_options: Map<istring, istring>): bool { > > - lr_options.get(i"lb_force_snat_ip") == Some{i"router_ip"} and > > - lr_options.contains_key(i"chassis") > > -} > > - > > -typedef LBForceSNAT = NoForceSNAT > > - | ForceSNAT > > - | SkipSNAT > > - > > -function snat_for_lb(lr_options: Map<istring, istring>, lb: Intern<nb::Load_Balancer>): LBForceSNAT { > > - if (lb.options.get_bool_def(i"skip_snat", false)) { > > - return SkipSNAT > > - }; > > - if (not get_force_snat_ip(lr_options, i"lb").is_empty() or lb_force_snat_router_ip(lr_options)) { > > - return ForceSNAT > > - }; > > - return NoForceSNAT > > -} > > - > > -/* For each router, collect the set of IPv4 and IPv6 addresses used for SNAT, > > - * which includes: > > - * > > - * - dnat_force_snat_addrs > > - * - lb_force_snat_addrs > > - * - IP addresses used in the router's attached NAT rules > > - * > > - * This is like init_nat_entries() in northd.c. 
*/ > > -relation LogicalRouterSnatIP(lr: uuid, snat_ip: v46_ip, nat: Option<NAT>) > > -LogicalRouterSnatIP(lr._uuid, force_snat_ip, None) :- > > - lr in nb::Logical_Router(), > > - var dnat_force_snat_ips = get_force_snat_ip(lr.options, i"dnat"), > > - var lb_force_snat_ips = if (lb_force_snat_router_ip(lr.options)) { > > - set_empty() > > - } else { > > - get_force_snat_ip(lr.options, i"lb") > > - }, > > - var force_snat_ip = FlatMap(dnat_force_snat_ips.union(lb_force_snat_ips)). > > -LogicalRouterSnatIP(lr, snat_ip, Some{nat}) :- > > - LogicalRouterNAT(lr, nat@NAT{.nat = &nb::NAT{.__type = i"snat"}, .external_ip = snat_ip}). > > - > > -function group_to_setunionmap(g: Group<'K1, ('K2,Set<'V>)>): Map<'K2,Set<'V>> { > > - var map = map_empty(); > > - for ((entry, _) in g) { > > - (var key, var value) = entry; > > - match (map.get(key)) { > > - None -> map.insert(key, value), > > - Some{old_value} -> map.insert(key, old_value.union(value)) > > - } > > - }; > > - map > > -} > > -relation LogicalRouterSnatIPs(lr: uuid, snat_ips: Map<v46_ip, Set<NAT>>) > > -LogicalRouterSnatIPs(lr, snat_ips) :- > > - LogicalRouterSnatIP(lr, snat_ip, nat), > > - var snat_ips = (snat_ip, nat.to_set()).group_by(lr).group_to_setunionmap(). > > -LogicalRouterSnatIPs(lr._uuid, map_empty()) :- > > - lr in nb::Logical_Router(), > > - not LogicalRouterSnatIP(.lr = lr._uuid). > > - > > -relation LogicalRouterLB(lr: uuid, lb: Intern<LoadBalancer>) > > -LogicalRouterLB(lr, lb) :- > > - nb::Logical_Router(._uuid = lr, .load_balancer = lbs), > > - var lb_uuid = FlatMap(lbs), > > - lb in &LoadBalancer(.lb = &nb::Load_Balancer{._uuid = lb_uuid}). > > - > > -relation LogicalRouterLBs(lr: uuid, lb: Vec<Intern<LoadBalancer>>) > > - > > -LogicalRouterLBs(lr, lbs) :- > > - LogicalRouterLB(lr, lb), > > - var lbs = lb.group_by(lr).to_vec(). > > - > > -LogicalRouterLBs(lr, vec_empty()) :- > > - nb::Logical_Router(._uuid = lr), > > - not LogicalRouterLB(lr, _). 
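Side note for other reviewers, since this file is going away: the `group_to_setunionmap()` helper quoted above folds grouped `(key, set)` pairs into a single map, unioning the sets that share a key — it is what lets several SNAT rules with the same external IP collapse into one `LogicalRouterSnatIPs` entry. A minimal Python sketch of the same semantics (names of my own choosing, not from the tree):

```python
from collections import defaultdict

def group_to_setunionmap(pairs):
    # Merge (key, value_set) pairs into one dict, unioning the sets
    # that share a key -- same behavior as the removed DDlog helper.
    merged = defaultdict(set)
    for key, value_set in pairs:
        merged[key] |= value_set
    return dict(merged)

# Two SNAT rules sharing external IP 10.0.0.1 collapse into one entry:
snat_ips = group_to_setunionmap([("10.0.0.1", {"nat-a"}),
                                 ("10.0.0.1", {"nat-b"}),
                                 ("10.0.0.2", {"nat-c"})])
```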
> > - > > -// LogicalRouterCopp maps from each LR to its collection of Copp meters, > > -// dropping any Copp meter whose meter name doesn't exist. > > -relation LogicalRouterCopp(lr: uuid, meters: Map<istring,istring>) > > -LogicalRouterCopp(lr, meters) :- LogicalRouterCopp0(lr, meters). > > -LogicalRouterCopp(lr, map_empty()) :- > > - nb::Logical_Router(._uuid = lr), > > - not LogicalRouterCopp0(lr, _). > > - > > -relation LogicalRouterCopp0(lr: uuid, meters: Map<istring,istring>) > > -LogicalRouterCopp0(lr, meters) :- > > - nb::Logical_Router(._uuid = lr, .copp = Some{copp_uuid}), > > - nb::Copp(._uuid = copp_uuid, .meters = meters), > > - var entry = FlatMap(meters), > > - (var copp_id, var meter_name) = entry, > > - &nb::Meter(.name = meter_name), > > - var meters = (copp_id, meter_name).group_by(lr).to_map(). > > - > > -/* Router relation collects all attributes of a logical router. > > - * > > - * `l3dgw_ports` - optional redirect ports (see `DistributedGatewayPort`) > > - * `is_gateway` - true iff the router is a gateway router. Together with > > - * `l3dgw_port`, this flag affects the generation of various flows > > - * related to NAT and load balancing. > > - * `learn_from_arp_request` - whether ARP requests to addresses on the router > > - * should always be learned > > - */ > > - > > -function chassis_redirect_name(port_name: istring): string = "cr-${port_name}" > > - > > -typedef LoadBalancer = LoadBalancer { > > - lb: Intern<nb::Load_Balancer>, > > - ipv4s: Set<istring>, > > - ipv6s: Set<istring>, > > - routable: bool > > -} > > - > > -relation LoadBalancer[Intern<LoadBalancer>] > > -LoadBalancer[LoadBalancer{lb, ipv4s, ipv6s, routable}.intern()] :- > > - nb::Load_Balancer[lb], > > - var routable = lb.options.get_bool_def(i"add_route", false), > > - (var ipv4s, var ipv6s) = { > > - var ipv4s = set_empty(); > > - var ipv6s = set_empty(); > > - for ((vip, _) in lb.vips) { > > - /* node->key contains IP:port or just IP. 
*/ > > - match (ip_address_and_port_from_lb_key(vip.ival())) { > > - None -> (), > > - Some{(IPv4{ipv4}, _)} -> ipv4s.insert(i"${ipv4}"), > > - Some{(IPv6{ipv6}, _)} -> ipv6s.insert(i"${ipv6}"), > > - } > > - }; > > - (ipv4s, ipv6s) > > - }. > > - > > -typedef Router = Router { > > - /* Fields copied from nb::Logical_Router. */ > > - _uuid: uuid, > > - name: istring, > > - policies: Set<uuid>, > > - enabled: Option<bool>, > > - nat: Set<uuid>, > > - options: Map<istring,istring>, > > - external_ids: Map<istring,istring>, > > - > > - /* Additional computed fields. */ > > - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>, > > - is_gateway: bool, > > - nats: Vec<NAT>, > > - snat_ips: Map<v46_ip, Set<NAT>>, > > - mcast_cfg: Intern<McastRouterCfg>, > > - learn_from_arp_request: bool, > > - force_lb_snat: bool, > > - copp: Map<istring, istring>, > > -} > > - > > -relation Router[Intern<Router>] > > - > > -Router[Router{ > > - ._uuid = lr._uuid, > > - .name = lr.name, > > - .policies = lr.policies, > > - .enabled = lr.enabled, > > - .nat = lr.nat, > > - .options = lr.options, > > - .external_ids = lr.external_ids, > > - > > - .l3dgw_ports = l3dgw_ports, > > - .is_gateway = lr.options.contains_key(i"chassis"), > > - .nats = nats, > > - .snat_ips = snat_ips, > > - .mcast_cfg = mcast_cfg, > > - .learn_from_arp_request = learn_from_arp_request, > > - .force_lb_snat = force_lb_snat, > > - .copp = copp}.intern()] :- > > - lr in nb::Logical_Router(), > > - lr.is_enabled(), > > - LogicalRouterDGWPorts(lr._uuid, l3dgw_ports), > > - LogicalRouterNATs(lr._uuid, nats), > > - LogicalRouterSnatIPs(lr._uuid, snat_ips), > > - LogicalRouterCopp(lr._uuid, copp), > > - mcast_cfg in &McastRouterCfg(.datapath = lr._uuid), > > - var learn_from_arp_request = lr.options.get_bool_def(i"always_learn_from_arp_request", true), > > - var force_lb_snat = lb_force_snat_router_ip(lr.options). 
> > - > > -typedef LogicalRouterLBIPs = LogicalRouterLBIPs { > > - lr: uuid, > > - lb_ipv4s_routable: Set<istring>, > > - lb_ipv4s_unroutable: Set<istring>, > > - lb_ipv6s_routable: Set<istring>, > > - lb_ipv6s_unroutable: Set<istring>, > > -} > > - > > -relation LogicalRouterLBIPs[Intern<LogicalRouterLBIPs>] > > - > > -LogicalRouterLBIPs[LogicalRouterLBIPs{ > > - .lr = lr_uuid, > > - .lb_ipv4s_routable = lb_ipv4s_routable, > > - .lb_ipv4s_unroutable = lb_ipv4s_unroutable, > > - .lb_ipv6s_routable = lb_ipv6s_routable, > > - .lb_ipv6s_unroutable = lb_ipv6s_unroutable > > - }.intern() > > -] :- > > - LogicalRouterLBs(lr_uuid, lbs), > > - (var lb_ipv4s_routable, var lb_ipv4s_unroutable, > > - var lb_ipv6s_routable, var lb_ipv6s_unroutable) = { > > - var lb_ipv4s_routable = set_empty(); > > - var lb_ipv4s_unroutable = set_empty(); > > - var lb_ipv6s_routable = set_empty(); > > - var lb_ipv6s_unroutable = set_empty(); > > - for (lb in lbs) { > > - if (lb.routable) { > > - lb_ipv4s_routable = lb_ipv4s_routable.union(lb.ipv4s); > > - lb_ipv6s_routable = lb_ipv6s_routable.union(lb.ipv6s); > > - } else { > > - lb_ipv4s_unroutable = lb_ipv4s_unroutable.union(lb.ipv4s); > > - lb_ipv6s_unroutable = lb_ipv6s_unroutable.union(lb.ipv6s); > > - } > > - }; > > - (lb_ipv4s_routable, lb_ipv4s_unroutable, > > - lb_ipv6s_routable, lb_ipv6s_unroutable) > > - }. > > - > > -/* Router - to - LB-uuid */ > > -relation RouterLB(router: Intern<Router>, lb_uuid: uuid) > > - > > -RouterLB(router, lb.lb._uuid) :- > > - LogicalRouterLB(lr_uuid, lb), > > - router in &Router(._uuid = lr_uuid). > > - > > -/* Like RouterLB, but only includes gateway routers. */ > > -relation GWRouterLB(router: Intern<Router>, lb_uuid: uuid) > > - > > -GWRouterLB(router, lb_uuid) :- > > - RouterLB(router, lb_uuid), > > - router.l3dgw_ports.len() > 0 or router.is_gateway. 
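Another aside: the `LogicalRouterLBIPs` rule above is just a four-way partition of a router's load-balancer VIPs, by address family and by whether the LB asked for a route (`options:add_route=true`). Roughly, in Python (the dict shape is my own illustration, not a structure from the tree):

```python
def partition_lb_ips(lbs):
    # Split LB VIPs into (v4 routable, v4 unroutable, v6 routable,
    # v6 unroutable), as the removed LogicalRouterLBIPs rule does.
    # Each lb here is a dict: {"routable": bool, "ipv4s": set, "ipv6s": set}.
    v4_r, v4_u, v6_r, v6_u = set(), set(), set(), set()
    for lb in lbs:
        if lb["routable"]:
            v4_r |= lb["ipv4s"]
            v6_r |= lb["ipv6s"]
        else:
            v4_u |= lb["ipv4s"]
            v6_u |= lb["ipv6s"]
    return v4_r, v4_u, v6_r, v6_u
```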
> > - > > -/* Router-to-router logical port connections */ > > -relation RouterRouterPeer(rport1: uuid, rport2: uuid, rport2_name: istring) > > - > > -RouterRouterPeer(rport1, rport2, peer_name) :- > > - &nb::Logical_Router_Port(._uuid = rport1, .peer = peer), > > - Some{var peer_name} = peer, > > - &nb::Logical_Router_Port(._uuid = rport2, .name = peer_name). > > - > > -/* Router port can peer with anothe router port, a switch port or have > > - * no peer. > > - */ > > -typedef RouterPeer = PeerRouter{rport: uuid, name: istring} > > - | PeerSwitch{sport: uuid, name: istring} > > - | PeerNone > > - > > -function router_peer_name(peer: RouterPeer): Option<istring> = { > > - match (peer) { > > - PeerRouter{_, n} -> Some{n}, > > - PeerSwitch{_, n} -> Some{n}, > > - PeerNone -> None > > - } > > -} > > - > > -relation RouterPortPeer(rport: uuid, peer: RouterPeer) > > - > > -/* Router-to-router logical port connections */ > > -RouterPortPeer(rport, PeerSwitch{sport, sport_name}) :- > > - SwitchRouterPeer(sport, sport_name, rport). > > - > > -RouterPortPeer(rport1, PeerRouter{rport2, rport2_name}) :- > > - RouterRouterPeer(rport1, rport2, rport2_name). > > - > > -RouterPortPeer(rport, PeerNone) :- > > - &nb::Logical_Router_Port(._uuid = rport), > > - not SwitchRouterPeer(_, _, rport), > > - not RouterRouterPeer(rport, _, _). > > - > > -/* Each row maps from a Logical_Router port to the input options in its > > - * corresponding Port_Binding (if any). This is because northd preserves > > - * most of the options in that column. (northd unconditionally sets the > > - * ipv6_prefix_delegation and ipv6_prefix options, so we remove them for > > - * faster convergence.) 
*/ > > -relation RouterPortSbOptions(lrp_uuid: uuid, options: Map<istring,istring>) > > -RouterPortSbOptions(lrp._uuid, options) :- > > - lrp in &nb::Logical_Router_Port(), > > - pb in sb::Port_Binding(._uuid = lrp._uuid), > > - var options = { > > - var options = pb.options; > > - options.remove(i"ipv6_prefix"); > > - options.remove(i"ipv6_prefix_delegation"); > > - options > > - }. > > -RouterPortSbOptions(lrp._uuid, map_empty()) :- > > - lrp in &nb::Logical_Router_Port(), > > - not sb::Port_Binding(._uuid = lrp._uuid). > > - > > -relation RouterPortHasBfd(lrp_uuid: uuid, has_bfd: bool) > > -RouterPortHasBfd(lrp_uuid, true) :- > > - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), > > - nb::BFD(.logical_port = logical_port). > > -RouterPortHasBfd(lrp_uuid, false) :- > > - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), > > - not nb::BFD(.logical_port = logical_port). > > - > > -/* FIXME: what should happen when extract_lrp_networks fails? */ > > -/* RouterPort relation collects all attributes of a logical router port */ > > -typedef RouterPort = RouterPort { > > - lrp: Intern<nb::Logical_Router_Port>, > > - json_name: string, > > - networks: lport_addresses, > > - router: Intern<Router>, > > - is_redirect: bool, > > - peer: RouterPeer, > > - mcast_cfg: Intern<McastPortCfg>, > > - sb_options: Map<istring,istring>, > > - has_bfd: bool, > > - enabled: bool > > -} > > - > > -relation RouterPort[Intern<RouterPort>] > > - > > -RouterPort[RouterPort{ > > - .lrp = lrp, > > - .json_name = json_escape(lrp.name), > > - .networks = networks, > > - .router = router, > > - .is_redirect = is_redirect, > > - .peer = peer, > > - .mcast_cfg = mcast_cfg, > > - .sb_options = sb_options, > > - .has_bfd = has_bfd, > > - .enabled = lrp.is_enabled() > > - }.intern()] :- > > - lrp in &nb::Logical_Router_Port(), > > - Some{var networks} = extract_lrp_networks(lrp.mac.ival(), lrp.networks.map(ival)), > > - LogicalRouterPort(lrp._uuid, lrouter_uuid), > > 
- router in &Router(._uuid = lrouter_uuid), > > - RouterPortIsRedirect(lrp._uuid, is_redirect), > > - RouterPortPeer(lrp._uuid, peer), > > - mcast_cfg in &McastPortCfg(.port = lrp._uuid, .router_port = true), > > - RouterPortSbOptions(lrp._uuid, sb_options), > > - RouterPortHasBfd(lrp._uuid, has_bfd). > > - > > -relation RouterPortNetworksIPv4Addr(port: Intern<RouterPort>, addr: ipv4_netaddr) > > - > > -RouterPortNetworksIPv4Addr(port, addr) :- > > - port in &RouterPort(.networks = networks), > > - var addr = FlatMap(networks.ipv4_addrs). > > - > > -relation RouterPortNetworksIPv6Addr(port: Intern<RouterPort>, addr: ipv6_netaddr) > > - > > -RouterPortNetworksIPv6Addr(port, addr) :- > > - port in &RouterPort(.networks = networks), > > - var addr = FlatMap(networks.ipv6_addrs). > > - > > -/* StaticRoute: Collects and parses attributes of a static route. */ > > -typedef route_policy = SrcIp | DstIp > > -function route_policy_from_string(s: Option<istring>): route_policy = { > > - if (s == Some{i"src-ip"}) { SrcIp } else { DstIp } > > -} > > -function to_string(policy: route_policy): string = { > > - match (policy) { > > - SrcIp -> "src-ip", > > - DstIp -> "dst-ip" > > - } > > -} > > - > > -typedef route_key = RouteKey { > > - policy: route_policy, > > - ip_prefix: v46_ip, > > - plen: bit<32> > > -} > > - > > -/* StaticRouteDown contains the UUID of all the static routes that are down. > > - * A static route is down if it has a BFD whose dst_ip matches it nexthop and > > - * that BFD is down or admin_down. */ > > -relation StaticRouteDown(lrsr_uuid: uuid) > > -StaticRouteDown(lrsr_uuid) :- > > - nb::Logical_Router_Static_Route(._uuid = lrsr_uuid, .bfd = Some{bfd_uuid}, .nexthop = nexthop), > > - bfd in nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop), > > - match (bfd.status) { > > - None -> true, > > - Some{status} -> (status == i"admin_down" or status == i"down") > > - }. 
> > - > > -relation &StaticRoute(lrsr: nb::Logical_Router_Static_Route, > > - key: route_key, > > - nexthop: v46_ip, > > - output_port: Option<istring>, > > - ecmp_symmetric_reply: bool) > > - > > -&StaticRoute(.lrsr = lrsr, > > - .key = RouteKey{policy, ip_prefix, plen}, > > - .nexthop = nexthop, > > - .output_port = lrsr.output_port, > > - .ecmp_symmetric_reply = esr) :- > > - lrsr in nb::Logical_Router_Static_Route(), > > - not StaticRouteDown(lrsr._uuid), > > - var policy = route_policy_from_string(lrsr.policy), > > - Some{(var nexthop, var nexthop_plen)} = ip46_parse_cidr(lrsr.nexthop.ival()), > > - match (nexthop) { > > - IPv4{_} -> nexthop_plen == 32, > > - IPv6{_} -> nexthop_plen == 128 > > - }, > > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()), > > - match ((nexthop, ip_prefix)) { > > - (IPv4{_}, IPv4{_}) -> true, > > - (IPv6{_}, IPv6{_}) -> true, > > - _ -> false > > - }, > > - var esr = lrsr.options.get_bool_def(i"ecmp_symmetric_reply", false). > > - > > -relation &StaticRouteEmptyNextHop(lrsr: nb::Logical_Router_Static_Route, > > - key: route_key, > > - output_port: Option<istring>) > > -&StaticRouteEmptyNextHop(.lrsr = lrsr, > > - .key = RouteKey{policy, ip_prefix, plen}, > > - .output_port = lrsr.output_port) :- > > - lrsr in nb::Logical_Router_Static_Route(.nexthop = i""), > > - not StaticRouteDown(lrsr._uuid), > > - var policy = route_policy_from_string(lrsr.policy), > > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). > > - > > -/* Returns the IP address of the router port 'op' that > > - * overlaps with 'ip'. If one is not found, returns None. */ > > -function find_lrp_member_ip(networks: lport_addresses, ip: v46_ip): Option<v46_ip> = > > -{ > > - match (ip) { > > - IPv4{ip4} -> { > > - for (na in networks.ipv4_addrs) { > > - if ((na.addr, ip4).same_network(na.netmask())) { > > - /* There should be only 1 interface that matches the > > - * supplied IP. 
Otherwise, it's a configuration error, > > - * because subnets of a router's interfaces should NOT > > - * overlap. */ > > - return Some{IPv4{na.addr}} > > - } > > - }; > > - return None > > - }, > > - IPv6{ip6} -> { > > - for (na in networks.ipv6_addrs) { > > - if ((na.addr, ip6).same_network(na.netmask())) { > > - /* There should be only 1 interface that matches the > > - * supplied IP. Otherwise, it's a configuration error, > > - * because subnets of a router's interfaces should NOT > > - * overlap. */ > > - return Some{IPv6{na.addr}} > > - } > > - }; > > - return None > > - } > > - } > > -} > > - > > - > > -/* Step 1: compute router-route pairs */ > > -relation RouterStaticRoute_( > > - router : Intern<Router>, > > - key : route_key, > > - nexthop : v46_ip, > > - output_port : Option<istring>, > > - ecmp_symmetric_reply : bool) > > - > > -RouterStaticRoute_(.router = router, > > - .key = route.key, > > - .nexthop = route.nexthop, > > - .output_port = route.output_port, > > - .ecmp_symmetric_reply = route.ecmp_symmetric_reply) :- > > - router in &Router(), > > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > > - var route_id = FlatMap(routes), > > - route in &StaticRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). > > - > > -relation RouterStaticRouteEmptyNextHop_( > > - router : Intern<Router>, > > - key : route_key, > > - output_port : Option<istring>) > > - > > -RouterStaticRouteEmptyNextHop_(.router = router, > > - .key = route.key, > > - .output_port = route.output_port) :- > > - router in &Router(), > > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > > - var route_id = FlatMap(routes), > > - route in &StaticRouteEmptyNextHop(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). 
> > - > > -/* Step-2: compute output_port for each pair */ > > -typedef route_dst = RouteDst { > > - nexthop: v46_ip, > > - src_ip: v46_ip, > > - port: Intern<RouterPort>, > > - ecmp_symmetric_reply: bool > > -} > > - > > -relation RouterStaticRoute( > > - router : Intern<Router>, > > - key : route_key, > > - dsts : Set<route_dst>) > > - > > -RouterStaticRoute(router, key, dsts) :- > > - rsr in RouterStaticRoute_(.router = router, .output_port = None), > > - /* output_port is not specified, find the > > - * router port matching the next hop. */ > > - port in &RouterPort(.router = &Router{._uuid = router._uuid}, > > - .networks = networks), > > - Some{var src_ip} = find_lrp_member_ip(networks, rsr.nexthop), > > - var dst = RouteDst{rsr.nexthop, src_ip, port, rsr.ecmp_symmetric_reply}, > > - var key = rsr.key, > > - var dsts = dst.group_by((router, key)).to_set(). > > - > > -RouterStaticRoute(router, key, dsts) :- > > - RouterStaticRoute_(.router = router, > > - .key = key, > > - .nexthop = nexthop, > > - .output_port = Some{oport}, > > - .ecmp_symmetric_reply = ecmp_symmetric_reply), > > - /* output_port specified */ > > - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, > > - .networks = networks), > > - Some{var src_ip} = match (find_lrp_member_ip(networks, nexthop)) { > > - Some{src_ip} -> Some{src_ip}, > > - None -> { > > - /* There are no IP networks configured on the router's port via > > - * which 'route->nexthop' is theoretically reachable. But since > > - * 'out_port' has been specified, we honor it by trying to reach > > - * 'route->nexthop' via the first IP address of 'out_port'. > > - * (There are cases, e.g in GCE, where each VM gets a /32 IP > > - * address and the default gateway is still reachable from it.) 
*/ > > - match (key.ip_prefix) { > > - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { > > - Some{addr} -> Some{IPv4{addr.addr}}, > > - None -> { > > - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); > > - None > > - } > > - }, > > - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { > > - Some{addr} -> Some{IPv6{addr.addr}}, > > - None -> { > > - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); > > - None > > - } > > - } > > - } > > - } > > - }, > > - var dsts = set_singleton(RouteDst{nexthop, src_ip, port, ecmp_symmetric_reply}). > > - > > -relation RouterStaticRouteEmptyNextHop( > > - router : Intern<Router>, > > - key : route_key, > > - dsts : Set<route_dst>) > > - > > -RouterStaticRouteEmptyNextHop(router, key, dsts) :- > > - RouterStaticRouteEmptyNextHop_(.router = router, > > - .key = key, > > - .output_port = Some{oport}), > > - /* output_port specified */ > > - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, > > - .networks = networks), > > - /* There are no IP networks configured on the router's port via > > - * which 'route->nexthop' is theoretically reachable. But since > > - * 'out_port' has been specified, we honor it by trying to reach > > - * 'route->nexthop' via the first IP address of 'out_port'. > > - * (There are cases, e.g in GCE, where each VM gets a /32 IP > > - * address and the default gateway is still reachable from it.) */ > > - Some{var src_ip} = match (key.ip_prefix) { > > - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { > > - Some{addr} -> Some{IPv4{addr.addr}}, > > - None -> { > > - warn("No path for static route ${key.ip_prefix}"); > > - None > > - } > > - }, > > - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { > > - Some{addr} -> Some{IPv6{addr.addr}}, > > - None -> { > > - warn("No path for static route ${key.ip_prefix}"); > > - None > > - } > > - } > > - }, > > - var dsts = set_singleton(RouteDst{src_ip, src_ip, port, false}). 
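For anyone cross-checking against the C implementation: the two rules above encode the output-port fallback described in the quoted comment — prefer the port address whose subnet contains the nexthop, otherwise fall back to the port's first address of the right family (the GCE /32 case). A rough Python sketch of that selection (helper name is mine):

```python
import ipaddress

def pick_src_ip(port_networks, nexthop):
    # port_networks: the port's addresses in CIDR form,
    # e.g. ["192.168.1.1/24"]; nexthop: the route's nexthop IP.
    nh = ipaddress.ip_address(nexthop)
    same_family = [ipaddress.ip_interface(n) for n in port_networks
                   if ipaddress.ip_interface(n).version == nh.version]
    # Prefer an address whose subnet actually contains the nexthop.
    for iface in same_family:
        if nh in iface.network:
            return str(iface.ip)
    # Fallback: first address of the matching family, if any
    # (covers the /32-per-VM case from the comment above).
    return str(same_family[0].ip) if same_family else None
```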
> > - > > -/* compute route-route pairs for nexthop = "discard" routes */ > > -relation &DiscardRoute(lrsr: nb::Logical_Router_Static_Route, > > - key: route_key) > > -&DiscardRoute(.lrsr = lrsr, > > - .key = RouteKey{policy, ip_prefix, plen}) :- > > - lrsr in nb::Logical_Router_Static_Route(.nexthop = i"discard"), > > - var policy = route_policy_from_string(lrsr.policy), > > - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). > > - > > -relation RouterDiscardRoute_( > > - router : Intern<Router>, > > - key : route_key) > > - > > -RouterDiscardRoute_(.router = router, > > - .key = route.key) :- > > - router in &Router(), > > - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), > > - var route_id = FlatMap(routes), > > - route in &DiscardRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). > > - > > -Warning[message] :- > > - RouterStaticRoute_(.router = router, .key = key, .nexthop = nexthop), > > - not RouterStaticRoute(.router = router, .key = key), > > - var message = "No path for ${key.policy} static route ${key.ip_prefix}/${key.plen} with next hop ${nexthop}". > > diff --git a/northd/lswitch.dl b/northd/lswitch.dl > > deleted file mode 100644 > > index 33c5c706b3..0000000000 > > --- a/northd/lswitch.dl > > +++ /dev/null > > @@ -1,824 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. 
> > - */ > > - > > -import OVN_Northbound as nb > > -import OVN_Southbound as sb > > -import ovsdb > > -import ovn > > -import lrouter > > -import multicast > > -import helpers > > -import ipam > > -import vec > > -import set > > - > > -function is_enabled(lsp: Intern<nb::Logical_Switch_Port>): bool { is_enabled(lsp.enabled) } > > -function is_enabled(sp: SwitchPort): bool { sp.lsp.is_enabled() } > > -function is_enabled(sp: Intern<SwitchPort>): bool { sp.lsp.is_enabled() } > > - > > -relation SwitchRouterPeerRef(lsp: uuid, rport: Option<Intern<RouterPort>>) > > - > > -SwitchRouterPeerRef(lsp, Some{rport}) :- > > - SwitchRouterPeer(lsp, _, lrp), > > - rport in &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp}). > > - > > -SwitchRouterPeerRef(lsp, None) :- > > - &nb::Logical_Switch_Port(._uuid = lsp), > > - not SwitchRouterPeer(lsp, _, _). > > - > > -/* LogicalSwitchPortCandidate. > > - * > > - * Each row pairs a logical switch port with its logical switch, but without > > - * checking that the logical switch port is on only one logical switch. > > - * > > - * (Use LogicalSwitchPort instead, which guarantees uniqueness.) */ > > -relation LogicalSwitchPortCandidate(lsp_uuid: uuid, ls_uuid: uuid) > > -LogicalSwitchPortCandidate(lsp_uuid, ls_uuid) :- > > - &nb::Logical_Switch(._uuid = ls_uuid, .ports = ports), > > - var lsp_uuid = FlatMap(ports). > > -Warning[message] :- > > - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), > > - var lss = ls_uuid.group_by(lsp_uuid).to_set(), > > - lss.size() > 1, > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - var message = "Bad configuration: logical switch port ${lsp.name} belongs " > > - "to more than one logical switch". > > - > > -/* Each row means 'lport' is in 'lswitch' (and only that lswitch). 
*/ > > -relation LogicalSwitchPort(lport: uuid, lswitch: uuid) > > -LogicalSwitchPort(lsp_uuid, ls_uuid) :- > > - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), > > - var lss = ls_uuid.group_by(lsp_uuid).to_set(), > > - lss.size() == 1, > > - Some{var ls_uuid} = lss.nth(0). > > - > > -/* Each logical switch port with an "unknown" address (with its logical switch). */ > > -relation LogicalSwitchPortWithUnknownAddress(ls: uuid, lsp: uuid) > > -LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid) :- > > - LogicalSwitchPort(lsp_uuid, ls_uuid), > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - lsp.is_enabled() and lsp.addresses.contains(i"unknown"). > > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasUnknownPorts(ls: uuid, has_unknown: bool) > > -LogicalSwitchHasUnknownPorts(ls, true) :- LogicalSwitchPortWithUnknownAddress(ls, _). > > -LogicalSwitchHasUnknownPorts(ls, false) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchPortWithUnknownAddress(ls, _). > > - > > -/* PortStaticAddresses: static IP addresses associated with each Logical_Switch_Port */ > > -relation PortStaticAddresses(lsport: uuid, ip4addrs: Set<istring>, ip6addrs: Set<istring>) > > - > > -PortStaticAddresses(.lsport = port_uuid, > > - .ip4addrs = ip4_addrs.union().map(intern), > > - .ip6addrs = ip6_addrs.union().map(intern)) :- > > - &nb::Logical_Switch_Port(._uuid = port_uuid, .addresses = addresses), > > - var address = FlatMap(if (addresses.is_empty()) { set_singleton(i"") } else { addresses }), > > - (var ip4addrs, var ip6addrs) = if (not is_dynamic_lsp_address(address.ival())) { > > - split_addresses(address.ival()) > > - } else { (set_empty(), set_empty()) }, > > - (var ip4_addrs, var ip6_addrs) = (ip4addrs, ip6addrs).group_by(port_uuid).group_unzip(). 
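One more aside: `PortStaticAddresses` above boils down to taking each non-dynamic `addresses` entry of a switch port ("<mac> <ip> [<ip>...]") and splitting its IPs by family. A small illustrative Python version — simplified, since I'm only showing the plain-IP case, not every form the real parser accepts:

```python
import ipaddress

def split_addresses(address):
    # Split one Logical_Switch_Port "addresses" entry of the form
    # "<mac> <ip> <ip> ..." into (ipv4s, ipv6s); tokens that are not
    # plain IP addresses are skipped in this simplified sketch.
    ip4, ip6 = set(), set()
    for token in address.split()[1:]:  # skip the leading MAC
        try:
            ip = ipaddress.ip_address(token)
        except ValueError:
            continue
        (ip4 if ip.version == 4 else ip6).add(token)
    return ip4, ip6
```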
> > - > > -relation PortInGroup(port: uuid, group: uuid) > > - > > -PortInGroup(port, group) :- > > - nb::Port_Group(._uuid = group, .ports = ports), > > - var port = FlatMap(ports). > > - > > -/* All ACLs associated with logical switch */ > > -relation LogicalSwitchACL(ls: uuid, acl: uuid) > > - > > -LogicalSwitchACL(ls, acl) :- > > - &nb::Logical_Switch(._uuid = ls, .acls = acls), > > - var acl = FlatMap(acls). > > - > > -LogicalSwitchACL(ls, acl) :- > > - &nb::Logical_Switch(._uuid = ls, .ports = ports), > > - var port_id = FlatMap(ports), > > - PortInGroup(port_id, group_id), > > - nb::Port_Group(._uuid = group_id, .acls = acls), > > - var acl = FlatMap(acls). > > - > > -relation LogicalSwitchStatefulACL(ls: uuid, acl: uuid) > > - > > -LogicalSwitchStatefulACL(ls, acl) :- > > - LogicalSwitchACL(ls, acl), > > - &nb::ACL(._uuid = acl, .action = i"allow-related"). > > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasStatefulACL(ls: uuid, has_stateful_acl: bool) > > - > > -LogicalSwitchHasStatefulACL(ls, true) :- > > - LogicalSwitchStatefulACL(ls, _). > > - > > -LogicalSwitchHasStatefulACL(ls, false) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchStatefulACL(ls, _). > > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) > > - > > -LogicalSwitchHasACLs(ls, true) :- > > - LogicalSwitchACL(ls, _). > > - > > -LogicalSwitchHasACLs(ls, false) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchACL(ls, _). > > - > > -/* > > - * LogicalSwitchLocalnetPorts maps from each logical switch UUID > > - * to the logical switch's set of localnet ports. Each localnet > > - * port is expressed as a tuple of its UUID and its name. 
> > - */ > > -relation LogicalSwitchLocalnetPort0(ls_uuid: uuid, lsp: (uuid, istring)) > > -LogicalSwitchLocalnetPort0(ls_uuid, (lsp_uuid, lsp.name)) :- > > - ls in &nb::Logical_Switch(._uuid = ls_uuid), > > - var lsp_uuid = FlatMap(ls.ports), > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - lsp.__type == i"localnet". > > - > > -relation LogicalSwitchLocalnetPorts(ls_uuid: uuid, localnet_ports: Vec<(uuid, istring)>) > > -LogicalSwitchLocalnetPorts(ls_uuid, localnet_ports) :- > > - LogicalSwitchLocalnetPort0(ls_uuid, lsp), > > - var localnet_ports = lsp.group_by(ls_uuid).to_vec(). > > -LogicalSwitchLocalnetPorts(ls_uuid, vec_empty()) :- > > - ls in &nb::Logical_Switch(), > > - var ls_uuid = ls._uuid, > > - not LogicalSwitchLocalnetPort0(ls_uuid, _). > > - > > -/* Flatten the list of dns_records in Logical_Switch */ > > -relation LogicalSwitchDNS(ls_uuid: uuid, dns_uuid: uuid) > > - > > -LogicalSwitchDNS(ls._uuid, dns_uuid) :- > > - &nb::Logical_Switch[ls], > > - var dns_uuid = FlatMap(ls.dns_records), > > - nb::DNS(._uuid = dns_uuid). > > - > > -relation LogicalSwitchWithDNSRecords(ls: uuid) > > - > > -LogicalSwitchWithDNSRecords(ls) :- > > - LogicalSwitchDNS(ls, dns_uuid), > > - nb::DNS(._uuid = dns_uuid, .records = records), > > - not records.is_empty(). > > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasDNSRecords(ls: uuid, has_dns_records: bool) > > - > > -LogicalSwitchHasDNSRecords(ls, true) :- > > - LogicalSwitchWithDNSRecords(ls). > > - > > -LogicalSwitchHasDNSRecords(ls, false) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchWithDNSRecords(ls). > > - > > -relation LogicalSwitchHasNonRouterPort0(ls: uuid) > > -LogicalSwitchHasNonRouterPort0(ls_uuid) :- > > - ls in &nb::Logical_Switch(._uuid = ls_uuid), > > - var lsp_uuid = FlatMap(ls.ports), > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - lsp.__type != i"router". 
> > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasNonRouterPort(ls: uuid, has_non_router_port: bool) > > -LogicalSwitchHasNonRouterPort(ls, true) :- > > - LogicalSwitchHasNonRouterPort0(ls). > > -LogicalSwitchHasNonRouterPort(ls, false) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchHasNonRouterPort0(ls). > > - > > -// LogicalSwitchCopp maps from each LS to its collection of Copp meters, > > -// dropping any Copp meter whose meter name doesn't exist. > > -relation LogicalSwitchCopp(ls: uuid, meters: Map<istring,istring>) > > -LogicalSwitchCopp(ls, meters) :- LogicalSwitchCopp0(ls, meters). > > -LogicalSwitchCopp(ls, map_empty()) :- > > - &nb::Logical_Switch(._uuid = ls), > > - not LogicalSwitchCopp0(ls, _). > > - > > -relation LogicalSwitchCopp0(ls: uuid, meters: Map<istring,istring>) > > -LogicalSwitchCopp0(ls, meters) :- > > - &nb::Logical_Switch(._uuid = ls, .copp = Some{copp_uuid}), > > - nb::Copp(._uuid = copp_uuid, .meters = meters), > > - var entry = FlatMap(meters), > > - (var copp_id, var meter_name) = entry, > > - &nb::Meter(.name = meter_name), > > - var meters = (copp_id, meter_name).group_by(ls).to_map(). > > - > > -/* Switch relation collects all attributes of a logical switch */ > > - > > -typedef Switch = Switch { > > - /* Fields copied from nb::Logical_Switch_Port. */ > > - _uuid: uuid, > > - name: istring, > > - other_config: Map<istring,istring>, > > - external_ids: Map<istring,istring>, > > - > > - /* Additional computed fields. */ > > - has_stateful_acl: bool, > > - has_acls: bool, > > - has_lb_vip: bool, > > - has_dns_records: bool, > > - has_unknown_ports: bool, > > - localnet_ports: Vec<(uuid, istring)>, // UUID and name of each localnet port. 
> > - subnet: Option<(in_addr/*subnet*/, in_addr/*mask*/, bit<32>/*start_ipv4*/, bit<32>/*total_ipv4s*/)>, > > - ipv6_prefix: Option<in6_addr>, > > - mcast_cfg: Intern<McastSwitchCfg>, > > - is_vlan_transparent: bool, > > - copp: Map<istring, istring>, > > - > > - /* Does this switch have at least one port with type != "router"? */ > > - has_non_router_port: bool > > -} > > - > > - > > -relation Switch[Intern<Switch>] > > - > > -function ipv6_parse_prefix(s: string): Option<in6_addr> { > > - if (string_contains(s, "/")) { > > - match (ipv6_parse_cidr(s)) { > > - Right{(addr, 64)} -> Some{addr}, > > - _ -> None > > - } > > - } else { > > - ipv6_parse(s) > > - } > > -} > > - > > -Switch[Switch{ > > - ._uuid = ls._uuid, > > - .name = ls.name, > > - .other_config = ls.other_config, > > - .external_ids = ls.external_ids, > > - > > - .has_stateful_acl = has_stateful_acl, > > - .has_acls = has_acls, > > - .has_lb_vip = has_lb_vip, > > - .has_dns_records = has_dns_records, > > - .has_unknown_ports = has_unknown_ports, > > - .localnet_ports = localnet_ports, > > - .subnet = subnet, > > - .ipv6_prefix = ipv6_prefix, > > - .mcast_cfg = mcast_cfg, > > - .has_non_router_port = has_non_router_port, > > - .copp = copp, > > - .is_vlan_transparent = is_vlan_transparent > > - }.intern()] :- > > - &nb::Logical_Switch[ls], > > - LogicalSwitchHasStatefulACL(ls._uuid, has_stateful_acl), > > - LogicalSwitchHasACLs(ls._uuid, has_acls), > > - LogicalSwitchHasLBVIP(ls._uuid, has_lb_vip), > > - LogicalSwitchHasDNSRecords(ls._uuid, has_dns_records), > > - LogicalSwitchHasUnknownPorts(ls._uuid, has_unknown_ports), > > - LogicalSwitchLocalnetPorts(ls._uuid, localnet_ports), > > - LogicalSwitchHasNonRouterPort(ls._uuid, has_non_router_port), > > - LogicalSwitchCopp(ls._uuid, copp), > > - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid), > > - var subnet = > > - match (ls.other_config.get(i"subnet")) { > > - None -> None, > > - Some{subnet_str} -> { > > - match 
(ip_parse_masked(subnet_str.ival())) { > > - Left{err} -> { > > - warn("bad 'subnet' ${subnet_str}"); > > - None > > - }, > > - Right{(subnet, mask)} -> { > > - if (mask.cidr_bits() == Some{32} or not mask.is_cidr()) { > > - warn("bad 'subnet' ${subnet_str}"); > > - None > > - } else { > > - Some{(subnet, mask, (subnet.a & mask.a) + 1, ~mask.a)} > > - } > > - } > > - } > > - } > > - }, > > - var ipv6_prefix = > > - match (ls.other_config.get(i"ipv6_prefix")) { > > - None -> None, > > - Some{prefix} -> ipv6_parse_prefix(prefix.ival()) > > - }, > > - var is_vlan_transparent = ls.other_config.get_bool_def(i"vlan-passthru", false). > > - > > -/* LogicalSwitchLB: many-to-many relation between logical switches and nb::LB */ > > -relation LogicalSwitchLB(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>) > > -LogicalSwitchLB(sw_uuid, lb) :- > > - &nb::Logical_Switch(._uuid = sw_uuid, .load_balancer = lb_ids), > > - var lb_id = FlatMap(lb_ids), > > - lb in &nb::Load_Balancer(._uuid = lb_id). > > - > > - > > -relation SwitchLB(sw: Intern<Switch>, lb_uuid: uuid) > > - > > -SwitchLB(sw, lb._uuid) :- > > - LogicalSwitchLB(sw_uuid, lb), > > - sw in &Switch(._uuid = sw_uuid). > > - > > -/* Load balancer VIPs associated with switch */ > > -relation SwitchLBVIP(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>, vip: istring, backends: istring) > > -SwitchLBVIP(sw_uuid, lb, vip, backends) :- > > - LogicalSwitchLB(sw_uuid, lb@(&nb::Load_Balancer{.vips = vips})), > > - var kv = FlatMap(vips), > > - (var vip, var backends) = kv. > > - > > -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this > > -// is an output relation: > > -output relation LogicalSwitchHasLBVIP(sw_uuid: uuid, has_lb_vip: bool) > > -LogicalSwitchHasLBVIP(sw_uuid, true) :- > > - SwitchLBVIP(.sw_uuid = sw_uuid). > > -LogicalSwitchHasLBVIP(sw_uuid, false) :- > > - &nb::Logical_Switch(._uuid = sw_uuid), > > - not SwitchLBVIP(.sw_uuid = sw_uuid). > > - > > -/* Load balancer virtual IPs. 
> > - * > > - * Three levels: > > - * - LBVIP0 is load balancer virtual IPs with health checks. > > - * - LBVIP1 also includes virtual IPs without health checks. > > - * - LBVIP parses the IP address and port (and drops VIPs where those are invalid). > > - */ > > -relation LBVIP0( > > - lb: Intern<nb::Load_Balancer>, > > - vip_key: istring, > > - backend_ips: istring, > > - health_check: Intern<nb::Load_Balancer_Health_Check>) > > -LBVIP0(lb, vip_key, backend_ips, health_check) :- > > - lb in &nb::Load_Balancer(), > > - var vip = FlatMap(lb.vips), > > - (var vip_key, var backend_ips) = vip, > > - health_check in &nb::Load_Balancer_Health_Check(.vip = vip_key), > > - lb.health_check.contains(health_check._uuid). > > - > > -relation LBVIP1( > > - lb: Intern<nb::Load_Balancer>, > > - vip_key: istring, > > - backend_ips: istring, > > - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>) > > -LBVIP1(lb, vip_key, backend_ips, Some{health_check}) :- > > - LBVIP0(lb, vip_key, backend_ips, health_check). > > -LBVIP1(lb, vip_key, backend_ips, None) :- > > - lb in &nb::Load_Balancer(), > > - var vip = FlatMap(lb.vips), > > - (var vip_key, var backend_ips) = vip, > > - not LBVIP0(lb, vip_key, backend_ips, _). > > - > > -typedef LBVIP = LBVIP { > > - lb: Intern<nb::Load_Balancer>, > > - vip_key: istring, > > - backend_ips: istring, > > - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>, > > - vip_addr: v46_ip, > > - vip_port: bit<16>, > > - backends: Vec<lb_vip_backend> > > -} > > - > > -relation LBVIP[Intern<LBVIP>] > > - > > -LBVIP[LBVIP{lb, vip_key, backend_ips, health_check, vip_addr, vip_port, backends}.intern()] :- > > - LBVIP1(lb, vip_key, backend_ips, health_check), > > - Some{(var vip_addr, var vip_port)} = ip_address_and_port_from_lb_key(vip_key.ival()), > > - var backends = backend_ips.split(",").filter_map( > > - |ip| parse_vip_backend(ip, lb.ip_port_mappings)). 
> > - > > -typedef svc_monitor = SvcMonitor{ > > - port_name: istring, // Might name a switch or router port. > > - src_ip: istring > > -} > > - > > -/* Backends for load balancer virtual IPs. > > - * > > - * Use caution with this table, because load balancer virtual IPs > > - * sometimes have no backends and there is some significance to that. > > - * In cases that are really per-LBVIP, instead of per-LBVIPBackend, > > - * process the LBVIPs directly. */ > > -typedef lb_vip_backend = LBVIPBackend{ > > - ip: v46_ip, > > - port: bit<16>, > > - svc_monitor: Option<svc_monitor>} > > - > > -function parse_vip_backend(backend_ip: string, > > - mappings: Map<istring,istring>): Option<lb_vip_backend> { > > - match (ip_address_and_port_from_lb_key(backend_ip)) { > > - Some{(ip, port)} -> Some{LBVIPBackend{ip, port, parse_ip_port_mapping(mappings, ip)}}, > > - _ -> None > > - } > > -} > > - > > -function parse_ip_port_mapping(mappings: Map<istring,istring>, ip: v46_ip) > > - : Option<svc_monitor> { > > - for ((key, value) in mappings) { > > - if (ip46_parse(key.ival()) == Some{ip}) { > > - var strs = value.split(":"); > > - if (strs.len() != 2) { > > - return None > > - }; > > - > > - return match ((strs.nth(0), strs.nth(1))) { > > - (Some{port_name}, Some{src_ip}) -> Some{SvcMonitor{port_name.intern(), src_ip.intern()}}, > > - _ -> None > > - } > > - } > > - }; > > - return None > > -} > > - > > -function is_online(status: Option<istring>): bool = { > > - match (status) { > > - Some{s} -> s == i"online", > > - _ -> true > > - } > > -} > > -function default_protocol(protocol: Option<istring>): istring = { > > - match (protocol) { > > - Some{x} -> x, > > - None -> i"tcp" > > - } > > -} > > - > > -relation LBVIPWithStatus( > > - lbvip: Intern<LBVIP>, > > - up_backends: istring) > > -LBVIPWithStatus(lbvip, i"") :- > > - lbvip in &LBVIP(.backends = vec_empty()). 
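[Editorial note: the `parse_ip_port_mapping` function above looks up a backend IP in the load balancer's `ip_port_mappings` column, whose values have the form "port_name:src_ip". A rough Python equivalent of its control flow, for reference while the DDlog source goes away:]

```python
import ipaddress

def parse_ip_port_mapping(mappings, ip):
    """Find the "port_name:src_ip" mapping whose key parses to `ip`.
    As in the removed DDlog function, a matching entry whose value is
    not exactly two ':'-separated fields yields None, and unparsable
    keys are skipped.  Illustrative sketch only."""
    for key, value in mappings.items():
        try:
            if ipaddress.ip_address(key) != ip:
                continue
        except ValueError:
            continue  # key is not an IP address; ignore it
        parts = value.split(":")
        if len(parts) != 2:
            return None
        port_name, src_ip = parts
        return (port_name, src_ip)
    return None
```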
> > -LBVIPWithStatus(lbvip, up_backends) :- > > - LBVIPBackendStatus(lbvip, backend, up), > > - var up_backends = ((backend, up)).group_by(lbvip).to_vec().filter_map(|x| { > > - (LBVIPBackend{var ip, var port, _}, var up) = x; > > - match ((up, port)) { > > - (true, 0) -> Some{"${ip.to_bracketed_string()}"}, > > - (true, _) -> Some{"${ip.to_bracketed_string()}:${port}"}, > > - _ -> None > > - } > > - }).join(",").intern(). > > - > > -/* Maps from a load-balancer virtual IP backend to whether it's up or not. > > - * > > - * Only some backends have health checking enabled. The ones that don't > > - * are always considered to be up. */ > > -relation LBVIPBackendStatus0( > > - lbvip: Intern<LBVIP>, > > - backend: lb_vip_backend, > > - up: bool) > > -LBVIPBackendStatus0(lbvip, backend, is_online(sm.status)) :- > > - LBVIP[lbvip@&LBVIP{.lb = lb}], > > - var backend = FlatMap(lbvip.backends), > > - Some{var svc_monitor} = backend.svc_monitor, > > - sm in &sb::Service_Monitor(.port = backend.port as integer), > > - ip46_parse(sm.ip.ival()) == Some{backend.ip}, > > - svc_monitor.port_name == sm.logical_port, > > - default_protocol(lb.protocol) == default_protocol(sm.protocol). > > - > > -relation LBVIPBackendStatus( > > - lbvip: Intern<LBVIP>, > > - backend: lb_vip_backend, > > - up: bool) > > -LBVIPBackendStatus(lbvip, backend, up) :- LBVIPBackendStatus0(lbvip, backend, up). > > -LBVIPBackendStatus(lbvip, backend, true) :- > > - LBVIP[lbvip@&LBVIP{.lb = lb}], > > - var backend = FlatMap(lbvip.backends), > > - not LBVIPBackendStatus0(lbvip, backend, _). 
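[Editorial note: the `LBVIPWithStatus` rule above renders the healthy backends as a comma-separated string, omitting a ":port" suffix when the port is 0. A simplified Python sketch of that formatting (the bracketing here is a simplification of `to_bracketed_string`; names are hypothetical):]

```python
def format_up_backends(statuses):
    """Render (ip, port, up) backend tuples the way LBVIPWithStatus
    did: drop backends that are down, bracket IPv6 addresses, and
    omit the port when it is 0.  Sketch only."""
    out = []
    for ip, port, up in statuses:
        if not up:
            continue
        host = "[%s]" % ip if ":" in ip else ip  # crude IPv6 check
        out.append(host if port == 0 else "%s:%d" % (host, port))
    return ",".join(out)
```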
> > - > > -/* SwitchPortDHCPv4Options: many-to-one relation between logical switches and DHCPv4 options */ > > -relation SwitchPortDHCPv4Options( > > - port: Intern<SwitchPort>, > > - dhcpv4_options: Intern<nb::DHCP_Options>) > > - > > -SwitchPortDHCPv4Options(port, options) :- > > - port in &SwitchPort(.lsp = lsp), > > - port.lsp.__type != i"external", > > - Some{var dhcpv4_uuid} = lsp.dhcpv4_options, > > - options in &nb::DHCP_Options(._uuid = dhcpv4_uuid). > > - > > -/* SwitchPortDHCPv6Options: many-to-one relation between logical switches and DHCPv4 options */ > > -relation SwitchPortDHCPv6Options( > > - port: Intern<SwitchPort>, > > - dhcpv6_options: Intern<nb::DHCP_Options>) > > - > > -SwitchPortDHCPv6Options(port, options) :- > > - port in &SwitchPort(.lsp = lsp), > > - port.lsp.__type != i"external", > > - Some{var dhcpv6_uuid} = lsp.dhcpv6_options, > > - options in &nb::DHCP_Options(._uuid = dhcpv6_uuid). > > - > > -/* SwitchQoS: many-to-one relation between logical switches and nb::QoS */ > > -relation SwitchQoS(sw: Intern<Switch>, qos: Intern<nb::QoS>) > > - > > -SwitchQoS(sw, qos) :- > > - sw in &Switch(), > > - &nb::Logical_Switch(._uuid = sw._uuid, .qos_rules = qos_rules), > > - var qos_rule = FlatMap(qos_rules), > > - qos in &nb::QoS(._uuid = qos_rule). > > - > > -/* Reports whether a given ACL is associated with a fair meter. > > - * 'has_fair_meter' is false if 'acl' has no meter, or if has a meter > > - * that isn't a fair meter. (The latter case has two subcases: the > > - * case where the meter that the ACL names corresponds to an nb::Meter > > - * with that name, and the case where it doesn't.) */ > > -relation ACLHasFairMeter(acl: Intern<nb::ACL>, has_fair_meter: bool) > > -ACLHasFairMeter(acl, true) :- > > - ACLWithFairMeter(acl, _). > > -ACLHasFairMeter(acl, false) :- > > - acl in &nb::ACL(), > > - not ACLWithFairMeter(acl, _). > > - > > -/* All the ACLs associated with a fair meter, with their fair meters. 
*/ > > -relation ACLWithFairMeter(acl: Intern<nb::ACL>, meter: Intern<nb::Meter>) > > -ACLWithFairMeter(acl, meter) :- > > - acl in &nb::ACL(.meter = Some{meter_name}), > > - meter in &nb::Meter(.name = meter_name, .fair = Some{true}). > > - > > -/* SwitchACL: many-to-many relation between logical switches and ACLs */ > > -relation &SwitchACL(sw: Intern<Switch>, > > - acl: Intern<nb::ACL>, > > - has_fair_meter: bool) > > - > > -&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = has_fair_meter) :- > > - LogicalSwitchACL(sw_uuid, acl_uuid), > > - sw in &Switch(._uuid = sw_uuid), > > - acl in &nb::ACL(._uuid = acl_uuid), > > - ACLHasFairMeter(acl, has_fair_meter). > > - > > -function oVN_FEATURE_PORT_UP_NOTIF(): istring { i"port-up-notif" } > > -relation SwitchPortUp0(lsp: uuid) > > -SwitchPortUp0(lsp) :- > > - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router"). > > -SwitchPortUp0(lsp) :- > > - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = __type), > > - sb::Port_Binding(.logical_port = lsp_name, .up = up, .chassis = Some{chassis_uuid}), > > - sb::Chassis(._uuid = chassis_uuid, .other_config = other_config), > > - if (other_config.get_bool_def(oVN_FEATURE_PORT_UP_NOTIF(), false)) { > > - up == Some{true} > > - } else { > > - true > > - }. > > - > > -relation SwitchPortUp(lsp: uuid, up: bool) > > -SwitchPortUp(lsp, true) :- SwitchPortUp0(lsp). > > -SwitchPortUp(lsp, false) :- &nb::Logical_Switch_Port(._uuid = lsp), not SwitchPortUp0(lsp). 
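[Editorial note: the `SwitchPortUp0`/`SwitchPortUp` rules above encode a small decision: "router" ports are always up; other ports are up when bound to a chassis, except that a chassis advertising the "port-up-notif" feature must also report `Port_Binding.up = true`. A Python sketch of that predicate, with hypothetical parameter names:]

```python
def switch_port_up(port_type, chassis_other_config, pb_up):
    """Mirror the removed SwitchPortUp logic.  `chassis_other_config`
    is None when the port is not bound to any chassis; `pb_up` is the
    Port_Binding.up value reported by ovn-controller.  Sketch only."""
    if port_type == "router":
        return True          # router ports are unconditionally up
    if chassis_other_config is None:
        return False         # unbound ports are down
    if chassis_other_config.get("port-up-notif") == "true":
        return pb_up is True # chassis reports port state explicitly
    return True              # legacy chassis: bound implies up
```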
> > - > > -relation SwitchPortHAChassisGroup0(lsp_uuid: uuid, hac_group_uuid: uuid) > > -SwitchPortHAChassisGroup0(lsp_uuid, ha_chassis_group_uuid(ls_uuid)) :- > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - lsp.__type == i"external", > > - Some{var hac_group_uuid} = lsp.ha_chassis_group, > > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), > > - /* If the group is empty, then HA_Chassis_Group record will not be created in SB, > > - * and so we should not create a reference to the group in Port_Binding table, > > - * to avoid integrity violation. */ > > - not ha_chassis_group.ha_chassis.is_empty(), > > - LogicalSwitchPort(.lport = lsp_uuid, .lswitch = ls_uuid). > > -relation SwitchPortHAChassisGroup(lsp_uuid: uuid, hac_group_uuid: Option<uuid>) > > -SwitchPortHAChassisGroup(lsp_uuid, Some{hac_group_uuid}) :- > > - SwitchPortHAChassisGroup0(lsp_uuid, hac_group_uuid). > > -SwitchPortHAChassisGroup(lsp_uuid, None) :- > > - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), > > - not SwitchPortHAChassisGroup0(lsp_uuid, _). 
> > - > > -/* SwitchPort relation collects all attributes of a logical switch port > > - * - `peer` - peer router port, if any > > - * - `static_dynamic_mac` - port has a "dynamic" address that contains a static MAC, > > - * e.g., "80:fa:5b:06:72:b7 dynamic" > > - * - `static_dynamic_ipv4`, `static_dynamic_ipv6` - port has a "dynamic" address that contains a static IP, > > - * e.g., "dynamic 192.168.1.2" > > - * - `needs_dynamic_ipv4address` - port requires a dynamically allocated IPv4 address > > - * - `needs_dynamic_macaddress` - port requires a dynamically allocated MAC address > > - * - `needs_dynamic_tag` - port requires a dynamically allocated tag > > - * - `up` - true if the port is bound to a chassis or has type "" > > - * - 'hac_group_uuid' - uuid of sb::HA_Chassis_Group, only for "external" ports > > - */ > > -typedef SwitchPort = SwitchPort { > > - lsp: Intern<nb::Logical_Switch_Port>, > > - json_name: istring, > > - sw: Intern<Switch>, > > - peer: Option<Intern<RouterPort>>, > > - static_addresses: Vec<lport_addresses>, > > - dynamic_address: Option<lport_addresses>, > > - static_dynamic_mac: Option<eth_addr>, > > - static_dynamic_ipv4: Option<in_addr>, > > - static_dynamic_ipv6: Option<in6_addr>, > > - ps_addresses: Vec<lport_addresses>, > > - ps_eth_addresses: Vec<istring>, > > - parent_name: Option<istring>, > > - needs_dynamic_ipv4address: bool, > > - needs_dynamic_macaddress: bool, > > - needs_dynamic_ipv6address: bool, > > - needs_dynamic_tag: bool, > > - up: bool, > > - mcast_cfg: Intern<McastPortCfg>, > > - hac_group_uuid: Option<uuid> > > -} > > - > > -relation SwitchPort[Intern<SwitchPort>] > > - > > -SwitchPort[SwitchPort{ > > - .lsp = lsp, > > - .json_name = lsp.name.json_escape().intern(), > > - .sw = sw, > > - .peer = peer, > > - .static_addresses = static_addresses, > > - .dynamic_address = dynamic_address, > > - .static_dynamic_mac = static_dynamic_mac, > > - .static_dynamic_ipv4 = static_dynamic_ipv4, > > - .static_dynamic_ipv6 = 
static_dynamic_ipv6, > > - .ps_addresses = ps_addresses, > > - .ps_eth_addresses = ps_eth_addresses, > > - .parent_name = parent_name, > > - .needs_dynamic_ipv4address = needs_dynamic_ipv4address, > > - .needs_dynamic_macaddress = needs_dynamic_macaddress, > > - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, > > - .needs_dynamic_tag = needs_dynamic_tag, > > - .up = up, > > - .mcast_cfg = mcast_cfg, > > - .hac_group_uuid = hac_group_uuid > > - }.intern()] :- > > - lsp in &nb::Logical_Switch_Port(), > > - LogicalSwitchPort(lsp._uuid, lswitch_uuid), > > - sw in &Switch(._uuid = lswitch_uuid, > > - .other_config = other_config, > > - .subnet = subnet, > > - .ipv6_prefix = ipv6_prefix), > > - SwitchRouterPeerRef(lsp._uuid, peer), > > - SwitchPortUp(lsp._uuid, up), > > - mcast_cfg in &McastPortCfg(.port = lsp._uuid, .router_port = false), > > - var static_addresses = { > > - var static_addresses = vec_empty(); > > - for (addr in lsp.addresses) { > > - if ((addr != i"router") and (not is_dynamic_lsp_address(addr.ival()))) { > > - match (extract_lsp_addresses(addr.ival())) { > > - None -> (), > > - Some{lport_addr} -> static_addresses.push(lport_addr) > > - } > > - } else () > > - }; > > - static_addresses > > - }, > > - var ps_addresses = { > > - var ps_addresses = vec_empty(); > > - for (addr in lsp.port_security) { > > - match (extract_lsp_addresses(addr.ival())) { > > - None -> (), > > - Some{lport_addr} -> ps_addresses.push(lport_addr) > > - } > > - }; > > - ps_addresses > > - }, > > - var ps_eth_addresses = { > > - var ps_eth_addresses = vec_empty(); > > - for (ps_addr in ps_addresses) { > > - ps_eth_addresses.push(i"${ps_addr.ea}") > > - }; > > - ps_eth_addresses > > - }, > > - var dynamic_address = match (lsp.dynamic_addresses) { > > - None -> None, > > - Some{lport_addr} -> extract_lsp_addresses(lport_addr.ival()) > > - }, > > - (var static_dynamic_mac, > > - var static_dynamic_ipv4, > > - var static_dynamic_ipv6, > > - var has_dyn_lsp_addr) = { > > - var 
dynamic_address_request = None; > > - for (addr in lsp.addresses) { > > - dynamic_address_request = parse_dynamic_address_request(addr.ival()); > > - if (dynamic_address_request.is_some()) { > > - break > > - } > > - }; > > - > > - match (dynamic_address_request) { > > - Some{DynamicAddressRequest{mac, ipv4, ipv6}} -> (mac, ipv4, ipv6, true), > > - None -> (None, None, None, false) > > - } > > - }, > > - var needs_dynamic_ipv4address = has_dyn_lsp_addr and peer == None and subnet.is_some() and > > - static_dynamic_ipv4 == None, > > - var needs_dynamic_macaddress = has_dyn_lsp_addr and peer == None and static_dynamic_mac == None and > > - (subnet.is_some() or ipv6_prefix.is_some() or > > - other_config.get(i"mac_only") == Some{i"true"}), > > - var needs_dynamic_ipv6address = has_dyn_lsp_addr and peer == None and ipv6_prefix.is_some() and static_dynamic_ipv6 == None, > > - var parent_name = match (lsp.parent_name) { > > - None -> None, > > - Some{pname} -> if (pname == i"") { None } else { Some{pname} } > > - }, > > - /* Port needs dynamic tag if it has a parent and its `tag_request` is 0. */ > > - var needs_dynamic_tag = parent_name.is_some() and lsp.tag_request == Some{0}, > > - SwitchPortHAChassisGroup(.lsp_uuid = lsp._uuid, > > - .hac_group_uuid = hac_group_uuid). > > - > > -/* Switch port port security addresses */ > > -relation SwitchPortPSAddresses(port: Intern<SwitchPort>, > > - ps_addrs: lport_addresses) > > - > > -SwitchPortPSAddresses(port, ps_addrs) :- > > - port in &SwitchPort(.ps_addresses = ps_addresses), > > - var ps_addrs = FlatMap(ps_addresses). > > - > > -/* All static addresses associated with a port parsed into > > - * the lport_addresses data structure */ > > -relation SwitchPortStaticAddresses(port: Intern<SwitchPort>, > > - addrs: lport_addresses) > > -SwitchPortStaticAddresses(port, addrs) :- > > - port in &SwitchPort(.static_addresses = static_addresses), > > - var addrs = FlatMap(static_addresses). 
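[Editorial note: the `static_dynamic_*` fields above come from `parse_dynamic_address_request`, which recognizes the "dynamic" address forms documented in the SwitchPort comment ("dynamic", "80:fa:5b:06:72:b7 dynamic", "dynamic 192.168.1.2"). A rough Python approximation, not the removed implementation:]

```python
import re

MAC_RE = re.compile(r"^[0-9a-f]{2}(:[0-9a-f]{2}){5}$", re.I)

def parse_dynamic_address_request(addr):
    """Recognize the "dynamic" Logical_Switch_Port address forms:
    "dynamic", "<mac> dynamic" (static MAC, dynamic IPs), and
    "dynamic <ip>..." (dynamic MAC, static IPs).  Returns (mac, ips)
    with None/[] for the dynamically allocated parts, or None when
    the string is not a dynamic request.  Sketch only."""
    toks = addr.split()
    if toks == ["dynamic"]:
        return (None, [])
    if len(toks) == 2 and MAC_RE.match(toks[0]) and toks[1] == "dynamic":
        return (toks[0], [])
    if len(toks) >= 2 and toks[0] == "dynamic":
        return (None, toks[1:])
    return None
```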
> > - > > -/* All static and dynamic addresses associated with a port parsed into > > - * the lport_addresses data structure */ > > -relation SwitchPortAddresses(port: Intern<SwitchPort>, > > - addrs: lport_addresses) > > - > > -SwitchPortAddresses(port, addrs) :- SwitchPortStaticAddresses(port, addrs). > > - > > -SwitchPortAddresses(port, dynamic_address) :- > > - SwitchPortNewDynamicAddress(port, Some{dynamic_address}). > > - > > -/* "router" is a special Logical_Switch_Port address value that indicates that the Ethernet, IPv4, and IPv6 > > - * this port should be obtained from the connected logical router port, as specified by router-port in > > - * options. > > - * > > - * The resulting addresses are used to populate the logical switch’s destination lookup, and also for the > > - * logical switch to generate ARP and ND replies. > > - * > > - * If the connected logical router port is a distributed gateway port and the logical router has rules > > - * specified in nat with external_mac, then those addresses are also used to populate the switch’s destination > > - * lookup. */ > > -SwitchPortAddresses(port, addrs) :- > > - port in &SwitchPort(.lsp = lsp, .peer = Some{&rport}), > > - Some{var addrs} = { > > - var opt_addrs = None; > > - for (addr in lsp.addresses) { > > - if (addr == i"router") { > > - opt_addrs = Some{rport.networks} > > - } else () > > - }; > > - opt_addrs > > - }. > > - > > -/* All static and dynamic IPv4 addresses associated with a port */ > > -relation SwitchPortIPv4Address(port: Intern<SwitchPort>, > > - ea: eth_addr, > > - addr: ipv4_netaddr) > > - > > -SwitchPortIPv4Address(port, ea, addr) :- > > - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv4_addrs = addrs}), > > - var addr = FlatMap(addrs). 
> > - > > -/* All static and dynamic IPv6 addresses associated with a port */ > > -relation SwitchPortIPv6Address(port: Intern<SwitchPort>, > > - ea: eth_addr, > > - addr: ipv6_netaddr) > > - > > -SwitchPortIPv6Address(port, ea, addr) :- > > - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv6_addrs = addrs}), > > - var addr = FlatMap(addrs). > > - > > -/* Service monitoring. */ > > - > > -/* MAC allocated for service monitor usage. Just one mac is allocated > > - * for this purpose and ovn-controller's on each chassis will make use > > - * of this mac when sending out the packets to monitor the services > > - * defined in Service_Monitor Southbound table. Since these packets > > - * all locally handled, having just one mac is good enough. */ > > -function get_svc_monitor_mac(options: Map<istring,istring>, uuid: uuid) > > - : eth_addr = > > -{ > > - var existing_mac = match ( > > - options.get(i"svc_monitor_mac")) > > - { > > - Some{mac} -> scan_eth_addr(mac.ival()), > > - None -> None > > - }; > > - match (existing_mac) { > > - Some{mac} -> mac, > > - None -> eth_addr_pseudorandom(uuid, 'h5678) > > - } > > -} > > -function put_svc_monitor_mac(options: mut Map<istring,istring>, > > - svc_monitor_mac: eth_addr) > > -{ > > - options.insert(i"svc_monitor_mac", svc_monitor_mac.to_string().intern()); > > -} > > -relation SvcMonitorMac(mac: eth_addr) > > -SvcMonitorMac(get_svc_monitor_mac(options, uuid)) :- > > - nb::NB_Global(._uuid = uuid, .options = options). > > - > > -relation UseCtInvMatch[bool] > > -UseCtInvMatch[options.get_bool_def(i"use_ct_inv_match", true)] :- > > - nb::NB_Global(.options = options). > > -UseCtInvMatch[true] :- > > - Unit(), > > - not nb in nb::NB_Global(). 
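[Editorial note: `get_svc_monitor_mac` above prefers the configured `svc_monitor_mac` option and otherwise derives one pseudorandomly from the NB_Global UUID. The sketch below illustrates that fallback shape in Python; the hash used here is a stand-in, the real `eth_addr_pseudorandom` computes a different value:]

```python
import hashlib
import uuid as uuidlib

def get_svc_monitor_mac(options, nb_uuid):
    """Return the configured svc_monitor_mac if it parses as a MAC,
    otherwise derive a stable locally-administered MAC from the
    NB_Global UUID.  Illustrative sketch only."""
    mac = options.get("svc_monitor_mac", "")
    parts = mac.split(":")
    if len(parts) == 6 and all(
            len(p) == 2 and all(c in "0123456789abcdefABCDEF" for c in p)
            for p in parts):
        return mac.lower()
    # Deterministic fallback seeded by the UUID (cf. 'h5678 in DDlog).
    digest = hashlib.sha1(nb_uuid.bytes + b"\x56\x78").digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xfe  # locally administered, unicast
    return ":".join("%02x" % b for b in octets)
```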
> > diff --git a/northd/multicast.dl b/northd/multicast.dl > > deleted file mode 100644 > > index 56bfa0c637..0000000000 > > --- a/northd/multicast.dl > > +++ /dev/null > > @@ -1,273 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -import OVN_Northbound as nb > > -import OVN_Southbound as sb > > -import ovn > > -import ovsdb > > -import helpers > > -import lswitch > > -import lrouter > > - > > -function mCAST_DEFAULT_MAX_ENTRIES(): integer = 2048 > > - > > -function mCAST_DEFAULT_IDLE_TIMEOUT_S(): integer = 300 > > -function mCAST_IDLE_TIMEOUT_S_RANGE(): (integer, integer) = (15, 3600) > > - > > -function mCAST_DEFAULT_QUERY_INTERVAL_S(): integer = 1 > > -function mCAST_QUERY_INTERVAL_S_RANGE(): (integer, integer) = (1, 3600) > > - > > -function mCAST_DEFAULT_QUERY_MAX_RESPONSE_S(): integer = 1 > > - > > -/* IP Multicast per switch configuration. 
*/ > > -typedef McastSwitchCfg = McastSwitchCfg { > > - datapath : uuid, > > - enabled : bool, > > - querier : bool, > > - flood_unreg : bool, > > - eth_src : istring, > > - ip4_src : istring, > > - ip6_src : istring, > > - table_size : integer, > > - idle_timeout : integer, > > - query_interval: integer, > > - query_max_resp: integer > > -} > > - > > -relation McastSwitchCfg[Intern<McastSwitchCfg>] > > - > > - /* FIXME: Right now table_size is enforced only in ovn-controller but in > > - * the ovn-northd C version we enforce it on the aggregate groups too. > > - */ > > - > > -McastSwitchCfg[McastSwitchCfg { > > - .datapath = ls_uuid, > > - .enabled = other_config.get_bool_def(i"mcast_snoop", false), > > - .querier = other_config.get_bool_def(i"mcast_querier", true), > > - .flood_unreg = other_config.get_bool_def(i"mcast_flood_unregistered", false), > > - .eth_src = other_config.get(i"mcast_eth_src").unwrap_or(i""), > > - .ip4_src = other_config.get(i"mcast_ip4_src").unwrap_or(i""), > > - .ip6_src = other_config.get(i"mcast_ip6_src").unwrap_or(i""), > > - .table_size = other_config.get_int_def(i"mcast_table_size", mCAST_DEFAULT_MAX_ENTRIES()), > > - .idle_timeout = idle_timeout, > > - .query_interval = query_interval, > > - .query_max_resp = query_max_resp > > - }.intern()] :- > > - &nb::Logical_Switch(._uuid = ls_uuid, > > - .other_config = other_config), > > - var idle_timeout = other_config.get_int_def(i"mcast_idle_timeout", mCAST_DEFAULT_IDLE_TIMEOUT_S()) > > - .clamp(mCAST_IDLE_TIMEOUT_S_RANGE()), > > - var query_interval = other_config.get_int_def(i"mcast_query_interval", idle_timeout / 2) > > - .clamp(mCAST_QUERY_INTERVAL_S_RANGE()), > > - var query_max_resp = other_config.get_int_def(i"mcast_query_max_response", > > - mCAST_DEFAULT_QUERY_MAX_RESPONSE_S()). > > - > > -/* IP Multicast per router configuration. 
*/ > > -typedef McastRouterCfg = McastRouterCfg { > > - datapath: uuid, > > - relay : bool > > -} > > - > > -relation McastRouterCfg[Intern<McastRouterCfg>] > > - > > -McastRouterCfg[McastRouterCfg{lr_uuid, mcast_relay}.intern()] :- > > - nb::Logical_Router(._uuid = lr_uuid, .options = options), > > - var mcast_relay = options.get_bool_def(i"mcast_relay", false). > > - > > -/* IP Multicast port configuration. */ > > -typedef McastPortCfg = McastPortCfg { > > - port : uuid, > > - router_port : bool, > > - flood : bool, > > - flood_reports : bool > > -} > > - > > -relation McastPortCfg[Intern<McastPortCfg>] > > - > > -McastPortCfg[McastPortCfg{lsp_uuid, false, flood, flood_reports}.intern()] :- > > - &nb::Logical_Switch_Port(._uuid = lsp_uuid, .options = options), > > - var flood = options.get_bool_def(i"mcast_flood", false), > > - var flood_reports = options.get_bool_def(i"mcast_flood_reports", false). > > - > > -McastPortCfg[McastPortCfg{lrp_uuid, true, flood, flood}.intern()] :- > > - &nb::Logical_Router_Port(._uuid = lrp_uuid, .options = options), > > - var flood = options.get_bool_def(i"mcast_flood", false). > > - > > -/* Mapping between Switch and the set of router port uuids on which to flood > > - * IP multicast for relay. > > - */ > > -relation SwitchMcastFloodRelayPorts(sw: Intern<Switch>, ports: Set<uuid>) > > - > > -SwitchMcastFloodRelayPorts(switch, relay_ports) :- > > - &SwitchPort( > > - .lsp = lsp, > > - .sw = switch, > > - .peer = Some{&RouterPort{.router = &Router{.mcast_cfg = mcast_cfg}}} > > - ), mcast_cfg.relay, > > - var relay_ports = lsp._uuid.group_by(switch).to_set(). > > - > > -SwitchMcastFloodRelayPorts(switch, set_empty()) :- > > - Switch[switch], > > - not &SwitchPort( > > - .sw = switch, > > - .peer = Some{ > > - &RouterPort{ > > - .router = &Router{.mcast_cfg = &McastRouterCfg{.relay=true}} > > - } > > - } > > - ). > > - > > -/* Mapping between Switch and the set of port uuids on which to > > - * flood IP multicast statically. 
> > - */ > > -relation SwitchMcastFloodPorts(sw: Intern<Switch>, ports: Set<uuid>) > > - > > -SwitchMcastFloodPorts(switch, flood_ports) :- > > - &SwitchPort( > > - .lsp = lsp, > > - .sw = switch, > > - .mcast_cfg = &McastPortCfg{.flood = true}), > > - var flood_ports = lsp._uuid.group_by(switch).to_set(). > > - > > -SwitchMcastFloodPorts(switch, set_empty()) :- > > - Switch[switch], > > - not &SwitchPort( > > - .sw = switch, > > - .mcast_cfg = &McastPortCfg{.flood = true}). > > - > > -/* Mapping between Switch and the set of port uuids on which to > > - * flood IP multicast reports statically. > > - */ > > -relation SwitchMcastFloodReportPorts(sw: Intern<Switch>, ports: Set<uuid>) > > - > > -SwitchMcastFloodReportPorts(switch, flood_ports) :- > > - &SwitchPort( > > - .lsp = lsp, > > - .sw = switch, > > - .mcast_cfg = &McastPortCfg{.flood_reports = true}), > > - var flood_ports = lsp._uuid.group_by(switch).to_set(). > > - > > -SwitchMcastFloodReportPorts(switch, set_empty()) :- > > - Switch[switch], > > - not &SwitchPort( > > - .sw = switch, > > - .mcast_cfg = &McastPortCfg{.flood_reports = true}). > > - > > -/* Mapping between Router and the set of port uuids on which to > > - * flood IP multicast reports statically. > > - */ > > -relation RouterMcastFloodPorts(sw: Intern<Router>, ports: Set<uuid>) > > - > > -RouterMcastFloodPorts(router, flood_ports) :- > > - &RouterPort( > > - .lrp = lrp, > > - .router = router, > > - .mcast_cfg = &McastPortCfg{.flood = true} > > - ), > > - var flood_ports = lrp._uuid.group_by(router).to_set(). > > - > > -RouterMcastFloodPorts(router, set_empty()) :- > > - Router[router], > > - not &RouterPort( > > - .router = router, > > - .mcast_cfg = &McastPortCfg{.flood = true}). > > - > > -/* Flattened IGMP group. One record per address-port tuple. 
*/ > > -relation IgmpSwitchGroupPort( > > - address: istring, > > - switch : Intern<Switch>, > > - port : uuid > > -) > > - > > -IgmpSwitchGroupPort(address, switch, lsp_uuid) :- > > - sb::IGMP_Group(.address = address, .ports = pb_ports), > > - var pb_port_uuid = FlatMap(pb_ports), > > - sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name), > > - &SwitchPort( > > - .lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name}, > > - .sw = switch). > > -IgmpSwitchGroupPort(address, switch, localnet_port.0) :- > > - IgmpSwitchGroupPort(address, switch, _), > > - var localnet_port = FlatMap(switch.localnet_ports). > > - > > -/* Aggregated IGMP group: merges all IgmpSwitchGroupPort for a given > > - * address-switch tuple from all chassis. > > - */ > > -relation IgmpSwitchMulticastGroup( > > - address: istring, > > - switch : Intern<Switch>, > > - ports : Set<uuid> > > -) > > - > > -IgmpSwitchMulticastGroup(address, switch, ports) :- > > - IgmpSwitchGroupPort(address, switch, port), > > - var ports = port.group_by((address, switch)).to_set(). > > - > > -/* Flattened IGMP group representation for routers with relay enabled. One > > - * record per address-port tuple for all IGMP groups learned by switches > > - * connected to the router. > > - */ > > -relation IgmpRouterGroupPort( > > - address: istring, > > - router : Intern<Router>, > > - port : uuid > > -) > > - > > -IgmpRouterGroupPort(address, rtr_port.router, rtr_port.lrp._uuid) :- > > - SwitchMcastFloodRelayPorts(switch, sw_flood_ports), > > - IgmpSwitchMulticastGroup(address, switch, _), > > - /* For IPv6 only relay routable multicast groups > > - * (RFC 4291 2.7). 
> > - */ > > - match (ipv6_parse(address.ival())) { > > - Some{ipv6} -> ipv6.is_routable_multicast(), > > - None -> true > > - }, > > - var flood_port = FlatMap(sw_flood_ports), > > - &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port}, > > - .peer = Some{rtr_port}), > > - RouterPortIsRedirect(rtr_port.lrp._uuid, false). > > - > > -/* Store the chassis redirect port uuid for redirect ports, otherwise traffic > > - * will not be tunneled properly. This will be translated back to the patch > > - * port on the remote hypervisor. > > - */ > > -IgmpRouterGroupPort(address, rtr_port.router, cr_lrp_uuid) :- > > - SwitchMcastFloodRelayPorts(switch, sw_flood_ports), > > - IgmpSwitchMulticastGroup(address, switch, _), > > - /* For IPv6 only relay routable multicast groups > > - * (RFC 4291 2.7). > > - */ > > - match (ipv6_parse(address.ival())) { > > - Some{ipv6} -> ipv6.is_routable_multicast(), > > - None -> true > > - }, > > - var flood_port = FlatMap(sw_flood_ports), > > - &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port}, > > - .peer = Some{rtr_port}), > > - RouterPortIsRedirect(rtr_port.lrp._uuid, true), > > - DistributedGatewayPort(rtr_port.lrp, _, cr_lrp_uuid). > > - > > -/* Aggregated IGMP group for routers: merges all IgmpRouterGroupPort for > > - * a given address-router tuple from all connected switches. > > - */ > > -relation IgmpRouterMulticastGroup( > > - address: istring, > > - router : Intern<Router>, > > - ports : Set<uuid> > > -) > > - > > -IgmpRouterMulticastGroup(address, router, ports) :- > > - IgmpRouterGroupPort(address, router, port), > > - var ports = port.group_by((address, router)).to_set(). 
> > diff --git a/northd/ovn-nb.dlopts b/northd/ovn-nb.dlopts > > deleted file mode 100644 > > index 9a460adef4..0000000000 > > --- a/northd/ovn-nb.dlopts > > +++ /dev/null > > @@ -1,27 +0,0 @@ > > ---intern-strings > > --o BFD > > ---rw BFD.status > > --o Logical_Router_Port > > ---rw Logical_Router_Port.ipv6_prefix > > --o Logical_Switch_Port > > ---rw Logical_Switch_Port.tag > > ---rw Logical_Switch_Port.dynamic_addresses > > ---rw Logical_Switch_Port.up > > --o NB_Global > > ---rw NB_Global.sb_cfg > > ---rw NB_Global.hv_cfg > > ---rw NB_Global.options > > ---rw NB_Global.ipsec > > ---rw NB_Global.nb_cfg_timestamp > > ---rw NB_Global.hv_cfg_timestamp > > ---intern-table DHCP_Options > > ---intern-table ACL > > ---intern-table QoS > > ---intern-table Load_Balancer > > ---intern-table Logical_Switch > > ---intern-table Load_Balancer_Health_Check > > ---intern-table Meter > > ---intern-table NAT > > ---intern-table Address_Set > > ---intern-table Logical_Router_Port > > ---intern-table Logical_Switch_Port > > diff --git a/northd/ovn-northd-ddlog.c b/northd/ovn-northd-ddlog.c > > deleted file mode 100644 > > index 1c06bd0028..0000000000 > > --- a/northd/ovn-northd-ddlog.c > > +++ /dev/null > > @@ -1,1368 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. 
> > - */ > > - > > -#include <config.h> > > - > > -#include <getopt.h> > > -#include <stdlib.h> > > -#include <stdio.h> > > -#include <fcntl.h> > > -#include <unistd.h> > > - > > -#include "command-line.h" > > -#include "daemon.h" > > -#include "fatal-signal.h" > > -#include "hash.h" > > -#include "jsonrpc.h" > > -#include "lib/ovn-util.h" > > -#include "memory.h" > > -#include "openvswitch/json.h" > > -#include "openvswitch/poll-loop.h" > > -#include "openvswitch/vlog.h" > > -#include "ovs-numa.h" > > -#include "ovsdb-cs.h" > > -#include "ovsdb-data.h" > > -#include "ovsdb-error.h" > > -#include "ovsdb-parser.h" > > -#include "ovsdb-types.h" > > -#include "simap.h" > > -#include "stopwatch.h" > > -#include "lib/stopwatch-names.h" > > -#include "lib/uuidset.h" > > -#include "stream-ssl.h" > > -#include "stream.h" > > -#include "unixctl.h" > > -#include "util.h" > > -#include "uuid.h" > > - > > -#include "northd/ovn_northd_ddlog/ddlog.h" > > - > > -VLOG_DEFINE_THIS_MODULE(ovn_northd); > > - > > -#include "northd/ovn-northd-ddlog-nb.inc" > > -#include "northd/ovn-northd-ddlog-sb.inc" > > - > > -struct northd_status { > > - bool locked; > > - bool pause; > > -}; > > - > > -static unixctl_cb_func ovn_northd_exit; > > -static unixctl_cb_func ovn_northd_pause; > > -static unixctl_cb_func ovn_northd_resume; > > -static unixctl_cb_func ovn_northd_is_paused; > > -static unixctl_cb_func ovn_northd_status; > > - > > -static unixctl_cb_func ovn_northd_enable_cpu_profiling; > > -static unixctl_cb_func ovn_northd_disable_cpu_profiling; > > -static unixctl_cb_func ovn_northd_profile; > > - > > -/* --ddlog-record: The name of a file to which to record DDlog commands for > > - * later replay. Useful for debugging. If null (by default), DDlog commands > > - * are not recorded. 
*/ > > -static const char *record_file; > > - > > -static const char *ovnnb_db; > > -static const char *ovnsb_db; > > -static const char *unixctl_path; > > - > > -/* SSL options */ > > -static const char *ssl_private_key_file; > > -static const char *ssl_certificate_file; > > -static const char *ssl_ca_cert_file; > > - > > -/* Frequently used table ids. */ > > -static table_id WARNING_TABLE_ID; > > -static table_id NB_CFG_TIMESTAMP_ID; > > - > > -/* Initialize frequently used table ids. */ > > -static void > > -init_table_ids(ddlog_prog ddlog) > > -{ > > - WARNING_TABLE_ID = ddlog_get_table_id(ddlog, "helpers::Warning"); > > - NB_CFG_TIMESTAMP_ID = ddlog_get_table_id(ddlog, "NbCfgTimestamp"); > > -} > > - > > -struct northd_ctx { > > - /* Shared between NB and SB database contexts. */ > > - ddlog_prog ddlog; > > - ddlog_delta *delta; /* Accumulated delta to send to OVSDB. */ > > - > > - /* Database info. > > - * > > - * The '*_relations' vectors are arrays of strings that contain DDlog > > - * relation names, terminated by a null pointer. 'prefix' is the prefix > > - * for the DDlog module that contains the relations. */ > > - char *prefix; /* e.g. "OVN_Northbound::" */ > > - const char **input_relations; > > - const char **output_relations; > > - const char **output_only_relations; > > - > > - /* Whether this is the database that has the 'nb_cfg_timestamp' and > > - * 'sb_cfg_timestamp' columns in NB_Global. True for the northbound > > - * database, false for the southbound database. */ > > - bool has_timestamp_columns; > > - > > - /* OVSDB connection. */ > > - struct ovsdb_cs *cs; > > - struct json *request_id; /* JSON request ID for outstanding txn if any. */ > > - enum { > > - /* Initial state, before the output-only data (if any) has been > > - * requested. */ > > - S_INITIAL, > > - > > - /* Output-only data has been requested. Waiting for reply. */ > > - S_OUTPUT_ONLY_DATA_REQUESTED, > > - > > - /* Output-only data (if any) has been received. 
Any request sent out > > - * now would be to update data. */ > > - S_UPDATE, > > - } state; > > - > > - /* Database info. */ > > - const char *db_name; /* e.g. "OVN_Northbound". */ > > - struct json *output_only_data; > > - const char *lock_name; /* Name of lock we need, NULL if none. */ > > - bool paused; > > -}; > > - > > -static struct ovsdb_cs_ops northd_cs_ops; > > - > > -static struct json *get_database_ops(struct northd_ctx *); > > -static int ddlog_clear(struct northd_ctx *); > > - > > -static void > > -northd_ctx_connection_status(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *ctx_) > > -{ > > - const struct northd_ctx *ctx = ctx_; > > - bool connected = ovsdb_cs_is_connected(ctx->cs); > > - unixctl_command_reply(conn, connected ? "connected" : "not connected"); > > -} > > - > > -static void > > -northd_ctx_cluster_state_reset(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *ctx_) > > -{ > > - const struct northd_ctx *ctx = ctx_; > > - VLOG_INFO("Resetting %s database cluster state", ctx->db_name); > > - ovsdb_cs_reset_min_index(ctx->cs); > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static struct northd_ctx * > > -northd_ctx_create(const char *server, const char *database, > > - const char *unixctl_command_prefix, > > - const char *lock_name, > > - ddlog_prog ddlog, ddlog_delta *delta, > > - const char **input_relations, > > - const char **output_relations, > > - const char **output_only_relations, > > - bool paused) > > -{ > > - struct northd_ctx *ctx = xmalloc(sizeof *ctx); > > - *ctx = (struct northd_ctx) { > > - .ddlog = ddlog, > > - .delta = delta, > > - .prefix = xasprintf("%s::", database), > > - .input_relations = input_relations, > > - .output_relations = output_relations, > > - .output_only_relations = output_only_relations, > > - /* 'has_timestamp_columns' will get filled in later. 
*/ > > - .cs = ovsdb_cs_create(database, 1 /* XXX */, &northd_cs_ops, ctx), > > - .state = S_INITIAL, > > - .db_name = database, > > - /* 'output_only_relations' will get filled in later. */ > > - .lock_name = lock_name, > > - .paused = paused, > > - }; > > - > > - ovsdb_cs_set_remote(ctx->cs, server, true); > > - ovsdb_cs_set_lock(ctx->cs, lock_name); > > - > > - char *cmd = xasprintf("%s-connection-status", unixctl_command_prefix); > > - unixctl_command_register(cmd, "", 0, 0, > > - northd_ctx_connection_status, ctx); > > - free(cmd); > > - > > - cmd = xasprintf("%s-cluster-state-reset", unixctl_command_prefix); > > - unixctl_command_register(cmd, "", 0, 0, > > - northd_ctx_cluster_state_reset, ctx); > > - free(cmd); > > - > > - return ctx; > > -} > > - > > -static void > > -northd_ctx_destroy(struct northd_ctx *ctx) > > -{ > > - if (ctx) { > > - ovsdb_cs_destroy(ctx->cs); > > - json_destroy(ctx->request_id); > > - json_destroy(ctx->output_only_data); > > - free(ctx->prefix); > > - free(ctx); > > - } > > -} > > - > > -static struct json * > > -northd_compose_monitor_request(const struct json *schema_json, void *ctx_) > > -{ > > - struct northd_ctx *ctx = ctx_; > > - > > - struct shash *schema = ovsdb_cs_parse_schema(schema_json); > > - > > - const struct sset *nb_global = shash_find_data( > > - schema, "NB_Global"); > > - ctx->has_timestamp_columns > > - = (nb_global > > - && sset_contains(nb_global, "nb_cfg_timestamp") > > - && sset_contains(nb_global, "sb_cfg_timestamp")); > > - > > - struct json *monitor_requests = json_object_create(); > > - > > - /* This should be smarter about ignoring not needed ones. There's a lot > > - * more logic for this in ovsdb_idl_compose_monitor_request(). */ > > - const struct shash_node *node; > > - SHASH_FOR_EACH (node, schema) { > > - const char *table_name = node->name; > > - > > - /* Only subscribe to input relations we care about. 
*/ > > - for (const char **p = ctx->input_relations; *p; p++) { > > - if (!strcmp(table_name, *p)) { > > - const struct sset *schema_columns = node->data; > > - struct json *subscribed_columns = json_array_create_empty(); > > - > > - const char *column; > > - SSET_FOR_EACH (column, schema_columns) { > > - if (strcmp(column, "_version")) { > > - json_array_add(subscribed_columns, > > - json_string_create(column)); > > - } > > - } > > - > > - struct json *monitor_request = json_object_create(); > > - json_object_put(monitor_request, "columns", > > - subscribed_columns); > > - json_object_put(monitor_requests, table_name, > > - json_array_create_1(monitor_request)); > > - break; > > - } > > - } > > - } > > - ovsdb_cs_free_schema(schema); > > - > > - return monitor_requests; > > -} > > - > > -static struct ovsdb_cs_ops northd_cs_ops = { northd_compose_monitor_request }; > > - > > -/* Sends the database server a request for all the row UUIDs in output-only > > - * tables. */ > > -static void > > -northd_send_output_only_data_request(struct northd_ctx *ctx) > > -{ > > - if (ctx->output_only_relations[0]) { > > - json_destroy(ctx->output_only_data); > > - ctx->output_only_data = NULL; > > - > > - struct json *ops = json_array_create_1( > > - json_string_create(ctx->db_name)); > > - for (size_t i = 0; ctx->output_only_relations[i]; i++) { > > - const char *table = ctx->output_only_relations[i]; > > - struct json *op = json_object_create(); > > - json_object_put_string(op, "op", "select"); > > - json_object_put_string(op, "table", table); > > - json_object_put(op, "columns", > > - json_array_create_1(json_string_create("_uuid"))); > > - json_object_put(op, "where", json_array_create_empty()); > > - json_array_add(ops, op); > > - } > > - > > - ctx->state = S_OUTPUT_ONLY_DATA_REQUESTED; > > - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); > > - } else { > > - ctx->state = S_UPDATE; > > - } > > -} > > - > > -static void > > -northd_pause(struct northd_ctx *ctx) > > 
-{ > > - if (!ctx->paused && ctx->lock_name) { > > - ctx->paused = true; > > - VLOG_INFO("This ovn-northd instance is now paused."); > > - ovsdb_cs_set_lock(ctx->cs, NULL); > > - } > > -} > > - > > -static void > > -northd_unpause(struct northd_ctx *ctx) > > -{ > > - if (ctx->paused) { > > - ovsdb_cs_set_lock(ctx->cs, ctx->lock_name); > > - ctx->paused = false; > > - } > > -} > > - > > -static void > > -warning_cb(uintptr_t arg OVS_UNUSED, > > - table_id table OVS_UNUSED, > > - const ddlog_record *rec, > > - ssize_t weight) > > -{ > > - size_t len; > > - const char *s = ddlog_get_str_with_length(rec, &len); > > - if (weight > 0) { > > - VLOG_WARN("New warning: %.*s", (int)len, s); > > - } else { > > - VLOG_WARN("Warning cleared: %.*s", (int)len, s); > > - } > > -} > > - > > -static int > > -ddlog_commit(ddlog_prog ddlog, ddlog_delta *delta) > > -{ > > - ddlog_delta *new_delta = ddlog_transaction_commit_dump_changes(ddlog); > > - if (!new_delta) { > > - VLOG_WARN("Transaction commit failed"); > > - return -1; > > - } > > - > > - /* Remove warnings from delta and output them straight away. */ > > - ddlog_delta *warnings = ddlog_delta_remove_table(new_delta, WARNING_TABLE_ID); > > - ddlog_delta_enumerate(warnings, warning_cb, 0); > > - ddlog_free_delta(warnings); > > - > > - /* Merge changes into `delta`. 
*/ > > - ddlog_delta_union(delta, new_delta); > > - ddlog_free_delta(new_delta); > > - > > - return 0; > > -} > > - > > -static int > > -ddlog_clear(struct northd_ctx *ctx) > > -{ > > - int n_failures = 0; > > - for (int i = 0; ctx->input_relations[i]; i++) { > > - char *table = xasprintf("%s%s", ctx->prefix, ctx->input_relations[i]); > > - if (ddlog_clear_relation(ctx->ddlog, ddlog_get_table_id(ctx->ddlog, > > - table))) { > > - n_failures++; > > - } > > - free(table); > > - } > > - if (n_failures) { > > - VLOG_WARN("failed to clear %d tables in %s database", > > - n_failures, ctx->db_name); > > - } > > - return n_failures; > > -} > > - > > -static const struct json * > > -json_object_get(const struct json *json, const char *member_name) > > -{ > > - return (json && json->type == JSON_OBJECT > > - ? shash_find_data(json_object(json), member_name) > > - : NULL); > > -} > > - > > -/* Stores into '*nb_cfgp' the new value of NB_Global::nb_cfg in the updates in > > - * <table-updates> provided by the caller. Leaves '*nb_cfgp' alone if the > > - * updates don't set NB_Global::nb_cfg. 
*/ > > -static void > > -get_nb_cfg(const struct json *table_updates, int64_t *nb_cfgp) > > -{ > > - const struct json *nb_global = json_object_get(table_updates, "NB_Global"); > > - if (nb_global) { > > - struct shash_node *row; > > - SHASH_FOR_EACH (row, json_object(nb_global)) { > > - const struct json *value = row->data; > > - const struct json *new = json_object_get(value, "new"); > > - const struct json *nb_cfg = json_object_get(new, "nb_cfg"); > > - if (nb_cfg && nb_cfg->type == JSON_INTEGER) { > > - *nb_cfgp = json_integer(nb_cfg); > > - return; > > - } > > - } > > - } > > -} > > - > > -static void > > -northd_parse_updates(struct northd_ctx *ctx, struct ovs_list *updates) > > -{ > > - if (ovs_list_is_empty(updates)) { > > - return; > > - } > > - > > - if (ddlog_transaction_start(ctx->ddlog)) { > > - VLOG_WARN("DDlog failed to start transaction"); > > - return; > > - } > > - > > - > > - /* Whenever a new 'nb_cfg' value comes in, we take the current time and > > - * push it into the NbCfgTimestamp relation for the DDlog program to put > > - * into nb::NB_Global.nb_cfg_timestamp. > > - * > > - * The 'old_nb_cfg' variables track the state we've pushed into DDlog. > > - * The 'new_nb_cfg' variables track what 'updates' sets (by default, > > - * no change, so we initialize from the old variables). */ > > - static int64_t old_nb_cfg = INT64_MIN; > > - static int64_t old_nb_cfg_timestamp = INT64_MIN; > > - int64_t new_nb_cfg = old_nb_cfg == INT64_MIN ? 
0 : old_nb_cfg; > > - int64_t new_nb_cfg_timestamp = old_nb_cfg_timestamp; > > - > > - struct ovsdb_cs_event *event; > > - LIST_FOR_EACH (event, list_node, updates) { > > - ovs_assert(event->type == OVSDB_CS_EVENT_TYPE_UPDATE); > > - struct ovsdb_cs_update_event *update = &event->update; > > - if (update->clear && ddlog_clear(ctx)) { > > - goto error; > > - } > > - > > - char *updates_s = json_to_string(update->table_updates, 0); > > - if (ddlog_apply_ovsdb_updates(ctx->ddlog, ctx->prefix, updates_s)) { > > - VLOG_WARN("DDlog failed to apply updates %s", updates_s); > > - free(updates_s); > > - goto error; > > - } > > - free(updates_s); > > - > > - if (ctx->has_timestamp_columns) { > > - get_nb_cfg(update->table_updates, &new_nb_cfg); > > - } > > - } > > - > > - if (ctx->has_timestamp_columns && new_nb_cfg != old_nb_cfg) { > > - new_nb_cfg_timestamp = time_wall_msec(); > > - > > - ddlog_cmd *cmds[2]; > > - int n_cmds = 0; > > - if (old_nb_cfg_timestamp != INT64_MIN) { > > - cmds[n_cmds++] = ddlog_delete_val_cmd( > > - NB_CFG_TIMESTAMP_ID, ddlog_i64(old_nb_cfg_timestamp)); > > - } > > - cmds[n_cmds++] = ddlog_insert_cmd( > > - NB_CFG_TIMESTAMP_ID, ddlog_i64(new_nb_cfg_timestamp)); > > - if (ddlog_apply_updates(ctx->ddlog, cmds, n_cmds) < 0) { > > - goto error; > > - } > > - } > > - > > - /* Commit changes to DDlog. */ > > - if (ddlog_commit(ctx->ddlog, ctx->delta)) { > > - goto error; > > - } > > - if (ctx->has_timestamp_columns) { > > - old_nb_cfg = new_nb_cfg; > > - old_nb_cfg_timestamp = new_nb_cfg_timestamp; > > - } > > - > > - /* This update may have implications for the other side, so > > - * immediately wake to check for more changes to be applied. 
*/ > > - poll_immediate_wake(); > > - > > - return; > > - > > -error: > > - ddlog_transaction_rollback(ctx->ddlog); > > -} > > - > > -static void > > -northd_process_txn_reply(struct northd_ctx *ctx, > > - const struct jsonrpc_msg *reply) > > -{ > > - if (!json_equal(reply->id, ctx->request_id)) { > > - VLOG_WARN("unexpected transaction reply"); > > - return; > > - } > > - > > - json_destroy(ctx->request_id); > > - ctx->request_id = NULL; > > - > > - if (reply->type == JSONRPC_ERROR) { > > - char *s = jsonrpc_msg_to_string(reply); > > - VLOG_WARN("received database error: %s", s); > > - free(s); > > - > > - ovsdb_cs_force_reconnect(ctx->cs); > > - return; > > - } > > - > > - switch (ctx->state) { > > - case S_INITIAL: > > - OVS_NOT_REACHED(); > > - break; > > - > > - case S_OUTPUT_ONLY_DATA_REQUESTED: > > - json_destroy(ctx->output_only_data); > > - ctx->output_only_data = json_clone(reply->result); > > - > > - ctx->state = S_UPDATE; > > - break; > > - > > - case S_UPDATE: > > - /* Nothing to do. */ > > - break; > > - > > - default: > > - OVS_NOT_REACHED(); > > - } > > -} > > - > > -static void > > -destroy_event_list(struct ovs_list *events) > > -{ > > - struct ovsdb_cs_event *event; > > - LIST_FOR_EACH_POP (event, list_node, events) { > > - ovsdb_cs_event_destroy(event); > > - } > > -} > > - > > -/* Processes a batch of messages from the database server on 'ctx'. 
*/ > > -static void > > -northd_run(struct northd_ctx *ctx) > > -{ > > - struct ovs_list events; > > - ovsdb_cs_run(ctx->cs, &events); > > - > > - struct ovs_list updates = OVS_LIST_INITIALIZER(&updates); > > - struct ovsdb_cs_event *event; > > - LIST_FOR_EACH_POP (event, list_node, &events) { > > - switch (event->type) { > > - case OVSDB_CS_EVENT_TYPE_RECONNECT: > > - json_destroy(ctx->request_id); > > - ctx->state = S_INITIAL; > > - break; > > - > > - case OVSDB_CS_EVENT_TYPE_LOCKED: > > - break; > > - > > - case OVSDB_CS_EVENT_TYPE_UPDATE: > > - if (event->update.clear) { > > - destroy_event_list(&updates); > > - } > > - ovs_list_push_back(&updates, &event->list_node); > > - continue; > > - > > - case OVSDB_CS_EVENT_TYPE_TXN_REPLY: > > - northd_process_txn_reply(ctx, event->txn_reply); > > - break; > > - } > > - ovsdb_cs_event_destroy(event); > > - } > > - > > - northd_parse_updates(ctx, &updates); > > - destroy_event_list(&updates); > > - > > - if (ctx->state == S_INITIAL && ovsdb_cs_may_send_transaction(ctx->cs)) { > > - northd_send_output_only_data_request(ctx); > > - } > > -} > > - > > -/* Pass the changes for 'ctx' to its database server. 
*/ > > -static void > > -northd_send_deltas(struct northd_ctx *ctx) > > -{ > > - if (ctx->request_id || !ovsdb_cs_may_send_transaction(ctx->cs)) { > > - return; > > - } > > - > > - struct json *ops = get_database_ops(ctx); > > - if (!ops) { > > - return; > > - } > > - > > - struct json *comment = json_object_create(); > > - json_object_put_string(comment, "op", "comment"); > > - json_object_put_string(comment, "comment", "ovn-northd-ddlog"); > > - json_array_add(ops, comment); > > - > > - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); > > -} > > - > > -static void > > -northd_update_probe_interval_cb( > > - uintptr_t probe_intervalp_, > > - table_id table OVS_UNUSED, > > - const ddlog_record *rec, > > - ssize_t weight OVS_UNUSED) > > -{ > > - int *probe_intervalp = (int *) probe_intervalp_; > > - > > - int64_t x = ddlog_get_i64(rec); > > - *probe_intervalp = (x > 0 && x < 1000 ? 1000 > > - : x > INT_MAX ? INT_MAX > > - : x); > > -} > > - > > -static void > > -northd_update_probe_interval(struct northd_ctx *nb, struct northd_ctx *sb) > > -{ > > - /* 0 means that Northd_Probe_Interval is empty. That means that we haven't > > - * connected to the database and retrieved an initial snapshot. Thus, we > > - * set an infinite probe interval to allow for retrieving and stabilizing > > - * an initial snapshot of the database, which can take a long time. > > - * > > - * -1 means that Northd_Probe_Interval is nonempty but the database doesn't > > - * set a probe interval. Thus, we use the default probe interval. > > - * > > - * Any other value is an explicit probe interval request from the > > - * database. 
*/ > > - int probe_interval = 0; > > - table_id tid = ddlog_get_table_id(nb->ddlog, "Northd_Probe_Interval"); > > - ddlog_delta *probe_delta = ddlog_delta_remove_table(nb->delta, tid); > > - ddlog_delta_enumerate(probe_delta, northd_update_probe_interval_cb, (uintptr_t) &probe_interval); > > - ddlog_free_delta(probe_delta); > > - > > - ovsdb_cs_set_probe_interval(nb->cs, probe_interval); > > - ovsdb_cs_set_probe_interval(sb->cs, probe_interval); > > -} > > - > > -/* Arranges for poll_block() to wake up when northd_run() has something to > > - * do or when activity occurs on a transaction on 'ctx'. */ > > -static void > > -northd_wait(struct northd_ctx *ctx) > > -{ > > - ovsdb_cs_wait(ctx->cs); > > -} > > - > > -/* ddlog-specific actions. */ > > - > > -/* Generate OVSDB update command for delta-plus, delta-minus, and delta-update > > - * tables. */ > > -static void > > -ddlog_table_update_deltas(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, > > - const char *db, const char *table) > > -{ > > - int error; > > - char *updates; > > - > > - error = ddlog_dump_ovsdb_delta_tables(ddlog, delta, db, table, &updates); > > - if (error) { > > - VLOG_INFO("DDlog error %d dumping delta for table %s", error, table); > > - return; > > - } > > - > > - if (!updates[0]) { > > - ddlog_free_json(updates); > > - return; > > - } > > - > > - ds_put_cstr(ds, updates); > > - ds_put_char(ds, ','); > > - ddlog_free_json(updates); > > -} > > - > > -/* Generate OVSDB update command for an output-only table. 
*/ > > -static void > > -ddlog_table_update_output(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, > > - const char *db, const char *table) > > -{ > > - int error; > > - char *updates; > > - > > - error = ddlog_dump_ovsdb_output_table(ddlog, delta, db, table, &updates); > > - if (error) { > > - VLOG_WARN("%s: failed to generate update commands for " > > - "output-only table (error %d)", table, error); > > - return; > > - } > > - char *table_name = xasprintf("%s::Out_%s", db, table); > > - ddlog_delta_clear_table(delta, ddlog_get_table_id(ddlog, table_name)); > > - free(table_name); > > - > > - if (!updates[0]) { > > - ddlog_free_json(updates); > > - return; > > - } > > - > > - ds_put_cstr(ds, updates); > > - ds_put_char(ds, ','); > > - ddlog_free_json(updates); > > -} > > - > > -static struct ovsdb_error * > > -parse_output_only_data(const struct json *txn_result, size_t index, > > - struct uuidset *uuidset) > > -{ > > - if (txn_result->type != JSON_ARRAY || txn_result->array.n <= index) { > > - return ovsdb_syntax_error(txn_result, NULL, > > - "transaction result missing for " > > - "output-only relation %"PRIuSIZE, index); > > - } > > - > > - struct ovsdb_parser p; > > - ovsdb_parser_init(&p, txn_result->array.elems[0], "select result"); > > - const struct json *rows = ovsdb_parser_member(&p, "rows", OP_ARRAY); > > - struct ovsdb_error *error = ovsdb_parser_finish(&p); > > - if (error) { > > - return error; > > - } > > - > > - for (size_t i = 0; i < rows->array.n; i++) { > > - const struct json *row = rows->array.elems[i]; > > - > > - ovsdb_parser_init(&p, row, "row"); > > - const struct json *uuid = ovsdb_parser_member(&p, "_uuid", OP_ARRAY); > > - error = ovsdb_parser_finish(&p); > > - if (error) { > > - return error; > > - } > > - > > - struct ovsdb_base_type base_type = OVSDB_BASE_UUID_INIT; > > - union ovsdb_atom atom; > > - error = ovsdb_atom_from_json(&atom, &base_type, uuid, NULL); > > - if (error) { > > - return error; > > - } > > - 
uuidset_insert(uuidset, &atom.uuid); > > - } > > - > > - return NULL; > > -} > > - > > -static bool > > -get_ddlog_uuid(const ddlog_record *rec, struct uuid *uuid) > > -{ > > - if (!ddlog_is_int(rec)) { > > - return false; > > - } > > - > > - __uint128_t u128 = ddlog_get_u128(rec); > > - uuid->parts[0] = u128 >> 96; > > - uuid->parts[1] = u128 >> 64; > > - uuid->parts[2] = u128 >> 32; > > - uuid->parts[3] = u128; > > - return true; > > -} > > - > > -struct dump_index_data { > > - ddlog_prog prog; > > - struct uuidset *rows_present; > > - const char *table; > > - struct ds *ops_s; > > -}; > > - > > -static void OVS_UNUSED > > -index_cb(uintptr_t data_, const ddlog_record *rec) > > -{ > > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5); > > - struct dump_index_data *data = (struct dump_index_data *) data_; > > - > > - /* Extract the rec's row UUID as 'uuid'. */ > > - const ddlog_record *rec_uuid = ddlog_get_named_struct_field(rec, "_uuid"); > > - if (!rec_uuid) { > > - VLOG_WARN_RL(&rl, "%s: row has no _uuid column", data->table); > > - return; > > - } > > - struct uuid uuid; > > - if (!get_ddlog_uuid(rec_uuid, &uuid)) { > > - VLOG_WARN_RL(&rl, "%s: _uuid column has unexpected type", data->table); > > - return; > > - } > > - > > - /* If a row with the given UUID was already in the database, then > > - * send an operation to update it; otherwise, send an operation to > > - * insert it. 
*/ > > - struct uuidset_node *node = uuidset_find(data->rows_present, &uuid); > > - char *s = NULL; > > - int ret; > > - if (node) { > > - uuidset_delete(data->rows_present, node); > > - ret = ddlog_into_ovsdb_update_str(data->prog, data->table, rec, &s); > > - } else { > > - ret = ddlog_into_ovsdb_insert_str(data->prog, data->table, rec, &s); > > - } > > - if (ret) { > > - VLOG_WARN_RL(&rl, "%s: ddlog could not convert row into database op", > > - data->table); > > - return; > > - } > > - ds_put_format(data->ops_s, "%s,", s); > > - ddlog_free_json(s); > > -} > > - > > -static struct json * > > -where_uuid_equals(const struct uuid *uuid) > > -{ > > - return > > - json_array_create_1( > > - json_array_create_3( > > - json_string_create("_uuid"), > > - json_string_create("=="), > > - json_array_create_2( > > - json_string_create("uuid"), > > - json_string_create_nocopy( > > - xasprintf(UUID_FMT, UUID_ARGS(uuid)))))); > > -} > > - > > -static void > > -add_delete_row_op(const char *table, const struct uuid *uuid, struct ds *ops_s) > > -{ > > - struct json *op = json_object_create(); > > - json_object_put_string(op, "op", "delete"); > > - json_object_put_string(op, "table", table); > > - json_object_put(op, "where", where_uuid_equals(uuid)); > > - json_to_ds(op, 0, ops_s); > > - json_destroy(op); > > - ds_put_char(ops_s, ','); > > -} > > - > > -static void > > -northd_update_sb_cfg_cb( > > - uintptr_t new_sb_cfgp_, > > - table_id table OVS_UNUSED, > > - const ddlog_record *rec, > > - ssize_t weight) > > -{ > > - int64_t *new_sb_cfgp = (int64_t *) new_sb_cfgp_; > > - > > - if (weight < 0) { > > - return; > > - } > > - > > - if (ddlog_get_int(rec, NULL, 0) <= sizeof *new_sb_cfgp) { > > - *new_sb_cfgp = ddlog_get_i64(rec); > > - } > > -} > > - > > -static struct json * > > -get_database_ops(struct northd_ctx *ctx) > > -{ > > - struct ds ops_s = DS_EMPTY_INITIALIZER; > > - ds_put_char(&ops_s, '['); > > - json_string_escape(ctx->db_name, &ops_s); > > - ds_put_char(&ops_s, 
','); > > - size_t start_len = ops_s.length; > > - > > - for (const char **p = ctx->output_relations; *p; p++) { > > - ddlog_table_update_deltas(&ops_s, ctx->ddlog, ctx->delta, > > - ctx->db_name, *p); > > - } > > - > > - if (ctx->output_only_data) { > > - /* > > - * We just reconnected to the database (or connected for the first time > > - * in this execution). We assume that the contents of the output-only > > - * tables might have changed (this is especially true the first time we > > - * connect to the database in a given execution, of course; we can't > > - * assume that the tables have any particular contents in this case). > > - * > > - * ctx->output_only_data is a database reply that tells us the > > - * UUIDs of the rows that exist in the database. Our strategy is to > > - * compare these UUIDs to the UUIDs of the rows that exist in the DDlog > > - * analogues of these tables, and then add, delete, or update rows as > > - * necessary. > > - * > > - * (ctx->output_only_data only gives row UUIDs, not full row > > - * contents. That means that for rows that exist in OVSDB and in > > - * DDlog, we always send an update to set all the columns. It wouldn't > > - * save bandwidth to do anything else, since we'd always have to send > > - * the full row contents in one direction and if there were differences > > - * we'd have to send the contents in both directions. With this > > - * strategy we only send them in one direction even in the worst case.) > > - * > > - * (We can't just send an operation to delete all the rows and then > > - * re-add them all in the same transaction, because ovsdb-server > > - * rejects deleting a row with a given UUID and then adding the same > > - * UUID back in a single transaction.) 
> > - */ > > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 2); > > - > > - for (size_t i = 0; ctx->output_only_relations[i]; i++) { > > - const char *table = ctx->output_only_relations[i]; > > - > > - /* Parse the list of row UUIDs received from OVSDB. */ > > - struct uuidset rows_present = UUIDSET_INITIALIZER(&rows_present); > > - struct ovsdb_error *error = parse_output_only_data( > > - ctx->output_only_data, i, &rows_present); > > - if (error) { > > - char *s = ovsdb_error_to_string_free(error); > > - VLOG_WARN_RL(&rl, "%s", s); > > - free(s); > > - uuidset_destroy(&rows_present); > > - continue; > > - } > > - > > - /* Get the index_id for the DDlog table. > > - * > > - * We require output-only tables to have an accompanying index > > - * named <table>_Index. */ > > - char *index = xasprintf("%s_Index", table); > > - index_id idxid = ddlog_get_index_id(ctx->ddlog, index); > > - if (idxid == -1) { > > - VLOG_WARN_RL(&rl, "%s: unknown index", index); > > - free(index); > > - uuidset_destroy(&rows_present); > > - continue; > > - } > > - free(index); > > - > > - /* For each row in the index, update a corresponding OVSDB row, if > > - * there is one, otherwise insert a new row. */ > > - struct dump_index_data cbdata = { > > - ctx->ddlog, &rows_present, table, &ops_s > > - }; > > - ddlog_dump_index(ctx->ddlog, idxid, index_cb, (uintptr_t) &cbdata); > > - > > - /* Any uuids remaining in 'rows_present' are rows that are in OVSDB > > - * but not DDlog. Delete them from OVSDB. */ > > - struct uuidset_node *node; > > - UUIDSET_FOR_EACH (node, &rows_present) { > > - add_delete_row_op(table, &node->uuid, &ops_s); > > - } > > - uuidset_destroy(&rows_present); > > - > > - /* Discard any queued output to this table, since we just > > - * did a full sync to it. 
*/ > > - struct ds tmp = DS_EMPTY_INITIALIZER; > > - ddlog_table_update_output(&tmp, ctx->ddlog, ctx->delta, > > - ctx->db_name, table); > > - ds_destroy(&tmp); > > - } > > - > > - json_destroy(ctx->output_only_data); > > - ctx->output_only_data = NULL; > > - } else { > > - for (const char **p = ctx->output_only_relations; *p; p++) { > > - ddlog_table_update_output(&ops_s, ctx->ddlog, ctx->delta, > > - ctx->db_name, *p); > > - } > > - } > > - > > - /* If we're updating nb::NB_Global.sb_cfg, then also update > > - * sb_cfg_timestamp. > > - * > > - * XXX If the transaction we're sending to the database fails, then > > - * currently as written we'll never find out about it and sb_cfg_timestamp > > - * will not be updated. > > - */ > > - static int64_t old_sb_cfg = INT64_MIN; > > - static int64_t old_sb_cfg_timestamp = INT64_MIN; > > - int64_t new_sb_cfg = old_sb_cfg; > > - if (ctx->has_timestamp_columns) { > > - table_id sb_cfg_tid = ddlog_get_table_id(ctx->ddlog, "SbCfg"); > > - ddlog_delta *sb_cfg_delta = ddlog_delta_remove_table(ctx->delta, > > - sb_cfg_tid); > > - ddlog_delta_enumerate(sb_cfg_delta, northd_update_sb_cfg_cb, > > - (uintptr_t) &new_sb_cfg); > > - ddlog_free_delta(sb_cfg_delta); > > - > > - if (new_sb_cfg != old_sb_cfg) { > > - old_sb_cfg = new_sb_cfg; > > - old_sb_cfg_timestamp = time_wall_msec(); > > - ds_put_format(&ops_s, "{\"op\":\"update\",\"table\":\"NB_Global\",\"where\":[]," > > - "\"row\":{\"sb_cfg_timestamp\":%"PRId64"}},", old_sb_cfg_timestamp); > > - } > > - } > > - > > - struct json *ops; > > - if (ops_s.length > start_len) { > > - ds_chomp(&ops_s, ','); > > - ds_put_char(&ops_s, ']'); > > - ops = json_from_string(ds_cstr(&ops_s)); > > - } else { > > - ops = NULL; > > - } > > - > > - ds_destroy(&ops_s); > > - > > - return ops; > > -} > > - > > -/* Callback used by the ddlog engine to print error messages. 
Note that > > - * this is only used by the ddlog runtime, as opposed to the application > > - * code in ovn_northd.dl, which uses the vlog facility directly. */ > > -static void > > -ddlog_print_error(const char *msg) > > -{ > > - VLOG_ERR("%s", msg); > > -} > > - > > -static void > > -usage(void) > > -{ > > - printf("\ > > -%s: OVN northbound management daemon (DDlog version)\n\ > > -usage: %s [OPTIONS]\n\ > > -\n\ > > -Options:\n\ > > - --ovnnb-db=DATABASE connect to ovn-nb database at DATABASE\n\ > > - (default: %s)\n\ > > - --ovnsb-db=DATABASE connect to ovn-sb database at DATABASE\n\ > > - (default: %s)\n\ > > - --dry-run start in paused state (do not commit db changes)\n\ > > - --ddlog-record=FILE.TXT record db changes to replay later for debugging\n\ > > - --unixctl=SOCKET override default control socket name\n\ > > - -h, --help display this help message\n\ > > - -o, --options list available options\n\ > > - -V, --version display version information\n\ > > -", program_name, program_name, default_nb_db(), default_sb_db()); > > - daemon_usage(); > > - vlog_usage(); > > - stream_usage("database", true, true, false); > > -} > > - > > -static void > > -parse_options(int argc OVS_UNUSED, char *argv[] OVS_UNUSED, > > - bool *pause) > > -{ > > - enum { > > - OVN_DAEMON_OPTION_ENUMS, > > - VLOG_OPTION_ENUMS, > > - SSL_OPTION_ENUMS, > > - OPT_DRY_RUN, > > - OPT_DDLOG_RECORD, > > - }; > > - static const struct option long_options[] = { > > - {"ovnsb-db", required_argument, NULL, 'd'}, > > - {"ovnnb-db", required_argument, NULL, 'D'}, > > - {"unixctl", required_argument, NULL, 'u'}, > > - {"help", no_argument, NULL, 'h'}, > > - {"options", no_argument, NULL, 'o'}, > > - {"version", no_argument, NULL, 'V'}, > > - {"dry-run", no_argument, NULL, OPT_DRY_RUN}, > > - {"ddlog-record", required_argument, NULL, OPT_DDLOG_RECORD}, > > - OVN_DAEMON_LONG_OPTIONS, > > - VLOG_LONG_OPTIONS, > > - STREAM_SSL_LONG_OPTIONS, > > - {NULL, 0, NULL, 0}, > > - }; > > - char *short_options = 
ovs_cmdl_long_options_to_short_options(long_options); > > - > > - for (;;) { > > - int c; > > - > > - c = getopt_long(argc, argv, short_options, long_options, NULL); > > - if (c == -1) { > > - break; > > - } > > - > > - switch (c) { > > - OVN_DAEMON_OPTION_HANDLERS; > > - VLOG_OPTION_HANDLERS; > > - > > - case 'p': > > - ssl_private_key_file = optarg; > > - break; > > - > > - case 'c': > > - ssl_certificate_file = optarg; > > - break; > > - > > - case 'C': > > - ssl_ca_cert_file = optarg; > > - break; > > - > > - case 'd': > > - ovnsb_db = optarg; > > - break; > > - > > - case 'D': > > - ovnnb_db = optarg; > > - break; > > - > > - case 'u': > > - unixctl_path = optarg; > > - break; > > - > > - case 'h': > > - usage(); > > - exit(EXIT_SUCCESS); > > - > > - case 'o': > > - ovs_cmdl_print_options(long_options); > > - exit(EXIT_SUCCESS); > > - > > - case 'V': > > - ovs_print_version(0, 0); > > - exit(EXIT_SUCCESS); > > - > > - case OPT_DRY_RUN: > > - *pause = true; > > - break; > > - > > - case OPT_DDLOG_RECORD: > > - record_file = optarg; > > - break; > > - > > - default: > > - break; > > - } > > - } > > - > > - if (!ovnsb_db || !ovnsb_db[0]) { > > - ovnsb_db = default_sb_db(); > > - } > > - > > - if (!ovnnb_db || !ovnnb_db[0]) { > > - ovnnb_db = default_nb_db(); > > - } > > - > > - free(short_options); > > -} > > - > > -static void > > -update_ssl_config(void) > > -{ > > - if (ssl_private_key_file && ssl_certificate_file) { > > - stream_ssl_set_key_and_cert(ssl_private_key_file, > > - ssl_certificate_file); > > - } > > - if (ssl_ca_cert_file) { > > - stream_ssl_set_ca_cert_file(ssl_ca_cert_file, false); > > - } > > -} > > - > > -int > > -main(int argc, char *argv[]) > > -{ > > - int res = EXIT_SUCCESS; > > - struct unixctl_server *unixctl; > > - int retval; > > - bool exiting; > > - struct northd_status status = { > > - .locked = false, > > - .pause = false, > > - }; > > - > > - fatal_ignore_sigpipe(); > > - ovs_cmdl_proctitle_init(argc, argv); > > - 
set_program_name(argv[0]); > > - service_start(&argc, &argv); > > - parse_options(argc, argv, &status.pause); > > - > > - daemonize_start(false); > > - > > - char *abs_unixctl_path = get_abs_unix_ctl_path(unixctl_path); > > - retval = unixctl_server_create(abs_unixctl_path, &unixctl); > > - free(abs_unixctl_path); > > - > > - if (retval) { > > - exit(EXIT_FAILURE); > > - } > > - > > - unixctl_command_register("exit", "", 0, 0, ovn_northd_exit, &exiting); > > - unixctl_command_register("status", "", 0, 0, ovn_northd_status, &status); > > - > > - ddlog_prog ddlog; > > - ddlog_delta *delta; > > - ddlog = ddlog_run(1, false, ddlog_print_error, &delta); > > - if (!ddlog) { > > - ovs_fatal(0, "DDlog instance could not be created"); > > - } > > - init_table_ids(ddlog); > > - > > - unixctl_command_register("enable-cpu-profiling", "", 0, 0, > > - ovn_northd_enable_cpu_profiling, ddlog); > > - unixctl_command_register("disable-cpu-profiling", "", 0, 0, > > - ovn_northd_disable_cpu_profiling, ddlog); > > - unixctl_command_register("profile", "", 0, 0, ovn_northd_profile, ddlog); > > - > > - int replay_fd = -1; > > - if (record_file) { > > - replay_fd = open(record_file, O_CREAT | O_WRONLY | O_TRUNC, 0666); > > - if (replay_fd < 0) { > > - ovs_fatal(errno, "%s: could not create DDlog record file", > > - record_file); > > - } > > - > > - if (ddlog_record_commands(ddlog, replay_fd)) { > > - ovs_fatal(0, "could not enable DDlog command recording"); > > - } > > - } > > - > > - struct northd_ctx *nb_ctx = northd_ctx_create( > > - ovnnb_db, "OVN_Northbound", "nb", NULL, ddlog, delta, > > - nb_input_relations, nb_output_relations, nb_output_only_relations, > > - status.pause); > > - struct northd_ctx *sb_ctx = northd_ctx_create( > > - ovnsb_db, "OVN_Southbound", "sb", "ovn_northd", ddlog, delta, > > - sb_input_relations, sb_output_relations, sb_output_only_relations, > > - status.pause); > > - > > - unixctl_command_register("pause", "", 0, 0, ovn_northd_pause, sb_ctx); > > - 
unixctl_command_register("resume", "", 0, 0, ovn_northd_resume, sb_ctx); > > - unixctl_command_register("is-paused", "", 0, 0, ovn_northd_is_paused, > > - sb_ctx); > > - > > - char *ovn_internal_version = ovn_get_internal_version(); > > - VLOG_INFO("OVN internal version is : [%s]", ovn_internal_version); > > - free(ovn_internal_version); > > - > > - daemonize_complete(); > > - > > - stopwatch_create(NORTHD_LOOP_STOPWATCH_NAME, SW_MS); > > - stopwatch_create(OVNNB_DB_RUN_STOPWATCH_NAME, SW_MS); > > - stopwatch_create(OVNSB_DB_RUN_STOPWATCH_NAME, SW_MS); > > - > > - /* Main loop. */ > > - exiting = false; > > - while (!exiting) { > > - update_ssl_config(); > > - memory_run(); > > - if (memory_should_report()) { > > - struct simap usage = SIMAP_INITIALIZER(&usage); > > - > > - /* Nothing special to report yet. */ > > - memory_report(&usage); > > - simap_destroy(&usage); > > - } > > - > > - bool has_lock = ovsdb_cs_has_lock(sb_ctx->cs); > > - if (!sb_ctx->paused) { > > - if (has_lock && !status.locked) { > > - VLOG_INFO("ovn-northd lock acquired. " > > - "This ovn-northd instance is now active."); > > - } else if (!has_lock && status.locked) { > > - VLOG_INFO("ovn-northd lock lost. 
" > > - "This ovn-northd instance is now on standby."); > > - } > > - } > > - status.locked = has_lock; > > - status.pause = sb_ctx->paused; > > - > > - stopwatch_start(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); > > - northd_run(nb_ctx); > > - stopwatch_stop(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); > > - stopwatch_start(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); > > - northd_run(sb_ctx); > > - stopwatch_stop(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); > > - northd_update_probe_interval(nb_ctx, sb_ctx); > > - if (ovsdb_cs_has_lock(sb_ctx->cs) && > > - sb_ctx->state == S_UPDATE && > > - nb_ctx->state == S_UPDATE && > > - ovsdb_cs_may_send_transaction(sb_ctx->cs) && > > - ovsdb_cs_may_send_transaction(nb_ctx->cs)) { > > - northd_send_deltas(nb_ctx); > > - northd_send_deltas(sb_ctx); > > - } > > - > > - stopwatch_stop(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); > > - stopwatch_start(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); > > - unixctl_server_run(unixctl); > > - > > - northd_wait(nb_ctx); > > - northd_wait(sb_ctx); > > - unixctl_server_wait(unixctl); > > - memory_wait(); > > - if (exiting) { > > - poll_immediate_wake(); > > - } > > - poll_block(); > > - if (should_service_stop()) { > > - exiting = true; > > - } > > - } > > - > > - northd_ctx_destroy(nb_ctx); > > - northd_ctx_destroy(sb_ctx); > > - > > - ddlog_free_delta(delta); > > - ddlog_stop(ddlog); > > - > > - if (replay_fd >= 0) { > > - fsync(replay_fd); > > - close(replay_fd); > > - } > > - > > - unixctl_server_destroy(unixctl); > > - service_stop(); > > - > > - exit(res); > > -} > > - > > -static void > > -ovn_northd_exit(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *exiting_) > > -{ > > - bool *exiting = exiting_; > > - *exiting = true; > > - > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static void > > -ovn_northd_pause(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > > -{ > > - struct northd_ctx 
*sb_ctx = sb_ctx_; > > - northd_pause(sb_ctx); > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static void > > -ovn_northd_resume(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > > -{ > > - struct northd_ctx *sb_ctx = sb_ctx_; > > - northd_unpause(sb_ctx); > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static void > > -ovn_northd_is_paused(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *sb_ctx_) > > -{ > > - struct northd_ctx *sb_ctx = sb_ctx_; > > - unixctl_command_reply(conn, sb_ctx->paused ? "true" : "false"); > > -} > > - > > -static void > > -ovn_northd_status(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *status_) > > -{ > > - struct northd_status *status = status_; > > - > > - /* Use a labeled formatted output so we can add more to the status command > > - * later without breaking any consuming scripts. */ > > - char *s = xasprintf("Status: %s\n", > > - status->pause ? "paused" > > - : status->locked ? 
"active" > > - : "standby"); > > - unixctl_command_reply(conn, s); > > - free(s); > > -} > > - > > -static void > > -ovn_northd_enable_cpu_profiling(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *prog_) > > -{ > > - ddlog_prog prog = prog_; > > - ddlog_enable_cpu_profiling(prog, true); > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static void > > -ovn_northd_disable_cpu_profiling(struct unixctl_conn *conn, > > - int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *prog_) > > -{ > > - ddlog_prog prog = prog_; > > - ddlog_enable_cpu_profiling(prog, false); > > - unixctl_command_reply(conn, NULL); > > -} > > - > > -static void > > -ovn_northd_profile(struct unixctl_conn *conn, int argc OVS_UNUSED, > > - const char *argv[] OVS_UNUSED, void *prog_) > > -{ > > - ddlog_prog prog = prog_; > > - char *profile = ddlog_profile(prog); > > - unixctl_command_reply(conn, profile); > > - free(profile); > > -} > > diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml > > index a77bd719e8..bf12ed5cd8 100644 > > --- a/northd/ovn-northd.8.xml > > +++ b/northd/ovn-northd.8.xml > > @@ -1,7 +1,7 @@ > > <?xml version="1.0" encoding="utf-8"?> > > <manpage program="ovn-northd" section="8" title="ovn-northd"> > > <h1>Name</h1> > > - <p>ovn-northd and ovn-northd-ddlog -- Open Virtual Network central control daemon</p> > > + <p>ovn-northd -- Open Virtual Network central control daemon</p> > > > > <h1>Synopsis</h1> > > <p><code>ovn-northd</code> [<var>options</var>]</p> > > @@ -18,14 +18,6 @@ > > <code>ovn-sb</code>(5)) below it. > > </p> > > > > - <p> > > - <code>ovn-northd</code> is implemented in C. > > - <code>ovn-northd-ddlog</code> is a compatible implementation written in > > - DDlog, a language for incremental database processing. This > > - documentation applies to both implementations, with differences indicated > > - where relevant. 
> > - </p> > > - > > <h1>Options</h1> > > <dl> > > <dt><code>--ovnnb-db=<var>database</var></code></dt> > > @@ -42,16 +34,6 @@ > > as the default. Otherwise, the default is > > <code>unix:@RUNDIR@/ovnsb_db.sock</code>. > > </dd> > > - <dt><code>--ddlog-record=<var>file</var></code></dt> > > - <dd> > > - This option is for <code>ovn-northd-ddlog</code> only. It causes the > > - daemon to record the initial database state and later changes to > > - <var>file</var> in the text-based DDlog command format. The > > - <code>ovn_northd_cli</code> program can later replay these changes for > > - debugging purposes. This option has a performance impact. See > > - <code>debugging-ddlog.rst</code> in the OVN documentation for more > > - details. > > - </dd> > > <dt><code>--dry-run</code></dt> > > <dd> > > <p> > > @@ -61,12 +43,6 @@ > > <code>pause</code> command, under <code>Runtime Management > > Commands</code> below. > > </p> > > - > > - <p> > > - For <code>ovn-northd-ddlog</code>, one could use this option with > > - <code>--ddlog-record</code> to generate a replay log without > > - restarting a process or disturbing a running system. > > - </p> > > </dd> > > <dt><code>n-threads N</code></dt> > > <dd> > > @@ -85,10 +61,6 @@ > > If N is more than 256, then N is set to 256, parallelization is > > enabled (with 256 threads) and a warning is logged. > > </p> > > - > > - <p> > > - ovn-northd-ddlog does not support this option. > > - </p> > > </dd> > > </dl> > > <p> > > @@ -241,32 +213,6 @@ > > </dl> > > </p> > > > > - <p> > > - Only <code>ovn-northd-ddlog</code> supports the following commands: > > - </p> > > - > > - <dl> > > - <dt><code>enable-cpu-profiling</code></dt> > > - <dt><code>disable-cpu-profiling</code></dt> > > - <dd> > > - Enables or disables profiling of CPU time used by the DDlog engine. > > - When CPU profiling is enabled, the <code>profile</code> command (see > > - below) will include DDlog CPU usage statistics in its output.
Enabling > > - CPU profiling will slow <code>ovn-northd-ddlog</code>. Disabling CPU > > - profiling does not clear any previously recorded statistics. > > - </dd> > > - > > - <dt><code>profile</code></dt> > > - <dd> > > - Outputs a profile of the current and peak sizes of arrangements inside > > - DDlog. This profiling data can be useful for optimizing DDlog code. > > - If CPU profiling was previously enabled (even if it was later > > - disabled), the output also includes a CPU time profile. See > > - <code>Profiling</code> inside the tutorial in the DDlog repository for > > - an introduction to profiling DDlog. > > - </dd> > > - </dl> > > - > > <h1>Active-Standby for High Availability</h1> > > <p> > > You may run <code>ovn-northd</code> more than once in an OVN deployment. > > diff --git a/northd/ovn-sb.dlopts b/northd/ovn-sb.dlopts > > deleted file mode 100644 > > index 99b65f1019..0000000000 > > --- a/northd/ovn-sb.dlopts > > +++ /dev/null > > @@ -1,34 +0,0 @@ > > ---intern-strings > > --o Address_Set > > --o BFD > > --o DHCP_Options > > --o DHCPv6_Options > > --o DNS > > --o Datapath_Binding > > --o FDB > > --o Gateway_Chassis > > --o HA_Chassis > > --o HA_Chassis_Group > > --o IP_Multicast > > --o Load_Balancer > > --o Logical_DP_Group > > --o MAC_Binding > > --o Meter > > --o Meter_Band > > --o Multicast_Group > > --o Port_Binding > > --o Port_Group > > --o RBAC_Permission > > --o RBAC_Role > > --o SB_Global > > --o Service_Monitor > > ---output-only Logical_Flow > > ---ro IP_Multicast.seq_no > > ---ro Port_Binding.chassis > > ---ro Port_Binding.encap > > ---ro Port_Binding.virtual_parent > > ---ro SB_Global.connections > > ---ro SB_Global.external_ids > > ---ro SB_Global.ssl > > ---ro Service_Monitor.status > > ---intern-table Service_Monitor > > diff --git a/northd/ovn.dl b/northd/ovn.dl > > deleted file mode 100644 > > index 3585eb3dc2..0000000000 > > --- a/northd/ovn.dl > > +++ /dev/null > > @@ -1,387 +0,0 @@ > > -/* > > - * Licensed under the Apache 
License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -import ovsdb > > -import bitwise > > - > > -/* Logical port is enabled if it does not have an enabled flag or the flag is true */ > > -function is_enabled(s: Option<bool>): bool = { > > - s != Some{false} > > -} > > - > > -/* > > - * Ethernet addresses > > - */ > > -typedef eth_addr = EthAddr { > > - ha: bit<48> // In host byte order, e.g. ha[40] is the multicast bit. > > -} > > - > > -function to_string(addr: eth_addr): string { > > - eth_addr2string(addr) > > -} > > -extern function eth_addr_from_string(s: string): Option<eth_addr> > > -extern function scan_eth_addr(s: string): Option<eth_addr> > > -extern function scan_eth_addr_prefix(s: string): Option<eth_addr> > > -function eth_addr_zero(): eth_addr { EthAddr{0} } > > -function eth_addr_pseudorandom(seed: uuid, variant: bit<16>) : eth_addr { > > - EthAddr{hash64(seed ++ variant) as bit<48>}.mark_random() > > -} > > -function mark_random(ea: eth_addr): eth_addr { EthAddr{ea.ha & ~(1 << 40) | (1 << 41)} } > > - > > -function to_eui64(ea: eth_addr): bit<64> { > > - var ha = ea.ha as u64; > > - (((ha & 64'hffffff000000) << 16) | 64'hfffe000000 | (ha & 64'hffffff)) ^ (1 << 57) > > -} > > - > > -extern function eth_addr2string(addr: eth_addr): string > > - > > -/* > > - * IPv4 addresses > > - */ > > - > > -typedef in_addr = InAddr { > > - a: bit<32> // In host byte order. 
> > -} > > - > > -extern function ip_parse(s: string): Option<in_addr> > > -extern function ip_parse_masked(s: string): Either<string/*err*/, (in_addr/*host_ip*/, in_addr/*mask*/)> > > -extern function ip_parse_cidr(s: string): Either<string/*err*/, (in_addr/*ip*/, bit<32>/*plen*/)> > > -extern function scan_static_dynamic_ip(s: string): Option<in_addr> > > -function ip_create_mask(plen: bit<32>): in_addr { InAddr{(64'hffffffff << (32 - plen))[31:0]} } > > - > > -function to_string(ip: in_addr): string = { > > - "${ip.a >> 24}.${(ip.a >> 16) & 'hff}.${(ip.a >> 8) & 'hff}.${ip.a & 'hff}" > > -} > > - > > -function is_cidr(netmask: in_addr): bool { var x = ~netmask.a; (x & (x + 1)) == 0 } > > -function is_local_multicast(ip: in_addr): bool { (ip.a & 32'hffffff00) == 32'he0000000 } > > -function is_zero(a: in_addr): bool { a.a == 0 } > > -function is_all_ones(a: in_addr): bool { a.a == 32'hffffffff } > > -function cidr_bits(ip: in_addr): Option<bit<8>> { > > - if (ip.is_cidr()) { > > - Some{32 - ip.a.trailing_zeros() as u8} > > - } else { > > - None > > - } > > -} > > - > > -function network(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a & mask.a} } > > -function host(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a & ~mask.a} } > > -function bcast(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a | ~mask.a} } > > - > > -/* True if both 'ips' are in the same network as defined by netmask 'mask', > > - * false otherwise. */ > > -function same_network(ips: (in_addr, in_addr), mask: in_addr): bool { > > - ((ips.0.a ^ ips.1.a) & mask.a) == 0 > > -} > > - > > -/* > > - * parse IPv4 address list of the form: > > - * "10.0.0.4 10.0.0.10 10.0.0.20..10.0.0.50 10.0.0.100..10.0.0.110" > > - */ > > -extern function parse_ip_list(ips: string): Either<string, Vec<(in_addr, Option<in_addr>)>> > > - > > -/* > > - * IPv6 addresses > > - */ > > -typedef in6_addr = In6Addr { > > - aaaa: bit<128> // In host byte order. 
> > -} > > - > > -extern function ipv6_parse(s: string): Option<in6_addr> > > -extern function ipv6_parse_masked(s: string): Either<string/*err*/, (in6_addr/*ip*/, in6_addr/*mask*/)> > > -extern function ipv6_parse_cidr(s: string): Either<string/*err*/, (in6_addr/*ip*/, bit<32>/*plen*/)> > > - > > -// Return IPv6 link local address for the given 'ea'. > > -function to_ipv6_lla(ea: eth_addr): in6_addr { > > - In6Addr{(128'hfe80 << 112) | (ea.to_eui64() as u128)} > > -} > > - > > -// Returns IPv6 EUI64 address for 'ea' with the given 'prefix'. > > -function to_ipv6_eui64(ea: eth_addr, prefix: in6_addr): in6_addr { > > - In6Addr{(prefix.aaaa & ~128'hffffffffffffffff) | (ea.to_eui64() as u128)} > > -} > > - > > -function ipv6_create_mask(plen: bit<32>): in6_addr { > > - if (plen == 0) { > > - In6Addr{0} > > - } else { > > - var shift = max(0, 128 - plen); > > - In6Addr{128'hffffffffffffffffffffffffffffffff << shift} > > - } > > -} > > - > > -function is_zero(a: in6_addr): bool { a.aaaa == 0 } > > -function is_all_ones(a: in6_addr): bool { a.aaaa == 128'hffffffffffffffffffffffffffffffff } > > -function is_lla(a: in6_addr): bool { (a.aaaa >> 64) == 128'hfe80000000000000 } > > -function is_all_hosts(a: in6_addr): bool { a.aaaa == 128'hff020000000000000000000000000001 } > > -function is_cidr(netmask: in6_addr): bool { var x = ~netmask.aaaa; (x & (x + 1)) == 0 } > > -function is_multicast(a: in6_addr): bool { (a.aaaa >> 120) == 128'hff } > > -function is_routable_multicast(a: in6_addr): bool { > > - a.is_multicast() and match ((a.aaaa >> 112) as u8 & 8'hf) { > > - 0 -> false, > > - 1 -> false, > > - 2 -> false, > > - 3 -> false, > > - 15 -> false, > > - _ -> true > > - } > > -} > > - > > -extern function string_mapped(addr: in6_addr): string > > - > > -function network(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & mask.aaaa} } > > -function host(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & ~mask.aaaa} } > > -function solicited_node(ip6: 
in6_addr): in6_addr { > > - In6Addr{(ip6.aaaa & 128'hffffff) | 128'hff02_0000_0000_0000_0000_0001_ff00_0000} > > -} > > - > > -/* True if both 'ips' are in the same network as defined by netmask 'mask', > > - * false otherwise. */ > > -function same_network(ips: (in6_addr, in6_addr), mask: in6_addr): bool { > > - ips.0.network(mask) == ips.1.network(mask) > > -} > > - > > -function multicast_to_ethernet(ip6: in6_addr): eth_addr { > > - EthAddr{48'h333300000000 | (ip6.aaaa as bit<48> & 48'hffffffff)} > > -} > > - > > -function cidr_bits(ip6: in6_addr): Option<bit<8>> { > > - if (ip6.is_cidr()) { > > - Some{128 - ip6.aaaa.trailing_zeros() as u8} > > - } else { > > - None > > - } > > -} > > - > > -function to_string(addr: in6_addr): string { inet6_ntop(addr) } > > -extern function inet6_ntop(addr: in6_addr): string > > - > > -/* > > - * IPv4 | IPv6 addresses > > - */ > > - > > -typedef v46_ip = IPv4 { ipv4: in_addr } | IPv6 { ipv6: in6_addr } > > - > > -function ip46_parse_cidr(s: string) : Option<(v46_ip, bit<32>)> = { > > - match (ip_parse_cidr(s)) { > > - Right{(ipv4, plen)} -> return Some{(IPv4{ipv4}, plen)}, > > - _ -> () > > - }; > > - match (ipv6_parse_cidr(s)) { > > - Right{(ipv6, plen)} -> return Some{(IPv6{ipv6}, plen)}, > > - _ -> () > > - }; > > - None > > -} > > -function ip46_parse_masked(s: string) : Option<(v46_ip, v46_ip)> = { > > - match (ip_parse_masked(s)) { > > - Right{(ipv4, mask)} -> return Some{(IPv4{ipv4}, IPv4{mask})}, > > - _ -> () > > - }; > > - match (ipv6_parse_masked(s)) { > > - Right{(ipv6, mask)} -> return Some{(IPv6{ipv6}, IPv6{mask})}, > > - _ -> () > > - }; > > - None > > -} > > -function ip46_parse(s: string) : Option<v46_ip> = { > > - match (ip_parse(s)) { > > - Some{ipv4} -> return Some{IPv4{ipv4}}, > > - _ -> () > > - }; > > - match (ipv6_parse(s)) { > > - Some{ipv6} -> return Some{IPv6{ipv6}}, > > - _ -> () > > - }; > > - None > > -} > > -function to_string(ip46: v46_ip) : string = { > > - match (ip46) { > > - IPv4{ipv4} -> 
"${ipv4}", > > - IPv6{ipv6} -> "${ipv6}" > > - } > > -} > > -function to_bracketed_string(ip46: v46_ip) : string = { > > - match (ip46) { > > - IPv4{ipv4} -> "${ipv4}", > > - IPv6{ipv6} -> "[${ipv6}]" > > - } > > -} > > - > > -function network(ip46: v46_ip, plen: bit<32>) : v46_ip { > > - match (ip46) { > > - IPv4{ipv4} -> IPv4{InAddr{ipv4.a & ip_create_mask(plen).a}}, > > - IPv6{ipv6} -> IPv6{In6Addr{ipv6.aaaa & ipv6_create_mask(plen).aaaa}} > > - } > > -} > > - > > -function is_all_ones(ip46: v46_ip) : bool { > > - match (ip46) { > > - IPv4{ipv4} -> ipv4.is_all_ones(), > > - IPv6{ipv6} -> ipv6.is_all_ones() > > - } > > -} > > - > > -function cidr_bits(ip46: v46_ip) : Option<bit<8>> { > > - match (ip46) { > > - IPv4{ipv4} -> ipv4.cidr_bits(), > > - IPv6{ipv6} -> ipv6.cidr_bits() > > - } > > -} > > - > > -function ipX(ip46: v46_ip) : string { > > - match (ip46) { > > - IPv4{_} -> "ip4", > > - IPv6{_} -> "ip6" > > - } > > -} > > - > > -function xxreg(ip46: v46_ip) : string { > > - match (ip46) { > > - IPv4{_} -> "", > > - IPv6{_} -> "xx" > > - } > > -} > > - > > -/* > > - * CIDR-masked IPv4 address > > - */ > > - > > -typedef ipv4_netaddr = IPV4NetAddr { > > - addr: in_addr, /* 192.168.10.123 */ > > - plen: bit<32> /* CIDR Prefix: 24. */ > > -} > > - > > -function netmask(na: ipv4_netaddr): in_addr { ip_create_mask(na.plen) } > > -function bcast(na: ipv4_netaddr): in_addr { na.addr.bcast(na.netmask()) } > > - > > -/* Returns the network (with the host bits zeroed) > > - * or the host (with the network bits zeroed). */ > > -function network(na: ipv4_netaddr): in_addr { na.addr.network(na.netmask()) } > > -function host(na: ipv4_netaddr): in_addr { na.addr.host(na.netmask()) } > > - > > -/* Match on the host, if the host part is nonzero, or on the network > > - * otherwise. 
*/ > > -function match_host_or_network(na: ipv4_netaddr): string { > > - if (na.plen < 32 and na.host().is_zero()) { > > - "${na.addr}/${na.plen}" > > - } else { > > - "${na.addr}" > > - } > > -} > > - > > -/* Match on the network. */ > > -function match_network(na: ipv4_netaddr): string { > > - if (na.plen < 32) { > > - "${na.network()}/${na.plen}" > > - } else { > > - "${na.addr}" > > - } > > -} > > - > > -/* > > - * CIDR-masked IPv6 address > > - */ > > - > > -typedef ipv6_netaddr = IPV6NetAddr { > > - addr: in6_addr, /* fc00::1 */ > > - plen: bit<32> /* CIDR Prefix: 64 */ > > -} > > - > > -function netmask(na: ipv6_netaddr): in6_addr { ipv6_create_mask(na.plen) } > > - > > -/* Returns the network (with the host bits zeroed). > > - * or the host (with the network bits zeroed). */ > > -function network(na: ipv6_netaddr): in6_addr { na.addr.network(na.netmask()) } > > -function host(na: ipv6_netaddr): in6_addr { na.addr.host(na.netmask()) } > > - > > -function solicited_node(na: ipv6_netaddr): in6_addr { na.addr.solicited_node() } > > - > > -function is_lla(na: ipv6_netaddr): bool { na.network().is_lla() } > > - > > -/* Match on the network. */ > > -function match_network(na: ipv6_netaddr): string { > > - if (na.plen < 128) { > > - "${na.network()}/${na.plen}" > > - } else { > > - "${na.addr}" > > - } > > -} > > - > > -/* > > - * Set of addresses associated with a logical port. > > - * > > - * A logical port always has one Ethernet address, plus any number of IPv4 and IPv6 addresses. 
> > - */ > > -typedef lport_addresses = LPortAddress { > > - ea: eth_addr, > > - ipv4_addrs: Vec<ipv4_netaddr>, > > - ipv6_addrs: Vec<ipv6_netaddr> > > -} > > - > > -function to_string(addr: lport_addresses): string = { > > - var addrs = ["${addr.ea}"]; > > - for (ip4 in addr.ipv4_addrs) { > > - addrs.push("${ip4.addr}") > > - }; > > - > > - for (ip6 in addr.ipv6_addrs) { > > - addrs.push("${ip6.addr}") > > - }; > > - > > - string_join(addrs, " ") > > -} > > - > > -/* > > - * Packet header lengths > > - */ > > -function eTH_HEADER_LEN(): integer = 14 > > -function vLAN_HEADER_LEN(): integer = 4 > > -function vLAN_ETH_HEADER_LEN(): integer = eTH_HEADER_LEN() + vLAN_HEADER_LEN() > > - > > -/* > > - * Logging > > - */ > > -extern function warn(msg: string): () > > -extern function abort(msg: string): () > > - > > -/* > > - * C functions imported from OVN > > - */ > > -extern function is_dynamic_lsp_address(addr: string): bool > > -extern function extract_lsp_addresses(address: string): Option<lport_addresses> > > -extern function extract_addresses(address: string): Option<lport_addresses> > > -extern function extract_lrp_networks(mac: string, networks: Set<string>): Option<lport_addresses> > > -extern function extract_ip_addresses(address: string): Option<lport_addresses> > > - > > -extern function split_addresses(addr: string): (Set<string>, Set<string>) > > - > > -extern function ovn_internal_version(): string > > - > > -/* > > - * C functions imported from OVS > > - */ > > -extern function json_string_escape(s: string): string > > - > > -function json_escape(s: string): string { > > - s.json_string_escape() > > -} > > -function json_escape(s: istring): string { > > - s.ival().json_string_escape() > > -} > > - > > -/* For a 'key' of the form "IP:port" or just "IP", returns > > - * (v46_ip, port) tuple. 
*/ > > -extern function ip_address_and_port_from_lb_key(k: string): Option<(v46_ip, bit<16>)> > > diff --git a/northd/ovn.rs b/northd/ovn.rs > > deleted file mode 100644 > > index 746884071e..0000000000 > > --- a/northd/ovn.rs > > +++ /dev/null > > @@ -1,750 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. > > - */ > > - > > -use ::nom::*; > > -use ::differential_datalog::record; > > -use ::std::ffi; > > -use ::std::ptr; > > -use ::std::default; > > -use ::std::process; > > -use ::std::os::raw; > > -use ::libc; > > - > > -pub fn warn(msg: &String) { > > - warn_(msg.as_str()) > > -} > > - > > -pub fn warn_(msg: &str) { > > - unsafe { > > - ddlog_warn(ffi::CString::new(msg).unwrap().as_ptr()); > > - } > > -} > > - > > -pub fn err_(msg: &str) { > > - unsafe { > > - ddlog_err(ffi::CString::new(msg).unwrap().as_ptr()); > > - } > > -} > > - > > -pub fn abort(msg: &String) { > > - abort_(msg.as_str()) > > -} > > - > > -fn abort_(msg: &str) { > > - err_(format!("DDlog error: {}.", msg).as_ref()); > > - process::abort(); > > -} > > - > > -const ETH_ADDR_SIZE: usize = 6; > > -const IN6_ADDR_SIZE: usize = 16; > > -const INET6_ADDRSTRLEN: usize = 46; > > -const INET_ADDRSTRLEN: usize = 16; > > -const ETH_ADDR_STRLEN: usize = 17; > > - > > -const AF_INET: usize = 2; > > -const AF_INET6: usize = 10; > > - > > -/* Implementation for externs declared in ovn.dl */ > > - > > -#[repr(C)] > > -#[derive(Default, PartialEq, 
Eq, PartialOrd, Ord, Clone, Hash, Serialize, Deserialize, Debug, IntoRecord, Mutator)] > > -pub struct eth_addr_c { > > - x: [u8; ETH_ADDR_SIZE] > > -} > > - > > -impl eth_addr_c { > > - pub fn from_ddlog(d: ð_addr) -> Self { > > - eth_addr_c { > > - x: [(d.ha >> 40) as u8, > > - (d.ha >> 32) as u8, > > - (d.ha >> 24) as u8, > > - (d.ha >> 16) as u8, > > - (d.ha >> 8) as u8, > > - d.ha as u8] > > - } > > - } > > - pub fn to_ddlog(&self) -> eth_addr { > > - let ea0 = u16::from_be_bytes([self.x[0], self.x[1]]) as u64; > > - let ea1 = u16::from_be_bytes([self.x[2], self.x[3]]) as u64; > > - let ea2 = u16::from_be_bytes([self.x[4], self.x[5]]) as u64; > > - eth_addr { ha: (ea0 << 32) | (ea1 << 16) | ea2 } > > - } > > -} > > - > > -pub fn eth_addr2string(addr: ð_addr) -> String { > > - let c = eth_addr_c::from_ddlog(addr); > > - format!("{:02x}:{:02x}:{:02x}:{:02x}:{:02x}:{:02x}", > > - c.x[0], c.x[1], c.x[2], c.x[3], c.x[4], c.x[5]) > > -} > > - > > -pub fn eth_addr_from_string(s: &String) -> ddlog_std::Option<eth_addr> { > > - let mut ea: eth_addr_c = Default::default(); > > - unsafe { > > - if ovs::eth_addr_from_string(string2cstr(s).as_ptr(), &mut ea as *mut eth_addr_c) { > > - ddlog_std::Option::Some{x: ea.to_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -#[repr(C)] > > -struct in6_addr_c { > > - bytes: [u8; 16] > > -} > > - > > -impl Default for in6_addr_c { > > - fn default() -> Self { > > - in6_addr_c { > > - bytes: [0; 16] > > - } > > - } > > -} > > - > > -impl in6_addr_c { > > - pub fn from_ddlog(d: &in6_addr) -> Self { > > - in6_addr_c{bytes: d.aaaa.to_be_bytes()} > > - } > > - pub fn to_ddlog(&self) -> in6_addr { > > - in6_addr{aaaa: u128::from_be_bytes(self.bytes)} > > - } > > -} > > - > > -pub fn string_mapped(addr: &in6_addr) -> String { > > - let addr = in6_addr_c::from_ddlog(addr); > > - let mut addr_str = [0 as i8; INET6_ADDRSTRLEN]; > > - unsafe { > > - ovs::ipv6_string_mapped(&mut addr_str[0] as *mut 
raw::c_char, &addr as *const in6_addr_c); > > - cstr2string(&addr_str as *const raw::c_char) > > - } > > -} > > - > > -pub fn json_string_escape(s: &String) -> String { > > - let mut ds = ovs_ds::new(); > > - unsafe { > > - ovs::json_string_escape(ffi::CString::new(s.as_str()).unwrap().as_ptr() as *const raw::c_char, > > - &mut ds as *mut ovs_ds); > > - }; > > - unsafe{ds.into_string()} > > -} > > - > > -pub fn extract_lsp_addresses(address: &String) -> ddlog_std::Option<lport_addresses> { > > - unsafe { > > - let mut laddrs: lport_addresses_c = Default::default(); > > - if ovn_c::extract_lsp_addresses(string2cstr(address).as_ptr(), > > - &mut laddrs as *mut lport_addresses_c) { > > - ddlog_std::Option::Some{x: laddrs.into_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn extract_addresses(address: &String) -> ddlog_std::Option<lport_addresses> { > > - unsafe { > > - let mut laddrs: lport_addresses_c = Default::default(); > > - let mut ofs: raw::c_int = 0; > > - if ovn_c::extract_addresses(string2cstr(address).as_ptr(), > > - &mut laddrs as *mut lport_addresses_c, > > - &mut ofs as *mut raw::c_int) { > > - ddlog_std::Option::Some{x: laddrs.into_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn extract_lrp_networks(mac: &String, networks: &ddlog_std::Set<String>) -> ddlog_std::Option<lport_addresses> > > -{ > > - unsafe { > > - let mut laddrs: lport_addresses_c = Default::default(); > > - let mut networks_cstrs = Vec::with_capacity(networks.x.len()); > > - let mut networks_ptrs = Vec::with_capacity(networks.x.len()); > > - for net in networks.x.iter() { > > - networks_cstrs.push(string2cstr(net)); > > - networks_ptrs.push(networks_cstrs.last().unwrap().as_ptr()); > > - }; > > - if ovn_c::extract_lrp_networks__(string2cstr(mac).as_ptr(), networks_ptrs.as_ptr() as *const *const raw::c_char, > > - networks_ptrs.len(), &mut laddrs as *mut lport_addresses_c) { > > - 
ddlog_std::Option::Some{x: laddrs.into_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn extract_ip_addresses(address: &String) -> ddlog_std::Option<lport_addresses> { > > - unsafe { > > - let mut laddrs: lport_addresses_c = Default::default(); > > - if ovn_c::extract_ip_addresses(string2cstr(address).as_ptr(), > > - &mut laddrs as *mut lport_addresses_c) { > > - ddlog_std::Option::Some{x: laddrs.into_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn ovn_internal_version() -> String { > > - unsafe { > > - let s = ovn_c::ovn_get_internal_version(); > > - let retval = cstr2string(s); > > - free(s as *mut raw::c_void); > > - retval > > - } > > -} > > - > > -pub fn ipv6_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, in6_addr>> > > -{ > > - unsafe { > > - let mut ip: in6_addr_c = Default::default(); > > - let mut mask: in6_addr_c = Default::default(); > > - let err = ovs::ipv6_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut mask as *mut in6_addr_c); > > - if (err != ptr::null_mut()) { > > - let errstr = cstr2string(err); > > - free(err as *mut raw::c_void); > > - ddlog_std::Either::Left{l: errstr} > > - } else { > > - ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), mask.to_ddlog())} > > - } > > - } > > -} > > - > > -pub fn ipv6_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, u32>> > > -{ > > - unsafe { > > - let mut ip: in6_addr_c = Default::default(); > > - let mut plen: raw::c_uint = 0; > > - let err = ovs::ipv6_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut plen as *mut raw::c_uint); > > - if (err != ptr::null_mut()) { > > - let errstr = cstr2string(err); > > - free(err as *mut raw::c_void); > > - ddlog_std::Either::Left{l: errstr} > > - } else { > > - ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), plen as u32)} > > - } > > - } > > -} > > - > 
> -pub fn ipv6_parse(s: &String) -> ddlog_std::Option<in6_addr> > > -{ > > - unsafe { > > - let mut ip: in6_addr_c = Default::default(); > > - let res = ovs::ipv6_parse(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c); > > - if (res) { > > - ddlog_std::Option::Some{x: ip.to_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub type ovs_be32 = u32; > > - > > -impl in_addr { > > - pub fn from_be32(nl: ovs_be32) -> in_addr { > > - in_addr{a: ddlog_std::ntohl(&nl)} > > - } > > - pub fn to_be32(&self) -> ovs_be32 { > > - ddlog_std::htonl(&self.a) > > - } > > -} > > - > > -pub fn ip_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, in_addr>> > > -{ > > - unsafe { > > - let mut ip: ovs_be32 = 0; > > - let mut mask: ovs_be32 = 0; > > - let err = ovs::ip_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut mask as *mut ovs_be32); > > - if (err != ptr::null_mut()) { > > - let errstr = cstr2string(err); > > - free(err as *mut raw::c_void); > > - ddlog_std::Either::Left{l: errstr} > > - } else { > > - ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip), > > - in_addr::from_be32(mask))} > > - } > > - } > > -} > > - > > -pub fn ip_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, u32>> > > -{ > > - unsafe { > > - let mut ip: ovs_be32 = 0; > > - let mut plen: raw::c_uint = 0; > > - let err = ovs::ip_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut plen as *mut raw::c_uint); > > - if (err != ptr::null_mut()) { > > - let errstr = cstr2string(err); > > - free(err as *mut raw::c_void); > > - ddlog_std::Either::Left{l: errstr} > > - } else { > > - ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip), plen as u32)} > > - } > > - } > > -} > > - > > -pub fn ip_parse(s: &String) -> ddlog_std::Option<in_addr> > > -{ > > - unsafe { > > - let mut ip: ovs_be32 = 0; > > - if (ovs::ip_parse(string2cstr(s).as_ptr(), &mut ip as 
*mut ovs_be32)) { > > - ddlog_std::Option::Some{x: in_addr::from_be32(ip)} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn is_dynamic_lsp_address(address: &String) -> bool { > > - unsafe { > > - ovn_c::is_dynamic_lsp_address(string2cstr(address).as_ptr()) > > - } > > -} > > - > > -pub fn split_addresses(addresses: &String) -> ddlog_std::tuple2<ddlog_std::Set<String>, ddlog_std::Set<String>> { > > - let mut ip4_addrs = ovs_svec::new(); > > - let mut ip6_addrs = ovs_svec::new(); > > - unsafe { > > - ovn_c::split_addresses(string2cstr(addresses).as_ptr(), &mut ip4_addrs as *mut ovs_svec, &mut ip6_addrs as *mut ovs_svec); > > - ddlog_std::tuple2(ip4_addrs.into_strings(), ip6_addrs.into_strings()) > > - } > > -} > > - > > -pub fn scan_eth_addr(s: &String) -> ddlog_std::Option<eth_addr> { > > - let mut ea: eth_addr_c = Default::default(); > > - unsafe { > > - if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx:%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char, > > - &mut ea.x[0] as *mut u8, &mut ea.x[1] as *mut u8, > > - &mut ea.x[2] as *mut u8, &mut ea.x[3] as *mut u8, > > - &mut ea.x[4] as *mut u8, &mut ea.x[5] as *mut u8) > > - { > > - ddlog_std::Option::Some{x: ea.to_ddlog()} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn scan_eth_addr_prefix(s: &String) -> ddlog_std::Option<eth_addr> { > > - let mut b2: u8 = 0; > > - let mut b1: u8 = 0; > > - let mut b0: u8 = 0; > > - unsafe { > > - if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char, > > - &mut b2 as *mut u8, &mut b1 as *mut u8, &mut b0 as *mut u8) > > - { > > - ddlog_std::Option::Some{x: eth_addr{ha: ((b2 as u64) << 40) | ((b1 as u64) << 32) | ((b0 as u64) << 24)} } > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn scan_static_dynamic_ip(s: &String) -> ddlog_std::Option<in_addr> { > > - let mut ip0: u8 = 0; > > - let mut ip1: u8 = 0; > > - let mut ip2: u8 = 
0; > > - let mut ip3: u8 = 0; > > - let mut n: raw::c_uint = 0; > > - unsafe { > > - if ovs::ovs_scan(string2cstr(s).as_ptr(), b"dynamic %hhu.%hhu.%hhu.%hhu%n\0".as_ptr() as *const raw::c_char, > > - &mut ip0 as *mut u8, > > - &mut ip1 as *mut u8, > > - &mut ip2 as *mut u8, > > - &mut ip3 as *mut u8, > > - &mut n) && s.len() == (n as usize) > > - { > > - let a0 = (ip0 as u32) << 24; > > - let a1 = (ip1 as u32) << 16; > > - let a2 = (ip2 as u32) << 8; > > - let a3 = ip3 as u32; > > - ddlog_std::Option::Some{x: in_addr{a: a0 | a1 | a2 | a3}} > > - } else { > > - ddlog_std::Option::None > > - } > > - } > > -} > > - > > -pub fn ip_address_and_port_from_lb_key(k: &String) -> > > - ddlog_std::Option<ddlog_std::tuple2<v46_ip, u16>> > > -{ > > - unsafe { > > - let mut ip_address: *mut raw::c_char = ptr::null_mut(); > > - let mut port: libc::uint16_t = 0; > > - let mut addr_family: raw::c_int = 0; > > - > > - ovn_c::ip_address_and_port_from_lb_key(string2cstr(k).as_ptr(), &mut ip_address as *mut *mut raw::c_char, > > - &mut port as *mut libc::uint16_t, &mut addr_family as *mut raw::c_int); > > - if (ip_address != ptr::null_mut()) { > > - match (ip46_parse(&cstr2string(ip_address))) { > > - ddlog_std::Option::Some{x: ip46} => { > > - let res = ddlog_std::tuple2(ip46, port as u16); > > - free(ip_address as *mut raw::c_void); > > - return ddlog_std::Option::Some{x: res} > > - }, > > - _ => () > > - } > > - } > > - ddlog_std::Option::None > > - } > > -} > > - > > -pub fn str_to_int(s: &String, base: &u16) -> ddlog_std::Option<u64> { > > - let mut i: raw::c_int = 0; > > - let ok = unsafe { > > - ovs::str_to_int(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_int) > > - }; > > - if ok { > > - ddlog_std::Option::Some{x: i as u64} > > - } else { > > - ddlog_std::Option::None > > - } > > -} > > - > > -pub fn str_to_uint(s: &String, base: &u16) -> ddlog_std::Option<u64> { > > - let mut i: raw::c_uint = 0; > > - let ok = unsafe { > > - 
ovs::str_to_uint(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_uint) > > - }; > > - if ok { > > - ddlog_std::Option::Some{x: i as u64} > > - } else { > > - ddlog_std::Option::None > > - } > > -} > > - > > -pub fn inet6_ntop(addr: &in6_addr) -> String { > > - let addr_c = in6_addr_c::from_ddlog(addr); > > - let mut buf = [0 as i8; INET6_ADDRSTRLEN]; > > - unsafe { > > - let res = inet_ntop(AF_INET6 as raw::c_int, &addr_c as *const in6_addr_c as *const raw::c_void, > > - &mut buf[0] as *mut raw::c_char, INET6_ADDRSTRLEN as libc::socklen_t); > > - if res == ptr::null() { > > - warn(&format!("inet_ntop({:?}) failed", *addr)); > > - "".to_owned() > > - } else { > > - cstr2string(&buf as *const raw::c_char) > > - } > > - } > > -} > > - > > -/* Internals */ > > - > > -unsafe fn cstr2string(s: *const raw::c_char) -> String { > > - ffi::CStr::from_ptr(s).to_owned().into_string(). > > - unwrap_or_else(|e|{ warn(&format!("cstr2string: {}", e)); "".to_owned() }) > > -} > > - > > -fn string2cstr(s: &String) -> ffi::CString { > > - ffi::CString::new(s.as_str()).unwrap() > > -} > > - > > -/* OVS dynamic string type */ > > -#[repr(C)] > > -struct ovs_ds { > > - s: *mut raw::c_char, /* Null-terminated string. */ > > - length: libc::size_t, /* Bytes used, not including null terminator. */ > > - allocated: libc::size_t /* Bytes allocated, not including null terminator. 
*/ > > -} > > - > > -impl ovs_ds { > > - pub fn new() -> ovs_ds { > > - ovs_ds{s: ptr::null_mut(), length: 0, allocated: 0} > > - } > > - > > - pub unsafe fn into_string(mut self) -> String { > > - let res = cstr2string(ovs::ds_cstr(&self as *const ovs_ds)); > > - ovs::ds_destroy(&mut self as *mut ovs_ds); > > - res > > - } > > -} > > - > > -/* OVS string vector type */ > > -#[repr(C)] > > -struct ovs_svec { > > - names: *mut *mut raw::c_char, > > - n: libc::size_t, > > - allocated: libc::size_t > > -} > > - > > -impl ovs_svec { > > - pub fn new() -> ovs_svec { > > - ovs_svec{names: ptr::null_mut(), n: 0, allocated: 0} > > - } > > - > > - pub unsafe fn into_strings(mut self) -> ddlog_std::Set<String> { > > - let mut res: ddlog_std::Set<String> = ddlog_std::Set::new(); > > - unsafe { > > - for i in 0..self.n { > > - res.insert(cstr2string(*self.names.offset(i as isize))); > > - } > > - ovs::svec_destroy(&mut self as *mut ovs_svec); > > - } > > - res > > - } > > -} > > - > > - > > -// ovn/lib/ovn-util.h > > -#[repr(C)] > > -struct ipv4_netaddr_c { > > - addr: libc::uint32_t, > > - mask: libc::uint32_t, > > - network: libc::uint32_t, > > - plen: raw::c_uint, > > - > > - addr_s: [raw::c_char; INET_ADDRSTRLEN + 1], /* "192.168.10.123" */ > > - network_s: [raw::c_char; INET_ADDRSTRLEN + 1], /* "192.168.10.0" */ > > - bcast_s: [raw::c_char; INET_ADDRSTRLEN + 1] /* "192.168.10.255" */ > > -} > > - > > -impl Default for ipv4_netaddr_c { > > - fn default() -> Self { > > - ipv4_netaddr_c { > > - addr: 0, > > - mask: 0, > > - network: 0, > > - plen: 0, > > - addr_s: [0; INET_ADDRSTRLEN + 1], > > - network_s: [0; INET_ADDRSTRLEN + 1], > > - bcast_s: [0; INET_ADDRSTRLEN + 1] > > - } > > - } > > -} > > - > > -impl ipv4_netaddr_c { > > - pub fn to_ddlog(&self) -> ipv4_netaddr { > > - ipv4_netaddr{ > > - addr: in_addr::from_be32(self.addr), > > - plen: self.plen, > > - } > > - } > > -} > > - > > -#[repr(C)] > > -struct ipv6_netaddr_c { > > - addr: in6_addr_c, /* fc00::1 */ > > - 
mask: in6_addr_c, /* ffff:ffff:ffff:ffff:: */ > > - sn_addr: in6_addr_c, /* ff02:1:ff00::1 */ > > - network: in6_addr_c, /* fc00:: */ > > - plen: raw::c_uint, /* CIDR Prefix: 64 */ > > - > > - addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1], /* "fc00::1" */ > > - sn_addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1], /* "ff02:1:ff00::1" */ > > - network_s: [raw::c_char; INET6_ADDRSTRLEN + 1] /* "fc00::" */ > > -} > > - > > -impl Default for ipv6_netaddr_c { > > - fn default() -> Self { > > - ipv6_netaddr_c { > > - addr: Default::default(), > > - mask: Default::default(), > > - sn_addr: Default::default(), > > - network: Default::default(), > > - plen: 0, > > - addr_s: [0; INET6_ADDRSTRLEN + 1], > > - sn_addr_s: [0; INET6_ADDRSTRLEN + 1], > > - network_s: [0; INET6_ADDRSTRLEN + 1] > > - } > > - } > > -} > > - > > -impl ipv6_netaddr_c { > > - pub unsafe fn to_ddlog(&self) -> ipv6_netaddr { > > - ipv6_netaddr{ > > - addr: in6_addr_c::to_ddlog(&self.addr), > > - plen: self.plen > > - } > > - } > > -} > > - > > - > > -// ovn-util.h > > -#[repr(C)] > > -struct lport_addresses_c { > > - ea_s: [raw::c_char; ETH_ADDR_STRLEN + 1], > > - ea: eth_addr_c, > > - n_ipv4_addrs: libc::size_t, > > - ipv4_addrs: *mut ipv4_netaddr_c, > > - n_ipv6_addrs: libc::size_t, > > - ipv6_addrs: *mut ipv6_netaddr_c > > -} > > - > > -impl Default for lport_addresses_c { > > - fn default() -> Self { > > - lport_addresses_c { > > - ea_s: [0; ETH_ADDR_STRLEN + 1], > > - ea: Default::default(), > > - n_ipv4_addrs: 0, > > - ipv4_addrs: ptr::null_mut(), > > - n_ipv6_addrs: 0, > > - ipv6_addrs: ptr::null_mut() > > - } > > - } > > -} > > - > > -impl lport_addresses_c { > > - pub unsafe fn into_ddlog(mut self) -> lport_addresses { > > - let mut ipv4_addrs = ddlog_std::Vec::with_capacity(self.n_ipv4_addrs); > > - for i in 0..self.n_ipv4_addrs { > > - ipv4_addrs.push((&*self.ipv4_addrs.offset(i as isize)).to_ddlog()) > > - } > > - let mut ipv6_addrs = ddlog_std::Vec::with_capacity(self.n_ipv6_addrs); > > - for i in 
0..self.n_ipv6_addrs { > > - ipv6_addrs.push((&*self.ipv6_addrs.offset(i as isize)).to_ddlog()) > > - } > > - let res = lport_addresses { > > - ea: self.ea.to_ddlog(), > > - ipv4_addrs: ipv4_addrs, > > - ipv6_addrs: ipv6_addrs > > - }; > > - ovn_c::destroy_lport_addresses(&mut self as *mut lport_addresses_c); > > - res > > - } > > -} > > - > > -/* functions imported from northd.c */ > > -extern "C" { > > - fn ddlog_warn(msg: *const raw::c_char); > > - fn ddlog_err(msg: *const raw::c_char); > > -} > > - > > -/* functions imported from libovn */ > > -mod ovn_c { > > - use ::std::os::raw; > > - use ::libc; > > - use super::lport_addresses_c; > > - use super::ovs_svec; > > - use super::in6_addr_c; > > - > > - #[link(name = "ovn")] > > - extern "C" { > > - // ovn/lib/ovn-util.h > > - pub fn extract_lsp_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool; > > - pub fn extract_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c, ofs: *mut raw::c_int) -> bool; > > - pub fn extract_lrp_networks__(mac: *const raw::c_char, networks: *const *const raw::c_char, > > - n_networks: libc::size_t, laddrs: *mut lport_addresses_c) -> bool; > > - pub fn extract_ip_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool; > > - pub fn destroy_lport_addresses(addrs: *mut lport_addresses_c); > > - pub fn is_dynamic_lsp_address(address: *const raw::c_char) -> bool; > > - pub fn split_addresses(addresses: *const raw::c_char, ip4_addrs: *mut ovs_svec, ipv6_addrs: *mut ovs_svec); > > - pub fn ip_address_and_port_from_lb_key(key: *const raw::c_char, ip_address: *mut *mut raw::c_char, > > - port: *mut libc::uint16_t, addr_family: *mut raw::c_int); > > - pub fn ovn_get_internal_version() -> *mut raw::c_char; > > - } > > -} > > - > > -mod ovs { > > - use ::std::os::raw; > > - use ::libc; > > - use super::in6_addr_c; > > - use super::ovs_be32; > > - use super::ovs_ds; > > - use super::eth_addr_c; > > - use super::ovs_svec; > > - > 
> - /* functions imported from libopenvswitch */ > > - #[link(name = "openvswitch")] > > - extern "C" { > > - // lib/packets.h > > - pub fn ipv6_string_mapped(addr_str: *mut raw::c_char, addr: *const in6_addr_c) -> *const raw::c_char; > > - pub fn ipv6_parse_masked(s: *const raw::c_char, ip: *mut in6_addr_c, mask: *mut in6_addr_c) -> *mut raw::c_char; > > - pub fn ipv6_parse_cidr(s: *const raw::c_char, ip: *mut in6_addr_c, plen: *mut raw::c_uint) -> *mut raw::c_char; > > - pub fn ipv6_parse(s: *const raw::c_char, ip: *mut in6_addr_c) -> bool; > > - pub fn ip_parse_masked(s: *const raw::c_char, ip: *mut ovs_be32, mask: *mut ovs_be32) -> *mut raw::c_char; > > - pub fn ip_parse_cidr(s: *const raw::c_char, ip: *mut ovs_be32, plen: *mut raw::c_uint) -> *mut raw::c_char; > > - pub fn ip_parse(s: *const raw::c_char, ip: *mut ovs_be32) -> bool; > > - pub fn eth_addr_from_string(s: *const raw::c_char, ea: *mut eth_addr_c) -> bool; > > - > > - // include/openvswitch/json.h > > - pub fn json_string_escape(str: *const raw::c_char, out: *mut ovs_ds); > > - // openvswitch/dynamic-string.h > > - pub fn ds_destroy(ds: *mut ovs_ds); > > - pub fn ds_cstr(ds: *const ovs_ds) -> *const raw::c_char; > > - pub fn svec_destroy(v: *mut ovs_svec); > > - pub fn ovs_scan(s: *const raw::c_char, format: *const raw::c_char, ...) -> bool; > > - pub fn str_to_int(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_int) -> bool; > > - pub fn str_to_uint(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_uint) -> bool; > > - } > > -} > > - > > -/* functions imported from libc */ > > -#[link(name = "c")] > > -extern "C" { > > - fn free(ptr: *mut raw::c_void); > > -} > > - > > -/* functions imported from arp/inet6 */ > > -extern "C" { > > - fn inet_ntop(af: raw::c_int, cp: *const raw::c_void, > > - buf: *mut raw::c_char, len: libc::socklen_t) -> *const raw::c_char; > > -} > > - > > -/* > > - * Parse IPv4 address list. 
> > - */ > > - > > -named!(parse_spaces<nom::types::CompleteStr, ()>, > > - do_parse!(many1!(one_of!(&" \t\n\r\x0c\x0b")) >> (()) ) > > -); > > - > > -named!(parse_opt_spaces<nom::types::CompleteStr, ()>, > > - do_parse!(opt!(parse_spaces) >> (())) > > -); > > - > > -named!(parse_ipv4_range<nom::types::CompleteStr, (String, Option<String>)>, > > - do_parse!(addr1: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (nom::types::CompleteStr(""))) | peek!(tag!("..")) | tag!(" ") )) >> > > - parse_opt_spaces >> > > - addr2: opt!(do_parse!(tag!("..") >> > > - parse_opt_spaces >> > > - addr2: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (' ')) | char!(' ')) ) >> > > - (addr2) )) >> > > - parse_opt_spaces >> > > - (addr1.0.into_iter().collect(), addr2.map(|x|x.0.into_iter().collect())) ) > > -); > > - > > -named!(parse_ipv4_address_list<nom::types::CompleteStr, Vec<(String, Option<String>)>>, > > - do_parse!(parse_opt_spaces >> > > - ranges: many0!(parse_ipv4_range) >> > > - (ranges))); > > - > > -pub fn parse_ip_list(ips: &String) -> ddlog_std::Either<String, ddlog_std::Vec<ddlog_std::tuple2<in_addr, ddlog_std::Option<in_addr>>>> > > -{ > > - match parse_ipv4_address_list(nom::types::CompleteStr(ips.as_str())) { > > - Err(e) => { > > - ddlog_std::Either::Left{l: format!("invalid IP list format: \"{}\"", ips.as_str())} > > - }, > > - Ok((nom::types::CompleteStr(""), ranges)) => { > > - let mut res = vec![]; > > - for (ip1, ip2) in ranges.iter() { > > - let start = match ip_parse(&ip1) { > > - ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip1)}, > > - ddlog_std::Option::Some{x: ip} => ip > > - }; > > - let end = match ip2 { > > - None => ddlog_std::Option::None, > > - Some(ip_str) => match ip_parse(&ip_str.clone()) { > > - ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip_str)}, > > - x => x > > - } > > - }; > > - 
res.push(ddlog_std::tuple2(start, end)); > > - }; > > - ddlog_std::Either::Right{r: ddlog_std::Vec::from(res)} > > - }, > > - Ok((suffix, _)) => { > > - ddlog_std::Either::Left{l: format!("IP address list contains trailing characters: \"{}\"", suffix)} > > - } > > - } > > -} > > diff --git a/northd/ovn.toml b/northd/ovn.toml > > deleted file mode 100644 > > index 64108996ed..0000000000 > > --- a/northd/ovn.toml > > +++ /dev/null > > @@ -1,2 +0,0 @@ > > -[dependencies.nom] > > -version = "4.0" > > diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl > > deleted file mode 100644 > > index 2fe73959c6..0000000000 > > --- a/northd/ovn_northd.dl > > +++ /dev/null > > @@ -1,9105 +0,0 @@ > > -/* > > - * Licensed under the Apache License, Version 2.0 (the "License"); > > - * you may not use this file except in compliance with the License. > > - * You may obtain a copy of the License at: > > - * > > - * http://www.apache.org/licenses/LICENSE-2.0 > > - * > > - * Unless required by applicable law or agreed to in writing, software > > - * distributed under the License is distributed on an "AS IS" BASIS, > > - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > - * See the License for the specific language governing permissions and > > - * limitations under the License. 
> > - */ > > - > > -import OVN_Northbound as nb > > -import OVN_Southbound as sb > > -import copp > > -import ovsdb > > -import allocate > > -import ovn > > -import lswitch > > -import lrouter > > -import multicast > > -import helpers > > -import ipam > > -import vec > > -import set > > - > > -index Logical_Flow_Index() on sb::Out_Logical_Flow() > > - > > -/* Meter_Band table */ > > -for (mb in nb::Meter_Band) { > > - sb::Out_Meter_Band(._uuid = mb._uuid, > > - .action = mb.action, > > - .rate = mb.rate, > > - .burst_size = mb.burst_size) > > -} > > - > > -/* Meter table */ > > -for (meter in &nb::Meter) { > > - sb::Out_Meter(._uuid = meter._uuid, > > - .name = meter.name, > > - .unit = meter.unit, > > - .bands = meter.bands) > > -} > > -sb::Out_Meter(._uuid = hash128(name), > > - .name = name, > > - .unit = meter.unit, > > - .bands = meter.bands) :- > > - ACLWithFairMeter(acl, meter), > > - var name = acl_log_meter_name(meter.name, acl._uuid).intern(). > > - > > -/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields, > > - * except tunnel id, which is allocated separately (see TunKeyAllocation). */ > > -relation OutProxy_Datapath_Binding ( > > - _uuid: uuid, > > - external_ids: Map<istring,istring> > > -) > > - > > -/* Datapath_Binding table */ > > -OutProxy_Datapath_Binding(uuid, external_ids) :- > > - &nb::Logical_Switch(._uuid = uuid, .name = name, .external_ids = ids, > > - .other_config = other_config), > > - var uuid_str = uuid2str(uuid).intern(), > > - var external_ids = { > > - var eids = [i"logical-switch" -> uuid_str, i"name" -> name]; > > - match (ids.get(i"neutron:network_name")) { > > - None -> (), > > - Some{nnn} -> eids.insert(i"name2", nnn) > > - }; > > - match (other_config.get(i"interconn-ts")) { > > - None -> (), > > - Some{value} -> eids.insert(i"interconn-ts", value) > > - }; > > - eids > > - }. 
> > - > > -OutProxy_Datapath_Binding(uuid, external_ids) :- > > - lr in nb::Logical_Router(._uuid = uuid, .name = name, .external_ids = ids, > > - .options = options), > > - lr.is_enabled(), > > - var uuid_str = uuid2str(uuid).intern(), > > - var external_ids = { > > - var eids = [i"logical-router" -> uuid_str, i"name" -> name]; > > - match (ids.get(i"neutron:router_name")) { > > - None -> (), > > - Some{nnn} -> eids.insert(i"name2", nnn) > > - }; > > - match (options.get(i"snat-ct-zone").and_then(parse_dec_u64)) { > > - None -> (), > > - Some{zone} -> eids.insert(i"snat-ct-zone", i"${zone}") > > - }; > > - var learn_from_arp_request = options.get_bool_def(i"always_learn_from_arp_request", true); > > - if (not learn_from_arp_request) { > > - eids.insert(i"always_learn_from_arp_request", i"false") > > - }; > > - eids > > - }. > > - > > -sb::Out_Datapath_Binding(uuid, tunkey, load_balancers, external_ids) :- > > - OutProxy_Datapath_Binding(uuid, external_ids), > > - TunKeyAllocation(uuid, tunkey), > > - /* Datapath_Binding.load_balancers is not used anymore, it's still in the > > - * schema for compatibility reasons. Reset it to empty, just in case. > > - */ > > - var load_balancers = set_empty(). > > - > > -function get_requested_chassis(options: Map<istring,istring>) : istring = { > > - var requested_chassis = match(options.get(i"requested-chassis")) { > > - None -> i"", > > - Some{requested_chassis} -> requested_chassis, > > - }; > > - requested_chassis > > -} > > - > > -relation RequestedChassis( > > - name: istring, > > - chassis: uuid, > > -) > > -RequestedChassis(name, chassis) :- > > - sb::Chassis(._uuid = chassis, .name=name). > > -RequestedChassis(hostname, chassis) :- > > - sb::Chassis(._uuid = chassis, .hostname=hostname). > > - > > -/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields, > > - * except tunnel id, which is allocated separately (see PortTunKeyAllocation). 
*/ > > -relation OutProxy_Port_Binding ( > > - _uuid: uuid, > > - logical_port: istring, > > - __type: istring, > > - gateway_chassis: Set<uuid>, > > - ha_chassis_group: Option<uuid>, > > - options: Map<istring,istring>, > > - datapath: uuid, > > - parent_port: Option<istring>, > > - tag: Option<integer>, > > - mac: Set<istring>, > > - nat_addresses: Set<istring>, > > - external_ids: Map<istring,istring>, > > - requested_chassis: Option<uuid> > > -) > > - > > -/* Case 1a: Create a Port_Binding per logical switch port that is not of type > > - * "router" */ > > -OutProxy_Port_Binding(._uuid = lsp._uuid, > > - .logical_port = lsp.name, > > - .__type = lsp.__type, > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = sp.hac_group_uuid, > > - .options = options, > > - .datapath = sw._uuid, > > - .parent_port = lsp.parent_name, > > - .tag = tag, > > - .mac = lsp.addresses, > > - .nat_addresses = set_empty(), > > - .external_ids = eids, > > - .requested_chassis = None) :- > > - sp in &SwitchPort(.lsp = lsp, .sw = sw), > > - SwitchPortNewDynamicTag(lsp._uuid, opt_tag), > > - var tag = match (opt_tag) { > > - None -> lsp.tag, > > - Some{t} -> Some{t} > > - }, > > - lsp.__type != i"router", > > - var chassis_name_or_hostname = get_requested_chassis(lsp.options), > > - chassis_name_or_hostname == i"", > > - var eids = { > > - var eids = lsp.external_ids; > > - match (lsp.external_ids.get(i"neutron:port_name")) { > > - None -> (), > > - Some{name} -> eids.insert(i"name", name) > > - }; > > - eids > > - }, > > - var options = { > > - var options = lsp.options; > > - if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) { > > - options.insert(i"vlan-passthru", i"true") > > - }; > > - options > > - }. > > - > > -/* Case 1b: Create a Port_Binding per logical switch port that is not of type > > - * "router" and has options "requested-chassis" pointing at chassis name or > > - * hostname. 
*/ > > -OutProxy_Port_Binding(._uuid = lsp._uuid, > > - .logical_port = lsp.name, > > - .__type = lsp.__type, > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = sp.hac_group_uuid, > > - .options = options, > > - .datapath = sw._uuid, > > - .parent_port = lsp.parent_name, > > - .tag = tag, > > - .mac = lsp.addresses, > > - .nat_addresses = set_empty(), > > - .external_ids = eids, > > - .requested_chassis = Some{requested_chassis}) :- > > - sp in &SwitchPort(.lsp = lsp, .sw = sw), > > - SwitchPortNewDynamicTag(lsp._uuid, opt_tag), > > - var tag = match (opt_tag) { > > - None -> lsp.tag, > > - Some{t} -> Some{t} > > - }, > > - lsp.__type != i"router", > > - var chassis_name_or_hostname = get_requested_chassis(lsp.options), > > - chassis_name_or_hostname != i"", > > - RequestedChassis(chassis_name_or_hostname, requested_chassis), > > - var eids = { > > - var eids = lsp.external_ids; > > - match (lsp.external_ids.get(i"neutron:port_name")) { > > - None -> (), > > - Some{name} -> eids.insert(i"name", name) > > - }; > > - eids > > - }, > > - var options = { > > - var options = lsp.options; > > - if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) { > > - options.insert(i"vlan-passthru", i"true") > > - }; > > - options > > - }. > > - > > -/* Case 1c: Create a Port_Binding per logical switch port that is not of type > > - * "router" and has options "requested-chassis" pointing at non-existent > > - * chassis name or hostname. 
*/ > > -OutProxy_Port_Binding(._uuid = lsp._uuid, > > - .logical_port = lsp.name, > > - .__type = lsp.__type, > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = sp.hac_group_uuid, > > - .options = options, > > - .datapath = sw._uuid, > > - .parent_port = lsp.parent_name, > > - .tag = tag, > > - .mac = lsp.addresses, > > - .nat_addresses = set_empty(), > > - .external_ids = eids, > > - .requested_chassis = None) :- > > - sp in &SwitchPort(.lsp = lsp, .sw = sw), > > - SwitchPortNewDynamicTag(lsp._uuid, opt_tag), > > - var tag = match (opt_tag) { > > - None -> lsp.tag, > > - Some{t} -> Some{t} > > - }, > > - lsp.__type != i"router", > > - var chassis_name_or_hostname = get_requested_chassis(lsp.options), > > - chassis_name_or_hostname != i"", > > - not RequestedChassis(chassis_name_or_hostname, _), > > - var eids = { > > - var eids = lsp.external_ids; > > - match (lsp.external_ids.get(i"neutron:port_name")) { > > - None -> (), > > - Some{name} -> eids.insert(i"name", name) > > - }; > > - eids > > - }, > > - var options = { > > - var options = lsp.options; > > - if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) { > > - options.insert(i"vlan-passthru", i"true") > > - }; > > - options > > - }. > > - > > -relation SwitchPortLBIPs( > > - port: Intern<SwitchPort>, > > - lbips: Option<Intern<LogicalRouterLBIPs>>) > > - > > -SwitchPortLBIPs(.port = port, > > - .lbips = Some{lbips}) :- > > - port in &SwitchPort(.peer = Some{peer}), > > - port.lsp.options.get(i"router-port").is_some(), > > - lbips in &LogicalRouterLBIPs(.lr = peer.router._uuid). > > - > > -SwitchPortLBIPs(.port = port, > > - .lbips = None) :- > > - port in &SwitchPort(.peer = peer), > > - peer.is_none() or port.lsp.options.get(i"router-port").is_none(). 
> > - > > -/* Case 2: Create a Port_Binding per logical switch port of type "router" */ > > -OutProxy_Port_Binding(._uuid = lsp._uuid, > > - .logical_port = lsp.name, > > - .__type = __type, > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = None, > > - .options = options, > > - .datapath = sw._uuid, > > - .parent_port = lsp.parent_name, > > - .tag = None, > > - .mac = lsp.addresses, > > - .nat_addresses = nat_addresses, > > - .external_ids = eids, > > - .requested_chassis = None) :- > > - SwitchPortLBIPs(.port = &SwitchPort{.lsp = lsp, .sw = sw, .peer = peer}, > > - .lbips = lbips), > > - var eids = { > > - var eids = lsp.external_ids; > > - match (lsp.external_ids.get(i"neutron:port_name")) { > > - None -> (), > > - Some{name} -> eids.insert(i"name", name) > > - }; > > - eids > > - }, > > - Some{var router_port} = lsp.options.get(i"router-port"), > > - var opt_chassis = peer.and_then(|p| p.router.options.get(i"chassis")), > > - var l3dgw_port = peer.and_then(|p| p.router.l3dgw_ports.nth(0)), > > - (var __type, var options) = { > > - var options = [i"peer" -> router_port]; > > - match (opt_chassis) { > > - None -> { > > - (i"patch", options) > > - }, > > - Some{chassis} -> { > > - options.insert(i"l3gateway-chassis", chassis); > > - (i"l3gateway", options) > > - } > > - } > > - }, > > - var base_nat_addresses = { > > - match (lsp.options.get(i"nat-addresses")) { > > - None -> { set_empty() }, > > - Some{nat_addresses} -> { > > - if (nat_addresses == i"router") { > > - match ((l3dgw_port, opt_chassis, peer)) { > > - (None, None, _) -> set_empty(), > > - (_, _, None) -> set_empty(), > > - (_, _, Some{rport}) -> get_nat_addresses(rport, lbips.unwrap_or_default(), false) > > - } > > - } else { > > - /* Only accept manual specification of ethernet address > > - * followed by IPv4 addresses on type "l3gateway" ports. 
*/ > > - if (opt_chassis.is_some()) { > > - match (extract_lsp_addresses(nat_addresses.ival())) { > > - None -> { > > - warn("Error extracting nat-addresses."); > > - set_empty() > > - }, > > - Some{_} -> { set_singleton(nat_addresses) } > > - } > > - } else { set_empty() } > > - } > > - } > > - } > > - }, > > - /* Add the router mac and IPv4 addresses to > > - * Port_Binding.nat_addresses so that GARP is sent for these > > - * IPs by the ovn-controller on which the distributed gateway > > - * router port resides if: > > - * > > - * 1. The peer has 'reside-on-redirect-chassis' set and the > > - * the logical router datapath has distributed router port. > > - * > > - * 2. The peer is distributed gateway router port. > > - * > > - * 3. The peer's router is a gateway router and the port has a localnet > > - * port. > > - * > > - * Note: Port_Binding.nat_addresses column is also used for > > - * sending the GARPs for the router port IPs. > > - * */ > > - var garp_nat_addresses = match (peer) { > > - Some{rport} -> match ( > > - (rport.lrp.options.get_bool_def(i"reside-on-redirect-chassis", false) > > - and l3dgw_port.is_some()) or > > - rport.is_redirect or > > - (rport.router.options.contains_key(i"chassis") and > > - not sw.localnet_ports.is_empty())) { > > - false -> set_empty(), > > - true -> set_singleton(get_garp_nat_addresses(rport).intern()) > > - }, > > - None -> set_empty() > > - }, > > - var nat_addresses = set_union(base_nat_addresses, garp_nat_addresses). 
> > - > > -/* Case 3: Port_Binding per logical router port */ > > -OutProxy_Port_Binding(._uuid = lrp._uuid, > > - .logical_port = lrp.name, > > - .__type = __type, > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = None, > > - .options = options, > > - .datapath = router._uuid, > > - .parent_port = None, > > - .tag = None, // always empty for router ports > > - .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"), > > - .nat_addresses = set_empty(), > > - .external_ids = lrp.external_ids, > > - .requested_chassis = None) :- > > - rp in &RouterPort(.lrp = lrp, .router = router, .peer = peer), > > - RouterPortRAOptionsComplete(lrp._uuid, options0), > > - (var __type, var options1) = match (router.options.get(i"chassis")) { > > - /* TODO: derived ports */ > > - None -> (i"patch", map_empty()), > > - Some{lrchassis} -> (i"l3gateway", [i"l3gateway-chassis" -> lrchassis]) > > - }, > > - var options2 = match (router_peer_name(peer)) { > > - None -> map_empty(), > > - Some{peer_name} -> [i"peer" -> peer_name] > > - }, > > - var options3 = match ((peer, rp.networks.ipv6_addrs.is_empty())) { > > - (PeerSwitch{_, _}, false) -> { > > - var enabled = lrp.is_enabled(); > > - var pd = lrp.options.get_bool_def(i"prefix_delegation", false); > > - var p = lrp.options.get_bool_def(i"prefix", false); > > - [i"ipv6_prefix_delegation" -> i"${pd and enabled}", > > - i"ipv6_prefix" -> i"${p and enabled}"] > > - }, > > - _ -> map_empty() > > - }, > > - PreserveIPv6RAPDList(lrp._uuid, ipv6_ra_pd_list), > > - var options4 = match (ipv6_ra_pd_list) { > > - None -> map_empty(), > > - Some{value} -> [i"ipv6_ra_pd_list" -> value] > > - }, > > - RouterPortIsRedirect(lrp._uuid, is_redirect), > > - var options5 = match (is_redirect) { > > - false -> map_empty(), > > - true -> [i"chassis-redirect-port" -> chassis_redirect_name(lrp.name).intern()] > > - }, > > - var options = 
options0.union(options1).union(options2).union(options3).union(options4).union(options5), > > - var eids = { > > - var eids = lrp.external_ids; > > - match (lrp.external_ids.get(i"neutron:port_name")) { > > - None -> (), > > - Some{name} -> eids.insert(i"name", name) > > - }; > > - eids > > - }. > > -/* > > -*/ > > -function get_router_load_balancer_ips(lbips: Intern<LogicalRouterLBIPs>, > > - routable_only: bool) : > > - (Set<istring>, Set<istring>) = > > -{ > > - if (routable_only) { > > - (lbips.lb_ipv4s_routable, lbips.lb_ipv6s_routable) > > - } else { > > - (union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable), > > - union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable)) > > - } > > -} > > - > > -/* Returns an array of strings, each consisting of a MAC address followed > > - * by one or more IP addresses, and if the port is a distributed gateway > > - * port, followed by 'is_chassis_resident("LPORT_NAME")', where the > > - * LPORT_NAME is the name of the L3 redirect port or the name of the > > - * logical_port specified in a NAT rule. These strings include the > > - * external IP addresses of all NAT rules defined on that router, and all > > - * of the IP addresses used in load balancer VIPs defined on that router. > > - */ > > -function get_nat_addresses(rport: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>, routable_only: bool): Set<istring> = > > -{ > > - var addresses = set_empty(); > > - var has_redirect = not rport.router.l3dgw_ports.is_empty(); > > - match (eth_addr_from_string(rport.lrp.mac.ival())) { > > - None -> addresses, > > - Some{mac} -> { > > - var c_addresses = "${mac}"; > > - var central_ip_address = false; > > - > > - /* Get NAT IP addresses. 
*/ > > - for (nat in rport.router.nats) { > > - if (routable_only and > > - (nat.nat.__type == i"snat" or > > - not nat.nat.options.get_bool_def(i"add_route", false))) { > > - continue; > > - }; > > - /* Determine whether this NAT rule satisfies the conditions for > > - * distributed NAT processing. */ > > - if (has_redirect and nat.nat.__type == i"dnat_and_snat" and > > - nat.nat.logical_port.is_some() and nat.external_mac.is_some()) { > > - /* Distributed NAT rule. */ > > - var logical_port = nat.nat.logical_port.unwrap_or_default(); > > - var external_mac = nat.external_mac.unwrap_or_default(); > > - addresses.insert(i"${external_mac} ${nat.external_ip} " > > - "is_chassis_resident(${json_escape(logical_port)})") > > - } else { > > - /* Centralized NAT rule, either on gateway router or distributed > > - * router. > > - * Check if external_ip is same as router ip. If so, then there > > - * is no need to add this to the nat_addresses. The router IPs > > - * will be added separately. */ > > - var is_router_ip = false; > > - match (nat.external_ip) { > > - IPv4{ei} -> { > > - for (ipv4 in rport.networks.ipv4_addrs) { > > - if (ei == ipv4.addr) { > > - is_router_ip = true; > > - break > > - } > > - } > > - }, > > - IPv6{ei} -> { > > - for (ipv6 in rport.networks.ipv6_addrs) { > > - if (ei == ipv6.addr) { > > - is_router_ip = true; > > - break > > - } > > - } > > - } > > - }; > > - if (not is_router_ip) { > > - c_addresses = c_addresses ++ " ${nat.external_ip}"; > > - central_ip_address = true > > - } > > - } > > - }; > > - > > - /* A set to hold all load-balancer vips. 
*/ > > - (var all_ips_v4, var all_ips_v6) = get_router_load_balancer_ips(lbips, routable_only); > > - > > - for (ip_address in set_union(all_ips_v4, all_ips_v6)) { > > - c_addresses = c_addresses ++ " ${ip_address}"; > > - central_ip_address = true > > - }; > > - > > - if (central_ip_address) { > > - /* Gratuitous ARP for centralized NAT rules on distributed gateway > > - * ports should be restricted to the gateway chassis. */ > > - if (has_redirect) { > > - c_addresses = c_addresses ++ match (rport.router.l3dgw_ports.nth(0)) { > > - None -> "", > > - Some {var gw_port} -> " is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})" > > - } > > - } else (); > > - > > - addresses.insert(c_addresses.intern()) > > - } else (); > > - addresses > > - } > > - } > > -} > > - > > -function get_garp_nat_addresses(rport: Intern<RouterPort>): string = { > > - var garp_info = ["${rport.networks.ea}"]; > > - for (ipv4_addr in rport.networks.ipv4_addrs) { > > - garp_info.push("${ipv4_addr.addr}") > > - }; > > - match (rport.router.l3dgw_ports.nth(0)) { > > - None -> (), > > - Some {var gw_port} -> garp_info.push( > > - "is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})") > > - }; > > - garp_info.join(" ") > > -} > > - > > -/* Extra options computed for router ports by the logical flow generation code */ > > -relation RouterPortRAOptions(lrp: uuid, options: Map<istring, istring>) > > - > > -relation RouterPortRAOptionsComplete(lrp: uuid, options: Map<istring, istring>) > > - > > -RouterPortRAOptionsComplete(lrp, options) :- > > - RouterPortRAOptions(lrp, options). > > -RouterPortRAOptionsComplete(lrp, map_empty()) :- > > - &nb::Logical_Router_Port(._uuid = lrp), > > - not RouterPortRAOptions(lrp, _). 
> > - > > -function has_distributed_nat(nats: Vec<NAT>): bool { > > - for (nat in nats) { > > - if (nat.nat.__type == i"dnat_and_snat") { > > - return true > > - } > > - }; > > - return false > > -} > > - > > -/* > > - * Create derived port for Logical_Router_Ports with non-empty 'gateway_chassis' column. > > - */ > > - > > -/* Create derived ports */ > > -OutProxy_Port_Binding(._uuid = cr_lrp_uuid, > > - .logical_port = chassis_redirect_name(lrp.name).intern(), > > - .__type = i"chassisredirect", > > - .gateway_chassis = set_empty(), > > - .ha_chassis_group = Some{hacg_uuid}, > > - .options = options, > > - .datapath = lr_uuid, > > - .parent_port = None, > > - .tag = None, //always empty for router ports > > - .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"), > > - .nat_addresses = set_empty(), > > - .external_ids = lrp.external_ids, > > - .requested_chassis = None) :- > > - DistributedGatewayPort(lrp, lr_uuid, cr_lrp_uuid), > > - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), > > - var redirect_type = match (lrp.options.get(i"redirect-type")) { > > - Some{var value} -> [i"redirect-type" -> value], > > - _ -> map_empty() > > - }, > > - LogicalRouterNATs(lr_uuid, nats), > > - var always_redirect = if (has_distributed_nat(nats) or > > - lrp.options.get(i"redirect-type") == Some{i"bridged"}) { > > - map_empty() > > - } else { > > - [i"always-redirect" -> i"true"] > > - }, > > - var options = redirect_type.union(always_redirect).insert_imm(i"distributed-port", lrp.name). > > - > > -/* > > - * We want to preserve 'up' (set by ovn-controller) for Port_Binding rows. > > - * We need to set set 'up' in new rows to Some{false}; if we don't set > > - * it at all, ovn-controller will never update it. > > - */ > > -relation PortBindingUp0(pb_uuid: uuid, up: bool) > > -PortBindingUp0(pb_uuid, up) :- sb::Port_Binding(._uuid = pb_uuid, .up = Some{up}). 
> > - > > -relation PortBindingUp(pb_uuid: uuid, up: bool) > > -PortBindingUp(pb_uuid, up) :- PortBindingUp0(pb_uuid, up). > > -PortBindingUp(pb_uuid, false) :- > > - OutProxy_Port_Binding(._uuid = pb_uuid), > > - not PortBindingUp0(pb_uuid, _). > > - > > -/* Add allocated qdisc_queue_id and tunnel key to Port_Binding. > > - */ > > -sb::Out_Port_Binding(._uuid = pbinding._uuid, > > - .logical_port = pbinding.logical_port, > > - .__type = pbinding.__type, > > - .gateway_chassis = pbinding.gateway_chassis, > > - .ha_chassis_group = pbinding.ha_chassis_group, > > - .options = options0, > > - .datapath = pbinding.datapath, > > - .tunnel_key = tunkey, > > - .parent_port = pbinding.parent_port, > > - .tag = pbinding.tag, > > - .mac = pbinding.mac, > > - .nat_addresses = pbinding.nat_addresses, > > - .external_ids = pbinding.external_ids, > > - .up = Some{up}, > > - .requested_chassis = pbinding.requested_chassis) :- > > - pbinding in OutProxy_Port_Binding(), > > - PortTunKeyAllocation(pbinding._uuid, tunkey), > > - QueueIDAllocation(pbinding._uuid, qid), > > - PortBindingUp(pbinding._uuid, up), > > - var options0 = match (qid) { > > - None -> pbinding.options, > > - Some{id} -> pbinding.options.insert_imm(i"qdisc_queue_id", i"${id}") > > - }. > > - > > -/* Referenced chassis. > > - * > > - * These tables track the sb::Chassis that a packet that traverses logical > > - * router 'lr_uuid' can end up at (or start from). This is used for > > - * sb::Out_HA_Chassis_Group's ref_chassis column. > > - * > > - * Only HA Chassis Groups with more than 1 chassis need to maintain the > > - * referenced chassis. > > - * > > - * RefChassisSet0 has a row for each logical router that actually references a > > - * chassis. RefChassisSet has a row for every logical router. 
*/ > > -relation RefChassis(lr_uuid: uuid, chassis_uuid: uuid) > > -RefChassis(lr_uuid, chassis_uuid) :- > > - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), > > - HAChassis(.hacg_uuid = hacg_uuid, .hac_uuid = hac_uuid), > > - var hacg_size = hac_uuid.group_by(hacg_uuid).to_set().size(), > > - hacg_size > 1, > > - DistributedGatewayPort(lrp, lr_uuid, _), > > - ConnectedLogicalRouter[(lr_uuid, set_uuid)], > > - ConnectedLogicalRouter[(lr2_uuid, set_uuid)], > > - FirstHopLogicalRouter(lr2_uuid, ls_uuid), > > - LogicalSwitchPort(lsp_uuid, ls_uuid), > > - &nb::Logical_Switch_Port(._uuid = lsp_uuid, .name = lsp_name), > > - sb::Port_Binding(.logical_port = lsp_name, .chassis = chassis_uuids), > > - Some{var chassis_uuid} = chassis_uuids. > > -relation RefChassisSet0(lr_uuid: uuid, chassis_uuids: Set<uuid>) > > -RefChassisSet0(lr_uuid, chassis_uuids) :- > > - RefChassis(lr_uuid, chassis_uuid), > > - var chassis_uuids = chassis_uuid.group_by(lr_uuid).to_set(). > > -relation RefChassisSet(lr_uuid: uuid, chassis_uuids: Set<uuid>) > > -RefChassisSet(lr_uuid, chassis_uuids) :- > > - RefChassisSet0(lr_uuid, chassis_uuids). > > -RefChassisSet(lr_uuid, set_empty()) :- > > - nb::Logical_Router(._uuid = lr_uuid), > > - not RefChassisSet0(lr_uuid, _). > > - > > -/* Referenced chassis for an HA chassis group. > > - * > > - * Multiple logical routers can reference an HA chassis group so we merge the > > - * referenced chassis across all of them. > > - */ > > -relation HAChassisGroupRefChassisSet(hacg_uuid: uuid, > > - chassis_uuids: Set<uuid>) > > -HAChassisGroupRefChassisSet(hacg_uuid, chassis_uuids) :- > > - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), > > - DistributedGatewayPort(lrp, lr_uuid, _), > > - RefChassisSet(lr_uuid, chassis_uuids), > > - var chassis_uuids = chassis_uuids.group_by(hacg_uuid).union(). > > - > > -/* HA_Chassis_Group and HA_Chassis. 
*/ > > -sb::Out_HA_Chassis_Group(hacg_uuid, hacg_name, ha_chassis, ref_chassis, eids) :- > > - HAChassis(hacg_uuid, hac_uuid, chassis_name, _, _), > > - var chassis_uuid = ha_chassis_uuid(chassis_name.ival(), hac_uuid), > > - var ha_chassis = chassis_uuid.group_by(hacg_uuid).to_set(), > > - HAChassisGroup(hacg_uuid, hacg_name, eids), > > - HAChassisGroupRefChassisSet(hacg_uuid, ref_chassis). > > - > > -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), chassis, priority, eids) :- > > - HAChassis(_, hac_uuid, chassis_name, priority, eids), > > - chassis_rec in sb::Chassis(.name = chassis_name), > > - var chassis = Some{chassis_rec._uuid}. > > -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), None, priority, eids) :- > > - HAChassis(_, hac_uuid, chassis_name, priority, eids), > > - not chassis_rec in sb::Chassis(.name = chassis_name). > > - > > -relation HAChassisToChassis(name: istring, chassis: Option<uuid>) > > -HAChassisToChassis(name, Some{chassis}) :- > > - sb::Chassis(._uuid = chassis, .name = name). > > -HAChassisToChassis(name, None) :- > > - nb::HA_Chassis(.chassis_name = name), > > - not sb::Chassis(.name = name). > > -sb::Out_HA_Chassis(ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid), chassis, priority, eids) :- > > - sp in &SwitchPort(), > > - sp.lsp.__type == i"external", > > - Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group, > > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid), > > - var hac_uuid = FlatMap(ha_chassis_group.ha_chassis), > > - ha_chassis in nb::HA_Chassis(._uuid = hac_uuid, .priority = priority, .external_ids = eids), > > - HAChassisToChassis(ha_chassis.chassis_name, chassis). > > -sb::Out_HA_Chassis_Group(_uuid, name, ha_chassis, set_empty() /* XXX? 
*/, eids) :- > > - sp in &SwitchPort(), > > - sp.lsp.__type == i"external", > > - var ls_uuid = sp.sw._uuid, > > - Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group, > > - ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid, .name = name, > > - .external_ids = eids), > > - var hac_uuid = FlatMap(ha_chassis_group.ha_chassis), > > - ha_chassis in nb::HA_Chassis(._uuid = hac_uuid), > > - var ha_chassis_uuid_name = ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid), > > - var ha_chassis = ha_chassis_uuid_name.group_by((ls_uuid, name, eids)).to_set(), > > - var _uuid = ha_chassis_group_uuid(ls_uuid). > > - > > -/* > > - * SB_Global: copy nb_cfg and options from NB. > > - * If NB_Global does not exist yet, just keep the current value of SB_Global, > > - * if any. > > - */ > > -for (nb_global in nb::NB_Global) { > > - sb::Out_SB_Global(._uuid = nb_global._uuid, > > - .nb_cfg = nb_global.nb_cfg, > > - .options = nb_global.options, > > - .ipsec = nb_global.ipsec) > > -} > > - > > -sb::Out_SB_Global(._uuid = sb_global._uuid, > > - .nb_cfg = sb_global.nb_cfg, > > - .options = sb_global.options, > > - .ipsec = sb_global.ipsec) :- > > - sb_global in sb::SB_Global(), > > - not nb::NB_Global(). > > - > > -/* sb::Chassis_Private joined with is_remote from sb::Chassis, > > - * including a record even for a null Chassis ref. */ > > -relation ChassisPrivate( > > - cp: sb::Chassis_Private, > > - is_remote: bool) > > -ChassisPrivate(cp, c.other_config.get_bool_def(i"is-remote", false)) :- > > - cp in sb::Chassis_Private(.chassis = Some{uuid}), > > - c in sb::Chassis(._uuid = uuid). > > -ChassisPrivate(cp, false), > > -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :- > > - cp in sb::Chassis_Private(.chassis = Some{uuid}), > > - not sb::Chassis(._uuid = uuid). > > -ChassisPrivate(cp, false), > > -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :- > > - cp in sb::Chassis_Private(.chassis = None). 
> > - > > -/* Track minimum hv_cfg across all the (non-remote) chassis. */ > > -relation HvCfg0(hv_cfg: integer) > > -HvCfg0(hv_cfg) :- > > - ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = chassis_cfg}, .is_remote = false), > > - var hv_cfg = chassis_cfg.group_by(()).min(). > > -relation HvCfg(hv_cfg: integer) > > -HvCfg(hv_cfg) :- HvCfg0(hv_cfg). > > -HvCfg(hv_cfg) :- > > - nb::NB_Global(.nb_cfg = hv_cfg), > > - not HvCfg0(). > > - > > -/* Track maximum nb_cfg_timestamp among all the (non-remote) chassis > > - * that have the minimum nb_cfg. */ > > -relation HvCfgTimestamp0(hv_cfg_timestamp: integer) > > -HvCfgTimestamp0(hv_cfg_timestamp) :- > > - HvCfg(hv_cfg), > > - ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = hv_cfg, > > - .nb_cfg_timestamp = chassis_cfg_timestamp}, > > - .is_remote = false), > > - var hv_cfg_timestamp = chassis_cfg_timestamp.group_by(()).max(). > > -relation HvCfgTimestamp(hv_cfg_timestamp: integer) > > -HvCfgTimestamp(hv_cfg_timestamp) :- HvCfgTimestamp0(hv_cfg_timestamp). > > -HvCfgTimestamp(hv_cfg_timestamp) :- > > - nb::NB_Global(.hv_cfg_timestamp = hv_cfg_timestamp), > > - not HvCfgTimestamp0(). > > - > > -/* > > - * nb::Out_NB_Global. > > - * > > - * OutNBGlobal0 generates the new record in the common case. > > - * OutNBGlobal1 generates the new record as a copy of nb::NB_Global, if sb::SB_Global is missing. > > - * nb::Out_NB_Global makes sure we have only a single record in the relation. > > - * > > - * (We don't generate an NB_Global output record if there isn't > > - * one in the input. We don't have enough entropy available to > > - * generate a random _uuid. Doesn't seem like a big deal, because > > - * OVN probably hasn't really been initialized yet.) 
> > - */ > > -relation OutNBGlobal0[nb::Out_NB_Global] > > -OutNBGlobal0[nb::Out_NB_Global{._uuid = _uuid, > > - .sb_cfg = sb_cfg, > > - .hv_cfg = hv_cfg, > > - .nb_cfg_timestamp = nb_cfg_timestamp, > > - .hv_cfg_timestamp = hv_cfg_timestamp, > > - .ipsec = ipsec, > > - .options = options}] :- > > - NbCfgTimestamp[nb_cfg_timestamp], > > - HvCfgTimestamp(hv_cfg_timestamp), > > - nbg in nb::NB_Global(._uuid = _uuid, .ipsec = ipsec), > > - sb::SB_Global(.nb_cfg = sb_cfg), > > - HvCfg(hv_cfg), > > - HvCfgTimestamp(hv_cfg_timestamp), > > - MacPrefix(mac_prefix), > > - SvcMonitorMac(svc_monitor_mac), > > - OvnMaxDpKeyLocal[max_tunid], > > - var options = { > > - var options = nbg.options; > > - options.put_mac_prefix(mac_prefix); > > - options.put_svc_monitor_mac(svc_monitor_mac); > > - options.insert(i"max_tunid", i"${max_tunid}"); > > - options.insert(i"northd_internal_version", ovn_internal_version().intern()); > > - options > > - }. > > - > > -relation OutNBGlobal1[nb::Out_NB_Global] > > -OutNBGlobal1[x] :- OutNBGlobal0[x]. > > -OutNBGlobal1[nb::Out_NB_Global{._uuid = nbg._uuid, > > - .sb_cfg = nbg.sb_cfg, > > - .hv_cfg = nbg.hv_cfg, > > - .ipsec = nbg.ipsec, > > - .options = nbg.options, > > - .nb_cfg_timestamp = nbg.nb_cfg_timestamp, > > - .hv_cfg_timestamp = nbg.hv_cfg_timestamp}] :- > > - Unit(), > > - not OutNBGlobal0[_], > > - nbg in nb::NB_Global(). > > - > > -nb::Out_NB_Global[y] :- > > - OutNBGlobal1[x], > > - var y = x.group_by(()).group_first(). > > - > > -// Tracks the value that should go into NB_Global's 'nb_cfg_timestamp' column. > > -// ovn-northd-ddlog.c pushes the current time directly into this relation. > > -input relation NbCfgTimestamp[integer] > > - > > -output relation SbCfg[integer] > > -SbCfg[sb_cfg] :- nb::Out_NB_Global(.sb_cfg = sb_cfg). 
> > - > > -output relation Northd_Probe_Interval[s64] > > -Northd_Probe_Interval[interval] :- > > - nb in nb::NB_Global(), > > - var interval = nb.options.get(i"northd_probe_interval").and_then(parse_dec_i64).unwrap_or(-1). > > - > > -relation CheckLspIsUp[bool] > > -CheckLspIsUp[check_lsp_is_up] :- > > - nb in nb::NB_Global(), > > - var check_lsp_is_up = not nb.options.get_bool_def(i"ignore_lsp_down", true). > > -CheckLspIsUp[true] :- > > - Unit(), > > - not nb in nb::NB_Global(). > > - > > -/* > > - * Address_Set: copy from NB + additional records generated from NB Port_Group (two records for each > > - * Port_Group for IPv4 and IPv6 addresses). > > - * > > - * There can be name collisions between the two types of Address_Set records. User-defined records > > - * take precedence. > > - */ > > -sb::Out_Address_Set(._uuid = nb_as._uuid, > > - .name = nb_as.name, > > - .addresses = nb_as.addresses) :- > > - nb_as in &nb::Address_Set(). > > - > > -sb::Out_Address_Set(._uuid = hash128("svc_monitor_mac"), > > - .name = i"svc_monitor_mac", > > - .addresses = set_singleton(i"${svc_monitor_mac}")) :- > > - SvcMonitorMac(svc_monitor_mac). 
> > - > > -sb::Out_Address_Set(hash128(as_name), as_name, pg_ip4addrs.union()) :- > > - PortGroupPort(.pg_name = pg_name, .port = port_uuid), > > - var as_name = i"${pg_name}_ip4", > > - // avoid name collisions with user-defined Address_Sets > > - not &nb::Address_Set(.name = as_name), > > - PortStaticAddresses(.lsport = port_uuid, .ip4addrs = stat), > > - SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}}, > > - dyn_addr), > > - var dynamic = match (dyn_addr) { > > - None -> set_empty(), > > - Some{lpaddress} -> match (lpaddress.ipv4_addrs.nth(0)) { > > - None -> set_empty(), > > - Some{addr} -> set_singleton(i"${addr.addr}") > > - } > > - }, > > - //PortDynamicAddresses(.lsport = port_uuid, .ip4addrs = dynamic), > > - var port_ip4addrs = stat.union(dynamic), > > - var pg_ip4addrs = port_ip4addrs.group_by(as_name).to_vec(). > > - > > -sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :- > > - nb::Port_Group(.ports = set_empty(), .name = pg_name), > > - var as_name = i"${pg_name}_ip4", > > - // avoid name collisions with user-defined Address_Sets > > - not &nb::Address_Set(.name = as_name). 
> > - > > -sb::Out_Address_Set(hash128(as_name), as_name, pg_ip6addrs.union()) :- > > - PortGroupPort(.pg_name = pg_name, .port = port_uuid), > > - var as_name = i"${pg_name}_ip6", > > - // avoid name collisions with user-defined Address_Sets > > - not &nb::Address_Set(.name = as_name), > > - PortStaticAddresses(.lsport = port_uuid, .ip6addrs = stat), > > - SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}}, > > - dyn_addr), > > - var dynamic = match (dyn_addr) { > > - None -> set_empty(), > > - Some{lpaddress} -> match (lpaddress.ipv6_addrs.nth(0)) { > > - None -> set_empty(), > > - Some{addr} -> set_singleton(i"${addr.addr}") > > - } > > - }, > > - //PortDynamicAddresses(.lsport = port_uuid, .ip6addrs = dynamic), > > - var port_ip6addrs = stat.union(dynamic), > > - var pg_ip6addrs = port_ip6addrs.group_by(as_name).to_vec(). > > - > > -sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :- > > - nb::Port_Group(.ports = set_empty(), .name = pg_name), > > - var as_name = i"${pg_name}_ip6", > > - // avoid name collisions with user-defined Address_Sets > > - not &nb::Address_Set(.name = as_name). > > - > > -/* > > - * Port_Group > > - * > > - * Create one SB Port_Group record for every datapath that has ports > > - * referenced by the NB Port_Group.ports field. In order to maintain the > > - * SB Port_Group.name uniqueness constraint, ovn-northd populates the field > > - * with the value: <SB.Logical_Datapath.tunnel_key>_<NB.Port_Group.name>. > > - */ > > - > > -relation PortGroupPort( > > - pg_uuid: uuid, > > - pg_name: istring, > > - port: uuid) > > - > > -PortGroupPort(pg_uuid, pg_name, port) :- > > - nb::Port_Group(._uuid = pg_uuid, .name = pg_name, .ports = pg_ports), > > - var port = FlatMap(pg_ports). 
> > - > > -sb::Out_Port_Group(._uuid = hash128(sb_name), .name = sb_name, .ports = port_names) :- > > - PortGroupPort(.pg_uuid = _uuid, .pg_name = nb_name, .port = port_uuid), > > - &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{._uuid = port_uuid, > > - .name = port_name}, > > - .sw = &Switch{._uuid = ls_uuid}), > > - TunKeyAllocation(.datapath = ls_uuid, .tunkey = tunkey), > > - var sb_name = i"${tunkey}_${nb_name}", > > - var port_names = port_name.group_by((_uuid, sb_name)).to_set(). > > - > > -/* > > - * Multicast_Group: > > - * - three static rows per logical switch: one for flooding, one for packets > > - * with unknown destinations, one for flooding IP multicast known traffic to > > - * mrouters. > > - * - dynamically created rows based on IGMP groups learned by controllers. > > - */ > > - > > -function mC_FLOOD(): (istring, integer) = > > - (i"_MC_flood", 32768) > > - > > -function mC_UNKNOWN(): (istring, integer) = > > - (i"_MC_unknown", 32769) > > - > > -function mC_MROUTER_FLOOD(): (istring, integer) = > > - (i"_MC_mrouter_flood", 32770) > > - > > -function mC_MROUTER_STATIC(): (istring, integer) = > > - (i"_MC_mrouter_static", 32771) > > - > > -function mC_STATIC(): (istring, integer) = > > - (i"_MC_static", 32772) > > - > > -function mC_FLOOD_L2(): (istring, integer) = > > - (i"_MC_flood_l2", 32773) > > - > > -function mC_IP_MCAST_MIN(): (istring, integer) = > > - (i"_MC_ip_mcast_min", 32774) > > - > > -function mC_IP_MCAST_MAX(): (istring, integer) = > > - (i"_MC_ip_mcast_max", 65535) > > - > > - > > -// TODO: check that Multicast_Group.ports should not include derived ports > > - > > -/* Proxy table for Out_Multicast_Group: contains all Multicast_Group fields, > > - * except `_uuid`, which will be computed by hashing the remaining fields, > > - * and tunnel key, which case it is allocated separately (see > > - * MulticastGroupTunKeyAllocation). 
*/ > > -relation OutProxy_Multicast_Group ( > > - datapath: uuid, > > - name: istring, > > - ports: Set<uuid> > > -) > > - > > -/* Only create flood group if the switch has enabled ports */ > > -sb::Out_Multicast_Group (._uuid = hash128((datapath,name)), > > - .datapath = datapath, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - &SwitchPort(.lsp = lsp, .sw = sw), > > - lsp.is_enabled(), > > - var datapath = sw._uuid, > > - var port_ids = lsp._uuid.group_by((datapath)).to_set(), > > - (var name, var tunnel_key) = mC_FLOOD(). > > - > > -/* Create a multicast group to flood to all switch ports except router ports. > > - */ > > -sb::Out_Multicast_Group (._uuid = hash128((datapath,name)), > > - .datapath = datapath, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - &SwitchPort(.lsp = lsp, .sw = sw), > > - lsp.is_enabled(), > > - lsp.__type != i"router", > > - var datapath = sw._uuid, > > - var port_ids = lsp._uuid.group_by((datapath)).to_set(), > > - (var name, var tunnel_key) = mC_FLOOD_L2(). > > - > > -/* Only create unknown group if the switch has ports with "unknown" address */ > > -sb::Out_Multicast_Group (._uuid = hash128((ls,name)), > > - .datapath = ls, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = ports) :- > > - LogicalSwitchPortWithUnknownAddress(ls, lsp), > > - var ports = lsp.group_by(ls).to_set(), > > - (var name, var tunnel_key) = mC_UNKNOWN(). > > - > > -/* Create a multicast group to flood multicast traffic to routers with > > - * multicast relay enabled. > > - */ > > -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)), > > - .datapath = sw._uuid, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - SwitchMcastFloodRelayPorts(sw, port_ids), > > - not port_ids.is_empty(), > > - (var name, var tunnel_key) = mC_MROUTER_FLOOD(). 
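(Note on the `Out_Multicast_Group` rules above: each row derives its `_uuid` as `hash128((datapath, name))`, so identical inputs always map to the same southbound row and incremental recomputation is stable. The determinism property can be illustrated in Python — the hash function here is hashlib-based and is *not* DDlog's actual `hash128`, it only demonstrates the idea:)

```python
import hashlib

def stable_uuid128(*fields):
    """Derive a deterministic 128-bit id from a tuple of fields, in the
    spirit of hash128((datapath, name)) above. Illustrative only: the
    real DDlog hash128 is a different function; what matters is that
    the same fields always yield the same id."""
    data = "\x00".join(str(f) for f in fields).encode()
    return int.from_bytes(hashlib.md5(data).digest(), "big")
```
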
> > - > > -/* Create a multicast group to flood traffic (no reports) to ports with > > - * multicast flood enabled. > > - */ > > -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)), > > - .datapath = sw._uuid, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - SwitchMcastFloodPorts(sw, port_ids), > > - not port_ids.is_empty(), > > - (var name, var tunnel_key) = mC_STATIC(). > > - > > -/* Create a multicast group to flood reports to ports with > > - * multicast flood_reports enabled. > > - */ > > -sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)), > > - .datapath = sw._uuid, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - SwitchMcastFloodReportPorts(sw, port_ids), > > - not port_ids.is_empty(), > > - (var name, var tunnel_key) = mC_MROUTER_STATIC(). > > - > > -/* Create a multicast group to flood traffic and reports to router ports with > > - * multicast flood enabled. > > - */ > > -sb::Out_Multicast_Group (._uuid = hash128((rtr._uuid,name)), > > - .datapath = rtr._uuid, > > - .name = name, > > - .tunnel_key = tunnel_key, > > - .ports = port_ids) :- > > - RouterMcastFloodPorts(rtr, port_ids), > > - not port_ids.is_empty(), > > - (var name, var tunnel_key) = mC_STATIC(). > > - > > -/* Create a multicast group for each IGMP group learned by a Switch. > > - * 'tunnel_key' == 0 triggers an ID allocation later. > > - */ > > -OutProxy_Multicast_Group (.datapath = switch._uuid, > > - .name = address, > > - .ports = port_ids) :- > > - IgmpSwitchMulticastGroup(address, switch, port_ids). > > - > > -/* Create a multicast group for each IGMP group learned by a Router. > > - * 'tunnel_key' == 0 triggers an ID allocation later. > > - */ > > -OutProxy_Multicast_Group (.datapath = router._uuid, > > - .name = address, > > - .ports = port_ids) :- > > - IgmpRouterMulticastGroup(address, router, port_ids). > > - > > -/* Allocate a 'tunnel_key' for dynamic multicast groups. 
*/ > > -sb::Out_Multicast_Group(._uuid = hash128((mcgroup.datapath,mcgroup.name)), > > - .datapath = mcgroup.datapath, > > - .name = mcgroup.name, > > - .tunnel_key = tunnel_key, > > - .ports = mcgroup.ports) :- > > - mcgroup in OutProxy_Multicast_Group(), > > - MulticastGroupTunKeyAllocation(mcgroup.datapath, mcgroup.name, tunnel_key). > > - > > -/* > > - * MAC binding: records inserted by hypervisors; northd removes records for deleted logical ports and datapaths. > > - */ > > -sb::Out_MAC_Binding (._uuid = mb._uuid, > > - .logical_port = mb.logical_port, > > - .ip = mb.ip, > > - .mac = mb.mac, > > - .datapath = mb.datapath) :- > > - sb::MAC_Binding[mb], > > - sb::Out_Port_Binding(.logical_port = mb.logical_port), > > - sb::Out_Datapath_Binding(._uuid = mb.datapath). > > - > > -/* > > - * DHCP options: fixed table > > - */ > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h7d9d898a_179b_4898_8382_b73bec391f23, > > - .name = i"offerip", > > - .code = 0, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hea5e7d14_fd97_491c_8004_a120bdbc4306, > > - .name = i"netmask", > > - .code = 1, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hdab5e39b_6702_4245_9573_6c142aa3724c, > > - .name = i"router", > > - .code = 3, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h340b4bc5_c5c3_43d1_ae77_564da69c8fcc, > > - .name = i"dns_server", > > - .code = 6, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hcd1ab302_cbb2_4eab_9ec5_ec1c8541bd82, > > - .name = i"log_server", > > - .code = 7, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h1c7ea6a0_fe6b_48c1_a920_302583c1ff08, > > - .name = i"lpr_server", > > - .code = 9, > > - .__type = i"ipv4" > > -). 
> > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hae312373_2261_41b5_a2c4_186f426dd929, > > - .name = i"hostname", > > - .code = 12, > > - .__type = i"str" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hae35e575_226a_4ab5_a1c4_166f426dd999, > > - .name = i"domain_name", > > - .code = 15, > > - .__type = i"str" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'had0ec3e0_8be9_4c77_bceb_f8954a34c7ba, > > - .name = i"swap_server", > > - .code = 16, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h884c2e02_6e99_4d12_aef7_8454ebf8a3b7, > > - .name = i"policy_filter", > > - .code = 21, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h57cc2c61_fd2a_41c6_b6b1_6ce9a8901f86, > > - .name = i"router_solicitation", > > - .code = 32, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h48249097_03f0_46c1_a32a_2dd57cd4d0f8, > > - .name = i"nis_server", > > - .code = 41, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h333fe07e_bdd1_4371_aa4f_a412bc60f3a2, > > - .name = i"ntp_server", > > - .code = 42, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h6207109c_49d0_4348_8238_dd92afb69bf0, > > - .name = i"server_id", > > - .code = 54, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h2090b783_26d3_4c1d_830c_54c1b6c5d846, > > - .name = i"tftp_server", > > - .code = 66, > > - .__type = i"host_id" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'ha18ff399_caea_406e_af7e_321c6f74e581, > > - .name = i"classless_static_route", > > - .code = 121, > > - .__type = i"static_routes" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hb81ad7b4_62f0_40c7_a9a3_f96677628767, > > - .name = i"ms_classless_static_route", > > - .code = 249, > > - .__type = i"static_routes" > > -). 
> > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h0c2e144e_4b5f_4e21_8978_0e20bac9a6ea, > > - .name = i"ip_forward_enable", > > - .code = 19, > > - .__type = i"bool" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h6feb1926_9469_4b40_bfbf_478b9888cd3a, > > - .name = i"router_discovery", > > - .code = 31, > > - .__type = i"bool" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hcb776249_e8b1_4502_b33b_fa294d44077d, > > - .name = i"ethernet_encap", > > - .code = 36, > > - .__type = i"bool" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'ha2df9eaa_aea9_497f_b339_0c8ec3e39a07, > > - .name = i"default_ttl", > > - .code = 23, > > - .__type = i"uint8" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hb44b45a9_5004_4ef5_8e6a_aa8629e1afb1, > > - .name = i"tcp_ttl", > > - .code = 37, > > - .__type = i"uint8" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h50f01ca7_c650_46f0_8f50_39a67ec657da, > > - .name = i"mtu", > > - .code = 26, > > - .__type = i"uint16" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h9d31c057_6085_4810_96af_eeac7d3c5308, > > - .name = i"lease_time", > > - .code = 51, > > - .__type = i"uint32" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hea1e2e7a_9585_46ee_ad49_adfdefc0c4ef, > > - .name = i"T1", > > - .code = 58, > > - .__type = i"uint32" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hbc83a233_554b_453a_afca_1eadf76810d2, > > - .name = i"T2", > > - .code = 59, > > - .__type = i"uint32" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h1ab3eeca_0523_4101_9076_eea77d0232f4, > > - .name = i"bootfile_name", > > - .code = 67, > > - .__type = i"str" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'ha5c20b69_f7f3_4fa8_b550_8697aec6cbb7, > > - .name = i"wpad", > > - .code = 252, > > - .__type = i"str" > > -). 
> > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h1516bcb6_cc93_4233_a63f_bd29c8601831, > > - .name = i"path_prefix", > > - .code = 210, > > - .__type = i"str" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hc98e13cd_f653_473c_85c1_850dcad685fc, > > - .name = i"tftp_server_address", > > - .code = 150, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hfbe06e70_b43d_4dd9_9b21_2f27eb5da5df, > > - .name = i"arp_cache_timeout", > > - .code = 35, > > - .__type = i"uint32" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h2af54a3c_545c_4104_ae1c_432caa3e085e, > > - .name = i"tcp_keepalive_interval", > > - .code = 38, > > - .__type = i"uint32" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h4b2144e8_8d3f_4d96_9032_fe23c1866cd4, > > - .name = i"domain_search_list", > > - .code = 119, > > - .__type = i"domains" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'hb7236164_eea4_4bf2_9306_8619a9e3ad1d, > > - .name = i"broadcast_address", > > - .code = 28, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h32224b72_1561_4279_b430_982423b62a69, > > - .name = i"netbios_name_server", > > - .code = 44, > > - .__type = i"ipv4" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h691db4ae_624e_43e2_9f4a_5ed9de58f0e5, > > - .name = i"netbios_node_type", > > - .code = 46, > > - .__type = i"uint8" > > -). > > - > > -sb::Out_DHCP_Options ( > > - ._uuid = 128'h2d738583_96f4_4a78_99a1_f8f7fe328f3f, > > - .name = i"bootfile_name_alt", > > - .code = 254, > > - .__type = i"str" > > -). > > - > > - > > -/* > > - * DHCPv6 options: fixed table > > - */ > > -sb::Out_DHCPv6_Options ( > > - ._uuid = 128'h100b2659_0ec0_4da7_9ec3_25997f92dc00, > > - .name = i"server_id", > > - .code = 2, > > - .__type = i"mac" > > -). 
> > - > > -sb::Out_DHCPv6_Options ( > > - ._uuid = 128'h53f49b50_db75_4b0d_83df_50d31009ca9c, > > - .name = i"ia_addr", > > - .code = 5, > > - .__type = i"ipv6" > > -). > > - > > -sb::Out_DHCPv6_Options ( > > - ._uuid = 128'he3619685_d4f7_42ad_936b_4f4440b7eeb4, > > - .name = i"dns_server", > > - .code = 23, > > - .__type = i"ipv6" > > -). > > - > > -sb::Out_DHCPv6_Options ( > > - ._uuid = 128'hcb8a4e7f_a312_4cb1_a846_e474d9f0c531, > > - .name = i"domain_search", > > - .code = 24, > > - .__type = i"str" > > -). > > - > > - > > -/* > > - * DNS: copied from NB + datapaths column pointer to LS datapaths that use the record > > - */ > > - > > -function map_to_lowercase(m_in: Map<istring,istring>): Map<istring,istring> { > > - var m_out = map_empty(); > > - for ((k, v) in m_in) { > > - m_out.insert(k.to_lowercase().intern(), v.to_lowercase().intern()) > > - }; > > - m_out > > -} > > - > > -sb::Out_DNS(._uuid = hash128(nbdns._uuid), > > - .records = map_to_lowercase(nbdns.records), > > - .datapaths = datapaths, > > - .external_ids = nbdns.external_ids.insert_imm(i"dns_id", uuid2str(nbdns._uuid).intern())) :- > > - nb::DNS[nbdns], > > - LogicalSwitchDNS(ls_uuid, nbdns._uuid), > > - var datapaths = ls_uuid.group_by(nbdns).to_set(). > > - > > -/* > > - * RBAC_Permission: fixed > > - */ > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h7df3749a_1754_4a78_afa4_3abf526fe510, > > - .table = i"Chassis", > > - .authorization = set_singleton(i"name"), > > - .insert_delete = true, > > - .update = [i"nb_cfg", i"external_ids", i"encaps", > > - i"vtep_logical_switches", i"other_config", > > - i"transport_zones"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54, > > - .table = i"Chassis_Private", > > - .authorization = set_singleton(i"name"), > > - .insert_delete = true, > > - .update = [i"nb_cfg", i"nb_cfg_timestamp", i"chassis", i"external_ids"].to_set() > > -). 
> > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h94bec860_431e_4d95_82e7_3b75d8997241, > > - .table = i"Encap", > > - .authorization = set_singleton(i"chassis_name"), > > - .insert_delete = true, > > - .update = [i"type", i"options", i"ip"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908, > > - .table = i"Port_Binding", > > - .authorization = set_singleton(i""), > > - .insert_delete = false, > > - .update = [i"chassis", i"encap", i"up", i"virtual_parent"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f, > > - .table = i"MAC_Binding", > > - .authorization = set_singleton(i""), > > - .insert_delete = true, > > - .update = [i"logical_port", i"ip", i"mac", i"datapath"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3, > > - .table = i"Service_Monitor", > > - .authorization = set_singleton(i""), > > - .insert_delete = false, > > - .update = set_singleton(i"status") > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h5256f48e_172c_4d85_8f04_e199fa817633, > > - .table = i"IGMP_Group", > > - .authorization = set_singleton(i""), > > - .insert_delete = true, > > - .update = [i"address", i"chassis", i"datapath", i"ports"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f, > > - .table = i"Controller_Event", > > - .authorization = set_singleton(i""), > > - .insert_delete = true, > > - .update = [i"chassis", i"event_info", i"event_type", > > - i"seq_num"].to_set() > > -). > > - > > -sb::Out_RBAC_Permission ( > > - ._uuid = 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126, > > - .table = i"FDB", > > - .authorization = set_singleton(i""), > > - .insert_delete = true, > > - .update = [i"dp_key", i"mac", i"port_key"].to_set() > > -). 
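(Note on the fixed `Out_RBAC_Permission` facts above: each row whitelists, per SB table, whether ovn-controller may insert/delete rows and which columns it may update. A loose Python model of how such rows constrain writes — this is not ovsdb-server's actual RBAC implementation, and `PERMS` below reproduces only two of the rows for illustration:)

```python
def write_allowed(perms, table, column=None, inserting=False):
    """Decide whether a write is permitted under RBAC rows shaped like
    the Out_RBAC_Permission facts above: inserts/deletes require
    insert_delete, and a column update must be listed in `update`."""
    p = perms.get(table)
    if p is None:
        return False          # no permission row: deny by default
    if inserting:
        return p["insert_delete"]
    return column in p["update"]

# Two of the fixed rows from the diff, transcribed for the sketch.
PERMS = {
    "MAC_Binding": {"insert_delete": True,
                    "update": {"logical_port", "ip", "mac", "datapath"}},
    "Port_Binding": {"insert_delete": False,
                     "update": {"chassis", "encap", "up", "virtual_parent"}},
}
```
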
> > - > > -/* > > - * RBAC_Role: fixed > > - */ > > -sb::Out_RBAC_Role ( > > - ._uuid = 128'ha406b472_5de8_4456_9f38_bf344c911b22, > > - .name = i"ovn-controller", > > - .permissions = [ > > - i"Chassis" -> 128'h7df3749a_1754_4a78_afa4_3abf526fe510, > > - i"Chassis_Private" -> 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54, > > - i"Controller_Event" -> 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f, > > - i"Encap" -> 128'h94bec860_431e_4d95_82e7_3b75d8997241, > > - i"FDB" -> 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126, > > - i"IGMP_Group" -> 128'h5256f48e_172c_4d85_8f04_e199fa817633, > > - i"Port_Binding" -> 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908, > > - i"MAC_Binding" -> 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f, > > - i"Service_Monitor"-> 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3] > > - > > -). > > - > > -/* Output modified Logical_Switch_Port table with dynamic address updated */ > > -nb::Out_Logical_Switch_Port(._uuid = lsp._uuid, > > - .tag = tag, > > - .dynamic_addresses = dynamic_addresses, > > - .up = Some{up}) :- > > - SwitchPortNewDynamicAddress(&SwitchPort{.lsp = lsp, .up = up}, opt_dyn_addr), > > - var dynamic_addresses = opt_dyn_addr.and_then(|a| Some{i"${a}"}), > > - SwitchPortNewDynamicTag(lsp._uuid, opt_tag), > > - var tag = match (opt_tag) { > > - None -> lsp.tag, > > - Some{t} -> Some{t} > > - }. > > - > > -relation LRPIPv6Prefix0(lrp_uuid: uuid, ipv6_prefix: istring) > > -LRPIPv6Prefix0(lrp._uuid, ipv6_prefix.intern()) :- > > - lrp in &nb::Logical_Router_Port(), > > - lrp.options.get_bool_def(i"prefix", false), > > - sb::Port_Binding(.logical_port = lrp.name, .options = options), > > - Some{var ipv6_ra_pd_list} = options.get(i"ipv6_ra_pd_list"), > > - var parts = ipv6_ra_pd_list.split(","), > > - Some{var ipv6_prefix} = parts.nth(1). > > - > > -relation LRPIPv6Prefix(lrp_uuid: uuid, ipv6_prefix: Option<istring>) > > -LRPIPv6Prefix(lrp_uuid, Some{ipv6_prefix}) :- > > - LRPIPv6Prefix0(lrp_uuid, ipv6_prefix). 
> > -LRPIPv6Prefix(lrp_uuid, None) :- > > - &nb::Logical_Router_Port(._uuid = lrp_uuid), > > - not LRPIPv6Prefix0(lrp_uuid, _). > > - > > -nb::Out_Logical_Router_Port(._uuid = _uuid, > > - .ipv6_prefix = to_set(ipv6_prefix)) :- > > - &nb::Logical_Router_Port(._uuid = _uuid, .name = name), > > - LRPIPv6Prefix(_uuid, ipv6_prefix). > > - > > -typedef Pipeline = Ingress | Egress > > - > > -typedef Stage = Stage { > > - pipeline : Pipeline, > > - table_id : bit<8>, > > - table_name : istring > > -} > > - > > -/* Logical switch ingress stages. */ > > -function s_SWITCH_IN_PORT_SEC_L2(): Intern<Stage> { Stage{Ingress, 0, i"ls_in_port_sec_l2"}.intern() } > > -function s_SWITCH_IN_PORT_SEC_IP(): Intern<Stage> { Stage{Ingress, 1, i"ls_in_port_sec_ip"}.intern() } > > -function s_SWITCH_IN_PORT_SEC_ND(): Intern<Stage> { Stage{Ingress, 2, i"ls_in_port_sec_nd"}.intern() } > > -function s_SWITCH_IN_LOOKUP_FDB(): Intern<Stage> { Stage{Ingress, 3, i"ls_in_lookup_fdb"}.intern() } > > -function s_SWITCH_IN_PUT_FDB(): Intern<Stage> { Stage{Ingress, 4, i"ls_in_put_fdb"}.intern() } > > -function s_SWITCH_IN_PRE_ACL(): Intern<Stage> { Stage{Ingress, 5, i"ls_in_pre_acl"}.intern() } > > -function s_SWITCH_IN_PRE_LB(): Intern<Stage> { Stage{Ingress, 6, i"ls_in_pre_lb"}.intern() } > > -function s_SWITCH_IN_PRE_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"ls_in_pre_stateful"}.intern() } > > -function s_SWITCH_IN_ACL_HINT(): Intern<Stage> { Stage{Ingress, 8, i"ls_in_acl_hint"}.intern() } > > -function s_SWITCH_IN_ACL(): Intern<Stage> { Stage{Ingress, 9, i"ls_in_acl"}.intern() } > > -function s_SWITCH_IN_QOS_MARK(): Intern<Stage> { Stage{Ingress, 10, i"ls_in_qos_mark"}.intern() } > > -function s_SWITCH_IN_QOS_METER(): Intern<Stage> { Stage{Ingress, 11, i"ls_in_qos_meter"}.intern() } > > -function s_SWITCH_IN_STATEFUL(): Intern<Stage> { Stage{Ingress, 12, i"ls_in_stateful"}.intern() } > > -function s_SWITCH_IN_PRE_HAIRPIN(): Intern<Stage> { Stage{Ingress, 13, i"ls_in_pre_hairpin"}.intern() } 
> > -function s_SWITCH_IN_NAT_HAIRPIN(): Intern<Stage> { Stage{Ingress, 14, i"ls_in_nat_hairpin"}.intern() } > > -function s_SWITCH_IN_HAIRPIN(): Intern<Stage> { Stage{Ingress, 15, i"ls_in_hairpin"}.intern() } > > -function s_SWITCH_IN_ARP_ND_RSP(): Intern<Stage> { Stage{Ingress, 16, i"ls_in_arp_rsp"}.intern() } > > -function s_SWITCH_IN_DHCP_OPTIONS(): Intern<Stage> { Stage{Ingress, 17, i"ls_in_dhcp_options"}.intern() } > > -function s_SWITCH_IN_DHCP_RESPONSE(): Intern<Stage> { Stage{Ingress, 18, i"ls_in_dhcp_response"}.intern() } > > -function s_SWITCH_IN_DNS_LOOKUP(): Intern<Stage> { Stage{Ingress, 19, i"ls_in_dns_lookup"}.intern() } > > -function s_SWITCH_IN_DNS_RESPONSE(): Intern<Stage> { Stage{Ingress, 20, i"ls_in_dns_response"}.intern() } > > -function s_SWITCH_IN_EXTERNAL_PORT(): Intern<Stage> { Stage{Ingress, 21, i"ls_in_external_port"}.intern() } > > -function s_SWITCH_IN_L2_LKUP(): Intern<Stage> { Stage{Ingress, 22, i"ls_in_l2_lkup"}.intern() } > > -function s_SWITCH_IN_L2_UNKNOWN(): Intern<Stage> { Stage{Ingress, 23, i"ls_in_l2_unknown"}.intern() } > > - > > -/* Logical switch egress stages. 
*/ > > -function s_SWITCH_OUT_PRE_LB(): Intern<Stage> { Stage{ Egress, 0, i"ls_out_pre_lb"}.intern() } > > -function s_SWITCH_OUT_PRE_ACL(): Intern<Stage> { Stage{ Egress, 1, i"ls_out_pre_acl"}.intern() } > > -function s_SWITCH_OUT_PRE_STATEFUL(): Intern<Stage> { Stage{ Egress, 2, i"ls_out_pre_stateful"}.intern() } > > -function s_SWITCH_OUT_ACL_HINT(): Intern<Stage> { Stage{ Egress, 3, i"ls_out_acl_hint"}.intern() } > > -function s_SWITCH_OUT_ACL(): Intern<Stage> { Stage{ Egress, 4, i"ls_out_acl"}.intern() } > > -function s_SWITCH_OUT_QOS_MARK(): Intern<Stage> { Stage{ Egress, 5, i"ls_out_qos_mark"}.intern() } > > -function s_SWITCH_OUT_QOS_METER(): Intern<Stage> { Stage{ Egress, 6, i"ls_out_qos_meter"}.intern() } > > -function s_SWITCH_OUT_STATEFUL(): Intern<Stage> { Stage{ Egress, 7, i"ls_out_stateful"}.intern() } > > -function s_SWITCH_OUT_PORT_SEC_IP(): Intern<Stage> { Stage{ Egress, 8, i"ls_out_port_sec_ip"}.intern() } > > -function s_SWITCH_OUT_PORT_SEC_L2(): Intern<Stage> { Stage{ Egress, 9, i"ls_out_port_sec_l2"}.intern() } > > - > > -/* Logical router ingress stages. 
*/ > > -function s_ROUTER_IN_ADMISSION(): Intern<Stage> { Stage{Ingress, 0, i"lr_in_admission"}.intern() } > > -function s_ROUTER_IN_LOOKUP_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 1, i"lr_in_lookup_neighbor"}.intern() } > > -function s_ROUTER_IN_LEARN_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 2, i"lr_in_learn_neighbor"}.intern() } > > -function s_ROUTER_IN_IP_INPUT(): Intern<Stage> { Stage{Ingress, 3, i"lr_in_ip_input"}.intern() } > > -function s_ROUTER_IN_UNSNAT(): Intern<Stage> { Stage{Ingress, 4, i"lr_in_unsnat"}.intern() } > > -function s_ROUTER_IN_DEFRAG(): Intern<Stage> { Stage{Ingress, 5, i"lr_in_defrag"}.intern() } > > -function s_ROUTER_IN_DNAT(): Intern<Stage> { Stage{Ingress, 6, i"lr_in_dnat"}.intern() } > > -function s_ROUTER_IN_ECMP_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"lr_in_ecmp_stateful"}.intern() } > > -function s_ROUTER_IN_ND_RA_OPTIONS(): Intern<Stage> { Stage{Ingress, 8, i"lr_in_nd_ra_options"}.intern() } > > -function s_ROUTER_IN_ND_RA_RESPONSE(): Intern<Stage> { Stage{Ingress, 9, i"lr_in_nd_ra_response"}.intern() } > > -function s_ROUTER_IN_IP_ROUTING(): Intern<Stage> { Stage{Ingress, 10, i"lr_in_ip_routing"}.intern() } > > -function s_ROUTER_IN_IP_ROUTING_ECMP(): Intern<Stage> { Stage{Ingress, 11, i"lr_in_ip_routing_ecmp"}.intern() } > > -function s_ROUTER_IN_POLICY(): Intern<Stage> { Stage{Ingress, 12, i"lr_in_policy"}.intern() } > > -function s_ROUTER_IN_POLICY_ECMP(): Intern<Stage> { Stage{Ingress, 13, i"lr_in_policy_ecmp"}.intern() } > > -function s_ROUTER_IN_ARP_RESOLVE(): Intern<Stage> { Stage{Ingress, 14, i"lr_in_arp_resolve"}.intern() } > > -function s_ROUTER_IN_CHK_PKT_LEN(): Intern<Stage> { Stage{Ingress, 15, i"lr_in_chk_pkt_len"}.intern() } > > -function s_ROUTER_IN_LARGER_PKTS(): Intern<Stage> { Stage{Ingress, 16, i"lr_in_larger_pkts"}.intern() } > > -function s_ROUTER_IN_GW_REDIRECT(): Intern<Stage> { Stage{Ingress, 17, i"lr_in_gw_redirect"}.intern() } > > -function s_ROUTER_IN_ARP_REQUEST(): Intern<Stage> { 
Stage{Ingress, 18, i"lr_in_arp_request"}.intern() } > > - > > -/* Logical router egress stages. */ > > -function s_ROUTER_OUT_UNDNAT(): Intern<Stage> { Stage{ Egress, 0, i"lr_out_undnat"}.intern() } > > -function s_ROUTER_OUT_POST_UNDNAT(): Intern<Stage> { Stage{ Egress, 1, i"lr_out_post_undnat"}.intern() } > > -function s_ROUTER_OUT_SNAT(): Intern<Stage> { Stage{ Egress, 2, i"lr_out_snat"}.intern() } > > -function s_ROUTER_OUT_EGR_LOOP(): Intern<Stage> { Stage{ Egress, 3, i"lr_out_egr_loop"}.intern() } > > -function s_ROUTER_OUT_DELIVERY(): Intern<Stage> { Stage{ Egress, 4, i"lr_out_delivery"}.intern() } > > - > > -/* > > - * OVS register usage: > > - * > > - * Logical Switch pipeline: > > - * +----+----------------------------------------------+---+------------------+ > > - * | R0 | REGBIT_{CONNTRACK/DHCP/DNS} | | | > > - * | | REGBIT_{HAIRPIN/HAIRPIN_REPLY} | | | > > - * | | REGBIT_ACL_LABEL | X | | > > - * +----+----------------------------------------------+ X | | > > - * | R1 | ORIG_DIP_IPV4 (>= IN_STATEFUL) | R | | > > - * +----+----------------------------------------------+ E | | > > - * | R2 | ORIG_TP_DPORT (>= IN_STATEFUL) | G | | > > - * +----+----------------------------------------------+ 0 | | > > - * | R3 | ACL_LABEL | | | > > - * +----+----------------------------------------------+---+------------------+ > > - * | R4 | UNUSED | | | > > - * +----+----------------------------------------------+ X | ORIG_DIP_IPV6 | > > - * | R5 | UNUSED | X | (>= IN_STATEFUL) | > > - * +----+----------------------------------------------+ R | | > > - * | R6 | UNUSED | E | | > > - * +----+----------------------------------------------+ G | | > > - * | R7 | UNUSED | 1 | | > > - * +----+----------------------------------------------+---+------------------+ > > - * | R8 | UNUSED | > > - * +----+----------------------------------------------+ > > - * | R9 | UNUSED | > > - * +----+----------------------------------------------+ > > - * > > - * Logical Router pipeline: > > 
- * +-----+--------------------------+---+-----------------+---+---------------+ > > - * | R0 | REGBIT_ND_RA_OPTS_RESULT | | | | | > > - * | | (= IN_ND_RA_OPTIONS) | X | | | | > > - * | | NEXT_HOP_IPV4 | R | | | | > > - * | | (>= IP_INPUT) | E | INPORT_ETH_ADDR | X | | > > - * +-----+--------------------------+ G | (< IP_INPUT) | X | | > > - * | R1 | SRC_IPV4 for ARP-REQ | 0 | | R | | > > - * | | (>= IP_INPUT) | | | E | NEXT_HOP_IPV6 | > > - * +-----+--------------------------+---+-----------------+ G | (>= DEFRAG) | > > - * | R2 | UNUSED | X | | 0 | | > > - * | | | R | | | | > > - * +-----+--------------------------+ E | UNUSED | | | > > - * | R3 | UNUSED | G | | | | > > - * | | | 1 | | | | > > - * +-----+--------------------------+---+-----------------+---+---------------+ > > - * | R4 | UNUSED | X | | | | > > - * | | | R | | | | > > - * +-----+--------------------------+ E | UNUSED | X | | > > - * | R5 | UNUSED | G | | X | | > > - * | | | 2 | | R |SRC_IPV6 for NS| > > - * +-----+--------------------------+---+-----------------+ E | (>= | > > - * | R6 | UNUSED | X | | G | IN_IP_ROUTING)| > > - * | | | R | | 1 | | > > - * +-----+--------------------------+ E | UNUSED | | | > > - * | R7 | UNUSED | G | | | | > > - * | | | 3 | | | | > > - * +-----+--------------------------+---+-----------------+---+---------------+ > > - * | R8 | ECMP_GROUP_ID | | | > > - * | | ECMP_MEMBER_ID | X | | > > - * +-----+--------------------------+ R | | > > - * | | REGBIT_{ | E | | > > - * | | EGRESS_LOOPBACK/ | G | UNUSED | > > - * | R9 | PKT_LARGER/ | 4 | | > > - * | | LOOKUP_NEIGHBOR_RESULT/| | | > > - * | | SKIP_LOOKUP_NEIGHBOR} | | | > > - * | | | | | > > - * | | REG_ORIG_TP_DPORT_ROUTER | | | > > - * | | | | | > > - * +-----+--------------------------+---+-----------------+ > > - * > > - */ > > - > > -/* Register definitions specific to routers. 
*/ > > -function rEG_NEXT_HOP(): istring = i"reg0" /* reg0 for IPv4, xxreg0 for IPv6 */ > > -function rEG_SRC(): istring = i"reg1" /* reg1 for IPv4, xxreg1 for IPv6 */ > > - > > -/* Register definitions specific to switches. */ > > -function rEGBIT_CONNTRACK_DEFRAG() : istring = i"reg0[0]" > > -function rEGBIT_CONNTRACK_COMMIT() : istring = i"reg0[1]" > > -function rEGBIT_CONNTRACK_NAT() : istring = i"reg0[2]" > > -function rEGBIT_DHCP_OPTS_RESULT() : istring = i"reg0[3]" > > -function rEGBIT_DNS_LOOKUP_RESULT(): istring = i"reg0[4]" > > -function rEGBIT_ND_RA_OPTS_RESULT(): istring = i"reg0[5]" > > -function rEGBIT_HAIRPIN() : istring = i"reg0[6]" > > -function rEGBIT_ACL_HINT_ALLOW_NEW(): istring = i"reg0[7]" > > -function rEGBIT_ACL_HINT_ALLOW() : istring = i"reg0[8]" > > -function rEGBIT_ACL_HINT_DROP() : istring = i"reg0[9]" > > -function rEGBIT_ACL_HINT_BLOCK() : istring = i"reg0[10]" > > -function rEGBIT_LKUP_FDB() : istring = i"reg0[11]" > > -function rEGBIT_HAIRPIN_REPLY() : istring = i"reg0[12]" > > -function rEGBIT_ACL_LABEL() : istring = i"reg0[13]" > > - > > -function rEG_ORIG_DIP_IPV4() : istring = i"reg1" > > -function rEG_ORIG_DIP_IPV6() : istring = i"xxreg1" > > -function rEG_ORIG_TP_DPORT() : istring = i"reg2[0..15]" > > - > > -/* Register definitions for switches and routers. */ > > - > > -/* Indicate that this packet has been recirculated using egress > > - * loopback. This allows certain checks to be bypassed, such as a > > -* logical router dropping packets with source IP address equals > > -* one of the logical router's own IP addresses. */ > > -function rEGBIT_EGRESS_LOOPBACK() : istring = i"reg9[0]" > > -/* Register to store the result of check_pkt_larger action. 
*/ > > -function rEGBIT_PKT_LARGER() : istring = i"reg9[1]" > > -function rEGBIT_LOOKUP_NEIGHBOR_RESULT() : istring = i"reg9[2]" > > -function rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() : istring = i"reg9[3]" > > - > > -/* Register to store the eth address associated to a router port for packets > > - * received in S_ROUTER_IN_ADMISSION. > > - */ > > -function rEG_INPORT_ETH_ADDR() : istring = i"xreg0[0..47]" > > - > > -/* Register for ECMP bucket selection. */ > > -function rEG_ECMP_GROUP_ID() : istring = i"reg8[0..15]" > > -function rEG_ECMP_MEMBER_ID() : istring = i"reg8[16..31]" > > - > > -function rEG_ORIG_TP_DPORT_ROUTER() : string = "reg9[16..31]" > > - > > -/* Register used for setting a label for ACLs in a Logical Switch. */ > > -function rEG_LABEL() : istring = i"reg3" > > - > > -function fLAGBIT_NOT_VXLAN() : istring = i"flags[1] == 0" > > - > > -function mFF_N_LOG_REGS() : bit<32> = 10 > > - > > -/* > > - * Generating sb::Logical_Flow and sb::Logical_DP_Group. > > - * > > - * Some logical flows occur in multiple logical datapaths. These can > > - * be represented two ways: either as multiple Logical_Flow records > > - * (each with logical_datapath set appropriately) or as a single > > - * Logical_Flow record that points to a Logical_DP_Group record that > > - * lists all the datapaths it's in. (It would be possible to mix or > > - * duplicate these methods, but we don't do that.) We have to support > > - * both: > > - * > > - * - There's a setting "use_logical_dp_groups" that globally > > - * enables or disables this feature. 
> > - */ > > - > > -relation Flow( > > - logical_datapath: uuid, > > - stage: Intern<Stage>, > > - priority: integer, > > - __match: istring, > > - actions: istring, > > - io_port: Option<istring>, > > - controller_meter: Option<istring>, > > - stage_hint: bit<32> > > -) > > - > > -function stage_hint(_uuid: uuid): bit<32> { > > - _uuid[127:96] > > -} > > - > > -/* If this option is 'true' northd will combine logical flows that differ by > > - * logical datapath only by creating a datapath group. */ > > -relation UseLogicalDatapathGroups[bool] > > -UseLogicalDatapathGroups[use_logical_dp_groups] :- > > - nb in nb::NB_Global(), > > - var use_logical_dp_groups = nb.options.get_bool_def(i"use_logical_dp_groups", true). > > -UseLogicalDatapathGroups[false] :- > > - Unit(), > > - not nb in nb::NB_Global(). > > - > > -relation AggregatedFlow ( > > - logical_datapaths: Set<uuid>, > > - stage: Intern<Stage>, > > - priority: integer, > > - __match: istring, > > - actions: istring, > > - io_port: Option<istring>, > > - controller_meter: Option<istring>, > > - stage_hint: bit<32> > > -) > > -function make_flow_tags(io_port: Option<istring>): Map<istring,istring> { > > - match (io_port) { > > - None -> map_empty(), > > - Some{s} -> [ i"in_out_port" -> s ] > > - } > > -} > > -function make_flow_external_ids(stage_hint: bit<32>, stage: Intern<Stage>): Map<istring,istring> { > > - if (stage_hint == 0) { > > - [i"stage-name" -> stage.table_name] > > - } else { > > - [i"stage-name" -> stage.table_name, > > - i"stage-hint" -> i"${hex(stage_hint)}"] > > - } > > -} > > -AggregatedFlow(.logical_datapaths = g.to_set(), > > - .stage = stage, > > - .priority = priority, > > - .__match = __match, > > - .actions = actions, > > - .io_port = io_port, > > - .controller_meter = controller_meter, > > - .stage_hint = stage_hint) :- > > - UseLogicalDatapathGroups[true], > > - Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint), > > - var g = 
logical_datapath.group_by((stage, priority, __match, actions, io_port, controller_meter, stage_hint)). > > - > > - > > -AggregatedFlow(.logical_datapaths = set_singleton(logical_datapath), > > - .stage = stage, > > - .priority = priority, > > - .__match = __match, > > - .actions = actions, > > - .io_port = io_port, > > - .controller_meter = controller_meter, > > - .stage_hint = stage_hint) :- > > - UseLogicalDatapathGroups[false], > > - Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint). > > - > > - > > -function to_istring(pipeline: Pipeline): istring { > > - if (pipeline == Ingress) { > > - i"ingress" > > - } else { > > - i"egress" > > - } > > -} > > - > > -for (f in AggregatedFlow()) { > > - if (f.logical_datapaths.size() == 1) { > > - Some{var dp} = f.logical_datapaths.nth(0) in > > - sb::Out_Logical_Flow( > > - ._uuid = hash128((dp, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)), > > - .logical_datapath = Some{dp}, > > - .logical_dp_group = None, > > - .pipeline = f.stage.pipeline.to_istring(), > > - .table_id = f.stage.table_id as integer, > > - .priority = f.priority, > > - .controller_meter = f.controller_meter, > > - .__match = f.__match, > > - .actions = f.actions, > > - .tags = make_flow_tags(f.io_port), > > - .external_ids = make_flow_external_ids(f.stage_hint, f.stage)) > > - } else { > > - var group_uuid = hash128(f.logical_datapaths) in { > > - sb::Out_Logical_Flow( > > - ._uuid = hash128((group_uuid, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)), > > - .logical_datapath = None, > > - .logical_dp_group = Some{group_uuid}, > > - .pipeline = f.stage.pipeline.to_istring(), > > - .table_id = f.stage.table_id as integer, > > - .priority = f.priority, > > - .controller_meter = f.controller_meter, > > - .__match = f.__match, > > - .actions = f.actions, > > - .tags = make_flow_tags(f.io_port), > > - .external_ids = 
make_flow_external_ids(f.stage_hint, f.stage)); > > - sb::Out_Logical_DP_Group(._uuid = group_uuid, .datapaths = f.logical_datapaths) > > - } > > - } > > -} > > - > > -/* Logical flows for forwarding groups. */ > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 50, > > - .__match = __match, > > - .actions = actions, > > - .stage_hint = stage_hint(fg_uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - sw in &Switch(), > > - &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups), > > - var fg_uuid = FlatMap(forwarding_groups), > > - fg in nb::Forwarding_Group(._uuid = fg_uuid), > > - not fg.child_port.is_empty(), > > - var __match = i"arp.tpa == ${fg.vip} && arp.op == 1", > > - var actions = i"eth.dst = eth.src; " > > - "eth.src = ${fg.vmac}; " > > - "arp.op = 2; /* ARP reply */ " > > - "arp.tha = arp.sha; " > > - "arp.sha = ${fg.vmac}; " > > - "arp.tpa = arp.spa; " > > - "arp.spa = ${fg.vip}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output;". 
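The datapath-group aggregation removed above (UseLogicalDatapathGroups / AggregatedFlow) boils down to grouping identical flows by everything except the datapath, then emitting one record per group. A minimal Python sketch of that idea, with hypothetical names (`Flow`, `group_flows`) rather than anything from the actual northd code:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    logical_datapath: str
    stage: str
    priority: int
    match: str
    actions: str

def group_flows(flows, use_logical_dp_groups=True):
    """Combine flows that differ only by logical datapath: one record per
    (stage, priority, match, actions) key, carrying the set of datapaths
    it applies to (the Logical_DP_Group idea)."""
    if not use_logical_dp_groups:
        # Feature disabled: every flow keeps its own singleton datapath set.
        return [({f.logical_datapath}, f) for f in flows]
    groups = defaultdict(set)
    for f in flows:
        groups[(f.stage, f.priority, f.match, f.actions)].add(f.logical_datapath)
    return [(dps, Flow(min(dps), *key)) for key, dps in groups.items()]

flows = [
    Flow("dp1", "ls_in_pre_acl", 0, "1", "next;"),
    Flow("dp2", "ls_in_pre_acl", 0, "1", "next;"),
]
grouped = group_flows(flows)
# Two identical flows collapse into one aggregated record for {dp1, dp2}.
```

In the removed DDlog, a group of size one becomes a plain Logical_Flow with `logical_datapath` set; a larger group becomes a Logical_Flow pointing at a Logical_DP_Group row, never both.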
> > - > > -function escape_child_ports(child_port: Set<istring>): string { > > - var escaped = vec_with_capacity(child_port.size()); > > - for (s in child_port) { > > - escaped.push(json_escape(s)) > > - }; > > - escaped.join(",") > > -} > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 50, > > - .__match = __match, > > - .actions = actions.intern(), > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - sw in &Switch(), > > - &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups), > > - var fg_uuid = FlatMap(forwarding_groups), > > - fg in nb::Forwarding_Group(._uuid = fg_uuid), > > - not fg.child_port.is_empty(), > > - var __match = i"eth.dst == ${fg.vmac}", > > - var actions = "fwd_group(" ++ > > - if (fg.liveness) { "liveness=\"true\"," } else { "" } ++ > > - "childports=" ++ escape_child_ports(fg.child_port) ++ ");". > > - > > -/* Logical switch ingress table PORT_SEC_L2: admission control framework > > - * (priority 100) */ > > -for (sw in &Switch()) { > > - if (not sw.is_vlan_transparent) { > > - /* Block logical VLANs. */ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_L2(), > > - .priority = 100, > > - .__match = i"vlan.present", > > - .actions = i"drop;", > > - .stage_hint = 0 /*TODO: check*/, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Broadcast/multicast source address is invalid */ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_L2(), > > - .priority = 100, > > - .__match = i"eth.src[40]", > > - .actions = i"drop;", > > - .stage_hint = 0 /*TODO: check*/, > > - .io_port = None, > > - .controller_meter = None) > > - /* Port security flows have priority 50 (see below) and will continue to the next table > > - if packet source is acceptable. 
*/ > > -} > > - > > -// space-separated set of strings > > -function join(strings: Set<string>, sep: string): string { > > - strings.to_vec().join(sep) > > -} > > - > > -function build_port_security_ipv6_flow( > > - pipeline: Pipeline, > > - ea: eth_addr, > > - ipv6_addrs: Vec<ipv6_netaddr>): string = > > -{ > > - var ip6_addrs = vec_empty(); > > - > > - /* Allow link-local address. */ > > - ip6_addrs.push(ea.to_ipv6_lla().string_mapped()); > > - > > - /* Allow ip6.dst=ff00::/8 for multicast packets */ > > - if (pipeline == Egress) { > > - ip6_addrs.push("ff00::/8") > > - }; > > - for (addr in ipv6_addrs) { > > - ip6_addrs.push(addr.match_network()) > > - }; > > - > > - var dir = if (pipeline == Ingress) { "src" } else { "dst" }; > > - " && ip6.${dir} == {" ++ ip6_addrs.join(", ") ++ "}" > > -} > > - > > -function build_port_security_ipv6_nd_flow( > > - ea: eth_addr, > > - ipv6_addrs: Vec<ipv6_netaddr>): string = > > -{ > > - var __match = " && ip6 && nd && ((nd.sll == ${eth_addr_zero()} || " > > - "nd.sll == ${ea}) || ((nd.tll == ${eth_addr_zero()} || " > > - "nd.tll == ${ea})"; > > - if (ipv6_addrs.is_empty()) { > > - __match ++ "))" > > - } else { > > - __match = __match ++ " && (nd.target == ${ea.to_ipv6_lla()}"; > > - > > - for(addr in ipv6_addrs) { > > - __match = __match ++ " || nd.target == ${addr.match_network()}" > > - }; > > - __match ++ ")))" > > - } > > -} > > - > > -/* Pre-ACL */ > > -for (&Switch(._uuid =ls_uuid)) { > > - /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are > > - * allowed by default. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 110, > > - .__match = i"eth.dst == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 110, > > - .__match = i"eth.src == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* stateless filters always take precedence over stateful ACLs. 
*/ > > -for (&SwitchACL(.sw = sw@&Switch{._uuid = ls_uuid}, .acl = acl, .has_fair_meter = fair_meter)) { > > - if (sw.has_stateful_acl) { > > - if (acl.action == i"allow-stateless") { > > - if (acl.direction == i"from-lport") { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = acl.__match, > > - .actions = i"next;", > > - .stage_hint = stage_hint(acl._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } else { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = acl.__match, > > - .actions = i"next;", > > - .stage_hint = stage_hint(acl._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > - } > > -} > > - > > -/* If there are any stateful ACL rules in this datapath, we must > > - * send all IP packets through the conntrack action, which handles > > - * defragmentation, in order to match L4 headers. */ > > - > > -for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"router"}, > > - .json_name = lsp_name, > > - .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) { > > - /* Can't use ct() for router ports. Consider the > > - * following configuration: lp1(10.0.0.2) on > > - * hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a > > - * ping from lp1 to lp2, First, the response will go > > - * through ct() with a zone for lp2 in the ls2 ingress > > - * pipeline on hostB. That ct zone knows about this > > - * connection. Next, it goes through ct() with the zone > > - * for the router port in the egress pipeline of ls2 on > > - * hostB. This zone does not know about the connection, > > - * as the icmp request went through the logical router > > - * on hostA, not hostB. This would only work with > > - * distributed conntrack state across all chassis. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 110, > > - .__match = i"ip && inport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 110, > > - .__match = i"ip && outport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"localnet"}, > > - .json_name = lsp_name, > > - .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 110, > > - .__match = i"ip && inport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 110, > > - .__match = i"ip && outport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -for (&Switch(._uuid = ls_uuid, .has_stateful_acl = true)) { > > - /* Ingress and Egress Pre-ACL Table (Priority 110). > > - * > > - * Not to do conntrack on ND and ICMP destination > > - * unreachable packets. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 110, > > - .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || " > > - "(udp && udp.src == 546 && udp.dst == 547)", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 110, > > - .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || " > > - "(udp && udp.src == 546 && udp.dst == 547)", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Ingress and Egress Pre-ACL Table (Priority 100). > > - * > > - * Regardless of whether the ACL is "from-lport" or "to-lport", > > - * we need rules in both the ingress and egress table, because > > - * the return traffic needs to be followed. > > - * > > - * 'REGBIT_CONNTRACK_DEFRAG' is set to let the pre-stateful table send > > - * it to conntrack for tracking and defragmentation. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_ACL(), > > - .priority = 100, > > - .__match = i"ip", > > - .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_ACL(), > > - .priority = 100, > > - .__match = i"ip", > > - .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Pre-LB */ > > -for (&Switch(._uuid = ls_uuid)) { > > - /* Do not send ND packets to conntrack */ > > - var __match = i"nd || nd_rs || nd_ra || mldv1 || mldv2" in { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 110, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_LB(), > > - .priority = 110, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Do not send service monitor packets to conntrack. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 110, > > - .__match = i"eth.dst == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_LB(), > > - .priority = 110, > > - .__match = i"eth.src == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Allow all packets to go to next tables by default. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_LB(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -for (&SwitchPort(.lsp = lsp, .json_name = lsp_name, .sw = &Switch{._uuid = ls_uuid})) > > -if (lsp.__type == i"router" or lsp.__type == i"localnet") { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 110, > > - .__match = i"ip && inport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_LB(), > > - .priority = 110, > > - .__match = i"ip && outport == ${lsp_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -/* Empty LoadBalancer Controller event */ > > -function build_empty_lb_event_flow(key: istring, lb: Intern<nb::Load_Balancer>): Option<(istring, istring)> { > > - (var ip, var port) = match (ip_address_and_port_from_lb_key(key.ival())) { > > - Some{(ip, port)} -> (ip, port), > > - _ -> return None > > - }; > > - > > - var protocol = if (lb.protocol == Some{i"tcp"}) { "tcp" } else { "udp" }; > > - var vip = match (port) { > > - 0 -> "${ip}", > > - _ -> "${ip.to_bracketed_string()}:${port}" > > - }; > > - > > - var __match = vec_with_capacity(2); > > - __match.push("${ip.ipX()}.dst == ${ip}"); > > - if (port != 0) { > > - __match.push("${protocol}.dst == ${port}"); > > - }; > > - > > - var action = i"trigger_event(" > > - "event = \"empty_lb_backends\", " > > - "vip = 
\"${vip}\", " > > - "protocol = \"${protocol}\", " > > - "load_balancer = \"${uuid2str(lb._uuid)}\");"; > > - > > - Some{(__match.join(" && ").intern(), action)} > > -} > > - > > -/* Contains the load balancers for which an event should be sent each time it > > - * runs out of backends. > > - * > > - * The preferred way to do this by setting an individual Load_Balancer's > > - * options:event=true. > > - * > > - * The deprecated way is to set nb::NB_Global options:controller_event=true, > > - * which enables events for every load balancer. > > - */ > > -relation LoadBalancerEmptyEvents(lb_uuid: uuid) > > -LoadBalancerEmptyEvents(lb_uuid) :- > > - nb::NB_Global(.options = global_options), > > - var global_events = global_options.get_bool_def(i"controller_event", false), > > - &nb::Load_Balancer(._uuid = lb_uuid, .options = local_options), > > - var local_events = local_options.get_bool_def(i"event", false), > > - global_events or local_events. > > - > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 130, > > - .__match = __match, > > - .actions = __action, > > - .io_port = None, > > - .controller_meter = sw.copp.get(cOPP_EVENT_ELB()), > > - .stage_hint = stage_hint(lb._uuid)) :- > > - SwitchLBVIP(.sw_uuid = sw_uuid, .lb = lb, .vip = vip, .backends = backends), > > - LoadBalancerEmptyEvents(lb._uuid), > > - not lb.options.get_bool_def(i"reject", false), > > - sw in &Switch(._uuid = sw_uuid), > > - backends == i"", > > - Some {(var __match, var __action)} = build_empty_lb_event_flow(vip, lb). > > - > > -/* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send > > - * packet to conntrack for defragmentation. > > - * > > - * Send all the packets to conntrack in the ingress pipeline if the > > - * logical switch has a load balancer with VIP configured. Earlier > > - * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress pipeline > > - * if the IP destination matches the VIP. 
But this causes a few issues when > > - * a logical switch has no ACLs configured with allow-related. > > - * To understand the issue, let's take a TCP load balancer - > > - * 10.0.0.10:80=10.0.0.3:80. > > - * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection with > > - * the VIP - 10.0.0.10, then the packet in the ingress pipeline of 'p1' > > - * is sent to the p1's conntrack zone id and the packet is load balanced > > - * to the backend - 10.0.0.3. For the reply packet from the backend lport, > > - * it is not sent to the conntrack of backend lport's zone id. This is fine > > - * as long as the packet is valid. Suppose the backend lport sends an > > - * invalid TCP packet (like incorrect sequence number), the packet gets > > - * delivered to the lport 'p1' without unDNATing the packet to the > > - * VIP - 10.0.0.10. And this causes the connection to be reset by the > > - * lport p1's VIF. > > - * > > - * We can't fix this issue by adding a logical flow to drop ct.inv packets > > - * in the egress pipeline since it will drop all other connections not > > - * destined to the load balancers. > > - * > > - * To fix this issue, we send all the packets to the conntrack in the > > - * ingress pipeline if a load balancer is configured. We can now > > - * add a lflow to drop ct.inv packets. > > - */ > > -for (sw in &Switch(.has_lb_vip = true)) { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PRE_LB(), > > - .priority = 100, > > - .__match = i"ip", > > - .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PRE_LB(), > > - .priority = 100, > > - .__match = i"ip", > > - .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Pre-stateful */ > > -relation LbProtocol[string] > > -LbProtocol["tcp"]. 
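The `build_empty_lb_event_flow` helper removed earlier in this hunk turns a load-balancer VIP key into a match expression plus a `trigger_event` action for the priority-130 flow. An IPv4-only Python sketch of that parsing (the real DDlog also handles bracketed IPv6 keys and interned strings; the names here are illustrative):

```python
def build_empty_lb_event_flow(key, protocol="tcp", lb_uuid="<uuid>"):
    """Turn an IPv4 VIP key like "10.0.0.10:80" (or a bare "10.0.0.10",
    meaning port 0) into the match and trigger_event action strings."""
    ip, _, port = key.partition(":")
    match = [f"ip4.dst == {ip}"]
    if port:  # port 0 / absent -> match on IP only
        match.append(f"{protocol}.dst == {port}")
    action = (f'trigger_event(event = "empty_lb_backends", '
              f'vip = "{key}", protocol = "{protocol}", '
              f'load_balancer = "{lb_uuid}");')
    return " && ".join(match), action

m, a = build_empty_lb_event_flow("10.0.0.10:80")
# m: 'ip4.dst == 10.0.0.10 && tcp.dst == 80'
```

The flow is only generated for VIPs whose backend list is empty and whose load balancer has events enabled, as the LoadBalancerEmptyEvents rule above shows.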
> > -LbProtocol["udp"]. > > -LbProtocol["sctp"]. > > -for (&Switch(._uuid = ls_uuid)) { > > - /* Ingress and Egress pre-stateful Table (Priority 0): Packets are > > - * allowed by default. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If rEGBIT_CONNTRACK_NAT() is set as 1, then packets should just be sent > > - * through nat (without committing). > > - * > > - * rEGBIT_CONNTRACK_COMMIT() is set for new connections and > > - * rEGBIT_CONNTRACK_NAT() is set for established connections. So they > > - * don't overlap. > > - * > > - * In the ingress pipeline, also store the original destination IP and > > - * transport port to be used when detecting hairpin packets. 
> > - */ > > - for (LbProtocol[protocol]) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > > - .priority = 120, > > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip4 && ${protocol}", > > - .actions = i"${rEG_ORIG_DIP_IPV4()} = ip4.dst; " > > - "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > > - .priority = 120, > > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip6 && ${protocol}", > > - .actions = i"${rEG_ORIG_DIP_IPV6()} = ip6.dst; " > > - "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > > - .priority = 110, > > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1", > > - .actions = i"ct_lb;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > > - .priority = 110, > > - .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1", > > - .actions = i"ct_lb;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If rEGBIT_CONNTRACK_DEFRAG() is set as 1, then the packets should be > > - * sent to conntrack for tracking and defragmentation. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", > > - .actions = i"ct_next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PRE_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", > > - .actions = i"ct_next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -function acl_log_meter_name(meter_name: istring, acl_uuid: uuid): string = > > -{ > > - "${meter_name}__${uuid2str(acl_uuid)}" > > -} > > - > > -function build_acl_log(acl: Intern<nb::ACL>, fair_meter: bool): string = > > -{ > > - if (not acl.log) { > > - "" > > - } else { > > - var strs = vec_empty(); > > - match (acl.name) { > > - None -> (), > > - Some{name} -> strs.push("name=${json_escape(name)}") > > - }; > > - /* If a severity level isn't specified, default to "info". 
*/ > > - match (acl.severity) { > > - None -> strs.push("severity=info"), > > - Some{severity} -> strs.push("severity=${severity}") > > - }; > > - match (acl.action.ival()) { > > - "drop" -> { > > - strs.push("verdict=drop") > > - }, > > - "reject" -> { > > - strs.push("verdict=reject") > > - }, > > - "allow" -> { > > - strs.push("verdict=allow") > > - }, > > - "allow-related" -> { > > - strs.push("verdict=allow") > > - }, > > - "allow-stateless" -> { > > - strs.push("verdict=allow") > > - }, > > - _ -> () > > - }; > > - match (acl.meter) { > > - Some{meter} -> { > > - var name = match (fair_meter) { > > - true -> acl_log_meter_name(meter, acl._uuid), > > - false -> meter.ival() > > - }; > > - strs.push("meter=${json_escape(name)}") > > - }, > > - None -> () > > - }; > > - "log(${strs.join(\", \")}); " > > - } > > -} > > - > > -/* Due to various hard-coded priorities need to implement ACLs, the > > - * northbound database supports a smaller range of ACL priorities than > > - * are available to logical flows. This value is added to an ACL > > - * priority to determine the ACL's logical flow priority. */ > > -function oVN_ACL_PRI_OFFSET(): integer = 1000 > > - > > -/* Intermediate relation that stores reject ACLs. > > - * The following rules generate logical flows for these ACLs. 
> > - */ > > -relation Reject( > > - lsuuid: uuid, > > - pipeline: Pipeline, > > - stage: Intern<Stage>, > > - acl: Intern<nb::ACL>, > > - fair_meter: bool, > > - controller_meter: Option<istring>, > > - extra_match: istring, > > - extra_actions: istring) > > - > > -/* build_reject_acl_rules() */ > > -function next_to_stage(stage: Intern<Stage>): string { > > - var pipeline = match (stage.pipeline) { > > - Ingress -> "ingress", > > - Egress -> "egress" > > - }; > > - "next(pipeline=${pipeline},table=${stage.table_id})" > > -} > > -for (Reject(lsuuid, pipeline, stage, acl, fair_meter, controller_meter, > > - extra_match_, extra_actions_)) { > > - var extra_match = if (extra_match_ == i"") { "" } else { "(${extra_match_}) && " } in > > - var extra_actions = if (extra_actions_ == i"") { "" } else { "${extra_actions_} " } in > > - var next_stage = match (pipeline) { > > - Ingress -> s_SWITCH_OUT_QOS_MARK(), > > - Egress -> s_SWITCH_IN_L2_LKUP() > > - } in > > - var acl_log = build_acl_log(acl, fair_meter) in > > - var __match = extra_match ++ acl.__match in > > - var actions = acl_log ++ extra_actions ++ "reg0 = 0; " > > - "reject { " > > - "/* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. 
*/ " > > - "outport <-> inport; ${next_to_stage(next_stage)}; };" in > > - Flow(.logical_datapath = lsuuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = __match.intern(), > > - .actions = actions.intern(), > > - .io_port = None, > > - .controller_meter = controller_meter, > > - .stage_hint = stage_hint(acl._uuid)) > > -} > > - > > -/* build_acls */ > > -for (UseCtInvMatch[use_ct_inv_match]) { > > - (var ct_inv_or, var and_not_ct_inv) = match (use_ct_inv_match) { > > - true -> ("ct.inv || ", "&& !ct.inv "), > > - false -> ("", ""), > > - } in > > - for (sw in &Switch(._uuid = ls_uuid)) > > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > > - { > > - /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by > > - * default. If the logical switch has no ACLs or no load balancers, > > - * then add 65535-priority flow to advance the packet to next > > - * stage. > > - * > > - * A related rule at priority 1 is added below if there > > - * are any stateful ACLs in this datapath. */ > > - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } > > - in > > - { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = priority, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = priority, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - if (has_stateful) { > > - /* Ingress and Egress ACL Table (Priority 1). > > - * > > - * By default, traffic is allowed. This is partially handled by > > - * the Priority 0 ACL flows added earlier, but we also need to > > - * commit IP flows. 
This is because, while the initiator's > > - * direction may not have any stateful rules, the server's may > > - * and then its return traffic would not have an associated > > - * conntrack entry and would return "+invalid". > > - * > > - * We use "ct_commit" for a connection that is not already known > > - * by the connection tracker. Once a connection is committed, > > - * subsequent packets will hit the flow at priority 0 that just > > - * uses "next;" > > - * > > - * We also check for established connections that have ct_label.blocked > > - * set on them. That's a connection that was disallowed, but is > > - * now allowed by policy again since it hit this default-allow flow. > > - * We need to set ct_label.blocked=0 to let the connection continue, > > - * which will be done by ct_commit() in the "stateful" stage. > > - * Subsequent packets will hit the flow at priority 0 that just > > - * uses "next;". */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 1, > > - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", > > - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 1, > > - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", > > - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Ingress and Egress ACL Table (Priority 65532). > > - * > > - * Always drop traffic that's in an invalid state. Also drop > > - * reply direction packets for connections that have been marked > > - * for deletion (bit 0 of ct_label is set). > > - * > > - * This is enforced at a higher priority than ACLs can be defined. 
*/ > > - var __match = (ct_inv_or ++ "(ct.est && ct.rpl && ct_label.blocked == 1)").intern() in { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Ingress and Egress ACL Table (Priority 65532). > > - * > > - * Allow reply traffic that is part of an established > > - * conntrack entry that has not been marked for deletion > > - * (bit 0 of ct_label). We only match traffic in the > > - * reply direction because we want traffic in the request > > - * direction to hit the currently defined policy from ACLs. > > - * > > - * This is enforced at a higher priority than ACLs can be defined. */ > > - var __match = ("ct.est && !ct.rel && !ct.new " ++ and_not_ct_inv ++ > > - "&& ct.rpl && ct_label.blocked == 0").intern() in { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Ingress and Egress ACL Table (Priority 65532). > > - * > > - * Allow traffic that is related to an existing conntrack entry that > > - * has not been marked for deletion (bit 0 of ct_label). > > - * > > - * This is enforced at a higher priority than ACLs can be defined. 
> > - * > > - * NOTE: This does not support related data sessions (eg, > > - * a dynamically negotiated FTP data channel), but will allow > > - * related traffic such as an ICMP Port Unreachable through > > - * that's generated from a non-listening UDP port. */ > > - var __match = ("!ct.est && ct.rel && !ct.new " ++ and_not_ct_inv ++ > > - "&& ct_label.blocked == 0").intern() in { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 65532, > > - .__match = __match, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Ingress and Egress ACL Table (Priority 65532). > > - * > > - * Not to do conntrack on ND packets. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 65532, > > - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 65532, > > - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Add a 34000 priority flow to advance the DNS reply from ovn-controller, > > - * if the CMS has configured DNS records for the datapath. 
> > - */ > > - if (sw.has_dns_records) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 34000, > > - .__match = i"udp.src == 53", > > - .actions = if has_stateful i"ct_commit; next;" else i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - if (sw.has_acls or sw.has_lb_vip) { > > - /* Add a 34000 priority flow to advance the service monitor reply > > - * packets to skip applying ingress ACLs. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_ACL(), > > - .priority = 34000, > > - .__match = i"eth.dst == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 34000, > > - .__match = i"eth.src == $svc_monitor_mac", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -/* This stage builds hints for the IN/OUT_ACL stage. Based on various > > - * combinations of ct flags packets may hit only a subset of the logical > > - * flows in the IN/OUT_ACL stage. > > - * > > - * Populating ACL hints first and storing them in registers simplifies > > - * the logical flow match expressions in the IN/OUT_ACL stage and > > - * generates less openflows. > > - * > > - * Certain combinations of ct flags might be valid matches for multiple > > - * types of ACL logical flows (e.g., allow/drop). In such cases hints > > - * corresponding to all potential matches are set. > > - */ > > -input relation AclHintStages[Intern<Stage>] > > -AclHintStages[s_SWITCH_IN_ACL_HINT()]. > > -AclHintStages[s_SWITCH_OUT_ACL_HINT()]. > > -for (sw in &Switch(._uuid = ls_uuid)) { > > - for (AclHintStages[stage]) { > > - /* In any case, advance to the next stage. 
*/ > > - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } in > > - Flow(ls_uuid, stage, priority, i"1", i"next;", None, None, 0) > > - }; > > - > > - for (AclHintStages[stage]) > > - if (sw.has_stateful_acl or sw.has_lb_vip) { > > - /* New, not already established connections, may hit either allow > > - * or drop ACLs. For allow ACLs, the connection must also be committed > > - * to conntrack so we set REGBIT_ACL_HINT_ALLOW_NEW. > > - */ > > - Flow(ls_uuid, stage, 7, i"ct.new && !ct.est", > > - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " > > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > > - "next;", None, None, 0); > > - > > - /* Already established connections in the "request" direction that > > - * are already marked as "blocked" may hit either: > > - * - allow ACLs for connections that were previously allowed by a > > - * policy that was deleted and is being readded now. In this case > > - * the connection should be recommitted so we set > > - * REGBIT_ACL_HINT_ALLOW_NEW. > > - * - drop ACLs. > > - */ > > - Flow(ls_uuid, stage, 6, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 1", > > - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " > > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > > - "next;", None, None, 0); > > - > > - /* Not tracked traffic can either be allowed or dropped. */ > > - Flow(ls_uuid, stage, 5, i"!ct.trk", > > - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " > > - "${rEGBIT_ACL_HINT_DROP()} = 1; " > > - "next;", None, None, 0); > > - > > - /* Already established connections in the "request" direction may hit > > - * either: > > - * - allow ACLs in which case the traffic should be allowed so we set > > - * REGBIT_ACL_HINT_ALLOW. > > - * - drop ACLs in which case the traffic should be blocked and the > > - * connection must be committed with ct_label.blocked set so we set > > - * REGBIT_ACL_HINT_BLOCK. 
> > - */ > > - Flow(ls_uuid, stage, 4, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 0", > > - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " > > - "${rEGBIT_ACL_HINT_BLOCK()} = 1; " > > - "next;", None, None, 0); > > - > > - /* Not established or established and already blocked connections may > > - * hit drop ACLs. > > - */ > > - Flow(ls_uuid, stage, 3, i"!ct.est", > > - i"${rEGBIT_ACL_HINT_DROP()} = 1; " > > - "next;", None, None, 0); > > - Flow(ls_uuid, stage, 2, i"ct.est && ct_label.blocked == 1", > > - i"${rEGBIT_ACL_HINT_DROP()} = 1; " > > - "next;", None, None, 0); > > - > > - /* Established connections that were previously allowed might hit > > - * drop ACLs in which case the connection must be committed with > > - * ct_label.blocked set. > > - */ > > - Flow(ls_uuid, stage, 1, i"ct.est && ct_label.blocked == 0", > > - i"${rEGBIT_ACL_HINT_BLOCK()} = 1; " > > - "next;", None, None, 0) > > - } > > -} > > - > > -/* Ingress or Egress ACL Table (Various priorities). */ > > -for (&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = fair_meter)) { > > - /* consider_acl */ > > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > > - var ingress = acl.direction == i"from-lport" in > > - var stage = if (ingress) { s_SWITCH_IN_ACL() } else { s_SWITCH_OUT_ACL() } in > > - var pipeline = if ingress Ingress else Egress in > > - var stage_hint = stage_hint(acl._uuid) in > > - var acl_log = build_acl_log(acl, fair_meter) in > > - var acl_match = acl.__match.intern() in > > - if (acl.action == i"allow" or acl.action == i"allow-related") { > > - /* If there are any stateful flows, we must even commit "allow" > > - * actions. This is because, while the initiater's > > - * direction may not have any stateful rules, the server's > > - * may and then its return traffic would not have an > > - * associated conntrack entry and would return "+invalid". 
*/ > > - if (not has_stateful) { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = acl.__match, > > - .actions = i"${acl_log}next;", > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - } else { > > - /* Commit the connection tracking entry if it's a new > > - * connection that matches this ACL. After this commit, > > - * the reply traffic is allowed by a flow we create at > > - * priority 65535, defined earlier. > > - * > > - * It's also possible that a known connection was marked for > > - * deletion after a policy was deleted, but the policy was > > - * re-added while that connection is still known. We catch > > - * that case here and un-set ct_label.blocked (which will be done > > - * by ct_commit in the "stateful" stage) to indicate that the > > - * connection should be allowed to resume. > > - * If the ACL has a label, then load REG_LABEL with the label and > > - * set the REGBIT_ACL_LABEL field. > > - */ > > - var __action = if (acl.label != 0) { > > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " > > - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" > > - } else { > > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${acl_log}next;" > > - } in Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = i"${rEGBIT_ACL_HINT_ALLOW_NEW()} == 1 && (${acl.__match})", > > - .actions = __action, > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Match on traffic in the request direction for an established > > - * connection tracking entry that has not been marked for > > - * deletion. We use this to ensure that this > > - * connection is still allowed by the currently defined > > - * policy. Match untracked packets too. > > - * Commit the connection only if the ACL has a label. 
This is done to > > - * update the connection tracking entry label in case the ACL > > - * allowing the connection changes. > > - */ > > - var __action = if (acl.label != 0) { > > - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " > > - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" > > - } else { > > - i"${acl_log}next;" > > - } in Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = i"${rEGBIT_ACL_HINT_ALLOW()} == 1 && (${acl.__match})", > > - .actions = __action, > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } else if (acl.action == i"allow-stateless") { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = acl.__match, > > - .actions = i"${acl_log}next;", > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - } else if (acl.action == i"drop" or acl.action == i"reject") { > > - /* The implementation of "drop" differs if stateful ACLs are in > > - * use for this datapath. In that case, the actions differ > > - * depending on whether the connection was previously committed > > - * to the connection tracker with ct_commit. */ > > - var controller_meter = sw.copp.get(cOPP_REJECT()) in > > - if (has_stateful) { > > - /* If the packet is not tracked or not part of an established > > - * connection, then we can simply reject/drop it. 
*/ > > - var __match = "${rEGBIT_ACL_HINT_DROP()} == 1" in > > - if (acl.action == i"reject") { > > - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), i"") > > - } else { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = (__match ++ " && (${acl.__match})").intern(), > > - .actions = i"${acl_log}/* drop */", > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - /* For an existing connection without ct_label set, we've > > - * encountered a policy change. ACLs previously allowed > > - * this connection and we committed the connection tracking > > - * entry. Current policy says that we should drop this > > - * connection. First, we set bit 0 of ct_label to indicate > > - * that this connection is set for deletion. By not > > - * specifying "next;", we implicitly drop the packet after > > - * updating conntrack state. We would normally defer > > - * ct_commit() to the "stateful" stage, but since we're > > - * rejecting/dropping the packet, we go ahead and do it here. > > - */ > > - var __match = "${rEGBIT_ACL_HINT_BLOCK()} == 1" in > > - var actions = "ct_commit { ct_label.blocked = 1; }; " in > > - if (acl.action == i"reject") { > > - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), actions.intern()) > > - } else { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = (__match ++ " && (${acl.__match})").intern(), > > - .actions = i"${actions}${acl_log}/* drop */", > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } else { > > - /* There are no stateful ACLs in use on this datapath, > > - * so a "reject/drop" ACL is simply the "reject/drop" > > - * logical flow action in all cases. 
*/ > > - if (acl.action == i"reject") { > > - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, i"", i"") > > - } else { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), > > - .__match = acl.__match, > > - .actions = i"${acl_log}/* drop */", > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > - } > > -} > > - > > -/* Add 34000 priority flow to allow DHCP reply from ovn-controller to all > > - * logical ports of the datapath if the CMS has configured DHCPv4 options. > > - * */ > > -for (SwitchPortDHCPv4Options(.port = &SwitchPort{.lsp = lsp, .sw = sw}, > > - .dhcpv4_options = dhcpv4_options@&nb::DHCP_Options{.options = options}) > > - if lsp.__type != i"external") { > > - (Some{var server_id}, Some{var server_mac}, Some{var lease_time}) = > > - (options.get(i"server_id"), options.get(i"server_mac"), options.get(i"lease_time")) in > > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 34000, > > - .__match = i"outport == ${json_escape(lsp.name)} " > > - "&& eth.src == ${server_mac} " > > - "&& ip4.src == ${server_id} && udp && udp.src == 67 " > > - "&& udp.dst == 68", > > - .actions = if (has_stateful) i"ct_commit; next;" else i"next;", > > - .stage_hint = stage_hint(dhcpv4_options._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -for (SwitchPortDHCPv6Options(.port = &SwitchPort{.lsp = lsp, .sw = sw}, > > - .dhcpv6_options = dhcpv6_options@&nb::DHCP_Options{.options=options} ) > > - if lsp.__type != i"external") { > > - Some{var server_mac} = options.get(i"server_id") in > > - Some{var ea} = eth_addr_from_string(server_mac.ival()) in > > - var server_ip = ea.to_ipv6_lla() in > > - /* Get the link local IP of the DHCPv6 server from the > > - * server MAC. 
*/ > > - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_ACL(), > > - .priority = 34000, > > - .__match = i"outport == ${json_escape(lsp.name)} " > > - "&& eth.src == ${server_mac} " > > - "&& ip6.src == ${server_ip} && udp && udp.src == 547 " > > - "&& udp.dst == 546", > > - .actions = if (has_stateful) i"ct_commit; next;" else i"next;", > > - .stage_hint = stage_hint(dhcpv6_options._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -relation QoSAction(qos: uuid, key_action: istring, value_action: integer) > > - > > -QoSAction(qos, k, v) :- > > - &nb::QoS(._uuid = qos, .action = actions), > > - (var k, var v) = FlatMap(actions). > > - > > -/* QoS rules */ > > -for (&Switch(._uuid = ls_uuid)) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_QOS_MARK(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_QOS_MARK(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_QOS_METER(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_QOS_METER(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -for (SwitchQoS(.sw = sw, .qos = qos)) { > > - var ingress = if (qos.direction == i"from-lport") true else false in > > - var pipeline = if ingress "ingress" else "egress" in { > > - var stage = if (ingress) { s_SWITCH_IN_QOS_MARK() } else 
{ s_SWITCH_OUT_QOS_MARK() } in > > - /* FIXME: Can value_action be negative? */ > > - for (QoSAction(qos._uuid, key_action, value_action)) { > > - if (key_action == i"dscp") { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = qos.priority, > > - .__match = qos.__match, > > - .actions = i"ip.dscp = ${value_action}; next;", > > - .stage_hint = stage_hint(qos._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - }; > > - > > - (var burst, var rate) = { > > - var rate = 0; > > - var burst = 0; > > - for ((key_bandwidth, value_bandwidth) in qos.bandwidth) { > > - /* FIXME: Can value_bandwidth be negative? */ > > - if (key_bandwidth == i"rate") { > > - rate = value_bandwidth > > - } else if (key_bandwidth == i"burst") { > > - burst = value_bandwidth > > - } else () > > - }; > > - (burst, rate) > > - } in > > - if (rate != 0) { > > - var stage = if (ingress) { s_SWITCH_IN_QOS_METER() } else { s_SWITCH_OUT_QOS_METER() } in > > - var meter_action = if (burst != 0) { > > - i"set_meter(${rate}, ${burst}); next;" > > - } else { > > - i"set_meter(${rate}); next;" > > - } in > > - /* Ingress and Egress QoS Meter Table. > > - * > > - * We limit the bandwidth of this flow by adding a meter table. > > - */ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = stage, > > - .priority = qos.priority, > > - .__match = qos.__match, > > - .actions = meter_action, > > - .stage_hint = stage_hint(qos._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -/* stateful rules */ > > -for (&Switch(._uuid = ls_uuid)) { > > - /* Ingress and Egress stateful Table (Priority 0): Packets are > > - * allowed by default. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_STATEFUL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_STATEFUL(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If REGBIT_CONNTRACK_COMMIT is set as 1 and REGBIT_CONNTRACK_SET_LABEL > > - * is set to 1, then the packets should be > > - * committed to conntrack. We always set ct_label.blocked to 0 here as > > - * any packet that makes it this far is part of a connection we > > - * want to allow to continue. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1", > > - .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1", > > - .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If REGBIT_CONNTRACK_COMMIT is set as 1, then the packets should be > > - * committed to conntrack. We always set ct_label.blocked to 0 here as > > - * any packet that makes it this far is part of a connection we > > - * want to allow to continue. 
*/ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0", > > - .actions = i"ct_commit { ct_label.blocked = 0; }; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_STATEFUL(), > > - .priority = 100, > > - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0", > > - .actions = i"ct_commit { ct_label.blocked = 0; }; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Load balancing rules for new connections get committed to conntrack > > - * table. So even if REGBIT_CONNTRACK_COMMIT is set in a previous table > > - * a higher priority rule for load balancing below also commits the > > - * connection, so it is okay if we do not hit the above match on > > - * REGBIT_CONNTRACK_COMMIT. 
*/ > > -function get_match_for_lb_key(ip_address: v46_ip, > > - port: bit<16>, > > - protocol: Option<istring>, > > - redundancy: bool, > > - use_nexthop_reg: bool, > > - use_dest_tp_reg: bool): string = { > > - var port_match = if (port != 0) { > > - var proto = if (protocol == Some{i"udp"}) { > > - "udp" > > - } else { > > - "tcp" > > - }; > > - if (redundancy) { " && ${proto}" } else { "" } ++ > > - if (use_dest_tp_reg) { > > - " && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}" > > - } else { > > - " && ${proto}.dst == ${port}" > > - } > > - } else { > > - "" > > - }; > > - > > - var ip_match = match (ip_address) { > > - IPv4{ipv4} -> > > - if (use_nexthop_reg) { > > - "${rEG_NEXT_HOP()} == ${ipv4}" > > - } else { > > - "ip4.dst == ${ipv4}" > > - }, > > - IPv6{ipv6} -> > > - if (use_nexthop_reg) { > > - "xx${rEG_NEXT_HOP()} == ${ipv6}" > > - } else { > > - "ip6.dst == ${ipv6}" > > - } > > - }; > > - > > - var ipx = match (ip_address) { > > - IPv4{ipv4} -> "ip4", > > - IPv6{ipv6} -> "ip6", > > - }; > > - > > - if (redundancy) { ipx ++ " && " } else { "" } ++ ip_match ++ port_match > > -} > > -/* New connections in Ingress table. 
*/ > > - > > -function ct_lb(backends: istring, > > - selection_fields: Set<istring>, protocol: Option<istring>): string { > > - var args = vec_with_capacity(2); > > - args.push("backends=${backends}"); > > - > > - if (not selection_fields.is_empty()) { > > - var hash_fields = vec_with_capacity(selection_fields.size()); > > - for (sf in selection_fields) { > > - var hf = match ((sf.ival(), protocol)) { > > - ("tp_src", Some{p}) -> "${p}_src", > > - ("tp_dst", Some{p}) -> "${p}_dst", > > - _ -> sf.ival() > > - }; > > - hash_fields.push(hf); > > - }; > > - hash_fields.sort(); > > - args.push("hash_fields=" ++ json_escape(hash_fields.join(","))); > > - }; > > - > > - "ct_lb(" ++ args.join("; ") ++ ");" > > -} > > -function build_lb_vip_actions(lbvip: Intern<LBVIP>, > > - up_backends: istring, > > - stage: Intern<Stage>, > > - actions0: string): (string, bool) { > > - if (up_backends == i"") { > > - if (lbvip.lb.options.get_bool_def(i"reject", false)) { > > - return ("reg0 = 0; reject { outport <-> inport; ${next_to_stage(stage)};};", true) > > - } else if (lbvip.health_check.is_some()) { > > - return ("drop;", false) > > - } // else fall through > > - }; > > - > > - var actions = ct_lb(up_backends, lbvip.lb.selection_fields, lbvip.lb.protocol); > > - (actions0 ++ actions, false) > > -} > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_STATEFUL(), > > - .priority = priority, > > - .__match = __match, > > - .actions = actions, > > - .io_port = None, > > - .controller_meter = meter, > > - .stage_hint = 0) :- > > - LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends), > > - var priority = if (lbvip.vip_port != 0) { 120 } else { 110 }, > > - (var actions0, var reject) = { > > - /* Store the original destination IP to be used when generating > > - * hairpin flows. 
> > - */ > > - var actions0 = match (lbvip.vip_addr) { > > - IPv4{ipv4} -> "${rEG_ORIG_DIP_IPV4()} = ${ipv4}; ", > > - IPv6{ipv6} -> "${rEG_ORIG_DIP_IPV6()} = ${ipv6}; " > > - }; > > - > > - /* Store the original destination port to be used when generating > > - * hairpin flows. > > - */ > > - var actions1 = if (lbvip.vip_port != 0) { > > - "${rEG_ORIG_TP_DPORT()} = ${lbvip.vip_port}; " > > - } else { > > - "" > > - }; > > - > > - build_lb_vip_actions(lbvip, up_backends, s_SWITCH_OUT_QOS_MARK(), actions0 ++ actions1) > > - }, > > - var actions = actions0.intern(), > > - var __match = ("ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false, false, false)).intern(), > > - SwitchLB(sw, lb._uuid), > > - var meter = if (reject) { > > - sw.copp.get(cOPP_REJECT()) > > - } else { > > - None > > - }. > > - > > -/* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0). > > - * Packets that don't need hairpinning should continue processing. > > - */ > > -Flow(.logical_datapath = ls_uuid, > > - .stage = stage, > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - &Switch(._uuid = ls_uuid), > > - var stages = [s_SWITCH_IN_PRE_HAIRPIN(), > > - s_SWITCH_IN_NAT_HAIRPIN(), > > - s_SWITCH_IN_HAIRPIN()], > > - var stage = FlatMap(stages). > > - > > -for (&Switch(._uuid = ls_uuid, .has_lb_vip = true)) { > > - /* Check if the packet needs to be hairpinned. > > - * Set REGBIT_HAIRPIN in the original direction and > > - * REGBIT_HAIRPIN_REPLY in the reply direction. 
> > - */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PRE_HAIRPIN(), > > - .priority = 100, > > - .__match = i"ip && ct.trk", > > - .actions = i"${rEGBIT_HAIRPIN()} = chk_lb_hairpin(); " > > - "${rEGBIT_HAIRPIN_REPLY()} = chk_lb_hairpin_reply(); " > > - "next;", > > - .stage_hint = stage_hint(ls_uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If packet needs to be hairpinned, snat the src ip with the VIP > > - * for new sessions. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_NAT_HAIRPIN(), > > - .priority = 100, > > - .__match = i"ip && ct.new && ct.trk && ${rEGBIT_HAIRPIN()} == 1", > > - .actions = i"ct_snat_to_vip; next;", > > - .stage_hint = stage_hint(ls_uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* If packet needs to be hairpinned, for established sessions there > > - * should already be an SNAT conntrack entry. > > - */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_NAT_HAIRPIN(), > > - .priority = 100, > > - .__match = i"ip && ct.est && ct.trk && ${rEGBIT_HAIRPIN()} == 1", > > - .actions = i"ct_snat;", > > - .stage_hint = stage_hint(ls_uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* For the reply of hairpinned traffic, snat the src ip to the VIP. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_NAT_HAIRPIN(), > > - .priority = 90, > > - .__match = i"ip && ${rEGBIT_HAIRPIN_REPLY()} == 1", > > - .actions = i"ct_snat;", > > - .stage_hint = stage_hint(ls_uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Ingress Hairpin table. > > - * - Priority 1: Packets that were SNAT-ed for hairpinning should be > > - * looped back (i.e., swap ETH addresses and send back on inport). 
> > - */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_HAIRPIN(), > > - .priority = 1, > > - .__match = i"(${rEGBIT_HAIRPIN()} == 1 || ${rEGBIT_HAIRPIN_REPLY()} == 1)", > > - .actions = i"eth.dst <-> eth.src; outport = inport; flags.loopback = 1; output;", > > - .stage_hint = stage_hint(ls_uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Logical switch ingress table PORT_SEC_L2: ingress port security - L2 (priority 50) > > - ingress table PORT_SEC_IP: ingress port security - IP (priority 90 and 80) > > - ingress table PORT_SEC_ND: ingress port security - ND (priority 90 and 80) */ > > -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses) > > - if lsp.is_enabled() and lsp.__type != i"external") { > > - for (pbinding in sb::Out_Port_Binding(.logical_port = lsp.name)) { > > - var __match = if (ps_eth_addresses.is_empty()) { > > - i"inport == ${json_name}" > > - } else { > > - i"inport == ${json_name} && eth.src == {${ps_eth_addresses.join(\" \")}}" > > - } in > > - > > - var actions = { > > - var queue = match (pbinding.options.get(i"qdisc_queue_id")) { > > - None -> i"", > > - Some{id} -> i"set_queue(${id});" > > - }; > > - var ramp = if (lsp.__type == i"vtep") { > > - i"next(pipeline=ingress, table=${s_SWITCH_IN_L2_LKUP().table_id});" > > - } else { > > - i"next;" > > - }; > > - i"${queue}${ramp}" > > - } in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_L2(), > > - .priority = 50, > > - .__match = __match, > > - .actions = actions, > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - } > > -} > > - > > -/** > > -* Build port security constraints on IPv4 and IPv6 src and dst fields > > -* and add logical flows to S_SWITCH_(IN/OUT)_PORT_SEC_IP stage. 
> > -* > > -* For each port security of the logical port, following > > -* logical flows are added > > -* - If the port security has IPv4 addresses, > > -* - Priority 90 flow to allow IPv4 packets for known IPv4 addresses > > -* > > -* - If the port security has IPv6 addresses, > > -* - Priority 90 flow to allow IPv6 packets for known IPv6 addresses > > -* > > -* - If the port security has IPv4 addresses or IPv6 addresses or both > > -* - Priority 80 flow to drop all IPv4 and IPv6 traffic > > -*/ > > -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps) > > - if port.is_enabled() and > > - (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) and > > - port.lsp.__type != i"external") > > -{ > > - if (ps.ipv4_addrs.len() > 0) { > > - var dhcp_match = i"inport == ${port.json_name}" > > - " && eth.src == ${ps.ea}" > > - " && ip4.src == 0.0.0.0" > > - " && ip4.dst == 255.255.255.255" > > - " && udp.src == 68 && udp.dst == 67" in { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = dhcp_match, > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - }; > > - var addrs = { > > - var addrs = vec_empty(); > > - for (addr in ps.ipv4_addrs) { > > - /* When the netmask is applied, if the host portion is > > - * non-zero, the host can only use the specified > > - * address. If zero, the host is allowed to use any > > - * address in the subnet. 
> > - */ > > - addrs.push(addr.match_host_or_network()) > > - }; > > - addrs > > - } in > > - var __match = > > - "inport == ${port.json_name} && eth.src == ${ps.ea} && ip4.src == {" ++ > > - addrs.join(", ") ++ "}" in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > - }; > > - if (ps.ipv6_addrs.len() > 0) { > > - var dad_match = i"inport == ${port.json_name}" > > - " && eth.src == ${ps.ea}" > > - " && ip6.src == ::" > > - " && ip6.dst == ff02::/16" > > - " && icmp6.type == {131, 135, 143}" in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = dad_match, > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - var __match = "inport == ${port.json_name} && eth.src == ${ps.ea}" ++ > > - build_port_security_ipv6_flow(Ingress, ps.ea, ps.ipv6_addrs) in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > - }; > > - var __match = i"inport == ${port.json_name} && eth.src == ${ps.ea} && ip" in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"drop;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > -} > > - > > -/** > > - * Build port security constraints on ARP and IPv6 ND fields > > - * and add logical flows to 
S_SWITCH_IN_PORT_SEC_ND stage. > > - * > > - * For each port security of the logical port, following > > - * logical flows are added > > - * - If the port security has no IP (both IPv4 and IPv6) or > > - * if it has IPv4 address(es) > > - * - Priority 90 flow to allow ARP packets for known MAC addresses > > - * in the eth.src and arp.spa fields. If the port security > > - * has IPv4 addresses, allow known IPv4 addresses in the arp.tpa field. > > - * > > - * - If the port security has no IP (both IPv4 and IPv6) or > > - * if it has IPv6 address(es) > > - * - Priority 90 flow to allow IPv6 ND packets for known MAC addresses > > - * in the eth.src and nd.sll/nd.tll fields. If the port security > > - * has IPv6 addresses, allow known IPv6 addresses in the nd.target field > > - * for IPv6 Neighbor Advertisement packet. > > - * > > - * - Priority 80 flow to drop ARP and IPv6 ND packets. > > - */ > > -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps) > > - if port.is_enabled() and port.lsp.__type != i"external") > > -{ > > - var no_ip = ps.ipv4_addrs.is_empty() and ps.ipv6_addrs.is_empty() in > > - { > > - if (not ps.ipv4_addrs.is_empty() or no_ip) { > > - var __match = { > > - var prefix = "inport == ${port.json_name} && eth.src == ${ps.ea} && arp.sha == ${ps.ea}"; > > - if (not ps.ipv4_addrs.is_empty()) { > > - var spas = vec_empty(); > > - for (addr in ps.ipv4_addrs) { > > - spas.push(addr.match_host_or_network()) > > - }; > > - prefix ++ " && arp.spa == {${spas.join(\", \")}}" > > - } else { > > - prefix > > - } > > - } in { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_ND(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > - }; > > - if (not ps.ipv6_addrs.is_empty() or no_ip) { > > - var __match = "inport == ${port.json_name} && eth.src == 
${ps.ea}" ++ > > - build_port_security_ipv6_nd_flow(ps.ea, ps.ipv6_addrs) in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_ND(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > - }; > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_ND(), > > - .priority = 80, > > - .__match = i"inport == ${port.json_name} && (arp || nd)", > > - .actions = i"drop;", > > - .stage_hint = stage_hint(port.lsp._uuid), > > - .io_port = Some{port.lsp.name}, > > - .controller_meter = None) > > - } > > -} > > - > > -/* Ingress table PORT_SEC_ND and PORT_SEC_IP: Port security - IP and ND, by > > - * default goto next. (priority 0)*/ > > -for (&Switch(._uuid = ls_uuid)) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_ND(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PORT_SEC_IP(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Ingress table ARP_ND_RSP: ARP/ND responder, skip requests coming from > > - * localnet and vtep ports. (priority 100); see ovn-northd.8.xml for the > > - * rationale. 
*/ > > -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name) > > - if lsp.is_enabled() and > > - (lsp.__type == i"localnet" or lsp.__type == i"vtep")) > > -{ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 100, > > - .__match = i"inport == ${json_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -function lsp_is_up(lsp: Intern<nb::Logical_Switch_Port>): bool = { > > - lsp.up == Some{true} > > -} > > - > > -/* Ingress table ARP_ND_RSP: ARP/ND responder, reply for known IPs. > > - * (priority 50). */ > > -/* Handle > > - * - GARPs for virtual ip which belongs to a logical port > > - * of type 'virtual' and bind that port. > > - * > > - * - ARP reply from the virtual ip which belongs to a logical > > - * port of type 'virtual' and bind that port. > > - * */ > > - Flow(.logical_datapath = sp.sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 100, > > - .__match = i"inport == ${vp.json_name} && " > > - "((arp.op == 1 && arp.spa == ${virtual_ip} && arp.tpa == ${virtual_ip}) || " > > - "(arp.op == 2 && arp.spa == ${virtual_ip}))", > > - .actions = i"bind_vport(${sp.json_name}, inport); next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{vp.lsp.name}, > > - .controller_meter = None) :- > > - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), > > - Some{var virtual_ip} = lsp.options.get(i"virtual-ip"), > > - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), > > - Some{var ip} = ip_parse(virtual_ip.ival()), > > - var vparent = FlatMap(virtual_parents.split(",")), > > - vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = vparent.intern()}), > > - vp.sw == sp.sw. 
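(For readers skimming the removal: the virtual-port rule above binds a port of type `virtual` when it sees either a gratuitous ARP, i.e. op 1 with `arp.spa == arp.tpa ==` the virtual IP, or a plain ARP reply sourced from the virtual IP. A minimal Python sketch of the same match-string construction — the function name and quoting convention are illustrative, not from the tree:)

```python
def virtual_port_arp_match(vparent_json_name: str, virtual_ip: str) -> str:
    """Match a GARP (op 1, spa == tpa == VIP) or an ARP reply (op 2,
    spa == VIP) arriving on a virtual parent port."""
    return (
        f"inport == {vparent_json_name} && "
        f"((arp.op == 1 && arp.spa == {virtual_ip} && arp.tpa == {virtual_ip}) || "
        f"(arp.op == 2 && arp.spa == {virtual_ip}))"
    )
```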
> > - > > -/* > > - * Add ARP/ND reply flows if either the > > - * - port is up and it doesn't have 'unknown' address defined or > > - * - port type is router or > > - * - port type is localport > > - */ > > -for (CheckLspIsUp[check_lsp_is_up]) { > > - for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = lsp, .sw = sw, .json_name = json_name}, > > - .ea = ea, .addr = addr) > > - if lsp.is_enabled() and > > - ((lsp_is_up(lsp) or not check_lsp_is_up) > > - or lsp.__type == i"router" or lsp.__type == i"localport") and > > - lsp.__type != i"external" and lsp.__type != i"virtual" and > > - not lsp.addresses.contains(i"unknown") and > > - not sw.is_vlan_transparent) > > - { > > - var __match = "arp.tpa == ${addr.addr} && arp.op == 1" in > > - { > > - var actions = i"eth.dst = eth.src; " > > - "eth.src = ${ea}; " > > - "arp.op = 2; /* ARP reply */ " > > - "arp.tha = arp.sha; " > > - "arp.sha = ${ea}; " > > - "arp.tpa = arp.spa; " > > - "arp.spa = ${addr.addr}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output;" in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 50, > > - .__match = __match.intern(), > > - .actions = actions, > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Do not reply to an ARP request from the port that owns the > > - * address (otherwise a DHCP client that ARPs to check for a > > - * duplicate address will fail). Instead, forward it the usual > > - * way. > > - * > > - * (Another alternative would be to simply drop the packet. If > > - * everything is working as it is configured, then this would > > - * produce equivalent results, since no one should reply to the > > - * request. But ARPing for one's own IP address is intended to > > - * detect situations where the network is not working as > > - * configured, so dropping the request would frustrate that > > - * intent.) 
*/ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 100, > > - .__match = i"${__match} && inport == ${json_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 50, > > - .__match = __match.intern(), > > - .actions = __actions, > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - > > - sp in &SwitchPort(.sw = sw, .peer = Some{rp}), > > - rp.is_enabled(), > > - var proxy_ips = { > > - match (sp.lsp.options.get(i"arp_proxy")) { > > - None -> "", > > - Some {addresses} -> { > > - match (extract_ip_addresses(addresses.ival())) { > > - None -> "", > > - Some{addr} -> { > > - var ip4_addrs = vec_empty(); > > - for (ip4 in addr.ipv4_addrs) { > > - ip4_addrs.push("${ip4.addr}") > > - }; > > - string_join(ip4_addrs, ",") > > - } > > - } > > - } > > - } > > - }, > > - proxy_ips != "", > > - var __match = "arp.op == 1 && arp.tpa == {" ++ proxy_ips ++ "}", > > - var __actions = i"eth.dst = eth.src; " > > - "eth.src = ${rp.networks.ea}; " > > - "arp.op = 2; /* ARP reply */ " > > - "arp.tha = arp.sha; " > > - "arp.sha = ${rp.networks.ea}; " > > - "arp.tpa <-> arp.spa; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output;". > > - > > -/* For ND solicitations, we need to listen for both the > > - * unicast IPv6 address and its all-nodes multicast address, > > - * but always respond with the unicast IPv6 address. 
*/ > > -for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > > - .ea = ea, .addr = addr) > > - if lsp.is_enabled() and > > - (lsp_is_up(lsp) or lsp.__type == i"router" or lsp.__type == i"localport") and > > - lsp.__type != i"external" and lsp.__type != i"virtual" and > > - not sw.is_vlan_transparent) > > -{ > > - var __match = "nd_ns && ip6.dst == {${addr.addr}, ${addr.solicited_node()}} && nd.target == ${addr.addr}" in > > - var actions = i"${if (lsp.__type == i\"router\") \"nd_na_router\" else \"nd_na\"} { " > > - "eth.src = ${ea}; " > > - "ip6.src = ${addr.addr}; " > > - "nd.target = ${addr.addr}; " > > - "nd.tll = ${ea}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output; " > > - "};" in > > - { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 50, > > - .__match = __match.intern(), > > - .actions = actions, > > - .io_port = None, > > - .controller_meter = sw.copp.get(cOPP_ND_NA()), > > - .stage_hint = stage_hint(lsp._uuid)); > > - > > - /* Do not reply to a solicitation from the port that owns the > > - * address (otherwise DAD detection will fail). */ > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 100, > > - .__match = i"${__match} && inport == ${json_name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - } > > -} > > - > > -/* Ingress table ARP_ND_RSP: ARP/ND responder, by default goto next. > > - * (priority 0)*/ > > -for (ls in &nb::Logical_Switch) { > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Ingress table ARP_ND_RSP: ARP/ND responder for service monitor source ip. 
> > - * (priority 110)*/ > > -Flow(.logical_datapath = sp.sw._uuid, > > - .stage = s_SWITCH_IN_ARP_ND_RSP(), > > - .priority = 110, > > - .__match = i"arp.tpa == ${svc_mon_src_ip} && arp.op == 1", > > - .actions = i"eth.dst = eth.src; " > > - "eth.src = ${svc_monitor_mac}; " > > - "arp.op = 2; /* ARP reply */ " > > - "arp.tha = arp.sha; " > > - "arp.sha = ${svc_monitor_mac}; " > > - "arp.tpa = arp.spa; " > > - "arp.spa = ${svc_mon_src_ip}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output;", > > - .stage_hint = stage_hint(lbvip.lb._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - LBVIP[lbvip], > > - var lbvipbackend = FlatMap(lbvip.backends), > > - Some{var svc_monitor} = lbvipbackend.svc_monitor, > > - sp in &SwitchPort( > > - .lsp = &nb::Logical_Switch_Port{.name = svc_monitor.port_name}), > > - var svc_mon_src_ip = svc_monitor.src_ip, > > - SvcMonitorMac(svc_monitor_mac). > > - > > -function build_dhcpv4_action( > > - lsp_json_key: string, > > - dhcpv4_options: Intern<nb::DHCP_Options>, > > - offer_ip: in_addr, > > - lsp_options: Map<istring,istring>) : Option<(istring, istring, string)> = > > -{ > > - match (ip_parse_masked(dhcpv4_options.cidr.ival())) { > > - Left{err} -> { > > - /* cidr defined is invalid */ > > - None > > - }, > > - Right{(var host_ip, var mask)} -> { > > - if (not (offer_ip, host_ip).same_network(mask)) { > > - /* the offer ip of the logical port doesn't belong to the cidr > > - * defined in the DHCPv4 options. > > - */ > > - None > > - } else { > > - match ((dhcpv4_options.options.get(i"server_id"), > > - dhcpv4_options.options.get(i"server_mac"), > > - dhcpv4_options.options.get(i"lease_time"))) > > - { > > - (Some{var server_ip}, Some{var server_mac}, Some{var lease_time}) -> { > > - var options_map = dhcpv4_options.options; > > - > > - /* server_mac is not DHCPv4 option, delete it from the smap. 
*/ > > - options_map.remove(i"server_mac"); > > - options_map.insert(i"netmask", i"${mask}"); > > - > > - match (lsp_options.get(i"hostname")) { > > - None -> (), > > - Some{port_hostname} -> options_map.insert(i"hostname", port_hostname) > > - }; > > - > > - var options = vec_empty(); > > - for (node in options_map) { > > - (var k, var v) = node; > > - options.push("${k} = ${v}") > > - }; > > - options.sort(); > > - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcp_opts(offerip = ${offer_ip}, " ++ > > - options.join(", ") ++ "); next;"; > > - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " > > - "ip4.src = ${server_ip}; udp.src = 67; " > > - "udp.dst = 68; outport = inport; flags.loopback = 1; " > > - "output;"; > > - > > - var ipv4_addr_match = "ip4.src == ${offer_ip} && ip4.dst == {${server_ip}, 255.255.255.255}"; > > - Some{(options_action.intern(), response_action, ipv4_addr_match)} > > - }, > > - _ -> { > > - /* "server_id", "server_mac" and "lease_time" should be > > - * present in the dhcp_options. */ > > - //static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); > > - warn("Required DHCPv4 options not defined for lport - ${lsp_json_key}"); > > - None > > - } > > - } > > - } > > - } > > - } > > -} > > - > > -function build_dhcpv6_action( > > - lsp_json_key: string, > > - dhcpv6_options: Intern<nb::DHCP_Options>, > > - offer_ip: in6_addr): Option<(istring, istring)> = > > -{ > > - match (ipv6_parse_masked(dhcpv6_options.cidr.ival())) { > > - Left{err} -> { > > - /* cidr defined is invalid */ > > - //warn("cidr is invalid - ${err}"); > > - None > > - }, > > - Right{(var host_ip, var mask)} -> { > > - if (not (offer_ip, host_ip).same_network(mask)) { > > - /* offer_ip doesn't belongs to the cidr defined in lport's DHCPv6 > > - * options.*/ > > - //warn("ip does not belong to cidr"); > > - None > > - } else { > > - /* "server_id" should be the MAC address. 
*/ > > - match (dhcpv6_options.options.get(i"server_id")) { > > - None -> { > > - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); > > - None > > - }, > > - Some{server_mac} -> { > > - match (eth_addr_from_string(server_mac.ival())) { > > - None -> { > > - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); > > - None > > - }, > > - Some{ea} -> { > > - /* Get the link local IP of the DHCPv6 server from the server MAC. */ > > - var server_ip = ea.to_ipv6_lla().string_mapped(); > > - var ia_addr = offer_ip.string_mapped(); > > - var options = vec_empty(); > > - > > - /* Check whether the dhcpv6 options should be configured as stateful. > > - * Only reply with ia_addr option for dhcpv6 stateful address mode. */ > > - if (not dhcpv6_options.options.get_bool_def(i"dhcpv6_stateless", false)) { > > - options.push("ia_addr = ${ia_addr}") > > - } else (); > > - > > - for ((k, v) in dhcpv6_options.options) { > > - if (k != i"dhcpv6_stateless") { > > - options.push("${k} = ${v}") > > - } else () > > - }; > > - options.sort(); > > - > > - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcpv6_opts(" ++ > > - options.join(", ") ++ > > - "); next;"; > > - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " > > - "ip6.dst = ip6.src; ip6.src = ${server_ip}; udp.src = 547; " > > - "udp.dst = 546; outport = inport; flags.loopback = 1; " > > - "output;"; > > - Some{(options_action.intern(), response_action)} > > - } > > - } > > - } > > - } > > - } > > - } > > - } > > -} > > - > > -/* If 'names' has one element, returns json_escape() for it. > > - * Otherwise, returns json_escape() of all of its elements inside "{...}". 
> > - */ > > -function json_escape_vec(names: Vec<string>): string > > -{ > > - match ((names.len(), names.nth(0))) { > > - (1, Some{name}) -> json_escape(name), > > - _ -> { > > - var json_names = vec_with_capacity(names.len()); > > - for (name in names) { > > - json_names.push(json_escape(name)); > > - }; > > - "{" ++ json_names.join(", ") ++ "}" > > - } > > - } > > -} > > - > > -/* > > - * Ordinarily, returns a single match against 'lsp'. > > - * > > - * If 'lsp' is an external port, returns a match against the localnet port(s) on > > - * its switch along with a condition that it only operate if 'lsp' is > > - * chassis-resident. This makes sense as a condition for sending DHCP replies > > - * to external ports because only one chassis should send such a reply. > > - * > > - * Returns a prefix and a suffix string. There is no reason for this except > > - * that it makes it possible to exactly mimic the format used by northd.c > > - * so that text-based comparisons do not show differences. (This fails if > > - * there's more than one localnet port since the C version uses multiple flows > > - * in that case.) > > - */ > > -function match_dhcp_input(lsp: Intern<SwitchPort>): (string, string) = > > -{ > > - if (lsp.lsp.__type == i"external" and not lsp.sw.localnet_ports.is_empty()) { > > - ("inport == " ++ json_escape_vec(lsp.sw.localnet_ports.map(|x| x.1.ival())) ++ " && ", > > - " && is_chassis_resident(${lsp.json_name})") > > - } else { > > - ("inport == ${lsp.json_name} && ", "") > > - } > > -} > > - > > -/* Logical switch ingress tables DHCP_OPTIONS and DHCP_RESPONSE: DHCP options > > - * and response priority 100 flows. */ > > -for (lsp in &SwitchPort > > - /* Don't add the DHCP flows if the port is not enabled or if the > > - * port is a router port. */ > > - if (lsp.is_enabled() and lsp.lsp.__type != i"router") > > - /* If it's an external port and there is no localnet port > > - * and if it doesn't belong to an HA chassis group ignore it. 
*/ > > - and (lsp.lsp.__type != i"external" > > - or (not lsp.sw.localnet_ports.is_empty() > > - and lsp.lsp.ha_chassis_group.is_some()))) > > -{ > > - for (lps in LogicalSwitchPort(.lport = lsp.lsp._uuid, .lswitch = lsuuid)) { > > - var json_key = json_escape(lsp.lsp.name) in > > - (var pfx, var sfx) = match_dhcp_input(lsp) in > > - { > > - /* DHCPv4 options enabled for this port */ > > - Some{var dhcpv4_options_uuid} = lsp.lsp.dhcpv4_options in > > - { > > - for (dhcpv4_options in &nb::DHCP_Options(._uuid = dhcpv4_options_uuid)) { > > - for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { > > - Some{(var options_action, var response_action, var ipv4_addr_match)} = > > - build_dhcpv4_action(json_key, dhcpv4_options, addr.addr, lsp.lsp.options) in > > - { > > - var __match = > > - (pfx ++ "eth.src == ${ea} && " > > - "ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && " > > - "udp.src == 68 && udp.dst == 67" ++ sfx).intern() > > - in > > - Flow(.logical_datapath = lsuuid, > > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > > - .priority = 100, > > - .__match = __match, > > - .actions = options_action, > > - .io_port = None, > > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), > > - .stage_hint = stage_hint(lsp.lsp._uuid)); > > - > > - /* Allow ip4.src = OFFER_IP and > > - * ip4.dst = {SERVER_IP, 255.255.255.255} for the below > > - * cases > > - * - When the client wants to renew the IP by sending > > - * the DHCPREQUEST to the server ip. > > - * - When the client wants to renew the IP by > > - * broadcasting the DHCPREQUEST. 
> > - */ > > - var __match = pfx ++ "eth.src == ${ea} && " > > - "${ipv4_addr_match} && udp.src == 68 && udp.dst == 67" ++ sfx in > > - Flow(.logical_datapath = lsuuid, > > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = options_action, > > - .io_port = None, > > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), > > - .stage_hint = stage_hint(lsp.lsp._uuid)); > > - > > - /* If REGBIT_DHCP_OPTS_RESULT is set, it means the > > - * put_dhcp_opts action is successful. */ > > - var __match = pfx ++ "eth.src == ${ea} && " > > - "ip4 && udp.src == 68 && udp.dst == 67 && " ++ > > - rEGBIT_DHCP_OPTS_RESULT() ++ sfx in > > - Flow(.logical_datapath = lsuuid, > > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = response_action, > > - .stage_hint = stage_hint(lsp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv4_action > > - // returns Some() for at most 1 address in lsp_addrs? Otherwise, simulate this break > > - // by computing an aggregate that returns the first element of a group. 
> > - //break; > > - } > > - } > > - } > > - }; > > - > > - /* DHCPv6 options enabled for this port */ > > - Some{var dhcpv6_options_uuid} = lsp.lsp.dhcpv6_options in > > - { > > - for (dhcpv6_options in &nb::DHCP_Options(._uuid = dhcpv6_options_uuid)) { > > - for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { > > - Some{(var options_action, var response_action)} = > > - build_dhcpv6_action(json_key, dhcpv6_options, addr.addr) in > > - { > > - var __match = pfx ++ "eth.src == ${ea}" > > - " && ip6.dst == ff02::1:2 && udp.src == 546 &&" > > - " udp.dst == 547" ++ sfx in > > - { > > - Flow(.logical_datapath = lsuuid, > > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = options_action, > > - .io_port = None, > > - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV6_OPTS()), > > - .stage_hint = stage_hint(lsp.lsp._uuid)); > > - > > - /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the > > - * put_dhcpv6_opts action is successful */ > > - Flow(.logical_datapath = lsuuid, > > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > > - .priority = 100, > > - .__match = (__match ++ " && ${rEGBIT_DHCP_OPTS_RESULT()}").intern(), > > - .actions = response_action, > > - .stage_hint = stage_hint(lsp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv4_action > > - // returns Some() for at most 1 address in lsp_addrs? Otherwise, simulate this breaks > > - // by computing an aggregate that returns the first element of a group. > > - //break; > > - } > > - } > > - } > > - } > > - } > > - } > > - } > > -} > > - > > -/* Logical switch ingress tables DNS_LOOKUP and DNS_RESPONSE: DNS lookup and > > - * response priority 100 flows. 
> > - */ > > -for (LogicalSwitchHasDNSRecords(ls, true)) > > -{ > > - Flow(.logical_datapath = ls, > > - .stage = s_SWITCH_IN_DNS_LOOKUP(), > > - .priority = 100, > > - .__match = i"udp.dst == 53", > > - .actions = i"${rEGBIT_DNS_LOOKUP_RESULT()} = dns_lookup(); next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - var action = i"eth.dst <-> eth.src; ip4.src <-> ip4.dst; " > > - "udp.dst = udp.src; udp.src = 53; outport = inport; " > > - "flags.loopback = 1; output;" in > > - Flow(.logical_datapath = ls, > > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > > - .priority = 100, > > - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", > > - .actions = action, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - var action = i"eth.dst <-> eth.src; ip6.src <-> ip6.dst; " > > - "udp.dst = udp.src; udp.src = 53; outport = inport; " > > - "flags.loopback = 1; output;" in > > - Flow(.logical_datapath = ls, > > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > > - .priority = 100, > > - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", > > - .actions = action, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Ingress table DHCP_OPTIONS and DHCP_RESPONSE: DHCP options and response, by > > - * default goto next. (priority 0). > > - * > > - * Ingress table DNS_LOOKUP and DNS_RESPONSE: DNS lookup and response, by > > - * default goto next. (priority 0). > > - > > - * Ingress table EXTERNAL_PORT - External port handling, by default goto next. > > - * (priority 0). 
*/ > > -for (ls in &nb::Logical_Switch) { > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_DHCP_OPTIONS(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_DHCP_RESPONSE(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_DNS_LOOKUP(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_DNS_RESPONSE(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 110, > > - .__match = i"eth.dst == $svc_monitor_mac", > > - .actions = i"handle_svc_check(inport);", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - sw in &Switch(). 
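(A note on `build_dhcpv6_action` in the hunk above: it derives the DHCPv6 server's link-local source address from `server_mac` via `ea.to_ipv6_lla()`, i.e. the modified EUI-64 construction from RFC 4291. A standard-library Python sketch of that derivation, with an illustrative function name:)

```python
import ipaddress

def mac_to_ipv6_lla(mac: str) -> str:
    """fe80::/64 link-local address from a MAC using modified EUI-64:
    flip the universal/local bit of the first octet and splice ff:fe
    between the OUI and the NIC-specific half."""
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    iid = bytes([b[0] ^ 0x02, b[1], b[2], 0xFF, 0xFE, b[3], b[4], b[5]])
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid))
```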
> > - > > -for (sw in &Switch(._uuid = ls_uuid, .mcast_cfg = mcast_cfg) > > - if (mcast_cfg.enabled)) { > > - var controller_meter = sw.copp.get(cOPP_IGMP()) in > > - for (SwitchMcastFloodRelayPorts(sw, relay_ports)) { > > - for (SwitchMcastFloodReportPorts(sw, flood_report_ports)) { > > - for (SwitchMcastFloodPorts(sw, flood_ports)) { > > - var flood_relay = not relay_ports.is_empty() in > > - var flood_reports = not flood_report_ports.is_empty() in > > - var flood_static = not flood_ports.is_empty() in > > - var igmp_act = { > > - if (flood_reports) { > > - var mrouter_static = json_escape(mC_MROUTER_STATIC().0); > > - i"clone { " > > - "outport = ${mrouter_static}; " > > - "output; " > > - "};igmp;" > > - } else { > > - i"igmp;" > > - } > > - } in { > > - /* Punt IGMP traffic to controller. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 100, > > - .__match = i"ip4 && ip.proto == 2", > > - .actions = i"${igmp_act}", > > - .io_port = None, > > - .controller_meter = controller_meter, > > - .stage_hint = 0); > > - > > - /* Punt MLD traffic to controller. */ > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 100, > > - .__match = i"mldv1 || mldv2", > > - .actions = igmp_act, > > - .io_port = None, > > - .controller_meter = controller_meter, > > - .stage_hint = 0); > > - > > - /* Flood all IP multicast traffic destined to 224.0.0.X to > > - * all ports - RFC 4541, section 2.1.2, item 2. > > - */ > > - var flood = json_escape(mC_FLOOD().0) in > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 85, > > - .__match = i"ip4.mcast && ip4.dst == 224.0.0.0/24", > > - .actions = i"outport = ${flood}; output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Flood all IPv6 multicast traffic destined to reserved > > - * multicast IPs (RFC 4291, 2.7.1). 
> > - */ > > - var flood = json_escape(mC_FLOOD().0) in > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 85, > > - .__match = i"ip6.mcast_flood", > > - .actions = i"outport = ${flood}; output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Forward uregistered IP multicast to routers with relay > > - * enabled and to any ports configured to flood IP > > - * multicast traffic. If configured to flood unregistered > > - * traffic this will be handled by the L2 multicast flow. > > - */ > > - if (not mcast_cfg.flood_unreg) { > > - var relay_act = { > > - if (flood_relay) { > > - var rtr_flood = json_escape(mC_MROUTER_FLOOD().0); > > - "clone { " > > - "outport = ${rtr_flood}; " > > - "output; " > > - "}; " > > - } else { > > - "" > > - } > > - } in > > - var static_act = { > > - if (flood_static) { > > - var mc_static = json_escape(mC_STATIC().0); > > - "outport =${mc_static}; output;" > > - } else { > > - "" > > - } > > - } in > > - var drop_act = { > > - if (not flood_relay and not flood_static) { > > - "drop;" > > - } else { > > - "" > > - } > > - } in > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 80, > > - .__match = i"ip4.mcast || ip6.mcast", > > - .actions = i"${relay_act}${static_act}${drop_act}", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > - } > > - } > > - } > > -} > > - > > -/* Ingress table L2_LKUP: Add IP multicast flows learnt from IGMP/MLD (priority > > - * 90). */ > > -for (IgmpSwitchMulticastGroup(.address = address, .switch = sw)) { > > - /* RFC 4541, section 2.1.2, item 2: Skip groups in the 224.0.0.X > > - * range. > > - * > > - * RFC 4291, section 2.7.1: Skip groups that correspond to all > > - * hosts. 
> > - */ > > - Some{var ip} = ip46_parse(address.ival()) in > > - (var skip_address) = match (ip) { > > - IPv4{ipv4} -> ipv4.is_local_multicast(), > > - IPv6{ipv6} -> ipv6.is_all_hosts() > > - } in > > - var ipX = ip.ipX() in > > - for (SwitchMcastFloodRelayPorts(sw, relay_ports) if not skip_address) { > > - for (SwitchMcastFloodPorts(sw, flood_ports)) { > > - var flood_relay = not relay_ports.is_empty() in > > - var flood_static = not flood_ports.is_empty() in > > - var mc_rtr_flood = json_escape(mC_MROUTER_FLOOD().0) in > > - var mc_static = json_escape(mC_STATIC().0) in > > - var relay_act = { > > - if (flood_relay) { > > - "clone { " > > - "outport = ${mc_rtr_flood}; output; " > > - "};" > > - } else { > > - "" > > - } > > - } in > > - var static_act = { > > - if (flood_static) { > > - "clone { " > > - "outport =${mc_static}; " > > - "output; " > > - "};" > > - } else { > > - "" > > - } > > - } in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 90, > > - .__match = i"eth.mcast && ${ipX} && ${ipX}.dst == ${address}", > > - .actions = > > - i"${relay_act} ${static_act} outport = \"${address}\"; " > > - "output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -/* Table EXTERNAL_PORT: External port. Drop ARP request for router ips from > > - * external ports on chassis not binding those ports. This makes the router > > - * pipeline to be run only on the chassis binding the external ports. > > - * > > - * For an external port X on logical switch LS, if X is not resident on this > > - * chassis, drop ARP requests arriving on localnet ports from X's Ethernet > > - * address, if the ARP request is asking to translate the IP address of a > > - * router port on LS. 
*/ > > -Flow(.logical_datapath = sp.sw._uuid, > > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > > - .priority = 100, > > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > > - "eth.src == ${lp_addr.ea} && " > > - "!is_chassis_resident(${sp.json_name}) && " > > - "arp.tpa == ${rp_addr.addr} && arp.op == 1"), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = Some{localnet_port.1}, > > - .controller_meter = None) :- > > - sp in &SwitchPort(), > > - sp.lsp.__type == i"external", > > - var localnet_port = FlatMap(sp.sw.localnet_ports), > > - var lp_addr = FlatMap(sp.static_addresses), > > - rp in &SwitchPort(.sw = sp.sw), > > - rp.lsp.__type == i"router", > > - SwitchPortIPv4Address(.port = rp, .addr = rp_addr). > > -Flow(.logical_datapath = sp.sw._uuid, > > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > > - .priority = 100, > > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > > - "eth.src == ${lp_addr.ea} && " > > - "!is_chassis_resident(${sp.json_name}) && " > > - "nd_ns && ip6.dst == {${rp_addr.addr}, ${rp_addr.solicited_node()}} && " > > - "nd.target == ${rp_addr.addr}"), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = Some{localnet_port.1}, > > - .controller_meter = None) :- > > - sp in &SwitchPort(), > > - sp.lsp.__type == i"external", > > - var localnet_port = FlatMap(sp.sw.localnet_ports), > > - var lp_addr = FlatMap(sp.static_addresses), > > - rp in &SwitchPort(.sw = sp.sw), > > - rp.lsp.__type == i"router", > > - SwitchPortIPv6Address(.port = rp, .addr = rp_addr). 
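(The `nd_ns` matches above listen on both the unicast address and its solicited-node multicast group, `addr.solicited_node()`: per RFC 4291 section 2.7.1 that is `ff02::1:ff00:0/104` with the low 24 bits copied from the unicast address. A short Python sketch of the computation, helper name illustrative:)

```python
import ipaddress

_SNM_BASE = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(addr: str) -> str:
    """Solicited-node multicast group for an IPv6 address:
    ff02::1:ff00:0/104 plus the low 24 bits of 'addr'."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return str(ipaddress.IPv6Address(_SNM_BASE | low24))
```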
> > -Flow(.logical_datapath = sp.sw._uuid, > > - .stage = s_SWITCH_IN_EXTERNAL_PORT(), > > - .priority = 100, > > - .__match = (i"inport == ${json_escape(localnet_port.1)} && " > > - "eth.src == ${lp_addr.ea} && " > > - "eth.dst == ${ea} && " > > - "!is_chassis_resident(${sp.json_name})"), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = Some{localnet_port.1}, > > - .controller_meter = None) :- > > - sp in &SwitchPort(), > > - sp.lsp.__type == i"external", > > - var localnet_port = FlatMap(sp.sw.localnet_ports), > > - var lp_addr = FlatMap(sp.static_addresses), > > - rp in &SwitchPort(.sw = sp.sw), > > - rp.lsp.__type == i"router", > > - SwitchPortAddresses(.port = rp, .addrs = LPortAddress{.ea = ea}). > > - > > -/* Ingress table L2_LKUP: Destination lookup, broadcast and multicast handling > > - * (priority 100). */ > > -for (ls in &nb::Logical_Switch) { > > - var mc_flood = json_escape(mC_FLOOD().0) in > > - Flow(.logical_datapath = ls._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 70, > > - .__match = i"eth.mcast", > > - .actions = i"outport = ${mc_flood}; output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Ingress table L2_LKUP: Destination lookup, unicast handling (priority 50). > > -*/ > > -for (SwitchPortStaticAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > > - .addrs = addrs) > > - if lsp.__type != i"external") { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 50, > > - .__match = i"eth.dst == ${addrs.ea}", > > - .actions = i"outport = ${json_name}; output;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* > > - * Ingress table L2_LKUP: Flows that flood self originated ARP/ND packets in the > > - * switching domain. 
> > - */ > > -/* Self originated ARP requests/ND need to be flooded to the L2 domain > > - * (except on router ports). Determine that packets are self originated > > - * by also matching on source MAC. Matching on ingress port is not > > - * reliable in case this is a VLAN-backed network. > > - * Priority: 75. > > - */ > > - > > -/* Returns 'true' if the IP 'addr' is on the same subnet with one of the > > - * IPs configured on the router port. > > - */ > > -function lrouter_port_ip_reachable(rp: Intern<RouterPort>, addr: v46_ip): bool { > > - match (addr) { > > - IPv4{ipv4} -> { > > - for (na in rp.networks.ipv4_addrs) { > > - if ((ipv4, na.addr).same_network(na.netmask())) { > > - return true > > - } > > - } > > - }, > > - IPv6{ipv6} -> { > > - for (na in rp.networks.ipv6_addrs) { > > - if ((ipv6, na.addr).same_network(na.netmask())) { > > - return true > > - } > > - } > > - } > > - }; > > - false > > -} > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 75, > > - .__match = __match, > > - .actions = actions, > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - sp in &SwitchPort(.sw = sw@&Switch{.has_non_router_port = true}, .peer = Some{rp}), > > - rp.is_enabled(), > > - var eth_src_set = { > > - var eth_src_set = set_singleton(i"${rp.networks.ea}"); > > - for (nat in rp.router.nats) { > > - match (nat.nat.external_mac) { > > - Some{mac} -> > > - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { > > - eth_src_set.insert(mac) > > - } else (), > > - _ -> () > > - } > > - }; > > - eth_src_set > > - }, > > - var eth_src = "{" ++ eth_src_set.to_vec().join(", ") ++ "}", > > - var __match = i"eth.src == ${eth_src} && (arp.op == 1 || nd_ns)", > > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > > - var actions = i"outport = ${mc_flood_l2}; output;". > > - > > -/* Forward ARP requests for owned IP addresses (L3, VIP, NAT) only to this > > - * router port. 
> > - * Priority: 80. > > - */ > > -function get_arp_forward_ips(rp: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>): > > - (Set<istring>, Set<istring>, Set<istring>, Set<istring>) = > > -{ > > - var reachable_ips_v4 = set_empty(); > > - var reachable_ips_v6 = set_empty(); > > - var unreachable_ips_v4 = set_empty(); > > - var unreachable_ips_v6 = set_empty(); > > - > > - (var lb_ips_v4, var lb_ips_v6) > > - = get_router_load_balancer_ips(lbips, false); > > - for (a in lb_ips_v4) { > > - /* Check if the ovn port has a network configured on which we could > > - * expect ARP requests for the LB VIP. > > - */ > > - match (ip_parse(a.ival())) { > > - Some{ipv4} -> if (lrouter_port_ip_reachable(rp, IPv4{ipv4})) { > > - reachable_ips_v4.insert(a) > > - } else { > > - unreachable_ips_v4.insert(a) > > - }, > > - _ -> () > > - } > > - }; > > - for (a in lb_ips_v6) { > > - /* Check if the ovn port has a network configured on which we could > > - * expect NS requests for the LB VIP. > > - */ > > - match (ipv6_parse(a.ival())) { > > - Some{ipv6} -> if (lrouter_port_ip_reachable(rp, IPv6{ipv6})) { > > - reachable_ips_v6.insert(a) > > - } else { > > - unreachable_ips_v6.insert(a) > > - }, > > - _ -> () > > - } > > - }; > > - > > - for (nat in rp.router.nats) { > > - if (nat.nat.__type != i"snat") { > > - /* Check if the ovn port has a network configured on which we could > > - * expect ARP requests/NS for the DNAT external_ip. 
> > - */ > > - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { > > - match (nat.external_ip) { > > - IPv4{_} -> reachable_ips_v4.insert(nat.nat.external_ip), > > - IPv6{_} -> reachable_ips_v6.insert(nat.nat.external_ip) > > - } > > - } else { > > - match (nat.external_ip) { > > - IPv4{_} -> unreachable_ips_v4.insert(nat.nat.external_ip), > > - IPv6{_} -> unreachable_ips_v6.insert(nat.nat.external_ip), > > - } > > - } > > - } > > - }; > > - > > - for (a in rp.networks.ipv4_addrs) { > > - reachable_ips_v4.insert(i"${a.addr}") > > - }; > > - for (a in rp.networks.ipv6_addrs) { > > - reachable_ips_v6.insert(i"${a.addr}") > > - }; > > - > > - (reachable_ips_v4, reachable_ips_v6, unreachable_ips_v4, unreachable_ips_v6) > > -} > > - > > -relation &SwitchPortARPForwards( > > - port: Intern<SwitchPort>, > > - reachable_ips_v4: Set<istring>, > > - reachable_ips_v6: Set<istring>, > > - unreachable_ips_v4: Set<istring>, > > - unreachable_ips_v6: Set<istring> > > -) > > - > > -&SwitchPortARPForwards(.port = port, > > - .reachable_ips_v4 = reachable_ips_v4, > > - .reachable_ips_v6 = reachable_ips_v6, > > - .unreachable_ips_v4 = unreachable_ips_v4, > > - .unreachable_ips_v6 = unreachable_ips_v6) :- > > - port in &SwitchPort(.peer = Some{rp@&RouterPort{.enabled = true}}), > > - lbips in &LogicalRouterLBIPs(.lr = rp.router._uuid), > > - (var reachable_ips_v4, var reachable_ips_v6, var unreachable_ips_v4, var unreachable_ips_v6) = get_arp_forward_ips(rp, lbips). > > - > > -/* Packets received from VXLAN tunnels have already been through the > > - * router pipeline so we should skip them. Normally this is done by the > > - * multicast_group implementation (VXLAN packets skip table 32 which > > - * delivers to patch ports) but we're bypassing multicast_groups. > > - * (This is why we match against fLAGBIT_NOT_VXLAN() here.) 
> > - */ > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 80, > > - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", > > - .actions = if (sw.has_non_router_port) { > > - i"clone {outport = ${sp.json_name}; output; }; " > > - "outport = ${mc_flood_l2}; output;" > > - } else { > > - i"outport = ${sp.json_name}; output;" > > - }, > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v4 = ips_v4), > > - var ipv4 = FlatMap(ips_v4). > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 80, > > - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", > > - .actions = if (sw.has_non_router_port) { > > - i"clone {outport = ${sp.json_name}; output; }; " > > - "outport = ${mc_flood_l2}; output;" > > - } else { > > - i"outport = ${sp.json_name}; output;" > > - }, > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), > > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v6 = ips_v6), > > - var ipv6 = FlatMap(ips_v6). > > - > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 90, > > - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", > > - .actions = actions, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", > > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v4 = ips_v4), > > - var ipv4 = FlatMap(ips_v4). 
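The removed `get_arp_forward_ips` above splits the router-owned addresses (port IPs, LB VIPs, NAT external IPs) into per-family "reachable" and "unreachable" sets, which then drive the priority-80 (forward to the router port) versus priority-90 (flood) flows in this hunk. A hedged Python sketch of that partitioning, assuming stdlib `ipaddress` and illustrative names:

```python
import ipaddress

def partition_arp_ips(port_cidrs, owned_ips):
    """Split 'owned_ips' into (reachable, unreachable) sets: an IP is
    reachable when it falls inside one of the port's configured
    subnets, so an ARP/NS request for it can legitimately arrive on
    that port; anything else must be answered by flooding."""
    nets = [ipaddress.ip_interface(c).network for c in port_cidrs]
    reachable, unreachable = set(), set()
    for ip_str in owned_ips:
        ip = ipaddress.ip_address(ip_str)
        # Compare only within the same address family (IPv4 vs IPv6).
        if any(ip.version == n.version and ip in n for n in nets):
            reachable.add(ip_str)
        else:
            unreachable.add(ip_str)
    return reachable, unreachable
```

The same in-subnet test is what the removed `lrouter_port_ip_reachable` DDlog function performed with its per-family `same_network` loops.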
> > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 90, > > - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", > > - .actions = actions, > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", > > - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v6 = ips_v6), > > - var ipv6 = FlatMap(ips_v6). > > - > > -for (SwitchPortNewDynamicAddress(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > > - .address = Some{addrs}) > > - if lsp.__type != i"external") { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 50, > > - .__match = i"eth.dst == ${addrs.ea}", > > - .actions = i"outport = ${json_name}; output;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -for (&SwitchPort(.lsp = lsp, > > - .json_name = json_name, > > - .sw = sw, > > - .peer = Some{&RouterPort{.lrp = lrp, > > - .is_redirect = is_redirect, > > - .router = &Router{._uuid = lr_uuid, > > - .l3dgw_ports = l3dgw_ports}}}) > > - if (lsp.addresses.contains(i"router") and lsp.__type != i"external")) > > -{ > > - Some{var mac} = scan_eth_addr(lrp.mac.ival()) in { > > - var add_chassis_resident_check = > > - not sw.localnet_ports.is_empty() and > > - (/* The peer of this port represents a distributed > > - * gateway port. The destination lookup flow for the > > - * router's distributed gateway port MAC address should > > - * only be programmed on the "redirect-chassis". */ > > - is_redirect or > > - /* Check if the option 'reside-on-redirect-chassis' > > - * is set to true on the peer port. 
If set to true > > - * and if the logical switch has a localnet port, it > > - * means the router pipeline for the packets from > > - * this logical switch should be run on the chassis > > - * hosting the gateway port. > > - */ > > - lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)) in > > - var __match = if (add_chassis_resident_check) { > > - var redirect_port_name = if (is_redirect) { > > - json_escape(chassis_redirect_name(lrp.name)) > > - } else { > > - match (l3dgw_ports.nth(0)) { > > - Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name)), > > - None -> "" > > - } > > - }; > > - /* The destination lookup flow for the router's > > - * distributed gateway port MAC address should only be > > - * programmed on the "redirect-chassis". */ > > - i"eth.dst == ${mac} && is_chassis_resident(${redirect_port_name})" > > - } else { > > - i"eth.dst == ${mac}" > > - } in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 50, > > - .__match = __match, > > - .actions = i"outport = ${json_name}; output;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Add ethernet addresses specified in NAT rules on > > - * distributed logical routers. 
*/ > > - if (is_redirect) { > > - for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) { > > - if (nat.nat.__type == i"dnat_and_snat") { > > - Some{var lport} = nat.nat.logical_port in > > - Some{var emac} = nat.nat.external_mac in > > - Some{var nat_mac} = eth_addr_from_string(emac.ival()) in > > - var __match = i"eth.dst == ${nat_mac} && is_chassis_resident(${json_escape(lport)})" in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 50, > > - .__match = __match, > > - .actions = i"outport = ${json_name}; output;", > > - .stage_hint = stage_hint(nat.nat._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > - } > > - } > > -} > > -// FIXME: do we care about this? > > -/* } else { > > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); > > - > > - VLOG_INFO_RL(&rl, > > - "%s: invalid syntax '%s' in addresses column", > > - op->nbsp->name, op->nbsp->addresses[i]); > > - }*/ > > - > > -/* Ingress table L2_LKUP and L2_UNKNOWN: Destination lookup for unknown MACs (priority 0). 
*/ > > -for (sw in &Switch(._uuid = ls_uuid)) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_LKUP(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"outport = get_fdb(eth.dst); next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_UNKNOWN(), > > - .priority = 50, > > - .__match = i"outport == \"none\"", > > - .actions = if (sw.has_unknown_ports) { > > - var mc_unknown = json_escape(mC_UNKNOWN().0); > > - i"outport = ${mc_unknown}; output;" > > - } else { > > - i"drop;" > > - }, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_L2_UNKNOWN(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Egress tables PORT_SEC_IP: Egress port security - IP (priority 0) > > - * Egress table PORT_SEC_L2: Egress port security L2 - multicast/broadcast (priority 100). 
*/ > > -for (&Switch(._uuid = ls_uuid)) { > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > > - .priority = 100, > > - .__match = i"eth.mcast", > > - .actions = i"output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > > - .priority = 100, > > - .__match = i"inport == ${sp.json_name}", > > - .actions = i"$[rEGBIT_LKUP_FDB()} = lookup_fdb(inport, eth.src); next;", > > - .stage_hint = stage_hint(lsp_uuid), > > - .io_port = Some{sp.lsp.name}, > > - .controller_meter = None), > > -Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > > - .priority = 100, > > - .__match = i"inport == ${sp.json_name} && ${rEGBIT_LKUP_FDB()} == 0", > > - .actions = i"put_fdb(inport, eth.src); next;", > > - .stage_hint = stage_hint(lsp_uuid), > > - .io_port = Some{sp.lsp.name}, > > - .controller_meter = None) :- > > - LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid), > > - sp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .__type = i""}, > > - .ps_addresses = vec_empty()). > > - > > -Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_LOOKUP_FDB(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None), > > -Flow(.logical_datapath = ls_uuid, > > - .stage = s_SWITCH_IN_PUT_FDB(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - &Switch(._uuid = ls_uuid). 
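The two LOOKUP_FDB flows quoted above implement a lookup-then-learn sequence for switches with "unknown"-address ports: `lookup_fdb` sets a register bit when the source MAC is already bound to the ingress port, and `put_fdb` learns it only when that bit is 0. A minimal sketch of that semantics with a plain dict standing in for the southbound FDB table (illustrative, not the OVN implementation):

```python
def process_packet(fdb, inport, eth_src):
    """Mimic the removed LOOKUP_FDB/PUT_FDB stages: check whether
    'eth_src' is already learned on 'inport' (lookup_fdb), and learn
    the binding (put_fdb) only on a miss.  Returns the lookup result."""
    known = fdb.get(eth_src) == inport   # lookup_fdb(inport, eth.src)
    if not known:
        fdb[eth_src] = inport            # put_fdb(inport, eth.src)
    return known
```

Splitting lookup and learn into two stages lets the priority-100 put flow be skipped entirely once the binding exists, which is the point of the `${rEGBIT_LKUP_FDB()} == 0` guard in the second flow.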
> > - > > -/* Egress table PORT_SEC_IP: Egress port security - IP (priorities 90 and 80) > > - * if port security enabled. > > - * > > - * Egress table PORT_SEC_L2: Egress port security - L2 (priorities 50 and 150). > > - * > > - * Priority 50 rules implement port security for enabled logical port. > > - * > > - * Priority 150 rules drop packets to disabled logical ports, so that they > > - * don't even receive multicast or broadcast packets. */ > > -Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > > - .priority = 50, > > - .__match = __match, > > - .actions = i"${queue_action}output;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) :- > > - &SwitchPort(.sw = sw, .lsp = lsp, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses), > > - lsp.is_enabled(), > > - lsp.__type != i"external", > > - var __match = if (ps_eth_addresses.is_empty()) { > > - i"outport == ${json_name}" > > - } else { > > - i"outport == ${json_name} && eth.dst == {${ps_eth_addresses.join(\" \")}}" > > - }, > > - pbinding in sb::Out_Port_Binding(.logical_port = lsp.name), > > - var queue_action = match ((lsp.__type.ival(), > > - pbinding.options.get(i"qdisc_queue_id"))) { > > - ("localnet", Some{queue_id}) -> "set_queue(${queue_id});", > > - _ -> "" > > - }. 
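The egress port-security IP flows in this hunk expand each port-security IPv4 entry into the set of allowed `ip4.dst` values: an address with a non-zero host part admits only that exact host plus its subnet broadcast, while a zero host part admits the whole subnet. A hedged Python sketch of that expansion rule, using stdlib `ipaddress` (function name illustrative; it approximates the removed `match_host_or_network` logic, not the OVN API):

```python
import ipaddress

def ps_ipv4_match_addrs(cidr):
    """Expand one port-security IPv4 entry into match strings:
      - non-zero host part: the exact address plus subnet broadcast;
      - zero host part: the whole network;
      - /32: just the single address."""
    iface = ipaddress.ip_interface(cidr)
    net = iface.network
    host_bits_zero = int(iface.ip) & int(net.hostmask) == 0
    if net.prefixlen < 32 and not host_bits_zero:
        return [str(iface.ip), str(net.broadcast_address)]
    return [str(net) if net.prefixlen < 32 else str(iface.ip)]
```

The removed flow then joins this list with `255.255.255.255` and `224.0.0.0/4` into the braced `ip4.dst == {...}` match set.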
> > - > > -for (&SwitchPort(.lsp = lsp, .json_name = json_name, .sw = sw)) { > > - if (not lsp.is_enabled() and lsp.__type != i"external") { > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_L2(), > > - .priority = 150, > > - .__match = i"outport == {$json_name}", > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - } > > -} > > - > > -for (SwitchPortPSAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, > > - .ps_addrs = ps) > > - if (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) > > - and lsp.__type != i"external") > > -{ > > - if (ps.ipv4_addrs.len() > 0) { > > - var addrs = { > > - var addrs = vec_empty(); > > - for (addr in ps.ipv4_addrs) { > > - /* When the netmask is applied, if the host portion is > > - * non-zero, the host can only use the specified > > - * address. If zero, the host is allowed to use any > > - * address in the subnet. 
> > - */ > > - addrs.push(addr.match_host_or_network()); > > - if (addr.plen < 32 and not addr.host().is_zero()) { > > - addrs.push("${addr.bcast()}") > > - } > > - }; > > - addrs > > - } in > > - var __match = > > - "outport == ${json_name} && eth.dst == ${ps.ea} && ip4.dst == {255.255.255.255, 224.0.0.0/4, " ++ > > - addrs.join(", ") ++ "}" in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - }; > > - if (ps.ipv6_addrs.len() > 0) { > > - var __match = "outport == ${json_name} && eth.dst == ${ps.ea}" ++ > > - build_port_security_ipv6_flow(Egress, ps.ea, ps.ipv6_addrs) in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > - }; > > - var __match = i"outport == ${json_name} && eth.dst == ${ps.ea} && ip" in > > - Flow(.logical_datapath = sw._uuid, > > - .stage = s_SWITCH_OUT_PORT_SEC_IP(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = Some{lsp.name}, > > - .controller_meter = None) > > -} > > - > > -/* Logical router ingress table ADMISSION: Admission control framework. */ > > -for (&Router(._uuid = lr_uuid)) { > > - /* Logical VLANs not supported. > > - * Broadcast/multicast source address is invalid. 
*/ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ADMISSION(), > > - .priority = 100, > > - .__match = i"vlan.present || eth.src[40]", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Logical router ingress table ADMISSION: match (priority 50). */ > > -for (&RouterPort(.lrp = lrp, > > - .json_name = json_name, > > - .networks = lrp_networks, > > - .router = router, > > - .is_redirect = is_redirect) > > - /* Drop packets from disabled logical ports (since logical flow > > - * tables are default-drop). */ > > - if lrp.is_enabled()) > > -{ > > - //if (op->derived) { > > - // /* No ingress packets should be received on a chassisredirect > > - // * port. */ > > - // continue; > > - //} > > - > > - /* Store the ethernet address of the port receiving the packet. > > - * This will save us from having to match on inport further down in > > - * the pipeline. > > - */ > > - var gw_mtu = lrp.options.get_int_def(i"gateway_mtu", 0) in > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN() in > > - var actions = if (gw_mtu > 0) { > > - "${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " > > - } else { > > - "" > > - } ++ "${rEG_INPORT_ETH_ADDR()} = ${lrp_networks.ea}; next;" in { > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ADMISSION(), > > - .priority = 50, > > - .__match = i"eth.mcast && inport == ${json_name}", > > - .actions = actions.intern(), > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - var __match = > > - "eth.dst == ${lrp_networks.ea} && inport == ${json_name}" ++ > > - if is_redirect { > > - /* Traffic with eth.dst = l3dgw_port->lrp_networks.ea > > - * should only be received on the "redirect-chassis". 
*/ > > - " && is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})" > > - } else { "" } in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ADMISSION(), > > - .priority = 50, > > - .__match = __match.intern(), > > - .actions = actions.intern(), > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > -} > > - > > - > > -/* Logical router ingress table LOOKUP_NEIGHBOR and > > - * table LEARN_NEIGHBOR. */ > > -/* Learn MAC bindings from ARP/IPv6 ND. > > - * > > - * For ARP packets, table LOOKUP_NEIGHBOR does a lookup for the > > - * (arp.spa, arp.sha) in the mac binding table using the 'lookup_arp' > > - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_RESULT bit. > > - * If "always_learn_from_arp_request" is set to false, it will also > > - * lookup for the (arp.spa) in the mac binding table using the > > - * "lookup_arp_ip" action for ARP request packets, and stores the > > - * result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit; or set that bit > > - * to "1" directly for ARP response packets. > > - * > > - * For IPv6 ND NA packets, table LOOKUP_NEIGHBOR does a lookup > > - * for the (nd.target, nd.tll) in the mac binding table using the > > - * 'lookup_nd' action and stores the result in > > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If > > - * "always_learn_from_arp_request" is set to false, > > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit is set. > > - * > > - * For IPv6 ND NS packets, table LOOKUP_NEIGHBOR does a lookup > > - * for the (ip6.src, nd.sll) in the mac binding table using the > > - * 'lookup_nd' action and stores the result in > > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If > > - * "always_learn_from_arp_request" is set to false, it will also lookup > > - * for the (ip6.src) in the mac binding table using the "lookup_nd_ip" > > - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT > > - * bit. 
> > - * > > - * Table LEARN_NEIGHBOR learns the mac-binding using the action > > - * - 'put_arp/put_nd'. Learning mac-binding is skipped if > > - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit is set or > > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT is not set. > > - * > > - * */ > > - > > -/* Flows for LOOKUP_NEIGHBOR. */ > > -for (&Router(._uuid = lr_uuid, > > - .learn_from_arp_request = learn_from_arp_request, > > - .copp = copp)) > > -var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in > > -var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in > > -{ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 100, > > - .__match = i"arp.op == 2", > > - .actions = > > - ("${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ > > - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ > > - "next;").intern(), > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 100, > > - .__match = i"nd_na", > > - .actions = > > - ("${rLNR} = lookup_nd(inport, nd.target, nd.tll); " ++ > > - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ > > - "next;").intern(), > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 100, > > - .__match = i"nd_ns", > > - .actions = > > - ("${rLNR} = lookup_nd(inport, ip6.src, nd.sll); " ++ > > - { if (learn_from_arp_request) "" else > > - "${rLNIR} = lookup_nd_ip(inport, ip6.src); " } ++ > > - "next;").intern(), > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* For other packet types, we can skip neighbor learning. > > - * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. 
*/ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"${rLNR} = 1; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Flows for LEARN_NEIGHBOR. */ > > - /* Skip Neighbor learning if not required. */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > > - .priority = 100, > > - .__match = > > - ("${rLNR} == 1" ++ > > - { if (learn_from_arp_request) "" else " || ${rLNIR} == 0" }).intern(), > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > > - .priority = 90, > > - .__match = i"arp", > > - .actions = i"put_arp(inport, arp.spa, arp.sha); next;", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ARP()), > > - .stage_hint = 0); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > > - .priority = 90, > > - .__match = i"nd_na", > > - .actions = i"put_nd(inport, nd.target, nd.tll); next;", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ND_NA()), > > - .stage_hint = 0); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), > > - .priority = 90, > > - .__match = i"nd_ns", > > - .actions = i"put_nd(inport, ip6.src, nd.sll); next;", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ND_NS()), > > - .stage_hint = 0) > > -} > > - > > -/* Check if we need to learn mac-binding from ARP requests. 
*/ > > -for (RouterPortNetworksIPv4Addr(rp@&RouterPort{.router = router}, addr)) { > > - var chassis_residence = match (rp.is_redirect) { > > - true -> " && is_chassis_resident(${json_escape(chassis_redirect_name(rp.lrp.name))})", > > - false -> "" > > - } in > > - var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in > > - var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in > > - var match0 = "inport == ${rp.json_name} && " > > - "arp.spa == ${addr.match_network()}" in > > - var match1 = "arp.op == 1" ++ chassis_residence in > > - var learn_from_arp_request = router.learn_from_arp_request in { > > - if (not learn_from_arp_request) { > > - /* ARP request to this address should always get learned, > > - * so add a priority-110 flow to set > > - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT to 1. */ > > - var __match = [match0, "arp.tpa == ${addr.addr}", match1] in > > - var actions = i"${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " > > - "${rLNIR} = 1; " > > - "next;" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 110, > > - .__match = __match.join(" && ").intern(), > > - .actions = actions, > > - .stage_hint = stage_hint(rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - var actions = "${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ > > - { if (learn_from_arp_request) "" else > > - "${rLNIR} = lookup_arp_ip(inport, arp.spa); " } ++ > > - "next;" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), > > - .priority = 100, > > - .__match = i"${match0} && ${match1}", > > - .actions = actions.intern(), > > - .stage_hint = stage_hint(rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > -} > > - > > - > > -/* Logical router ingress table IP_INPUT: IP Input. 
*/ > > -for (router in &Router(._uuid = lr_uuid, .mcast_cfg = mcast_cfg)) { > > - /* L3 admission control: drop multicast and broadcast source, localhost > > - * source or destination, and zero network source or destination > > - * (priority 100). */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 100, > > - .__match = i"ip4.src_mcast ||" > > - "ip4.src == 255.255.255.255 || " > > - "ip4.src == 127.0.0.0/8 || " > > - "ip4.dst == 127.0.0.0/8 || " > > - "ip4.src == 0.0.0.0/8 || " > > - "ip4.dst == 0.0.0.0/8", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Drop ARP packets (priority 85). ARP request packets for router's own > > - * IPs are handled with priority-90 flows. > > - * Drop IPv6 ND packets (priority 85). ND NA packets for router's own > > - * IPs are handled with priority-90 flows. > > - */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 85, > > - .__match = i"arp || nd", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Allow IPv6 multicast traffic that's supposed to reach the > > - * router pipeline (e.g., router solicitations). > > - */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 84, > > - .__match = i"nd_rs || nd_ra", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Drop other reserved multicast. */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 83, > > - .__match = i"ip6.mcast_rsvd", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Allow other multicast if relay enabled (priority 82). 
*/ > > - var mcast_action = { if (mcast_cfg.relay) { i"next;" } else { i"drop;" } } in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 82, > > - .__match = i"ip4.mcast || ip6.mcast", > > - .actions = mcast_action, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Drop Ethernet local broadcast. By definition this traffic should > > - * not be forwarded.*/ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 50, > > - .__match = i"eth.bcast", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* TTL discard */ > > - Flow( > > - .logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 30, > > - .__match = i"ip4 && ip.ttl == {0, 1}", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* Pass other traffic not already handled to the next table for > > - * routing. 
*/ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -function format_v4_networks(networks: lport_addresses, add_bcast: bool): string = > > -{ > > - var addrs = vec_empty(); > > - for (addr in networks.ipv4_addrs) { > > - addrs.push("${addr.addr}"); > > - if (add_bcast) { > > - addrs.push("${addr.bcast()}") > > - } else () > > - }; > > - if (addrs.len() == 1) { > > - addrs.join(", ") > > - } else { > > - "{" ++ addrs.join(", ") ++ "}" > > - } > > -} > > - > > -function format_v6_networks(networks: lport_addresses): string = > > -{ > > - var addrs = vec_empty(); > > - for (addr in networks.ipv6_addrs) { > > - addrs.push("${addr.addr}") > > - }; > > - if (addrs.len() == 1) { > > - addrs.join(", ") > > - } else { > > - "{" ++ addrs.join(", ") ++ "}" > > - } > > -} > > - > > -/* The following relation is used in ARP reply flow generation to determine whether > > - * the is_chassis_resident check must be added to the flow. > > - */ > > -relation AddChassisResidentCheck_(lrp: uuid, add_check: bool) > > - > > -AddChassisResidentCheck_(lrp._uuid, res) :- > > - &SwitchPort(.peer = Some{&RouterPort{.lrp = lrp, .router = router, .is_redirect = is_redirect}}, > > - .sw = sw), > > - not router.l3dgw_ports.is_empty(), > > - not sw.localnet_ports.is_empty(), > > - var res = if (is_redirect) { > > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea > > - * should only be sent from the "redirect-chassis", so that > > - * upstream MAC learning points to the "redirect-chassis". > > - * Also need to avoid generation of multiple ARP responses > > - * from different chassis. */ > > - true > > - } else { > > - /* Check if the option 'reside-on-redirect-chassis' > > - * is set to true on the router port. 
If set to true > > - * and if peer's logical switch has a localnet port, it > > - * means the router pipeline for the packets from > > - * peer's logical switch is to be run on the chassis > > - * hosting the gateway port and it should reply to the > > - * ARP requests for the router port IPs. > > - */ > > - lrp.options.get_bool_def(i"reside-on-redirect-chassis", false) > > - }. > > - > > - > > -relation AddChassisResidentCheck(lrp: uuid, add_check: bool) > > - > > -AddChassisResidentCheck(lrp, add_check) :- > > - AddChassisResidentCheck_(lrp, add_check). > > - > > -AddChassisResidentCheck(lrp, false) :- > > - &nb::Logical_Router_Port(._uuid = lrp), > > - not AddChassisResidentCheck_(lrp, _). > > - > > - > > -/* Logical router ingress table IP_INPUT: IP Input for IPv4. */ > > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp) > > - if (not networks.ipv4_addrs.is_empty())) > > -{ > > - /* L3 admission control: drop packets that originate from an > > - * IPv4 address owned by the router or a broadcast address > > - * known to the router (priority 100). */ > > - var __match = "ip4.src == " ++ > > - format_v4_networks(networks, true) ++ > > - " && ${rEGBIT_EGRESS_LOOPBACK()} == 0" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* ICMP echo reply. These flows reply to ICMP echo requests > > - * received for the router's IP address. Since packets only > > - * get here as part of the logical router datapath, the inport > > - * (i.e. the incoming locally attached net) does not matter. 
> > - * The ip.ttl also does not matter (RFC1812 section 4.2.2.9) */ > > - var __match = "ip4.dst == " ++ > > - format_v4_networks(networks, false) ++ > > - " && icmp4.type == 8 && icmp4.code == 0" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"ip4.dst <-> ip4.src; " > > - "ip.ttl = 255; " > > - "icmp4.type = 0; " > > - "flags.loopback = 1; " > > - "next; ", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Priority-90-92 flows handle ARP requests and ND packets. Most are > > - * per logical port but DNAT addresses can be handled per datapath > > - * for non-gateway router ports. > > - * > > - * Priority 91 and 92 flows are added for each gateway router > > - * port to handle the special cases. In case we get the packet > > - * on a regular port, just reply with the port's ETH address. > > - */ > > -LogicalRouterNatArpNdFlow(router, nat) :- > > - router in &Router(._uuid = lr), > > - LogicalRouterNAT(.lr = lr, .nat = nat@NAT{.nat = &nb::NAT{.__type = __type}}), > > - /* Skip SNAT entries for now, we handle unique SNAT IPs separately > > - * below. > > - */ > > - __type != i"snat". > > -/* Now handle SNAT entries too, one per unique SNAT IP. */ > > -LogicalRouterNatArpNdFlow(router, nat) :- > > - router in &Router(.snat_ips = snat_ips), > > - var snat_ip = FlatMap(snat_ips), > > - (var ip, var nats) = snat_ip, > > - Some{var nat} = nats.nth(0). > > - > > -relation LogicalRouterNatArpNdFlow(router: Intern<Router>, nat: NAT) > > -LogicalRouterArpNdFlow(router, nat, None, rEG_INPORT_ETH_ADDR(), None, false, 90) :- > > - LogicalRouterNatArpNdFlow(router, nat). > > - > > -/* ARP / ND handling for external IP addresses. > > - * > > - * DNAT and SNAT IP addresses are external IP addresses that need ARP > > - * handling. > > - * > > - * These are already taken care of globally, per router. 
The only > > - * exception is on the l3dgw_port where we might need to use a > > - * different ETH address. > > - */ > > -LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :- > > - router in &Router(._uuid = lr_uuid, .l3dgw_ports = l3dgw_ports), > > - Some {var l3dgw_port} = l3dgw_ports.nth(0), > > - LogicalRouterNAT(lr_uuid, nat), > > - /* Skip SNAT entries for now, we handle unique SNAT IPs separately > > - * below. > > - */ > > - nat.nat.__type != i"snat". > > -/* Now handle SNAT entries too, one per unique SNAT IP. */ > > -LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :- > > - router in &Router(.l3dgw_ports = l3dgw_ports, .snat_ips = snat_ips), > > - Some {var l3dgw_port} = l3dgw_ports.nth(0), > > - var snat_ip = FlatMap(snat_ips), > > - (var ip, var nats) = snat_ip, > > - Some{var nat} = nats.nth(0). > > - > > -/* Respond to ARP/NS requests on the chassis that binds the gw > > - * port. Drop the ARP/NS requests on other chassis. > > - */ > > -relation LogicalRouterPortNatArpNdFlow(router: Intern<Router>, nat: NAT, lrp: Intern<nb::Logical_Router_Port>) > > -LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, Some{extra_match}, false, 92), > > -LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, None, true, 91) :- > > - LogicalRouterPortNatArpNdFlow(router, nat, lrp), > > - (var mac, var extra_match) = match ((nat.external_mac, nat.nat.logical_port)) { > > - (Some{external_mac}, Some{logical_port}) -> ( > > - /* distributed NAT case, use nat->external_mac */ > > - external_mac.to_string().intern(), > > - /* Traffic with eth.src = nat->external_mac should only be > > - * sent from the chassis where nat->logical_port is > > - * resident, so that upstream MAC learning points to the > > - * correct chassis. Also need to avoid generation of > > - * multiple ARP responses from different chassis. 
*/ > > - i"is_chassis_resident(${json_escape(logical_port)})" > > - ), > > - _ -> ( > > - rEG_INPORT_ETH_ADDR(), > > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s > > - * should only be sent from the gateway chassis, so that > > - * upstream MAC learning points to the gateway chassis. > > - * Also need to avoid generation of multiple ARP responses > > - * from different chassis. */ > > - match (router.l3dgw_ports.nth(0)) { > > - None -> i"", > > - Some {var gw_port} -> i"is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})" > > - } > > - ) > > - }. > > - > > -/* Now divide the ARP/ND flows into ARP and ND. */ > > -relation LogicalRouterArpNdFlow( > > - router: Intern<Router>, > > - nat: NAT, > > - lrp: Option<Intern<nb::Logical_Router_Port>>, > > - mac: istring, > > - extra_match: Option<istring>, > > - drop: bool, > > - priority: integer) > > -LogicalRouterArpFlow(router, lrp, i"${ipv4}", mac, extra_match, drop, priority, > > - stage_hint(nat.nat._uuid)) :- > > - LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv4{ipv4}}, lrp, > > - mac, extra_match, drop, priority). > > -LogicalRouterNdFlow(router, lrp, i"nd_na", ipv6, true, mac, extra_match, drop, priority, > > - stage_hint(nat.nat._uuid)) :- > > - LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv6{ipv6}}, lrp, > > - mac, extra_match, drop, priority). 
> > - > > -relation LogicalRouterNdFlowLB( > > - lr: Intern<Router>, > > - lrp: Option<Intern<nb::Logical_Router_Port>>, > > - ip: istring, > > - mac: istring, > > - extra_match: Option<istring>, > > - stage_hint: bit<32>) > > -Flow(.logical_datapath = lr._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = actions, > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = lr.copp.get(cOPP_ND_NA())) :- > > - LogicalRouterNdFlowLB(.lr = lr, .lrp = lrp, .ip = ip, > > - .mac = mac, .extra_match = extra_match, > > - .stage_hint = stage_hint), > > - var __match = { > > - var clauses = vec_with_capacity(4); > > - match (lrp) { > > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > > - None -> () > > - }; > > - clauses.push(i"nd_ns && nd.target == ${ip}"); > > - clauses.append(extra_match.to_vec()); > > - clauses.join(" && ") > > - }, > > - var actions = > > - i"nd_na { " > > - "eth.src = ${mac}; " > > - "ip6.src = nd.target; " > > - "nd.tll = ${mac}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output; " > > - "};". 
> > - > > -relation LogicalRouterArpFlow( > > - lr: Intern<Router>, > > - lrp: Option<Intern<nb::Logical_Router_Port>>, > > - ip: istring, > > - mac: istring, > > - extra_match: Option<istring>, > > - drop: bool, > > - priority: integer, > > - stage_hint: bit<32>) > > -Flow(.logical_datapath = lr._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = priority, > > - .__match = __match.intern(), > > - .actions = actions, > > - .stage_hint = stage_hint, > > - .io_port = None, > > - .controller_meter = None) :- > > - LogicalRouterArpFlow(.lr = lr, .lrp = lrp, .ip = ip, .mac = mac, > > - .extra_match = extra_match, .drop = drop, > > - .priority = priority, .stage_hint = stage_hint), > > - var __match = { > > - var clauses = vec_with_capacity(3); > > - match (lrp) { > > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > > - None -> () > > - }; > > - clauses.push(i"arp.op == 1 && arp.tpa == ${ip}"); > > - clauses.append(extra_match.to_vec()); > > - clauses.join(" && ") > > - }, > > - var actions = if (drop) { > > - i"drop;" > > - } else { > > - i"eth.dst = eth.src; " > > - "eth.src = ${mac}; " > > - "arp.op = 2; /* ARP reply */ " > > - "arp.tha = arp.sha; " > > - "arp.sha = ${mac}; " > > - "arp.tpa <-> arp.spa; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output;" > > - }. 
> > - > > -relation LogicalRouterNdFlow( > > - lr: Intern<Router>, > > - lrp: Option<Intern<nb::Logical_Router_Port>>, > > - action: istring, > > - ip: in6_addr, > > - sn_ip: bool, > > - mac: istring, > > - extra_match: Option<istring>, > > - drop: bool, > > - priority: integer, > > - stage_hint: bit<32>) > > -Flow(.logical_datapath = lr._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = priority, > > - .__match = __match.intern(), > > - .actions = actions, > > - .io_port = None, > > - .controller_meter = controller_meter, > > - .stage_hint = stage_hint) :- > > - LogicalRouterNdFlow(.lr = lr, .lrp = lrp, .action = action, .ip = ip, > > - .sn_ip = sn_ip, .mac = mac, .extra_match = extra_match, > > - .drop = drop, .priority = priority, > > - .stage_hint = stage_hint), > > - var __match = { > > - var clauses = vec_with_capacity(4); > > - match (lrp) { > > - Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"), > > - None -> () > > - }; > > - if (sn_ip) { > > - clauses.push(i"ip6.dst == {${ip}, ${ip.solicited_node()}}") > > - }; > > - clauses.push(i"nd_ns && nd.target == ${ip}"); > > - clauses.append(extra_match.to_vec()); > > - clauses.join(" && ") > > - }, > > - (var actions, var controller_meter) = if (drop) { > > - (i"drop;", None) > > - } else { > > - (i"${action} { " > > - "eth.src = ${mac}; " > > - "ip6.src = nd.target; " > > - "nd.tll = ${mac}; " > > - "outport = inport; " > > - "flags.loopback = 1; " > > - "output; " > > - "};", > > - lr.copp.get(cOPP_ND_NA())) > > - }. 
> > - > > -/* ICMP time exceeded */ > > -for (RouterPortNetworksIPv4Addr(.port = &RouterPort{.lrp = lrp, > > - .json_name = json_name, > > - .router = router, > > - .networks = networks, > > - .is_redirect = is_redirect}, > > - .addr = addr)) > > -{ > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 40, > > - .__match = i"inport == ${json_name} && ip4 && " > > - "ip.ttl == {0, 1} && !ip.later_frag", > > - .actions = i"icmp4 {" > > - "eth.dst <-> eth.src; " > > - "icmp4.type = 11; /* Time exceeded */ " > > - "icmp4.code = 0; /* TTL exceeded in transit */ " > > - "ip4.dst = ip4.src; " > > - "ip4.src = ${addr.addr}; " > > - "ip.ttl = 255; " > > - "next; };", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None); > > - > > - /* ARP reply. These flows reply to ARP requests for the router's own > > - * IP address. */ > > - for (AddChassisResidentCheck(lrp._uuid, add_chassis_resident_check)) { > > - var __match = > > - "arp.spa == ${addr.match_network()}" ++ > > - if (add_chassis_resident_check) { > > - var redirect_port_name = if (is_redirect) { > > - json_escape(chassis_redirect_name(lrp.name)) > > - } else { > > - match (router.l3dgw_ports.nth(0)) { > > - None -> "", > > - Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name)) > > - } > > - }; > > - " && is_chassis_resident(${redirect_port_name})" > > - } else "" in > > - LogicalRouterArpFlow(.lr = router, > > - .lrp = Some{lrp}, > > - .ip = i"${addr.addr}", > > - .mac = rEG_INPORT_ETH_ADDR(), > > - .extra_match = Some{__match.intern()}, > > - .drop = false, > > - .priority = 90, > > - .stage_hint = stage_hint(lrp._uuid)) > > - } > > -} > > - > > -LogicalRouterNdFlow(.lr = r, > > - .lrp = Some{lrp}, > > - .action = i"nd_na", > > - .ip = ip, > > - .sn_ip = false, > > - .mac = rEG_INPORT_ETH_ADDR(), > > - .extra_match = residence_check, > > - .drop = false, > > - .priority = 90, > > - .stage_hint = 0) :- > > - 
&LBVIP(.vip_addr = IPv6{ip}, .lb = lb), > > - RouterLB(r, lb._uuid), > > - &RouterPort(.router = r, .lrp = lrp, .is_redirect = is_redirect), > > - var residence_check = match (is_redirect) { > > - true -> Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"}, > > - false -> None > > - }. > > - > > -for (&RouterPort(.lrp = lrp, > > - .router = router@&Router{._uuid = lr_uuid}, > > - .json_name = json_name, > > - .networks = networks, > > - .is_redirect = is_redirect)) { > > - for (lbips in &LogicalRouterLBIPs(.lr = lr_uuid)) { > > - var residence_check = match (is_redirect) { > > - true -> Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"}, > > - false -> None > > - } in { > > - var all_ipv4s = union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable) in > > - not all_ipv4s.is_empty() in > > - LogicalRouterArpFlow(.lr = router, > > - .lrp = Some{lrp}, > > - .ip = i"{ ${all_ipv4s.to_vec().join(\", \")} }", > > - .mac = rEG_INPORT_ETH_ADDR(), > > - .extra_match = residence_check, > > - .drop = false, > > - .priority = 90, > > - .stage_hint = 0); > > - > > - var all_ipv6s = union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable) in > > - not all_ipv6s.is_empty() in > > - LogicalRouterNdFlowLB(.lr = router, > > - .lrp = Some{lrp}, > > - .ip = ("{ " ++ all_ipv6s.to_vec().join(", ") ++ " }").intern(), > > - .mac = rEG_INPORT_ETH_ADDR(), > > - .extra_match = residence_check, > > - .stage_hint = 0) > > - } > > - } > > -} > > - > > -/* Drop IP traffic destined to router owned IPs except if the IP is > > - * also a SNAT IP. Those are dropped later, in stage > > - * "lr_in_arp_resolve", if unSNAT was unsuccessful. > > - * > > - * Priority 60. 
> > - */ > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 60, > > - .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lrp_uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > > - .router = &Router{.snat_ips = snat_ips, > > - .force_lb_snat = false, > > - ._uuid = lr_uuid}, > > - .networks = networks), > > - var addr = FlatMap(networks.ipv4_addrs), > > - not snat_ips.contains_key(IPv4{addr.addr}), > > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 60, > > - .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lrp_uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > > - .router = &Router{.snat_ips = snat_ips, > > - .force_lb_snat = false, > > - ._uuid = lr_uuid}, > > - .networks = networks), > > - var addr = FlatMap(networks.ipv6_addrs), > > - not snat_ips.contains_key(IPv6{addr.addr}), > > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > > - > > -for (RouterPortNetworksIPv4Addr( > > - .port = &RouterPort{ > > - .router = &Router{._uuid = lr_uuid, > > - .l3dgw_ports = vec_empty(), > > - .is_gateway = false, > > - .copp = copp}, > > - .lrp = lrp}, > > - .addr = addr)) > > -{ > > - /* UDP/TCP/SCTP port unreachable. 
*/ > > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && udp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"icmp4 {" > > - "eth.dst <-> eth.src; " > > - "ip4.dst <-> ip4.src; " > > - "ip.ttl = 255; " > > - "icmp4.type = 3; " > > - "icmp4.code = 3; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ICMP4_ERR()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && tcp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"tcp_reset {" > > - "eth.dst <-> eth.src; " > > - "ip4.dst <-> ip4.src; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_TCP_RESET()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && sctp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"sctp_abort {" > > - "eth.dst <-> eth.src; " > > - "ip4.dst <-> ip4.src; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_TCP_RESET()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 70, > > - .__match = __match, > > - .actions = i"icmp4 {" > > - "eth.dst <-> eth.src; " > > - "ip4.dst <-> ip4.src; " > > - "ip.ttl = 255; " > > - "icmp4.type = 3; " > > - "icmp4.code = 2; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ICMP4_ERR()), > > - .stage_hint = stage_hint(lrp._uuid)) > > -} > > - > > -/* DHCPv6 reply handling */ > > -Flow(.logical_datapath = 
rp.router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 100, > > - .__match = i"ip6.dst == ${ipv6_addr.addr} " > > - "&& udp.src == 547 && udp.dst == 546", > > - .actions = i"reg0 = 0; handle_dhcpv6_reply;", > > - .stage_hint = stage_hint(rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - rp in &RouterPort(), > > - var ipv6_addr = FlatMap(rp.networks.ipv6_addrs). > > - > > -/* Logical router ingress table IP_INPUT: IP Input for IPv6. */ > > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp) > > - if (not networks.ipv6_addrs.is_empty())) > > -{ > > - //if (op->derived) { > > - // /* No ingress packets are accepted on a chassisredirect > > - // * port, so no need to program flows for that port. */ > > - // continue; > > - //} > > - > > - /* ICMPv6 echo reply. These flows reply to echo requests > > - * received for the router's IP address. */ > > - var __match = "ip6.dst == " ++ > > - format_v6_networks(networks) ++ > > - " && icmp6.type == 128 && icmp6.code == 0" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 90, > > - .__match = __match.intern(), > > - .actions = i"ip6.dst <-> ip6.src; " > > - "ip.ttl = 255; " > > - "icmp6.type = 129; " > > - "flags.loopback = 1; " > > - "next; ", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* ND reply. These flows reply to ND solicitations for the > > - * router's own IP address. */ > > -for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.lrp = lrp, > > - .is_redirect = is_redirect, > > - .router = router, > > - .networks = networks, > > - .json_name = json_name}, > > - .addr = addr)) > > -{ > > - var extra_match = if (is_redirect) { > > - /* Traffic with eth.src = l3dgw_port->lrp_networks.ea > > - * should only be sent from the gateway chassis, so that > > - * upstream MAC learning points to the gateway chassis. 
> > - * Also need to avoid generation of multiple ND replies > > - * from different chassis. */ > > - Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"} > > - } else None in > > - LogicalRouterNdFlow(.lr = router, > > - .lrp = Some{lrp}, > > - .action = i"nd_na_router", > > - .ip = addr.addr, > > - .sn_ip = true, > > - .mac = rEG_INPORT_ETH_ADDR(), > > - .extra_match = extra_match, > > - .drop = false, > > - .priority = 90, > > - .stage_hint = stage_hint(lrp._uuid)) > > -} > > - > > -/* UDP/TCP/SCTP port unreachable */ > > -for (RouterPortNetworksIPv6Addr( > > - .port = &RouterPort{.router = &Router{._uuid = lr_uuid, > > - .l3dgw_ports = vec_empty(), > > - .is_gateway = false, > > - .copp = copp}, > > - .lrp = lrp, > > - .json_name = json_name}, > > - .addr = addr)) > > -{ > > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && tcp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"tcp_reset {" > > - "eth.dst <-> eth.src; " > > - "ip6.dst <-> ip6.src; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_TCP_RESET()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && sctp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"sctp_abort {" > > - "eth.dst <-> eth.src; " > > - "ip6.dst <-> ip6.src; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_TCP_RESET()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && udp" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 80, > > - .__match = __match, > > - .actions = i"icmp6 {" > > - "eth.dst <-> eth.src; " > > - "ip6.dst <-> ip6.src; 
" > > - "ip.ttl = 255; " > > - "icmp6.type = 1; " > > - "icmp6.code = 4; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ICMP6_ERR()), > > - .stage_hint = stage_hint(lrp._uuid)); > > - > > - var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag" in > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 70, > > - .__match = __match, > > - .actions = i"icmp6 {" > > - "eth.dst <-> eth.src; " > > - "ip6.dst <-> ip6.src; " > > - "ip.ttl = 255; " > > - "icmp6.type = 1; " > > - "icmp6.code = 3; " > > - "next; };", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ICMP6_ERR()), > > - .stage_hint = stage_hint(lrp._uuid)) > > -} > > - > > -/* ICMPv6 time exceeded */ > > -for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.router = router, > > - .lrp = lrp, > > - .json_name = json_name}, > > - .addr = addr) > > - /* skip link-local address */ > > - if (not addr.is_lla())) > > -{ > > - var __match = i"inport == ${json_name} && ip6 && " > > - "ip6.src == ${addr.match_network()} && " > > - "ip.ttl == {0, 1} && !ip.later_frag" in > > - var actions = i"icmp6 {" > > - "eth.dst <-> eth.src; " > > - "ip6.dst = ip6.src; " > > - "ip6.src = ${addr.addr}; " > > - "ip.ttl = 255; " > > - "icmp6.type = 3; /* Time exceeded */ " > > - "icmp6.code = 0; /* TTL exceeded in transit */ " > > - "next; };" in > > - Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 40, > > - .__match = __match, > > - .actions = actions, > > - .io_port = None, > > - .controller_meter = router.copp.get(cOPP_ICMP6_ERR()), > > - .stage_hint = stage_hint(lrp._uuid)) > > -} > > - > > -/* NAT, Defrag and load balancing. 
*/ > > - > > -function default_allow_flow(datapath: uuid, stage: Intern<Stage>): Flow { > > - Flow{.logical_datapath = datapath, > > - .stage = stage, > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .io_port = None, > > - .controller_meter = None, > > - .stage_hint = 0} > > -} > > -for (r in &Router(._uuid = lr_uuid)) { > > - /* Packets are allowed by default. */ > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DEFRAG())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_UNSNAT())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_SNAT())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DNAT())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_UNDNAT())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_POST_UNDNAT())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_EGR_LOOP())]; > > - Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_ECMP_STATEFUL())]; > > - > > - /* Send the IPv6 NS packets to next table. When ovn-controller > > - * generates IPv6 NS (for the action - nd_ns{}), the injected > > - * packet would go through conntrack - which is not required. */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_OUT_SNAT(), > > - .priority = 120, > > - .__match = i"nd_ns", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -for (r in &Router(._uuid = lr_uuid, > > - .l3dgw_ports = l3dgw_ports, > > - .is_gateway = is_gateway, > > - .nat = nat)) { > > - for (LogicalRouterLBs(lr_uuid, lbs)) { > > - if ((l3dgw_ports.len() > 0 or is_gateway) and (not is_empty(nat) or not is_empty(lbs))) { > > - /* If the router has load balancer or DNAT rules, re-circulate every packet > > - * through the DNAT zone so that packets that need to be unDNATed in the > > - * reverse direction get unDNATed. > > - * > > - * We also commit newly initiated connections in the reply direction to the > > - * DNAT zone. This ensures that these flows are tracked. 
If the flow was > > - * not committed, it would produce ongoing datapath flows with the ct.new > > - * flag set. Some NICs are unable to offload these flows. > > - */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_OUT_POST_UNDNAT(), > > - .priority = 50, > > - .__match = i"ip && ct.new", > > - .actions = i"ct_commit { } ; next; ", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_OUT_UNDNAT(), > > - .priority = 50, > > - .__match = i"ip", > > - .actions = i"flags.loopback = 1; ct_dnat;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -Flow(.logical_datapath = lr, > > - .stage = s_ROUTER_OUT_SNAT(), > > - .priority = 120, > > - .__match = i"flags.skip_snat_for_lb == 1 && ip", > > - .actions = i"next;", > > - .stage_hint = stage_hint(lb.lb._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - LogicalRouterLB(lr, lb), > > - lb.lb.options.get_bool_def(i"skip_snat", false) > > - . > > - > > -function lrouter_nat_is_stateless(nat: NAT): bool = { > > - Some{i"true"} == nat.nat.options.get(i"stateless") > > -} > > - > > -/* Handles the match criteria and actions in logical flow > > - * based on external ip based NAT rule filter. > > - * > > - * For ALLOWED_EXT_IPs, we will add an additional match criteria > > - * of comparing ip*.src/dst with the allowed external ip address set. > > - * > > - * For EXEMPTED_EXT_IPs, we will have an additional logical flow > > - * where we compare ip*.src/dst with the exempted external ip address set > > - * and action says "next" instead of ct*. 
> > - */ > > -function lrouter_nat_add_ext_ip_match( > > - router: Intern<Router>, > > - nat: NAT, > > - __match: string, > > - ipX: string, > > - is_src: bool, > > - mask: v46_ip): (string, Option<Flow>) > > -{ > > - var dir = if (is_src) "src" else "dst"; > > - match (nat.exceptional_ext_ips) { > > - None -> ("", None), > > - Some{AllowedExtIps{__as}} -> (" && ${ipX}.${dir} == $${__as.name}", None), > > - Some{ExemptedExtIps{__as}} -> { > > - /* Priority of logical flows corresponding to exempted_ext_ips is > > - * +1 of the corresponding regular NAT rule. > > - * For example, if we have following NAT rule and we associate > > - * exempted external ips to it: > > - * "ovn-nbctl lr-nat-add router dnat_and_snat 10.15.24.139 50.0.0.11" > > - * > > - * And now we associate exempted external ip address set to it. > > - * Now corresponding to above rule we will have following logical > > - * flows: > > - * lr_out_snat...priority=162, match=(..ip4.dst == $exempt_range), > > - * action=(next;) > > - * lr_out_snat...priority=161, match=(..), action=(ct_snat(....);) > > - * > > - */ > > - var priority = match (is_src) { > > - true -> { > > - /* S_ROUTER_IN_DNAT uses priority 100 */ > > - 100 + 1 > > - }, > > - false -> { > > - /* S_ROUTER_OUT_SNAT uses priority (mask + 1 + 128 + 1) */ > > - var is_gw_router = router.l3dgw_ports.is_empty(); > > - var mask_1bits = mask.cidr_bits().unwrap_or(8'd0) as integer; > > - mask_1bits + 2 + { if (not is_gw_router) 128 else 0 } > > - } > > - }; > > - > > - ("", > > - Some{Flow{.logical_datapath = router._uuid, > > - .stage = if (is_src) { s_ROUTER_IN_DNAT() } else { s_ROUTER_OUT_SNAT() }, > > - .priority = priority, > > - .__match = i"${__match} && ${ipX}.${dir} == $${__as.name}", > > - .actions = i"next;", > > - .stage_hint = stage_hint(nat.nat._uuid), > > - .io_port = None, > > - .controller_meter = None}}) > > - } > > - } > > -} > > - > > -relation LogicalRouterForceSnatFlows( > > - logical_router: uuid, > > - ips: Set<v46_ip>, > > 
> > - context: string)
> > -Flow(.logical_datapath = logical_router,
> > -     .stage = s_ROUTER_IN_UNSNAT(),
> > -     .priority = 110,
> > -     .__match = i"${ipX} && ${ipX}.dst == ${ip}",
> > -     .actions = i"ct_snat;",
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None),
> > -/* Higher priority rules to force SNAT with the IP addresses
> > - * configured in the Gateway router. This only takes effect
> > - * when the packet has already been DNATed or load balanced once. */
> > -Flow(.logical_datapath = logical_router,
> > -     .stage = s_ROUTER_OUT_SNAT(),
> > -     .priority = 100,
> > -     .__match = i"flags.force_snat_for_${context} == 1 && ${ipX}",
> > -     .actions = i"ct_snat(${ip});",
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    LogicalRouterForceSnatFlows(.logical_router = logical_router,
> > -                                .ips = ips,
> > -                                .context = context),
> > -    var ip = FlatMap(ips),
> > -    var ipX = ip.ipX().
> > -
> > -/* Higher priority rules to force SNAT with the router port ip.
> > - * This only takes effect when the packet has already been
> > - * load balanced once. */
> > -for (rp in &RouterPort(.router = &Router{._uuid = lr_uuid, .options = lr_options}, .lrp = lrp)) {
> > -    if (lb_force_snat_router_ip(lr_options) and rp.peer != PeerNone) {
> > -        Some{var ipv4} = rp.networks.ipv4_addrs.nth(0) in {
> > -            Flow(.logical_datapath = lr_uuid,
> > -                 .stage = s_ROUTER_IN_UNSNAT(),
> > -                 .priority = 110,
> > -                 .__match = i"inport == ${rp.json_name} && ip4.dst == ${ipv4.addr}",
> > -                 .actions = i"ct_snat;",
> > -                 .stage_hint = 0,
> > -                 .io_port = None,
> > -                 .controller_meter = None);
> > -
> > -            Flow(.logical_datapath = lr_uuid,
> > -                 .stage = s_ROUTER_OUT_SNAT(),
> > -                 .priority = 110,
> > -                 .__match = i"flags.force_snat_for_lb == 1 && ip4 && outport == ${rp.json_name}",
> > -                 .actions = i"ct_snat(${ipv4.addr});",
> > -                 .stage_hint = 0,
> > -                 .io_port = None,
> > -                 .controller_meter = None);
> > -
> > -            if (rp.networks.ipv4_addrs.len() > 1) {
> > -                Warning["Logical router port ${rp.json_name} is configured with multiple IPv4 "
> > -                        "addresses. Only the first IP [${ipv4.addr}] is considered as SNAT for "
> > -                        "load balancer"]
> > -            }
> > -        };
> > -
> > -        /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> > -         * last in the list. So add the flows only if n_ipv6_addrs > 1. */
> > -        if (rp.networks.ipv6_addrs.len() > 1) {
> > -            Some{var ipv6} = rp.networks.ipv6_addrs.nth(0) in {
> > -                Flow(.logical_datapath = lr_uuid,
> > -                     .stage = s_ROUTER_IN_UNSNAT(),
> > -                     .priority = 110,
> > -                     .__match = i"inport == ${rp.json_name} && ip6.dst == ${ipv6.addr}",
> > -                     .actions = i"ct_snat;",
> > -                     .stage_hint = 0,
> > -                     .io_port = None,
> > -                     .controller_meter = None);
> > -
> > -                Flow(.logical_datapath = lr_uuid,
> > -                     .stage = s_ROUTER_OUT_SNAT(),
> > -                     .priority = 110,
> > -                     .__match = i"flags.force_snat_for_lb == 1 && ip6 && outport == ${rp.json_name}",
> > -                     .actions = i"ct_snat(${ipv6.addr});",
> > -                     .stage_hint = 0,
> > -                     .io_port = None,
> > -                     .controller_meter = None);
> > -
> > -                if (rp.networks.ipv6_addrs.len() > 2) {
> > -                    Warning["Logical router port ${rp.json_name} is configured with multiple IPv6 "
> > -                            "addresses. Only the first IP [${ipv6.addr}] is considered as SNAT for "
> > -                            "load balancer"]
> > -                }
> > -            }
> > -        }
> > -    }
> > -}
> > -
> > -relation VirtualLogicalPort(logical_port: Option<istring>)
> > -VirtualLogicalPort(Some{logical_port}) :-
> > -    lsp in &nb::Logical_Switch_Port(.name = logical_port, .__type = i"virtual").
> > -
> > -/* NAT rules are only valid on Gateway routers and routers with
> > - * l3dgw_port (router has a port with "redirect-chassis"
> > - * specified). */
> > -for (r in &Router(._uuid = lr_uuid,
> > -                  .l3dgw_ports = l3dgw_ports,
> > -                  .is_gateway = is_gateway)
> > -     if not l3dgw_ports.is_empty() or is_gateway)
> > -{
> > -    for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) {
> > -        var ipX = nat.external_ip.ipX() in
> > -        var xx = nat.external_ip.xxreg() in
> > -        /* Check the validity of nat->logical_ip. 'logical_ip' can
> > -         * be a subnet when the type is "snat". */
> > -        Some{(_, var mask)} = ip46_parse_masked(nat.nat.logical_ip.ival()) in
> > -        true == match ((mask.is_all_ones(), nat.nat.__type.ival())) {
> > -            (_, "snat") -> true,
> > -            (false, _) -> {
> > -                warn("bad ip ${nat.nat.logical_ip} for dnat in router ${uuid2str(lr_uuid)}");
> > -                false
> > -            },
> > -            _ -> true
> > -        } in
> > -        /* For distributed router NAT, determine whether this NAT rule
> > -         * satisfies the conditions for distributed NAT processing. */
> > -        var mac = match ((not l3dgw_ports.is_empty() and nat.nat.__type == i"dnat_and_snat",
> > -                          nat.nat.logical_port, nat.external_mac)) {
> > -            (true, Some{_}, Some{mac}) -> Some{mac},
> > -            _ -> None
> > -        } in
> > -        var stateless = (lrouter_nat_is_stateless(nat)
> > -                         and nat.nat.__type == i"dnat_and_snat") in
> > -        {
> > -            /* Ingress UNSNAT table: It is for already established connections'
> > -             * reverse traffic. i.e., SNAT has already been done in egress
> > -             * pipeline and now the packet has entered the ingress pipeline as
> > -             * part of a reply. We undo the SNAT here.
> > -             *
> > -             * Undoing SNAT has to happen before DNAT processing. This is
> > -             * because when the packet was DNATed in ingress pipeline, it did
> > -             * not know about the possibility of eventual additional SNAT in
> > -             * egress pipeline. */
> > -            if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") {
> > -                if (l3dgw_ports.is_empty()) {
> > -                    /* Gateway router. */
> > -                    var actions = if (stateless) {
> > -                        i"${ipX}.dst=${nat.nat.logical_ip}; next;"
> > -                    } else {
> > -                        i"ct_snat;"
> > -                    } in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_IN_UNSNAT(),
> > -                         .priority = 90,
> > -                         .__match = i"ip && ${ipX}.dst == ${nat.nat.external_ip}",
> > -                         .actions = actions,
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                };
> > -                Some {var gwport} = l3dgw_ports.nth(0) in {
> > -                    /* Distributed router. */
> > -
> > -                    /* Traffic received on l3dgw_port is subject to NAT. */
> > -                    var __match =
> > -                        "ip && ${ipX}.dst == ${nat.nat.external_ip}"
> > -                        " && inport == ${json_escape(gwport.name)}" ++
> > -                        if (mac == None) {
> > -                            /* Flows for NAT rules that are centralized are only
> > -                             * programmed on the "redirect-chassis". */
> > -                            " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})"
> > -                        } else { "" } in
> > -                    var actions = if (stateless) {
> > -                        i"${ipX}.dst=${nat.nat.logical_ip}; next;"
> > -                    } else {
> > -                        i"ct_snat;"
> > -                    } in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_IN_UNSNAT(),
> > -                         .priority = 100,
> > -                         .__match = __match.intern(),
> > -                         .actions = actions,
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                }
> > -            };
> > -
> > -            /* Ingress DNAT table: Packets enter the pipeline with destination
> > -             * IP address that needs to be DNATted from a external IP address
> > -             * to a logical IP address. */
> > -            var ip_and_ports = "${nat.nat.logical_ip}" ++
> > -                if (nat.nat.external_port_range != i"") {
> > -                    " ${nat.nat.external_port_range}"
> > -                } else {
> > -                    ""
> > -                } in
> > -            if (nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat") {
> > -                l3dgw_ports.is_empty() in
> > -                var __match = "ip && ${ipX}.dst == ${nat.nat.external_ip}" in
> > -                (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match(
> > -                    r, nat, __match, ipX, true, mask) in
> > -                {
> > -                    /* Gateway router. */
> > -                    /* Packet when it goes from the initiator to destination.
> > -                     * We need to set flags.loopback because the router can
> > -                     * send the packet back through the same interface. */
> > -                    Some{var f} = ext_flow in Flow[f];
> > -
> > -                    var flag_action =
> > -                        if (has_force_snat_ip(r.options, i"dnat")) {
> > -                            /* Indicate to the future tables that a DNAT has taken
> > -                             * place and a force SNAT needs to be done in the
> > -                             * Egress SNAT table. */
> > -                            "flags.force_snat_for_dnat = 1; "
> > -                        } else { "" } in
> > -                    var nat_actions = if (stateless) {
> > -                        "${ipX}.dst=${nat.nat.logical_ip}; next;"
> > -                    } else {
> > -                        "flags.loopback = 1; "
> > -                        "ct_dnat(${ip_and_ports});"
> > -                    } in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_IN_DNAT(),
> > -                         .priority = 100,
> > -                         .__match = (__match ++ ext_ip_match).intern(),
> > -                         .actions = (flag_action ++ nat_actions).intern(),
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                };
> > -
> > -                Some {var gwport} = l3dgw_ports.nth(0) in
> > -                var __match =
> > -                    "ip && ${ipX}.dst == ${nat.nat.external_ip}"
> > -                    " && inport == ${json_escape(gwport.name)}" ++
> > -                    if (mac == None) {
> > -                        /* Flows for NAT rules that are centralized are only
> > -                         * programmed on the "redirect-chassis". */
> > -                        " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})"
> > -                    } else { "" } in
> > -                (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match(
> > -                    r, nat, __match, ipX, true, mask) in
> > -                {
> > -                    /* Distributed router. */
> > -                    /* Traffic received on l3dgw_port is subject to NAT. */
> > -                    Some{var f} = ext_flow in Flow[f];
> > -
> > -                    var actions = if (stateless) {
> > -                        i"${ipX}.dst=${nat.nat.logical_ip}; next;"
> > -                    } else {
> > -                        i"ct_dnat(${ip_and_ports});"
> > -                    } in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_IN_DNAT(),
> > -                         .priority = 100,
> > -                         .__match = (__match ++ ext_ip_match).intern(),
> > -                         .actions = actions,
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                }
> > -            };
> > -
> > -            /* ARP resolve for NAT IPs. */
> > -            Some {var gwport} = l3dgw_ports.nth(0) in {
> > -                var gwport_name = json_escape(gwport.name) in {
> > -                    if (nat.nat.__type == i"snat") {
> > -                        var __match = i"inport == ${gwport_name} && "
> > -                                      "${ipX}.src == ${nat.nat.external_ip}" in
> > -                        Flow(.logical_datapath = lr_uuid,
> > -                             .stage = s_ROUTER_IN_IP_INPUT(),
> > -                             .priority = 120,
> > -                             .__match = __match,
> > -                             .actions = i"next;",
> > -                             .stage_hint = stage_hint(nat.nat._uuid),
> > -                             .io_port = None,
> > -                             .controller_meter = None)
> > -                    };
> > -
> > -                    var nexthop_reg = "${xx}${rEG_NEXT_HOP()}" in
> > -                    var __match = i"outport == ${gwport_name} && "
> > -                                  "${nexthop_reg} == ${nat.nat.external_ip}" in
> > -                    var dst_mac = match (mac) {
> > -                        Some{value} -> i"${value}",
> > -                        None -> gwport.mac
> > -                    } in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_IN_ARP_RESOLVE(),
> > -                         .priority = 100,
> > -                         .__match = __match,
> > -                         .actions = i"eth.dst = ${dst_mac}; next;",
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                }
> > -            };
> > -
> > -            /* Egress UNDNAT table: It is for already established connections'
> > -             * reverse traffic. i.e., DNAT has already been done in ingress
> > -             * pipeline and now the packet has entered the egress pipeline as
> > -             * part of a reply. We undo the DNAT here.
> > -             *
> > -             * Note that this only applies for NAT on a distributed router.
> > -             */
> > -            if ((nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat")) {
> > -                Some {var gwport} = l3dgw_ports.nth(0) in
> > -                var __match =
> > -                    "ip && ${ipX}.src == ${nat.nat.logical_ip}"
> > -                    " && outport == ${json_escape(gwport.name)}" ++
> > -                    if (mac == None) {
> > -                        /* Flows for NAT rules that are centralized are only
> > -                         * programmed on the "redirect-chassis". */
> > -                        " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})"
> > -                    } else { "" } in
> > -                var actions =
> > -                    match (mac) {
> > -                        Some{mac_addr} -> "eth.src = ${mac_addr}; ",
> > -                        None -> ""
> > -                    } ++
> > -                    if (stateless) {
> > -                        "${ipX}.src=${nat.nat.external_ip}; next;"
> > -                    } else {
> > -                        "ct_dnat;"
> > -                    } in
> > -                Flow(.logical_datapath = lr_uuid,
> > -                     .stage = s_ROUTER_OUT_UNDNAT(),
> > -                     .priority = 100,
> > -                     .__match = __match.intern(),
> > -                     .actions = actions.intern(),
> > -                     .stage_hint = stage_hint(nat.nat._uuid),
> > -                     .io_port = None,
> > -                     .controller_meter = None)
> > -            };
> > -
> > -            /* Egress SNAT table: Packets enter the egress pipeline with
> > -             * source ip address that needs to be SNATted to a external ip
> > -             * address. */
> > -            var ip_and_ports = "${nat.nat.external_ip}" ++
> > -                if (nat.nat.external_port_range != i"") {
> > -                    " ${nat.nat.external_port_range}"
> > -                } else {
> > -                    ""
> > -                } in
> > -            if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") {
> > -                l3dgw_ports.is_empty() in
> > -                var __match = "ip && ${ipX}.src == ${nat.nat.logical_ip}" in
> > -                (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match(
> > -                    r, nat, __match, ipX, false, mask) in
> > -                {
> > -                    /* Gateway router. */
> > -                    Some{var f} = ext_flow in Flow[f];
> > -
> > -                    /* The priority here is calculated such that the
> > -                     * nat->logical_ip with the longest mask gets a higher
> > -                     * priority. */
> > -                    var actions = if (stateless) {
> > -                        i"${ipX}.src=${nat.nat.external_ip}; next;"
> > -                    } else {
> > -                        i"ct_snat(${ip_and_ports});"
> > -                    } in
> > -                    Some{var plen} = mask.cidr_bits() in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_OUT_SNAT(),
> > -                         .priority = plen as bit<64> + 1,
> > -                         .__match = (__match ++ ext_ip_match).intern(),
> > -                         .actions = actions,
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                };
> > -
> > -                Some {var gwport} = l3dgw_ports.nth(0) in
> > -                var __match =
> > -                    "ip && ${ipX}.src == ${nat.nat.logical_ip}"
> > -                    " && outport == ${json_escape(gwport.name)}" ++
> > -                    if (mac == None) {
> > -                        /* Flows for NAT rules that are centralized are only
> > -                         * programmed on the "redirect-chassis". */
> > -                        " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})"
> > -                    } else { "" } in
> > -                (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match(
> > -                    r, nat, __match, ipX, false, mask) in
> > -                {
> > -                    /* Distributed router. */
> > -                    Some{var f} = ext_flow in Flow[f];
> > -
> > -                    var actions =
> > -                        match (mac) {
> > -                            Some{mac_addr} -> "eth.src = ${mac_addr}; ",
> > -                            _ -> ""
> > -                        } ++ if (stateless) {
> > -                            "${ipX}.src=${nat.nat.external_ip}; next;"
> > -                        } else {
> > -                            "ct_snat(${ip_and_ports});"
> > -                        } in
> > -                    /* The priority here is calculated such that the
> > -                     * nat->logical_ip with the longest mask gets a higher
> > -                     * priority. */
> > -                    Some{var plen} = mask.cidr_bits() in
> > -                    var priority = (plen as bit<64>) + 1 in
> > -                    var centralized_boost = if (mac == None) 128 else 0 in
> > -                    Flow(.logical_datapath = lr_uuid,
> > -                         .stage = s_ROUTER_OUT_SNAT(),
> > -                         .priority = priority + centralized_boost,
> > -                         .__match = (__match ++ ext_ip_match).intern(),
> > -                         .actions = actions.intern(),
> > -                         .stage_hint = stage_hint(nat.nat._uuid),
> > -                         .io_port = None,
> > -                         .controller_meter = None)
> > -                }
> > -            };
> > -
> > -            /* Logical router ingress table ADMISSION:
> > -             * For NAT on a distributed router, add rules allowing
> > -             * ingress traffic with eth.dst matching nat->external_mac
> > -             * on the l3dgw_port instance where nat->logical_port is
> > -             * resident. */
> > -            Some{var mac_addr} = mac in
> > -            Some{var gwport} = l3dgw_ports.nth(0) in
> > -            Some{var logical_port} = nat.nat.logical_port in
> > -            var __match =
> > -                i"eth.dst == ${mac_addr} && inport == ${json_escape(gwport.name)}"
> > -                " && is_chassis_resident(${json_escape(logical_port)})" in
> > -            /* Store the ethernet address of the port receiving the packet.
> > -             * This will save us from having to match on inport further
> > -             * down in the pipeline.
> > -             */
> > -            var actions = i"${rEG_INPORT_ETH_ADDR()} = ${gwport.mac}; next;" in
> > -            Flow(.logical_datapath = lr_uuid,
> > -                 .stage = s_ROUTER_IN_ADMISSION(),
> > -                 .priority = 50,
> > -                 .__match = __match,
> > -                 .actions = actions,
> > -                 .stage_hint = stage_hint(nat.nat._uuid),
> > -                 .io_port = None,
> > -                 .controller_meter = None);
> > -
> > -            /* Ingress Gateway Redirect Table: For NAT on a distributed
> > -             * router, add flows that are specific to a NAT rule. These
> > -             * flows indicate the presence of an applicable NAT rule that
> > -             * can be applied in a distributed manner.
> > -             * In particulr the IP src register and eth.src are set to NAT external IP and
> > -             * NAT external mac so the ARP request generated in the following
> > -             * stage is sent out with proper IP/MAC src addresses
> > -             */
> > -            Some{var mac_addr} = mac in
> > -            Some{var gwport} = l3dgw_ports.nth(0) in
> > -            Some{var logical_port} = nat.nat.logical_port in
> > -            Some{var external_mac} = nat.nat.external_mac in
> > -            var __match =
> > -                i"${ipX}.src == ${nat.nat.logical_ip} && "
> > -                "outport == ${json_escape(gwport.name)} && "
> > -                "is_chassis_resident(${json_escape(logical_port)})" in
> > -            var actions =
> > -                i"eth.src = ${external_mac}; "
> > -                "${xx}${rEG_SRC()} = ${nat.nat.external_ip}; "
> > -                "next;" in
> > -            Flow(.logical_datapath = lr_uuid,
> > -                 .stage = s_ROUTER_IN_GW_REDIRECT(),
> > -                 .priority = 100,
> > -                 .__match = __match,
> > -                 .actions = actions,
> > -                 .stage_hint = stage_hint(nat.nat._uuid),
> > -                 .io_port = None,
> > -                 .controller_meter = None);
> > -
> > -            for (VirtualLogicalPort(nat.nat.logical_port)) {
> > -                Some{var gwport} = l3dgw_ports.nth(0) in
> > -                Flow(.logical_datapath = lr_uuid,
> > -                     .stage = s_ROUTER_IN_GW_REDIRECT(),
> > -                     .priority = 80,
> > -                     .__match = i"${ipX}.src == ${nat.nat.logical_ip} && "
> > -                                "outport == ${json_escape(gwport.name)}",
> > -                     .actions = i"drop;",
> > -                     .stage_hint = stage_hint(nat.nat._uuid),
> > -                     .io_port = None,
> > -                     .controller_meter = None)
> > -            };
> > -
> > -            /* Egress Loopback table: For NAT on a distributed router.
> > -             * If packets in the egress pipeline on the distributed
> > -             * gateway port have ip.dst matching a NAT external IP, then
> > -             * loop a clone of the packet back to the beginning of the
> > -             * ingress pipeline with inport = outport. */
> > -            Some{var gwport} = l3dgw_ports.nth(0) in
> > -            /* Distributed router. */
> > -            Some{var port} = match (mac) {
> > -                Some{_} -> match (nat.nat.logical_port) {
> > -                    Some{name} -> Some{json_escape(name)},
> > -                    None -> None: Option<string>
> > -                },
> > -                None -> Some{json_escape(chassis_redirect_name(gwport.name))}
> > -            } in
> > -            var __match = i"${ipX}.dst == ${nat.nat.external_ip} && outport == ${json_escape(gwport.name)} && is_chassis_resident(${port})" in
> > -            var regs = {
> > -                var regs = vec_empty();
> > -                for (j in range_vec(0, mFF_N_LOG_REGS(), 01)) {
> > -                    regs.push("reg${j} = 0; ")
> > -                };
> > -                regs
> > -            } in
> > -            var actions =
> > -                "clone { ct_clear; "
> > -                "inport = outport; outport = \"\"; "
> > -                "eth.dst <-> eth.src; "
> > -                "flags = 0; flags.loopback = 1; " ++
> > -                regs.join("") ++
> > -                "${rEGBIT_EGRESS_LOOPBACK()} = 1; "
> > -                "next(pipeline=ingress, table=0); };" in
> > -            Flow(.logical_datapath = lr_uuid,
> > -                 .stage = s_ROUTER_OUT_EGR_LOOP(),
> > -                 .priority = 100,
> > -                 .__match = __match,
> > -                 .actions = actions.intern(),
> > -                 .stage_hint = stage_hint(nat.nat._uuid),
> > -                 .io_port = None,
> > -                 .controller_meter = None)
> > -        }
> > -    };
> > -
> > -    /* Handle force SNAT options set in the gateway router. */
> > -    if (l3dgw_ports.is_empty()) {
> > -        var dnat_force_snat_ips = get_force_snat_ip(r.options, i"dnat") in
> > -        if (not dnat_force_snat_ips.is_empty())
> > -        LogicalRouterForceSnatFlows(.logical_router = lr_uuid,
> > -                                    .ips = dnat_force_snat_ips,
> > -                                    .context = "dnat");
> > -
> > -        var lb_force_snat_ips = get_force_snat_ip(r.options, i"lb") in
> > -        if (not lb_force_snat_ips.is_empty())
> > -        LogicalRouterForceSnatFlows(.logical_router = lr_uuid,
> > -                                    .ips = lb_force_snat_ips,
> > -                                    .context = "lb")
> > -    }
> > -}
> > -
> > -function nats_contain_vip(nats: Vec<NAT>, vip: v46_ip): bool {
> > -    for (nat in nats) {
> > -        if (nat.external_ip == vip) {
> > -            return true
> > -        }
> > -    };
> > -    return false
> > -}
> > -
> > -/* If there are any load balancing rules, we should send
> > - * the packet to conntrack for defragmentation and
> > - * tracking. This helps with two things.
> > - *
> > - * 1. With tracking, we can send only new connections to
> > - *    pick a DNAT ip address from a group.
> > - * 2. If there are L4 ports in load balancing rules, we
> > - *    need the defragmentation to match on L4 ports.
> > - *
> > - * One of these flows must be created for each unique LB VIP address.
> > - * We create one for each VIP:port pair; flows with the same IP and
> > - * different port numbers will produce identical flows that will
> > - * get merged by DDlog. */
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_DEFRAG(),
> > -     .priority = prio,
> > -     .__match = __match,
> > -     .actions = __actions,
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb),
> > -    var prio = if (port != 0) { 110 } else { 100 },
> > -    var proto = match (lb.protocol) {
> > -        Some{proto} -> proto,
> > -        _ -> i"tcp"
> > -    },
> > -    var proto_match = if (port != 0) { " && ${proto}" } else { "" },
> > -    var __match = ("ip && ${ip.ipX()}.dst == ${ip}" ++ proto_match).intern(),
> > -    var actions1 = "${ip.xxreg()}${rEG_NEXT_HOP()} = ${ip}; ",
> > -    var actions2 =
> > -        if (port != 0) {
> > -            "${rEG_ORIG_TP_DPORT_ROUTER()} = ${proto}.dst; ct_dnat;"
> > -        } else {
> > -            "ct_dnat;"
> > -        },
> > -    var __actions = (actions1 ++ actions2).intern(),
> > -    GWRouterLB(r, lb._uuid).
> > -
> > -/* Higher priority rules are added for load-balancing in DNAT
> > - * table. For every match (on a VIP[:port]), we add two flows
> > - * via add_router_lb_flow(). One flow is for specific matching
> > - * on ct.new with an action of "ct_lb($targets);". The other
> > - * flow is for ct.est with an action of "ct_dnat;". */
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_DNAT(),
> > -     .priority = prio,
> > -     .__match = __match,
> > -     .actions = actions,
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    lbvip in &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb),
> > -    var proto = match (lb.protocol) {
> > -        Some{proto} -> proto,
> > -        _ -> i"tcp"
> > -    },
> > -    var match1 = "${ip.ipX()} && ${ip.xxreg()}${rEG_NEXT_HOP()} == ${ip}",
> > -    var match2 = if (port != 0) {
> > -        " && ${proto} && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}"
> > -    } else {
> > -        ""
> > -    },
> > -    var prio = if (port != 0) { 120 } else { 110 },
> > -    var match0 = ("ct.est && " ++ match1 ++ match2 ++ " && ct_label.natted == 1").intern(),
> > -    GWRouterLB(r, lb._uuid),
> > -    var __match = match ((r.l3dgw_ports.nth(0), lbvip.backend_ips != i"" or lb.options.get_bool_def(i"reject", false))) {
> > -        (Some {var gw_port}, true) -> i"${match0} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})",
> > -        _ -> match0
> > -    },
> > -    var actions = match (snat_for_lb(r.options, lb)) {
> > -        SkipSNAT -> i"flags.skip_snat_for_lb = 1; next;",
> > -        ForceSNAT -> i"flags.force_snat_for_lb = 1; next;",
> > -        _ -> i"next;"
> > -    }.
> > -
> > -/* The load balancer vip is also present in the NAT entries.
> > - * So add a high priority lflow to advance the the packet
> > - * destined to the vip (and the vip port if defined)
> > - * in the S_ROUTER_IN_UNSNAT stage.
> > - * There seems to be an issue with ovs-vswitchd. When the new
> > - * connection packet destined for the lb vip is received,
> > - * it is dnat'ed in the S_ROUTER_IN_DNAT stage in the dnat
> > - * conntrack zone. For the next packet, if it goes through
> > - * unsnat stage, the conntrack flags are not set properly, and
> > - * it doesn't hit the established state flows in
> > - * S_ROUTER_IN_DNAT stage. */
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_UNSNAT(),
> > -     .priority = 120,
> > -     .__match = __match,
> > -     .actions = i"next;",
> > -     .stage_hint = stage_hint(lb._uuid),
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb),
> > -    var proto = match (lb.protocol) {
> > -        Some{proto} -> proto,
> > -        _ -> i"tcp"
> > -    },
> > -    var port_match = if (port != 0) { " && ${proto}.dst == ${port}" } else { "" },
> > -    var __match = ("${ip.ipX()} && ${ip.ipX()}.dst == ${ip} && ${proto}" ++
> > -                   port_match).intern(),
> > -    GWRouterLB(r, lb._uuid),
> > -    nats_contain_vip(r.nats, ip).
> > -
> > -/* Add logical flows to UNDNAT the load balanced reverse traffic in
> > - * the router egress pipleine stage - S_ROUTER_OUT_UNDNAT if the logical
> > - * router has a gateway router port associated.
> > - */
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_OUT_UNDNAT(),
> > -     .priority = 120,
> > -     .__match = __match,
> > -     .actions = action,
> > -     .stage_hint = stage_hint(lb._uuid),
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb, .backends = backends),
> > -    var proto = match (lb.protocol) {
> > -        Some{proto} -> proto,
> > -        _ -> i"tcp"
> > -    },
> > -    var conds = backends.map(|b| {
> > -        var port_match = if (b.port != 0) {
> > -            " && ${proto}.src == ${b.port}"
> > -        } else {
> > -            ""
> > -        };
> > -        "(${b.ip.ipX()}.src == ${b.ip}" ++ port_match ++ ")"
> > -    }).join(" || "),
> > -    conds != "",
> > -    RouterLB(r, lb._uuid),
> > -    Some{var gwport} = r.l3dgw_ports.nth(0),
> > -    var __match =
> > -        i"${ip.ipX()} && (${conds}) && "
> > -        "outport == ${json_escape(gwport.name)} && "
> > -        "is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})",
> > -    var action = match (snat_for_lb(r.options, lb)) {
> > -        SkipSNAT -> i"flags.skip_snat_for_lb = 1; ct_dnat;",
> > -        ForceSNAT -> i"flags.force_snat_for_lb = 1; ct_dnat;",
> > -        _ -> i"ct_dnat;"
> > -    }.
> > -
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_DNAT(),
> > -     .priority = 130,
> > -     .__match = __match,
> > -     .actions = __action,
> > -     .io_port = None,
> > -     .controller_meter = r.copp.get(cOPP_EVENT_ELB()),
> > -     .stage_hint = stage_hint(lb._uuid)) :-
> > -    &LBVIP(.vip_key = lb_key, .lb = lb, .backend_ips = i""),
> > -    not lb.options.get_bool_def(i"reject", false),
> > -    LoadBalancerEmptyEvents(lb._uuid),
> > -    Some {(var __match, var __action)} = build_empty_lb_event_flow(lb_key, lb),
> > -    GWRouterLB(r, lb._uuid).
> > -
> > -/* Higher priority rules are added for load-balancing in DNAT
> > - * table. For every match (on a VIP[:port]), we add two flows
> > - * via add_router_lb_flow(). One flow is for specific matching
> > - * on ct.new with an action of "ct_lb($targets);". The other
> > - * flow is for ct.est with an action of "ct_dnat;". */
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_DNAT(),
> > -     .priority = priority,
> > -     .__match = __match,
> > -     .actions = actions,
> > -     .io_port = None,
> > -     .controller_meter = meter,
> > -     .stage_hint = 0) :-
> > -    LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends),
> > -    var priority = if (lbvip.vip_port != 0) 120 else 110,
> > -    (var actions0, var reject) = build_lb_vip_actions(lbvip, up_backends, s_ROUTER_OUT_SNAT(), ""),
> > -    var actions1 = actions0.intern(),
> > -    var match0 = "ct.new && " ++
> > -        get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, true, true, true),
> > -    var match1 = match0.intern(),
> > -    GWRouterLB(r, lb._uuid),
> > -    var actions = match ((reject, snat_for_lb(r.options, lb))) {
> > -        (false, SkipSNAT) -> i"flags.skip_snat_for_lb = 1; ${actions1}",
> > -        (false, ForceSNAT) -> i"flags.force_snat_for_lb = 1; ${actions1}",
> > -        _ -> actions1
> > -    },
> > -    var __match = match (r.l3dgw_ports.nth(0)) {
> > -        Some{gw_port} -> i"${match1} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})",
> > -        _ -> match1
> > -    },
> > -    var meter = if (reject) {
> > -        r.copp.get(cOPP_REJECT())
> > -    } else {
> > -        None
> > -    }.
> > -
> > -
> > -/* Defaults based on MaxRtrInterval and MinRtrInterval from RFC 4861 section
> > - * 6.2.1
> > - */
> > -function nD_RA_MAX_INTERVAL_DEFAULT(): integer = 600
> > -function nD_RA_MAX_INTERVAL_RANGE(): (integer, integer) { (4, 1800) }
> > -
> > -function nd_ra_min_interval_default(max: integer): integer =
> > -{
> > -    if (max >= 9) { max / 3 } else { max * 3 / 4 }
> > -}
> > -
> > -function nD_RA_MIN_INTERVAL_RANGE(max: integer): (integer, integer) = (3, ((max * 3) / 4))
> > -
> > -function nD_MTU_DEFAULT(): integer = 0
> > -
> > -function copy_ra_to_sb(port: RouterPort, address_mode: istring): Map<istring, istring> =
> > -{
> > -    var options = port.sb_options;
> > -
> > -    options.insert(i"ipv6_ra_send_periodic", i"true");
> > -    options.insert(i"ipv6_ra_address_mode", address_mode);
> > -
> > -    var max_interval = port.lrp.ipv6_ra_configs
> > -        .get_int_def(i"max_interval", nD_RA_MAX_INTERVAL_DEFAULT())
> > -        .clamp(nD_RA_MAX_INTERVAL_RANGE());
> > -    options.insert(i"ipv6_ra_max_interval", i"${max_interval}");
> > -
> > -    var min_interval = port.lrp.ipv6_ra_configs
> > -        .get_int_def(i"min_interval", nd_ra_min_interval_default(max_interval))
> > -        .clamp(nD_RA_MIN_INTERVAL_RANGE(max_interval));
> > -    options.insert(i"ipv6_ra_min_interval", i"${min_interval}");
> > -
> > -    var mtu = port.lrp.ipv6_ra_configs.get_int_def(i"mtu", nD_MTU_DEFAULT());
> > -
> > -    /* RFC 2460 requires the MTU for IPv6 to be at least 1280 */
> > -    if (mtu != 0 and mtu >= 1280) {
> > -        options.insert(i"ipv6_ra_mtu", i"${mtu}")
> > -    };
> > -
> > -    var prefixes = vec_empty();
> > -    for (addr in port.networks.ipv6_addrs) {
> > -        if (addr.is_lla()) {
> > -            options.insert(i"ipv6_ra_src_addr", i"${addr.addr}")
> > -        } else {
> > -            prefixes.push(addr.match_network())
> > -        }
> > -    };
> > -    match (port.sb_options.get(i"ipv6_ra_pd_list")) {
> > -        Some{value} -> prefixes.push(value.ival()),
> > -        _ -> ()
> > -    };
> > -    options.insert(i"ipv6_ra_prefixes", prefixes.join(" ").intern());
> > -
> > -    match (port.lrp.ipv6_ra_configs.get(i"rdnss")) {
> > -        Some{value} -> options.insert(i"ipv6_ra_rdnss", value),
> > -        _ -> ()
> > -    };
> > -
> > -    match (port.lrp.ipv6_ra_configs.get(i"dnssl")) {
> > -        Some{value} -> options.insert(i"ipv6_ra_dnssl", value),
> > -        _ -> ()
> > -    };
> > -
> > -    options.insert(i"ipv6_ra_src_eth", i"${port.networks.ea}");
> > -
> > -    var prf = match (port.lrp.ipv6_ra_configs.get(i"router_preference")) {
> > -        Some{prf} -> if (prf == i"HIGH" or prf == i"LOW") prf else i"MEDIUM",
> > -        _ -> i"MEDIUM"
> > -    };
> > -    options.insert(i"ipv6_ra_prf", prf);
> > -
> > -    match (port.lrp.ipv6_ra_configs.get(i"route_info")) {
> > -        Some{s} -> options.insert(i"ipv6_ra_route_info", s),
> > -        _ -> ()
> > -    };
> > -
> > -    options
> > -}
> > -
> > -/* Logical router ingress table ND_RA_OPTIONS and ND_RA_RESPONSE: IPv6 Router
> > - * Adv (RA) options and response. */
> > -// FIXME: do these rules apply to derived ports?
> > -for (&RouterPort[port@RouterPort{.lrp = lrp@&nb::Logical_Router_Port{.peer = None},
> > -                                 .router = router,
> > -                                 .json_name = json_name,
> > -                                 .networks = networks,
> > -                                 .peer = PeerSwitch{}}]
> > -     if (not networks.ipv6_addrs.is_empty()))
> > -{
> > -    Some{var address_mode} = lrp.ipv6_ra_configs.get(i"address_mode") in
> > -    /* FIXME: we need a nicer wat to write this */
> > -    true ==
> > -        if ((address_mode != i"slaac") and
> > -            (address_mode != i"dhcpv6_stateful") and
> > -            (address_mode != i"dhcpv6_stateless")) {
> > -            warn("Invalid address mode [${address_mode}] defined");
> > -            false
> > -        } else { true } in
> > -    {
> > -        if (lrp.ipv6_ra_configs.get_bool_def(i"send_periodic", false)) {
> > -            RouterPortRAOptions(lrp._uuid, copy_ra_to_sb(port, address_mode))
> > -        };
> > -
> > -        (true, var prefix) =
> > -            {
> > -                var add_rs_response_flow = false;
> > -                var prefix = "";
> > -                for (addr in networks.ipv6_addrs) {
> > -                    if (not addr.is_lla()) {
> > -                        prefix = prefix ++ ", prefix = ${addr.match_network()}";
> > -                        add_rs_response_flow = true
> > -                    } else ()
> > -                };
> > -                (add_rs_response_flow, prefix)
> > -            } in
> > -        {
> > -            var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && nd_rs" in
> > -            /* As per RFC 2460, 1280 is minimum IPv6 MTU. */
> > -            var mtu = match(lrp.ipv6_ra_configs.get(i"mtu")) {
> > -                Some{mtu_s} -> {
> > -                    match (parse_dec_u64(mtu_s)) {
> > -                        None -> 0,
> > -                        Some{mtu} -> if (mtu >= 1280) mtu else 0
> > -                    }
> > -                },
> > -                None -> 0
> > -            } in
> > -            var actions0 =
> > -                "${rEGBIT_ND_RA_OPTS_RESULT()} = put_nd_ra_opts("
> > -                "addr_mode = ${json_escape(address_mode)}, "
> > -                "slla = ${networks.ea}" ++
> > -                if (mtu > 0) { ", mtu = ${mtu}" } else { "" } in
> > -            var router_preference = match (lrp.ipv6_ra_configs.get(i"router_preference")) {
> > -                Some{prf} -> if (prf == i"MEDIUM") { "" } else { ", router_preference = \"${prf}\"" },
> > -                None -> ""
> > -            } in
> > -            var actions = actions0 ++ router_preference ++ prefix ++ "); next;" in
> > -            Flow(.logical_datapath = router._uuid,
> > -                 .stage = s_ROUTER_IN_ND_RA_OPTIONS(),
> > -                 .priority = 50,
> > -                 .__match = __match,
> > -                 .actions = actions.intern(),
> > -                 .io_port = None,
> > -                 .controller_meter = router.copp.get(cOPP_ND_RA_OPTS()),
> > -                 .stage_hint = stage_hint(lrp._uuid));
> > -
> > -            var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && "
> > -                          "nd_ra && ${rEGBIT_ND_RA_OPTS_RESULT()}" in
> > -            var ip6_str = networks.ea.to_ipv6_lla().string_mapped() in
> > -            var actions = i"eth.dst = eth.src; eth.src = ${networks.ea}; "
> > -                          "ip6.dst = ip6.src; ip6.src = ${ip6_str}; "
> > -                          "outport = inport; flags.loopback = 1; "
> > -                          "output;" in
> > -            Flow(.logical_datapath = router._uuid,
> > -                 .stage = s_ROUTER_IN_ND_RA_RESPONSE(),
> > -                 .priority = 50,
> > -                 .__match = __match,
> > -                 .actions = actions,
> > -                 .stage_hint = stage_hint(lrp._uuid),
> > -                 .io_port = None,
> > -                 .controller_meter = None)
> > -        }
> > -    }
> > -}
> > -
> > -
> > -/* Logical router ingress table ND_RA_OPTIONS, ND_RA_RESPONSE: RS responder, by
> > - * default goto next. (priority 0)*/
> > -for (&Router(._uuid = lr_uuid))
> > -{
> > -    Flow(.logical_datapath = lr_uuid,
> > -         .stage = s_ROUTER_IN_ND_RA_OPTIONS(),
> > -         .priority = 0,
> > -         .__match = i"1",
> > -         .actions = i"next;",
> > -         .stage_hint = 0,
> > -         .io_port = None,
> > -         .controller_meter = None);
> > -    Flow(.logical_datapath = lr_uuid,
> > -         .stage = s_ROUTER_IN_ND_RA_RESPONSE(),
> > -         .priority = 0,
> > -         .__match = i"1",
> > -         .actions = i"next;",
> > -         .stage_hint = 0,
> > -         .io_port = None,
> > -         .controller_meter = None)
> > -}
> > -
> > -/* Proxy table that stores per-port routes.
> > - * There routes get converted into logical flows by
> > - * the following rule.
> > - */
> > -relation Route(key: route_key,            // matching criteria
> > -               port: Intern<RouterPort>,  // output port
> > -               src_ip: v46_ip,            // source IP address for output
> > -               gateway: Option<v46_ip>)   // next hop (unless being delivered)
> > -
> > -function build_route_match(key: route_key) : (string, bit<32>) =
> > -{
> > -    var ipX = key.ip_prefix.ipX();
> > -
> > -    /* The priority here is calculated to implement longest-prefix-match
> > -     * routing. */
> > -    (var dir, var priority) = match (key.policy) {
> > -        SrcIp -> ("src", key.plen * 2),
> > -        DstIp -> ("dst", (key.plen * 2) + 1)
> > -    };
> > -
> > -    var network = key.ip_prefix.network(key.plen);
> > -    var __match = "${ipX}.${dir} == ${network}/${key.plen}";
> > -
> > -    (__match, priority)
> > -}
> > -for (Route(.port = port,
> > -           .key = key,
> > -           .src_ip = src_ip,
> > -           .gateway = gateway))
> > -{
> > -    var ipX = key.ip_prefix.ipX() in
> > -    var xx = key.ip_prefix.xxreg() in
> > -    /* IPv6 link-local addresses must be scoped to the local router port. */
> > -    var inport_match = match (key.ip_prefix) {
> > -        IPv6{prefix} -> if (prefix.is_lla()) {
> > -            "inport == ${port.json_name} && "
> > -        } else "",
> > -        _ -> ""
> > -    } in
> > -    (var ip_match, var priority) = build_route_match(key) in
> > -    var __match = inport_match ++ ip_match in
> > -    var nexthop = match (gateway) {
> > -        Some{gw} -> "${gw}",
> > -        None -> "${ipX}.dst"
> > -    } in
> > -    var actions =
> > -        i"${rEG_ECMP_GROUP_ID()} = 0; "
> > -        "${xx}${rEG_NEXT_HOP()} = ${nexthop}; "
> > -        "${xx}${rEG_SRC()} = ${src_ip}; "
> > -        "eth.src = ${port.networks.ea}; "
> > -        "outport = ${port.json_name}; "
> > -        "flags.loopback = 1; "
> > -        "next;" in
> > -    {
> > -        Flow(.logical_datapath = port.router._uuid,
> > -             .stage = s_ROUTER_IN_IP_ROUTING(),
> > -             .priority = priority as integer,
> > -             .__match = __match.intern(),
> > -             .actions = i"ip.ttl--; ${actions}",
> > -             .stage_hint = stage_hint(port.lrp._uuid),
> > -             .io_port = None,
> > -             .controller_meter = None);
> > -
> > -        if (port.has_bfd) {
> > -            Flow(.logical_datapath = port.router._uuid,
> > -                 .stage = s_ROUTER_IN_IP_ROUTING(),
> > -                 .priority = priority as integer + 1,
> > -                 .__match = i"${__match} && udp.dst == 3784",
> > -                 .actions = actions,
> > -                 .stage_hint = stage_hint(port.lrp._uuid),
> > -                 .io_port = None,
> > -                 .controller_meter = None)
> > -        }
> > -    }
> > -}
> > -
> > -/* Install drop routes for all the static routes with nexthop = "discard" */
> > -Flow(.logical_datapath = router._uuid,
> > -     .stage = s_ROUTER_IN_IP_ROUTING(),
> > -     .priority = priority as integer,
> > -     .__match = ip_match.intern(),
> > -     .actions = i"drop;",
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    r in RouterDiscardRoute_(.router = router, .key = key),
> > -    (var ip_match, var priority) = build_route_match(r.key).
> > -
> > -/* Logical router ingress table IP_ROUTING & IP_ROUTING_ECMP: IP Routing.
> > - *
> > - * A packet that arrives at this table is an IP packet that should be
> > - * routed to the address in 'ip[46].dst'.
> > - *
> > - * For regular routes without ECMP, table IP_ROUTING sets outport to the
> > - * correct output port, eth.src to the output port's MAC address, and
> > - * '[xx]${rEG_NEXT_HOP()}' to the next-hop IP address (leaving 'ip[46].dst', the
> > - * packet's final destination, unchanged), and advances to the next table.
> > - *
> > - * For ECMP routes, i.e. multiple routes with same policy and prefix, table
> > - * IP_ROUTING remembers ECMP group id and selects a member id, and advances
> > - * to table IP_ROUTING_ECMP, which sets outport, eth.src, and the appropriate
> > - * next-hop register for the selected ECMP member.
> > - * */
> > -Route(key, port, src_ip, None) :-
> > -    RouterPortNetworksIPv4Addr(.port = port, .addr = addr),
> > -    var key = RouteKey{DstIp, IPv4{addr.addr}, addr.plen},
> > -    var src_ip = IPv4{addr.addr}.
> > -
> > -Route(key, port, src_ip, None) :-
> > -    RouterPortNetworksIPv6Addr(.port = port, .addr = addr),
> > -    var key = RouteKey{DstIp, IPv6{addr.addr}, addr.plen},
> > -    var src_ip = IPv6{addr.addr}.
> > -
> > -Flow(.logical_datapath = r._uuid,
> > -     .stage = s_ROUTER_IN_IP_ROUTING_ECMP(),
> > -     .priority = 150,
> > -     .__match = i"${rEG_ECMP_GROUP_ID()} == 0",
> > -     .actions = i"next;",
> > -     .stage_hint = 0,
> > -     .io_port = None,
> > -     .controller_meter = None) :-
> > -    r in &Router().
> > -
> > -/* Convert the static routes to flows. */
> > -Route(key, dst.port, dst.src_ip, Some{dst.nexthop}) :-
> > -    RouterStaticRoute(.router = router, .key = key, .dsts = dsts),
> > -    dsts.size() == 1,
> > -    Some{var dst} = dsts.nth(0).
> > -
> > -Route(key, dst.port, dst.src_ip, None) :-
> > -    RouterStaticRouteEmptyNextHop(.router = router, .key = key, .dsts = dsts),
> > -    dsts.size() == 1,
> > -    Some{var dst} = dsts.nth(0).
> > - > > -/* Create routes from peer to port's routable addresses */ > > -Route(key, peer, src_ip, None) :- > > - RouterPortRoutableAddresses(port, addresses), > > - FirstHopRouterPortRoutableAddresses(port, peer_uuid), > > - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), > > - peer in &RouterPort(.lrp = peer_lrp, .networks = networks), > > - Some{var src} = networks.ipv4_addrs.first(), > > - var src_ip = IPv4{src.addr}, > > - var addr = FlatMap(addresses), > > - var ip4_addr = FlatMap(addr.ipv4_addrs), > > - var key = RouteKey{DstIp, IPv4{ip4_addr.addr}, ip4_addr.plen}. > > - > > -/* This relation indicates that logical router port "port" has routable > > - * addresses (i.e. DNAT and Load Balancer VIPs) and that logical router > > - * port "peer" is reachable via a hop across a single logical switch. > > - */ > > -relation FirstHopRouterPortRoutableAddresses( > > - port: uuid, > > - peer: uuid) > > -FirstHopRouterPortRoutableAddresses(port_uuid, peer_uuid) :- > > - FirstHopLogicalRouter(r1, ls), > > - FirstHopLogicalRouter(r2, ls), > > - r1 != r2, > > - LogicalRouterPort(port_uuid, r1), > > - LogicalRouterPort(peer_uuid, r2), > > - RouterPortRoutableAddresses(.rport = port_uuid), > > - lrp in &nb::Logical_Router_Port(._uuid = port_uuid), > > - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), > > - LogicalSwitchRouterPort(_, lrp.name, ls), > > - LogicalSwitchRouterPort(_, peer_lrp.name, ls). > > - > > -relation RouterPortRoutableAddresses( > > - rport: uuid, > > - addresses: Set<lport_addresses>) > > -RouterPortRoutableAddresses(port.lrp._uuid, addresses) :- > > - port in &RouterPort(.is_redirect = true), > > - lbips in &LogicalRouterLBIPs(.lr = port.router._uuid), > > - var addresses = get_nat_addresses(port, lbips, true).filter_map(|addrs| addrs.ival().extract_addresses()), > > - addresses != set_empty(). > > - > > -/* Return a vector of pairs (1, set[0]), ... (n, set[n - 1]). 
*/ > > -function numbered_vec(set: Set<'A>) : Vec<(bit<16>, 'A)> = { > > - var vec = vec_with_capacity(set.size()); > > - var i = 1; > > - for (x in set) { > > - vec.push((i, x)); > > - i = i + 1 > > - }; > > - vec > > -} > > - > > -relation EcmpGroup( > > - group_id: bit<16>, > > - router: Intern<Router>, > > - key: route_key, > > - dsts: Set<route_dst>, > > - route_match: string, // This is build_route_match(key).0 > > - route_priority: integer) // This is build_route_match(key).1 > > - > > -EcmpGroup(group_id, router, key, dsts, route_match, route_priority) :- > > - r in RouterStaticRoute(.router = router, .key = key, .dsts = dsts), > > - dsts.size() > 1, > > - var groups = (router, key, dsts).group_by(()).to_set(), > > - var group_id_and_group = FlatMap(numbered_vec(groups)), > > - (var group_id, (var router, var key, var dsts)) = group_id_and_group, > > - (var route_match, var route_priority0) = build_route_match(key), > > - var route_priority = route_priority0 as integer. > > - > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING(), > > - .priority = route_priority, > > - .__match = route_match.intern(), > > - .actions = actions, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpGroup(group_id, router, key, dsts, route_match, route_priority), > > - var all_member_ids = { > > - var member_ids = vec_with_capacity(dsts.size()); > > - for (i in range_vec(1, dsts.size()+1, 1)) { > > - member_ids.push("${i}") > > - }; > > - member_ids.join(", ") > > - }, > > - var actions = > > - i"ip.ttl--; " > > - "flags.loopback = 1; " > > - "${rEG_ECMP_GROUP_ID()} = ${group_id}; " /* XXX */ > > - "${rEG_ECMP_MEMBER_ID()} = select(${all_member_ids});". 
> > - > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING_ECMP(), > > - .priority = 100, > > - .__match = __match, > > - .actions = actions, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpGroup(group_id, router, key, dsts, _, _), > > - var member_id_and_dst = FlatMap(numbered_vec(dsts)), > > - (var member_id, var dst) = member_id_and_dst, > > - var xx = dst.nexthop.xxreg(), > > - var __match = i"${rEG_ECMP_GROUP_ID()} == ${group_id} && " > > - "${rEG_ECMP_MEMBER_ID()} == ${member_id}", > > - var actions = i"${xx}${rEG_NEXT_HOP()} = ${dst.nexthop}; " > > - "${xx}${rEG_SRC()} = ${dst.src_ip}; " > > - "eth.src = ${dst.port.networks.ea}; " > > - "outport = ${dst.port.json_name}; " > > - "next;". > > - > > -/* If symmetric ECMP replies are enabled, then packets that arrive over > > - * an ECMP route need to go through conntrack. > > - */ > > -relation EcmpSymmetricReply( > > - router: Intern<Router>, > > - dst: route_dst, > > - route_match: string, > > - tunkey: integer) > > -EcmpSymmetricReply(router, dst, route_match, tunkey) :- > > - EcmpGroup(.router = router, .dsts = dsts, .route_match = route_match), > > - router.is_gateway, > > - var dst = FlatMap(dsts), > > - dst.ecmp_symmetric_reply, > > - PortTunKeyAllocation(.port = dst.port.lrp._uuid, .tunkey = tunkey). > > - > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_DEFRAG(), > > - .priority = 100, > > - .__match = __match, > > - .actions = i"ct_next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpSymmetricReply(router, dst, route_match, _), > > - var __match = i"inport == ${dst.port.json_name} && ${route_match}". > > - > > -/* And packets that go out over an ECMP route need conntrack. > > - XXX this seems to exactly duplicate the above flow? */ > > - > > -/* Save src eth and inport in ct_label for packets that arrive over > > - * an ECMP route. 
> > - */ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ECMP_STATEFUL(), > > - .priority = 100, > > - .__match = __match, > > - .actions = actions, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpSymmetricReply(router, dst, route_match, tunkey), > > - var __match = i"inport == ${dst.port.json_name} && ${route_match} && " > > - "(ct.new && !ct.est)", > > - var actions = i"ct_commit { ct_label.ecmp_reply_eth = eth.src;" > > - " ct_label.ecmp_reply_port = ${tunkey};}; next;". > > - > > -/* Bypass ECMP selection if we already have ct_label information > > - * for where to route the packet. > > - */ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING(), > > - .priority = 300, > > - .__match = i"${ecmp_reply} && ${route_match}", > > - .actions = i"ip.ttl--; " > > - "flags.loopback = 1; " > > - "eth.src = ${dst.port.networks.ea}; " > > - "${xx}reg1 = ${dst.src_ip}; " > > - "outport = ${dst.port.json_name}; " > > - "next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None), > > -/* Egress reply traffic for symmetric ECMP routes skips router policies. */ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = 65535, > > - .__match = ecmp_reply, > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None), > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 200, > > - .__match = ecmp_reply, > > - .actions = i"eth.dst = ct_label.ecmp_reply_eth; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpSymmetricReply(router, dst, route_match, tunkey), > > - var ecmp_reply = i"ct.rpl && ct_label.ecmp_reply_port == ${tunkey}", > > - var xx = dst.nexthop.xxreg(). > > - > > - > > -/* IP Multicast lookup. 
Here we set the output port, adjust TTL and advance > > - * to next table (priority 500). > > - */ > > -/* Drop IPv6 multicast traffic that shouldn't be forwarded, > > - * i.e., router solicitation and router advertisement. > > - */ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING(), > > - .priority = 550, > > - .__match = i"nd_rs || nd_ra", > > - .actions = i"drop;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - router in &Router(). > > - > > -for (IgmpRouterMulticastGroup(address, rtr, ports)) { > > - for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { > > - var flood_static = not flood_ports.is_empty() in > > - var mc_static = json_escape(mC_STATIC().0) in > > - var static_act = { > > - if (flood_static) { > > - "clone { " > > - "outport = ${mc_static}; " > > - "ip.ttl--; " > > - "next; " > > - "}; " > > - } else { > > - "" > > - } > > - } in > > - Some{var ip} = ip46_parse(address.ival()) in > > - var ipX = ip.ipX() in > > - Flow(.logical_datapath = rtr._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING(), > > - .priority = 500, > > - .__match = i"${ipX} && ${ipX}.dst == ${address} ", > > - .actions = > > - i"${static_act}outport = ${json_escape(address)}; " > > - "ip.ttl--; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > - } > > -} > > - > > -/* If needed, flood unregistered multicast on statically configured ports. > > - * Priority 450. Otherwise drop any multicast traffic. 
> > - */ > > -for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { > > - var mc_static = json_escape(mC_STATIC().0) in > > - var flood_static = not flood_ports.is_empty() in > > - var actions = if (flood_static) { > > - i"clone { " > > - "outport = ${mc_static}; " > > - "ip.ttl--; " > > - "next; " > > - "};" > > - } else { > > - i"drop;" > > - } in > > - Flow(.logical_datapath = rtr._uuid, > > - .stage = s_ROUTER_IN_IP_ROUTING(), > > - .priority = 450, > > - .__match = i"ip4.mcast || ip6.mcast", > > - .actions = actions, > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Logical router ingress table POLICY: Policy. > > - * > > - * A packet that arrives at this table is an IP packet that should be > > - * permitted/denied/rerouted to the address in the rule's nexthop. > > - * This table sets outport to the correct out_port, > > - * eth.src to the output port's MAC address, > > - * the appropriate register to the next-hop IP address (leaving > > - * 'ip[46].dst', the packet’s final destination, unchanged), and > > - * advances to the next table for ARP/ND resolution. */ > > -for (&Router(._uuid = lr_uuid)) { > > - /* This is a catch-all rule. It has the lowest priority (0) > > - * does a match-all("1") and pass-through (next) */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"${rEG_ECMP_GROUP_ID()} = 0; next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_POLICY_ECMP(), > > - .priority = 150, > > - .__match = i"${rEG_ECMP_GROUP_ID()} == 0", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Convert routing policies to flows. 
*/ > > -function pkt_mark_policy(options: Map<istring,istring>): string { > > - var pkt_mark = options.get(i"pkt_mark").and_then(parse_dec_u64).unwrap_or(0); > > - if (pkt_mark > 0 and pkt_mark < (1 << 32)) { > > - "pkt.mark = ${pkt_mark}; " > > - } else { > > - "" > > - } > > -} > > -Flow(.logical_datapath = r._uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = policy.priority, > > - .__match = policy.__match, > > - .actions = actions.intern(), > > - .stage_hint = stage_hint(policy._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - r in &Router(), > > - var policy_uuid = FlatMap(r.policies), > > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > > - policy.action == i"reroute", > > - Some{var nexthop_s} = match (policy.nexthops.size()) { > > - 0 -> policy.nexthop, > > - 1 -> policy.nexthops.nth(0), > > - _ -> None /* >1 nexthops handled separately as ECMP. */ > > - }, > > - Some{var nexthop} = ip46_parse(nexthop_s.ival()), > > - out_port in &RouterPort(.router = r), > > - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), > > - /* > > - None: > > - VLOG_WARN_RL(&rl, "lrp_addr not found for routing policy " > > - " priority %"PRId64" nexthop %s", > > - rule->priority, rule->nexthop); > > - */ > > - var xx = src_ip.xxreg(), > > - var actions = (pkt_mark_policy(policy.options) ++ > > - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " > > - "${xx}${rEG_SRC()} = ${src_ip}; " > > - "eth.src = ${out_port.networks.ea}; " > > - "outport = ${out_port.json_name}; " > > - "flags.loopback = 1; " > > - "${rEG_ECMP_GROUP_ID()} = 0; " > > - "next;"). > > - > > -/* Returns true if the addresses in 'addrs' are all IPv4 or all IPv6, > > - false if they are a mix. 
*/ > > -function all_same_addr_family(addrs: Set<istring>): bool { > > - var addr_families = set_empty(); > > - for (a in addrs) { > > - addr_families.insert(a.contains(".")) > > - }; > > - addr_families.size() <= 1 > > -} > > - > > -relation EcmpReroutePolicy( > > - r: Intern<Router>, > > - policy: nb::Logical_Router_Policy, > > - ecmp_group_id: usize > > -) > > -EcmpReroutePolicy(r, policy, ecmp_group_id) :- > > - r in &Router(), > > - var policy_uuid = FlatMap(r.policies), > > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > > - policy.action == i"reroute", > > - policy.nexthops.size() > 1, > > - var policies = policy.group_by(r).to_vec().map(|x| (x.nexthop, x)).sort_imm().map(|x| x.1), > > - var ecmp_group_ids = range_vec(1, policies.len() + 1, 1), > > - var numbered_policies = policies.zip(ecmp_group_ids), > > - var pair = FlatMap(numbered_policies), > > - (var policy, var ecmp_group_id) = pair, > > - all_same_addr_family(policy.nexthops). > > -Flow(.logical_datapath = r._uuid, > > - .stage = s_ROUTER_IN_POLICY_ECMP(), > > - .priority = 100, > > - .__match = __match, > > - .actions = actions.intern(), > > - .stage_hint = stage_hint(policy._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpReroutePolicy(r, policy, ecmp_group_id), > > - var member_ids = range_vec(1, policy.nexthops.size() + 1, 1), > > - var numbered_nexthops = policy.nexthops.map(ival).to_vec().zip(member_ids), > > - var pair = FlatMap(numbered_nexthops), > > - (var nexthop_s, var member_id) = pair, > > - Some{var nexthop} = ip46_parse(nexthop_s), > > - out_port in &RouterPort(.router = r), > > - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), // or warn > > - var xx = src_ip.xxreg(), > > - var actions = (pkt_mark_policy(policy.options) ++ > > - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " > > - "${xx}${rEG_SRC()} = ${src_ip}; " > > - "eth.src = ${out_port.networks.ea}; " > > - "outport = ${out_port.json_name}; " > > - "flags.loopback = 1; " > 
> - "next;"), > > - var __match = i"${rEG_ECMP_GROUP_ID()} == ${ecmp_group_id} && " > > - "${rEG_ECMP_MEMBER_ID()} == ${member_id}". > > -Flow(.logical_datapath = r._uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = policy.priority, > > - .__match = policy.__match, > > - .actions = actions, > > - .stage_hint = stage_hint(policy._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - EcmpReroutePolicy(r, policy, ecmp_group_id), > > - var member_ids = { > > - var n = policy.nexthops.size(); > > - var member_ids = vec_with_capacity(n); > > - for (i in range_vec(1, n + 1, 1)) { > > - member_ids.push("${i}") > > - }; > > - member_ids.join(", ") > > - }, > > - var actions = i"${rEG_ECMP_GROUP_ID()} = ${ecmp_group_id}; " > > - "${rEG_ECMP_MEMBER_ID()} = select(${member_ids});". > > - > > -Flow(.logical_datapath = r._uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = policy.priority, > > - .__match = policy.__match, > > - .actions = i"drop;", > > - .stage_hint = stage_hint(policy._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - r in &Router(), > > - var policy_uuid = FlatMap(r.policies), > > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > > - policy.action == i"drop". > > -Flow(.logical_datapath = r._uuid, > > - .stage = s_ROUTER_IN_POLICY(), > > - .priority = policy.priority, > > - .__match = policy.__match, > > - .actions = (pkt_mark_policy(policy.options) ++ "${rEG_ECMP_GROUP_ID()} = 0; next;").intern(), > > - .stage_hint = stage_hint(policy._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - r in &Router(), > > - var policy_uuid = FlatMap(r.policies), > > - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), > > - policy.action == i"allow". > > - > > - > > -/* XXX destination unreachable */ > > - > > -/* Local router ingress table ARP_RESOLVE: ARP Resolution. 
> > - * > > - * Multicast packets already have the outport set so just advance to next > > - * table (priority 500). > > - */ > > -for (&Router(._uuid = lr_uuid)) { > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 500, > > - .__match = i"ip4.mcast || ip6.mcast", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Local router ingress table ARP_RESOLVE: ARP Resolution. > > - * > > - * Any packet that reaches this table is an IP packet whose next-hop IP > > - * address is in the next-hop register. (ip4.dst is the final destination.) This table > > - * resolves the IP address in the next-hop register into an output port in outport and an > > - * Ethernet address in eth.dst. */ > > -// FIXME: does this apply to redirect ports? > > -for (rp in &RouterPort(.peer = PeerRouter{peer_port, _}, > > - .router = router, > > - .networks = networks)) > > -{ > > - for (&RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = peer_port}, > > - .json_name = peer_json_name, > > - .router = peer_router)) > > - { > > - /* This is a logical router port. If next-hop IP address in > > - * the next-hop register matches IP address of this router port, then > > - * the packet is intended to eventually be sent to this > > - * logical port. Set the destination mac address using this > > - * port's mac address. > > - * > > - * The packet is still in peer's logical pipeline. So the match > > - * should be on peer's outport. 
*/ > > - if (not networks.ipv4_addrs.is_empty()) { > > - var __match = "outport == ${peer_json_name} && " > > - "${rEG_NEXT_HOP()} == " ++ > > - format_v4_networks(networks, false) in > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = i"eth.dst = ${networks.ea}; next;", > > - .stage_hint = stage_hint(rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - if (not networks.ipv6_addrs.is_empty()) { > > - var __match = "outport == ${peer_json_name} && " > > - "xx${rEG_NEXT_HOP()} == " ++ > > - format_v6_networks(networks) in > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = __match.intern(), > > - .actions = i"eth.dst = ${networks.ea}; next;", > > - .stage_hint = stage_hint(rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -/* Packet is on a non gateway chassis and > > - * has an unresolved ARP on a network behind gateway > > - * chassis attached router port. Since, redirect type > > - * is "bridged", instead of calling "get_arp" > > - * on this node, we will redirect the packet to gateway > > - * chassis, by setting destination mac router port mac.*/ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 50, > > - .__match = i"outport == ${rp.json_name} && " > > - "!is_chassis_resident(${json_escape(chassis_redirect_name(l3dgw_port.name))})", > > - .actions = i"eth.dst = ${rp.networks.ea}; next;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - rp in &RouterPort(.lrp = lrp, .router = router), > > - Some{var l3dgw_port} = router.l3dgw_ports.nth(0), > > - Some{i"bridged"} == lrp.options.get(i"redirect-type"). > > - > > - > > -/* Drop IP traffic destined to router owned IPs. 
Part of it is dropped > > - * in stage "lr_in_ip_input" but traffic that could have been unSNATed > > - * but didn't match any existing session might still end up here. > > - * > > - * Priority 1. > > - */ > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 1, > > - .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lrp_uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > > - .router = &Router{.snat_ips = snat_ips, > > - ._uuid = lr_uuid}, > > - .networks = networks), > > - var addr = FlatMap(networks.ipv4_addrs), > > - snat_ips.contains_key(IPv4{addr.addr}), > > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 1, > > - .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(), > > - .actions = i"drop;", > > - .stage_hint = stage_hint(lrp_uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, > > - .router = &Router{.snat_ips = snat_ips, > > - ._uuid = lr_uuid}, > > - .networks = networks), > > - var addr = FlatMap(networks.ipv6_addrs), > > - snat_ips.contains_key(IPv6{addr.addr}), > > - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). 
> > - > > -/* Create ARP resolution flows for NAT and LB addresses for first hop > > - * logical routers > > - */ > > -Flow(.logical_datapath = peer.router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = ("outport == ${peer.json_name} && " ++ rEG_NEXT_HOP() ++ " == {${ips}}").intern(), > > - .actions = i"eth.dst = ${addr.ea}; next;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - RouterPortRoutableAddresses(port, addresses), > > - FirstHopRouterPortRoutableAddresses(port, peer_uuid), > > - peer in &RouterPort(.lrp = lrp), > > - lrp._uuid == peer_uuid, > > - not peer.router.options.get_bool_def(i"dynamic_neigh_routers", false), > > - var addr = FlatMap(addresses), > > - var ips = addr.ipv4_addrs.map(|a| a.addr.to_string()).join(", "). > > - > > -/* This is a logical switch port that backs a VM or a container. > > - * Extract its addresses. For each of the address, go through all > > - * the router ports attached to the switch (to which this port > > - * connects) and if the address in question is reachable from the > > - * router port, add an ARP/ND entry in that router's pipeline. 
*/ > > -for (SwitchPortIPv4Address( > > - .port = &SwitchPort{.lsp = lsp, .sw = sw}, > > - .ea = ea, > > - .addr = addr) > > - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) > > -{ > > - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, > > - .peer = Some{peer@&RouterPort{.router = peer_router}})) > > - { > > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{addr.addr}) in > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer.json_name} && " > > - "${rEG_NEXT_HOP()} == ${addr.addr}", > > - .actions = i"eth.dst = ${ea}; next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > -} > > - > > -for (SwitchPortIPv6Address( > > - .port = &SwitchPort{.lsp = lsp, .sw = sw}, > > - .ea = ea, > > - .addr = addr) > > - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) > > -{ > > - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, > > - .peer = Some{peer@&RouterPort{.router = peer_router}})) > > - { > > - Some{_} = find_lrp_member_ip(peer.networks, IPv6{addr.addr}) in > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer.json_name} && " > > - "xx${rEG_NEXT_HOP()} == ${addr.addr}", > > - .actions = i"eth.dst = ${ea}; next;", > > - .stage_hint = stage_hint(lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > -} > > - > > -/* True if 's' is an empty set or a set that contains just an empty string, > > - * false otherwise. > > - * > > - * This is meant for sets of 0 or 1 elements, like the OVSDB integration > > - * with DDlog uses. */ > > -function is_empty_set_or_string(s: Option<istring>): bool = { > > - match (s) { > > - None -> true, > > - Some{s} -> s == i"" > > - } > > -} > > - > > -/* This is a virtual port. 
Add ARP replies for the virtual ip with > > - * the mac of the present active virtual parent. > > - * If the logical port doesn't have virtual parent set in > > - * Port_Binding table, then add the flow to set eth.dst to > > - * 00:00:00:00:00:00 and advance to next table so that ARP is > > - * resolved by router pipeline using the arp{} action. > > - * The MAC_Binding entry for the virtual ip might be invalid. */ > > -Flow(.logical_datapath = peer.router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer.json_name} && " > > - "${rEG_NEXT_HOP()} == ${virtual_ip}", > > - .actions = i"eth.dst = 00:00:00:00:00:00; next;", > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), > > - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), > > - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), > > - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), > > - pb in sb::Port_Binding(.logical_port = sp.lsp.name), > > - is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None, > > - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), > > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). 
> > -Flow(.logical_datapath = peer.router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer.json_name} && " > > - "${rEG_NEXT_HOP()} == ${virtual_ip}", > > - .actions = i"eth.dst = ${address.ea}; next;", > > - .stage_hint = stage_hint(sp.lsp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), > > - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), > > - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), > > - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), > > - pb in sb::Port_Binding(.logical_port = sp.lsp.name), > > - not (is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None), > > - Some{var virtual_parent} = pb.virtual_parent, > > - vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = virtual_parent}), > > - var address = FlatMap(vp.static_addresses), > > - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), > > - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). > > - > > -/* This is a logical switch port that connects to a router. */ > > - > > -/* The peer of this switch port is the router port for which > > - * we need to add logical flows such that it can resolve > > - * ARP entries for all the other router ports connected to > > - * the switch in question. */ > > -for (&SwitchPort(.lsp = lsp1, > > - .peer = Some{peer1@&RouterPort{.router = peer_router}}, > > - .sw = sw) > > - if lsp1.is_enabled() and > > - not peer_router.options.get_bool_def(i"dynamic_neigh_routers", false)) > > -{ > > - for (&SwitchPort(.lsp = lsp2, .peer = Some{peer2}, > > - .sw = &Switch{._uuid = sw._uuid}) > > - /* Skip the router port under consideration. 
*/ > > - if peer2.lrp._uuid != peer1.lrp._uuid) > > - { > > - if (not peer2.networks.ipv4_addrs.is_empty()) { > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer1.json_name} && " > > - "${rEG_NEXT_HOP()} == ${format_v4_networks(peer2.networks, false)}", > > - .actions = i"eth.dst = ${peer2.networks.ea}; next;", > > - .stage_hint = stage_hint(lsp1._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - if (not peer2.networks.ipv6_addrs.is_empty()) { > > - Flow(.logical_datapath = peer_router._uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 100, > > - .__match = i"outport == ${peer1.json_name} && " > > - "xx${rEG_NEXT_HOP()} == ${format_v6_networks(peer2.networks)}", > > - .actions = i"eth.dst = ${peer2.networks.ea}; next;", > > - .stage_hint = stage_hint(lsp1._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - } > > - } > > -} > > - > > -for (&Router(._uuid = lr_uuid)) > > -{ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 0, > > - .__match = i"ip4", > > - .actions = i"get_arp(outport, ${rEG_NEXT_HOP()}); next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None); > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_RESOLVE(), > > - .priority = 0, > > - .__match = i"ip6", > > - .actions = i"get_nd(outport, xx${rEG_NEXT_HOP()}); next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Local router ingress table CHK_PKT_LEN: Check packet length. > > - * > > - * Any IPv4 or IPv6 packet with outport set to a router port that has > > - * gateway_mtu > 0 configured, check the packet length and store the result in > > - * the 'REGBIT_PKT_LARGER' register bit. > > - * > > - * Local router ingress table LARGER_PKTS: Handle larger packets. 
> > - * > > - * Any IPv4 or IPv6 packet with outport set to a router port that has > > - * gatway_mtu > 0 configured and the 'REGBIT_PKT_LARGER' register bit is set, > > - * generate an ICMPv4/ICMPv6 packet with type 3/2 (Destination > > - * Unreachable/Packet Too Big) and code 4/0 (Fragmentation needed). > > - */ > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_CHK_PKT_LEN(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - &Router(._uuid = lr_uuid). > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LARGER_PKTS(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) :- > > - &Router(._uuid = lr_uuid). > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_CHK_PKT_LEN(), > > - .priority = 50, > > - .__match = i"outport == ${gw_mtu_rp.json_name}", > > - .actions = i"${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " > > - "next;", > > - .stage_hint = stage_hint(gw_mtu_rp.lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) :- > > - r in &Router(._uuid = lr_uuid), > > - gw_mtu_rp in &RouterPort(.router = r), > > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > > - gw_mtu > 0, > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(). > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LARGER_PKTS(), > > - .priority = 150, > > - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip4 && " > > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > > - .actions = i"icmp4_error {" > > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > > - "${rEGBIT_PKT_LARGER()} = 0; " > > - "eth.dst = ${rp.networks.ea}; " > > - "ip4.dst = ip4.src; " > > - "ip4.src = ${first_ipv4.addr}; " > > - "ip.ttl = 255; " > > - "icmp4.type = 3; /* Destination Unreachable. 
*/ " > > - "icmp4.code = 4; /* Frag Needed and DF was Set. */ " > > - /* Set icmp4.frag_mtu to gw_mtu */ > > - "icmp4.frag_mtu = ${gw_mtu}; " > > - "next(pipeline=ingress, table=0); " > > - "};", > > - .io_port = None, > > - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), > > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > > - r in &Router(._uuid = lr_uuid), > > - gw_mtu_rp in &RouterPort(.router = r), > > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > > - gw_mtu > 0, > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > > - rp in &RouterPort(.router = r), > > - rp.lrp != gw_mtu_rp.lrp, > > - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). > > - > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 150, > > - .__match = i"inport == ${rp.json_name} && ip4 && " > > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > > - .actions = i"icmp4_error {" > > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > > - "${rEGBIT_PKT_LARGER()} = 0; " > > - "eth.dst = ${rp.networks.ea}; " > > - "ip4.dst = ip4.src; " > > - "ip4.src = ${first_ipv4.addr}; " > > - "ip.ttl = 255; " > > - "icmp4.type = 3; /* Destination Unreachable. */ " > > - "icmp4.code = 4; /* Frag Needed and DF was Set. */ " > > - /* Set icmp4.frag_mtu to gw_mtu */ > > - "icmp4.frag_mtu = ${gw_mtu}; " > > - "next(pipeline=ingress, table=0); " > > - "};", > > - .io_port = None, > > - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), > > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > > - r in &Router(._uuid = lr_uuid), > > - gw_mtu_rp in &RouterPort(.router = r), > > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > > - gw_mtu > 0, > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > > - rp in &RouterPort(.router = r), > > - rp.lrp == gw_mtu_rp.lrp, > > - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). 
> > - > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_LARGER_PKTS(), > > - .priority = 150, > > - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip6 && " > > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > > - .actions = i"icmp6_error {" > > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > > - "${rEGBIT_PKT_LARGER()} = 0; " > > - "eth.dst = ${rp.networks.ea}; " > > - "ip6.dst = ip6.src; " > > - "ip6.src = ${first_ipv6.addr}; " > > - "ip.ttl = 255; " > > - "icmp6.type = 2; /* Packet Too Big. */ " > > - "icmp6.code = 0; " > > - /* Set icmp6.frag_mtu to gw_mtu */ > > - "icmp6.frag_mtu = ${gw_mtu}; " > > - "next(pipeline=ingress, table=0); " > > - "};", > > - .io_port = None, > > - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), > > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > > - r in &Router(._uuid = lr_uuid), > > - gw_mtu_rp in &RouterPort(.router = r), > > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > > - gw_mtu > 0, > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > > - rp in &RouterPort(.router = r), > > - rp.lrp != gw_mtu_rp.lrp, > > - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). > > - > > -Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 150, > > - .__match = i"inport == ${rp.json_name} && ip6 && " > > - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", > > - .actions = i"icmp6_error {" > > - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " > > - "${rEGBIT_PKT_LARGER()} = 0; " > > - "eth.dst = ${rp.networks.ea}; " > > - "ip6.dst = ip6.src; " > > - "ip6.src = ${first_ipv6.addr}; " > > - "ip.ttl = 255; " > > - "icmp6.type = 2; /* Packet Too Big. 
*/ " > > - "icmp6.code = 0; " > > - /* Set icmp6.frag_mtu to gw_mtu */ > > - "icmp6.frag_mtu = ${gw_mtu}; " > > - "next(pipeline=ingress, table=0); " > > - "};", > > - .io_port = None, > > - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), > > - .stage_hint = stage_hint(rp.lrp._uuid)) :- > > - r in &Router(._uuid = lr_uuid), > > - gw_mtu_rp in &RouterPort(.router = r), > > - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), > > - gw_mtu > 0, > > - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), > > - rp in &RouterPort(.router = r), > > - rp.lrp == gw_mtu_rp.lrp, > > - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). > > - > > -/* Logical router ingress table GW_REDIRECT: Gateway redirect. > > - * > > - * For traffic with outport equal to the l3dgw_port > > - * on a distributed router, this table redirects a subset > > - * of the traffic to the l3redirect_port which represents > > - * the central instance of the l3dgw_port. > > - */ > > -for (&Router(._uuid = lr_uuid)) > > -{ > > - /* For traffic with outport == l3dgw_port, if the > > - * packet did not match any higher priority redirect > > - * rule, then the traffic is redirected to the central > > - * instance of the l3dgw_port. */ > > - for (DistributedGatewayPort(lrp, lr_uuid, _)) { > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_GW_REDIRECT(), > > - .priority = 50, > > - .__match = i"outport == ${json_escape(lrp.name)}", > > - .actions = i"outport = ${json_escape(chassis_redirect_name(lrp.name))}; next;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - > > - /* Packets are allowed by default. */ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_GW_REDIRECT(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"next;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* Local router ingress table ARP_REQUEST: ARP request. 
> > - * > > - * In the common case where the Ethernet destination has been resolved, > > - * this table outputs the packet (priority 0). Otherwise, it composes > > - * and sends an ARP/IPv6 NA request (priority 100). */ > > -Flow(.logical_datapath = router._uuid, > > - .stage = s_ROUTER_IN_ARP_REQUEST(), > > - .priority = 200, > > - .__match = __match, > > - .actions = actions, > > - .io_port = None, > > - .controller_meter = router.copp.get(cOPP_ND_NS_RESOLVE()), > > - .stage_hint = 0) :- > > - rsr in RouterStaticRoute(.router = router), > > - var dst = FlatMap(rsr.dsts), > > - IPv6{var gw_ip6} = dst.nexthop, > > - var __match = i"eth.dst == 00:00:00:00:00:00 && " > > - "ip6 && xx${rEG_NEXT_HOP()} == ${dst.nexthop}", > > - var sn_addr = gw_ip6.solicited_node(), > > - var eth_dst = sn_addr.multicast_to_ethernet(), > > - var sn_addr_s = sn_addr.string_mapped(), > > - var actions = i"nd_ns { " > > - "eth.dst = ${eth_dst}; " > > - "ip6.dst = ${sn_addr_s}; " > > - "nd.target = ${dst.nexthop}; " > > - "output; " > > - "};". 
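[Reviewer aside:] the `nd_ns` action composed in the rule above derives the solicited-node multicast address and its Ethernet mapping from the next-hop, via the `solicited_node()` and `multicast_to_ethernet()` helpers. A self-contained sketch of that derivation (illustrative Python, not the DDlog helpers themselves):

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """RFC 4291 solicited-node address: ff02::1:ffXX:XXXX, where
    XX:XXXX is the low 24 bits of the unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return str(ipaddress.IPv6Address((0xFF02 << 112) | (0x01FF << 24) | low24))

def multicast_to_ethernet(mcast: str) -> str:
    """RFC 2464 mapping: 33:33 followed by the low 32 bits of the
    IPv6 multicast group address."""
    low32 = int(ipaddress.IPv6Address(mcast)) & 0xFFFFFFFF
    return "33:33:%02x:%02x:%02x:%02x" % tuple(low32.to_bytes(4, "big"))
```

E.g. a next-hop of fe80::1 yields the ND solicitation destination ff02::1:ff00:1 and Ethernet destination 33:33:ff:00:00:01, matching the `eth.dst`/`ip6.dst` fields the rule fills in.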
> > - > > -for (&Router(._uuid = lr_uuid, .copp = copp)) > > -{ > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_REQUEST(), > > - .priority = 100, > > - .__match = i"eth.dst == 00:00:00:00:00:00 && ip4", > > - .actions = i"arp { " > > - "eth.dst = ff:ff:ff:ff:ff:ff; " > > - "arp.spa = ${rEG_SRC()}; " > > - "arp.tpa = ${rEG_NEXT_HOP()}; " > > - "arp.op = 1; " /* ARP request */ > > - "output; " > > - "};", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ARP_RESOLVE()), > > - .stage_hint = 0); > > - > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_REQUEST(), > > - .priority = 100, > > - .__match = i"eth.dst == 00:00:00:00:00:00 && ip6", > > - .actions = i"nd_ns { " > > - "nd.target = xx${rEG_NEXT_HOP()}; " > > - "output; " > > - "};", > > - .io_port = None, > > - .controller_meter = copp.get(cOPP_ND_NS_RESOLVE()), > > - .stage_hint = 0); > > - > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_ARP_REQUEST(), > > - .priority = 0, > > - .__match = i"1", > > - .actions = i"output;", > > - .stage_hint = 0, > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > - > > -/* Logical router egress table DELIVERY: Delivery (priority 100). > > - * > > - * Priority 100 rules deliver packets to enabled logical ports. */ > > -for (&RouterPort(.lrp = lrp, > > - .json_name = json_name, > > - .networks = lrp_networks, > > - .router = &Router{._uuid = lr_uuid, .mcast_cfg = mcast_cfg}) > > - /* Drop packets to disabled logical ports (since logical flow > > - * tables are default-drop). */ > > - if lrp.is_enabled()) > > -{ > > - /* If multicast relay is enabled then also adjust source mac for IP > > - * multicast traffic. 
> > - */ > > - if (mcast_cfg.relay) { > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_OUT_DELIVERY(), > > - .priority = 110, > > - .__match = i"(ip4.mcast || ip6.mcast) && " > > - "outport == ${json_name}", > > - .actions = i"eth.src = ${lrp_networks.ea}; output;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > - }; > > - /* No egress packets should be processed in the context of > > - * a chassisredirect port. The chassisredirect port should > > - * be replaced by the l3dgw port in the local output > > - * pipeline stage before egress processing. */ > > - > > - Flow(.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_OUT_DELIVERY(), > > - .priority = 100, > > - .__match = i"outport == ${json_name}", > > - .actions = i"output;", > > - .stage_hint = stage_hint(lrp._uuid), > > - .io_port = None, > > - .controller_meter = None) > > -} > > - > > -/* > > - * Datapath tunnel key allocation: > > - * > > - * Allocates a globally unique tunnel id in the range 1...2**24-1 for > > - * each Logical_Switch and Logical_Router. > > - */ > > - > > -function oVN_MAX_DP_KEY(): integer { (64'd1 << 24) - 1 } > > -function oVN_MAX_DP_GLOBAL_NUM(): integer { (64'd1 << 16) - 1 } > > -function oVN_MIN_DP_KEY_LOCAL(): integer { 1 } > > -function oVN_MAX_DP_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } > > -function oVN_MIN_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY_LOCAL() + 1 } > > -function oVN_MAX_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY() } > > - > > -function oVN_MAX_DP_VXLAN_KEY(): integer { (64'd1 << 12) - 1 } > > -function oVN_MAX_DP_VXLAN_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } > > - > > -/* If any chassis uses VXLAN encapsulation, then the entire deployment is in VXLAN mode. 
*/ > > -relation IsVxlanMode0() > > -IsVxlanMode0() :- > > - sb::Chassis(.encaps = encaps), > > - var encap_uuid = FlatMap(encaps), > > - sb::Encap(._uuid = encap_uuid, .__type = i"vxlan"). > > - > > -relation IsVxlanMode[bool] > > -IsVxlanMode[true] :- > > - IsVxlanMode0(). > > -IsVxlanMode[false] :- > > - Unit(), > > - not IsVxlanMode0(). > > - > > -/* The maximum datapath tunnel key that may be used. */ > > -relation OvnMaxDpKeyLocal[integer] > > -/* OVN_MAX_DP_GLOBAL_NUM doesn't apply for vxlan mode. */ > > -OvnMaxDpKeyLocal[oVN_MAX_DP_VXLAN_KEY()] :- IsVxlanMode[true]. > > -OvnMaxDpKeyLocal[oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM()] :- IsVxlanMode[false]. > > - > > -relation OvnPortKeyBits[bit<32>] > > -OvnPortKeyBits[12] :- IsVxlanMode[true]. > > -OvnPortKeyBits[16] :- IsVxlanMode[false]. > > - > > -relation OvnDpKeyBits[bit<32>] > > -OvnDpKeyBits[12] :- IsVxlanMode[true]. > > -OvnDpKeyBits[24] :- IsVxlanMode[false]. > > - > > -function get_dp_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> { > > - map.get(key) > > - .and_then(parse_dec_u64) > > - .and_then(|x| if (x > 0 and x < (1<<bits)) { > > - Some{x} > > - } else { > > - None > > - }) > > -} > > - > > -// Tunnel keys requested by datapaths. > > -relation RequestedTunKey(datapath: uuid, tunkey: integer) > > -RequestedTunKey(uuid, tunkey) :- > > - OvnDpKeyBits[bits], > > - ls in &nb::Logical_Switch(._uuid = uuid), > > - Some{var tunkey} = get_dp_tunkey(ls.other_config, i"requested-tnl-key", bits). > > -RequestedTunKey(uuid, tunkey) :- > > - OvnDpKeyBits[bits], > > - lr in nb::Logical_Router(._uuid = uuid), > > - Some{var tunkey} = get_dp_tunkey(lr.options, i"requested-tnl-key", bits). > > -Warning[message] :- > > - RequestedTunKey(datapath, tunkey), > > - var count = datapath.group_by((tunkey)).size(), > > - count > 1, > > - var message = "${count} logical switches or routers request " > > - "datapath tunnel key ${tunkey}". 
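[Reviewer aside:] the key-space functions and the `OvnMaxDpKeyLocal` relation above reduce to a small amount of arithmetic: datapath keys live in 1..2^24-1, the top 2^16-1 values are reserved as a global range, and VXLAN mode shrinks the locally allocatable space to 12 bits. A quick sketch of the effective upper bound per mode:

```python
# Constants mirroring oVN_MAX_DP_KEY() and friends in the removed code.
OVN_MAX_DP_KEY = (1 << 24) - 1          # 16777215
OVN_MAX_DP_GLOBAL_NUM = (1 << 16) - 1   # 65535
OVN_MAX_DP_VXLAN_KEY = (1 << 12) - 1    # 4095

def max_dp_key_local(vxlan_mode: bool) -> int:
    """Largest datapath tunnel key ovn-northd may allocate locally,
    mirroring the OvnMaxDpKeyLocal relation above: the global range
    does not apply in VXLAN mode, where the 12-bit VNI limit wins."""
    if vxlan_mode:
        return OVN_MAX_DP_VXLAN_KEY
    return OVN_MAX_DP_KEY - OVN_MAX_DP_GLOBAL_NUM
```

That is, 4095 when any chassis uses VXLAN encapsulation, 16711680 otherwise.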
> > - > > -// Assign tunnel keys: > > -// - First priority to requested tunnel keys. > > -// - Second priority to already assigned tunnel keys. > > -// In either case, make an arbitrary choice in case of conflicts within a > > -// priority level. > > -relation AssignedTunKey(datapath: uuid, tunkey: integer) > > -AssignedTunKey(datapath, tunkey) :- > > - RequestedTunKey(datapath, tunkey), > > - var datapath = datapath.group_by(tunkey).first(). > > -AssignedTunKey(datapath, tunkey) :- > > - sb::Datapath_Binding(._uuid = datapath, .tunnel_key = tunkey), > > - not RequestedTunKey(_, tunkey), > > - not RequestedTunKey(datapath, _), > > - var datapath = datapath.group_by(tunkey).first(). > > - > > -// all tunnel keys already in use in the Realized table > > -relation AllocatedTunKeys(keys: Set<integer>) > > -AllocatedTunKeys(keys) :- > > - AssignedTunKey(.tunkey = tunkey), > > - var keys = tunkey.group_by(()).to_set(). > > - > > -// Datapath_Binding's not yet in the Realized table > > -relation NotYetAllocatedTunKeys(datapaths: Vec<uuid>) > > - > > -NotYetAllocatedTunKeys(datapaths) :- > > - OutProxy_Datapath_Binding(._uuid = datapath), > > - not AssignedTunKey(datapath, _), > > - var datapaths = datapath.group_by(()).to_vec(). > > - > > -// Perform the allocation > > -relation TunKeyAllocation(datapath: uuid, tunkey: integer) > > - > > -TunKeyAllocation(datapath, tunkey) :- AssignedTunKey(datapath, tunkey). > > - > > -// Case 1: AllocatedTunKeys relation is not empty (i.e., contains > > -// a single record that stores a set of allocated keys) > > -TunKeyAllocation(datapath, tunkey) :- > > - NotYetAllocatedTunKeys(unallocated), > > - AllocatedTunKeys(allocated), > > - OvnMaxDpKeyLocal[max_dp_key_local], > > - var allocation = FlatMap(allocate(allocated, unallocated, 1, max_dp_key_local)), > > - (var datapath, var tunkey) = allocation. 
> > - > > -// Case 2: AllocatedTunKeys relation is empty > > -TunKeyAllocation(datapath, tunkey) :- > > - NotYetAllocatedTunKeys(unallocated), > > - not AllocatedTunKeys(_), > > - OvnMaxDpKeyLocal[max_dp_key_local], > > - var allocation = FlatMap(allocate(set_empty(), unallocated, 1, max_dp_key_local)), > > - (var datapath, var tunkey) = allocation. > > - > > -/* > > - * Port id allocation: > > - * > > - * Port IDs in a per-datapath space in the range 1...2**(bits-1)-1, where > > - * bits is the number of bits available for port keys (default: 16, vxlan: 12) > > - */ > > - > > -function get_port_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> { > > - map.get(key) > > - .and_then(parse_dec_u64) > > - .and_then(|x| if (x > 0 and x < (1<<bits)) { > > - Some{x} > > - } else { > > - None > > - }) > > -} > > - > > -// Tunnel keys requested by port bindings. > > -relation RequestedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) > > -RequestedPortTunKey(datapath, port, tunkey) :- > > - OvnPortKeyBits[bits], > > - sp in &SwitchPort(), > > - var datapath = sp.sw._uuid, > > - var port = sp.lsp._uuid, > > - Some{var tunkey} = get_port_tunkey(sp.lsp.options, i"requested-tnl-key", bits). > > -RequestedPortTunKey(datapath, port, tunkey) :- > > - OvnPortKeyBits[bits], > > - rp in &RouterPort(), > > - var datapath = rp.router._uuid, > > - var port = rp.lrp._uuid, > > - Some{var tunkey} = get_port_tunkey(rp.lrp.options, i"requested-tnl-key", bits). > > -Warning[message] :- > > - RequestedPortTunKey(datapath, port, tunkey), > > - var count = port.group_by((datapath, tunkey)).size(), > > - count > 1, > > - var message = "${count} logical ports in the same datapath " > > - "request port tunnel key ${tunkey}". > > - > > -// Assign tunnel keys: > > -// - First priority to requested tunnel keys. > > -// - Second priority to already assigned tunnel keys. 
> > -// In either case, make an arbitrary choice in case of conflicts within a > > -// priority level. > > -relation AssignedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) > > -AssignedPortTunKey(datapath, port, tunkey) :- > > - RequestedPortTunKey(datapath, port, tunkey), > > - var port = port.group_by((datapath, tunkey)).first(). > > -AssignedPortTunKey(datapath, port, tunkey) :- > > - sb::Port_Binding(._uuid = port_uuid, > > - .datapath = datapath, > > - .tunnel_key = tunkey), > > - not RequestedPortTunKey(datapath, _, tunkey), > > - not RequestedPortTunKey(datapath, port_uuid, _), > > - var port = port_uuid.group_by((datapath, tunkey)).first(). > > - > > -// all tunnel keys already in use in the Realized table > > -relation AllocatedPortTunKeys(datapath: uuid, keys: Set<integer>) > > - > > -AllocatedPortTunKeys(datapath, keys) :- > > - AssignedPortTunKey(datapath, port, tunkey), > > - var keys = tunkey.group_by(datapath).to_set(). > > - > > -// Port_Binding's not yet in the Realized table > > -relation NotYetAllocatedPortTunKeys(datapath: uuid, all_logical_ids: Vec<uuid>) > > - > > -NotYetAllocatedPortTunKeys(datapath, all_names) :- > > - OutProxy_Port_Binding(._uuid = port_uuid, .datapath = datapath), > > - not AssignedPortTunKey(datapath, port_uuid, _), > > - var all_names = port_uuid.group_by(datapath).to_vec(). > > - > > -// Perform the allocation. > > -relation PortTunKeyAllocation(port: uuid, tunkey: integer) > > - > > -// Transfer existing allocations from the realized table. > > -PortTunKeyAllocation(port, tunkey) :- AssignedPortTunKey(_, port, tunkey). > > - > > -// Case 1: AllocatedPortTunKeys(datapath) is not empty (i.e., contains > > -// a single record that stores a set of allocated keys). 
> > -PortTunKeyAllocation(port, tunkey) :- > > - AllocatedPortTunKeys(datapath, allocated), > > - NotYetAllocatedPortTunKeys(datapath, unallocated), > > - var allocation = FlatMap(allocate(allocated, unallocated, 1, 64'hffff)), > > - (var port, var tunkey) = allocation. > > - > > -// Case 2: PortAllocatedTunKeys(datapath) relation is empty > > -PortTunKeyAllocation(port, tunkey) :- > > - NotYetAllocatedPortTunKeys(datapath, unallocated), > > - not AllocatedPortTunKeys(datapath, _), > > - var allocation = FlatMap(allocate(set_empty(), unallocated, 1, 64'hffff)), > > - (var port, var tunkey) = allocation. > > - > > -/* > > - * Multicast group tunnel_key allocation: > > - * > > - * Tunnel-keys in a per-datapath space in the range 32770...65535 > > - */ > > - > > -// All tunnel keys already in use in the Realized table. > > -relation AllocatedMulticastGroupTunKeys(datapath_uuid: uuid, keys: Set<integer>) > > - > > -AllocatedMulticastGroupTunKeys(datapath_uuid, keys) :- > > - sb::Multicast_Group(.datapath = datapath_uuid, .tunnel_key = tunkey), > > - //sb::UUIDMap_Datapath_Binding(datapath, Left{datapath_uuid}), > > - var keys = tunkey.group_by(datapath_uuid).to_set(). > > - > > -// Multicast_Group's not yet in the Realized table. > > -relation NotYetAllocatedMulticastGroupTunKeys(datapath_uuid: uuid, > > - all_logical_ids: Vec<istring>) > > - > > -NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, all_names) :- > > - OutProxy_Multicast_Group(.name = name, .datapath = datapath_uuid), > > - not sb::Multicast_Group(.name = name, .datapath = datapath_uuid), > > - var all_names = name.group_by(datapath_uuid).to_vec(). 
> > - > > -// Perform the allocation > > -relation MulticastGroupTunKeyAllocation(datapath_uuid: uuid, group: istring, tunkey: integer) > > - > > -// transfer existing allocations from the realized table > > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > > - //sb::UUIDMap_Datapath_Binding(_, datapath_uuid), > > - sb::Multicast_Group(.name = group, > > - .datapath = datapath_uuid, > > - .tunnel_key = tunkey). > > - > > -// Case 1: AllocatedMulticastGroupTunKeys(datapath) is not empty (i.e., > > -// contains a single record that stores a set of allocated keys) > > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > > - AllocatedMulticastGroupTunKeys(datapath_uuid, allocated), > > - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), > > - (_, var min_key) = mC_IP_MCAST_MIN(), > > - (_, var max_key) = mC_IP_MCAST_MAX(), > > - var allocation = FlatMap(allocate(allocated, unallocated, > > - min_key, max_key)), > > - (var group, var tunkey) = allocation. > > - > > -// Case 2: AllocatedMulticastGroupTunKeys(datapath) relation is empty > > -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- > > - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), > > - not AllocatedMulticastGroupTunKeys(datapath_uuid, _), > > - (_, var min_key) = mC_IP_MCAST_MIN(), > > - (_, var max_key) = mC_IP_MCAST_MAX(), > > - var allocation = FlatMap(allocate(set_empty(), unallocated, > > - min_key, max_key)), > > - (var group, var tunkey) = allocation. > > - > > -/* > > - * Queue ID allocation > > - * > > - * Queue IDs on a chassis, for routers that have QoS enabled, in a per-chassis > > - * space in the range 1...0xf000. It looks to me like there'd only be a small > > - * number of these per chassis, and probably a small number overall, in case it > > - * matters. 
> > - * > > - * Queue ID may also need to be deallocated if port loses QoS attributes > > - * > > - * This logic applies mainly to sb::Port_Binding records bound to a chassis > > - * (i.e. with the chassis column nonempty) but "localnet" ports can also > > - * have a queue ID. For those we use the port's own UUID as the chassis UUID. > > - */ > > - > > -function port_has_qos_params(opts: Map<istring, istring>): bool = { > > - opts.contains_key(i"qos_max_rate") or opts.contains_key(i"qos_burst") > > -} > > - > > - > > -// ports in Out_Port_Binding that require queue ID on chassis > > -relation PortRequiresQID(port: uuid, chassis: uuid) > > - > > -PortRequiresQID(pb._uuid, chassis) :- > > - pb in OutProxy_Port_Binding(), > > - pb.__type != i"localnet", > > - port_has_qos_params(pb.options), > > - sb::Port_Binding(._uuid = pb._uuid, .chassis = chassis_set), > > - Some{var chassis} = chassis_set. > > -PortRequiresQID(pb._uuid, pb._uuid) :- > > - pb in OutProxy_Port_Binding(), > > - pb.__type == i"localnet", > > - port_has_qos_params(pb.options), > > - sb::Port_Binding(._uuid = pb._uuid). > > - > > -relation AggPortRequiresQID(chassis: uuid, ports: Vec<uuid>) > > - > > -AggPortRequiresQID(chassis, ports) :- > > - PortRequiresQID(port, chassis), > > - var ports = port.group_by(chassis).to_vec(). > > - > > -relation AllocatedQIDs(chassis: uuid, allocated_ids: Map<uuid, integer>) > > - > > -AllocatedQIDs(chassis, allocated_ids) :- > > - pb in sb::Port_Binding(), > > - pb.__type != i"localnet", > > - Some{var chassis} = pb.chassis, > > - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), > > - Some{var qid} = parse_dec_u64(qid_str), > > - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). 
> > -AllocatedQIDs(chassis, allocated_ids) :- > > - pb in sb::Port_Binding(), > > - pb.__type == i"localnet", > > - var chassis = pb._uuid, > > - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), > > - Some{var qid} = parse_dec_u64(qid_str), > > - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). > > - > > -// allocate queue IDs to ports > > -relation QueueIDAllocation(port: uuid, qids: Option<integer>) > > - > > -// None for ports that do not require a queue > > -QueueIDAllocation(port, None) :- > > - OutProxy_Port_Binding(._uuid = port), > > - not PortRequiresQID(port, _). > > - > > -QueueIDAllocation(port, Some{qid}) :- > > - AggPortRequiresQID(chassis, ports), > > - AllocatedQIDs(chassis, allocated_ids), > > - var allocations = FlatMap(adjust_allocation(allocated_ids, ports, 1, 64'hf000)), > > - (var port, var qid) = allocations. > > - > > -QueueIDAllocation(port, Some{qid}) :- > > - AggPortRequiresQID(chassis, ports), > > - not AllocatedQIDs(chassis, _), > > - var allocations = FlatMap(adjust_allocation(map_empty(), ports, 1, 64'hf000)), > > - (var port, var qid) = allocations. > > - > > -/* > > - * This allows ovn-northd to preserve options:ipv6_ra_pd_list, which is set by > > - * ovn-controller. > > - */ > > -relation PreserveIPv6RAPDList(lrp_uuid: uuid, ipv6_ra_pd_list: Option<istring>) > > -PreserveIPv6RAPDList(lrp_uuid, ipv6_ra_pd_list) :- > > - sb::Port_Binding(._uuid = lrp_uuid, .options = options), > > - var ipv6_ra_pd_list = options.get(i"ipv6_ra_pd_list"). > > -PreserveIPv6RAPDList(lrp_uuid, None) :- > > - &nb::Logical_Router_Port(._uuid = lrp_uuid), > > - not sb::Port_Binding(._uuid = lrp_uuid). > > - > > -/* > > - * Tag allocation for nested containers. > > - */ > > - > > -/* Reserved tags for each parent port, including: > > - * 1. For ports that need a dynamically allocated tag, existing tag, if any, > > - * 2. For ports that have a statically assigned tag (via `tag_request`), the > > - * `tag_request` value. > > - * 3. 
For ports that do not have a tag_request, but have a tag statically assigned > > - * by directly setting the `tag` field, use this value. > > - */ > > -relation SwitchPortReservedTag(parent_name: istring, tags: integer) > > - > > -SwitchPortReservedTag(parent_name, tag) :- > > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = needs_dynamic_tag, .parent_name = Some{parent_name}), > > - Some{var tag} = if (needs_dynamic_tag) { > > - lsp.tag > > - } else { > > - match (lsp.tag_request) { > > - Some{req} -> Some{req}, > > - None -> lsp.tag > > - } > > - }. > > - > > -relation SwitchPortReservedTags(parent_name: istring, tags: Set<integer>) > > - > > -SwitchPortReservedTags(parent_name, tags) :- > > - SwitchPortReservedTag(parent_name, tag), > > - var tags = tag.group_by(parent_name).to_set(). > > - > > -SwitchPortReservedTags(parent_name, set_empty()) :- > > - &nb::Logical_Switch_Port(.name = parent_name), > > - not SwitchPortReservedTag(.parent_name = parent_name). > > - > > -/* Allocate tags for ports that require dynamically allocated tags and do not > > - * have any yet. > > - */ > > -relation SwitchPortAllocatedTags(lsp_uuid: uuid, tag: Option<integer>) > > - > > -SwitchPortAllocatedTags(lsp_uuid, tag) :- > > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true, .parent_name = Some{parent_name}), > > - lsp.tag == None, > > - var lsps_need_tag = lsp._uuid.group_by(parent_name).to_vec(), > > - SwitchPortReservedTags(parent_name, reserved), > > - var dyn_tags = allocate_opt(reserved, > > - lsps_need_tag, > > - 1, /* Tag 0 is invalid for nested containers. */ > > - 4095), > > - var lsp_tag = FlatMap(dyn_tags), > > - (var lsp_uuid, var tag) = lsp_tag. > > - > > -/* New tag-to-port assignment: > > - * Case 1. Statically reserved tag (via `tag_request`), if any. > > - * Case 2. Existing tag for ports that require a dynamically allocated tag and already have one. > > - * Case 3. Use newly allocated tags (from `SwitchPortAllocatedTags`) for all other ports. 
> > - */ > > -relation SwitchPortNewDynamicTag(port: uuid, tag: Option<integer>) > > - > > -/* Case 1 */ > > -SwitchPortNewDynamicTag(lsp._uuid, tag) :- > > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = false), > > - var tag = match (lsp.tag_request) { > > - Some{0} -> None, > > - treq -> treq > > - }. > > - > > -/* Case 2 */ > > -SwitchPortNewDynamicTag(lsp._uuid, Some{tag}) :- > > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), > > - Some{var tag} = lsp.tag. > > - > > -/* Case 3 */ > > -SwitchPortNewDynamicTag(lsp._uuid, tag) :- > > - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), > > - lsp.tag == None, > > - SwitchPortAllocatedTags(lsp._uuid, tag). > > - > > -/* IP_Multicast table (only applicable for Switches). */ > > -sb::Out_IP_Multicast(._uuid = cfg.datapath, > > - .datapath = cfg.datapath, > > - .enabled = Some{cfg.enabled}, > > - .querier = Some{cfg.querier}, > > - .eth_src = cfg.eth_src, > > - .ip4_src = cfg.ip4_src, > > - .ip6_src = cfg.ip6_src, > > - .table_size = Some{cfg.table_size}, > > - .idle_timeout = Some{cfg.idle_timeout}, > > - .query_interval = Some{cfg.query_interval}, > > - .query_max_resp = Some{cfg.query_max_resp}) :- > > - McastSwitchCfg[cfg]. > > - > > - > > -relation PortExists(name: istring) > > -PortExists(name) :- &nb::Logical_Switch_Port(.name = name). > > -PortExists(name) :- &nb::Logical_Router_Port(.name = name). > > - > > -sb::Out_Load_Balancer(._uuid = lb._uuid, > > - .name = lb.name, > > - .vips = lb.vips, > > - .protocol = lb.protocol, > > - .datapaths = datapaths, > > - .external_ids = [i"lb_id" -> uuid2str(lb_uuid).intern()], > > - .options = options) :- > > - nb in &nb::Logical_Switch(._uuid = ls_uuid, .load_balancer = lb_uuids), > > - var lb_uuid = FlatMap(lb_uuids), > > - var datapaths = ls_uuid.group_by(lb_uuid).to_set(), > > - lb in &nb::Load_Balancer(._uuid = lb_uuid), > > - /* Store the fact that northd provides the original (destination IP + > > - * transport port) tuple. 
> > - */ > > - var options = lb.options.insert_imm(i"hairpin_orig_tuple", i"true"). > > - > > -sb::Out_Service_Monitor(._uuid = hash128((svc_monitor.port_name, lbvipbackend.ip, lbvipbackend.port, protocol)), > > - .ip = i"${lbvipbackend.ip}", > > - .protocol = Some{protocol}, > > - .port = lbvipbackend.port as integer, > > - .logical_port = svc_monitor.port_name, > > - .src_mac = i"${svc_monitor_mac}", > > - .src_ip = svc_monitor.src_ip, > > - .options = health_check.options, > > - .external_ids = map_empty()) :- > > - SvcMonitorMac(svc_monitor_mac), > > - LBVIP[lbvip@&LBVIP{.lb = lb}], > > - Some{var health_check} = lbvip.health_check, > > - var lbvipbackend = FlatMap(lbvip.backends), > > - Some{var svc_monitor} = lbvipbackend.svc_monitor, > > - PortExists(svc_monitor.port_name), > > - var protocol = default_protocol(lb.protocol), > > - protocol != i"sctp". > > - > > -Warning["SCTP load balancers do not currently support " > > - "health checks. Not creating health checks for " > > - "load balancer ${uuid2str(lb._uuid)}"] :- > > - LBVIP[lbvip@&LBVIP{.lb = lb}], > > - default_protocol(lb.protocol) == i"sctp", > > - Some{var health_check} = lbvip.health_check, > > - var lbvipbackend = FlatMap(lbvip.backends), > > - Some{var svc_monitor} = lbvipbackend.svc_monitor. > > - > > -/* > > - * BFD table. > > - */ > > - > > -/* > > - * BFD source port allocation. > > - * > > - * We need to assign a unique source port to each (logical_port, dst_ip) pair. > > - * RFC 5881 section 4 says: > > - * > > - * The source port MUST be in the range 49152 through 65535. > > - * The same UDP source port number MUST be used for all BFD > > - * Control packets associated with a particular session. > > - * The source port number SHOULD be unique among all BFD > > - * sessions on the system > > - */ > > -function bFD_UDP_SRC_PORT_MIN(): integer { 49152 } > > -function bFD_UDP_SRC_PORT_MAX(): integer { 65535 } > > - > > -// Get already assigned BFD source ports. 
> > -// If there's a conflict, make an arbitrary choice. > > -relation AssignedSrcPort( > > - logical_port: istring, > > - dst_ip: istring, > > - src_port: integer) > > -AssignedSrcPort(logical_port, dst_ip, src_port) :- > > - sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip, .src_port = src_port), > > - var pair = (logical_port, dst_ip).group_by(src_port).first(), > > - (var logical_port, var dst_ip) = pair. > > - > > -// All source ports already in use. > > -relation AllocatedSrcPorts0(src_ports: Set<integer>) > > -AllocatedSrcPorts0(src_ports) :- > > - AssignedSrcPort(.src_port = src_port), > > - var src_ports = src_port.group_by(()).to_set(). > > - > > -relation AllocatedSrcPorts(src_ports: Set<integer>) > > -AllocatedSrcPorts(src_ports) :- AllocatedSrcPorts0(src_ports). > > -AllocatedSrcPorts(set_empty()) :- Unit(), not AllocatedSrcPorts0(_). > > - > > -// (logical_port, dst_ip) pairs not yet in the Realized table > > -relation NotYetAllocatedSrcPorts(pairs: Vec<(istring, istring)>) > > -NotYetAllocatedSrcPorts(pairs) :- > > - nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip), > > - not AssignedSrcPort(logical_port, dst_ip, _), > > - var pairs = (logical_port, dst_ip).group_by(()).to_vec(). > > - > > -// Perform the allocation > > -relation SrcPortAllocation( > > - logical_port: istring, > > - dst_ip: istring, > > - src_port: integer) > > -SrcPortAllocation(logical_port, dst_ip, src_port) :- AssignedSrcPort(logical_port, dst_ip, src_port). > > -SrcPortAllocation(logical_port, dst_ip, src_port) :- > > - NotYetAllocatedSrcPorts(unallocated), > > - AllocatedSrcPorts(allocated), > > - var allocation = FlatMap(allocate(allocated, unallocated, > > - bFD_UDP_SRC_PORT_MIN(), bFD_UDP_SRC_PORT_MAX())), > > - ((var logical_port, var dst_ip), var src_port) = allocation. 
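For readers less familiar with DDlog: the allocation rules above pick, for each `(logical_port, dst_ip)` pair that has no BFD source port yet, a free port from the RFC 5881 range 49152-65535, avoiding ports already assigned in the southbound `BFD` table. A minimal Python sketch of that behavior (the function name and lowest-free-port strategy are illustrative assumptions, not the exact `allocate()` implementation):

```python
def allocate_src_ports(allocated, unallocated,
                       port_min=49152, port_max=65535):
    """Assign each not-yet-allocated (logical_port, dst_ip) pair a
    UDP source port from [port_min, port_max] that is not in use,
    keeping ports unique across BFD sessions (RFC 5881, section 4)."""
    used = set(allocated)     # ports already present in sb::BFD
    result = {}
    candidate = port_min
    for pair in unallocated:
        # Skip over ports that are already taken.
        while candidate in used and candidate <= port_max:
            candidate += 1
        if candidate > port_max:
            break             # pool exhausted; remaining pairs wait
        result[pair] = candidate
        used.add(candidate)
    return result
```

As in the DDlog rules, already-assigned pairs keep their existing port; only new pairs go through the allocator.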
> > - > > -relation SouthboundBFDStatus( > > - logical_port: istring, > > - dst_ip: istring, > > - status: Option<istring> > > -) > > -SouthboundBFDStatus(bfd.logical_port, bfd.dst_ip, Some{bfd.status}) :- bfd in sb::BFD(). > > -SouthboundBFDStatus(logical_port, dst_ip, None) :- > > - nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip), > > - not sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip). > > - > > -function bFD_DEF_MINTX(): integer { 1000 } // 1 second > > -function bFD_DEF_MINRX(): integer { 1000 } // 1 second > > -function bFD_DEF_DETECT_MULT(): integer { 5 } > > -sb::Out_BFD(._uuid = hash, > > - .src_port = src_port, > > - .disc = max(1, hash as u32) as integer, > > - .logical_port = nb.logical_port, > > - .dst_ip = nb.dst_ip, > > - .min_tx = nb.min_tx.unwrap_or(bFD_DEF_MINTX()), > > - .min_rx = nb.min_rx.unwrap_or(bFD_DEF_MINRX()), > > - .detect_mult = nb.detect_mult.unwrap_or(bFD_DEF_DETECT_MULT()), > > - .status = status, > > - .external_ids = map_empty(), > > - .options = [i"nb_status" -> nb.status.unwrap_or(i""), > > - i"sb_status" -> sb_status.unwrap_or(i""), > > - i"referenced" -> i"${referenced}"]) :- > > - nb in nb::BFD(), > > - SrcPortAllocation(nb.logical_port, nb.dst_ip, src_port), > > - SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status), > > - BFDReferenced(nb._uuid, referenced), > > - var status = bfd_new_status(referenced, nb.status, sb_status).1, > > - var hash = hash128((nb.logical_port, nb.dst_ip)). > > - > > -relation BFDReferenced0(bfd_uuid: uuid) > > -BFDReferenced0(bfd_uuid) :- > > - nb::Logical_Router_Static_Route(.bfd = Some{bfd_uuid}, .nexthop = nexthop), > > - nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop). > > - > > -relation BFDReferenced(bfd_uuid: uuid, referenced: bool) > > -BFDReferenced(bfd_uuid, true) :- BFDReferenced0(bfd_uuid). > > -BFDReferenced(bfd_uuid, false) :- > > - nb::BFD(._uuid = bfd_uuid), > > - not BFDReferenced0(bfd_uuid). 
> > - > > -// Given the following: > > -// - 'referenced': whether a BFD object is referenced by a route > > -// - 'nb_status0': 'status' in the existing nb::BFD record > > -// - 'sb_status0': 'status' in the existing sb::BFD record (None, if none exists yet) > > -// computes and returns (nb_status, sb_status), which are the values to use next in these records > > -function bfd_new_status(referenced: bool, > > - nb_status0: Option<istring>, > > - sb_status0: Option<istring>): (istring, istring) { > > - var nb_status = nb_status0.unwrap_or(i"admin_down"); > > - match (sb_status0) { > > - Some{sb_status} -> if (nb_status != i"admin_down" and sb_status != i"admin_down") { > > - nb_status = sb_status > > - }, > > - _ -> () > > - }; > > - var sb_status = nb_status; > > - if (referenced) { > > - if (nb_status == i"admin_down") { > > - nb_status = i"down" > > - } > > - } else { > > - nb_status = i"admin_down" > > - }; > > - warn("nb_status=${nb_status} sb_status=${sb_status} referenced=${referenced}"); > > - (nb_status, sb_status) > > -} > > -nb::Out_BFD(bfd_uuid, Some{status}) :- > > - nb in nb::BFD(._uuid = bfd_uuid), > > - BFDReferenced(bfd_uuid, referenced), > > - SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status), > > - var status = bfd_new_status(referenced, nb.status, sb_status).0. 
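The `bfd_new_status` function above is the core of the NB/SB status reconciliation, and its logic translates almost line for line into Python. A sketch under the same semantics (using Python `None`/strings in place of `Option<istring>`, and dropping the debug `warn()` call):

```python
def bfd_new_status(referenced, nb_status0, sb_status0):
    """Compute the next (nb_status, sb_status) pair from whether the
    BFD entry is referenced by a route and the current NB/SB statuses."""
    # Start from the NB status, defaulting to "admin_down".
    nb_status = nb_status0 if nb_status0 is not None else "admin_down"
    # Unless either side is administratively down, the SB (runtime)
    # status wins.
    if sb_status0 is not None:
        if nb_status != "admin_down" and sb_status0 != "admin_down":
            nb_status = sb_status0
    sb_status = nb_status
    if referenced:
        # A referenced session must not sit in admin_down; start it
        # in the "down" state instead.
        if nb_status == "admin_down":
            nb_status = "down"
    else:
        # Unreferenced sessions are administratively disabled.
        nb_status = "admin_down"
    return nb_status, sb_status
```

For example, a freshly referenced entry with no prior status comes up as `("down", "admin_down")`, while an unreferenced one is forced back to `admin_down` on the NB side regardless of runtime state.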
> > - > > -/* > > - * Logical router BFD flows > > - */ > > - > > -function lrouter_bfd_flows(lr_uuid: uuid, > > - lrp_uuid: uuid, > > - ipX: string, > > - networks: string, > > - controller_meter: Option<istring>) > > - : (Flow, Flow) > > -{ > > - (Flow{.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 110, > > - .__match = i"${ipX}.src == ${networks} && udp.dst == 3784", > > - .actions = i"next; ", > > - .stage_hint = stage_hint(lrp_uuid), > > - .io_port = None, > > - .controller_meter = None}, > > - Flow{.logical_datapath = lr_uuid, > > - .stage = s_ROUTER_IN_IP_INPUT(), > > - .priority = 110, > > - .__match = i"${ipX}.dst == ${networks} && udp.dst == 3784", > > - .actions = i"handle_bfd_msg(); ", > > - .io_port = None, > > - .controller_meter = controller_meter, > > - .stage_hint = stage_hint(lrp_uuid)}) > > -} > > -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp, .has_bfd = true)) { > > - var controller_meter = router.copp.get(cOPP_BFD()) in { > > - if (not networks.ipv4_addrs.is_empty()) { > > - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip4", > > - format_v4_networks(networks, false), > > - controller_meter) in { > > - Flow[a]; > > - Flow[b] > > - } > > - }; > > - > > - if (not networks.ipv6_addrs.is_empty()) { > > - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip6", > > - format_v6_networks(networks), > > - controller_meter) in { > > - Flow[a]; > > - Flow[b] > > - } > > - } > > - } > > -} > > - > > -/* Clean up stale FDB entries. */ > > -sb::Out_FDB(_uuid, mac, dp_key, port_key) :- > > - sb::FDB(_uuid, mac, dp_key, port_key), > > - sb::Out_Datapath_Binding(._uuid = dp_uuid, .tunnel_key = dp_key), > > - sb::Out_Port_Binding(.datapath = dp_uuid, .tunnel_key = port_key). 
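To make the flow pair built by `lrouter_bfd_flows` concrete: per router port it installs one priority-110 flow that lets BFD replies sourced from the port's networks pass (`next;`) and one that hands BFD control packets destined to those networks to `handle_bfd_msg()`. Both key on UDP destination port 3784, the single-hop BFD control port. A small Python sketch of just the two match strings (the helper name is illustrative):

```python
def bfd_match_pair(ip_ver, networks):
    """Build the two logical-flow matches installed per router port
    with BFD enabled: one for traffic sourced from the port's
    networks, one for traffic destined to them. UDP destination
    port 3784 is the single-hop BFD control port (RFC 5881)."""
    return (f"{ip_ver}.src == {networks} && udp.dst == 3784",
            f"{ip_ver}.dst == {networks} && udp.dst == 3784")
```

The caller emits the pair once with `ip4` and the port's IPv4 networks and once with `ip6`, skipping whichever address family the port has no addresses in.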
> > diff --git a/northd/ovsdb2ddlog2c b/northd/ovsdb2ddlog2c > > deleted file mode 100755 > > index fa994c99e5..0000000000 > > --- a/northd/ovsdb2ddlog2c > > +++ /dev/null > > @@ -1,131 +0,0 @@ > > -#!/usr/bin/env python3 > > -# Copyright (c) 2020 Nicira, Inc. > > -# > > -# Licensed under the Apache License, Version 2.0 (the "License"); > > -# you may not use this file except in compliance with the License. > > -# You may obtain a copy of the License at: > > -# > > -# http://www.apache.org/licenses/LICENSE-2.0 > > -# > > -# Unless required by applicable law or agreed to in writing, software > > -# distributed under the License is distributed on an "AS IS" BASIS, > > -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > > -# See the License for the specific language governing permissions and > > -# limitations under the License. > > - > > -import getopt > > -import sys > > - > > -import ovs.json > > -import ovs.db.error > > -import ovs.db.schema > > - > > -argv0 = sys.argv[0] > > - > > -def usage(): > > - print("""\ > > -%(argv0)s: ovsdb schema compiler for northd > > -usage: %(argv0)s [OPTIONS] > > - > > -The following option must be specified: > > - -p, --prefix=PREFIX Prefix for declarations in output. > > - > > -The following ovsdb2ddlog options are supported: > > - -f, --schema-file=FILE OVSDB schema file. > > - -o, --output-table=TABLE Mark TABLE as output. > > - --output-only-table=TABLE Mark TABLE as output-only. DDlog will send updates to this table directly to OVSDB without comparing it with current OVSDB state. > > - --ro=TABLE.COLUMN Ignored. > > - --rw=TABLE.COLUMN Ignored. > > - --intern-table=TABLE Ignored. > > - --intern-strings Ignored. > > - --output-file=FILE.inc Write output to FILE.inc. If this option is not specified, output will be written to stdout. 
> > - > > -The following options are also available: > > - -h, --help display this help message > > - -V, --version display version information\ > > -""" % {'argv0': argv0}) > > - sys.exit(0) > > - > > -if __name__ == "__main__": > > - try: > > - try: > > - options, args = getopt.gnu_getopt(sys.argv[1:], 'p:f:o:hV', > > - ['prefix=', > > - 'schema-file=', > > - 'output-table=', > > - 'output-only-table=', > > - 'intern-table=', > > - 'ro=', > > - 'rw=', > > - 'output-file=', > > - 'intern-strings']) > > - except getopt.GetoptError as geo: > > - sys.stderr.write("%s: %s\n" % (argv0, geo.msg)) > > - sys.exit(1) > > - > > - prefix = None > > - schema_file = None > > - output_tables = set() > > - output_only_tables = set() > > - output_file = None > > - for key, value in options: > > - if key in ['-h', '--help']: > > - usage() > > - elif key in ['-V', '--version']: > > - print("ovsdb2ddlog2c (OVN) @VERSION@") > > - elif key in ['-p', '--prefix']: > > - prefix = value > > - elif key in ['-f', '--schema-file']: > > - schema_file = value > > - elif key in ['-o', '--output-table']: > > - output_tables.add(value) > > - elif key == '--output-only-table': > > - output_only_tables.add(value) > > - elif key in ['--ro', '--rw', '--intern-table', '--intern-strings']: > > - pass > > - elif key == '--output-file': > > - output_file = value > > - else: > > - assert False > > - > > - if schema_file is None: > > - sys.stderr.write("%s: missing -f or --schema-file option\n" % argv0) > > - sys.exit(1) > > - if prefix is None: > > - sys.stderr.write("%s: missing -p or --prefix option\n" % argv0) > > - sys.exit(1) > > - if not output_tables.isdisjoint(output_only_tables): > > - example = next(iter(output_tables.intersect(output_only_tables))) > > - sys.stderr.write("%s: %s may not be both an output table and " > > - "an output-only table\n" % (argv0, example)) > > - sys.exit(1) > > - > > - schema = ovs.db.schema.DbSchema.from_json(ovs.json.from_file( > > - schema_file)) > > - > > - 
all_tables = set(schema.tables.keys()) > > - missing_tables = (output_tables | output_only_tables) - all_tables > > - if missing_tables: > > - sys.stderr.write("%s: %s is not the name of a table\n" > > - % (argv0, next(iter(missing_tables)))) > > - sys.exit(1) > > - > > - f = sys.stdout if output_file is None else open(output_file, "w") > > - for name, tables in ( > > - ("input_relations", all_tables - output_only_tables), > > - ("output_relations", output_tables), > > - ("output_only_relations", output_only_tables)): > > - f.write("static const char *%s%s[] = {\n" % (prefix, name)) > > - for table in sorted(tables): > > - f.write(" \"%s\",\n" % table) > > - f.write(" NULL,\n") > > - f.write("};\n\n") > > - if schema_file is not None: > > - f.close() > > - except ovs.db.error.Error as e: > > - sys.stderr.write("%s: %s\n" % (argv0, e)) > > - sys.exit(1) > > - > > -# Local variables: > > -# mode: python > > -# End: > > diff --git a/tests/ovn-macros.at b/tests/ovn-macros.at > > index 1b693a22c3..a30b626ef1 100644 > > --- a/tests/ovn-macros.at > > +++ b/tests/ovn-macros.at > > @@ -47,11 +47,11 @@ m4_define([OVN_CLEANUP],[ > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > if test -d northd-backup; then > > as northd-backup > > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > fi > > > > OVN_CLEANUP_VSWITCH([main]) > > @@ -71,11 +71,11 @@ m4_define([OVN_CLEANUP_AZ],[ > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as $1/northd > > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > if test -d $1/northd-backup; then > > as $1/northd-backup > > - OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]]) > > + OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > fi > > > > as $1/ic > > @@ -165,11 +165,6 @@ ovn_start_northd() { > > backup) suffix=-backup ;; > > esac > > > > - case ${NORTHD_TYPE:=ovn-northd} in > > - 
ovn-northd) ;; > > - ovn-northd-ddlog) northd_args="$northd_args --ddlog-record=${AZ:+$AZ/}northd$suffix/replay.dat -v" ;; > > - esac > > - > > if test X$NORTHD_USE_PARALLELIZATION = Xyes; then > > northd_args="$northd_args --n-threads=4" > > fi > > @@ -177,7 +172,7 @@ ovn_start_northd() { > > local name=${d_prefix}northd${suffix} > > echo "${prefix}starting $name" > > test -d "$ovs_base/$name" || mkdir "$ovs_base/$name" > > - as $name start_daemon $NORTHD_TYPE $northd_args -vjsonrpc \ > > + as $name start_daemon ovn-northd $northd_args -vjsonrpc \ > > --ovnnb-db=$OVN_NB_DB --ovnsb-db=$OVN_SB_DB > > } > > > > @@ -219,10 +214,7 @@ ovn_start () { > > fi > > > > if test X$HAVE_OPENSSL = Xyes; then > > - # Create the SB DB pssl+RBAC connection. Ideally we could pre-create > > - # SB_Global and Connection with ovsdb-tool transact at DB creation > > - # time, but unfortunately that does not work, northd-ddlog will replace > > - # the SB_Global record on startup. > > + # Create the SB DB pssl+RBAC connection. > > ovn-sbctl \ > > -- --id=@c create connection \ > > target=\"pssl:0:127.0.0.1\" role=ovn-controller \ > > @@ -945,26 +937,23 @@ m4_define([OVN_POPULATE_ARP], [AT_CHECK(ovn_populate_arp__, [0], [ignore])]) > > # Defines versions of the test with all combinations of northd, > > # parallelization enabled and conditional monitoring on/off. > > m4_define([OVN_FOR_EACH_NORTHD], > > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], > > - [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1 > > -])])])]) > > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], > > + [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1 > > +])])]) > > > > # Defines versions of the test with all combinations of northd and > > # parallelization enabled. To be used when the ovn-controller configuration > > # is not relevant. 
> > m4_define([OVN_FOR_EACH_NORTHD_NO_HV], > > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1 > > -])])]) > > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1 > > +])]) > > > > # Defines versions of the test with all combinations of northd and > > # parallelization on/off. To be used when the ovn-controller configuration > > # is not relevant and we want to test parallelization permutations. > > m4_define([OVN_FOR_EACH_NORTHD_NO_HV_PARALLELIZATION], > > - [m4_foreach([NORTHD_TYPE], [ovn-northd], > > - [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1 > > -])])]) > > + [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1 > > +])]) > > > > # OVN_NBCTL(NBCTL_COMMAND) adds NBCTL_COMMAND to list of commands to be run by RUN_OVN_NBCTL(). > > m4_define([OVN_NBCTL], [ > > diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at > > index e7910e83c6..3c2f93567a 100644 > > --- a/tests/ovn-northd.at > > +++ b/tests/ovn-northd.at > > @@ -29,7 +29,7 @@ m4_define([CHECK_NO_CHANGE_AFTER_RECOMPUTE], [ > > wait_for_ports_up > > fi > > _DUMP_DB_TABLES(before) > > - check as northd ovn-appctl -t NORTHD_TYPE inc-engine/recompute > > + check as northd ovn-appctl -t ovn-northd inc-engine/recompute > > check ovn-nbctl --wait=sb sync > > _DUMP_DB_TABLES(after) > > AT_CHECK([as northd diff before after], [0], [dnl > > @@ -357,7 +357,7 @@ ovn_init_db ovn-nb; ovn-nbctl init > > > > # test unixctl option > > mkdir "$ovs_base"/northd > > -as northd start_daemon NORTHD_TYPE --unixctl="$ovs_base"/northd/NORTHD_TYPE[].ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > +as northd start_daemon ovn-northd --unixctl="$ovs_base"/northd/ovn-northd.ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > ovn-nbctl ls-add sw > > ovn-nbctl --wait=sb lsp-add sw p1 > > # northd created with unixctl option successfully created port_binding entry 
> > @@ -365,7 +365,7 @@ check_row_count Port_Binding 1 logical_port=p1 > > AT_CHECK([ovn-nbctl --wait=sb lsp-del p1]) > > > > # ovs-appctl exit with unixctl option > > -OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/]NORTHD_TYPE[.ctl], ["$ovs_base"/northd/]NORTHD_TYPE[.pid]) > > +OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/ovn-northd.ctl], ["$ovs_base"/northd/ovn-northd.pid]) > > > > # Check no port_binding entry for new port as ovn-northd is not running > > # > > @@ -376,7 +376,7 @@ AT_CHECK([ovn-nbctl --timeout=10 --wait=sb sync], [142], [], [ignore]) > > check_row_count Port_Binding 0 logical_port=p2 > > > > # test default unixctl path > > -as northd start_daemon NORTHD_TYPE --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > +as northd start_daemon ovn-northd --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > ovn-nbctl --wait=sb lsp-add sw p3 > > # northd created with default unixctl path successfully created port_binding entry > > check_row_count Port_Binding 1 logical_port=p3 > > @@ -386,7 +386,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > AT_CLEANUP > > ]) > > @@ -737,7 +737,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > AT_CLEANUP > > ]) > > @@ -750,10 +750,10 @@ AT_SETUP([ovn-northd pause and resume]) > > ovn_start --backup-northd=paused > > > > get_northd_status() { > > - as northd ovn-appctl -t NORTHD_TYPE is-paused > > - as northd ovn-appctl -t NORTHD_TYPE status > > - as northd-backup ovn-appctl -t NORTHD_TYPE is-paused > > - as northd-backup ovn-appctl -t NORTHD_TYPE status > > + as northd ovn-appctl -t ovn-northd is-paused > > + as 
northd ovn-appctl -t ovn-northd status > > + as northd-backup ovn-appctl -t ovn-northd is-paused > > + as northd-backup ovn-appctl -t ovn-northd status > > } > > > > AS_BOX([Check that the backup is paused]) > > @@ -764,7 +764,7 @@ Status: paused > > ]) > > > > AS_BOX([Resume the backup]) > > -check as northd-backup ovs-appctl -t NORTHD_TYPE resume > > +check as northd-backup ovs-appctl -t ovn-northd resume > > OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false > > Status: active > > false > > @@ -781,8 +781,8 @@ check ovn-nbctl --wait=sb ls-del sw0 > > check_row_count Datapath_Binding 0 > > > > AS_BOX([Pause the main northd]) > > -check as northd ovs-appctl -t NORTHD_TYPE pause > > -check as northd-backup ovs-appctl -t NORTHD_TYPE pause > > +check as northd ovs-appctl -t ovn-northd pause > > +check as northd-backup ovs-appctl -t ovn-northd pause > > AT_CHECK([get_northd_status], [0], [true > > Status: paused > > true > > @@ -798,7 +798,7 @@ check_row_count Datapath_Binding 0 > > # Do not resume both main and backup right after each other > > # as there would be no guarentee of which one would become active > > AS_BOX([Resume the main northd]) > > -check as northd ovs-appctl -t NORTHD_TYPE resume > > +check as northd ovs-appctl -t ovn-northd resume > > OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false > > Status: active > > true > > @@ -806,7 +806,7 @@ Status: paused > > ]) > > > > AS_BOX([Resume the backup northd]) > > -check as northd-backup ovs-appctl -t NORTHD_TYPE resume > > +check as northd-backup ovs-appctl -t ovn-northd resume > > OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false > > Status: active > > false > > @@ -831,7 +831,7 @@ check_row_count Datapath_Binding 1 > > > > # Kill northd. > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > # With ovn-northd gone, changes to nbdb won't be reflected into sbdb. > > # Make sure. 
> > @@ -1268,7 +1268,7 @@ AT_CLEANUP > > > > OVN_FOR_EACH_NORTHD_NO_HV([ > > AT_SETUP([check Load balancer health check and Service Monitor sync]) > > -ovn_start NORTHD_TYPE > > +ovn_start ovn-northd > > check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80,20.0.0.3:80 > > > > check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1 > > @@ -4624,7 +4624,7 @@ AT_SKIP_IF([expr "$PKIDIR" : ".*[[ '\" > > ovn_start > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as ovn-sb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > @@ -4652,7 +4652,7 @@ cp $PKIDIR/$key2 $key > > cp $PKIDIR/$cert3 $cert > > cp $PKIDIR/$cacert $cacert > > as northd > > -start_daemon ovn$NORTHD_TYPE -vjsonrpc \ > > +start_daemon ovn-northd -vjsonrpc \ > > --ovnnb-db=$OVN_NB_DB --ovnsb-db=ssl:127.0.0.1:$TCP_PORT \ > > -p $key -c $cert -C $cacert > > > > @@ -4664,7 +4664,7 @@ cp $PKIDIR/$key $key > > cp $PKIDIR/$cert $cert > > check ovn-nbctl --wait=sb sync > > > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > AT_CLEANUP > > ]) > > > > @@ -6506,7 +6506,7 @@ AT_CLEANUP > > OVN_FOR_EACH_NORTHD_NO_HV([ > > AT_SETUP([ovn-northd -- lrp with chassis-redirect and ls with vtep lport]) > > AT_KEYWORDS([multiple-l3dgw-ports]) > > -ovn_start NORTHD_TYPE > > +ovn_start > > check ovn-sbctl chassis-add ch1 geneve 127.0.0.2 > > > > check ovn-nbctl lr-add lr1 > > @@ -6601,7 +6601,7 @@ AT_CLEANUP > > > > OVN_FOR_EACH_NORTHD_NO_HV([ > > AT_SETUP([check options:requested-chassis fills requested_chassis col]) > > -ovn_start NORTHD_TYPE > > +ovn_start > > > > # Add chassis ch1. 
> > check ovn-sbctl chassis-add ch1 geneve 127.0.0.2 > > @@ -8190,39 +8190,39 @@ OVN_FOR_EACH_NORTHD_NO_HV([ > > AT_SETUP([northd-parallelization unixctl]) > > ovn_start > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 > > -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 > > +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1 > > ]) > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 > > -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [4 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 > > +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [4 > > ]) > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 > > -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 > > +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1 > > ]) > > > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 0], [2], [], > > +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 0], [2], [], > > [invalid n_threads: 0 > > ovn-appctl: ovn-northd: server returned an error > > ]) > > > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads -1], [2], [], > > +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads -1], [2], [], > > [invalid n_threads: -1 > > ovn-appctl: ovn-northd: server returned an error > > ]) > > > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 300], [2], [], > > +AT_CHECK([as northd ovn-appctl -t ovn-northd 
parallel-build/set-n-threads 300], [2], [], > > [invalid n_threads: 300 > > ovn-appctl: ovn-northd: server returned an error > > ]) > > > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads], [2], [], > > +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads], [2], [], > > ["parallel-build/set-n-threads" command requires at least 1 arguments > > ovn-appctl: ovn-northd: server returned an error > > ]) > > > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 2], [2], [], > > +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 2], [2], [], > > ["parallel-build/set-n-threads" command takes at most 1 arguments > > ovn-appctl: ovn-northd: server returned an error > > ]) > > @@ -8262,16 +8262,16 @@ check ovn-nbctl lrp-add lr1 lrp0 "f0:00:00:01:00:01" 10.1.255.254/16 > > check ovn-nbctl lr-nat-add lr1 snat 10.2.0.1 10.1.0.0/16 > > add_switch_ports 1 50 > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 > > add_switch_ports 51 100 > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 8 > > add_switch_ports 101 150 > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 > > add_switch_ports 151 200 > > > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 > > +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 > > add_switch_ports 201 250 > > check ovn-nbctl --wait=sb sync > > > > @@ -8288,7 +8288,7 @@ ovn-sbctl dump-flows | DUMP_FLOWS_SORTED > flows2 > > AT_CHECK([diff flows1 flows2]) > > > > # Restart with with 8 threads > > -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8 > > +check as northd 
ovn-appctl -t ovn-northd parallel-build/set-n-threads 8 > > delete_switch_ports 1 250 > > add_switch_ports 1 250 > > check ovn-nbctl --wait=sb sync > > @@ -8973,22 +8973,22 @@ p2_uuid=$(fetch_column nb:Logical_Switch_Port _uuid name=sw0-p2) > > echo "p1 uuid - $p1_uuid" > > ovn-nbctl --wait=sb sync > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > foo_as_uuid=$(ovn-nbctl create address_set name=foo addresses=\"1.1.1.1\",\"1.1.1.2\") > > wait_column '1.1.1.1 1.1.1.2' Address_Set addresses name=foo > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1 > > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1 > > ]) > > > > rm -f northd/ovn-northd.log > > -check as northd ovn-appctl -t NORTHD_TYPE vlog/reopen > > -check as northd ovn-appctl -t NORTHD_TYPE vlog/set jsonrpc:dbg > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd vlog/reopen > > +check as northd ovn-appctl -t ovn-northd vlog/set jsonrpc:dbg > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.3 -- \ > > add address_set $foo_as_uuid addresses 1.1.2.1/4 > > wait_column '1.1.1.1 1.1.1.2 1.1.1.3 1.1.2.1/4' Address_Set addresses name=foo > > > > # There should be no recompute of the sync_to_sb_addr_set engine node . 
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 > > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 > > ]) > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > @@ -8996,14 +8996,14 @@ AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \ > > grep -c mutate], [0], [1 > > ]) > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.4 -- \ > > remove address_set $foo_as_uuid addresses 1.1.1.1 -- \ > > remove address_set $foo_as_uuid addresses 1.1.2.1/4 > > wait_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo > > > > # There should be no recompute of the sync_to_sb_addr_set engine node . > > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 > > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 > > ]) > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > @@ -9013,9 +9013,9 @@ grep -c mutate], [0], [2 > > > > # Pause ovn-northd and add/remove few addresses. when it is resumed > > # it should use mutate for updating the address sets. 
> > -check as northd ovn-appctl -t NORTHD_TYPE pause
> > +check as northd ovn-appctl -t ovn-northd pause
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.5
> > check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.6
> > check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2
> > @@ -9023,10 +9023,10 @@ check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2
> > check_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo
> >
> > # Resume northd now
> > -check as northd ovn-appctl -t NORTHD_TYPE resume
> > +check as northd ovn-appctl -t ovn-northd resume
> > wait_column '1.1.1.3 1.1.1.4 1.1.1.5 1.1.1.6' Address_Set addresses name=foo
> > # There should be recompute of the sync_to_sb_addr_set engine node .
> > -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute)
> > +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute)
> > AT_CHECK([test $recompute_stat -ge 1])
> >
> > AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \
> > @@ -9034,25 +9034,25 @@ grep -c mutate], [0], [3
> > ])
> >
> > # Create a port group. This should result in recompute of sb_to_sync_addr_set engine node.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl pg-add pg1
> > wait_column '' Address_Set addresses name=pg1_ip4
> > -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute)
> > +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute)
> > AT_CHECK([test $recompute_stat -ge 1])
> >
> > # Add sw0-p1 to port group pg1
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl add port_group pg1 ports ${p1_uuid}
> > wait_column '20.0.0.4' Address_Set addresses name=pg1_ip4
> >
> > # There should be no recompute of the sync_to_sb_addr_set engine node.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> > ])
> >
> > # No change, no recompute
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb sync
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0
> > ])
> >
> > AT_CLEANUP
> > @@ -9091,18 +9091,18 @@ $4
> > }
> >
> > AS_BOX([Create new PG1 and PG2])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb -- pg-add pg1 -- pg-add pg2
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > - abort: 0
> > ])
> > dnl The port_group node recomputes every time a NB port group is added/deleted.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9110,7 +9110,7 @@ Node: port_group
> > ])
> > dnl The port_group node is an input for the lflow node. Port_group
> > dnl recompute/compute triggers lflow recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9125,7 +9125,7 @@ check ovn-nbctl --wait=sb \
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Add one port from the two switches to PG1])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg1 sw1.1 sw2.1
> > check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> > @@ -9133,7 +9133,7 @@ check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1"
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9141,7 +9141,7 @@ Node: northd
> > ])
> > dnl The port_group node recomputes also every time a port from a new switch
> > dnl is added to the group.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9149,7 +9149,7 @@ Node: port_group
> > ])
> > dnl The port_group node is an input for the lflow node. Port_group
> > dnl recompute/compute triggers lflow recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9160,7 +9160,7 @@ check_acl_lflows 1 0 1 0
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Add one port from the two switches to PG2])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg2 sw1.2 sw2.2
> > check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> > @@ -9170,7 +9170,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9178,7 +9178,7 @@ Node: northd
> > ])
> > dnl The port_group node recomputes also every time a port from a new switch
> > dnl is added to the group.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9186,7 +9186,7 @@ Node: port_group
> > ])
> > dnl The port_group node is an input for the lflow node. Port_group
> > dnl recompute/compute triggers lflow recompute (for ACLs).
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9197,7 +9197,7 @@ check_acl_lflows 1 1 1 1
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Add one more port from the two switches to PG1 and PG2])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg1 sw1.1 sw2.1 sw1.3 sw2.3 \
> > -- pg-set-ports pg2 sw1.2 sw2.2 sw1.3 sw2.3
> > @@ -9208,7 +9208,7 @@ check_column "sw2.2 sw2.3" sb:Port_Group ports name="${sw2_key}_pg2"
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9216,7 +9216,7 @@ Node: northd
> > ])
> > dnl We did not change the set of switches a pg is applied to, there should be
> > dnl no recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 0
> > - compute: 1
> > @@ -9224,7 +9224,7 @@ Node: port_group
> > ])
> > dnl We did not change the set of switches a pg is applied to, there should be
> > dnl no recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 0
> > - compute: 1
> > @@ -9235,7 +9235,7 @@ check_acl_lflows 1 1 1 1
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Remove the last port from PG1 and PG2])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg1 sw1.1 sw2.1 \
> > -- pg-set-ports pg2 sw1.2 sw2.2
> > @@ -9246,7 +9246,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9254,7 +9254,7 @@ Node: northd
> > ])
> > dnl We did not change the set of switches a pg is applied to, there should be
> > dnl no recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 0
> > - compute: 1
> > @@ -9262,7 +9262,7 @@ Node: port_group
> > ])
> > dnl We did not change the set of switches a pg is applied to, there should be
> > dnl no recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 0
> > - compute: 1
> > @@ -9273,7 +9273,7 @@ check_acl_lflows 1 1 1 1
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Remove the second port from PG2])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb pg-set-ports pg2 sw1.2
> > check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> > check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1"
> > @@ -9283,7 +9283,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9291,7 +9291,7 @@ Node: northd
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be
> > dnl a recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9299,7 +9299,7 @@ Node: port_group
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be
> > dnl a recompute (for ACLs).
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9310,7 +9310,7 @@ check_acl_lflows 1 1 1 0
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Remove the second port from PG1])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb pg-set-ports pg1 sw1.1
> > check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1"
> > AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg1"], [0], [
> > @@ -9321,7 +9321,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9329,7 +9329,7 @@ Node: northd
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be
> > dnl a recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9337,7 +9337,7 @@ Node: port_group
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be
> > dnl a recompute (for ACLs).
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9348,7 +9348,7 @@ check_acl_lflows 1 1 0 0
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Add second port to both PGs])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg1 sw1.1 sw2.1 \
> > -- pg-set-ports pg2 sw1.2 sw2.2
> > @@ -9359,7 +9359,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2"
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9367,7 +9367,7 @@ Node: northd
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be a
> > dnl recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9375,7 +9375,7 @@ Node: port_group
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be a
> > dnl recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9386,7 +9386,7 @@ check_acl_lflows 1 1 1 1
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > AS_BOX([Remove second port from both PGs])
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb \
> > -- pg-set-ports pg1 sw1.1 \
> > -- pg-set-ports pg2 sw1.2
> > @@ -9399,7 +9399,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [
> >
> > dnl The northd node should not recompute, it should handle nb_global update
> > dnl though, therefore "compute: 1".
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl
> > Node: northd
> > - recompute: 0
> > - compute: 1
> > @@ -9407,7 +9407,7 @@ Node: northd
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be a
> > dnl recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl
> > Node: port_group
> > - recompute: 1
> > - compute: 0
> > @@ -9415,7 +9415,7 @@ Node: port_group
> > ])
> > dnl We changed the set of switches a pg is applied to, there should be a
> > dnl recompute.
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
> > Node: lflow
> > - recompute: 1
> > - compute: 0
> > @@ -9724,7 +9724,7 @@ check ovn-sbctl chassis-add hv1 geneve 127.0.0.1 \
> >
> > check ovn-nbctl --wait=sb sync
> >
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
> > ct_no_masked_label: true
> > ct_lb_related: true
> > mac_binding_timestamp: true
> > @@ -9739,7 +9739,7 @@ check ovn-sbctl chassis-add hv2 geneve 127.0.0.2 \
> >
> > check ovn-nbctl --wait=sb sync
> >
> > -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
> > +AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
> > ct_no_masked_label: true
> > ct_lb_related: true
> > mac_binding_timestamp: true
> > @@ -10012,14 +10012,14 @@ check ovn-nbctl ls-add sw
> > check ovn-nbctl lsp-add sw p1
> >
> > check ovn-nbctl --wait=sb sync
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> >
> > check ovn-nbctl --wait=sb lsp-set-options p1 foo=bar
> > -sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
> > +sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
> > AT_CHECK([test x$sb_lb_recomp = x0])
> >
> > check ovn-nbctl --wait=sb lsp-set-type p1 external
> > -sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
> > +sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
> > AT_CHECK([test x$sb_lb_recomp != x0])
> >
> > AT_CLEANUP
> > @@ -10043,13 +10043,13 @@ check_recompute_counter() {
> > sync_sb_pb_recomp_min=$5
> > sync_sb_pb_recomp_max=$6
> >
> > - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> > + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> > AT_CHECK([test $northd_recomp -ge $northd_recomp_min && test $northd_recomp -le $northd_recomp_max])
> >
> > - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> > + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> > AT_CHECK([test $lflow_recomp -ge $lflow_recomp_min && test $lflow_recomp -le $lflow_recomp_max])
> >
> > - sync_sb_pb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_pb recompute)
> > + sync_sb_pb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_pb recompute)
> > AT_CHECK([test $sync_sb_pb_recomp -ge $sync_sb_pb_recomp_min && test $sync_sb_pb_recomp -le $sync_sb_pb_recomp_max])
> > }
> >
> > @@ -10062,32 +10062,32 @@ ovs-vsctl add-port br-int lsp-pilot -- set interface lsp-pilot external_ids:ifac
> > wait_for_ports_up
> > check ovn-nbctl --wait=hv sync
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-add ls0 lsp0-0 -- lsp-set-addresses lsp0-0 "unknown"
> > ovs-vsctl add-port br-int lsp0-0 -- set interface lsp0-0 external_ids:iface-id=lsp0-0
> > wait_for_ports_up
> > check ovn-nbctl --wait=hv sync
> > check_recompute_counter 4 5 5 5 5 5
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
> > ovs-vsctl add-port br-int lsp0-1 -- set interface lsp0-1 external_ids:iface-id=lsp0-1
> > wait_for_ports_up
> > check ovn-nbctl --wait=hv sync
> > check_recompute_counter 0 0 0 0 0 0
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-add ls0 lsp0-2 -- lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:02 192.168.0.12"
> > ovs-vsctl add-port br-int lsp0-2 -- set interface lsp0-2 external_ids:iface-id=lsp0-2
> > wait_for_ports_up
> > check ovn-nbctl --wait=hv sync
> > check_recompute_counter 0 0 0 0 0 0
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-del lsp0-1
> > check_recompute_counter 0 0 0 0 0 0
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:88 192.168.0.88"
> > check_recompute_counter 0 0 0 0 0 0
> >
> > @@ -10106,7 +10106,7 @@ done
> > check_recompute_counter 0 0 0 0 0 0
> >
> > # No change, no recompute
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb sync
> > check_recompute_counter 0 0 0 0 0 0
> >
> > @@ -10118,7 +10118,7 @@ ovn-nbctl dhcp-options-create 192.168.0.0/24
> > CIDR_UUID=$(ovn-nbctl --bare --columns=_uuid find dhcp_options cidr="192.168.0.0/24")
> > ovn-nbctl dhcp-options-set-options $CIDR_UUID lease_time=3600 router=192.168.0.1 server_id=192.168.0.1 server_mac=c0:ff:ee:00:00:01 hostname="\"foo\""
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > ovn-nbctl --wait=sb lsp-set-dhcpv4-options lsp0-2 $CIDR_UUID
> > check_recompute_counter 0 0 0 0 0 0
> >
> > @@ -10129,7 +10129,7 @@ check ovn-nbctl lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:01 192.168.0.11 aef0::4
> > d1="$(ovn-nbctl create DHCP_Options cidr="aef0\:\:/64" \
> > options="\"server_id\"=\"00:00:00:10:00:01\"")"
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > ovn-nbctl --wait=sb lsp-set-dhcpv6-options lsp0-2 ${d1}
> > check_recompute_counter 0 0 0 0 0 0
> >
> > @@ -10146,10 +10146,10 @@ AT_SETUP([LSP incremental processing with only router ports before and after add
> > ovn_start
> >
> > check_recompute_counter() {
> > - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> > + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> > AT_CHECK([test x$northd_recomp = x$1])
> >
> > - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> > + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> > AT_CHECK([test x$lflow_recomp = x$2])
> > }
> >
> > @@ -10166,7 +10166,7 @@ ovn-nbctl lb-add lb0 192.168.0.10:80 10.0.0.10:8080
> > check ovn-nbctl --wait=sb ls-lb-add ls0 lb0
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > # Add a lsp. northd and lflow engine shouldn't recompute even though this is
> > # the first lsp added after the router ports.
> > check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
> > @@ -10175,7 +10175,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Delete the lsp. northd and lflow engine shouldn't recompute even though
> > # the logical switch is now left with only router ports.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=hv lsp-del lsp0-1
> > check_recompute_counter 0 0
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > @@ -10213,7 +10213,7 @@ wait_row_count port_binding $(($n + 1))
> >
> > # Delete multiple ports, and one of them not incrementally processible. This is
> > # to trigger partial I-P and then fall back to recompute.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > args="--wait=hv lsp-del lsp0-foo"
> > for i in $(seq $n); do
> > args="$args -- lsp-del lsp0-$i"
> > @@ -10221,7 +10221,7 @@ done
> > check ovn-nbctl $args
> >
> > wait_row_count Port_Binding 0
> > -northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> > +northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> > echo northd_recomp $northd_recomp
> > AT_CHECK([test $northd_recomp -ge 1])
> >
> > @@ -10234,26 +10234,26 @@ AT_SETUP([ACL/Meter incremental processing - no northd recompute])
> > ovn_start
> >
> > check_recompute_counter() {
> > - northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
> > + northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
> > AT_CHECK([test x$northd_recomp = x$1])
> >
> > - lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
> > + lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
> > AT_CHECK([test x$lflow_recomp = x$2])
> >
> > - sync_meters_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_meters recompute)
> > + sync_meters_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_meters recompute)
> > AT_CHECK([test x$sync_meters_recomp = x$3])
> > }
> >
> > check ovn-nbctl --wait=sb ls-add ls
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb meter-add m drop 1 pktps
> > check ovn-nbctl --wait=sb acl-add ls from-lport 1 1 allow
> > dnl Only triggers recompute of the sync_meters and lflow nodes.
> > check_recompute_counter 0 2 2
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb meter-del m
> > check ovn-nbctl --wait=sb acl-del ls
> > dnl Only triggers recompute of the sync_meters and lflow nodes.
> > @@ -10325,7 +10325,7 @@ check ovn-sbctl chassis-add local-ch0 geneve 127.0.0.2
> > wait_row_count Chassis 2
> >
> > remote_chassis_uuid=$(fetch_column Chassis _uuid name=remote-ch0)
> > -as northd ovn-appctl -t NORTHD_TYPE vlog/set dbg
> > +as northd ovn-appctl -t ovn-northd vlog/set dbg
> >
> > check ovn-nbctl ls-add sw0
> > check ovn-nbctl lsp-add sw0 sw0-r1 -- lsp-set-type sw0-r1 remote
> > @@ -10392,7 +10392,7 @@ check_engine_stats() {
> > echo "__file__:__line__: Checking engine stats for node $node : recompute - \
> > $recompute : compute - $compute"
> >
> > - node_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats $node)
> > + node_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats $node)
> > # node_stat will be of this format :
> > # - Node: lflow - recompute: 3 - compute: 0 - abort: 0
> > node_recompute_ct=$(echo $node_stat | cut -d '-' -f2 | cut -d ':' -f2)
> > @@ -10420,7 +10420,7 @@ $recompute : compute - $compute"
> > # Test I-P for load balancers.
> > # Presently ovn-northd handles I-P for NB LBs in northd_lb_data engine node
> > # only.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
> >
> > check_engine_stats lb_data norecompute compute
> > @@ -10429,7 +10429,7 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> >
> > check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1:10.0.0.2
> > check_engine_stats lb_data norecompute compute
> > @@ -10450,7 +10450,7 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> >
> > check ovn-nbctl --wait=sb -- lb-del lb2 -- lb-del lb3
> > check_engine_stats lb_data norecompute compute
> > @@ -10459,7 +10459,7 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> >
> > AT_CHECK([ovn-nbctl --wait=sb \
> > -- --id=@hc create Load_Balancer_Health_Check vip="10.0.0.10\:80" \
> > @@ -10473,7 +10473,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > # Any change to load balancer health check should also result in full recompute
> > # of northd node (but not northd_lb_data node)
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb set load_balancer_health_check . options:foo=bar1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > @@ -10481,21 +10481,21 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > # Delete the health check from the load balancer. northd engine node should do a full recompute.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb clear Load_Balancer . health_check
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl ls-add sw0
> > check ovn-nbctl --wait=sb lr-add lr0
> > ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
> > ovn-nbctl lsp-add sw0 sw0-lr0
> > ovn-nbctl lsp-set-type sw0-lr0 router
> > ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > ovn-nbctl --wait=sb lsp-set-options sw0-lr0 router-port=lr0-sw0
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > @@ -10503,7 +10503,7 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute nocompute
> >
> > # Associate lb1 to sw0. There should be no recompute of northd engine node
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10515,7 +10515,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> >
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Modify the backend of the lb1 vip
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10524,7 +10524,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Cleanup the vip of lb1.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10533,7 +10533,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Set the vips of lb1 back
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10542,7 +10542,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Add another vip to lb1
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10551,7 +10551,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Disassociate lb1 from sw0. There should be a full recompute of northd engine node.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb ls-lb-del sw0 lb1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > @@ -10561,7 +10561,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
> >
> > # Associate lb1 to sw0 and also create a port sw0p1. This should not result in
> > # full recompute of northd, but should rsult in full recompute of lflow node.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb ls-lb-add sw0 lb1 -- lsp-add sw0 sw0p1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10569,10 +10569,10 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute compute
> >
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> >
> > # Disassociate lb1 from sw0. There should be a recompute of northd engine node.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb ls-lb-del sw0 lb1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > @@ -10581,7 +10581,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Add lb1 to lr0 and then disassociate
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lr-lb-add lr0 lb1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10590,7 +10590,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Modify the backend of the lb1 vip
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10599,7 +10599,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Cleanup the vip of lb1.
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10608,7 +10608,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Set the vips of lb1 back
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10617,7 +10617,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Add another vip to lb1
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd norecompute compute
> > @@ -10625,7 +10625,7 @@ check_engine_stats lflow recompute nocompute
> > check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > check ovn-nbctl --wait=sb lr-lb-del lr0 lb1
> > check_engine_stats lb_data norecompute compute
> > check_engine_stats northd recompute nocompute
> > @@ -10634,7 +10634,7 @@ check_engine_stats sync_to_sb_lb recompute compute
> > CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >
> > # Test load balancer group now
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > lbg1_uuid=$(ovn-nbctl --wait=sb
create load_balancer_group name=lbg1) > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10642,19 +10642,19 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > > > lb1_uuid=$(fetch_column nb:Load_Balancer _uuid) > > > > # Add lb to the lbg1 group > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb add load_balancer_group . load_Balancer $lb1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear load_balancer_group . load_Balancer > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > @@ -10662,7 +10662,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > > > # Add back lb to the lbg1 group > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb add load_balancer_group . 
load_Balancer $lb1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10671,7 +10671,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute > > > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10679,7 +10679,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > > > # Update lb and this should not result in northd recompute > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set load_balancer . options:bar=foo > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10687,7 +10687,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > > > # Modify the backend of the lb1 vip > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10696,7 +10696,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Cleanup the vip of lb1. 
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10705,7 +10705,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Set the vips of lb1 back > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10714,7 +10714,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Add another vip to lb1 > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10722,14 +10722,14 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl add logical_router lr0 
load_balancer_group $lbg1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10738,7 +10738,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Modify the backend of the lb1 vip > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"' > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10747,7 +10747,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Cleanup the vip of lb1. > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear load_Balancer lb1 vips > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10756,7 +10756,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Set the vips of lb1 back > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10765,7 +10765,7 @@ check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Add another vip to lb1 > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10773,7 
+10773,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear logical_router lr0 load_balancer_group > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > @@ -10781,14 +10781,14 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > > > # Add back lb group to logical switch and then delete it. > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group -- \ > > destroy load_balancer_group $lbg1_uuid > > check_engine_stats lb_data norecompute compute > > @@ -10812,21 +10812,21 @@ lb2_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb2) > > lb3_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb3) > > lb4_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb4) > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > lbg1_uuid=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg1) > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > check_engine_stats lflow recompute nocompute > > 
check_engine_stats sync_to_sb_lb recompute nocompute > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set load_balancer_group . load_balancer="$lb2_uuid,$lb3_uuid,$lb4_uuid" > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set logical_switch sw0 load_balancer_group=$lbg1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10834,7 +10834,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb set logical_router lr1 load_balancer_group=$lbg1_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10842,7 +10842,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb ls-lb-add sw0 lb2 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10850,7 +10850,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE 
inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb ls-lb-add sw0 lb3 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10858,7 +10858,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lr-lb-add lr1 lb1 > > check ovn-nbctl --wait=sb lr-lb-add lr1 lb2 > > check_engine_stats lb_data norecompute compute > > @@ -10867,7 +10867,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb ls-lb-del sw0 lb2 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > @@ -10875,7 +10875,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute nocompute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lr-lb-del lr1 lb2 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > @@ -10885,7 +10885,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Deleting lb4 should not result in lflow recompute as it is > > # only associated with logical switch sw0. 
> > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-del lb4 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10895,7 +10895,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > # Deleting lb2 should result in lflow recompute as it is > > # associated with logical router lr1 through lb group. > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb lb-del lb2 > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd norecompute compute > > @@ -10903,7 +10903,7 @@ check_engine_stats lflow recompute nocompute > > check_engine_stats sync_to_sb_lb recompute compute > > CHECK_NO_CHANGE_AFTER_RECOMPUTE > > > > -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats > > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats > > check ovn-nbctl --wait=sb remove load_balancer_group . load_balancer $lb3_uuid > > check_engine_stats lb_data norecompute compute > > check_engine_stats northd recompute nocompute > > diff --git a/tests/ovn.at b/tests/ovn.at > > index e8c79512b2..67c4ccd39b 100644 > > --- a/tests/ovn.at > > +++ b/tests/ovn.at > > @@ -7248,7 +7248,7 @@ compare_dhcp_packets 1 > > > > # Stop ovn-northd so that we can modify the northd_version. > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > northd_version=$(ovn-sbctl get SB_Global . 
options:northd_internal_version | sed s/\"//g) > > echo "northd version = $northd_version" > > @@ -8805,7 +8805,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > AT_CLEANUP > > ]) > > > > @@ -9611,8 +9611,8 @@ check test "$c6_tag" != "$c3_tag" > > > > AS_BOX([restart northd and make sure tag allocation is stable]) > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > -start_daemon NORTHD_TYPE \ > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > +start_daemon ovn-northd \ > > --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock \ > > --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock > > > > @@ -25547,7 +25547,7 @@ check_row_count Service_Monitor 0 > > # Let's also be sure the warning message about SCTP load balancers is > > # is in the ovn-northd log > > > > -AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/NORTHD_TYPE.log`]) > > +AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/ovn-northd.log`]) > > > > AT_CLEANUP > > ]) > > @@ -30791,7 +30791,7 @@ check ovn-nbctl --wait=hv ls-lb-del sw0 lb-ipv6 > > # original destination tuple. > > # > > # ovn-controller should fall back to matching on ct_nw_dst()/ct_tp_dst(). > > -as northd ovn-appctl -t NORTHD_TYPE pause > > +as northd ovn-appctl -t ovn-northd pause > > > > check ovn-sbctl \ > > -- remove load_balancer lb-ipv4-tcp options hairpin_orig_tuple \ > > @@ -30840,7 +30840,7 @@ OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a > > ]) > > > > # Resume ovn-northd. > > -as northd ovn-appctl -t NORTHD_TYPE resume > > +as northd ovn-appctl -t ovn-northd resume > > check ovn-nbctl --wait=hv sync > > > > as hv2 ovs-vsctl del-port hv2-vif1 > > @@ -30959,7 +30959,7 @@ AT_CHECK([grep -c $northd_version hv1/ovn-controller.log], [0], [1 > > > > # Stop ovn-northd so that we can modify the northd_version. 
> > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > check ovn-sbctl set SB_Global . options:northd_internal_version=foo > > > > @@ -31101,7 +31101,7 @@ OVS_WAIT_UNTIL([test `ovs-vsctl get Interface lsp2 external_ids:ovn-installed` = > > AS_BOX([ovn-controller should not reset Port_Binding.up without northd]) > > # Pause northd and clear the "up" field to simulate older ovn-northd > > # versions writing to the Southbound DB. > > -as northd ovn-appctl -t NORTHD_TYPE pause > > +as northd ovn-appctl -t ovn-northd pause > > > > as hv1 ovn-appctl -t ovn-controller debug/pause > > check ovn-sbctl clear Port_Binding lsp1 up > > @@ -31116,7 +31116,7 @@ check_column "" Port_Binding up logical_port=lsp1 > > > > # Once northd should explicitly set the Port_Binding.up field to 'false' and > > # ovn-controller sets it to 'true' as soon as the update is processed. > > -as northd ovn-appctl -t NORTHD_TYPE resume > > +as northd ovn-appctl -t ovn-northd resume > > wait_column "true" Port_Binding up logical_port=lsp1 > > wait_column "true" nb:Logical_Switch_Port up name=lsp1 > > > > @@ -31270,7 +31270,7 @@ check ovn-nbctl lsp-set-addresses sw0-p4 "00:00:00:00:00:04 192.168.47.4" > > # Pause ovn-northd. When it is resumed, all the below NB updates > > # will be sent in one transaction. > > > > -check as northd ovn-appctl -t NORTHD_TYPE pause > > +check as northd ovn-appctl -t ovn-northd pause > > > > check ovn-nbctl lsp-add sw0 sw0-p1 > > check ovn-nbctl lsp-set-addresses sw0-p1 "00:00:00:00:00:01 192.168.47.1" > > @@ -31282,7 +31282,7 @@ check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip4 && ip4.src == > > > > # resume ovn-northd now. This should result in a single update message > > # from SB ovsdb-server to ovn-controller for all the above NB updates. 
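The pause/resume trick in the hunk above batches several NB updates so they reach ovn-controller as a single SB update when northd resumes. A conceptual sketch of that batching behavior (plain Python, not OVN code; names invented for illustration):

```python
class Northd:
    """Toy model of a daemon that batches input changes while paused."""

    def __init__(self):
        self.paused = False
        self.pending = []       # NB updates received while paused
        self.transactions = []  # each entry models one SB transaction

    def nb_update(self, change):
        if self.paused:
            self.pending.append(change)         # accumulate, emit nothing
        else:
            self.transactions.append([change])  # one transaction per change

    def pause(self):
        self.paused = True

    def resume(self):
        # everything accumulated while paused goes out as one transaction
        if self.pending:
            self.transactions.append(self.pending)
            self.pending = []
        self.paused = False


daemon = Northd()
daemon.pause()
for change in ["lsp-add sw0 sw0-p1", "lsp-add sw0 sw0-p2", "acl-add pg1"]:
    daemon.nb_update(change)
daemon.resume()
print(len(daemon.transactions))  # 1: all three updates in one transaction
```

Without the pause, each nbctl command would instead produce its own SB update, which is exactly what the test is trying to avoid when it exercises the single-update path.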
> > -check as northd ovn-appctl -t NORTHD_TYPE resume > > +check as northd ovn-appctl -t ovn-northd resume > > > > AS_BOX([Wait for sw0-p1 and sw0-p2 to be up]) > > wait_for_ports_up sw0-p1 sw0-p2 > > diff --git a/tests/ovs-macros.at b/tests/ovs-macros.at > > index 5b8da2048a..35760cf06b 100644 > > --- a/tests/ovs-macros.at > > +++ b/tests/ovs-macros.at > > @@ -5,13 +5,11 @@ m4_include([m4/compat.m4]) > > > > dnl Make AT_SETUP automatically do some things for us: > > dnl - Run the ovs_init() shell function as the first step in every test. > > -dnl - If NORTHD_TYPE is defined, then append it to the test name and > > +dnl - If NORTHD_USE_DP_GROUPS is defined, then append it to the test name and > > dnl set it as a shell variable as well. > > m4_rename([AT_SETUP], [OVS_AT_SETUP]) > > m4_define([AT_SETUP], > > - [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_TYPE], [ -- NORTHD_TYPE])[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) > > -m4_ifdef([NORTHD_TYPE], [[NORTHD_TYPE]=NORTHD_TYPE > > -])dnl > > + [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) > > m4_ifdef([NORTHD_USE_PARALLELIZATION], [[NORTHD_USE_PARALLELIZATION]=NORTHD_USE_PARALLELIZATION > > ])dnl > > m4_ifdef([NORTHD_DUMMY_NUMA], [[NORTHD_DUMMY_NUMA]=NORTHD_DUMMY_NUMA > > diff --git a/tests/perf-northd.at b/tests/perf-northd.at > > index ca115dadc2..18fb209146 100644 > > --- a/tests/perf-northd.at > > +++ b/tests/perf-northd.at > > @@ -60,7 +60,7 @@ m4_define([PARSE_STOPWATCH], [ > > # to performance results. 
> > # > > m4_define([PERF_RECORD_STOPWATCH], [ > > - PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/NORTHD_TYPE stopwatch/show $1 | PARSE_STOPWATCH($2)`]) > > + PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/ovn-northd stopwatch/show $1 | PARSE_STOPWATCH($2)`]) > > ]) > > > > # PERF_RECORD() > > diff --git a/tests/system-common-macros.at b/tests/system-common-macros.at > > index 65c4884ea1..4bfc74582c 100644 > > --- a/tests/system-common-macros.at > > +++ b/tests/system-common-macros.at > > @@ -494,7 +494,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d > > diff --git a/tests/system-ovn-kmod.at b/tests/system-ovn-kmod.at > > index a81bae7133..93fd962004 100644 > > --- a/tests/system-ovn-kmod.at > > +++ b/tests/system-ovn-kmod.at > > @@ -507,7 +507,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -801,7 +801,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -947,7 +947,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -1113,7 +1113,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -1263,7 +1263,7 @@ as ovn-nb > > 
OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP([" > > diff --git a/tests/system-ovn.at b/tests/system-ovn.at > > index c454526073..0abf2828a2 100644 > > --- a/tests/system-ovn.at > > +++ b/tests/system-ovn.at > > @@ -171,7 +171,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -351,7 +351,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -463,7 +463,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -575,7 +575,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -797,7 +797,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -1023,7 +1023,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -1337,7 +1337,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) 
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -1620,7 +1620,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > > @@ -1841,7 +1841,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > > @@ -1949,7 +1949,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > > @@ -2059,7 +2059,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) > > @@ -2304,7 +2304,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -2399,7 +2399,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -2556,7 +2556,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -2727,7 +2727,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > 
as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -2900,7 +2900,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3119,7 +3119,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3262,7 +3262,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3405,7 +3405,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3586,7 +3586,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3747,7 +3747,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -3930,7 +3930,7 @@ as ovn-nb > > OVS_APP_EXIT_AND_WAIT([ovsdb-server]) > > > > as northd > > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) > > +OVS_APP_EXIT_AND_WAIT([ovn-northd]) > > > > as > > OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d > > @@ -4098,7 +4098,7 @@ as ovn-nb > > 
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4175,7 +4175,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4341,7 +4341,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4565,7 +4565,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4748,7 +4748,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4851,7 +4851,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -4949,7 +4949,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5195,7 +5195,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5444,7 +5444,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5567,7 +5567,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5687,7 +5687,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5796,7 +5796,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5905,7 +5905,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -5999,7 +5999,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -6173,7 +6173,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -6366,7 +6366,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -6415,7 +6415,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -6506,7 +6506,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
> > @@ -6742,7 +6742,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
> > @@ -6893,7 +6893,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
> > @@ -7022,7 +7022,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -7147,7 +7147,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
> > @@ -7264,7 +7264,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -7479,7 +7479,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
> > @@ -7622,7 +7622,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -7764,7 +7764,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -7865,7 +7865,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -7966,7 +7966,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8066,7 +8066,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8423,7 +8423,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8552,7 +8552,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8616,7 +8616,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8713,7 +8713,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8822,7 +8822,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -8896,7 +8896,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9097,7 +9097,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9247,7 +9247,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9398,7 +9398,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9526,7 +9526,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9677,7 +9677,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9822,7 +9822,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -9933,7 +9933,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10075,7 +10075,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10289,7 +10289,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10434,7 +10434,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10599,7 +10599,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10753,7 +10753,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -10889,7 +10889,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11215,7 +11215,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11393,7 +11393,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11535,7 +11535,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11624,7 +11624,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11697,7 +11697,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > @@ -11811,7 +11811,7 @@ as ovn-nb
> >  OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > 
> >  as northd
> > -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> > +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> > 
> >  as
> >  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> > diff --git a/tutorial/ovs-sandbox b/tutorial/ovs-sandbox
> > index 2219b0e99e..787adb87a7 100755
> > --- a/tutorial/ovs-sandbox
> > +++ b/tutorial/ovs-sandbox
> > @@ -71,8 +71,6 @@ ovssrcdir=
> >  schema=
> >  installed=false
> >  built=false
> > -ddlog=false
> > -ddlog_record=true
> >  ic_sb_schema=
> >  ic_nb_schema=
> >  ovn_rbac=true
> > @@ -147,8 +145,6 @@ General options:
> >    -S, --schema=FILE        use FILE as vswitch.ovsschema
> > 
> >  OVN options:
> > -  --ddlog              use ovn-northd-ddlog
> > -  --no-ddlog-record    do not record ddlog transactions (for performance)
> >    --no-ovn-rbac        disable role-based access control for OVN
> >    --n-northds=NUMBER   run NUMBER copies of northd (default: 1)
> >    --n-ics=NUMBER       run NUMBER copies of ic (default: 1)
> > @@ -242,12 +238,6 @@ EOF
> >          --gdb-ovn-controller-vtep)
> >              gdb_ovn_controller_vtep=true
> >              ;;
> > -        --ddlog)
> > -            ddlog=true
> > -            ;;
> > -        --no-ddlog-record | --no-record-ddlog)
> > -            ddlog_record=false
> > -            ;;
> >          --no-ovn-rbac)
> >              ovn_rbac=false
> >              ;;
> > @@ -680,17 +670,10 @@ for i in $(seq $n_ics); do
> >  done
> > 
> >  northd_args=
> > -if $ddlog; then
> > -    OVN_NORTHD=ovn-northd-ddlog
> > -else
> > -    OVN_NORTHD=ovn-northd
> > -fi
> > +OVN_NORTHD=ovn-northd
> > 
> >  for i in $(seq $n_northds); do
> >      if [ $i -eq 1 ]; then inst=""; else inst=$i; fi
> > -    if $ddlog && $ddlog_record; then
> > -        northd_args=--ddlog-record=replay$inst.txt
> > -    fi
> >      rungdb $gdb_ovn_northd $gdb_ovn_northd_ex $OVN_NORTHD --detach \
> >          --no-chdir --pidfile=$OVN_NORTHD$inst.pid -vconsole:off \
> >          --log-file=$OVN_NORTHD$inst.log -vsyslog:off \
> > diff --git a/utilities/ovn-ctl b/utilities/ovn-ctl
> > index dc8865abf8..876565c801 100755
> > --- a/utilities/ovn-ctl
> > +++ b/utilities/ovn-ctl
> > @@ -786,7 +786,6 @@ set_defaults () {
> >      OVN_CONTROLLER_WRAPPER=
> >      OVSDB_NB_WRAPPER=
> >      OVSDB_SB_WRAPPER=
> > -    OVN_NORTHD_DDLOG=no
> > 
> >      OVSDB_DISABLE_FILE_COLUMN_DIFF=no
> > 
> > @@ -1031,9 +1030,6 @@ Options:
> >    --db-sb-relay-remote   Specifies upstream cluster/server remote for ovsdb relay
> >    --db-sb-relay-use-remote-in-db=no|yes
> >        OVN_Sorthbound db listen on target connection table (default: $DB_SB_RELAY_USE_REMOTE_IN_DB)
> > -
> > -  --ovn-northd-ddlog=yes|no whether we should run the DDlog version
> > -      of ovn-northd.  The default is "no".
> >    -h, --help             display this help message
> > 
> >  File location options:
> > @@ -1212,11 +1208,7 @@ do
> >      esac
> >  done
> > 
> > -if test X"$OVN_NORTHD_DDLOG" = Xyes; then
> > -    OVN_NORTHD_BIN=ovn-northd-ddlog
> > -else
> > -    OVN_NORTHD_BIN=ovn-northd
> > -fi
> > +OVN_NORTHD_BIN=ovn-northd
> > 
> >  case $command in
> >      start_northd)
> 
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
On 12/20/23 17:18, Numan Siddique wrote:
> On Thu, Dec 7, 2023 at 6:17 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>
>> On 12/7/23 12:11, Dumitru Ceara wrote:
>>> From: Ben Pfaff <blp@ovn.org>
>>>
>>> Originally posted at:
>>> https://patchwork.ozlabs.org/project/ovn/patch/20211007194649.4014965-1-blp@ovn.org/
>>>
>>> Signed-off-by: Ben Pfaff <blp@ovn.org>
>>> ---
>>
>> CC: Han, Mark, Numan
>
> Acked-by: Numan Siddique <numans@ovn.org>
>

Applied to main.  Thanks,

Dumitru
diff --git a/Documentation/automake.mk b/Documentation/automake.mk index 7fcd186cac..b00876737b 100644 --- a/Documentation/automake.mk +++ b/Documentation/automake.mk @@ -24,14 +24,12 @@ DOC_SOURCE = \ Documentation/tutorials/images/ovsdb-relay-3.png \ Documentation/tutorials/ovn-rbac.rst \ Documentation/tutorials/ovn-interconnection.rst \ - Documentation/tutorials/ddlog-new-feature.rst \ Documentation/topics/index.rst \ Documentation/topics/testing.rst \ Documentation/topics/high-availability.rst \ Documentation/topics/integration.rst \ Documentation/topics/ovn-news-2.8.rst \ Documentation/topics/role-based-access-control.rst \ - Documentation/topics/debugging-ddlog.rst \ Documentation/topics/vif-plug-providers/index.rst \ Documentation/topics/vif-plug-providers/vif-plug-providers.rst \ Documentation/howto/index.rst \ diff --git a/Documentation/intro/install/general.rst b/Documentation/intro/install/general.rst index dd8bf5c2c0..ab62094828 100644 --- a/Documentation/intro/install/general.rst +++ b/Documentation/intro/install/general.rst @@ -102,13 +102,6 @@ need the following software: The environment variable OVS_RESOLV_CONF can be used to specify DNS server configuration file (the default file on Linux is /etc/resolv.conf). -- `DDlog <https://github.com/vmware/differential-datalog>`, if you - want to build ``ovn-northd-ddlog``, an alternate implementation of - ``ovn-northd`` that scales better to large deployments. The NEWS - file specifies the right version of DDlog to use with this release. - Building with DDlog supports requires Rust to be installed (see - https://www.rust-lang.org/tools/install). - If you are working from a Git tree or snapshot (instead of from a distribution tarball), or if you modify the OVN build system or the database schema, you will also need the following software: @@ -210,37 +203,6 @@ the default database directory, add options as shown here:: ``yum install`` or ``rpm -ivh``) and .deb (e.g. 
via ``apt-get install`` or ``dpkg -i``) use the above configure options. -Use ``--with-ddlog`` to build with DDlog support. To build with -DDlog, the build system needs to be able to find the ``ddlog`` and -``ovsdb2ddlog`` binaries and the DDlog library directory (the -directory that contains ``ddlog_std.dl``). This option supports a -few ways to do that: - - * If binaries are in $PATH, use the library directory as argument, - e.g. ``--with-ddlog=$HOME/differential-datalog/lib``. This is - suitable if DDlog was installed from source via ``stack install`` or - from (hypothetical) distribution packaging. - - The DDlog documentation recommends pointing $DDLOG_HOME to the - DDlog source directory. If you did this, so that $DDLOG_HOME/lib - is the library directory, you may use ``--with-ddlog`` without an - argument. - - * If the binaries and libraries are in the ``bin`` and ``lib`` - subdirectories of an installation directory, use the installation - directory as the argument. This is suitable if DDlog was - installed from one of the binary tarballs published by the DDlog - developers. - -.. note:: - - Building with DDLog adds a few minutes to the build because the - Rust compiler is slow. Add ``--enable-ddlog-fast-build`` to make - this about 2x faster. This disables some Rust compiler - optimizations, making a much slower ``ovn-northd-ddlog`` - executable, so it should not be used for production builds or for - profiling. - By default, static libraries are built and linked against. 
If you want to use shared libraries instead:: @@ -418,14 +380,6 @@ An example after install might be:: $ ovn-ctl start_northd $ ovn-ctl start_controller -If you built with DDlog support, then you can start -``ovn-northd-ddlog`` instead of ``ovn-northd`` by adding -``--ovn-northd-ddlog=yes``, e.g.:: - - $ export PATH=$PATH:/usr/local/share/ovn/scripts - $ ovn-ctl --ovn-northd-ddlog=yes start_northd - $ ovn-ctl start_controller - Starting OVN Central services ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -481,11 +435,6 @@ Unix domain socket:: $ ovn-northd --pidfile --detach --log-file -If you built with DDlog support, you can start ``ovn-northd-ddlog`` -instead, the same way:: - - $ ovn-northd-ddlog --pidfile --detach --log-file - Starting OVN Central services in containers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/Documentation/topics/debugging-ddlog.rst b/Documentation/topics/debugging-ddlog.rst deleted file mode 100644 index 2ae72a38ea..0000000000 --- a/Documentation/topics/debugging-ddlog.rst +++ /dev/null @@ -1,280 +0,0 @@ -.. - Licensed under the Apache License, Version 2.0 (the "License"); you may - not use this file except in compliance with the License. You may obtain - a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - License for the specific language governing permissions and limitations - under the License. - - Convention for heading levels in OVN documentation: - - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - - Avoid deeper levels because they do not render well. 
- -========================================= -Debugging the DDlog version of ovn-northd -========================================= - -This document gives some tips for debugging correctness issues in the -DDlog implementation of ``ovn-northd``. To keep things conrete, we -assume here that a failure occurred in one of the test cases in -``ovn-e2e.at``, but the same methodology applies in any other -environment. If none of these methods helps, ask for assistance or -submit a bug report. - -Before trying these methods, you may want to check the northd log -file, ``tests/testsuite.dir/<test_number>/northd/ovn-northd.log`` for -error messages that might explain the failure. - -Compare OVSDB tables generated by DDlog vs C --------------------------------------------- - -The first thing I typically want to check when ``ovn-northd-ddlog`` -does not behave as expected is how the OVSDB tables computed by DDlog -differ from what the C implementation produces. Fortunately, all the -infrastructure needed to do this already exists in OVN. - -First, let's modify the test script, e.g., ``ovn.at`` to dump the -contents of OVSDB right before the failure. The most common issue is -a difference between the logical flows generated by the two -implementations. To make it easy to compare the generated flows, make -sure that the test contains something like this in the right place:: - - ovn-sbctl dump-flows > sbflows - AT_CAPTURE_FILE([sbflows]) - -The first line above dumps the OVN logical flow table to a file named -``sbflows``. The second line ensures that, if the test fails, -``sbflows`` get logged to ``testsuite.log``. That is not particularly -useful for us right now, but it means that if someone later submits a -bug report, that's one more piece of data that we don't have to ask -for them to submit along with it. 
- -Next, we want to run the test twice, with the C and DDlog versions of -northd, e.g., ``make check -j6 TESTSUITEFLAGS="-d 111 112"`` if 111 -and 112 are the C and DDlog versions of the same test. The ``-d`` in -this command line makes the test driver keep test directories around -even for tests that succeed, since by default it deletes them. - -Now you can look at ``sbflows`` in each test log directory. The -``ovn-northd-ddlog`` developers have gone to some trouble to make the -DDlog flows as similar as possible to the C ones, right down to white -space and other formatting. Thus, the DDlog output is often identical -to C aside from logical datapath UUIDs. - -Usually, this means that one can get informative results by running -``diff``, e.g.:: - - diff -u tests/testsuite.dir/111/sbflows tests/testsuite.dir/111/sbflows - -Running the input through the ``uuidfilt`` utility from OVS will -generally get rid of the logical datapath UUID differences as well:: - - diff -u <(uuidfilt tests/testsuite.dir/111/sbflows) <(uuidfilt tests/testsuite.dir/111/sbflows) - -If there are nontrivial differences, this often identifies your bug. - -Often, once you have identified the difference between the two OVSDB -dumps, this will immediately lead you to the root cause of the bug, -but if you are not this lucky then the next method may help. - -Record and replay DDlog execution ---------------------------------- - -DDlog offers a way to record all input table updates throughout the -execution of northd and replay them against DDlog running as a -standalone executable without all other OVN components. This has two -advantages. First, this allows one to easily tweak the inputs, e.g. -to simplify the test scenario. Second, the recorded execution can be -easily replayed anywhere without having to reproduce your OVN setup. - -Use the ``--ddlog-record`` option to record updates, -e.g. ``--ddlog-record=replay.dat`` to record to ``replay.dat``. -(OVN's built-in tests automatically do this.) 
The file contains the -log of transactions in the DDlog command format (see -https://github.com/vmware/differential-datalog/blob/master/doc/command_reference/command_reference.md). - -To replay the log, you will need the standalone DDlog executable. By -default, the build system does not compile this program, because it -increases the already long Rust compilation time. To build it, add -``NORTHD_CLI=1`` to the ``make`` command line, e.g. ``make -NORTHD_CLI=1``. - -You can modify the log before replaying it, e.g., adding ``dump -<table>`` commands to dump the contents of relations at various points -during execution. The <table> name must be fully qualified based on -the file in which it is declared, e.g. ``OVN_Southbound::<table>`` for -southbound tables or ``lrouter::<table>.`` for ``lrouter.dl``. You -can also use ``dump`` without an argument to dump the contents of all -tables. - -The following command replays the log generated by OVN test number -112 and dumps the output of DDlog to ``replay.dump``:: - - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/112/northd/replay.dat > replay.dump - -Or, to dump just the table contents following the run, without having -to edit ``replay.dat``:: - - (cat tests/testsuite.dir/112/northd/replay.dat; echo 'dump;') | northd/ovn_northd_ddlog/target/release/ovn_northd_cli --no-delta --no-init-snapshot > replay.dump - -Depending on whether and how you installed OVS and OVN, you might need -to point ``LD_LIBRARY_PATH`` to library build directories to get the -CLI to run, e.g.:: - - export LD_LIBRARY_PATH=$HOME/ovn/_build/lib/.libs:$HOME/ovs/_build/lib/.libs - -.. note:: - - The replay output may be less informative than you expect because - DDlog does not, by default, keep around enough information to - include input relation and intermediate relations in the output. - These relations are often critical to understanding what is going - on. 
To include them, add the options - ``--output-internal-relations --output-input-relations=In_`` to - ``DDLOG_EXTRA_FLAGS`` for building ``ovn-northd-ddlog``. For - example, ``configure`` as:: - - ./configure DDLOG_EXTRA_FLAGS='--output-internal-relations --output-input-relations=In_' - -Debugging by Logging --------------------- - -One limitation of the previous method is that it allows one to inspect -inputs and outputs of a rule, but not the (sometimes fairly -complicated) computation that goes on inside the rule. You can of -course break up the rule into several rules and dump the intermediate -outputs. - -There are at least two alternatives for generating log messages. -First, you can write rules to add strings to the Warning relation -declared in ``ovn_north.dl``. Code in ``ovn-northd-ddlog.c`` will log -any given string in this relation just once, when it is first added to -the relation. (If it is removed from the relation and then added back -later, it will be logged again.) - -Second, you can call using the ``warn()`` function declared in -``ovn.dl`` from a DDlog rule. It's not straightforward to know -exactly when this function will be called, like it would be in an -imperative language like C, since DDlog is a declarative language -where the user doesn't directly control when rules are triggered. You -might, for example, see the rule being triggered multiple times with -the same input. Nevertheless, this debugging technique is useful in -practice. - -You will find many examples of the use of Warning and ``warn`` in -``ovn_northd.dl``, where it is frequently used to report non-critical -errors. - -Debugging panics ----------------- - -**TODO**: update these instructions as DDlog's internal handling of panic's -is improved. - -DDlog is a safe language, so DDlog programs normally do not crash, -except for the following three cases: - -- A panic in a Rust function imported to DDlog as ``extern function``. 
- -- A panic in a C function imported to DDlog as ``extern function``. - -- A bug in the DDlog runtime or libraries. - -Below we walk through the steps involved in debugging such failures. -In this scenario, there is an array-index-out-of-bounds error in the -``ovn_scan_static_dynamic_ip6()`` function, which is written in Rust -and imported to DDlog as an ``extern function``. When invoked from a -DDlog rule, this function causes a panic in one of DDlog worker -threads. - -**Step 1: Check for error messages in the northd log.** A panic can -generally lead to unpredictable outcomes, so one cannot count on a -clean error message showing up in the log (Other outcomes include -crashing the entire process and even deadlocks. We are working to -eliminate the latter possibility). In this case we are lucky to -observe a bunch of error messages like the following in the ``northd`` -log: - - ``2019-09-23T16:23:24.549Z|00011|ovn_northd|ERR|ddlog_transaction_commit(): - error: failed to receive flush ack message from timely dataflow - thread`` - -These messages are telling us that something is broken inside the -DDlog runtime. - -**Step 2: Record and replay the failing scenario.** We use DDlog's -record/replay capabilities (see above) to capture the faulty scenario. -We replay the recorded trace:: - - northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat - -This generates a bunch of output ending with:: - - thread 'worker thread 2' panicked at 'index out of bounds: the len is 1 but the index is 1', /rustc/eae3437dfe991621e8afdc82734f4a172d7ddf9b/src/libcore/slice/mod.rs:2681:10 - note: run with RUST_BACKTRACE=1 environment variable to display a backtrace. 
- -We re-run the CLI again with backtrace enabled (as suggested by the -error message):: - - RUST_BACKTRACE=1 northd/ovn_northd_ddlog/target/release/ovn_northd_cli < tests/testsuite.dir/117/northd/replay.dat - -This finally yields the following stack trace, which suggests array -bound violation in ``ovn_scan_static_dynamic_ip6``:: - - 0: backtrace::backtrace::libunwind::trace - at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29 10: core::panicking::panic_bounds_check - at src/libcore/panicking.rs:61 - [SKIPPED] - 11: ovn_northd_ddlog::__ovn::ovn_scan_static_dynamic_ip6 - 12: ovn_northd_ddlog::prog::__f - [SKIPPED] - -Finally, looking at the source code of -``ovn_scan_static_dynamic_ip6``, we identify the following line, -containing an unsafe array indexing operator, as the culprit:: - - ovn_ipv6_parse(&f[1].to_string()) - -Clean build -~~~~~~~~~~~ - -Occasionally it's desirable to a full and complete build of the -DDlog-generated code. To trigger that, delete the generated -``ovn_northd_ddlog`` directory and the ``ddlog.stamp`` witness file, -like this:: - - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp - -or:: - - make clean-ddlog - -Submitting a bug report ------------------------ - -If you are having trouble with DDlog and the above methods do not -help, please submit a bug report to ``bugs@openvswitch.org``, CC -``ryzhyk@gmail.com``. - -In addition to problem description, please provide as many of the -following as possible: - -- Are you running with the right DDlog for the version of OVN? OVN - and DDlog are both evolving and OVN needs to build against a - specific version of DDlog. - -- ``replay.dat`` file generated as described above - -- Logs: ``ovn-northd.log`` and ``testsuite.log``, if you are running - the OVN test suite diff --git a/Documentation/topics/index.rst b/Documentation/topics/index.rst index e9e49c7426..55bb919c09 100644 --- a/Documentation/topics/index.rst +++ b/Documentation/topics/index.rst @@ -36,7 +36,6 @@ OVN .. 
toctree:: :maxdepth: 2 - debugging-ddlog integration.rst high-availability role-based-access-control diff --git a/Documentation/tutorials/ddlog-new-feature.rst b/Documentation/tutorials/ddlog-new-feature.rst deleted file mode 100644 index de66ca5ada..0000000000 --- a/Documentation/tutorials/ddlog-new-feature.rst +++ /dev/null @@ -1,393 +0,0 @@ -.. - Licensed under the Apache License, Version 2.0 (the "License"); you may - not use this file except in compliance with the License. You may obtain - a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - License for the specific language governing permissions and limitations - under the License. - - Convention for heading levels in OVN documentation: - - ======= Heading 0 (reserved for the title in a document) - ------- Heading 1 - ~~~~~~~ Heading 2 - +++++++ Heading 3 - ''''''' Heading 4 - - Avoid deeper levels because they do not render well. - -=========================================================== -Adding a new OVN feature to the DDlog version of ovn-northd -=========================================================== - -This document describes the usual steps an OVN developer should go -through when adding a new feature to ``ovn-northd-ddlog``. In order to -make things less abstract we will use the IP Multicast -``ovn-northd-ddlog`` implementation as an example. Even though the -document is structured as a tutorial there might still exist -feature-specific aspects that are not covered here. - -Overview --------- - -DDlog is a dataflow system: it receives data from a data source (a set -of "input relations"), processes it through "intermediate relations" -according to the rules specified in the DDlog program, and sends the -processed "output relations" to a data sink. 
In OVN, the input -relations primarily come from the OVN Northbound database and the -output relations primarily go to the OVN Southbound database. The -process looks like this:: - - from NBDB +----------+ +-----------------+ +-----------+ to SBDB - ---------->|Input rels|-->|Intermediate rels|-->|Output rels|----------> - +----------+ +-----------------+ +-----------+ - -Adding a new feature to ``ovn-northd-ddlog`` usually involves the -following steps: - -1. Update northbound and/or southbound OVSDB schemas. - -2. Configure DDlog/OVSDB bindings. - -3. Define intermediate DDlog relations and rules to compute them. - -4. Write rules to update output relations. - -5. Generate ``Logical_Flow``s and/or other forwarding records (e.g., - ``Multicast_Group``) that will control the dataplane operations. - -Update NB and/or SB OVSDB schemas ---------------------------------- - -This step is no different from the normal development flow in C. - -Most of the time a developer chooses between two ways of configuring -a new feature: - -1. Adding a set of columns to tables in the NB and/or SB database (or - adding key-value pairs to existing columns). - -2. Adding new tables to the NB and/or SB database. - -Looking at IP Multicast, there are two ``OVN Northbound`` tables where -configuration information is stored: - -- ``Logical_Switch``, column ``other_config``, keys ``mcast_*``. - -- ``Logical_Router``, column ``options``, keys ``mcast_*``. - -These tables become inputs to the DDlog pipeline. - -In addition, we add a new table ``IP_Multicast`` to the SB database. -DDlog will update this table, that is, ``IP_Multicast`` receives -output from the above pipeline.
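The ``mcast_*`` keys above live in free-form OVSDB string maps, so the DDlog rules read them with explicit defaults (via ``map_get_bool_def()``, shown later in this tutorial). A minimal Rust sketch of that lookup logic — the helper name and the accepted ``"true"``/``"false"`` spellings are assumptions of this sketch, not code from the tree:

```rust
use std::collections::HashMap;

// Sketch of a map_get_bool_def()-style lookup: fetch a boolean option from
// an OVSDB string map such as Logical_Switch.other_config, falling back to
// a default when the key is missing or not a recognized boolean.
fn map_get_bool_def(cfg: &HashMap<String, String>, key: &str, def: bool) -> bool {
    match cfg.get(key).map(String::as_str) {
        Some("true") => true,
        Some("false") => false,
        _ => def,
    }
}

fn main() {
    let mut other_config = HashMap::new();
    other_config.insert("mcast_snoop".to_string(), "true".to_string());

    // mcast_snoop defaults to false; mcast_querier defaults to true.
    assert!(map_get_bool_def(&other_config, "mcast_snoop", false));
    assert!(map_get_bool_def(&other_config, "mcast_querier", true));
}
```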
- -Configuring DDlog/OVSDB bindings --------------------------------- - -Configuring ``northd/automake.mk`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The OVN build process uses DDlog's ``ovsdb2ddlog`` utility to parse -``ovn-nb.ovsschema`` and ``ovn-sb.ovsschema`` and then automatically -populate ``OVN_Northbound.dl`` and ``OVN_Southbound.dl``. For each -OVN Northbound and Southbound table, it generates one or more -corresponding DDlog relations. - -We need to supply ``ovsdb2ddlog`` with some information that it can't -infer from the OVSDB schemas. This information must be specified as -``ovsdb2ddlog`` arguments, which are read from -``northd/ovn-nb.dlopts`` and ``northd/ovn-sb.dlopts``. - -The main choice for each new table is whether it is used for output. -Output tables can also be used for input, but the converse is not -true. If the table is used for output at all, we add ``-o <table>`` -to the option file. Our new table ``IP_Multicast`` is an output -table, so we add ``-o IP_Multicast`` to ``ovn-sb.dlopts``. - -For input-only tables, ``ovsdb2ddlog`` generates a DDlog input -relation with the same name. For output tables, it generates this -table plus an output relation named ``Out_<table>``. Thus, -``OVN_Southbound.dl`` has two relations for ``IP_Multicast``:: - - input relation IP_Multicast ( - _uuid: uuid, - datapath: string, - enabled: Set<bool>, - querier: Set<bool> - ) - output relation Out_IP_Multicast ( - _uuid: uuid, - datapath: string, - enabled: Set<bool>, - querier: Set<bool> - ) - -For an output table, consider whether only some of the columns are -used for output, that is, some of the columns are effectively -input-only. This is common in OVN for OVSDB columns that are managed -externally (e.g. by a CMS). For each input-only column, we add ``--ro -<table>.<column>``. Alternatively, if most of the columns are -input-only but a few are output columns, add ``--rw <table>.<column>`` -for each of the output columns. 
In our case, all of the columns are -used for output, so we do not need to add anything. - -Finally, in some cases ``ovn-northd-ddlog`` shouldn't change values in -a column. One such case is the ``seq_no`` column in the -``IP_Multicast`` table. To do that, we need to instruct ``ovsdb2ddlog`` -to treat the column as read-only by using the ``--ro`` switch. - -``ovsdb2ddlog`` generates a number of additional DDlog relations, for -use by auto-generated OVSDB adapter logic. These are irrelevant to -most DDlog developers, although sometimes they can be handy for -debugging. See the appendix_ for details. - -Define intermediate DDlog relations and rules to compute them. --------------------------------------------------------------- - -Obviously there will be a one-to-one relationship between logical -switches/routers and IP multicast configuration. One way to represent -this relationship is to create multicast configuration DDlog relations -to be referenced by ``&Switch`` and ``&Router`` DDlog records:: - - /* IP Multicast per switch configuration. */ - relation &McastSwitchCfg( - datapath : uuid, - enabled : bool, - querier : bool - ) - - &McastSwitchCfg( - .datapath = ls_uuid, - .enabled = map_get_bool_def(other_config, "mcast_snoop", false), - .querier = map_get_bool_def(other_config, "mcast_querier", true)) :- - nb.Logical_Switch(._uuid = ls_uuid, - .other_config = other_config). - -Then reference these relations in ``&Switch`` and ``&Router``. For -example, in ``lswitch.dl``, the ``&Switch`` relation definition now -contains:: - - relation &Switch( - ls: nb.Logical_Switch, - [...] - mcast_cfg: Ref<McastSwitchCfg> - ) - -It is populated by the following rule, which references the correct -``McastSwitchCfg`` based on the logical switch uuid:: - - &Switch(.ls = ls, - [...] - .mcast_cfg = mcast_cfg) :- - nb.Logical_Switch[ls], - [...] - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid).
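In imperative terms, the rule above is simply a join keyed on the logical switch uuid. A rough Rust analogue — the struct shapes and the helper below are illustrative stand-ins, not OVN code, and unlike DDlog this sketch recomputes from scratch rather than incrementally:

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Illustrative stand-ins for the DDlog records above.
struct McastSwitchCfg {
    enabled: bool,
    querier: bool,
}

struct Switch {
    ls_uuid: u128,
    mcast_cfg: Rc<McastSwitchCfg>, // plays the role of DDlog's Ref<McastSwitchCfg>
}

// "&Switch(...) :- nb.Logical_Switch[ls], mcast_cfg in &McastSwitchCfg(...)"
// expressed as a join: pair every switch uuid with its multicast config.
fn build_switches(
    ls_uuids: &[u128],
    cfgs: &HashMap<u128, Rc<McastSwitchCfg>>,
) -> Vec<Switch> {
    ls_uuids
        .iter()
        .filter_map(|&uuid| {
            cfgs.get(&uuid).map(|cfg| Switch {
                ls_uuid: uuid,
                mcast_cfg: Rc::clone(cfg),
            })
        })
        .collect()
}

fn main() {
    let mut cfgs = HashMap::new();
    cfgs.insert(7u128, Rc::new(McastSwitchCfg { enabled: true, querier: true }));
    let switches = build_switches(&[7u128], &cfgs);
    assert_eq!(switches.len(), 1);
    assert_eq!(switches[0].ls_uuid, 7);
    assert!(switches[0].mcast_cfg.enabled && switches[0].mcast_cfg.querier);
}
```

DDlog maintains this join incrementally: when a `Logical_Switch` row changes, only the affected `&Switch` records are recomputed.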
- -Build state based on information dynamically updated by ``ovn-controller`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Some OVN features rely on information learned by ``ovn-controller`` to -generate ``Logical_Flow`` or other records that control the dataplane. -In the case of IP Multicast, ``ovn-controller`` uses IGMP to learn -multicast groups that are joined by hosts. - -Each ``ovn-controller`` maintains its own set of records to avoid -ownership and concurrency issues with other controllers. If two hosts that -are connected to the same logical switch but reside on different -hypervisors (different ``ovn-controller`` processes) join the same -multicast group G, each of the controllers will create an -``IGMP_Group`` record in the ``OVN Southbound`` database which will -contain a set of ports to which the interested hosts are connected. - -At this point ``ovn-northd-ddlog`` needs to aggregate the per-chassis -IGMP records to generate a single ``Logical_Flow`` for group G. -Moreover, the ports on which the hosts are connected are represented -as references to ``Port_Binding`` records in the database. These also -need to be translated to ``&SwitchPort`` DDlog relations. The -corresponding DDlog operations that need to be performed are: - -- Flatten the ``<IGMP group, ports>`` mapping in order to be able to - do the translation from ``Port_Binding`` to ``&SwitchPort``. For - each ``IGMP_Group`` record in the ``OVN Southbound`` database - generate an individual record of type ``IgmpSwitchGroupPort`` for - each ``Port_Binding`` in the set of ports that joined the - group.
Also, translate the ``Port_Binding`` uuid to the - corresponding ``Logical_Switch_Port`` uuid:: - - relation IgmpSwitchGroupPort( - address: string, - switch : Ref<Switch>, - port : uuid - ) - - IgmpSwitchGroupPort(address, switch, lsp_uuid) :- - sb::IGMP_Group(.address = address, .datapath = igmp_dp_set, - .ports = pb_ports), - var pb_port_uuid = FlatMap(pb_ports), - sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name), - &SwitchPort( - .lsp = nb.Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name}, - .sw = switch). - -- Aggregate the flattened IgmpSwitchGroupPort (implicitly from all - ``ovn-controller`` instances) grouping by address and logical - switch:: - - relation IgmpSwitchMulticastGroup( - address: string, - switch : Ref<Switch>, - ports : Set<uuid> - ) - - IgmpSwitchMulticastGroup(address, switch, ports) :- - IgmpSwitchGroupPort(address, switch, port), - var ports = port.group_by((address, switch)).to_set(). - -At this point we have all the configuration information relevant to -the feature stored in DDlog relations in ``ovn-northd-ddlog`` memory. - -Pitfalls of projections -~~~~~~~~~~~~~~~~~~~~~~~ - -A projection is a join that uses only some of the data in a record. -When the fields that are used have duplicates, the result can be many -"copies" of a record, which DDlog represents internally with an -integer "weight" that counts the number of copies. We don't have a -projection with duplicates in this example, but `lswitch.dl` has many -of them, such as this one:: - - relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) - - LogicalSwitchHasACLs(ls, true) :- - LogicalSwitchACL(ls, _). - - LogicalSwitchHasACLs(ls, false) :- - nb::Logical_Switch(._uuid = ls), - not LogicalSwitchACL(ls, _). - -When multiple projections get joined together, the weights can -overflow, which causes DDlog to malfunction.
The solution is to make -the relation an output relation, which causes DDlog to filter it -through a "distinct" operator that reduces the weights to 1. Thus, -`LogicalSwitchHasACLs` is actually implemented this way:: - - output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) - -For more information, see `Avoiding weight overflow -<https://github.com/vmware/differential-datalog/blob/master/doc/tutorial/tutorial.md#avoiding-weight-overflow>`_ -in the DDlog tutorial. - -Write rules to update output relations --------------------------------------- - -The developer updates output tables by writing rules that generate -``Out_*`` relations. For IP Multicast this means:: - - /* IP_Multicast table (only applicable for Switches). */ - sb::Out_IP_Multicast(._uuid = hash128(cfg.datapath), - .datapath = cfg.datapath, - .enabled = set_singleton(cfg.enabled), - .querier = set_singleton(cfg.querier)) :- - &McastSwitchCfg[cfg]. - -.. note:: ``OVN_Southbound.dl`` also contains an ``IP_Multicast`` - relation with ``input`` qualifier. This relation stores the - current snapshot of the OVSDB table and cannot be written to. - -Generate ``Logical_Flow`` and/or other forwarding records ---------------------------------------------------------- - -At this point we have defined all DDlog relations required to generate -``Logical_Flow``s. All we have to do is write the rules to do so. -For each ``IgmpSwitchMulticastGroup`` we generate a ``Flow`` that has -as action ``"outport = <Multicast_Group>; output;"``:: - - /* Ingress table 17: Add IP multicast flows learnt from IGMP (priority 90). */ - for (IgmpSwitchMulticastGroup(.address = address, .switch = &sw)) { - Flow(.logical_datapath = sw.dpname, - .stage = switch_stage(IN, L2_LKUP), - .priority = 90, - .__match = "eth.mcast && ip4 && ip4.dst == ${address}", - .actions = "outport = \"${address}\"; output;", - .external_ids = map_empty()) - } - -In some cases generating a logical flow is not enough. 
For IGMP we -also need to maintain OVN southbound ``Multicast_Group`` records, -one per IGMP group, storing the corresponding ``Port_Binding`` uuids of -ports where multicast traffic should be sent. This is also relatively -straightforward:: - - /* Create a multicast group for each IGMP group learned by a Switch. - * 'tunnel_key' == 0 triggers an ID allocation later. - */ - sb::Out_Multicast_Group (.datapath = switch.dpname, - .name = address, - .tunnel_key = 0, - .ports = set_map_uuid2name(port_ids)) :- - IgmpSwitchMulticastGroup(address, &switch, port_ids). - -We must also define DDlog relations that will allocate ``tunnel_key`` -values. There are two cases: tunnel keys for records that already -existed in the database are preserved to implement stable id -allocation; new multicast groups need new keys. This kind of -allocation can be tricky, especially for new users of DDlog. OVN -contains multiple instances of allocation, so it's probably worth -reading through the existing cases and following their pattern, and, -if it's still tricky, asking for assistance. - -Appendix A. Additional relations generated by ``ovsdb2ddlog`` -------------------------------------------------------------- - -.. _appendix: - -ovsdb2ddlog generates some extra relations to manage communication -with the OVSDB server. It generates records in the following -relations when rows in OVSDB output tables need to be added, deleted, -or updated. - -In the steady state, when everything is working well, a given record -stays in any one of these relations only briefly: just long enough for -``ovn-northd-ddlog`` to send a transaction to the OVSDB server. When -the OVSDB server applies the update and sends an acknowledgement, this -ordinarily means that these relations become empty, because there are -no longer any further changes to send. - -Thus, a record that persists in one of these relations is a sign of a
One example of such a problem is the database server -rejecting the transactions sent by ``ovn-northd-ddlog``, which might -happen if, for example, a bug in a ``.dl`` file caused some OVSDB -constraint or relational integrity rule to be violated. (Such a -problem can often be diagnosed by looking in the OVSDB server's log.) - -- ``DeltaPlus_IP_Multicast`` used by the DDlog program to track new - records that are not yet added to the database:: - - output relation DeltaPlus_IP_Multicast ( - datapath: uuid_or_string_t, - enabled: Set<bool>, - querier: Set<bool> - ) - -- ``DeltaMinus_IP_Multicast`` used by the DDlog program to track - records that are no longer needed in the database and need to be - removed:: - - output relation DeltaMinus_IP_Multicast ( - _uuid: uuid - ) - -- ``Update_IP_Multicast`` used by the DDlog program to track records - whose fields need to be updated in the database:: - - output relation Update_IP_Multicast ( - _uuid: uuid, - enabled: Set<bool>, - querier: Set<bool> - ) diff --git a/Documentation/tutorials/index.rst b/Documentation/tutorials/index.rst index de90b780a5..0b7f52d0b7 100644 --- a/Documentation/tutorials/index.rst +++ b/Documentation/tutorials/index.rst @@ -45,4 +45,3 @@ vSwitch. ovn-ovsdb-relay ovn-ipsec ovn-interconnection - ddlog-new-feature diff --git a/NEWS b/NEWS index acb3b854fb..b3f40f4f91 100644 --- a/NEWS +++ b/NEWS @@ -10,6 +10,7 @@ Post v23.09.0 external_ids:ovn-openflow-probe-interval configuration option for ovn-controller no longer matters and is ignored. - Enable PMTU discovery on geneve tunnels for E/W traffic. + - ovn-northd-ddlog has been removed. OVN v23.09.0 - 15 Sep 2023 -------------------------- diff --git a/TODO.rst b/TODO.rst index b790a9fadf..09b4be54d3 100644 --- a/TODO.rst +++ b/TODO.rst @@ -86,12 +86,6 @@ OVN To-do List * Packaging for Debian.
-* ovn-northd-ddlog: Calls to warn() and err() from DDlog code would be - better refactored to use the Warning[] relation (and introduce an - Error[] relation once we want to issue some errors that way). This - would be easier with some improvements in DDlog to more easily - output to multiple relations from a single production. - * IP Multicast Relay * When connecting bridged logical switches (localnet) to logical routers diff --git a/acinclude.m4 b/acinclude.m4 index ad3ee9fdf4..1a80dfaa78 100644 --- a/acinclude.m4 +++ b/acinclude.m4 @@ -42,85 +42,6 @@ AC_DEFUN([OVS_ENABLE_WERROR], fi AC_SUBST([SPARSE_WERROR])]) -dnl OVS_CHECK_DDLOG([VERSION]) -dnl -dnl Configure ddlog source tree, checking for the given DDlog VERSION. -dnl VERSION should be a major and minor, e.g. 0.36, which will accept -dnl 0.36.0, 0.36.1, and so on. Omit VERSION to accept any version of -dnl ddlog (which is probably only useful for developers who are trying -dnl different versions, since OVN is currently bound to a particular -dnl DDlog version). 
-AC_DEFUN([OVS_CHECK_DDLOG], [ - AC_ARG_VAR([DDLOG_HOME], [Root of the DDlog installation]) - AC_ARG_WITH( - [ddlog], - [AS_HELP_STRING([--with-ddlog[[=INSTALLDIR|LIBDIR]]], [Enables DDlog])], - [DDLOG_PATH=$PATH - if test "$withval" = yes; then - # --with-ddlog: $DDLOG_HOME must be set - if test -z "$DDLOG_HOME"; then - AC_MSG_ERROR([To build with DDlog, specify the DDlog install or library directory on --with-ddlog or in \$DDLOG_HOME]) - fi - DDLOGLIBDIR=$DDLOG_HOME/lib - test -d "$DDLOG_HOME/bin" && DDLOG_PATH=$DDLOG_HOME/bin - elif test -f "$withval/lib/ddlog_std.dl"; then - # --with-ddlog=INSTALLDIR - DDLOGLIBDIR=$withval/lib - test -d "$withval/bin" && DDLOG_PATH=$withval/bin - elif test -f "$withval/ddlog_std.dl"; then - # --with-ddlog=LIBDIR - DDLOGLIBDIR=$withval/lib - else - AC_MSG_ERROR([$withval does not contain ddlog_std.dl or lib/ddlog_std.dl]) - fi], - [DDLOGLIBDIR=no - DDLOG_PATH=no]) - - AC_MSG_CHECKING([for DDlog library directory]) - AC_MSG_RESULT([$DDLOGLIBDIR]) - if test "$DDLOGLIBDIR" != no; then - AC_ARG_VAR([DDLOG], [path to ddlog binary]) - AC_PATH_PROGS([DDLOG], [ddlog], [none], [$DDLOG_PATH]) - if test X"$DDLOG" = X"none"; then - AC_MSG_ERROR([ddlog is required to build with DDlog]) - fi - - AC_ARG_VAR([OVSDB2DDLOG], [path to ovsdb2ddlog binary]) - AC_PATH_PROGS([OVSDB2DDLOG], [ovsdb2ddlog], [none], [$DDLOG_PATH]) - if test X"$OVSDB2DDLOG" = X"none"; then - AC_MSG_ERROR([ovsdb2ddlog is required to build with DDlog]) - fi - - for tool in "$DDLOG" "$OVSDB2DDLOG"; do - AC_MSG_CHECKING([$tool version]) - $tool --version >&AS_MESSAGE_LOG_FD 2>&1 - tool_version=$($tool --version | sed -n 's/^.* v\([[0-9]][[^ ]]*\).*/\1/p') - AC_MSG_RESULT([$tool_version]) - m4_if([$1], [], [], [ - AS_CASE([$tool_version], - [$1 | $1.*], [], - [*], [AC_MSG_ERROR([DDlog version $1.x is required, but $tool is version $tool_version])])]) - done - - AC_ARG_VAR([CARGO]) - AC_CHECK_PROGS([CARGO], [cargo], [none]) - if test X"$CARGO" = X"none"; then - 
AC_MSG_ERROR([cargo is required to build with DDlog]) - fi - - AC_ARG_VAR([RUSTC]) - AC_CHECK_PROGS([RUSTC], [rustc], [none]) - if test X"$RUSTC" = X"none"; then - AC_MSG_ERROR([rustc is required to build with DDlog]) - fi - - AC_SUBST([DDLOGLIBDIR]) - AC_DEFINE([DDLOG], [1], [Build OVN daemons with ddlog.]) - fi - - AM_CONDITIONAL([DDLOG], [test "$DDLOGLIBDIR" != no]) -]) - dnl Checks for net/if_dl.h. dnl dnl (We use this as a proxy for checking whether we're building on FreeBSD diff --git a/configure.ac b/configure.ac index e8a5edbb25..8284800e74 100644 --- a/configure.ac +++ b/configure.ac @@ -132,7 +132,6 @@ OVS_LIBTOOL_VERSIONS OVS_CHECK_CXX AX_FUNC_POSIX_MEMALIGN OVN_CHECK_UNBOUND -OVS_CHECK_DDLOG_FAST_BUILD OVS_CHECK_INCLUDE_NEXT([stdio.h string.h]) AC_CONFIG_FILES([lib/libovn.sym]) @@ -169,7 +168,6 @@ OVS_CONDITIONAL_CC_OPTION([-Wno-unused-parameter], [HAVE_WNO_UNUSED_PARAMETER]) OVS_ENABLE_WERROR OVS_ENABLE_SPARSE -OVS_CHECK_DDLOG([0.47]) OVS_CHECK_PRAGMA_MESSAGE OVN_CHECK_OVS OVN_CHECK_VIF_PLUG_PROVIDER @@ -177,9 +175,6 @@ OVN_ENABLE_VIF_PLUG OVS_CTAGS_IDENTIFIERS AC_SUBST([OVS_CFLAGS]) AC_SUBST([OVS_LDFLAGS]) -AC_SUBST([DDLOG_EXTRA_FLAGS]) -AC_SUBST([DDLOG_EXTRA_RUSTFLAGS]) -AC_SUBST([DDLOG_NORTHD_LIB_ONLY]) AC_SUBST([ovs_srcdir], ['${OVSDIR}']) AC_SUBST([ovs_builddir], ['${OVSBUILDDIR}']) diff --git a/lib/ovn-util.c b/lib/ovn-util.c index 33105202f2..6ef9cac7f2 100644 --- a/lib/ovn-util.c +++ b/lib/ovn-util.c @@ -304,37 +304,27 @@ bool extract_ip_address(const char *address, struct lport_addresses *laddrs) bool extract_lrp_networks(const struct nbrec_logical_router_port *lrp, struct lport_addresses *laddrs) -{ - return extract_lrp_networks__(lrp->mac, lrp->networks, lrp->n_networks, - laddrs); -} - -/* Separate out the body of 'extract_lrp_networks()' for use from DDlog, - * which does not know the 'nbrec_logical_router_port' type. 
*/ -bool -extract_lrp_networks__(char *mac, char **networks, size_t n_networks, - struct lport_addresses *laddrs) { memset(laddrs, 0, sizeof *laddrs); - if (!eth_addr_from_string(mac, &laddrs->ea)) { + if (!eth_addr_from_string(lrp->mac, &laddrs->ea)) { laddrs->ea = eth_addr_zero; return false; } snprintf(laddrs->ea_s, sizeof laddrs->ea_s, ETH_ADDR_FMT, ETH_ADDR_ARGS(laddrs->ea)); - for (int i = 0; i < n_networks; i++) { + for (int i = 0; i < lrp->n_networks; i++) { ovs_be32 ip4; struct in6_addr ip6; unsigned int plen; char *error; - error = ip_parse_cidr(networks[i], &ip4, &plen); + error = ip_parse_cidr(lrp->networks[i], &ip4, &plen); if (!error) { if (!ip4) { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); - VLOG_WARN_RL(&rl, "bad 'networks' %s", networks[i]); + VLOG_WARN_RL(&rl, "bad 'networks' %s", lrp->networks[i]); continue; } @@ -343,13 +333,13 @@ extract_lrp_networks__(char *mac, char **networks, size_t n_networks, } free(error); - error = ipv6_parse_cidr(networks[i], &ip6, &plen); + error = ipv6_parse_cidr(lrp->networks[i], &ip6, &plen); if (!error) { add_ipv6_netaddr(laddrs, ip6, plen); } else { static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); VLOG_INFO_RL(&rl, "invalid syntax '%s' in networks", - networks[i]); + lrp->networks[i]); free(error); } } @@ -889,21 +879,6 @@ ovn_get_internal_version(void) N_OVNACTS, OVN_INTERNAL_MINOR_VER); } -#ifdef DDLOG -/* Callbacks used by the ddlog northd code to print warnings and errors. */ -void -ddlog_warn(const char *msg) -{ - VLOG_WARN("%s", msg); -} - -void -ddlog_err(const char *msg) -{ - VLOG_ERR("%s", msg); -} -#endif - uint32_t get_tunnel_type(const char *name) { diff --git a/lib/ovn-util.h b/lib/ovn-util.h index bff50dbde9..aa0b3b2fb4 100644 --- a/lib/ovn-util.h +++ b/lib/ovn-util.h @@ -311,11 +311,6 @@ BUILD_ASSERT_DECL( /* The number of tables for the ingress and egress pipelines. 
*/ #define LOG_PIPELINE_LEN 29 -#ifdef DDLOG -void ddlog_warn(const char *msg); -void ddlog_err(const char *msg); -#endif - static inline uint32_t hash_add_in6_addr(uint32_t hash, const struct in6_addr *addr) { diff --git a/lib/stopwatch-names.h b/lib/stopwatch-names.h index 3452cc71cf..4e93c1dc14 100644 --- a/lib/stopwatch-names.h +++ b/lib/stopwatch-names.h @@ -15,9 +15,6 @@ #ifndef STOPWATCH_NAMES_H #define STOPWATCH_NAMES_H 1 -/* In order to not duplicate names for stopwatches between ddlog and non-ddlog - * we define them in a common header file. - */ #define NORTHD_LOOP_STOPWATCH_NAME "ovn-northd-loop" #define OVNNB_DB_RUN_STOPWATCH_NAME "ovnnb_db_run" #define OVNSB_DB_RUN_STOPWATCH_NAME "ovnsb_db_run" diff --git a/m4/ovn.m4 b/m4/ovn.m4 index 902865c585..ebe4c96123 100644 --- a/m4/ovn.m4 +++ b/m4/ovn.m4 @@ -576,19 +576,3 @@ AC_DEFUN([OVN_CHECK_UNBOUND], fi AM_CONDITIONAL([HAVE_UNBOUND], [test "$HAVE_UNBOUND" = yes]) AC_SUBST([HAVE_UNBOUND])]) - -dnl Checks for --enable-ddlog-fast-build and updates DDLOG_EXTRA_RUSTFLAGS. 
-AC_DEFUN([OVS_CHECK_DDLOG_FAST_BUILD], - [AC_ARG_ENABLE( - [ddlog_fast_build], - [AS_HELP_STRING([--enable-ddlog-fast-build], - [Build ddlog programs faster, but generate slower code])], - [case "${enableval}" in - (yes) ddlog_fast_build=true ;; - (no) ddlog_fast_build=false ;; - (*) AC_MSG_ERROR([bad value ${enableval} for --enable-ddlog-fast-build]) ;; - esac], - [ddlog_fast_build=false]) - if $ddlog_fast_build; then - DDLOG_EXTRA_RUSTFLAGS="-C opt-level=z" - fi]) diff --git a/northd/.gitignore b/northd/.gitignore index 0f2b33ae7d..39a6f79887 100644 --- a/northd/.gitignore +++ b/northd/.gitignore @@ -1,6 +1,4 @@ /ovn-northd -/ovn-northd-ddlog /ovn-northd.8 /OVN_Northbound.dl /OVN_Southbound.dl -/ovn_northd_ddlog/ diff --git a/northd/automake.mk b/northd/automake.mk index cf622fc3c9..5d77ca67b7 100644 --- a/northd/automake.mk +++ b/northd/automake.mk @@ -35,114 +35,3 @@ northd_ovn_northd_LDADD = \ man_MANS += northd/ovn-northd.8 EXTRA_DIST += northd/ovn-northd.8.xml CLEANFILES += northd/ovn-northd.8 - -EXTRA_DIST += \ - northd/ovn-nb.dlopts \ - northd/ovn-sb.dlopts \ - northd/ovn.toml \ - northd/ovn.rs \ - northd/bitwise.rs \ - northd/ovsdb2ddlog2c \ - $(ddlog_sources) - -ddlog_sources = \ - northd/ovn_northd.dl \ - northd/lswitch.dl \ - northd/lrouter.dl \ - northd/ipam.dl \ - northd/multicast.dl \ - northd/ovn.dl \ - northd/ovn.rs \ - northd/helpers.dl \ - northd/bitwise.dl \ - northd/copp.dl -ddlog_nodist_sources = \ - northd/OVN_Northbound.dl \ - northd/OVN_Southbound.dl - -if DDLOG -bin_PROGRAMS += northd/ovn-northd-ddlog -northd_ovn_northd_ddlog_SOURCES = northd/ovn-northd-ddlog.c -nodist_northd_ovn_northd_ddlog_SOURCES = \ - northd/ovn-northd-ddlog-sb.inc \ - northd/ovn-northd-ddlog-nb.inc \ - northd/ovn_northd_ddlog/ddlog.h -northd_ovn_northd_ddlog_LDADD = \ - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ - lib/libovn.la \ - $(OVSDB_LIBDIR)/libovsdb.la \ - $(OVS_LIBDIR)/libopenvswitch.la - -nb_opts = $$(cat 
$(srcdir)/northd/ovn-nb.dlopts) -northd/OVN_Northbound.dl: ovn-nb.ovsschema northd/ovn-nb.dlopts - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(nb_opts) -northd/ovn-northd-ddlog-nb.inc: ovn-nb.ovsschema northd/ovn-nb.dlopts northd/ovsdb2ddlog2c - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p nb_ -f $< --output-file $@ $(nb_opts) - -sb_opts = $$(cat $(srcdir)/northd/ovn-sb.dlopts) -northd/OVN_Southbound.dl: ovn-sb.ovsschema northd/ovn-sb.dlopts - $(AM_V_GEN)$(OVSDB2DDLOG) -f $< --output-file $@ $(sb_opts) -northd/ovn-northd-ddlog-sb.inc: ovn-sb.ovsschema northd/ovn-sb.dlopts northd/ovsdb2ddlog2c - $(AM_V_GEN)$(run_python) $(srcdir)/northd/ovsdb2ddlog2c -p sb_ -f $< --output-file $@ $(sb_opts) - -BUILT_SOURCES += \ - northd/ovn-northd-ddlog-sb.inc \ - northd/ovn-northd-ddlog-nb.inc \ - northd/ovn_northd_ddlog/ddlog.h - -northd/ovn_northd_ddlog/ddlog.h: northd/ddlog.stamp - -CARGO_VERBOSE = $(cargo_verbose_$(V)) -cargo_verbose_ = $(cargo_verbose_$(AM_DEFAULT_VERBOSITY)) -cargo_verbose_0 = -cargo_verbose_1 = --verbose - -DDLOGFLAGS = -L $(DDLOGLIBDIR) -L $(builddir)/northd $(DDLOG_EXTRA_FLAGS) - -RUSTFLAGS = \ - -L ../../lib/.libs \ - -L $(OVS_LIBDIR)/.libs \ - $$LIBOPENVSWITCH_DEPS \ - $$LIBOVN_DEPS \ - -Awarnings $(DDLOG_EXTRA_RUSTFLAGS) - -northd/ddlog.stamp: $(ddlog_sources) $(ddlog_nodist_sources) - $(AM_V_GEN)$(DDLOG) -i $< -o $(builddir)/northd $(DDLOGFLAGS) - $(AM_V_at)touch $@ - -NORTHD_LIB = 1 -NORTHD_CLI = 0 - -ddlog_targets = $(northd_lib_$(NORTHD_LIB)) $(northd_cli_$(NORTHD_CLI)) -northd_lib_1 = northd/ovn_northd_ddlog/target/release/libovn_%_ddlog.la -northd_cli_1 = northd/ovn_northd_ddlog/target/release/ovn_%_cli -EXTRA_northd_ovn_northd_DEPENDENCIES = $(northd_cli_$(NORTHD_CLI)) - -cargo_build = $(cargo_build_$(NORTHD_LIB)$(NORTHD_CLI)) -cargo_build_01 = --features command-line --bin ovn_northd_cli -cargo_build_10 = --lib -cargo_build_11 = --features command-line - -libtool_deps = $(srcdir)/build-aux/libtool-deps -$(ddlog_targets): 
northd/ddlog.stamp lib/libovn.la $(OVS_LIBDIR)/libopenvswitch.la - $(AM_V_GEN)LIBOVN_DEPS=`$(libtool_deps) lib/libovn.la` && \ - LIBOPENVSWITCH_DEPS=`$(libtool_deps) $(OVS_LIBDIR)/libopenvswitch.la` && \ - cd northd/ovn_northd_ddlog && \ - RUSTC='$(RUSTC)' RUSTFLAGS="$(RUSTFLAGS)" \ - cargo build --release $(CARGO_VERBOSE) $(cargo_build) --no-default-features --features ovsdb,c_api -endif - -CLEAN_LOCAL += clean-ddlog -clean-ddlog: - rm -rf northd/ovn_northd_ddlog northd/ddlog.stamp - -CLEANFILES += \ - northd/ddlog.stamp \ - northd/ovn_northd_ddlog/ddlog.h \ - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.a \ - northd/ovn_northd_ddlog/target/release/libovn_northd_ddlog.la \ - northd/ovn_northd_ddlog/target/release/ovn_northd_cli \ - northd/OVN_Northbound.dl \ - northd/OVN_Southbound.dl \ - northd/ovn-northd-ddlog-nb.inc \ - northd/ovn-northd-ddlog-sb.inc diff --git a/northd/bitwise.dl b/northd/bitwise.dl deleted file mode 100644 index 877d155a23..0000000000 --- a/northd/bitwise.dl +++ /dev/null @@ -1,272 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/* - * Returns true if and only if 'x' is a power of 2. 
- */ -function is_power_of_two(x: u8): bool { x != 0 and (x & (x - 1)) == 0 } -function is_power_of_two(x: u16): bool { x != 0 and (x & (x - 1)) == 0 } -function is_power_of_two(x: u32): bool { x != 0 and (x & (x - 1)) == 0 } -function is_power_of_two(x: u64): bool { x != 0 and (x & (x - 1)) == 0 } -function is_power_of_two(x: u128): bool { x != 0 and (x & (x - 1)) == 0 } - -/* - * Returns the next power of 2 greater than 'x', or None if that's bigger than the - * type's maximum value. - */ -function next_power_of_two(x: u8): Option<u8> { u8_next_power_of_two(x) } -function next_power_of_two(x: u16): Option<u16> { u16_next_power_of_two(x) } -function next_power_of_two(x: u32): Option<u32> { u32_next_power_of_two(x) } -function next_power_of_two(x: u64): Option<u64> { u64_next_power_of_two(x) } -function next_power_of_two(x: u128): Option<u128> { u128_next_power_of_two(x) } - -extern function u8_next_power_of_two(x: u8): Option<u8> -extern function u16_next_power_of_two(x: u16): Option<u16> -extern function u32_next_power_of_two(x: u32): Option<u32> -extern function u64_next_power_of_two(x: u64): Option<u64> -extern function u128_next_power_of_two(x: u128): Option<u128> - -/* - * Returns the next power of 2 greater than 'x', or 0 if that's bigger than the - * type's maximum value. 
- */ -function wrapping_next_power_of_two(x: u8): u8 { u8_wrapping_next_power_of_two(x) } -function wrapping_next_power_of_two(x: u16): u16 { u16_wrapping_next_power_of_two(x) } -function wrapping_next_power_of_two(x: u32): u32 { u32_wrapping_next_power_of_two(x) } -function wrapping_next_power_of_two(x: u64): u64 { u64_wrapping_next_power_of_two(x) } -function wrapping_next_power_of_two(x: u128): u128 { u128_wrapping_next_power_of_two(x) } - -extern function u8_wrapping_next_power_of_two(x: u8): u8 -extern function u16_wrapping_next_power_of_two(x: u16): u16 -extern function u32_wrapping_next_power_of_two(x: u32): u32 -extern function u64_wrapping_next_power_of_two(x: u64): u64 -extern function u128_wrapping_next_power_of_two(x: u128): u128 - -/* - * Number of 1-bits in the binary representation of 'x'. - */ -function count_ones(x: u8): u32 { u8_count_ones(x) } -function count_ones(x: u16): u32 { u16_count_ones(x) } -function count_ones(x: u32): u32 { u32_count_ones(x) } -function count_ones(x: u64): u32 { u64_count_ones(x) } -function count_ones(x: u128): u32 { u128_count_ones(x) } - -extern function u8_count_ones(x: u8): u32 -extern function u16_count_ones(x: u16): u32 -extern function u32_count_ones(x: u32): u32 -extern function u64_count_ones(x: u64): u32 -extern function u128_count_ones(x: u128): u32 - -/* - * Number of 0-bits in the binary representation of 'x'. - */ -function count_zeros(x: u8): u32 { u8_count_zeros(x) } -function count_zeros(x: u16): u32 { u16_count_zeros(x) } -function count_zeros(x: u32): u32 { u32_count_zeros(x) } -function count_zeros(x: u64): u32 { u64_count_zeros(x) } -function count_zeros(x: u128): u32 { u128_count_zeros(x) } - -extern function u8_count_zeros(x: u8): u32 -extern function u16_count_zeros(x: u16): u32 -extern function u32_count_zeros(x: u32): u32 -extern function u64_count_zeros(x: u64): u32 -extern function u128_count_zeros(x: u128): u32 - -/* - * Number of leading 0-bits in the binary representation of 'x'. 
- */ -function leading_zeros(x: u8): u32 { u8_leading_zeros(x) } -function leading_zeros(x: u16): u32 { u16_leading_zeros(x) } -function leading_zeros(x: u32): u32 { u32_leading_zeros(x) } -function leading_zeros(x: u64): u32 { u64_leading_zeros(x) } -function leading_zeros(x: u128): u32 { u128_leading_zeros(x) } - -extern function u8_leading_zeros(x: u8): u32 -extern function u16_leading_zeros(x: u16): u32 -extern function u32_leading_zeros(x: u32): u32 -extern function u64_leading_zeros(x: u64): u32 -extern function u128_leading_zeros(x: u128): u32 - -/* - * Number of leading 1-bits in the binary representation of 'x'. - */ -function leading_ones(x: u8): u32 { u8_leading_ones(x) } -function leading_ones(x: u16): u32 { u16_leading_ones(x) } -function leading_ones(x: u32): u32 { u32_leading_ones(x) } -function leading_ones(x: u64): u32 { u64_leading_ones(x) } -function leading_ones(x: u128): u32 { u128_leading_ones(x) } - -extern function u8_leading_ones(x: u8): u32 -extern function u16_leading_ones(x: u16): u32 -extern function u32_leading_ones(x: u32): u32 -extern function u64_leading_ones(x: u64): u32 -extern function u128_leading_ones(x: u128): u32 - -/* - * Number of trailing 0-bits in the binary representation of 'x'. - */ -function trailing_zeros(x: u8): u32 { u8_trailing_zeros(x) } -function trailing_zeros(x: u16): u32 { u16_trailing_zeros(x) } -function trailing_zeros(x: u32): u32 { u32_trailing_zeros(x) } -function trailing_zeros(x: u64): u32 { u64_trailing_zeros(x) } -function trailing_zeros(x: u128): u32 { u128_trailing_zeros(x) } - -extern function u8_trailing_zeros(x: u8): u32 -extern function u16_trailing_zeros(x: u16): u32 -extern function u32_trailing_zeros(x: u32): u32 -extern function u64_trailing_zeros(x: u64): u32 -extern function u128_trailing_zeros(x: u128): u32 - -/* - * Number of trailing 1-bits in the binary representation of 'x'.
- */
-function trailing_ones(x: u8): u32 { u8_trailing_ones(x) }
-function trailing_ones(x: u16): u32 { u16_trailing_ones(x) }
-function trailing_ones(x: u32): u32 { u32_trailing_ones(x) }
-function trailing_ones(x: u64): u32 { u64_trailing_ones(x) }
-function trailing_ones(x: u128): u32 { u128_trailing_ones(x) }
-
-extern function u8_trailing_ones(x: u8): u32
-extern function u16_trailing_ones(x: u16): u32
-extern function u32_trailing_ones(x: u32): u32
-extern function u64_trailing_ones(x: u64): u32
-extern function u128_trailing_ones(x: u128): u32
-
-/*
- * Reverses the order of bits in 'x'.
- */
-function reverse_bits(x: u8): u8 { u8_reverse_bits(x) }
-function reverse_bits(x: u16): u16 { u16_reverse_bits(x) }
-function reverse_bits(x: u32): u32 { u32_reverse_bits(x) }
-function reverse_bits(x: u64): u64 { u64_reverse_bits(x) }
-function reverse_bits(x: u128): u128 { u128_reverse_bits(x) }
-
-extern function u8_reverse_bits(x: u8): u8
-extern function u16_reverse_bits(x: u16): u16
-extern function u32_reverse_bits(x: u32): u32
-extern function u64_reverse_bits(x: u64): u64
-extern function u128_reverse_bits(x: u128): u128
-
-/*
- * Reverses the order of bytes in 'x'.
- */
-function swap_bytes(x: u8): u8 { u8_swap_bytes(x) }
-function swap_bytes(x: u16): u16 { u16_swap_bytes(x) }
-function swap_bytes(x: u32): u32 { u32_swap_bytes(x) }
-function swap_bytes(x: u64): u64 { u64_swap_bytes(x) }
-function swap_bytes(x: u128): u128 { u128_swap_bytes(x) }
-
-extern function u8_swap_bytes(x: u8): u8
-extern function u16_swap_bytes(x: u16): u16
-extern function u32_swap_bytes(x: u32): u32
-extern function u64_swap_bytes(x: u64): u64
-extern function u128_swap_bytes(x: u128): u128
-
-/*
- * Converts 'x' from big endian to the machine's native endianness.
- * On a big-endian machine it is a no-op.
- * On a little-endian machine it is equivalent to swap_bytes().
- */
-function from_be(x: u8): u8 { u8_from_be(x) }
-function from_be(x: u16): u16 { u16_from_be(x) }
-function from_be(x: u32): u32 { u32_from_be(x) }
-function from_be(x: u64): u64 { u64_from_be(x) }
-function from_be(x: u128): u128 { u128_from_be(x) }
-
-extern function u8_from_be(x: u8): u8
-extern function u16_from_be(x: u16): u16
-extern function u32_from_be(x: u32): u32
-extern function u64_from_be(x: u64): u64
-extern function u128_from_be(x: u128): u128
-
-/*
- * Converts 'x' from the machine's native endianness to big endian.
- * On a big-endian machine it is a no-op.
- * On a little-endian machine it is equivalent to swap_bytes().
- */
-function to_be(x: u8): u8 { u8_to_be(x) }
-function to_be(x: u16): u16 { u16_to_be(x) }
-function to_be(x: u32): u32 { u32_to_be(x) }
-function to_be(x: u64): u64 { u64_to_be(x) }
-function to_be(x: u128): u128 { u128_to_be(x) }
-
-extern function u8_to_be(x: u8): u8
-extern function u16_to_be(x: u16): u16
-extern function u32_to_be(x: u32): u32
-extern function u64_to_be(x: u64): u64
-extern function u128_to_be(x: u128): u128
-
-/*
- * Converts 'x' from little endian to the machine's native endianness.
- * On a little-endian machine it is a no-op.
- * On a big-endian machine it is equivalent to swap_bytes().
- */
-function from_le(x: u8): u8 { u8_from_le(x) }
-function from_le(x: u16): u16 { u16_from_le(x) }
-function from_le(x: u32): u32 { u32_from_le(x) }
-function from_le(x: u64): u64 { u64_from_le(x) }
-function from_le(x: u128): u128 { u128_from_le(x) }
-
-extern function u8_from_le(x: u8): u8
-extern function u16_from_le(x: u16): u16
-extern function u32_from_le(x: u32): u32
-extern function u64_from_le(x: u64): u64
-extern function u128_from_le(x: u128): u128
-
-/*
- * Converts 'x' from the machine's native endianness to little endian.
- * On a little-endian machine it is a no-op.
- * On a big-endian machine it is equivalent to swap_bytes().
- */
-function to_le(x: u8): u8 { u8_to_le(x) }
-function to_le(x: u16): u16 { u16_to_le(x) }
-function to_le(x: u32): u32 { u32_to_le(x) }
-function to_le(x: u64): u64 { u64_to_le(x) }
-function to_le(x: u128): u128 { u128_to_le(x) }
-
-extern function u8_to_le(x: u8): u8
-extern function u16_to_le(x: u16): u16
-extern function u32_to_le(x: u32): u32
-extern function u64_to_le(x: u64): u64
-extern function u128_to_le(x: u128): u128
-
-/*
- * Rotates the bits in 'x' left by 'n' positions.
- */
-function rotate_left(x: u8, n: u32): u8 { u8_rotate_left(x, n) }
-function rotate_left(x: u16, n: u32): u16 { u16_rotate_left(x, n) }
-function rotate_left(x: u32, n: u32): u32 { u32_rotate_left(x, n) }
-function rotate_left(x: u64, n: u32): u64 { u64_rotate_left(x, n) }
-function rotate_left(x: u128, n: u32): u128 { u128_rotate_left(x, n) }
-
-extern function u8_rotate_left(x: u8, n: u32): u8
-extern function u16_rotate_left(x: u16, n: u32): u16
-extern function u32_rotate_left(x: u32, n: u32): u32
-extern function u64_rotate_left(x: u64, n: u32): u64
-extern function u128_rotate_left(x: u128, n: u32): u128
-
-/*
- * Rotates the bits in 'x' right by 'n' positions.
- */
-function rotate_right(x: u8, n: u32): u8 { u8_rotate_right(x, n) }
-function rotate_right(x: u16, n: u32): u16 { u16_rotate_right(x, n) }
-function rotate_right(x: u32, n: u32): u32 { u32_rotate_right(x, n) }
-function rotate_right(x: u64, n: u32): u64 { u64_rotate_right(x, n) }
-function rotate_right(x: u128, n: u32): u128 { u128_rotate_right(x, n) }
-
-extern function u8_rotate_right(x: u8, n: u32): u8
-extern function u16_rotate_right(x: u16, n: u32): u16
-extern function u32_rotate_right(x: u32, n: u32): u32
-extern function u64_rotate_right(x: u64, n: u32): u64
-extern function u128_rotate_right(x: u128, n: u32): u128
diff --git a/northd/bitwise.rs b/northd/bitwise.rs
deleted file mode 100644
index 97c0ecfa36..0000000000
--- a/northd/bitwise.rs
+++ /dev/null
@@ -1,133 +0,0 @@
-/*
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at:
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
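[Reviewer note: each DDlog `function` in bitwise.dl above dispatches to a width-specific Rust extern in bitwise.rs below. The only non-trivial one is `wrapping_next_power_of_two`, which stable Rust lacks; the deleted file emulates it via `checked_next_power_of_two()`. A minimal standalone sketch of that trick (the `main` is just for illustration):]

```rust
// Stable-Rust stand-in for nightly's wrapping_next_power_of_two():
// checked_next_power_of_two() yields None on overflow, which the
// wrapping variant maps to 0, as the deleted bitwise.rs does.
pub fn u8_wrapping_next_power_of_two(x: u8) -> u8 {
    x.checked_next_power_of_two().unwrap_or(0)
}

fn main() {
    assert_eq!(u8_wrapping_next_power_of_two(3), 4);     // rounds up to 4
    assert_eq!(u8_wrapping_next_power_of_two(128), 128); // already a power of two
    assert_eq!(u8_wrapping_next_power_of_two(129), 0);   // 256 overflows u8, wraps to 0
}
```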
- */ - -use ddlog_std::option2std; - -pub fn u8_next_power_of_two(x: &u8) -> ddlog_std::Option<u8> { - option2std(x.checked_next_power_of_two()) -} -pub fn u16_next_power_of_two(x: &u16) -> ddlog_std::Option<u16> { - option2std(x.checked_next_power_of_two()) -} -pub fn u32_next_power_of_two(x: &u32) -> ddlog_std::Option<u32> { - option2std(x.checked_next_power_of_two()) -} -pub fn u64_next_power_of_two(x: &u64) -> ddlog_std::Option<u64> { - option2std(x.checked_next_power_of_two()) -} -pub fn u128_next_power_of_two(x: &u128) -> ddlog_std::Option<u128> { - option2std(x.checked_next_power_of_two()) -} - -// Rust has wrapping_next_power_of_two() in nightly. We implement it -// ourselves to avoid the dependency. -pub fn u8_wrapping_next_power_of_two(x: &u8) -> u8 { - x.checked_next_power_of_two().unwrap_or(0) -} -pub fn u16_wrapping_next_power_of_two(x: &u16) -> u16 { - x.checked_next_power_of_two().unwrap_or(0) -} -pub fn u32_wrapping_next_power_of_two(x: &u32) -> u32 { - x.checked_next_power_of_two().unwrap_or(0) -} -pub fn u64_wrapping_next_power_of_two(x: &u64) -> u64 { - x.checked_next_power_of_two().unwrap_or(0) -} -pub fn u128_wrapping_next_power_of_two(x: &u128) -> u128 { - x.checked_next_power_of_two().unwrap_or(0) -} - -pub fn u8_count_ones(x: &u8) -> u32 { x.count_ones() } -pub fn u16_count_ones(x: &u16) -> u32 { x.count_ones() } -pub fn u32_count_ones(x: &u32) -> u32 { x.count_ones() } -pub fn u64_count_ones(x: &u64) -> u32 { x.count_ones() } -pub fn u128_count_ones(x: &u128) -> u32 { x.count_ones() } - -pub fn u8_count_zeros(x: &u8) -> u32 { x.count_zeros() } -pub fn u16_count_zeros(x: &u16) -> u32 { x.count_zeros() } -pub fn u32_count_zeros(x: &u32) -> u32 { x.count_zeros() } -pub fn u64_count_zeros(x: &u64) -> u32 { x.count_zeros() } -pub fn u128_count_zeros(x: &u128) -> u32 { x.count_zeros() } - -pub fn u8_leading_ones(x: &u8) -> u32 { x.leading_ones() } -pub fn u16_leading_ones(x: &u16) -> u32 { x.leading_ones() } -pub fn u32_leading_ones(x: &u32) -> 
u32 { x.leading_ones() } -pub fn u64_leading_ones(x: &u64) -> u32 { x.leading_ones() } -pub fn u128_leading_ones(x: &u128) -> u32 { x.leading_ones() } - -pub fn u8_leading_zeros(x: &u8) -> u32 { x.leading_zeros() } -pub fn u16_leading_zeros(x: &u16) -> u32 { x.leading_zeros() } -pub fn u32_leading_zeros(x: &u32) -> u32 { x.leading_zeros() } -pub fn u64_leading_zeros(x: &u64) -> u32 { x.leading_zeros() } -pub fn u128_leading_zeros(x: &u128) -> u32 { x.leading_zeros() } - -pub fn u8_trailing_ones(x: &u8) -> u32 { x.trailing_ones() } -pub fn u16_trailing_ones(x: &u16) -> u32 { x.trailing_ones() } -pub fn u32_trailing_ones(x: &u32) -> u32 { x.trailing_ones() } -pub fn u64_trailing_ones(x: &u64) -> u32 { x.trailing_ones() } -pub fn u128_trailing_ones(x: &u128) -> u32 { x.trailing_ones() } - -pub fn u8_trailing_zeros(x: &u8) -> u32 { x.trailing_zeros() } -pub fn u16_trailing_zeros(x: &u16) -> u32 { x.trailing_zeros() } -pub fn u32_trailing_zeros(x: &u32) -> u32 { x.trailing_zeros() } -pub fn u64_trailing_zeros(x: &u64) -> u32 { x.trailing_zeros() } -pub fn u128_trailing_zeros(x: &u128) -> u32 { x.trailing_zeros() } - -pub fn u8_reverse_bits(x: &u8) -> u8 { x.reverse_bits() } -pub fn u16_reverse_bits(x: &u16) -> u16 { x.reverse_bits() } -pub fn u32_reverse_bits(x: &u32) -> u32 { x.reverse_bits() } -pub fn u64_reverse_bits(x: &u64) -> u64 { x.reverse_bits() } -pub fn u128_reverse_bits(x: &u128) -> u128 { x.reverse_bits() } - -pub fn u8_swap_bytes(x: &u8) -> u8 { x.swap_bytes() } -pub fn u16_swap_bytes(x: &u16) -> u16 { x.swap_bytes() } -pub fn u32_swap_bytes(x: &u32) -> u32 { x.swap_bytes() } -pub fn u64_swap_bytes(x: &u64) -> u64 { x.swap_bytes() } -pub fn u128_swap_bytes(x: &u128) -> u128 { x.swap_bytes() } - -pub fn u8_from_be(x: &u8) -> u8 { u8::from_be(*x) } -pub fn u16_from_be(x: &u16) -> u16 { u16::from_be(*x) } -pub fn u32_from_be(x: &u32) -> u32 { u32::from_be(*x) } -pub fn u64_from_be(x: &u64) -> u64 { u64::from_be(*x) } -pub fn u128_from_be(x: &u128) -> u128 { 
u128::from_be(*x) } - -pub fn u8_to_be(x: &u8) -> u8 { x.to_be() } -pub fn u16_to_be(x: &u16) -> u16 { x.to_be() } -pub fn u32_to_be(x: &u32) -> u32 { x.to_be() } -pub fn u64_to_be(x: &u64) -> u64 { x.to_be() } -pub fn u128_to_be(x: &u128) -> u128 { x.to_be() } - -pub fn u8_from_le(x: &u8) -> u8 { u8::from_le(*x) } -pub fn u16_from_le(x: &u16) -> u16 { u16::from_le(*x) } -pub fn u32_from_le(x: &u32) -> u32 { u32::from_le(*x) } -pub fn u64_from_le(x: &u64) -> u64 { u64::from_le(*x) } -pub fn u128_from_le(x: &u128) -> u128 { u128::from_le(*x) } - -pub fn u8_to_le(x: &u8) -> u8 { x.to_le() } -pub fn u16_to_le(x: &u16) -> u16 { x.to_le() } -pub fn u32_to_le(x: &u32) -> u32 { x.to_le() } -pub fn u64_to_le(x: &u64) -> u64 { x.to_le() } -pub fn u128_to_le(x: &u128) -> u128 { x.to_le() } - -pub fn u8_rotate_left(x: &u8, n: &u32) -> u8 { x.rotate_left(*n) } -pub fn u16_rotate_left(x: &u16, n: &u32) -> u16 { x.rotate_left(*n) } -pub fn u32_rotate_left(x: &u32, n: &u32) -> u32 { x.rotate_left(*n) } -pub fn u64_rotate_left(x: &u64, n: &u32) -> u64 { x.rotate_left(*n) } -pub fn u128_rotate_left(x: &u128, n: &u32) -> u128 { x.rotate_left(*n) } - -pub fn u8_rotate_right(x: &u8, n: &u32) -> u8 { x.rotate_right(*n) } -pub fn u16_rotate_right(x: &u16, n: &u32) -> u16 { x.rotate_right(*n) } -pub fn u32_rotate_right(x: &u32, n: &u32) -> u32 { x.rotate_right(*n) } -pub fn u64_rotate_right(x: &u64, n: &u32) -> u64 { x.rotate_right(*n) } -pub fn u128_rotate_right(x: &u128, n: &u32) -> u128 { x.rotate_right(*n) } diff --git a/northd/copp.dl b/northd/copp.dl deleted file mode 100644 index c4f3b7e70c..0000000000 --- a/northd/copp.dl +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -function cOPP_ARP() : istring { i"arp" } -function cOPP_ARP_RESOLVE() : istring { i"arp-resolve" } -function cOPP_DHCPV4_OPTS() : istring { i"dhcpv4-opts" } -function cOPP_DHCPV6_OPTS() : istring { i"dhcpv6-opts" } -function cOPP_DNS() : istring { i"dns" } -function cOPP_EVENT_ELB() : istring { i"event-elb" } -function cOPP_ICMP4_ERR() : istring { i"icmp4-error" } -function cOPP_ICMP6_ERR() : istring { i"icmp6-error" } -function cOPP_IGMP() : istring { i"igmp" } -function cOPP_ND_NA() : istring { i"nd-na" } -function cOPP_ND_NS() : istring { i"nd-ns" } -function cOPP_ND_NS_RESOLVE() : istring { i"nd-ns-resolve" } -function cOPP_ND_RA_OPTS() : istring { i"nd-ra-opts" } -function cOPP_TCP_RESET() : istring { i"tcp-reset" } -function cOPP_REJECT() : istring { i"reject" } -function cOPP_BFD() : istring { i"bfd" } diff --git a/northd/helpers.dl b/northd/helpers.dl deleted file mode 100644 index 50e137d99e..0000000000 --- a/northd/helpers.dl +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -import OVN_Northbound as nb -import OVN_Southbound as sb -import ovsdb -import ovn - - -output relation Warning[string] - -/* Switch-to-router logical port connections */ -relation SwitchRouterPeer(lsp: uuid, lsp_name: istring, lrp: uuid) -SwitchRouterPeer(lsp, lsp_name, lrp) :- - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = i"router", .options = options), - Some{var router_port} = options.get(i"router-port"), - &nb::Logical_Router_Port(.name = router_port, ._uuid = lrp). - -function get_bool_def(m: Map<istring, istring>, k: istring, def: bool): bool = { - m.get(k) - .and_then(|x| match (x.to_lowercase()) { - "false" -> Some{false}, - "true" -> Some{true}, - _ -> None - }) - .unwrap_or(def) -} - -function get_int_def(m: Map<istring, istring>, k: istring, def: integer): integer = { - m.get(k).and_then(|v| v.ival().parse_dec_u64()).unwrap_or(def) -} - -function clamp(x: 'A, range: ('A, 'A)): 'A { - (var min, var max) = range; - if (x < min) { - min - } else if (x > max) { - max - } else { - x - } -} - -function ha_chassis_group_uuid(uuid: uuid): uuid { hash128("hacg" ++ uuid) } -function ha_chassis_uuid(chassis_name: string, nb_chassis_uuid: uuid): uuid { hash128("hac" ++ chassis_name ++ nb_chassis_uuid) } - -/* Dummy relation with one empty row, useful for putting into antijoins. */ -relation Unit() -Unit(). diff --git a/northd/ipam.dl b/northd/ipam.dl deleted file mode 100644 index 600c55f5c8..0000000000 --- a/northd/ipam.dl +++ /dev/null @@ -1,499 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/* - * IPAM (IP address management) and MACAM (MAC address management) - * - * IPAM generally stands for IP address management. In non-virtualized - * world, MAC addresses come with the hardware. But, with virtualized - * workloads, they need to be assigned and managed. This function - * does both IP address management (ipam) and MAC address management - * (macam). - */ - -import OVN_Northbound as nb -import ovsdb -import allocate -import helpers -import ovn -import lswitch -import lrouter - -function mAC_ADDR_SPACE(): bit<48> = 48'hffffff - -/* - * IPv4 dynamic address allocation. - */ - -/* - * The fixed portions of a request for a dynamic LSP address. - */ -typedef dynamic_address_request = DynamicAddressRequest{ - mac: Option<eth_addr>, - ip4: Option<in_addr>, - ip6: Option<in6_addr> -} -function parse_dynamic_address_request(s: string): Option<dynamic_address_request> { - var tokens = string_split(s, " "); - var n = tokens.len(); - if (n < 1 or n > 3) { - return None - }; - - var t0 = tokens.nth(0).unwrap_or(""); - var t1 = tokens.nth(1).unwrap_or(""); - var t2 = tokens.nth(2).unwrap_or(""); - if (t0 == "dynamic") { - if (n == 1) { - Some{DynamicAddressRequest{None, None, None}} - } else if (n == 2) { - match (ip46_parse(t1)) { - Some{IPv4{ipv4}} -> Some{DynamicAddressRequest{None, Some{ipv4}, None}}, - Some{IPv6{ipv6}} -> Some{DynamicAddressRequest{None, None, Some{ipv6}}}, - _ -> None - } - } else if (n == 3) { - match ((ip_parse(t1), ipv6_parse(t2))) { - (Some{ipv4}, Some{ipv6}) -> Some{DynamicAddressRequest{None, Some{ipv4}, Some{ipv6}}}, - _ -> None - } - } else { - None - } - } else if (n == 2 and t1 == "dynamic") { - match (eth_addr_from_string(t0)) { - Some{mac} -> Some{DynamicAddressRequest{Some{mac}, None, None}}, - _ -> None - } - } else { - None - } -} - -/* SwitchIPv4ReservedAddress - keeps track of statically reserved IPv4 addresses 
- * for each switch whose subnet option is set, including: - * (1) first and last (multicast) address in the subnet range - * (2) addresses from `other_config.exclude_ips` - * (3) port addresses in lsp.addresses, except "unknown" addresses, addresses of - * "router" ports, dynamic addresses - * (4) addresses associated with router ports peered with the switch. - * (5) static IP component of "dynamic" `lsp.addresses`. - * - * Addresses are kept in host-endian format (i.e., bit<32> vs in_addr). - */ -relation SwitchIPv4ReservedAddress(lswitch: uuid, addr: bit<32>) - -/* Add reserved address groups (1) and (2). */ -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, - .addr = addr) :- - sw in &Switch(.subnet = Some{(_, _, start_ipv4, total_ipv4s)}), - var exclude_ips = { - var exclude_ips = set_singleton(start_ipv4); - exclude_ips.insert(start_ipv4 + total_ipv4s - 1); - match (map_get(sw.other_config, i"exclude_ips")) { - None -> exclude_ips, - Some{exclude_ip_list} -> match (parse_ip_list(exclude_ip_list.ival())) { - Left{err} -> { - warn("logical switch ${uuid2str(sw._uuid)}: bad exclude_ips (${err})"); - exclude_ips - }, - Right{ranges} -> { - for (rng in ranges) { - (var ip_start, var ip_end) = rng; - var start = ip_start.a; - var end = match (ip_end) { - None -> start, - Some{ip} -> ip.a - }; - start = max(start_ipv4, start); - end = min(start_ipv4 + total_ipv4s - 1, end); - if (end >= start) { - for (addr in range_vec(start, end+1, 1)) { - exclude_ips.insert(addr) - } - } else { - warn("logical switch ${uuid2str(sw._uuid)}: excluded addresses not in subnet") - } - }; - exclude_ips - } - } - } - }, - var addr = FlatMap(exclude_ips). - -/* Add reserved address group (3). 
*/ -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, - .addr = addr) :- - SwitchPortStaticAddresses( - .port = &SwitchPort{ - .sw = &Switch{._uuid = ls_uuid, - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, - .peer = None}, - .addrs = lport_addrs - ), - var addrs = { - var addrs = set_empty(); - for (addr in lport_addrs.ipv4_addrs) { - var addr_host_endian = addr.addr.a; - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { - addrs.insert(addr_host_endian) - } else () - }; - addrs - }, - var addr = FlatMap(addrs). - -/* Add reserved address group (4) */ -SwitchIPv4ReservedAddress(.lswitch = ls_uuid, - .addr = addr) :- - &SwitchPort( - .sw = &Switch{._uuid = ls_uuid, - .subnet = Some{(_, _, start_ipv4, total_ipv4s)}}, - .peer = Some{rport}), - var addrs = { - var addrs = set_empty(); - for (addr in rport.networks.ipv4_addrs) { - var addr_host_endian = addr.addr.a; - if (addr_host_endian >= start_ipv4 and addr_host_endian < start_ipv4 + total_ipv4s) { - addrs.insert(addr_host_endian) - } else () - }; - addrs - }, - var addr = FlatMap(addrs). - -/* Add reserved address group (5) */ -SwitchIPv4ReservedAddress(.lswitch = sw._uuid, - .addr = ip_addr.a) :- - &SwitchPort(.sw = sw, .lsp = lsp, .static_dynamic_ipv4 = Some{ip_addr}). - -/* Aggregate all reserved addresses for each switch. */ -relation SwitchIPv4ReservedAddresses(lswitch: uuid, addrs: Set<bit<32>>) - -SwitchIPv4ReservedAddresses(lswitch, addrs) :- - SwitchIPv4ReservedAddress(lswitch, addr), - var addrs = addr.group_by(lswitch).to_set(). - -SwitchIPv4ReservedAddresses(lswitch_uuid, set_empty()) :- - &nb::Logical_Switch(._uuid = lswitch_uuid), - not SwitchIPv4ReservedAddress(lswitch_uuid, _). 
- -/* Allocate dynamic IP addresses for ports that require them: - */ -relation SwitchPortAllocatedIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) - -SwitchPortAllocatedIPv4DynAddress(lsport, dyn_addr) :- - /* Aggregate all ports of a switch that need a dynamic IP address */ - port in &SwitchPort(.needs_dynamic_ipv4address = true, - .sw = sw), - var switch_id = sw._uuid, - var ports = port.group_by(switch_id).to_vec(), - SwitchIPv4ReservedAddresses(switch_id, reserved_addrs), - /* Allocate dynamic addresses only for ports that don't have a dynamic address - * or have one that is no longer valid. */ - var dyn_addresses = { - var used_addrs = reserved_addrs; - var assigned_addrs = vec_empty(); - var need_addr = vec_empty(); - (var start_ipv4, var total_ipv4s) = match (ports.nth(0)) { - None -> { (0, 0) } /* no ports with dynamic addresses */, - Some{port0} -> { - match (port0.sw.subnet) { - None -> { - abort("needs_dynamic_ipv4address is true, but subnet is undefined in port ${uuid2str(port0.lsp._uuid)}"); - (0, 0) - }, - Some{(_, _, start_ipv4, total_ipv4s)} -> (start_ipv4, total_ipv4s) - } - } - }; - for (port in ports) { - //warn("port(${port.lsp._uuid})"); - match (port.dynamic_address) { - None -> { - /* no dynamic address yet -- allocate one now */ - //warn("need_addr(${port.lsp._uuid})"); - need_addr.push(port.lsp._uuid) - }, - Some{dynaddr} -> { - match (dynaddr.ipv4_addrs.nth(0)) { - None -> { - /* dynamic address does not have IPv4 component -- allocate one now */ - //warn("need_addr(${port.lsp._uuid})"); - need_addr.push(port.lsp._uuid) - }, - Some{addr} -> { - var haddr = addr.addr.a; - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { - need_addr.push(port.lsp._uuid) - } else if (used_addrs.contains(haddr)) { - need_addr.push(port.lsp._uuid); - warn("Duplicate IP set on switch ${port.lsp.name}: ${addr.addr}") - } else { - /* has valid dynamic address -- record it in used_addrs */ - used_addrs.insert(haddr); - 
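[Reviewer note: the `allocate_opt()` helper used by the rule above lives in allocate.dl, outside this hunk, so the sketch below is a hedged approximation of its role with illustrative names: for each port that still needs an address, take the lowest free value in [start, end], or None if the range is exhausted.]

```rust
use std::collections::HashSet;

// First-fit allocator sketch in the spirit of allocate_opt():
// reserves each handed-out value so later requests skip it.
fn allocate_first_fit(
    used: &mut HashSet<u32>,
    need: usize,
    start: u32,
    end: u32,
) -> Vec<Option<u32>> {
    (0..need)
        .map(|_| {
            let addr = (start..=end).find(|a| !used.contains(a));
            if let Some(a) = addr {
                used.insert(a); // mark as reserved
            }
            addr
        })
        .collect()
}

fn main() {
    // .10 is already reserved (e.g. the subnet's first address); three ports ask.
    let mut used: HashSet<u32> = [10u32].into_iter().collect();
    assert_eq!(allocate_first_fit(&mut used, 3, 10, 12),
               vec![Some(11), Some(12), None]);
}
```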
assigned_addrs.push((port.lsp._uuid, Some{haddr})) - } - } - } - } - } - }; - assigned_addrs.append(allocate_opt(used_addrs, need_addr, start_ipv4, start_ipv4 + total_ipv4s - 1)); - assigned_addrs - }, - var port_address = FlatMap(dyn_addresses), - (var lsport, var dyn_addr_bits) = port_address, - var dyn_addr = dyn_addr_bits.map(|x| InAddr{x}). - -/* Compute new dynamic IPv4 address assignment: - * - port does not need dynamic IP - use static_dynamic_ip if any - * - a new address has been allocated for port - use this address - * - otherwise, use existing dynamic IP - */ -relation SwitchPortNewIPv4DynAddress(lsport: uuid, dyn_addr: Option<in_addr>) - -SwitchPortNewIPv4DynAddress(lsp._uuid, ip_addr) :- - &SwitchPort(.sw = sw, - .needs_dynamic_ipv4address = false, - .static_dynamic_ipv4 = static_dynamic_ipv4, - .lsp = lsp), - var ip_addr = { - match (static_dynamic_ipv4) { - None -> { None }, - Some{addr} -> { - match (sw.subnet) { - None -> { None }, - Some{(_, _, start_ipv4, total_ipv4s)} -> { - var haddr = addr.a; - if (haddr < start_ipv4 or haddr >= start_ipv4 + total_ipv4s) { - /* new static ip is not valid */ - None - } else { - Some{addr} - } - } - } - } - } - }. - -SwitchPortNewIPv4DynAddress(lsport, addr) :- - SwitchPortAllocatedIPv4DynAddress(lsport, addr). - -/* - * Dynamic MAC address allocation. - */ - -function get_mac_prefix(options: Map<istring,istring>, uuid: uuid) : bit<48> -{ - match (map_get(options, i"mac_prefix").and_then(|pref| pref.ival().scan_eth_addr_prefix())) { - Some{prefix} -> prefix.ha, - None -> eth_addr_pseudorandom(uuid, 16'h1234).ha & 48'hffffff000000 - } -} -function put_mac_prefix(options: mut Map<istring,istring>, mac_prefix: bit<48>) -{ - map_insert(options, i"mac_prefix", - string_substr(to_string(EthAddr{mac_prefix}), 0, 8).intern()) -} -relation MacPrefix(mac_prefix: bit<48>) -MacPrefix(get_mac_prefix(options, uuid)) :- - nb::NB_Global(._uuid = uuid, .options = options). 
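[Reviewer note: `get_mac_prefix()` above keeps only the top 24 bits (the OUI) of a 48-bit MAC when no `mac_prefix` option is set, via the `& 48'hffffff000000` mask. The masking step on its own, as a hedged illustration (the real code works on DDlog's `bit<48>`/`eth_addr` types, not `u64`):]

```rust
// Keep the OUI (top 24 bits) of a 48-bit MAC held in a u64, mirroring
// the 48'hffffff000000 mask in the deleted get_mac_prefix().
fn mac_prefix(mac48: u64) -> u64 {
    mac48 & 0xffff_ff00_0000
}

fn main() {
    assert_eq!(mac_prefix(0x0a0b_0c0d_0e0f), 0x0a0b_0c00_0000);
    // Masking is idempotent: a prefix maps to itself.
    assert_eq!(mac_prefix(mac_prefix(0x0a0b_0c0d_0e0f)), 0x0a0b_0c00_0000);
}
```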
- -/* ReservedMACAddress - keeps track of statically reserved MAC addresses. - * (1) static addresses in `lsp.addresses` - * (2) static MAC component of "dynamic" `lsp.addresses`. - * (3) addresses associated with router ports peered with the switch. - * - * Addresses are kept in host-endian format. - */ -relation ReservedMACAddress(addr: bit<48>) - -/* Add reserved address group (1). */ -ReservedMACAddress(.addr = lport_addrs.ea.ha) :- - SwitchPortStaticAddresses(.addrs = lport_addrs). - -/* Add reserved address group (2). */ -ReservedMACAddress(.addr = mac_addr.ha) :- - &SwitchPort(.lsp = lsp, .static_dynamic_mac = Some{mac_addr}). - -/* Add reserved address group (3). */ -ReservedMACAddress(.addr = rport.networks.ea.ha) :- - &SwitchPort(.peer = Some{rport}). - -/* Aggregate all reserved MAC addresses. */ -relation ReservedMACAddresses(addrs: Set<bit<48>>) - -ReservedMACAddresses(addrs) :- - ReservedMACAddress(addr), - var addrs = addr.group_by(()).to_set(). - -/* Handle case when `ReservedMACAddress` is empty */ -ReservedMACAddresses(set_empty()) :- - // NB_Global should have exactly one record, so we can - // use it as a base for antijoin. - nb::NB_Global(), - not ReservedMACAddress(_). - -/* Allocate dynamic MAC addresses for ports that require them: - * Case 1: port doesn't need dynamic MAC (i.e., does not have dynamic address or - * has a dynamic address with a static MAC). 
- * Case 2: needs dynamic MAC, has dynamic MAC, has existing dynamic MAC with the right prefix - * needs dynamic MAC, does not have fixed dynamic MAC, doesn't have existing dynamic MAC with correct prefix - */ -relation SwitchPortAllocatedMACDynAddress(lsport: uuid, dyn_addr: bit<48>) - -SwitchPortAllocatedMACDynAddress(lsport, dyn_addr), -SwitchPortDuplicateMACAddress(dup_addrs) :- - /* Group all ports that need a dynamic IP address */ - port in &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp), - SwitchPortNewIPv4DynAddress(lsp._uuid, ipv4_addr), - var ports = (port, ipv4_addr).group_by(()).to_vec(), - ReservedMACAddresses(reserved_addrs), - MacPrefix(mac_prefix), - (var dyn_addresses, var dup_addrs) = { - var used_addrs = reserved_addrs; - var need_addr = vec_empty(); - var dup_addrs = set_empty(); - for (port_with_addr in ports) { - (var port, var ipv4_addr) = port_with_addr; - var hint = match (ipv4_addr) { - None -> Some { mac_prefix | 1 }, - Some{addr} -> { - /* The tentative MAC's suffix will be in the interval (1, 0xfffffe). */ - var mac_suffix: bit<24> = addr.a[23:0] % ((mAC_ADDR_SPACE() - 1)[23:0]) + 1; - Some{ mac_prefix | (24'd0 ++ mac_suffix) } - } - }; - match (port.dynamic_address) { - None -> { - /* no dynamic address yet -- allocate one now */ - need_addr.push((port.lsp._uuid, hint)) - }, - Some{dynaddr} -> { - var haddr = dynaddr.ea.ha; - if ((haddr ^ mac_prefix) >> 24 != 0) { - /* existing dynamic address is no longer valid */ - need_addr.push((port.lsp._uuid, hint)) - } else if (used_addrs.contains(haddr)) { - dup_addrs.insert(dynaddr.ea); - } else { - /* has valid dynamic address -- record it in used_addrs */ - used_addrs.insert(haddr) - } - } - } - }; - // FIXME: if a port has a dynamic address that is no longer valid, and - // we are unable to allocate a new address, the current behavior is to - // keep the old invalid address. It should probably be changed to - // removing the old address. 
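[Reviewer note: the hint computation above seeds the MAC allocator from the port's IPv4 address, mapping its low 24 bits into [1, 0xfffffe], the usable MACAM suffix range. As a standalone sketch (the function name is illustrative; `mAC_ADDR_SPACE()` is 48'hffffff in the deleted ipam.dl):]

```rust
const MAC_ADDR_SPACE: u32 = 0xffffff;

// suffix = (ip & 0xffffff) % (MAC_ADDR_SPACE - 1) + 1
// keeps the tentative MAC suffix strictly inside (0, 0xffffff),
// i.e. in [1, 0xfffffe], mirroring the deleted hint arithmetic.
fn mac_suffix_hint(ipv4_host_endian: u32) -> u32 {
    (ipv4_host_endian & 0xffffff) % (MAC_ADDR_SPACE - 1) + 1
}

fn main() {
    // Every input lands in the valid suffix interval.
    for ip in [0u32, 1, 0xfffffd, 0xfffffe, 0xffffff, 0xdead_beef] {
        let s = mac_suffix_hint(ip);
        assert!((1..=0xfffffe).contains(&s));
    }
    assert_eq!(mac_suffix_hint(0), 1);
}
```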
- // FIXME: OVN allocates MAC addresses by seeding them with IPv4 address. - // Implement a custom allocation function that simulates this behavior. - var res = allocate_with_hint(used_addrs, need_addr, mac_prefix + 1, mac_prefix + mAC_ADDR_SPACE() - 1); - var res_strs = vec_empty(); - for (x in res) { - (var uuid, var addr) = x; - res_strs.push("${uuid2str(uuid)}: ${EthAddr{addr}}") - }; - (res, dup_addrs) - }, - var port_address = FlatMap(dyn_addresses), - (var lsport, var dyn_addr) = port_address. - -relation SwitchPortDuplicateMACAddress(dup_addrs: Set<eth_addr>) -Warning["Duplicate MAC set: ${ea}"] :- - SwitchPortDuplicateMACAddress(dup_addrs), - var ea = FlatMap(dup_addrs). - -/* Compute new dynamic MAC address assignment: - * - port does not need dynamic MAC - use `static_dynamic_mac` - * - a new address has been allocated for port - use this address - * - otherwise, use existing dynamic MAC - */ -relation SwitchPortNewMACDynAddress(lsport: uuid, dyn_addr: Option<eth_addr>) - -SwitchPortNewMACDynAddress(lsp._uuid, mac_addr) :- - &SwitchPort(.needs_dynamic_macaddress = false, - .lsp = lsp, - .sw = sw, - .static_dynamic_mac = static_dynamic_mac), - var mac_addr = match (static_dynamic_mac) { - None -> None, - Some{addr} -> { - if (sw.subnet.is_some() or sw.ipv6_prefix.is_some() or - map_get(sw.other_config, i"mac_only") == Some{i"true"}) { - Some{addr} - } else { - None - } - } - }. - -SwitchPortNewMACDynAddress(lsport, Some{EthAddr{addr}}) :- - SwitchPortAllocatedMACDynAddress(lsport, addr). - -SwitchPortNewMACDynAddress(lsp._uuid, addr) :- - &SwitchPort(.needs_dynamic_macaddress = true, .lsp = lsp, .dynamic_address = cur_address), - not SwitchPortAllocatedMACDynAddress(lsp._uuid, _), - var addr = match (cur_address) { - None -> None, - Some{dynaddr} -> Some{dynaddr.ea} - }. - -/* - * Dynamic IPv6 address allocation. 
- * `needs_dynamic_ipv6address` -> mac.to_ipv6_eui64(ipv6_prefix) - */ -relation SwitchPortNewDynamicAddress(port: Intern<SwitchPort>, address: Option<lport_addresses>) - -SwitchPortNewDynamicAddress(port, None) :- - port in &SwitchPort(.lsp = lsp), - SwitchPortNewMACDynAddress(lsp._uuid, None). - -SwitchPortNewDynamicAddress(port, lport_address) :- - port in &SwitchPort(.lsp = lsp, - .sw = sw, - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, - .static_dynamic_ipv6 = static_dynamic_ipv6), - SwitchPortNewMACDynAddress(lsp._uuid, Some{mac_addr}), - SwitchPortNewIPv4DynAddress(lsp._uuid, opt_ip4_addr), - var ip6_addr = match ((static_dynamic_ipv6, needs_dynamic_ipv6address, sw.ipv6_prefix)) { - (Some{ipv6}, _, _) -> " ${ipv6}", - (_, true, Some{prefix}) -> " ${mac_addr.to_ipv6_eui64(prefix)}", - _ -> "" - }, - var ip4_addr = match (opt_ip4_addr) { - None -> "", - Some{ip4} -> " ${ip4}" - }, - var addr_string = "${mac_addr}${ip6_addr}${ip4_addr}", - var lport_address = extract_addresses(addr_string). - - -///* If there's more than one dynamic addresses in port->addresses, log a warning -// and only allocate the first dynamic address */ -// -// VLOG_WARN_RL(&rl, "More than one dynamic address " -// "configured for logical switch port '%s'", -// nbsp->name); -// -////>> * MAC addresses suffixes in OUIs managed by OVN"s MACAM (MAC Address -////>> Management) system, in the range 1...0xfffffe. -////>> * IPv4 addresses in ranges managed by OVN's IPAM (IP Address Management) -////>> system. The range varies depending on the size of the subnet. -////>> -////>> Are these `dynamic_addresses` in OVN_Northbound.Logical_Switch_Port`? diff --git a/northd/lrouter.dl b/northd/lrouter.dl deleted file mode 100644 index 0e4308eb5e..0000000000 --- a/northd/lrouter.dl +++ /dev/null @@ -1,947 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -import OVN_Northbound as nb -import OVN_Southbound as sb -import graph as graph -import multicast -import ovsdb -import ovn -import helpers -import lswitch -import set - -function is_enabled(lr: nb::Logical_Router): bool { is_enabled(lr.enabled) } -function is_enabled(lrp: Intern<nb::Logical_Router_Port>): bool { is_enabled(lrp.enabled) } -function is_enabled(rp: RouterPort): bool { rp.lrp.is_enabled() } -function is_enabled(rp: Intern<RouterPort>): bool { rp.lrp.is_enabled() } - -/* default logical flow priority for distributed routes */ -function dROUTE_PRIO(): bit<32> = 400 - -/* LogicalRouterPortCandidate. - * - * Each row pairs a logical router port with its logical router, but without - * checking that the logical router port is on only one logical router. - * - * (Use LogicalRouterPort instead, which guarantees uniqueness.) */ -relation LogicalRouterPortCandidate(lrp_uuid: uuid, lr_uuid: uuid) -LogicalRouterPortCandidate(lrp_uuid, lr_uuid) :- - nb::Logical_Router(._uuid = lr_uuid, .ports = ports), - var lrp_uuid = FlatMap(ports). -Warning[message] :- - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), - lrs.size() > 1, - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), - var message = "Bad configuration: logical router port ${lrp.name} belongs " - "to more than one logical router". - -/* Each row means 'lport' is in 'lrouter' (and only that lrouter).
*/ -relation LogicalRouterPort(lport: uuid, lrouter: uuid) -LogicalRouterPort(lrp_uuid, lr_uuid) :- - LogicalRouterPortCandidate(lrp_uuid, lr_uuid), - var lrs = lr_uuid.group_by(lrp_uuid).to_set(), - lrs.size() == 1, - Some{var lr_uuid} = lrs.nth(0). - -/* - * Peer routers. - * - * Each row in the relation indicates that routers 'a' and 'b' can reach - * each other directly through router ports. - * - * This relation is symmetric: if (a,b) then (b,a). - * This relation is antireflexive: if (a,b) then a != b. - * - * Routers aren't peers if they can reach each other only through logical - * switch ports (that's the ReachableLogicalRouter table). - */ -relation PeerLogicalRouter(a: uuid, b: uuid) -PeerLogicalRouter(lrp_uuid, peer._uuid) :- - LogicalRouterPort(lrp_uuid, _), - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), - Some{var peer_name} = lrp.peer, - peer in &nb::Logical_Router_Port(.name = peer_name), - peer.peer == Some{lrp.name}, // 'peer' must point back to 'lrp' - lrp_uuid != peer._uuid. // No reflexive pointers. - -/* - * First-hop routers. - * - * Each row indicates that 'lrouter' is a first-hop logical router for - * 'lswitch', that is, that a "cable" directly connects 'lrouter' and - * 'lswitch'. - * - * A switch can have multiple first-hop routers. */ -relation FirstHopLogicalRouter(lrouter: uuid, lswitch: uuid) -FirstHopLogicalRouter(lrouter, lswitch) :- - LogicalRouterPort(lrp_uuid, lrouter), - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid, .peer = None), - LogicalSwitchRouterPort(lsp_uuid, lrp.name, lswitch). - -relation LogicalSwitchRouterPort(lsp: uuid, lsp_router_port: istring, ls: uuid) -LogicalSwitchRouterPort(lsp, lsp_router_port, ls) :- - LogicalSwitchPort(lsp, ls), - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router", .options = options), - Some{var lsp_router_port} = options.get(i"router-port"). - -/* Undirected edges connecting one router and another. - * This is a building block for ConnectedLogicalRouter. 
*/ -relation LogicalRouterEdge(a: uuid, b: uuid) -LogicalRouterEdge(a, b) :- - FirstHopLogicalRouter(a, ls), - FirstHopLogicalRouter(b, ls), - a <= b. -LogicalRouterEdge(a, b) :- PeerLogicalRouter(a, b). -function edge_from(e: LogicalRouterEdge): uuid = { e.a } -function edge_to(e: LogicalRouterEdge): uuid = { e.b } - -/* - * Sets of routers such that packets can transit directly or indirectly among - * any of the routers in a set. Any given router is in exactly one set. - * - * Each row (set, elem) identifies the membership of router with UUID 'elem' in - * set 'set', where 'set' is the minimum UUID across all its elements. - * - * We implement this using the graph transformer because there is no - * way to implement "connected components" in raw DDlog that avoids O(n**2) - * blowup in the number of nodes in a component. - */ -relation ConnectedLogicalRouter[(uuid, uuid)] -apply graph::ConnectedComponents(LogicalRouterEdge, edge_from, edge_to) - -> (ConnectedLogicalRouter) - -// ha_chassis_group and gateway_chassis may not both be present. -Warning[message] :- - lrp in &nb::Logical_Router_Port(), - lrp.ha_chassis_group.is_some(), - not lrp.gateway_chassis.is_empty(), - var message = "Both ha_chassis_group and gateway_chassis configured on " - "port ${lrp.name}; ignoring the latter". - -// A distributed gateway port cannot also be an L3 gateway router. -Warning[message] :- - lrp in &nb::Logical_Router_Port(), - lrp.ha_chassis_group.is_some() or not lrp.gateway_chassis.is_empty(), - lrp.options.contains_key(i"chassis"), - var message = "Bad configuration: distributed gateway port configured on " - "port ${lrp.name} on L3 gateway router". - -/* Distributed gateway ports. - * - * Each row means 'lrp' is a distributed gateway port on 'lr_uuid'. - * - * A logical router can have multiple distributed gateway ports.
*/ -relation DistributedGatewayPort(lrp: Intern<nb::Logical_Router_Port>, - lr_uuid: uuid, cr_lrp_uuid: uuid) - -// lrp._uuid is already in use; generate a new UUID by hashing it. -DistributedGatewayPort(lrp, lr_uuid, hash128(lrp_uuid)) :- - lr in nb::Logical_Router(._uuid = lr_uuid), - LogicalRouterPort(lrp_uuid, lr._uuid), - lrp in &nb::Logical_Router_Port(._uuid = lrp_uuid), - not lrp.options.contains_key(i"chassis"), - var has_hcg = lrp.ha_chassis_group.is_some(), - var has_gc = not lrp.gateway_chassis.is_empty(), - has_hcg or has_gc. - -/* HAChassis is an abstraction over nb::Gateway_Chassis and nb::HA_Chassis, which - * are different ways to represent the same configuration. Each row is - * effectively one HA_Chassis record. (Usually, we could associate each - * row with a particular 'lr_uuid', but it's permissible for more than one - * logical router to use a HA chassis group, so we omit it so that multiple - * references get merged.) - * - * nb::Gateway_Chassis has an "options" column that this omits because - * nb::HA_Chassis doesn't have anything similar. That's OK because no options - * were ever defined. */ -relation HAChassis(hacg_uuid: uuid, - hac_uuid: uuid, - chassis_name: istring, - priority: integer, - external_ids: Map<istring,istring>) -HAChassis(ha_chassis_group_uuid(lrp._uuid), gw_chassis_uuid, - chassis_name, priority, external_ids) :- - DistributedGatewayPort(.lrp = lrp), - lrp.ha_chassis_group == None, - var gw_chassis_uuid = FlatMap(lrp.gateway_chassis), - nb::Gateway_Chassis(._uuid = gw_chassis_uuid, - .chassis_name = chassis_name, - .priority = priority, - .external_ids = eids), - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). 
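[Editorial note: the deleted `ConnectedLogicalRouter` relation above is produced by the `graph::ConnectedComponents` transformer, labeling each component with its minimum member UUID, because raw DDlog cannot express connected components without O(n**2) blowup. As a reference for how that labeling behaves, here is a hedged union-find sketch in Python; the router IDs and edges are illustrative, not taken from the patch.]

```python
# Union-find sketch of the connected-components labeling performed by the
# deleted graph::ConnectedComponents transformer on LogicalRouterEdge.
# Router IDs and edges below are illustrative placeholders.

def connected_components(nodes, edges):
    """Return {node: representative}, where the representative is the
    minimum node in that node's component (as in ConnectedLogicalRouter)."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            # Always keep the smaller root, so each tree's root stays the
            # minimum element of its component.
            parent[max(ra, rb)] = min(ra, rb)

    return {n: find(n) for n in nodes}

routers = ["lr1", "lr2", "lr3", "lr4"]
# lr1-lr2 peer directly; lr2-lr3 share a switch; lr4 is isolated.
components = connected_components(routers, [("lr1", "lr2"), ("lr2", "lr3")])
```

Every union makes the smaller of the two roots the new root, so by induction the root of each tree is the minimum UUID in its component, matching the `(set, elem)` rows described in the comment.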
-HAChassis(ha_chassis_group_uuid(ha_chassis_group._uuid), ha_chassis_uuid, - chassis_name, priority, external_ids) :- - DistributedGatewayPort(.lrp = lrp), - Some{var hac_group_uuid} = lrp.ha_chassis_group, - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), - var ha_chassis_uuid = FlatMap(ha_chassis_group.ha_chassis), - nb::HA_Chassis(._uuid = ha_chassis_uuid, - .chassis_name = chassis_name, - .priority = priority, - .external_ids = eids), - var external_ids = eids.insert_imm(i"chassis-name", chassis_name). - -/* HAChassisGroup is an abstraction for sb::HA_Chassis_Group that papers over - * the two southbound ways to configure it via nb::Gateway_Chassis and - * nb::HA_Chassis. The former configuration method does not provide a name or - * external_ids for the group (only for individual chassis), so we generate - * them. - * - * (Usually, we could associate each row with a particular 'lr_uuid', but it's - * permissible for more than one logical router to use a HA chassis group, so - * we omit it so that multiple references get merged.) - */ -relation HAChassisGroup(uuid: uuid, - name: istring, - external_ids: Map<istring,istring>) -HAChassisGroup(ha_chassis_group_uuid(lrp._uuid), lrp.name, map_empty()) :- - DistributedGatewayPort(.lrp = lrp), - lrp.ha_chassis_group == None, - not lrp.gateway_chassis.is_empty(). -HAChassisGroup(ha_chassis_group_uuid(hac_group_uuid), - name, external_ids) :- - DistributedGatewayPort(.lrp = lrp), - Some{var hac_group_uuid} = lrp.ha_chassis_group, - nb::HA_Chassis_Group(._uuid = hacg_uuid, - .name = name, - .external_ids = external_ids). - -/* Each row maps from a distributed gateway logical router port to the name of - * its HAChassisGroup. - * This level of indirection is needed because multiple distributed gateway - * logical router ports are allowed to reference a given HAChassisGroup.
*/ -relation DistributedGatewayPortHAChassisGroup( - lrp: Intern<nb::Logical_Router_Port>, - hacg_uuid: uuid) -DistributedGatewayPortHAChassisGroup(lrp, ha_chassis_group_uuid(lrp._uuid)) :- - DistributedGatewayPort(.lrp = lrp), - lrp.ha_chassis_group == None, - lrp.gateway_chassis.size() > 0. -DistributedGatewayPortHAChassisGroup(lrp, - ha_chassis_group_uuid(hac_group_uuid)) :- - DistributedGatewayPort(.lrp = lrp), - Some{var hac_group_uuid} = lrp.ha_chassis_group, - nb::HA_Chassis_Group(._uuid = hac_group_uuid). - - -/* For each router port, tracks whether it's a redirect port of its router */ -relation RouterPortIsRedirect(lrp: uuid, is_redirect: bool) -RouterPortIsRedirect(lrp, true) :- DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). -RouterPortIsRedirect(lrp, false) :- - &nb::Logical_Router_Port(._uuid = lrp), - not DistributedGatewayPort(&nb::Logical_Router_Port{._uuid = lrp}, _, _). - -/* - * LogicalRouterDGWPorts maps from each logical router UUID - * to the logical router's set of distributed gateway (or redirect) ports. */ -relation LogicalRouterDGWPorts( - lr_uuid: uuid, - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>) -LogicalRouterDGWPorts(lr_uuid, l3dgw_ports) :- - DistributedGatewayPort(lrp, lr_uuid, _), - var l3dgw_ports = lrp.group_by(lr_uuid).to_vec(). -LogicalRouterDGWPorts(lr_uuid, vec_empty()) :- - lr in nb::Logical_Router(), - var lr_uuid = lr._uuid, - not DistributedGatewayPort(_, lr_uuid, _). 
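[Editorial note: the HAChassis/HAChassisGroup rules above normalize two equivalent NB configuration forms, `gateway_chassis` and `ha_chassis_group`, into one HA chassis group, deriving a deterministic group UUID when only per-chassis rows exist. A rough Python sketch of that normalization; the dict layout is illustrative and `uuid.uuid5` merely stands in for the deleted `ha_chassis_group_uuid()` helper, whose real hash differs.]

```python
import uuid

# Stand-in namespace for deriving deterministic group UUIDs; the deleted
# ha_chassis_group_uuid() helper used a different hash.
NS = uuid.UUID("00000000-0000-0000-0000-000000000000")

def ha_chassis_group_for_port(lrp):
    """Normalize the two NB configuration forms into one HA chassis group.
    ha_chassis_group takes precedence over gateway_chassis (northd warns
    and ignores the latter when both are set)."""
    if lrp.get("ha_chassis_group") is not None:
        group = lrp["ha_chassis_group"]
        gid = uuid.uuid5(NS, str(group["uuid"]))
        members = group["ha_chassis"]
    elif lrp.get("gateway_chassis"):
        # No NB group exists, so derive one named after the port itself.
        gid = uuid.uuid5(NS, str(lrp["uuid"]))
        members = lrp["gateway_chassis"]
    else:
        return None  # not a distributed gateway port
    return gid, sorted(members, key=lambda m: -m["priority"])
```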
- -typedef ExceptionalExtIps = AllowedExtIps{ips: Intern<nb::Address_Set>} - | ExemptedExtIps{ips: Intern<nb::Address_Set>} - -typedef NAT = NAT{ - nat: Intern<nb::NAT>, - external_ip: v46_ip, - external_mac: Option<eth_addr>, - exceptional_ext_ips: Option<ExceptionalExtIps> -} - -relation LogicalRouterNAT0( - lr: uuid, - nat: Intern<nb::NAT>, - external_ip: v46_ip, - external_mac: Option<eth_addr>) -LogicalRouterNAT0(lr, nat, external_ip, external_mac) :- - nb::Logical_Router(._uuid = lr, .nat = nats), - var nat_uuid = FlatMap(nats), - nat in &nb::NAT(._uuid = nat_uuid), - Some{var external_ip} = ip46_parse(nat.external_ip.ival()), - var external_mac = match (nat.external_mac) { - Some{s} -> eth_addr_from_string(s.ival()), - None -> None - }. -Warning["Bad ip address ${nat.external_ip} in nat configuration for router ${lr_name}."] :- - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), - var nat_uuid = FlatMap(nats), - nat in &nb::NAT(._uuid = nat_uuid), - None = ip46_parse(nat.external_ip.ival()). -Warning["Bad MAC address ${s} in nat configuration for router ${lr_name}."] :- - nb::Logical_Router(._uuid = lr, .nat = nats, .name = lr_name), - var nat_uuid = FlatMap(nats), - nat in &nb::NAT(._uuid = nat_uuid), - Some{var s} = nat.external_mac, - None = eth_addr_from_string(s.ival()). - -relation LogicalRouterNAT(lr: uuid, nat: NAT) -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, None}) :- - LogicalRouterNAT0(lr, nat, external_ip, external_mac), - nat.allowed_ext_ips == None, - nat.exempted_ext_ips == None. -LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{AllowedExtIps{__as}}}) :- - LogicalRouterNAT0(lr, nat, external_ip, external_mac), - nat.exempted_ext_ips == None, - Some{var __as_uuid} = nat.allowed_ext_ips, - __as in &nb::Address_Set(._uuid = __as_uuid). 
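[Editorial note: the LogicalRouterNAT0 rules above accept a NAT entry only when its `external_ip` parses as an IP address and its optional `external_mac` parses as an Ethernet address, warning otherwise, and a later rule drops entries that set both allowed and exempted external IPs. A rough Python equivalent of that validation; the dict keys mirror the NB schema columns, but the helper itself is hypothetical.]

```python
import ipaddress
import re

MAC_RE = re.compile(r"^[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}$")

def validate_nat(nat):
    """Return (external_ip, external_mac) on success, or a warning string,
    roughly mirroring the deleted LogicalRouterNAT0/LogicalRouterNAT rules."""
    try:
        ip = ipaddress.ip_address(nat["external_ip"])
    except ValueError:
        return "Bad ip address %s in nat configuration." % nat["external_ip"]
    mac = nat.get("external_mac")
    if mac is not None and not MAC_RE.match(mac):
        return "Bad MAC address %s in nat configuration." % mac
    if nat.get("allowed_ext_ips") and nat.get("exempted_ext_ips"):
        # Mutually exclusive, as in the Warning rule below the NAT relation.
        return "NAT rule not applied: both allowed and exempt external ips set"
    return (ip, mac)
```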
-LogicalRouterNAT(lr, NAT{nat, external_ip, external_mac, Some{ExemptedExtIps{__as}}}) :- - LogicalRouterNAT0(lr, nat, external_ip, external_mac), - nat.allowed_ext_ips == None, - Some{var __as_uuid} = nat.exempted_ext_ips, - __as in &nb::Address_Set(._uuid = __as_uuid). -Warning["NAT rule: ${nat._uuid} not applied, since " - "both allowed and exempt external ips set"] :- - LogicalRouterNAT0(lr, nat, _, _), - nat.allowed_ext_ips.is_some() and nat.exempted_ext_ips.is_some(). - -relation LogicalRouterNATs(lr: uuid, nat: Vec<NAT>) - -LogicalRouterNATs(lr, nats) :- - LogicalRouterNAT(lr, nat), - var nats = nat.group_by(lr).to_vec(). - -LogicalRouterNATs(lr, vec_empty()) :- - nb::Logical_Router(._uuid = lr), - not LogicalRouterNAT(lr, _). - -function get_force_snat_ip(options: Map<istring,istring>, key_type: istring): Set<v46_ip> = -{ - var ips = set_empty(); - match (options.get(i"${key_type}_force_snat_ip")) { - None -> (), - Some{s} -> { - for (token in s.split(" ")) { - match (ip46_parse(token)) { - Some{ip} -> ips.insert(ip), - _ -> () // XXX warn - } - }; - } - }; - ips -} - -function has_force_snat_ip(options: Map<istring, istring>, key_type: istring): bool { - not get_force_snat_ip(options, key_type).is_empty() -} - -function lb_force_snat_router_ip(lr_options: Map<istring, istring>): bool { - lr_options.get(i"lb_force_snat_ip") == Some{i"router_ip"} and - lr_options.contains_key(i"chassis") -} - -typedef LBForceSNAT = NoForceSNAT - | ForceSNAT - | SkipSNAT - -function snat_for_lb(lr_options: Map<istring, istring>, lb: Intern<nb::Load_Balancer>): LBForceSNAT { - if (lb.options.get_bool_def(i"skip_snat", false)) { - return SkipSNAT - }; - if (not get_force_snat_ip(lr_options, i"lb").is_empty() or lb_force_snat_router_ip(lr_options)) { - return ForceSNAT - }; - return NoForceSNAT -} - -/* For each router, collect the set of IPv4 and IPv6 addresses used for SNAT, - * which includes: - * - * - dnat_force_snat_addrs - * - lb_force_snat_addrs - * - IP addresses used in
the router's attached NAT rules - * - * This is like init_nat_entries() in northd.c. */ -relation LogicalRouterSnatIP(lr: uuid, snat_ip: v46_ip, nat: Option<NAT>) -LogicalRouterSnatIP(lr._uuid, force_snat_ip, None) :- - lr in nb::Logical_Router(), - var dnat_force_snat_ips = get_force_snat_ip(lr.options, i"dnat"), - var lb_force_snat_ips = if (lb_force_snat_router_ip(lr.options)) { - set_empty() - } else { - get_force_snat_ip(lr.options, i"lb") - }, - var force_snat_ip = FlatMap(dnat_force_snat_ips.union(lb_force_snat_ips)). -LogicalRouterSnatIP(lr, snat_ip, Some{nat}) :- - LogicalRouterNAT(lr, nat@NAT{.nat = &nb::NAT{.__type = i"snat"}, .external_ip = snat_ip}). - -function group_to_setunionmap(g: Group<'K1, ('K2,Set<'V>)>): Map<'K2,Set<'V>> { - var map = map_empty(); - for ((entry, _) in g) { - (var key, var value) = entry; - match (map.get(key)) { - None -> map.insert(key, value), - Some{old_value} -> map.insert(key, old_value.union(value)) - } - }; - map -} -relation LogicalRouterSnatIPs(lr: uuid, snat_ips: Map<v46_ip, Set<NAT>>) -LogicalRouterSnatIPs(lr, snat_ips) :- - LogicalRouterSnatIP(lr, snat_ip, nat), - var snat_ips = (snat_ip, nat.to_set()).group_by(lr).group_to_setunionmap(). -LogicalRouterSnatIPs(lr._uuid, map_empty()) :- - lr in nb::Logical_Router(), - not LogicalRouterSnatIP(.lr = lr._uuid). - -relation LogicalRouterLB(lr: uuid, lb: Intern<LoadBalancer>) -LogicalRouterLB(lr, lb) :- - nb::Logical_Router(._uuid = lr, .load_balancer = lbs), - var lb_uuid = FlatMap(lbs), - lb in &LoadBalancer(.lb = &nb::Load_Balancer{._uuid = lb_uuid}). - -relation LogicalRouterLBs(lr: uuid, lb: Vec<Intern<LoadBalancer>>) - -LogicalRouterLBs(lr, lbs) :- - LogicalRouterLB(lr, lb), - var lbs = lb.group_by(lr).to_vec(). - -LogicalRouterLBs(lr, vec_empty()) :- - nb::Logical_Router(._uuid = lr), - not LogicalRouterLB(lr, _). - -// LogicalRouterCopp maps from each LR to its collection of Copp meters, -// dropping any Copp meter whose meter name doesn't exist. 
-relation LogicalRouterCopp(lr: uuid, meters: Map<istring,istring>) -LogicalRouterCopp(lr, meters) :- LogicalRouterCopp0(lr, meters). -LogicalRouterCopp(lr, map_empty()) :- - nb::Logical_Router(._uuid = lr), - not LogicalRouterCopp0(lr, _). - -relation LogicalRouterCopp0(lr: uuid, meters: Map<istring,istring>) -LogicalRouterCopp0(lr, meters) :- - nb::Logical_Router(._uuid = lr, .copp = Some{copp_uuid}), - nb::Copp(._uuid = copp_uuid, .meters = meters), - var entry = FlatMap(meters), - (var copp_id, var meter_name) = entry, - &nb::Meter(.name = meter_name), - var meters = (copp_id, meter_name).group_by(lr).to_map(). - -/* Router relation collects all attributes of a logical router. - * - * `l3dgw_ports` - optional redirect ports (see `DistributedGatewayPort`) - * `is_gateway` - true iff the router is a gateway router. Together with - * `l3dgw_port`, this flag affects the generation of various flows - * related to NAT and load balancing. - * `learn_from_arp_request` - whether ARP requests to addresses on the router - * should always be learned - */ - -function chassis_redirect_name(port_name: istring): string = "cr-${port_name}" - -typedef LoadBalancer = LoadBalancer { - lb: Intern<nb::Load_Balancer>, - ipv4s: Set<istring>, - ipv6s: Set<istring>, - routable: bool -} - -relation LoadBalancer[Intern<LoadBalancer>] -LoadBalancer[LoadBalancer{lb, ipv4s, ipv6s, routable}.intern()] :- - nb::Load_Balancer[lb], - var routable = lb.options.get_bool_def(i"add_route", false), - (var ipv4s, var ipv6s) = { - var ipv4s = set_empty(); - var ipv6s = set_empty(); - for ((vip, _) in lb.vips) { - /* node->key contains IP:port or just IP. */ - match (ip_address_and_port_from_lb_key(vip.ival())) { - None -> (), - Some{(IPv4{ipv4}, _)} -> ipv4s.insert(i"${ipv4}"), - Some{(IPv6{ipv6}, _)} -> ipv6s.insert(i"${ipv6}"), - } - }; - (ipv4s, ipv6s) - }. - -typedef Router = Router { - /* Fields copied from nb::Logical_Router. 
*/ - _uuid: uuid, - name: istring, - policies: Set<uuid>, - enabled: Option<bool>, - nat: Set<uuid>, - options: Map<istring,istring>, - external_ids: Map<istring,istring>, - - /* Additional computed fields. */ - l3dgw_ports: Vec<Intern<nb::Logical_Router_Port>>, - is_gateway: bool, - nats: Vec<NAT>, - snat_ips: Map<v46_ip, Set<NAT>>, - mcast_cfg: Intern<McastRouterCfg>, - learn_from_arp_request: bool, - force_lb_snat: bool, - copp: Map<istring, istring>, -} - -relation Router[Intern<Router>] - -Router[Router{ - ._uuid = lr._uuid, - .name = lr.name, - .policies = lr.policies, - .enabled = lr.enabled, - .nat = lr.nat, - .options = lr.options, - .external_ids = lr.external_ids, - - .l3dgw_ports = l3dgw_ports, - .is_gateway = lr.options.contains_key(i"chassis"), - .nats = nats, - .snat_ips = snat_ips, - .mcast_cfg = mcast_cfg, - .learn_from_arp_request = learn_from_arp_request, - .force_lb_snat = force_lb_snat, - .copp = copp}.intern()] :- - lr in nb::Logical_Router(), - lr.is_enabled(), - LogicalRouterDGWPorts(lr._uuid, l3dgw_ports), - LogicalRouterNATs(lr._uuid, nats), - LogicalRouterSnatIPs(lr._uuid, snat_ips), - LogicalRouterCopp(lr._uuid, copp), - mcast_cfg in &McastRouterCfg(.datapath = lr._uuid), - var learn_from_arp_request = lr.options.get_bool_def(i"always_learn_from_arp_request", true), - var force_lb_snat = lb_force_snat_router_ip(lr.options). 
- -typedef LogicalRouterLBIPs = LogicalRouterLBIPs { - lr: uuid, - lb_ipv4s_routable: Set<istring>, - lb_ipv4s_unroutable: Set<istring>, - lb_ipv6s_routable: Set<istring>, - lb_ipv6s_unroutable: Set<istring>, -} - -relation LogicalRouterLBIPs[Intern<LogicalRouterLBIPs>] - -LogicalRouterLBIPs[LogicalRouterLBIPs{ - .lr = lr_uuid, - .lb_ipv4s_routable = lb_ipv4s_routable, - .lb_ipv4s_unroutable = lb_ipv4s_unroutable, - .lb_ipv6s_routable = lb_ipv6s_routable, - .lb_ipv6s_unroutable = lb_ipv6s_unroutable - }.intern() -] :- - LogicalRouterLBs(lr_uuid, lbs), - (var lb_ipv4s_routable, var lb_ipv4s_unroutable, - var lb_ipv6s_routable, var lb_ipv6s_unroutable) = { - var lb_ipv4s_routable = set_empty(); - var lb_ipv4s_unroutable = set_empty(); - var lb_ipv6s_routable = set_empty(); - var lb_ipv6s_unroutable = set_empty(); - for (lb in lbs) { - if (lb.routable) { - lb_ipv4s_routable = lb_ipv4s_routable.union(lb.ipv4s); - lb_ipv6s_routable = lb_ipv6s_routable.union(lb.ipv6s); - } else { - lb_ipv4s_unroutable = lb_ipv4s_unroutable.union(lb.ipv4s); - lb_ipv6s_unroutable = lb_ipv6s_unroutable.union(lb.ipv6s); - } - }; - (lb_ipv4s_routable, lb_ipv4s_unroutable, - lb_ipv6s_routable, lb_ipv6s_unroutable) - }. - -/* Router - to - LB-uuid */ -relation RouterLB(router: Intern<Router>, lb_uuid: uuid) - -RouterLB(router, lb.lb._uuid) :- - LogicalRouterLB(lr_uuid, lb), - router in &Router(._uuid = lr_uuid). - -/* Like RouterLB, but only includes gateway routers. */ -relation GWRouterLB(router: Intern<Router>, lb_uuid: uuid) - -GWRouterLB(router, lb_uuid) :- - RouterLB(router, lb_uuid), - router.l3dgw_ports.len() > 0 or router.is_gateway. - -/* Router-to-router logical port connections */ -relation RouterRouterPeer(rport1: uuid, rport2: uuid, rport2_name: istring) - -RouterRouterPeer(rport1, rport2, peer_name) :- - &nb::Logical_Router_Port(._uuid = rport1, .peer = peer), - Some{var peer_name} = peer, - &nb::Logical_Router_Port(._uuid = rport2, .name = peer_name). 
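[Editorial note: the LogicalRouterLBIPs relation above folds a router's load balancers into four sets: routable and unroutable VIPs, split by IPv4/IPv6, where "routable" comes from the LB's `options:add_route` flag. A hedged Python sketch of that partitioning; the dict layout and VIP strings are illustrative.]

```python
import ipaddress

def partition_lb_vips(load_balancers):
    """Split load-balancer VIPs into routable/unroutable IPv4/IPv6 sets,
    as the deleted LogicalRouterLBIPs relation did. Each LB is a dict with
    'vips' ('IP' or 'IP:port' strings) and a 'routable' flag (add_route)."""
    out = {("v4", True): set(), ("v4", False): set(),
           ("v6", True): set(), ("v6", False): set()}
    for lb in load_balancers:
        for vip in lb["vips"]:
            # Strip the optional port: '[::1]:80', '1.2.3.4:80', or bare IP.
            if vip.startswith("["):
                ip_str = vip[1:vip.index("]")]
            elif vip.count(":") == 1:
                ip_str = vip.rsplit(":", 1)[0]
            else:
                ip_str = vip  # bare IPv4, or IPv6 with no port
            ip = ipaddress.ip_address(ip_str)
            fam = "v4" if ip.version == 4 else "v6"
            out[(fam, lb["routable"])].add(str(ip))
    return out
```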
- -/* Router port can peer with another router port, a switch port or have - * no peer. - */ -typedef RouterPeer = PeerRouter{rport: uuid, name: istring} - | PeerSwitch{sport: uuid, name: istring} - | PeerNone - -function router_peer_name(peer: RouterPeer): Option<istring> = { - match (peer) { - PeerRouter{_, n} -> Some{n}, - PeerSwitch{_, n} -> Some{n}, - PeerNone -> None - } -} - -relation RouterPortPeer(rport: uuid, peer: RouterPeer) - -/* Router-to-router logical port connections */ -RouterPortPeer(rport, PeerSwitch{sport, sport_name}) :- - SwitchRouterPeer(sport, sport_name, rport). - -RouterPortPeer(rport1, PeerRouter{rport2, rport2_name}) :- - RouterRouterPeer(rport1, rport2, rport2_name). - -RouterPortPeer(rport, PeerNone) :- - &nb::Logical_Router_Port(._uuid = rport), - not SwitchRouterPeer(_, _, rport), - not RouterRouterPeer(rport, _, _). - -/* Each row maps from a Logical_Router port to the input options in its - * corresponding Port_Binding (if any). This is because northd preserves - * most of the options in that column. (northd unconditionally sets the - * ipv6_prefix_delegation and ipv6_prefix options, so we remove them for - * faster convergence.) */ -relation RouterPortSbOptions(lrp_uuid: uuid, options: Map<istring,istring>) -RouterPortSbOptions(lrp._uuid, options) :- - lrp in &nb::Logical_Router_Port(), - pb in sb::Port_Binding(._uuid = lrp._uuid), - var options = { - var options = pb.options; - options.remove(i"ipv6_prefix"); - options.remove(i"ipv6_prefix_delegation"); - options - }. -RouterPortSbOptions(lrp._uuid, map_empty()) :- - lrp in &nb::Logical_Router_Port(), - not sb::Port_Binding(._uuid = lrp._uuid). - -relation RouterPortHasBfd(lrp_uuid: uuid, has_bfd: bool) -RouterPortHasBfd(lrp_uuid, true) :- - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), - nb::BFD(.logical_port = logical_port).
-RouterPortHasBfd(lrp_uuid, false) :- - &nb::Logical_Router_Port(._uuid = lrp_uuid, .name = logical_port), - not nb::BFD(.logical_port = logical_port). - -/* FIXME: what should happen when extract_lrp_networks fails? */ -/* RouterPort relation collects all attributes of a logical router port */ -typedef RouterPort = RouterPort { - lrp: Intern<nb::Logical_Router_Port>, - json_name: string, - networks: lport_addresses, - router: Intern<Router>, - is_redirect: bool, - peer: RouterPeer, - mcast_cfg: Intern<McastPortCfg>, - sb_options: Map<istring,istring>, - has_bfd: bool, - enabled: bool -} - -relation RouterPort[Intern<RouterPort>] - -RouterPort[RouterPort{ - .lrp = lrp, - .json_name = json_escape(lrp.name), - .networks = networks, - .router = router, - .is_redirect = is_redirect, - .peer = peer, - .mcast_cfg = mcast_cfg, - .sb_options = sb_options, - .has_bfd = has_bfd, - .enabled = lrp.is_enabled() - }.intern()] :- - lrp in &nb::Logical_Router_Port(), - Some{var networks} = extract_lrp_networks(lrp.mac.ival(), lrp.networks.map(ival)), - LogicalRouterPort(lrp._uuid, lrouter_uuid), - router in &Router(._uuid = lrouter_uuid), - RouterPortIsRedirect(lrp._uuid, is_redirect), - RouterPortPeer(lrp._uuid, peer), - mcast_cfg in &McastPortCfg(.port = lrp._uuid, .router_port = true), - RouterPortSbOptions(lrp._uuid, sb_options), - RouterPortHasBfd(lrp._uuid, has_bfd). - -relation RouterPortNetworksIPv4Addr(port: Intern<RouterPort>, addr: ipv4_netaddr) - -RouterPortNetworksIPv4Addr(port, addr) :- - port in &RouterPort(.networks = networks), - var addr = FlatMap(networks.ipv4_addrs). - -relation RouterPortNetworksIPv6Addr(port: Intern<RouterPort>, addr: ipv6_netaddr) - -RouterPortNetworksIPv6Addr(port, addr) :- - port in &RouterPort(.networks = networks), - var addr = FlatMap(networks.ipv6_addrs). - -/* StaticRoute: Collects and parses attributes of a static route. 
*/ -typedef route_policy = SrcIp | DstIp -function route_policy_from_string(s: Option<istring>): route_policy = { - if (s == Some{i"src-ip"}) { SrcIp } else { DstIp } -} -function to_string(policy: route_policy): string = { - match (policy) { - SrcIp -> "src-ip", - DstIp -> "dst-ip" - } -} - -typedef route_key = RouteKey { - policy: route_policy, - ip_prefix: v46_ip, - plen: bit<32> -} - -/* StaticRouteDown contains the UUID of all the static routes that are down. - * A static route is down if it has a BFD whose dst_ip matches it nexthop and - * that BFD is down or admin_down. */ -relation StaticRouteDown(lrsr_uuid: uuid) -StaticRouteDown(lrsr_uuid) :- - nb::Logical_Router_Static_Route(._uuid = lrsr_uuid, .bfd = Some{bfd_uuid}, .nexthop = nexthop), - bfd in nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop), - match (bfd.status) { - None -> true, - Some{status} -> (status == i"admin_down" or status == i"down") - }. - -relation &StaticRoute(lrsr: nb::Logical_Router_Static_Route, - key: route_key, - nexthop: v46_ip, - output_port: Option<istring>, - ecmp_symmetric_reply: bool) - -&StaticRoute(.lrsr = lrsr, - .key = RouteKey{policy, ip_prefix, plen}, - .nexthop = nexthop, - .output_port = lrsr.output_port, - .ecmp_symmetric_reply = esr) :- - lrsr in nb::Logical_Router_Static_Route(), - not StaticRouteDown(lrsr._uuid), - var policy = route_policy_from_string(lrsr.policy), - Some{(var nexthop, var nexthop_plen)} = ip46_parse_cidr(lrsr.nexthop.ival()), - match (nexthop) { - IPv4{_} -> nexthop_plen == 32, - IPv6{_} -> nexthop_plen == 128 - }, - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()), - match ((nexthop, ip_prefix)) { - (IPv4{_}, IPv4{_}) -> true, - (IPv6{_}, IPv6{_}) -> true, - _ -> false - }, - var esr = lrsr.options.get_bool_def(i"ecmp_symmetric_reply", false). 
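[Editorial note: the `&StaticRoute` rule above accepts a route only when its nexthop parses as a host prefix (/32 for IPv4, /128 for IPv6) and the nexthop's address family matches `ip_prefix`. A sketch of that validation using Python's `ipaddress` module; the helper name is hypothetical.]

```python
import ipaddress

def parse_static_route(ip_prefix, nexthop):
    """Validate a static route the way the deleted &StaticRoute rule did:
    the nexthop must be a single host address whose family matches the
    destination prefix. Returns (dst_network, nexthop_ip), or None."""
    try:
        nh = ipaddress.ip_network(nexthop, strict=False)
        dst = ipaddress.ip_network(ip_prefix, strict=False)
    except ValueError:
        return None  # unparseable prefix or nexthop
    host_len = 32 if nh.version == 4 else 128
    if nh.prefixlen != host_len:
        return None  # nexthop must be a host route, not a subnet
    if nh.version != dst.version:
        return None  # address families must match
    return dst, nh.network_address
```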
- -relation &StaticRouteEmptyNextHop(lrsr: nb::Logical_Router_Static_Route, - key: route_key, - output_port: Option<istring>) -&StaticRouteEmptyNextHop(.lrsr = lrsr, - .key = RouteKey{policy, ip_prefix, plen}, - .output_port = lrsr.output_port) :- - lrsr in nb::Logical_Router_Static_Route(.nexthop = i""), - not StaticRouteDown(lrsr._uuid), - var policy = route_policy_from_string(lrsr.policy), - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). - -/* Returns the IP address of the router port 'op' that - * overlaps with 'ip'. If one is not found, returns None. */ -function find_lrp_member_ip(networks: lport_addresses, ip: v46_ip): Option<v46_ip> = -{ - match (ip) { - IPv4{ip4} -> { - for (na in networks.ipv4_addrs) { - if ((na.addr, ip4).same_network(na.netmask())) { - /* There should be only 1 interface that matches the - * supplied IP. Otherwise, it's a configuration error, - * because subnets of a router's interfaces should NOT - * overlap. */ - return Some{IPv4{na.addr}} - } - }; - return None - }, - IPv6{ip6} -> { - for (na in networks.ipv6_addrs) { - if ((na.addr, ip6).same_network(na.netmask())) { - /* There should be only 1 interface that matches the - * supplied IP. Otherwise, it's a configuration error, - * because subnets of a router's interfaces should NOT - * overlap. */ - return Some{IPv6{na.addr}} - } - }; - return None - } - } -} - - -/* Step 1: compute router-route pairs */ -relation RouterStaticRoute_( - router : Intern<Router>, - key : route_key, - nexthop : v46_ip, - output_port : Option<istring>, - ecmp_symmetric_reply : bool) - -RouterStaticRoute_(.router = router, - .key = route.key, - .nexthop = route.nexthop, - .output_port = route.output_port, - .ecmp_symmetric_reply = route.ecmp_symmetric_reply) :- - router in &Router(), - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), - var route_id = FlatMap(routes), - route in &StaticRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). 
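[Editorial note: `find_lrp_member_ip` above returns the router-port address whose subnet contains a given IP, relying on the invariant that a router's port subnets never overlap. A compact Python equivalent using `ipaddress`; the network-string representation is illustrative.]

```python
import ipaddress

def find_lrp_member_ip(networks, ip):
    """Return the router-port address on the same subnet as 'ip', or None.
    'networks' is a list of 'addr/plen' strings for one router port; since
    subnets of a router's ports must not overlap, at most one can match."""
    ip = ipaddress.ip_address(ip)
    for cidr in networks:
        iface = ipaddress.ip_interface(cidr)
        if ip.version == iface.version and ip in iface.network:
            return iface.ip
    return None
```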
- -relation RouterStaticRouteEmptyNextHop_( - router : Intern<Router>, - key : route_key, - output_port : Option<istring>) - -RouterStaticRouteEmptyNextHop_(.router = router, - .key = route.key, - .output_port = route.output_port) :- - router in &Router(), - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), - var route_id = FlatMap(routes), - route in &StaticRouteEmptyNextHop(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). - -/* Step-2: compute output_port for each pair */ -typedef route_dst = RouteDst { - nexthop: v46_ip, - src_ip: v46_ip, - port: Intern<RouterPort>, - ecmp_symmetric_reply: bool -} - -relation RouterStaticRoute( - router : Intern<Router>, - key : route_key, - dsts : Set<route_dst>) - -RouterStaticRoute(router, key, dsts) :- - rsr in RouterStaticRoute_(.router = router, .output_port = None), - /* output_port is not specified, find the - * router port matching the next hop. */ - port in &RouterPort(.router = &Router{._uuid = router._uuid}, - .networks = networks), - Some{var src_ip} = find_lrp_member_ip(networks, rsr.nexthop), - var dst = RouteDst{rsr.nexthop, src_ip, port, rsr.ecmp_symmetric_reply}, - var key = rsr.key, - var dsts = dst.group_by((router, key)).to_set(). - -RouterStaticRoute(router, key, dsts) :- - RouterStaticRoute_(.router = router, - .key = key, - .nexthop = nexthop, - .output_port = Some{oport}, - .ecmp_symmetric_reply = ecmp_symmetric_reply), - /* output_port specified */ - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, - .networks = networks), - Some{var src_ip} = match (find_lrp_member_ip(networks, nexthop)) { - Some{src_ip} -> Some{src_ip}, - None -> { - /* There are no IP networks configured on the router's port via - * which 'route->nexthop' is theoretically reachable. But since - * 'out_port' has been specified, we honor it by trying to reach - * 'route->nexthop' via the first IP address of 'out_port'. 
- * (There are cases, e.g in GCE, where each VM gets a /32 IP - * address and the default gateway is still reachable from it.) */ - match (key.ip_prefix) { - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { - Some{addr} -> Some{IPv4{addr.addr}}, - None -> { - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); - None - } - }, - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { - Some{addr} -> Some{IPv6{addr.addr}}, - None -> { - warn("No path for static route ${key.ip_prefix}; next hop ${nexthop}"); - None - } - } - } - } - }, - var dsts = set_singleton(RouteDst{nexthop, src_ip, port, ecmp_symmetric_reply}). - -relation RouterStaticRouteEmptyNextHop( - router : Intern<Router>, - key : route_key, - dsts : Set<route_dst>) - -RouterStaticRouteEmptyNextHop(router, key, dsts) :- - RouterStaticRouteEmptyNextHop_(.router = router, - .key = key, - .output_port = Some{oport}), - /* output_port specified */ - port in &RouterPort(.lrp = &nb::Logical_Router_Port{.name = oport}, - .networks = networks), - /* There are no IP networks configured on the router's port via - * which 'route->nexthop' is theoretically reachable. But since - * 'out_port' has been specified, we honor it by trying to reach - * 'route->nexthop' via the first IP address of 'out_port'. - * (There are cases, e.g in GCE, where each VM gets a /32 IP - * address and the default gateway is still reachable from it.) */ - Some{var src_ip} = match (key.ip_prefix) { - IPv4{_} -> match (networks.ipv4_addrs.nth(0)) { - Some{addr} -> Some{IPv4{addr.addr}}, - None -> { - warn("No path for static route ${key.ip_prefix}"); - None - } - }, - IPv6{_} -> match (networks.ipv6_addrs.nth(0)) { - Some{addr} -> Some{IPv6{addr.addr}}, - None -> { - warn("No path for static route ${key.ip_prefix}"); - None - } - } - }, - var dsts = set_singleton(RouteDst{src_ip, src_ip, port, false}). 
- -/* compute route-route pairs for nexthop = "discard" routes */ -relation &DiscardRoute(lrsr: nb::Logical_Router_Static_Route, - key: route_key) -&DiscardRoute(.lrsr = lrsr, - .key = RouteKey{policy, ip_prefix, plen}) :- - lrsr in nb::Logical_Router_Static_Route(.nexthop = i"discard"), - var policy = route_policy_from_string(lrsr.policy), - Some{(var ip_prefix, var plen)} = ip46_parse_cidr(lrsr.ip_prefix.ival()). - -relation RouterDiscardRoute_( - router : Intern<Router>, - key : route_key) - -RouterDiscardRoute_(.router = router, - .key = route.key) :- - router in &Router(), - nb::Logical_Router(._uuid = router._uuid, .static_routes = routes), - var route_id = FlatMap(routes), - route in &DiscardRoute(.lrsr = nb::Logical_Router_Static_Route{._uuid = route_id}). - -Warning[message] :- - RouterStaticRoute_(.router = router, .key = key, .nexthop = nexthop), - not RouterStaticRoute(.router = router, .key = key), - var message = "No path for ${key.policy} static route ${key.ip_prefix}/${key.plen} with next hop ${nexthop}". diff --git a/northd/lswitch.dl b/northd/lswitch.dl deleted file mode 100644 index 33c5c706b3..0000000000 --- a/northd/lswitch.dl +++ /dev/null @@ -1,824 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -import OVN_Northbound as nb -import OVN_Southbound as sb -import ovsdb -import ovn -import lrouter -import multicast -import helpers -import ipam -import vec -import set - -function is_enabled(lsp: Intern<nb::Logical_Switch_Port>): bool { is_enabled(lsp.enabled) } -function is_enabled(sp: SwitchPort): bool { sp.lsp.is_enabled() } -function is_enabled(sp: Intern<SwitchPort>): bool { sp.lsp.is_enabled() } - -relation SwitchRouterPeerRef(lsp: uuid, rport: Option<Intern<RouterPort>>) - -SwitchRouterPeerRef(lsp, Some{rport}) :- - SwitchRouterPeer(lsp, _, lrp), - rport in &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp}). - -SwitchRouterPeerRef(lsp, None) :- - &nb::Logical_Switch_Port(._uuid = lsp), - not SwitchRouterPeer(lsp, _, _). - -/* LogicalSwitchPortCandidate. - * - * Each row pairs a logical switch port with its logical switch, but without - * checking that the logical switch port is on only one logical switch. - * - * (Use LogicalSwitchPort instead, which guarantees uniqueness.) */ -relation LogicalSwitchPortCandidate(lsp_uuid: uuid, ls_uuid: uuid) -LogicalSwitchPortCandidate(lsp_uuid, ls_uuid) :- - &nb::Logical_Switch(._uuid = ls_uuid, .ports = ports), - var lsp_uuid = FlatMap(ports). -Warning[message] :- - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), - var lss = ls_uuid.group_by(lsp_uuid).to_set(), - lss.size() > 1, - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - var message = "Bad configuration: logical switch port ${lsp.name} belongs " - "to more than one logical switch". - -/* Each row means 'lport' is in 'lswitch' (and only that lswitch). */ -relation LogicalSwitchPort(lport: uuid, lswitch: uuid) -LogicalSwitchPort(lsp_uuid, ls_uuid) :- - LogicalSwitchPortCandidate(lsp_uuid, ls_uuid), - var lss = ls_uuid.group_by(lsp_uuid).to_set(), - lss.size() == 1, - Some{var ls_uuid} = lss.nth(0). - -/* Each logical switch port with an "unknown" address (with its logical switch). 
*/ -relation LogicalSwitchPortWithUnknownAddress(ls: uuid, lsp: uuid) -LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid) :- - LogicalSwitchPort(lsp_uuid, ls_uuid), - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - lsp.is_enabled() and lsp.addresses.contains(i"unknown"). - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasUnknownPorts(ls: uuid, has_unknown: bool) -LogicalSwitchHasUnknownPorts(ls, true) :- LogicalSwitchPortWithUnknownAddress(ls, _). -LogicalSwitchHasUnknownPorts(ls, false) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchPortWithUnknownAddress(ls, _). - -/* PortStaticAddresses: static IP addresses associated with each Logical_Switch_Port */ -relation PortStaticAddresses(lsport: uuid, ip4addrs: Set<istring>, ip6addrs: Set<istring>) - -PortStaticAddresses(.lsport = port_uuid, - .ip4addrs = ip4_addrs.union().map(intern), - .ip6addrs = ip6_addrs.union().map(intern)) :- - &nb::Logical_Switch_Port(._uuid = port_uuid, .addresses = addresses), - var address = FlatMap(if (addresses.is_empty()) { set_singleton(i"") } else { addresses }), - (var ip4addrs, var ip6addrs) = if (not is_dynamic_lsp_address(address.ival())) { - split_addresses(address.ival()) - } else { (set_empty(), set_empty()) }, - (var ip4_addrs, var ip6_addrs) = (ip4addrs, ip6addrs).group_by(port_uuid).group_unzip(). - -relation PortInGroup(port: uuid, group: uuid) - -PortInGroup(port, group) :- - nb::Port_Group(._uuid = group, .ports = ports), - var port = FlatMap(ports). - -/* All ACLs associated with logical switch */ -relation LogicalSwitchACL(ls: uuid, acl: uuid) - -LogicalSwitchACL(ls, acl) :- - &nb::Logical_Switch(._uuid = ls, .acls = acls), - var acl = FlatMap(acls). 
- -LogicalSwitchACL(ls, acl) :- - &nb::Logical_Switch(._uuid = ls, .ports = ports), - var port_id = FlatMap(ports), - PortInGroup(port_id, group_id), - nb::Port_Group(._uuid = group_id, .acls = acls), - var acl = FlatMap(acls). - -relation LogicalSwitchStatefulACL(ls: uuid, acl: uuid) - -LogicalSwitchStatefulACL(ls, acl) :- - LogicalSwitchACL(ls, acl), - &nb::ACL(._uuid = acl, .action = i"allow-related"). - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasStatefulACL(ls: uuid, has_stateful_acl: bool) - -LogicalSwitchHasStatefulACL(ls, true) :- - LogicalSwitchStatefulACL(ls, _). - -LogicalSwitchHasStatefulACL(ls, false) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchStatefulACL(ls, _). - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasACLs(ls: uuid, has_acls: bool) - -LogicalSwitchHasACLs(ls, true) :- - LogicalSwitchACL(ls, _). - -LogicalSwitchHasACLs(ls, false) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchACL(ls, _). - -/* - * LogicalSwitchLocalnetPorts maps from each logical switch UUID - * to the logical switch's set of localnet ports. Each localnet - * port is expressed as a tuple of its UUID and its name. - */ -relation LogicalSwitchLocalnetPort0(ls_uuid: uuid, lsp: (uuid, istring)) -LogicalSwitchLocalnetPort0(ls_uuid, (lsp_uuid, lsp.name)) :- - ls in &nb::Logical_Switch(._uuid = ls_uuid), - var lsp_uuid = FlatMap(ls.ports), - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - lsp.__type == i"localnet". - -relation LogicalSwitchLocalnetPorts(ls_uuid: uuid, localnet_ports: Vec<(uuid, istring)>) -LogicalSwitchLocalnetPorts(ls_uuid, localnet_ports) :- - LogicalSwitchLocalnetPort0(ls_uuid, lsp), - var localnet_ports = lsp.group_by(ls_uuid).to_vec(). 
-LogicalSwitchLocalnetPorts(ls_uuid, vec_empty()) :- - ls in &nb::Logical_Switch(), - var ls_uuid = ls._uuid, - not LogicalSwitchLocalnetPort0(ls_uuid, _). - -/* Flatten the list of dns_records in Logical_Switch */ -relation LogicalSwitchDNS(ls_uuid: uuid, dns_uuid: uuid) - -LogicalSwitchDNS(ls._uuid, dns_uuid) :- - &nb::Logical_Switch[ls], - var dns_uuid = FlatMap(ls.dns_records), - nb::DNS(._uuid = dns_uuid). - -relation LogicalSwitchWithDNSRecords(ls: uuid) - -LogicalSwitchWithDNSRecords(ls) :- - LogicalSwitchDNS(ls, dns_uuid), - nb::DNS(._uuid = dns_uuid, .records = records), - not records.is_empty(). - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasDNSRecords(ls: uuid, has_dns_records: bool) - -LogicalSwitchHasDNSRecords(ls, true) :- - LogicalSwitchWithDNSRecords(ls). - -LogicalSwitchHasDNSRecords(ls, false) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchWithDNSRecords(ls). - -relation LogicalSwitchHasNonRouterPort0(ls: uuid) -LogicalSwitchHasNonRouterPort0(ls_uuid) :- - ls in &nb::Logical_Switch(._uuid = ls_uuid), - var lsp_uuid = FlatMap(ls.ports), - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - lsp.__type != i"router". - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasNonRouterPort(ls: uuid, has_non_router_port: bool) -LogicalSwitchHasNonRouterPort(ls, true) :- - LogicalSwitchHasNonRouterPort0(ls). -LogicalSwitchHasNonRouterPort(ls, false) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchHasNonRouterPort0(ls). - -// LogicalSwitchCopp maps from each LS to its collection of Copp meters, -// dropping any Copp meter whose meter name doesn't exist. -relation LogicalSwitchCopp(ls: uuid, meters: Map<istring,istring>) -LogicalSwitchCopp(ls, meters) :- LogicalSwitchCopp0(ls, meters). 
-LogicalSwitchCopp(ls, map_empty()) :- - &nb::Logical_Switch(._uuid = ls), - not LogicalSwitchCopp0(ls, _). - -relation LogicalSwitchCopp0(ls: uuid, meters: Map<istring,istring>) -LogicalSwitchCopp0(ls, meters) :- - &nb::Logical_Switch(._uuid = ls, .copp = Some{copp_uuid}), - nb::Copp(._uuid = copp_uuid, .meters = meters), - var entry = FlatMap(meters), - (var copp_id, var meter_name) = entry, - &nb::Meter(.name = meter_name), - var meters = (copp_id, meter_name).group_by(ls).to_map(). - -/* Switch relation collects all attributes of a logical switch */ - -typedef Switch = Switch { - /* Fields copied from nb::Logical_Switch_Port. */ - _uuid: uuid, - name: istring, - other_config: Map<istring,istring>, - external_ids: Map<istring,istring>, - - /* Additional computed fields. */ - has_stateful_acl: bool, - has_acls: bool, - has_lb_vip: bool, - has_dns_records: bool, - has_unknown_ports: bool, - localnet_ports: Vec<(uuid, istring)>, // UUID and name of each localnet port. - subnet: Option<(in_addr/*subnet*/, in_addr/*mask*/, bit<32>/*start_ipv4*/, bit<32>/*total_ipv4s*/)>, - ipv6_prefix: Option<in6_addr>, - mcast_cfg: Intern<McastSwitchCfg>, - is_vlan_transparent: bool, - copp: Map<istring, istring>, - - /* Does this switch have at least one port with type != "router"? 
*/ - has_non_router_port: bool -} - - -relation Switch[Intern<Switch>] - -function ipv6_parse_prefix(s: string): Option<in6_addr> { - if (string_contains(s, "/")) { - match (ipv6_parse_cidr(s)) { - Right{(addr, 64)} -> Some{addr}, - _ -> None - } - } else { - ipv6_parse(s) - } -} - -Switch[Switch{ - ._uuid = ls._uuid, - .name = ls.name, - .other_config = ls.other_config, - .external_ids = ls.external_ids, - - .has_stateful_acl = has_stateful_acl, - .has_acls = has_acls, - .has_lb_vip = has_lb_vip, - .has_dns_records = has_dns_records, - .has_unknown_ports = has_unknown_ports, - .localnet_ports = localnet_ports, - .subnet = subnet, - .ipv6_prefix = ipv6_prefix, - .mcast_cfg = mcast_cfg, - .has_non_router_port = has_non_router_port, - .copp = copp, - .is_vlan_transparent = is_vlan_transparent - }.intern()] :- - &nb::Logical_Switch[ls], - LogicalSwitchHasStatefulACL(ls._uuid, has_stateful_acl), - LogicalSwitchHasACLs(ls._uuid, has_acls), - LogicalSwitchHasLBVIP(ls._uuid, has_lb_vip), - LogicalSwitchHasDNSRecords(ls._uuid, has_dns_records), - LogicalSwitchHasUnknownPorts(ls._uuid, has_unknown_ports), - LogicalSwitchLocalnetPorts(ls._uuid, localnet_ports), - LogicalSwitchHasNonRouterPort(ls._uuid, has_non_router_port), - LogicalSwitchCopp(ls._uuid, copp), - mcast_cfg in &McastSwitchCfg(.datapath = ls._uuid), - var subnet = - match (ls.other_config.get(i"subnet")) { - None -> None, - Some{subnet_str} -> { - match (ip_parse_masked(subnet_str.ival())) { - Left{err} -> { - warn("bad 'subnet' ${subnet_str}"); - None - }, - Right{(subnet, mask)} -> { - if (mask.cidr_bits() == Some{32} or not mask.is_cidr()) { - warn("bad 'subnet' ${subnet_str}"); - None - } else { - Some{(subnet, mask, (subnet.a & mask.a) + 1, ~mask.a)} - } - } - } - } - }, - var ipv6_prefix = - match (ls.other_config.get(i"ipv6_prefix")) { - None -> None, - Some{prefix} -> ipv6_parse_prefix(prefix.ival()) - }, - var is_vlan_transparent = ls.other_config.get_bool_def(i"vlan-passthru", false). 
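The `Switch` rule above parses `other_config:subnet`, rejecting /32 and non-CIDR masks and deriving `start_ipv4 = (subnet & mask) + 1` and `total_ipv4s = ~mask`. A hedged Python sketch of that parsing (`parse_subnet` is an illustrative name; Python's `ipaddress` rejects non-contiguous masks with `ValueError`, which stands in for the `is_cidr()` check):

```python
import ipaddress

def parse_subnet(subnet_str):
    """Mirror the 'subnet' other_config parsing in the deleted Switch rule.

    Returns (network, first_allocatable_ip, total_hosts) or None for a
    bad value (unparsable, non-IPv4, /32, or non-CIDR mask).
    """
    try:
        net = ipaddress.ip_network(subnet_str, strict=False)
    except ValueError:
        return None                             # unparsable or non-CIDR mask
    if net.version != 4 or net.prefixlen == 32:
        return None                             # warn("bad 'subnet' ...")
    start = int(net.network_address) + 1        # (subnet & mask) + 1
    total = int(net.hostmask)                   # ~mask
    return net, ipaddress.ip_address(start), total
```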
- -/* LogicalSwitchLB: many-to-many relation between logical switches and nb::LB */ -relation LogicalSwitchLB(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>) -LogicalSwitchLB(sw_uuid, lb) :- - &nb::Logical_Switch(._uuid = sw_uuid, .load_balancer = lb_ids), - var lb_id = FlatMap(lb_ids), - lb in &nb::Load_Balancer(._uuid = lb_id). - - -relation SwitchLB(sw: Intern<Switch>, lb_uuid: uuid) - -SwitchLB(sw, lb._uuid) :- - LogicalSwitchLB(sw_uuid, lb), - sw in &Switch(._uuid = sw_uuid). - -/* Load balancer VIPs associated with switch */ -relation SwitchLBVIP(sw_uuid: uuid, lb: Intern<nb::Load_Balancer>, vip: istring, backends: istring) -SwitchLBVIP(sw_uuid, lb, vip, backends) :- - LogicalSwitchLB(sw_uuid, lb@(&nb::Load_Balancer{.vips = vips})), - var kv = FlatMap(vips), - (var vip, var backends) = kv. - -// "Pitfalls of projections" in ddlog-new-feature.rst explains why this -// is an output relation: -output relation LogicalSwitchHasLBVIP(sw_uuid: uuid, has_lb_vip: bool) -LogicalSwitchHasLBVIP(sw_uuid, true) :- - SwitchLBVIP(.sw_uuid = sw_uuid). -LogicalSwitchHasLBVIP(sw_uuid, false) :- - &nb::Logical_Switch(._uuid = sw_uuid), - not SwitchLBVIP(.sw_uuid = sw_uuid). - -/* Load balancer virtual IPs. - * - * Three levels: - * - LBVIP0 is load balancer virtual IPs with health checks. - * - LBVIP1 also includes virtual IPs without health checks. - * - LBVIP parses the IP address and port (and drops VIPs where those are invalid). - */ -relation LBVIP0( - lb: Intern<nb::Load_Balancer>, - vip_key: istring, - backend_ips: istring, - health_check: Intern<nb::Load_Balancer_Health_Check>) -LBVIP0(lb, vip_key, backend_ips, health_check) :- - lb in &nb::Load_Balancer(), - var vip = FlatMap(lb.vips), - (var vip_key, var backend_ips) = vip, - health_check in &nb::Load_Balancer_Health_Check(.vip = vip_key), - lb.health_check.contains(health_check._uuid). 
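The `vips` map keys flattened above are "IP[:port]" strings, with brackets for IPv6. A minimal Python sketch of how such a key splits into address and port, in the spirit of the `ip_address_and_port_from_lb_key()` helper used by the LBVIP rules further below (`parse_lb_vip_key` is an illustrative name; port defaults to 0):

```python
import ipaddress

def parse_lb_vip_key(key):
    """Split a load-balancer VIP key into (ip, port).

    Accepts "10.0.0.1", "10.0.0.1:80" and the bracketed IPv6 forms
    "[fd00::1]" / "[fd00::1]:8080".  Returns None for an unparsable
    key, in which case the VIP is dropped.
    """
    port = 0
    if key.startswith("["):                   # bracketed IPv6 form
        addr, _, rest = key[1:].partition("]")
        if rest.startswith(":"):
            port = int(rest[1:])
    elif key.count(":") == 1:                 # IPv4 with port
        addr, _, p = key.partition(":")
        port = int(p)
    else:                                     # bare IPv4 or IPv6
        addr = key
    try:
        return ipaddress.ip_address(addr), port
    except ValueError:
        return None
```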
- -relation LBVIP1( - lb: Intern<nb::Load_Balancer>, - vip_key: istring, - backend_ips: istring, - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>) -LBVIP1(lb, vip_key, backend_ips, Some{health_check}) :- - LBVIP0(lb, vip_key, backend_ips, health_check). -LBVIP1(lb, vip_key, backend_ips, None) :- - lb in &nb::Load_Balancer(), - var vip = FlatMap(lb.vips), - (var vip_key, var backend_ips) = vip, - not LBVIP0(lb, vip_key, backend_ips, _). - -typedef LBVIP = LBVIP { - lb: Intern<nb::Load_Balancer>, - vip_key: istring, - backend_ips: istring, - health_check: Option<Intern<nb::Load_Balancer_Health_Check>>, - vip_addr: v46_ip, - vip_port: bit<16>, - backends: Vec<lb_vip_backend> -} - -relation LBVIP[Intern<LBVIP>] - -LBVIP[LBVIP{lb, vip_key, backend_ips, health_check, vip_addr, vip_port, backends}.intern()] :- - LBVIP1(lb, vip_key, backend_ips, health_check), - Some{(var vip_addr, var vip_port)} = ip_address_and_port_from_lb_key(vip_key.ival()), - var backends = backend_ips.split(",").filter_map( - |ip| parse_vip_backend(ip, lb.ip_port_mappings)). - -typedef svc_monitor = SvcMonitor{ - port_name: istring, // Might name a switch or router port. - src_ip: istring -} - -/* Backends for load balancer virtual IPs. - * - * Use caution with this table, because load balancer virtual IPs - * sometimes have no backends and there is some significance to that. - * In cases that are really per-LBVIP, instead of per-LBVIPBackend, - * process the LBVIPs directly. 
*/ -typedef lb_vip_backend = LBVIPBackend{ - ip: v46_ip, - port: bit<16>, - svc_monitor: Option<svc_monitor>} - -function parse_vip_backend(backend_ip: string, - mappings: Map<istring,istring>): Option<lb_vip_backend> { - match (ip_address_and_port_from_lb_key(backend_ip)) { - Some{(ip, port)} -> Some{LBVIPBackend{ip, port, parse_ip_port_mapping(mappings, ip)}}, - _ -> None - } -} - -function parse_ip_port_mapping(mappings: Map<istring,istring>, ip: v46_ip) - : Option<svc_monitor> { - for ((key, value) in mappings) { - if (ip46_parse(key.ival()) == Some{ip}) { - var strs = value.split(":"); - if (strs.len() != 2) { - return None - }; - - return match ((strs.nth(0), strs.nth(1))) { - (Some{port_name}, Some{src_ip}) -> Some{SvcMonitor{port_name.intern(), src_ip.intern()}}, - _ -> None - } - } - }; - return None -} - -function is_online(status: Option<istring>): bool = { - match (status) { - Some{s} -> s == i"online", - _ -> true - } -} -function default_protocol(protocol: Option<istring>): istring = { - match (protocol) { - Some{x} -> x, - None -> i"tcp" - } -} - -relation LBVIPWithStatus( - lbvip: Intern<LBVIP>, - up_backends: istring) -LBVIPWithStatus(lbvip, i"") :- - lbvip in &LBVIP(.backends = vec_empty()). -LBVIPWithStatus(lbvip, up_backends) :- - LBVIPBackendStatus(lbvip, backend, up), - var up_backends = ((backend, up)).group_by(lbvip).to_vec().filter_map(|x| { - (LBVIPBackend{var ip, var port, _}, var up) = x; - match ((up, port)) { - (true, 0) -> Some{"${ip.to_bracketed_string()}"}, - (true, _) -> Some{"${ip.to_bracketed_string()}:${port}"}, - _ -> None - } - }).join(",").intern(). - -/* Maps from a load-balancer virtual IP backend to whether it's up or not. - * - * Only some backends have health checking enabled. The ones that don't - * are always considered to be up. 
- */
-relation LBVIPBackendStatus0(
-    lbvip: Intern<LBVIP>,
-    backend: lb_vip_backend,
-    up: bool)
-LBVIPBackendStatus0(lbvip, backend, is_online(sm.status)) :-
-    LBVIP[lbvip@&LBVIP{.lb = lb}],
-    var backend = FlatMap(lbvip.backends),
-    Some{var svc_monitor} = backend.svc_monitor,
-    sm in &sb::Service_Monitor(.port = backend.port as integer),
-    ip46_parse(sm.ip.ival()) == Some{backend.ip},
-    svc_monitor.port_name == sm.logical_port,
-    default_protocol(lb.protocol) == default_protocol(sm.protocol).
-
-relation LBVIPBackendStatus(
-    lbvip: Intern<LBVIP>,
-    backend: lb_vip_backend,
-    up: bool)
-LBVIPBackendStatus(lbvip, backend, up) :- LBVIPBackendStatus0(lbvip, backend, up).
-LBVIPBackendStatus(lbvip, backend, true) :-
-    LBVIP[lbvip@&LBVIP{.lb = lb}],
-    var backend = FlatMap(lbvip.backends),
-    not LBVIPBackendStatus0(lbvip, backend, _).
-
-/* SwitchPortDHCPv4Options: many-to-one relation between logical switch ports and DHCPv4 options */
-relation SwitchPortDHCPv4Options(
-    port: Intern<SwitchPort>,
-    dhcpv4_options: Intern<nb::DHCP_Options>)
-
-SwitchPortDHCPv4Options(port, options) :-
-    port in &SwitchPort(.lsp = lsp),
-    port.lsp.__type != i"external",
-    Some{var dhcpv4_uuid} = lsp.dhcpv4_options,
-    options in &nb::DHCP_Options(._uuid = dhcpv4_uuid).
-
-/* SwitchPortDHCPv6Options: many-to-one relation between logical switch ports and DHCPv6 options */
-relation SwitchPortDHCPv6Options(
-    port: Intern<SwitchPort>,
-    dhcpv6_options: Intern<nb::DHCP_Options>)
-
-SwitchPortDHCPv6Options(port, options) :-
-    port in &SwitchPort(.lsp = lsp),
-    port.lsp.__type != i"external",
-    Some{var dhcpv6_uuid} = lsp.dhcpv6_options,
-    options in &nb::DHCP_Options(._uuid = dhcpv6_uuid). 
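The `LBVIPWithStatus` rule above rebuilds the comma-separated backend string from per-backend health, bracketing IPv6 addresses and omitting ":port" when the port is 0. A hedged Python sketch (the flat tuple layout is a simplification of the DDlog `(lb_vip_backend, bool)` groups):

```python
def up_backends_str(backends):
    """Render healthy LB backends as a comma-separated string.

    'backends' is a list of (ip_str, port, up, is_ipv6) tuples; backends
    reported down are skipped, matching the deleted LBVIPWithStatus rule.
    """
    parts = []
    for ip, port, up, is_ipv6 in backends:
        if not up:
            continue                      # skip backends that are down
        host = "[%s]" % ip if is_ipv6 else ip
        parts.append(host if port == 0 else "%s:%d" % (host, port))
    return ",".join(parts)
```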
- -/* SwitchQoS: many-to-one relation between logical switches and nb::QoS */ -relation SwitchQoS(sw: Intern<Switch>, qos: Intern<nb::QoS>) - -SwitchQoS(sw, qos) :- - sw in &Switch(), - &nb::Logical_Switch(._uuid = sw._uuid, .qos_rules = qos_rules), - var qos_rule = FlatMap(qos_rules), - qos in &nb::QoS(._uuid = qos_rule). - -/* Reports whether a given ACL is associated with a fair meter. - * 'has_fair_meter' is false if 'acl' has no meter, or if has a meter - * that isn't a fair meter. (The latter case has two subcases: the - * case where the meter that the ACL names corresponds to an nb::Meter - * with that name, and the case where it doesn't.) */ -relation ACLHasFairMeter(acl: Intern<nb::ACL>, has_fair_meter: bool) -ACLHasFairMeter(acl, true) :- - ACLWithFairMeter(acl, _). -ACLHasFairMeter(acl, false) :- - acl in &nb::ACL(), - not ACLWithFairMeter(acl, _). - -/* All the ACLs associated with a fair meter, with their fair meters. */ -relation ACLWithFairMeter(acl: Intern<nb::ACL>, meter: Intern<nb::Meter>) -ACLWithFairMeter(acl, meter) :- - acl in &nb::ACL(.meter = Some{meter_name}), - meter in &nb::Meter(.name = meter_name, .fair = Some{true}). - -/* SwitchACL: many-to-many relation between logical switches and ACLs */ -relation &SwitchACL(sw: Intern<Switch>, - acl: Intern<nb::ACL>, - has_fair_meter: bool) - -&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = has_fair_meter) :- - LogicalSwitchACL(sw_uuid, acl_uuid), - sw in &Switch(._uuid = sw_uuid), - acl in &nb::ACL(._uuid = acl_uuid), - ACLHasFairMeter(acl, has_fair_meter). - -function oVN_FEATURE_PORT_UP_NOTIF(): istring { i"port-up-notif" } -relation SwitchPortUp0(lsp: uuid) -SwitchPortUp0(lsp) :- - &nb::Logical_Switch_Port(._uuid = lsp, .__type = i"router"). 
-SwitchPortUp0(lsp) :- - &nb::Logical_Switch_Port(._uuid = lsp, .name = lsp_name, .__type = __type), - sb::Port_Binding(.logical_port = lsp_name, .up = up, .chassis = Some{chassis_uuid}), - sb::Chassis(._uuid = chassis_uuid, .other_config = other_config), - if (other_config.get_bool_def(oVN_FEATURE_PORT_UP_NOTIF(), false)) { - up == Some{true} - } else { - true - }. - -relation SwitchPortUp(lsp: uuid, up: bool) -SwitchPortUp(lsp, true) :- SwitchPortUp0(lsp). -SwitchPortUp(lsp, false) :- &nb::Logical_Switch_Port(._uuid = lsp), not SwitchPortUp0(lsp). - -relation SwitchPortHAChassisGroup0(lsp_uuid: uuid, hac_group_uuid: uuid) -SwitchPortHAChassisGroup0(lsp_uuid, ha_chassis_group_uuid(ls_uuid)) :- - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - lsp.__type == i"external", - Some{var hac_group_uuid} = lsp.ha_chassis_group, - ha_chassis_group in nb::HA_Chassis_Group(._uuid = hac_group_uuid), - /* If the group is empty, then HA_Chassis_Group record will not be created in SB, - * and so we should not create a reference to the group in Port_Binding table, - * to avoid integrity violation. */ - not ha_chassis_group.ha_chassis.is_empty(), - LogicalSwitchPort(.lport = lsp_uuid, .lswitch = ls_uuid). -relation SwitchPortHAChassisGroup(lsp_uuid: uuid, hac_group_uuid: Option<uuid>) -SwitchPortHAChassisGroup(lsp_uuid, Some{hac_group_uuid}) :- - SwitchPortHAChassisGroup0(lsp_uuid, hac_group_uuid). -SwitchPortHAChassisGroup(lsp_uuid, None) :- - lsp in &nb::Logical_Switch_Port(._uuid = lsp_uuid), - not SwitchPortHAChassisGroup0(lsp_uuid, _). 
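The `SwitchPortUp0`/`SwitchPortUp` rules above consider a bound port "up" unconditionally, unless the chassis advertises the `port-up-notif` feature, in which case `Port_Binding.up` from ovn-controller is trusted. A minimal Python sketch of that decision (`switch_port_up` is an illustrative name):

```python
def switch_port_up(port_type, bound_chassis, pb_up, chassis_has_up_notif):
    """Decide whether a logical switch port is considered up.

    Mirrors the deleted SwitchPortUp0 rules: router ports are always up;
    otherwise the port must be bound to a chassis, and if that chassis
    sets other_config:port-up-notif we additionally require
    Port_Binding.up == true.
    """
    if port_type == "router":
        return True
    if not bound_chassis:
        return False
    if chassis_has_up_notif:
        return pb_up is True              # trust ovn-controller's report
    return True                           # binding alone implies up
```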
- -/* SwitchPort relation collects all attributes of a logical switch port - * - `peer` - peer router port, if any - * - `static_dynamic_mac` - port has a "dynamic" address that contains a static MAC, - * e.g., "80:fa:5b:06:72:b7 dynamic" - * - `static_dynamic_ipv4`, `static_dynamic_ipv6` - port has a "dynamic" address that contains a static IP, - * e.g., "dynamic 192.168.1.2" - * - `needs_dynamic_ipv4address` - port requires a dynamically allocated IPv4 address - * - `needs_dynamic_macaddress` - port requires a dynamically allocated MAC address - * - `needs_dynamic_tag` - port requires a dynamically allocated tag - * - `up` - true if the port is bound to a chassis or has type "" - * - 'hac_group_uuid' - uuid of sb::HA_Chassis_Group, only for "external" ports - */ -typedef SwitchPort = SwitchPort { - lsp: Intern<nb::Logical_Switch_Port>, - json_name: istring, - sw: Intern<Switch>, - peer: Option<Intern<RouterPort>>, - static_addresses: Vec<lport_addresses>, - dynamic_address: Option<lport_addresses>, - static_dynamic_mac: Option<eth_addr>, - static_dynamic_ipv4: Option<in_addr>, - static_dynamic_ipv6: Option<in6_addr>, - ps_addresses: Vec<lport_addresses>, - ps_eth_addresses: Vec<istring>, - parent_name: Option<istring>, - needs_dynamic_ipv4address: bool, - needs_dynamic_macaddress: bool, - needs_dynamic_ipv6address: bool, - needs_dynamic_tag: bool, - up: bool, - mcast_cfg: Intern<McastPortCfg>, - hac_group_uuid: Option<uuid> -} - -relation SwitchPort[Intern<SwitchPort>] - -SwitchPort[SwitchPort{ - .lsp = lsp, - .json_name = lsp.name.json_escape().intern(), - .sw = sw, - .peer = peer, - .static_addresses = static_addresses, - .dynamic_address = dynamic_address, - .static_dynamic_mac = static_dynamic_mac, - .static_dynamic_ipv4 = static_dynamic_ipv4, - .static_dynamic_ipv6 = static_dynamic_ipv6, - .ps_addresses = ps_addresses, - .ps_eth_addresses = ps_eth_addresses, - .parent_name = parent_name, - .needs_dynamic_ipv4address = needs_dynamic_ipv4address, - 
.needs_dynamic_macaddress = needs_dynamic_macaddress, - .needs_dynamic_ipv6address = needs_dynamic_ipv6address, - .needs_dynamic_tag = needs_dynamic_tag, - .up = up, - .mcast_cfg = mcast_cfg, - .hac_group_uuid = hac_group_uuid - }.intern()] :- - lsp in &nb::Logical_Switch_Port(), - LogicalSwitchPort(lsp._uuid, lswitch_uuid), - sw in &Switch(._uuid = lswitch_uuid, - .other_config = other_config, - .subnet = subnet, - .ipv6_prefix = ipv6_prefix), - SwitchRouterPeerRef(lsp._uuid, peer), - SwitchPortUp(lsp._uuid, up), - mcast_cfg in &McastPortCfg(.port = lsp._uuid, .router_port = false), - var static_addresses = { - var static_addresses = vec_empty(); - for (addr in lsp.addresses) { - if ((addr != i"router") and (not is_dynamic_lsp_address(addr.ival()))) { - match (extract_lsp_addresses(addr.ival())) { - None -> (), - Some{lport_addr} -> static_addresses.push(lport_addr) - } - } else () - }; - static_addresses - }, - var ps_addresses = { - var ps_addresses = vec_empty(); - for (addr in lsp.port_security) { - match (extract_lsp_addresses(addr.ival())) { - None -> (), - Some{lport_addr} -> ps_addresses.push(lport_addr) - } - }; - ps_addresses - }, - var ps_eth_addresses = { - var ps_eth_addresses = vec_empty(); - for (ps_addr in ps_addresses) { - ps_eth_addresses.push(i"${ps_addr.ea}") - }; - ps_eth_addresses - }, - var dynamic_address = match (lsp.dynamic_addresses) { - None -> None, - Some{lport_addr} -> extract_lsp_addresses(lport_addr.ival()) - }, - (var static_dynamic_mac, - var static_dynamic_ipv4, - var static_dynamic_ipv6, - var has_dyn_lsp_addr) = { - var dynamic_address_request = None; - for (addr in lsp.addresses) { - dynamic_address_request = parse_dynamic_address_request(addr.ival()); - if (dynamic_address_request.is_some()) { - break - } - }; - - match (dynamic_address_request) { - Some{DynamicAddressRequest{mac, ipv4, ipv6}} -> (mac, ipv4, ipv6, true), - None -> (None, None, None, false) - } - }, - var needs_dynamic_ipv4address = has_dyn_lsp_addr and peer 
== None and subnet.is_some() and - static_dynamic_ipv4 == None, - var needs_dynamic_macaddress = has_dyn_lsp_addr and peer == None and static_dynamic_mac == None and - (subnet.is_some() or ipv6_prefix.is_some() or - other_config.get(i"mac_only") == Some{i"true"}), - var needs_dynamic_ipv6address = has_dyn_lsp_addr and peer == None and ipv6_prefix.is_some() and static_dynamic_ipv6 == None, - var parent_name = match (lsp.parent_name) { - None -> None, - Some{pname} -> if (pname == i"") { None } else { Some{pname} } - }, - /* Port needs dynamic tag if it has a parent and its `tag_request` is 0. */ - var needs_dynamic_tag = parent_name.is_some() and lsp.tag_request == Some{0}, - SwitchPortHAChassisGroup(.lsp_uuid = lsp._uuid, - .hac_group_uuid = hac_group_uuid). - -/* Switch port port security addresses */ -relation SwitchPortPSAddresses(port: Intern<SwitchPort>, - ps_addrs: lport_addresses) - -SwitchPortPSAddresses(port, ps_addrs) :- - port in &SwitchPort(.ps_addresses = ps_addresses), - var ps_addrs = FlatMap(ps_addresses). - -/* All static addresses associated with a port parsed into - * the lport_addresses data structure */ -relation SwitchPortStaticAddresses(port: Intern<SwitchPort>, - addrs: lport_addresses) -SwitchPortStaticAddresses(port, addrs) :- - port in &SwitchPort(.static_addresses = static_addresses), - var addrs = FlatMap(static_addresses). - -/* All static and dynamic addresses associated with a port parsed into - * the lport_addresses data structure */ -relation SwitchPortAddresses(port: Intern<SwitchPort>, - addrs: lport_addresses) - -SwitchPortAddresses(port, addrs) :- SwitchPortStaticAddresses(port, addrs). - -SwitchPortAddresses(port, dynamic_address) :- - SwitchPortNewDynamicAddress(port, Some{dynamic_address}). - -/* "router" is a special Logical_Switch_Port address value that indicates that the Ethernet, IPv4, and IPv6 - * this port should be obtained from the connected logical router port, as specified by router-port in - * options. 
- * - * The resulting addresses are used to populate the logical switch’s destination lookup, and also for the - * logical switch to generate ARP and ND replies. - * - * If the connected logical router port is a distributed gateway port and the logical router has rules - * specified in nat with external_mac, then those addresses are also used to populate the switch’s destination - * lookup. */ -SwitchPortAddresses(port, addrs) :- - port in &SwitchPort(.lsp = lsp, .peer = Some{&rport}), - Some{var addrs} = { - var opt_addrs = None; - for (addr in lsp.addresses) { - if (addr == i"router") { - opt_addrs = Some{rport.networks} - } else () - }; - opt_addrs - }. - -/* All static and dynamic IPv4 addresses associated with a port */ -relation SwitchPortIPv4Address(port: Intern<SwitchPort>, - ea: eth_addr, - addr: ipv4_netaddr) - -SwitchPortIPv4Address(port, ea, addr) :- - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv4_addrs = addrs}), - var addr = FlatMap(addrs). - -/* All static and dynamic IPv6 addresses associated with a port */ -relation SwitchPortIPv6Address(port: Intern<SwitchPort>, - ea: eth_addr, - addr: ipv6_netaddr) - -SwitchPortIPv6Address(port, ea, addr) :- - SwitchPortAddresses(port, LPortAddress{.ea = ea, .ipv6_addrs = addrs}), - var addr = FlatMap(addrs). - -/* Service monitoring. */ - -/* MAC allocated for service monitor usage. Just one mac is allocated - * for this purpose and ovn-controller's on each chassis will make use - * of this mac when sending out the packets to monitor the services - * defined in Service_Monitor Southbound table. Since these packets - * all locally handled, having just one mac is good enough. 
*/ -function get_svc_monitor_mac(options: Map<istring,istring>, uuid: uuid) - : eth_addr = -{ - var existing_mac = match ( - options.get(i"svc_monitor_mac")) - { - Some{mac} -> scan_eth_addr(mac.ival()), - None -> None - }; - match (existing_mac) { - Some{mac} -> mac, - None -> eth_addr_pseudorandom(uuid, 'h5678) - } -} -function put_svc_monitor_mac(options: mut Map<istring,istring>, - svc_monitor_mac: eth_addr) -{ - options.insert(i"svc_monitor_mac", svc_monitor_mac.to_string().intern()); -} -relation SvcMonitorMac(mac: eth_addr) -SvcMonitorMac(get_svc_monitor_mac(options, uuid)) :- - nb::NB_Global(._uuid = uuid, .options = options). - -relation UseCtInvMatch[bool] -UseCtInvMatch[options.get_bool_def(i"use_ct_inv_match", true)] :- - nb::NB_Global(.options = options). -UseCtInvMatch[true] :- - Unit(), - not nb in nb::NB_Global(). diff --git a/northd/multicast.dl b/northd/multicast.dl deleted file mode 100644 index 56bfa0c637..0000000000 --- a/northd/multicast.dl +++ /dev/null @@ -1,273 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -import OVN_Northbound as nb -import OVN_Southbound as sb -import ovn -import ovsdb -import helpers -import lswitch -import lrouter - -function mCAST_DEFAULT_MAX_ENTRIES(): integer = 2048 - -function mCAST_DEFAULT_IDLE_TIMEOUT_S(): integer = 300 -function mCAST_IDLE_TIMEOUT_S_RANGE(): (integer, integer) = (15, 3600) - -function mCAST_DEFAULT_QUERY_INTERVAL_S(): integer = 1 -function mCAST_QUERY_INTERVAL_S_RANGE(): (integer, integer) = (1, 3600) - -function mCAST_DEFAULT_QUERY_MAX_RESPONSE_S(): integer = 1 - -/* IP Multicast per switch configuration. */ -typedef McastSwitchCfg = McastSwitchCfg { - datapath : uuid, - enabled : bool, - querier : bool, - flood_unreg : bool, - eth_src : istring, - ip4_src : istring, - ip6_src : istring, - table_size : integer, - idle_timeout : integer, - query_interval: integer, - query_max_resp: integer -} - -relation McastSwitchCfg[Intern<McastSwitchCfg>] - - /* FIXME: Right now table_size is enforced only in ovn-controller but in - * the ovn-northd C version we enforce it on the aggregate groups too. 
- */ - -McastSwitchCfg[McastSwitchCfg { - .datapath = ls_uuid, - .enabled = other_config.get_bool_def(i"mcast_snoop", false), - .querier = other_config.get_bool_def(i"mcast_querier", true), - .flood_unreg = other_config.get_bool_def(i"mcast_flood_unregistered", false), - .eth_src = other_config.get(i"mcast_eth_src").unwrap_or(i""), - .ip4_src = other_config.get(i"mcast_ip4_src").unwrap_or(i""), - .ip6_src = other_config.get(i"mcast_ip6_src").unwrap_or(i""), - .table_size = other_config.get_int_def(i"mcast_table_size", mCAST_DEFAULT_MAX_ENTRIES()), - .idle_timeout = idle_timeout, - .query_interval = query_interval, - .query_max_resp = query_max_resp - }.intern()] :- - &nb::Logical_Switch(._uuid = ls_uuid, - .other_config = other_config), - var idle_timeout = other_config.get_int_def(i"mcast_idle_timeout", mCAST_DEFAULT_IDLE_TIMEOUT_S()) - .clamp(mCAST_IDLE_TIMEOUT_S_RANGE()), - var query_interval = other_config.get_int_def(i"mcast_query_interval", idle_timeout / 2) - .clamp(mCAST_QUERY_INTERVAL_S_RANGE()), - var query_max_resp = other_config.get_int_def(i"mcast_query_max_response", - mCAST_DEFAULT_QUERY_MAX_RESPONSE_S()). - -/* IP Multicast per router configuration. */ -typedef McastRouterCfg = McastRouterCfg { - datapath: uuid, - relay : bool -} - -relation McastRouterCfg[Intern<McastRouterCfg>] - -McastRouterCfg[McastRouterCfg{lr_uuid, mcast_relay}.intern()] :- - nb::Logical_Router(._uuid = lr_uuid, .options = options), - var mcast_relay = options.get_bool_def(i"mcast_relay", false). - -/* IP Multicast port configuration. */ -typedef McastPortCfg = McastPortCfg { - port : uuid, - router_port : bool, - flood : bool, - flood_reports : bool -} - -relation McastPortCfg[Intern<McastPortCfg>] - -McastPortCfg[McastPortCfg{lsp_uuid, false, flood, flood_reports}.intern()] :- - &nb::Logical_Switch_Port(._uuid = lsp_uuid, .options = options), - var flood = options.get_bool_def(i"mcast_flood", false), - var flood_reports = options.get_bool_def(i"mcast_flood_reports", false). 
- -McastPortCfg[McastPortCfg{lrp_uuid, true, flood, flood}.intern()] :- - &nb::Logical_Router_Port(._uuid = lrp_uuid, .options = options), - var flood = options.get_bool_def(i"mcast_flood", false). - -/* Mapping between Switch and the set of router port uuids on which to flood - * IP multicast for relay. - */ -relation SwitchMcastFloodRelayPorts(sw: Intern<Switch>, ports: Set<uuid>) - -SwitchMcastFloodRelayPorts(switch, relay_ports) :- - &SwitchPort( - .lsp = lsp, - .sw = switch, - .peer = Some{&RouterPort{.router = &Router{.mcast_cfg = mcast_cfg}}} - ), mcast_cfg.relay, - var relay_ports = lsp._uuid.group_by(switch).to_set(). - -SwitchMcastFloodRelayPorts(switch, set_empty()) :- - Switch[switch], - not &SwitchPort( - .sw = switch, - .peer = Some{ - &RouterPort{ - .router = &Router{.mcast_cfg = &McastRouterCfg{.relay=true}} - } - } - ). - -/* Mapping between Switch and the set of port uuids on which to - * flood IP multicast statically. - */ -relation SwitchMcastFloodPorts(sw: Intern<Switch>, ports: Set<uuid>) - -SwitchMcastFloodPorts(switch, flood_ports) :- - &SwitchPort( - .lsp = lsp, - .sw = switch, - .mcast_cfg = &McastPortCfg{.flood = true}), - var flood_ports = lsp._uuid.group_by(switch).to_set(). - -SwitchMcastFloodPorts(switch, set_empty()) :- - Switch[switch], - not &SwitchPort( - .sw = switch, - .mcast_cfg = &McastPortCfg{.flood = true}). - -/* Mapping between Switch and the set of port uuids on which to - * flood IP multicast reports statically. - */ -relation SwitchMcastFloodReportPorts(sw: Intern<Switch>, ports: Set<uuid>) - -SwitchMcastFloodReportPorts(switch, flood_ports) :- - &SwitchPort( - .lsp = lsp, - .sw = switch, - .mcast_cfg = &McastPortCfg{.flood_reports = true}), - var flood_ports = lsp._uuid.group_by(switch).to_set(). - -SwitchMcastFloodReportPorts(switch, set_empty()) :- - Switch[switch], - not &SwitchPort( - .sw = switch, - .mcast_cfg = &McastPortCfg{.flood_reports = true}). 
- -/* Mapping between Router and the set of port uuids on which to - * flood IP multicast reports statically. - */ -relation RouterMcastFloodPorts(sw: Intern<Router>, ports: Set<uuid>) - -RouterMcastFloodPorts(router, flood_ports) :- - &RouterPort( - .lrp = lrp, - .router = router, - .mcast_cfg = &McastPortCfg{.flood = true} - ), - var flood_ports = lrp._uuid.group_by(router).to_set(). - -RouterMcastFloodPorts(router, set_empty()) :- - Router[router], - not &RouterPort( - .router = router, - .mcast_cfg = &McastPortCfg{.flood = true}). - -/* Flattened IGMP group. One record per address-port tuple. */ -relation IgmpSwitchGroupPort( - address: istring, - switch : Intern<Switch>, - port : uuid -) - -IgmpSwitchGroupPort(address, switch, lsp_uuid) :- - sb::IGMP_Group(.address = address, .ports = pb_ports), - var pb_port_uuid = FlatMap(pb_ports), - sb::Port_Binding(._uuid = pb_port_uuid, .logical_port = lsp_name), - &SwitchPort( - .lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .name = lsp_name}, - .sw = switch). -IgmpSwitchGroupPort(address, switch, localnet_port.0) :- - IgmpSwitchGroupPort(address, switch, _), - var localnet_port = FlatMap(switch.localnet_ports). - -/* Aggregated IGMP group: merges all IgmpSwitchGroupPort for a given - * address-switch tuple from all chassis. - */ -relation IgmpSwitchMulticastGroup( - address: istring, - switch : Intern<Switch>, - ports : Set<uuid> -) - -IgmpSwitchMulticastGroup(address, switch, ports) :- - IgmpSwitchGroupPort(address, switch, port), - var ports = port.group_by((address, switch)).to_set(). - -/* Flattened IGMP group representation for routers with relay enabled. One - * record per address-port tuple for all IGMP groups learned by switches - * connected to the router. 
- */ -relation IgmpRouterGroupPort( - address: istring, - router : Intern<Router>, - port : uuid -) - -IgmpRouterGroupPort(address, rtr_port.router, rtr_port.lrp._uuid) :- - SwitchMcastFloodRelayPorts(switch, sw_flood_ports), - IgmpSwitchMulticastGroup(address, switch, _), - /* For IPv6 only relay routable multicast groups - * (RFC 4291 2.7). - */ - match (ipv6_parse(address.ival())) { - Some{ipv6} -> ipv6.is_routable_multicast(), - None -> true - }, - var flood_port = FlatMap(sw_flood_ports), - &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port}, - .peer = Some{rtr_port}), - RouterPortIsRedirect(rtr_port.lrp._uuid, false). - -/* Store the chassis redirect port uuid for redirect ports, otherwise traffic - * will not be tunneled properly. This will be translated back to the patch - * port on the remote hypervisor. - */ -IgmpRouterGroupPort(address, rtr_port.router, cr_lrp_uuid) :- - SwitchMcastFloodRelayPorts(switch, sw_flood_ports), - IgmpSwitchMulticastGroup(address, switch, _), - /* For IPv6 only relay routable multicast groups - * (RFC 4291 2.7). - */ - match (ipv6_parse(address.ival())) { - Some{ipv6} -> ipv6.is_routable_multicast(), - None -> true - }, - var flood_port = FlatMap(sw_flood_ports), - &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = flood_port}, - .peer = Some{rtr_port}), - RouterPortIsRedirect(rtr_port.lrp._uuid, true), - DistributedGatewayPort(rtr_port.lrp, _, cr_lrp_uuid). - -/* Aggregated IGMP group for routers: merges all IgmpRouterGroupPort for - * a given address-router tuple from all connected switches. - */ -relation IgmpRouterMulticastGroup( - address: istring, - router : Intern<Router>, - ports : Set<uuid> -) - -IgmpRouterMulticastGroup(address, router, ports) :- - IgmpRouterGroupPort(address, router, port), - var ports = port.group_by((address, router)).to_set(). 
diff --git a/northd/ovn-nb.dlopts b/northd/ovn-nb.dlopts
deleted file mode 100644
index 9a460adef4..0000000000
--- a/northd/ovn-nb.dlopts
+++ /dev/null
@@ -1,27 +0,0 @@
---intern-strings
--o BFD
---rw BFD.status
--o Logical_Router_Port
---rw Logical_Router_Port.ipv6_prefix
--o Logical_Switch_Port
---rw Logical_Switch_Port.tag
---rw Logical_Switch_Port.dynamic_addresses
---rw Logical_Switch_Port.up
--o NB_Global
---rw NB_Global.sb_cfg
---rw NB_Global.hv_cfg
---rw NB_Global.options
---rw NB_Global.ipsec
---rw NB_Global.nb_cfg_timestamp
---rw NB_Global.hv_cfg_timestamp
---intern-table DHCP_Options
---intern-table ACL
---intern-table QoS
---intern-table Load_Balancer
---intern-table Logical_Switch
---intern-table Load_Balancer_Health_Check
---intern-table Meter
---intern-table NAT
---intern-table Address_Set
---intern-table Logical_Router_Port
---intern-table Logical_Switch_Port
diff --git a/northd/ovn-northd-ddlog.c b/northd/ovn-northd-ddlog.c
deleted file mode 100644
index 1c06bd0028..0000000000
--- a/northd/ovn-northd-ddlog.c
+++ /dev/null
@@ -1,1368 +0,0 @@
-/*
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at:
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */ - -#include <config.h> - -#include <getopt.h> -#include <stdlib.h> -#include <stdio.h> -#include <fcntl.h> -#include <unistd.h> - -#include "command-line.h" -#include "daemon.h" -#include "fatal-signal.h" -#include "hash.h" -#include "jsonrpc.h" -#include "lib/ovn-util.h" -#include "memory.h" -#include "openvswitch/json.h" -#include "openvswitch/poll-loop.h" -#include "openvswitch/vlog.h" -#include "ovs-numa.h" -#include "ovsdb-cs.h" -#include "ovsdb-data.h" -#include "ovsdb-error.h" -#include "ovsdb-parser.h" -#include "ovsdb-types.h" -#include "simap.h" -#include "stopwatch.h" -#include "lib/stopwatch-names.h" -#include "lib/uuidset.h" -#include "stream-ssl.h" -#include "stream.h" -#include "unixctl.h" -#include "util.h" -#include "uuid.h" - -#include "northd/ovn_northd_ddlog/ddlog.h" - -VLOG_DEFINE_THIS_MODULE(ovn_northd); - -#include "northd/ovn-northd-ddlog-nb.inc" -#include "northd/ovn-northd-ddlog-sb.inc" - -struct northd_status { - bool locked; - bool pause; -}; - -static unixctl_cb_func ovn_northd_exit; -static unixctl_cb_func ovn_northd_pause; -static unixctl_cb_func ovn_northd_resume; -static unixctl_cb_func ovn_northd_is_paused; -static unixctl_cb_func ovn_northd_status; - -static unixctl_cb_func ovn_northd_enable_cpu_profiling; -static unixctl_cb_func ovn_northd_disable_cpu_profiling; -static unixctl_cb_func ovn_northd_profile; - -/* --ddlog-record: The name of a file to which to record DDlog commands for - * later replay. Useful for debugging. If null (by default), DDlog commands - * are not recorded. */ -static const char *record_file; - -static const char *ovnnb_db; -static const char *ovnsb_db; -static const char *unixctl_path; - -/* SSL options */ -static const char *ssl_private_key_file; -static const char *ssl_certificate_file; -static const char *ssl_ca_cert_file; - -/* Frequently used table ids. */ -static table_id WARNING_TABLE_ID; -static table_id NB_CFG_TIMESTAMP_ID; - -/* Initialize frequently used table ids. 
*/ -static void -init_table_ids(ddlog_prog ddlog) -{ - WARNING_TABLE_ID = ddlog_get_table_id(ddlog, "helpers::Warning"); - NB_CFG_TIMESTAMP_ID = ddlog_get_table_id(ddlog, "NbCfgTimestamp"); -} - -struct northd_ctx { - /* Shared between NB and SB database contexts. */ - ddlog_prog ddlog; - ddlog_delta *delta; /* Accumulated delta to send to OVSDB. */ - - /* Database info. - * - * The '*_relations' vectors are arrays of strings that contain DDlog - * relation names, terminated by a null pointer. 'prefix' is the prefix - * for the DDlog module that contains the relations. */ - char *prefix; /* e.g. "OVN_Northbound::" */ - const char **input_relations; - const char **output_relations; - const char **output_only_relations; - - /* Whether this is the database that has the 'nb_cfg_timestamp' and - * 'sb_cfg_timestamp' columns in NB_Global. True for the northbound - * database, false for the southbound database. */ - bool has_timestamp_columns; - - /* OVSDB connection. */ - struct ovsdb_cs *cs; - struct json *request_id; /* JSON request ID for outstanding txn if any. */ - enum { - /* Initial state, before the output-only data (if any) has been - * requested. */ - S_INITIAL, - - /* Output-only data has been requested. Waiting for reply. */ - S_OUTPUT_ONLY_DATA_REQUESTED, - - /* Output-only data (if any) has been received. Any request sent out - * now would be to update data. */ - S_UPDATE, - } state; - - /* Database info. */ - const char *db_name; /* e.g. "OVN_Northbound". */ - struct json *output_only_data; - const char *lock_name; /* Name of lock we need, NULL if none. 
*/ - bool paused; -}; - -static struct ovsdb_cs_ops northd_cs_ops; - -static struct json *get_database_ops(struct northd_ctx *); -static int ddlog_clear(struct northd_ctx *); - -static void -northd_ctx_connection_status(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *ctx_) -{ - const struct northd_ctx *ctx = ctx_; - bool connected = ovsdb_cs_is_connected(ctx->cs); - unixctl_command_reply(conn, connected ? "connected" : "not connected"); -} - -static void -northd_ctx_cluster_state_reset(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *ctx_) -{ - const struct northd_ctx *ctx = ctx_; - VLOG_INFO("Resetting %s database cluster state", ctx->db_name); - ovsdb_cs_reset_min_index(ctx->cs); - unixctl_command_reply(conn, NULL); -} - -static struct northd_ctx * -northd_ctx_create(const char *server, const char *database, - const char *unixctl_command_prefix, - const char *lock_name, - ddlog_prog ddlog, ddlog_delta *delta, - const char **input_relations, - const char **output_relations, - const char **output_only_relations, - bool paused) -{ - struct northd_ctx *ctx = xmalloc(sizeof *ctx); - *ctx = (struct northd_ctx) { - .ddlog = ddlog, - .delta = delta, - .prefix = xasprintf("%s::", database), - .input_relations = input_relations, - .output_relations = output_relations, - .output_only_relations = output_only_relations, - /* 'has_timestamp_columns' will get filled in later. */ - .cs = ovsdb_cs_create(database, 1 /* XXX */, &northd_cs_ops, ctx), - .state = S_INITIAL, - .db_name = database, - /* 'output_only_relations' will get filled in later. 
*/ - .lock_name = lock_name, - .paused = paused, - }; - - ovsdb_cs_set_remote(ctx->cs, server, true); - ovsdb_cs_set_lock(ctx->cs, lock_name); - - char *cmd = xasprintf("%s-connection-status", unixctl_command_prefix); - unixctl_command_register(cmd, "", 0, 0, - northd_ctx_connection_status, ctx); - free(cmd); - - cmd = xasprintf("%s-cluster-state-reset", unixctl_command_prefix); - unixctl_command_register(cmd, "", 0, 0, - northd_ctx_cluster_state_reset, ctx); - free(cmd); - - return ctx; -} - -static void -northd_ctx_destroy(struct northd_ctx *ctx) -{ - if (ctx) { - ovsdb_cs_destroy(ctx->cs); - json_destroy(ctx->request_id); - json_destroy(ctx->output_only_data); - free(ctx->prefix); - free(ctx); - } -} - -static struct json * -northd_compose_monitor_request(const struct json *schema_json, void *ctx_) -{ - struct northd_ctx *ctx = ctx_; - - struct shash *schema = ovsdb_cs_parse_schema(schema_json); - - const struct sset *nb_global = shash_find_data( - schema, "NB_Global"); - ctx->has_timestamp_columns - = (nb_global - && sset_contains(nb_global, "nb_cfg_timestamp") - && sset_contains(nb_global, "sb_cfg_timestamp")); - - struct json *monitor_requests = json_object_create(); - - /* This should be smarter about ignoring not needed ones. There's a lot - * more logic for this in ovsdb_idl_compose_monitor_request(). */ - const struct shash_node *node; - SHASH_FOR_EACH (node, schema) { - const char *table_name = node->name; - - /* Only subscribe to input relations we care about. 
*/ - for (const char **p = ctx->input_relations; *p; p++) { - if (!strcmp(table_name, *p)) { - const struct sset *schema_columns = node->data; - struct json *subscribed_columns = json_array_create_empty(); - - const char *column; - SSET_FOR_EACH (column, schema_columns) { - if (strcmp(column, "_version")) { - json_array_add(subscribed_columns, - json_string_create(column)); - } - } - - struct json *monitor_request = json_object_create(); - json_object_put(monitor_request, "columns", - subscribed_columns); - json_object_put(monitor_requests, table_name, - json_array_create_1(monitor_request)); - break; - } - } - } - ovsdb_cs_free_schema(schema); - - return monitor_requests; -} - -static struct ovsdb_cs_ops northd_cs_ops = { northd_compose_monitor_request }; - -/* Sends the database server a request for all the row UUIDs in output-only - * tables. */ -static void -northd_send_output_only_data_request(struct northd_ctx *ctx) -{ - if (ctx->output_only_relations[0]) { - json_destroy(ctx->output_only_data); - ctx->output_only_data = NULL; - - struct json *ops = json_array_create_1( - json_string_create(ctx->db_name)); - for (size_t i = 0; ctx->output_only_relations[i]; i++) { - const char *table = ctx->output_only_relations[i]; - struct json *op = json_object_create(); - json_object_put_string(op, "op", "select"); - json_object_put_string(op, "table", table); - json_object_put(op, "columns", - json_array_create_1(json_string_create("_uuid"))); - json_object_put(op, "where", json_array_create_empty()); - json_array_add(ops, op); - } - - ctx->state = S_OUTPUT_ONLY_DATA_REQUESTED; - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); - } else { - ctx->state = S_UPDATE; - } -} - -static void -northd_pause(struct northd_ctx *ctx) -{ - if (!ctx->paused && ctx->lock_name) { - ctx->paused = true; - VLOG_INFO("This ovn-northd instance is now paused."); - ovsdb_cs_set_lock(ctx->cs, NULL); - } -} - -static void -northd_unpause(struct northd_ctx *ctx) -{ - if (ctx->paused) { 
- ovsdb_cs_set_lock(ctx->cs, ctx->lock_name); - ctx->paused = false; - } -} - -static void -warning_cb(uintptr_t arg OVS_UNUSED, - table_id table OVS_UNUSED, - const ddlog_record *rec, - ssize_t weight) -{ - size_t len; - const char *s = ddlog_get_str_with_length(rec, &len); - if (weight > 0) { - VLOG_WARN("New warning: %.*s", (int)len, s); - } else { - VLOG_WARN("Warning cleared: %.*s", (int)len, s); - } -} - -static int -ddlog_commit(ddlog_prog ddlog, ddlog_delta *delta) -{ - ddlog_delta *new_delta = ddlog_transaction_commit_dump_changes(ddlog); - if (!new_delta) { - VLOG_WARN("Transaction commit failed"); - return -1; - } - - /* Remove warnings from delta and output them straight away. */ - ddlog_delta *warnings = ddlog_delta_remove_table(new_delta, WARNING_TABLE_ID); - ddlog_delta_enumerate(warnings, warning_cb, 0); - ddlog_free_delta(warnings); - - /* Merge changes into `delta`. */ - ddlog_delta_union(delta, new_delta); - ddlog_free_delta(new_delta); - - return 0; -} - -static int -ddlog_clear(struct northd_ctx *ctx) -{ - int n_failures = 0; - for (int i = 0; ctx->input_relations[i]; i++) { - char *table = xasprintf("%s%s", ctx->prefix, ctx->input_relations[i]); - if (ddlog_clear_relation(ctx->ddlog, ddlog_get_table_id(ctx->ddlog, - table))) { - n_failures++; - } - free(table); - } - if (n_failures) { - VLOG_WARN("failed to clear %d tables in %s database", - n_failures, ctx->db_name); - } - return n_failures; -} - -static const struct json * -json_object_get(const struct json *json, const char *member_name) -{ - return (json && json->type == JSON_OBJECT - ? shash_find_data(json_object(json), member_name) - : NULL); -} - -/* Stores into '*nb_cfgp' the new value of NB_Global::nb_cfg in the updates in - * <table-updates> provided by the caller. Leaves '*nb_cfgp' alone if the - * updates don't set NB_Global::nb_cfg. 
*/ -static void -get_nb_cfg(const struct json *table_updates, int64_t *nb_cfgp) -{ - const struct json *nb_global = json_object_get(table_updates, "NB_Global"); - if (nb_global) { - struct shash_node *row; - SHASH_FOR_EACH (row, json_object(nb_global)) { - const struct json *value = row->data; - const struct json *new = json_object_get(value, "new"); - const struct json *nb_cfg = json_object_get(new, "nb_cfg"); - if (nb_cfg && nb_cfg->type == JSON_INTEGER) { - *nb_cfgp = json_integer(nb_cfg); - return; - } - } - } -} - -static void -northd_parse_updates(struct northd_ctx *ctx, struct ovs_list *updates) -{ - if (ovs_list_is_empty(updates)) { - return; - } - - if (ddlog_transaction_start(ctx->ddlog)) { - VLOG_WARN("DDlog failed to start transaction"); - return; - } - - - /* Whenever a new 'nb_cfg' value comes in, we take the current time and - * push it into the NbCfgTimestamp relation for the DDlog program to put - * into nb::NB_Global.nb_cfg_timestamp. - * - * The 'old_nb_cfg' variables track the state we've pushed into DDlog. - * The 'new_nb_cfg' variables track what 'updates' sets (by default, - * no change, so we initialize from the old variables). */ - static int64_t old_nb_cfg = INT64_MIN; - static int64_t old_nb_cfg_timestamp = INT64_MIN; - int64_t new_nb_cfg = old_nb_cfg == INT64_MIN ? 
0 : old_nb_cfg; - int64_t new_nb_cfg_timestamp = old_nb_cfg_timestamp; - - struct ovsdb_cs_event *event; - LIST_FOR_EACH (event, list_node, updates) { - ovs_assert(event->type == OVSDB_CS_EVENT_TYPE_UPDATE); - struct ovsdb_cs_update_event *update = &event->update; - if (update->clear && ddlog_clear(ctx)) { - goto error; - } - - char *updates_s = json_to_string(update->table_updates, 0); - if (ddlog_apply_ovsdb_updates(ctx->ddlog, ctx->prefix, updates_s)) { - VLOG_WARN("DDlog failed to apply updates %s", updates_s); - free(updates_s); - goto error; - } - free(updates_s); - - if (ctx->has_timestamp_columns) { - get_nb_cfg(update->table_updates, &new_nb_cfg); - } - } - - if (ctx->has_timestamp_columns && new_nb_cfg != old_nb_cfg) { - new_nb_cfg_timestamp = time_wall_msec(); - - ddlog_cmd *cmds[2]; - int n_cmds = 0; - if (old_nb_cfg_timestamp != INT64_MIN) { - cmds[n_cmds++] = ddlog_delete_val_cmd( - NB_CFG_TIMESTAMP_ID, ddlog_i64(old_nb_cfg_timestamp)); - } - cmds[n_cmds++] = ddlog_insert_cmd( - NB_CFG_TIMESTAMP_ID, ddlog_i64(new_nb_cfg_timestamp)); - if (ddlog_apply_updates(ctx->ddlog, cmds, n_cmds) < 0) { - goto error; - } - } - - /* Commit changes to DDlog. */ - if (ddlog_commit(ctx->ddlog, ctx->delta)) { - goto error; - } - if (ctx->has_timestamp_columns) { - old_nb_cfg = new_nb_cfg; - old_nb_cfg_timestamp = new_nb_cfg_timestamp; - } - - /* This update may have implications for the other side, so - * immediately wake to check for more changes to be applied. 
*/ - poll_immediate_wake(); - - return; - -error: - ddlog_transaction_rollback(ctx->ddlog); -} - -static void -northd_process_txn_reply(struct northd_ctx *ctx, - const struct jsonrpc_msg *reply) -{ - if (!json_equal(reply->id, ctx->request_id)) { - VLOG_WARN("unexpected transaction reply"); - return; - } - - json_destroy(ctx->request_id); - ctx->request_id = NULL; - - if (reply->type == JSONRPC_ERROR) { - char *s = jsonrpc_msg_to_string(reply); - VLOG_WARN("received database error: %s", s); - free(s); - - ovsdb_cs_force_reconnect(ctx->cs); - return; - } - - switch (ctx->state) { - case S_INITIAL: - OVS_NOT_REACHED(); - break; - - case S_OUTPUT_ONLY_DATA_REQUESTED: - json_destroy(ctx->output_only_data); - ctx->output_only_data = json_clone(reply->result); - - ctx->state = S_UPDATE; - break; - - case S_UPDATE: - /* Nothing to do. */ - break; - - default: - OVS_NOT_REACHED(); - } -} - -static void -destroy_event_list(struct ovs_list *events) -{ - struct ovsdb_cs_event *event; - LIST_FOR_EACH_POP (event, list_node, events) { - ovsdb_cs_event_destroy(event); - } -} - -/* Processes a batch of messages from the database server on 'ctx'. 
*/ -static void -northd_run(struct northd_ctx *ctx) -{ - struct ovs_list events; - ovsdb_cs_run(ctx->cs, &events); - - struct ovs_list updates = OVS_LIST_INITIALIZER(&updates); - struct ovsdb_cs_event *event; - LIST_FOR_EACH_POP (event, list_node, &events) { - switch (event->type) { - case OVSDB_CS_EVENT_TYPE_RECONNECT: - json_destroy(ctx->request_id); - ctx->state = S_INITIAL; - break; - - case OVSDB_CS_EVENT_TYPE_LOCKED: - break; - - case OVSDB_CS_EVENT_TYPE_UPDATE: - if (event->update.clear) { - destroy_event_list(&updates); - } - ovs_list_push_back(&updates, &event->list_node); - continue; - - case OVSDB_CS_EVENT_TYPE_TXN_REPLY: - northd_process_txn_reply(ctx, event->txn_reply); - break; - } - ovsdb_cs_event_destroy(event); - } - - northd_parse_updates(ctx, &updates); - destroy_event_list(&updates); - - if (ctx->state == S_INITIAL && ovsdb_cs_may_send_transaction(ctx->cs)) { - northd_send_output_only_data_request(ctx); - } -} - -/* Pass the changes for 'ctx' to its database server. */ -static void -northd_send_deltas(struct northd_ctx *ctx) -{ - if (ctx->request_id || !ovsdb_cs_may_send_transaction(ctx->cs)) { - return; - } - - struct json *ops = get_database_ops(ctx); - if (!ops) { - return; - } - - struct json *comment = json_object_create(); - json_object_put_string(comment, "op", "comment"); - json_object_put_string(comment, "comment", "ovn-northd-ddlog"); - json_array_add(ops, comment); - - ctx->request_id = ovsdb_cs_send_transaction(ctx->cs, ops); -} - -static void -northd_update_probe_interval_cb( - uintptr_t probe_intervalp_, - table_id table OVS_UNUSED, - const ddlog_record *rec, - ssize_t weight OVS_UNUSED) -{ - int *probe_intervalp = (int *) probe_intervalp_; - - int64_t x = ddlog_get_i64(rec); - *probe_intervalp = (x > 0 && x < 1000 ? 1000 - : x > INT_MAX ? INT_MAX - : x); -} - -static void -northd_update_probe_interval(struct northd_ctx *nb, struct northd_ctx *sb) -{ - /* 0 means that Northd_Probe_Interval is empty. 
That means that we haven't - * connected to the database and retrieved an initial snapshot. Thus, we - * set an infinite probe interval to allow for retrieving and stabilizing - * an initial snapshot of the databse, which can take a long time. - * - * -1 means that Northd_Probe_Interval is nonempty but the database doesn't - * set a probe interval. Thus, we use the default probe interval. - * - * Any other value is an explicit probe interval request from the - * database. */ - int probe_interval = 0; - table_id tid = ddlog_get_table_id(nb->ddlog, "Northd_Probe_Interval"); - ddlog_delta *probe_delta = ddlog_delta_remove_table(nb->delta, tid); - ddlog_delta_enumerate(probe_delta, northd_update_probe_interval_cb, (uintptr_t) &probe_interval); - ddlog_free_delta(probe_delta); - - ovsdb_cs_set_probe_interval(nb->cs, probe_interval); - ovsdb_cs_set_probe_interval(sb->cs, probe_interval); -} - -/* Arranges for poll_block() to wake up when northd_run() has something to - * do or when activity occurs on a transaction on 'ctx'. */ -static void -northd_wait(struct northd_ctx *ctx) -{ - ovsdb_cs_wait(ctx->cs); -} - -/* ddlog-specific actions. */ - -/* Generate OVSDB update command for delta-plus, delta-minus, and delta-update - * tables. */ -static void -ddlog_table_update_deltas(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, - const char *db, const char *table) -{ - int error; - char *updates; - - error = ddlog_dump_ovsdb_delta_tables(ddlog, delta, db, table, &updates); - if (error) { - VLOG_INFO("DDlog error %d dumping delta for table %s", error, table); - return; - } - - if (!updates[0]) { - ddlog_free_json(updates); - return; - } - - ds_put_cstr(ds, updates); - ds_put_char(ds, ','); - ddlog_free_json(updates); -} - -/* Generate OVSDB update command for a output-only table. 
*/ -static void -ddlog_table_update_output(struct ds *ds, ddlog_prog ddlog, ddlog_delta *delta, - const char *db, const char *table) -{ - int error; - char *updates; - - error = ddlog_dump_ovsdb_output_table(ddlog, delta, db, table, &updates); - if (error) { - VLOG_WARN("%s: failed to generate update commands for " - "output-only table (error %d)", table, error); - return; - } - char *table_name = xasprintf("%s::Out_%s", db, table); - ddlog_delta_clear_table(delta, ddlog_get_table_id(ddlog, table_name)); - free(table_name); - - if (!updates[0]) { - ddlog_free_json(updates); - return; - } - - ds_put_cstr(ds, updates); - ds_put_char(ds, ','); - ddlog_free_json(updates); -} - -static struct ovsdb_error * -parse_output_only_data(const struct json *txn_result, size_t index, - struct uuidset *uuidset) -{ - if (txn_result->type != JSON_ARRAY || txn_result->array.n <= index) { - return ovsdb_syntax_error(txn_result, NULL, - "transaction result missing for " - "output-only relation %"PRIuSIZE, index); - } - - struct ovsdb_parser p; - ovsdb_parser_init(&p, txn_result->array.elems[0], "select result"); - const struct json *rows = ovsdb_parser_member(&p, "rows", OP_ARRAY); - struct ovsdb_error *error = ovsdb_parser_finish(&p); - if (error) { - return error; - } - - for (size_t i = 0; i < rows->array.n; i++) { - const struct json *row = rows->array.elems[i]; - - ovsdb_parser_init(&p, row, "row"); - const struct json *uuid = ovsdb_parser_member(&p, "_uuid", OP_ARRAY); - error = ovsdb_parser_finish(&p); - if (error) { - return error; - } - - struct ovsdb_base_type base_type = OVSDB_BASE_UUID_INIT; - union ovsdb_atom atom; - error = ovsdb_atom_from_json(&atom, &base_type, uuid, NULL); - if (error) { - return error; - } - uuidset_insert(uuidset, &atom.uuid); - } - - return NULL; -} - -static bool -get_ddlog_uuid(const ddlog_record *rec, struct uuid *uuid) -{ - if (!ddlog_is_int(rec)) { - return false; - } - - __uint128_t u128 = ddlog_get_u128(rec); - uuid->parts[0] = u128 >> 96; - 
-    uuid->parts[1] = u128 >> 64;
-    uuid->parts[2] = u128 >> 32;
-    uuid->parts[3] = u128;
-    return true;
-}
-
-struct dump_index_data {
-    ddlog_prog prog;
-    struct uuidset *rows_present;
-    const char *table;
-    struct ds *ops_s;
-};
-
-static void OVS_UNUSED
-index_cb(uintptr_t data_, const ddlog_record *rec)
-{
-    static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
-    struct dump_index_data *data = (struct dump_index_data *) data_;
-
-    /* Extract the rec's row UUID as 'uuid'. */
-    const ddlog_record *rec_uuid = ddlog_get_named_struct_field(rec, "_uuid");
-    if (!rec_uuid) {
-        VLOG_WARN_RL(&rl, "%s: row has no _uuid column", data->table);
-        return;
-    }
-    struct uuid uuid;
-    if (!get_ddlog_uuid(rec_uuid, &uuid)) {
-        VLOG_WARN_RL(&rl, "%s: _uuid column has unexpected type", data->table);
-        return;
-    }
-
-    /* If a row with the given UUID was already in the database, then
-     * send an operation to update it; otherwise, send an operation to
-     * insert it. */
-    struct uuidset_node *node = uuidset_find(data->rows_present, &uuid);
-    char *s = NULL;
-    int ret;
-    if (node) {
-        uuidset_delete(data->rows_present, node);
-        ret = ddlog_into_ovsdb_update_str(data->prog, data->table, rec, &s);
-    } else {
-        ret = ddlog_into_ovsdb_insert_str(data->prog, data->table, rec, &s);
-    }
-    if (ret) {
-        VLOG_WARN_RL(&rl, "%s: ddlog could not convert row into database op",
-                     data->table);
-        return;
-    }
-    ds_put_format(data->ops_s, "%s,", s);
-    ddlog_free_json(s);
-}
-
-static struct json *
-where_uuid_equals(const struct uuid *uuid)
-{
-    return
-        json_array_create_1(
-            json_array_create_3(
-                json_string_create("_uuid"),
-                json_string_create("=="),
-                json_array_create_2(
-                    json_string_create("uuid"),
-                    json_string_create_nocopy(
-                        xasprintf(UUID_FMT, UUID_ARGS(uuid))))));
-}
-
-static void
-add_delete_row_op(const char *table, const struct uuid *uuid, struct ds *ops_s)
-{
-    struct json *op = json_object_create();
-    json_object_put_string(op, "op", "delete");
-    json_object_put_string(op,
-                           "table", table);
-    json_object_put(op, "where", where_uuid_equals(uuid));
-    json_to_ds(op, 0, ops_s);
-    json_destroy(op);
-    ds_put_char(ops_s, ',');
-}
-
-static void
-northd_update_sb_cfg_cb(
-    uintptr_t new_sb_cfgp_,
-    table_id table OVS_UNUSED,
-    const ddlog_record *rec,
-    ssize_t weight)
-{
-    int64_t *new_sb_cfgp = (int64_t *) new_sb_cfgp_;
-
-    if (weight < 0) {
-        return;
-    }
-
-    if (ddlog_get_int(rec, NULL, 0) <= sizeof *new_sb_cfgp) {
-        *new_sb_cfgp = ddlog_get_i64(rec);
-    }
-}
-
-static struct json *
-get_database_ops(struct northd_ctx *ctx)
-{
-    struct ds ops_s = DS_EMPTY_INITIALIZER;
-    ds_put_char(&ops_s, '[');
-    json_string_escape(ctx->db_name, &ops_s);
-    ds_put_char(&ops_s, ',');
-    size_t start_len = ops_s.length;
-
-    for (const char **p = ctx->output_relations; *p; p++) {
-        ddlog_table_update_deltas(&ops_s, ctx->ddlog, ctx->delta,
-                                  ctx->db_name, *p);
-    }
-
-    if (ctx->output_only_data) {
-        /*
-         * We just reconnected to the database (or connected for the first time
-         * in this execution).  We assume that the contents of the output-only
-         * tables might have changed (this is especially true the first time we
-         * connect to the database in a given execution, of course; we can't
-         * assume that the tables have any particular contents in this case).
-         *
-         * ctx->output_only_data is a database reply that tells us the
-         * UUIDs of the rows that exist in the database.  Our strategy is to
-         * compare these UUIDs to the UUIDs of the rows that exist in the DDlog
-         * analogues of these tables, and then add, delete, or update rows as
-         * necessary.
-         *
-         * (ctx->output_only_data only gives row UUIDs, not full row
-         * contents.  That means that for rows that exist in OVSDB and in
-         * DDlog, we always send an update to set all the columns.  It wouldn't
-         * save bandwidth to do anything else, since we'd always have to send
-         * the full row contents in one direction, and if there were differences
-         * we'd have to send the contents in both directions.  With this
-         * strategy we only send them in one direction even in the worst case.)
-         *
-         * (We can't just send an operation to delete all the rows and then
-         * re-add them all in the same transaction, because ovsdb-server
-         * rejects deleting a row with a given UUID and then adding the same
-         * UUID back in a single transaction.)
-         */
-        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 2);
-
-        for (size_t i = 0; ctx->output_only_relations[i]; i++) {
-            const char *table = ctx->output_only_relations[i];
-
-            /* Parse the list of row UUIDs received from OVSDB. */
-            struct uuidset rows_present = UUIDSET_INITIALIZER(&rows_present);
-            struct ovsdb_error *error = parse_output_only_data(
-                ctx->output_only_data, i, &rows_present);
-            if (error) {
-                char *s = ovsdb_error_to_string_free(error);
-                VLOG_WARN_RL(&rl, "%s", s);
-                free(s);
-                uuidset_destroy(&rows_present);
-                continue;
-            }
-
-            /* Get the index_id for the DDlog table.
-             *
-             * We require output-only tables to have an accompanying index
-             * named <table>_Index. */
-            char *index = xasprintf("%s_Index", table);
-            index_id idxid = ddlog_get_index_id(ctx->ddlog, index);
-            if (idxid == -1) {
-                VLOG_WARN_RL(&rl, "%s: unknown index", index);
-                free(index);
-                uuidset_destroy(&rows_present);
-                continue;
-            }
-            free(index);
-
-            /* For each row in the index, update a corresponding OVSDB row, if
-             * there is one, otherwise insert a new row. */
-            struct dump_index_data cbdata = {
-                ctx->ddlog, &rows_present, table, &ops_s
-            };
-            ddlog_dump_index(ctx->ddlog, idxid, index_cb, (uintptr_t) &cbdata);
-
-            /* Any UUIDs remaining in 'rows_present' are rows that are in OVSDB
-             * but not DDlog.  Delete them from OVSDB. */
-            struct uuidset_node *node;
-            UUIDSET_FOR_EACH (node, &rows_present) {
-                add_delete_row_op(table, &node->uuid, &ops_s);
-            }
-            uuidset_destroy(&rows_present);
-
-            /* Discard any queued output to this table, since we just
-             * did a full sync to it.
*/ - struct ds tmp = DS_EMPTY_INITIALIZER; - ddlog_table_update_output(&tmp, ctx->ddlog, ctx->delta, - ctx->db_name, table); - ds_destroy(&tmp); - } - - json_destroy(ctx->output_only_data); - ctx->output_only_data = NULL; - } else { - for (const char **p = ctx->output_only_relations; *p; p++) { - ddlog_table_update_output(&ops_s, ctx->ddlog, ctx->delta, - ctx->db_name, *p); - } - } - - /* If we're updating nb::NB_Global.sb_cfg, then also update - * sb_cfg_timestamp. - * - * XXX If the transaction we're sending to the database fails, then - * currently as written we'll never find out about it and sb_cfg_timestamp - * will not be updated. - */ - static int64_t old_sb_cfg = INT64_MIN; - static int64_t old_sb_cfg_timestamp = INT64_MIN; - int64_t new_sb_cfg = old_sb_cfg; - if (ctx->has_timestamp_columns) { - table_id sb_cfg_tid = ddlog_get_table_id(ctx->ddlog, "SbCfg"); - ddlog_delta *sb_cfg_delta = ddlog_delta_remove_table(ctx->delta, - sb_cfg_tid); - ddlog_delta_enumerate(sb_cfg_delta, northd_update_sb_cfg_cb, - (uintptr_t) &new_sb_cfg); - ddlog_free_delta(sb_cfg_delta); - - if (new_sb_cfg != old_sb_cfg) { - old_sb_cfg = new_sb_cfg; - old_sb_cfg_timestamp = time_wall_msec(); - ds_put_format(&ops_s, "{\"op\":\"update\",\"table\":\"NB_Global\",\"where\":[]," - "\"row\":{\"sb_cfg_timestamp\":%"PRId64"}},", old_sb_cfg_timestamp); - } - } - - struct json *ops; - if (ops_s.length > start_len) { - ds_chomp(&ops_s, ','); - ds_put_char(&ops_s, ']'); - ops = json_from_string(ds_cstr(&ops_s)); - } else { - ops = NULL; - } - - ds_destroy(&ops_s); - - return ops; -} - -/* Callback used by the ddlog engine to print error messages. Note that - * this is only used by the ddlog runtime, as opposed to the application - * code in ovn_northd.dl, which uses the vlog facility directly. 
*/ -static void -ddlog_print_error(const char *msg) -{ - VLOG_ERR("%s", msg); -} - -static void -usage(void) -{ - printf("\ -%s: OVN northbound management daemon (DDlog version)\n\ -usage: %s [OPTIONS]\n\ -\n\ -Options:\n\ - --ovnnb-db=DATABASE connect to ovn-nb database at DATABASE\n\ - (default: %s)\n\ - --ovnsb-db=DATABASE connect to ovn-sb database at DATABASE\n\ - (default: %s)\n\ - --dry-run start in paused state (do not commit db changes)\n\ - --ddlog-record=FILE.TXT record db changes to replay later for debugging\n\ - --unixctl=SOCKET override default control socket name\n\ - -h, --help display this help message\n\ - -o, --options list available options\n\ - -V, --version display version information\n\ -", program_name, program_name, default_nb_db(), default_sb_db()); - daemon_usage(); - vlog_usage(); - stream_usage("database", true, true, false); -} - -static void -parse_options(int argc OVS_UNUSED, char *argv[] OVS_UNUSED, - bool *pause) -{ - enum { - OVN_DAEMON_OPTION_ENUMS, - VLOG_OPTION_ENUMS, - SSL_OPTION_ENUMS, - OPT_DRY_RUN, - OPT_DDLOG_RECORD, - }; - static const struct option long_options[] = { - {"ovnsb-db", required_argument, NULL, 'd'}, - {"ovnnb-db", required_argument, NULL, 'D'}, - {"unixctl", required_argument, NULL, 'u'}, - {"help", no_argument, NULL, 'h'}, - {"options", no_argument, NULL, 'o'}, - {"version", no_argument, NULL, 'V'}, - {"dry-run", no_argument, NULL, OPT_DRY_RUN}, - {"ddlog-record", required_argument, NULL, OPT_DDLOG_RECORD}, - OVN_DAEMON_LONG_OPTIONS, - VLOG_LONG_OPTIONS, - STREAM_SSL_LONG_OPTIONS, - {NULL, 0, NULL, 0}, - }; - char *short_options = ovs_cmdl_long_options_to_short_options(long_options); - - for (;;) { - int c; - - c = getopt_long(argc, argv, short_options, long_options, NULL); - if (c == -1) { - break; - } - - switch (c) { - OVN_DAEMON_OPTION_HANDLERS; - VLOG_OPTION_HANDLERS; - - case 'p': - ssl_private_key_file = optarg; - break; - - case 'c': - ssl_certificate_file = optarg; - break; - - case 'C': - 
ssl_ca_cert_file = optarg; - break; - - case 'd': - ovnsb_db = optarg; - break; - - case 'D': - ovnnb_db = optarg; - break; - - case 'u': - unixctl_path = optarg; - break; - - case 'h': - usage(); - exit(EXIT_SUCCESS); - - case 'o': - ovs_cmdl_print_options(long_options); - exit(EXIT_SUCCESS); - - case 'V': - ovs_print_version(0, 0); - exit(EXIT_SUCCESS); - - case OPT_DRY_RUN: - *pause = true; - break; - - case OPT_DDLOG_RECORD: - record_file = optarg; - break; - - default: - break; - } - } - - if (!ovnsb_db || !ovnsb_db[0]) { - ovnsb_db = default_sb_db(); - } - - if (!ovnnb_db || !ovnnb_db[0]) { - ovnnb_db = default_nb_db(); - } - - free(short_options); -} - -static void -update_ssl_config(void) -{ - if (ssl_private_key_file && ssl_certificate_file) { - stream_ssl_set_key_and_cert(ssl_private_key_file, - ssl_certificate_file); - } - if (ssl_ca_cert_file) { - stream_ssl_set_ca_cert_file(ssl_ca_cert_file, false); - } -} - -int -main(int argc, char *argv[]) -{ - int res = EXIT_SUCCESS; - struct unixctl_server *unixctl; - int retval; - bool exiting; - struct northd_status status = { - .locked = false, - .pause = false, - }; - - fatal_ignore_sigpipe(); - ovs_cmdl_proctitle_init(argc, argv); - set_program_name(argv[0]); - service_start(&argc, &argv); - parse_options(argc, argv, &status.pause); - - daemonize_start(false); - - char *abs_unixctl_path = get_abs_unix_ctl_path(unixctl_path); - retval = unixctl_server_create(abs_unixctl_path, &unixctl); - free(abs_unixctl_path); - - if (retval) { - exit(EXIT_FAILURE); - } - - unixctl_command_register("exit", "", 0, 0, ovn_northd_exit, &exiting); - unixctl_command_register("status", "", 0, 0, ovn_northd_status, &status); - - ddlog_prog ddlog; - ddlog_delta *delta; - ddlog = ddlog_run(1, false, ddlog_print_error, &delta); - if (!ddlog) { - ovs_fatal(0, "DDlog instance could not be created"); - } - init_table_ids(ddlog); - - unixctl_command_register("enable-cpu-profiling", "", 0, 0, - ovn_northd_enable_cpu_profiling, ddlog); - 
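The option table above uses the common getopt_long pattern for long-only options: flags such as `--dry-run` and `--ddlog-record` are assigned enum values above the range of any character, so they can never collide with a short option letter. A minimal, self-contained sketch of that pattern (the names here are illustrative, not the daemon's real option set):

```c
#include <assert.h>
#include <getopt.h>
#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Long-only options get values above UCHAR_MAX so they cannot clash
 * with any short option character. */
enum {
    OPT_DRY_RUN = UCHAR_MAX + 1,
    OPT_DDLOG_RECORD,
};

static const struct option demo_options[] = {
    {"dry-run", no_argument, NULL, OPT_DRY_RUN},
    {"ddlog-record", required_argument, NULL, OPT_DDLOG_RECORD},
    {NULL, 0, NULL, 0},
};

struct demo_config {
    int dry_run;                /* Set by --dry-run. */
    const char *record_file;    /* Argument of --ddlog-record. */
};

static struct demo_config
demo_parse(int argc, char *argv[])
{
    struct demo_config conf = {0, NULL};
    int c;

    optind = 1;                 /* Start scanning at argv[1]. */
    while ((c = getopt_long(argc, argv, "", demo_options, NULL)) != -1) {
        switch (c) {
        case OPT_DRY_RUN:
            conf.dry_run = 1;
            break;
        case OPT_DDLOG_RECORD:
            conf.record_file = optarg;
            break;
        default:
            break;
        }
    }
    return conf;
}
```

The real `parse_options()` additionally folds in shared option blocks (`OVN_DAEMON_LONG_OPTIONS`, `VLOG_LONG_OPTIONS`, SSL options) and derives the short-option string from the long-option table.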
unixctl_command_register("disable-cpu-profiling", "", 0, 0, - ovn_northd_disable_cpu_profiling, ddlog); - unixctl_command_register("profile", "", 0, 0, ovn_northd_profile, ddlog); - - int replay_fd = -1; - if (record_file) { - replay_fd = open(record_file, O_CREAT | O_WRONLY | O_TRUNC, 0666); - if (replay_fd < 0) { - ovs_fatal(errno, "%s: could not create DDlog record file", - record_file); - } - - if (ddlog_record_commands(ddlog, replay_fd)) { - ovs_fatal(0, "could not enable DDlog command recording"); - } - } - - struct northd_ctx *nb_ctx = northd_ctx_create( - ovnnb_db, "OVN_Northbound", "nb", NULL, ddlog, delta, - nb_input_relations, nb_output_relations, nb_output_only_relations, - status.pause); - struct northd_ctx *sb_ctx = northd_ctx_create( - ovnsb_db, "OVN_Southbound", "sb", "ovn_northd", ddlog, delta, - sb_input_relations, sb_output_relations, sb_output_only_relations, - status.pause); - - unixctl_command_register("pause", "", 0, 0, ovn_northd_pause, sb_ctx); - unixctl_command_register("resume", "", 0, 0, ovn_northd_resume, sb_ctx); - unixctl_command_register("is-paused", "", 0, 0, ovn_northd_is_paused, - sb_ctx); - - char *ovn_internal_version = ovn_get_internal_version(); - VLOG_INFO("OVN internal version is : [%s]", ovn_internal_version); - free(ovn_internal_version); - - daemonize_complete(); - - stopwatch_create(NORTHD_LOOP_STOPWATCH_NAME, SW_MS); - stopwatch_create(OVNNB_DB_RUN_STOPWATCH_NAME, SW_MS); - stopwatch_create(OVNSB_DB_RUN_STOPWATCH_NAME, SW_MS); - - /* Main loop. */ - exiting = false; - while (!exiting) { - update_ssl_config(); - memory_run(); - if (memory_should_report()) { - struct simap usage = SIMAP_INITIALIZER(&usage); - - /* Nothing special to report yet. */ - memory_report(&usage); - simap_destroy(&usage); - } - - bool has_lock = ovsdb_cs_has_lock(sb_ctx->cs); - if (!sb_ctx->paused) { - if (has_lock && !status.locked) { - VLOG_INFO("ovn-northd lock acquired. 
" - "This ovn-northd instance is now active."); - } else if (!has_lock && status.locked) { - VLOG_INFO("ovn-northd lock lost. " - "This ovn-northd instance is now on standby."); - } - } - status.locked = has_lock; - status.pause = sb_ctx->paused; - - stopwatch_start(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); - northd_run(nb_ctx); - stopwatch_stop(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec()); - stopwatch_start(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); - northd_run(sb_ctx); - stopwatch_stop(OVNSB_DB_RUN_STOPWATCH_NAME, time_msec()); - northd_update_probe_interval(nb_ctx, sb_ctx); - if (ovsdb_cs_has_lock(sb_ctx->cs) && - sb_ctx->state == S_UPDATE && - nb_ctx->state == S_UPDATE && - ovsdb_cs_may_send_transaction(sb_ctx->cs) && - ovsdb_cs_may_send_transaction(nb_ctx->cs)) { - northd_send_deltas(nb_ctx); - northd_send_deltas(sb_ctx); - } - - stopwatch_stop(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); - stopwatch_start(NORTHD_LOOP_STOPWATCH_NAME, time_msec()); - unixctl_server_run(unixctl); - - northd_wait(nb_ctx); - northd_wait(sb_ctx); - unixctl_server_wait(unixctl); - memory_wait(); - if (exiting) { - poll_immediate_wake(); - } - poll_block(); - if (should_service_stop()) { - exiting = true; - } - } - - northd_ctx_destroy(nb_ctx); - northd_ctx_destroy(sb_ctx); - - ddlog_free_delta(delta); - ddlog_stop(ddlog); - - if (replay_fd >= 0) { - fsync(replay_fd); - close(replay_fd); - } - - unixctl_server_destroy(unixctl); - service_stop(); - - exit(res); -} - -static void -ovn_northd_exit(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *exiting_) -{ - bool *exiting = exiting_; - *exiting = true; - - unixctl_command_reply(conn, NULL); -} - -static void -ovn_northd_pause(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *sb_ctx_) -{ - struct northd_ctx *sb_ctx = sb_ctx_; - northd_pause(sb_ctx); - unixctl_command_reply(conn, NULL); -} - -static void -ovn_northd_resume(struct unixctl_conn *conn, int argc 
OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *sb_ctx_) -{ - struct northd_ctx *sb_ctx = sb_ctx_; - northd_unpause(sb_ctx); - unixctl_command_reply(conn, NULL); -} - -static void -ovn_northd_is_paused(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *sb_ctx_) -{ - struct northd_ctx *sb_ctx = sb_ctx_; - unixctl_command_reply(conn, sb_ctx->paused ? "true" : "false"); -} - -static void -ovn_northd_status(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *status_) -{ - struct northd_status *status = status_; - - /* Use a labeled formatted output so we can add more to the status command - * later without breaking any consuming scripts. */ - char *s = xasprintf("Status: %s\n", - status->pause ? "paused" - : status->locked ? "active" - : "standby"); - unixctl_command_reply(conn, s); - free(s); -} - -static void -ovn_northd_enable_cpu_profiling(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *prog_) -{ - ddlog_prog prog = prog_; - ddlog_enable_cpu_profiling(prog, true); - unixctl_command_reply(conn, NULL); -} - -static void -ovn_northd_disable_cpu_profiling(struct unixctl_conn *conn, - int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *prog_) -{ - ddlog_prog prog = prog_; - ddlog_enable_cpu_profiling(prog, false); - unixctl_command_reply(conn, NULL); -} - -static void -ovn_northd_profile(struct unixctl_conn *conn, int argc OVS_UNUSED, - const char *argv[] OVS_UNUSED, void *prog_) -{ - ddlog_prog prog = prog_; - char *profile = ddlog_profile(prog); - unixctl_command_reply(conn, profile); - free(profile); -} diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml index a77bd719e8..bf12ed5cd8 100644 --- a/northd/ovn-northd.8.xml +++ b/northd/ovn-northd.8.xml @@ -1,7 +1,7 @@ <?xml version="1.0" encoding="utf-8"?> <manpage program="ovn-northd" section="8" title="ovn-northd"> <h1>Name</h1> - <p>ovn-northd and ovn-northd-ddlog -- Open Virtual Network 
central control daemon</p> + <p>ovn-northd -- Open Virtual Network central control daemon</p> <h1>Synopsis</h1> <p><code>ovn-northd</code> [<var>options</var>]</p> @@ -18,14 +18,6 @@ <code>ovn-sb</code>(5)) below it. </p> - <p> - <code>ovn-northd</code> is implemented in C. - <code>ovn-northd-ddlog</code> is a compatible implementation written in - DDlog, a language for incremental database processing. This - documentation applies to both implementations, with differences indicated - where relevant. - </p> - <h1>Options</h1> <dl> <dt><code>--ovnnb-db=<var>database</var></code></dt> @@ -42,16 +34,6 @@ as the default. Otherwise, the default is <code>unix:@RUNDIR@/ovnsb_db.sock</code>. </dd> - <dt><code>--ddlog-record=<var>file</var></code></dt> - <dd> - This option is for <code>ovn-north-ddlog</code> only. It causes the - daemon to record the initial database state and later changes to - <var>file</var> in the text-based DDlog command format. The - <code>ovn_northd_cli</code> program can later replay these changes for - debugging purposes. This option has a performance impact. See - <code>debugging-ddlog.rst</code> in the OVN documentation for more - details. - </dd> <dt><code>--dry-run</code></dt> <dd> <p> @@ -61,12 +43,6 @@ <code>pause</code> command, under <code>Runtime Management Commands</code> below. </p> - - <p> - For <code>ovn-northd-ddlog</code>, one could use this option with - <code>--ddlog-record</code> to generate a replay log without - restarting a process or disturbing a running system. - </p> </dd> <dt><code>n-threads N</code></dt> <dd> @@ -85,10 +61,6 @@ If N is more than 256, then N is set to 256, parallelization is enabled (with 256 threads) and a warning is logged. </p> - - <p> - ovn-northd-ddlog does not support this option. 
- </p> </dd> </dl> <p> @@ -241,32 +213,6 @@ </dl> </p> - <p> - Only <code>ovn-northd-ddlog</code> supports the following commands: - </p> - - <dl> - <dt><code>enable-cpu-profiling</code></dt> - <dt><code>disable-cpu-profiling</code></dt> - <dd> - Enables or disables profiling of CPU time used by the DDlog engine. - When CPU profiling is enabled, the <code>profile</code> command (see - below) will include DDlog CPU usage statistics in its output. Enabling - CPU profiling will slow <code>ovn-northd-ddlog</code>. Disabling CPU - profiling does not clear any previously recorded statistics. - </dd> - - <dt><code>profile</code></dt> - <dd> - Outputs a profile of the current and peak sizes of arrangements inside - DDlog. This profiling data can be useful for optimizing DDlog code. - If CPU profiling was previously enabled (even if it was later - disabled), the output also includes a CPU time profile. See - <code>Profiling</code> inside the tutorial in the DDlog repository for - an introduction to profiling DDlog. - </dd> - </dl> - <h1>Active-Standby for High Availability</h1> <p> You may run <code>ovn-northd</code> more than once in an OVN deployment. 
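The active-standby behavior referenced here rests on a single OVSDB lock on the southbound database: whichever ovn-northd instance holds the lock is active, and the main loop earlier in this patch logs each acquire/lose transition. A hedged sketch of that bookkeeping (the struct and message wording are made up for illustration, not the removed code verbatim):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Tracks whether we held the SB lock on the previous main-loop
 * iteration, and reports a message only on a state transition. */
struct ha_state {
    bool locked;
};

static const char *
track_lock(struct ha_state *st, bool has_lock)
{
    const char *msg = NULL;

    if (has_lock && !st->locked) {
        msg = "lock acquired: this instance is now active";
    } else if (!has_lock && st->locked) {
        msg = "lock lost: this instance is now on standby";
    }
    st->locked = has_lock;      /* Remember state for the next iteration. */
    return msg;
}
```

This mirrors how the `status` unixctl command can report "active" versus "standby" from the same flag without re-querying the database.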
diff --git a/northd/ovn-sb.dlopts b/northd/ovn-sb.dlopts deleted file mode 100644 index 99b65f1019..0000000000 --- a/northd/ovn-sb.dlopts +++ /dev/null @@ -1,34 +0,0 @@ ---intern-strings --o Address_Set --o BFD --o DHCP_Options --o DHCPv6_Options --o DNS --o Datapath_Binding --o FDB --o Gateway_Chassis --o HA_Chassis --o HA_Chassis_Group --o IP_Multicast --o Load_Balancer --o Logical_DP_Group --o MAC_Binding --o Meter --o Meter_Band --o Multicast_Group --o Port_Binding --o Port_Group --o RBAC_Permission --o RBAC_Role --o SB_Global --o Service_Monitor ---output-only Logical_Flow ---ro IP_Multicast.seq_no ---ro Port_Binding.chassis ---ro Port_Binding.encap ---ro Port_Binding.virtual_parent ---ro SB_Global.connections ---ro SB_Global.external_ids ---ro SB_Global.ssl ---ro Service_Monitor.status ---intern-table Service_Monitor diff --git a/northd/ovn.dl b/northd/ovn.dl deleted file mode 100644 index 3585eb3dc2..0000000000 --- a/northd/ovn.dl +++ /dev/null @@ -1,387 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -import ovsdb -import bitwise - -/* Logical port is enabled if it does not have an enabled flag or the flag is true */ -function is_enabled(s: Option<bool>): bool = { - s != Some{false} -} - -/* - * Ethernet addresses - */ -typedef eth_addr = EthAddr { - ha: bit<48> // In host byte order, e.g. ha[40] is the multicast bit. 
-} - -function to_string(addr: eth_addr): string { - eth_addr2string(addr) -} -extern function eth_addr_from_string(s: string): Option<eth_addr> -extern function scan_eth_addr(s: string): Option<eth_addr> -extern function scan_eth_addr_prefix(s: string): Option<eth_addr> -function eth_addr_zero(): eth_addr { EthAddr{0} } -function eth_addr_pseudorandom(seed: uuid, variant: bit<16>) : eth_addr { - EthAddr{hash64(seed ++ variant) as bit<48>}.mark_random() -} -function mark_random(ea: eth_addr): eth_addr { EthAddr{ea.ha & ~(1 << 40) | (1 << 41)} } - -function to_eui64(ea: eth_addr): bit<64> { - var ha = ea.ha as u64; - (((ha & 64'hffffff000000) << 16) | 64'hfffe000000 | (ha & 64'hffffff)) ^ (1 << 57) -} - -extern function eth_addr2string(addr: eth_addr): string - -/* - * IPv4 addresses - */ - -typedef in_addr = InAddr { - a: bit<32> // In host byte order. -} - -extern function ip_parse(s: string): Option<in_addr> -extern function ip_parse_masked(s: string): Either<string/*err*/, (in_addr/*host_ip*/, in_addr/*mask*/)> -extern function ip_parse_cidr(s: string): Either<string/*err*/, (in_addr/*ip*/, bit<32>/*plen*/)> -extern function scan_static_dynamic_ip(s: string): Option<in_addr> -function ip_create_mask(plen: bit<32>): in_addr { InAddr{(64'hffffffff << (32 - plen))[31:0]} } - -function to_string(ip: in_addr): string = { - "${ip.a >> 24}.${(ip.a >> 16) & 'hff}.${(ip.a >> 8) & 'hff}.${ip.a & 'hff}" -} - -function is_cidr(netmask: in_addr): bool { var x = ~netmask.a; (x & (x + 1)) == 0 } -function is_local_multicast(ip: in_addr): bool { (ip.a & 32'hffffff00) == 32'he0000000 } -function is_zero(a: in_addr): bool { a.a == 0 } -function is_all_ones(a: in_addr): bool { a.a == 32'hffffffff } -function cidr_bits(ip: in_addr): Option<bit<8>> { - if (ip.is_cidr()) { - Some{32 - ip.a.trailing_zeros() as u8} - } else { - None - } -} - -function network(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a & mask.a} } -function host(addr: in_addr, mask: in_addr): in_addr { 
InAddr{addr.a & ~mask.a} } -function bcast(addr: in_addr, mask: in_addr): in_addr { InAddr{addr.a | ~mask.a} } - -/* True if both 'ips' are in the same network as defined by netmask 'mask', - * false otherwise. */ -function same_network(ips: (in_addr, in_addr), mask: in_addr): bool { - ((ips.0.a ^ ips.1.a) & mask.a) == 0 -} - -/* - * parse IPv4 address list of the form: - * "10.0.0.4 10.0.0.10 10.0.0.20..10.0.0.50 10.0.0.100..10.0.0.110" - */ -extern function parse_ip_list(ips: string): Either<string, Vec<(in_addr, Option<in_addr>)>> - -/* - * IPv6 addresses - */ -typedef in6_addr = In6Addr { - aaaa: bit<128> // In host byte order. -} - -extern function ipv6_parse(s: string): Option<in6_addr> -extern function ipv6_parse_masked(s: string): Either<string/*err*/, (in6_addr/*ip*/, in6_addr/*mask*/)> -extern function ipv6_parse_cidr(s: string): Either<string/*err*/, (in6_addr/*ip*/, bit<32>/*plen*/)> - -// Return IPv6 link local address for the given 'ea'. -function to_ipv6_lla(ea: eth_addr): in6_addr { - In6Addr{(128'hfe80 << 112) | (ea.to_eui64() as u128)} -} - -// Returns IPv6 EUI64 address for 'ea' with the given 'prefix'. 
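The `to_eui64()` helper above builds a modified EUI-64 interface identifier from a 48-bit MAC: it splits the address around an inserted `0xfffe` and flips the universal/local bit (bit 57 of the 64-bit result, i.e. the `0x02` bit of the first octet); `to_ipv6_lla()` then ORs the result into `fe80::/64`. The same arithmetic transcribed to C, as an illustrative sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Modified EUI-64 from a 48-bit MAC held in the low bits of a
 * uint64_t: upper 24 bits | 0xfffe | lower 24 bits, with the
 * universal/local bit flipped.  Same expression as the DDlog code. */
static uint64_t
to_eui64(uint64_t ha)
{
    return (((ha & UINT64_C(0xffffff000000)) << 16)
            | UINT64_C(0xfffe000000)
            | (ha & UINT64_C(0xffffff)))
           ^ (UINT64_C(1) << 57);
}
```

For example, MAC 00:11:22:33:44:55 yields interface ID `0211:22ff:fe33:4455`, so its link-local address is `fe80::211:22ff:fe33:4455`.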
-function to_ipv6_eui64(ea: eth_addr, prefix: in6_addr): in6_addr { - In6Addr{(prefix.aaaa & ~128'hffffffffffffffff) | (ea.to_eui64() as u128)} -} - -function ipv6_create_mask(plen: bit<32>): in6_addr { - if (plen == 0) { - In6Addr{0} - } else { - var shift = max(0, 128 - plen); - In6Addr{128'hffffffffffffffffffffffffffffffff << shift} - } -} - -function is_zero(a: in6_addr): bool { a.aaaa == 0 } -function is_all_ones(a: in6_addr): bool { a.aaaa == 128'hffffffffffffffffffffffffffffffff } -function is_lla(a: in6_addr): bool { (a.aaaa >> 64) == 128'hfe80000000000000 } -function is_all_hosts(a: in6_addr): bool { a.aaaa == 128'hff020000000000000000000000000001 } -function is_cidr(netmask: in6_addr): bool { var x = ~netmask.aaaa; (x & (x + 1)) == 0 } -function is_multicast(a: in6_addr): bool { (a.aaaa >> 120) == 128'hff } -function is_routable_multicast(a: in6_addr): bool { - a.is_multicast() and match ((a.aaaa >> 112) as u8 & 8'hf) { - 0 -> false, - 1 -> false, - 2 -> false, - 3 -> false, - 15 -> false, - _ -> true - } -} - -extern function string_mapped(addr: in6_addr): string - -function network(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & mask.aaaa} } -function host(addr: in6_addr, mask: in6_addr): in6_addr { In6Addr{addr.aaaa & ~mask.aaaa} } -function solicited_node(ip6: in6_addr): in6_addr { - In6Addr{(ip6.aaaa & 128'hffffff) | 128'hff02_0000_0000_0000_0000_0001_ff00_0000} -} - -/* True if both 'ips' are in the same network as defined by netmask 'mask', - * false otherwise. 
*/ -function same_network(ips: (in6_addr, in6_addr), mask: in6_addr): bool { - ips.0.network(mask) == ips.1.network(mask) -} - -function multicast_to_ethernet(ip6: in6_addr): eth_addr { - EthAddr{48'h333300000000 | (ip6.aaaa as bit<48> & 48'hffffffff)} -} - -function cidr_bits(ip6: in6_addr): Option<bit<8>> { - if (ip6.is_cidr()) { - Some{128 - ip6.aaaa.trailing_zeros() as u8} - } else { - None - } -} - -function to_string(addr: in6_addr): string { inet6_ntop(addr) } -extern function inet6_ntop(addr: in6_addr): string - -/* - * IPv4 | IPv6 addresses - */ - -typedef v46_ip = IPv4 { ipv4: in_addr } | IPv6 { ipv6: in6_addr } - -function ip46_parse_cidr(s: string) : Option<(v46_ip, bit<32>)> = { - match (ip_parse_cidr(s)) { - Right{(ipv4, plen)} -> return Some{(IPv4{ipv4}, plen)}, - _ -> () - }; - match (ipv6_parse_cidr(s)) { - Right{(ipv6, plen)} -> return Some{(IPv6{ipv6}, plen)}, - _ -> () - }; - None -} -function ip46_parse_masked(s: string) : Option<(v46_ip, v46_ip)> = { - match (ip_parse_masked(s)) { - Right{(ipv4, mask)} -> return Some{(IPv4{ipv4}, IPv4{mask})}, - _ -> () - }; - match (ipv6_parse_masked(s)) { - Right{(ipv6, mask)} -> return Some{(IPv6{ipv6}, IPv6{mask})}, - _ -> () - }; - None -} -function ip46_parse(s: string) : Option<v46_ip> = { - match (ip_parse(s)) { - Some{ipv4} -> return Some{IPv4{ipv4}}, - _ -> () - }; - match (ipv6_parse(s)) { - Some{ipv6} -> return Some{IPv6{ipv6}}, - _ -> () - }; - None -} -function to_string(ip46: v46_ip) : string = { - match (ip46) { - IPv4{ipv4} -> "${ipv4}", - IPv6{ipv6} -> "${ipv6}" - } -} -function to_bracketed_string(ip46: v46_ip) : string = { - match (ip46) { - IPv4{ipv4} -> "${ipv4}", - IPv6{ipv6} -> "[${ipv6}]" - } -} - -function network(ip46: v46_ip, plen: bit<32>) : v46_ip { - match (ip46) { - IPv4{ipv4} -> IPv4{InAddr{ipv4.a & ip_create_mask(plen).a}}, - IPv6{ipv6} -> IPv6{In6Addr{ipv6.aaaa & ipv6_create_mask(plen).aaaa}} - } -} - -function is_all_ones(ip46: v46_ip) : bool { - match (ip46) { - IPv4{ipv4} 
-> ipv4.is_all_ones(), - IPv6{ipv6} -> ipv6.is_all_ones() - } -} - -function cidr_bits(ip46: v46_ip) : Option<bit<8>> { - match (ip46) { - IPv4{ipv4} -> ipv4.cidr_bits(), - IPv6{ipv6} -> ipv6.cidr_bits() - } -} - -function ipX(ip46: v46_ip) : string { - match (ip46) { - IPv4{_} -> "ip4", - IPv6{_} -> "ip6" - } -} - -function xxreg(ip46: v46_ip) : string { - match (ip46) { - IPv4{_} -> "", - IPv6{_} -> "xx" - } -} - -/* - * CIDR-masked IPv4 address - */ - -typedef ipv4_netaddr = IPV4NetAddr { - addr: in_addr, /* 192.168.10.123 */ - plen: bit<32> /* CIDR Prefix: 24. */ -} - -function netmask(na: ipv4_netaddr): in_addr { ip_create_mask(na.plen) } -function bcast(na: ipv4_netaddr): in_addr { na.addr.bcast(na.netmask()) } - -/* Returns the network (with the host bits zeroed) - * or the host (with the network bits zeroed). */ -function network(na: ipv4_netaddr): in_addr { na.addr.network(na.netmask()) } -function host(na: ipv4_netaddr): in_addr { na.addr.host(na.netmask()) } - -/* Match on the host, if the host part is nonzero, or on the network - * otherwise. */ -function match_host_or_network(na: ipv4_netaddr): string { - if (na.plen < 32 and na.host().is_zero()) { - "${na.addr}/${na.plen}" - } else { - "${na.addr}" - } -} - -/* Match on the network. */ -function match_network(na: ipv4_netaddr): string { - if (na.plen < 32) { - "${na.network()}/${na.plen}" - } else { - "${na.addr}" - } -} - -/* - * CIDR-masked IPv6 address - */ - -typedef ipv6_netaddr = IPV6NetAddr { - addr: in6_addr, /* fc00::1 */ - plen: bit<32> /* CIDR Prefix: 64 */ -} - -function netmask(na: ipv6_netaddr): in6_addr { ipv6_create_mask(na.plen) } - -/* Returns the network (with the host bits zeroed). - * or the host (with the network bits zeroed). 
*/ -function network(na: ipv6_netaddr): in6_addr { na.addr.network(na.netmask()) } -function host(na: ipv6_netaddr): in6_addr { na.addr.host(na.netmask()) } - -function solicited_node(na: ipv6_netaddr): in6_addr { na.addr.solicited_node() } - -function is_lla(na: ipv6_netaddr): bool { na.network().is_lla() } - -/* Match on the network. */ -function match_network(na: ipv6_netaddr): string { - if (na.plen < 128) { - "${na.network()}/${na.plen}" - } else { - "${na.addr}" - } -} - -/* - * Set of addresses associated with a logical port. - * - * A logical port always has one Ethernet address, plus any number of IPv4 and IPv6 addresses. - */ -typedef lport_addresses = LPortAddress { - ea: eth_addr, - ipv4_addrs: Vec<ipv4_netaddr>, - ipv6_addrs: Vec<ipv6_netaddr> -} - -function to_string(addr: lport_addresses): string = { - var addrs = ["${addr.ea}"]; - for (ip4 in addr.ipv4_addrs) { - addrs.push("${ip4.addr}") - }; - - for (ip6 in addr.ipv6_addrs) { - addrs.push("${ip6.addr}") - }; - - string_join(addrs, " ") -} - -/* - * Packet header lengths - */ -function eTH_HEADER_LEN(): integer = 14 -function vLAN_HEADER_LEN(): integer = 4 -function vLAN_ETH_HEADER_LEN(): integer = eTH_HEADER_LEN() + vLAN_HEADER_LEN() - -/* - * Logging - */ -extern function warn(msg: string): () -extern function abort(msg: string): () - -/* - * C functions imported from OVN - */ -extern function is_dynamic_lsp_address(addr: string): bool -extern function extract_lsp_addresses(address: string): Option<lport_addresses> -extern function extract_addresses(address: string): Option<lport_addresses> -extern function extract_lrp_networks(mac: string, networks: Set<string>): Option<lport_addresses> -extern function extract_ip_addresses(address: string): Option<lport_addresses> - -extern function split_addresses(addr: string): (Set<string>, Set<string>) - -extern function ovn_internal_version(): string - -/* - * C functions imported from OVS - */ -extern function json_string_escape(s: string): string - 
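`json_string_escape()` above is imported from OVS so DDlog code can embed strings safely in JSON (OVSDB transactions are built as JSON text). The following is a simplified ASCII-only stand-in, not the real OVS routine, showing the core escaping rules for quotes, backslashes, and control characters:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Escape '"', '\\', and control characters so 'in' can be embedded
 * in a JSON document.  ASCII-only and caller-buffer based; the real
 * OVS implementation writes into a dynamic string (struct ds) and
 * uses short escapes such as \n where JSON allows them. */
static void
json_escape_ascii(const char *in, char *out, size_t out_size)
{
    size_t n = 0;

    for (const char *p = in; *p && n + 7 < out_size; p++) {
        unsigned char c = (unsigned char) *p;

        if (c == '"' || c == '\\') {
            out[n++] = '\\';
            out[n++] = c;
        } else if (c < 0x20) {
            /* \u00XX escape for control characters. */
            n += snprintf(out + n, out_size - n, "\\u%04x", (unsigned) c);
        } else {
            out[n++] = c;
        }
    }
    out[n] = '\0';
}
```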
-function json_escape(s: string): string { - s.json_string_escape() -} -function json_escape(s: istring): string { - s.ival().json_string_escape() -} - -/* For a 'key' of the form "IP:port" or just "IP", returns - * (v46_ip, port) tuple. */ -extern function ip_address_and_port_from_lb_key(k: string): Option<(v46_ip, bit<16>)> diff --git a/northd/ovn.rs b/northd/ovn.rs deleted file mode 100644 index 746884071e..0000000000 --- a/northd/ovn.rs +++ /dev/null @@ -1,750 +0,0 @@ -/* - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at: - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -use ::nom::*; -use ::differential_datalog::record; -use ::std::ffi; -use ::std::ptr; -use ::std::default; -use ::std::process; -use ::std::os::raw; -use ::libc; - -pub fn warn(msg: &String) { - warn_(msg.as_str()) -} - -pub fn warn_(msg: &str) { - unsafe { - ddlog_warn(ffi::CString::new(msg).unwrap().as_ptr()); - } -} - -pub fn err_(msg: &str) { - unsafe { - ddlog_err(ffi::CString::new(msg).unwrap().as_ptr()); - } -} - -pub fn abort(msg: &String) { - abort_(msg.as_str()) -} - -fn abort_(msg: &str) { - err_(format!("DDlog error: {}.", msg).as_ref()); - process::abort(); -} - -const ETH_ADDR_SIZE: usize = 6; -const IN6_ADDR_SIZE: usize = 16; -const INET6_ADDRSTRLEN: usize = 46; -const INET_ADDRSTRLEN: usize = 16; -const ETH_ADDR_STRLEN: usize = 17; - -const AF_INET: usize = 2; -const AF_INET6: usize = 10; - -/* Implementation for externs declared in ovn.dl */ - -#[repr(C)] -#[derive(Default, PartialEq, Eq, PartialOrd, Ord, Clone, Hash, Serialize, Deserialize, Debug, IntoRecord, Mutator)] -pub struct eth_addr_c { - x: [u8; ETH_ADDR_SIZE] -} - -impl eth_addr_c { - pub fn from_ddlog(d: ð_addr) -> Self { - eth_addr_c { - x: [(d.ha >> 40) as u8, - (d.ha >> 32) as u8, - (d.ha >> 24) as u8, - (d.ha >> 16) as u8, - (d.ha >> 8) as u8, - d.ha as u8] - } - } - pub fn to_ddlog(&self) -> eth_addr { - let ea0 = u16::from_be_bytes([self.x[0], self.x[1]]) as u64; - let ea1 = u16::from_be_bytes([self.x[2], self.x[3]]) as u64; - let ea2 = u16::from_be_bytes([self.x[4], self.x[5]]) as u64; - eth_addr { ha: (ea0 << 32) | (ea1 << 16) | ea2 } - } -} - -pub fn eth_addr2string(addr: ð_addr) -> String { - let c = eth_addr_c::from_ddlog(addr); - format!("{:02x}:{:02x}:{:02x}:{:02x}:{:02x}:{:02x}", - c.x[0], c.x[1], c.x[2], c.x[3], c.x[4], c.x[5]) -} - -pub fn eth_addr_from_string(s: &String) -> ddlog_std::Option<eth_addr> { - let mut ea: eth_addr_c = Default::default(); - unsafe { - if ovs::eth_addr_from_string(string2cstr(s).as_ptr(), &mut ea as *mut eth_addr_c) { - 
-            ddlog_std::Option::Some{x: ea.to_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-#[repr(C)]
-struct in6_addr_c {
-    bytes: [u8; 16]
-}
-
-impl Default for in6_addr_c {
-    fn default() -> Self {
-        in6_addr_c {
-            bytes: [0; 16]
-        }
-    }
-}
-
-impl in6_addr_c {
-    pub fn from_ddlog(d: &in6_addr) -> Self {
-        in6_addr_c{bytes: d.aaaa.to_be_bytes()}
-    }
-    pub fn to_ddlog(&self) -> in6_addr {
-        in6_addr{aaaa: u128::from_be_bytes(self.bytes)}
-    }
-}
-
-pub fn string_mapped(addr: &in6_addr) -> String {
-    let addr = in6_addr_c::from_ddlog(addr);
-    let mut addr_str = [0 as i8; INET6_ADDRSTRLEN];
-    unsafe {
-        ovs::ipv6_string_mapped(&mut addr_str[0] as *mut raw::c_char, &addr as *const in6_addr_c);
-        cstr2string(&addr_str as *const raw::c_char)
-    }
-}
-
-pub fn json_string_escape(s: &String) -> String {
-    let mut ds = ovs_ds::new();
-    unsafe {
-        ovs::json_string_escape(ffi::CString::new(s.as_str()).unwrap().as_ptr() as *const raw::c_char,
-                                &mut ds as *mut ovs_ds);
-    };
-    unsafe{ds.into_string()}
-}
-
-pub fn extract_lsp_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
-    unsafe {
-        let mut laddrs: lport_addresses_c = Default::default();
-        if ovn_c::extract_lsp_addresses(string2cstr(address).as_ptr(),
-                                        &mut laddrs as *mut lport_addresses_c) {
-            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn extract_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
-    unsafe {
-        let mut laddrs: lport_addresses_c = Default::default();
-        let mut ofs: raw::c_int = 0;
-        if ovn_c::extract_addresses(string2cstr(address).as_ptr(),
-                                    &mut laddrs as *mut lport_addresses_c,
-                                    &mut ofs as *mut raw::c_int) {
-            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn extract_lrp_networks(mac: &String, networks: &ddlog_std::Set<String>) -> ddlog_std::Option<lport_addresses>
-{
-    unsafe {
-        let mut laddrs: lport_addresses_c = Default::default();
-        let mut networks_cstrs = Vec::with_capacity(networks.x.len());
-        let mut networks_ptrs = Vec::with_capacity(networks.x.len());
-        for net in networks.x.iter() {
-            networks_cstrs.push(string2cstr(net));
-            networks_ptrs.push(networks_cstrs.last().unwrap().as_ptr());
-        };
-        if ovn_c::extract_lrp_networks__(string2cstr(mac).as_ptr(), networks_ptrs.as_ptr() as *const *const raw::c_char,
-                                         networks_ptrs.len(), &mut laddrs as *mut lport_addresses_c) {
-            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn extract_ip_addresses(address: &String) -> ddlog_std::Option<lport_addresses> {
-    unsafe {
-        let mut laddrs: lport_addresses_c = Default::default();
-        if ovn_c::extract_ip_addresses(string2cstr(address).as_ptr(),
-                                       &mut laddrs as *mut lport_addresses_c) {
-            ddlog_std::Option::Some{x: laddrs.into_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn ovn_internal_version() -> String {
-    unsafe {
-        let s = ovn_c::ovn_get_internal_version();
-        let retval = cstr2string(s);
-        free(s as *mut raw::c_void);
-        retval
-    }
-}
-
-pub fn ipv6_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, in6_addr>>
-{
-    unsafe {
-        let mut ip: in6_addr_c = Default::default();
-        let mut mask: in6_addr_c = Default::default();
-        let err = ovs::ipv6_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut mask as *mut in6_addr_c);
-        if (err != ptr::null_mut()) {
-            let errstr = cstr2string(err);
-            free(err as *mut raw::c_void);
-            ddlog_std::Either::Left{l: errstr}
-        } else {
-            ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), mask.to_ddlog())}
-        }
-    }
-}
-
-pub fn ipv6_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in6_addr, u32>>
-{
-    unsafe {
-        let mut ip: in6_addr_c = Default::default();
-        let mut plen: raw::c_uint = 0;
-        let err = ovs::ipv6_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c, &mut plen as *mut raw::c_uint);
-        if (err != ptr::null_mut()) {
-            let errstr = cstr2string(err);
-            free(err as *mut raw::c_void);
-            ddlog_std::Either::Left{l: errstr}
-        } else {
-            ddlog_std::Either::Right{r: ddlog_std::tuple2(ip.to_ddlog(), plen as u32)}
-        }
-    }
-}
-
-pub fn ipv6_parse(s: &String) -> ddlog_std::Option<in6_addr>
-{
-    unsafe {
-        let mut ip: in6_addr_c = Default::default();
-        let res = ovs::ipv6_parse(string2cstr(s).as_ptr(), &mut ip as *mut in6_addr_c);
-        if (res) {
-            ddlog_std::Option::Some{x: ip.to_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub type ovs_be32 = u32;
-
-impl in_addr {
-    pub fn from_be32(nl: ovs_be32) -> in_addr {
-        in_addr{a: ddlog_std::ntohl(&nl)}
-    }
-    pub fn to_be32(&self) -> ovs_be32 {
-        ddlog_std::htonl(&self.a)
-    }
-}
-
-pub fn ip_parse_masked(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, in_addr>>
-{
-    unsafe {
-        let mut ip: ovs_be32 = 0;
-        let mut mask: ovs_be32 = 0;
-        let err = ovs::ip_parse_masked(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut mask as *mut ovs_be32);
-        if (err != ptr::null_mut()) {
-            let errstr = cstr2string(err);
-            free(err as *mut raw::c_void);
-            ddlog_std::Either::Left{l: errstr}
-        } else {
-            ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip),
-                                                          in_addr::from_be32(mask))}
-        }
-    }
-}
-
-pub fn ip_parse_cidr(s: &String) -> ddlog_std::Either<String, ddlog_std::tuple2<in_addr, u32>>
-{
-    unsafe {
-        let mut ip: ovs_be32 = 0;
-        let mut plen: raw::c_uint = 0;
-        let err = ovs::ip_parse_cidr(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32, &mut plen as *mut raw::c_uint);
-        if (err != ptr::null_mut()) {
-            let errstr = cstr2string(err);
-            free(err as *mut raw::c_void);
-            ddlog_std::Either::Left{l: errstr}
-        } else {
-            ddlog_std::Either::Right{r: ddlog_std::tuple2(in_addr::from_be32(ip), plen as u32)}
-        }
-    }
-}
-
-pub fn ip_parse(s: &String) -> ddlog_std::Option<in_addr>
-{
-    unsafe {
-        let mut ip: ovs_be32 = 0;
-        if (ovs::ip_parse(string2cstr(s).as_ptr(), &mut ip as *mut ovs_be32)) {
-            ddlog_std::Option::Some{x: in_addr::from_be32(ip)}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn is_dynamic_lsp_address(address: &String) -> bool {
-    unsafe {
-        ovn_c::is_dynamic_lsp_address(string2cstr(address).as_ptr())
-    }
-}
-
-pub fn split_addresses(addresses: &String) -> ddlog_std::tuple2<ddlog_std::Set<String>, ddlog_std::Set<String>> {
-    let mut ip4_addrs = ovs_svec::new();
-    let mut ip6_addrs = ovs_svec::new();
-    unsafe {
-        ovn_c::split_addresses(string2cstr(addresses).as_ptr(), &mut ip4_addrs as *mut ovs_svec, &mut ip6_addrs as *mut ovs_svec);
-        ddlog_std::tuple2(ip4_addrs.into_strings(), ip6_addrs.into_strings())
-    }
-}
-
-pub fn scan_eth_addr(s: &String) -> ddlog_std::Option<eth_addr> {
-    let mut ea: eth_addr_c = Default::default();
-    unsafe {
-        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx:%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char,
-                         &mut ea.x[0] as *mut u8, &mut ea.x[1] as *mut u8,
-                         &mut ea.x[2] as *mut u8, &mut ea.x[3] as *mut u8,
-                         &mut ea.x[4] as *mut u8, &mut ea.x[5] as *mut u8)
-        {
-            ddlog_std::Option::Some{x: ea.to_ddlog()}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn scan_eth_addr_prefix(s: &String) -> ddlog_std::Option<eth_addr> {
-    let mut b2: u8 = 0;
-    let mut b1: u8 = 0;
-    let mut b0: u8 = 0;
-    unsafe {
-        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"%hhx:%hhx:%hhx\0".as_ptr() as *const raw::c_char,
-                         &mut b2 as *mut u8, &mut b1 as *mut u8, &mut b0 as *mut u8)
-        {
-            ddlog_std::Option::Some{x: eth_addr{ha: ((b2 as u64) << 40) | ((b1 as u64) << 32) | ((b0 as u64) << 24)} }
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn scan_static_dynamic_ip(s: &String) -> ddlog_std::Option<in_addr> {
-    let mut ip0: u8 = 0;
-    let mut ip1: u8 = 0;
-    let mut ip2: u8 = 0;
-    let mut ip3: u8 = 0;
-    let mut n: raw::c_uint = 0;
-    unsafe {
-        if ovs::ovs_scan(string2cstr(s).as_ptr(), b"dynamic %hhu.%hhu.%hhu.%hhu%n\0".as_ptr() as *const raw::c_char,
-                         &mut ip0 as *mut u8,
-                         &mut ip1 as *mut u8,
-                         &mut ip2 as *mut u8,
-                         &mut ip3 as *mut u8,
-                         &mut n) && s.len() == (n as usize)
-        {
-            let a0 = (ip0 as u32) << 24;
-            let a1 = (ip1 as u32) << 16;
-            let a2 = (ip2 as u32) << 8;
-            let a3 = ip3 as u32;
-            ddlog_std::Option::Some{x: in_addr{a: a0 | a1 | a2 | a3}}
-        } else {
-            ddlog_std::Option::None
-        }
-    }
-}
-
-pub fn ip_address_and_port_from_lb_key(k: &String) ->
-    ddlog_std::Option<ddlog_std::tuple2<v46_ip, u16>>
-{
-    unsafe {
-        let mut ip_address: *mut raw::c_char = ptr::null_mut();
-        let mut port: libc::uint16_t = 0;
-        let mut addr_family: raw::c_int = 0;
-
-        ovn_c::ip_address_and_port_from_lb_key(string2cstr(k).as_ptr(), &mut ip_address as *mut *mut raw::c_char,
-                                               &mut port as *mut libc::uint16_t, &mut addr_family as *mut raw::c_int);
-        if (ip_address != ptr::null_mut()) {
-            match (ip46_parse(&cstr2string(ip_address))) {
-                ddlog_std::Option::Some{x: ip46} => {
-                    let res = ddlog_std::tuple2(ip46, port as u16);
-                    free(ip_address as *mut raw::c_void);
-                    return ddlog_std::Option::Some{x: res}
-                },
-                _ => ()
-            }
-        }
-        ddlog_std::Option::None
-    }
-}
-
-pub fn str_to_int(s: &String, base: &u16) -> ddlog_std::Option<u64> {
-    let mut i: raw::c_int = 0;
-    let ok = unsafe {
-        ovs::str_to_int(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_int)
-    };
-    if ok {
-        ddlog_std::Option::Some{x: i as u64}
-    } else {
-        ddlog_std::Option::None
-    }
-}
-
-pub fn str_to_uint(s: &String, base: &u16) -> ddlog_std::Option<u64> {
-    let mut i: raw::c_uint = 0;
-    let ok = unsafe {
-        ovs::str_to_uint(string2cstr(s).as_ptr(), *base as raw::c_int, &mut i as *mut raw::c_uint)
-    };
-    if ok {
-        ddlog_std::Option::Some{x: i as u64}
-    } else {
-        ddlog_std::Option::None
-    }
-}
-
-pub fn inet6_ntop(addr: &in6_addr) -> String {
-    let addr_c = in6_addr_c::from_ddlog(addr);
-    let mut buf = [0 as i8; INET6_ADDRSTRLEN];
-    unsafe {
-        let res = inet_ntop(AF_INET6 as raw::c_int, &addr_c as *const in6_addr_c as *const raw::c_void,
-                            &mut buf[0] as *mut raw::c_char, INET6_ADDRSTRLEN as libc::socklen_t);
-        if res == ptr::null() {
-            warn(&format!("inet_ntop({:?}) failed", *addr));
-            "".to_owned()
-        } else {
-            cstr2string(&buf as *const raw::c_char)
-        }
-    }
-}
-
-/* Internals */
-
-unsafe fn cstr2string(s: *const raw::c_char) -> String {
-    ffi::CStr::from_ptr(s).to_owned().into_string().
-        unwrap_or_else(|e|{ warn(&format!("cstr2string: {}", e)); "".to_owned() })
-}
-
-fn string2cstr(s: &String) -> ffi::CString {
-    ffi::CString::new(s.as_str()).unwrap()
-}
-
-/* OVS dynamic string type */
-#[repr(C)]
-struct ovs_ds {
-    s: *mut raw::c_char,       /* Null-terminated string. */
-    length: libc::size_t,      /* Bytes used, not including null terminator. */
-    allocated: libc::size_t    /* Bytes allocated, not including null terminator. */
-}
-
-impl ovs_ds {
-    pub fn new() -> ovs_ds {
-        ovs_ds{s: ptr::null_mut(), length: 0, allocated: 0}
-    }
-
-    pub unsafe fn into_string(mut self) -> String {
-        let res = cstr2string(ovs::ds_cstr(&self as *const ovs_ds));
-        ovs::ds_destroy(&mut self as *mut ovs_ds);
-        res
-    }
-}
-
-/* OVS string vector type */
-#[repr(C)]
-struct ovs_svec {
-    names: *mut *mut raw::c_char,
-    n: libc::size_t,
-    allocated: libc::size_t
-}
-
-impl ovs_svec {
-    pub fn new() -> ovs_svec {
-        ovs_svec{names: ptr::null_mut(), n: 0, allocated: 0}
-    }
-
-    pub unsafe fn into_strings(mut self) -> ddlog_std::Set<String> {
-        let mut res: ddlog_std::Set<String> = ddlog_std::Set::new();
-        unsafe {
-            for i in 0..self.n {
-                res.insert(cstr2string(*self.names.offset(i as isize)));
-            }
-            ovs::svec_destroy(&mut self as *mut ovs_svec);
-        }
-        res
-    }
-}
-
-
-// ovn/lib/ovn-util.h
-#[repr(C)]
-struct ipv4_netaddr_c {
-    addr: libc::uint32_t,
-    mask: libc::uint32_t,
-    network: libc::uint32_t,
-    plen: raw::c_uint,
-
-    addr_s: [raw::c_char; INET_ADDRSTRLEN + 1],    /* "192.168.10.123" */
-    network_s: [raw::c_char; INET_ADDRSTRLEN + 1], /* "192.168.10.0" */
-    bcast_s: [raw::c_char; INET_ADDRSTRLEN + 1]    /* "192.168.10.255" */
-}
-
-impl Default for ipv4_netaddr_c {
-    fn default() -> Self {
-        ipv4_netaddr_c {
-            addr: 0,
-            mask: 0,
-            network: 0,
-            plen: 0,
-            addr_s: [0; INET_ADDRSTRLEN + 1],
-            network_s: [0; INET_ADDRSTRLEN + 1],
-            bcast_s: [0; INET_ADDRSTRLEN + 1]
-        }
-    }
-}
-
-impl ipv4_netaddr_c {
-    pub fn to_ddlog(&self) -> ipv4_netaddr {
-        ipv4_netaddr{
-            addr: in_addr::from_be32(self.addr),
-            plen: self.plen,
-        }
-    }
-}
-
-#[repr(C)]
-struct ipv6_netaddr_c {
-    addr: in6_addr_c,     /* fc00::1 */
-    mask: in6_addr_c,     /* ffff:ffff:ffff:ffff:: */
-    sn_addr: in6_addr_c,  /* ff02:1:ff00::1 */
-    network: in6_addr_c,  /* fc00:: */
-    plen: raw::c_uint,    /* CIDR Prefix: 64 */
-
-    addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1],    /* "fc00::1" */
-    sn_addr_s: [raw::c_char; INET6_ADDRSTRLEN + 1], /* "ff02:1:ff00::1" */
-    network_s: [raw::c_char; INET6_ADDRSTRLEN + 1]  /* "fc00::" */
-}
-
-impl Default for ipv6_netaddr_c {
-    fn default() -> Self {
-        ipv6_netaddr_c {
-            addr: Default::default(),
-            mask: Default::default(),
-            sn_addr: Default::default(),
-            network: Default::default(),
-            plen: 0,
-            addr_s: [0; INET6_ADDRSTRLEN + 1],
-            sn_addr_s: [0; INET6_ADDRSTRLEN + 1],
-            network_s: [0; INET6_ADDRSTRLEN + 1]
-        }
-    }
-}
-
-impl ipv6_netaddr_c {
-    pub unsafe fn to_ddlog(&self) -> ipv6_netaddr {
-        ipv6_netaddr{
-            addr: in6_addr_c::to_ddlog(&self.addr),
-            plen: self.plen
-        }
-    }
-}
-
-
-// ovn-util.h
-#[repr(C)]
-struct lport_addresses_c {
-    ea_s: [raw::c_char; ETH_ADDR_STRLEN + 1],
-    ea: eth_addr_c,
-    n_ipv4_addrs: libc::size_t,
-    ipv4_addrs: *mut ipv4_netaddr_c,
-    n_ipv6_addrs: libc::size_t,
-    ipv6_addrs: *mut ipv6_netaddr_c
-}
-
-impl Default for lport_addresses_c {
-    fn default() -> Self {
-        lport_addresses_c {
-            ea_s: [0; ETH_ADDR_STRLEN + 1],
-            ea: Default::default(),
-            n_ipv4_addrs: 0,
-            ipv4_addrs: ptr::null_mut(),
-            n_ipv6_addrs: 0,
-            ipv6_addrs: ptr::null_mut()
-        }
-    }
-}
-
-impl lport_addresses_c {
-    pub unsafe fn into_ddlog(mut self) -> lport_addresses {
-        let mut ipv4_addrs = ddlog_std::Vec::with_capacity(self.n_ipv4_addrs);
-        for i in 0..self.n_ipv4_addrs {
-            ipv4_addrs.push((&*self.ipv4_addrs.offset(i as isize)).to_ddlog())
-        }
-        let mut ipv6_addrs = ddlog_std::Vec::with_capacity(self.n_ipv6_addrs);
-        for i in 0..self.n_ipv6_addrs {
-            ipv6_addrs.push((&*self.ipv6_addrs.offset(i as isize)).to_ddlog())
-        }
-        let res = lport_addresses {
-            ea: self.ea.to_ddlog(),
-            ipv4_addrs: ipv4_addrs,
-            ipv6_addrs: ipv6_addrs
-        };
-        ovn_c::destroy_lport_addresses(&mut self as *mut lport_addresses_c);
-        res
-    }
-}
-
-/* functions imported from northd.c */
-extern "C" {
-    fn ddlog_warn(msg: *const raw::c_char);
-    fn ddlog_err(msg: *const raw::c_char);
-}
-
-/* functions imported from libovn */
-mod ovn_c {
-    use ::std::os::raw;
-    use ::libc;
-    use super::lport_addresses_c;
-    use super::ovs_svec;
-    use super::in6_addr_c;
-
-    #[link(name = "ovn")]
-    extern "C" {
-        // ovn/lib/ovn-util.h
-        pub fn extract_lsp_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool;
-        pub fn extract_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c, ofs: *mut raw::c_int) -> bool;
-        pub fn extract_lrp_networks__(mac: *const raw::c_char, networks: *const *const raw::c_char,
-                                      n_networks: libc::size_t, laddrs: *mut lport_addresses_c) -> bool;
-        pub fn extract_ip_addresses(address: *const raw::c_char, laddrs: *mut lport_addresses_c) -> bool;
-        pub fn destroy_lport_addresses(addrs: *mut lport_addresses_c);
-        pub fn is_dynamic_lsp_address(address: *const raw::c_char) -> bool;
-        pub fn split_addresses(addresses: *const raw::c_char, ip4_addrs: *mut ovs_svec, ipv6_addrs: *mut ovs_svec);
-        pub fn ip_address_and_port_from_lb_key(key: *const raw::c_char, ip_address: *mut *mut raw::c_char,
-                                               port: *mut libc::uint16_t, addr_family: *mut raw::c_int);
-        pub fn ovn_get_internal_version() -> *mut raw::c_char;
-    }
-}
-
-mod ovs {
-    use ::std::os::raw;
-    use ::libc;
-    use super::in6_addr_c;
-    use super::ovs_be32;
-    use super::ovs_ds;
-    use super::eth_addr_c;
-    use super::ovs_svec;
-
-    /* functions imported from libopenvswitch */
-    #[link(name = "openvswitch")]
-    extern "C" {
-        // lib/packets.h
-        pub fn ipv6_string_mapped(addr_str: *mut raw::c_char, addr: *const in6_addr_c) -> *const raw::c_char;
-        pub fn ipv6_parse_masked(s: *const raw::c_char, ip: *mut in6_addr_c, mask: *mut in6_addr_c) -> *mut raw::c_char;
-        pub fn ipv6_parse_cidr(s: *const raw::c_char, ip: *mut in6_addr_c, plen: *mut raw::c_uint) -> *mut raw::c_char;
-        pub fn ipv6_parse(s: *const raw::c_char, ip: *mut in6_addr_c) -> bool;
-        pub fn ip_parse_masked(s: *const raw::c_char, ip: *mut ovs_be32, mask: *mut ovs_be32) -> *mut raw::c_char;
-        pub fn ip_parse_cidr(s: *const raw::c_char, ip: *mut ovs_be32, plen: *mut raw::c_uint) -> *mut raw::c_char;
-        pub fn ip_parse(s: *const raw::c_char, ip: *mut ovs_be32) -> bool;
-        pub fn eth_addr_from_string(s: *const raw::c_char, ea: *mut eth_addr_c) -> bool;
-
-        // include/openvswitch/json.h
-        pub fn json_string_escape(str: *const raw::c_char, out: *mut ovs_ds);
-        // openvswitch/dynamic-string.h
-        pub fn ds_destroy(ds: *mut ovs_ds);
-        pub fn ds_cstr(ds: *const ovs_ds) -> *const raw::c_char;
-        pub fn svec_destroy(v: *mut ovs_svec);
-        pub fn ovs_scan(s: *const raw::c_char, format: *const raw::c_char, ...) -> bool;
-        pub fn str_to_int(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_int) -> bool;
-        pub fn str_to_uint(s: *const raw::c_char, base: raw::c_int, i: *mut raw::c_uint) -> bool;
-    }
-}
-
-/* functions imported from libc */
-#[link(name = "c")]
-extern "C" {
-    fn free(ptr: *mut raw::c_void);
-}
-
-/* functions imported from arp/inet6 */
-extern "C" {
-    fn inet_ntop(af: raw::c_int, cp: *const raw::c_void,
-                 buf: *mut raw::c_char, len: libc::socklen_t) -> *const raw::c_char;
-}
-
-/*
- * Parse IPv4 address list.
- */
-
-named!(parse_spaces<nom::types::CompleteStr, ()>,
-    do_parse!(many1!(one_of!(&" \t\n\r\x0c\x0b")) >> (()) )
-);
-
-named!(parse_opt_spaces<nom::types::CompleteStr, ()>,
-    do_parse!(opt!(parse_spaces) >> (()))
-);
-
-named!(parse_ipv4_range<nom::types::CompleteStr, (String, Option<String>)>,
-    do_parse!(addr1: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (nom::types::CompleteStr(""))) | peek!(tag!("..")) | tag!(" ") )) >>
-              parse_opt_spaces >>
-              addr2: opt!(do_parse!(tag!("..") >>
-                                    parse_opt_spaces >>
-                                    addr2: many_till!(complete!(nom::anychar), alt!(do_parse!(eof!() >> (' ')) | char!(' ')) ) >>
-                                    (addr2) )) >>
-              parse_opt_spaces >>
-              (addr1.0.into_iter().collect(), addr2.map(|x|x.0.into_iter().collect())) )
-);
-
-named!(parse_ipv4_address_list<nom::types::CompleteStr, Vec<(String, Option<String>)>>,
-    do_parse!(parse_opt_spaces >>
-              ranges: many0!(parse_ipv4_range) >>
-              (ranges)));
-
-pub fn parse_ip_list(ips: &String) -> ddlog_std::Either<String, ddlog_std::Vec<ddlog_std::tuple2<in_addr, ddlog_std::Option<in_addr>>>>
-{
-    match parse_ipv4_address_list(nom::types::CompleteStr(ips.as_str())) {
-        Err(e) => {
-            ddlog_std::Either::Left{l: format!("invalid IP list format: \"{}\"", ips.as_str())}
-        },
-        Ok((nom::types::CompleteStr(""), ranges)) => {
-            let mut res = vec![];
-            for (ip1, ip2) in ranges.iter() {
-                let start = match ip_parse(&ip1) {
-                    ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip1)},
-                    ddlog_std::Option::Some{x: ip} => ip
-                };
-                let end = match ip2 {
-                    None => ddlog_std::Option::None,
-                    Some(ip_str) => match ip_parse(&ip_str.clone()) {
-                        ddlog_std::Option::None => return ddlog_std::Either::Left{l: format!("invalid IP address: \"{}\"", *ip_str)},
-                        x => x
-                    }
-                };
-                res.push(ddlog_std::tuple2(start, end));
-            };
-            ddlog_std::Either::Right{r: ddlog_std::Vec::from(res)}
-        },
-        Ok((suffix, _)) => {
-            ddlog_std::Either::Left{l: format!("IP address list contains trailing characters: \"{}\"", suffix)}
-        }
-    }
-}
diff --git a/northd/ovn.toml b/northd/ovn.toml
deleted file mode 100644
index 64108996ed..0000000000
--- a/northd/ovn.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-[dependencies.nom]
-version = "4.0"
diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl
deleted file mode 100644
index 2fe73959c6..0000000000
--- a/northd/ovn_northd.dl
+++ /dev/null
@@ -1,9105 +0,0 @@
-/*
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at:
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import OVN_Northbound as nb
-import OVN_Southbound as sb
-import copp
-import ovsdb
-import allocate
-import ovn
-import lswitch
-import lrouter
-import multicast
-import helpers
-import ipam
-import vec
-import set
-
-index Logical_Flow_Index() on sb::Out_Logical_Flow()
-
-/* Meter_Band table */
-for (mb in nb::Meter_Band) {
-    sb::Out_Meter_Band(._uuid = mb._uuid,
-                       .action = mb.action,
-                       .rate = mb.rate,
-                       .burst_size = mb.burst_size)
-}
-
-/* Meter table */
-for (meter in &nb::Meter) {
-    sb::Out_Meter(._uuid = meter._uuid,
-                  .name = meter.name,
-                  .unit = meter.unit,
-                  .bands = meter.bands)
-}
-sb::Out_Meter(._uuid = hash128(name),
-              .name = name,
-              .unit = meter.unit,
-              .bands = meter.bands) :-
-    ACLWithFairMeter(acl, meter),
-    var name = acl_log_meter_name(meter.name, acl._uuid).intern().
-
-/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields,
- * except tunnel id, which is allocated separately (see TunKeyAllocation).
- */
-relation OutProxy_Datapath_Binding (
-    _uuid: uuid,
-    external_ids: Map<istring,istring>
-)
-
-/* Datapath_Binding table */
-OutProxy_Datapath_Binding(uuid, external_ids) :-
-    &nb::Logical_Switch(._uuid = uuid, .name = name, .external_ids = ids,
-                        .other_config = other_config),
-    var uuid_str = uuid2str(uuid).intern(),
-    var external_ids = {
-        var eids = [i"logical-switch" -> uuid_str, i"name" -> name];
-        match (ids.get(i"neutron:network_name")) {
-            None -> (),
-            Some{nnn} -> eids.insert(i"name2", nnn)
-        };
-        match (other_config.get(i"interconn-ts")) {
-            None -> (),
-            Some{value} -> eids.insert(i"interconn-ts", value)
-        };
-        eids
-    }.
-
-OutProxy_Datapath_Binding(uuid, external_ids) :-
-    lr in nb::Logical_Router(._uuid = uuid, .name = name, .external_ids = ids,
-                             .options = options),
-    lr.is_enabled(),
-    var uuid_str = uuid2str(uuid).intern(),
-    var external_ids = {
-        var eids = [i"logical-router" -> uuid_str, i"name" -> name];
-        match (ids.get(i"neutron:router_name")) {
-            None -> (),
-            Some{nnn} -> eids.insert(i"name2", nnn)
-        };
-        match (options.get(i"snat-ct-zone").and_then(parse_dec_u64)) {
-            None -> (),
-            Some{zone} -> eids.insert(i"snat-ct-zone", i"${zone}")
-        };
-        var learn_from_arp_request = options.get_bool_def(i"always_learn_from_arp_request", true);
-        if (not learn_from_arp_request) {
-            eids.insert(i"always_learn_from_arp_request", i"false")
-        };
-        eids
-    }.
-
-sb::Out_Datapath_Binding(uuid, tunkey, load_balancers, external_ids) :-
-    OutProxy_Datapath_Binding(uuid, external_ids),
-    TunKeyAllocation(uuid, tunkey),
-    /* Datapath_Binding.load_balancers is not used anymore, it's still in the
-     * schema for compatibility reasons.  Reset it to empty, just in case.
-     */
-    var load_balancers = set_empty().
-
-function get_requested_chassis(options: Map<istring,istring>) : istring = {
-    var requested_chassis = match(options.get(i"requested-chassis")) {
-        None -> i"",
-        Some{requested_chassis} -> requested_chassis,
-    };
-    requested_chassis
-}
-
-relation RequestedChassis(
-    name: istring,
-    chassis: uuid,
-)
-RequestedChassis(name, chassis) :-
-    sb::Chassis(._uuid = chassis, .name=name).
-RequestedChassis(hostname, chassis) :-
-    sb::Chassis(._uuid = chassis, .hostname=hostname).
-
-/* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields,
- * except tunnel id, which is allocated separately (see PortTunKeyAllocation). */
-relation OutProxy_Port_Binding (
-    _uuid: uuid,
-    logical_port: istring,
-    __type: istring,
-    gateway_chassis: Set<uuid>,
-    ha_chassis_group: Option<uuid>,
-    options: Map<istring,istring>,
-    datapath: uuid,
-    parent_port: Option<istring>,
-    tag: Option<integer>,
-    mac: Set<istring>,
-    nat_addresses: Set<istring>,
-    external_ids: Map<istring,istring>,
-    requested_chassis: Option<uuid>
-)
-
-/* Case 1a: Create a Port_Binding per logical switch port that is not of type
- * "router" */
-OutProxy_Port_Binding(._uuid = lsp._uuid,
-                      .logical_port = lsp.name,
-                      .__type = lsp.__type,
-                      .gateway_chassis = set_empty(),
-                      .ha_chassis_group = sp.hac_group_uuid,
-                      .options = options,
-                      .datapath = sw._uuid,
-                      .parent_port = lsp.parent_name,
-                      .tag = tag,
-                      .mac = lsp.addresses,
-                      .nat_addresses = set_empty(),
-                      .external_ids = eids,
-                      .requested_chassis = None) :-
-    sp in &SwitchPort(.lsp = lsp, .sw = sw),
-    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
-    var tag = match (opt_tag) {
-        None -> lsp.tag,
-        Some{t} -> Some{t}
-    },
-    lsp.__type != i"router",
-    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
-    chassis_name_or_hostname == i"",
-    var eids = {
-        var eids = lsp.external_ids;
-        match (lsp.external_ids.get(i"neutron:port_name")) {
-            None -> (),
-            Some{name} -> eids.insert(i"name", name)
-        };
-        eids
-    },
-    var options = {
-        var options = lsp.options;
-        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
-            options.insert(i"vlan-passthru", i"true")
-        };
-        options
-    }.
-
-/* Case 1b: Create a Port_Binding per logical switch port that is not of type
- * "router" and has options "requested-chassis" pointing at chassis name or
- * hostname. */
-OutProxy_Port_Binding(._uuid = lsp._uuid,
-                      .logical_port = lsp.name,
-                      .__type = lsp.__type,
-                      .gateway_chassis = set_empty(),
-                      .ha_chassis_group = sp.hac_group_uuid,
-                      .options = options,
-                      .datapath = sw._uuid,
-                      .parent_port = lsp.parent_name,
-                      .tag = tag,
-                      .mac = lsp.addresses,
-                      .nat_addresses = set_empty(),
-                      .external_ids = eids,
-                      .requested_chassis = Some{requested_chassis}) :-
-    sp in &SwitchPort(.lsp = lsp, .sw = sw),
-    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
-    var tag = match (opt_tag) {
-        None -> lsp.tag,
-        Some{t} -> Some{t}
-    },
-    lsp.__type != i"router",
-    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
-    chassis_name_or_hostname != i"",
-    RequestedChassis(chassis_name_or_hostname, requested_chassis),
-    var eids = {
-        var eids = lsp.external_ids;
-        match (lsp.external_ids.get(i"neutron:port_name")) {
-            None -> (),
-            Some{name} -> eids.insert(i"name", name)
-        };
-        eids
-    },
-    var options = {
-        var options = lsp.options;
-        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
-            options.insert(i"vlan-passthru", i"true")
-        };
-        options
-    }.
-
-/* Case 1c: Create a Port_Binding per logical switch port that is not of type
- * "router" and has options "requested-chassis" pointing at non-existent
- * chassis name or hostname.
- */
-OutProxy_Port_Binding(._uuid = lsp._uuid,
-                      .logical_port = lsp.name,
-                      .__type = lsp.__type,
-                      .gateway_chassis = set_empty(),
-                      .ha_chassis_group = sp.hac_group_uuid,
-                      .options = options,
-                      .datapath = sw._uuid,
-                      .parent_port = lsp.parent_name,
-                      .tag = tag,
-                      .mac = lsp.addresses,
-                      .nat_addresses = set_empty(),
-                      .external_ids = eids,
-                      .requested_chassis = None) :-
-    sp in &SwitchPort(.lsp = lsp, .sw = sw),
-    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
-    var tag = match (opt_tag) {
-        None -> lsp.tag,
-        Some{t} -> Some{t}
-    },
-    lsp.__type != i"router",
-    var chassis_name_or_hostname = get_requested_chassis(lsp.options),
-    chassis_name_or_hostname != i"",
-    not RequestedChassis(chassis_name_or_hostname, _),
-    var eids = {
-        var eids = lsp.external_ids;
-        match (lsp.external_ids.get(i"neutron:port_name")) {
-            None -> (),
-            Some{name} -> eids.insert(i"name", name)
-        };
-        eids
-    },
-    var options = {
-        var options = lsp.options;
-        if (sw.other_config.get(i"vlan-passthru") == Some{i"true"}) {
-            options.insert(i"vlan-passthru", i"true")
-        };
-        options
-    }.
-
-relation SwitchPortLBIPs(
-    port: Intern<SwitchPort>,
-    lbips: Option<Intern<LogicalRouterLBIPs>>)
-
-SwitchPortLBIPs(.port = port,
-                .lbips = Some{lbips}) :-
-    port in &SwitchPort(.peer = Some{peer}),
-    port.lsp.options.get(i"router-port").is_some(),
-    lbips in &LogicalRouterLBIPs(.lr = peer.router._uuid).
-
-SwitchPortLBIPs(.port = port,
-                .lbips = None) :-
-    port in &SwitchPort(.peer = peer),
-    peer.is_none() or port.lsp.options.get(i"router-port").is_none().
-
-/* Case 2: Create a Port_Binding per logical switch port of type "router" */
-OutProxy_Port_Binding(._uuid = lsp._uuid,
-                      .logical_port = lsp.name,
-                      .__type = __type,
-                      .gateway_chassis = set_empty(),
-                      .ha_chassis_group = None,
-                      .options = options,
-                      .datapath = sw._uuid,
-                      .parent_port = lsp.parent_name,
-                      .tag = None,
-                      .mac = lsp.addresses,
-                      .nat_addresses = nat_addresses,
-                      .external_ids = eids,
-                      .requested_chassis = None) :-
-    SwitchPortLBIPs(.port = &SwitchPort{.lsp = lsp, .sw = sw, .peer = peer},
-                    .lbips = lbips),
-    var eids = {
-        var eids = lsp.external_ids;
-        match (lsp.external_ids.get(i"neutron:port_name")) {
-            None -> (),
-            Some{name} -> eids.insert(i"name", name)
-        };
-        eids
-    },
-    Some{var router_port} = lsp.options.get(i"router-port"),
-    var opt_chassis = peer.and_then(|p| p.router.options.get(i"chassis")),
-    var l3dgw_port = peer.and_then(|p| p.router.l3dgw_ports.nth(0)),
-    (var __type, var options) = {
-        var options = [i"peer" -> router_port];
-        match (opt_chassis) {
-            None -> {
-                (i"patch", options)
-            },
-            Some{chassis} -> {
-                options.insert(i"l3gateway-chassis", chassis);
-                (i"l3gateway", options)
-            }
-        }
-    },
-    var base_nat_addresses = {
-        match (lsp.options.get(i"nat-addresses")) {
-            None -> { set_empty() },
-            Some{nat_addresses} -> {
-                if (nat_addresses == i"router") {
-                    match ((l3dgw_port, opt_chassis, peer)) {
-                        (None, None, _) -> set_empty(),
-                        (_, _, None) -> set_empty(),
-                        (_, _, Some{rport}) -> get_nat_addresses(rport, lbips.unwrap_or_default(), false)
-                    }
-                } else {
-                    /* Only accept manual specification of ethernet address
-                     * followed by IPv4 addresses on type "l3gateway" ports.
-                     */
-                    if (opt_chassis.is_some()) {
-                        match (extract_lsp_addresses(nat_addresses.ival())) {
-                            None -> {
-                                warn("Error extracting nat-addresses.");
-                                set_empty()
-                            },
-                            Some{_} -> { set_singleton(nat_addresses) }
-                        }
-                    } else { set_empty() }
-                }
-            }
-        }
-    },
-    /* Add the router mac and IPv4 addresses to
-     * Port_Binding.nat_addresses so that GARP is sent for these
-     * IPs by the ovn-controller on which the distributed gateway
-     * router port resides if:
-     *
-     * 1. The peer has 'reside-on-redirect-chassis' set and the
-     *    logical router datapath has a distributed router port.
-     *
-     * 2. The peer is a distributed gateway router port.
-     *
-     * 3. The peer's router is a gateway router and the port has a localnet
-     *    port.
-     *
-     * Note: The Port_Binding.nat_addresses column is also used for
-     * sending the GARPs for the router port IPs.
-     * */
-    var garp_nat_addresses = match (peer) {
-        Some{rport} -> match (
-            (rport.lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)
-             and l3dgw_port.is_some()) or
-            rport.is_redirect or
-            (rport.router.options.contains_key(i"chassis") and
-             not sw.localnet_ports.is_empty())) {
-            false -> set_empty(),
-            true -> set_singleton(get_garp_nat_addresses(rport).intern())
-        },
-        None -> set_empty()
-    },
-    var nat_addresses = set_union(base_nat_addresses, garp_nat_addresses).
-
-/* Case 3: Port_Binding per logical router port */
-OutProxy_Port_Binding(._uuid = lrp._uuid,
-                      .logical_port = lrp.name,
-                      .__type = __type,
-                      .gateway_chassis = set_empty(),
-                      .ha_chassis_group = None,
-                      .options = options,
-                      .datapath = router._uuid,
-                      .parent_port = None,
-                      .tag = None, // always empty for router ports
-                      .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"),
-                      .nat_addresses = set_empty(),
-                      .external_ids = lrp.external_ids,
-                      .requested_chassis = None) :-
-    rp in &RouterPort(.lrp = lrp, .router = router, .peer = peer),
-    RouterPortRAOptionsComplete(lrp._uuid, options0),
-    (var __type, var options1) = match (router.options.get(i"chassis")) {
-        /* TODO: derived ports */
-        None -> (i"patch", map_empty()),
-        Some{lrchassis} -> (i"l3gateway", [i"l3gateway-chassis" -> lrchassis])
-    },
-    var options2 = match (router_peer_name(peer)) {
-        None -> map_empty(),
-        Some{peer_name} -> [i"peer" -> peer_name]
-    },
-    var options3 = match ((peer, rp.networks.ipv6_addrs.is_empty())) {
-        (PeerSwitch{_, _}, false) -> {
-            var enabled = lrp.is_enabled();
-            var pd = lrp.options.get_bool_def(i"prefix_delegation", false);
-            var p = lrp.options.get_bool_def(i"prefix", false);
-            [i"ipv6_prefix_delegation" -> i"${pd and enabled}",
-             i"ipv6_prefix" -> i"${p and enabled}"]
-        },
-        _ -> map_empty()
-    },
-    PreserveIPv6RAPDList(lrp._uuid, ipv6_ra_pd_list),
-    var options4 = match (ipv6_ra_pd_list) {
-        None -> map_empty(),
-        Some{value} -> [i"ipv6_ra_pd_list" -> value]
-    },
-    RouterPortIsRedirect(lrp._uuid, is_redirect),
-    var options5 = match (is_redirect) {
-        false -> map_empty(),
-        true -> [i"chassis-redirect-port" -> chassis_redirect_name(lrp.name).intern()]
-    },
-    var options = options0.union(options1).union(options2).union(options3).union(options4).union(options5),
-    var eids = {
-        var eids = lrp.external_ids;
-        match (lrp.external_ids.get(i"neutron:port_name")) {
-            None -> (),
-            Some{name} -> eids.insert(i"name", name)
-        };
-        eids
-    }.
-/*
-*/
-function get_router_load_balancer_ips(lbips: Intern<LogicalRouterLBIPs>,
-                                      routable_only: bool) :
-    (Set<istring>, Set<istring>) =
-{
-    if (routable_only) {
-        (lbips.lb_ipv4s_routable, lbips.lb_ipv6s_routable)
-    } else {
-        (union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable),
-         union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable))
-    }
-}
-
-/* Returns an array of strings, each consisting of a MAC address followed
- * by one or more IP addresses, and if the port is a distributed gateway
- * port, followed by 'is_chassis_resident("LPORT_NAME")', where the
- * LPORT_NAME is the name of the L3 redirect port or the name of the
- * logical_port specified in a NAT rule.  These strings include the
- * external IP addresses of all NAT rules defined on that router, and all
- * of the IP addresses used in load balancer VIPs defined on that router.
- */
-function get_nat_addresses(rport: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>, routable_only: bool): Set<istring> =
-{
-    var addresses = set_empty();
-    var has_redirect = not rport.router.l3dgw_ports.is_empty();
-    match (eth_addr_from_string(rport.lrp.mac.ival())) {
-        None -> addresses,
-        Some{mac} -> {
-            var c_addresses = "${mac}";
-            var central_ip_address = false;
-
-            /* Get NAT IP addresses. */
-            for (nat in rport.router.nats) {
-                if (routable_only and
-                    (nat.nat.__type == i"snat" or
-                     not nat.nat.options.get_bool_def(i"add_route", false))) {
-                    continue;
-                };
-                /* Determine whether this NAT rule satisfies the conditions for
-                 * distributed NAT processing. */
-                if (has_redirect and nat.nat.__type == i"dnat_and_snat" and
-                    nat.nat.logical_port.is_some() and nat.external_mac.is_some()) {
-                    /* Distributed NAT rule.
*/ - var logical_port = nat.nat.logical_port.unwrap_or_default(); - var external_mac = nat.external_mac.unwrap_or_default(); - addresses.insert(i"${external_mac} ${nat.external_ip} " - "is_chassis_resident(${json_escape(logical_port)})") - } else { - /* Centralized NAT rule, either on gateway router or distributed - * router. - * Check if external_ip is same as router ip. If so, then there - * is no need to add this to the nat_addresses. The router IPs - * will be added separately. */ - var is_router_ip = false; - match (nat.external_ip) { - IPv4{ei} -> { - for (ipv4 in rport.networks.ipv4_addrs) { - if (ei == ipv4.addr) { - is_router_ip = true; - break - } - } - }, - IPv6{ei} -> { - for (ipv6 in rport.networks.ipv6_addrs) { - if (ei == ipv6.addr) { - is_router_ip = true; - break - } - } - } - }; - if (not is_router_ip) { - c_addresses = c_addresses ++ " ${nat.external_ip}"; - central_ip_address = true - } - } - }; - - /* A set to hold all load-balancer vips. */ - (var all_ips_v4, var all_ips_v6) = get_router_load_balancer_ips(lbips, routable_only); - - for (ip_address in set_union(all_ips_v4, all_ips_v6)) { - c_addresses = c_addresses ++ " ${ip_address}"; - central_ip_address = true - }; - - if (central_ip_address) { - /* Gratuitous ARP for centralized NAT rules on distributed gateway - * ports should be restricted to the gateway chassis. 
*/ - if (has_redirect) { - c_addresses = c_addresses ++ match (rport.router.l3dgw_ports.nth(0)) { - None -> "", - Some {var gw_port} -> " is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})" - } - } else (); - - addresses.insert(c_addresses.intern()) - } else (); - addresses - } - } -} - -function get_garp_nat_addresses(rport: Intern<RouterPort>): string = { - var garp_info = ["${rport.networks.ea}"]; - for (ipv4_addr in rport.networks.ipv4_addrs) { - garp_info.push("${ipv4_addr.addr}") - }; - match (rport.router.l3dgw_ports.nth(0)) { - None -> (), - Some {var gw_port} -> garp_info.push( - "is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})") - }; - garp_info.join(" ") -} - -/* Extra options computed for router ports by the logical flow generation code */ -relation RouterPortRAOptions(lrp: uuid, options: Map<istring, istring>) - -relation RouterPortRAOptionsComplete(lrp: uuid, options: Map<istring, istring>) - -RouterPortRAOptionsComplete(lrp, options) :- - RouterPortRAOptions(lrp, options). -RouterPortRAOptionsComplete(lrp, map_empty()) :- - &nb::Logical_Router_Port(._uuid = lrp), - not RouterPortRAOptions(lrp, _). - -function has_distributed_nat(nats: Vec<NAT>): bool { - for (nat in nats) { - if (nat.nat.__type == i"dnat_and_snat") { - return true - } - }; - return false -} - -/* - * Create derived port for Logical_Router_Ports with non-empty 'gateway_chassis' column. 
- */ - -/* Create derived ports */ -OutProxy_Port_Binding(._uuid = cr_lrp_uuid, - .logical_port = chassis_redirect_name(lrp.name).intern(), - .__type = i"chassisredirect", - .gateway_chassis = set_empty(), - .ha_chassis_group = Some{hacg_uuid}, - .options = options, - .datapath = lr_uuid, - .parent_port = None, - .tag = None, //always empty for router ports - .mac = set_singleton(i"${lrp.mac} ${lrp.networks.map(ival).to_vec().join(\" \")}"), - .nat_addresses = set_empty(), - .external_ids = lrp.external_ids, - .requested_chassis = None) :- - DistributedGatewayPort(lrp, lr_uuid, cr_lrp_uuid), - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), - var redirect_type = match (lrp.options.get(i"redirect-type")) { - Some{var value} -> [i"redirect-type" -> value], - _ -> map_empty() - }, - LogicalRouterNATs(lr_uuid, nats), - var always_redirect = if (has_distributed_nat(nats) or - lrp.options.get(i"redirect-type") == Some{i"bridged"}) { - map_empty() - } else { - [i"always-redirect" -> i"true"] - }, - var options = redirect_type.union(always_redirect).insert_imm(i"distributed-port", lrp.name). - -/* - * We want to preserve 'up' (set by ovn-controller) for Port_Binding rows. - * We need to set set 'up' in new rows to Some{false}; if we don't set - * it at all, ovn-controller will never update it. - */ -relation PortBindingUp0(pb_uuid: uuid, up: bool) -PortBindingUp0(pb_uuid, up) :- sb::Port_Binding(._uuid = pb_uuid, .up = Some{up}). - -relation PortBindingUp(pb_uuid: uuid, up: bool) -PortBindingUp(pb_uuid, up) :- PortBindingUp0(pb_uuid, up). -PortBindingUp(pb_uuid, false) :- - OutProxy_Port_Binding(._uuid = pb_uuid), - not PortBindingUp0(pb_uuid, _). - -/* Add allocated qdisc_queue_id and tunnel key to Port_Binding. 
- */ -sb::Out_Port_Binding(._uuid = pbinding._uuid, - .logical_port = pbinding.logical_port, - .__type = pbinding.__type, - .gateway_chassis = pbinding.gateway_chassis, - .ha_chassis_group = pbinding.ha_chassis_group, - .options = options0, - .datapath = pbinding.datapath, - .tunnel_key = tunkey, - .parent_port = pbinding.parent_port, - .tag = pbinding.tag, - .mac = pbinding.mac, - .nat_addresses = pbinding.nat_addresses, - .external_ids = pbinding.external_ids, - .up = Some{up}, - .requested_chassis = pbinding.requested_chassis) :- - pbinding in OutProxy_Port_Binding(), - PortTunKeyAllocation(pbinding._uuid, tunkey), - QueueIDAllocation(pbinding._uuid, qid), - PortBindingUp(pbinding._uuid, up), - var options0 = match (qid) { - None -> pbinding.options, - Some{id} -> pbinding.options.insert_imm(i"qdisc_queue_id", i"${id}") - }. - -/* Referenced chassis. - * - * These tables track the sb::Chassis that a packet that traverses logical - * router 'lr_uuid' can end up at (or start from). This is used for - * sb::Out_HA_Chassis_Group's ref_chassis column. - * - * Only HA Chassis Groups with more than 1 chassis need to maintain the - * referenced chassis. - * - * RefChassisSet0 has a row for each logical router that actually references a - * chassis. RefChassisSet has a row for every logical router. 
*/ -relation RefChassis(lr_uuid: uuid, chassis_uuid: uuid) -RefChassis(lr_uuid, chassis_uuid) :- - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), - HAChassis(.hacg_uuid = hacg_uuid, .hac_uuid = hac_uuid), - var hacg_size = hac_uuid.group_by(hacg_uuid).to_set().size(), - hacg_size > 1, - DistributedGatewayPort(lrp, lr_uuid, _), - ConnectedLogicalRouter[(lr_uuid, set_uuid)], - ConnectedLogicalRouter[(lr2_uuid, set_uuid)], - FirstHopLogicalRouter(lr2_uuid, ls_uuid), - LogicalSwitchPort(lsp_uuid, ls_uuid), - &nb::Logical_Switch_Port(._uuid = lsp_uuid, .name = lsp_name), - sb::Port_Binding(.logical_port = lsp_name, .chassis = chassis_uuids), - Some{var chassis_uuid} = chassis_uuids. -relation RefChassisSet0(lr_uuid: uuid, chassis_uuids: Set<uuid>) -RefChassisSet0(lr_uuid, chassis_uuids) :- - RefChassis(lr_uuid, chassis_uuid), - var chassis_uuids = chassis_uuid.group_by(lr_uuid).to_set(). -relation RefChassisSet(lr_uuid: uuid, chassis_uuids: Set<uuid>) -RefChassisSet(lr_uuid, chassis_uuids) :- - RefChassisSet0(lr_uuid, chassis_uuids). -RefChassisSet(lr_uuid, set_empty()) :- - nb::Logical_Router(._uuid = lr_uuid), - not RefChassisSet0(lr_uuid, _). - -/* Referenced chassis for an HA chassis group. - * - * Multiple logical routers can reference an HA chassis group so we merge the - * referenced chassis across all of them. - */ -relation HAChassisGroupRefChassisSet(hacg_uuid: uuid, - chassis_uuids: Set<uuid>) -HAChassisGroupRefChassisSet(hacg_uuid, chassis_uuids) :- - DistributedGatewayPortHAChassisGroup(lrp, hacg_uuid), - DistributedGatewayPort(lrp, lr_uuid, _), - RefChassisSet(lr_uuid, chassis_uuids), - var chassis_uuids = chassis_uuids.group_by(hacg_uuid).union(). - -/* HA_Chassis_Group and HA_Chassis. 
*/ -sb::Out_HA_Chassis_Group(hacg_uuid, hacg_name, ha_chassis, ref_chassis, eids) :- - HAChassis(hacg_uuid, hac_uuid, chassis_name, _, _), - var chassis_uuid = ha_chassis_uuid(chassis_name.ival(), hac_uuid), - var ha_chassis = chassis_uuid.group_by(hacg_uuid).to_set(), - HAChassisGroup(hacg_uuid, hacg_name, eids), - HAChassisGroupRefChassisSet(hacg_uuid, ref_chassis). - -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), chassis, priority, eids) :- - HAChassis(_, hac_uuid, chassis_name, priority, eids), - chassis_rec in sb::Chassis(.name = chassis_name), - var chassis = Some{chassis_rec._uuid}. -sb::Out_HA_Chassis(ha_chassis_uuid(chassis_name.ival(), hac_uuid), None, priority, eids) :- - HAChassis(_, hac_uuid, chassis_name, priority, eids), - not chassis_rec in sb::Chassis(.name = chassis_name). - -relation HAChassisToChassis(name: istring, chassis: Option<uuid>) -HAChassisToChassis(name, Some{chassis}) :- - sb::Chassis(._uuid = chassis, .name = name). -HAChassisToChassis(name, None) :- - nb::HA_Chassis(.chassis_name = name), - not sb::Chassis(.name = name). -sb::Out_HA_Chassis(ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid), chassis, priority, eids) :- - sp in &SwitchPort(), - sp.lsp.__type == i"external", - Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group, - ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid), - var hac_uuid = FlatMap(ha_chassis_group.ha_chassis), - ha_chassis in nb::HA_Chassis(._uuid = hac_uuid, .priority = priority, .external_ids = eids), - HAChassisToChassis(ha_chassis.chassis_name, chassis). -sb::Out_HA_Chassis_Group(_uuid, name, ha_chassis, set_empty() /* XXX? 
*/, eids) :- - sp in &SwitchPort(), - sp.lsp.__type == i"external", - var ls_uuid = sp.sw._uuid, - Some{var ha_chassis_group_uuid} = sp.lsp.ha_chassis_group, - ha_chassis_group in nb::HA_Chassis_Group(._uuid = ha_chassis_group_uuid, .name = name, - .external_ids = eids), - var hac_uuid = FlatMap(ha_chassis_group.ha_chassis), - ha_chassis in nb::HA_Chassis(._uuid = hac_uuid), - var ha_chassis_uuid_name = ha_chassis_uuid(ha_chassis.chassis_name.ival(), hac_uuid), - var ha_chassis = ha_chassis_uuid_name.group_by((ls_uuid, name, eids)).to_set(), - var _uuid = ha_chassis_group_uuid(ls_uuid). - -/* - * SB_Global: copy nb_cfg and options from NB. - * If NB_Global does not exist yet, just keep the current value of SB_Global, - * if any. - */ -for (nb_global in nb::NB_Global) { - sb::Out_SB_Global(._uuid = nb_global._uuid, - .nb_cfg = nb_global.nb_cfg, - .options = nb_global.options, - .ipsec = nb_global.ipsec) -} - -sb::Out_SB_Global(._uuid = sb_global._uuid, - .nb_cfg = sb_global.nb_cfg, - .options = sb_global.options, - .ipsec = sb_global.ipsec) :- - sb_global in sb::SB_Global(), - not nb::NB_Global(). - -/* sb::Chassis_Private joined with is_remote from sb::Chassis, - * including a record even for a null Chassis ref. */ -relation ChassisPrivate( - cp: sb::Chassis_Private, - is_remote: bool) -ChassisPrivate(cp, c.other_config.get_bool_def(i"is-remote", false)) :- - cp in sb::Chassis_Private(.chassis = Some{uuid}), - c in sb::Chassis(._uuid = uuid). -ChassisPrivate(cp, false), -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :- - cp in sb::Chassis_Private(.chassis = Some{uuid}), - not sb::Chassis(._uuid = uuid). -ChassisPrivate(cp, false), -Warning["Chassis not exist for Chassis_Private record, name: ${cp.name}"] :- - cp in sb::Chassis_Private(.chassis = None). - -/* Track minimum hv_cfg across all the (non-remote) chassis. 
*/ -relation HvCfg0(hv_cfg: integer) -HvCfg0(hv_cfg) :- - ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = chassis_cfg}, .is_remote = false), - var hv_cfg = chassis_cfg.group_by(()).min(). -relation HvCfg(hv_cfg: integer) -HvCfg(hv_cfg) :- HvCfg0(hv_cfg). -HvCfg(hv_cfg) :- - nb::NB_Global(.nb_cfg = hv_cfg), - not HvCfg0(). - -/* Track maximum nb_cfg_timestamp among all the (non-remote) chassis - * that have the minimum nb_cfg. */ -relation HvCfgTimestamp0(hv_cfg_timestamp: integer) -HvCfgTimestamp0(hv_cfg_timestamp) :- - HvCfg(hv_cfg), - ChassisPrivate(.cp = sb::Chassis_Private{.nb_cfg = hv_cfg, - .nb_cfg_timestamp = chassis_cfg_timestamp}, - .is_remote = false), - var hv_cfg_timestamp = chassis_cfg_timestamp.group_by(()).max(). -relation HvCfgTimestamp(hv_cfg_timestamp: integer) -HvCfgTimestamp(hv_cfg_timestamp) :- HvCfgTimestamp0(hv_cfg_timestamp). -HvCfgTimestamp(hv_cfg_timestamp) :- - nb::NB_Global(.hv_cfg_timestamp = hv_cfg_timestamp), - not HvCfgTimestamp0(). - -/* - * nb::Out_NB_Global. - * - * OutNBGlobal0 generates the new record in the common case. - * OutNBGlobal1 generates the new record as a copy of nb::NB_Global, if sb::SB_Global is missing. - * nb::Out_NB_Global makes sure we have only a single record in the relation. - * - * (We don't generate an NB_Global output record if there isn't - * one in the input. We don't have enough entropy available to - * generate a random _uuid. Doesn't seem like a big deal, because - * OVN probably hasn't really been initialized yet.) 
- */ -relation OutNBGlobal0[nb::Out_NB_Global] -OutNBGlobal0[nb::Out_NB_Global{._uuid = _uuid, - .sb_cfg = sb_cfg, - .hv_cfg = hv_cfg, - .nb_cfg_timestamp = nb_cfg_timestamp, - .hv_cfg_timestamp = hv_cfg_timestamp, - .ipsec = ipsec, - .options = options}] :- - NbCfgTimestamp[nb_cfg_timestamp], - HvCfgTimestamp(hv_cfg_timestamp), - nbg in nb::NB_Global(._uuid = _uuid, .ipsec = ipsec), - sb::SB_Global(.nb_cfg = sb_cfg), - HvCfg(hv_cfg), - HvCfgTimestamp(hv_cfg_timestamp), - MacPrefix(mac_prefix), - SvcMonitorMac(svc_monitor_mac), - OvnMaxDpKeyLocal[max_tunid], - var options = { - var options = nbg.options; - options.put_mac_prefix(mac_prefix); - options.put_svc_monitor_mac(svc_monitor_mac); - options.insert(i"max_tunid", i"${max_tunid}"); - options.insert(i"northd_internal_version", ovn_internal_version().intern()); - options - }. - -relation OutNBGlobal1[nb::Out_NB_Global] -OutNBGlobal1[x] :- OutNBGlobal0[x]. -OutNBGlobal1[nb::Out_NB_Global{._uuid = nbg._uuid, - .sb_cfg = nbg.sb_cfg, - .hv_cfg = nbg.hv_cfg, - .ipsec = nbg.ipsec, - .options = nbg.options, - .nb_cfg_timestamp = nbg.nb_cfg_timestamp, - .hv_cfg_timestamp = nbg.hv_cfg_timestamp}] :- - Unit(), - not OutNBGlobal0[_], - nbg in nb::NB_Global(). - -nb::Out_NB_Global[y] :- - OutNBGlobal1[x], - var y = x.group_by(()).group_first(). - -// Tracks the value that should go into NB_Global's 'nb_cfg_timestamp' column. -// ovn-northd-ddlog.c pushes the current time directly into this relation. -input relation NbCfgTimestamp[integer] - -output relation SbCfg[integer] -SbCfg[sb_cfg] :- nb::Out_NB_Global(.sb_cfg = sb_cfg). - -output relation Northd_Probe_Interval[s64] -Northd_Probe_Interval[interval] :- - nb in nb::NB_Global(), - var interval = nb.options.get(i"northd_probe_interval").and_then(parse_dec_i64).unwrap_or(-1). - -relation CheckLspIsUp[bool] -CheckLspIsUp[check_lsp_is_up] :- - nb in nb::NB_Global(), - var check_lsp_is_up = not nb.options.get_bool_def(i"ignore_lsp_down", true). 
-CheckLspIsUp[true] :-
-    Unit(),
-    not nb in nb::NB_Global().
-
-/*
- * Address_Set: copy from NB + additional records generated from NB Port_Group (two records for each
- * Port_Group for IPv4 and IPv6 addresses).
- *
- * There can be name collisions between the two types of Address_Set records.  User-defined records
- * take precedence.
- */
-sb::Out_Address_Set(._uuid = nb_as._uuid,
-                    .name = nb_as.name,
-                    .addresses = nb_as.addresses) :-
-    nb_as in &nb::Address_Set().
-
-sb::Out_Address_Set(._uuid = hash128("svc_monitor_mac"),
-                    .name = i"svc_monitor_mac",
-                    .addresses = set_singleton(i"${svc_monitor_mac}")) :-
-    SvcMonitorMac(svc_monitor_mac).
-
-sb::Out_Address_Set(hash128(as_name), as_name, pg_ip4addrs.union()) :-
-    PortGroupPort(.pg_name = pg_name, .port = port_uuid),
-    var as_name = i"${pg_name}_ip4",
-    // avoid name collisions with user-defined Address_Sets
-    not &nb::Address_Set(.name = as_name),
-    PortStaticAddresses(.lsport = port_uuid, .ip4addrs = stat),
-    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}},
-                                dyn_addr),
-    var dynamic = match (dyn_addr) {
-        None -> set_empty(),
-        Some{lpaddress} -> match (lpaddress.ipv4_addrs.nth(0)) {
-            None -> set_empty(),
-            Some{addr} -> set_singleton(i"${addr.addr}")
-        }
-    },
-    //PortDynamicAddresses(.lsport = port_uuid, .ip4addrs = dynamic),
-    var port_ip4addrs = stat.union(dynamic),
-    var pg_ip4addrs = port_ip4addrs.group_by(as_name).to_vec().
-
-sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :-
-    nb::Port_Group(.ports = set_empty(), .name = pg_name),
-    var as_name = i"${pg_name}_ip4",
-    // avoid name collisions with user-defined Address_Sets
-    not &nb::Address_Set(.name = as_name).
-
-sb::Out_Address_Set(hash128(as_name), as_name, pg_ip6addrs.union()) :-
-    PortGroupPort(.pg_name = pg_name, .port = port_uuid),
-    var as_name = i"${pg_name}_ip6",
-    // avoid name collisions with user-defined Address_Sets
-    not &nb::Address_Set(.name = as_name),
-    PortStaticAddresses(.lsport = port_uuid, .ip6addrs = stat),
-    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = port_uuid}},
-                                dyn_addr),
-    var dynamic = match (dyn_addr) {
-        None -> set_empty(),
-        Some{lpaddress} -> match (lpaddress.ipv6_addrs.nth(0)) {
-            None -> set_empty(),
-            Some{addr} -> set_singleton(i"${addr.addr}")
-        }
-    },
-    //PortDynamicAddresses(.lsport = port_uuid, .ip6addrs = dynamic),
-    var port_ip6addrs = stat.union(dynamic),
-    var pg_ip6addrs = port_ip6addrs.group_by(as_name).to_vec().
-
-sb::Out_Address_Set(hash128(as_name), as_name, set_empty()) :-
-    nb::Port_Group(.ports = set_empty(), .name = pg_name),
-    var as_name = i"${pg_name}_ip6",
-    // avoid name collisions with user-defined Address_Sets
-    not &nb::Address_Set(.name = as_name).
-
-/*
- * Port_Group
- *
- * Create one SB Port_Group record for every datapath that has ports
- * referenced by the NB Port_Group.ports field.  In order to maintain the
- * SB Port_Group.name uniqueness constraint, ovn-northd populates the field
- * with the value: <SB.Logical_Datapath.tunnel_key>_<NB.Port_Group.name>.
- */
-
-relation PortGroupPort(
-    pg_uuid: uuid,
-    pg_name: istring,
-    port: uuid)
-
-PortGroupPort(pg_uuid, pg_name, port) :-
-    nb::Port_Group(._uuid = pg_uuid, .name = pg_name, .ports = pg_ports),
-    var port = FlatMap(pg_ports).
-
-sb::Out_Port_Group(._uuid = hash128(sb_name), .name = sb_name, .ports = port_names) :-
-    PortGroupPort(.pg_uuid = _uuid, .pg_name = nb_name, .port = port_uuid),
-    &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{._uuid = port_uuid,
-                                                    .name = port_name},
-                .sw = &Switch{._uuid = ls_uuid}),
-    TunKeyAllocation(.datapath = ls_uuid, .tunkey = tunkey),
-    var sb_name = i"${tunkey}_${nb_name}",
-    var port_names = port_name.group_by((_uuid, sb_name)).to_set().
-
-/*
- * Multicast_Group:
- * - three static rows per logical switch: one for flooding, one for packets
- *   with unknown destinations, one for flooding IP multicast known traffic to
- *   mrouters.
- * - dynamically created rows based on IGMP groups learned by controllers.
- */
-
-function mC_FLOOD(): (istring, integer) =
-    (i"_MC_flood", 32768)
-
-function mC_UNKNOWN(): (istring, integer) =
-    (i"_MC_unknown", 32769)
-
-function mC_MROUTER_FLOOD(): (istring, integer) =
-    (i"_MC_mrouter_flood", 32770)
-
-function mC_MROUTER_STATIC(): (istring, integer) =
-    (i"_MC_mrouter_static", 32771)
-
-function mC_STATIC(): (istring, integer) =
-    (i"_MC_static", 32772)
-
-function mC_FLOOD_L2(): (istring, integer) =
-    (i"_MC_flood_l2", 32773)
-
-function mC_IP_MCAST_MIN(): (istring, integer) =
-    (i"_MC_ip_mcast_min", 32774)
-
-function mC_IP_MCAST_MAX(): (istring, integer) =
-    (i"_MC_ip_mcast_max", 65535)
-
-
-// TODO: check that Multicast_Group.ports should not include derived ports
-
-/* Proxy table for Out_Multicast_Group: contains all Multicast_Group fields,
- * except `_uuid`, which will be computed by hashing the remaining fields,
- * and tunnel key, which case it is allocated separately (see
- * MulticastGroupTunKeyAllocation). */
-relation OutProxy_Multicast_Group (
-    datapath: uuid,
-    name: istring,
-    ports: Set<uuid>
-)
-
-/* Only create flood group if the switch has enabled ports */
-sb::Out_Multicast_Group (._uuid = hash128((datapath,name)),
-                         .datapath = datapath,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    &SwitchPort(.lsp = lsp, .sw = sw),
-    lsp.is_enabled(),
-    var datapath = sw._uuid,
-    var port_ids = lsp._uuid.group_by((datapath)).to_set(),
-    (var name, var tunnel_key) = mC_FLOOD().
-
-/* Create a multicast group to flood to all switch ports except router ports.
- */
-sb::Out_Multicast_Group (._uuid = hash128((datapath,name)),
-                         .datapath = datapath,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    &SwitchPort(.lsp = lsp, .sw = sw),
-    lsp.is_enabled(),
-    lsp.__type != i"router",
-    var datapath = sw._uuid,
-    var port_ids = lsp._uuid.group_by((datapath)).to_set(),
-    (var name, var tunnel_key) = mC_FLOOD_L2().
-
-/* Only create unknown group if the switch has ports with "unknown" address */
-sb::Out_Multicast_Group (._uuid = hash128((ls,name)),
-                         .datapath = ls,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = ports) :-
-    LogicalSwitchPortWithUnknownAddress(ls, lsp),
-    var ports = lsp.group_by(ls).to_set(),
-    (var name, var tunnel_key) = mC_UNKNOWN().
-
-/* Create a multicast group to flood multicast traffic to routers with
- * multicast relay enabled.
- */
-sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
-                         .datapath = sw._uuid,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    SwitchMcastFloodRelayPorts(sw, port_ids),
-    not port_ids.is_empty(),
-    (var name, var tunnel_key) = mC_MROUTER_FLOOD().
-
-/* Create a multicast group to flood traffic (no reports) to ports with
- * multicast flood enabled.
- */
-sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
-                         .datapath = sw._uuid,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    SwitchMcastFloodPorts(sw, port_ids),
-    not port_ids.is_empty(),
-    (var name, var tunnel_key) = mC_STATIC().
-
-/* Create a multicast group to flood reports to ports with
- * multicast flood_reports enabled.
- */
-sb::Out_Multicast_Group (._uuid = hash128((sw._uuid,name)),
-                         .datapath = sw._uuid,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    SwitchMcastFloodReportPorts(sw, port_ids),
-    not port_ids.is_empty(),
-    (var name, var tunnel_key) = mC_MROUTER_STATIC().
-
-/* Create a multicast group to flood traffic and reports to router ports with
- * multicast flood enabled.
- */
-sb::Out_Multicast_Group (._uuid = hash128((rtr._uuid,name)),
-                         .datapath = rtr._uuid,
-                         .name = name,
-                         .tunnel_key = tunnel_key,
-                         .ports = port_ids) :-
-    RouterMcastFloodPorts(rtr, port_ids),
-    not port_ids.is_empty(),
-    (var name, var tunnel_key) = mC_STATIC().
-
-/* Create a multicast group for each IGMP group learned by a Switch.
- * 'tunnel_key' == 0 triggers an ID allocation later.
- */
-OutProxy_Multicast_Group (.datapath = switch._uuid,
-                          .name = address,
-                          .ports = port_ids) :-
-    IgmpSwitchMulticastGroup(address, switch, port_ids).
-
-/* Create a multicast group for each IGMP group learned by a Router.
- * 'tunnel_key' == 0 triggers an ID allocation later.
- */
-OutProxy_Multicast_Group (.datapath = router._uuid,
-                          .name = address,
-                          .ports = port_ids) :-
-    IgmpRouterMulticastGroup(address, router, port_ids).
-
-/* Allocate a 'tunnel_key' for dynamic multicast groups. */
-sb::Out_Multicast_Group(._uuid = hash128((mcgroup.datapath,mcgroup.name)),
-                        .datapath = mcgroup.datapath,
-                        .name = mcgroup.name,
-                        .tunnel_key = tunnel_key,
-                        .ports = mcgroup.ports) :-
-    mcgroup in OutProxy_Multicast_Group(),
-    MulticastGroupTunKeyAllocation(mcgroup.datapath, mcgroup.name, tunnel_key).
-
-/*
- * MAC binding: records inserted by hypervisors; northd removes records for deleted logical ports and datapaths.
- */
-sb::Out_MAC_Binding (._uuid = mb._uuid,
-                     .logical_port = mb.logical_port,
-                     .ip = mb.ip,
-                     .mac = mb.mac,
-                     .datapath = mb.datapath) :-
-    sb::MAC_Binding[mb],
-    sb::Out_Port_Binding(.logical_port = mb.logical_port),
-    sb::Out_Datapath_Binding(._uuid = mb.datapath).
-
-/*
- * DHCP options: fixed table
- */
-sb::Out_DHCP_Options (
-    ._uuid = 128'h7d9d898a_179b_4898_8382_b73bec391f23,
-    .name = i"offerip",
-    .code = 0,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hea5e7d14_fd97_491c_8004_a120bdbc4306,
-    .name = i"netmask",
-    .code = 1,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hdab5e39b_6702_4245_9573_6c142aa3724c,
-    .name = i"router",
-    .code = 3,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h340b4bc5_c5c3_43d1_ae77_564da69c8fcc,
-    .name = i"dns_server",
-    .code = 6,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hcd1ab302_cbb2_4eab_9ec5_ec1c8541bd82,
-    .name = i"log_server",
-    .code = 7,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h1c7ea6a0_fe6b_48c1_a920_302583c1ff08,
-    .name = i"lpr_server",
-    .code = 9,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hae312373_2261_41b5_a2c4_186f426dd929,
-    .name = i"hostname",
-    .code = 12,
-    .__type = i"str"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hae35e575_226a_4ab5_a1c4_166f426dd999,
-    .name = i"domain_name",
-    .code = 15,
-    .__type = i"str"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'had0ec3e0_8be9_4c77_bceb_f8954a34c7ba,
-    .name = i"swap_server",
-    .code = 16,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h884c2e02_6e99_4d12_aef7_8454ebf8a3b7,
-    .name = i"policy_filter",
-    .code = 21,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h57cc2c61_fd2a_41c6_b6b1_6ce9a8901f86,
-    .name = i"router_solicitation",
-    .code = 32,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h48249097_03f0_46c1_a32a_2dd57cd4d0f8,
-    .name = i"nis_server",
-    .code = 41,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h333fe07e_bdd1_4371_aa4f_a412bc60f3a2,
-    .name = i"ntp_server",
-    .code = 42,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h6207109c_49d0_4348_8238_dd92afb69bf0,
-    .name = i"server_id",
-    .code = 54,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h2090b783_26d3_4c1d_830c_54c1b6c5d846,
-    .name = i"tftp_server",
-    .code = 66,
-    .__type = i"host_id"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'ha18ff399_caea_406e_af7e_321c6f74e581,
-    .name = i"classless_static_route",
-    .code = 121,
-    .__type = i"static_routes"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hb81ad7b4_62f0_40c7_a9a3_f96677628767,
-    .name = i"ms_classless_static_route",
-    .code = 249,
-    .__type = i"static_routes"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h0c2e144e_4b5f_4e21_8978_0e20bac9a6ea,
-    .name = i"ip_forward_enable",
-    .code = 19,
-    .__type = i"bool"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h6feb1926_9469_4b40_bfbf_478b9888cd3a,
-    .name = i"router_discovery",
-    .code = 31,
-    .__type = i"bool"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hcb776249_e8b1_4502_b33b_fa294d44077d,
-    .name = i"ethernet_encap",
-    .code = 36,
-    .__type = i"bool"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'ha2df9eaa_aea9_497f_b339_0c8ec3e39a07,
-    .name = i"default_ttl",
-    .code = 23,
-    .__type = i"uint8"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hb44b45a9_5004_4ef5_8e6a_aa8629e1afb1,
-    .name = i"tcp_ttl",
-    .code = 37,
-    .__type = i"uint8"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h50f01ca7_c650_46f0_8f50_39a67ec657da,
-    .name = i"mtu",
-    .code = 26,
-    .__type = i"uint16"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h9d31c057_6085_4810_96af_eeac7d3c5308,
-    .name = i"lease_time",
-    .code = 51,
-    .__type = i"uint32"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hea1e2e7a_9585_46ee_ad49_adfdefc0c4ef,
-    .name = i"T1",
-    .code = 58,
-    .__type = i"uint32"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hbc83a233_554b_453a_afca_1eadf76810d2,
-    .name = i"T2",
-    .code = 59,
-    .__type = i"uint32"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h1ab3eeca_0523_4101_9076_eea77d0232f4,
-    .name = i"bootfile_name",
-    .code = 67,
-    .__type = i"str"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'ha5c20b69_f7f3_4fa8_b550_8697aec6cbb7,
-    .name = i"wpad",
-    .code = 252,
-    .__type = i"str"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h1516bcb6_cc93_4233_a63f_bd29c8601831,
-    .name = i"path_prefix",
-    .code = 210,
-    .__type = i"str"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hc98e13cd_f653_473c_85c1_850dcad685fc,
-    .name = i"tftp_server_address",
-    .code = 150,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hfbe06e70_b43d_4dd9_9b21_2f27eb5da5df,
-    .name = i"arp_cache_timeout",
-    .code = 35,
-    .__type = i"uint32"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h2af54a3c_545c_4104_ae1c_432caa3e085e,
-    .name = i"tcp_keepalive_interval",
-    .code = 38,
-    .__type = i"uint32"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h4b2144e8_8d3f_4d96_9032_fe23c1866cd4,
-    .name = i"domain_search_list",
-    .code = 119,
-    .__type = i"domains"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'hb7236164_eea4_4bf2_9306_8619a9e3ad1d,
-    .name = i"broadcast_address",
-    .code = 28,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h32224b72_1561_4279_b430_982423b62a69,
-    .name = i"netbios_name_server",
-    .code = 44,
-    .__type = i"ipv4"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h691db4ae_624e_43e2_9f4a_5ed9de58f0e5,
-    .name = i"netbios_node_type",
-    .code = 46,
-    .__type = i"uint8"
-).
-
-sb::Out_DHCP_Options (
-    ._uuid = 128'h2d738583_96f4_4a78_99a1_f8f7fe328f3f,
-    .name = i"bootfile_name_alt",
-    .code = 254,
-    .__type = i"str"
-).
-
-
-/*
- * DHCPv6 options: fixed table
- */
-sb::Out_DHCPv6_Options (
-    ._uuid = 128'h100b2659_0ec0_4da7_9ec3_25997f92dc00,
-    .name = i"server_id",
-    .code = 2,
-    .__type = i"mac"
-).
-
-sb::Out_DHCPv6_Options (
-    ._uuid = 128'h53f49b50_db75_4b0d_83df_50d31009ca9c,
-    .name = i"ia_addr",
-    .code = 5,
-    .__type = i"ipv6"
-).
-
-sb::Out_DHCPv6_Options (
-    ._uuid = 128'he3619685_d4f7_42ad_936b_4f4440b7eeb4,
-    .name = i"dns_server",
-    .code = 23,
-    .__type = i"ipv6"
-).
-
-sb::Out_DHCPv6_Options (
-    ._uuid = 128'hcb8a4e7f_a312_4cb1_a846_e474d9f0c531,
-    .name = i"domain_search",
-    .code = 24,
-    .__type = i"str"
-).
-
-
-/*
- * DNS: copied from NB + datapaths column pointer to LS datapaths that use the record
- */
-
-function map_to_lowercase(m_in: Map<istring,istring>): Map<istring,istring> {
-    var m_out = map_empty();
-    for ((k, v) in m_in) {
-        m_out.insert(k.to_lowercase().intern(), v.to_lowercase().intern())
-    };
-    m_out
-}
-
-sb::Out_DNS(._uuid = hash128(nbdns._uuid),
-            .records = map_to_lowercase(nbdns.records),
-            .datapaths = datapaths,
-            .external_ids = nbdns.external_ids.insert_imm(i"dns_id", uuid2str(nbdns._uuid).intern())) :-
-    nb::DNS[nbdns],
-    LogicalSwitchDNS(ls_uuid, nbdns._uuid),
-    var datapaths = ls_uuid.group_by(nbdns).to_set().
-
-/*
- * RBAC_Permission: fixed
- */
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h7df3749a_1754_4a78_afa4_3abf526fe510,
-    .table = i"Chassis",
-    .authorization = set_singleton(i"name"),
-    .insert_delete = true,
-    .update = [i"nb_cfg", i"external_ids", i"encaps",
-               i"vtep_logical_switches", i"other_config",
-               i"transport_zones"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54,
-    .table = i"Chassis_Private",
-    .authorization = set_singleton(i"name"),
-    .insert_delete = true,
-    .update = [i"nb_cfg", i"nb_cfg_timestamp", i"chassis", i"external_ids"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h94bec860_431e_4d95_82e7_3b75d8997241,
-    .table = i"Encap",
-    .authorization = set_singleton(i"chassis_name"),
-    .insert_delete = true,
-    .update = [i"type", i"options", i"ip"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908,
-    .table = i"Port_Binding",
-    .authorization = set_singleton(i""),
-    .insert_delete = false,
-    .update = [i"chassis", i"encap", i"up", i"virtual_parent"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f,
-    .table = i"MAC_Binding",
-    .authorization = set_singleton(i""),
-    .insert_delete = true,
-    .update = [i"logical_port", i"ip", i"mac", i"datapath"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3,
-    .table = i"Service_Monitor",
-    .authorization = set_singleton(i""),
-    .insert_delete = false,
-    .update = set_singleton(i"status")
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h5256f48e_172c_4d85_8f04_e199fa817633,
-    .table = i"IGMP_Group",
-    .authorization = set_singleton(i""),
-    .insert_delete = true,
-    .update = [i"address", i"chassis", i"datapath", i"ports"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f,
-    .table = i"Controller_Event",
-    .authorization = set_singleton(i""),
-    .insert_delete = true,
-    .update = [i"chassis", i"event_info", i"event_type",
-               i"seq_num"].to_set()
-).
-
-sb::Out_RBAC_Permission (
-    ._uuid = 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126,
-    .table = i"FDB",
-    .authorization = set_singleton(i""),
-    .insert_delete = true,
-    .update = [i"dp_key", i"mac", i"port_key"].to_set()
-).
-
-/*
- * RBAC_Role: fixed
- */
-sb::Out_RBAC_Role (
-    ._uuid = 128'ha406b472_5de8_4456_9f38_bf344c911b22,
-    .name = i"ovn-controller",
-    .permissions = [
-        i"Chassis" -> 128'h7df3749a_1754_4a78_afa4_3abf526fe510,
-        i"Chassis_Private" -> 128'h07e623f7_137c_4a11_9084_3b3f89cb4a54,
-        i"Controller_Event" -> 128'h2e5cbf3d_26f6_4f8a_9926_d6f77f61654f,
-        i"Encap" -> 128'h94bec860_431e_4d95_82e7_3b75d8997241,
-        i"FDB" -> 128'hb70964fc_322f_4ae5_aee4_ff6afadcc126,
-        i"IGMP_Group" -> 128'h5256f48e_172c_4d85_8f04_e199fa817633,
-        i"Port_Binding" -> 128'hd8ceff1a_2b11_48bd_802f_4a991aa4e908,
-        i"MAC_Binding" -> 128'h6ffdc696_8bfb_4d82_b620_a00d39270b2f,
-        i"Service_Monitor"-> 128'h39231c7e_4bf1_41d0_ada4_1d8a319c0da3]
-
-).
-
-/* Output modified Logical_Switch_Port table with dynamic address updated */
-nb::Out_Logical_Switch_Port(._uuid = lsp._uuid,
-                            .tag = tag,
-                            .dynamic_addresses = dynamic_addresses,
-                            .up = Some{up}) :-
-    SwitchPortNewDynamicAddress(&SwitchPort{.lsp = lsp, .up = up}, opt_dyn_addr),
-    var dynamic_addresses = opt_dyn_addr.and_then(|a| Some{i"${a}"}),
-    SwitchPortNewDynamicTag(lsp._uuid, opt_tag),
-    var tag = match (opt_tag) {
-        None -> lsp.tag,
-        Some{t} -> Some{t}
-    }.
-
-relation LRPIPv6Prefix0(lrp_uuid: uuid, ipv6_prefix: istring)
-LRPIPv6Prefix0(lrp._uuid, ipv6_prefix.intern()) :-
-    lrp in &nb::Logical_Router_Port(),
-    lrp.options.get_bool_def(i"prefix", false),
-    sb::Port_Binding(.logical_port = lrp.name, .options = options),
-    Some{var ipv6_ra_pd_list} = options.get(i"ipv6_ra_pd_list"),
-    var parts = ipv6_ra_pd_list.split(","),
-    Some{var ipv6_prefix} = parts.nth(1).
-
-relation LRPIPv6Prefix(lrp_uuid: uuid, ipv6_prefix: Option<istring>)
-LRPIPv6Prefix(lrp_uuid, Some{ipv6_prefix}) :-
-    LRPIPv6Prefix0(lrp_uuid, ipv6_prefix).
-LRPIPv6Prefix(lrp_uuid, None) :-
-    &nb::Logical_Router_Port(._uuid = lrp_uuid),
-    not LRPIPv6Prefix0(lrp_uuid, _).
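For context on the `LRPIPv6Prefix0` rule in the removed code above: it recovers a delegated prefix from the Port_Binding option `ipv6_ra_pd_list`, treated as a comma-separated list whose second element (`parts.nth(1)`) is the prefix. A rough Python equivalent of that split-and-take step, under the same format assumption as the rule:

```python
def parse_ipv6_ra_pd_list(value):
    """Return the delegated IPv6 prefix, i.e. element 1 of the
    comma-separated option value, or None if it is absent --
    mirroring the DDlog parts.nth(1), which yields an Option."""
    parts = value.split(",")
    return parts[1] if len(parts) > 1 else None
```

Like `nth(1)` returning `None`, the sketch degrades gracefully when the option is malformed instead of raising.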
-
-nb::Out_Logical_Router_Port(._uuid = _uuid,
-                            .ipv6_prefix = to_set(ipv6_prefix)) :-
-    &nb::Logical_Router_Port(._uuid = _uuid, .name = name),
-    LRPIPv6Prefix(_uuid, ipv6_prefix).
-
-typedef Pipeline = Ingress | Egress
-
-typedef Stage = Stage {
-    pipeline : Pipeline,
-    table_id : bit<8>,
-    table_name : istring
-}
-
-/* Logical switch ingress stages. */
-function s_SWITCH_IN_PORT_SEC_L2(): Intern<Stage> { Stage{Ingress, 0, i"ls_in_port_sec_l2"}.intern() }
-function s_SWITCH_IN_PORT_SEC_IP(): Intern<Stage> { Stage{Ingress, 1, i"ls_in_port_sec_ip"}.intern() }
-function s_SWITCH_IN_PORT_SEC_ND(): Intern<Stage> { Stage{Ingress, 2, i"ls_in_port_sec_nd"}.intern() }
-function s_SWITCH_IN_LOOKUP_FDB(): Intern<Stage> { Stage{Ingress, 3, i"ls_in_lookup_fdb"}.intern() }
-function s_SWITCH_IN_PUT_FDB(): Intern<Stage> { Stage{Ingress, 4, i"ls_in_put_fdb"}.intern() }
-function s_SWITCH_IN_PRE_ACL(): Intern<Stage> { Stage{Ingress, 5, i"ls_in_pre_acl"}.intern() }
-function s_SWITCH_IN_PRE_LB(): Intern<Stage> { Stage{Ingress, 6, i"ls_in_pre_lb"}.intern() }
-function s_SWITCH_IN_PRE_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"ls_in_pre_stateful"}.intern() }
-function s_SWITCH_IN_ACL_HINT(): Intern<Stage> { Stage{Ingress, 8, i"ls_in_acl_hint"}.intern() }
-function s_SWITCH_IN_ACL(): Intern<Stage> { Stage{Ingress, 9, i"ls_in_acl"}.intern() }
-function s_SWITCH_IN_QOS_MARK(): Intern<Stage> { Stage{Ingress, 10, i"ls_in_qos_mark"}.intern() }
-function s_SWITCH_IN_QOS_METER(): Intern<Stage> { Stage{Ingress, 11, i"ls_in_qos_meter"}.intern() }
-function s_SWITCH_IN_STATEFUL(): Intern<Stage> { Stage{Ingress, 12, i"ls_in_stateful"}.intern() }
-function s_SWITCH_IN_PRE_HAIRPIN(): Intern<Stage> { Stage{Ingress, 13, i"ls_in_pre_hairpin"}.intern() }
-function s_SWITCH_IN_NAT_HAIRPIN(): Intern<Stage> { Stage{Ingress, 14, i"ls_in_nat_hairpin"}.intern() }
-function s_SWITCH_IN_HAIRPIN(): Intern<Stage> { Stage{Ingress, 15, i"ls_in_hairpin"}.intern() }
-function s_SWITCH_IN_ARP_ND_RSP(): Intern<Stage> { Stage{Ingress, 16, i"ls_in_arp_rsp"}.intern() }
-function s_SWITCH_IN_DHCP_OPTIONS(): Intern<Stage> { Stage{Ingress, 17, i"ls_in_dhcp_options"}.intern() }
-function s_SWITCH_IN_DHCP_RESPONSE(): Intern<Stage> { Stage{Ingress, 18, i"ls_in_dhcp_response"}.intern() }
-function s_SWITCH_IN_DNS_LOOKUP(): Intern<Stage> { Stage{Ingress, 19, i"ls_in_dns_lookup"}.intern() }
-function s_SWITCH_IN_DNS_RESPONSE(): Intern<Stage> { Stage{Ingress, 20, i"ls_in_dns_response"}.intern() }
-function s_SWITCH_IN_EXTERNAL_PORT(): Intern<Stage> { Stage{Ingress, 21, i"ls_in_external_port"}.intern() }
-function s_SWITCH_IN_L2_LKUP(): Intern<Stage> { Stage{Ingress, 22, i"ls_in_l2_lkup"}.intern() }
-function s_SWITCH_IN_L2_UNKNOWN(): Intern<Stage> { Stage{Ingress, 23, i"ls_in_l2_unknown"}.intern() }
-
-/* Logical switch egress stages. */
-function s_SWITCH_OUT_PRE_LB(): Intern<Stage> { Stage{ Egress, 0, i"ls_out_pre_lb"}.intern() }
-function s_SWITCH_OUT_PRE_ACL(): Intern<Stage> { Stage{ Egress, 1, i"ls_out_pre_acl"}.intern() }
-function s_SWITCH_OUT_PRE_STATEFUL(): Intern<Stage> { Stage{ Egress, 2, i"ls_out_pre_stateful"}.intern() }
-function s_SWITCH_OUT_ACL_HINT(): Intern<Stage> { Stage{ Egress, 3, i"ls_out_acl_hint"}.intern() }
-function s_SWITCH_OUT_ACL(): Intern<Stage> { Stage{ Egress, 4, i"ls_out_acl"}.intern() }
-function s_SWITCH_OUT_QOS_MARK(): Intern<Stage> { Stage{ Egress, 5, i"ls_out_qos_mark"}.intern() }
-function s_SWITCH_OUT_QOS_METER(): Intern<Stage> { Stage{ Egress, 6, i"ls_out_qos_meter"}.intern() }
-function s_SWITCH_OUT_STATEFUL(): Intern<Stage> { Stage{ Egress, 7, i"ls_out_stateful"}.intern() }
-function s_SWITCH_OUT_PORT_SEC_IP(): Intern<Stage> { Stage{ Egress, 8, i"ls_out_port_sec_ip"}.intern() }
-function s_SWITCH_OUT_PORT_SEC_L2(): Intern<Stage> { Stage{ Egress, 9, i"ls_out_port_sec_l2"}.intern() }
-
-/* Logical router ingress stages. */
-function s_ROUTER_IN_ADMISSION(): Intern<Stage> { Stage{Ingress, 0, i"lr_in_admission"}.intern() }
-function s_ROUTER_IN_LOOKUP_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 1, i"lr_in_lookup_neighbor"}.intern() }
-function s_ROUTER_IN_LEARN_NEIGHBOR(): Intern<Stage> { Stage{Ingress, 2, i"lr_in_learn_neighbor"}.intern() }
-function s_ROUTER_IN_IP_INPUT(): Intern<Stage> { Stage{Ingress, 3, i"lr_in_ip_input"}.intern() }
-function s_ROUTER_IN_UNSNAT(): Intern<Stage> { Stage{Ingress, 4, i"lr_in_unsnat"}.intern() }
-function s_ROUTER_IN_DEFRAG(): Intern<Stage> { Stage{Ingress, 5, i"lr_in_defrag"}.intern() }
-function s_ROUTER_IN_DNAT(): Intern<Stage> { Stage{Ingress, 6, i"lr_in_dnat"}.intern() }
-function s_ROUTER_IN_ECMP_STATEFUL(): Intern<Stage> { Stage{Ingress, 7, i"lr_in_ecmp_stateful"}.intern() }
-function s_ROUTER_IN_ND_RA_OPTIONS(): Intern<Stage> { Stage{Ingress, 8, i"lr_in_nd_ra_options"}.intern() }
-function s_ROUTER_IN_ND_RA_RESPONSE(): Intern<Stage> { Stage{Ingress, 9, i"lr_in_nd_ra_response"}.intern() }
-function s_ROUTER_IN_IP_ROUTING(): Intern<Stage> { Stage{Ingress, 10, i"lr_in_ip_routing"}.intern() }
-function s_ROUTER_IN_IP_ROUTING_ECMP(): Intern<Stage> { Stage{Ingress, 11, i"lr_in_ip_routing_ecmp"}.intern() }
-function s_ROUTER_IN_POLICY(): Intern<Stage> { Stage{Ingress, 12, i"lr_in_policy"}.intern() }
-function s_ROUTER_IN_POLICY_ECMP(): Intern<Stage> { Stage{Ingress, 13, i"lr_in_policy_ecmp"}.intern() }
-function s_ROUTER_IN_ARP_RESOLVE(): Intern<Stage> { Stage{Ingress, 14, i"lr_in_arp_resolve"}.intern() }
-function s_ROUTER_IN_CHK_PKT_LEN(): Intern<Stage> { Stage{Ingress, 15, i"lr_in_chk_pkt_len"}.intern() }
-function s_ROUTER_IN_LARGER_PKTS(): Intern<Stage> { Stage{Ingress, 16, i"lr_in_larger_pkts"}.intern() }
-function s_ROUTER_IN_GW_REDIRECT(): Intern<Stage> { Stage{Ingress, 17, i"lr_in_gw_redirect"}.intern() }
-function s_ROUTER_IN_ARP_REQUEST(): Intern<Stage> { Stage{Ingress, 18, i"lr_in_arp_request"}.intern() }
-
-/* Logical router egress stages. */
-function s_ROUTER_OUT_UNDNAT(): Intern<Stage> { Stage{ Egress, 0, i"lr_out_undnat"}.intern() }
-function s_ROUTER_OUT_POST_UNDNAT(): Intern<Stage> { Stage{ Egress, 1, i"lr_out_post_undnat"}.intern() }
-function s_ROUTER_OUT_SNAT(): Intern<Stage> { Stage{ Egress, 2, i"lr_out_snat"}.intern() }
-function s_ROUTER_OUT_EGR_LOOP(): Intern<Stage> { Stage{ Egress, 3, i"lr_out_egr_loop"}.intern() }
-function s_ROUTER_OUT_DELIVERY(): Intern<Stage> { Stage{ Egress, 4, i"lr_out_delivery"}.intern() }
-
-/*
- * OVS register usage:
- *
- * Logical Switch pipeline:
- * +----+----------------------------------------------+---+------------------+
- * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                  |
- * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |   |                  |
- * |    |     REGBIT_ACL_LABEL                         | X |                  |
- * +----+----------------------------------------------+ X |                  |
- * | R1 |         ORIG_DIP_IPV4 (>= IN_STATEFUL)       | R |                  |
- * +----+----------------------------------------------+ E |                  |
- * | R2 |         ORIG_TP_DPORT (>= IN_STATEFUL)       | G |                  |
- * +----+----------------------------------------------+ 0 |                  |
- * | R3 |                  ACL_LABEL                   |   |                  |
- * +----+----------------------------------------------+---+------------------+
- * | R4 |                   UNUSED                     |   |                  |
- * +----+----------------------------------------------+ X |   ORIG_DIP_IPV6  |
- * | R5 |                   UNUSED                     | X | (>= IN_STATEFUL) |
- * +----+----------------------------------------------+ R |                  |
- * | R6 |                   UNUSED                     | E |                  |
- * +----+----------------------------------------------+ G |                  |
- * | R7 |                   UNUSED                     | 1 |                  |
- * +----+----------------------------------------------+---+------------------+
- * | R8 |                   UNUSED                     |
- * +----+----------------------------------------------+
- * | R9 |                   UNUSED                     |
- * +----+----------------------------------------------+
- *
- * Logical Router pipeline:
- * +-----+--------------------------+---+-----------------+---+---------------+
- * | R0  | REGBIT_ND_RA_OPTS_RESULT |   |                 |   |               |
- * |     |   (= IN_ND_RA_OPTIONS)   | X |                 |   |               |
- * |     |      NEXT_HOP_IPV4       | R |                 |   |               |
- * |     |      (>= IP_INPUT)       | E | INPORT_ETH_ADDR | X |               |
- * +-----+--------------------------+ G |  (< IP_INPUT)   | X |               |
- * | R1  |  SRC_IPV4 for ARP-REQ    | 0 |                 | R |               |
- * |     |      (>= IP_INPUT)       |   |                 | E | NEXT_HOP_IPV6 |
- * +-----+--------------------------+---+-----------------+ G |  (>= DEFRAG)  |
- * | R2  |         UNUSED           | X |                 | 0 |               |
- * |     |                          | R |                 |   |               |
- * +-----+--------------------------+ E |     UNUSED      |   |               |
- * | R3  |         UNUSED           | G |                 |   |               |
- * |     |                          | 1 |                 |   |               |
- * +-----+--------------------------+---+-----------------+---+---------------+
- * | R4  |         UNUSED           | X |                 |   |               |
- * |     |                          | R |                 |   |               |
- * +-----+--------------------------+ E |     UNUSED      | X |               |
- * | R5  |         UNUSED           | G |                 | X |               |
- * |     |                          | 2 |                 | R |SRC_IPV6 for NS|
- * +-----+--------------------------+---+-----------------+ E |      (>=      |
- * | R6  |         UNUSED           | X |                 | G | IN_IP_ROUTING)|
- * |     |                          | R |                 | 1 |               |
- * +-----+--------------------------+ E |     UNUSED      |   |               |
- * | R7  |         UNUSED           | G |                 |   |               |
- * |     |                          | 3 |                 |   |               |
- * +-----+--------------------------+---+-----------------+---+---------------+
- * | R8  |     ECMP_GROUP_ID        |   |                 |
- * |     |     ECMP_MEMBER_ID       | X |                 |
- * +-----+--------------------------+ R |                 |
- * |     | REGBIT_{                 | E |                 |
- * |     |   EGRESS_LOOPBACK/       | G |     UNUSED      |
- * | R9  |   PKT_LARGER/            | 4 |                 |
- * |     |   LOOKUP_NEIGHBOR_RESULT/|   |                 |
- * |     |   SKIP_LOOKUP_NEIGHBOR}  |   |                 |
- * |     |                          |   |                 |
- * |     | REG_ORIG_TP_DPORT_ROUTER |   |                 |
- * |     |                          |   |                 |
- * +-----+--------------------------+---+-----------------+
- *
- */
-
-/* Register definitions specific to routers. */
-function rEG_NEXT_HOP(): istring = i"reg0" /* reg0 for IPv4, xxreg0 for IPv6 */
-function rEG_SRC(): istring = i"reg1" /* reg1 for IPv4, xxreg1 for IPv6 */
-
-/* Register definitions specific to switches. */
-function rEGBIT_CONNTRACK_DEFRAG() : istring = i"reg0[0]"
-function rEGBIT_CONNTRACK_COMMIT() : istring = i"reg0[1]"
-function rEGBIT_CONNTRACK_NAT() : istring = i"reg0[2]"
-function rEGBIT_DHCP_OPTS_RESULT() : istring = i"reg0[3]"
-function rEGBIT_DNS_LOOKUP_RESULT(): istring = i"reg0[4]"
-function rEGBIT_ND_RA_OPTS_RESULT(): istring = i"reg0[5]"
-function rEGBIT_HAIRPIN() : istring = i"reg0[6]"
-function rEGBIT_ACL_HINT_ALLOW_NEW(): istring = i"reg0[7]"
-function rEGBIT_ACL_HINT_ALLOW() : istring = i"reg0[8]"
-function rEGBIT_ACL_HINT_DROP() : istring = i"reg0[9]"
-function rEGBIT_ACL_HINT_BLOCK() : istring = i"reg0[10]"
-function rEGBIT_LKUP_FDB() : istring = i"reg0[11]"
-function rEGBIT_HAIRPIN_REPLY() : istring = i"reg0[12]"
-function rEGBIT_ACL_LABEL() : istring = i"reg0[13]"
-
-function rEG_ORIG_DIP_IPV4() : istring = i"reg1"
-function rEG_ORIG_DIP_IPV6() : istring = i"xxreg1"
-function rEG_ORIG_TP_DPORT() : istring = i"reg2[0..15]"
-
-/* Register definitions for switches and routers. */
-
-/* Indicate that this packet has been recirculated using egress
- * loopback. This allows certain checks to be bypassed, such as a
-* logical router dropping packets with source IP address equals
-* one of the logical router's own IP addresses. */
-function rEGBIT_EGRESS_LOOPBACK() : istring = i"reg9[0]"
-/* Register to store the result of check_pkt_larger action. */
-function rEGBIT_PKT_LARGER() : istring = i"reg9[1]"
-function rEGBIT_LOOKUP_NEIGHBOR_RESULT() : istring = i"reg9[2]"
-function rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() : istring = i"reg9[3]"
-
-/* Register to store the eth address associated to a router port for packets
- * received in S_ROUTER_IN_ADMISSION.
- */
-function rEG_INPORT_ETH_ADDR() : istring = i"xreg0[0..47]"
-
-/* Register for ECMP bucket selection. */
-function rEG_ECMP_GROUP_ID() : istring = i"reg8[0..15]"
-function rEG_ECMP_MEMBER_ID() : istring = i"reg8[16..31]"
-
-function rEG_ORIG_TP_DPORT_ROUTER() : string = "reg9[16..31]"
-
-/* Register used for setting a label for ACLs in a Logical Switch. */
-function rEG_LABEL() : istring = i"reg3"
-
-function fLAGBIT_NOT_VXLAN() : istring = i"flags[1] == 0"
-
-function mFF_N_LOG_REGS() : bit<32> = 10
-
-/*
- * Generating sb::Logical_Flow and sb::Logical_DP_Group.
- *
- * Some logical flows occur in multiple logical datapaths. These can
- * be represented two ways: either as multiple Logical_Flow records
- * (each with logical_datapath set appropriately) or as a single
- * Logical_Flow record that points to a Logical_DP_Group record that
- * lists all the datapaths it's in. (It would be possible to mix or
- * duplicate these methods, but we don't do that.) We have to support
- * both:
- *
- *   - There's a setting "use_logical_dp_groups" that globally
- *     enables or disables this feature.
- */
-
-relation Flow(
-    logical_datapath: uuid,
-    stage: Intern<Stage>,
-    priority: integer,
-    __match: istring,
-    actions: istring,
-    io_port: Option<istring>,
-    controller_meter: Option<istring>,
-    stage_hint: bit<32>
-)
-
-function stage_hint(_uuid: uuid): bit<32> {
-    _uuid[127:96]
-}
-
-/* If this option is 'true' northd will combine logical flows that differ by
- * logical datapath only by creating a datapath group. */
-relation UseLogicalDatapathGroups[bool]
-UseLogicalDatapathGroups[use_logical_dp_groups] :-
-    nb in nb::NB_Global(),
-    var use_logical_dp_groups = nb.options.get_bool_def(i"use_logical_dp_groups", true).
-UseLogicalDatapathGroups[false] :-
-    Unit(),
-    not nb in nb::NB_Global().
-
-relation AggregatedFlow (
-    logical_datapaths: Set<uuid>,
-    stage: Intern<Stage>,
-    priority: integer,
-    __match: istring,
-    actions: istring,
-    io_port: Option<istring>,
-    controller_meter: Option<istring>,
-    stage_hint: bit<32>
-)
-function make_flow_tags(io_port: Option<istring>): Map<istring,istring> {
-    match (io_port) {
-        None -> map_empty(),
-        Some{s} -> [ i"in_out_port" -> s ]
-    }
-}
-function make_flow_external_ids(stage_hint: bit<32>, stage: Intern<Stage>): Map<istring,istring> {
-    if (stage_hint == 0) {
-        [i"stage-name" -> stage.table_name]
-    } else {
-        [i"stage-name" -> stage.table_name,
-         i"stage-hint" -> i"${hex(stage_hint)}"]
-    }
-}
-AggregatedFlow(.logical_datapaths = g.to_set(),
-               .stage = stage,
-               .priority = priority,
-               .__match = __match,
-               .actions = actions,
-               .io_port = io_port,
-               .controller_meter = controller_meter,
-               .stage_hint = stage_hint) :-
-    UseLogicalDatapathGroups[true],
-    Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint),
-    var g = logical_datapath.group_by((stage, priority, __match, actions, io_port, controller_meter, stage_hint)).
-
-
-AggregatedFlow(.logical_datapaths = set_singleton(logical_datapath),
-               .stage = stage,
-               .priority = priority,
-               .__match = __match,
-               .actions = actions,
-               .io_port = io_port,
-               .controller_meter = controller_meter,
-               .stage_hint = stage_hint) :-
-    UseLogicalDatapathGroups[false],
-    Flow(logical_datapath, stage, priority, __match, actions, io_port, controller_meter, stage_hint).
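The two removed `AggregatedFlow` rules above implement the `use_logical_dp_groups` trade-off: when enabled, flows identical in everything except their logical datapath collapse into one record carrying the set of datapaths; when disabled, each flow keeps a singleton set. A small Python sketch of that grouping (plain tuples stand in for the DDlog records):

```python
from collections import defaultdict

def aggregate_flows(flows, use_dp_groups):
    """flows: iterable of (datapath, key) where 'key' is everything that is
    not the datapath (stage, priority, match, actions, ...).  Returns a list
    of (datapath_set, key), grouped by key when dp-groups are enabled."""
    if not use_dp_groups:
        return [({dp}, key) for dp, key in flows]
    groups = defaultdict(set)
    for dp, key in flows:
        groups[key].add(dp)
    return [(dps, key) for key, dps in groups.items()]

flows = [("dp1", ("ls_in_acl", 100, "ip", "next;")),
         ("dp2", ("ls_in_acl", 100, "ip", "next;"))]
```

The grouped form trades one extra indirection (`Logical_DP_Group`) for far fewer `Logical_Flow` rows when many switches share identical flows.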
-
-
-function to_istring(pipeline: Pipeline): istring {
-    if (pipeline == Ingress) {
-        i"ingress"
-    } else {
-        i"egress"
-    }
-}
-
-for (f in AggregatedFlow()) {
-    if (f.logical_datapaths.size() == 1) {
-        Some{var dp} = f.logical_datapaths.nth(0) in
-        sb::Out_Logical_Flow(
-            ._uuid = hash128((dp, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)),
-            .logical_datapath = Some{dp},
-            .logical_dp_group = None,
-            .pipeline = f.stage.pipeline.to_istring(),
-            .table_id = f.stage.table_id as integer,
-            .priority = f.priority,
-            .controller_meter = f.controller_meter,
-            .__match = f.__match,
-            .actions = f.actions,
-            .tags = make_flow_tags(f.io_port),
-            .external_ids = make_flow_external_ids(f.stage_hint, f.stage))
-    } else {
-        var group_uuid = hash128(f.logical_datapaths) in {
-            sb::Out_Logical_Flow(
-                ._uuid = hash128((group_uuid, f.stage, f.priority, f.__match, f.actions, f.controller_meter, f.io_port, f.stage_hint)),
-                .logical_datapath = None,
-                .logical_dp_group = Some{group_uuid},
-                .pipeline = f.stage.pipeline.to_istring(),
-                .table_id = f.stage.table_id as integer,
-                .priority = f.priority,
-                .controller_meter = f.controller_meter,
-                .__match = f.__match,
-                .actions = f.actions,
-                .tags = make_flow_tags(f.io_port),
-                .external_ids = make_flow_external_ids(f.stage_hint, f.stage));
-            sb::Out_Logical_DP_Group(._uuid = group_uuid, .datapaths = f.logical_datapaths)
-        }
-    }
-}
-
-/* Logical flows for forwarding groups. */
-Flow(.logical_datapath = sw._uuid,
-     .stage = s_SWITCH_IN_ARP_ND_RSP(),
-     .priority = 50,
-     .__match = __match,
-     .actions = actions,
-     .stage_hint = stage_hint(fg_uuid),
-     .io_port = None,
-     .controller_meter = None) :-
-    sw in &Switch(),
-    &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups),
-    var fg_uuid = FlatMap(forwarding_groups),
-    fg in nb::Forwarding_Group(._uuid = fg_uuid),
-    not fg.child_port.is_empty(),
-    var __match = i"arp.tpa == ${fg.vip} && arp.op == 1",
-    var actions = i"eth.dst = eth.src; "
-                   "eth.src = ${fg.vmac}; "
-                   "arp.op = 2; /* ARP reply */ "
-                   "arp.tha = arp.sha; "
-                   "arp.sha = ${fg.vmac}; "
-                   "arp.tpa = arp.spa; "
-                   "arp.spa = ${fg.vip}; "
-                   "outport = inport; "
-                   "flags.loopback = 1; "
-                   "output;".
-
-function escape_child_ports(child_port: Set<istring>): string {
-    var escaped = vec_with_capacity(child_port.size());
-    for (s in child_port) {
-        escaped.push(json_escape(s))
-    };
-    escaped.join(",")
-}
-Flow(.logical_datapath = sw._uuid,
-     .stage = s_SWITCH_IN_L2_LKUP(),
-     .priority = 50,
-     .__match = __match,
-     .actions = actions.intern(),
-     .stage_hint = 0,
-     .io_port = None,
-     .controller_meter = None) :-
-    sw in &Switch(),
-    &nb::Logical_Switch(._uuid = sw._uuid, .forwarding_groups = forwarding_groups),
-    var fg_uuid = FlatMap(forwarding_groups),
-    fg in nb::Forwarding_Group(._uuid = fg_uuid),
-    not fg.child_port.is_empty(),
-    var __match = i"eth.dst == ${fg.vmac}",
-    var actions = "fwd_group(" ++
-                  if (fg.liveness) { "liveness=\"true\"," } else { "" } ++
-                  "childports=" ++ escape_child_ports(fg.child_port) ++ ");".
-
-/* Logical switch ingress table PORT_SEC_L2: admission control framework
- * (priority 100) */
-for (sw in &Switch()) {
-    if (not sw.is_vlan_transparent) {
-        /* Block logical VLANs. */
-        Flow(.logical_datapath = sw._uuid,
-             .stage = s_SWITCH_IN_PORT_SEC_L2(),
-             .priority = 100,
-             .__match = i"vlan.present",
-             .actions = i"drop;",
-             .stage_hint = 0 /*TODO: check*/,
-             .io_port = None,
-             .controller_meter = None)
-    };
-
-    /* Broadcast/multicast source address is invalid */
-    Flow(.logical_datapath = sw._uuid,
-         .stage = s_SWITCH_IN_PORT_SEC_L2(),
-         .priority = 100,
-         .__match = i"eth.src[40]",
-         .actions = i"drop;",
-         .stage_hint = 0 /*TODO: check*/,
-         .io_port = None,
-         .controller_meter = None)
-    /* Port security flows have priority 50 (see below) and will continue to the next table
-       if packet source is acceptable. */
-}
-
-// space-separated set of strings
-function join(strings: Set<string>, sep: string): string {
-    strings.to_vec().join(sep)
-}
-
-function build_port_security_ipv6_flow(
-    pipeline: Pipeline,
-    ea: eth_addr,
-    ipv6_addrs: Vec<ipv6_netaddr>): string =
-{
-    var ip6_addrs = vec_empty();
-
-    /* Allow link-local address. */
-    ip6_addrs.push(ea.to_ipv6_lla().string_mapped());
-
-    /* Allow ip6.dst=ff00::/8 for multicast packets */
-    if (pipeline == Egress) {
-        ip6_addrs.push("ff00::/8")
-    };
-    for (addr in ipv6_addrs) {
-        ip6_addrs.push(addr.match_network())
-    };
-
-    var dir = if (pipeline == Ingress) { "src" } else { "dst" };
-    " && ip6.${dir} == {" ++ ip6_addrs.join(", ") ++ "}"
-}
-
-function build_port_security_ipv6_nd_flow(
-    ea: eth_addr,
-    ipv6_addrs: Vec<ipv6_netaddr>): string =
-{
-    var __match = " && ip6 && nd && ((nd.sll == ${eth_addr_zero()} || "
-                  "nd.sll == ${ea}) || ((nd.tll == ${eth_addr_zero()} || "
-                  "nd.tll == ${ea})";
-    if (ipv6_addrs.is_empty()) {
-        __match ++ "))"
-    } else {
-        __match = __match ++ " && (nd.target == ${ea.to_ipv6_lla()}";
-
-        for(addr in ipv6_addrs) {
-            __match = __match ++ " || nd.target == ${addr.match_network()}"
-        };
-        __match ++ ")))"
-    }
-}
-
-/* Pre-ACL */
-for (&Switch(._uuid =ls_uuid)) {
-    /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
-     * allowed by default. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 110,
-         .__match = i"eth.dst == $svc_monitor_mac",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 110,
-         .__match = i"eth.src == $svc_monitor_mac",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-/* stateless filters always take precedence over stateful ACLs. */
-for (&SwitchACL(.sw = sw@&Switch{._uuid = ls_uuid}, .acl = acl, .has_fair_meter = fair_meter)) {
-    if (sw.has_stateful_acl) {
-        if (acl.action == i"allow-stateless") {
-            if (acl.direction == i"from-lport") {
-                Flow(.logical_datapath = ls_uuid,
-                     .stage = s_SWITCH_IN_PRE_ACL(),
-                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
-                     .__match = acl.__match,
-                     .actions = i"next;",
-                     .stage_hint = stage_hint(acl._uuid),
-                     .io_port = None,
-                     .controller_meter = None)
-            } else {
-                Flow(.logical_datapath = ls_uuid,
-                     .stage = s_SWITCH_OUT_PRE_ACL(),
-                     .priority = acl.priority + oVN_ACL_PRI_OFFSET(),
-                     .__match = acl.__match,
-                     .actions = i"next;",
-                     .stage_hint = stage_hint(acl._uuid),
-                     .io_port = None,
-                     .controller_meter = None)
-            }
-        }
-    }
-}
-
-/* If there are any stateful ACL rules in this datapath, we must
- * send all IP packets through the conntrack action, which handles
- * defragmentation, in order to match L4 headers. */
-
-for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"router"},
-                 .json_name = lsp_name,
-                 .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) {
-    /* Can't use ct() for router ports. Consider the
-     * following configuration: lp1(10.0.0.2) on
-     * hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
-     * ping from lp1 to lp2, First, the response will go
-     * through ct() with a zone for lp2 in the ls2 ingress
-     * pipeline on hostB. That ct zone knows about this
-     * connection. Next, it goes through ct() with the zone
-     * for the router port in the egress pipeline of ls2 on
-     * hostB. This zone does not know about the connection,
-     * as the icmp request went through the logical router
-     * on hostA, not hostB. This would only work with
-     * distributed conntrack state across all chassis. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 110,
-         .__match = i"ip && inport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 110,
-         .__match = i"ip && outport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None)
-}
-
-for (&SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"localnet"},
-                 .json_name = lsp_name,
-                 .sw = &Switch{._uuid = ls_uuid, .has_stateful_acl = true})) {
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 110,
-         .__match = i"ip && inport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 110,
-         .__match = i"ip && outport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None)
-}
-
-for (&Switch(._uuid = ls_uuid, .has_stateful_acl = true)) {
-    /* Ingress and Egress Pre-ACL Table (Priority 110).
-     *
-     * Not to do conntrack on ND and ICMP destination
-     * unreachable packets. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 110,
-         .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || "
-                     "(udp && udp.src == 546 && udp.dst == 547)",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 110,
-         .__match = i"nd || nd_rs || nd_ra || mldv1 || mldv2 || "
-                     "(udp && udp.src == 546 && udp.dst == 547)",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-
-    /* Ingress and Egress Pre-ACL Table (Priority 100).
-     *
-     * Regardless of whether the ACL is "from-lport" or "to-lport",
-     * we need rules in both the ingress and egress table, because
-     * the return traffic needs to be followed.
-     *
-     * 'REGBIT_CONNTRACK_DEFRAG' is set to let the pre-stateful table send
-     * it to conntrack for tracking and defragmentation. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_ACL(),
-         .priority = 100,
-         .__match = i"ip",
-         .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_ACL(),
-         .priority = 100,
-         .__match = i"ip",
-         .actions = i"${rEGBIT_CONNTRACK_DEFRAG()} = 1; next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-/* Pre-LB */
-for (&Switch(._uuid = ls_uuid)) {
-    /* Do not send ND packets to conntrack */
-    var __match = i"nd || nd_rs || nd_ra || mldv1 || mldv2" in {
-        Flow(.logical_datapath = ls_uuid,
-             .stage = s_SWITCH_IN_PRE_LB(),
-             .priority = 110,
-             .__match = __match,
-             .actions = i"next;",
-             .stage_hint = 0,
-             .io_port = None,
-             .controller_meter = None);
-        Flow(.logical_datapath = ls_uuid,
-             .stage = s_SWITCH_OUT_PRE_LB(),
-             .priority = 110,
-             .__match = __match,
-             .actions = i"next;",
-             .stage_hint = 0,
-             .io_port = None,
-             .controller_meter = None)
-    };
-
-    /* Do not send service monitor packets to conntrack. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_LB(),
-         .priority = 110,
-         .__match = i"eth.dst == $svc_monitor_mac",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_LB(),
-         .priority = 110,
-         .__match = i"eth.src == $svc_monitor_mac",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-
-    /* Allow all packets to go to next tables by default. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_LB(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_LB(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-for (&SwitchPort(.lsp = lsp, .json_name = lsp_name, .sw = &Switch{._uuid = ls_uuid}))
-if (lsp.__type == i"router" or lsp.__type == i"localnet") {
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_LB(),
-         .priority = 110,
-         .__match = i"ip && inport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_LB(),
-         .priority = 110,
-         .__match = i"ip && outport == ${lsp_name}",
-         .actions = i"next;",
-         .stage_hint = stage_hint(lsp._uuid),
-         .io_port = Some{lsp.name},
-         .controller_meter = None)
-}
-
-/* Empty LoadBalancer Controller event */
-function build_empty_lb_event_flow(key: istring, lb: Intern<nb::Load_Balancer>): Option<(istring, istring)> {
-    (var ip, var port) = match (ip_address_and_port_from_lb_key(key.ival())) {
-        Some{(ip, port)} -> (ip, port),
-        _ -> return None
-    };
-
-    var protocol = if (lb.protocol == Some{i"tcp"}) { "tcp" } else { "udp" };
-    var vip = match (port) {
-        0 -> "${ip}",
-        _ -> "${ip.to_bracketed_string()}:${port}"
-    };
-
-    var __match = vec_with_capacity(2);
-    __match.push("${ip.ipX()}.dst == ${ip}");
-    if (port != 0) {
-        __match.push("${protocol}.dst == ${port}");
-    };
-
-    var action = i"trigger_event("
-                  "event = \"empty_lb_backends\", "
-                  "vip = \"${vip}\", "
-                  "protocol = \"${protocol}\", "
-                  "load_balancer = \"${uuid2str(lb._uuid)}\");";
-
-    Some{(__match.join(" && ").intern(), action)}
-}
-
-/* Contains the load balancers for which an event should be sent each time it
- * runs out of backends.
- *
- * The preferred way to do this by setting an individual Load_Balancer's
- * options:event=true.
- *
- * The deprecated way is to set nb::NB_Global options:controller_event=true,
- * which enables events for every load balancer.
- */
-relation LoadBalancerEmptyEvents(lb_uuid: uuid)
-LoadBalancerEmptyEvents(lb_uuid) :-
-    nb::NB_Global(.options = global_options),
-    var global_events = global_options.get_bool_def(i"controller_event", false),
-    &nb::Load_Balancer(._uuid = lb_uuid, .options = local_options),
-    var local_events = local_options.get_bool_def(i"event", false),
-    global_events or local_events.
-
-Flow(.logical_datapath = sw._uuid,
-     .stage = s_SWITCH_IN_PRE_LB(),
-     .priority = 130,
-     .__match = __match,
-     .actions = __action,
-     .io_port = None,
-     .controller_meter = sw.copp.get(cOPP_EVENT_ELB()),
-     .stage_hint = stage_hint(lb._uuid)) :-
-    SwitchLBVIP(.sw_uuid = sw_uuid, .lb = lb, .vip = vip, .backends = backends),
-    LoadBalancerEmptyEvents(lb._uuid),
-    not lb.options.get_bool_def(i"reject", false),
-    sw in &Switch(._uuid = sw_uuid),
-    backends == i"",
-    Some {(var __match, var __action)} = build_empty_lb_event_flow(vip, lb).
-
-/* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send
- * packet to conntrack for defragmentation.
- *
- * Send all the packets to conntrack in the ingress pipeline if the
- * logical switch has a load balancer with VIP configured. Earlier
- * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress pipeline
- * if the IP destination matches the VIP. But this causes few issues when
- * a logical switch has no ACLs configured with allow-related.
- * To understand the issue, lets a take a TCP load balancer -
- * 10.0.0.10:80=10.0.0.3:80.
- * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection with
- * the VIP - 10.0.0.10, then the packet in the ingress pipeline of 'p1'
- * is sent to the p1's conntrack zone id and the packet is load balanced
- * to the backend - 10.0.0.3. For the reply packet from the backend lport,
- * it is not sent to the conntrack of backend lport's zone id. This is fine
- * as long as the packet is valid. Suppose the backend lport sends an
- * invalid TCP packet (like incorrect sequence number), the packet gets
- * delivered to the lport 'p1' without unDNATing the packet to the
- * VIP - 10.0.0.10. And this causes the connection to be reset by the
- * lport p1's VIF.
- *
- * We can't fix this issue by adding a logical flow to drop ct.inv packets
- * in the egress pipeline since it will drop all other connections not
- * destined to the load balancers.
- *
- * To fix this issue, we send all the packets to the conntrack in the
- * ingress pipeline if a load balancer is configured. We can now
- * add a lflow to drop ct.inv packets.
- */
-for (sw in &Switch(.has_lb_vip = true)) {
-    Flow(.logical_datapath = sw._uuid,
-         .stage = s_SWITCH_IN_PRE_LB(),
-         .priority = 100,
-         .__match = i"ip",
-         .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = sw._uuid,
-         .stage = s_SWITCH_OUT_PRE_LB(),
-         .priority = 100,
-         .__match = i"ip",
-         .actions = i"${rEGBIT_CONNTRACK_NAT()} = 1; next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-/* Pre-stateful */
-relation LbProtocol[string]
-LbProtocol["tcp"].
-LbProtocol["udp"].
-LbProtocol["sctp"].
-for (&Switch(._uuid = ls_uuid)) {
-    /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
-     * allowed by default. */
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_STATEFUL(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_STATEFUL(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-
-    /* If rEGBIT_CONNTRACK_NAT() is set as 1, then packets should just be sent
-     * through nat (without committing).
-     *
-     * rEGBIT_CONNTRACK_COMMIT() is set for new connections and
-     * rEGBIT_CONNTRACK_NAT() is set for established connections. So they
-     * don't overlap.
-     *
-     * In the ingress pipeline, also store the original destination IP and
-     * transport port to be used when detecting hairpin packets.
-     */
-    for (LbProtocol[protocol]) {
-        Flow(.logical_datapath = ls_uuid,
-             .stage = s_SWITCH_IN_PRE_STATEFUL(),
-             .priority = 120,
-             .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip4 && ${protocol}",
-             .actions = i"${rEG_ORIG_DIP_IPV4()} = ip4.dst; "
-                         "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;",
-             .stage_hint = 0,
-             .io_port = None,
-             .controller_meter = None);
-        Flow(.logical_datapath = ls_uuid,
-             .stage = s_SWITCH_IN_PRE_STATEFUL(),
-             .priority = 120,
-             .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1 && ip6 && ${protocol}",
-             .actions = i"${rEG_ORIG_DIP_IPV6()} = ip6.dst; "
-                         "${rEG_ORIG_TP_DPORT()} = ${protocol}.dst; ct_lb;",
-             .stage_hint = 0,
-             .io_port = None,
-             .controller_meter = None)
-    };
-
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_IN_PRE_STATEFUL(),
-         .priority = 110,
-         .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1",
-         .actions = i"ct_lb;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
-
-    Flow(.logical_datapath = ls_uuid,
-         .stage = s_SWITCH_OUT_PRE_STATEFUL(),
-         .priority = 110,
-         .__match = i"${rEGBIT_CONNTRACK_NAT()} == 1",
-         .actions = i"ct_lb;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None);
- - /* If rEGBIT_CONNTRACK_DEFRAG() is set as 1, then the packets should be - * sent to conntrack for tracking and defragmentation. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_PRE_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", - .actions = i"ct_next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_PRE_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_DEFRAG()} == 1", - .actions = i"ct_next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -function acl_log_meter_name(meter_name: istring, acl_uuid: uuid): string = -{ - "${meter_name}__${uuid2str(acl_uuid)}" -} - -function build_acl_log(acl: Intern<nb::ACL>, fair_meter: bool): string = -{ - if (not acl.log) { - "" - } else { - var strs = vec_empty(); - match (acl.name) { - None -> (), - Some{name} -> strs.push("name=${json_escape(name)}") - }; - /* If a severity level isn't specified, default to "info". */ - match (acl.severity) { - None -> strs.push("severity=info"), - Some{severity} -> strs.push("severity=${severity}") - }; - match (acl.action.ival()) { - "drop" -> { - strs.push("verdict=drop") - }, - "reject" -> { - strs.push("verdict=reject") - }, - "allow" -> { - strs.push("verdict=allow") - }, - "allow-related" -> { - strs.push("verdict=allow") - }, - "allow-stateless" -> { - strs.push("verdict=allow") - }, - _ -> () - }; - match (acl.meter) { - Some{meter} -> { - var name = match (fair_meter) { - true -> acl_log_meter_name(meter, acl._uuid), - false -> meter.ival() - }; - strs.push("meter=${json_escape(name)}") - }, - None -> () - }; - "log(${strs.join(\", \")}); " - } -} - -/* Due to the various hard-coded priorities needed to implement ACLs, the - * northbound database supports a smaller range of ACL priorities than - * are available to logical flows. This value is added to an ACL - * priority to determine the ACL's logical flow priority. 
*/ -function oVN_ACL_PRI_OFFSET(): integer = 1000 - -/* Intermediate relation that stores reject ACLs. - * The following rules generate logical flows for these ACLs. - */ -relation Reject( - lsuuid: uuid, - pipeline: Pipeline, - stage: Intern<Stage>, - acl: Intern<nb::ACL>, - fair_meter: bool, - controller_meter: Option<istring>, - extra_match: istring, - extra_actions: istring) - -/* build_reject_acl_rules() */ -function next_to_stage(stage: Intern<Stage>): string { - var pipeline = match (stage.pipeline) { - Ingress -> "ingress", - Egress -> "egress" - }; - "next(pipeline=${pipeline},table=${stage.table_id})" -} -for (Reject(lsuuid, pipeline, stage, acl, fair_meter, controller_meter, - extra_match_, extra_actions_)) { - var extra_match = if (extra_match_ == i"") { "" } else { "(${extra_match_}) && " } in - var extra_actions = if (extra_actions_ == i"") { "" } else { "${extra_actions_} " } in - var next_stage = match (pipeline) { - Ingress -> s_SWITCH_OUT_QOS_MARK(), - Egress -> s_SWITCH_IN_L2_LKUP() - } in - var acl_log = build_acl_log(acl, fair_meter) in - var __match = extra_match ++ acl.__match in - var actions = acl_log ++ extra_actions ++ "reg0 = 0; " - "reject { " - "/* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ " - "outport <-> inport; ${next_to_stage(next_stage)}; };" in - Flow(.logical_datapath = lsuuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = __match.intern(), - .actions = actions.intern(), - .io_port = None, - .controller_meter = controller_meter, - .stage_hint = stage_hint(acl._uuid)) -} - -/* build_acls */ -for (UseCtInvMatch[use_ct_inv_match]) { - (var ct_inv_or, var and_not_ct_inv) = match (use_ct_inv_match) { - true -> ("ct.inv || ", "&& !ct.inv "), - false -> ("", ""), - } in - for (sw in &Switch(._uuid = ls_uuid)) - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in - { - /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by - * default. 
If the logical switch has no ACLs and no load balancers, - * then add a 65535-priority flow to advance the packet to the next - * stage. - * - * A related rule at priority 1 is added below if there - * are any stateful ACLs in this datapath. */ - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } - in - { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = priority, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = priority, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - if (has_stateful) { - /* Ingress and Egress ACL Table (Priority 1). - * - * By default, traffic is allowed. This is partially handled by - * the Priority 0 ACL flows added earlier, but we also need to - * commit IP flows. This is because, while the initiator's - * direction may not have any stateful rules, the server's may - * and then its return traffic would not have an associated - * conntrack entry and would return "+invalid". - * - * We use "ct_commit" for a connection that is not already known - * by the connection tracker. Once a connection is committed, - * subsequent packets will hit the flow at priority 0 that just - * uses "next;" - * - * We also check for established connections that have ct_label.blocked - * set on them. That's a connection that was disallowed, but is - * now allowed by policy again since it hit this default-allow flow. - * We need to set ct_label.blocked=0 to let the connection continue, - * which will be done by ct_commit() in the "stateful" stage. - * Subsequent packets will hit the flow at priority 0 that just - * uses "next;". 
*/ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 1, - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 1, - .__match = i"ip && (!ct.est || (ct.est && ct_label.blocked == 1))", - .actions = i"${rEGBIT_CONNTRACK_COMMIT()} = 1; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Ingress and Egress ACL Table (Priority 65532). - * - * Always drop traffic that's in an invalid state. Also drop - * reply direction packets for connections that have been marked - * for deletion (bit 0 of ct_label is set). - * - * This is enforced at a higher priority than ACLs can be defined. */ - var __match = (ct_inv_or ++ "(ct.est && ct.rpl && ct_label.blocked == 1)").intern() in { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - /* Ingress and Egress ACL Table (Priority 65532). - * - * Allow reply traffic that is part of an established - * conntrack entry that has not been marked for deletion - * (bit 0 of ct_label). We only match traffic in the - * reply direction because we want traffic in the request - * direction to hit the currently defined policy from ACLs. - * - * This is enforced at a higher priority than ACLs can be defined. 
*/ - var __match = ("ct.est && !ct.rel && !ct.new " ++ and_not_ct_inv ++ - "&& ct.rpl && ct_label.blocked == 0").intern() in { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - /* Ingress and Egress ACL Table (Priority 65532). - * - * Allow traffic that is related to an existing conntrack entry that - * has not been marked for deletion (bit 0 of ct_label). - * - * This is enforced at a higher priority than ACLs can be defined. - * - * NOTE: This does not support related data sessions (e.g., - * a dynamically negotiated FTP data channel), but will allow - * related traffic such as an ICMP Port Unreachable through - * that's generated from a non-listening UDP port. */ - var __match = ("!ct.est && ct.rel && !ct.new " ++ and_not_ct_inv ++ - "&& ct_label.blocked == 0").intern() in { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 65532, - .__match = __match, - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - /* Ingress and Egress ACL Table (Priority 65532). - * - * Do not send ND packets to conntrack. 
*/ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 65532, - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 65532, - .__match = i"nd || nd_ra || nd_rs || mldv1 || mldv2", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - /* Add a 34000 priority flow to advance the DNS reply from ovn-controller, - * if the CMS has configured DNS records for the datapath. - */ - if (sw.has_dns_records) { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 34000, - .__match = i"udp.src == 53", - .actions = if has_stateful i"ct_commit; next;" else i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - }; - - if (sw.has_acls or sw.has_lb_vip) { - /* Add a 34000 priority flow to advance the service monitor reply - * packets to skip applying ingress ACLs. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_ACL(), - .priority = 34000, - .__match = i"eth.dst == $svc_monitor_mac", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 34000, - .__match = i"eth.src == $svc_monitor_mac", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - } - } -} - -/* This stage builds hints for the IN/OUT_ACL stage. Based on various - * combinations of ct flags, packets may hit only a subset of the logical - * flows in the IN/OUT_ACL stage. - * - * Populating ACL hints first and storing them in registers simplifies - * the logical flow match expressions in the IN/OUT_ACL stage and - * generates fewer OpenFlow flows. - * - * Certain combinations of ct flags might be valid matches for multiple - * types of ACL logical flows (e.g., allow/drop). 
In such cases hints - * corresponding to all potential matches are set. - */ -input relation AclHintStages[Intern<Stage>] -AclHintStages[s_SWITCH_IN_ACL_HINT()]. -AclHintStages[s_SWITCH_OUT_ACL_HINT()]. -for (sw in &Switch(._uuid = ls_uuid)) { - for (AclHintStages[stage]) { - /* In any case, advance to the next stage. */ - var priority = if (not sw.has_acls and not sw.has_lb_vip) { 65535 } else { 0 } in - Flow(ls_uuid, stage, priority, i"1", i"next;", None, None, 0) - }; - - for (AclHintStages[stage]) - if (sw.has_stateful_acl or sw.has_lb_vip) { - /* New, not already established connections, may hit either allow - * or drop ACLs. For allow ACLs, the connection must also be committed - * to conntrack so we set REGBIT_ACL_HINT_ALLOW_NEW. - */ - Flow(ls_uuid, stage, 7, i"ct.new && !ct.est", - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " - "${rEGBIT_ACL_HINT_DROP()} = 1; " - "next;", None, None, 0); - - /* Already established connections in the "request" direction that - * are already marked as "blocked" may hit either: - * - allow ACLs for connections that were previously allowed by a - * policy that was deleted and is being readded now. In this case - * the connection should be recommitted so we set - * REGBIT_ACL_HINT_ALLOW_NEW. - * - drop ACLs. - */ - Flow(ls_uuid, stage, 6, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 1", - i"${rEGBIT_ACL_HINT_ALLOW_NEW()} = 1; " - "${rEGBIT_ACL_HINT_DROP()} = 1; " - "next;", None, None, 0); - - /* Not tracked traffic can either be allowed or dropped. */ - Flow(ls_uuid, stage, 5, i"!ct.trk", - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " - "${rEGBIT_ACL_HINT_DROP()} = 1; " - "next;", None, None, 0); - - /* Already established connections in the "request" direction may hit - * either: - * - allow ACLs in which case the traffic should be allowed so we set - * REGBIT_ACL_HINT_ALLOW. 
- * - drop ACLs in which case the traffic should be blocked and the - * connection must be committed with ct_label.blocked set so we set - * REGBIT_ACL_HINT_BLOCK. - */ - Flow(ls_uuid, stage, 4, i"!ct.new && ct.est && !ct.rpl && ct_label.blocked == 0", - i"${rEGBIT_ACL_HINT_ALLOW()} = 1; " - "${rEGBIT_ACL_HINT_BLOCK()} = 1; " - "next;", None, None, 0); - - /* Not established or established and already blocked connections may - * hit drop ACLs. - */ - Flow(ls_uuid, stage, 3, i"!ct.est", - i"${rEGBIT_ACL_HINT_DROP()} = 1; " - "next;", None, None, 0); - Flow(ls_uuid, stage, 2, i"ct.est && ct_label.blocked == 1", - i"${rEGBIT_ACL_HINT_DROP()} = 1; " - "next;", None, None, 0); - - /* Established connections that were previously allowed might hit - * drop ACLs in which case the connection must be committed with - * ct_label.blocked set. - */ - Flow(ls_uuid, stage, 1, i"ct.est && ct_label.blocked == 0", - i"${rEGBIT_ACL_HINT_BLOCK()} = 1; " - "next;", None, None, 0) - } -} - -/* Ingress or Egress ACL Table (Various priorities). */ -for (&SwitchACL(.sw = sw, .acl = acl, .has_fair_meter = fair_meter)) { - /* consider_acl */ - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in - var ingress = acl.direction == i"from-lport" in - var stage = if (ingress) { s_SWITCH_IN_ACL() } else { s_SWITCH_OUT_ACL() } in - var pipeline = if ingress Ingress else Egress in - var stage_hint = stage_hint(acl._uuid) in - var acl_log = build_acl_log(acl, fair_meter) in - var acl_match = acl.__match.intern() in - if (acl.action == i"allow" or acl.action == i"allow-related") { - /* If there are any stateful flows, we must even commit "allow" - * actions. This is because, while the initiator's - * direction may not have any stateful rules, the server's - * may and then its return traffic would not have an - * associated conntrack entry and would return "+invalid". 
*/ - if (not has_stateful) { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = acl.__match, - .actions = i"${acl_log}next;", - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - } else { - /* Commit the connection tracking entry if it's a new - * connection that matches this ACL. After this commit, - * the reply traffic is allowed by a flow we create at - * priority 65532, defined earlier. - * - * It's also possible that a known connection was marked for - * deletion after a policy was deleted, but the policy was - * re-added while that connection is still known. We catch - * that case here and un-set ct_label.blocked (which will be done - * by ct_commit in the "stateful" stage) to indicate that the - * connection should be allowed to resume. - * If the ACL has a label, then load REG_LABEL with the label and - * set the REGBIT_ACL_LABEL field. - */ - var __action = if (acl.label != 0) { - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" - } else { - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${acl_log}next;" - } in Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = i"${rEGBIT_ACL_HINT_ALLOW_NEW()} == 1 && (${acl.__match})", - .actions = __action, - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None); - - /* Match on traffic in the request direction for an established - * connection tracking entry that has not been marked for - * deletion. We use this to ensure that this - * connection is still allowed by the currently defined - * policy. Match untracked packets too. - * Commit the connection only if the ACL has a label. This is done to - * update the connection tracking entry label in case the ACL - * allowing the connection changes. 
- */ - var __action = if (acl.label != 0) { - i"${rEGBIT_CONNTRACK_COMMIT()} = 1; ${rEGBIT_ACL_LABEL()} = 1; " - "${rEG_LABEL()} = ${acl.label}; ${acl_log}next;" - } else { - i"${acl_log}next;" - } in Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = i"${rEGBIT_ACL_HINT_ALLOW()} == 1 && (${acl.__match})", - .actions = __action, - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - } - } else if (acl.action == i"allow-stateless") { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = acl.__match, - .actions = i"${acl_log}next;", - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - } else if (acl.action == i"drop" or acl.action == i"reject") { - /* The implementation of "drop" differs if stateful ACLs are in - * use for this datapath. In that case, the actions differ - * depending on whether the connection was previously committed - * to the connection tracker with ct_commit. */ - var controller_meter = sw.copp.get(cOPP_REJECT()) in - if (has_stateful) { - /* If the packet is not tracked or not part of an established - * connection, then we can simply reject/drop it. */ - var __match = "${rEGBIT_ACL_HINT_DROP()} == 1" in - if (acl.action == i"reject") { - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), i"") - } else { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = (__match ++ " && (${acl.__match})").intern(), - .actions = i"${acl_log}/* drop */", - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - }; - /* For an existing connection without ct_label set, we've - * encountered a policy change. ACLs previously allowed - * this connection and we committed the connection tracking - * entry. Current policy says that we should drop this - * connection. 
First, we set bit 0 of ct_label to indicate - * that this connection is set for deletion. By not - * specifying "next;", we implicitly drop the packet after - * updating conntrack state. We would normally defer - * ct_commit() to the "stateful" stage, but since we're - * rejecting/dropping the packet, we go ahead and do it here. - */ - var __match = "${rEGBIT_ACL_HINT_BLOCK()} == 1" in - var actions = "ct_commit { ct_label.blocked = 1; }; " in - if (acl.action == i"reject") { - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, __match.intern(), actions.intern()) - } else { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = (__match ++ " && (${acl.__match})").intern(), - .actions = i"${actions}${acl_log}/* drop */", - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - } - } else { - /* There are no stateful ACLs in use on this datapath, - * so a "reject/drop" ACL is simply the "reject/drop" - * logical flow action in all cases. */ - if (acl.action == i"reject") { - Reject(sw._uuid, pipeline, stage, acl, fair_meter, controller_meter, i"", i"") - } else { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = acl.priority + oVN_ACL_PRI_OFFSET(), - .__match = acl.__match, - .actions = i"${acl_log}/* drop */", - .stage_hint = stage_hint, - .io_port = None, - .controller_meter = None) - } - } - } -} - -/* Add 34000 priority flow to allow DHCP reply from ovn-controller to all - * logical ports of the datapath if the CMS has configured DHCPv4 options. 
- * */ -for (SwitchPortDHCPv4Options(.port = &SwitchPort{.lsp = lsp, .sw = sw}, - .dhcpv4_options = dhcpv4_options@&nb::DHCP_Options{.options = options}) - if lsp.__type != i"external") { - (Some{var server_id}, Some{var server_mac}, Some{var lease_time}) = - (options.get(i"server_id"), options.get(i"server_mac"), options.get(i"lease_time")) in - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 34000, - .__match = i"outport == ${json_escape(lsp.name)} " - "&& eth.src == ${server_mac} " - "&& ip4.src == ${server_id} && udp && udp.src == 67 " - "&& udp.dst == 68", - .actions = if (has_stateful) i"ct_commit; next;" else i"next;", - .stage_hint = stage_hint(dhcpv4_options._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) -} - -for (SwitchPortDHCPv6Options(.port = &SwitchPort{.lsp = lsp, .sw = sw}, - .dhcpv6_options = dhcpv6_options@&nb::DHCP_Options{.options=options} ) - if lsp.__type != i"external") { - Some{var server_mac} = options.get(i"server_id") in - Some{var ea} = eth_addr_from_string(server_mac.ival()) in - var server_ip = ea.to_ipv6_lla() in - /* Get the link local IP of the DHCPv6 server from the - * server MAC. */ - var has_stateful = sw.has_stateful_acl or sw.has_lb_vip in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_ACL(), - .priority = 34000, - .__match = i"outport == ${json_escape(lsp.name)} " - "&& eth.src == ${server_mac} " - "&& ip6.src == ${server_ip} && udp && udp.src == 547 " - "&& udp.dst == 546", - .actions = if (has_stateful) i"ct_commit; next;" else i"next;", - .stage_hint = stage_hint(dhcpv6_options._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) -} - -relation QoSAction(qos: uuid, key_action: istring, value_action: integer) - -QoSAction(qos, k, v) :- - &nb::QoS(._uuid = qos, .action = actions), - (var k, var v) = FlatMap(actions). 
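The `QoSAction` rule above flattens each QoS row's `action` map into (key, value) pairs, which the QoS rules that follow turn into `ip.dscp` marking and `set_meter` flow actions. A minimal Python sketch of that string construction (illustrative only; the helper names are hypothetical and this is not the northd implementation, which is the DDlog shown in the diff):

```python
# Hypothetical sketch mirroring the QoS flow-action construction in the
# removed DDlog code; helper names are invented for illustration.

def flatten_qos_actions(qos_rows):
    """Yield (uuid, key, value) tuples from each row's action map,
    like the FlatMap in the QoSAction rule."""
    for uuid, actions in qos_rows:
        for key, value in actions.items():
            yield (uuid, key, value)

def qos_mark_action(key, value):
    """Logical flow action for a 'dscp' QoS action; other keys produce
    no QoS_MARK flow."""
    if key == "dscp":
        return f"ip.dscp = {value}; next;"
    return None

def qos_meter_action(rate, burst):
    """Logical flow action for the QoS meter stage: no flow when rate is 0,
    and the burst argument appears only when non-zero."""
    if rate == 0:
        return None
    if burst != 0:
        return f"set_meter({rate}, {burst}); next;"
    return f"set_meter({rate}); next;"

rows = [("qos-1", {"dscp": 48})]
print(list(flatten_qos_actions(rows)))   # [('qos-1', 'dscp', 48)]
print(qos_mark_action("dscp", 48))       # ip.dscp = 48; next;
print(qos_meter_action(1000, 0))         # set_meter(1000); next;
```

As in the DDlog rules, a QoS row with only a `burst` and no `rate` yields no meter flow at all.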
- -/* QoS rules */ -for (&Switch(._uuid = ls_uuid)) { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_QOS_MARK(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_QOS_MARK(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_QOS_METER(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_QOS_METER(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -for (SwitchQoS(.sw = sw, .qos = qos)) { - var ingress = if (qos.direction == i"from-lport") true else false in - var pipeline = if ingress "ingress" else "egress" in { - var stage = if (ingress) { s_SWITCH_IN_QOS_MARK() } else { s_SWITCH_OUT_QOS_MARK() } in - /* FIXME: Can value_action be negative? */ - for (QoSAction(qos._uuid, key_action, value_action)) { - if (key_action == i"dscp") { - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = qos.priority, - .__match = qos.__match, - .actions = i"ip.dscp = ${value_action}; next;", - .stage_hint = stage_hint(qos._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - (var burst, var rate) = { - var rate = 0; - var burst = 0; - for ((key_bandwidth, value_bandwidth) in qos.bandwidth) { - /* FIXME: Can value_bandwidth be negative? 
*/ - if (key_bandwidth == i"rate") { - rate = value_bandwidth - } else if (key_bandwidth == i"burst") { - burst = value_bandwidth - } else () - }; - (burst, rate) - } in - if (rate != 0) { - var stage = if (ingress) { s_SWITCH_IN_QOS_METER() } else { s_SWITCH_OUT_QOS_METER() } in - var meter_action = if (burst != 0) { - i"set_meter(${rate}, ${burst}); next;" - } else { - i"set_meter(${rate}); next;" - } in - /* Ingress and Egress QoS Meter Table. - * - * We limit the bandwidth of this flow by adding a meter table. - */ - Flow(.logical_datapath = sw._uuid, - .stage = stage, - .priority = qos.priority, - .__match = qos.__match, - .actions = meter_action, - .stage_hint = stage_hint(qos._uuid), - .io_port = None, - .controller_meter = None) - } - } -} - -/* stateful rules */ -for (&Switch(._uuid = ls_uuid)) { - /* Ingress and Egress stateful Table (Priority 0): Packets are - * allowed by default. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_STATEFUL(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_STATEFUL(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* If REGBIT_CONNTRACK_COMMIT is set as 1 and REGBIT_ACL_LABEL - * is set to 1, then the packets should be - * committed to conntrack. We always set ct_label.blocked to 0 here as - * any packet that makes it this far is part of a connection we - * want to allow to continue. 
*/ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1", - .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 1", - .actions = i"ct_commit { ct_label.blocked = 0; ct_label.label = ${rEG_LABEL()}; }; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* If REGBIT_CONNTRACK_COMMIT is set as 1, then the packets should be - * committed to conntrack. We always set ct_label.blocked to 0 here as - * any packet that makes it this far is part of a connection we - * want to allow to continue. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0", - .actions = i"ct_commit { ct_label.blocked = 0; }; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_STATEFUL(), - .priority = 100, - .__match = i"${rEGBIT_CONNTRACK_COMMIT()} == 1 && ${rEGBIT_ACL_LABEL()} == 0", - .actions = i"ct_commit { ct_label.blocked = 0; }; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Load balancing rules for new connections get committed to conntrack - * table. So even if REGBIT_CONNTRACK_COMMIT is set in a previous table - * a higher priority rule for load balancing below also commits the - * connection, so it is okay if we do not hit the above match on - * REGBIT_CONNTRACK_COMMIT. 
*/ -function get_match_for_lb_key(ip_address: v46_ip, - port: bit<16>, - protocol: Option<istring>, - redundancy: bool, - use_nexthop_reg: bool, - use_dest_tp_reg: bool): string = { - var port_match = if (port != 0) { - var proto = if (protocol == Some{i"udp"}) { - "udp" - } else { - "tcp" - }; - if (redundancy) { " && ${proto}" } else { "" } ++ - if (use_dest_tp_reg) { - " && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}" - } else { - " && ${proto}.dst == ${port}" - } - } else { - "" - }; - - var ip_match = match (ip_address) { - IPv4{ipv4} -> - if (use_nexthop_reg) { - "${rEG_NEXT_HOP()} == ${ipv4}" - } else { - "ip4.dst == ${ipv4}" - }, - IPv6{ipv6} -> - if (use_nexthop_reg) { - "xx${rEG_NEXT_HOP()} == ${ipv6}" - } else { - "ip6.dst == ${ipv6}" - } - }; - - var ipx = match (ip_address) { - IPv4{ipv4} -> "ip4", - IPv6{ipv6} -> "ip6", - }; - - if (redundancy) { ipx ++ " && " } else { "" } ++ ip_match ++ port_match -} -/* New connections in Ingress table. */ - -function ct_lb(backends: istring, - selection_fields: Set<istring>, protocol: Option<istring>): string { - var args = vec_with_capacity(2); - args.push("backends=${backends}"); - - if (not selection_fields.is_empty()) { - var hash_fields = vec_with_capacity(selection_fields.size()); - for (sf in selection_fields) { - var hf = match ((sf.ival(), protocol)) { - ("tp_src", Some{p}) -> "${p}_src", - ("tp_dst", Some{p}) -> "${p}_dst", - _ -> sf.ival() - }; - hash_fields.push(hf); - }; - hash_fields.sort(); - args.push("hash_fields=" ++ json_escape(hash_fields.join(","))); - }; - - "ct_lb(" ++ args.join("; ") ++ ");" -} -function build_lb_vip_actions(lbvip: Intern<LBVIP>, - up_backends: istring, - stage: Intern<Stage>, - actions0: string): (string, bool) { - if (up_backends == i"") { - if (lbvip.lb.options.get_bool_def(i"reject", false)) { - return ("reg0 = 0; reject { outport <-> inport; ${next_to_stage(stage)};};", true) - } else if (lbvip.health_check.is_some()) { - return ("drop;", false) - } // else fall through - 
}; - - var actions = ct_lb(up_backends, lbvip.lb.selection_fields, lbvip.lb.protocol); - (actions0 ++ actions, false) -} -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_STATEFUL(), - .priority = priority, - .__match = __match, - .actions = actions, - .io_port = None, - .controller_meter = meter, - .stage_hint = 0) :- - LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends), - var priority = if (lbvip.vip_port != 0) { 120 } else { 110 }, - (var actions0, var reject) = { - /* Store the original destination IP to be used when generating - * hairpin flows. - */ - var actions0 = match (lbvip.vip_addr) { - IPv4{ipv4} -> "${rEG_ORIG_DIP_IPV4()} = ${ipv4}; ", - IPv6{ipv6} -> "${rEG_ORIG_DIP_IPV6()} = ${ipv6}; " - }; - - /* Store the original destination port to be used when generating - * hairpin flows. - */ - var actions1 = if (lbvip.vip_port != 0) { - "${rEG_ORIG_TP_DPORT()} = ${lbvip.vip_port}; " - } else { - "" - }; - - build_lb_vip_actions(lbvip, up_backends, s_SWITCH_OUT_QOS_MARK(), actions0 ++ actions1) - }, - var actions = actions0.intern(), - var __match = ("ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false, false, false)).intern(), - SwitchLB(sw, lb._uuid), - var meter = if (reject) { - sw.copp.get(cOPP_REJECT()) - } else { - None - }. - -/* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0). - * Packets that don't need hairpinning should continue processing. - */ -Flow(.logical_datapath = ls_uuid, - .stage = stage, - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - &Switch(._uuid = ls_uuid), - var stages = [s_SWITCH_IN_PRE_HAIRPIN(), - s_SWITCH_IN_NAT_HAIRPIN(), - s_SWITCH_IN_HAIRPIN()], - var stage = FlatMap(stages). - -for (&Switch(._uuid = ls_uuid, .has_lb_vip = true)) { - /* Check if the packet needs to be hairpinned. - * Set REGBIT_HAIRPIN in the original direction and - * REGBIT_HAIRPIN_REPLY in the reply direction. 
- */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_PRE_HAIRPIN(), - .priority = 100, - .__match = i"ip && ct.trk", - .actions = i"${rEGBIT_HAIRPIN()} = chk_lb_hairpin(); " - "${rEGBIT_HAIRPIN_REPLY()} = chk_lb_hairpin_reply(); " - "next;", - .stage_hint = stage_hint(ls_uuid), - .io_port = None, - .controller_meter = None); - - /* If packet needs to be hairpinned, snat the src ip with the VIP - * for new sessions. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_NAT_HAIRPIN(), - .priority = 100, - .__match = i"ip && ct.new && ct.trk && ${rEGBIT_HAIRPIN()} == 1", - .actions = i"ct_snat_to_vip; next;", - .stage_hint = stage_hint(ls_uuid), - .io_port = None, - .controller_meter = None); - - /* If packet needs to be hairpinned, for established sessions there - * should already be an SNAT conntrack entry. - */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_NAT_HAIRPIN(), - .priority = 100, - .__match = i"ip && ct.est && ct.trk && ${rEGBIT_HAIRPIN()} == 1", - .actions = i"ct_snat;", - .stage_hint = stage_hint(ls_uuid), - .io_port = None, - .controller_meter = None); - - /* For the reply of hairpinned traffic, snat the src ip to the VIP. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_NAT_HAIRPIN(), - .priority = 90, - .__match = i"ip && ${rEGBIT_HAIRPIN_REPLY()} == 1", - .actions = i"ct_snat;", - .stage_hint = stage_hint(ls_uuid), - .io_port = None, - .controller_meter = None); - - /* Ingress Hairpin table. - * - Priority 1: Packets that were SNAT-ed for hairpinning should be - * looped back (i.e., swap ETH addresses and send back on inport). 
- */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_HAIRPIN(), - .priority = 1, - .__match = i"(${rEGBIT_HAIRPIN()} == 1 || ${rEGBIT_HAIRPIN_REPLY()} == 1)", - .actions = i"eth.dst <-> eth.src; outport = inport; flags.loopback = 1; output;", - .stage_hint = stage_hint(ls_uuid), - .io_port = None, - .controller_meter = None) -} - -/* Logical switch ingress table PORT_SEC_L2: ingress port security - L2 (priority 50) - ingress table PORT_SEC_IP: ingress port security - IP (priority 90 and 80) - ingress table PORT_SEC_ND: ingress port security - ND (priority 90 and 80) */ -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses) - if lsp.is_enabled() and lsp.__type != i"external") { - for (pbinding in sb::Out_Port_Binding(.logical_port = lsp.name)) { - var __match = if (ps_eth_addresses.is_empty()) { - i"inport == ${json_name}" - } else { - i"inport == ${json_name} && eth.src == {${ps_eth_addresses.join(\" \")}}" - } in - - var actions = { - var queue = match (pbinding.options.get(i"qdisc_queue_id")) { - None -> i"", - Some{id} -> i"set_queue(${id});" - }; - var ramp = if (lsp.__type == i"vtep") { - i"next(pipeline=ingress, table=${s_SWITCH_IN_L2_LKUP().table_id});" - } else { - i"next;" - }; - i"${queue}${ramp}" - } in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_L2(), - .priority = 50, - .__match = __match, - .actions = actions, - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - } -} - -/** -* Build port security constraints on IPv4 and IPv6 src and dst fields -* and add logical flows to S_SWITCH_(IN/OUT)_PORT_SEC_IP stage. 
-* -* For each port security of the logical port, following -* logical flows are added -* - If the port security has IPv4 addresses, -* - Priority 90 flow to allow IPv4 packets for known IPv4 addresses -* -* - If the port security has IPv6 addresses, -* - Priority 90 flow to allow IPv6 packets for known IPv6 addresses -* -* - If the port security has IPv4 addresses or IPv6 addresses or both -* - Priority 80 flow to drop all IPv4 and IPv6 traffic -*/ -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps) - if port.is_enabled() and - (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) and - port.lsp.__type != i"external") -{ - if (ps.ipv4_addrs.len() > 0) { - var dhcp_match = i"inport == ${port.json_name}" - " && eth.src == ${ps.ea}" - " && ip4.src == 0.0.0.0" - " && ip4.dst == 255.255.255.255" - " && udp.src == 68 && udp.dst == 67" in { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 90, - .__match = dhcp_match, - .actions = i"next;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - }; - var addrs = { - var addrs = vec_empty(); - for (addr in ps.ipv4_addrs) { - /* When the netmask is applied, if the host portion is - * non-zero, the host can only use the specified - * address. If zero, the host is allowed to use any - * address in the subnet. 
- */ - addrs.push(addr.match_host_or_network()) - }; - addrs - } in - var __match = - "inport == ${port.json_name} && eth.src == ${ps.ea} && ip4.src == {" ++ - addrs.join(", ") ++ "}" in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } - }; - if (ps.ipv6_addrs.len() > 0) { - var dad_match = i"inport == ${port.json_name}" - " && eth.src == ${ps.ea}" - " && ip6.src == ::" - " && ip6.dst == ff02::/16" - " && icmp6.type == {131, 135, 143}" in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 90, - .__match = dad_match, - .actions = i"next;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = None, - .controller_meter = None) - }; - var __match = "inport == ${port.json_name} && eth.src == ${ps.ea}" ++ - build_port_security_ipv6_flow(Ingress, ps.ea, ps.ipv6_addrs) in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } - }; - var __match = i"inport == ${port.json_name} && eth.src == ${ps.ea} && ip" in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 80, - .__match = __match, - .actions = i"drop;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } -} - -/** - * Build port security constraints on ARP and IPv6 ND fields - * and add logical flows to S_SWITCH_IN_PORT_SEC_ND stage. 
- * - * For each port security of the logical port, following - * logical flows are added - * - If the port security has no IP (both IPv4 and IPv6) or - * if it has IPv4 address(es) - * - Priority 90 flow to allow ARP packets for known MAC addresses - * in the eth.src and arp.spa fields. If the port security - * has IPv4 addresses, allow known IPv4 addresses in the arp.tpa field. - * - * - If the port security has no IP (both IPv4 and IPv6) or - * if it has IPv6 address(es) - * - Priority 90 flow to allow IPv6 ND packets for known MAC addresses - * in the eth.src and nd.sll/nd.tll fields. If the port security - * has IPv6 addresses, allow known IPv6 addresses in the nd.target field - * for IPv6 Neighbor Advertisement packet. - * - * - Priority 80 flow to drop ARP and IPv6 ND packets. - */ -for (SwitchPortPSAddresses(.port = port@&SwitchPort{.sw = sw}, .ps_addrs = ps) - if port.is_enabled() and port.lsp.__type != i"external") -{ - var no_ip = ps.ipv4_addrs.is_empty() and ps.ipv6_addrs.is_empty() in - { - if (not ps.ipv4_addrs.is_empty() or no_ip) { - var __match = { - var prefix = "inport == ${port.json_name} && eth.src == ${ps.ea} && arp.sha == ${ps.ea}"; - if (not ps.ipv4_addrs.is_empty()) { - var spas = vec_empty(); - for (addr in ps.ipv4_addrs) { - spas.push(addr.match_host_or_network()) - }; - prefix ++ " && arp.spa == {${spas.join(\", \")}}" - } else { - prefix - } - } in { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_ND(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } - }; - if (not ps.ipv6_addrs.is_empty() or no_ip) { - var __match = "inport == ${port.json_name} && eth.src == ${ps.ea}" ++ - build_port_security_ipv6_nd_flow(ps.ea, ps.ipv6_addrs) in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_ND(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - 
.stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } - }; - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_PORT_SEC_ND(), - .priority = 80, - .__match = i"inport == ${port.json_name} && (arp || nd)", - .actions = i"drop;", - .stage_hint = stage_hint(port.lsp._uuid), - .io_port = Some{port.lsp.name}, - .controller_meter = None) - } -} - -/* Ingress table PORT_SEC_ND and PORT_SEC_IP: Port security - IP and ND, by - * default goto next. (priority 0)*/ -for (&Switch(._uuid = ls_uuid)) { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_PORT_SEC_ND(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_PORT_SEC_IP(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Ingress table ARP_ND_RSP: ARP/ND responder, skip requests coming from - * localnet and vtep ports. (priority 100); see ovn-northd.8.xml for the - * rationale. */ -for (&SwitchPort(.lsp = lsp, .sw = sw, .json_name = json_name) - if lsp.is_enabled() and - (lsp.__type == i"localnet" or lsp.__type == i"vtep")) -{ - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 100, - .__match = i"inport == ${json_name}", - .actions = i"next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) -} - -function lsp_is_up(lsp: Intern<nb::Logical_Switch_Port>): bool = { - lsp.up == Some{true} -} - -/* Ingress table ARP_ND_RSP: ARP/ND responder, reply for known IPs. - * (priority 50). */ -/* Handle - * - GARPs for virtual ip which belongs to a logical port - * of type 'virtual' and bind that port. - * - * - ARP reply from the virtual ip which belongs to a logical - * port of type 'virtual' and bind that port. 
- * */ - Flow(.logical_datapath = sp.sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 100, - .__match = i"inport == ${vp.json_name} && " - "((arp.op == 1 && arp.spa == ${virtual_ip} && arp.tpa == ${virtual_ip}) || " - "(arp.op == 2 && arp.spa == ${virtual_ip}))", - .actions = i"bind_vport(${sp.json_name}, inport); next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{vp.lsp.name}, - .controller_meter = None) :- - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), - Some{var virtual_ip} = lsp.options.get(i"virtual-ip"), - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), - Some{var ip} = ip_parse(virtual_ip.ival()), - var vparent = FlatMap(virtual_parents.split(",")), - vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = vparent.intern()}), - vp.sw == sp.sw. - -/* - * Add ARP/ND reply flows if either the - * - port is up and it doesn't have 'unknown' address defined or - * - port type is router or - * - port type is localport - */ -for (CheckLspIsUp[check_lsp_is_up]) { - for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = lsp, .sw = sw, .json_name = json_name}, - .ea = ea, .addr = addr) - if lsp.is_enabled() and - ((lsp_is_up(lsp) or not check_lsp_is_up) - or lsp.__type == i"router" or lsp.__type == i"localport") and - lsp.__type != i"external" and lsp.__type != i"virtual" and - not lsp.addresses.contains(i"unknown") and - not sw.is_vlan_transparent) - { - var __match = "arp.tpa == ${addr.addr} && arp.op == 1" in - { - var actions = i"eth.dst = eth.src; " - "eth.src = ${ea}; " - "arp.op = 2; /* ARP reply */ " - "arp.tha = arp.sha; " - "arp.sha = ${ea}; " - "arp.tpa = arp.spa; " - "arp.spa = ${addr.addr}; " - "outport = inport; " - "flags.loopback = 1; " - "output;" in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 50, - .__match = __match.intern(), - .actions = actions, - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = 
None); - - /* Do not reply to an ARP request from the port that owns the - * address (otherwise a DHCP client that ARPs to check for a - * duplicate address will fail). Instead, forward it the usual - * way. - * - * (Another alternative would be to simply drop the packet. If - * everything is working as it is configured, then this would - * produce equivalent results, since no one should reply to the - * request. But ARPing for one's own IP address is intended to - * detect situations where the network is not working as - * configured, so dropping the request would frustrate that - * intent.) */ - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 100, - .__match = i"${__match} && inport == ${json_name}", - .actions = i"next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - } - } -} - -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 50, - .__match = __match.intern(), - .actions = __actions, - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - - sp in &SwitchPort(.sw = sw, .peer = Some{rp}), - rp.is_enabled(), - var proxy_ips = { - match (sp.lsp.options.get(i"arp_proxy")) { - None -> "", - Some {addresses} -> { - match (extract_ip_addresses(addresses.ival())) { - None -> "", - Some{addr} -> { - var ip4_addrs = vec_empty(); - for (ip4 in addr.ipv4_addrs) { - ip4_addrs.push("${ip4.addr}") - }; - string_join(ip4_addrs, ",") - } - } - } - } - }, - proxy_ips != "", - var __match = "arp.op == 1 && arp.tpa == {" ++ proxy_ips ++ "}", - var __actions = i"eth.dst = eth.src; " - "eth.src = ${rp.networks.ea}; " - "arp.op = 2; /* ARP reply */ " - "arp.tha = arp.sha; " - "arp.sha = ${rp.networks.ea}; " - "arp.tpa <-> arp.spa; " - "outport = inport; " - "flags.loopback = 1; " - "output;". 
- -/* For ND solicitations, we need to listen for both the - * unicast IPv6 address and its all-nodes multicast address, - * but always respond with the unicast IPv6 address. */ -for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, - .ea = ea, .addr = addr) - if lsp.is_enabled() and - (lsp_is_up(lsp) or lsp.__type == i"router" or lsp.__type == i"localport") and - lsp.__type != i"external" and lsp.__type != i"virtual" and - not sw.is_vlan_transparent) -{ - var __match = "nd_ns && ip6.dst == {${addr.addr}, ${addr.solicited_node()}} && nd.target == ${addr.addr}" in - var actions = i"${if (lsp.__type == i\"router\") \"nd_na_router\" else \"nd_na\"} { " - "eth.src = ${ea}; " - "ip6.src = ${addr.addr}; " - "nd.target = ${addr.addr}; " - "nd.tll = ${ea}; " - "outport = inport; " - "flags.loopback = 1; " - "output; " - "};" in - { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 50, - .__match = __match.intern(), - .actions = actions, - .io_port = None, - .controller_meter = sw.copp.get(cOPP_ND_NA()), - .stage_hint = stage_hint(lsp._uuid)); - - /* Do not reply to a solicitation from the port that owns the - * address (otherwise DAD detection will fail). */ - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 100, - .__match = i"${__match} && inport == ${json_name}", - .actions = i"next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - } -} - -/* Ingress table ARP_ND_RSP: ARP/ND responder, by default goto next. - * (priority 0)*/ -for (ls in &nb::Logical_Switch) { - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Ingress table ARP_ND_RSP: ARP/ND responder for service monitor source ip. 
- * (priority 110)*/ -Flow(.logical_datapath = sp.sw._uuid, - .stage = s_SWITCH_IN_ARP_ND_RSP(), - .priority = 110, - .__match = i"arp.tpa == ${svc_mon_src_ip} && arp.op == 1", - .actions = i"eth.dst = eth.src; " - "eth.src = ${svc_monitor_mac}; " - "arp.op = 2; /* ARP reply */ " - "arp.tha = arp.sha; " - "arp.sha = ${svc_monitor_mac}; " - "arp.tpa = arp.spa; " - "arp.spa = ${svc_mon_src_ip}; " - "outport = inport; " - "flags.loopback = 1; " - "output;", - .stage_hint = stage_hint(lbvip.lb._uuid), - .io_port = None, - .controller_meter = None) :- - LBVIP[lbvip], - var lbvipbackend = FlatMap(lbvip.backends), - Some{var svc_monitor} = lbvipbackend.svc_monitor, - sp in &SwitchPort( - .lsp = &nb::Logical_Switch_Port{.name = svc_monitor.port_name}), - var svc_mon_src_ip = svc_monitor.src_ip, - SvcMonitorMac(svc_monitor_mac). - -function build_dhcpv4_action( - lsp_json_key: string, - dhcpv4_options: Intern<nb::DHCP_Options>, - offer_ip: in_addr, - lsp_options: Map<istring,istring>) : Option<(istring, istring, string)> = -{ - match (ip_parse_masked(dhcpv4_options.cidr.ival())) { - Left{err} -> { - /* cidr defined is invalid */ - None - }, - Right{(var host_ip, var mask)} -> { - if (not (offer_ip, host_ip).same_network(mask)) { - /* the offer ip of the logical port doesn't belong to the cidr - * defined in the DHCPv4 options. - */ - None - } else { - match ((dhcpv4_options.options.get(i"server_id"), - dhcpv4_options.options.get(i"server_mac"), - dhcpv4_options.options.get(i"lease_time"))) - { - (Some{var server_ip}, Some{var server_mac}, Some{var lease_time}) -> { - var options_map = dhcpv4_options.options; - - /* server_mac is not DHCPv4 option, delete it from the smap. 
*/ - options_map.remove(i"server_mac"); - options_map.insert(i"netmask", i"${mask}"); - - match (lsp_options.get(i"hostname")) { - None -> (), - Some{port_hostname} -> options_map.insert(i"hostname", port_hostname) - }; - - var options = vec_empty(); - for (node in options_map) { - (var k, var v) = node; - options.push("${k} = ${v}") - }; - options.sort(); - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcp_opts(offerip = ${offer_ip}, " ++ - options.join(", ") ++ "); next;"; - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " - "ip4.src = ${server_ip}; udp.src = 67; " - "udp.dst = 68; outport = inport; flags.loopback = 1; " - "output;"; - - var ipv4_addr_match = "ip4.src == ${offer_ip} && ip4.dst == {${server_ip}, 255.255.255.255}"; - Some{(options_action.intern(), response_action, ipv4_addr_match)} - }, - _ -> { - /* "server_id", "server_mac" and "lease_time" should be - * present in the dhcp_options. */ - //static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - warn("Required DHCPv4 options not defined for lport - ${lsp_json_key}"); - None - } - } - } - } - } -} - -function build_dhcpv6_action( - lsp_json_key: string, - dhcpv6_options: Intern<nb::DHCP_Options>, - offer_ip: in6_addr): Option<(istring, istring)> = -{ - match (ipv6_parse_masked(dhcpv6_options.cidr.ival())) { - Left{err} -> { - /* cidr defined is invalid */ - //warn("cidr is invalid - ${err}"); - None - }, - Right{(var host_ip, var mask)} -> { - if (not (offer_ip, host_ip).same_network(mask)) { - /* offer_ip doesn't belongs to the cidr defined in lport's DHCPv6 - * options.*/ - //warn("ip does not belong to cidr"); - None - } else { - /* "server_id" should be the MAC address. 
*/ - match (dhcpv6_options.options.get(i"server_id")) { - None -> { - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); - None - }, - Some{server_mac} -> { - match (eth_addr_from_string(server_mac.ival())) { - None -> { - warn("server_id not present in the DHCPv6 options for lport ${lsp_json_key}"); - None - }, - Some{ea} -> { - /* Get the link local IP of the DHCPv6 server from the server MAC. */ - var server_ip = ea.to_ipv6_lla().string_mapped(); - var ia_addr = offer_ip.string_mapped(); - var options = vec_empty(); - - /* Check whether the dhcpv6 options should be configured as stateful. - * Only reply with ia_addr option for dhcpv6 stateful address mode. */ - if (not dhcpv6_options.options.get_bool_def(i"dhcpv6_stateless", false)) { - options.push("ia_addr = ${ia_addr}") - } else (); - - for ((k, v) in dhcpv6_options.options) { - if (k != i"dhcpv6_stateless") { - options.push("${k} = ${v}") - } else () - }; - options.sort(); - - var options_action = "${rEGBIT_DHCP_OPTS_RESULT()} = put_dhcpv6_opts(" ++ - options.join(", ") ++ - "); next;"; - var response_action = i"eth.dst = eth.src; eth.src = ${server_mac}; " - "ip6.dst = ip6.src; ip6.src = ${server_ip}; udp.src = 547; " - "udp.dst = 546; outport = inport; flags.loopback = 1; " - "output;"; - Some{(options_action.intern(), response_action)} - } - } - } - } - } - } - } -} - -/* If 'names' has one element, returns json_escape() for it. - * Otherwise, returns json_escape() of all of its elements inside "{...}". - */ -function json_escape_vec(names: Vec<string>): string -{ - match ((names.len(), names.nth(0))) { - (1, Some{name}) -> json_escape(name), - _ -> { - var json_names = vec_with_capacity(names.len()); - for (name in names) { - json_names.push(json_escape(name)); - }; - "{" ++ json_names.join(", ") ++ "}" - } - } -} - -/* - * Ordinarily, returns a single match against 'lsp'. 
- * - * If 'lsp' is an external port, returns a match against the localnet port(s) on - * its switch along with a condition that it only operate if 'lsp' is - * chassis-resident. This makes sense as a condition for sending DHCP replies - * to external ports because only one chassis should send such a reply. - * - * Returns a prefix and a suffix string. There is no reason for this except - * that it makes it possible to exactly mimic the format used by northd.c - * so that text-based comparisons do not show differences. (This fails if - * there's more than one localnet port since the C version uses multiple flows - * in that case.) - */ -function match_dhcp_input(lsp: Intern<SwitchPort>): (string, string) = -{ - if (lsp.lsp.__type == i"external" and not lsp.sw.localnet_ports.is_empty()) { - ("inport == " ++ json_escape_vec(lsp.sw.localnet_ports.map(|x| x.1.ival())) ++ " && ", - " && is_chassis_resident(${lsp.json_name})") - } else { - ("inport == ${lsp.json_name} && ", "") - } -} - -/* Logical switch ingress tables DHCP_OPTIONS and DHCP_RESPONSE: DHCP options - * and response priority 100 flows. */ -for (lsp in &SwitchPort - /* Don't add the DHCP flows if the port is not enabled or if the - * port is a router port. */ - if (lsp.is_enabled() and lsp.lsp.__type != i"router") - /* If it's an external port and there is no localnet port - * and if it doesn't belong to an HA chassis group ignore it. 
*/ - and (lsp.lsp.__type != i"external" - or (not lsp.sw.localnet_ports.is_empty() - and lsp.lsp.ha_chassis_group.is_some()))) -{ - for (lps in LogicalSwitchPort(.lport = lsp.lsp._uuid, .lswitch = lsuuid)) { - var json_key = json_escape(lsp.lsp.name) in - (var pfx, var sfx) = match_dhcp_input(lsp) in - { - /* DHCPv4 options enabled for this port */ - Some{var dhcpv4_options_uuid} = lsp.lsp.dhcpv4_options in - { - for (dhcpv4_options in &nb::DHCP_Options(._uuid = dhcpv4_options_uuid)) { - for (SwitchPortIPv4Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { - Some{(var options_action, var response_action, var ipv4_addr_match)} = - build_dhcpv4_action(json_key, dhcpv4_options, addr.addr, lsp.lsp.options) in - { - var __match = - (pfx ++ "eth.src == ${ea} && " - "ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && " - "udp.src == 68 && udp.dst == 67" ++ sfx).intern() - in - Flow(.logical_datapath = lsuuid, - .stage = s_SWITCH_IN_DHCP_OPTIONS(), - .priority = 100, - .__match = __match, - .actions = options_action, - .io_port = None, - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), - .stage_hint = stage_hint(lsp.lsp._uuid)); - - /* Allow ip4.src = OFFER_IP and - * ip4.dst = {SERVER_IP, 255.255.255.255} for the below - * cases - * - When the client wants to renew the IP by sending - * the DHCPREQUEST to the server ip. - * - When the client wants to renew the IP by - * broadcasting the DHCPREQUEST. - */ - var __match = pfx ++ "eth.src == ${ea} && " - "${ipv4_addr_match} && udp.src == 68 && udp.dst == 67" ++ sfx in - Flow(.logical_datapath = lsuuid, - .stage = s_SWITCH_IN_DHCP_OPTIONS(), - .priority = 100, - .__match = __match.intern(), - .actions = options_action, - .io_port = None, - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV4_OPTS()), - .stage_hint = stage_hint(lsp.lsp._uuid)); - - /* If REGBIT_DHCP_OPTS_RESULT is set, it means the - * put_dhcp_opts action is successful. 
*/ - var __match = pfx ++ "eth.src == ${ea} && " - "ip4 && udp.src == 68 && udp.dst == 67 && " ++ - rEGBIT_DHCP_OPTS_RESULT() ++ sfx in - Flow(.logical_datapath = lsuuid, - .stage = s_SWITCH_IN_DHCP_RESPONSE(), - .priority = 100, - .__match = __match.intern(), - .actions = response_action, - .stage_hint = stage_hint(lsp.lsp._uuid), - .io_port = None, - .controller_meter = None) - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv4_action - // returns Some() for at most 1 address in lsp_addrs? Otherwise, simulate this break - // by computing an aggregate that returns the first element of a group. - //break; - } - } - } - }; - - /* DHCPv6 options enabled for this port */ - Some{var dhcpv6_options_uuid} = lsp.lsp.dhcpv6_options in - { - for (dhcpv6_options in &nb::DHCP_Options(._uuid = dhcpv6_options_uuid)) { - for (SwitchPortIPv6Address(.port = &SwitchPort{.lsp = &nb::Logical_Switch_Port{._uuid = lsp.lsp._uuid}}, .ea = ea, .addr = addr)) { - Some{(var options_action, var response_action)} = - build_dhcpv6_action(json_key, dhcpv6_options, addr.addr) in - { - var __match = pfx ++ "eth.src == ${ea}" - " && ip6.dst == ff02::1:2 && udp.src == 546 &&" - " udp.dst == 547" ++ sfx in - { - Flow(.logical_datapath = lsuuid, - .stage = s_SWITCH_IN_DHCP_OPTIONS(), - .priority = 100, - .__match = __match.intern(), - .actions = options_action, - .io_port = None, - .controller_meter = lsp.sw.copp.get(cOPP_DHCPV6_OPTS()), - .stage_hint = stage_hint(lsp.lsp._uuid)); - - /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the - * put_dhcpv6_opts action is successful */ - Flow(.logical_datapath = lsuuid, - .stage = s_SWITCH_IN_DHCP_RESPONSE(), - .priority = 100, - .__match = (__match ++ " && ${rEGBIT_DHCP_OPTS_RESULT()}").intern(), - .actions = response_action, - .stage_hint = stage_hint(lsp.lsp._uuid), - .io_port = None, - .controller_meter = None) - // FIXME: is there a constraint somewhere that guarantees that build_dhcpv4_action - // returns Some() for at 
most 1 address in lsp_addrs? Otherwise, simulate this breaks - // by computing an aggregate that returns the first element of a group. - //break; - } - } - } - } - } - } - } -} - -/* Logical switch ingress tables DNS_LOOKUP and DNS_RESPONSE: DNS lookup and - * response priority 100 flows. - */ -for (LogicalSwitchHasDNSRecords(ls, true)) -{ - Flow(.logical_datapath = ls, - .stage = s_SWITCH_IN_DNS_LOOKUP(), - .priority = 100, - .__match = i"udp.dst == 53", - .actions = i"${rEGBIT_DNS_LOOKUP_RESULT()} = dns_lookup(); next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - var action = i"eth.dst <-> eth.src; ip4.src <-> ip4.dst; " - "udp.dst = udp.src; udp.src = 53; outport = inport; " - "flags.loopback = 1; output;" in - Flow(.logical_datapath = ls, - .stage = s_SWITCH_IN_DNS_RESPONSE(), - .priority = 100, - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", - .actions = action, - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - var action = i"eth.dst <-> eth.src; ip6.src <-> ip6.dst; " - "udp.dst = udp.src; udp.src = 53; outport = inport; " - "flags.loopback = 1; output;" in - Flow(.logical_datapath = ls, - .stage = s_SWITCH_IN_DNS_RESPONSE(), - .priority = 100, - .__match = i"udp.dst == 53 && ${rEGBIT_DNS_LOOKUP_RESULT()}", - .actions = action, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Ingress table DHCP_OPTIONS and DHCP_RESPONSE: DHCP options and response, by - * default goto next. (priority 0). - * - * Ingress table DNS_LOOKUP and DNS_RESPONSE: DNS lookup and response, by - * default goto next. (priority 0). - - * Ingress table EXTERNAL_PORT - External port handling, by default goto next. - * (priority 0). 
*/ -for (ls in &nb::Logical_Switch) { - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_DHCP_OPTIONS(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_DHCP_RESPONSE(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_DNS_LOOKUP(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_DNS_RESPONSE(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_EXTERNAL_PORT(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 110, - .__match = i"eth.dst == $svc_monitor_mac", - .actions = i"handle_svc_check(inport);", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - sw in &Switch(). 
- -for (sw in &Switch(._uuid = ls_uuid, .mcast_cfg = mcast_cfg) - if (mcast_cfg.enabled)) { - var controller_meter = sw.copp.get(cOPP_IGMP()) in - for (SwitchMcastFloodRelayPorts(sw, relay_ports)) { - for (SwitchMcastFloodReportPorts(sw, flood_report_ports)) { - for (SwitchMcastFloodPorts(sw, flood_ports)) { - var flood_relay = not relay_ports.is_empty() in - var flood_reports = not flood_report_ports.is_empty() in - var flood_static = not flood_ports.is_empty() in - var igmp_act = { - if (flood_reports) { - var mrouter_static = json_escape(mC_MROUTER_STATIC().0); - i"clone { " - "outport = ${mrouter_static}; " - "output; " - "};igmp;" - } else { - i"igmp;" - } - } in { - /* Punt IGMP traffic to controller. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 100, - .__match = i"ip4 && ip.proto == 2", - .actions = i"${igmp_act}", - .io_port = None, - .controller_meter = controller_meter, - .stage_hint = 0); - - /* Punt MLD traffic to controller. */ - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 100, - .__match = i"mldv1 || mldv2", - .actions = igmp_act, - .io_port = None, - .controller_meter = controller_meter, - .stage_hint = 0); - - /* Flood all IP multicast traffic destined to 224.0.0.X to - * all ports - RFC 4541, section 2.1.2, item 2. - */ - var flood = json_escape(mC_FLOOD().0) in - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 85, - .__match = i"ip4.mcast && ip4.dst == 224.0.0.0/24", - .actions = i"outport = ${flood}; output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Flood all IPv6 multicast traffic destined to reserved - * multicast IPs (RFC 4291, 2.7.1). 
- */ - var flood = json_escape(mC_FLOOD().0) in - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 85, - .__match = i"ip6.mcast_flood", - .actions = i"outport = ${flood}; output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Forward uregistered IP multicast to routers with relay - * enabled and to any ports configured to flood IP - * multicast traffic. If configured to flood unregistered - * traffic this will be handled by the L2 multicast flow. - */ - if (not mcast_cfg.flood_unreg) { - var relay_act = { - if (flood_relay) { - var rtr_flood = json_escape(mC_MROUTER_FLOOD().0); - "clone { " - "outport = ${rtr_flood}; " - "output; " - "}; " - } else { - "" - } - } in - var static_act = { - if (flood_static) { - var mc_static = json_escape(mC_STATIC().0); - "outport =${mc_static}; output;" - } else { - "" - } - } in - var drop_act = { - if (not flood_relay and not flood_static) { - "drop;" - } else { - "" - } - } in - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 80, - .__match = i"ip4.mcast || ip6.mcast", - .actions = i"${relay_act}${static_act}${drop_act}", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - } - } - } - } - } -} - -/* Ingress table L2_LKUP: Add IP multicast flows learnt from IGMP/MLD (priority - * 90). */ -for (IgmpSwitchMulticastGroup(.address = address, .switch = sw)) { - /* RFC 4541, section 2.1.2, item 2: Skip groups in the 224.0.0.X - * range. - * - * RFC 4291, section 2.7.1: Skip groups that correspond to all - * hosts. 
- */ - Some{var ip} = ip46_parse(address.ival()) in - (var skip_address) = match (ip) { - IPv4{ipv4} -> ipv4.is_local_multicast(), - IPv6{ipv6} -> ipv6.is_all_hosts() - } in - var ipX = ip.ipX() in - for (SwitchMcastFloodRelayPorts(sw, relay_ports) if not skip_address) { - for (SwitchMcastFloodPorts(sw, flood_ports)) { - var flood_relay = not relay_ports.is_empty() in - var flood_static = not flood_ports.is_empty() in - var mc_rtr_flood = json_escape(mC_MROUTER_FLOOD().0) in - var mc_static = json_escape(mC_STATIC().0) in - var relay_act = { - if (flood_relay) { - "clone { " - "outport = ${mc_rtr_flood}; output; " - "};" - } else { - "" - } - } in - var static_act = { - if (flood_static) { - "clone { " - "outport =${mc_static}; " - "output; " - "};" - } else { - "" - } - } in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 90, - .__match = i"eth.mcast && ${ipX} && ${ipX}.dst == ${address}", - .actions = - i"${relay_act} ${static_act} outport = \"${address}\"; " - "output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - } - } -} - -/* Table EXTERNAL_PORT: External port. Drop ARP request for router ips from - * external ports on chassis not binding those ports. This makes the router - * pipeline to be run only on the chassis binding the external ports. - * - * For an external port X on logical switch LS, if X is not resident on this - * chassis, drop ARP requests arriving on localnet ports from X's Ethernet - * address, if the ARP request is asking to translate the IP address of a - * router port on LS. 
*/ -Flow(.logical_datapath = sp.sw._uuid, - .stage = s_SWITCH_IN_EXTERNAL_PORT(), - .priority = 100, - .__match = (i"inport == ${json_escape(localnet_port.1)} && " - "eth.src == ${lp_addr.ea} && " - "!is_chassis_resident(${sp.json_name}) && " - "arp.tpa == ${rp_addr.addr} && arp.op == 1"), - .actions = i"drop;", - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = Some{localnet_port.1}, - .controller_meter = None) :- - sp in &SwitchPort(), - sp.lsp.__type == i"external", - var localnet_port = FlatMap(sp.sw.localnet_ports), - var lp_addr = FlatMap(sp.static_addresses), - rp in &SwitchPort(.sw = sp.sw), - rp.lsp.__type == i"router", - SwitchPortIPv4Address(.port = rp, .addr = rp_addr). -Flow(.logical_datapath = sp.sw._uuid, - .stage = s_SWITCH_IN_EXTERNAL_PORT(), - .priority = 100, - .__match = (i"inport == ${json_escape(localnet_port.1)} && " - "eth.src == ${lp_addr.ea} && " - "!is_chassis_resident(${sp.json_name}) && " - "nd_ns && ip6.dst == {${rp_addr.addr}, ${rp_addr.solicited_node()}} && " - "nd.target == ${rp_addr.addr}"), - .actions = i"drop;", - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = Some{localnet_port.1}, - .controller_meter = None) :- - sp in &SwitchPort(), - sp.lsp.__type == i"external", - var localnet_port = FlatMap(sp.sw.localnet_ports), - var lp_addr = FlatMap(sp.static_addresses), - rp in &SwitchPort(.sw = sp.sw), - rp.lsp.__type == i"router", - SwitchPortIPv6Address(.port = rp, .addr = rp_addr). 
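The ND drop flow above matches on `rp_addr.solicited_node()`. The solicited-node multicast address is defined by RFC 4291: the prefix ff02::1:ff00:0/104 with the low 24 bits of the unicast address appended. A small Python illustration of that computation (not the DDlog helper itself):

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast address for an IPv6 unicast address
    (RFC 4291): ff02::1:ff00:0/104 plus the low 24 bits of 'addr'."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))
```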
-Flow(.logical_datapath = sp.sw._uuid, - .stage = s_SWITCH_IN_EXTERNAL_PORT(), - .priority = 100, - .__match = (i"inport == ${json_escape(localnet_port.1)} && " - "eth.src == ${lp_addr.ea} && " - "eth.dst == ${ea} && " - "!is_chassis_resident(${sp.json_name})"), - .actions = i"drop;", - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = Some{localnet_port.1}, - .controller_meter = None) :- - sp in &SwitchPort(), - sp.lsp.__type == i"external", - var localnet_port = FlatMap(sp.sw.localnet_ports), - var lp_addr = FlatMap(sp.static_addresses), - rp in &SwitchPort(.sw = sp.sw), - rp.lsp.__type == i"router", - SwitchPortAddresses(.port = rp, .addrs = LPortAddress{.ea = ea}). - -/* Ingress table L2_LKUP: Destination lookup, broadcast and multicast handling - * (priority 100). */ -for (ls in &nb::Logical_Switch) { - var mc_flood = json_escape(mC_FLOOD().0) in - Flow(.logical_datapath = ls._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 70, - .__match = i"eth.mcast", - .actions = i"outport = ${mc_flood}; output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Ingress table L2_LKUP: Destination lookup, unicast handling (priority 50). -*/ -for (SwitchPortStaticAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, - .addrs = addrs) - if lsp.__type != i"external") { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 50, - .__match = i"eth.dst == ${addrs.ea}", - .actions = i"outport = ${json_name}; output;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = None) -} - -/* - * Ingress table L2_LKUP: Flows that flood self originated ARP/ND packets in the - * switching domain. - */ -/* Self originated ARP requests/ND need to be flooded to the L2 domain - * (except on router ports). Determine that packets are self originated - * by also matching on source MAC. Matching on ingress port is not - * reliable in case this is a VLAN-backed network. 
- * Priority: 75. - */ - -/* Returns 'true' if the IP 'addr' is on the same subnet with one of the - * IPs configured on the router port. - */ -function lrouter_port_ip_reachable(rp: Intern<RouterPort>, addr: v46_ip): bool { - match (addr) { - IPv4{ipv4} -> { - for (na in rp.networks.ipv4_addrs) { - if ((ipv4, na.addr).same_network(na.netmask())) { - return true - } - } - }, - IPv6{ipv6} -> { - for (na in rp.networks.ipv6_addrs) { - if ((ipv6, na.addr).same_network(na.netmask())) { - return true - } - } - } - }; - false -} -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 75, - .__match = __match, - .actions = actions, - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - sp in &SwitchPort(.sw = sw@&Switch{.has_non_router_port = true}, .peer = Some{rp}), - rp.is_enabled(), - var eth_src_set = { - var eth_src_set = set_singleton(i"${rp.networks.ea}"); - for (nat in rp.router.nats) { - match (nat.nat.external_mac) { - Some{mac} -> - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { - eth_src_set.insert(mac) - } else (), - _ -> () - } - }; - eth_src_set - }, - var eth_src = "{" ++ eth_src_set.to_vec().join(", ") ++ "}", - var __match = i"eth.src == ${eth_src} && (arp.op == 1 || nd_ns)", - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), - var actions = i"outport = ${mc_flood_l2}; output;". - -/* Forward ARP requests for owned IP addresses (L3, VIP, NAT) only to this - * router port. - * Priority: 80. 
- */ -function get_arp_forward_ips(rp: Intern<RouterPort>, lbips: Intern<LogicalRouterLBIPs>): - (Set<istring>, Set<istring>, Set<istring>, Set<istring>) = -{ - var reachable_ips_v4 = set_empty(); - var reachable_ips_v6 = set_empty(); - var unreachable_ips_v4 = set_empty(); - var unreachable_ips_v6 = set_empty(); - - (var lb_ips_v4, var lb_ips_v6) - = get_router_load_balancer_ips(lbips, false); - for (a in lb_ips_v4) { - /* Check if the ovn port has a network configured on which we could - * expect ARP requests for the LB VIP. - */ - match (ip_parse(a.ival())) { - Some{ipv4} -> if (lrouter_port_ip_reachable(rp, IPv4{ipv4})) { - reachable_ips_v4.insert(a) - } else { - unreachable_ips_v4.insert(a) - }, - _ -> () - } - }; - for (a in lb_ips_v6) { - /* Check if the ovn port has a network configured on which we could - * expect NS requests for the LB VIP. - */ - match (ipv6_parse(a.ival())) { - Some{ipv6} -> if (lrouter_port_ip_reachable(rp, IPv6{ipv6})) { - reachable_ips_v6.insert(a) - } else { - unreachable_ips_v6.insert(a) - }, - _ -> () - } - }; - - for (nat in rp.router.nats) { - if (nat.nat.__type != i"snat") { - /* Check if the ovn port has a network configured on which we could - * expect ARP requests/NS for the DNAT external_ip. 
- */ - if (lrouter_port_ip_reachable(rp, nat.external_ip)) { - match (nat.external_ip) { - IPv4{_} -> reachable_ips_v4.insert(nat.nat.external_ip), - IPv6{_} -> reachable_ips_v6.insert(nat.nat.external_ip) - } - } else { - match (nat.external_ip) { - IPv4{_} -> unreachable_ips_v4.insert(nat.nat.external_ip), - IPv6{_} -> unreachable_ips_v6.insert(nat.nat.external_ip), - } - } - } - }; - - for (a in rp.networks.ipv4_addrs) { - reachable_ips_v4.insert(i"${a.addr}") - }; - for (a in rp.networks.ipv6_addrs) { - reachable_ips_v6.insert(i"${a.addr}") - }; - - (reachable_ips_v4, reachable_ips_v6, unreachable_ips_v4, unreachable_ips_v6) -} - -relation &SwitchPortARPForwards( - port: Intern<SwitchPort>, - reachable_ips_v4: Set<istring>, - reachable_ips_v6: Set<istring>, - unreachable_ips_v4: Set<istring>, - unreachable_ips_v6: Set<istring> -) - -&SwitchPortARPForwards(.port = port, - .reachable_ips_v4 = reachable_ips_v4, - .reachable_ips_v6 = reachable_ips_v6, - .unreachable_ips_v4 = unreachable_ips_v4, - .unreachable_ips_v6 = unreachable_ips_v6) :- - port in &SwitchPort(.peer = Some{rp@&RouterPort{.enabled = true}}), - lbips in &LogicalRouterLBIPs(.lr = rp.router._uuid), - (var reachable_ips_v4, var reachable_ips_v6, var unreachable_ips_v4, var unreachable_ips_v6) = get_arp_forward_ips(rp, lbips). - -/* Packets received from VXLAN tunnels have already been through the - * router pipeline so we should skip them. Normally this is done by the - * multicast_group implementation (VXLAN packets skip table 32 which - * delivers to patch ports) but we're bypassing multicast_groups. - * (This is why we match against fLAGBIT_NOT_VXLAN() here.) 
- */ -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 80, - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", - .actions = if (sw.has_non_router_port) { - i"clone {outport = ${sp.json_name}; output; }; " - "outport = ${mc_flood_l2}; output;" - } else { - i"outport = ${sp.json_name}; output;" - }, - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v4 = ips_v4), - var ipv4 = FlatMap(ips_v4). -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 80, - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", - .actions = if (sw.has_non_router_port) { - i"clone {outport = ${sp.json_name}; output; }; " - "outport = ${mc_flood_l2}; output;" - } else { - i"outport = ${sp.json_name}; output;" - }, - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - var mc_flood_l2 = json_escape(mC_FLOOD_L2().0), - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .reachable_ips_v6 = ips_v6), - var ipv6 = FlatMap(ips_v6). - -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 90, - .__match = i"${fLAGBIT_NOT_VXLAN()} && arp.op == 1 && arp.tpa == ${ipv4}", - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v4 = ips_v4), - var ipv4 = FlatMap(ips_v4). 
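The SwitchPortARPForwards relation above partitions VIP and NAT addresses into "reachable" (on a subnet configured on the peer router port, priority-80 flows) and "unreachable" (priority-90 flood flows). A hedged Python sketch of that partitioning logic, with a simplified port representation as "ip/plen" strings (the real input is the RouterPort networks record):

```python
import ipaddress

def partition_arp_ips(port_networks, candidate_ips):
    """Split addresses into (reachable, unreachable) relative to the
    subnets configured on a router port, mirroring the
    reachable/unreachable sets built by get_arp_forward_ips above."""
    nets = [ipaddress.ip_interface(n).network for n in port_networks]
    reachable, unreachable = set(), set()
    for a in candidate_ips:
        ip = ipaddress.ip_address(a)
        if any(ip.version == n.version and ip in n for n in nets):
            reachable.add(a)
        else:
            unreachable.add(a)
    return reachable, unreachable
```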
-Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 90, - .__match = i"${fLAGBIT_NOT_VXLAN()} && nd_ns && nd.target == ${ipv6}", - .actions = actions, - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - var actions = i"outport = ${json_escape(mC_FLOOD().0)}; output;", - &SwitchPortARPForwards(.port = sp@&SwitchPort{.sw = sw}, .unreachable_ips_v6 = ips_v6), - var ipv6 = FlatMap(ips_v6). - -for (SwitchPortNewDynamicAddress(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, - .address = Some{addrs}) - if lsp.__type != i"external") { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 50, - .__match = i"eth.dst == ${addrs.ea}", - .actions = i"outport = ${json_name}; output;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = None) -} - -for (&SwitchPort(.lsp = lsp, - .json_name = json_name, - .sw = sw, - .peer = Some{&RouterPort{.lrp = lrp, - .is_redirect = is_redirect, - .router = &Router{._uuid = lr_uuid, - .l3dgw_ports = l3dgw_ports}}}) - if (lsp.addresses.contains(i"router") and lsp.__type != i"external")) -{ - Some{var mac} = scan_eth_addr(lrp.mac.ival()) in { - var add_chassis_resident_check = - not sw.localnet_ports.is_empty() and - (/* The peer of this port represents a distributed - * gateway port. The destination lookup flow for the - * router's distributed gateway port MAC address should - * only be programmed on the "redirect-chassis". */ - is_redirect or - /* Check if the option 'reside-on-redirect-chassis' - * is set to true on the peer port. If set to true - * and if the logical switch has a localnet port, it - * means the router pipeline for the packets from - * this logical switch should be run on the chassis - * hosting the gateway port. 
- */ - lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)) in - var __match = if (add_chassis_resident_check) { - var redirect_port_name = if (is_redirect) { - json_escape(chassis_redirect_name(lrp.name)) - } else { - match (l3dgw_ports.nth(0)) { - Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name)), - None -> "" - } - }; - /* The destination lookup flow for the router's - * distributed gateway port MAC address should only be - * programmed on the "redirect-chassis". */ - i"eth.dst == ${mac} && is_chassis_resident(${redirect_port_name})" - } else { - i"eth.dst == ${mac}" - } in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 50, - .__match = __match, - .actions = i"outport = ${json_name}; output;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = None); - - /* Add ethernet addresses specified in NAT rules on - * distributed logical routers. */ - if (is_redirect) { - for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) { - if (nat.nat.__type == i"dnat_and_snat") { - Some{var lport} = nat.nat.logical_port in - Some{var emac} = nat.nat.external_mac in - Some{var nat_mac} = eth_addr_from_string(emac.ival()) in - var __match = i"eth.dst == ${nat_mac} && is_chassis_resident(${json_escape(lport)})" in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 50, - .__match = __match, - .actions = i"outport = ${json_name}; output;", - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - } - } - } -} -// FIXME: do we care about this? -/* } else { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1); - - VLOG_INFO_RL(&rl, - "%s: invalid syntax '%s' in addresses column", - op->nbsp->name, op->nbsp->addresses[i]); - }*/ - -/* Ingress table L2_LKUP and L2_UNKNOWN: Destination lookup for unknown MACs (priority 0). 
*/ -for (sw in &Switch(._uuid = ls_uuid)) { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_LKUP(), - .priority = 0, - .__match = i"1", - .actions = i"outport = get_fdb(eth.dst); next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_UNKNOWN(), - .priority = 50, - .__match = i"outport == \"none\"", - .actions = if (sw.has_unknown_ports) { - var mc_unknown = json_escape(mC_UNKNOWN().0); - i"outport = ${mc_unknown}; output;" - } else { - i"drop;" - }, - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_L2_UNKNOWN(), - .priority = 0, - .__match = i"1", - .actions = i"output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Egress tables PORT_SEC_IP: Egress port security - IP (priority 0) - * Egress table PORT_SEC_L2: Egress port security L2 - multicast/broadcast (priority 100). */ -for (&Switch(._uuid = ls_uuid)) { - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_PORT_SEC_IP(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_OUT_PORT_SEC_L2(), - .priority = 100, - .__match = i"eth.mcast", - .actions = i"output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_LOOKUP_FDB(), - .priority = 100, - .__match = i"inport == ${sp.json_name}", - .actions = i"$[rEGBIT_LKUP_FDB()} = lookup_fdb(inport, eth.src); next;", - .stage_hint = stage_hint(lsp_uuid), - .io_port = Some{sp.lsp.name}, - .controller_meter = None), -Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_LOOKUP_FDB(), - .priority = 100, - .__match = i"inport == ${sp.json_name} && ${rEGBIT_LKUP_FDB()} == 0", - .actions = i"put_fdb(inport, eth.src); next;", - .stage_hint = 
stage_hint(lsp_uuid), - .io_port = Some{sp.lsp.name}, - .controller_meter = None) :- - LogicalSwitchPortWithUnknownAddress(ls_uuid, lsp_uuid), - sp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{._uuid = lsp_uuid, .__type = i""}, - .ps_addresses = vec_empty()). - -Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_LOOKUP_FDB(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None), -Flow(.logical_datapath = ls_uuid, - .stage = s_SWITCH_IN_PUT_FDB(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - &Switch(._uuid = ls_uuid). - -/* Egress table PORT_SEC_IP: Egress port security - IP (priorities 90 and 80) - * if port security enabled. - * - * Egress table PORT_SEC_L2: Egress port security - L2 (priorities 50 and 150). - * - * Priority 50 rules implement port security for enabled logical port. - * - * Priority 150 rules drop packets to disabled logical ports, so that they - * don't even receive multicast or broadcast packets. */ -Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_PORT_SEC_L2(), - .priority = 50, - .__match = __match, - .actions = i"${queue_action}output;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) :- - &SwitchPort(.sw = sw, .lsp = lsp, .json_name = json_name, .ps_eth_addresses = ps_eth_addresses), - lsp.is_enabled(), - lsp.__type != i"external", - var __match = if (ps_eth_addresses.is_empty()) { - i"outport == ${json_name}" - } else { - i"outport == ${json_name} && eth.dst == {${ps_eth_addresses.join(\" \")}}" - }, - pbinding in sb::Out_Port_Binding(.logical_port = lsp.name), - var queue_action = match ((lsp.__type.ival(), - pbinding.options.get(i"qdisc_queue_id"))) { - ("localnet", Some{queue_id}) -> "set_queue(${queue_id});", - _ -> "" - }. 
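The LOOKUP_FDB/PUT_FDB flows above drive MAC learning for ports with unknown addresses: lookup_fdb() checks whether the source MAC is already learnt on this inport, put_fdb() records it, and get_fdb() (used in the L2 lookup default flow) resolves the destination port or yields "none". A rough Python model of those action semantics (the real FDB lives in the southbound database; this is only an illustration):

```python
def lookup_fdb(fdb, inport, mac):
    """1-bit result of the lookup_fdb action: does the FDB already map
    this MAC to this logical inport?"""
    return fdb.get(mac) == inport

def put_fdb(fdb, inport, mac):
    """put_fdb action: learn (or move) the MAC to this inport."""
    fdb[mac] = inport

def get_fdb(fdb, mac):
    """get_fdb action: destination port lookup; "none" triggers the
    priority-50 L2_UNKNOWN flow (flood to MC_UNKNOWN or drop)."""
    return fdb.get(mac, "none")
```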
- -for (&SwitchPort(.lsp = lsp, .json_name = json_name, .sw = sw)) { - if (not lsp.is_enabled() and lsp.__type != i"external") { - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_PORT_SEC_L2(), - .priority = 150, - .__match = i"outport == {$json_name}", - .actions = i"drop;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - } -} - -for (SwitchPortPSAddresses(.port = &SwitchPort{.lsp = lsp, .json_name = json_name, .sw = sw}, - .ps_addrs = ps) - if (ps.ipv4_addrs.len() > 0 or ps.ipv6_addrs.len() > 0) - and lsp.__type != i"external") -{ - if (ps.ipv4_addrs.len() > 0) { - var addrs = { - var addrs = vec_empty(); - for (addr in ps.ipv4_addrs) { - /* When the netmask is applied, if the host portion is - * non-zero, the host can only use the specified - * address. If zero, the host is allowed to use any - * address in the subnet. - */ - addrs.push(addr.match_host_or_network()); - if (addr.plen < 32 and not addr.host().is_zero()) { - addrs.push("${addr.bcast()}") - } - }; - addrs - } in - var __match = - "outport == ${json_name} && eth.dst == ${ps.ea} && ip4.dst == {255.255.255.255, 224.0.0.0/4, " ++ - addrs.join(", ") ++ "}" in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_PORT_SEC_IP(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - }; - if (ps.ipv6_addrs.len() > 0) { - var __match = "outport == ${json_name} && eth.dst == ${ps.ea}" ++ - build_port_security_ipv6_flow(Egress, ps.ea, ps.ipv6_addrs) in - Flow(.logical_datapath = sw._uuid, - .stage = s_SWITCH_OUT_PORT_SEC_IP(), - .priority = 90, - .__match = __match.intern(), - .actions = i"next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) - }; - var __match = i"outport == ${json_name} && eth.dst == ${ps.ea} && ip" in - Flow(.logical_datapath = sw._uuid, - .stage = 
s_SWITCH_OUT_PORT_SEC_IP(), - .priority = 80, - .__match = __match, - .actions = i"drop;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = Some{lsp.name}, - .controller_meter = None) -} - -/* Logical router ingress table ADMISSION: Admission control framework. */ -for (&Router(._uuid = lr_uuid)) { - /* Logical VLANs not supported. - * Broadcast/multicast source address is invalid. */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ADMISSION(), - .priority = 100, - .__match = i"vlan.present || eth.src[40]", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Logical router ingress table ADMISSION: match (priority 50). */ -for (&RouterPort(.lrp = lrp, - .json_name = json_name, - .networks = lrp_networks, - .router = router, - .is_redirect = is_redirect) - /* Drop packets from disabled logical ports (since logical flow - * tables are default-drop). */ - if lrp.is_enabled()) -{ - //if (op->derived) { - // /* No ingress packets should be received on a chassisredirect - // * port. */ - // continue; - //} - - /* Store the ethernet address of the port receiving the packet. - * This will save us from having to match on inport further down in - * the pipeline. - */ - var gw_mtu = lrp.options.get_int_def(i"gateway_mtu", 0) in - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN() in - var actions = if (gw_mtu > 0) { - "${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " - } else { - "" - } ++ "${rEG_INPORT_ETH_ADDR()} = ${lrp_networks.ea}; next;" in { - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ADMISSION(), - .priority = 50, - .__match = i"eth.mcast && inport == ${json_name}", - .actions = actions.intern(), - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None); - - var __match = - "eth.dst == ${lrp_networks.ea} && inport == ${json_name}" ++ - if is_redirect { - /* Traffic with eth.dst = l3dgw_port->lrp_networks.ea - * should only be received on the "redirect-chassis". 
*/ - " && is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})" - } else { "" } in - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ADMISSION(), - .priority = 50, - .__match = __match.intern(), - .actions = actions.intern(), - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) - } -} - - -/* Logical router ingress table LOOKUP_NEIGHBOR and - * table LEARN_NEIGHBOR. */ -/* Learn MAC bindings from ARP/IPv6 ND. - * - * For ARP packets, table LOOKUP_NEIGHBOR does a lookup for the - * (arp.spa, arp.sha) in the mac binding table using the 'lookup_arp' - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_RESULT bit. - * If "always_learn_from_arp_request" is set to false, it will also - * lookup for the (arp.spa) in the mac binding table using the - * "lookup_arp_ip" action for ARP request packets, and stores the - * result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit; or set that bit - * to "1" directly for ARP response packets. - * - * For IPv6 ND NA packets, table LOOKUP_NEIGHBOR does a lookup - * for the (nd.target, nd.tll) in the mac binding table using the - * 'lookup_nd' action and stores the result in - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If - * "always_learn_from_arp_request" is set to false, - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit is set. - * - * For IPv6 ND NS packets, table LOOKUP_NEIGHBOR does a lookup - * for the (ip6.src, nd.sll) in the mac binding table using the - * 'lookup_nd' action and stores the result in - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If - * "always_learn_from_arp_request" is set to false, it will also lookup - * for the (ip6.src) in the mac binding table using the "lookup_nd_ip" - * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT - * bit. - * - * Table LEARN_NEIGHBOR learns the mac-binding using the action - * - 'put_arp/put_nd'. Learning mac-binding is skipped if - * REGBIT_LOOKUP_NEIGHBOR_RESULT bit is set or - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT is not set. 
- * - * */ - -/* Flows for LOOKUP_NEIGHBOR. */ -for (&Router(._uuid = lr_uuid, - .learn_from_arp_request = learn_from_arp_request, - .copp = copp)) -var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in -var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in -{ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 100, - .__match = i"arp.op == 2", - .actions = - ("${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ - "next;").intern(), - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 100, - .__match = i"nd_na", - .actions = - ("${rLNR} = lookup_nd(inport, nd.target, nd.tll); " ++ - { if (learn_from_arp_request) "" else "${rLNIR} = 1; " } ++ - "next;").intern(), - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 100, - .__match = i"nd_ns", - .actions = - ("${rLNR} = lookup_nd(inport, ip6.src, nd.sll); " ++ - { if (learn_from_arp_request) "" else - "${rLNIR} = lookup_nd_ip(inport, ip6.src); " } ++ - "next;").intern(), - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* For other packet types, we can skip neighbor learning. - * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 0, - .__match = i"1", - .actions = i"${rLNR} = 1; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Flows for LEARN_NEIGHBOR. */ - /* Skip Neighbor learning if not required. 
*/ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), - .priority = 100, - .__match = - ("${rLNR} == 1" ++ - { if (learn_from_arp_request) "" else " || ${rLNIR} == 0" }).intern(), - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), - .priority = 90, - .__match = i"arp", - .actions = i"put_arp(inport, arp.spa, arp.sha); next;", - .io_port = None, - .controller_meter = copp.get(cOPP_ARP()), - .stage_hint = 0); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), - .priority = 90, - .__match = i"nd_na", - .actions = i"put_nd(inport, nd.target, nd.tll); next;", - .io_port = None, - .controller_meter = copp.get(cOPP_ND_NA()), - .stage_hint = 0); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LEARN_NEIGHBOR(), - .priority = 90, - .__match = i"nd_ns", - .actions = i"put_nd(inport, ip6.src, nd.sll); next;", - .io_port = None, - .controller_meter = copp.get(cOPP_ND_NS()), - .stage_hint = 0) -} - -/* Check if we need to learn mac-binding from ARP requests. */ -for (RouterPortNetworksIPv4Addr(rp@&RouterPort{.router = router}, addr)) { - var chassis_residence = match (rp.is_redirect) { - true -> " && is_chassis_resident(${json_escape(chassis_redirect_name(rp.lrp.name))})", - false -> "" - } in - var rLNR = rEGBIT_LOOKUP_NEIGHBOR_RESULT() in - var rLNIR = rEGBIT_LOOKUP_NEIGHBOR_IP_RESULT() in - var match0 = "inport == ${rp.json_name} && " - "arp.spa == ${addr.match_network()}" in - var match1 = "arp.op == 1" ++ chassis_residence in - var learn_from_arp_request = router.learn_from_arp_request in { - if (not learn_from_arp_request) { - /* ARP request to this address should always get learned, - * so add a priority-110 flow to set - * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT to 1. 
*/ - var __match = [match0, "arp.tpa == ${addr.addr}", match1] in - var actions = i"${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " - "${rLNIR} = 1; " - "next;" in - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 110, - .__match = __match.join(" && ").intern(), - .actions = actions, - .stage_hint = stage_hint(rp.lrp._uuid), - .io_port = None, - .controller_meter = None) - }; - - var actions = "${rLNR} = lookup_arp(inport, arp.spa, arp.sha); " ++ - { if (learn_from_arp_request) "" else - "${rLNIR} = lookup_arp_ip(inport, arp.spa); " } ++ - "next;" in - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_LOOKUP_NEIGHBOR(), - .priority = 100, - .__match = i"${match0} && ${match1}", - .actions = actions.intern(), - .stage_hint = stage_hint(rp.lrp._uuid), - .io_port = None, - .controller_meter = None) - } -} - - -/* Logical router ingress table IP_INPUT: IP Input. */ -for (router in &Router(._uuid = lr_uuid, .mcast_cfg = mcast_cfg)) { - /* L3 admission control: drop multicast and broadcast source, localhost - * source or destination, and zero network source or destination - * (priority 100). */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 100, - .__match = i"ip4.src_mcast ||" - "ip4.src == 255.255.255.255 || " - "ip4.src == 127.0.0.0/8 || " - "ip4.dst == 127.0.0.0/8 || " - "ip4.src == 0.0.0.0/8 || " - "ip4.dst == 0.0.0.0/8", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Drop ARP packets (priority 85). ARP request packets for router's own - * IPs are handled with priority-90 flows. - * Drop IPv6 ND packets (priority 85). ND NA packets for router's own - * IPs are handled with priority-90 flows. 
- */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 85, - .__match = i"arp || nd", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Allow IPv6 multicast traffic that's supposed to reach the - * router pipeline (e.g., router solicitations). - */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 84, - .__match = i"nd_rs || nd_ra", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Drop other reserved multicast. */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 83, - .__match = i"ip6.mcast_rsvd", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Allow other multicast if relay enabled (priority 82). */ - var mcast_action = { if (mcast_cfg.relay) { i"next;" } else { i"drop;" } } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 82, - .__match = i"ip4.mcast || ip6.mcast", - .actions = mcast_action, - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Drop Ethernet local broadcast. By definition this traffic should - * not be forwarded.*/ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 50, - .__match = i"eth.bcast", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* TTL discard */ - Flow( - .logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 30, - .__match = i"ip4 && ip.ttl == {0, 1}", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - /* Pass other traffic not already handled to the next table for - * routing. 
-     */
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-function format_v4_networks(networks: lport_addresses, add_bcast: bool): string =
-{
-    var addrs = vec_empty();
-    for (addr in networks.ipv4_addrs) {
-        addrs.push("${addr.addr}");
-        if (add_bcast) {
-            addrs.push("${addr.bcast()}")
-        } else ()
-    };
-    if (addrs.len() == 1) {
-        addrs.join(", ")
-    } else {
-        "{" ++ addrs.join(", ") ++ "}"
-    }
-}
-
-function format_v6_networks(networks: lport_addresses): string =
-{
-    var addrs = vec_empty();
-    for (addr in networks.ipv6_addrs) {
-        addrs.push("${addr.addr}")
-    };
-    if (addrs.len() == 1) {
-        addrs.join(", ")
-    } else {
-        "{" ++ addrs.join(", ") ++ "}"
-    }
-}
-
-/* The following relation is used in ARP reply flow generation to determine whether
- * the is_chassis_resident check must be added to the flow.
- */
-relation AddChassisResidentCheck_(lrp: uuid, add_check: bool)
-
-AddChassisResidentCheck_(lrp._uuid, res) :-
-    &SwitchPort(.peer = Some{&RouterPort{.lrp = lrp, .router = router, .is_redirect = is_redirect}},
-                .sw = sw),
-    not router.l3dgw_ports.is_empty(),
-    not sw.localnet_ports.is_empty(),
-    var res = if (is_redirect) {
-        /* Traffic with eth.src = l3dgw_port->lrp_networks.ea
-         * should only be sent from the "redirect-chassis", so that
-         * upstream MAC learning points to the "redirect-chassis".
-         * Also need to avoid generation of multiple ARP responses
-         * from different chassis. */
-        true
-    } else {
-        /* Check if the option 'reside-on-redirect-chassis'
-         * is set to true on the router port. If set to true
-         * and if peer's logical switch has a localnet port, it
-         * means the router pipeline for the packets from
-         * peer's logical switch is be run on the chassis
-         * hosting the gateway port and it should reply to the
-         * ARP requests for the router port IPs.
-         */
-        lrp.options.get_bool_def(i"reside-on-redirect-chassis", false)
-    }.
-
-
-relation AddChassisResidentCheck(lrp: uuid, add_check: bool)
-
-AddChassisResidentCheck(lrp, add_check) :-
-    AddChassisResidentCheck_(lrp, add_check).
-
-AddChassisResidentCheck(lrp, false) :-
-    &nb::Logical_Router_Port(._uuid = lrp),
-    not AddChassisResidentCheck_(lrp, _).
-
-
-/* Logical router ingress table IP_INPUT: IP Input for IPv4. */
-for (&RouterPort(.router = router, .networks = networks, .lrp = lrp)
-     if (not networks.ipv4_addrs.is_empty()))
-{
-    /* L3 admission control: drop packets that originate from an
-     * IPv4 address owned by the router or a broadcast address
-     * known to the router (priority 100). */
-    var __match = "ip4.src == " ++
-                  format_v4_networks(networks, true) ++
-                  " && ${rEGBIT_EGRESS_LOOPBACK()} == 0" in
-    Flow(.logical_datapath = router._uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 100,
-         .__match = __match.intern(),
-         .actions = i"drop;",
-         .stage_hint = stage_hint(lrp._uuid),
-         .io_port = None,
-         .controller_meter = None);
-
-    /* ICMP echo reply. These flows reply to ICMP echo requests
-     * received for the router's IP address. Since packets only
-     * get here as part of the logical router datapath, the inport
-     * (i.e. the incoming locally attached net) does not matter.
-     * The ip.ttl also does not matter (RFC1812 section 4.2.2.9) */
-    var __match = "ip4.dst == " ++
-                  format_v4_networks(networks, false) ++
-                  " && icmp4.type == 8 && icmp4.code == 0" in
-    Flow(.logical_datapath = router._uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 90,
-         .__match = __match.intern(),
-         .actions = i"ip4.dst <-> ip4.src; "
-                    "ip.ttl = 255; "
-                    "icmp4.type = 0; "
-                    "flags.loopback = 1; "
-                    "next; ",
-         .stage_hint = stage_hint(lrp._uuid),
-         .io_port = None,
-         .controller_meter = None)
-}
-
-/* Priority-90-92 flows handle ARP requests and ND packets. Most are
- * per logical port but DNAT addresses can be handled per datapath
- * for non gateway router ports.
- *
- * Priority 91 and 92 flows are added for each gateway router
- * port to handle the special cases. In case we get the packet
- * on a regular port, just reply with the port's ETH address.
- */
-LogicalRouterNatArpNdFlow(router, nat) :-
-    router in &Router(._uuid = lr),
-    LogicalRouterNAT(.lr = lr, .nat = nat@NAT{.nat = &nb::NAT{.__type = __type}}),
-    /* Skip SNAT entries for now, we handle unique SNAT IPs separately
-     * below.
-     */
-    __type != i"snat".
-/* Now handle SNAT entries too, one per unique SNAT IP. */
-LogicalRouterNatArpNdFlow(router, nat) :-
-    router in &Router(.snat_ips = snat_ips),
-    var snat_ip = FlatMap(snat_ips),
-    (var ip, var nats) = snat_ip,
-    Some{var nat} = nats.nth(0).
-
-relation LogicalRouterNatArpNdFlow(router: Intern<Router>, nat: NAT)
-LogicalRouterArpNdFlow(router, nat, None, rEG_INPORT_ETH_ADDR(), None, false, 90) :-
-    LogicalRouterNatArpNdFlow(router, nat).
-
-/* ARP / ND handling for external IP addresses.
- *
- * DNAT and SNAT IP addresses are external IP addresses that need ARP
- * handling.
- *
- * These are already taken care globally, per router. The only
- * exception is on the l3dgw_port where we might need to use a
- * different ETH address.
- */
-LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :-
-    router in &Router(._uuid = lr_uuid, .l3dgw_ports = l3dgw_ports),
-    Some {var l3dgw_port} = l3dgw_ports.nth(0),
-    LogicalRouterNAT(lr_uuid, nat),
-    /* Skip SNAT entries for now, we handle unique SNAT IPs separately
-     * below.
-     */
-    nat.nat.__type != i"snat".
-/* Now handle SNAT entries too, one per unique SNAT IP. */
-LogicalRouterPortNatArpNdFlow(router, nat, l3dgw_port) :-
-    router in &Router(.l3dgw_ports = l3dgw_ports, .snat_ips = snat_ips),
-    Some {var l3dgw_port} = l3dgw_ports.nth(0),
-    var snat_ip = FlatMap(snat_ips),
-    (var ip, var nats) = snat_ip,
-    Some{var nat} = nats.nth(0).
-
-/* Respond to ARP/NS requests on the chassis that binds the gw
- * port. Drop the ARP/NS requests on other chassis.
- */
-relation LogicalRouterPortNatArpNdFlow(router: Intern<Router>, nat: NAT, lrp: Intern<nb::Logical_Router_Port>)
-LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, Some{extra_match}, false, 92),
-LogicalRouterArpNdFlow(router, nat, Some{lrp}, mac, None, true, 91) :-
-    LogicalRouterPortNatArpNdFlow(router, nat, lrp),
-    (var mac, var extra_match) = match ((nat.external_mac, nat.nat.logical_port)) {
-        (Some{external_mac}, Some{logical_port}) -> (
-            /* distributed NAT case, use nat->external_mac */
-            external_mac.to_string().intern(),
-            /* Traffic with eth.src = nat->external_mac should only be
-             * sent from the chassis where nat->logical_port is
-             * resident, so that upstream MAC learning points to the
-             * correct chassis. Also need to avoid generation of
-             * multiple ARP responses from different chassis. */
-            i"is_chassis_resident(${json_escape(logical_port)})"
-        ),
-        _ -> (
-            rEG_INPORT_ETH_ADDR(),
-            /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s
-             * should only be sent from the gateway chassis, so that
-             * upstream MAC learning points to the gateway chassis.
-             * Also need to avoid generation of multiple ARP responses
-             * from different chassis. */
-            match (router.l3dgw_ports.nth(0)) {
-                None -> i"",
-                Some {var gw_port} -> i"is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})"
-            }
-        )
-    }.
-
-/* Now divide the ARP/ND flows into ARP and ND. */
-relation LogicalRouterArpNdFlow(
-    router: Intern<Router>,
-    nat: NAT,
-    lrp: Option<Intern<nb::Logical_Router_Port>>,
-    mac: istring,
-    extra_match: Option<istring>,
-    drop: bool,
-    priority: integer)
-LogicalRouterArpFlow(router, lrp, i"${ipv4}", mac, extra_match, drop, priority,
-                     stage_hint(nat.nat._uuid)) :-
-    LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv4{ipv4}}, lrp,
-                           mac, extra_match, drop, priority).
-LogicalRouterNdFlow(router, lrp, i"nd_na", ipv6, true, mac, extra_match, drop, priority,
-                    stage_hint(nat.nat._uuid)) :-
-    LogicalRouterArpNdFlow(router, nat@NAT{.external_ip = IPv6{ipv6}}, lrp,
-                           mac, extra_match, drop, priority).
-
-relation LogicalRouterNdFlowLB(
-    lr: Intern<Router>,
-    lrp: Option<Intern<nb::Logical_Router_Port>>,
-    ip: istring,
-    mac: istring,
-    extra_match: Option<istring>,
-    stage_hint: bit<32>)
-Flow(.logical_datapath = lr._uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = 90,
-     .__match = __match.intern(),
-     .actions = actions,
-     .stage_hint = stage_hint,
-     .io_port = None,
-     .controller_meter = lr.copp.get(cOPP_ND_NA())) :-
-    LogicalRouterNdFlowLB(.lr = lr, .lrp = lrp, .ip = ip,
-                          .mac = mac, .extra_match = extra_match,
-                          .stage_hint = stage_hint),
-    var __match = {
-        var clauses = vec_with_capacity(4);
-        match (lrp) {
-            Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"),
-            None -> ()
-        };
-        clauses.push(i"nd_ns && nd.target == ${ip}");
-        clauses.append(extra_match.to_vec());
-        clauses.join(" && ")
-    },
-    var actions =
-        i"nd_na { "
-         "eth.src = ${mac}; "
-         "ip6.src = nd.target; "
-         "nd.tll = ${mac}; "
-         "outport = inport; "
-         "flags.loopback = 1; "
-         "output; "
-         "};".
-
-relation LogicalRouterArpFlow(
-    lr: Intern<Router>,
-    lrp: Option<Intern<nb::Logical_Router_Port>>,
-    ip: istring,
-    mac: istring,
-    extra_match: Option<istring>,
-    drop: bool,
-    priority: integer,
-    stage_hint: bit<32>)
-Flow(.logical_datapath = lr._uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = priority,
-     .__match = __match.intern(),
-     .actions = actions,
-     .stage_hint = stage_hint,
-     .io_port = None,
-     .controller_meter = None) :-
-    LogicalRouterArpFlow(.lr = lr, .lrp = lrp, .ip = ip, .mac = mac,
-                         .extra_match = extra_match, .drop = drop,
-                         .priority = priority, .stage_hint = stage_hint),
-    var __match = {
-        var clauses = vec_with_capacity(3);
-        match (lrp) {
-            Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"),
-            None -> ()
-        };
-        clauses.push(i"arp.op == 1 && arp.tpa == ${ip}");
-        clauses.append(extra_match.to_vec());
-        clauses.join(" && ")
-    },
-    var actions = if (drop) {
-        i"drop;"
-    } else {
-        i"eth.dst = eth.src; "
-         "eth.src = ${mac}; "
-         "arp.op = 2; /* ARP reply */ "
-         "arp.tha = arp.sha; "
-         "arp.sha = ${mac}; "
-         "arp.tpa <-> arp.spa; "
-         "outport = inport; "
-         "flags.loopback = 1; "
-         "output;"
-    }.
-
-relation LogicalRouterNdFlow(
-    lr: Intern<Router>,
-    lrp: Option<Intern<nb::Logical_Router_Port>>,
-    action: istring,
-    ip: in6_addr,
-    sn_ip: bool,
-    mac: istring,
-    extra_match: Option<istring>,
-    drop: bool,
-    priority: integer,
-    stage_hint: bit<32>)
-Flow(.logical_datapath = lr._uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = priority,
-     .__match = __match.intern(),
-     .actions = actions,
-     .io_port = None,
-     .controller_meter = controller_meter,
-     .stage_hint = stage_hint) :-
-    LogicalRouterNdFlow(.lr = lr, .lrp = lrp, .action = action, .ip = ip,
-                        .sn_ip = sn_ip, .mac = mac, .extra_match = extra_match,
-                        .drop = drop, .priority = priority,
-                        .stage_hint = stage_hint),
-    var __match = {
-        var clauses = vec_with_capacity(4);
-        match (lrp) {
-            Some{p} -> clauses.push(i"inport == ${json_escape(p.name)}"),
-            None -> ()
-        };
-        if (sn_ip) {
-            clauses.push(i"ip6.dst == {${ip}, ${ip.solicited_node()}}")
-        };
-        clauses.push(i"nd_ns && nd.target == ${ip}");
-        clauses.append(extra_match.to_vec());
-        clauses.join(" && ")
-    },
-    (var actions, var controller_meter) = if (drop) {
-        (i"drop;", None)
-    } else {
-        (i"${action} { "
-          "eth.src = ${mac}; "
-          "ip6.src = nd.target; "
-          "nd.tll = ${mac}; "
-          "outport = inport; "
-          "flags.loopback = 1; "
-          "output; "
-          "};",
-         lr.copp.get(cOPP_ND_NA()))
-    }.
-
-/* ICMP time exceeded */
-for (RouterPortNetworksIPv4Addr(.port = &RouterPort{.lrp = lrp,
-                                                    .json_name = json_name,
-                                                    .router = router,
-                                                    .networks = networks,
-                                                    .is_redirect = is_redirect},
-                                .addr = addr))
-{
-    Flow(.logical_datapath = router._uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 40,
-         .__match = i"inport == ${json_name} && ip4 && "
-                    "ip.ttl == {0, 1} && !ip.later_frag",
-         .actions = i"icmp4 {"
-                    "eth.dst <-> eth.src; "
-                    "icmp4.type = 11; /* Time exceeded */ "
-                    "icmp4.code = 0; /* TTL exceeded in transit */ "
-                    "ip4.dst = ip4.src; "
-                    "ip4.src = ${addr.addr}; "
-                    "ip.ttl = 255; "
-                    "next; };",
-         .stage_hint = stage_hint(lrp._uuid),
-         .io_port = None,
-         .controller_meter = None);
-
-    /* ARP reply. These flows reply to ARP requests for the router's own
-     * IP address. */
-    for (AddChassisResidentCheck(lrp._uuid, add_chassis_resident_check)) {
-        var __match =
-            "arp.spa == ${addr.match_network()}" ++
-            if (add_chassis_resident_check) {
-                var redirect_port_name = if (is_redirect) {
-                    json_escape(chassis_redirect_name(lrp.name))
-                } else {
-                    match (router.l3dgw_ports.nth(0)) {
-                        None -> "",
-                        Some {var gw_port} -> json_escape(chassis_redirect_name(gw_port.name))
-                    }
-                };
-                " && is_chassis_resident(${redirect_port_name})"
-            } else "" in
-        LogicalRouterArpFlow(.lr = router,
-                             .lrp = Some{lrp},
-                             .ip = i"${addr.addr}",
-                             .mac = rEG_INPORT_ETH_ADDR(),
-                             .extra_match = Some{__match.intern()},
-                             .drop = false,
-                             .priority = 90,
-                             .stage_hint = stage_hint(lrp._uuid))
-    }
-}
-
-LogicalRouterNdFlow(.lr = r,
-                    .lrp = Some{lrp},
-                    .action = i"nd_na",
-                    .ip = ip,
-                    .sn_ip = false,
-                    .mac = rEG_INPORT_ETH_ADDR(),
-                    .extra_match = residence_check,
-                    .drop = false,
-                    .priority = 90,
-                    .stage_hint = 0) :-
-    &LBVIP(.vip_addr = IPv6{ip}, .lb = lb),
-    RouterLB(r, lb._uuid),
-    &RouterPort(.router = r, .lrp = lrp, .is_redirect = is_redirect),
-    var residence_check = match (is_redirect) {
-        true ->
-            Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"},
-        false -> None
-    }.
-
-for (&RouterPort(.lrp = lrp,
-                 .router = router@&Router{._uuid = lr_uuid},
-                 .json_name = json_name,
-                 .networks = networks,
-                 .is_redirect = is_redirect)) {
-    for (lbips in &LogicalRouterLBIPs(.lr = lr_uuid)) {
-        var residence_check = match (is_redirect) {
-            true -> Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"},
-            false -> None
-        } in {
-            var all_ipv4s = union(lbips.lb_ipv4s_routable, lbips.lb_ipv4s_unroutable) in
-            not all_ipv4s.is_empty() in
-            LogicalRouterArpFlow(.lr = router,
-                                 .lrp = Some{lrp},
-                                 .ip = i"{ ${all_ipv4s.to_vec().join(\", \")} }",
-                                 .mac = rEG_INPORT_ETH_ADDR(),
-                                 .extra_match = residence_check,
-                                 .drop = false,
-                                 .priority = 90,
-                                 .stage_hint = 0);
-
-            var all_ipv6s = union(lbips.lb_ipv6s_routable, lbips.lb_ipv6s_unroutable) in
-            not all_ipv6s.is_empty() in
-            LogicalRouterNdFlowLB(.lr = router,
-                                  .lrp = Some{lrp},
-                                  .ip = ("{ " ++ all_ipv6s.to_vec().join(", ") ++ " }").intern(),
-                                  .mac = rEG_INPORT_ETH_ADDR(),
-                                  .extra_match = residence_check,
-                                  .stage_hint = 0)
-        }
-    }
-}
-
-/* Drop IP traffic destined to router owned IPs except if the IP is
- * also a SNAT IP. Those are dropped later, in stage
- * "lr_in_arp_resolve", if unSNAT was unsuccessful.
- *
- * Priority 60.
- */
-Flow(.logical_datapath = lr_uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = 60,
-     .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(),
-     .actions = i"drop;",
-     .stage_hint = stage_hint(lrp_uuid),
-     .io_port = None,
-     .controller_meter = None) :-
-    &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid},
-                .router = &Router{.snat_ips = snat_ips,
-                                  .force_lb_snat = false,
-                                  ._uuid = lr_uuid},
-                .networks = networks),
-    var addr = FlatMap(networks.ipv4_addrs),
-    not snat_ips.contains_key(IPv4{addr.addr}),
-    var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec().
-Flow(.logical_datapath = lr_uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = 60,
-     .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(),
-     .actions = i"drop;",
-     .stage_hint = stage_hint(lrp_uuid),
-     .io_port = None,
-     .controller_meter = None) :-
-    &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid},
-                .router = &Router{.snat_ips = snat_ips,
-                                  .force_lb_snat = false,
-                                  ._uuid = lr_uuid},
-                .networks = networks),
-    var addr = FlatMap(networks.ipv6_addrs),
-    not snat_ips.contains_key(IPv6{addr.addr}),
-    var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec().
-
-for (RouterPortNetworksIPv4Addr(
-        .port = &RouterPort{
-            .router = &Router{._uuid = lr_uuid,
-                              .l3dgw_ports = vec_empty(),
-                              .is_gateway = false,
-                              .copp = copp},
-            .lrp = lrp},
-        .addr = addr))
-{
-    /* UDP/TCP/SCTP port unreachable. */
-    var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && udp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"icmp4 {"
-                    "eth.dst <-> eth.src; "
-                    "ip4.dst <-> ip4.src; "
-                    "ip.ttl = 255; "
-                    "icmp4.type = 3; "
-                    "icmp4.code = 3; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_ICMP4_ERR()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && tcp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"tcp_reset {"
-                    "eth.dst <-> eth.src; "
-                    "ip4.dst <-> ip4.src; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_TCP_RESET()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag && sctp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"sctp_abort {"
-                    "eth.dst <-> eth.src; "
-                    "ip4.dst <-> ip4.src; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_TCP_RESET()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip4 && ip4.dst == ${addr.addr} && !ip.later_frag" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 70,
-         .__match = __match,
-         .actions = i"icmp4 {"
-                    "eth.dst <-> eth.src; "
-                    "ip4.dst <-> ip4.src; "
-                    "ip.ttl = 255; "
-                    "icmp4.type = 3; "
-                    "icmp4.code = 2; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_ICMP4_ERR()),
-         .stage_hint = stage_hint(lrp._uuid))
-}
-
-/* DHCPv6 reply handling */
-Flow(.logical_datapath = rp.router._uuid,
-     .stage = s_ROUTER_IN_IP_INPUT(),
-     .priority = 100,
-     .__match = i"ip6.dst == ${ipv6_addr.addr} "
-                "&& udp.src == 547 && udp.dst == 546",
-     .actions = i"reg0 = 0; handle_dhcpv6_reply;",
-     .stage_hint = stage_hint(rp.lrp._uuid),
-     .io_port = None,
-     .controller_meter = None) :-
-    rp in &RouterPort(),
-    var ipv6_addr = FlatMap(rp.networks.ipv6_addrs).
-
-/* Logical router ingress table IP_INPUT: IP Input for IPv6. */
-for (&RouterPort(.router = router, .networks = networks, .lrp = lrp)
-     if (not networks.ipv6_addrs.is_empty()))
-{
-    //if (op->derived) {
-    //    /* No ingress packets are accepted on a chassisredirect
-    //     * port, so no need to program flows for that port. */
-    //    continue;
-    //}
-
-    /* ICMPv6 echo reply. These flows reply to echo requests
-     * received for the router's IP address. */
-    var __match = "ip6.dst == " ++
-                  format_v6_networks(networks) ++
-                  " && icmp6.type == 128 && icmp6.code == 0" in
-    Flow(.logical_datapath = router._uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 90,
-         .__match = __match.intern(),
-         .actions = i"ip6.dst <-> ip6.src; "
-                    "ip.ttl = 255; "
-                    "icmp6.type = 129; "
-                    "flags.loopback = 1; "
-                    "next; ",
-         .stage_hint = stage_hint(lrp._uuid),
-         .io_port = None,
-         .controller_meter = None)
-}
-
-/* ND reply. These flows reply to ND solicitations for the
- * router's own IP address.
- */
-for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.lrp = lrp,
-                                                    .is_redirect = is_redirect,
-                                                    .router = router,
-                                                    .networks = networks,
-                                                    .json_name = json_name},
-                                .addr = addr))
-{
-    var extra_match = if (is_redirect) {
-        /* Traffic with eth.src = l3dgw_port->lrp_networks.ea
-         * should only be sent from the gateway chassis, so that
-         * upstream MAC learning points to the gateway chassis.
-         * Also need to avoid generation of multiple ND replies
-         * from different chassis. */
-        Some{i"is_chassis_resident(${json_escape(chassis_redirect_name(lrp.name))})"}
-    } else None in
-    LogicalRouterNdFlow(.lr = router,
-                        .lrp = Some{lrp},
-                        .action = i"nd_na_router",
-                        .ip = addr.addr,
-                        .sn_ip = true,
-                        .mac = rEG_INPORT_ETH_ADDR(),
-                        .extra_match = extra_match,
-                        .drop = false,
-                        .priority = 90,
-                        .stage_hint = stage_hint(lrp._uuid))
-}
-
-/* UDP/TCP/SCTP port unreachable */
-for (RouterPortNetworksIPv6Addr(
-        .port = &RouterPort{.router = &Router{._uuid = lr_uuid,
-                                              .l3dgw_ports = vec_empty(),
-                                              .is_gateway = false,
-                                              .copp = copp},
-                            .lrp = lrp,
-                            .json_name = json_name},
-        .addr = addr))
-{
-    var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && tcp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"tcp_reset {"
-                    "eth.dst <-> eth.src; "
-                    "ip6.dst <-> ip6.src; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_TCP_RESET()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && sctp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"sctp_abort {"
-                    "eth.dst <-> eth.src; "
-                    "ip6.dst <-> ip6.src; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_TCP_RESET()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag && udp" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 80,
-         .__match = __match,
-         .actions = i"icmp6 {"
-                    "eth.dst <-> eth.src; "
-                    "ip6.dst <-> ip6.src; "
-                    "ip.ttl = 255; "
-                    "icmp6.type = 1; "
-                    "icmp6.code = 4; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_ICMP6_ERR()),
-         .stage_hint = stage_hint(lrp._uuid));
-
-    var __match = i"ip6 && ip6.dst == ${addr.addr} && !ip.later_frag" in
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 70,
-         .__match = __match,
-         .actions = i"icmp6 {"
-                    "eth.dst <-> eth.src; "
-                    "ip6.dst <-> ip6.src; "
-                    "ip.ttl = 255; "
-                    "icmp6.type = 1; "
-                    "icmp6.code = 3; "
-                    "next; };",
-         .io_port = None,
-         .controller_meter = copp.get(cOPP_ICMP6_ERR()),
-         .stage_hint = stage_hint(lrp._uuid))
-}
-
-/* ICMPv6 time exceeded */
-for (RouterPortNetworksIPv6Addr(.port = &RouterPort{.router = router,
-                                                    .lrp = lrp,
-                                                    .json_name = json_name},
-                                .addr = addr)
-     /* skip link-local address */
-     if (not addr.is_lla()))
-{
-    var __match = i"inport == ${json_name} && ip6 && "
-                   "ip6.src == ${addr.match_network()} && "
-                   "ip.ttl == {0, 1} && !ip.later_frag" in
-    var actions = i"icmp6 {"
-                   "eth.dst <-> eth.src; "
-                   "ip6.dst = ip6.src; "
-                   "ip6.src = ${addr.addr}; "
-                   "ip.ttl = 255; "
-                   "icmp6.type = 3; /* Time exceeded */ "
-                   "icmp6.code = 0; /* TTL exceeded in transit */ "
-                   "next; };" in
-    Flow(.logical_datapath = router._uuid,
-         .stage = s_ROUTER_IN_IP_INPUT(),
-         .priority = 40,
-         .__match = __match,
-         .actions = actions,
-         .io_port = None,
-         .controller_meter = router.copp.get(cOPP_ICMP6_ERR()),
-         .stage_hint = stage_hint(lrp._uuid))
-}
-
-/* NAT, Defrag and load balancing.
- */
-
-function default_allow_flow(datapath: uuid, stage: Intern<Stage>): Flow {
-    Flow{.logical_datapath = datapath,
-         .stage = stage,
-         .priority = 0,
-         .__match = i"1",
-         .actions = i"next;",
-         .io_port = None,
-         .controller_meter = None,
-         .stage_hint = 0}
-}
-for (r in &Router(._uuid = lr_uuid)) {
-    /* Packets are allowed by default. */
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DEFRAG())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_UNSNAT())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_SNAT())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DNAT())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_UNDNAT())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_POST_UNDNAT())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_EGR_LOOP())];
-    Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_ECMP_STATEFUL())];
-
-    /* Send the IPv6 NS packets to next table. When ovn-controller
-     generates IPv6 NS (for the action - nd_ns{}), the injected
-     packet would go through conntrack - which is not required. */
-    Flow(.logical_datapath = lr_uuid,
-         .stage = s_ROUTER_OUT_SNAT(),
-         .priority = 120,
-         .__match = i"nd_ns",
-         .actions = i"next;",
-         .stage_hint = 0,
-         .io_port = None,
-         .controller_meter = None)
-}
-
-for (r in &Router(._uuid = lr_uuid,
-                  .l3dgw_ports = l3dgw_ports,
-                  .is_gateway = is_gateway,
-                  .nat = nat)) {
-    for (LogicalRouterLBs(lr_uuid, lbs)) {
-        if ((l3dgw_ports.len() > 0 or is_gateway) and (not is_empty(nat) or not is_empty(lbs))) {
-            /* If the router has load balancer or DNAT rules, re-circulate every packet
-             * through the DNAT zone so that packets that need to be unDNATed in the
-             * reverse direction get unDNATed.
-             *
-             * We also commit newly initiated connections in the reply direction to the
-             * DNAT zone. This ensures that these flows are tracked. If the flow was
-             * not committed, it would produce ongoing datapath flows with the ct.new
-             * flag set. Some NICs are unable to offload these flows.
-             */
-            Flow(.logical_datapath = lr_uuid,
-                 .stage = s_ROUTER_OUT_POST_UNDNAT(),
-                 .priority = 50,
-                 .__match = i"ip && ct.new",
-                 .actions = i"ct_commit { } ; next; ",
-                 .stage_hint = 0,
-                 .io_port = None,
-                 .controller_meter = None);
-
-            Flow(.logical_datapath = lr_uuid,
-                 .stage = s_ROUTER_OUT_UNDNAT(),
-                 .priority = 50,
-                 .__match = i"ip",
-                 .actions = i"flags.loopback = 1; ct_dnat;",
-                 .stage_hint = 0,
-                 .io_port = None,
-                 .controller_meter = None)
-        }
-    }
-}
-
-Flow(.logical_datapath = lr,
-     .stage = s_ROUTER_OUT_SNAT(),
-     .priority = 120,
-     .__match = i"flags.skip_snat_for_lb == 1 && ip",
-     .actions = i"next;",
-     .stage_hint = stage_hint(lb.lb._uuid),
-     .io_port = None,
-     .controller_meter = None) :-
-    LogicalRouterLB(lr, lb),
-    lb.lb.options.get_bool_def(i"skip_snat", false)
-    .
-
-function lrouter_nat_is_stateless(nat: NAT): bool = {
-    Some{i"true"} == nat.nat.options.get(i"stateless")
-}
-
-/* Handles the match criteria and actions in logical flow
- * based on external ip based NAT rule filter.
- *
- * For ALLOWED_EXT_IPs, we will add an additional match criteria
- * of comparing ip*.src/dst with the allowed external ip address set.
- *
- * For EXEMPTED_EXT_IPs, we will have an additional logical flow
- * where we compare ip*.src/dst with the exempted external ip address set
- * and action says "next" instead of ct*.
- */
-function lrouter_nat_add_ext_ip_match(
-    router: Intern<Router>,
-    nat: NAT,
-    __match: string,
-    ipX: string,
-    is_src: bool,
-    mask: v46_ip): (string, Option<Flow>)
-{
-    var dir = if (is_src) "src" else "dst";
-    match (nat.exceptional_ext_ips) {
-        None -> ("", None),
-        Some{AllowedExtIps{__as}} -> (" && ${ipX}.${dir} == $${__as.name}", None),
-        Some{ExemptedExtIps{__as}} -> {
-            /* Priority of logical flows corresponding to exempted_ext_ips is
-             * +1 of the corresponding regular NAT rule.
-             * For example, if we have following NAT rule and we associate
-             * exempted external ips to it:
-             * "ovn-nbctl lr-nat-add router dnat_and_snat 10.15.24.139 50.0.0.11"
-             *
-             * And now we associate exempted external ip address set to it.
-             * Now corresponding to above rule we will have following logical
-             * flows:
-             * lr_out_snat...priority=162, match=(..ip4.dst == $exempt_range),
-             * action=(next;)
-             * lr_out_snat...priority=161, match=(..), action=(ct_snat(....);)
-             *
-             */
-            var priority = match (is_src) {
-                true -> {
-                    /* S_ROUTER_IN_DNAT uses priority 100 */
-                    100 + 1
-                },
-                false -> {
-                    /* S_ROUTER_OUT_SNAT uses priority (mask + 1 + 128 + 1) */
-                    var is_gw_router = router.l3dgw_ports.is_empty();
-                    var mask_1bits = mask.cidr_bits().unwrap_or(8'd0) as integer;
-                    mask_1bits + 2 + { if (not is_gw_router) 128 else 0 }
-                }
-            };
-
-            ("",
-             Some{Flow{.logical_datapath = router._uuid,
-                       .stage = if (is_src) { s_ROUTER_IN_DNAT() } else { s_ROUTER_OUT_SNAT() },
-                       .priority = priority,
-                       .__match = i"${__match} && ${ipX}.${dir} == $${__as.name}",
-                       .actions = i"next;",
-                       .stage_hint = stage_hint(nat.nat._uuid),
-                       .io_port = None,
-                       .controller_meter = None}})
-        }
-    }
-}
-
-relation LogicalRouterForceSnatFlows(
-    logical_router: uuid,
-    ips: Set<v46_ip>,
-    context: string)
-Flow(.logical_datapath = logical_router,
-     .stage = s_ROUTER_IN_UNSNAT(),
-     .priority = 110,
-     .__match = i"${ipX} && ${ipX}.dst == ${ip}",
-     .actions = i"ct_snat;",
-     .stage_hint = 0,
-     .io_port = None,
-     .controller_meter = None),
-/* Higher priority rules to force SNAT with the IP addresses
- * configured in the Gateway router. This only takes effect
- * when the packet has already been DNATed or load balanced once.
- */
-Flow(.logical_datapath = logical_router,
-     .stage = s_ROUTER_OUT_SNAT(),
-     .priority = 100,
-     .__match = i"flags.force_snat_for_${context} == 1 && ${ipX}",
-     .actions = i"ct_snat(${ip});",
-     .stage_hint = 0,
-     .io_port = None,
-     .controller_meter = None) :-
-    LogicalRouterForceSnatFlows(.logical_router = logical_router,
-                                .ips = ips,
-                                .context = context),
-    var ip = FlatMap(ips),
-    var ipX = ip.ipX().
-
-/* Higher priority rules to force SNAT with the router port ip.
- * This only takes effect when the packet has already been
- * load balanced once. */
-for (rp in &RouterPort(.router = &Router{._uuid = lr_uuid, .options = lr_options}, .lrp = lrp)) {
-    if (lb_force_snat_router_ip(lr_options) and rp.peer != PeerNone) {
-        Some{var ipv4} = rp.networks.ipv4_addrs.nth(0) in {
-            Flow(.logical_datapath = lr_uuid,
-                 .stage = s_ROUTER_IN_UNSNAT(),
-                 .priority = 110,
-                 .__match = i"inport == ${rp.json_name} && ip4.dst == ${ipv4.addr}",
-                 .actions = i"ct_snat;",
-                 .stage_hint = 0,
-                 .io_port = None,
-                 .controller_meter = None);
-
-            Flow(.logical_datapath = lr_uuid,
-                 .stage = s_ROUTER_OUT_SNAT(),
-                 .priority = 110,
-                 .__match = i"flags.force_snat_for_lb == 1 && ip4 && outport == ${rp.json_name}",
-                 .actions = i"ct_snat(${ipv4.addr});",
-                 .stage_hint = 0,
-                 .io_port = None,
-                 .controller_meter = None);
-
-            if (rp.networks.ipv4_addrs.len() > 1) {
-                Warning["Logical router port ${rp.json_name} is configured with multiple IPv4 "
-                        "addresses. Only the first IP [${ipv4.addr}] is considered as SNAT for "
-                        "load balancer"]
-            }
-        };
-
-        /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
-         * last in the list. So add the flows only if n_ipv6_addrs > 1.
-         */
-        if (rp.networks.ipv6_addrs.len() > 1) {
-            Some{var ipv6} = rp.networks.ipv6_addrs.nth(0) in {
-                Flow(.logical_datapath = lr_uuid,
-                     .stage = s_ROUTER_IN_UNSNAT(),
-                     .priority = 110,
-                     .__match = i"inport == ${rp.json_name} && ip6.dst == ${ipv6.addr}",
-                     .actions = i"ct_snat;",
-                     .stage_hint = 0,
-                     .io_port = None,
-                     .controller_meter = None);
-
-                Flow(.logical_datapath = lr_uuid,
-                     .stage = s_ROUTER_OUT_SNAT(),
-                     .priority = 110,
-                     .__match = i"flags.force_snat_for_lb == 1 && ip6 && outport == ${rp.json_name}",
-                     .actions = i"ct_snat(${ipv6.addr});",
-                     .stage_hint = 0,
-                     .io_port = None,
-                     .controller_meter = None);
-
-                if (rp.networks.ipv6_addrs.len() > 2) {
-                    Warning["Logical router port ${rp.json_name} is configured with multiple IPv6 "
-                            "addresses. Only the first IP [${ipv6.addr}] is considered as SNAT for "
-                            "load balancer"]
-                }
-            }
-        }
-    }
-}
-
-relation VirtualLogicalPort(logical_port: Option<istring>)
-VirtualLogicalPort(Some{logical_port}) :-
-    lsp in &nb::Logical_Switch_Port(.name = logical_port, .__type = i"virtual").
-
-/* NAT rules are only valid on Gateway routers and routers with
- * l3dgw_port (router has a port with "redirect-chassis"
- * specified). */
-for (r in &Router(._uuid = lr_uuid,
-                  .l3dgw_ports = l3dgw_ports,
-                  .is_gateway = is_gateway)
-     if not l3dgw_ports.is_empty() or is_gateway)
-{
-    for (LogicalRouterNAT(.lr = lr_uuid, .nat = nat)) {
-        var ipX = nat.external_ip.ipX() in
-        var xx = nat.external_ip.xxreg() in
-        /* Check the validity of nat->logical_ip. 'logical_ip' can
-         * be a subnet when the type is "snat". */
-        Some{(_, var mask)} = ip46_parse_masked(nat.nat.logical_ip.ival()) in
-        true == match ((mask.is_all_ones(), nat.nat.__type.ival())) {
-            (_, "snat") -> true,
-            (false, _) -> {
-                warn("bad ip ${nat.nat.logical_ip} for dnat in router ${uuid2str(lr_uuid)}");
-                false
-            },
-            _ -> true
-        } in
-        /* For distributed router NAT, determine whether this NAT rule
-         * satisfies the conditions for distributed NAT processing.
*/ - var mac = match ((not l3dgw_ports.is_empty() and nat.nat.__type == i"dnat_and_snat", - nat.nat.logical_port, nat.external_mac)) { - (true, Some{_}, Some{mac}) -> Some{mac}, - _ -> None - } in - var stateless = (lrouter_nat_is_stateless(nat) - and nat.nat.__type == i"dnat_and_snat") in - { - /* Ingress UNSNAT table: It is for already established connections' - * reverse traffic. i.e., SNAT has already been done in egress - * pipeline and now the packet has entered the ingress pipeline as - * part of a reply. We undo the SNAT here. - * - * Undoing SNAT has to happen before DNAT processing. This is - * because when the packet was DNATed in ingress pipeline, it did - * not know about the possibility of eventual additional SNAT in - * egress pipeline. */ - if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") { - if (l3dgw_ports.is_empty()) { - /* Gateway router. */ - var actions = if (stateless) { - i"${ipX}.dst=${nat.nat.logical_ip}; next;" - } else { - i"ct_snat;" - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_UNSNAT(), - .priority = 90, - .__match = i"ip && ${ipX}.dst == ${nat.nat.external_ip}", - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - Some {var gwport} = l3dgw_ports.nth(0) in { - /* Distributed router. */ - - /* Traffic received on l3dgw_port is subject to NAT. */ - var __match = - "ip && ${ipX}.dst == ${nat.nat.external_ip}" - " && inport == ${json_escape(gwport.name)}" ++ - if (mac == None) { - /* Flows for NAT rules that are centralized are only - * programmed on the "redirect-chassis". 
*/ - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" - } else { "" } in - var actions = if (stateless) { - i"${ipX}.dst=${nat.nat.logical_ip}; next;" - } else { - i"ct_snat;" - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_UNSNAT(), - .priority = 100, - .__match = __match.intern(), - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - /* Ingress DNAT table: Packets enter the pipeline with destination - * IP address that needs to be DNATted from a external IP address - * to a logical IP address. */ - var ip_and_ports = "${nat.nat.logical_ip}" ++ - if (nat.nat.external_port_range != i"") { - " ${nat.nat.external_port_range}" - } else { - "" - } in - if (nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat") { - l3dgw_ports.is_empty() in - var __match = "ip && ${ipX}.dst == ${nat.nat.external_ip}" in - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( - r, nat, __match, ipX, true, mask) in - { - /* Gateway router. */ - /* Packet when it goes from the initiator to destination. - * We need to set flags.loopback because the router can - * send the packet back through the same interface. */ - Some{var f} = ext_flow in Flow[f]; - - var flag_action = - if (has_force_snat_ip(r.options, i"dnat")) { - /* Indicate to the future tables that a DNAT has taken - * place and a force SNAT needs to be done in the - * Egress SNAT table. 
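Two small pieces of string construction recur in the NAT rules above: the `ct_dnat`/`ct_snat` argument built from the logical IP plus an optional port range (`ip_and_ports`), and the `is_chassis_resident()` clause appended only for centralized NAT rules (those without an `external_mac`). A sketch, assuming `chassis_redirect_name` prepends a `cr-` prefix (hypothetical here) and omitting JSON escaping:

```python
def nat_dnat_args(logical_ip, external_port_range=""):
    """The ct_dnat/ct_snat argument: the NAT IP, plus the port range when
    one is configured, as in the ip_and_ports expression above."""
    return logical_ip + (f" {external_port_range}" if external_port_range else "")

def centralized_suffix(gw_port_name, external_mac):
    """Centralized NAT rules (no external_mac) are only programmed on the
    redirect chassis, so their match gains an is_chassis_resident() clause;
    distributed rules (external_mac set) do not."""
    if external_mac is None:
        return f' && is_chassis_resident("cr-{gw_port_name}")'
    return ""
```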
*/ - "flags.force_snat_for_dnat = 1; " - } else { "" } in - var nat_actions = if (stateless) { - "${ipX}.dst=${nat.nat.logical_ip}; next;" - } else { - "flags.loopback = 1; " - "ct_dnat(${ip_and_ports});" - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_DNAT(), - .priority = 100, - .__match = (__match ++ ext_ip_match).intern(), - .actions = (flag_action ++ nat_actions).intern(), - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - - Some {var gwport} = l3dgw_ports.nth(0) in - var __match = - "ip && ${ipX}.dst == ${nat.nat.external_ip}" - " && inport == ${json_escape(gwport.name)}" ++ - if (mac == None) { - /* Flows for NAT rules that are centralized are only - * programmed on the "redirect-chassis". */ - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" - } else { "" } in - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( - r, nat, __match, ipX, true, mask) in - { - /* Distributed router. */ - /* Traffic received on l3dgw_port is subject to NAT. */ - Some{var f} = ext_flow in Flow[f]; - - var actions = if (stateless) { - i"${ipX}.dst=${nat.nat.logical_ip}; next;" - } else { - i"ct_dnat(${ip_and_ports});" - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_DNAT(), - .priority = 100, - .__match = (__match ++ ext_ip_match).intern(), - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - /* ARP resolve for NAT IPs. 
*/ - Some {var gwport} = l3dgw_ports.nth(0) in { - var gwport_name = json_escape(gwport.name) in { - if (nat.nat.__type == i"snat") { - var __match = i"inport == ${gwport_name} && " - "${ipX}.src == ${nat.nat.external_ip}" in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 120, - .__match = __match, - .actions = i"next;", - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - - var nexthop_reg = "${xx}${rEG_NEXT_HOP()}" in - var __match = i"outport == ${gwport_name} && " - "${nexthop_reg} == ${nat.nat.external_ip}" in - var dst_mac = match (mac) { - Some{value} -> i"${value}", - None -> gwport.mac - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = __match, - .actions = i"eth.dst = ${dst_mac}; next;", - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - /* Egress UNDNAT table: It is for already established connections' - * reverse traffic. i.e., DNAT has already been done in ingress - * pipeline and now the packet has entered the egress pipeline as - * part of a reply. We undo the DNAT here. - * - * Note that this only applies for NAT on a distributed router. - */ - if ((nat.nat.__type == i"dnat" or nat.nat.__type == i"dnat_and_snat")) { - Some {var gwport} = l3dgw_ports.nth(0) in - var __match = - "ip && ${ipX}.src == ${nat.nat.logical_ip}" - " && outport == ${json_escape(gwport.name)}" ++ - if (mac == None) { - /* Flows for NAT rules that are centralized are only - * programmed on the "redirect-chassis". 
*/ - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" - } else { "" } in - var actions = - match (mac) { - Some{mac_addr} -> "eth.src = ${mac_addr}; ", - None -> "" - } ++ - if (stateless) { - "${ipX}.src=${nat.nat.external_ip}; next;" - } else { - "ct_dnat;" - } in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_UNDNAT(), - .priority = 100, - .__match = __match.intern(), - .actions = actions.intern(), - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - - /* Egress SNAT table: Packets enter the egress pipeline with - * source ip address that needs to be SNATted to a external ip - * address. */ - var ip_and_ports = "${nat.nat.external_ip}" ++ - if (nat.nat.external_port_range != i"") { - " ${nat.nat.external_port_range}" - } else { - "" - } in - if (nat.nat.__type == i"snat" or nat.nat.__type == i"dnat_and_snat") { - l3dgw_ports.is_empty() in - var __match = "ip && ${ipX}.src == ${nat.nat.logical_ip}" in - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( - r, nat, __match, ipX, false, mask) in - { - /* Gateway router. */ - Some{var f} = ext_flow in Flow[f]; - - /* The priority here is calculated such that the - * nat->logical_ip with the longest mask gets a higher - * priority. 
*/ - var actions = if (stateless) { - i"${ipX}.src=${nat.nat.external_ip}; next;" - } else { - i"ct_snat(${ip_and_ports});" - } in - Some{var plen} = mask.cidr_bits() in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_SNAT(), - .priority = plen as bit<64> + 1, - .__match = (__match ++ ext_ip_match).intern(), - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - - Some {var gwport} = l3dgw_ports.nth(0) in - var __match = - "ip && ${ipX}.src == ${nat.nat.logical_ip}" - " && outport == ${json_escape(gwport.name)}" ++ - if (mac == None) { - /* Flows for NAT rules that are centralized are only - * programmed on the "redirect-chassis". */ - " && is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})" - } else { "" } in - (var ext_ip_match, var ext_flow) = lrouter_nat_add_ext_ip_match( - r, nat, __match, ipX, false, mask) in - { - /* Distributed router. */ - Some{var f} = ext_flow in Flow[f]; - - var actions = - match (mac) { - Some{mac_addr} -> "eth.src = ${mac_addr}; ", - _ -> "" - } ++ if (stateless) { - "${ipX}.src=${nat.nat.external_ip}; next;" - } else { - "ct_snat(${ip_and_ports});" - } in - /* The priority here is calculated such that the - * nat->logical_ip with the longest mask gets a higher - * priority. */ - Some{var plen} = mask.cidr_bits() in - var priority = (plen as bit<64>) + 1 in - var centralized_boost = if (mac == None) 128 else 0 in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_SNAT(), - .priority = priority + centralized_boost, - .__match = (__match ++ ext_ip_match).intern(), - .actions = actions.intern(), - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - /* Logical router ingress table ADMISSION: - * For NAT on a distributed router, add rules allowing - * ingress traffic with eth.dst matching nat->external_mac - * on the l3dgw_port instance where nat->logical_port is - * resident. 
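The egress SNAT priority scheme above is worth spelling out: the priority is the logical-IP prefix length plus one, so a longer (more specific) mask wins, and on a distributed router centralized rules get a further +128 boost so they beat distributed rules of any prefix length. A minimal sketch:

```python
def snat_priority(prefix_len, centralized):
    """Egress SNAT flow priority: longest-prefix-match via plen + 1, with a
    +128 centralized_boost so centralized NAT rules on a distributed router
    outrank distributed ones, as in the rules above."""
    return prefix_len + 1 + (128 if centralized else 0)
```

For example a /32 distributed rule gets 33, while even a /24 centralized rule gets 153 and takes precedence.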
*/ - Some{var mac_addr} = mac in - Some{var gwport} = l3dgw_ports.nth(0) in - Some{var logical_port} = nat.nat.logical_port in - var __match = - i"eth.dst == ${mac_addr} && inport == ${json_escape(gwport.name)}" - " && is_chassis_resident(${json_escape(logical_port)})" in - /* Store the ethernet address of the port receiving the packet. - * This will save us from having to match on inport further - * down in the pipeline. - */ - var actions = i"${rEG_INPORT_ETH_ADDR()} = ${gwport.mac}; next;" in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ADMISSION(), - .priority = 50, - .__match = __match, - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None); - - /* Ingress Gateway Redirect Table: For NAT on a distributed - * router, add flows that are specific to a NAT rule. These - * flows indicate the presence of an applicable NAT rule that - * can be applied in a distributed manner. - * In particulr the IP src register and eth.src are set to NAT external IP and - * NAT external mac so the ARP request generated in the following - * stage is sent out with proper IP/MAC src addresses - */ - Some{var mac_addr} = mac in - Some{var gwport} = l3dgw_ports.nth(0) in - Some{var logical_port} = nat.nat.logical_port in - Some{var external_mac} = nat.nat.external_mac in - var __match = - i"${ipX}.src == ${nat.nat.logical_ip} && " - "outport == ${json_escape(gwport.name)} && " - "is_chassis_resident(${json_escape(logical_port)})" in - var actions = - i"eth.src = ${external_mac}; " - "${xx}${rEG_SRC()} = ${nat.nat.external_ip}; " - "next;" in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_GW_REDIRECT(), - .priority = 100, - .__match = __match, - .actions = actions, - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None); - - for (VirtualLogicalPort(nat.nat.logical_port)) { - Some{var gwport} = l3dgw_ports.nth(0) in - Flow(.logical_datapath = lr_uuid, - .stage = 
s_ROUTER_IN_GW_REDIRECT(), - .priority = 80, - .__match = i"${ipX}.src == ${nat.nat.logical_ip} && " - "outport == ${json_escape(gwport.name)}", - .actions = i"drop;", - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - }; - - /* Egress Loopback table: For NAT on a distributed router. - * If packets in the egress pipeline on the distributed - * gateway port have ip.dst matching a NAT external IP, then - * loop a clone of the packet back to the beginning of the - * ingress pipeline with inport = outport. */ - Some{var gwport} = l3dgw_ports.nth(0) in - /* Distributed router. */ - Some{var port} = match (mac) { - Some{_} -> match (nat.nat.logical_port) { - Some{name} -> Some{json_escape(name)}, - None -> None: Option<string> - }, - None -> Some{json_escape(chassis_redirect_name(gwport.name))} - } in - var __match = i"${ipX}.dst == ${nat.nat.external_ip} && outport == ${json_escape(gwport.name)} && is_chassis_resident(${port})" in - var regs = { - var regs = vec_empty(); - for (j in range_vec(0, mFF_N_LOG_REGS(), 01)) { - regs.push("reg${j} = 0; ") - }; - regs - } in - var actions = - "clone { ct_clear; " - "inport = outport; outport = \"\"; " - "eth.dst <-> eth.src; " - "flags = 0; flags.loopback = 1; " ++ - regs.join("") ++ - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " - "next(pipeline=ingress, table=0); };" in - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_EGR_LOOP(), - .priority = 100, - .__match = __match, - .actions = actions.intern(), - .stage_hint = stage_hint(nat.nat._uuid), - .io_port = None, - .controller_meter = None) - } - }; - - /* Handle force SNAT options set in the gateway router. 
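The egress-loopback action above is assembled by looping over all logical registers and zeroing each before re-injecting the clone into the ingress pipeline. A sketch of that string construction (the flag-bit register name is a placeholder, not the real `REGBIT_EGRESS_LOOPBACK` layout):

```python
def egress_loopback_actions(n_regs, egress_loopback_bit="reg9[0]"):
    """Build the clone action that loops a packet back to ingress with
    inport = outport, zeroing every logical register first, as the regs
    loop in the rule above does."""
    regs = "".join(f"reg{j} = 0; " for j in range(n_regs))
    return ("clone { ct_clear; "
            "inport = outport; outport = \"\"; "
            "eth.dst <-> eth.src; "
            "flags = 0; flags.loopback = 1; "
            + regs
            + f"{egress_loopback_bit} = 1; "
            "next(pipeline=ingress, table=0); };")
```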
*/ - if (l3dgw_ports.is_empty()) { - var dnat_force_snat_ips = get_force_snat_ip(r.options, i"dnat") in - if (not dnat_force_snat_ips.is_empty()) - LogicalRouterForceSnatFlows(.logical_router = lr_uuid, - .ips = dnat_force_snat_ips, - .context = "dnat"); - - var lb_force_snat_ips = get_force_snat_ip(r.options, i"lb") in - if (not lb_force_snat_ips.is_empty()) - LogicalRouterForceSnatFlows(.logical_router = lr_uuid, - .ips = lb_force_snat_ips, - .context = "lb") - } -} - -function nats_contain_vip(nats: Vec<NAT>, vip: v46_ip): bool { - for (nat in nats) { - if (nat.external_ip == vip) { - return true - } - }; - return false -} - -/* If there are any load balancing rules, we should send - * the packet to conntrack for defragmentation and - * tracking. This helps with two things. - * - * 1. With tracking, we can send only new connections to - * pick a DNAT ip address from a group. - * 2. If there are L4 ports in load balancing rules, we - * need the defragmentation to match on L4 ports. - * - * One of these flows must be created for each unique LB VIP address. - * We create one for each VIP:port pair; flows with the same IP and - * different port numbers will produce identical flows that will - * get merged by DDlog. 
*/ -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_DEFRAG(), - .priority = prio, - .__match = __match, - .actions = __actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), - var prio = if (port != 0) { 110 } else { 100 }, - var proto = match (lb.protocol) { - Some{proto} -> proto, - _ -> i"tcp" - }, - var proto_match = if (port != 0) { " && ${proto}" } else { "" }, - var __match = ("ip && ${ip.ipX()}.dst == ${ip}" ++ proto_match).intern(), - var actions1 = "${ip.xxreg()}${rEG_NEXT_HOP()} = ${ip}; ", - var actions2 = - if (port != 0) { - "${rEG_ORIG_TP_DPORT_ROUTER()} = ${proto}.dst; ct_dnat;" - } else { - "ct_dnat;" - }, - var __actions = (actions1 ++ actions2).intern(), - GWRouterLB(r, lb._uuid). - -/* Higher priority rules are added for load-balancing in DNAT - * table. For every match (on a VIP[:port]), we add two flows - * via add_router_lb_flow(). One flow is for specific matching - * on ct.new with an action of "ct_lb($targets);". The other - * flow is for ct.est with an action of "ct_dnat;". 
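The defrag rule above gives VIPs with an L4 port a higher priority (110) and a protocol match, while port-less VIPs get priority 100 and match on the IP alone. An illustrative sketch of that match/priority choice (protocol defaults to tcp as in the DDlog):

```python
import ipaddress

def defrag_flow(vip, port, protocol=None):
    """Per-VIP defrag flow: (priority, match), mirroring the rule above.
    A nonzero port implies L4 matching and priority 110; otherwise 100."""
    ipx = "ip4" if ipaddress.ip_address(vip).version == 4 else "ip6"
    proto = protocol or "tcp"
    prio = 110 if port else 100
    match = f"ip && {ipx}.dst == {vip}" + (f" && {proto}" if port else "")
    return prio, match
```

Flows for the same VIP with different port numbers come out identical, which is why the comment above notes they get merged by DDlog.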
*/ -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_DNAT(), - .priority = prio, - .__match = __match, - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - lbvip in &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), - var proto = match (lb.protocol) { - Some{proto} -> proto, - _ -> i"tcp" - }, - var match1 = "${ip.ipX()} && ${ip.xxreg()}${rEG_NEXT_HOP()} == ${ip}", - var match2 = if (port != 0) { - " && ${proto} && ${rEG_ORIG_TP_DPORT_ROUTER()} == ${port}" - } else { - "" - }, - var prio = if (port != 0) { 120 } else { 110 }, - var match0 = ("ct.est && " ++ match1 ++ match2 ++ " && ct_label.natted == 1").intern(), - GWRouterLB(r, lb._uuid), - var __match = match ((r.l3dgw_ports.nth(0), lbvip.backend_ips != i"" or lb.options.get_bool_def(i"reject", false))) { - (Some {var gw_port}, true) -> i"${match0} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})", - _ -> match0 - }, - var actions = match (snat_for_lb(r.options, lb)) { - SkipSNAT -> i"flags.skip_snat_for_lb = 1; next;", - ForceSNAT -> i"flags.force_snat_for_lb = 1; next;", - _ -> i"next;" - }. - -/* The load balancer vip is also present in the NAT entries. - * So add a high priority lflow to advance the the packet - * destined to the vip (and the vip port if defined) - * in the S_ROUTER_IN_UNSNAT stage. - * There seems to be an issue with ovs-vswitchd. When the new - * connection packet destined for the lb vip is received, - * it is dnat'ed in the S_ROUTER_IN_DNAT stage in the dnat - * conntrack zone. For the next packet, if it goes through - * unsnat stage, the conntrack flags are not set properly, and - * it doesn't hit the established state flows in - * S_ROUTER_IN_DNAT stage. 
*/ -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_UNSNAT(), - .priority = 120, - .__match = __match, - .actions = i"next;", - .stage_hint = stage_hint(lb._uuid), - .io_port = None, - .controller_meter = None) :- - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb), - var proto = match (lb.protocol) { - Some{proto} -> proto, - _ -> i"tcp" - }, - var port_match = if (port != 0) { " && ${proto}.dst == ${port}" } else { "" }, - var __match = ("${ip.ipX()} && ${ip.ipX()}.dst == ${ip} && ${proto}" ++ - port_match).intern(), - GWRouterLB(r, lb._uuid), - nats_contain_vip(r.nats, ip). - -/* Add logical flows to UNDNAT the load balanced reverse traffic in - * the router egress pipleine stage - S_ROUTER_OUT_UNDNAT if the logical - * router has a gateway router port associated. - */ -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_OUT_UNDNAT(), - .priority = 120, - .__match = __match, - .actions = action, - .stage_hint = stage_hint(lb._uuid), - .io_port = None, - .controller_meter = None) :- - &LBVIP(.vip_addr = ip, .vip_port = port, .lb = lb, .backends = backends), - var proto = match (lb.protocol) { - Some{proto} -> proto, - _ -> i"tcp" - }, - var conds = backends.map(|b| { - var port_match = if (b.port != 0) { - " && ${proto}.src == ${b.port}" - } else { - "" - }; - "(${b.ip.ipX()}.src == ${b.ip}" ++ port_match ++ ")" - }).join(" || "), - conds != "", - RouterLB(r, lb._uuid), - Some{var gwport} = r.l3dgw_ports.nth(0), - var __match = - i"${ip.ipX()} && (${conds}) && " - "outport == ${json_escape(gwport.name)} && " - "is_chassis_resident(${json_escape(chassis_redirect_name(gwport.name))})", - var action = match (snat_for_lb(r.options, lb)) { - SkipSNAT -> i"flags.skip_snat_for_lb = 1; ct_dnat;", - ForceSNAT -> i"flags.force_snat_for_lb = 1; ct_dnat;", - _ -> i"ct_dnat;" - }. 
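The UNDNAT match above is a disjunction over all load-balancer backends, each term matching the backend source IP and, when set, the backend source port. A sketch of the `conds` construction (IPv4 only for brevity, backends as (ip, port) pairs):

```python
def undnat_backend_match(backends, proto="tcp"):
    """Join '(ip4.src == B [&& proto.src == P])' terms with ' || ' over all
    backends, as the conds expression in the UNDNAT rule above does."""
    return " || ".join(
        "(ip4.src == {}{})".format(
            ip, f" && {proto}.src == {port}" if port else "")
        for ip, port in backends)
```

An empty result (no backends) suppresses the flow, matching the `conds != ""` guard above.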
- -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_DNAT(), - .priority = 130, - .__match = __match, - .actions = __action, - .io_port = None, - .controller_meter = r.copp.get(cOPP_EVENT_ELB()), - .stage_hint = stage_hint(lb._uuid)) :- - &LBVIP(.vip_key = lb_key, .lb = lb, .backend_ips = i""), - not lb.options.get_bool_def(i"reject", false), - LoadBalancerEmptyEvents(lb._uuid), - Some {(var __match, var __action)} = build_empty_lb_event_flow(lb_key, lb), - GWRouterLB(r, lb._uuid). - -/* Higher priority rules are added for load-balancing in DNAT - * table. For every match (on a VIP[:port]), we add two flows - * via add_router_lb_flow(). One flow is for specific matching - * on ct.new with an action of "ct_lb($targets);". The other - * flow is for ct.est with an action of "ct_dnat;". */ -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_DNAT(), - .priority = priority, - .__match = __match, - .actions = actions, - .io_port = None, - .controller_meter = meter, - .stage_hint = 0) :- - LBVIPWithStatus(lbvip@&LBVIP{.lb = lb}, up_backends), - var priority = if (lbvip.vip_port != 0) 120 else 110, - (var actions0, var reject) = build_lb_vip_actions(lbvip, up_backends, s_ROUTER_OUT_SNAT(), ""), - var actions1 = actions0.intern(), - var match0 = "ct.new && " ++ - get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, true, true, true), - var match1 = match0.intern(), - GWRouterLB(r, lb._uuid), - var actions = match ((reject, snat_for_lb(r.options, lb))) { - (false, SkipSNAT) -> i"flags.skip_snat_for_lb = 1; ${actions1}", - (false, ForceSNAT) -> i"flags.force_snat_for_lb = 1; ${actions1}", - _ -> actions1 - }, - var __match = match (r.l3dgw_ports.nth(0)) { - Some{gw_port} -> i"${match1} && is_chassis_resident(${json_escape(chassis_redirect_name(gw_port.name))})", - _ -> match1 - }, - var meter = if (reject) { - r.copp.get(cOPP_REJECT()) - } else { - None - }. 
- - -/* Defaults based on MaxRtrInterval and MinRtrInterval from RFC 4861 section - * 6.2.1 - */ -function nD_RA_MAX_INTERVAL_DEFAULT(): integer = 600 -function nD_RA_MAX_INTERVAL_RANGE(): (integer, integer) { (4, 1800) } - -function nd_ra_min_interval_default(max: integer): integer = -{ - if (max >= 9) { max / 3 } else { max * 3 / 4 } -} - -function nD_RA_MIN_INTERVAL_RANGE(max: integer): (integer, integer) = (3, ((max * 3) / 4)) - -function nD_MTU_DEFAULT(): integer = 0 - -function copy_ra_to_sb(port: RouterPort, address_mode: istring): Map<istring, istring> = -{ - var options = port.sb_options; - - options.insert(i"ipv6_ra_send_periodic", i"true"); - options.insert(i"ipv6_ra_address_mode", address_mode); - - var max_interval = port.lrp.ipv6_ra_configs - .get_int_def(i"max_interval", nD_RA_MAX_INTERVAL_DEFAULT()) - .clamp(nD_RA_MAX_INTERVAL_RANGE()); - options.insert(i"ipv6_ra_max_interval", i"${max_interval}"); - - var min_interval = port.lrp.ipv6_ra_configs - .get_int_def(i"min_interval", nd_ra_min_interval_default(max_interval)) - .clamp(nD_RA_MIN_INTERVAL_RANGE(max_interval)); - options.insert(i"ipv6_ra_min_interval", i"${min_interval}"); - - var mtu = port.lrp.ipv6_ra_configs.get_int_def(i"mtu", nD_MTU_DEFAULT()); - - /* RFC 2460 requires the MTU for IPv6 to be at least 1280 */ - if (mtu != 0 and mtu >= 1280) { - options.insert(i"ipv6_ra_mtu", i"${mtu}") - }; - - var prefixes = vec_empty(); - for (addr in port.networks.ipv6_addrs) { - if (addr.is_lla()) { - options.insert(i"ipv6_ra_src_addr", i"${addr.addr}") - } else { - prefixes.push(addr.match_network()) - } - }; - match (port.sb_options.get(i"ipv6_ra_pd_list")) { - Some{value} -> prefixes.push(value.ival()), - _ -> () - }; - options.insert(i"ipv6_ra_prefixes", prefixes.join(" ").intern()); - - match (port.lrp.ipv6_ra_configs.get(i"rdnss")) { - Some{value} -> options.insert(i"ipv6_ra_rdnss", value), - _ -> () - }; - - match (port.lrp.ipv6_ra_configs.get(i"dnssl")) { - Some{value} -> 
options.insert(i"ipv6_ra_dnssl", value), - _ -> () - }; - - options.insert(i"ipv6_ra_src_eth", i"${port.networks.ea}"); - - var prf = match (port.lrp.ipv6_ra_configs.get(i"router_preference")) { - Some{prf} -> if (prf == i"HIGH" or prf == i"LOW") prf else i"MEDIUM", - _ -> i"MEDIUM" - }; - options.insert(i"ipv6_ra_prf", prf); - - match (port.lrp.ipv6_ra_configs.get(i"route_info")) { - Some{s} -> options.insert(i"ipv6_ra_route_info", s), - _ -> () - }; - - options -} - -/* Logical router ingress table ND_RA_OPTIONS and ND_RA_RESPONSE: IPv6 Router - * Adv (RA) options and response. */ -// FIXME: do these rules apply to derived ports? -for (&RouterPort[port@RouterPort{.lrp = lrp@&nb::Logical_Router_Port{.peer = None}, - .router = router, - .json_name = json_name, - .networks = networks, - .peer = PeerSwitch{}}] - if (not networks.ipv6_addrs.is_empty())) -{ - Some{var address_mode} = lrp.ipv6_ra_configs.get(i"address_mode") in - /* FIXME: we need a nicer wat to write this */ - true == - if ((address_mode != i"slaac") and - (address_mode != i"dhcpv6_stateful") and - (address_mode != i"dhcpv6_stateless")) { - warn("Invalid address mode [${address_mode}] defined"); - false - } else { true } in - { - if (lrp.ipv6_ra_configs.get_bool_def(i"send_periodic", false)) { - RouterPortRAOptions(lrp._uuid, copy_ra_to_sb(port, address_mode)) - }; - - (true, var prefix) = - { - var add_rs_response_flow = false; - var prefix = ""; - for (addr in networks.ipv6_addrs) { - if (not addr.is_lla()) { - prefix = prefix ++ ", prefix = ${addr.match_network()}"; - add_rs_response_flow = true - } else () - }; - (add_rs_response_flow, prefix) - } in - { - var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && nd_rs" in - /* As per RFC 2460, 1280 is minimum IPv6 MTU. 
*/ - var mtu = match(lrp.ipv6_ra_configs.get(i"mtu")) { - Some{mtu_s} -> { - match (parse_dec_u64(mtu_s)) { - None -> 0, - Some{mtu} -> if (mtu >= 1280) mtu else 0 - } - }, - None -> 0 - } in - var actions0 = - "${rEGBIT_ND_RA_OPTS_RESULT()} = put_nd_ra_opts(" - "addr_mode = ${json_escape(address_mode)}, " - "slla = ${networks.ea}" ++ - if (mtu > 0) { ", mtu = ${mtu}" } else { "" } in - var router_preference = match (lrp.ipv6_ra_configs.get(i"router_preference")) { - Some{prf} -> if (prf == i"MEDIUM") { "" } else { ", router_preference = \"${prf}\"" }, - None -> "" - } in - var actions = actions0 ++ router_preference ++ prefix ++ "); next;" in - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ND_RA_OPTIONS(), - .priority = 50, - .__match = __match, - .actions = actions.intern(), - .io_port = None, - .controller_meter = router.copp.get(cOPP_ND_RA_OPTS()), - .stage_hint = stage_hint(lrp._uuid)); - - var __match = i"inport == ${json_name} && ip6.dst == ff02::2 && " - "nd_ra && ${rEGBIT_ND_RA_OPTS_RESULT()}" in - var ip6_str = networks.ea.to_ipv6_lla().string_mapped() in - var actions = i"eth.dst = eth.src; eth.src = ${networks.ea}; " - "ip6.dst = ip6.src; ip6.src = ${ip6_str}; " - "outport = inport; flags.loopback = 1; " - "output;" in - Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ND_RA_RESPONSE(), - .priority = 50, - .__match = __match, - .actions = actions, - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) - } - } -} - - -/* Logical router ingress table ND_RA_OPTIONS, ND_RA_RESPONSE: RS responder, by - * default goto next. 
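The RA parameter handling in `copy_ra_to_sb` and the RS-responder rule above follows RFC 4861's interval ranges and the IPv6 minimum MTU of 1280. A sketch of that clamping logic, with configs as a plain string-keyed dict:

```python
def ra_intervals(configs):
    """RA intervals as clamped above: max_interval defaults to 600 and is
    clamped to [4, 1800]; min_interval defaults to max/3 (or 3*max/4 for
    max < 9) and is clamped to [3, 3*max/4]."""
    max_i = int(configs.get("max_interval", 600))
    max_i = min(max(max_i, 4), 1800)
    min_default = max_i // 3 if max_i >= 9 else max_i * 3 // 4
    min_i = int(configs.get("min_interval", min_default))
    min_i = min(max(min_i, 3), max_i * 3 // 4)
    return max_i, min_i

def ra_mtu(configs):
    """MTU is only advertised when at least 1280, the IPv6 minimum
    (RFC 2460); anything smaller is treated as unset."""
    mtu = int(configs.get("mtu", 0))
    return mtu if mtu >= 1280 else 0
```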
(priority 0)*/ -for (&Router(._uuid = lr_uuid)) -{ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ND_RA_OPTIONS(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ND_RA_RESPONSE(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Proxy table that stores per-port routes. - * There routes get converted into logical flows by - * the following rule. - */ -relation Route(key: route_key, // matching criteria - port: Intern<RouterPort>, // output port - src_ip: v46_ip, // source IP address for output - gateway: Option<v46_ip>) // next hop (unless being delivered) - -function build_route_match(key: route_key) : (string, bit<32>) = -{ - var ipX = key.ip_prefix.ipX(); - - /* The priority here is calculated to implement longest-prefix-match - * routing. */ - (var dir, var priority) = match (key.policy) { - SrcIp -> ("src", key.plen * 2), - DstIp -> ("dst", (key.plen * 2) + 1) - }; - - var network = key.ip_prefix.network(key.plen); - var __match = "${ipX}.${dir} == ${network}/${key.plen}"; - - (__match, priority) -} -for (Route(.port = port, - .key = key, - .src_ip = src_ip, - .gateway = gateway)) -{ - var ipX = key.ip_prefix.ipX() in - var xx = key.ip_prefix.xxreg() in - /* IPv6 link-local addresses must be scoped to the local router port. 
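`build_route_match` above encodes longest-prefix-match routing in the flow priority: source-IP policy routes get `plen * 2` and destination-IP routes `plen * 2 + 1`, so longer prefixes always win and, at equal length, a dst-IP route beats a src-IP route. A sketch of the same computation (policy spelled as a string here instead of the DDlog variants):

```python
def build_route_match(policy, prefix, plen, ip6=False):
    """(match, priority) for a route, as build_route_match above:
    priority = plen*2 for src-ip policy, plen*2 + 1 for dst-ip."""
    ipx = "ip6" if ip6 else "ip4"
    if policy == "src-ip":
        direction, priority = "src", plen * 2
    else:
        direction, priority = "dst", plen * 2 + 1
    return f"{ipx}.{direction} == {prefix}/{plen}", priority
```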
*/ - var inport_match = match (key.ip_prefix) { - IPv6{prefix} -> if (prefix.is_lla()) { - "inport == ${port.json_name} && " - } else "", - _ -> "" - } in - (var ip_match, var priority) = build_route_match(key) in - var __match = inport_match ++ ip_match in - var nexthop = match (gateway) { - Some{gw} -> "${gw}", - None -> "${ipX}.dst" - } in - var actions = - i"${rEG_ECMP_GROUP_ID()} = 0; " - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " - "${xx}${rEG_SRC()} = ${src_ip}; " - "eth.src = ${port.networks.ea}; " - "outport = ${port.json_name}; " - "flags.loopback = 1; " - "next;" in - { - Flow(.logical_datapath = port.router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = priority as integer, - .__match = __match.intern(), - .actions = i"ip.ttl--; ${actions}", - .stage_hint = stage_hint(port.lrp._uuid), - .io_port = None, - .controller_meter = None); - - if (port.has_bfd) { - Flow(.logical_datapath = port.router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = priority as integer + 1, - .__match = i"${__match} && udp.dst == 3784", - .actions = actions, - .stage_hint = stage_hint(port.lrp._uuid), - .io_port = None, - .controller_meter = None) - } - } -} - -/* Install drop routes for all the static routes with nexthop = "discard" */ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = priority as integer, - .__match = ip_match.intern(), - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - r in RouterDiscardRoute_(.router = router, .key = key), - (var ip_match, var priority) = build_route_match(r.key). - -/* Logical router ingress table IP_ROUTING & IP_ROUTING_ECMP: IP Routing. - * - * A packet that arrives at this table is an IP packet that should be - * routed to the address in 'ip[46].dst'. 
- * - * For regular routes without ECMP, table IP_ROUTING sets outport to the - * correct output port, eth.src to the output port's MAC address, and - * '[xx]${rEG_NEXT_HOP()}' to the next-hop IP address (leaving 'ip[46].dst', the - * packet’s final destination, unchanged), and advances to the next table. - * - * For ECMP routes, i.e. multiple routes with same policy and prefix, table - * IP_ROUTING remembers ECMP group id and selects a member id, and advances - * to table IP_ROUTING_ECMP, which sets outport, eth.src, and the appropriate - * next-hop register for the selected ECMP member. - * */ -Route(key, port, src_ip, None) :- - RouterPortNetworksIPv4Addr(.port = port, .addr = addr), - var key = RouteKey{DstIp, IPv4{addr.addr}, addr.plen}, - var src_ip = IPv4{addr.addr}. - -Route(key, port, src_ip, None) :- - RouterPortNetworksIPv6Addr(.port = port, .addr = addr), - var key = RouteKey{DstIp, IPv6{addr.addr}, addr.plen}, - var src_ip = IPv6{addr.addr}. - -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_IP_ROUTING_ECMP(), - .priority = 150, - .__match = i"${rEG_ECMP_GROUP_ID()} == 0", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - r in &Router(). - -/* Convert the static routes to flows. */ -Route(key, dst.port, dst.src_ip, Some{dst.nexthop}) :- - RouterStaticRoute(.router = router, .key = key, .dsts = dsts), - dsts.size() == 1, - Some{var dst} = dsts.nth(0). - -Route(key, dst.port, dst.src_ip, None) :- - RouterStaticRouteEmptyNextHop(.router = router, .key = key, .dsts = dsts), - dsts.size() == 1, - Some{var dst} = dsts.nth(0). 
- -/* Create routes from peer to port's routable addresses */ -Route(key, peer, src_ip, None) :- - RouterPortRoutableAddresses(port, addresses), - FirstHopRouterPortRoutableAddresses(port, peer_uuid), - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), - peer in &RouterPort(.lrp = peer_lrp, .networks = networks), - Some{var src} = networks.ipv4_addrs.first(), - var src_ip = IPv4{src.addr}, - var addr = FlatMap(addresses), - var ip4_addr = FlatMap(addr.ipv4_addrs), - var key = RouteKey{DstIp, IPv4{ip4_addr.addr}, ip4_addr.plen}. - -/* This relation indicates that logical router port "port" has routable - * addresses (i.e. DNAT and Load Balancer VIPs) and that logical router - * port "peer" is reachable via a hop across a single logical switch. - */ -relation FirstHopRouterPortRoutableAddresses( - port: uuid, - peer: uuid) -FirstHopRouterPortRoutableAddresses(port_uuid, peer_uuid) :- - FirstHopLogicalRouter(r1, ls), - FirstHopLogicalRouter(r2, ls), - r1 != r2, - LogicalRouterPort(port_uuid, r1), - LogicalRouterPort(peer_uuid, r2), - RouterPortRoutableAddresses(.rport = port_uuid), - lrp in &nb::Logical_Router_Port(._uuid = port_uuid), - peer_lrp in &nb::Logical_Router_Port(._uuid = peer_uuid), - LogicalSwitchRouterPort(_, lrp.name, ls), - LogicalSwitchRouterPort(_, peer_lrp.name, ls). - -relation RouterPortRoutableAddresses( - rport: uuid, - addresses: Set<lport_addresses>) -RouterPortRoutableAddresses(port.lrp._uuid, addresses) :- - port in &RouterPort(.is_redirect = true), - lbips in &LogicalRouterLBIPs(.lr = port.router._uuid), - var addresses = get_nat_addresses(port, lbips, true).filter_map(|addrs| addrs.ival().extract_addresses()), - addresses != set_empty(). - -/* Return a vector of pairs (1, set[0]), ... (n, set[n - 1]). 
*/ -function numbered_vec(set: Set<'A>) : Vec<(bit<16>, 'A)> = { - var vec = vec_with_capacity(set.size()); - var i = 1; - for (x in set) { - vec.push((i, x)); - i = i + 1 - }; - vec -} - -relation EcmpGroup( - group_id: bit<16>, - router: Intern<Router>, - key: route_key, - dsts: Set<route_dst>, - route_match: string, // This is build_route_match(key).0 - route_priority: integer) // This is build_route_match(key).1 - -EcmpGroup(group_id, router, key, dsts, route_match, route_priority) :- - r in RouterStaticRoute(.router = router, .key = key, .dsts = dsts), - dsts.size() > 1, - var groups = (router, key, dsts).group_by(()).to_set(), - var group_id_and_group = FlatMap(numbered_vec(groups)), - (var group_id, (var router, var key, var dsts)) = group_id_and_group, - (var route_match, var route_priority0) = build_route_match(key), - var route_priority = route_priority0 as integer. - -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = route_priority, - .__match = route_match.intern(), - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - EcmpGroup(group_id, router, key, dsts, route_match, route_priority), - var all_member_ids = { - var member_ids = vec_with_capacity(dsts.size()); - for (i in range_vec(1, dsts.size()+1, 1)) { - member_ids.push("${i}") - }; - member_ids.join(", ") - }, - var actions = - i"ip.ttl--; " - "flags.loopback = 1; " - "${rEG_ECMP_GROUP_ID()} = ${group_id}; " /* XXX */ - "${rEG_ECMP_MEMBER_ID()} = select(${all_member_ids});". 
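The `numbered_vec` helper defined above assigns 1-based IDs to set elements when numbering ECMP groups and members; its behavior can be sketched in Python as (names mine):

```python
def numbered_vec(items):
    """Return [(1, x0), (2, x1), ...]: each element paired with a 1-based id,
    mirroring the DDlog helper used to number ECMP groups and members."""
    return [(i, x) for i, x in enumerate(items, start=1)]
```
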
- -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING_ECMP(), - .priority = 100, - .__match = __match, - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - EcmpGroup(group_id, router, key, dsts, _, _), - var member_id_and_dst = FlatMap(numbered_vec(dsts)), - (var member_id, var dst) = member_id_and_dst, - var xx = dst.nexthop.xxreg(), - var __match = i"${rEG_ECMP_GROUP_ID()} == ${group_id} && " - "${rEG_ECMP_MEMBER_ID()} == ${member_id}", - var actions = i"${xx}${rEG_NEXT_HOP()} = ${dst.nexthop}; " - "${xx}${rEG_SRC()} = ${dst.src_ip}; " - "eth.src = ${dst.port.networks.ea}; " - "outport = ${dst.port.json_name}; " - "next;". - -/* If symmetric ECMP replies are enabled, then packets that arrive over - * an ECMP route need to go through conntrack. - */ -relation EcmpSymmetricReply( - router: Intern<Router>, - dst: route_dst, - route_match: string, - tunkey: integer) -EcmpSymmetricReply(router, dst, route_match, tunkey) :- - EcmpGroup(.router = router, .dsts = dsts, .route_match = route_match), - router.is_gateway, - var dst = FlatMap(dsts), - dst.ecmp_symmetric_reply, - PortTunKeyAllocation(.port = dst.port.lrp._uuid, .tunkey = tunkey). - -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_DEFRAG(), - .priority = 100, - .__match = __match, - .actions = i"ct_next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - EcmpSymmetricReply(router, dst, route_match, _), - var __match = i"inport == ${dst.port.json_name} && ${route_match}". - -/* And packets that go out over an ECMP route need conntrack. - XXX this seems to exactly duplicate the above flow? */ - -/* Save src eth and inport in ct_label for packets that arrive over - * an ECMP route. 
- */ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ECMP_STATEFUL(), - .priority = 100, - .__match = __match, - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - EcmpSymmetricReply(router, dst, route_match, tunkey), - var __match = i"inport == ${dst.port.json_name} && ${route_match} && " - "(ct.new && !ct.est)", - var actions = i"ct_commit { ct_label.ecmp_reply_eth = eth.src;" - " ct_label.ecmp_reply_port = ${tunkey};}; next;". - -/* Bypass ECMP selection if we already have ct_label information - * for where to route the packet. - */ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = 300, - .__match = i"${ecmp_reply} && ${route_match}", - .actions = i"ip.ttl--; " - "flags.loopback = 1; " - "eth.src = ${dst.port.networks.ea}; " - "${xx}reg1 = ${dst.src_ip}; " - "outport = ${dst.port.json_name}; " - "next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None), -/* Egress reply traffic for symmetric ECMP routes skips router policies. */ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = 65535, - .__match = ecmp_reply, - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None), -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 200, - .__match = ecmp_reply, - .actions = i"eth.dst = ct_label.ecmp_reply_eth; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - EcmpSymmetricReply(router, dst, route_match, tunkey), - var ecmp_reply = i"ct.rpl && ct_label.ecmp_reply_port == ${tunkey}", - var xx = dst.nexthop.xxreg(). - - -/* IP Multicast lookup. Here we set the output port, adjust TTL and advance - * to next table (priority 500). - */ -/* Drop IPv6 multicast traffic that shouldn't be forwarded, - * i.e., router solicitation and router advertisement. 
- */ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = 550, - .__match = i"nd_rs || nd_ra", - .actions = i"drop;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - router in &Router(). - -for (IgmpRouterMulticastGroup(address, rtr, ports)) { - for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { - var flood_static = not flood_ports.is_empty() in - var mc_static = json_escape(mC_STATIC().0) in - var static_act = { - if (flood_static) { - "clone { " - "outport = ${mc_static}; " - "ip.ttl--; " - "next; " - "}; " - } else { - "" - } - } in - Some{var ip} = ip46_parse(address.ival()) in - var ipX = ip.ipX() in - Flow(.logical_datapath = rtr._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = 500, - .__match = i"${ipX} && ${ipX}.dst == ${address} ", - .actions = - i"${static_act}outport = ${json_escape(address)}; " - "ip.ttl--; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) - } -} - -/* If needed, flood unregistered multicast on statically configured ports. - * Priority 450. Otherwise drop any multicast traffic. - */ -for (RouterMcastFloodPorts(rtr, flood_ports) if rtr.mcast_cfg.relay) { - var mc_static = json_escape(mC_STATIC().0) in - var flood_static = not flood_ports.is_empty() in - var actions = if (flood_static) { - i"clone { " - "outport = ${mc_static}; " - "ip.ttl--; " - "next; " - "};" - } else { - i"drop;" - } in - Flow(.logical_datapath = rtr._uuid, - .stage = s_ROUTER_IN_IP_ROUTING(), - .priority = 450, - .__match = i"ip4.mcast || ip6.mcast", - .actions = actions, - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Logical router ingress table POLICY: Policy. - * - * A packet that arrives at this table is an IP packet that should be - * permitted/denied/rerouted to the address in the rule's nexthop. 
- * This table sets outport to the correct out_port, - * eth.src to the output port's MAC address, - * the appropriate register to the next-hop IP address (leaving - * 'ip[46].dst', the packet’s final destination, unchanged), and - * advances to the next table for ARP/ND resolution. */ -for (&Router(._uuid = lr_uuid)) { - /* This is a catch-all rule. It has the lowest priority (0) - * does a match-all("1") and pass-through (next) */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = 0, - .__match = i"1", - .actions = i"${rEG_ECMP_GROUP_ID()} = 0; next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_POLICY_ECMP(), - .priority = 150, - .__match = i"${rEG_ECMP_GROUP_ID()} == 0", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Convert routing policies to flows. */ -function pkt_mark_policy(options: Map<istring,istring>): string { - var pkt_mark = options.get(i"pkt_mark").and_then(parse_dec_u64).unwrap_or(0); - if (pkt_mark > 0 and pkt_mark < (1 << 32)) { - "pkt.mark = ${pkt_mark}; " - } else { - "" - } -} -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = policy.priority, - .__match = policy.__match, - .actions = actions.intern(), - .stage_hint = stage_hint(policy._uuid), - .io_port = None, - .controller_meter = None) :- - r in &Router(), - var policy_uuid = FlatMap(r.policies), - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), - policy.action == i"reroute", - Some{var nexthop_s} = match (policy.nexthops.size()) { - 0 -> policy.nexthop, - 1 -> policy.nexthops.nth(0), - _ -> None /* >1 nexthops handled separately as ECMP. 
*/ - }, - Some{var nexthop} = ip46_parse(nexthop_s.ival()), - out_port in &RouterPort(.router = r), - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), - /* - None: - VLOG_WARN_RL(&rl, "lrp_addr not found for routing policy " - " priority %"PRId64" nexthop %s", - rule->priority, rule->nexthop); - */ - var xx = src_ip.xxreg(), - var actions = (pkt_mark_policy(policy.options) ++ - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " - "${xx}${rEG_SRC()} = ${src_ip}; " - "eth.src = ${out_port.networks.ea}; " - "outport = ${out_port.json_name}; " - "flags.loopback = 1; " - "${rEG_ECMP_GROUP_ID()} = 0; " - "next;"). - -/* Returns true if the addresses in 'addrs' are all IPv4 or all IPv6, - false if they are a mix. */ -function all_same_addr_family(addrs: Set<istring>): bool { - var addr_families = set_empty(); - for (a in addrs) { - addr_families.insert(a.contains(".")) - }; - addr_families.size() <= 1 -} - -relation EcmpReroutePolicy( - r: Intern<Router>, - policy: nb::Logical_Router_Policy, - ecmp_group_id: usize -) -EcmpReroutePolicy(r, policy, ecmp_group_id) :- - r in &Router(), - var policy_uuid = FlatMap(r.policies), - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), - policy.action == i"reroute", - policy.nexthops.size() > 1, - var policies = policy.group_by(r).to_vec().map(|x| (x.nexthop, x)).sort_imm().map(|x| x.1), - var ecmp_group_ids = range_vec(1, policies.len() + 1, 1), - var numbered_policies = policies.zip(ecmp_group_ids), - var pair = FlatMap(numbered_policies), - (var policy, var ecmp_group_id) = pair, - all_same_addr_family(policy.nexthops). 
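The `all_same_addr_family` function above relies on a simple heuristic — an address string containing `.` is IPv4; the same check in Python (a sketch) is:

```python
def all_same_addr_family(addrs):
    """True if 'addrs' are all IPv4 or all IPv6, false for a mix.
    Follows the DDlog helper's heuristic: an address containing '.' is IPv4.
    An empty set trivially satisfies the check."""
    families = {"." in a for a in addrs}
    return len(families) <= 1
```
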
-Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_POLICY_ECMP(), - .priority = 100, - .__match = __match, - .actions = actions.intern(), - .stage_hint = stage_hint(policy._uuid), - .io_port = None, - .controller_meter = None) :- - EcmpReroutePolicy(r, policy, ecmp_group_id), - var member_ids = range_vec(1, policy.nexthops.size() + 1, 1), - var numbered_nexthops = policy.nexthops.map(ival).to_vec().zip(member_ids), - var pair = FlatMap(numbered_nexthops), - (var nexthop_s, var member_id) = pair, - Some{var nexthop} = ip46_parse(nexthop_s), - out_port in &RouterPort(.router = r), - Some{var src_ip} = find_lrp_member_ip(out_port.networks, nexthop), // or warn - var xx = src_ip.xxreg(), - var actions = (pkt_mark_policy(policy.options) ++ - "${xx}${rEG_NEXT_HOP()} = ${nexthop}; " - "${xx}${rEG_SRC()} = ${src_ip}; " - "eth.src = ${out_port.networks.ea}; " - "outport = ${out_port.json_name}; " - "flags.loopback = 1; " - "next;"), - var __match = i"${rEG_ECMP_GROUP_ID()} == ${ecmp_group_id} && " - "${rEG_ECMP_MEMBER_ID()} == ${member_id}". -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = policy.priority, - .__match = policy.__match, - .actions = actions, - .stage_hint = stage_hint(policy._uuid), - .io_port = None, - .controller_meter = None) :- - EcmpReroutePolicy(r, policy, ecmp_group_id), - var member_ids = { - var n = policy.nexthops.size(); - var member_ids = vec_with_capacity(n); - for (i in range_vec(1, n + 1, 1)) { - member_ids.push("${i}") - }; - member_ids.join(", ") - }, - var actions = i"${rEG_ECMP_GROUP_ID()} = ${ecmp_group_id}; " - "${rEG_ECMP_MEMBER_ID()} = select(${member_ids});". 
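Both ECMP rules above assemble a `select(...)` action over member IDs 1..n after stamping the group ID; a Python sketch of that string assembly (the register names here are placeholders for `rEG_ECMP_GROUP_ID`/`rEG_ECMP_MEMBER_ID`):

```python
def ecmp_select_actions(group_id, n_members,
                        reg_group="reg_ecmp_group", reg_member="reg_ecmp_member"):
    """Build the OVN action string that stamps the ECMP group id and lets the
    datapath hash-pick a member id from 1..n_members."""
    member_ids = ", ".join(str(i) for i in range(1, n_members + 1))
    return f"{reg_group} = {group_id}; {reg_member} = select({member_ids});"
```
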
- -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = policy.priority, - .__match = policy.__match, - .actions = i"drop;", - .stage_hint = stage_hint(policy._uuid), - .io_port = None, - .controller_meter = None) :- - r in &Router(), - var policy_uuid = FlatMap(r.policies), - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), - policy.action == i"drop". -Flow(.logical_datapath = r._uuid, - .stage = s_ROUTER_IN_POLICY(), - .priority = policy.priority, - .__match = policy.__match, - .actions = (pkt_mark_policy(policy.options) ++ "${rEG_ECMP_GROUP_ID()} = 0; next;").intern(), - .stage_hint = stage_hint(policy._uuid), - .io_port = None, - .controller_meter = None) :- - r in &Router(), - var policy_uuid = FlatMap(r.policies), - policy in nb::Logical_Router_Policy(._uuid = policy_uuid), - policy.action == i"allow". - - -/* XXX destination unreachable */ - -/* Local router ingress table ARP_RESOLVE: ARP Resolution. - * - * Multicast packets already have the outport set so just advance to next - * table (priority 500). - */ -for (&Router(._uuid = lr_uuid)) { - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 500, - .__match = i"ip4.mcast || ip6.mcast", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Local router ingress table ARP_RESOLVE: ARP Resolution. - * - * Any packet that reaches this table is an IP packet whose next-hop IP - * address is in the next-hop register. (ip4.dst is the final destination.) This table - * resolves the IP address in the next-hop register into an output port in outport and an - * Ethernet address in eth.dst. */ -// FIXME: does this apply to redirect ports? 
-for (rp in &RouterPort(.peer = PeerRouter{peer_port, _}, - .router = router, - .networks = networks)) -{ - for (&RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = peer_port}, - .json_name = peer_json_name, - .router = peer_router)) - { - /* This is a logical router port. If next-hop IP address in - * the next-hop register matches IP address of this router port, then - * the packet is intended to eventually be sent to this - * logical port. Set the destination mac address using this - * port's mac address. - * - * The packet is still in peer's logical pipeline. So the match - * should be on peer's outport. */ - if (not networks.ipv4_addrs.is_empty()) { - var __match = "outport == ${peer_json_name} && " - "${rEG_NEXT_HOP()} == " ++ - format_v4_networks(networks, false) in - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = __match.intern(), - .actions = i"eth.dst = ${networks.ea}; next;", - .stage_hint = stage_hint(rp.lrp._uuid), - .io_port = None, - .controller_meter = None) - }; - - if (not networks.ipv6_addrs.is_empty()) { - var __match = "outport == ${peer_json_name} && " - "xx${rEG_NEXT_HOP()} == " ++ - format_v6_networks(networks) in - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = __match.intern(), - .actions = i"eth.dst = ${networks.ea}; next;", - .stage_hint = stage_hint(rp.lrp._uuid), - .io_port = None, - .controller_meter = None) - } - } -} - -/* Packet is on a non gateway chassis and - * has an unresolved ARP on a network behind gateway - * chassis attached router port. 
Since the redirect type - * is "bridged", instead of calling "get_arp" - * on this node, we will redirect the packet to the gateway - * chassis, by setting the destination mac to the router port's mac.*/ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 50, - .__match = i"outport == ${rp.json_name} && " - "!is_chassis_resident(${json_escape(chassis_redirect_name(l3dgw_port.name))})", - .actions = i"eth.dst = ${rp.networks.ea}; next;", - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) :- - rp in &RouterPort(.lrp = lrp, .router = router), - Some{var l3dgw_port} = router.l3dgw_ports.nth(0), - Some{i"bridged"} == lrp.options.get(i"redirect-type"). - - -/* Drop IP traffic destined to router owned IPs. Part of it is dropped - * in stage "lr_in_ip_input" but traffic that could have been unSNATed - * but didn't match any existing session might still end up here. - * - * Priority 1. - */ -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 1, - .__match = ("ip4.dst == {" ++ match_ips.join(", ") ++ "}").intern(), - .actions = i"drop;", - .stage_hint = stage_hint(lrp_uuid), - .io_port = None, - .controller_meter = None) :- - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, - .router = &Router{.snat_ips = snat_ips, - ._uuid = lr_uuid}, - .networks = networks), - var addr = FlatMap(networks.ipv4_addrs), - snat_ips.contains_key(IPv4{addr.addr}), - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). 
-Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 1, - .__match = ("ip6.dst == {" ++ match_ips.join(", ") ++ "}").intern(), - .actions = i"drop;", - .stage_hint = stage_hint(lrp_uuid), - .io_port = None, - .controller_meter = None) :- - &RouterPort(.lrp = &nb::Logical_Router_Port{._uuid = lrp_uuid}, - .router = &Router{.snat_ips = snat_ips, - ._uuid = lr_uuid}, - .networks = networks), - var addr = FlatMap(networks.ipv6_addrs), - snat_ips.contains_key(IPv6{addr.addr}), - var match_ips = "${addr.addr}".group_by((lr_uuid, lrp_uuid)).to_vec(). - -/* Create ARP resolution flows for NAT and LB addresses for first hop - * logical routers - */ -Flow(.logical_datapath = peer.router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = ("outport == ${peer.json_name} && " ++ rEG_NEXT_HOP() ++ " == {${ips}}").intern(), - .actions = i"eth.dst = ${addr.ea}; next;", - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) :- - RouterPortRoutableAddresses(port, addresses), - FirstHopRouterPortRoutableAddresses(port, peer_uuid), - peer in &RouterPort(.lrp = lrp), - lrp._uuid == peer_uuid, - not peer.router.options.get_bool_def(i"dynamic_neigh_routers", false), - var addr = FlatMap(addresses), - var ips = addr.ipv4_addrs.map(|a| a.addr.to_string()).join(", "). - -/* This is a logical switch port that backs a VM or a container. - * Extract its addresses. For each of the address, go through all - * the router ports attached to the switch (to which this port - * connects) and if the address in question is reachable from the - * router port, add an ARP/ND entry in that router's pipeline. 
*/ -for (SwitchPortIPv4Address( - .port = &SwitchPort{.lsp = lsp, .sw = sw}, - .ea = ea, - .addr = addr) - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) -{ - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, - .peer = Some{peer@&RouterPort{.router = peer_router}})) - { - Some{_} = find_lrp_member_ip(peer.networks, IPv4{addr.addr}) in - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer.json_name} && " - "${rEG_NEXT_HOP()} == ${addr.addr}", - .actions = i"eth.dst = ${ea}; next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = None) - } -} - -for (SwitchPortIPv6Address( - .port = &SwitchPort{.lsp = lsp, .sw = sw}, - .ea = ea, - .addr = addr) - if lsp.__type != i"router" and lsp.__type != i"virtual" and lsp.is_enabled()) -{ - for (&SwitchPort(.sw = &Switch{._uuid = sw._uuid}, - .peer = Some{peer@&RouterPort{.router = peer_router}})) - { - Some{_} = find_lrp_member_ip(peer.networks, IPv6{addr.addr}) in - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer.json_name} && " - "xx${rEG_NEXT_HOP()} == ${addr.addr}", - .actions = i"eth.dst = ${ea}; next;", - .stage_hint = stage_hint(lsp._uuid), - .io_port = None, - .controller_meter = None) - } -} - -/* True if 's' is an empty set or a set that contains just an empty string, - * false otherwise. - * - * This is meant for sets of 0 or 1 elements, like the OVSDB integration - * with DDlog uses. */ -function is_empty_set_or_string(s: Option<istring>): bool = { - match (s) { - None -> true, - Some{s} -> s == i"" - } -} - -/* This is a virtual port. Add ARP replies for the virtual ip with - * the mac of the present active virtual parent. 
- * If the logical port doesn't have virtual parent set in - * Port_Binding table, then add the flow to set eth.dst to - * 00:00:00:00:00:00 and advance to next table so that ARP is - * resolved by router pipeline using the arp{} action. - * The MAC_Binding entry for the virtual ip might be invalid. */ -Flow(.logical_datapath = peer.router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer.json_name} && " - "${rEG_NEXT_HOP()} == ${virtual_ip}", - .actions = i"eth.dst = 00:00:00:00:00:00; next;", - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), - pb in sb::Port_Binding(.logical_port = sp.lsp.name), - is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None, - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). 
-Flow(.logical_datapath = peer.router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer.json_name} && " - "${rEG_NEXT_HOP()} == ${virtual_ip}", - .actions = i"eth.dst = ${address.ea}; next;", - .stage_hint = stage_hint(sp.lsp._uuid), - .io_port = None, - .controller_meter = None) :- - sp in &SwitchPort(.lsp = lsp@&nb::Logical_Switch_Port{.__type = i"virtual"}), - Some{var virtual_ip_s} = lsp.options.get(i"virtual-ip"), - Some{var virtual_parents} = lsp.options.get(i"virtual-parents"), - Some{var virtual_ip} = ip_parse(virtual_ip_s.ival()), - pb in sb::Port_Binding(.logical_port = sp.lsp.name), - not (is_empty_set_or_string(pb.virtual_parent) or pb.chassis == None), - Some{var virtual_parent} = pb.virtual_parent, - vp in &SwitchPort(.lsp = &nb::Logical_Switch_Port{.name = virtual_parent}), - var address = FlatMap(vp.static_addresses), - sp2 in &SwitchPort(.sw = sp.sw, .peer = Some{peer}), - Some{_} = find_lrp_member_ip(peer.networks, IPv4{virtual_ip}). - -/* This is a logical switch port that connects to a router. */ - -/* The peer of this switch port is the router port for which - * we need to add logical flows such that it can resolve - * ARP entries for all the other router ports connected to - * the switch in question. */ -for (&SwitchPort(.lsp = lsp1, - .peer = Some{peer1@&RouterPort{.router = peer_router}}, - .sw = sw) - if lsp1.is_enabled() and - not peer_router.options.get_bool_def(i"dynamic_neigh_routers", false)) -{ - for (&SwitchPort(.lsp = lsp2, .peer = Some{peer2}, - .sw = &Switch{._uuid = sw._uuid}) - /* Skip the router port under consideration. 
*/ - if peer2.lrp._uuid != peer1.lrp._uuid) - { - if (not peer2.networks.ipv4_addrs.is_empty()) { - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer1.json_name} && " - "${rEG_NEXT_HOP()} == ${format_v4_networks(peer2.networks, false)}", - .actions = i"eth.dst = ${peer2.networks.ea}; next;", - .stage_hint = stage_hint(lsp1._uuid), - .io_port = None, - .controller_meter = None) - }; - - if (not peer2.networks.ipv6_addrs.is_empty()) { - Flow(.logical_datapath = peer_router._uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 100, - .__match = i"outport == ${peer1.json_name} && " - "xx${rEG_NEXT_HOP()} == ${format_v6_networks(peer2.networks)}", - .actions = i"eth.dst = ${peer2.networks.ea}; next;", - .stage_hint = stage_hint(lsp1._uuid), - .io_port = None, - .controller_meter = None) - } - } -} - -for (&Router(._uuid = lr_uuid)) -{ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 0, - .__match = i"ip4", - .actions = i"get_arp(outport, ${rEG_NEXT_HOP()}); next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None); - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_RESOLVE(), - .priority = 0, - .__match = i"ip6", - .actions = i"get_nd(outport, xx${rEG_NEXT_HOP()}); next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Local router ingress table CHK_PKT_LEN: Check packet length. - * - * Any IPv4 or IPv6 packet with outport set to a router port that has - * gateway_mtu > 0 configured, check the packet length and store the result in - * the 'REGBIT_PKT_LARGER' register bit. - * - * Local router ingress table LARGER_PKTS: Handle larger packets. 
- * - * Any IPv4 or IPv6 packet with outport set to a router port that has - * gateway_mtu > 0 configured and the 'REGBIT_PKT_LARGER' register bit is set, - * generate an ICMPv4/ICMPv6 packet with type 3/2 (Destination - * Unreachable/Packet Too Big) and code 4/0 (Fragmentation needed). - */ -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_CHK_PKT_LEN(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - &Router(._uuid = lr_uuid). -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LARGER_PKTS(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) :- - &Router(._uuid = lr_uuid). -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_CHK_PKT_LEN(), - .priority = 50, - .__match = i"outport == ${gw_mtu_rp.json_name}", - .actions = i"${rEGBIT_PKT_LARGER()} = check_pkt_larger(${mtu}); " - "next;", - .stage_hint = stage_hint(gw_mtu_rp.lrp._uuid), - .io_port = None, - .controller_meter = None) :- - r in &Router(._uuid = lr_uuid), - gw_mtu_rp in &RouterPort(.router = r), - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), - gw_mtu > 0, - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(). -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LARGER_PKTS(), - .priority = 150, - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip4 && " - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", - .actions = i"icmp4_error {" - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " - "${rEGBIT_PKT_LARGER()} = 0; " - "eth.dst = ${rp.networks.ea}; " - "ip4.dst = ip4.src; " - "ip4.src = ${first_ipv4.addr}; " - "ip.ttl = 255; " - "icmp4.type = 3; /* Destination Unreachable. */ " - "icmp4.code = 4; /* Frag Needed and DF was Set. 
*/ " - /* Set icmp4.frag_mtu to gw_mtu */ - "icmp4.frag_mtu = ${gw_mtu}; " - "next(pipeline=ingress, table=0); " - "};", - .io_port = None, - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), - .stage_hint = stage_hint(rp.lrp._uuid)) :- - r in &Router(._uuid = lr_uuid), - gw_mtu_rp in &RouterPort(.router = r), - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), - gw_mtu > 0, - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), - rp in &RouterPort(.router = r), - rp.lrp != gw_mtu_rp.lrp, - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). - -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 150, - .__match = i"inport == ${rp.json_name} && ip4 && " - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", - .actions = i"icmp4_error {" - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " - "${rEGBIT_PKT_LARGER()} = 0; " - "eth.dst = ${rp.networks.ea}; " - "ip4.dst = ip4.src; " - "ip4.src = ${first_ipv4.addr}; " - "ip.ttl = 255; " - "icmp4.type = 3; /* Destination Unreachable. */ " - "icmp4.code = 4; /* Frag Needed and DF was Set. */ " - /* Set icmp4.frag_mtu to gw_mtu */ - "icmp4.frag_mtu = ${gw_mtu}; " - "next(pipeline=ingress, table=0); " - "};", - .io_port = None, - .controller_meter = r.copp.get(cOPP_ICMP4_ERR()), - .stage_hint = stage_hint(rp.lrp._uuid)) :- - r in &Router(._uuid = lr_uuid), - gw_mtu_rp in &RouterPort(.router = r), - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), - gw_mtu > 0, - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), - rp in &RouterPort(.router = r), - rp.lrp == gw_mtu_rp.lrp, - Some{var first_ipv4} = rp.networks.ipv4_addrs.nth(0). 
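The threshold passed to `check_pkt_larger` above is the configured `gateway_mtu` plus `vLAN_ETH_HEADER_LEN`, i.e. the L3 MTU plus worst-case L2 overhead; sketched in Python, assuming the standard 14-byte Ethernet header plus a 4-byte 802.1Q tag:

```python
ETH_HEADER_LEN = 14       # destination MAC + source MAC + ethertype
VLAN_HEADER_LEN = 4       # 802.1Q tag
VLAN_ETH_HEADER_LEN = ETH_HEADER_LEN + VLAN_HEADER_LEN

def check_pkt_larger_threshold(gateway_mtu):
    """Frame-size threshold handed to check_pkt_larger(): the L3 MTU plus the
    worst-case L2 header, so the ICMP 'frag needed' error fires only for frames
    that truly cannot fit."""
    return gateway_mtu + VLAN_ETH_HEADER_LEN
```
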
- -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_LARGER_PKTS(), - .priority = 150, - .__match = i"inport == ${rp.json_name} && outport == ${gw_mtu_rp.json_name} && ip6 && " - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", - .actions = i"icmp6_error {" - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " - "${rEGBIT_PKT_LARGER()} = 0; " - "eth.dst = ${rp.networks.ea}; " - "ip6.dst = ip6.src; " - "ip6.src = ${first_ipv6.addr}; " - "ip.ttl = 255; " - "icmp6.type = 2; /* Packet Too Big. */ " - "icmp6.code = 0; " - /* Set icmp6.frag_mtu to gw_mtu */ - "icmp6.frag_mtu = ${gw_mtu}; " - "next(pipeline=ingress, table=0); " - "};", - .io_port = None, - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), - .stage_hint = stage_hint(rp.lrp._uuid)) :- - r in &Router(._uuid = lr_uuid), - gw_mtu_rp in &RouterPort(.router = r), - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), - gw_mtu > 0, - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), - rp in &RouterPort(.router = r), - rp.lrp != gw_mtu_rp.lrp, - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). - -Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 150, - .__match = i"inport == ${rp.json_name} && ip6 && " - "${rEGBIT_PKT_LARGER()} && ${rEGBIT_EGRESS_LOOPBACK()} == 0", - .actions = i"icmp6_error {" - "${rEGBIT_EGRESS_LOOPBACK()} = 1; " - "${rEGBIT_PKT_LARGER()} = 0; " - "eth.dst = ${rp.networks.ea}; " - "ip6.dst = ip6.src; " - "ip6.src = ${first_ipv6.addr}; " - "ip.ttl = 255; " - "icmp6.type = 2; /* Packet Too Big. 
*/ " - "icmp6.code = 0; " - /* Set icmp6.frag_mtu to gw_mtu */ - "icmp6.frag_mtu = ${gw_mtu}; " - "next(pipeline=ingress, table=0); " - "};", - .io_port = None, - .controller_meter = r.copp.get(cOPP_ICMP6_ERR()), - .stage_hint = stage_hint(rp.lrp._uuid)) :- - r in &Router(._uuid = lr_uuid), - gw_mtu_rp in &RouterPort(.router = r), - var gw_mtu = gw_mtu_rp.lrp.options.get_int_def(i"gateway_mtu", 0), - gw_mtu > 0, - var mtu = gw_mtu + vLAN_ETH_HEADER_LEN(), - rp in &RouterPort(.router = r), - rp.lrp == gw_mtu_rp.lrp, - Some{var first_ipv6} = rp.networks.ipv6_addrs.nth(0). - -/* Logical router ingress table GW_REDIRECT: Gateway redirect. - * - * For traffic with outport equal to the l3dgw_port - * on a distributed router, this table redirects a subset - * of the traffic to the l3redirect_port which represents - * the central instance of the l3dgw_port. - */ -for (&Router(._uuid = lr_uuid)) -{ - /* For traffic with outport == l3dgw_port, if the - * packet did not match any higher priority redirect - * rule, then the traffic is redirected to the central - * instance of the l3dgw_port. */ - for (DistributedGatewayPort(lrp, lr_uuid, _)) { - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_GW_REDIRECT(), - .priority = 50, - .__match = i"outport == ${json_escape(lrp.name)}", - .actions = i"outport = ${json_escape(chassis_redirect_name(lrp.name))}; next;", - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) - }; - - /* Packets are allowed by default. */ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_GW_REDIRECT(), - .priority = 0, - .__match = i"1", - .actions = i"next;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - -/* Local router ingress table ARP_REQUEST: ARP request. - * - * In the common case where the Ethernet destination has been resolved, - * this table outputs the packet (priority 0). Otherwise, it composes - * and sends an ARP/IPv6 NA request (priority 100). 
*/ -Flow(.logical_datapath = router._uuid, - .stage = s_ROUTER_IN_ARP_REQUEST(), - .priority = 200, - .__match = __match, - .actions = actions, - .io_port = None, - .controller_meter = router.copp.get(cOPP_ND_NS_RESOLVE()), - .stage_hint = 0) :- - rsr in RouterStaticRoute(.router = router), - var dst = FlatMap(rsr.dsts), - IPv6{var gw_ip6} = dst.nexthop, - var __match = i"eth.dst == 00:00:00:00:00:00 && " - "ip6 && xx${rEG_NEXT_HOP()} == ${dst.nexthop}", - var sn_addr = gw_ip6.solicited_node(), - var eth_dst = sn_addr.multicast_to_ethernet(), - var sn_addr_s = sn_addr.string_mapped(), - var actions = i"nd_ns { " - "eth.dst = ${eth_dst}; " - "ip6.dst = ${sn_addr_s}; " - "nd.target = ${dst.nexthop}; " - "output; " - "};". - -for (&Router(._uuid = lr_uuid, .copp = copp)) -{ - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_REQUEST(), - .priority = 100, - .__match = i"eth.dst == 00:00:00:00:00:00 && ip4", - .actions = i"arp { " - "eth.dst = ff:ff:ff:ff:ff:ff; " - "arp.spa = ${rEG_SRC()}; " - "arp.tpa = ${rEG_NEXT_HOP()}; " - "arp.op = 1; " /* ARP request */ - "output; " - "};", - .io_port = None, - .controller_meter = copp.get(cOPP_ARP_RESOLVE()), - .stage_hint = 0); - - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_REQUEST(), - .priority = 100, - .__match = i"eth.dst == 00:00:00:00:00:00 && ip6", - .actions = i"nd_ns { " - "nd.target = xx${rEG_NEXT_HOP()}; " - "output; " - "};", - .io_port = None, - .controller_meter = copp.get(cOPP_ND_NS_RESOLVE()), - .stage_hint = 0); - - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_ARP_REQUEST(), - .priority = 0, - .__match = i"1", - .actions = i"output;", - .stage_hint = 0, - .io_port = None, - .controller_meter = None) -} - - -/* Logical router egress table DELIVERY: Delivery (priority 100). - * - * Priority 100 rules deliver packets to enabled logical ports. 
*/ -for (&RouterPort(.lrp = lrp, - .json_name = json_name, - .networks = lrp_networks, - .router = &Router{._uuid = lr_uuid, .mcast_cfg = mcast_cfg}) - /* Drop packets to disabled logical ports (since logical flow - * tables are default-drop). */ - if lrp.is_enabled()) -{ - /* If multicast relay is enabled then also adjust source mac for IP - * multicast traffic. - */ - if (mcast_cfg.relay) { - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_DELIVERY(), - .priority = 110, - .__match = i"(ip4.mcast || ip6.mcast) && " - "outport == ${json_name}", - .actions = i"eth.src = ${lrp_networks.ea}; output;", - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) - }; - /* No egress packets should be processed in the context of - * a chassisredirect port. The chassisredirect port should - * be replaced by the l3dgw port in the local output - * pipeline stage before egress processing. */ - - Flow(.logical_datapath = lr_uuid, - .stage = s_ROUTER_OUT_DELIVERY(), - .priority = 100, - .__match = i"outport == ${json_name}", - .actions = i"output;", - .stage_hint = stage_hint(lrp._uuid), - .io_port = None, - .controller_meter = None) -} - -/* - * Datapath tunnel key allocation: - * - * Allocates a globally unique tunnel id in the range 1...2**24-1 for - * each Logical_Switch and Logical_Router. - */ - -function oVN_MAX_DP_KEY(): integer { (64'd1 << 24) - 1 } -function oVN_MAX_DP_GLOBAL_NUM(): integer { (64'd1 << 16) - 1 } -function oVN_MIN_DP_KEY_LOCAL(): integer { 1 } -function oVN_MAX_DP_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } -function oVN_MIN_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY_LOCAL() + 1 } -function oVN_MAX_DP_KEY_GLOBAL(): integer { oVN_MAX_DP_KEY() } - -function oVN_MAX_DP_VXLAN_KEY(): integer { (64'd1 << 12) - 1 } -function oVN_MAX_DP_VXLAN_KEY_LOCAL(): integer { oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM() } - -/* If any chassis uses VXLAN encapsulation, then the entire deployment is in VXLAN mode. 
 */
-relation IsVxlanMode0()
-IsVxlanMode0() :-
-    sb::Chassis(.encaps = encaps),
-    var encap_uuid = FlatMap(encaps),
-    sb::Encap(._uuid = encap_uuid, .__type = i"vxlan").
-
-relation IsVxlanMode[bool]
-IsVxlanMode[true] :-
-    IsVxlanMode0().
-IsVxlanMode[false] :-
-    Unit(),
-    not IsVxlanMode0().
-
-/* The maximum datapath tunnel key that may be used. */
-relation OvnMaxDpKeyLocal[integer]
-/* OVN_MAX_DP_GLOBAL_NUM doesn't apply for vxlan mode. */
-OvnMaxDpKeyLocal[oVN_MAX_DP_VXLAN_KEY()] :- IsVxlanMode[true].
-OvnMaxDpKeyLocal[oVN_MAX_DP_KEY() - oVN_MAX_DP_GLOBAL_NUM()] :- IsVxlanMode[false].
-
-relation OvnPortKeyBits[bit<32>]
-OvnPortKeyBits[12] :- IsVxlanMode[true].
-OvnPortKeyBits[16] :- IsVxlanMode[false].
-
-relation OvnDpKeyBits[bit<32>]
-OvnDpKeyBits[12] :- IsVxlanMode[true].
-OvnDpKeyBits[24] :- IsVxlanMode[false].
-
-function get_dp_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> {
-    map.get(key)
-       .and_then(parse_dec_u64)
-       .and_then(|x| if (x > 0 and x < (1<<bits)) {
-                         Some{x}
-                     } else {
-                         None
-                     })
-}
-
-// Tunnel keys requested by datapaths.
-relation RequestedTunKey(datapath: uuid, tunkey: integer)
-RequestedTunKey(uuid, tunkey) :-
-    OvnDpKeyBits[bits],
-    ls in &nb::Logical_Switch(._uuid = uuid),
-    Some{var tunkey} = get_dp_tunkey(ls.other_config, i"requested-tnl-key", bits).
-RequestedTunKey(uuid, tunkey) :-
-    OvnDpKeyBits[bits],
-    lr in nb::Logical_Router(._uuid = uuid),
-    Some{var tunkey} = get_dp_tunkey(lr.options, i"requested-tnl-key", bits).
-Warning[message] :-
-    RequestedTunKey(datapath, tunkey),
-    var count = datapath.group_by((tunkey)).size(),
-    count > 1,
-    var message = "${count} logical switches or routers request "
-                  "datapath tunnel key ${tunkey}".
-
-// Assign tunnel keys:
-// - First priority to requested tunnel keys.
-// - Second priority to already assigned tunnel keys.
-// In either case, make an arbitrary choice in case of conflicts within a
-// priority level.
-relation AssignedTunKey(datapath: uuid, tunkey: integer)
-AssignedTunKey(datapath, tunkey) :-
-    RequestedTunKey(datapath, tunkey),
-    var datapath = datapath.group_by(tunkey).first().
-AssignedTunKey(datapath, tunkey) :-
-    sb::Datapath_Binding(._uuid = datapath, .tunnel_key = tunkey),
-    not RequestedTunKey(_, tunkey),
-    not RequestedTunKey(datapath, _),
-    var datapath = datapath.group_by(tunkey).first().
-
-// all tunnel keys already in use in the Realized table
-relation AllocatedTunKeys(keys: Set<integer>)
-AllocatedTunKeys(keys) :-
-    AssignedTunKey(.tunkey = tunkey),
-    var keys = tunkey.group_by(()).to_set().
-
-// Datapath_Binding's not yet in the Realized table
-relation NotYetAllocatedTunKeys(datapaths: Vec<uuid>)
-
-NotYetAllocatedTunKeys(datapaths) :-
-    OutProxy_Datapath_Binding(._uuid = datapath),
-    not AssignedTunKey(datapath, _),
-    var datapaths = datapath.group_by(()).to_vec().
-
-// Perform the allocation
-relation TunKeyAllocation(datapath: uuid, tunkey: integer)
-
-TunKeyAllocation(datapath, tunkey) :- AssignedTunKey(datapath, tunkey).
-
-// Case 1: AllocatedTunKeys relation is not empty (i.e., contains
-// a single record that stores a set of allocated keys)
-TunKeyAllocation(datapath, tunkey) :-
-    NotYetAllocatedTunKeys(unallocated),
-    AllocatedTunKeys(allocated),
-    OvnMaxDpKeyLocal[max_dp_key_local],
-    var allocation = FlatMap(allocate(allocated, unallocated, 1, max_dp_key_local)),
-    (var datapath, var tunkey) = allocation.
-
-// Case 2: AllocatedTunKeys relation is empty
-TunKeyAllocation(datapath, tunkey) :-
-    NotYetAllocatedTunKeys(unallocated),
-    not AllocatedTunKeys(_),
-    OvnMaxDpKeyLocal[max_dp_key_local],
-    var allocation = FlatMap(allocate(set_empty(), unallocated, 1, max_dp_key_local)),
-    (var datapath, var tunkey) = allocation.
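The `allocate()` helper used by the rules above lives in the Rust support code (ovn.rs) and is not shown in this hunk. A rough Python sketch of its contract, under the assumption that it hands each unallocated item the lowest free key in the range (the real implementation may choose keys differently):

```python
def allocate(allocated, unallocated, min_key, max_key):
    """Assign each item in 'unallocated' a key from [min_key, max_key]
    that is not in 'allocated'.  Items that cannot be satisfied (key
    space exhausted) are dropped, so callers only ever see successful
    (item, key) pairs -- matching how the DDlog rules FlatMap the result."""
    used = set(allocated)
    result = []
    key = min_key
    for item in unallocated:
        while key <= max_key and key in used:
            key += 1
        if key > max_key:
            break  # no keys left; remaining items get nothing
        used.add(key)
        result.append((item, key))
        key += 1
    return result
```

With `allocated = {1, 2, 4}`, the next two datapaths would receive keys 3 and 5 under this policy.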
- -/* - * Port id allocation: - * - * Port IDs in a per-datapath space in the range 1...2**(bits-1)-1, where - * bits is the number of bits available for port keys (default: 16, vxlan: 12) - */ - -function get_port_tunkey(map: Map<istring,istring>, key: istring, bits: bit<32>): Option<integer> { - map.get(key) - .and_then(parse_dec_u64) - .and_then(|x| if (x > 0 and x < (1<<bits)) { - Some{x} - } else { - None - }) -} - -// Tunnel keys requested by port bindings. -relation RequestedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) -RequestedPortTunKey(datapath, port, tunkey) :- - OvnPortKeyBits[bits], - sp in &SwitchPort(), - var datapath = sp.sw._uuid, - var port = sp.lsp._uuid, - Some{var tunkey} = get_port_tunkey(sp.lsp.options, i"requested-tnl-key", bits). -RequestedPortTunKey(datapath, port, tunkey) :- - OvnPortKeyBits[bits], - rp in &RouterPort(), - var datapath = rp.router._uuid, - var port = rp.lrp._uuid, - Some{var tunkey} = get_port_tunkey(rp.lrp.options, i"requested-tnl-key", bits). -Warning[message] :- - RequestedPortTunKey(datapath, port, tunkey), - var count = port.group_by((datapath, tunkey)).size(), - count > 1, - var message = "${count} logical ports in the same datapath " - "request port tunnel key ${tunkey}". - -// Assign tunnel keys: -// - First priority to requested tunnel keys. -// - Second priority to already assigned tunnel keys. -// In either case, make an arbitrary choice in case of conflicts within a -// priority level. -relation AssignedPortTunKey(datapath: uuid, port: uuid, tunkey: integer) -AssignedPortTunKey(datapath, port, tunkey) :- - RequestedPortTunKey(datapath, port, tunkey), - var port = port.group_by((datapath, tunkey)).first(). -AssignedPortTunKey(datapath, port, tunkey) :- - sb::Port_Binding(._uuid = port_uuid, - .datapath = datapath, - .tunnel_key = tunkey), - not RequestedPortTunKey(datapath, _, tunkey), - not RequestedPortTunKey(datapath, port_uuid, _), - var port = port_uuid.group_by((datapath, tunkey)).first(). 
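`get_dp_tunkey()` and `get_port_tunkey()` are identical range checks; only the number of key bits differs (24 for datapaths, 16 for ports, or 12 in VXLAN mode). The same validation in Python, for illustration (the merged function name is ours):

```python
def get_tunkey(options, key, bits):
    """Parse options[key] as a decimal tunnel key.
    Valid keys are 1 .. 2**bits - 1; anything else yields None."""
    value = options.get(key)
    if value is None:
        return None
    try:
        x = int(value, 10)  # decimal only, like parse_dec_u64
    except ValueError:
        return None
    return x if 0 < x < (1 << bits) else None
```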
- -// all tunnel keys already in use in the Realized table -relation AllocatedPortTunKeys(datapath: uuid, keys: Set<integer>) - -AllocatedPortTunKeys(datapath, keys) :- - AssignedPortTunKey(datapath, port, tunkey), - var keys = tunkey.group_by(datapath).to_set(). - -// Port_Binding's not yet in the Realized table -relation NotYetAllocatedPortTunKeys(datapath: uuid, all_logical_ids: Vec<uuid>) - -NotYetAllocatedPortTunKeys(datapath, all_names) :- - OutProxy_Port_Binding(._uuid = port_uuid, .datapath = datapath), - not AssignedPortTunKey(datapath, port_uuid, _), - var all_names = port_uuid.group_by(datapath).to_vec(). - -// Perform the allocation. -relation PortTunKeyAllocation(port: uuid, tunkey: integer) - -// Transfer existing allocations from the realized table. -PortTunKeyAllocation(port, tunkey) :- AssignedPortTunKey(_, port, tunkey). - -// Case 1: AllocatedPortTunKeys(datapath) is not empty (i.e., contains -// a single record that stores a set of allocated keys). -PortTunKeyAllocation(port, tunkey) :- - AllocatedPortTunKeys(datapath, allocated), - NotYetAllocatedPortTunKeys(datapath, unallocated), - var allocation = FlatMap(allocate(allocated, unallocated, 1, 64'hffff)), - (var port, var tunkey) = allocation. - -// Case 2: PortAllocatedTunKeys(datapath) relation is empty -PortTunKeyAllocation(port, tunkey) :- - NotYetAllocatedPortTunKeys(datapath, unallocated), - not AllocatedPortTunKeys(datapath, _), - var allocation = FlatMap(allocate(set_empty(), unallocated, 1, 64'hffff)), - (var port, var tunkey) = allocation. - -/* - * Multicast group tunnel_key allocation: - * - * Tunnel-keys in a per-datapath space in the range 32770...65535 - */ - -// All tunnel keys already in use in the Realized table. 
-relation AllocatedMulticastGroupTunKeys(datapath_uuid: uuid, keys: Set<integer>) - -AllocatedMulticastGroupTunKeys(datapath_uuid, keys) :- - sb::Multicast_Group(.datapath = datapath_uuid, .tunnel_key = tunkey), - //sb::UUIDMap_Datapath_Binding(datapath, Left{datapath_uuid}), - var keys = tunkey.group_by(datapath_uuid).to_set(). - -// Multicast_Group's not yet in the Realized table. -relation NotYetAllocatedMulticastGroupTunKeys(datapath_uuid: uuid, - all_logical_ids: Vec<istring>) - -NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, all_names) :- - OutProxy_Multicast_Group(.name = name, .datapath = datapath_uuid), - not sb::Multicast_Group(.name = name, .datapath = datapath_uuid), - var all_names = name.group_by(datapath_uuid).to_vec(). - -// Perform the allocation -relation MulticastGroupTunKeyAllocation(datapath_uuid: uuid, group: istring, tunkey: integer) - -// transfer existing allocations from the realized table -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- - //sb::UUIDMap_Datapath_Binding(_, datapath_uuid), - sb::Multicast_Group(.name = group, - .datapath = datapath_uuid, - .tunnel_key = tunkey). - -// Case 1: AllocatedMulticastGroupTunKeys(datapath) is not empty (i.e., -// contains a single record that stores a set of allocated keys) -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- - AllocatedMulticastGroupTunKeys(datapath_uuid, allocated), - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), - (_, var min_key) = mC_IP_MCAST_MIN(), - (_, var max_key) = mC_IP_MCAST_MAX(), - var allocation = FlatMap(allocate(allocated, unallocated, - min_key, max_key)), - (var group, var tunkey) = allocation. 
- -// Case 2: AllocatedMulticastGroupTunKeys(datapath) relation is empty -MulticastGroupTunKeyAllocation(datapath_uuid, group, tunkey) :- - NotYetAllocatedMulticastGroupTunKeys(datapath_uuid, unallocated), - not AllocatedMulticastGroupTunKeys(datapath_uuid, _), - (_, var min_key) = mC_IP_MCAST_MIN(), - (_, var max_key) = mC_IP_MCAST_MAX(), - var allocation = FlatMap(allocate(set_empty(), unallocated, - min_key, max_key)), - (var group, var tunkey) = allocation. - -/* - * Queue ID allocation - * - * Queue IDs on a chassis, for routers that have QoS enabled, in a per-chassis - * space in the range 1...0xf000. It looks to me like there'd only be a small - * number of these per chassis, and probably a small number overall, in case it - * matters. - * - * Queue ID may also need to be deallocated if port loses QoS attributes - * - * This logic applies mainly to sb::Port_Binding records bound to a chassis - * (i.e. with the chassis column nonempty) but "localnet" ports can also - * have a queue ID. For those we use the port's own UUID as the chassis UUID. - */ - -function port_has_qos_params(opts: Map<istring, istring>): bool = { - opts.contains_key(i"qos_max_rate") or opts.contains_key(i"qos_burst") -} - - -// ports in Out_Port_Binding that require queue ID on chassis -relation PortRequiresQID(port: uuid, chassis: uuid) - -PortRequiresQID(pb._uuid, chassis) :- - pb in OutProxy_Port_Binding(), - pb.__type != i"localnet", - port_has_qos_params(pb.options), - sb::Port_Binding(._uuid = pb._uuid, .chassis = chassis_set), - Some{var chassis} = chassis_set. -PortRequiresQID(pb._uuid, pb._uuid) :- - pb in OutProxy_Port_Binding(), - pb.__type == i"localnet", - port_has_qos_params(pb.options), - sb::Port_Binding(._uuid = pb._uuid). - -relation AggPortRequiresQID(chassis: uuid, ports: Vec<uuid>) - -AggPortRequiresQID(chassis, ports) :- - PortRequiresQID(port, chassis), - var ports = port.group_by(chassis).to_vec(). 
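The two `PortRequiresQID` rules above differ only in how the chassis key is chosen: bound ports use their chassis UUID, while "localnet" ports stand in for themselves. A Python sketch of that grouping step (the dict-based port bindings are our own stand-in for the OVSDB rows):

```python
def port_has_qos_params(options):
    """Same test as the DDlog function: a port needs a queue ID if
    either rate-limiting option is present."""
    return "qos_max_rate" in options or "qos_burst" in options

def ports_requiring_qid(port_bindings):
    """Group the UUIDs of ports that need a queue ID by chassis,
    using a localnet port's own UUID as its 'chassis' key."""
    by_chassis = {}
    for pb in port_bindings:
        if not port_has_qos_params(pb.get("options", {})):
            continue
        chassis = pb["uuid"] if pb["type"] == "localnet" else pb.get("chassis")
        if chassis is None:
            continue  # unbound non-localnet port: no queue needed yet
        by_chassis.setdefault(chassis, []).append(pb["uuid"])
    return by_chassis
```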
- -relation AllocatedQIDs(chassis: uuid, allocated_ids: Map<uuid, integer>) - -AllocatedQIDs(chassis, allocated_ids) :- - pb in sb::Port_Binding(), - pb.__type != i"localnet", - Some{var chassis} = pb.chassis, - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), - Some{var qid} = parse_dec_u64(qid_str), - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). -AllocatedQIDs(chassis, allocated_ids) :- - pb in sb::Port_Binding(), - pb.__type == i"localnet", - var chassis = pb._uuid, - Some{var qid_str} = pb.options.get(i"qdisc_queue_id"), - Some{var qid} = parse_dec_u64(qid_str), - var allocated_ids = (pb._uuid, qid).group_by(chassis).to_map(). - -// allocate queue IDs to ports -relation QueueIDAllocation(port: uuid, qids: Option<integer>) - -// None for ports that do not require a queue -QueueIDAllocation(port, None) :- - OutProxy_Port_Binding(._uuid = port), - not PortRequiresQID(port, _). - -QueueIDAllocation(port, Some{qid}) :- - AggPortRequiresQID(chassis, ports), - AllocatedQIDs(chassis, allocated_ids), - var allocations = FlatMap(adjust_allocation(allocated_ids, ports, 1, 64'hf000)), - (var port, var qid) = allocations. - -QueueIDAllocation(port, Some{qid}) :- - AggPortRequiresQID(chassis, ports), - not AllocatedQIDs(chassis, _), - var allocations = FlatMap(adjust_allocation(map_empty(), ports, 1, 64'hf000)), - (var port, var qid) = allocations. - -/* - * This allows ovn-northd to preserve options:ipv6_ra_pd_list, which is set by - * ovn-controller. - */ -relation PreserveIPv6RAPDList(lrp_uuid: uuid, ipv6_ra_pd_list: Option<istring>) -PreserveIPv6RAPDList(lrp_uuid, ipv6_ra_pd_list) :- - sb::Port_Binding(._uuid = lrp_uuid, .options = options), - var ipv6_ra_pd_list = options.get(i"ipv6_ra_pd_list"). -PreserveIPv6RAPDList(lrp_uuid, None) :- - &nb::Logical_Router_Port(._uuid = lrp_uuid), - not sb::Port_Binding(._uuid = lrp_uuid). - -/* - * Tag allocation for nested containers. - */ - -/* Reserved tags for each parent port, including: - * 1. 
For ports that need a dynamically allocated tag, existing tag, if any, - * 2. For ports that have a statically assigned tag (via `tag_request`), the - * `tag_request` value. - * 3. For ports that do not have a tag_request, but have a tag statically assigned - * by directly setting the `tag` field, use this value. - */ -relation SwitchPortReservedTag(parent_name: istring, tags: integer) - -SwitchPortReservedTag(parent_name, tag) :- - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = needs_dynamic_tag, .parent_name = Some{parent_name}), - Some{var tag} = if (needs_dynamic_tag) { - lsp.tag - } else { - match (lsp.tag_request) { - Some{req} -> Some{req}, - None -> lsp.tag - } - }. - -relation SwitchPortReservedTags(parent_name: istring, tags: Set<integer>) - -SwitchPortReservedTags(parent_name, tags) :- - SwitchPortReservedTag(parent_name, tag), - var tags = tag.group_by(parent_name).to_set(). - -SwitchPortReservedTags(parent_name, set_empty()) :- - &nb::Logical_Switch_Port(.name = parent_name), - not SwitchPortReservedTag(.parent_name = parent_name). - -/* Allocate tags for ports that require dynamically allocated tags and do not - * have any yet. - */ -relation SwitchPortAllocatedTags(lsp_uuid: uuid, tag: Option<integer>) - -SwitchPortAllocatedTags(lsp_uuid, tag) :- - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true, .parent_name = Some{parent_name}), - lsp.tag == None, - var lsps_need_tag = lsp._uuid.group_by(parent_name).to_vec(), - SwitchPortReservedTags(parent_name, reserved), - var dyn_tags = allocate_opt(reserved, - lsps_need_tag, - 1, /* Tag 0 is invalid for nested containers. */ - 4095), - var lsp_tag = FlatMap(dyn_tags), - (var lsp_uuid, var tag) = lsp_tag. - -/* New tag-to-port assignment: - * Case 1. Statically reserved tag (via `tag_request`), if any. - * Case 2. Existing tag for ports that require a dynamically allocated tag and already have one. - * Case 3. Use newly allocated tags (from `SwitchPortAllocatedTags`) for all other ports. 
- */ -relation SwitchPortNewDynamicTag(port: uuid, tag: Option<integer>) - -/* Case 1 */ -SwitchPortNewDynamicTag(lsp._uuid, tag) :- - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = false), - var tag = match (lsp.tag_request) { - Some{0} -> None, - treq -> treq - }. - -/* Case 2 */ -SwitchPortNewDynamicTag(lsp._uuid, Some{tag}) :- - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), - Some{var tag} = lsp.tag. - -/* Case 3 */ -SwitchPortNewDynamicTag(lsp._uuid, tag) :- - &SwitchPort(.lsp = lsp, .needs_dynamic_tag = true), - lsp.tag == None, - SwitchPortAllocatedTags(lsp._uuid, tag). - -/* IP_Multicast table (only applicable for Switches). */ -sb::Out_IP_Multicast(._uuid = cfg.datapath, - .datapath = cfg.datapath, - .enabled = Some{cfg.enabled}, - .querier = Some{cfg.querier}, - .eth_src = cfg.eth_src, - .ip4_src = cfg.ip4_src, - .ip6_src = cfg.ip6_src, - .table_size = Some{cfg.table_size}, - .idle_timeout = Some{cfg.idle_timeout}, - .query_interval = Some{cfg.query_interval}, - .query_max_resp = Some{cfg.query_max_resp}) :- - McastSwitchCfg[cfg]. - - -relation PortExists(name: istring) -PortExists(name) :- &nb::Logical_Switch_Port(.name = name). -PortExists(name) :- &nb::Logical_Router_Port(.name = name). - -sb::Out_Load_Balancer(._uuid = lb._uuid, - .name = lb.name, - .vips = lb.vips, - .protocol = lb.protocol, - .datapaths = datapaths, - .external_ids = [i"lb_id" -> uuid2str(lb_uuid).intern()], - .options = options) :- - nb in &nb::Logical_Switch(._uuid = ls_uuid, .load_balancer = lb_uuids), - var lb_uuid = FlatMap(lb_uuids), - var datapaths = ls_uuid.group_by(lb_uuid).to_set(), - lb in &nb::Load_Balancer(._uuid = lb_uuid), - /* Store the fact that northd provides the original (destination IP + - * transport port) tuple. - */ - var options = lb.options.insert_imm(i"hairpin_orig_tuple", i"true"). 
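The three `SwitchPortNewDynamicTag` cases above collapse to a small decision function. A Python rendering (the helper name and its flat argument list are ours; in DDlog the inputs arrive via the joined relations):

```python
def new_dynamic_tag(needs_dynamic_tag, tag, tag_request, allocated_tag):
    """Pick the tag for a nested-container port.  Mirrors
    SwitchPortNewDynamicTag: a static request wins, then an existing
    dynamic tag, then a freshly allocated one."""
    if not needs_dynamic_tag:
        # Case 1: static assignment; tag_request == 0 means "no tag".
        return None if tag_request == 0 else tag_request
    if tag is not None:
        return tag           # Case 2: keep the existing dynamic tag.
    return allocated_tag     # Case 3: newly allocated (may be None).
```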
- -sb::Out_Service_Monitor(._uuid = hash128((svc_monitor.port_name, lbvipbackend.ip, lbvipbackend.port, protocol)), - .ip = i"${lbvipbackend.ip}", - .protocol = Some{protocol}, - .port = lbvipbackend.port as integer, - .logical_port = svc_monitor.port_name, - .src_mac = i"${svc_monitor_mac}", - .src_ip = svc_monitor.src_ip, - .options = health_check.options, - .external_ids = map_empty()) :- - SvcMonitorMac(svc_monitor_mac), - LBVIP[lbvip@&LBVIP{.lb = lb}], - Some{var health_check} = lbvip.health_check, - var lbvipbackend = FlatMap(lbvip.backends), - Some{var svc_monitor} = lbvipbackend.svc_monitor, - PortExists(svc_monitor.port_name), - var protocol = default_protocol(lb.protocol), - protocol != i"sctp". - -Warning["SCTP load balancers do not currently support " - "health checks. Not creating health checks for " - "load balancer ${uuid2str(lb._uuid)}"] :- - LBVIP[lbvip@&LBVIP{.lb = lb}], - default_protocol(lb.protocol) == i"sctp", - Some{var health_check} = lbvip.health_check, - var lbvipbackend = FlatMap(lbvip.backends), - Some{var svc_monitor} = lbvipbackend.svc_monitor. - -/* - * BFD table. - */ - -/* - * BFD source port allocation. - * - * We need to assign a unique source port to each (logical_port, dst_ip) pair. - * RFC 5881 section 4 says: - * - * The source port MUST be in the range 49152 through 65535. - * The same UDP source port number MUST be used for all BFD - * Control packets associated with a particular session. - * The source port number SHOULD be unique among all BFD - * sessions on the system - */ -function bFD_UDP_SRC_PORT_MIN(): integer { 49152 } -function bFD_UDP_SRC_PORT_MAX(): integer { 65535 } - -// Get already assigned BFD source ports. -// If there's a conflict, make an arbitrary choice. 
-relation AssignedSrcPort(
-    logical_port: istring,
-    dst_ip: istring,
-    src_port: integer)
-AssignedSrcPort(logical_port, dst_ip, src_port) :-
-    sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip, .src_port = src_port),
-    var pair = (logical_port, dst_ip).group_by(src_port).first(),
-    (var logical_port, var dst_ip) = pair.
-
-// All source ports already in use.
-relation AllocatedSrcPorts0(src_ports: Set<integer>)
-AllocatedSrcPorts0(src_ports) :-
-    AssignedSrcPort(.src_port = src_port),
-    var src_ports = src_port.group_by(()).to_set().
-
-relation AllocatedSrcPorts(src_ports: Set<integer>)
-AllocatedSrcPorts(src_ports) :- AllocatedSrcPorts0(src_ports).
-AllocatedSrcPorts(set_empty()) :- Unit(), not AllocatedSrcPorts0(_).
-
-// (logical_port, dst_ip) pairs not yet in the Realized table
-relation NotYetAllocatedSrcPorts(pairs: Vec<(istring, istring)>)
-NotYetAllocatedSrcPorts(pairs) :-
-    nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip),
-    not AssignedSrcPort(logical_port, dst_ip, _),
-    var pairs = (logical_port, dst_ip).group_by(()).to_vec().
-
-// Perform the allocation
-relation SrcPortAllocation(
-    logical_port: istring,
-    dst_ip: istring,
-    src_port: integer)
-SrcPortAllocation(logical_port, dst_ip, src_port) :- AssignedSrcPort(logical_port, dst_ip, src_port).
-SrcPortAllocation(logical_port, dst_ip, src_port) :-
-    NotYetAllocatedSrcPorts(unallocated),
-    AllocatedSrcPorts(allocated),
-    var allocation = FlatMap(allocate(allocated, unallocated,
-                                      bFD_UDP_SRC_PORT_MIN(), bFD_UDP_SRC_PORT_MAX())),
-    ((var logical_port, var dst_ip), var src_port) = allocation.
-
-relation SouthboundBFDStatus(
-    logical_port: istring,
-    dst_ip: istring,
-    status: Option<istring>
-)
-SouthboundBFDStatus(bfd.logical_port, bfd.dst_ip, Some{bfd.status}) :- bfd in sb::BFD().
-SouthboundBFDStatus(logical_port, dst_ip, None) :-
-    nb::BFD(.logical_port = logical_port, .dst_ip = dst_ip),
-    not sb::BFD(.logical_port = logical_port, .dst_ip = dst_ip).
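Putting the pieces above together: existing sb::BFD assignments are preserved, and each remaining (logical_port, dst_ip) pair gets a fresh port from the RFC 5881 range. A Python sketch, again assuming a lowest-free-key policy for `allocate()`:

```python
BFD_UDP_SRC_PORT_MIN = 49152  # RFC 5881, section 4
BFD_UDP_SRC_PORT_MAX = 65535

def allocate_bfd_src_ports(existing, sessions):
    """'existing' maps (logical_port, dst_ip) -> src_port from sb::BFD;
    'sessions' lists the pairs from nb::BFD.  Pairs without an existing
    assignment get the lowest unused port in the allowed range."""
    assigned = dict(existing)
    used = set(assigned.values())
    port = BFD_UDP_SRC_PORT_MIN
    for session in sessions:
        if session in assigned:
            continue  # keep the realized assignment
        while port in used:
            port += 1
        if port > BFD_UDP_SRC_PORT_MAX:
            break  # source ports exhausted
        assigned[session] = port
        used.add(port)
        port += 1
    return assigned
```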
-
-function bFD_DEF_MINTX(): integer { 1000 } // 1 second
-function bFD_DEF_MINRX(): integer { 1000 } // 1 second
-function bFD_DEF_DETECT_MULT(): integer { 5 }
-sb::Out_BFD(._uuid = hash,
-            .src_port = src_port,
-            .disc = max(1, hash as u32) as integer,
-            .logical_port = nb.logical_port,
-            .dst_ip = nb.dst_ip,
-            .min_tx = nb.min_tx.unwrap_or(bFD_DEF_MINTX()),
-            .min_rx = nb.min_rx.unwrap_or(bFD_DEF_MINRX()),
-            .detect_mult = nb.detect_mult.unwrap_or(bFD_DEF_DETECT_MULT()),
-            .status = status,
-            .external_ids = map_empty(),
-            .options = [i"nb_status" -> nb.status.unwrap_or(i""),
-                        i"sb_status" -> sb_status.unwrap_or(i""),
-                        i"referenced" -> i"${referenced}"]) :-
-    nb in nb::BFD(),
-    SrcPortAllocation(nb.logical_port, nb.dst_ip, src_port),
-    SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status),
-    BFDReferenced(nb._uuid, referenced),
-    var status = bfd_new_status(referenced, nb.status, sb_status).1,
-    var hash = hash128((nb.logical_port, nb.dst_ip)).
-
-relation BFDReferenced0(bfd_uuid: uuid)
-BFDReferenced0(bfd_uuid) :-
-    nb::Logical_Router_Static_Route(.bfd = Some{bfd_uuid}, .nexthop = nexthop),
-    nb::BFD(._uuid = bfd_uuid, .dst_ip = nexthop).
-
-relation BFDReferenced(bfd_uuid: uuid, referenced: bool)
-BFDReferenced(bfd_uuid, true) :- BFDReferenced0(bfd_uuid).
-BFDReferenced(bfd_uuid, false) :-
-    nb::BFD(._uuid = bfd_uuid),
-    not BFDReferenced0(bfd_uuid).
-
-// Given the following:
-// - 'referenced': whether a BFD object is referenced by a route
-// - 'nb_status0': 'status' in the existing nb::BFD record
-// - 'sb_status0': 'status' in the existing sb::BFD record (None, if none exists yet)
-// computes and returns (nb_status, sb_status), which are the values to use next in these records
-function bfd_new_status(referenced: bool,
-                        nb_status0: Option<istring>,
-                        sb_status0: Option<istring>): (istring, istring) {
-    var nb_status = nb_status0.unwrap_or(i"admin_down");
-    match (sb_status0) {
-        Some{sb_status} -> if (nb_status != i"admin_down" and sb_status != i"admin_down") {
-            nb_status = sb_status
-        },
-        _ -> ()
-    };
-    var sb_status = nb_status;
-    if (referenced) {
-        if (nb_status == i"admin_down") {
-            nb_status = i"down"
-        }
-    } else {
-        nb_status = i"admin_down"
-    };
-    warn("nb_status=${nb_status} sb_status=${sb_status} referenced=${referenced}");
-    (nb_status, sb_status)
-}
-nb::Out_BFD(bfd_uuid, Some{status}) :-
-    nb in nb::BFD(._uuid = bfd_uuid),
-    BFDReferenced(bfd_uuid, referenced),
-    SouthboundBFDStatus(nb.logical_port, nb.dst_ip, sb_status),
-    var status = bfd_new_status(referenced, nb.status, sb_status).0.
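For reference, the status transition implemented by the DDlog `bfd_new_status()` above, transcribed into Python (behavior-identical apart from the debug `warn()`):

```python
def bfd_new_status(referenced, nb_status0, sb_status0):
    """Compute the (nb_status, sb_status) pair to write back, given
    whether any static route references this BFD session and the
    current NB/SB 'status' values (None where the column is empty)."""
    nb_status = nb_status0 if nb_status0 is not None else "admin_down"
    # A live SB status overrides NB unless either side is admin_down.
    if (sb_status0 is not None
            and nb_status != "admin_down" and sb_status0 != "admin_down"):
        nb_status = sb_status0
    sb_status = nb_status
    if referenced:
        if nb_status == "admin_down":
            nb_status = "down"  # referenced sessions start coming up
    else:
        nb_status = "admin_down"  # unreferenced sessions are disabled
    return (nb_status, sb_status)
```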
- -/* - * Logical router BFD flows - */ - -function lrouter_bfd_flows(lr_uuid: uuid, - lrp_uuid: uuid, - ipX: string, - networks: string, - controller_meter: Option<istring>) - : (Flow, Flow) -{ - (Flow{.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 110, - .__match = i"${ipX}.src == ${networks} && udp.dst == 3784", - .actions = i"next; ", - .stage_hint = stage_hint(lrp_uuid), - .io_port = None, - .controller_meter = None}, - Flow{.logical_datapath = lr_uuid, - .stage = s_ROUTER_IN_IP_INPUT(), - .priority = 110, - .__match = i"${ipX}.dst == ${networks} && udp.dst == 3784", - .actions = i"handle_bfd_msg(); ", - .io_port = None, - .controller_meter = controller_meter, - .stage_hint = stage_hint(lrp_uuid)}) -} -for (&RouterPort(.router = router, .networks = networks, .lrp = lrp, .has_bfd = true)) { - var controller_meter = router.copp.get(cOPP_BFD()) in { - if (not networks.ipv4_addrs.is_empty()) { - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip4", - format_v4_networks(networks, false), - controller_meter) in { - Flow[a]; - Flow[b] - } - }; - - if (not networks.ipv6_addrs.is_empty()) { - (var a, var b) = lrouter_bfd_flows(router._uuid, lrp._uuid, "ip6", - format_v6_networks(networks), - controller_meter) in { - Flow[a]; - Flow[b] - } - } - } -} - -/* Clean up stale FDB entries. */ -sb::Out_FDB(_uuid, mac, dp_key, port_key) :- - sb::FDB(_uuid, mac, dp_key, port_key), - sb::Out_Datapath_Binding(._uuid = dp_uuid, .tunnel_key = dp_key), - sb::Out_Port_Binding(.datapath = dp_uuid, .tunnel_key = port_key). diff --git a/northd/ovsdb2ddlog2c b/northd/ovsdb2ddlog2c deleted file mode 100755 index fa994c99e5..0000000000 --- a/northd/ovsdb2ddlog2c +++ /dev/null @@ -1,131 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) 2020 Nicira, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at:
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import getopt
-import sys
-
-import ovs.json
-import ovs.db.error
-import ovs.db.schema
-
-argv0 = sys.argv[0]
-
-def usage():
-    print("""\
-%(argv0)s: ovsdb schema compiler for northd
-usage: %(argv0)s [OPTIONS]
-
-The following option must be specified:
-  -p, --prefix=PREFIX        Prefix for declarations in output.
-
-The following ovsdb2ddlog options are supported:
-  -f, --schema-file=FILE     OVSDB schema file.
-  -o, --output-table=TABLE   Mark TABLE as output.
-  --output-only-table=TABLE  Mark TABLE as output-only.  DDlog will send
-                             updates to this table directly to OVSDB
-                             without comparing it with current OVSDB state.
-  --ro=TABLE.COLUMN          Ignored.
-  --rw=TABLE.COLUMN          Ignored.
-  --intern-table=TABLE       Ignored.
-  --intern-strings           Ignored.
-  --output-file=FILE.inc     Write output to FILE.inc.  If this option is
-                             not specified, output will be written to stdout.
-
-The following options are also available:
-  -h, --help                 display this help message
-  -V, --version              display version information\
-""" % {'argv0': argv0})
-    sys.exit(0)
-
-if __name__ == "__main__":
-    try:
-        try:
-            options, args = getopt.gnu_getopt(sys.argv[1:], 'p:f:o:hV',
-                                              ['prefix=',
-                                               'schema-file=',
-                                               'output-table=',
-                                               'output-only-table=',
-                                               'intern-table=',
-                                               'ro=',
-                                               'rw=',
-                                               'output-file=',
-                                               'intern-strings'])
-        except getopt.GetoptError as geo:
-            sys.stderr.write("%s: %s\n" % (argv0, geo.msg))
-            sys.exit(1)
-
-        prefix = None
-        schema_file = None
-        output_tables = set()
-        output_only_tables = set()
-        output_file = None
-        for key, value in options:
-            if key in ['-h', '--help']:
-                usage()
-            elif key in ['-V', '--version']:
-                print("ovsdb2ddlog2c (OVN) @VERSION@")
-            elif key in ['-p', '--prefix']:
-                prefix = value
-            elif key in ['-f', '--schema-file']:
-                schema_file = value
-            elif key in ['-o', '--output-table']:
-                output_tables.add(value)
-            elif key == '--output-only-table':
-                output_only_tables.add(value)
-            elif key in ['--ro', '--rw', '--intern-table', '--intern-strings']:
-                pass
-            elif key == '--output-file':
-                output_file = value
-            else:
-                assert False
-
-        if schema_file is None:
-            sys.stderr.write("%s: missing -f or --schema-file option\n" % argv0)
-            sys.exit(1)
-        if prefix is None:
-            sys.stderr.write("%s: missing -p or --prefix option\n" % argv0)
-            sys.exit(1)
-        if not output_tables.isdisjoint(output_only_tables):
-            example = next(iter(output_tables.intersect(output_only_tables)))
-            sys.stderr.write("%s: %s may not be both an output table and "
-                             "an output-only table\n" % (argv0, example))
-            sys.exit(1)
-
-        schema = ovs.db.schema.DbSchema.from_json(ovs.json.from_file(
-            schema_file))
-
-        all_tables = set(schema.tables.keys())
-        missing_tables = (output_tables | output_only_tables) - all_tables
-        if missing_tables:
-            sys.stderr.write("%s: %s is not the name of a table\n"
-                             % (argv0, next(iter(missing_tables))))
-            sys.exit(1)
-
-        f = sys.stdout if output_file is None else open(output_file, "w")
-        for name, tables in (
-                ("input_relations", all_tables - output_only_tables),
-                ("output_relations", output_tables),
-                ("output_only_relations", output_only_tables)):
-            f.write("static const char *%s%s[] = {\n" % (prefix, name))
-            for table in sorted(tables):
-                f.write("    \"%s\",\n" % table)
-            f.write("    NULL,\n")
-            f.write("};\n\n")
-        if schema_file is not None:
-            f.close()
-    except ovs.db.error.Error as e:
-        sys.stderr.write("%s: %s\n" % (argv0, e))
-        sys.exit(1)
-
-# Local variables:
-# mode: python
-# End:
diff --git a/tests/ovn-macros.at b/tests/ovn-macros.at
index 1b693a22c3..a30b626ef1 100644
--- a/tests/ovn-macros.at
+++ b/tests/ovn-macros.at
@@ -47,11 +47,11 @@ m4_define([OVN_CLEANUP],[
     OVS_APP_EXIT_AND_WAIT([ovsdb-server])
 
     as northd
-    OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]])
+    OVS_APP_EXIT_AND_WAIT([ovn-northd])
 
     if test -d northd-backup; then
         as northd-backup
-        OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]])
+        OVS_APP_EXIT_AND_WAIT([ovn-northd])
     fi
 
     OVN_CLEANUP_VSWITCH([main])
@@ -71,11 +71,11 @@ m4_define([OVN_CLEANUP_AZ],[
     OVS_APP_EXIT_AND_WAIT([ovsdb-server])
 
     as $1/northd
-    OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]])
+    OVS_APP_EXIT_AND_WAIT([ovn-northd])
 
     if test -d $1/northd-backup; then
         as $1/northd-backup
-        OVS_APP_EXIT_AND_WAIT([[$NORTHD_TYPE]])
+        OVS_APP_EXIT_AND_WAIT([ovn-northd])
     fi
 
     as $1/ic
@@ -165,11 +165,6 @@ ovn_start_northd() {
         backup) suffix=-backup ;;
     esac
 
-    case ${NORTHD_TYPE:=ovn-northd} in
-        ovn-northd) ;;
-        ovn-northd-ddlog) northd_args="$northd_args --ddlog-record=${AZ:+$AZ/}northd$suffix/replay.dat -v" ;;
-    esac
-
    if test X$NORTHD_USE_PARALLELIZATION = Xyes; then
        northd_args="$northd_args --n-threads=4"
    fi
@@ -177,7 +172,7 @@ ovn_start_northd() {
     local name=${d_prefix}northd${suffix}
     echo "${prefix}starting $name"
     test -d "$ovs_base/$name" || mkdir "$ovs_base/$name"
-    as $name start_daemon $NORTHD_TYPE $northd_args -vjsonrpc \
+    as $name start_daemon ovn-northd $northd_args -vjsonrpc \
         --ovnnb-db=$OVN_NB_DB --ovnsb-db=$OVN_SB_DB
 }
 
@@ -219,10 +214,7 @@ ovn_start () {
     fi
 
     if test X$HAVE_OPENSSL = Xyes; then
-        # Create the SB DB pssl+RBAC connection. Ideally we could pre-create
-        # SB_Global and Connection with ovsdb-tool transact at DB creation
-        # time, but unfortunately that does not work, northd-ddlog will replace
-        # the SB_Global record on startup.
+        # Create the SB DB pssl+RBAC connection.
         ovn-sbctl \
             -- --id=@c create connection \
             target=\"pssl:0:127.0.0.1\" role=ovn-controller \
@@ -945,26 +937,23 @@ m4_define([OVN_POPULATE_ARP], [AT_CHECK(ovn_populate_arp__, [0], [ignore])])
 # Defines versions of the test with all combinations of northd,
 # parallelization enabled and conditional monitoring on/off.
 m4_define([OVN_FOR_EACH_NORTHD],
-  [m4_foreach([NORTHD_TYPE], [ovn-northd],
-    [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes],
-      [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
-])])])])
+  [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes],
+    [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
+])])])
 
 # Defines versions of the test with all combinations of northd and
 # parallelization enabled. To be used when the ovn-controller configuration
 # is not relevant.
 m4_define([OVN_FOR_EACH_NORTHD_NO_HV],
-  [m4_foreach([NORTHD_TYPE], [ovn-northd],
-    [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1
-])])])
+  [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes], [$1
+])])
 
 # Defines versions of the test with all combinations of northd and
 # parallelization on/off. To be used when the ovn-controller configuration
 # is not relevant and we want to test parallelization permutations.
 m4_define([OVN_FOR_EACH_NORTHD_NO_HV_PARALLELIZATION],
-  [m4_foreach([NORTHD_TYPE], [ovn-northd],
-    [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1
-])])])
+  [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no], [$1
+])])
 
 # OVN_NBCTL(NBCTL_COMMAND) adds NBCTL_COMMAND to list of commands to be run by RUN_OVN_NBCTL().
m4_define([OVN_NBCTL], [ diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at index e7910e83c6..3c2f93567a 100644 --- a/tests/ovn-northd.at +++ b/tests/ovn-northd.at @@ -29,7 +29,7 @@ m4_define([CHECK_NO_CHANGE_AFTER_RECOMPUTE], [ wait_for_ports_up fi _DUMP_DB_TABLES(before) - check as northd ovn-appctl -t NORTHD_TYPE inc-engine/recompute + check as northd ovn-appctl -t ovn-northd inc-engine/recompute check ovn-nbctl --wait=sb sync _DUMP_DB_TABLES(after) AT_CHECK([as northd diff before after], [0], [dnl @@ -357,7 +357,7 @@ ovn_init_db ovn-nb; ovn-nbctl init # test unixctl option mkdir "$ovs_base"/northd -as northd start_daemon NORTHD_TYPE --unixctl="$ovs_base"/northd/NORTHD_TYPE[].ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock +as northd start_daemon ovn-northd --unixctl="$ovs_base"/northd/ovn-northd.ctl --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock ovn-nbctl ls-add sw ovn-nbctl --wait=sb lsp-add sw p1 # northd created with unixctl option successfully created port_binding entry @@ -365,7 +365,7 @@ check_row_count Port_Binding 1 logical_port=p1 AT_CHECK([ovn-nbctl --wait=sb lsp-del p1]) # ovs-appctl exit with unixctl option -OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/]NORTHD_TYPE[.ctl], ["$ovs_base"/northd/]NORTHD_TYPE[.pid]) +OVS_APP_EXIT_AND_WAIT_BY_TARGET(["$ovs_base"/northd/ovn-northd.ctl], ["$ovs_base"/northd/ovn-northd.pid]) # Check no port_binding entry for new port as ovn-northd is not running # @@ -376,7 +376,7 @@ AT_CHECK([ovn-nbctl --timeout=10 --wait=sb sync], [142], [], [ignore]) check_row_count Port_Binding 0 logical_port=p2 # test default unixctl path -as northd start_daemon NORTHD_TYPE --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock +as northd start_daemon ovn-northd --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock ovn-nbctl --wait=sb lsp-add sw p3 # 
northd created with default unixctl path successfully created port_binding entry check_row_count Port_Binding 1 logical_port=p3 @@ -386,7 +386,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) AT_CLEANUP ]) @@ -737,7 +737,7 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) AT_CLEANUP ]) @@ -750,10 +750,10 @@ AT_SETUP([ovn-northd pause and resume]) ovn_start --backup-northd=paused get_northd_status() { - as northd ovn-appctl -t NORTHD_TYPE is-paused - as northd ovn-appctl -t NORTHD_TYPE status - as northd-backup ovn-appctl -t NORTHD_TYPE is-paused - as northd-backup ovn-appctl -t NORTHD_TYPE status + as northd ovn-appctl -t ovn-northd is-paused + as northd ovn-appctl -t ovn-northd status + as northd-backup ovn-appctl -t ovn-northd is-paused + as northd-backup ovn-appctl -t ovn-northd status } AS_BOX([Check that the backup is paused]) @@ -764,7 +764,7 @@ Status: paused ]) AS_BOX([Resume the backup]) -check as northd-backup ovs-appctl -t NORTHD_TYPE resume +check as northd-backup ovs-appctl -t ovn-northd resume OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false Status: active false @@ -781,8 +781,8 @@ check ovn-nbctl --wait=sb ls-del sw0 check_row_count Datapath_Binding 0 AS_BOX([Pause the main northd]) -check as northd ovs-appctl -t NORTHD_TYPE pause -check as northd-backup ovs-appctl -t NORTHD_TYPE pause +check as northd ovs-appctl -t ovn-northd pause +check as northd-backup ovs-appctl -t ovn-northd pause AT_CHECK([get_northd_status], [0], [true Status: paused true @@ -798,7 +798,7 @@ check_row_count Datapath_Binding 0 # Do not resume both main and backup right after each other # as there would be no guarentee of which one would become active AS_BOX([Resume the main northd]) -check as northd ovs-appctl -t NORTHD_TYPE 
resume +check as northd ovs-appctl -t ovn-northd resume OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false Status: active true @@ -806,7 +806,7 @@ Status: paused ]) AS_BOX([Resume the backup northd]) -check as northd-backup ovs-appctl -t NORTHD_TYPE resume +check as northd-backup ovs-appctl -t ovn-northd resume OVS_WAIT_FOR_OUTPUT([get_northd_status], [0], [false Status: active false @@ -831,7 +831,7 @@ check_row_count Datapath_Binding 1 # Kill northd. as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) # With ovn-northd gone, changes to nbdb won't be reflected into sbdb. # Make sure. @@ -1268,7 +1268,7 @@ AT_CLEANUP OVN_FOR_EACH_NORTHD_NO_HV([ AT_SETUP([check Load balancer health check and Service Monitor sync]) -ovn_start NORTHD_TYPE +ovn_start ovn-northd check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80,20.0.0.3:80 check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1 @@ -4624,7 +4624,7 @@ AT_SKIP_IF([expr "$PKIDIR" : ".*[[ '\" ovn_start as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as ovn-sb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) @@ -4652,7 +4652,7 @@ cp $PKIDIR/$key2 $key cp $PKIDIR/$cert3 $cert cp $PKIDIR/$cacert $cacert as northd -start_daemon ovn$NORTHD_TYPE -vjsonrpc \ +start_daemon ovn-northd -vjsonrpc \ --ovnnb-db=$OVN_NB_DB --ovnsb-db=ssl:127.0.0.1:$TCP_PORT \ -p $key -c $cert -C $cacert @@ -4664,7 +4664,7 @@ cp $PKIDIR/$key $key cp $PKIDIR/$cert $cert check ovn-nbctl --wait=sb sync -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) AT_CLEANUP ]) @@ -6506,7 +6506,7 @@ AT_CLEANUP OVN_FOR_EACH_NORTHD_NO_HV([ AT_SETUP([ovn-northd -- lrp with chassis-redirect and ls with vtep lport]) AT_KEYWORDS([multiple-l3dgw-ports]) -ovn_start NORTHD_TYPE +ovn_start check ovn-sbctl chassis-add ch1 geneve 127.0.0.2 check ovn-nbctl lr-add lr1 @@ -6601,7 +6601,7 @@ AT_CLEANUP OVN_FOR_EACH_NORTHD_NO_HV([ AT_SETUP([check options:requested-chassis fills 
requested_chassis col]) -ovn_start NORTHD_TYPE +ovn_start # Add chassis ch1. check ovn-sbctl chassis-add ch1 geneve 127.0.0.2 @@ -8190,39 +8190,39 @@ OVN_FOR_EACH_NORTHD_NO_HV([ AT_SETUP([northd-parallelization unixctl]) ovn_start -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1 ]) -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [4 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [4 ]) -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 -OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t NORTHD_TYPE parallel-build/get-n-threads], [0], [1 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 +OVS_WAIT_FOR_OUTPUT([as northd ovn-appctl -t ovn-northd parallel-build/get-n-threads], [0], [1 ]) -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 0], [2], [], +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 0], [2], [], [invalid n_threads: 0 ovn-appctl: ovn-northd: server returned an error ]) -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads -1], [2], [], +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads -1], [2], [], [invalid n_threads: -1 ovn-appctl: ovn-northd: server returned an error ]) -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 300], [2], [], +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 300], [2], [], [invalid n_threads: 300 ovn-appctl: ovn-northd: 
server returned an error ]) -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads], [2], [], +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads], [2], [], ["parallel-build/set-n-threads" command requires at least 1 arguments ovn-appctl: ovn-northd: server returned an error ]) -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 2], [2], [], +AT_CHECK([as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 2], [2], [], ["parallel-build/set-n-threads" command takes at most 1 arguments ovn-appctl: ovn-northd: server returned an error ]) @@ -8262,16 +8262,16 @@ check ovn-nbctl lrp-add lr1 lrp0 "f0:00:00:01:00:01" 10.1.255.254/16 check ovn-nbctl lr-nat-add lr1 snat 10.2.0.1 10.1.0.0/16 add_switch_ports 1 50 -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 add_switch_ports 51 100 -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 8 add_switch_ports 101 150 -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 4 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 4 add_switch_ports 151 200 -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 1 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 1 add_switch_ports 201 250 check ovn-nbctl --wait=sb sync @@ -8288,7 +8288,7 @@ ovn-sbctl dump-flows | DUMP_FLOWS_SORTED > flows2 AT_CHECK([diff flows1 flows2]) # Restart with with 8 threads -check as northd ovn-appctl -t NORTHD_TYPE parallel-build/set-n-threads 8 +check as northd ovn-appctl -t ovn-northd parallel-build/set-n-threads 8 delete_switch_ports 1 250 add_switch_ports 1 250 check ovn-nbctl --wait=sb sync @@ -8973,22 +8973,22 @@ p2_uuid=$(fetch_column nb:Logical_Switch_Port _uuid name=sw0-p2) echo "p1 uuid - $p1_uuid" ovn-nbctl 
--wait=sb sync -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats foo_as_uuid=$(ovn-nbctl create address_set name=foo addresses=\"1.1.1.1\",\"1.1.1.2\") wait_column '1.1.1.1 1.1.1.2' Address_Set addresses name=foo -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1 +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [1 ]) rm -f northd/ovn-northd.log -check as northd ovn-appctl -t NORTHD_TYPE vlog/reopen -check as northd ovn-appctl -t NORTHD_TYPE vlog/set jsonrpc:dbg -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd vlog/reopen +check as northd ovn-appctl -t ovn-northd vlog/set jsonrpc:dbg +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.3 -- \ add address_set $foo_as_uuid addresses 1.1.2.1/4 wait_column '1.1.1.1 1.1.1.2 1.1.1.3 1.1.2.1/4' Address_Set addresses name=foo # There should be no recompute of the sync_to_sb_addr_set engine node . -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 ]) CHECK_NO_CHANGE_AFTER_RECOMPUTE @@ -8996,14 +8996,14 @@ AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \ grep -c mutate], [0], [1 ]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.4 -- \ remove address_set $foo_as_uuid addresses 1.1.1.1 -- \ remove address_set $foo_as_uuid addresses 1.1.2.1/4 wait_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo # There should be no recompute of the sync_to_sb_addr_set engine node . 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 ]) CHECK_NO_CHANGE_AFTER_RECOMPUTE @@ -9013,9 +9013,9 @@ grep -c mutate], [0], [2 # Pause ovn-northd and add/remove few addresses. when it is resumed # it should use mutate for updating the address sets. -check as northd ovn-appctl -t NORTHD_TYPE pause +check as northd ovn-appctl -t ovn-northd pause -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.5 check ovn-nbctl add address_set $foo_as_uuid addresses 1.1.1.6 check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2 @@ -9023,10 +9023,10 @@ check ovn-nbctl remove address_set $foo_as_uuid addresses 1.1.1.2 check_column '1.1.1.2 1.1.1.3 1.1.1.4' Address_Set addresses name=foo # Resume northd now -check as northd ovn-appctl -t NORTHD_TYPE resume +check as northd ovn-appctl -t ovn-northd resume wait_column '1.1.1.3 1.1.1.4 1.1.1.5 1.1.1.6' Address_Set addresses name=foo # There should be recompute of the sync_to_sb_addr_set engine node . -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute) +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute) AT_CHECK([test $recompute_stat -ge 1]) AT_CHECK([grep transact northd/ovn-northd.log | grep Address_Set | \ @@ -9034,25 +9034,25 @@ grep -c mutate], [0], [3 ]) # Create a port group. This should result in recompute of sb_to_sync_addr_set engine node. 
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl pg-add pg1 wait_column '' Address_Set addresses name=pg1_ip4 -recompute_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute) +recompute_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute) AT_CHECK([test $recompute_stat -ge 1]) # Add sw0-p1 to port group pg1 -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl add port_group pg1 ports ${p1_uuid} wait_column '20.0.0.4' Address_Set addresses name=pg1_ip4 # There should be no recompute of the sync_to_sb_addr_set engine node. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 ]) # No change, no recompute -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb sync -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_addr_set recompute], [0], [0 ]) AT_CLEANUP @@ -9091,18 +9091,18 @@ $4 } AS_BOX([Create new PG1 and PG2]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb -- pg-add pg1 -- pg-add pg2 dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 - abort: 0 ]) dnl The port_group node recomputes every time a NB port group is added/deleted. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9110,7 +9110,7 @@ Node: port_group ]) dnl The port_group node is an input for the lflow node. Port_group dnl recompute/compute triggers lflow recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9125,7 +9125,7 @@ check ovn-nbctl --wait=sb \ CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Add one port from the two switches to PG1]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg1 sw1.1 sw2.1 check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1" @@ -9133,7 +9133,7 @@ check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1" dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9141,7 +9141,7 @@ Node: northd ]) dnl The port_group node recomputes also every time a port from a new switch dnl is added to the group. 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9149,7 +9149,7 @@ Node: port_group ]) dnl The port_group node is an input for the lflow node. Port_group dnl recompute/compute triggers lflow recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9160,7 +9160,7 @@ check_acl_lflows 1 0 1 0 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Add one port from the two switches to PG2]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg2 sw1.2 sw2.2 check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1" @@ -9170,7 +9170,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2" dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9178,7 +9178,7 @@ Node: northd ]) dnl The port_group node recomputes also every time a port from a new switch dnl is added to the group. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9186,7 +9186,7 @@ Node: port_group ]) dnl The port_group node is an input for the lflow node. Port_group dnl recompute/compute triggers lflow recompute (for ACLs). 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9197,7 +9197,7 @@ check_acl_lflows 1 1 1 1 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Add one more port from the two switches to PG1 and PG2]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg1 sw1.1 sw2.1 sw1.3 sw2.3 \ -- pg-set-ports pg2 sw1.2 sw2.2 sw1.3 sw2.3 @@ -9208,7 +9208,7 @@ check_column "sw2.2 sw2.3" sb:Port_Group ports name="${sw2_key}_pg2" dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9216,7 +9216,7 @@ Node: northd ]) dnl We did not change the set of switches a pg is applied to, there should be dnl no recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 0 - compute: 1 @@ -9224,7 +9224,7 @@ Node: port_group ]) dnl We did not change the set of switches a pg is applied to, there should be dnl no recompute. 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 0 - compute: 1 @@ -9235,7 +9235,7 @@ check_acl_lflows 1 1 1 1 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Remove the last port from PG1 and PG2]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg1 sw1.1 sw2.1 \ -- pg-set-ports pg2 sw1.2 sw2.2 @@ -9246,7 +9246,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2" dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9254,7 +9254,7 @@ Node: northd ]) dnl We did not change the set of switches a pg is applied to, there should be dnl no recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 0 - compute: 1 @@ -9262,7 +9262,7 @@ Node: port_group ]) dnl We did not change the set of switches a pg is applied to, there should be dnl no recompute. 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 0 - compute: 1 @@ -9273,7 +9273,7 @@ check_acl_lflows 1 1 1 1 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Remove the second port from PG2]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb pg-set-ports pg2 sw1.2 check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1" check_column "sw2.1" sb:Port_Group ports name="${sw2_key}_pg1" @@ -9283,7 +9283,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [ dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9291,7 +9291,7 @@ Node: northd ]) dnl We changed the set of switches a pg is applied to, there should be dnl a recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9299,7 +9299,7 @@ Node: port_group ]) dnl We changed the set of switches a pg is applied to, there should be dnl a recompute (for ACLs). 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9310,7 +9310,7 @@ check_acl_lflows 1 1 1 0 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Remove the second port from PG1]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb pg-set-ports pg1 sw1.1 check_column "sw1.1" sb:Port_Group ports name="${sw1_key}_pg1" AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg1"], [0], [ @@ -9321,7 +9321,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [ dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9329,7 +9329,7 @@ Node: northd ]) dnl We changed the set of switches a pg is applied to, there should be dnl a recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9337,7 +9337,7 @@ Node: port_group ]) dnl We changed the set of switches a pg is applied to, there should be dnl a recompute (for ACLs). 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9348,7 +9348,7 @@ check_acl_lflows 1 1 0 0 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Add second port to both PGs]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg1 sw1.1 sw2.1 \ -- pg-set-ports pg2 sw1.2 sw2.2 @@ -9359,7 +9359,7 @@ check_column "sw2.2" sb:Port_Group ports name="${sw2_key}_pg2" dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9367,7 +9367,7 @@ Node: northd ]) dnl We changed the set of switches a pg is applied to, there should be a dnl recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9375,7 +9375,7 @@ Node: port_group ]) dnl We changed the set of switches a pg is applied to, there should be a dnl recompute. 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl Node: lflow - recompute: 1 - compute: 0 @@ -9386,7 +9386,7 @@ check_acl_lflows 1 1 1 1 CHECK_NO_CHANGE_AFTER_RECOMPUTE AS_BOX([Remove second port from both PGs]) -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb \ -- pg-set-ports pg1 sw1.1 \ -- pg-set-ports pg2 sw1.2 @@ -9399,7 +9399,7 @@ AT_CHECK([fetch_column sb:Port_Group ports name="${sw2_key}_pg2"], [0], [ dnl The northd node should not recompute, it should handle nb_global update dnl though, therefore "compute: 1". -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd], [0], [dnl Node: northd - recompute: 0 - compute: 1 @@ -9407,7 +9407,7 @@ Node: northd ]) dnl We changed the set of switches a pg is applied to, there should be a dnl recompute. -AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats port_group], [0], [dnl +AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats port_group], [0], [dnl Node: port_group - recompute: 1 - compute: 0 @@ -9415,7 +9415,7 @@ Node: port_group ]) dnl We changed the set of switches a pg is applied to, there should be a dnl recompute. 
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow], [0], [dnl
+AT_CHECK([as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow], [0], [dnl
 Node: lflow
 - recompute: 1
 - compute: 0
@@ -9724,7 +9724,7 @@ check ovn-sbctl chassis-add hv1 geneve 127.0.0.1 \
 check ovn-nbctl --wait=sb sync
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
+AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
 ct_no_masked_label: true
 ct_lb_related: true
 mac_binding_timestamp: true
@@ -9739,7 +9739,7 @@ check ovn-sbctl chassis-add hv2 geneve 127.0.0.2 \
 check ovn-nbctl --wait=sb sync
-AT_CHECK([as northd ovn-appctl -t NORTHD_TYPE debug/chassis-features-list], [0], [dnl
+AT_CHECK([as northd ovn-appctl -t ovn-northd debug/chassis-features-list], [0], [dnl
 ct_no_masked_label: true
 ct_lb_related: true
 mac_binding_timestamp: true
@@ -10012,14 +10012,14 @@ check ovn-nbctl ls-add sw
 check ovn-nbctl lsp-add sw p1
 check ovn-nbctl --wait=sb sync
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lsp-set-options p1 foo=bar
-sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
+sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
 AT_CHECK([test x$sb_lb_recomp = x0])
 check ovn-nbctl --wait=sb lsp-set-type p1 external
-sb_lb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_lb recompute)
+sb_lb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_lb recompute)
 AT_CHECK([test x$sb_lb_recomp != x0])
 AT_CLEANUP
@@ -10043,13 +10043,13 @@ check_recompute_counter() {
     sync_sb_pb_recomp_min=$5
     sync_sb_pb_recomp_max=$6
-    northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
+    northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
     AT_CHECK([test $northd_recomp -ge $northd_recomp_min && test $northd_recomp -le $northd_recomp_max])
-    lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
+    lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
     AT_CHECK([test $lflow_recomp -ge $lflow_recomp_min && test $lflow_recomp -le $lflow_recomp_max])
-    sync_sb_pb_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_to_sb_pb recompute)
+    sync_sb_pb_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_to_sb_pb recompute)
     AT_CHECK([test $sync_sb_pb_recomp -ge $sync_sb_pb_recomp_min && test $sync_sb_pb_recomp -le $sync_sb_pb_recomp_max])
 }
@@ -10062,32 +10062,32 @@ ovs-vsctl add-port br-int lsp-pilot -- set interface lsp-pilot external_ids:ifac
 wait_for_ports_up
 check ovn-nbctl --wait=hv sync
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-add ls0 lsp0-0 -- lsp-set-addresses lsp0-0 "unknown"
 ovs-vsctl add-port br-int lsp0-0 -- set interface lsp0-0 external_ids:iface-id=lsp0-0
 wait_for_ports_up
 check ovn-nbctl --wait=hv sync
 check_recompute_counter 4 5 5 5 5 5
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
 ovs-vsctl add-port br-int lsp0-1 -- set interface lsp0-1 external_ids:iface-id=lsp0-1
 wait_for_ports_up
 check ovn-nbctl --wait=hv sync
 check_recompute_counter 0 0 0 0 0 0
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-add ls0 lsp0-2 -- lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:02 192.168.0.12"
 ovs-vsctl add-port br-int lsp0-2 -- set interface lsp0-2 external_ids:iface-id=lsp0-2
 wait_for_ports_up
 check ovn-nbctl --wait=hv sync
 check_recompute_counter 0 0 0 0 0 0
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-del lsp0-1
 check_recompute_counter 0 0 0 0 0 0
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:88 192.168.0.88"
 check_recompute_counter 0 0 0 0 0 0
@@ -10106,7 +10106,7 @@ done
 check_recompute_counter 0 0 0 0 0 0
 # No change, no recompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb sync
 check_recompute_counter 0 0 0 0 0 0
@@ -10118,7 +10118,7 @@ ovn-nbctl dhcp-options-create 192.168.0.0/24
 CIDR_UUID=$(ovn-nbctl --bare --columns=_uuid find dhcp_options cidr="192.168.0.0/24")
 ovn-nbctl dhcp-options-set-options $CIDR_UUID lease_time=3600 router=192.168.0.1 server_id=192.168.0.1 server_mac=c0:ff:ee:00:00:01 hostname="\"foo\""
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 ovn-nbctl --wait=sb lsp-set-dhcpv4-options lsp0-2 $CIDR_UUID
 check_recompute_counter 0 0 0 0 0 0
@@ -10129,7 +10129,7 @@ check ovn-nbctl lsp-set-addresses lsp0-2 "aa:aa:aa:00:00:01 192.168.0.11 aef0::4
 d1="$(ovn-nbctl create DHCP_Options cidr="aef0\:\:/64" \
     options="\"server_id\"=\"00:00:00:10:00:01\"")"
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 ovn-nbctl --wait=sb lsp-set-dhcpv6-options lsp0-2 ${d1}
 check_recompute_counter 0 0 0 0 0 0
@@ -10146,10 +10146,10 @@ AT_SETUP([LSP incremental processing with only router ports before and after add
 ovn_start
 check_recompute_counter() {
-    northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
+    northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
     AT_CHECK([test x$northd_recomp = x$1])
-    lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
+    lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
     AT_CHECK([test x$lflow_recomp = x$2])
 }
@@ -10166,7 +10166,7 @@ ovn-nbctl lb-add lb0 192.168.0.10:80 10.0.0.10:8080
 check ovn-nbctl --wait=sb ls-lb-add ls0 lb0
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 # Add a lsp. northd and lflow engine shouldn't recompute even though this is
 # the first lsp added after the router ports.
 check ovn-nbctl --wait=hv lsp-add ls0 lsp0-1 -- lsp-set-addresses lsp0-1 "aa:aa:aa:00:00:01 192.168.0.11"
@@ -10175,7 +10175,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Delete the lsp. northd and lflow engine shouldn't recompute even though
 # the logical switch is now left with only router ports.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=hv lsp-del lsp0-1
 check_recompute_counter 0 0
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
@@ -10213,7 +10213,7 @@ wait_row_count port_binding $(($n + 1))
 # Delete multiple ports, and one of them not incrementally processible. This is
 # to trigger partial I-P and then fall back to recompute.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 args="--wait=hv lsp-del lsp0-foo"
 for i in $(seq $n); do
     args="$args -- lsp-del lsp0-$i"
@@ -10221,7 +10221,7 @@ done
 check ovn-nbctl $args
 wait_row_count Port_Binding 0
-northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
+northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
 echo northd_recomp $northd_recomp
 AT_CHECK([test $northd_recomp -ge 1])
@@ -10234,26 +10234,26 @@ AT_SETUP([ACL/Meter incremental processing - no northd recompute])
 ovn_start
 check_recompute_counter() {
-    northd_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats northd recompute)
+    northd_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats northd recompute)
     AT_CHECK([test x$northd_recomp = x$1])
-    lflow_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats lflow recompute)
+    lflow_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats lflow recompute)
     AT_CHECK([test x$lflow_recomp = x$2])
-    sync_meters_recomp=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats sync_meters recompute)
+    sync_meters_recomp=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats sync_meters recompute)
     AT_CHECK([test x$sync_meters_recomp = x$3])
 }
 check ovn-nbctl --wait=sb ls-add ls
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb meter-add m drop 1 pktps
 check ovn-nbctl --wait=sb acl-add ls from-lport 1 1 allow
 dnl Only triggers recompute of the sync_meters and lflow nodes.
 check_recompute_counter 0 2 2
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb meter-del m
 check ovn-nbctl --wait=sb acl-del ls
 dnl Only triggers recompute of the sync_meters and lflow nodes.
@@ -10325,7 +10325,7 @@ check ovn-sbctl chassis-add local-ch0 geneve 127.0.0.2
 wait_row_count Chassis 2
 remote_chassis_uuid=$(fetch_column Chassis _uuid name=remote-ch0)
-as northd ovn-appctl -t NORTHD_TYPE vlog/set dbg
+as northd ovn-appctl -t ovn-northd vlog/set dbg
 check ovn-nbctl ls-add sw0
 check ovn-nbctl lsp-add sw0 sw0-r1 -- lsp-set-type sw0-r1 remote
@@ -10392,7 +10392,7 @@ check_engine_stats() {
     echo "__file__:__line__: Checking engine stats for node $node : recompute - \
$recompute : compute - $compute"
-    node_stat=$(as northd ovn-appctl -t NORTHD_TYPE inc-engine/show-stats $node)
+    node_stat=$(as northd ovn-appctl -t ovn-northd inc-engine/show-stats $node)
     # node_stat will be of this format :
     # - Node: lflow - recompute: 3 - compute: 0 - abort: 0
     node_recompute_ct=$(echo $node_stat | cut -d '-' -f2 | cut -d ':' -f2)
@@ -10420,7 +10420,7 @@ $recompute : compute - $compute"
 # Test I-P for load balancers.
 # Presently ovn-northd handles I-P for NB LBs in northd_lb_data engine node
 # only.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
 check_engine_stats lb_data norecompute compute
@@ -10429,7 +10429,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer . ip_port_mappings:10.0.0.3=sw0-p1:10.0.0.2
 check_engine_stats lb_data norecompute compute
@@ -10450,7 +10450,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb -- lb-del lb2 -- lb-del lb3
 check_engine_stats lb_data norecompute compute
@@ -10459,7 +10459,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 AT_CHECK([ovn-nbctl --wait=sb \
     -- --id=@hc create Load_Balancer_Health_Check vip="10.0.0.10\:80" \
@@ -10473,7 +10473,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute
 # Any change to load balancer health check should also result in full recompute
 # of northd node (but not northd_lb_data node)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer_health_check . options:foo=bar1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10481,21 +10481,21 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 # Delete the health check from the load balancer. northd engine node should do a full recompute.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear Load_Balancer . health_check
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl ls-add sw0
 check ovn-nbctl --wait=sb lr-add lr0
 ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
 ovn-nbctl lsp-add sw0 sw0-lr0
 ovn-nbctl lsp-set-type sw0-lr0 router
 ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 ovn-nbctl --wait=sb lsp-set-options sw0-lr0 router-port=lr0-sw0
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10503,7 +10503,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 # Associate lb1 to sw0. There should be no recompute of northd engine node
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10515,7 +10515,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Modify the backend of the lb1 vip
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10524,7 +10524,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Cleanup the vip of lb1.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10533,7 +10533,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Set the vips of lb1 back
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10542,7 +10542,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Add another vip to lb1
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10551,7 +10551,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Disassociate lb1 from sw0. There should be a full recompute of northd engine node.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-del sw0 lb1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10561,7 +10561,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE(1)
 # Associate lb1 to sw0 and also create a port sw0p1. This should not result in
 # full recompute of northd, but should rsult in full recompute of lflow node.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-add sw0 lb1 -- lsp-add sw0 sw0p1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10569,10 +10569,10 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 # Disassociate lb1 from sw0. There should be a recompute of northd engine node.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-del sw0 lb1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10581,7 +10581,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Add lb1 to lr0 and then disassociate
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lr-lb-add lr0 lb1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10590,7 +10590,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Modify the backend of the lb1 vip
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10599,7 +10599,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Cleanup the vip of lb1.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10608,7 +10608,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Set the vips of lb1 back
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10617,7 +10617,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Add another vip to lb1
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10625,7 +10625,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lr-lb-del lr0 lb1
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10634,7 +10634,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Test load balancer group now
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 lbg1_uuid=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg1)
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10642,19 +10642,19 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 lb1_uuid=$(fetch_column nb:Load_Balancer _uuid)
 # Add lb to the lbg1 group
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb add load_balancer_group . load_Balancer $lb1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear load_balancer_group . load_Balancer
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10662,7 +10662,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 # Add back lb to the lbg1 group
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb add load_balancer_group . load_Balancer $lb1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10671,7 +10671,7 @@ check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10679,7 +10679,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 # Update lb and this should not result in northd recompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer . options:bar=foo
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10687,7 +10687,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 # Modify the backend of the lb1 vip
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10696,7 +10696,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Cleanup the vip of lb1.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10705,7 +10705,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Set the vips of lb1 back
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10714,7 +10714,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Add another vip to lb1
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10722,14 +10722,14 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl add logical_router lr0 load_balancer_group $lbg1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10738,7 +10738,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Modify the backend of the lb1 vip
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer lb1 vips:'"10.0.0.10:80"'='"10.0.0.100:80"'
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10747,7 +10747,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Cleanup the vip of lb1.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear load_Balancer lb1 vips
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10756,7 +10756,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Set the vips of lb1 back
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.10:80 10.0.0.3:80
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10765,7 +10765,7 @@ check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Add another vip to lb1
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.20:80 10.0.0.30:8080
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10773,7 +10773,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear logical_router lr0 load_balancer_group
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10781,14 +10781,14 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 # Add back lb group to logical switch and then delete it.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb add logical_switch sw0 load_balancer_group $lbg1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb clear logical_switch sw0 load_balancer_group -- \
     destroy load_balancer_group $lbg1_uuid
 check_engine_stats lb_data norecompute compute
@@ -10812,21 +10812,21 @@ lb2_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb2)
 lb3_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb3)
 lb4_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb4)
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 lbg1_uuid=$(ovn-nbctl --wait=sb create load_balancer_group name=lbg1)
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set load_balancer_group . load_balancer="$lb2_uuid,$lb3_uuid,$lb4_uuid"
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
 check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set logical_switch sw0 load_balancer_group=$lbg1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10834,7 +10834,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb set logical_router lr1 load_balancer_group=$lbg1_uuid
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10842,7 +10842,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-add sw0 lb2
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10850,7 +10850,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-add sw0 lb3
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10858,7 +10858,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lr-lb-add lr1 lb1
 check ovn-nbctl --wait=sb lr-lb-add lr1 lb2
 check_engine_stats lb_data norecompute compute
@@ -10867,7 +10867,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute compute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb ls-lb-del sw0 lb2
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10875,7 +10875,7 @@ check_engine_stats lflow recompute nocompute
 check_engine_stats sync_to_sb_lb recompute nocompute
 CHECK_NO_CHANGE_AFTER_RECOMPUTE
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lr-lb-del lr1 lb2
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd recompute nocompute
@@ -10885,7 +10885,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Deleting lb4 should not result in lflow recompute as it is
 # only associated with logical switch sw0.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
 check ovn-nbctl --wait=sb lb-del lb4
 check_engine_stats lb_data norecompute compute
 check_engine_stats northd norecompute compute
@@ -10895,7 +10895,7 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
 # Deleting lb2 should result in lflow recompute as it is
 # associated with logical router lr1 through lb group.
-check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb lb-del lb2 check_engine_stats lb_data norecompute compute check_engine_stats northd norecompute compute @@ -10903,7 +10903,7 @@ check_engine_stats lflow recompute nocompute check_engine_stats sync_to_sb_lb recompute compute CHECK_NO_CHANGE_AFTER_RECOMPUTE -check as northd ovn-appctl -t NORTHD_TYPE inc-engine/clear-stats +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats check ovn-nbctl --wait=sb remove load_balancer_group . load_balancer $lb3_uuid check_engine_stats lb_data norecompute compute check_engine_stats northd recompute nocompute diff --git a/tests/ovn.at b/tests/ovn.at index e8c79512b2..67c4ccd39b 100644 --- a/tests/ovn.at +++ b/tests/ovn.at @@ -7248,7 +7248,7 @@ compare_dhcp_packets 1 # Stop ovn-northd so that we can modify the northd_version. as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) northd_version=$(ovn-sbctl get SB_Global . 
options:northd_internal_version | sed s/\"//g) echo "northd version = $northd_version" @@ -8805,7 +8805,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) AT_CLEANUP ]) @@ -9611,8 +9611,8 @@ check test "$c6_tag" != "$c3_tag" AS_BOX([restart northd and make sure tag allocation is stable]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) -start_daemon NORTHD_TYPE \ +OVS_APP_EXIT_AND_WAIT([ovn-northd]) +start_daemon ovn-northd \ --ovnnb-db=unix:"$ovs_base"/ovn-nb/ovn-nb.sock \ --ovnsb-db=unix:"$ovs_base"/ovn-sb/ovn-sb.sock @@ -25547,7 +25547,7 @@ check_row_count Service_Monitor 0 # Let's also be sure the warning message about SCTP load balancers is # is in the ovn-northd log -AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/NORTHD_TYPE.log`]) +AT_CHECK([test 1 = `grep -c "SCTP load balancers do not currently support health checks" northd/ovn-northd.log`]) AT_CLEANUP ]) @@ -30791,7 +30791,7 @@ check ovn-nbctl --wait=hv ls-lb-del sw0 lb-ipv6 # original destination tuple. # # ovn-controller should fall back to matching on ct_nw_dst()/ct_tp_dst(). -as northd ovn-appctl -t NORTHD_TYPE pause +as northd ovn-appctl -t ovn-northd pause check ovn-sbctl \ -- remove load_balancer lb-ipv4-tcp options hairpin_orig_tuple \ @@ -30840,7 +30840,7 @@ OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=70 | ofctl_strip_a ]) # Resume ovn-northd. -as northd ovn-appctl -t NORTHD_TYPE resume +as northd ovn-appctl -t ovn-northd resume check ovn-nbctl --wait=hv sync as hv2 ovs-vsctl del-port hv2-vif1 @@ -30959,7 +30959,7 @@ AT_CHECK([grep -c $northd_version hv1/ovn-controller.log], [0], [1 # Stop ovn-northd so that we can modify the northd_version. as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) check ovn-sbctl set SB_Global . 
options:northd_internal_version=foo @@ -31101,7 +31101,7 @@ OVS_WAIT_UNTIL([test `ovs-vsctl get Interface lsp2 external_ids:ovn-installed` = AS_BOX([ovn-controller should not reset Port_Binding.up without northd]) # Pause northd and clear the "up" field to simulate older ovn-northd # versions writing to the Southbound DB. -as northd ovn-appctl -t NORTHD_TYPE pause +as northd ovn-appctl -t ovn-northd pause as hv1 ovn-appctl -t ovn-controller debug/pause check ovn-sbctl clear Port_Binding lsp1 up @@ -31116,7 +31116,7 @@ check_column "" Port_Binding up logical_port=lsp1 # Once northd should explicitly set the Port_Binding.up field to 'false' and # ovn-controller sets it to 'true' as soon as the update is processed. -as northd ovn-appctl -t NORTHD_TYPE resume +as northd ovn-appctl -t ovn-northd resume wait_column "true" Port_Binding up logical_port=lsp1 wait_column "true" nb:Logical_Switch_Port up name=lsp1 @@ -31270,7 +31270,7 @@ check ovn-nbctl lsp-set-addresses sw0-p4 "00:00:00:00:00:04 192.168.47.4" # Pause ovn-northd. When it is resumed, all the below NB updates # will be sent in one transaction. -check as northd ovn-appctl -t NORTHD_TYPE pause +check as northd ovn-appctl -t ovn-northd pause check ovn-nbctl lsp-add sw0 sw0-p1 check ovn-nbctl lsp-set-addresses sw0-p1 "00:00:00:00:00:01 192.168.47.1" @@ -31282,7 +31282,7 @@ check ovn-nbctl acl-add pg1 to-lport 1002 "outport == @pg1 && ip4 && ip4.src == # resume ovn-northd now. This should result in a single update message # from SB ovsdb-server to ovn-controller for all the above NB updates. 
-check as northd ovn-appctl -t NORTHD_TYPE resume +check as northd ovn-appctl -t ovn-northd resume AS_BOX([Wait for sw0-p1 and sw0-p2 to be up]) wait_for_ports_up sw0-p1 sw0-p2 diff --git a/tests/ovs-macros.at b/tests/ovs-macros.at index 5b8da2048a..35760cf06b 100644 --- a/tests/ovs-macros.at +++ b/tests/ovs-macros.at @@ -5,13 +5,11 @@ m4_include([m4/compat.m4]) dnl Make AT_SETUP automatically do some things for us: dnl - Run the ovs_init() shell function as the first step in every test. -dnl - If NORTHD_TYPE is defined, then append it to the test name and +dnl - If NORTHD_USE_DP_GROUPS is defined, then append it to the test name and dnl set it as a shell variable as well. m4_rename([AT_SETUP], [OVS_AT_SETUP]) m4_define([AT_SETUP], - [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_TYPE], [ -- NORTHD_TYPE])[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) -m4_ifdef([NORTHD_TYPE], [[NORTHD_TYPE]=NORTHD_TYPE -])dnl + [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL])) m4_ifdef([NORTHD_USE_PARALLELIZATION], [[NORTHD_USE_PARALLELIZATION]=NORTHD_USE_PARALLELIZATION ])dnl m4_ifdef([NORTHD_DUMMY_NUMA], [[NORTHD_DUMMY_NUMA]=NORTHD_DUMMY_NUMA diff --git a/tests/perf-northd.at b/tests/perf-northd.at index ca115dadc2..18fb209146 100644 --- a/tests/perf-northd.at +++ b/tests/perf-northd.at @@ -60,7 +60,7 @@ m4_define([PARSE_STOPWATCH], [ # to performance results. 
# m4_define([PERF_RECORD_STOPWATCH], [ - PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/NORTHD_TYPE stopwatch/show $1 | PARSE_STOPWATCH($2)`]) + PERF_RECORD_RESULT($3, [`ovn-appctl -t northd/ovn-northd stopwatch/show $1 | PARSE_STOPWATCH($2)`]) ]) # PERF_RECORD() diff --git a/tests/system-common-macros.at b/tests/system-common-macros.at index 65c4884ea1..4bfc74582c 100644 --- a/tests/system-common-macros.at +++ b/tests/system-common-macros.at @@ -494,7 +494,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d diff --git a/tests/system-ovn-kmod.at b/tests/system-ovn-kmod.at index a81bae7133..93fd962004 100644 --- a/tests/system-ovn-kmod.at +++ b/tests/system-ovn-kmod.at @@ -507,7 +507,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -801,7 +801,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -947,7 +947,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -1113,7 +1113,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -1263,7 +1263,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP([" diff --git a/tests/system-ovn.at b/tests/system-ovn.at index c454526073..0abf2828a2 100644 --- a/tests/system-ovn.at +++ 
b/tests/system-ovn.at @@ -171,7 +171,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -351,7 +351,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -463,7 +463,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -575,7 +575,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -797,7 +797,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -1023,7 +1023,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -1337,7 +1337,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -1620,7 +1620,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) @@ -1841,7 +1841,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) @@ -1949,7 
+1949,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) @@ -2059,7 +2059,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"]) @@ -2304,7 +2304,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -2399,7 +2399,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -2556,7 +2556,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -2727,7 +2727,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -2900,7 +2900,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3119,7 +3119,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3262,7 +3262,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3405,7 +3405,7 @@ as ovn-nb 
OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3586,7 +3586,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3747,7 +3747,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -3930,7 +3930,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4098,7 +4098,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4175,7 +4175,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4341,7 +4341,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4565,7 +4565,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4748,7 +4748,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4851,7 +4851,7 @@ as ovn-nb 
OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -4949,7 +4949,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5195,7 +5195,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5444,7 +5444,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5567,7 +5567,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5687,7 +5687,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5796,7 +5796,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5905,7 +5905,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -5999,7 +5999,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -6173,7 +6173,7 @@ as ovn-nb 
OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -6366,7 +6366,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -6415,7 +6415,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -6506,7 +6506,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d @@ -6742,7 +6742,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d @@ -6893,7 +6893,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d @@ -7022,7 +7022,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -7147,7 +7147,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d @@ -7264,7 +7264,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -7479,7 +7479,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd 
-OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d @@ -7622,7 +7622,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -7764,7 +7764,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -7865,7 +7865,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -7966,7 +7966,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8066,7 +8066,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8423,7 +8423,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8552,7 +8552,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8616,7 +8616,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8713,7 +8713,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) 
+OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8822,7 +8822,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -8896,7 +8896,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9097,7 +9097,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9247,7 +9247,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9398,7 +9398,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9526,7 +9526,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9677,7 +9677,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9822,7 +9822,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -9933,7 +9933,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) 
+OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10075,7 +10075,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10289,7 +10289,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10434,7 +10434,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10599,7 +10599,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10753,7 +10753,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -10889,7 +10889,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11215,7 +11215,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11393,7 +11393,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11535,7 +11535,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) 
+OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11624,7 +11624,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11697,7 +11697,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d @@ -11811,7 +11811,7 @@ as ovn-nb OVS_APP_EXIT_AND_WAIT([ovsdb-server]) as northd -OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE]) +OVS_APP_EXIT_AND_WAIT([ovn-northd]) as OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d diff --git a/tutorial/ovs-sandbox b/tutorial/ovs-sandbox index 2219b0e99e..787adb87a7 100755 --- a/tutorial/ovs-sandbox +++ b/tutorial/ovs-sandbox @@ -71,8 +71,6 @@ ovssrcdir= schema= installed=false built=false -ddlog=false -ddlog_record=true ic_sb_schema= ic_nb_schema= ovn_rbac=true @@ -147,8 +145,6 @@ General options: -S, --schema=FILE use FILE as vswitch.ovsschema OVN options: - --ddlog use ovn-northd-ddlog - --no-ddlog-record do not record ddlog transactions (for performance) --no-ovn-rbac disable role-based access control for OVN --n-northds=NUMBER run NUMBER copies of northd (default: 1) --n-ics=NUMBER run NUMBER copies of ic (default: 1) @@ -242,12 +238,6 @@ EOF --gdb-ovn-controller-vtep) gdb_ovn_controller_vtep=true ;; - --ddlog) - ddlog=true - ;; - --no-ddlog-record | --no-record-ddlog) - ddlog_record=false - ;; --no-ovn-rbac) ovn_rbac=false ;; @@ -680,17 +670,10 @@ for i in $(seq $n_ics); do done northd_args= -if $ddlog; then - OVN_NORTHD=ovn-northd-ddlog -else - OVN_NORTHD=ovn-northd -fi +OVN_NORTHD=ovn-northd for i in $(seq $n_northds); do if [ $i -eq 1 ]; then inst=""; else inst=$i; fi - if $ddlog && $ddlog_record; then - northd_args=--ddlog-record=replay$inst.txt - fi rungdb $gdb_ovn_northd 
$gdb_ovn_northd_ex $OVN_NORTHD --detach \ --no-chdir --pidfile=$OVN_NORTHD$inst.pid -vconsole:off \ --log-file=$OVN_NORTHD$inst.log -vsyslog:off \ diff --git a/utilities/ovn-ctl b/utilities/ovn-ctl index dc8865abf8..876565c801 100755 --- a/utilities/ovn-ctl +++ b/utilities/ovn-ctl @@ -786,7 +786,6 @@ set_defaults () { OVN_CONTROLLER_WRAPPER= OVSDB_NB_WRAPPER= OVSDB_SB_WRAPPER= - OVN_NORTHD_DDLOG=no OVSDB_DISABLE_FILE_COLUMN_DIFF=no @@ -1031,9 +1030,6 @@ Options: --db-sb-relay-remote Specifies upstream cluster/server remote for ovsdb relay --db-sb-relay-use-remote-in-db=no|yes OVN_Sorthbound db listen on target connection table (default: $DB_SB_RELAY_USE_REMOTE_IN_DB) - - --ovn-northd-ddlog=yes|no whether we should run the DDlog version - of ovn-northd. The default is "no". -h, --help display this help message File location options: @@ -1212,11 +1208,7 @@ do esac done -if test X"$OVN_NORTHD_DDLOG" = Xyes; then - OVN_NORTHD_BIN=ovn-northd-ddlog -else - OVN_NORTHD_BIN=ovn-northd -fi +OVN_NORTHD_BIN=ovn-northd case $command in start_northd)
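
The bulk of this patch is a mechanical NORTHD_TYPE -> ovn-northd rename across the test suite. As a side note, one way to confirm that a tree has no stragglers after such a rename can be sketched as below; this is not part of the patch, and the file names are hypothetical stand-ins rather than the real test tree:

```shell
#!/bin/sh
# Hypothetical post-apply sanity check (not part of the patch): after the
# NORTHD_TYPE -> ovn-northd rename, no test file should still reference the
# old m4 symbol.  Simulated here with two stand-in files instead of the
# real tree; against a real checkout one would grep tests/ directly.
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# One fully converted file and one with a leftover reference.
printf 'OVS_APP_EXIT_AND_WAIT([ovn-northd])\n' > "$tmp/converted.at"
printf 'OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])\n' > "$tmp/stale.at"

# grep -rl lists the files that still contain the old symbol.
set -- $(grep -rl 'NORTHD_TYPE' "$tmp")
msg="files with stale references: $#"
echo "$msg"
```

A clean tree would report zero files, at which point a follow-up removal of the `NORTHD_TYPE` m4 machinery (as this patch does in ovs-macros.at) is safe.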