
[ovs-dev,v5] northd: Use next-hop network for SNAT when lb_force_snat_ip=router_ip.

Message ID 20250319160240.2102058-1-rosemarie@redhat.com
State Accepted
Delegated to: Dumitru Ceara

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed

Commit Message

Rosemarie O'Riorden March 19, 2025, 4:02 p.m. UTC
When a gateway router has a load balancer configured, the option
lb_force_snat_ip=router_ip can be set so that OVN SNATs load balanced
packets to the logical router's egress interface's IP address, that is, the
port chosen as "outport" in the lr_in_ip_routing stage.

However, this was only designed to work when one network was configured
on the logical router outport. When multiple networks were configured,
OVN's behavior was to choose the lexicographically first IP address for
SNAT. This often led to an incorrect address being used for SNAT.

To fix this, two main components have been added:
 1. A new flag, flags.network_id. It is 4 bits and stores an index.
 2. A new stage in the router ingress pipeline, lr_in_network_id.

Now in the stage lr_in_network_id, OVN generates flows that assign
flags.network_id an index, chosen by looping through the networks on the
port and selecting the one that contains the IP of the next hop.
flags.network_id is then matched on later in lr_out_snat, and the network at
that index is chosen for SNAT.

However, if there are more than 16 networks, flags.network_id will be 0
for networks 17 and up. For those networks, the lexicographically first
network will be chosen for SNAT.
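The 16-network cap follows directly from the flag's bit range. A small
sketch of the arithmetic, using the bit positions the patch defines
(MLF_NETWORK_ID_START_BIT = 28, MLF_NETWORK_ID_END_BIT = 31); the helper
function is hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* flags.network_id occupies MFF_LOG_FLAGS_REG bits 28..31, so the largest
 * index it can carry is (1 << 4) - 1 = 15. */
#define MLF_NETWORK_ID_START_BIT 28
#define MLF_NETWORK_ID_END_BIT   31

#define OVN_MAX_NETWORK_ID \
    ((1 << (MLF_NETWORK_ID_END_BIT - MLF_NETWORK_ID_START_BIT + 1)) - 1)

/* Index actually stored for the i-th network: indexes 0..15 are kept,
 * anything past the 16th network falls back to 0, i.e. the
 * lexicographically first IP. */
static size_t
stored_network_id(size_t i)
{
    return i > OVN_MAX_NETWORK_ID ? 0 : i;
}
```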

There are also two lower-priority flows: one that preserves the old behavior
to make upgrades smooth, and another that sets flags.network_id to zero when
the next-hop doesn't belong to any of the port's networks.

Three tests have been added to verify that:
 1. The correct network is chosen for SNAT.
 2. The new and updated flows with flags.network_id are correct.
 3. When there's one IPv6 addr and no IPv4 addr, the priority-105 flow is not
    added.

And tests that were broken by this new behavior have been updated.

Reported-at: https://issues.redhat.com/browse/FDP-871
Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2024-October/417717.html
Signed-off-by: Rosemarie O'Riorden <rosemarie@redhat.com>
---
v5:
- Increment OVN_INTERNAL_MINOR_VER because I add a logical stage
- Add another catch-all flow, priority 105, to set flags.network_id to 0 in
  case next-hop doesn't belong to any port networks
- Add a comment to explain other lower-priority flow
- Add "ip4" or "ip6" to match for Network ID stage flows to prevent undefined
  behavior (in case IPv4 and IPv6 next-hop registers have same first 32 bits)
- Drop comments with specific number of networks, documentation is enough and
  code is readable
- Move network_id assignment after warning for readability
- Fix spacing (new lines, indentation)
- Update commit message to contain all 3 tests
- Update documentation to cover IPv6 as well, and new priority-105 flow and
  other priority-105 flow. Also add a better explanation for snat stage and
  remove redundancy.
- Tests:
  - Wrap tests in OVN_FOR_EACH_NORTHD_NO_HV
  - Remove random typo
  - Replace "--wait=hv" with "--wait=sb" for ovn-northd.at tests
  - Add check that ovn-northd.log contains warning for adding over 16 networks
  - Fix flow grep
  - For "one IPv6 addr" test, check for case when an explicit IPv6 addr is
    added (instead of just LLA)
  - Fix indentation

v4:
- Change "op->lrp_networks.ipv4_addrs" to "op->lrp_networks.ipv6_addrs" in
  error message for IPv6
- Add flow with old behavior only if the router port has at least one IPv4
  address, or 2 IPv6 addresses (2 because LLA is always there).
- Add ovn-northd.at test to cover case when LRP has just a single IPv6 address
  (LLA) and no IPv4. Make sure priority-105 flow is not added.
- Fixed tests where flows were affected by changes

v3:
- Fix "16 bits" typo, flags.network_id 4 bits (fits 16 networks)
- Replace inconsistent comments (networks "17 and up" get network_id=0)
- Define OVN_MAX_NETWORK_ID macro, use instead of hardcoding 15 or 16
- Use OVN_MAX_NETWORK_ID while defining MLF_NETWORK_ID
- Fix incorrect expression resulting in wrong hex value
- Remove extra quotations around port name, resulting in double quotes and thus
  invalid flows
- Make priority-100 flow (with old behavior) 105 instead
- Remove "flags.force_snat_for_lb == 1" from match in new stage, set network_id
  for all IP packets instead
- Change function name for new stage since it's no longer restricted to "force
  snat for lb" case
- Update affected test flows
- Fix comment that was too long, improve wording
- Replace "routerip" with "router_ip"

v2:
- Improve wording of commit message and add missing explanation
- Add Reported-at tag
- Correct hex value for MLF_NETWORK_ID in include/ovn/logical-fields.h
- In northd/northd.c:
  - Change behavior to set flags.network_id=0 for networks 16 and up
  - Add flow with old behavior for consistency during upgrades
  - Fix redundant loop conditions
  - Remove code I accidentally included in v1
  - Fix indentation in multiple spots
  - Remove incorrect changes in build_lrouter_nat_defrag_and_lb()
- Fix documentation, add new stage
- Move end-to-end test from ovn-northd.at to ovn.at
- Add check before all ovn and ovs commands in test
- Remove unnecessary section included in test
- Updated all flows in tests that were affected by new changes
- Added more IP addresses to the test to cover behavior with >16 networks
---
 include/ovn/logical-fields.h |  12 ++-
 lib/logical-fields.c         |   3 +
 lib/ovn-util.c               |   2 +-
 northd/northd.c              | 189 +++++++++++++++++++++++++++++------
 northd/northd.h              |   3 +-
 northd/ovn-northd.8.xml      |  89 +++++++++++++++--
 tests/ovn-northd.at          | 166 +++++++++++++++++++++++++++---
 tests/ovn.at                 |  77 ++++++++++++++
 8 files changed, 485 insertions(+), 56 deletions(-)

Comments

Ilya Maximets March 21, 2025, 2:47 p.m. UTC | #1
On 3/19/25 17:02, Rosemarie O'Riorden wrote:
> When a gateway router has a load balancer configured, the option
> lb_force_snat_ip=router_ip can be set so that OVN SNATs load balanced
> packets to the logical router's egress interface's IP address, that is, the
> port chosen as "outport" in the lr_in_ip_routing stage.
> 
> However, this was only designed to work when one network was configured
> on the logical router outport. When multiple networks were configured,
> OVN's behavior was to simply choose the lexicographically first IP address for
> SNAT. This often lead to an incorrect address being used for SNAT.
> 
> To fix this, two main components have been added:
>  1. A new flag, flags.network_id. It is 4 bits and stores an index.
>  2. A new stage in the router ingress pipeline, lr_in_network_id.
> 
> Now in the stage lr_in_network_id, OVN generates flows that assign
> flags.network_id with an index, which is chosen by looping through the networks
> on the port, and assigning flags.network_id = i.

nit: "chosen by looping through the networks" doesn't actually explain how
they are being chosen.  We should say something like:

 Now in the stage lr_in_network_id, OVN generates flows that assign
 flags.network_id with an index, which is chosen by looping through the networks
 on the port, and selecting the one that contains IP of a next hop.

There are a couple more nits below (mostly for the docs), but nothing major.
They can probably be fixed on commit.  With those addressed:

Acked-by: Ilya Maximets <i.maximets@ovn.org>

To save some trouble, the following commit has the code nits addressed
and can be squashed in:
  https://github.com/igsilya/ovn/commit/1866f305f38d2bf9365102e000ceed377a6a1510

Best regards, Ilya Maximets.

> flags.network_id is then
> matched on later in lr_out_snat and the network at that index will be chosen
> for SNAT.
> 
> However, if there are more than 16 networks, flags.network_id will be 0
> for networks 17 and up. Then, for those networks, the first
> lexicographical network will be chosen for SNAT.
> 
> There are also two lower priority flows: one with the old behavior to make
> upgrades smooth, and another that sets flags.network_id to zero when the
> next-hop doesn't belong to any of the port networks.
> 
> Three tests have been added to verify that:
>  1. The correct network is chosen for SNAT.
>  2. The new and updated flows with flags.network_id are correct.
>  3. When there's one IPv6 addr and no IPv4 addr, the priority-105 flow is not
>     added.
> 
> And tests that were broken by this new behavior have been updated.
> 
> Reported-at: https://issues.redhat.com/browse/FDP-871
> Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2024-October/417717.html
> Signed-off-by: Rosemarie O'Riorden <rosemarie@redhat.com>
> ---
> diff --git a/include/ovn/logical-fields.h b/include/ovn/logical-fields.h
> index 196ac9dd8..b9446a042 100644
> --- a/include/ovn/logical-fields.h
> +++ b/include/ovn/logical-fields.h
> @@ -71,6 +71,10 @@ enum ovn_controller_event {
>  #define MFF_LOG_LB_AFF_MATCH_PORT_OLD        MFF_REG8
>  #define MFF_LOG_LB_AFF_MATCH_LS_IP6_ADDR_OLD MFF_XXREG0
>  
> +/* Maximum number of networks supported by 4-bit flags.network_id. */
> +#define OVN_MAX_NETWORK_ID \
> +    ((1 << (MLF_NETWORK_ID_END_BIT - MLF_NETWORK_ID_START_BIT + 1)) - 1)
> +
>  void ovn_init_symtab(struct shash *symtab);
>  
>  /* MFF_LOG_FLAGS_REG bit assignments */
> @@ -97,6 +101,8 @@ enum mff_log_flags_bits {
>      MLF_FROM_CTRL_BIT = 19,
>      MLF_UNSNAT_NEW_BIT = 20,
>      MLF_UNSNAT_NOT_TRACKED_BIT = 21,
> +    MLF_NETWORK_ID_START_BIT = 28,
> +    MLF_NETWORK_ID_END_BIT = 31,
>  };
>  
>  /* MFF_LOG_FLAGS_REG flag assignments */
> @@ -159,7 +165,11 @@ enum mff_log_flags {
>      MLF_UNSNAT_NEW = (1 << MLF_UNSNAT_NEW_BIT),
>  
>      /* Indicate that the packet didn't go through unSNAT. */
> -    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT)
> +    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT),
> +
> +    /* Assign network ID to packet to choose correct network for snat when
> +     * lb_force_snat_ip=router_ip. */
> +    MLF_NETWORK_ID = (OVN_MAX_NETWORK_ID << MLF_NETWORK_ID_START_BIT),
>  };
>  
>  /* OVN logical fields
> diff --git a/lib/logical-fields.c b/lib/logical-fields.c
> index ed287f42b..db2d08ada 100644
> --- a/lib/logical-fields.c
> +++ b/lib/logical-fields.c
> @@ -147,6 +147,9 @@ ovn_init_symtab(struct shash *symtab)
>               MLF_UNSNAT_NOT_TRACKED_BIT);
>      expr_symtab_add_subfield(symtab, "flags.unsnat_not_tracked", NULL,
>                               flags_str);
> +    snprintf(flags_str, sizeof flags_str, "flags[%d..%d]",
> +             MLF_NETWORK_ID_START_BIT, MLF_NETWORK_ID_END_BIT);
> +    expr_symtab_add_subfield(symtab, "flags.network_id", NULL, flags_str);
>  
>      snprintf(flags_str, sizeof flags_str, "flags[%d]", MLF_FROM_CTRL_BIT);
>      expr_symtab_add_subfield(symtab, "flags.from_ctrl", NULL, flags_str);
> diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> index c65b36bb5..032b16ff3 100644
> --- a/lib/ovn-util.c
> +++ b/lib/ovn-util.c
> @@ -897,7 +897,7 @@ ip_address_and_port_from_lb_key(const char *key, char **ip_address,
>   *
>   * This value is also used to handle some backward compatibility during
>   * upgrading. It should never decrease or rewind. */
> -#define OVN_INTERNAL_MINOR_VER 6
> +#define OVN_INTERNAL_MINOR_VER 7
>  
>  /* Returns the OVN version. The caller must free the returned value. */
>  char *
> diff --git a/northd/northd.c b/northd/northd.c
> index 1d3e132d4..5322a771a 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -12985,73 +12985,115 @@ build_lrouter_force_snat_flows_op(struct ovn_port *op,
>                                    struct ds *match, struct ds *actions,
>                                    struct lflow_ref *lflow_ref)
>  {
> +    size_t network_id;
> +
>      ovs_assert(op->nbrp);
>      if (!op->peer || !lrnat_rec->lb_force_snat_router_ip) {
>          return;
>      }
>  
> -    if (op->lrp_networks.n_ipv4_addrs) {
> +    for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          ds_clear(match);
>          ds_clear(actions);
>  
>          ds_put_format(match, "inport == %s && ip4.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv4_addrs[0].addr_s);
> +                      op->json_key, op->lrp_networks.ipv4_addrs[i].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
>                        ds_cstr(match), "ct_snat;", lflow_ref);
> -
>          ds_clear(match);
>  
> +        if (i > OVN_MAX_NETWORK_ID) {
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of "
> +                              "%d networks configured, so for network "
> +                              "\"%s/%d\" the first IP [%s] will be considered "
> +                              "as SNAT for load balancer.", op->json_key,
> +                              OVN_MAX_NETWORK_ID + 1,
> +                              op->lrp_networks.ipv4_addrs[i].addr_s,
> +                              op->lrp_networks.ipv4_addrs[i].plen,
> +                              op->lrp_networks.ipv4_addrs[0].addr_s);
> +            network_id = 0;
> +        } else {
> +            network_id = i;
> +        }
> +
>          /* Higher priority rules to force SNAT with the router port ip.
>           * This only takes effect when the packet has already been
>           * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> -                      "outport == %s", op->json_key);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
> +                      "flags.network_id == %"PRIuSIZE" && ip4 && "
> +                      "outport == %s", network_id, op->json_key);
>          ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv4_addrs[0].addr_s);
> +                      op->lrp_networks.ipv4_addrs[network_id].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
>                        ds_cstr(match), ds_cstr(actions),
>                        lflow_ref);
> -        if (op->lrp_networks.n_ipv4_addrs > 1) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv4 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv4_addrs[0].addr_s);
> -        }
>      }
>  
>      /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> -     * last in the list. So add the flows only if n_ipv6_addrs > 1. */
> -    if (op->lrp_networks.n_ipv6_addrs > 1) {
> +     * last in the list. So loop to add flows n_ipv6_addrs - 1 times. */
> +    for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
>          ds_clear(match);
>          ds_clear(actions);
>  
>          ds_put_format(match, "inport == %s && ip6.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv6_addrs[0].addr_s);
> +                      op->json_key, op->lrp_networks.ipv6_addrs[i].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
>                        ds_cstr(match), "ct_snat;", lflow_ref);
> -
>          ds_clear(match);
>  
> +        if (i > OVN_MAX_NETWORK_ID) {
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of "
> +                              "%d networks configured, so for network "
> +                              "\"%s/%d\" the first IP [%s] will be considered "
> +                              "as SNAT for load balancer.", op->json_key,
> +                              OVN_MAX_NETWORK_ID + 1,
> +                              op->lrp_networks.ipv6_addrs[i].addr_s,
> +                              op->lrp_networks.ipv6_addrs[i].plen,
> +                              op->lrp_networks.ipv6_addrs[0].addr_s);
> +            network_id = 0;
> +        } else {
> +            network_id = i;
> +        }
> +
>          /* Higher priority rules to force SNAT with the router port ip.
>           * This only takes effect when the packet has already been
>           * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> -                      "outport == %s", op->json_key);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
> +                      "flags.network_id == %"PRIuSIZE" && ip6 && "
> +                      "outport == %s", network_id, op->json_key);
>          ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv6_addrs[0].addr_s);
> +                      op->lrp_networks.ipv6_addrs[network_id].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
>                        ds_cstr(match), ds_cstr(actions),
>                        lflow_ref);
> -        if (op->lrp_networks.n_ipv6_addrs > 2) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv6 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv6_addrs[0].addr_s);
> -        }
> +    }
> +
> +    /* This lower-priority flow matches the old behavior for if northd is
> +     * upgraded before controller and flags.network_id is not recognized. */
> +    if (op->lrp_networks.n_ipv4_addrs > 0) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> +                      "outport == %s", op->json_key);
> +        ds_put_format(actions, "ct_snat(%s);",
> +                      op->lrp_networks.ipv4_addrs[0].addr_s);
> +        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 105,
> +                      ds_cstr(match), ds_cstr(actions), lflow_ref);
> +    }
> +
> +    /* op->lrp_networks.ipv6_addrs will always have LLA, so only add flow if
> +     * there is more than 1. */
> +    if (op->lrp_networks.n_ipv6_addrs > 1) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> +                      "outport == %s", op->json_key);
> +        ds_put_format(actions, "ct_snat(%s);",
> +                      op->lrp_networks.ipv6_addrs[0].addr_s);
> +        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 105,
> +                      ds_cstr(match), ds_cstr(actions), lflow_ref);
>      }
>  }
>  
> @@ -14930,6 +14972,93 @@ build_arp_request_flows_for_lrouter(
>                    lflow_ref);
>  }
>  
> +static void
> +build_lrouter_network_id_flows(
> +        struct ovn_datapath *od, struct lflow_table *lflows,
> +        struct ds *match, struct ds *actions, struct lflow_ref *lflow_ref)
> +{
> +    const struct ovn_port *op;
> +    size_t network_id;
> +
> +    HMAP_FOR_EACH (op, dp_node, &od->ports) {
> +        for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> +            if (i > OVN_MAX_NETWORK_ID) {
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +                VLOG_WARN_RL(&rl, "Logical router port %s already has the max "
> +                                  "of %d networks configured, so network "
> +                                  "\"%s/%d\" is assigned "
> +                                  "flags.network_id = 0.", op->json_key,
> +                                  OVN_MAX_NETWORK_ID + 1,
> +                                  op->lrp_networks.ipv4_addrs[i].addr_s,
> +                                  op->lrp_networks.ipv4_addrs[i].plen);
> +                network_id = 0;
> +            } else {
> +                network_id = i;
> +            }
> +
> +            ds_clear(match);
> +            ds_clear(actions);
> +
> +            ds_put_format(match, "outport == %s && ip4 && "
> +                          REG_NEXT_HOP_IPV4 " == %s/%d", op->json_key,
> +                          op->lrp_networks.ipv4_addrs[i].addr_s,
> +                          op->lrp_networks.ipv4_addrs[i].plen);
> +
> +            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ",
> +                          network_id);
> +            ds_put_format(actions, "next;");

nit: I'd combine these two calls, there is enough space on the line:

            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; next;",
                          network_id);

Should be slightly faster.

> +
> +            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
> +                          ds_cstr(match), ds_cstr(actions),
> +                          lflow_ref);
> +        }
> +
> +        /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> +         * last in the list. So add the flows only if n_ipv6_addrs > 1, and
> +         * loop n_ipv6_addrs - 1 times. */
> +        for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
> +            if (i > OVN_MAX_NETWORK_ID) {
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +                VLOG_WARN_RL(&rl, "Logical router port %s already has the max "
> +                                  "of %d networks configured, so network "
> +                                  "\"%s/%d\" is assigned "
> +                                  "flags.network_id = 0.", op->json_key,
> +                                  OVN_MAX_NETWORK_ID + 1,
> +                                  op->lrp_networks.ipv6_addrs[i].addr_s,
> +                                  op->lrp_networks.ipv6_addrs[i].plen);
> +                network_id = 0;
> +            } else {
> +                network_id = i;
> +            }
> +
> +            ds_clear(match);
> +            ds_clear(actions);
> +
> +            ds_put_format(match, "outport == %s && ip6 && "
> +                          REG_NEXT_HOP_IPV6 " == %s/%d", op->json_key,
> +                          op->lrp_networks.ipv6_addrs[i].addr_s,
> +                          op->lrp_networks.ipv6_addrs[i].plen);
> +
> +            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ", i);
> +            ds_put_format(actions, "next;");

nit:  Same here:

            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; next;", i);

> +
> +            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
> +                          ds_cstr(match), ds_cstr(actions), lflow_ref);
> +        }
> +    }
> +
> +    /* Lower-priority flow for the case the next-hop doesn't belong to
> +     * any of the port networks.  In this case, setting network id to zero
> +     * explicitly. */
> +    ovn_lflow_add(lflows, od, S_ROUTER_IN_NETWORK_ID, 105, "1",
> +                  "flags.network_id = 0; next;", lflow_ref);
> +
> +    /* This lower-priority flow is for the case where northd is upgraded before
> +     * controller and flags.network_id is not recognized. */
> +    ovn_lflow_add(lflows, od, S_ROUTER_IN_NETWORK_ID, 0,
> +                  "1", "next;", lflow_ref);
> +}
> +
>  /* Logical router egress table DELIVERY: Delivery (priority 100-110).
>   *
>   * Priority 100 rules deliver packets to enabled logical ports.
> @@ -17328,6 +17457,8 @@ build_lswitch_and_lrouter_iterate_by_lr(struct ovn_datapath *od,
>                                          &lsi->actions,
>                                          lsi->meter_groups,
>                                          NULL);
> +    build_lrouter_network_id_flows(od, lsi->lflows, &lsi->match,
> +                                   &lsi->actions, NULL);
>      build_misc_local_traffic_drop_flows_for_lrouter(od, lsi->lflows, NULL);
>  
>      build_lr_nat_defrag_and_lb_default_flows(od, lsi->lflows, NULL);
> diff --git a/northd/northd.h b/northd/northd.h
> index 1a7afe902..388bac6df 100644
> --- a/northd/northd.h
> +++ b/northd/northd.h
> @@ -547,7 +547,8 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     22, "lr_in_chk_pkt_len")     \
>      PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     23, "lr_in_larger_pkts")     \
>      PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     24, "lr_in_gw_redirect")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     25, "lr_in_arp_request")     \
> +    PIPELINE_STAGE(ROUTER, IN,  NETWORK_ID,      25, "lr_in_network_id")      \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     26, "lr_in_arp_request")     \
>                                                                        \
>      /* Logical router egress stages. */                               \
>      PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> index 155ba8a49..ee1f1ebaa 100644
> --- a/northd/ovn-northd.8.xml
> +++ b/northd/ovn-northd.8.xml
> @@ -5066,7 +5066,61 @@ icmp6 {
>        </li>
>      </ul>
>  
> -   <h3>Ingress Table 25: ARP Request</h3>
> +   <h3>Ingress Table 25: Network ID</h3>
> +
> +    <p>
> +      This table generates flows that set flags.network_id for IP packets. It
> +      holds the following flows:

nit: I'd replace above two sentences with one:
"This table contains flows that set <code>flags.network_id</code> for
IP packets:"

> +    </p>
> +
> +    <ul>
> +      <li>
> +        <p>
> +          A priority-110 flow with match:
> +          <ul>
> +            <li>
> +              for IPv4: <code>outport == <var>P</var> &amp;&amp;
> +              REG_NEXT_HOP_IPV4 == <var>I</var>/<var>C</var> &amp;&amp;
> +              ip4</code>
> +            </li>
> +            <li>
> +              for IPv6: <code>outport == <var>P</var> &amp;&amp;
> +              REG_NEXT_HOP_IPV6 == <var>I</var>/<var>C</var> &amp;&amp;
> +              ip6</code>
> +            </li>
> +          </ul>
> +        </p>
> +
> +        <p>
> +          and actions <code>flags.network_id = <var>N</var>; next;</code>.
> +          Where <var>P</var> is the outport, <var>I</var> is the next-hop IP,
> +          <var>C</var> is the next-hop network CIDR, and <var>N</var> is the

I is not a next-hop IP.  Should be:
     "<var>I</var>/<var>C</var> is a network CIDR of the port <var>P</var>"

> +          network id (index).

May also add: "There is one flow like this per router port's network."

> +        </p>
> +
> +        <p>
> +        <code>flags.network_id</code> is 4 bits, and thus only 16 networks can
> +        be indexed. If the number of networks is greater than 16, networks 17
> +        and up will have the actions <code>flags.network_id = 0; next;</code>
> +        and only the first lexicographical IP will be considered for SNAT for

s/first lexicographical/lexicographically first/

> +        those networks.
> +        </p>
> +      </li>
> +
> +      <li>
> +        A lower priority-105 flow with match <code>1</code> and actions
> +        <code>flags.network_id = 0; next;</code>. The is for the case that the

s/The is/This is/ ?

> +        next-hop doesn't belong to any of the port networks, so
> +        flags.network_id should be set to zero.

Wrap the flag into <code></code>.

> +      </li>
> +
> +      <li>
> +        Catch-all: A priority-0 flow with match <code>1</code> has
> +        actions <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +   <h3>Ingress Table 26: ARP Request</h3>
>  
>      <p>
>        In the common case where the Ethernet destination has been resolved, this
> @@ -5330,18 +5384,35 @@ nd_ns {
>            table="Logical_Router"/>:lb_force_snat_ip=router_ip), then for
>            each logical router port <var>P</var> attached to the Gateway
>            router, 

and for each network configured for this port,

>            a priority-110 flow matches
> -          <code>flags.force_snat_for_lb == 1 &amp;&amp; outport == <var>P</var>
> -          </code> with an action <code>ct_snat(<var>R</var>);</code>
> -          where <var>R</var> is the IP configured on the router port.
> -          If <code>R</code> is an IPv4 address then the match will also
> -          include <code>ip4</code> and if it is an IPv6 address, then the
> -          match will also include <code>ip6</code>.
> +          <code>flags.force_snat_for_lb == 1 &amp;&amp; ip4 &amp;&amp;
> +          flags.network_id == <var>N</var> &amp;&amp; outport ==
> +          <var>P</var></code>, where <var>N</var> is the network index, with an
> +          action <code>ct_snat(<var>R</var>);</code> where <var>R</var> is the
> +          IP configured on the router port. A similar flow is created for IPv6,
> +          with <code>ip6</code> instead of <code>ip4</code>. <var>N</var>, the
> +          network index, will be 0 for networks 17 and up.
>          </p>
>  
>          <p>
>            If the logical router port <var>P</var> is configured with multiple
> -          IPv4 and multiple IPv6 addresses, only the first IPv4 and first IPv6
> -          address is considered.
> +          IPv4 and multiple IPv6 addresses, the IPv4 and IPv6 address within
> +          the same network as the next-hop will be chosen as <var>R</var> for
> +          SNAT. However, if there are more than 16 networks configured, the
> +          first lexicographical IP will be considered for SNAT for networks 17

s/first lexicographical/lexicographically first/

> +          and up.
> +        </p>
> +      </li>
> +      <li>
> +        <p>
> +          A priority-105 flow matches the old behavior for if northd is
> +          upgraded before controller and flags.network_id is not recognized. It

Wrap the flag into <code></code>.

> +          is only added if there's at least one network configured (excluding
> +          LLA for IPv6). It matches on: <code>flags.force_snat_for_lb == 1
> +          </code> &amp;&amp; <code>ip4</code> &amp;&amp; <code>outport ==
> +          <var>P</var></code>, with action: <code>ct_snat(<var>R</var>)</code>.
> +          <var>R</var> is the first lexicographical IP address configured.

s/first lexicographical/lexicographically first/

> +          There is a similar flow for IPv6 with <code>ip6</code> instead of
> +          <code>ip4</code>.
>          </p>
>        </li>
>  
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index 419130aa8..2841aa76f 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -4493,9 +4493,12 @@ AT_CHECK([grep "lr_in_dnat" lr0flows | ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
>  ])
>  
> @@ -4558,10 +4561,14 @@ AT_CHECK([grep "lr_in_dnat" lr0flows | ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
>  ])
>  
> @@ -6268,8 +6275,10 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -6334,8 +6343,10 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -6412,10 +6423,14 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
> -  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
> +  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -16753,3 +16768,124 @@ check grep -q "Bad configuration: The peer of the switch port 'ls-lrp1' (LRP pee
>  
>  AT_CLEANUP
>  ])
> +
> +OVN_FOR_EACH_NORTHD_NO_HV([
> +AT_SETUP([lb_force_snat_ip=router_ip generate flags.network_id flows])
> +ovn_start
> +
> +check ovn-nbctl lr-add lr
> +check ovn-nbctl set logical_router lr options:chassis=hv1
> +check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
> +check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24 ff01::01
> +check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 7.7.7.1/24 \
> +            8.8.8.1/24 1.2.1.1/24 1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 5.5.5.1/24 \
> +            6.6.6.1/24 6.2.1.1/24 6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 6.6.1.1/24 \
> +                                  6.7.1.1/24 6.8.1.1/24 6.9.1.1/24 7.2.1.1/24 \
> +                                                   ff01::02 ff01::03 ff01::06
> +check ovn-nbctl ls-add ls-client
> +check ovn-nbctl ls-add ls-server
> +check ovn-nbctl lsp-add ls-client lsp-client-router
> +check ovn-nbctl lsp-set-type lsp-client-router router
> +check ovn-nbctl lsp-add ls-server lsp-server-router
> +check ovn-nbctl lsp-set-type lsp-server-router router
> +check ovn-nbctl set logical_switch_port lsp-client-router options:router-port=lrp-client
> +check ovn-nbctl set logical_switch_port lsp-server-router options:router-port=lrp-server
> +check ovn-nbctl lsp-add ls-client client
> +check ovn-nbctl lsp-add ls-server server
> +check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
> +check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
> +check ovn-nbctl lsp-set-addresses lsp-client-router router
> +check ovn-nbctl lsp-set-addresses lsp-server-router router
> +check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
> +check ovn-nbctl lr-lb-add lr lb
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr > lrflows
> +AT_CAPTURE_FILE([lrflows])
> +
> +AT_CHECK([grep -E flags.network_id lrflows | ovn_strip_lflows], [0], [dnl
> +  table=??(lr_in_network_id   ), priority=105  , match=(1), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-client" && ip4 && reg0 == 1.1.1.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-client" && ip6 && xxreg0 == ff01::1/128), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.1.2.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.2.1.1/24), action=(flags.network_id = 1; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.2.2.1/24), action=(flags.network_id = 2; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 3.3.3.1/24), action=(flags.network_id = 3; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 4.4.4.1/24), action=(flags.network_id = 4; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 5.5.5.1/24), action=(flags.network_id = 5; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.2.1.1/24), action=(flags.network_id = 6; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.3.1.1/24), action=(flags.network_id = 7; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.4.1.1/24), action=(flags.network_id = 8; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.5.1.1/24), action=(flags.network_id = 9; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.6.1.1/24), action=(flags.network_id = 10; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.6.6.1/24), action=(flags.network_id = 11; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.7.1.1/24), action=(flags.network_id = 12; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.8.1.1/24), action=(flags.network_id = 13; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.9.1.1/24), action=(flags.network_id = 14; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 7.2.1.1/24), action=(flags.network_id = 15; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 7.7.7.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 8.8.8.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::2/128), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::3/128), action=(flags.network_id = 1; next;)
> +  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::6/128), action=(flags.network_id = 2; next;)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lrp-client"), action=(ct_snat(1.1.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lrp-server"), action=(ct_snat(1.1.2.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lrp-client"), action=(ct_snat(ff01::1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::2);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip4 && outport == "lrp-server"), action=(ct_snat(1.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::3);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 10 && ip4 && outport == "lrp-server"), action=(ct_snat(6.6.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 11 && ip4 && outport == "lrp-server"), action=(ct_snat(6.6.6.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 12 && ip4 && outport == "lrp-server"), action=(ct_snat(6.7.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 13 && ip4 && outport == "lrp-server"), action=(ct_snat(6.8.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 14 && ip4 && outport == "lrp-server"), action=(ct_snat(6.9.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 15 && ip4 && outport == "lrp-server"), action=(ct_snat(7.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip4 && outport == "lrp-server"), action=(ct_snat(1.2.2.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::6);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 3 && ip4 && outport == "lrp-server"), action=(ct_snat(3.3.3.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 4 && ip4 && outport == "lrp-server"), action=(ct_snat(4.4.4.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 5 && ip4 && outport == "lrp-server"), action=(ct_snat(5.5.5.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 6 && ip4 && outport == "lrp-server"), action=(ct_snat(6.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 7 && ip4 && outport == "lrp-server"), action=(ct_snat(6.3.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 8 && ip4 && outport == "lrp-server"), action=(ct_snat(6.4.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 9 && ip4 && outport == "lrp-server"), action=(ct_snat(6.5.1.1);)
> +])
> +AT_CLEANUP
> +])
> +
> +OVN_FOR_EACH_NORTHD_NO_HV([
> +AT_SETUP([lb_force_snat_ip=router_ip one IPv6 address])
> +ovn_start
> +
> +check ovn-nbctl ls-add sw0
> +check ovn-nbctl lr-add lr0
> +check ovn-nbctl set logical_router lr0 options:chassis=hv1
> +check ovn-nbctl set logical_router lr0 options:lb_force_snat_ip=router_ip
> +check ovn-nbctl lrp-add lr0 lrp0 02:00:00:00:00:01
> +check ovn-nbctl lsp-add sw0 lrp0-attachment
> +check ovn-nbctl lsp-set-type lrp0-attachment router
> +check ovn-nbctl lsp-set-addresses lrp0-attachment 02:00:00:00:00:01
> +check ovn-nbctl lsp-set-options lrp0-attachment router-port=lrp0
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lrflows
> +AT_CAPTURE_FILE([lrflows])
> +AT_CHECK([! ovn_strip_lflows < lrflows | grep priority=105 | grep -q "flags.force_snat_for_lb == 1"])
> +check ovn-nbctl lrp-del lrp0 -- lrp-add lr0 lrp0 02:00:00:00:00:01 4242::4242/64
> +check ovn-nbctl --wait=sb sync
> +ovn-sbctl dump-flows lr0 > lrflows
> +AT_CAPTURE_FILE([lrflows])
> +AT_CHECK([ovn_strip_lflows < lrflows | grep priority=105 | grep -c "flags.force_snat_for_lb == 1"], [0], [dnl
> +1
> +])
> +
> +check ovn-nbctl lrp-del lrp0 -- lrp-add lr0 lrp0 02:00:00:00:00:01 4242::4242/64
> +check ovn-nbctl --wait=sb sync
> +ovn-sbctl dump-flows lr0 > lrflows
> +AT_CAPTURE_FILE([lrflows])
> +AT_CHECK([ovn_strip_lflows < lrflows | grep priority=105 | grep -c "flags.force_snat_for_lb == 1"], [0], [dnl
> +1
> +])

This block is duplicated.  One copy should be removed, I suppose.

> +
> +AT_CLEANUP
> +])
> diff --git a/tests/ovn.at b/tests/ovn.at
> index afde2576f..c0b37fd2e 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -42500,3 +42500,80 @@ OVN_CHECK_PACKETS_REMOVE_BROADCAST([hv2/vif1-tx.pcap], [expected])
>  OVN_CLEANUP([hv1], [hv2])
>  AT_CLEANUP
>  ])
> +
> +OVN_FOR_EACH_NORTHD([
> +AT_SETUP([lb_force_snat_ip=router_ip select correct network for snat])
> +AT_SKIP_IF([test $HAVE_SCAPY = no])
> +ovn_start
> +
> +check ovn-nbctl lr-add lr
> +check ovn-nbctl set logical_router lr options:chassis=hv1
> +check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
> +check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24
> +check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 1.2.1.1/24 \
> +            1.1.3.1/24 2.2.2.1/24 7.7.7.1/24 8.8.8.1/24 6.7.1.1/24 6.8.1.1/24 \
> +            1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 5.5.5.1/24 6.6.6.1/24 6.2.1.1/24 \
> +            6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 6.6.1.1/24
> +check ovn-nbctl ls-add ls-client
> +check ovn-nbctl ls-add ls-server
> +check ovn-nbctl lsp-add ls-client lsp-client-router
> +check ovn-nbctl lsp-set-type lsp-client-router router
> +check ovn-nbctl lsp-add ls-server lsp-server-router
> +check ovn-nbctl lsp-set-type lsp-server-router router
> +check ovn-nbctl set logical_switch_port lsp-client-router options:router-port=lrp-client
> +check ovn-nbctl set logical_switch_port lsp-server-router options:router-port=lrp-server
> +check ovn-nbctl lsp-add ls-client client
> +check ovn-nbctl lsp-add ls-server server
> +check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
> +check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
> +check ovn-nbctl lsp-set-addresses lsp-client-router router
> +check ovn-nbctl lsp-set-addresses lsp-server-router router
> +check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
> +check ovn-nbctl lr-lb-add lr lb
> +
> +# Create a hypervisor and create OVS ports corresponding to logical ports.
> +net_add n1
> +sim_add hv1
> +as hv1
> +check ovs-vsctl add-br br-phys
> +ovn_attach n1 br-phys 192.168.0.1
> +
> +check ovs-vsctl -- add-port br-int hv1-vif1 -- \
> +    set interface hv1-vif1 external-ids:iface-id=client \
> +    options:tx_pcap=hv1/client-tx.pcap \
> +    options:rxq_pcap=hv1/client-rx.pcap
> +
> +check ovs-vsctl -- add-port br-int hv1-vif2 -- \
> +    set interface hv1-vif2 external-ids:iface-id=server \
> +    options:tx_pcap=hv1/server-tx.pcap \
> +    options:rxq_pcap=hv1/server-rx.pcap
> +
> +wait_for_ports_up
> +check ovn-nbctl --wait=hv sync
> +
> +check grep -q \
> +    'Logical router port "lrp-server" already has the max of 16 networks configured' \
> +    northd/ovn-northd.log
> +
> +tx_src_mac="02:00:00:00:00:01"
> +tx_dst_mac="02:00:00:00:00:02"
> +tx_src_ip=1.1.1.10
> +tx_dst_ip=42.42.42.42
> +request=$(fmt_pkt "Ether(dst='${tx_dst_mac}', src='${tx_src_mac}')/ \
> +                   IP(src='${tx_src_ip}', dst='${tx_dst_ip}')/ \
> +                   UDP(sport=20001, dport=80)")
> +
> +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 $request
> +
> +rx_src_mac="02:00:00:00:00:03"
> +rx_dst_mac="02:00:00:00:00:04"
> +rx_src_ip=2.2.2.1
> +rx_dst_ip=2.2.2.10
> +expected=$(fmt_pkt "Ether(dst='${rx_dst_mac}', src='${rx_src_mac}')/ \
> +                    IP(src='${rx_src_ip}', dst='${rx_dst_ip}', ttl=0x3F)/ \
> +                    UDP(sport=20001, dport=80)")
> +
> +echo $expected > expected
> +OVN_CHECK_PACKETS([hv1/server-tx.pcap], [expected])
> +AT_CLEANUP
> +])
Dumitru Ceara April 1, 2025, 3:12 p.m. UTC | #2
Hi Rosemarie, Ilya,

On 3/21/25 3:47 PM, Ilya Maximets wrote:
> On 3/19/25 17:02, Rosemarie O'Riorden wrote:
>> When a gateway router has a load balancer configured, the option
>> lb_force_snat_ip=router_ip can be set so that OVN SNATs load balanced
>> packets to the logical router's egress interface's IP address, that is, the
>> port chosen as "outport" in the lr_in_ip_routing stage.
>>
>> However, this was only designed to work when one network was configured
>> on the logical router outport. When multiple networks were configured,
>> OVN's behavior was to simply choose the lexicographically first IP address for
>> SNAT. This often led to an incorrect address being used for SNAT.
>>
>> To fix this, two main components have been added:
>>  1. A new flag, flags.network_id. It is 4 bits and stores an index.
>>  2. A new stage in the router ingress pipeline, lr_in_network_id.
>>
>> Now in the stage lr_in_network_id, OVN generates flows that assign
>> flags.network_id with an index, which is chosen by looping through the networks
>> on the port, and assigning flags.network_id = i.
> 
> nit: "chosen by looping through the networks" doesn't actually explain how
> they are being chosen.  We should say something like:
> 
>  Now in the stage lr_in_network_id, OVN generates flows that assign
>  flags.network_id with an index, which is chosen by looping through the networks
>  on the port, and selecting the one that contains the IP of the next hop.
> 
> There are a couple more nits below (mostly for the docs), but nothing major.
> May probably be fixed on commit.  With those addressed:
> 
> Acked-by: Ilya Maximets <i.maximets@ovn.org>
> 
> To save some trouble, the following commit has the code nits addressed
> and can be squashed in:
>   https://github.com/igsilya/ovn/commit/1866f305f38d2bf9365102e000ceed377a6a1510
> 
> Best regards, Ilya Maximets.
> 

Thanks for the bug fix!

I made the suggested changes and pushed this to main, 25.03, 24.09 and
24.03.

On branches 24.09 and 24.03 I didn't backport the
"lb_force_snat_ip=router_ip one IPv6 address" test because we don't
support LRPs with no explicit IP address configured on versions older
than 25.03.

On branch 24.03 I had to adapt the ovn-northd.at tests a bit because we
don't have the ovn_strip_lflows helper there.

Hope that's fine with you too.

Regards,
Dumitru

Patch

diff --git a/include/ovn/logical-fields.h b/include/ovn/logical-fields.h
index 196ac9dd8..b9446a042 100644
--- a/include/ovn/logical-fields.h
+++ b/include/ovn/logical-fields.h
@@ -71,6 +71,10 @@  enum ovn_controller_event {
 #define MFF_LOG_LB_AFF_MATCH_PORT_OLD        MFF_REG8
 #define MFF_LOG_LB_AFF_MATCH_LS_IP6_ADDR_OLD MFF_XXREG0
 
+/* Maximum number of networks supported by 4-bit flags.network_id. */
+#define OVN_MAX_NETWORK_ID \
+    ((1 << (MLF_NETWORK_ID_END_BIT - MLF_NETWORK_ID_START_BIT + 1)) - 1)
+
 void ovn_init_symtab(struct shash *symtab);
 
 /* MFF_LOG_FLAGS_REG bit assignments */
@@ -97,6 +101,8 @@  enum mff_log_flags_bits {
     MLF_FROM_CTRL_BIT = 19,
     MLF_UNSNAT_NEW_BIT = 20,
     MLF_UNSNAT_NOT_TRACKED_BIT = 21,
+    MLF_NETWORK_ID_START_BIT = 28,
+    MLF_NETWORK_ID_END_BIT = 31,
 };
 
 /* MFF_LOG_FLAGS_REG flag assignments */
@@ -159,7 +165,11 @@  enum mff_log_flags {
     MLF_UNSNAT_NEW = (1 << MLF_UNSNAT_NEW_BIT),
 
     /* Indicate that the packet didn't go through unSNAT. */
-    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT)
+    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT),
+
+    /* Assign network ID to packet to choose correct network for snat when
+     * lb_force_snat_ip=router_ip. */
+    MLF_NETWORK_ID = (OVN_MAX_NETWORK_ID << MLF_NETWORK_ID_START_BIT),
 };
 
 /* OVN logical fields
diff --git a/lib/logical-fields.c b/lib/logical-fields.c
index ed287f42b..db2d08ada 100644
--- a/lib/logical-fields.c
+++ b/lib/logical-fields.c
@@ -147,6 +147,9 @@  ovn_init_symtab(struct shash *symtab)
              MLF_UNSNAT_NOT_TRACKED_BIT);
     expr_symtab_add_subfield(symtab, "flags.unsnat_not_tracked", NULL,
                              flags_str);
+    snprintf(flags_str, sizeof flags_str, "flags[%d..%d]",
+             MLF_NETWORK_ID_START_BIT, MLF_NETWORK_ID_END_BIT);
+    expr_symtab_add_subfield(symtab, "flags.network_id", NULL, flags_str);
 
     snprintf(flags_str, sizeof flags_str, "flags[%d]", MLF_FROM_CTRL_BIT);
     expr_symtab_add_subfield(symtab, "flags.from_ctrl", NULL, flags_str);
diff --git a/lib/ovn-util.c b/lib/ovn-util.c
index c65b36bb5..032b16ff3 100644
--- a/lib/ovn-util.c
+++ b/lib/ovn-util.c
@@ -897,7 +897,7 @@  ip_address_and_port_from_lb_key(const char *key, char **ip_address,
  *
  * This value is also used to handle some backward compatibility during
  * upgrading. It should never decrease or rewind. */
-#define OVN_INTERNAL_MINOR_VER 6
+#define OVN_INTERNAL_MINOR_VER 7
 
 /* Returns the OVN version. The caller must free the returned value. */
 char *
diff --git a/northd/northd.c b/northd/northd.c
index 1d3e132d4..5322a771a 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -12985,73 +12985,115 @@  build_lrouter_force_snat_flows_op(struct ovn_port *op,
                                   struct ds *match, struct ds *actions,
                                   struct lflow_ref *lflow_ref)
 {
+    size_t network_id;
+
     ovs_assert(op->nbrp);
     if (!op->peer || !lrnat_rec->lb_force_snat_router_ip) {
         return;
     }
 
-    if (op->lrp_networks.n_ipv4_addrs) {
+    for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
         ds_clear(match);
         ds_clear(actions);
 
         ds_put_format(match, "inport == %s && ip4.dst == %s",
-                      op->json_key, op->lrp_networks.ipv4_addrs[0].addr_s);
+                      op->json_key, op->lrp_networks.ipv4_addrs[i].addr_s);
         ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
                       ds_cstr(match), "ct_snat;", lflow_ref);
-
         ds_clear(match);
 
+        if (i > OVN_MAX_NETWORK_ID) {
+            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of "
+                              "%d networks configured, so for network "
+                              "\"%s/%d\" the first IP [%s] will be considered "
+                              "as SNAT for load balancer.", op->json_key,
+                              OVN_MAX_NETWORK_ID + 1,
+                              op->lrp_networks.ipv4_addrs[i].addr_s,
+                              op->lrp_networks.ipv4_addrs[i].plen,
+                              op->lrp_networks.ipv4_addrs[0].addr_s);
+            network_id = 0;
+        } else {
+            network_id = i;
+        }
+
         /* Higher priority rules to force SNAT with the router port ip.
          * This only takes effect when the packet has already been
          * load balanced once. */
-        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
-                      "outport == %s", op->json_key);
+        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
+                      "flags.network_id == %"PRIuSIZE" && ip4 && "
+                      "outport == %s", network_id, op->json_key);
         ds_put_format(actions, "ct_snat(%s);",
-                      op->lrp_networks.ipv4_addrs[0].addr_s);
+                      op->lrp_networks.ipv4_addrs[network_id].addr_s);
         ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
                       ds_cstr(match), ds_cstr(actions),
                       lflow_ref);
-        if (op->lrp_networks.n_ipv4_addrs > 1) {
-            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
-            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
-                              "multiple IPv4 addresses.  Only the first "
-                              "IP [%s] is considered as SNAT for load "
-                              "balancer", op->json_key,
-                              op->lrp_networks.ipv4_addrs[0].addr_s);
-        }
     }
 
     /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
-     * last in the list. So add the flows only if n_ipv6_addrs > 1. */
-    if (op->lrp_networks.n_ipv6_addrs > 1) {
+     * last in the list. So loop to add flows n_ipv6_addrs - 1 times. */
+    for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
         ds_clear(match);
         ds_clear(actions);
 
         ds_put_format(match, "inport == %s && ip6.dst == %s",
-                      op->json_key, op->lrp_networks.ipv6_addrs[0].addr_s);
+                      op->json_key, op->lrp_networks.ipv6_addrs[i].addr_s);
         ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
                       ds_cstr(match), "ct_snat;", lflow_ref);
-
         ds_clear(match);
 
+        if (i > OVN_MAX_NETWORK_ID) {
+            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of "
+                              "%d networks configured, so for network "
+                              "\"%s/%d\" the first IP [%s] will be considered "
+                              "as SNAT for load balancer.", op->json_key,
+                              OVN_MAX_NETWORK_ID + 1,
+                              op->lrp_networks.ipv6_addrs[i].addr_s,
+                              op->lrp_networks.ipv6_addrs[i].plen,
+                              op->lrp_networks.ipv6_addrs[0].addr_s);
+            network_id = 0;
+        } else {
+            network_id = i;
+        }
+
         /* Higher priority rules to force SNAT with the router port ip.
          * This only takes effect when the packet has already been
          * load balanced once. */
-        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
-                      "outport == %s", op->json_key);
+        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
+                      "flags.network_id == %"PRIuSIZE" && ip6 && "
+                      "outport == %s", network_id, op->json_key);
         ds_put_format(actions, "ct_snat(%s);",
-                      op->lrp_networks.ipv6_addrs[0].addr_s);
+                      op->lrp_networks.ipv6_addrs[network_id].addr_s);
         ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
                       ds_cstr(match), ds_cstr(actions),
                       lflow_ref);
-        if (op->lrp_networks.n_ipv6_addrs > 2) {
-            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
-            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
-                              "multiple IPv6 addresses.  Only the first "
-                              "IP [%s] is considered as SNAT for load "
-                              "balancer", op->json_key,
-                              op->lrp_networks.ipv6_addrs[0].addr_s);
-        }
+    }
+
+    /* This lower-priority flow preserves the old behavior in case northd is
+     * upgraded before ovn-controller and flags.network_id is unrecognized. */
+    if (op->lrp_networks.n_ipv4_addrs > 0) {
+        ds_clear(match);
+        ds_clear(actions);
+        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
+                      "outport == %s", op->json_key);
+        ds_put_format(actions, "ct_snat(%s);",
+                      op->lrp_networks.ipv4_addrs[0].addr_s);
+        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 105,
+                      ds_cstr(match), ds_cstr(actions), lflow_ref);
+    }
+
+    /* op->lrp_networks.ipv6_addrs will always have the LLA, so only add the
+     * flow if there is more than one address. */
+    if (op->lrp_networks.n_ipv6_addrs > 1) {
+        ds_clear(match);
+        ds_clear(actions);
+        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
+                      "outport == %s", op->json_key);
+        ds_put_format(actions, "ct_snat(%s);",
+                      op->lrp_networks.ipv6_addrs[0].addr_s);
+        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 105,
+                      ds_cstr(match), ds_cstr(actions), lflow_ref);
     }
 }
 
@@ -14930,6 +14972,93 @@  build_arp_request_flows_for_lrouter(
                   lflow_ref);
 }
 
+static void
+build_lrouter_network_id_flows(
+        struct ovn_datapath *od, struct lflow_table *lflows,
+        struct ds *match, struct ds *actions, struct lflow_ref *lflow_ref)
+{
+    const struct ovn_port *op;
+    size_t network_id;
+
+    HMAP_FOR_EACH (op, dp_node, &od->ports) {
+        for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
+            if (i > OVN_MAX_NETWORK_ID) {
+                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+                VLOG_WARN_RL(&rl, "Logical router port %s already has the max "
+                                  "of %d networks configured, so network "
+                                  "\"%s/%d\" is assigned "
+                                  "flags.network_id = 0.", op->json_key,
+                                  OVN_MAX_NETWORK_ID + 1,
+                                  op->lrp_networks.ipv4_addrs[i].addr_s,
+                                  op->lrp_networks.ipv4_addrs[i].plen);
+                network_id = 0;
+            } else {
+                network_id = i;
+            }
+
+            ds_clear(match);
+            ds_clear(actions);
+
+            ds_put_format(match, "outport == %s && ip4 && "
+                          REG_NEXT_HOP_IPV4 " == %s/%d", op->json_key,
+                          op->lrp_networks.ipv4_addrs[i].addr_s,
+                          op->lrp_networks.ipv4_addrs[i].plen);
+
+            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ",
+                          network_id);
+            ds_put_format(actions, "next;");
+
+            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
+                          ds_cstr(match), ds_cstr(actions),
+                          lflow_ref);
+        }
+
+        /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
+         * last in the list. So add the flows only if n_ipv6_addrs > 1, and
+         * loop n_ipv6_addrs - 1 times. */
+        for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
+            if (i > OVN_MAX_NETWORK_ID) {
+                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+                VLOG_WARN_RL(&rl, "Logical router port %s already has the max "
+                                  "of %d networks configured, so network "
+                                  "\"%s/%d\" is assigned "
+                                  "flags.network_id = 0.", op->json_key,
+                                  OVN_MAX_NETWORK_ID + 1,
+                                  op->lrp_networks.ipv6_addrs[i].addr_s,
+                                  op->lrp_networks.ipv6_addrs[i].plen);
+                network_id = 0;
+            } else {
+                network_id = i;
+            }
+
+            ds_clear(match);
+            ds_clear(actions);
+
+            ds_put_format(match, "outport == %s && ip6 && "
+                          REG_NEXT_HOP_IPV6 " == %s/%d", op->json_key,
+                          op->lrp_networks.ipv6_addrs[i].addr_s,
+                          op->lrp_networks.ipv6_addrs[i].plen);
+
+            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ",
+                          network_id);
+            ds_put_format(actions, "next;");
+
+            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
+                          ds_cstr(match), ds_cstr(actions), lflow_ref);
+        }
+    }
+
+    /* Lower-priority flow for the case where the next-hop doesn't belong to
+     * any of the port networks.  In this case, the network id is set to
+     * zero explicitly. */
+    ovn_lflow_add(lflows, od, S_ROUTER_IN_NETWORK_ID, 105, "1",
+                  "flags.network_id = 0; next;", lflow_ref);
+
+    /* This lower-priority flow is for the case where northd is upgraded before
+     * controller and flags.network_id is not recognized. */
+    ovn_lflow_add(lflows, od, S_ROUTER_IN_NETWORK_ID, 0,
+                  "1", "next;", lflow_ref);
+}
+
 /* Logical router egress table DELIVERY: Delivery (priority 100-110).
  *
  * Priority 100 rules deliver packets to enabled logical ports.
@@ -17328,6 +17457,8 @@  build_lswitch_and_lrouter_iterate_by_lr(struct ovn_datapath *od,
                                         &lsi->actions,
                                         lsi->meter_groups,
                                         NULL);
+    build_lrouter_network_id_flows(od, lsi->lflows, &lsi->match,
+                                   &lsi->actions, NULL);
     build_misc_local_traffic_drop_flows_for_lrouter(od, lsi->lflows, NULL);
 
     build_lr_nat_defrag_and_lb_default_flows(od, lsi->lflows, NULL);
diff --git a/northd/northd.h b/northd/northd.h
index 1a7afe902..388bac6df 100644
--- a/northd/northd.h
+++ b/northd/northd.h
@@ -547,7 +547,8 @@  enum ovn_stage {
     PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     22, "lr_in_chk_pkt_len")     \
     PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     23, "lr_in_larger_pkts")     \
     PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     24, "lr_in_gw_redirect")     \
-    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     25, "lr_in_arp_request")     \
+    PIPELINE_STAGE(ROUTER, IN,  NETWORK_ID,      25, "lr_in_network_id")      \
+    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     26, "lr_in_arp_request")     \
                                                                       \
     /* Logical router egress stages. */                               \
     PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index 155ba8a49..ee1f1ebaa 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -5066,7 +5066,61 @@  icmp6 {
       </li>
     </ul>
 
-   <h3>Ingress Table 25: ARP Request</h3>
+   <h3>Ingress Table 25: Network ID</h3>
+
+    <p>
+      This table sets <code>flags.network_id</code> for IP packets.  It
+      holds the following flows:
+    </p>
+
+    <ul>
+      <li>
+        <p>
+          A priority-110 flow with match:
+        </p>
+
+        <ul>
+          <li>
+            for IPv4: <code>outport == <var>P</var> &amp;&amp; ip4 &amp;&amp;
+            reg0 == <var>I</var>/<var>C</var></code>
+          </li>
+          <li>
+            for IPv6: <code>outport == <var>P</var> &amp;&amp; ip6 &amp;&amp;
+            xxreg0 == <var>I</var>/<var>C</var></code>
+          </li>
+        </ul>
+
+        <p>
+          and actions <code>flags.network_id = <var>N</var>; next;</code>,
+          where <var>P</var> is the outport, <var>I</var> is the next-hop IP,
+          <var>C</var> is the next-hop network CIDR, and <var>N</var> is the
+          network id (index).
+        </p>
+
+        <p>
+          <code>flags.network_id</code> is 4 bits wide, so only 16 networks
+          can be indexed.  If more than 16 networks are configured, networks
+          17 and up will have the actions
+          <code>flags.network_id = 0; next;</code>, and only the
+          lexicographically first IP is considered for SNAT for those networks.
+        </p>
+      </li>
+
+      <li>
+        A priority-105 flow with match <code>1</code> and actions
+        <code>flags.network_id = 0; next;</code>.  This covers the case where
+        the next-hop doesn't belong to any of the port networks, so
+        <code>flags.network_id</code> is set to zero.
+      </li>
+
+      <li>
+        A catch-all priority-0 flow with match <code>1</code> and actions
+        <code>next;</code>.
+      </li>
+    </ul>
+
+   <h3>Ingress Table 26: ARP Request</h3>
 
     <p>
       In the common case where the Ethernet destination has been resolved, this
@@ -5330,18 +5384,35 @@  nd_ns {
           table="Logical_Router"/>:lb_force_snat_ip=router_ip), then for
           each logical router port <var>P</var> attached to the Gateway
           router, a priority-110 flow matches
-          <code>flags.force_snat_for_lb == 1 &amp;&amp; outport == <var>P</var>
-          </code> with an action <code>ct_snat(<var>R</var>);</code>
-          where <var>R</var> is the IP configured on the router port.
-          If <code>R</code> is an IPv4 address then the match will also
-          include <code>ip4</code> and if it is an IPv6 address, then the
-          match will also include <code>ip6</code>.
+          <code>flags.force_snat_for_lb == 1 &amp;&amp; ip4 &amp;&amp;
+          flags.network_id == <var>N</var> &amp;&amp; outport ==
+          <var>P</var></code>, where <var>N</var> is the network index, with an
+          action <code>ct_snat(<var>R</var>);</code> where <var>R</var> is the
+          IP configured on the router port. A similar flow is created for IPv6,
+          with <code>ip6</code> instead of <code>ip4</code>. <var>N</var>, the
+          network index, will be 0 for networks 17 and up.
         </p>
 
         <p>
           If the logical router port <var>P</var> is configured with multiple
-          IPv4 and multiple IPv6 addresses, only the first IPv4 and first IPv6
-          address is considered.
+          IPv4 and multiple IPv6 addresses, the IPv4 or IPv6 address within
+          the same network as the next-hop is chosen as <var>R</var> for
+          SNAT.  However, if more than 16 networks are configured, the
+          lexicographically first IP is considered for SNAT for networks 17
+          and up.
+        </p>
+      </li>
+      <li>
+        <p>
+          A priority-105 flow preserves the old behavior for the case where
+          northd is upgraded before ovn-controller and
+          <code>flags.network_id</code> is not yet recognized.  It is only
+          added if at least one network is configured (excluding the IPv6
+          LLA).  It matches <code>flags.force_snat_for_lb == 1 &amp;&amp;
+          ip4 &amp;&amp; outport == <var>P</var></code>, with action
+          <code>ct_snat(<var>R</var>);</code>, where <var>R</var> is the
+          lexicographically first IP address configured.  There is a similar
+          flow for IPv6 with <code>ip6</code> instead of <code>ip4</code>.
         </p>
       </li>
 
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 419130aa8..2841aa76f 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -4493,9 +4493,12 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | ovn_strip_lflows], [0], [dnl
 
 AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
 ])
 
@@ -4558,10 +4561,14 @@  AT_CHECK([grep "lr_in_dnat" lr0flows | ovn_strip_lflows], [0], [dnl
 
 AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
 ])
 
@@ -6268,8 +6275,10 @@  AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
 
 AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
   table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
   table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
@@ -6334,8 +6343,10 @@  AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
 
 AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
   table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
   table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
@@ -6412,10 +6423,14 @@  AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
 
 AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
   table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
-  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
+  table=??(lr_out_snat        ), priority=105  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
   table=??(lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
   table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
   table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
@@ -16753,3 +16768,124 @@  check grep -q "Bad configuration: The peer of the switch port 'ls-lrp1' (LRP pee
 
 AT_CLEANUP
 ])
+
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([lb_force_snat_ip=router_ip generate flags.network_id flows])
+ovn_start
+
+check ovn-nbctl lr-add lr
+check ovn-nbctl set logical_router lr options:chassis=hv1
+check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
+check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24 ff01::01
+check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 7.7.7.1/24 \
+            8.8.8.1/24 1.2.1.1/24 1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 5.5.5.1/24 \
+            6.6.6.1/24 6.2.1.1/24 6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 6.6.1.1/24 \
+                                  6.7.1.1/24 6.8.1.1/24 6.9.1.1/24 7.2.1.1/24 \
+                                                   ff01::02 ff01::03 ff01::06
+check ovn-nbctl ls-add ls-client
+check ovn-nbctl ls-add ls-server
+check ovn-nbctl lsp-add ls-client lsp-client-router
+check ovn-nbctl lsp-set-type lsp-client-router router
+check ovn-nbctl lsp-add ls-server lsp-server-router
+check ovn-nbctl lsp-set-type lsp-server-router router
+check ovn-nbctl set logical_switch_port lsp-client-router options:router-port=lrp-client
+check ovn-nbctl set logical_switch_port lsp-server-router options:router-port=lrp-server
+check ovn-nbctl lsp-add ls-client client
+check ovn-nbctl lsp-add ls-server server
+check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
+check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
+check ovn-nbctl lsp-set-addresses lsp-client-router router
+check ovn-nbctl lsp-set-addresses lsp-server-router router
+check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
+check ovn-nbctl lr-lb-add lr lb
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr > lrflows
+AT_CAPTURE_FILE([lrflows])
+
+AT_CHECK([grep -E flags.network_id lrflows | ovn_strip_lflows], [0], [dnl
+  table=??(lr_in_network_id   ), priority=105  , match=(1), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-client" && ip4 && reg0 == 1.1.1.1/24), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-client" && ip6 && xxreg0 == ff01::1/128), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.1.2.1/24), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.2.1.1/24), action=(flags.network_id = 1; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 1.2.2.1/24), action=(flags.network_id = 2; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 3.3.3.1/24), action=(flags.network_id = 3; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 4.4.4.1/24), action=(flags.network_id = 4; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 5.5.5.1/24), action=(flags.network_id = 5; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.2.1.1/24), action=(flags.network_id = 6; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.3.1.1/24), action=(flags.network_id = 7; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.4.1.1/24), action=(flags.network_id = 8; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.5.1.1/24), action=(flags.network_id = 9; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.6.1.1/24), action=(flags.network_id = 10; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.6.6.1/24), action=(flags.network_id = 11; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.7.1.1/24), action=(flags.network_id = 12; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.8.1.1/24), action=(flags.network_id = 13; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 6.9.1.1/24), action=(flags.network_id = 14; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 7.2.1.1/24), action=(flags.network_id = 15; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 7.7.7.1/24), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip4 && reg0 == 8.8.8.1/24), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::2/128), action=(flags.network_id = 0; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::3/128), action=(flags.network_id = 1; next;)
+  table=??(lr_in_network_id   ), priority=110  , match=(outport == "lrp-server" && ip6 && xxreg0 == ff01::6/128), action=(flags.network_id = 2; next;)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lrp-client"), action=(ct_snat(1.1.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && outport == "lrp-server"), action=(ct_snat(1.1.2.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lrp-client"), action=(ct_snat(ff01::1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::2);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip4 && outport == "lrp-server"), action=(ct_snat(1.2.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::3);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 10 && ip4 && outport == "lrp-server"), action=(ct_snat(6.6.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 11 && ip4 && outport == "lrp-server"), action=(ct_snat(6.6.6.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 12 && ip4 && outport == "lrp-server"), action=(ct_snat(6.7.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 13 && ip4 && outport == "lrp-server"), action=(ct_snat(6.8.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 14 && ip4 && outport == "lrp-server"), action=(ct_snat(6.9.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 15 && ip4 && outport == "lrp-server"), action=(ct_snat(7.2.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip4 && outport == "lrp-server"), action=(ct_snat(1.2.2.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip6 && outport == "lrp-server"), action=(ct_snat(ff01::6);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 3 && ip4 && outport == "lrp-server"), action=(ct_snat(3.3.3.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 4 && ip4 && outport == "lrp-server"), action=(ct_snat(4.4.4.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 5 && ip4 && outport == "lrp-server"), action=(ct_snat(5.5.5.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 6 && ip4 && outport == "lrp-server"), action=(ct_snat(6.2.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 7 && ip4 && outport == "lrp-server"), action=(ct_snat(6.3.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 8 && ip4 && outport == "lrp-server"), action=(ct_snat(6.4.1.1);)
+  table=??(lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && flags.network_id == 9 && ip4 && outport == "lrp-server"), action=(ct_snat(6.5.1.1);)
+])
+AT_CLEANUP
+])
+
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([lb_force_snat_ip=router_ip one IPv6 address])
+ovn_start
+
+check ovn-nbctl ls-add sw0
+check ovn-nbctl lr-add lr0
+check ovn-nbctl set logical_router lr0 options:chassis=hv1
+check ovn-nbctl set logical_router lr0 options:lb_force_snat_ip=router_ip
+check ovn-nbctl lrp-add lr0 lrp0 02:00:00:00:00:01
+check ovn-nbctl lsp-add sw0 lrp0-attachment
+check ovn-nbctl lsp-set-type lrp0-attachment router
+check ovn-nbctl lsp-set-addresses lrp0-attachment 02:00:00:00:00:01
+check ovn-nbctl lsp-set-options lrp0-attachment router-port=lrp0
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lrflows
+AT_CAPTURE_FILE([lrflows])
+AT_CHECK([! ovn_strip_lflows < lrflows | grep priority=105 | grep -q "flags.force_snat_for_lb == 1"])
+check ovn-nbctl lrp-del lrp0 -- lrp-add lr0 lrp0 02:00:00:00:00:01 4242::4242/64
+check ovn-nbctl --wait=sb sync
+ovn-sbctl dump-flows lr0 > lrflows
+AT_CAPTURE_FILE([lrflows])
+AT_CHECK([ovn_strip_lflows < lrflows | grep priority=105 | grep -c "flags.force_snat_for_lb == 1"], [0], [dnl
+1
+])
+
+AT_CLEANUP
+])
diff --git a/tests/ovn.at b/tests/ovn.at
index afde2576f..c0b37fd2e 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -42500,3 +42500,80 @@  OVN_CHECK_PACKETS_REMOVE_BROADCAST([hv2/vif1-tx.pcap], [expected])
 OVN_CLEANUP([hv1], [hv2])
 AT_CLEANUP
 ])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([lb_force_snat_ip=router_ip select correct network for snat])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+check ovn-nbctl lr-add lr
+check ovn-nbctl set logical_router lr options:chassis=hv1
+check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
+check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24
+check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 1.2.1.1/24 \
+            1.1.3.1/24 2.2.2.1/24 7.7.7.1/24 8.8.8.1/24 6.7.1.1/24 6.8.1.1/24 \
+            1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 5.5.5.1/24 6.6.6.1/24 6.2.1.1/24 \
+            6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 6.6.1.1/24
+check ovn-nbctl ls-add ls-client
+check ovn-nbctl ls-add ls-server
+check ovn-nbctl lsp-add ls-client lsp-client-router
+check ovn-nbctl lsp-set-type lsp-client-router router
+check ovn-nbctl lsp-add ls-server lsp-server-router
+check ovn-nbctl lsp-set-type lsp-server-router router
+check ovn-nbctl set logical_switch_port lsp-client-router options:router-port=lrp-client
+check ovn-nbctl set logical_switch_port lsp-server-router options:router-port=lrp-server
+check ovn-nbctl lsp-add ls-client client
+check ovn-nbctl lsp-add ls-server server
+check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
+check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
+check ovn-nbctl lsp-set-addresses lsp-client-router router
+check ovn-nbctl lsp-set-addresses lsp-server-router router
+check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
+check ovn-nbctl lr-lb-add lr lb
+
+# Create a hypervisor and create OVS ports corresponding to logical ports.
+net_add n1
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+    set interface hv1-vif1 external-ids:iface-id=client \
+    options:tx_pcap=hv1/client-tx.pcap \
+    options:rxq_pcap=hv1/client-rx.pcap
+
+check ovs-vsctl -- add-port br-int hv1-vif2 -- \
+    set interface hv1-vif2 external-ids:iface-id=server \
+    options:tx_pcap=hv1/server-tx.pcap \
+    options:rxq_pcap=hv1/server-rx.pcap
+
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+check grep -q \
+    'Logical router port "lrp-server" already has the max of 16 networks configured' \
+    northd/ovn-northd.log
+
+tx_src_mac="02:00:00:00:00:01"
+tx_dst_mac="02:00:00:00:00:02"
+tx_src_ip=1.1.1.10
+tx_dst_ip=42.42.42.42
+request=$(fmt_pkt "Ether(dst='${tx_dst_mac}', src='${tx_src_mac}')/ \
+                   IP(src='${tx_src_ip}', dst='${tx_dst_ip}')/ \
+                   UDP(sport=20001, dport=80)")
+
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 $request
+
+rx_src_mac="02:00:00:00:00:03"
+rx_dst_mac="02:00:00:00:00:04"
+rx_src_ip=2.2.2.1
+rx_dst_ip=2.2.2.10
+expected=$(fmt_pkt "Ether(dst='${rx_dst_mac}', src='${rx_src_mac}')/ \
+                    IP(src='${rx_src_ip}', dst='${rx_dst_ip}', ttl=0x3F)/ \
+                    UDP(sport=20001, dport=80)")
+
+echo $expected > expected
+OVN_CHECK_PACKETS([hv1/server-tx.pcap], [expected])
+AT_CLEANUP
+])