diff mbox series

[ovs-dev,1/2] northd: Treat reachable and unreachable VIPs differently.

Message ID 20211108171018.1005682-1-numans@ovn.org
State Accepted
Headers show
Series [ovs-dev,1/2] northd: Treat reachable and unreachable VIPs differently. | expand

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes fail github build: failed

Commit Message

Numan Siddique Nov. 8, 2021, 5:10 p.m. UTC
From: Numan Siddique <numans@ovn.org>

If a logical switch is used to join multiple gateway
routers and each of these gateway routers is
configured with the same load balancer, then after
commit [1] this results in a flow explosion in
the "ls_in_l2_lkup" stage of that logical switch.

So, this patch reverts the parts of commit [1] related to
the load balancer VIPs.

E.g., if a load balancer with VIP 5.0.0.48 is configured
on gateway routers g0, g1, and g2, then ovn-northd will add
the logical flows below:

table=22(ls_in_l2_lkup), priority=80,
  match=(flags[1] == 0 && arp.op == 1 && arp.tpa == 5.0.0.48),
  action=(outport = "join-to-g0-port"; output;)
table=22(ls_in_l2_lkup), priority=80,
  match=(flags[1] == 0 && arp.op == 1 && arp.tpa == 5.0.0.48),
  action=(outport = "join-to-g1-port"; output;)
table=22(ls_in_l2_lkup), priority=80,
  match=(flags[1] == 0 && arp.op == 1 && arp.tpa == 5.0.0.48),
  action=(outport = "join-to-g2-port"; output;)

After this patch, there will be just one lflow:
table=22(ls_in_l2_lkup), priority=90,
  match=(flags[1] == 0 && arp.op == 1 && arp.tpa == 5.0.0.48),
  action=(outport = "_MC_flood"; output;)

[1] - ccbbbd0584e5 ("ovn-northd: Treat reachable and unreachable addresses identically.")
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2020710
Fixes: ccbbbd0584e5 ("ovn-northd: Treat reachable and unreachable addresses identically.")
Signed-off-by: Numan Siddique <numans@ovn.org>
---
 northd/northd.c         | 105 +++++++++++++++++++-
 northd/ovn-northd.8.xml |   8 ++
 tests/ovn-northd.at     | 206 ++++++++++++++++++++++++++++++++++++++++
 tests/ovn.at            |   2 -
 4 files changed, 317 insertions(+), 4 deletions(-)
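
The reachable/unreachable split the patch reintroduces boils down to a subnet-membership test of the VIP against the networks configured on the router port. A minimal sketch (Python for illustration only; `vip_reachable` and its arguments are hypothetical names, not the northd C API):

```python
import ipaddress

def vip_reachable(vip, router_port_networks):
    """A VIP is 'reachable' if it falls inside any subnet configured on
    the router port, i.e. (addr & mask) == network for some network."""
    vip = ipaddress.ip_address(vip)
    return any(vip in ipaddress.ip_network(net) for net in router_port_networks)

# A VIP outside every router-port subnet gets the single priority-90
# flood flow; a reachable VIP keeps its per-port priority-80 flow.
print(vip_reachable("5.0.0.48", ["10.0.0.0/24"]))    # False -> flood flow
print(vip_reachable("10.0.0.100", ["10.0.0.0/24"]))  # True  -> per-port flow
```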

Comments

Dumitru Ceara Nov. 9, 2021, 12:41 p.m. UTC | #1
On 11/8/21 6:10 PM, numans@ovn.org wrote:
> From: Numan Siddique <numans@ovn.org>
> 
> [...]
> 
> Signed-off-by: Numan Siddique <numans@ovn.org>
> ---

Some test numbers with a topology simulating 120 ovn-k8s nodes (120
gateway routers connected to a single join switch) with 10K load
balancers applied to all gateway routers:

A. Before this patch:
- northd loop processing time: 6854ms
- SB DB size (on disk, after compaction): 520M
- number of logical flows: 1245942

B. With this patch:
- northd loop processing time: 2638ms
- SB DB size (on disk, after compaction): 94M
- number of logical flows: 55942
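
These numbers are consistent with the per-VIP change: assuming one VIP per load balancer, the old code added one priority-80 flow per (VIP, gateway router) pair on the join switch, while the new code adds a single flood flow per VIP. A quick back-of-the-envelope check using the figures above:

```python
routers = 120   # gateway routers on the join switch
vips = 10_000   # load balancers, assuming one VIP each

flows_before = 1_245_942  # reported above
flows_after = 55_942      # reported above

# Old scheme: one flow per (VIP, router). New scheme: one flood flow
# per VIP. The difference is exactly (routers - 1) flows per VIP.
saved = (routers - 1) * vips
print(flows_before - flows_after == saved)  # True
```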

Looks good to me, thanks for the quick fix!

Acked-by: Dumitru Ceara <dceara@redhat.com>
Numan Siddique Nov. 9, 2021, 4:27 p.m. UTC | #2
On Tue, Nov 9, 2021 at 7:42 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> [...]
>
> Looks good to me, thanks for the quick fix!
>
> Acked-by: Dumitru Ceara <dceara@redhat.com>

Thanks, Dumitru, for the reviews.  I applied both patches to the
main branch and branch-21.09.

Numan

>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
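
One detail worth noting before the patch itself: the new `build_lflows_for_unreachable_vips()` builds a single logical flow per VIP and attaches additional switch datapaths to it via `ovn_dp_group_add_with_reference()` rather than duplicating the row. The sharing idea can be sketched as follows (a hedged Python illustration with hypothetical names, not OVN's actual data structures):

```python
def add_shared_flow(flows, datapath, priority, match, action):
    """Deduplicate logical flows: identical (priority, match, action)
    tuples share one row whose datapath group grows, mirroring how
    northd reuses an existing flow by reference instead of re-adding it."""
    flows.setdefault((priority, match, action), set()).add(datapath)

flows = {}
match = 'flags[1] == 0 && arp.op == 1 && arp.tpa == 192.168.4.100'
action = 'outport = "_MC_flood"; output;'
for switch in ("join-ls", "ext-ls"):  # hypothetical switch datapaths
    add_shared_flow(flows, switch, 90, match, action)

print(len(flows))                          # 1: one shared row, not one per switch
print(sorted(flows[(90, match, action)]))  # ['ext-ls', 'join-ls']
```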

Patch

diff --git a/northd/northd.c b/northd/northd.c
index ee4d4df26..1011518a9 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -6953,6 +6953,42 @@  arp_nd_ns_match(const char *ips, int addr_family, struct ds *match)
     }
 }
 
+/* Returns 'true' if the IPv4 'addr' is on the same subnet with one of the
+ * IPs configured on the router port.
+ */
+static bool
+lrouter_port_ipv4_reachable(const struct ovn_port *op, ovs_be32 addr)
+{
+    for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
+        struct ipv4_netaddr *op_addr = &op->lrp_networks.ipv4_addrs[i];
+
+        if ((addr & op_addr->mask) == op_addr->network) {
+            return true;
+        }
+    }
+    return false;
+}
+
+/* Returns 'true' if the IPv6 'addr' is on the same subnet with one of the
+ * IPs configured on the router port.
+ */
+static bool
+lrouter_port_ipv6_reachable(const struct ovn_port *op,
+                            const struct in6_addr *addr)
+{
+    for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
+        struct ipv6_netaddr *op_addr = &op->lrp_networks.ipv6_addrs[i];
+
+        struct in6_addr nat_addr6_masked =
+            ipv6_addr_bitand(addr, &op_addr->mask);
+
+        if (ipv6_addr_equals(&nat_addr6_masked, &op_addr->network)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Ingress table 22: Flows that forward ARP/ND requests only to the routers
  * that own the addresses. Other ARP/ND packets are still flooded in the
@@ -7024,7 +7060,8 @@  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
         /* Check if the ovn port has a network configured on which we could
          * expect ARP requests for the LB VIP.
          */
-        if (ip_parse(ip_addr, &ipv4_addr)) {
+        if (ip_parse(ip_addr, &ipv4_addr) &&
+            lrouter_port_ipv4_reachable(op, ipv4_addr)) {
             build_lswitch_rport_arp_req_flow(
                 ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
                 stage_hint);
@@ -7036,7 +7073,8 @@  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
         /* Check if the ovn port has a network configured on which we could
          * expect NS requests for the LB VIP.
          */
-        if (ipv6_parse(ip_addr, &ipv6_addr)) {
+        if (ipv6_parse(ip_addr, &ipv6_addr) &&
+            lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
             build_lswitch_rport_arp_req_flow(
                 ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
                 stage_hint);
@@ -7096,6 +7134,68 @@  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
     }
 }
 
+static void
+build_lflows_for_unreachable_vips(struct ovn_northd_lb *lb,
+                                  struct ovn_lb_vip *lb_vip,
+                                  struct hmap *lflows,
+                                  struct ds *match)
+{
+    static const char *action = "outport = \"_MC_flood\"; output;";
+    bool ipv4 = IN6_IS_ADDR_V4MAPPED(&lb_vip->vip);
+    ovs_be32 ipv4_addr;
+
+    ds_clear(match);
+    if (ipv4) {
+        if (!ip_parse(lb_vip->vip_str, &ipv4_addr)) {
+            return;
+        }
+        ds_put_format(match, "%s && arp.op == 1 && arp.tpa == %s",
+                      FLAGBIT_NOT_VXLAN, lb_vip->vip_str);
+    } else {
+        ds_put_format(match, "%s && nd_ns && nd.target == %s",
+                      FLAGBIT_NOT_VXLAN, lb_vip->vip_str);
+    }
+
+    struct ovn_lflow *lflow_ref = NULL;
+    uint32_t hash = ovn_logical_flow_hash(
+            ovn_stage_get_table(S_SWITCH_IN_L2_LKUP),
+            ovn_stage_get_pipeline(S_SWITCH_IN_L2_LKUP), 90,
+            ds_cstr(match), action);
+
+    for (size_t i = 0; i < lb->n_nb_lr; i++) {
+        struct ovn_datapath *od = lb->nb_lr[i];
+
+        if (!od->is_gw_router && !od->n_l3dgw_ports) {
+            continue;
+        }
+
+        struct ovn_port *op;
+        LIST_FOR_EACH (op, dp_node, &od->port_list) {
+            if (!od->is_gw_router && !is_l3dgw_port(op)) {
+                continue;
+            }
+
+            struct ovn_port *peer = op->peer;
+            if (!peer || !peer->nbsp || lsp_is_external(peer->nbsp)) {
+                continue;
+            }
+
+            if ((ipv4 && lrouter_port_ipv4_reachable(op, ipv4_addr)) ||
+                (!ipv4 && lrouter_port_ipv6_reachable(op, &lb_vip->vip))) {
+                continue;
+            }
+
+            if (ovn_dp_group_add_with_reference(lflow_ref, peer->od)) {
+                continue;
+            }
+            lflow_ref = ovn_lflow_add_at_with_hash(
+                lflows, peer->od, S_SWITCH_IN_L2_LKUP, 90, ds_cstr(match),
+                action, NULL, NULL, &peer->nbsp->header_,
+                OVS_SOURCE_LOCATOR, hash);
+        }
+    }
+}
+
 static void
 build_dhcpv4_options_flows(struct ovn_port *op,
                            struct lport_addresses *lsp_addrs,
@@ -9583,6 +9683,7 @@  build_lrouter_flows_for_lb(struct ovn_northd_lb *lb, struct hmap *lflows,
     for (size_t i = 0; i < lb->n_vips; i++) {
         struct ovn_lb_vip *lb_vip = &lb->vips[i];
 
+        build_lflows_for_unreachable_vips(lb, lb_vip, lflows, match);
         build_lrouter_nat_flows_for_lb(lb_vip, lb, &lb->vips_nb[i],
                                        lflows, match, action,
                                        meter_groups);
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index e42c70be1..5c6b85d70 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -1582,6 +1582,14 @@  output;
         <ref column="options" table="Logical_Router"/>:mcast_relay='true'.
       </li>
 
+      <li>
+        Priority-90 flows for each VIP address of a load balancer configured
+        outside its owning router port's subnet. These flows match ARP
+        requests and ND packets for the specific IP addresses.  Matched packets
+        are forwarded to the <code>MC_FLOOD</code> multicast group which
+        contains all connected logical ports.
+      </li>
+
       <li>
         A priority-85 flow that forwards all IP multicast traffic destined to
         224.0.0.X to the <code>MC_FLOOD</code> multicast group, which
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 193f465db..9499b2e45 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -4522,6 +4522,212 @@  check_lflows 0
 
 AT_CLEANUP
 
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([ovn -- ARP flows for unreachable addresses - NAT and LB])
+ovn_start
+
+AS_BOX([Setting up the logical network])
+
+# This network is the same as the one from "Router Address Propagation"
+check ovn-nbctl ls-add sw
+
+check ovn-nbctl lr-add ro1
+check ovn-nbctl lrp-add ro1 ro1-sw 00:00:00:00:00:01 10.0.0.1/24
+check ovn-nbctl lsp-add sw sw-ro1
+check ovn-nbctl lsp-set-type sw-ro1 router
+check ovn-nbctl lsp-set-addresses sw-ro1 router
+check ovn-nbctl lsp-set-options sw-ro1 router-port=ro1-sw
+
+check ovn-nbctl lr-add ro2
+check ovn-nbctl lrp-add ro2 ro2-sw 00:00:00:00:00:02 20.0.0.1/24
+check ovn-nbctl lsp-add sw sw-ro2
+check ovn-nbctl lsp-set-type sw-ro2 router
+check ovn-nbctl lsp-set-addresses sw-ro2 router
+check ovn-nbctl --wait=sb lsp-set-options sw-ro2 router-port=ro2-sw
+
+check ovn-nbctl ls-add ls1
+check ovn-nbctl lsp-add ls1 vm1
+check ovn-nbctl lsp-set-addresses vm1 "00:00:00:00:01:02 192.168.1.2"
+check ovn-nbctl lrp-add ro1 ro1-ls1 00:00:00:00:01:01 192.168.1.1/24
+check ovn-nbctl lsp-add ls1 ls1-ro1
+check ovn-nbctl lsp-set-type ls1-ro1 router
+check ovn-nbctl lsp-set-addresses ls1-ro1 router
+check ovn-nbctl lsp-set-options ls1-ro1 router-port=ro1-ls1
+
+check ovn-nbctl ls-add ls2
+check ovn-nbctl lsp-add ls2 vm2
+check ovn-nbctl lsp-set-addresses vm2 "00:00:00:00:02:02 192.168.2.2"
+check ovn-nbctl lrp-add ro2 ro2-ls2 00:00:00:00:02:01 192.168.2.1/24
+check ovn-nbctl lsp-add ls2 ls2-ro2
+check ovn-nbctl lsp-set-type ls2-ro2 router
+check ovn-nbctl lsp-set-addresses ls2-ro2 router
+check ovn-nbctl lsp-set-options ls2-ro2 router-port=ro2-ls2
+
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+ovn-sbctl lflow-list ls2 > ls2_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls2_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:01), action=(outport = "ls2-ro2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:02), action=(outport = "vm2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:02:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.2.1), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:201), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+AS_BOX([Adding some reachable NAT addresses])
+
+check ovn-nbctl lr-nat-add ro1 dnat 10.0.0.100 192.168.1.100
+check ovn-nbctl lr-nat-add ro1 snat 10.0.0.200 192.168.1.200/30
+
+check ovn-nbctl lr-nat-add ro2 dnat 20.0.0.100 192.168.2.100
+check ovn-nbctl --wait=sb lr-nat-add ro2 snat 20.0.0.200 192.168.2.200/30
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 10.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+ovn-sbctl lflow-list ls2 > ls2_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls2_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:01), action=(outport = "ls2-ro2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:02), action=(outport = "vm2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:02:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.2.1), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 20.0.0.100), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:201), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+AS_BOX([Adding some unreachable NAT addresses])
+
+check ovn-nbctl lr-nat-add ro1 dnat 30.0.0.100 192.168.1.130
+check ovn-nbctl lr-nat-add ro1 snat 30.0.0.200 192.168.1.148/30
+
+check ovn-nbctl lr-nat-add ro2 dnat 40.0.0.100 192.168.2.130
+check ovn-nbctl --wait=sb lr-nat-add ro2 snat 40.0.0.200 192.168.2.148/30
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 10.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 30.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+ovn-sbctl lflow-list ls2 > ls2_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls2_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:01), action=(outport = "ls2-ro2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:02:02), action=(outport = "vm2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:02:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.2.1), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 20.0.0.100), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 40.0.0.100), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:201), action=(clone {outport = "ls2-ro2"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+AS_BOX([Adding load balancer reachable VIPs to ro1])
+
+ovn-nbctl lb-add lb1 192.168.1.100:80 10.0.0.10:80
+ovn-nbctl --wait=sb lr-lb-add ro1 lb1
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 10.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 30.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+AS_BOX([Adding load balancer unreachable VIPs to ro1])
+ovn-nbctl --wait=sb lb-add lb1 192.168.4.100:80 10.0.0.10:80
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 10.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 30.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+])
+
+# Make sure that there is no flow for VIP 192.168.4.100 as ro1-ls1 doesn't
+# have a gw router port or is not a gateway router.
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | grep "192.168.4.100" | grep "_MC_flood" -c], [1], [0
+])
+
+AS_BOX([Configuring ro1-ls1 router port as a gateway router port])
+
+ovn-nbctl --wait=sb lrp-set-gateway-chassis ro1-ls1 chassis-1 30
+
+ovn-sbctl lflow-list ls1 > ls1_lflows
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | sed 's/table=../table=??/' | sort], [0], [dnl
+  table=??(ls_in_l2_lkup      ), priority=0    , match=(1), action=(outport = get_fdb(eth.dst); next;)
+  table=??(ls_in_l2_lkup      ), priority=110  , match=(eth.dst == $svc_monitor_mac), action=(handle_svc_check(inport);)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:01), action=(outport = "ls1-ro1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == 00:00:00:00:01:02), action=(outport = "vm1"; output;)
+  table=??(ls_in_l2_lkup      ), priority=70   , match=(eth.mcast), action=(outport = "_MC_flood"; output;)
+  table=??(ls_in_l2_lkup      ), priority=75   , match=(eth.src == {00:00:00:00:01:01} && (arp.op == 1 || nd_ns)), action=(outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 10.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.1), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.1.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 30.0.0.100), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=80   , match=(flags[[1]] == 0 && nd_ns && nd.target == fe80::200:ff:fe00:101), action=(clone {outport = "ls1-ro1"; output; }; outport = "_MC_flood_l2"; output;)
+  table=??(ls_in_l2_lkup      ), priority=90   , match=(flags[[1]] == 0 && arp.op == 1 && arp.tpa == 192.168.4.100), action=(outport = "_MC_flood"; output;)
+])
+
+# Make sure that there is flow for VIP 192.168.4.100 to flood as it is unreachable.
+AT_CHECK([grep "ls_in_l2_lkup" ls1_lflows | grep "192.168.4.100" | grep -v clone | grep "_MC_flood" -c], [0], [1
+])
+
+AT_CLEANUP
+])
+
 OVN_FOR_EACH_NORTHD([
 AT_SETUP([ovn -- LR NAT flows])
 ovn_start
diff --git a/tests/ovn.at b/tests/ovn.at
index 8163ad84a..ff1dee0fa 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -22150,14 +22150,12 @@  nd_target=fe80::200:ff:fe00:200
 # - 20.0.0.1, 20::1, fe80::200:1ff:fe00:0 - interface IPs.
 as hv1
 AT_CHECK([ovs-ofctl dump-flows br-int | grep -E "priority=80,.*${match_sw1_metadata}" | grep -oE "arp_tpa=[[0-9.]]+" | sort], [0], [dnl
-arp_tpa=10.0.0.11
 arp_tpa=10.0.0.111
 arp_tpa=10.0.0.121
 arp_tpa=10.0.0.122
 arp_tpa=20.0.0.1
 ])
 AT_CHECK([ovs-ofctl dump-flows br-int | grep -E "priority=80,.*${match_sw1_metadata}" | grep -oE "nd_target=[[0-9a-f:]]+" | sort], [0], [dnl
-nd_target=10::11
 nd_target=10::111
 nd_target=10::121
 nd_target=10::122