[ovs-dev,v3] northd: Fix multicast table full

Message ID: 20220829113034.2850198-1-xsimonar@redhat.com
State: Accepted
Delegated to: Dumitru Ceara
Series: [ovs-dev,v3] northd: Fix multicast table full

Checks

Context                                  Check     Description
ovsrobot/apply-robot                     success   apply and check: success
ovsrobot/github-robot-_Build_and_Test    fail      github build: failed
ovsrobot/github-robot-_ovn-kubernetes    success   github build: passed

Commit Message

Xavier Simonart Aug. 29, 2022, 11:30 a.m. UTC
The active_v4_flows count was initialized when the northd node was
computed. However, neither sb_multicast_group nor en_sb_igmp_group causes
northd updates, so the count could keep increasing while IGMP groups were
processed incrementally.

This issue was sometimes 'hidden' by northd recomputes triggered when
lflow changes could not be processed incrementally (SB busy).

active_v4_flows is now reinitialized right before building flows (i.e. as
part of the lflow node, which is computed on IGMP group changes).

Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2094710

Signed-off-by: Xavier Simonart <xsimonar@redhat.com>

 ---
v2: - rebased on main
    - use platform independent print format
v3: - updated based on Dumitru's feedback
    - removed other unused ipv6 configuration and unused tcpdumps from testcase
---
 northd/northd.c     |  22 +++++++--
 tests/system-ovn.at | 117 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 136 insertions(+), 3 deletions(-)
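
The fix hinges on when the atomic flow counters are reset relative to the
compare-and-swap that bounds them. Below is a minimal standalone sketch of
that pattern (illustrative only: claim_flow_slot, active_flows, and
TABLE_SIZE are made-up names, not the actual northd symbols) showing why
skipping a per-build reset makes the count accumulate across incremental
runs until the table falsely reports full:

/* Hypothetical sketch of the bounded-counter idiom used in
 * build_lswitch_ip_mcast_igmp_mld(); not the actual northd code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 8   /* Stand-in for mcast_sw_info->table_size. */

static atomic_uint_fast64_t active_flows;

static bool
claim_flow_slot(void)
{
    uint_fast64_t full = TABLE_SIZE;

    /* If the counter already equals the table size, the exchange
     * "succeeds" (writing back the same value) and we refuse the new
     * flow; otherwise we take one more slot. */
    if (atomic_compare_exchange_strong(&active_flows, &full, TABLE_SIZE)) {
        return false;
    }
    atomic_fetch_add(&active_flows, 1);
    return true;
}

int
main(void)
{
    /* Simulate three incremental runs of 4 groups each. */
    for (int run = 1; run <= 3; run++) {
        atomic_store(&active_flows, 0);   /* The fix: reset per build. */
        for (int group = 0; group < 4; group++) {
            if (!claim_flow_slot()) {
                printf("run %d: too many active mcast flows\n", run);
            }
        }
    }
    return 0;
}

Removing the atomic_store() reset reproduces the reported behavior: by the
third run the counter has reached TABLE_SIZE and every new flow is
refused, even though far fewer than TABLE_SIZE flows are installed.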

Comments

Dumitru Ceara Aug. 30, 2022, 10:02 a.m. UTC | #1
On 8/29/22 13:30, Xavier Simonart wrote:
> The active_v4_flows count was initialized when the northd node was
> computed. However, neither sb_multicast_group nor en_sb_igmp_group causes
> northd updates, so the count could keep increasing while IGMP groups were
> processed incrementally.
> 
> This issue was sometimes 'hidden' by northd recomputes triggered when
> lflow changes could not be processed incrementally (SB busy).
> 
> active_v4_flows is now reinitialized right before building flows (i.e. as
> part of the lflow node, which is computed on IGMP group changes).
> 
> Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2094710
> 
> Signed-off-by: Xavier Simonart <xsimonar@redhat.com>

Looks good to me, thanks!

Acked-by Dumitru Ceara <dceara@redhat.com>

> 
>  ---

Note for the maintainers: there's an extra space before the --- above,
which causes the notes below to be added to the commit log when applying
the patch.  I guess that should be fixed up at apply time.

Regards,
Dumitru

> v2: - rebased on main
>     - use platform independent print format
> v3: - updated based on Dumitru's feedback
>     - removed other unused ipv6 configuration and unused tcpdumps from testcase
> ---
>  northd/northd.c     |  22 +++++++--
>  tests/system-ovn.at | 117 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 136 insertions(+), 3 deletions(-)
Dumitru Ceara Sept. 9, 2022, 7:57 a.m. UTC | #2
On 8/30/22 12:02, Dumitru Ceara wrote:
> On 8/29/22 13:30, Xavier Simonart wrote:
>> The active_v4_flows count was initialized when the northd node was
>> computed. However, neither sb_multicast_group nor en_sb_igmp_group causes
>> northd updates, so the count could keep increasing while IGMP groups were
>> processed incrementally.
>>
>> This issue was sometimes 'hidden' by northd recomputes triggered when
>> lflow changes could not be processed incrementally (SB busy).
>>
>> active_v4_flows is now reinitialized right before building flows (i.e. as
>> part of the lflow node, which is computed on IGMP group changes).
>>
>> Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2094710
>>
>> Signed-off-by: Xavier Simonart <xsimonar@redhat.com>
> Looks good to me, thanks!
> 
> Acked-by Dumitru Ceara <dceara@redhat.com>
> 

Fixing the ack tag so that patchwork marks it properly:

Acked-by: Dumitru Ceara <dceara@redhat.com>

>>  ---
> Note for the maintainers: there's an extra space before the --- above,
> which causes the notes below to be added to the commit log when applying
> the patch.  I guess that should be fixed up at apply time.
> 
> Regards,
> Dumitru
>
Dumitru Ceara Sept. 30, 2022, 10:43 a.m. UTC | #3
On 9/9/22 09:57, Dumitru Ceara wrote:
> On 8/30/22 12:02, Dumitru Ceara wrote:
>> On 8/29/22 13:30, Xavier Simonart wrote:
>>> The active_v4_flows count was initialized when the northd node was
>>> computed. However, neither sb_multicast_group nor en_sb_igmp_group causes
>>> northd updates, so the count could keep increasing while IGMP groups were
>>> processed incrementally.
>>>
>>> This issue was sometimes 'hidden' by northd recomputes triggered when
>>> lflow changes could not be processed incrementally (SB busy).
>>>
>>> active_v4_flows is now reinitialized right before building flows (i.e. as
>>> part of the lflow node, which is computed on IGMP group changes).
>>>
>>> Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2094710
>>>
>>> Signed-off-by: Xavier Simonart <xsimonar@redhat.com>
>> Looks good to me, thanks!
>>
>> Acked-by Dumitru Ceara <dceara@redhat.com>
>>
> 
> Fixing the ack tag so that patchwork marks it properly:
> 
> Acked-by: Dumitru Ceara <dceara@redhat.com>
> 
>>>  ---
>> Note for the maintainers: there's an extra space before the --- above,
>> which causes the notes below to be added to the commit log when applying
>> the patch.  I guess that should be fixed up at apply time.
>>
>> Regards,
>> Dumitru
>>

I applied this to the main branch and backported it to all stable
branches down to branch-22.03.

On branch-22.03 I had to make a small change to the test to account
for table number changes:

diff --git a/tests/system-ovn.at b/tests/system-ovn.at
index bfe441ddf..d3eb20978 100644
--- a/tests/system-ovn.at
+++ b/tests/system-ovn.at
@@ -8337,7 +8337,7 @@ ovn-sbctl list ip_multicast
 
 wait_igmp_flows_installed()
 {
-    OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=31 | \
+    OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=32 | \
     grep 'priority=90' | grep "nw_dst=$1"])
 }
 
---

Thanks,
Dumitru

Patch

diff --git a/northd/northd.c b/northd/northd.c
index 7e2681865..4f33ae3c3 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -1051,7 +1051,16 @@  init_mcast_info_for_switch_datapath(struct ovn_datapath *od)
     mcast_sw_info->query_max_response =
         smap_get_ullong(&od->nbs->other_config, "mcast_query_max_response",
                         OVN_MCAST_DEFAULT_QUERY_MAX_RESPONSE_S);
+}
+
+static void
+init_mcast_flow_count(struct ovn_datapath *od)
+{
+    if (od->nbr) {
+        return;
+    }
 
+    struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
     mcast_sw_info->active_v4_flows = ATOMIC_VAR_INIT(0);
     mcast_sw_info->active_v6_flows = ATOMIC_VAR_INIT(0);
 }
@@ -8368,6 +8377,10 @@  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
             if (atomic_compare_exchange_strong(
                         &mcast_sw_info->active_v4_flows, &table_size,
                         mcast_sw_info->table_size)) {
+                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+
+                VLOG_INFO_RL(&rl, "Too many active mcast flows: %"PRIu64,
+                             mcast_sw_info->active_v4_flows);
                 return;
             }
             atomic_add(&mcast_sw_info->active_v4_flows, 1, &dummy);
@@ -15069,6 +15082,11 @@  build_mcast_groups(struct lflow_input *input_data,
 
     hmap_init(mcast_groups);
     hmap_init(igmp_groups);
+    struct ovn_datapath *od;
+
+    HMAP_FOR_EACH (od, key_node, datapaths) {
+        init_mcast_flow_count(od);
+    }
 
     HMAP_FOR_EACH (op, key_node, ports) {
         if (op->nbrp && lrport_is_enabled(op->nbrp)) {
@@ -15126,8 +15144,7 @@  build_mcast_groups(struct lflow_input *input_data,
         }
 
         /* If the datapath value is stale, purge the group. */
-        struct ovn_datapath *od =
-            ovn_datapath_from_sbrec(datapaths, sb_igmp->datapath);
+        od = ovn_datapath_from_sbrec(datapaths, sb_igmp->datapath);
 
         if (!od || ovn_datapath_is_stale(od)) {
             sbrec_igmp_group_delete(sb_igmp);
@@ -15172,7 +15189,6 @@  build_mcast_groups(struct lflow_input *input_data,
      * IGMP groups are based on the groups learnt by their multicast enabled
      * peers.
      */
-    struct ovn_datapath *od;
     HMAP_FOR_EACH (od, key_node, datapaths) {
 
         if (ovs_list_is_empty(&od->mcast_info.groups)) {
diff --git a/tests/system-ovn.at b/tests/system-ovn.at
index 992813614..7f919f0ed 100644
--- a/tests/system-ovn.at
+++ b/tests/system-ovn.at
@@ -8266,3 +8266,120 @@  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
 
 AT_CLEANUP
 ])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([mcast flow count])
+AT_KEYWORDS([ovnigmp IP-multicast])
+AT_SKIP_IF([test $HAVE_TCPDUMP = no])
+ovn_start
+
+OVS_TRAFFIC_VSWITCHD_START()
+ADD_BR([br-int])
+
+# Set external-ids in br-int needed for ovn-controller
+ovs-vsctl \
+        -- set Open_vSwitch . external-ids:system-id=hv1 \
+        -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
+        -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
+        -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
+        -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
+
+# Start ovn-controller
+start_daemon ovn-controller
+
+check ovn-nbctl ls-add ls
+check ovn-nbctl lsp-add ls vm1
+check ovn-nbctl lsp-set-addresses vm1 00:00:00:00:00:01
+check ovn-nbctl lsp-add ls vm2
+check ovn-nbctl lsp-set-addresses vm2 00:00:00:00:00:02
+check ovn-nbctl lsp-add ls vm3
+check ovn-nbctl lsp-set-addresses vm3 00:00:00:00:00:03
+
+check ovn-nbctl set logical_switch ls other_config:mcast_querier=false other_config:mcast_snoop=true other_config:mcast_query_interval=30 other_config:mcast_eth_src=00:00:00:00:00:05 other_config:mcast_ip4_src=42.42.42.5 other_config:mcast_ip6_src=fe80::1 other_config:mcast_idle_timeout=3000
+ovn-sbctl list ip_multicast
+
+wait_igmp_flows_installed()
+{
+    OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=31 | \
+    grep 'priority=90' | grep "nw_dst=$1"])
+}
+
+ADD_NAMESPACES(vm1)
+ADD_INT([vm1], [vm1], [br-int], [42.42.42.1/24])
+NS_CHECK_EXEC([vm1], [ip link set vm1 address 00:00:00:00:00:01], [0])
+NS_CHECK_EXEC([vm1], [ip route add default via 42.42.42.5], [0])
+check ovs-vsctl set Interface vm1 external_ids:iface-id=vm1
+
+ADD_NAMESPACES(vm2)
+ADD_INT([vm2], [vm2], [br-int], [42.42.42.2/24])
+NS_CHECK_EXEC([vm2], [ip link set vm2 address 00:00:00:00:00:02], [0])
+NS_CHECK_EXEC([vm2], [ip link set lo up], [0])
+check ovs-vsctl set Interface vm2 external_ids:iface-id=vm2
+
+ADD_NAMESPACES(vm3)
+NETNS_DAEMONIZE([vm3], [tcpdump -n -i any -nnleX > vm3.pcap 2>/dev/null], [tcpdump3.pid])
+
+ADD_INT([vm3], [vm3], [br-int], [42.42.42.3/24])
+NS_CHECK_EXEC([vm3], [ip link set vm3 address 00:00:00:00:00:03], [0])
+NS_CHECK_EXEC([vm3], [ip link set lo up], [0])
+NS_CHECK_EXEC([vm3], [ip route add default via 42.42.42.5], [0])
+check ovs-vsctl set Interface vm3 external_ids:iface-id=vm3
+
+NS_CHECK_EXEC([vm2], [sysctl -w net.ipv4.igmp_max_memberships=100], [ignore], [ignore])
+NS_CHECK_EXEC([vm3], [sysctl -w net.ipv4.igmp_max_memberships=100], [ignore], [ignore])
+wait_for_ports_up
+
+NS_CHECK_EXEC([vm3], [ip addr add 228.0.0.1 dev vm3 autojoin], [0])
+wait_igmp_flows_installed 228.0.0.1
+
+NS_CHECK_EXEC([vm1], [ping -q -c 3 -i 0.3 -w 2 228.0.0.1], [ignore], [ignore])
+
+OVS_WAIT_UNTIL([
+    requests=`grep "ICMP echo request" -c vm3.pcap`
+    test "${requests}" -ge "3"
+])
+
+NETNS_DAEMONIZE([vm2], [tcpdump -n -i any -nnleX > vm2.pcap 2>/dev/null], [tcpdump2.pid])
+
+for i in `seq 1 40`;do
+    NS_CHECK_EXEC([vm2], [ip addr add 228.1.$i.1 dev vm2 autojoin &], [0])
+    NS_CHECK_EXEC([vm3], [ip addr add 229.1.$i.1 dev vm3 autojoin &], [0])
+    # Do not go too fast: going quickly raises the chance of the SB being busy, causing a full recompute (engine has not run).
+    # In this test, we do not want too many recomputes as they might hide incremental-processing related errors.
+    sleep 0.2
+done
+
+for i in `seq 1 40`;do
+    wait_igmp_flows_installed 228.1.$i.1
+    wait_igmp_flows_installed 229.1.$i.1
+done
+ovn-sbctl list multicast_group
+
+NS_CHECK_EXEC([vm1], [ping -q -c 3 -i 0.3 -w 2 228.1.1.1], [ignore], [ignore])
+
+OVS_WAIT_UNTIL([
+    requests=`grep "ICMP echo request" -c vm2.pcap`
+    test "${requests}" -ge "3"
+])
+
+# The test could succeed thanks to a lucky northd recompute after hitting too many flows.
+# Double-check that we never hit the error condition.
+AT_CHECK([grep -qE 'Too many active mcast flows' northd/ovn-northd.log], [1])
+
+OVS_APP_EXIT_AND_WAIT([ovn-controller])
+
+as ovn-sb
+OVS_APP_EXIT_AND_WAIT([ovsdb-server])
+
+as ovn-nb
+OVS_APP_EXIT_AND_WAIT([ovsdb-server])
+
+as northd
+OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
+
+as
+OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
+/connection dropped.*/d
+/removing policing failed: No such device/d"])
+AT_CLEANUP
+])