
[ovs-dev,v5,08/16] northd: Refactor lflow management into a separate module.

Message ID 20240111153112.2790251-1-numans@ovn.org
State Superseded
Series northd lflow incremental processing

Checks

Context Check Description
ovsrobot/apply-robot warning apply and check: warning
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed

Commit Message

Numan Siddique Jan. 11, 2024, 3:31 p.m. UTC
From: Numan Siddique <numans@ovn.org>

ovn_lflow_add() and other related functions/macros are now moved
into a separate module - lflow-mgr.c.  This module maintains a
table 'struct lflow_table' for the logical flows.  lflow table
maintains a hmap to store the logical flows.

It also maintains the logical switch and router dp groups.

Previous commits, which added lflow incremental processing for
the VIF logical ports, stored the references to
the logical ports' lflows using 'struct lflow_ref_list'.  This
struct is renamed to 'struct lflow_ref' and is now part of lflow-mgr.c.
It is modified a bit to store the resource-to-lflow references.

Example usage of 'struct lflow_ref'.

'struct ovn_port' maintains 2 instances of lflow_ref, i.e.:

struct ovn_port {
   ...
   ...
   struct lflow_ref *lflow_ref;
   struct lflow_ref *stateful_lflow_ref;
};

All the logical flows generated by
build_lswitch_and_lrouter_iterate_by_lsp() use ovn_port->lflow_ref.

All the logical flows generated by build_lsp_lflows_for_lbnats()
use ovn_port->stateful_lflow_ref.

When handling ovn_port changes incrementally, the lflows referenced
in 'struct ovn_port' are cleared, regenerated, and synced to the
SB logical flows.

E.g.:

lflow_ref_clear_lflows(op->lflow_ref);
build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);

This patch makes a few more changes:
  -  Logical flows are now hashed without the logical
     datapaths.  If a logical flow is referenced by just one
     datapath, we don't rehash it.

  -  The synthetic 'hash' column of sbrec_logical_flow now
     doesn't use the logical datapath.  This means that
     when ovn-northd is updated/upgraded and has this commit,
     all the logical flows with 'logical_datapath' column
     set will get deleted and re-added causing some disruptions.

  -  With commit [1], which added I-P support for logical
     port changes, multiple logical flows with the same match 'M'
     and actions 'A' were generated and stored without
     dp groups, which was not the case prior to that patch.
     One example to generate such lflows is:
             ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
             ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
             ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"

     With this patch we go back to the earlier behavior, i.e.
     one logical flow with logical_dp_groups set.

  -  With this patch, any update to a logical port which
     doesn't result in new logical flows will no longer cause
     deletion and re-addition of the same logical flows.
     E.g.:
     ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
     will be a no-op to the SB logical flow table.

[1] - 8bbd678 ("northd: Incremental processing of VIF additions in 'lflow' node.")

Signed-off-by: Numan Siddique <numans@ovn.org>
---
 lib/ovn-util.c           |   18 +-
 lib/ovn-util.h           |    2 -
 northd/automake.mk       |    4 +-
 northd/en-lflow.c        |   24 +-
 northd/en-lflow.h        |    6 +
 northd/inc-proc-northd.c |    4 +-
 northd/lflow-mgr.c       | 1261 +++++++++++++++++++++++++
 northd/lflow-mgr.h       |  186 ++++
 northd/northd.c          | 1918 ++++++++++----------------------------
 northd/northd.h          |  218 ++++-
 northd/ovn-northd.c      |    4 +
 tests/ovn-northd.at      |  216 +++++
 12 files changed, 2395 insertions(+), 1466 deletions(-)
 create mode 100644 northd/lflow-mgr.c
 create mode 100644 northd/lflow-mgr.h

Comments

Dumitru Ceara Jan. 17, 2024, 11:22 p.m. UTC | #1
On 1/11/24 16:31, numans@ovn.org wrote:
> From: Numan Siddique <numans@ovn.org>
> 
> ovn_lflow_add() and other related functions/macros are now moved
> into a separate module - lflow-mgr.c.  This module maintains a
> table 'struct lflow_table' for the logical flows.  lflow table
> maintains a hmap to store the logical flows.
> 
> It also maintains the logical switch and router dp groups.
> 
> Previous commits which added lflow incremental processing for
> the VIF logical ports, stored the references to
> the logical ports' lflows using 'struct lflow_ref_list'.  This
> struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> It is  modified a bit to store the resource to lflow references.
> 
> Example usage of 'struct lflow_ref'.
> 
> 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
> 
> struct ovn_port {
>    ...
>    ...
>    struct lflow_ref *lflow_ref;
>    struct lflow_ref *stateful_lflow_ref;
> };
> 
> All the logical flows generated by
> build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
> 
> All the logical flows generated by build_lsp_lflows_for_lbnats()
> uses the ovn_port->stateful_lflow_ref.
> 
> When handling the ovn_port changes incrementally, the lflows referenced
> in 'struct ovn_port' are cleared and regenerated and synced to the
> SB logical flows.
> 
> eg.
> 
> lflow_ref_clear_lflows(op->lflow_ref);
> build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
> 
> This patch does few more changes:
>   -  Logical flows are now hashed without the logical
>      datapaths.  If a logical flow is referenced by just one
>      datapath, we don't rehash it.
> 
>   -  The synthetic 'hash' column of sbrec_logical_flow now
>      doesn't use the logical datapath.  This means that
>      when ovn-northd is updated/upgraded and has this commit,
>      all the logical flows with 'logical_datapath' column
>      set will get deleted and re-added causing some disruptions.
> 
>   -  With the commit [1] which added I-P support for logical
>      port changes, multiple logical flows with same match 'M'
>      and actions 'A' are generated and stored without the
>      dp groups, which was not the case prior to
>      that patch.
>      One example to generate these lflows is:
>              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> 	     ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> 
>      Now with this patch we go back to the earlier way.  i.e
>      one logical flow with logical_dp_groups set.
> 
>   -  With this patch any updates to a logical port which
>      doesn't result in new logical flows will not result in
>      deletion and addition of same logical flows.
>      Eg.
>      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
>      will be a no-op to the SB logical flow table.
> 
> [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow' node.")
> 
> Signed-off-by: Numan Siddique <numans@ovn.org>
> ---

Hi Numan,

This is a huge patch and I'm not sure I can thoroughly review it.  I do
think there are a few improvements we can make (even if we don't split
it into smaller parts).  Hopefully those simplify the code a bit.  Please
see inline.

>  lib/ovn-util.c           |   18 +-
>  lib/ovn-util.h           |    2 -
>  northd/automake.mk       |    4 +-
>  northd/en-lflow.c        |   24 +-
>  northd/en-lflow.h        |    6 +
>  northd/inc-proc-northd.c |    4 +-
>  northd/lflow-mgr.c       | 1261 +++++++++++++++++++++++++
>  northd/lflow-mgr.h       |  186 ++++
>  northd/northd.c          | 1918 ++++++++++----------------------------
>  northd/northd.h          |  218 ++++-
>  northd/ovn-northd.c      |    4 +
>  tests/ovn-northd.at      |  216 +++++
>  12 files changed, 2395 insertions(+), 1466 deletions(-)
>  create mode 100644 northd/lflow-mgr.c
>  create mode 100644 northd/lflow-mgr.h
> 
> diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> index c8b89cc216..ba29baea63 100644
> --- a/lib/ovn-util.c
> +++ b/lib/ovn-util.c
> @@ -620,13 +620,10 @@ ovn_pipeline_from_name(const char *pipeline)
>  uint32_t
>  sbrec_logical_flow_hash(const struct sbrec_logical_flow *lf)
>  {
> -    const struct sbrec_datapath_binding *ld = lf->logical_datapath;
> -    uint32_t hash = ovn_logical_flow_hash(lf->table_id,
> -                                          ovn_pipeline_from_name(lf->pipeline),
> -                                          lf->priority, lf->match,
> -                                          lf->actions);
> -
> -    return ld ? ovn_logical_flow_hash_datapath(&ld->header_.uuid, hash) : hash;
> +    return ovn_logical_flow_hash(lf->table_id,
> +                                 ovn_pipeline_from_name(lf->pipeline),
> +                                 lf->priority, lf->match,
> +                                 lf->actions);
>  }
>  
>  uint32_t
> @@ -639,13 +636,6 @@ ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
>      return hash_string(actions, hash);
>  }
>  
> -uint32_t
> -ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> -                               uint32_t hash)
> -{
> -    return hash_add(hash, uuid_hash(logical_datapath));
> -}
> -
>  
>  struct tnlid_node {
>      struct hmap_node hmap_node;
> diff --git a/lib/ovn-util.h b/lib/ovn-util.h
> index d245d57d56..1c430027c6 100644
> --- a/lib/ovn-util.h
> +++ b/lib/ovn-util.h
> @@ -145,8 +145,6 @@ uint32_t sbrec_logical_flow_hash(const struct sbrec_logical_flow *);
>  uint32_t ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
>                                 uint16_t priority,
>                                 const char *match, const char *actions);
> -uint32_t ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> -                                        uint32_t hash);
>  void ovn_conn_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
>                     const char *argv[] OVS_UNUSED, void *idl_);
>  
> diff --git a/northd/automake.mk b/northd/automake.mk
> index a178541759..7c6d56a4ff 100644
> --- a/northd/automake.mk
> +++ b/northd/automake.mk
> @@ -33,7 +33,9 @@ northd_ovn_northd_SOURCES = \
>  	northd/inc-proc-northd.c \
>  	northd/inc-proc-northd.h \
>  	northd/ipam.c \
> -	northd/ipam.h
> +	northd/ipam.h \
> +	northd/lflow-mgr.c \
> +	northd/lflow-mgr.h
>  northd_ovn_northd_LDADD = \
>  	lib/libovn.la \
>  	$(OVSDB_LIBDIR)/libovsdb.la \
> diff --git a/northd/en-lflow.c b/northd/en-lflow.c
> index eb6b2a8666..fef9a1352d 100644
> --- a/northd/en-lflow.c
> +++ b/northd/en-lflow.c
> @@ -24,6 +24,7 @@
>  #include "en-ls-stateful.h"
>  #include "en-northd.h"
>  #include "en-meters.h"
> +#include "lflow-mgr.h"
>  
>  #include "lib/inc-proc-eng.h"
>  #include "northd.h"
> @@ -58,6 +59,8 @@ lflow_get_input_data(struct engine_node *node,
>          EN_OVSDB_GET(engine_get_input("SB_multicast_group", node));
>      lflow_input->sbrec_igmp_group_table =
>          EN_OVSDB_GET(engine_get_input("SB_igmp_group", node));
> +    lflow_input->sbrec_logical_dp_group_table =
> +        EN_OVSDB_GET(engine_get_input("SB_logical_dp_group", node));
>  
>      lflow_input->sbrec_mcast_group_by_name_dp =
>             engine_ovsdb_node_get_index(
> @@ -90,17 +93,19 @@ void en_lflow_run(struct engine_node *node, void *data)
>      struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections);
>      lflow_input.bfd_connections = &bfd_connections;
>  
> +    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
> +
>      struct lflow_data *lflow_data = data;
> -    lflow_data_destroy(lflow_data);
> -    lflow_data_init(lflow_data);
> +    lflow_table_clear(lflow_data->lflow_table);
> +    lflow_reset_northd_refs(&lflow_input);
>  
> -    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
>      build_bfd_table(eng_ctx->ovnsb_idl_txn,
>                      lflow_input.nbrec_bfd_table,
>                      lflow_input.sbrec_bfd_table,
>                      lflow_input.lr_ports,
>                      &bfd_connections);
> -    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, &lflow_data->lflows);
> +    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input,
> +                 lflow_data->lflow_table);
>      bfd_cleanup_connections(lflow_input.nbrec_bfd_table,
>                              &bfd_connections);
>      hmap_destroy(&bfd_connections);
> @@ -131,7 +136,7 @@ lflow_northd_handler(struct engine_node *node,
>  
>      if (!lflow_handle_northd_port_changes(eng_ctx->ovnsb_idl_txn,
>                                  &northd_data->trk_data.trk_lsps,
> -                                &lflow_input, &lflow_data->lflows)) {
> +                                &lflow_input, lflow_data->lflow_table)) {
>          return false;
>      }
>  
> @@ -160,11 +165,14 @@ void *en_lflow_init(struct engine_node *node OVS_UNUSED,
>                       struct engine_arg *arg OVS_UNUSED)
>  {
>      struct lflow_data *data = xmalloc(sizeof *data);
> -    lflow_data_init(data);
> +    data->lflow_table = lflow_table_alloc();
> +    lflow_table_init(data->lflow_table);
>      return data;
>  }
>  
> -void en_lflow_cleanup(void *data)
> +void en_lflow_cleanup(void *data_)
>  {
> -    lflow_data_destroy(data);
> +    struct lflow_data *data = data_;
> +    lflow_table_destroy(data->lflow_table);
> +    data->lflow_table = NULL;

It's not really needed to set lflow_table to NULL, right?

>  }
> diff --git a/northd/en-lflow.h b/northd/en-lflow.h
> index 5417b2faff..f7325c56b1 100644
> --- a/northd/en-lflow.h
> +++ b/northd/en-lflow.h
> @@ -9,6 +9,12 @@
>  
>  #include "lib/inc-proc-eng.h"
>  
> +struct lflow_table;
> +
> +struct lflow_data {
> +    struct lflow_table *lflow_table;
> +};
> +
>  void en_lflow_run(struct engine_node *node, void *data);
>  void *en_lflow_init(struct engine_node *node, struct engine_arg *arg);
>  void en_lflow_cleanup(void *data);
> diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
> index 9ce4279ee8..0e17bfe2e6 100644
> --- a/northd/inc-proc-northd.c
> +++ b/northd/inc-proc-northd.c
> @@ -99,7 +99,8 @@ static unixctl_cb_func chassis_features_list;
>      SB_NODE(bfd, "bfd") \
>      SB_NODE(fdb, "fdb") \
>      SB_NODE(static_mac_binding, "static_mac_binding") \
> -    SB_NODE(chassis_template_var, "chassis_template_var")
> +    SB_NODE(chassis_template_var, "chassis_template_var") \
> +    SB_NODE(logical_dp_group, "logical_dp_group")
>  
>  enum sb_engine_node {
>  #define SB_NODE(NAME, NAME_STR) SB_##NAME,
> @@ -229,6 +230,7 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
>      engine_add_input(&en_lflow, &en_sb_igmp_group, NULL);
>      engine_add_input(&en_lflow, &en_lr_stateful, NULL);
>      engine_add_input(&en_lflow, &en_ls_stateful, NULL);
> +    engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL);
>      engine_add_input(&en_lflow, &en_northd, lflow_northd_handler);
>      engine_add_input(&en_lflow, &en_port_group, lflow_port_group_handler);
>  
> diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
> new file mode 100644
> index 0000000000..3cf9696f6e
> --- /dev/null
> +++ b/northd/lflow-mgr.c
> @@ -0,0 +1,1261 @@
> +/*
> + * Copyright (c) 2023, Red Hat, Inc.

2024

> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#include <config.h>
> +
> +#include <getopt.h>
> +#include <stdlib.h>
> +#include <stdio.h>

None of these three includes is needed.

> +
> +/* OVS includes */
> +#include "include/openvswitch/hmap.h"
> +#include "include/openvswitch/thread.h"
> +#include "lib/bitmap.h"
> +#include "lib/uuidset.h"
> +#include "openvswitch/util.h"
> +#include "openvswitch/vlog.h"
> +#include "ovs-thread.h"
> +#include "stopwatch.h"

stopwatch.h is also not needed.

> +
> +/* OVN includes */
> +#include "debug.h"
> +#include "lflow-mgr.h"
> +#include "lib/ovn-parallel-hmap.h"
> +#include "northd.h"

Actually, the whole include list can be reduced to:

#include <config.h>

/* OVS includes */
#include "include/openvswitch/thread.h"
#include "lib/bitmap.h"
#include "openvswitch/vlog.h"

/* OVN includes */
#include "debug.h"
#include "lflow-mgr.h"
#include "lib/ovn-parallel-hmap.h"

> +
> +VLOG_DEFINE_THIS_MODULE(lflow_mgr);
> +
> +/* Static function declarations. */
> +struct ovn_lflow;
> +
> +static void ovn_lflow_init(struct ovn_lflow *, struct ovn_datapath *od,
> +                           size_t dp_bitmap_len, enum ovn_stage stage,
> +                           uint16_t priority, char *match,
> +                           char *actions, char *io_port,
> +                           char *ctrl_meter, char *stage_hint,
> +                           const char *where);
> +static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> +                                        enum ovn_stage stage,
> +                                        uint16_t priority, const char *match,
> +                                        const char *actions,
> +                                        const char *ctrl_meter, uint32_t hash);
> +static void ovn_lflow_destroy(struct lflow_table *lflow_table,
> +                              struct ovn_lflow *lflow);
> +static char *ovn_lflow_hint(const struct ovsdb_idl_row *row);
> +
> +static struct ovn_lflow *do_ovn_lflow_add(
> +    struct lflow_table *, const struct ovn_datapath *,
> +    const unsigned long *dp_bitmap, size_t dp_bitmap_len, uint32_t hash,
> +    enum ovn_stage stage, uint16_t priority, const char *match,
> +    const char *actions, const char *io_port,
> +    const char *ctrl_meter,
> +    const struct ovsdb_idl_row *stage_hint,
> +    const char *where);
> +
> +
> +static struct ovs_mutex *lflow_hash_lock(const struct hmap *lflow_table,
> +                                         uint32_t hash);
> +static void lflow_hash_unlock(struct ovs_mutex *hash_lock);
> +
> +static struct ovn_dp_group *ovn_dp_group_get(
> +    struct hmap *dp_groups, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len);
> +static struct ovn_dp_group *ovn_dp_group_create(
> +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> +    struct sbrec_logical_dp_group *, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len, bool is_switch,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths);
> +static struct ovn_dp_group *ovn_dp_group_get(
> +    struct hmap *dp_groups, size_t desired_n,
> +    const unsigned long *desired_bitmap,
> +    size_t bitmap_len);
> +static struct sbrec_logical_dp_group *ovn_sb_insert_or_update_logical_dp_group(
> +    struct ovsdb_idl_txn *ovnsb_txn,
> +    struct sbrec_logical_dp_group *,
> +    const unsigned long *dpg_bitmap,
> +    const struct ovn_datapaths *);
> +static struct ovn_dp_group *ovn_dp_group_find(const struct hmap *dp_groups,
> +                                              const unsigned long *dpg_bitmap,
> +                                              size_t bitmap_len,
> +                                              uint32_t hash);
> +static void inc_ovn_dp_group_ref(struct ovn_dp_group *);
> +static void dec_ovn_dp_group_ref(struct hmap *dp_groups,
> +                                 struct ovn_dp_group *);
> +static void ovn_dp_group_add_with_reference(struct ovn_lflow *,
> +                                            const struct ovn_datapath *od,
> +                                            const unsigned long *dp_bitmap,
> +                                            size_t bitmap_len);
> +
> +static bool lflow_ref_sync_lflows__(
> +    struct lflow_ref  *, struct lflow_table *,
> +    struct ovsdb_idl_txn *ovnsb_txn,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths,
> +    bool ovn_internal_version_changed,
> +    const struct sbrec_logical_flow_table *,
> +    const struct sbrec_logical_dp_group_table *);
> +static bool sync_lflow_to_sb(struct ovn_lflow *,
> +                             struct ovsdb_idl_txn *ovnsb_txn,
> +                             struct lflow_table *,
> +                             const struct ovn_datapaths *ls_datapaths,
> +                             const struct ovn_datapaths *lr_datapaths,
> +                             bool ovn_internal_version_changed,
> +                             const struct sbrec_logical_flow *sbflow,
> +                             const struct sbrec_logical_dp_group_table *);
> +
> +extern int parallelization_state;
> +extern thread_local size_t thread_lflow_counter;
> +
> +struct dp_refcnt;
> +static struct dp_refcnt *dp_refcnt_find(struct hmap *dp_refcnts_map,
> +                                        size_t dp_index);
> +static void inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
> +static bool dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
> +static void ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *);
> +static struct lflow_ref_node *lflow_ref_node_find(struct hmap *lflow_ref_nodes,
> +                                                  struct ovn_lflow *lflow,
> +                                                  uint32_t lflow_hash);
> +static void lflow_ref_node_destroy(struct lflow_ref_node *,
> +                                   struct hmap *lflow_ref_nodes);
> +
> +static bool lflow_hash_lock_initialized = false;
> +/* The lflow_hash_lock is a mutex array that protects updates to the shared
> + * lflow table across threads when parallel lflow build and dp-group are both
> + * enabled. To avoid high contention between threads, a big array of mutexes
> + * are used instead of just one. This is possible because when parallel build
> + * is used we only use hmap_insert_fast() to update the hmap, which would not
> + * touch the bucket array but only the list in a single bucket. We only need to
> + * make sure that when adding lflows to the same hash bucket, the same lock is
> + * used, so that no two threads can add to the bucket at the same time.  It is
> + * ok that the same lock is used to protect multiple buckets, so a fixed sized
> + * mutex array is used instead of 1-1 mapping to the hash buckets. This
> + * simplifies the implementation while effectively reducing lock contention,
> + * because the chance that different threads contend on the same lock
> + * amongst the big number of locks is very low. */
> +#define LFLOW_HASH_LOCK_MASK 0xFFFF
> +static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
> +
> +/* Full thread safety analysis is not possible with hash locks, because
> + * they are taken conditionally based on the 'parallelization_state' and
> + * a flow hash.  Also, the order in which two hash locks are taken is not
> + * predictable during the static analysis.
> + *
> + * Since the order of taking two locks depends on a random hash, to avoid
> + * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> + * of hash locks is similar to a single mutex.
> + *
> + * Using a fake mutex to partially simulate thread safety restrictions, as
> + * if it were actually a single mutex.
> + *
> + * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> + * nature of the lock.  Unlike other attributes, it applies to the
> + * implementation and not to the interface.  So, we can define a function
> + * that acquires the lock without analysing the way it does that.
> + */
> +extern struct ovs_mutex fake_hash_mutex;
> +
> +/* Represents a logical ovn flow (lflow).
> + *
> + * A logical flow with match 'M' and actions 'A' - L(M, A) is created
> + * when lflow engine node (northd.c) calls lflow_table_add_lflow
> + * (or one of the helper macros ovn_lflow_add_*).
> + *
> + * Each lflow is stored in the lflow_table (see 'struct lflow_table' below)
> + * and possibly referenced by zero or more lflow_refs
> + * (see 'struct lflow_ref' and 'struct lflow_ref_node' below).
> + */
> +struct ovn_lflow {
> +    struct hmap_node hmap_node;
> +
> +    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> +    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> +    enum ovn_stage stage;
> +    uint16_t priority;
> +    char *match;
> +    char *actions;
> +    char *io_port;
> +    char *stage_hint;
> +    char *ctrl_meter;
> +    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> +                                  * 'dpg_bitmap'. */
> +    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> +    const char *where;
> +
> +    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> +    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> +    struct hmap dp_refcnts_map; /* Maintains the number of times this ovn_lflow
> +                                 * is referenced by a given datapath.
> +                                 * Contains 'struct dp_refcnt' in the map. */
> +};
> +
> +/* Logical flow table. */
> +struct lflow_table {
> +    struct hmap entries; /* hmap of lflows. */
> +    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
> +    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
> +    ssize_t max_seen_lflow_size;
> +};
> +
> +struct lflow_table *
> +lflow_table_alloc(void)
> +{
> +    struct lflow_table *lflow_table = xzalloc(sizeof *lflow_table);
> +    lflow_table->max_seen_lflow_size = 128;
> +
> +    return lflow_table;
> +}
> +
> +void
> +lflow_table_init(struct lflow_table *lflow_table)
> +{
> +    fast_hmap_size_for(&lflow_table->entries,
> +                       lflow_table->max_seen_lflow_size);
> +    ovn_dp_groups_init(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_init(&lflow_table->lr_dp_groups);
> +}
> +
> +void
> +lflow_table_clear(struct lflow_table *lflow_table)
> +{
> +    struct ovn_lflow *lflow;
> +    HMAP_FOR_EACH_POP (lflow, hmap_node, &lflow_table->entries) {
> +        ovn_lflow_destroy(NULL, lflow);
> +    }
> +
> +    ovn_dp_groups_clear(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_clear(&lflow_table->lr_dp_groups);
> +}
> +
> +void
> +lflow_table_destroy(struct lflow_table *lflow_table)
> +{
> +    lflow_table_clear(lflow_table);
> +    hmap_destroy(&lflow_table->entries);
> +    ovn_dp_groups_destroy(&lflow_table->ls_dp_groups);
> +    ovn_dp_groups_destroy(&lflow_table->lr_dp_groups);
> +    free(lflow_table);
> +}
> +
> +void
> +lflow_table_expand(struct lflow_table *lflow_table)
> +{
> +    hmap_expand(&lflow_table->entries);
> +
> +    if (hmap_count(&lflow_table->entries) >
> +            lflow_table->max_seen_lflow_size) {
> +        lflow_table->max_seen_lflow_size = hmap_count(&lflow_table->entries);
> +    }
> +}
> +
> +void
> +lflow_table_set_size(struct lflow_table *lflow_table, size_t size)
> +{
> +    lflow_table->entries.n = size;
> +}
> +
> +void
> +lflow_table_sync_to_sb(struct lflow_table *lflow_table,
> +                       struct ovsdb_idl_txn *ovnsb_txn,
> +                       const struct ovn_datapaths *ls_datapaths,
> +                       const struct ovn_datapaths *lr_datapaths,
> +                       bool ovn_internal_version_changed,
> +                       const struct sbrec_logical_flow_table *sb_flow_table,
> +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> +    struct hmap *lflows = &lflow_table->entries;
> +    struct ovn_lflow *lflow;
> +
> +    /* Push changes to the Logical_Flow table to database. */
> +    const struct sbrec_logical_flow *sbflow;
> +    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow, sb_flow_table) {
> +        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> +        struct ovn_datapath *logical_datapath_od = NULL;
> +        size_t i;
> +
> +        /* Find one valid datapath to get the datapath type. */
> +        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> +        if (dp) {
> +            logical_datapath_od = ovn_datapath_from_sbrec(
> +                &ls_datapaths->datapaths, &lr_datapaths->datapaths, dp);
> +            if (logical_datapath_od
> +                && ovn_datapath_is_stale(logical_datapath_od)) {
> +                logical_datapath_od = NULL;
> +            }
> +        }
> +        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> +            logical_datapath_od = ovn_datapath_from_sbrec(
> +                &ls_datapaths->datapaths, &lr_datapaths->datapaths,
> +                dp_group->datapaths[i]);
> +            if (logical_datapath_od
> +                && !ovn_datapath_is_stale(logical_datapath_od)) {
> +                break;
> +            }
> +            logical_datapath_od = NULL;
> +        }
> +
> +        if (!logical_datapath_od) {
> +            /* This lflow has no valid logical datapaths. */
> +            sbrec_logical_flow_delete(sbflow);
> +            continue;
> +        }
> +
> +        enum ovn_pipeline pipeline
> +            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> +
> +        lflow = ovn_lflow_find(
> +            lflows,
> +            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> +                            pipeline, sbflow->table_id),
> +            sbflow->priority, sbflow->match, sbflow->actions,
> +            sbflow->controller_meter, sbflow->hash);
> +        if (lflow) {
> +            sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                             lr_datapaths, ovn_internal_version_changed,
> +                             sbflow, dpgrp_table);
> +
> +            hmap_remove(lflows, &lflow->hmap_node);
> +            hmap_insert(&lflows_temp, &lflow->hmap_node,
> +                        hmap_node_hash(&lflow->hmap_node));
> +        } else {
> +            sbrec_logical_flow_delete(sbflow);
> +        }
> +    }
> +
> +    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> +        sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                         lr_datapaths, ovn_internal_version_changed,
> +                         NULL, dpgrp_table);
> +
> +        hmap_remove(lflows, &lflow->hmap_node);
> +        hmap_insert(&lflows_temp, &lflow->hmap_node,
> +                    hmap_node_hash(&lflow->hmap_node));
> +    }
> +    hmap_swap(lflows, &lflows_temp);
> +    hmap_destroy(&lflows_temp);
> +}
> +
> +/* lflow ref */
> +struct lflow_ref {
> +    /* head of the list 'struct lflow_ref_node'. */
> +    struct ovs_list lflows_ref_list;

Why do we need this additional list?  AFAICT we always insert both in
the lflow_ref_nodes hmap and in this list.  Can't we walk the whole hmap
every time we need to walk all struct lflow_ref_node?

> +
> +    /* hmap_node is 'struct lflow_ref_node *'.  This is used to ensure
> +     * that there are no duplicates in 'lflow_ref_list' above. */
> +    struct hmap lflow_ref_nodes;
> +};
> +
> +struct lflow_ref_node {
> +    /* hmap node in the hmap - 'struct lflow_ref->lflow_ref_nodes' */
> +    struct hmap_node ref_node;
> +
> +    /* This list follows different lflows referenced by the
> +     * 'struct lflow_ref'. List head is lflow_ref->lflows_ref_list. */
> +    struct ovs_list lflow_list_node;
> +    /* This list follows different objects that reference the same lflow. List
> +     * head is ovn_lflow->referenced_by. */
> +    struct ovs_list ref_list_node;
> +    /* The lflow. */
> +    struct ovn_lflow *lflow;
> +
> +    /* Index of the datapath this lflow_ref_node belongs to. */
> +    size_t dp_index;
> +
> +    /* Indicates if the lflow_ref_node for an lflow - L(M, A) is linked
> +     * to datapath(s) or not.
> +     * It is set to true when an lflow L(M, A) is referenced by an lflow ref
> +     * in lflow_table_add_lflow().  It is set to false when it is unlinked
> +     * from the datapath when lflow_ref_unlink_lflows() is called. */
> +    bool linked;
> +};
> +
> +struct lflow_ref *
> +lflow_ref_create(void)
> +{
> +    struct lflow_ref *lflow_ref = xzalloc(sizeof *lflow_ref);
> +    ovs_list_init(&lflow_ref->lflows_ref_list);
> +    hmap_init(&lflow_ref->lflow_ref_nodes);
> +    return lflow_ref;
> +}
> +
> +void
> +lflow_ref_destroy(struct lflow_ref *lflow_ref)
> +{
> +    struct lflow_ref_node *l;
> +
> +    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
> +        lflow_ref_node_destroy(l, NULL);
> +    }
> +
> +    hmap_destroy(&lflow_ref->lflow_ref_nodes);
> +    free(lflow_ref);

The whole body of the function is actually:

lflow_ref_clear(lflow_ref);
hmap_destroy(&lflow_ref->lflow_ref_nodes);
free(lflow_ref);

> +}
> +
> +void
> +lflow_ref_clear(struct lflow_ref *lflow_ref)
> +{
> +    struct lflow_ref_node *l;
> +    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
> +        lflow_ref_node_destroy(l, NULL);

Why not pass &lflow_ref->lflow_ref_nodes instead of NULL and avoid the
hmap_clear() below?  On a second look, maybe it's faster to do as you
did but I guess it just looks weird to me.

> +    }
> +
> +    hmap_clear(&lflow_ref->lflow_ref_nodes);
> +}
> +
> +/* Unlinks the lflows referenced by the 'lflow_ref'.
> + * For each lflow_ref_node (lrn) in the lflow_ref, it clears the bit for
> + * the datapath index (lrn->dp_index) in the lrn->lflow's dpg bitmap,
> + * once that datapath's refcount drops to zero. */
> +void
> +lflow_ref_unlink_lflows(struct lflow_ref *lflow_ref)
> +{
> +    struct lflow_ref_node *lrn;

In some places we use 'struct lflow_ref_node *lrn' in others just
'struct lflow_ref_node *l'.  I'd prefer if we're consistent.  What if we
use lrn everywhere?

> +
> +    LIST_FOR_EACH (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
> +        if (dec_dp_refcnt(&lrn->lflow->dp_refcnts_map,
> +                          lrn->dp_index)) {
> +            bitmap_set0(lrn->lflow->dpg_bitmap, lrn->dp_index);
> +        }
> +
> +        lrn->linked = false;
> +    }
> +}
> +
> +bool
> +lflow_ref_resync_flows(struct lflow_ref *lflow_ref,
> +                       struct lflow_table *lflow_table,
> +                       struct ovsdb_idl_txn *ovnsb_txn,
> +                       const struct ovn_datapaths *ls_datapaths,
> +                       const struct ovn_datapaths *lr_datapaths,
> +                       bool ovn_internal_version_changed,
> +                       const struct sbrec_logical_flow_table *sbflow_table,
> +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    lflow_ref_unlink_lflows(lflow_ref);
> +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> +                                   ls_datapaths, lr_datapaths,
> +                                   ovn_internal_version_changed, sbflow_table,
> +                                   dpgrp_table);
> +}
> +
> +bool
> +lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
> +                      struct lflow_table *lflow_table,
> +                      struct ovsdb_idl_txn *ovnsb_txn,
> +                      const struct ovn_datapaths *ls_datapaths,
> +                      const struct ovn_datapaths *lr_datapaths,
> +                      bool ovn_internal_version_changed,
> +                      const struct sbrec_logical_flow_table *sbflow_table,
> +                      const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> +                                   ls_datapaths, lr_datapaths,
> +                                   ovn_internal_version_changed, sbflow_table,
> +                                   dpgrp_table);
> +}
> +
> +void
> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> +                      const struct ovn_datapath *od,
> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> +                      enum ovn_stage stage, uint16_t priority,
> +                      const char *match, const char *actions,
> +                      const char *io_port, const char *ctrl_meter,
> +                      const struct ovsdb_idl_row *stage_hint,
> +                      const char *where,
> +                      struct lflow_ref *lflow_ref)
> +    OVS_EXCLUDED(fake_hash_mutex)
> +{
> +    struct ovs_mutex *hash_lock;
> +    uint32_t hash;
> +
> +    ovs_assert(!od ||
> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> +
> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> +                                 ovn_stage_get_pipeline(stage),
> +                                 priority, match,
> +                                 actions);
> +
> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> +    struct ovn_lflow *lflow =
> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> +                         dp_bitmap_len, hash, stage,
> +                         priority, match, actions,
> +                         io_port, ctrl_meter, stage_hint, where);
> +
> +    if (lflow_ref) {
> +        /* lflow referencing is only supported if 'od' is not NULL. */
> +        ovs_assert(od);
> +
> +        struct lflow_ref_node *lrn =
> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> +        if (!lrn) {
> +            lrn = xzalloc(sizeof *lrn);
> +            lrn->lflow = lflow;
> +            lrn->dp_index = od->index;
> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> +                            &lrn->lflow_list_node);
> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> +
> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> +        }
> +
> +        lrn->linked = true;
> +    }
> +
> +    lflow_hash_unlock(hash_lock);
> +}
> +
> +void
> +lflow_table_add_lflow_default_drop(struct lflow_table *lflow_table,
> +                                   const struct ovn_datapath *od,
> +                                   enum ovn_stage stage,
> +                                   const char *where,
> +                                   struct lflow_ref *lflow_ref)
> +{
> +    lflow_table_add_lflow(lflow_table, od, NULL, 0, stage, 0, "1",
> +                          debug_drop_action(), NULL, NULL, NULL,
> +                          where, lflow_ref);
> +}
> +
> +/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> + * doesn't exist, creates a new one and adds it to 'dp_groups'.
> + * If 'sb_group' is provided, the function will try to re-use this group by
> + * either taking it directly, or by modifying it, if it's not already in
> + * use. */
> +struct ovn_dp_group *
> +ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> +                           struct hmap *dp_groups,
> +                           struct sbrec_logical_dp_group *sb_group,
> +                           size_t desired_n,
> +                           const unsigned long *desired_bitmap,
> +                           size_t bitmap_len,
> +                           bool is_switch,
> +                           const struct ovn_datapaths *ls_datapaths,
> +                           const struct ovn_datapaths *lr_datapaths)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    dpg = ovn_dp_group_get(dp_groups, desired_n, desired_bitmap, bitmap_len);
> +    if (dpg) {
> +        return dpg;
> +    }
> +
> +    return ovn_dp_group_create(ovnsb_txn, dp_groups, sb_group, desired_n,
> +                               desired_bitmap, bitmap_len, is_switch,
> +                               ls_datapaths, lr_datapaths);
> +}
> +
> +void
> +ovn_dp_groups_clear(struct hmap *dp_groups)
> +{
> +    struct ovn_dp_group *dpg;
> +    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
> +        bitmap_free(dpg->bitmap);
> +        free(dpg);

This is duplicated in dec_ovn_dp_group_ref().  Also, should we assert
that all refcounts are 0 here?

I think we need a "ovn_dp_group_destroy()" function to avoid duplicating
things.
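
Something like the following, perhaps (a standalone sketch with a mocked
struct; the helper name is a suggestion, and bitmap_free() in OVS reduces
to free()):

```c
#include <stdlib.h>

/* Mock of the relevant fields; the real 'struct ovn_dp_group' lives in
 * northd.  bitmap_free() in OVS is effectively free(). */
struct ovn_dp_group {
    unsigned long *bitmap;
    size_t refcnt;
};

/* Hypothetical shared helper: the single place that frees a group's
 * storage, called from both ovn_dp_groups_clear() and
 * dec_ovn_dp_group_ref(). */
static void
ovn_dp_group_destroy(struct ovn_dp_group *dpg)
{
    if (!dpg) {
        return;
    }
    free(dpg->bitmap);   /* bitmap_free() in the real code. */
    free(dpg);
}
```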

> +    }
> +}
> +
> +void
> +ovn_dp_groups_destroy(struct hmap *dp_groups)
> +{
> +    ovn_dp_groups_clear(dp_groups);
> +    hmap_destroy(dp_groups);
> +}
> +
> +void
> +lflow_hash_lock_init(void)
> +{
> +    if (!lflow_hash_lock_initialized) {
> +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> +            ovs_mutex_init(&lflow_hash_locks[i]);
> +        }
> +        lflow_hash_lock_initialized = true;
> +    }
> +}
> +
> +void
> +lflow_hash_lock_destroy(void)
> +{
> +    if (lflow_hash_lock_initialized) {
> +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> +            ovs_mutex_destroy(&lflow_hash_locks[i]);
> +        }
> +    }
> +    lflow_hash_lock_initialized = false;
> +}
> +
> +/* static functions. */
> +static void
> +ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> +               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> +               char *match, char *actions, char *io_port, char *ctrl_meter,
> +               char *stage_hint, const char *where)
> +{
> +    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> +    lflow->od = od;
> +    lflow->stage = stage;
> +    lflow->priority = priority;
> +    lflow->match = match;
> +    lflow->actions = actions;
> +    lflow->io_port = io_port;
> +    lflow->stage_hint = stage_hint;
> +    lflow->ctrl_meter = ctrl_meter;
> +    lflow->dpg = NULL;
> +    lflow->where = where;
> +    lflow->sb_uuid = UUID_ZERO;
> +    hmap_init(&lflow->dp_refcnts_map);
> +    ovs_list_init(&lflow->referenced_by);
> +}
> +
> +static struct ovs_mutex *
> +lflow_hash_lock(const struct hmap *lflow_table, uint32_t hash)
> +    OVS_ACQUIRES(fake_hash_mutex)
> +    OVS_NO_THREAD_SAFETY_ANALYSIS
> +{
> +    struct ovs_mutex *hash_lock = NULL;
> +
> +    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> +        hash_lock =
> +            &lflow_hash_locks[hash & lflow_table->mask & LFLOW_HASH_LOCK_MASK];
> +        ovs_mutex_lock(hash_lock);
> +    }
> +    return hash_lock;
> +}
> +
> +static void
> +lflow_hash_unlock(struct ovs_mutex *hash_lock)
> +    OVS_RELEASES(fake_hash_mutex)
> +    OVS_NO_THREAD_SAFETY_ANALYSIS
> +{
> +    if (hash_lock) {
> +        ovs_mutex_unlock(hash_lock);
> +    }
> +}
> +
> +static bool
> +ovn_lflow_equal(const struct ovn_lflow *a, enum ovn_stage stage,
> +                uint16_t priority, const char *match,
> +                const char *actions, const char *ctrl_meter)
> +{
> +    return (a->stage == stage
> +            && a->priority == priority
> +            && !strcmp(a->match, match)
> +            && !strcmp(a->actions, actions)
> +            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> +}
> +
> +static struct ovn_lflow *
> +ovn_lflow_find(const struct hmap *lflows,
> +               enum ovn_stage stage, uint16_t priority,
> +               const char *match, const char *actions,
> +               const char *ctrl_meter, uint32_t hash)
> +{
> +    struct ovn_lflow *lflow;
> +    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> +        if (ovn_lflow_equal(lflow, stage, priority, match, actions,
> +                            ctrl_meter)) {
> +            return lflow;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +static char *
> +ovn_lflow_hint(const struct ovsdb_idl_row *row)
> +{
> +    if (!row) {
> +        return NULL;
> +    }
> +    return xasprintf("%08x", row->uuid.parts[0]);
> +}
> +
> +static void
> +ovn_lflow_destroy(struct lflow_table *lflow_table, struct ovn_lflow *lflow)
> +{
> +    if (lflow) {
> +        if (lflow_table) {
> +            hmap_remove(&lflow_table->entries, &lflow->hmap_node);
> +        }
> +        bitmap_free(lflow->dpg_bitmap);
> +        free(lflow->match);
> +        free(lflow->actions);
> +        free(lflow->io_port);
> +        free(lflow->stage_hint);
> +        free(lflow->ctrl_meter);
> +        ovn_lflow_clear_dp_refcnts_map(lflow);
> +        struct lflow_ref_node *l;
> +        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
> +            lflow_ref_node_destroy(l, NULL);

This is very risky in my opinion.  We end up with 'l' being freed while
there still might be a 'lflow_ref_nodes' hmap that points to it.  I
think we should just store a backpointer from lflow_ref_node to the
'struct lflow_ref' that contains it?

I think that then we won't need to do all the tricks we do of passing
NULL instead of the hmap in lots of places.
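
A sketch of the backpointer idea (field name assumed, other types mocked);
with it, lflow_ref_node_destroy() can always remove the node from its
owner's hmap itself:

```c
#include <stdbool.h>
#include <stddef.h>

/* Mock stand-ins for the OVS types involved. */
struct hmap_node { size_t hash; void *next; };
struct ovs_list { struct ovs_list *prev, *next; };
struct ovn_lflow;
struct lflow_ref;   /* The owner; forward declaration suffices here. */

struct lflow_ref_node {
    struct hmap_node ref_node;        /* In lflow_ref->lflow_ref_nodes. */
    struct ovs_list lflow_list_node;  /* In lflow_ref->lflows_ref_list. */
    struct ovs_list ref_list_node;    /* In lflow->referenced_by. */
    struct ovn_lflow *lflow;
    size_t dp_index;
    bool linked;

    /* Suggested addition: backpointer to the lflow_ref that owns this
     * node, so destroy can find the hmap to remove it from without
     * callers passing the hmap (or NULL) explicitly. */
    struct lflow_ref *lflow_ref;
};
```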

> +        }
> +        free(lflow);
> +    }
> +}
> +
> +static struct ovn_lflow *
> +do_ovn_lflow_add(struct lflow_table *lflow_table,
> +                 const struct ovn_datapath *od,
> +                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> +                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> +                 const char *match, const char *actions,
> +                 const char *io_port, const char *ctrl_meter,
> +                 const struct ovsdb_idl_row *stage_hint,
> +                 const char *where)
> +    OVS_REQUIRES(fake_hash_mutex)
> +{
> +    struct ovn_lflow *old_lflow;
> +    struct ovn_lflow *lflow;
> +
> +    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> +    ovs_assert(bitmap_len);
> +
> +    old_lflow = ovn_lflow_find(&lflow_table->entries, stage,
> +                               priority, match, actions, ctrl_meter, hash);
> +    if (old_lflow) {
> +        ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> +                                        bitmap_len);
> +        return old_lflow;
> +    }
> +
> +    lflow = xzalloc(sizeof *lflow);
> +    /* While adding new logical flows we're not setting a single datapath,
> +     * but collecting a group.  'od' will be updated later for all flows with
> +     * only one datapath in their group, so they can be hashed correctly. */
> +    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> +                   xstrdup(match), xstrdup(actions),
> +                   io_port ? xstrdup(io_port) : NULL,
> +                   nullable_xstrdup(ctrl_meter),
> +                   ovn_lflow_hint(stage_hint), where);
> +
> +    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> +
> +    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> +        hmap_insert(&lflow_table->entries, &lflow->hmap_node, hash);
> +    } else {
> +        hmap_insert_fast(&lflow_table->entries, &lflow->hmap_node,
> +                         hash);
> +        thread_lflow_counter++;
> +    }
> +
> +    return lflow;
> +}
> +
> +static bool
> +sync_lflow_to_sb(struct ovn_lflow *lflow,
> +                 struct ovsdb_idl_txn *ovnsb_txn,
> +                 struct lflow_table *lflow_table,
> +                 const struct ovn_datapaths *ls_datapaths,
> +                 const struct ovn_datapaths *lr_datapaths,
> +                 bool ovn_internal_version_changed,
> +                 const struct sbrec_logical_flow *sbflow,
> +                 const struct sbrec_logical_dp_group_table *sb_dpgrp_table)
> +{
> +    struct sbrec_logical_dp_group *sbrec_dp_group = NULL;
> +    struct ovn_dp_group *pre_sync_dpg = lflow->dpg;
> +    struct ovn_datapath **datapaths_array;
> +    struct hmap *dp_groups;
> +    size_t n_datapaths;
> +    bool is_switch;
> +
> +    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> +        n_datapaths = ods_size(ls_datapaths);
> +        datapaths_array = ls_datapaths->array;
> +        dp_groups = &lflow_table->ls_dp_groups;
> +        is_switch = true;
> +    } else {
> +        n_datapaths = ods_size(lr_datapaths);
> +        datapaths_array = lr_datapaths->array;
> +        dp_groups = &lflow_table->lr_dp_groups;
> +        is_switch = false;
> +    }
> +
> +    lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> +    ovs_assert(lflow->n_ods);
> +
> +    if (lflow->n_ods == 1) {
> +        /* There is only one datapath, so it should be moved out of the
> +         * group to a single 'od'. */
> +        size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> +                                    n_datapaths);
> +
> +        lflow->od = datapaths_array[index];
> +        lflow->dpg = NULL;
> +    } else {
> +        lflow->od = NULL;
> +    }
> +
> +    if (!sbflow) {
> +        lflow->sb_uuid = uuid_random();
> +        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> +                                                        &lflow->sb_uuid);
> +        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> +        uint8_t table = ovn_stage_get_table(lflow->stage);
> +        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> +        sbrec_logical_flow_set_table_id(sbflow, table);
> +        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> +        sbrec_logical_flow_set_match(sbflow, lflow->match);
> +        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> +        if (lflow->io_port) {
> +            struct smap tags = SMAP_INITIALIZER(&tags);
> +            smap_add(&tags, "in_out_port", lflow->io_port);
> +            sbrec_logical_flow_set_tags(sbflow, &tags);
> +            smap_destroy(&tags);
> +        }
> +        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> +
> +        /* Trim the source locator lflow->where, which looks something like
> +         * "ovn/northd/northd.c:1234", down to just the part following the
> +         * last slash, e.g. "northd.c:1234". */
> +        const char *slash = strrchr(lflow->where, '/');
> +#if _WIN32
> +        const char *backslash = strrchr(lflow->where, '\\');
> +        if (!slash || backslash > slash) {
> +            slash = backslash;
> +        }
> +#endif
> +        const char *where = slash ? slash + 1 : lflow->where;
> +
> +        struct smap ids = SMAP_INITIALIZER(&ids);
> +        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> +        smap_add(&ids, "source", where);
> +        if (lflow->stage_hint) {
> +            smap_add(&ids, "stage-hint", lflow->stage_hint);
> +        }
> +        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> +        smap_destroy(&ids);
> +
> +    } else {
> +        lflow->sb_uuid = sbflow->header_.uuid;
> +        sbrec_dp_group = sbflow->logical_dp_group;
> +
> +        if (ovn_internal_version_changed) {
> +            const char *stage_name = smap_get_def(&sbflow->external_ids,
> +                                                  "stage-name", "");
> +            const char *stage_hint = smap_get_def(&sbflow->external_ids,
> +                                                  "stage-hint", "");
> +            const char *source = smap_get_def(&sbflow->external_ids,
> +                                              "source", "");
> +
> +            if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> +                sbrec_logical_flow_update_external_ids_setkey(
> +                    sbflow, "stage-name", ovn_stage_to_str(lflow->stage));
> +            }
> +            if (lflow->stage_hint) {
> +                if (strcmp(stage_hint, lflow->stage_hint)) {
> +                    sbrec_logical_flow_update_external_ids_setkey(
> +                        sbflow, "stage-hint", lflow->stage_hint);
> +                }
> +            }
> +            if (lflow->where) {
> +
> +                /* Trim the source locator lflow->where, which looks something
> +                 * like "ovn/northd/northd.c:1234", down to just the part
> +                 * following the last slash, e.g. "northd.c:1234". */
> +                const char *slash = strrchr(lflow->where, '/');
> +#if _WIN32
> +                const char *backslash = strrchr(lflow->where, '\\');
> +                if (!slash || backslash > slash) {
> +                    slash = backslash;
> +                }
> +#endif
> +                const char *where = slash ? slash + 1 : lflow->where;
> +
> +                if (strcmp(source, where)) {
> +                    sbrec_logical_flow_update_external_ids_setkey(
> +                        sbflow, "source", where);
> +                }
> +            }
> +        }
> +    }
> +
> +    if (lflow->od) {
> +        sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> +        sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> +    } else {
> +        sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> +        lflow->dpg = ovn_dp_group_get(dp_groups, lflow->n_ods,
> +                                      lflow->dpg_bitmap,
> +                                      n_datapaths);
> +        if (lflow->dpg) {
> +            /* Update the dpg's sb dp_group. */
> +            lflow->dpg->dp_group = sbrec_logical_dp_group_table_get_for_uuid(
> +                sb_dpgrp_table,
> +                &lflow->dpg->dpg_uuid);
> +
> +            if (!lflow->dpg->dp_group) {
> +                /* Ideally this should not happen.  But it can still happen
> +                 * due to 2 reasons:
> +                 * 1. There is a bug in the dp_group management.  We should
> +                 *    perhaps assert here.
> +                 * 2. A user or CMS may delete the logical_dp_groups in SB DB
> +                 *    or clear the SB:Logical_flow.logical_dp_groups column
> +                 *    (intentionally or accidentally).
> +                 *
> +                 * Because of (2) it is better to return false instead of
> +                 * asserting, so that we recover from the inconsistent SB DB.
> +                 */
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> +                VLOG_WARN_RL(&rl, "SB Logical flow ["UUID_FMT"]'s "
> +                            "logical_dp_group column is not set "
> +                            "(which is unexpected).  It should have been "
> +                            "referencing the dp group ["UUID_FMT"]",
> +                            UUID_ARGS(&sbflow->header_.uuid),
> +                            UUID_ARGS(&lflow->dpg->dpg_uuid));
> +                return false;
> +            }
> +        } else {
> +            lflow->dpg = ovn_dp_group_create(
> +                                ovnsb_txn, dp_groups, sbrec_dp_group,
> +                                lflow->n_ods, lflow->dpg_bitmap,
> +                                n_datapaths, is_switch,
> +                                ls_datapaths,
> +                                lr_datapaths);
> +        }
> +        sbrec_logical_flow_set_logical_dp_group(sbflow,
> +                                                lflow->dpg->dp_group);
> +    }
> +
> +    if (pre_sync_dpg != lflow->dpg) {
> +        if (lflow->dpg) {
> +            inc_ovn_dp_group_ref(lflow->dpg);
> +        }
> +        if (pre_sync_dpg) {
> +           dec_ovn_dp_group_ref(dp_groups, pre_sync_dpg);
> +        }
> +    }
> +
> +    return true;
> +}
> +
> +static struct ovn_dp_group *
> +ovn_dp_group_find(const struct hmap *dp_groups,
> +                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> +                  uint32_t hash)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> +        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> +            return dpg;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +static void
> +inc_ovn_dp_group_ref(struct ovn_dp_group *dpg)
> +{
> +    dpg->refcnt++;
> +}
> +
> +static void
> +dec_ovn_dp_group_ref(struct hmap *dp_groups, struct ovn_dp_group *dpg)
> +{
> +    dpg->refcnt--;
> +
> +    if (!dpg->refcnt) {
> +        hmap_remove(dp_groups, &dpg->node);
> +        free(dpg->bitmap);

This should be bitmap_free().

> +        free(dpg);
> +    }
> +}

To simplify callers I'd write these two functions as follows (inspired a
bit by json.c in OVS):

static void
ovn_dp_group_use(struct ovn_dp_group *dpg)
{
    if (dpg) {
        dpg->refcnt++;
    }
}

static void
ovn_dp_group_release(struct hmap *dp_groups, struct ovn_dp_group *dpg)
{
    if (dpg && !--dpg->refcnt) {
        hmap_remove(dp_groups, &dpg->node);
        bitmap_free(dpg->bitmap);
        free(dpg);
    }
}

> +
> +static struct sbrec_logical_dp_group *
> +ovn_sb_insert_or_update_logical_dp_group(
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            struct sbrec_logical_dp_group *dp_group,
> +                            const unsigned long *dpg_bitmap,
> +                            const struct ovn_datapaths *datapaths)
> +{
> +    const struct sbrec_datapath_binding **sb;
> +    size_t n = 0, index;
> +
> +    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> +    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> +        sb[n++] = datapaths->array[index]->sb;
> +    }
> +    if (!dp_group) {
> +        struct uuid dpg_uuid = uuid_random();
> +        dp_group = sbrec_logical_dp_group_insert_persist_uuid(
> +            ovnsb_txn, &dpg_uuid);
> +    }
> +    sbrec_logical_dp_group_set_datapaths(
> +        dp_group, (struct sbrec_datapath_binding **) sb, n);
> +    free(sb);
> +
> +    return dp_group;
> +}
> +
> +static struct ovn_dp_group *
> +ovn_dp_group_get(struct hmap *dp_groups, size_t desired_n,
> +                 const unsigned long *desired_bitmap,
> +                 size_t bitmap_len)
> +{
> +    uint32_t hash;
> +
> +    hash = hash_int(desired_n, 0);
> +    return ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> +}
> +
> +/* Creates a new datapath group and adds it to 'dp_groups'.
> + * If 'sb_group' is provided, the function will try to re-use this group by
> + * either taking it directly, or by modifying it, if it's not already in use.
> + * Caller should first call ovn_dp_group_get() before calling this function. */
> +static struct ovn_dp_group *
> +ovn_dp_group_create(struct ovsdb_idl_txn *ovnsb_txn,
> +                    struct hmap *dp_groups,
> +                    struct sbrec_logical_dp_group *sb_group,
> +                    size_t desired_n,
> +                    const unsigned long *desired_bitmap,
> +                    size_t bitmap_len,
> +                    bool is_switch,
> +                    const struct ovn_datapaths *ls_datapaths,
> +                    const struct ovn_datapaths *lr_datapaths)
> +{
> +    struct ovn_dp_group *dpg;
> +
> +    bool update_dp_group = false, can_modify = false;
> +    unsigned long *dpg_bitmap;
> +    size_t i, n = 0;
> +
> +    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> +    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> +        struct ovn_datapath *datapath_od;
> +
> +        datapath_od = ovn_datapath_from_sbrec(
> +                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> +                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> +                        sb_group->datapaths[i]);
> +        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> +            break;
> +        }
> +        bitmap_set1(dpg_bitmap, datapath_od->index);
> +        n++;
> +    }
> +    if (!sb_group || i != sb_group->n_datapaths) {
> +        /* No group or stale group.  Not going to be used. */
> +        update_dp_group = true;
> +        can_modify = true;
> +    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> +        /* The group in Sb is different. */
> +        update_dp_group = true;
> +        /* We can modify existing group if it's not already in use. */
> +        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> +                                        bitmap_len, hash_int(n, 0));
> +    }
> +
> +    bitmap_free(dpg_bitmap);
> +
> +    dpg = xzalloc(sizeof *dpg);
> +    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> +    if (!update_dp_group) {
> +        dpg->dp_group = sb_group;
> +    } else {
> +        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> +                            ovnsb_txn,
> +                            can_modify ? sb_group : NULL,
> +                            desired_bitmap,
> +                            is_switch ? ls_datapaths : lr_datapaths);
> +    }
> +    dpg->dpg_uuid = dpg->dp_group->header_.uuid;
> +    hmap_insert(dp_groups, &dpg->node, hash_int(desired_n, 0));
> +
> +    return dpg;
> +}
> +
> +/* Adds an OVN datapath to a datapath group of existing logical flow.
> + * Version to use when hash bucket locking is NOT required or the corresponding
> + * hash lock is already taken. */
> +static void
> +ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> +                                const struct ovn_datapath *od,
> +                                const unsigned long *dp_bitmap,
> +                                size_t bitmap_len)
> +    OVS_REQUIRES(fake_hash_mutex)
> +{
> +    if (od) {
> +        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> +    }
> +    if (dp_bitmap) {
> +        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> +    }
> +}
> +
> +static bool
> +lflow_ref_sync_lflows__(struct lflow_ref  *lflow_ref,
> +                        struct lflow_table *lflow_table,
> +                        struct ovsdb_idl_txn *ovnsb_txn,
> +                        const struct ovn_datapaths *ls_datapaths,
> +                        const struct ovn_datapaths *lr_datapaths,
> +                        bool ovn_internal_version_changed,
> +                        const struct sbrec_logical_flow_table *sbflow_table,
> +                        const struct sbrec_logical_dp_group_table *dpgrp_table)
> +{
> +    struct lflow_ref_node *lrn;
> +    struct ovn_lflow *lflow;
> +    LIST_FOR_EACH_SAFE (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
> +        lflow = lrn->lflow;
> +        const struct sbrec_logical_flow *sblflow =
> +            sbrec_logical_flow_table_get_for_uuid(sbflow_table,
> +                                                  &lflow->sb_uuid);
> +
> +        struct hmap *dp_groups = NULL;
> +        size_t n_datapaths;
> +        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> +            dp_groups = &lflow_table->ls_dp_groups;
> +            n_datapaths = ods_size(ls_datapaths);
> +        } else {
> +            dp_groups = &lflow_table->lr_dp_groups;
> +            n_datapaths = ods_size(lr_datapaths);
> +        }
> +
> +        size_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> +
> +        if (n_ods) {
> +            if (!sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> +                                  lr_datapaths, ovn_internal_version_changed,
> +                                  sblflow, dpgrp_table)) {
> +                return false;
> +            }
> +        }
> +
> +        if (!lrn->linked) {
> +            lflow_ref_node_destroy(lrn, &lflow_ref->lflow_ref_nodes);
> +
> +            if (ovs_list_is_empty(&lflow->referenced_by)) {
> +                if (lflow->dpg) {
> +                    dec_ovn_dp_group_ref(dp_groups, lflow->dpg);
> +                }
> +                ovn_lflow_destroy(lflow_table, lflow);
> +
> +                if (sblflow) {
> +                    sbrec_logical_flow_delete(sblflow);
> +                }
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
> +/* Used for datapath reference counting for a given 'struct ovn_lflow'.
> + * See the hmap 'dp_refcnts_map' in 'struct ovn_lflow'.
> + * A given lflow L(M, A) with match M and actions A can be referenced by
> + * multiple lflow_refs for the same datapath.  E.g. two lflow_refs, such
> + * as op->lflow_ref and op->stateful_lflow_ref of a datapath, can have a
> + * reference to the same lflow L(M, A).  In this case it is important to
> + * maintain this reference count so that the sync to the SB DB
> + * logical_flow is correct. */
> +struct dp_refcnt {
> +    struct hmap_node key_node;
> +
> +    size_t dp_index; /* datapath index.  Also used as hmap key. */
> +    size_t refcnt;  /* reference counter. */

Nit: please align the comments.

> +};
> +
> +static struct dp_refcnt *
> +dp_refcnt_find(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    struct dp_refcnt *dp_refcnt;
> +    HMAP_FOR_EACH_WITH_HASH (dp_refcnt, key_node, dp_index, dp_refcnts_map) {
> +        if (dp_refcnt->dp_index == dp_index) {
> +            return dp_refcnt;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static void
> +inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> +
> +    if (!dp_refcnt) {
> +        dp_refcnt = xzalloc(sizeof *dp_refcnt);
> +        dp_refcnt->dp_index = dp_index;
> +
> +        hmap_insert(dp_refcnts_map, &dp_refcnt->key_node, dp_index);
> +    }
> +
> +    dp_refcnt->refcnt++;
> +}
> +
> +/* Decrements the datapath's refcnt in the 'dp_refcnts_map' if it exists
> + * and returns true if the refcnt is 0 or if the dp refcnt doesn't exist. */
> +static bool
> +dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
> +{
> +    bool retval = true;
> +
> +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> +    if (dp_refcnt) {
> +        dp_refcnt->refcnt--;
> +
> +        if (!dp_refcnt->refcnt) {
> +            hmap_remove(dp_refcnts_map, &dp_refcnt->key_node);
> +            free(dp_refcnt);
> +        } else {
> +            retval = false;

We can just 'return false;' here.

> +        }
> +    }
> +
> +    return retval;

And 'return true;' here.

> +}
> +
> +static void
> +ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *lflow)
> +{
> +    struct dp_refcnt *dp_refcnt;
> +
> +    HMAP_FOR_EACH_POP (dp_refcnt, key_node, &lflow->dp_refcnts_map) {
> +        free(dp_refcnt);
> +    }
> +
> +    hmap_destroy(&lflow->dp_refcnts_map);
> +}
> +
> +static struct lflow_ref_node *
> +lflow_ref_node_find(struct hmap *lflow_ref_nodes, struct ovn_lflow *lflow,
> +                    uint32_t lflow_hash)
> +{
> +    struct lflow_ref_node *lrn;
> +    HMAP_FOR_EACH_WITH_HASH (lrn, ref_node, lflow_hash, lflow_ref_nodes) {
> +        if (lrn->lflow == lflow) {
> +            return lrn;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static void
> +lflow_ref_node_destroy(struct lflow_ref_node *lrn,
> +                       struct hmap *lflow_ref_nodes)
> +{
> +    if (lflow_ref_nodes) {
> +        hmap_remove(lflow_ref_nodes, &lrn->ref_node);
> +    }
> +    ovs_list_remove(&lrn->lflow_list_node);
> +    ovs_list_remove(&lrn->ref_list_node);
> +    free(lrn);
> +}
> diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
> new file mode 100644
> index 0000000000..5a0cc28965
> --- /dev/null
> +++ b/northd/lflow-mgr.h
> @@ -0,0 +1,186 @@
> +/*
> + * Copyright (c) 2023, Red Hat, Inc.

2024

> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +#ifndef LFLOW_MGR_H
> +#define LFLOW_MGR_H 1
> +
> +#include "include/openvswitch/hmap.h"
> +#include "include/openvswitch/uuid.h"
> +
> +#include "northd.h"
> +
> +struct ovsdb_idl_txn;
> +struct ovn_datapath;
> +struct ovsdb_idl_row;
> +
> +/* The lflow table, which stores the logical flows. */
> +struct lflow_table;
> +struct lflow_table *lflow_table_alloc(void);
> +void lflow_table_init(struct lflow_table *);
> +void lflow_table_clear(struct lflow_table *);
> +void lflow_table_destroy(struct lflow_table *);
> +void lflow_table_expand(struct lflow_table *);
> +void lflow_table_set_size(struct lflow_table *, size_t);
> +void lflow_table_sync_to_sb(struct lflow_table *,
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            const struct ovn_datapaths *ls_datapaths,
> +                            const struct ovn_datapaths *lr_datapaths,
> +                            bool ovn_internal_version_changed,
> +                            const struct sbrec_logical_flow_table *,
> +                            const struct sbrec_logical_dp_group_table *);
> +
> +void lflow_hash_lock_init(void);
> +void lflow_hash_lock_destroy(void);
> +
> +/* A 'struct lflow_ref' tracks the logical flows generated for a given
> + * resource (such as a logical port or datapath). */
> +struct lflow_ref;
> +
> +struct lflow_ref *lflow_ref_create(void);
> +void lflow_ref_destroy(struct lflow_ref *);
> +void lflow_ref_clear(struct lflow_ref *lflow_ref);
> +void lflow_ref_unlink_lflows(struct lflow_ref *);
> +bool lflow_ref_resync_flows(struct lflow_ref *,
> +                            struct lflow_table *lflow_table,
> +                            struct ovsdb_idl_txn *ovnsb_txn,
> +                            const struct ovn_datapaths *ls_datapaths,
> +                            const struct ovn_datapaths *lr_datapaths,
> +                            bool ovn_internal_version_changed,
> +                            const struct sbrec_logical_flow_table *,
> +                            const struct sbrec_logical_dp_group_table *);
> +bool lflow_ref_sync_lflows(struct lflow_ref *,
> +                           struct lflow_table *lflow_table,
> +                           struct ovsdb_idl_txn *ovnsb_txn,
> +                           const struct ovn_datapaths *ls_datapaths,
> +                           const struct ovn_datapaths *lr_datapaths,
> +                           bool ovn_internal_version_changed,
> +                           const struct sbrec_logical_flow_table *,
> +                           const struct sbrec_logical_dp_group_table *);
> +
> +
> +void lflow_table_add_lflow(struct lflow_table *, const struct ovn_datapath *,
> +                           const unsigned long *dp_bitmap,
> +                           size_t dp_bitmap_len, enum ovn_stage stage,
> +                           uint16_t priority, const char *match,
> +                           const char *actions, const char *io_port,
> +                           const char *ctrl_meter,
> +                           const struct ovsdb_idl_row *stage_hint,
> +                           const char *where, struct lflow_ref *);
> +void lflow_table_add_lflow_default_drop(struct lflow_table *,
> +                                        const struct ovn_datapath *,
> +                                        enum ovn_stage stage,
> +                                        const char *where,
> +                                        struct lflow_ref *);
> +
> +/* Adds a row with the specified contents to the Logical_Flow table. */
> +#define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> +                                  STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                            MATCH, ACTIONS, IN_OUT_PORT, \
> +                                            CTRL_METER, STAGE_HINT, LFLOW_REF)\
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add_with_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                ACTIONS, STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                          MATCH, ACTIONS, STAGE_HINT, \
> +                                          LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add_with_dp_group(LFLOW_TABLE, DP_BITMAP, DP_BITMAP_LEN, \
> +                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> +                                    STAGE_HINT) \
> +    lflow_table_add_lflow(LFLOW_TABLE, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> +                          PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_default_drop(LFLOW_TABLE, OD, STAGE)                    \
> +    lflow_table_add_lflow_default_drop(LFLOW_TABLE, OD, STAGE, \
> +                                       OVS_SOURCE_LOCATOR, NULL)
> +
> +
> +/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> + * the IN_OUT_PORT argument, which names the lport that appears in the
> + * MATCH.  This helps ovn-controller bypass lflow parsing when the lport is
> + * not local to the chassis.  The criteria for the lport passed via this
> + * argument:
> + *
> + * - For the ingress pipeline, the lport that is used to match "inport".
> + * - For the egress pipeline, the lport that is used to match "outport".
> + *
> + * For now, only LS pipelines should use this macro.  */
> +#define ovn_lflow_add_with_lport_and_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> +                                          MATCH, ACTIONS, IN_OUT_PORT, \
> +                                          STAGE_HINT, LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, IN_OUT_PORT, NULL, STAGE_HINT, \
> +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> +
> +#define ovn_lflow_add(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, NULL)
> +
> +#define ovn_lflow_add_with_lflow_ref(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                                     ACTIONS, LFLOW_REF) \
> +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, \
> +                          LFLOW_REF)
> +
> +#define ovn_lflow_metered(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> +                          CTRL_METER) \
> +    ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> +                              ACTIONS, NULL, CTRL_METER, NULL)
> +
> +struct sbrec_logical_dp_group;
> +
> +struct ovn_dp_group {
> +    unsigned long *bitmap;
> +    const struct sbrec_logical_dp_group *dp_group;
> +    struct uuid dpg_uuid;
> +    struct hmap_node node;
> +    size_t refcnt;
> +};
> +
> +static inline void
> +ovn_dp_groups_init(struct hmap *dp_groups)
> +{
> +    hmap_init(dp_groups);
> +}
> +
> +void ovn_dp_groups_clear(struct hmap *dp_groups);
> +void ovn_dp_groups_destroy(struct hmap *dp_groups);
> +struct ovn_dp_group *ovn_dp_group_get_or_create(
> +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> +    struct sbrec_logical_dp_group *sb_group,
> +    size_t desired_n, const unsigned long *desired_bitmap,
> +    size_t bitmap_len, bool is_switch,
> +    const struct ovn_datapaths *ls_datapaths,
> +    const struct ovn_datapaths *lr_datapaths);
> +
> +#endif /* LFLOW_MGR_H */
> \ No newline at end of file
> diff --git a/northd/northd.c b/northd/northd.c
> index 68f2b0bab4..f41df83c40 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -41,6 +41,7 @@
>  #include "lib/ovn-sb-idl.h"
>  #include "lib/ovn-util.h"
>  #include "lib/lb.h"
> +#include "lflow-mgr.h"
>  #include "memory.h"
>  #include "northd.h"
>  #include "en-lb-data.h"
> @@ -68,7 +69,7 @@
>  VLOG_DEFINE_THIS_MODULE(northd);
>  
>  static bool controller_event_en;
> -static bool lflow_hash_lock_initialized = false;
> +
>  
>  static bool check_lsp_is_up;
>  
> @@ -97,116 +98,6 @@ static bool default_acl_drop;
>  
>  #define MAX_OVN_TAGS 4096
>  
> -/* Pipeline stages. */
> -
> -/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> -enum ovn_datapath_type {
> -    DP_SWITCH,                  /* OVN logical switch. */
> -    DP_ROUTER                   /* OVN logical router. */
> -};
> -
> -/* Returns an "enum ovn_stage" built from the arguments.
> - *
> - * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> - * functions can't be used in enums or switch cases.) */
> -#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> -    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> -
> -/* A stage within an OVN logical switch or router.
> - *
> - * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> - * or router, whether the stage is part of the ingress or egress pipeline, and
> - * the table within that pipeline.  The first three components are combined to
> - * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> - * S_ROUTER_OUT_DELIVERY. */
> -enum ovn_stage {
> -#define PIPELINE_STAGES                                                   \
> -    /* Logical switch ingress stages. */                                  \
> -    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> -    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> -    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
> -    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> -    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> -                   "ls_in_acl_after_lb_eval")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> -                   "ls_in_acl_after_lb_action")  \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> -    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> -                                                                          \
> -    /* Logical switch egress stages. */                                   \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> -    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> -    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> -    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> -                                                                      \
> -    /* Logical router ingress stages. */                              \
> -    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> -    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> -    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> -    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> -    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> -    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> -                                                                      \
> -    /* Logical router egress stages. */                               \
> -    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> -                   "lr_out_chk_dnat_local")                                  \
> -    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> -    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> -    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> -    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> -    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> -    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> -
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> -    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> -        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -};
>  
>  /* Due to various hard-coded priorities need to implement ACLs, the
>   * northbound database supports a smaller range of ACL priorities than
> @@ -391,51 +282,9 @@ enum ovn_stage {
>  #define ROUTE_PRIO_OFFSET_STATIC 1
>  #define ROUTE_PRIO_OFFSET_CONNECTED 2
>  
> -/* Returns an "enum ovn_stage" built from the arguments. */
> -static enum ovn_stage
> -ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> -                uint8_t table)
> -{
> -    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> -}
> -
> -/* Returns the pipeline to which 'stage' belongs. */
> -static enum ovn_pipeline
> -ovn_stage_get_pipeline(enum ovn_stage stage)
> -{
> -    return (stage >> 8) & 1;
> -}
> -
> -/* Returns the pipeline name to which 'stage' belongs. */
> -static const char *
> -ovn_stage_get_pipeline_name(enum ovn_stage stage)
> -{
> -    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> -}
> -
> -/* Returns the table to which 'stage' belongs. */
> -static uint8_t
> -ovn_stage_get_table(enum ovn_stage stage)
> -{
> -    return stage & 0xff;
> -}
> -
> -/* Returns a string name for 'stage'. */
> -static const char *
> -ovn_stage_to_str(enum ovn_stage stage)
> -{
> -    switch (stage) {
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> -        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -        default: return "<unknown>";
> -    }
> -}
> -
>  /* Returns the type of the datapath to which a flow with the given 'stage' may
>   * be added. */
> -static enum ovn_datapath_type
> +enum ovn_datapath_type
>  ovn_stage_to_datapath_type(enum ovn_stage stage)
>  {
>      switch (stage) {
> @@ -680,13 +529,6 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
>      }
>  }
>  
> -/* Returns 'od''s datapath type. */
> -static enum ovn_datapath_type
> -ovn_datapath_get_type(const struct ovn_datapath *od)
> -{
> -    return od->nbs ? DP_SWITCH : DP_ROUTER;
> -}
> -
>  static struct ovn_datapath *
>  ovn_datapath_find_(const struct hmap *datapaths,
>                     const struct uuid *uuid)
> @@ -722,13 +564,7 @@ ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key)
>      return NULL;
>  }
>  
> -static bool
> -ovn_datapath_is_stale(const struct ovn_datapath *od)
> -{
> -    return !od->nbr && !od->nbs;
> -}
> -
> -static struct ovn_datapath *
> +struct ovn_datapath *
>  ovn_datapath_from_sbrec(const struct hmap *ls_datapaths,
>                          const struct hmap *lr_datapaths,
>                          const struct sbrec_datapath_binding *sb)
> @@ -1297,19 +1133,6 @@ struct ovn_port_routable_addresses {
>      size_t n_addrs;
>  };
>  
> -/* A node that maintains link between an object (such as an ovn_port) and
> - * a lflow. */
> -struct lflow_ref_node {
> -    /* This list follows different lflows referenced by the same object. List
> -     * head is, for example, ovn_port->lflows.  */
> -    struct ovs_list lflow_list_node;
> -    /* This list follows different objects that reference the same lflow. List
> -     * head is ovn_lflow->referenced_by. */
> -    struct ovs_list ref_list_node;
> -    /* The lflow. */
> -    struct ovn_lflow *lflow;
> -};
> -
>  static bool lsp_can_be_inc_processed(const struct nbrec_logical_switch_port *);
>  
>  static bool
> @@ -1389,6 +1212,8 @@ ovn_port_set_nb(struct ovn_port *op,
>      init_mcast_port_info(&op->mcast_info, op->nbsp, op->nbrp);
>  }
>  
> +static bool lsp_is_router(const struct nbrec_logical_switch_port *nbsp);
> +
>  static struct ovn_port *
>  ovn_port_create(struct hmap *ports, const char *key,
>                  const struct nbrec_logical_switch_port *nbsp,
> @@ -1407,12 +1232,14 @@ ovn_port_create(struct hmap *ports, const char *key,
>      op->l3dgw_port = op->cr_port = NULL;
>      hmap_insert(ports, &op->key_node, hash_string(op->key, 0));
>  
> -    ovs_list_init(&op->lflows);
> +    op->lflow_ref = lflow_ref_create();
> +    op->stateful_lflow_ref = lflow_ref_create();
> +
>      return op;
>  }
>  
>  static void
> -ovn_port_destroy_orphan(struct ovn_port *port)
> +ovn_port_cleanup(struct ovn_port *port)
>  {
>      if (port->tunnel_key) {
>          ovs_assert(port->od);
> @@ -1422,6 +1249,8 @@ ovn_port_destroy_orphan(struct ovn_port *port)
>          destroy_lport_addresses(&port->lsp_addrs[i]);
>      }
>      free(port->lsp_addrs);
> +    port->n_lsp_addrs = 0;
> +    port->lsp_addrs = NULL;
>  
>      if (port->peer) {
>          port->peer->peer = NULL;
> @@ -1431,18 +1260,22 @@ ovn_port_destroy_orphan(struct ovn_port *port)
>          destroy_lport_addresses(&port->ps_addrs[i]);
>      }
>      free(port->ps_addrs);
> +    port->ps_addrs = NULL;
> +    port->n_ps_addrs = 0;
>  
>      destroy_lport_addresses(&port->lrp_networks);
>      destroy_lport_addresses(&port->proxy_arp_addrs);
> +}
> +
> +static void
> +ovn_port_destroy_orphan(struct ovn_port *port)
> +{
> +    ovn_port_cleanup(port);
>      free(port->json_key);
>      free(port->key);
> +    lflow_ref_destroy(port->lflow_ref);
> +    lflow_ref_destroy(port->stateful_lflow_ref);
>  
> -    struct lflow_ref_node *l;
> -    LIST_FOR_EACH_SAFE (l, lflow_list_node, &port->lflows) {
> -        ovs_list_remove(&l->lflow_list_node);
> -        ovs_list_remove(&l->ref_list_node);
> -        free(l);
> -    }
>      free(port);
>  }
>  
> @@ -3911,124 +3744,6 @@ build_lb_port_related_data(
>      build_lswitch_lbs_from_lrouter(lr_datapaths, lb_dps_map, lb_group_dps_map);
>  }
>  
> -
> -struct ovn_dp_group {
> -    unsigned long *bitmap;
> -    struct sbrec_logical_dp_group *dp_group;
> -    struct hmap_node node;
> -};
> -
> -static struct ovn_dp_group *
> -ovn_dp_group_find(const struct hmap *dp_groups,
> -                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> -                  uint32_t hash)
> -{
> -    struct ovn_dp_group *dpg;
> -
> -    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> -        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> -            return dpg;
> -        }
> -    }
> -    return NULL;
> -}
> -
> -static struct sbrec_logical_dp_group *
> -ovn_sb_insert_or_update_logical_dp_group(
> -                            struct ovsdb_idl_txn *ovnsb_txn,
> -                            struct sbrec_logical_dp_group *dp_group,
> -                            const unsigned long *dpg_bitmap,
> -                            const struct ovn_datapaths *datapaths)
> -{
> -    const struct sbrec_datapath_binding **sb;
> -    size_t n = 0, index;
> -
> -    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> -    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> -        sb[n++] = datapaths->array[index]->sb;
> -    }
> -    if (!dp_group) {
> -        dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
> -    }
> -    sbrec_logical_dp_group_set_datapaths(
> -        dp_group, (struct sbrec_datapath_binding **) sb, n);
> -    free(sb);
> -
> -    return dp_group;
> -}
> -
> -/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> - * doesn't exist, creates a new one and adds it to 'dp_groups'.
> - * If 'sb_group' is provided, function will try to re-use this group by
> - * either taking it directly, or by modifying, if it's not already in use. */
> -static struct ovn_dp_group *
> -ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> -                           struct hmap *dp_groups,
> -                           struct sbrec_logical_dp_group *sb_group,
> -                           size_t desired_n,
> -                           const unsigned long *desired_bitmap,
> -                           size_t bitmap_len,
> -                           bool is_switch,
> -                           const struct ovn_datapaths *ls_datapaths,
> -                           const struct ovn_datapaths *lr_datapaths)
> -{
> -    struct ovn_dp_group *dpg;
> -    uint32_t hash;
> -
> -    hash = hash_int(desired_n, 0);
> -    dpg = ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> -    if (dpg) {
> -        return dpg;
> -    }
> -
> -    bool update_dp_group = false, can_modify = false;
> -    unsigned long *dpg_bitmap;
> -    size_t i, n = 0;
> -
> -    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> -    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> -        struct ovn_datapath *datapath_od;
> -
> -        datapath_od = ovn_datapath_from_sbrec(
> -                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> -                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> -                        sb_group->datapaths[i]);
> -        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> -            break;
> -        }
> -        bitmap_set1(dpg_bitmap, datapath_od->index);
> -        n++;
> -    }
> -    if (!sb_group || i != sb_group->n_datapaths) {
> -        /* No group or stale group.  Not going to be used. */
> -        update_dp_group = true;
> -        can_modify = true;
> -    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> -        /* The group in Sb is different. */
> -        update_dp_group = true;
> -        /* We can modify existing group if it's not already in use. */
> -        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> -                                        bitmap_len, hash_int(n, 0));
> -    }
> -
> -    bitmap_free(dpg_bitmap);
> -
> -    dpg = xzalloc(sizeof *dpg);
> -    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> -    if (!update_dp_group) {
> -        dpg->dp_group = sb_group;
> -    } else {
> -        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> -                            ovnsb_txn,
> -                            can_modify ? sb_group : NULL,
> -                            desired_bitmap,
> -                            is_switch ? ls_datapaths : lr_datapaths);
> -    }
> -    hmap_insert(dp_groups, &dpg->node, hash);
> -
> -    return dpg;
> -}
> -
>  struct sb_lb {
>      struct hmap_node hmap_node;
>  
> @@ -4846,28 +4561,20 @@ ovn_port_find_in_datapath(struct ovn_datapath *od,
>      return NULL;
>  }
>  
> -static struct ovn_port *
> -ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> -               const char *key, const struct nbrec_logical_switch_port *nbsp,
> -               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> -               struct ovs_list *lflows,
> -               const struct sbrec_mirror_table *sbrec_mirror_table,
> -               const struct sbrec_chassis_table *sbrec_chassis_table,
> -               struct ovsdb_idl_index *sbrec_chassis_by_name,
> -               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +static bool
> +ls_port_init(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> +             struct hmap *ls_ports, struct ovn_datapath *od,
> +             const struct sbrec_port_binding *sb,
> +             const struct sbrec_mirror_table *sbrec_mirror_table,
> +             const struct sbrec_chassis_table *sbrec_chassis_table,
> +             struct ovsdb_idl_index *sbrec_chassis_by_name,
> +             struct ovsdb_idl_index *sbrec_chassis_by_hostname)
>  {
> -    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> -                                          NULL);
> -    parse_lsp_addrs(op);
>      op->od = od;
> -    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> -    if (lflows) {
> -        ovs_list_splice(&op->lflows, lflows->next, lflows);
> -    }
> -
> +    parse_lsp_addrs(op);
>      /* Assign explicitly requested tunnel ids first. */
>      if (!ovn_port_assign_requested_tnl_id(sbrec_chassis_table, op)) {
> -        return NULL;
> +        return false;
>      }
>      if (sb) {
>          op->sb = sb;
> @@ -4884,14 +4591,57 @@ ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
>      }
>      /* Assign new tunnel ids where needed. */
>      if (!ovn_port_allocate_key(sbrec_chassis_table, ls_ports, op)) {
> -        return NULL;
> +        return false;
>      }
>      ovn_port_update_sbrec(ovnsb_txn, sbrec_chassis_by_name,
>                            sbrec_chassis_by_hostname, NULL, sbrec_mirror_table,
>                            op, NULL, NULL);
> +    return true;
> +}
> +
> +static struct ovn_port *
> +ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> +               const char *key, const struct nbrec_logical_switch_port *nbsp,
> +               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> +               const struct sbrec_mirror_table *sbrec_mirror_table,
> +               const struct sbrec_chassis_table *sbrec_chassis_table,
> +               struct ovsdb_idl_index *sbrec_chassis_by_name,
> +               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +{
> +    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> +                                          NULL);
> +    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> +    if (!ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> +                      sbrec_mirror_table, sbrec_chassis_table,
> +                      sbrec_chassis_by_name, sbrec_chassis_by_hostname)) {
> +        ovn_port_destroy(ls_ports, op);
> +        return NULL;
> +    }
> +
>      return op;
>  }
>  
> +static bool
> +ls_port_reinit(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> +                struct hmap *ls_ports,
> +                const struct nbrec_logical_switch_port *nbsp,
> +                const struct nbrec_logical_router_port *nbrp,
> +                struct ovn_datapath *od,
> +                const struct sbrec_port_binding *sb,
> +                const struct sbrec_mirror_table *sbrec_mirror_table,
> +                const struct sbrec_chassis_table *sbrec_chassis_table,
> +                struct ovsdb_idl_index *sbrec_chassis_by_name,
> +                struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> +{
> +    ovn_port_cleanup(op);
> +    op->sb = sb;
> +    ovn_port_set_nb(op, nbsp, nbrp);
> +    op->l3dgw_port = op->cr_port = NULL;
> +    return ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> +                        sbrec_mirror_table, sbrec_chassis_table,
> +                        sbrec_chassis_by_name, sbrec_chassis_by_hostname);
> +}
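One nice property of the new ls_port_reinit() over the old destroy-and-recreate dance in ls_handle_lsp_changes(): the 'struct ovn_port' allocation stays stable, so pointers other structures hold to it (including its lflow_ref) remain valid. A minimal standalone illustration of the reuse-in-place pattern, with made-up names (port_sketch, port_reinit), not the ovn-northd symbols:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reuse-in-place sketch: reinit clears derived state but keeps the same
 * allocation, so an outside reference to the object stays valid.  The
 * destroy+create alternative would invalidate 'held'. */
struct port_sketch {
    char name[16];
    int tunnel_key;      /* derived state, recomputed on (re)init */
};

static void
port_init(struct port_sketch *p, const char *name, int key)
{
    snprintf(p->name, sizeof p->name, "%s", name);
    p->tunnel_key = key;
}

static void
port_reinit(struct port_sketch *p, const char *name, int key)
{
    memset(p, 0, sizeof *p);     /* stand-in for ovn_port_cleanup() */
    port_init(p, name, key);
}
```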
> +
>  /* Returns true if the logical switch has changes which can be
>   * incrementally handled.
>   * Presently supports i-p for the below changes:
> @@ -5031,7 +4781,7 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
>                  goto fail;
>              }
>              op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> -                                new_nbsp->name, new_nbsp, od, NULL, NULL,
> +                                new_nbsp->name, new_nbsp, od, NULL,
>                                  ni->sbrec_mirror_table,
>                                  ni->sbrec_chassis_table,
>                                  ni->sbrec_chassis_by_name,
> @@ -5062,17 +4812,12 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
>                  op->visited = true;
>                  continue;
>              }
> -            struct ovs_list lflows = OVS_LIST_INITIALIZER(&lflows);
> -            ovs_list_splice(&lflows, op->lflows.next, &op->lflows);
> -            ovn_port_destroy(&nd->ls_ports, op);
> -            op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> -                                new_nbsp->name, new_nbsp, od, sb, &lflows,
> -                                ni->sbrec_mirror_table,
> +            if (!ls_port_reinit(op, ovnsb_idl_txn, &nd->ls_ports,
> +                                new_nbsp, NULL,
> +                                od, sb, ni->sbrec_mirror_table,
>                                  ni->sbrec_chassis_table,
>                                  ni->sbrec_chassis_by_name,
> -                                ni->sbrec_chassis_by_hostname);
> -            ovs_assert(ovs_list_is_empty(&lflows));
> -            if (!op) {
> +                                ni->sbrec_chassis_by_hostname)) {
>                  goto fail;
>              }
>              add_op_to_northd_tracked_ports(&trk_lsps->updated, op);
> @@ -6017,170 +5762,7 @@ ovn_igmp_group_destroy(struct hmap *igmp_groups,
>   * function of most of the northbound database.
>   */
>  
> -struct ovn_lflow {
> -    struct hmap_node hmap_node;
> -    struct ovs_list list_node;   /* For temporary list of lflows. Don't remove
> -                                    at destroy. */
> -
> -    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> -    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> -    enum ovn_stage stage;
> -    uint16_t priority;
> -    char *match;
> -    char *actions;
> -    char *io_port;
> -    char *stage_hint;
> -    char *ctrl_meter;
> -    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> -                                  * 'dpg_bitmap'. */
> -    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> -
> -    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> -    const char *where;
> -
> -    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> -};
> -
> -static void ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow);
> -static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> -                                        const struct ovn_datapath *od,
> -                                        enum ovn_stage stage,
> -                                        uint16_t priority, const char *match,
> -                                        const char *actions,
> -                                        const char *ctrl_meter, uint32_t hash);
> -
> -static char *
> -ovn_lflow_hint(const struct ovsdb_idl_row *row)
> -{
> -    if (!row) {
> -        return NULL;
> -    }
> -    return xasprintf("%08x", row->uuid.parts[0]);
> -}
> -
> -static bool
> -ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
> -                enum ovn_stage stage, uint16_t priority, const char *match,
> -                const char *actions, const char *ctrl_meter)
> -{
> -    return (a->od == od
> -            && a->stage == stage
> -            && a->priority == priority
> -            && !strcmp(a->match, match)
> -            && !strcmp(a->actions, actions)
> -            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> -}
> -
> -enum {
> -    STATE_NULL,               /* parallelization is off */
> -    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> -    STATE_USE_PARALLELIZATION /* parallelization is on */
> -};
> -static int parallelization_state = STATE_NULL;
> -
> -static void
> -ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> -               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> -               char *match, char *actions, char *io_port, char *ctrl_meter,
> -               char *stage_hint, const char *where)
> -{
> -    ovs_list_init(&lflow->list_node);
> -    ovs_list_init(&lflow->referenced_by);
> -    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> -    lflow->od = od;
> -    lflow->stage = stage;
> -    lflow->priority = priority;
> -    lflow->match = match;
> -    lflow->actions = actions;
> -    lflow->io_port = io_port;
> -    lflow->stage_hint = stage_hint;
> -    lflow->ctrl_meter = ctrl_meter;
> -    lflow->dpg = NULL;
> -    lflow->where = where;
> -    lflow->sb_uuid = UUID_ZERO;
> -}
> -
> -/* The lflow_hash_lock is a mutex array that protects updates to the shared
> - * lflow table across threads when parallel lflow build and dp-group are both
> - * enabled. To avoid high contention between threads, a big array of mutexes
> - * are used instead of just one. This is possible because when parallel build
> - * is used we only use hmap_insert_fast() to update the hmap, which would not
> - * touch the bucket array but only the list in a single bucket. We only need to
> - * make sure that when adding lflows to the same hash bucket, the same lock is
> - * used, so that no two threads can add to the bucket at the same time.  It is
> - * ok that the same lock is used to protect multiple buckets, so a fixed sized
> - * mutex array is used instead of 1-1 mapping to the hash buckets. This
> - * simplies the implementation while effectively reduces lock contention
> - * because the chance that different threads contending the same lock amongst
> - * the big number of locks is very low. */
> -#define LFLOW_HASH_LOCK_MASK 0xFFFF
> -static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
> -
> -static void
> -lflow_hash_lock_init(void)
> -{
> -    if (!lflow_hash_lock_initialized) {
> -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> -            ovs_mutex_init(&lflow_hash_locks[i]);
> -        }
> -        lflow_hash_lock_initialized = true;
> -    }
> -}
> -
> -static void
> -lflow_hash_lock_destroy(void)
> -{
> -    if (lflow_hash_lock_initialized) {
> -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> -            ovs_mutex_destroy(&lflow_hash_locks[i]);
> -        }
> -    }
> -    lflow_hash_lock_initialized = false;
> -}
> -
> -/* Full thread safety analysis is not possible with hash locks, because
> - * they are taken conditionally based on the 'parallelization_state' and
> - * a flow hash.  Also, the order in which two hash locks are taken is not
> - * predictable during the static analysis.
> - *
> - * Since the order of taking two locks depends on a random hash, to avoid
> - * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> - * of hash locks is similar to a single mutex.
> - *
> - * Using a fake mutex to partially simulate thread safety restrictions, as
> - * if it were actually a single mutex.
> - *
> - * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> - * nature of the lock.  Unlike other attributes, it applies to the
> - * implementation and not to the interface.  So, we can define a function
> - * that acquires the lock without analysing the way it does that.
> - */
> -extern struct ovs_mutex fake_hash_mutex;
> -
> -static struct ovs_mutex *
> -lflow_hash_lock(const struct hmap *lflow_map, uint32_t hash)
> -    OVS_ACQUIRES(fake_hash_mutex)
> -    OVS_NO_THREAD_SAFETY_ANALYSIS
> -{
> -    struct ovs_mutex *hash_lock = NULL;
> -
> -    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> -        hash_lock =
> -            &lflow_hash_locks[hash & lflow_map->mask & LFLOW_HASH_LOCK_MASK];
> -        ovs_mutex_lock(hash_lock);
> -    }
> -    return hash_lock;
> -}
> -
> -static void
> -lflow_hash_unlock(struct ovs_mutex *hash_lock)
> -    OVS_RELEASES(fake_hash_mutex)
> -    OVS_NO_THREAD_SAFETY_ANALYSIS
> -{
> -    if (hash_lock) {
> -        ovs_mutex_unlock(hash_lock);
> -    }
> -}
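For readers following the move to lflow-mgr.c: the comment removed above explains why a fixed pool of mutexes is safe here. A hypothetical sketch of just the bucket-to-lock mapping (LOCK_POOL_MASK and lock_index_for are illustrative names mirroring LFLOW_HASH_LOCK_MASK and lflow_hash_lock): the lock index is the flow hash masked by both the hmap's bucket mask and the pool mask, so two threads inserting into the same bucket always pick the same lock, even though many buckets share one lock.

```c
#include <assert.h>
#include <stdint.h>

/* Fixed pool of 2^16 locks shared by all hash buckets.  Because the
 * lock index is derived from (hash & bucket_mask) & pool_mask, it is a
 * pure function of the bucket index: same bucket => same lock. */
#define LOCK_POOL_MASK 0xFFFF

static inline uint32_t
lock_index_for(uint32_t flow_hash, uint32_t hmap_bucket_mask)
{
    return flow_hash & hmap_bucket_mask & LOCK_POOL_MASK;
}
```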
> +int parallelization_state = STATE_NULL;
>  
>  
>  /* This thread-local var is used for parallel lflow building when dp-groups is
> @@ -6193,240 +5775,7 @@ lflow_hash_unlock(struct ovs_mutex *hash_lock)
>   * threads are collected to fix the lflow hmap's size (by the function
>   * fix_flow_map_size()).
>   * */
> -static thread_local size_t thread_lflow_counter = 0;
> -
> -/* Adds an OVN datapath to a datapath group of existing logical flow.
> - * Version to use when hash bucket locking is NOT required or the corresponding
> - * hash lock is already taken. */
> -static void
> -ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> -                                const struct ovn_datapath *od,
> -                                const unsigned long *dp_bitmap,
> -                                size_t bitmap_len)
> -    OVS_REQUIRES(fake_hash_mutex)
> -{
> -    if (od) {
> -        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> -    }
> -    if (dp_bitmap) {
> -        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> -    }
> -}
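The function quoted above is the heart of the datapath-group scheme: a flow is not duplicated per datapath, it carries one bitmap with a bit set for each referencing datapath index. A self-contained sketch of the same two operations, with illustrative names (flow_sketch, dp_group_add); a 64-datapath world fits in one word here, whereas the real code uses OVS's variable-length bitmaps:

```c
#include <assert.h>

struct flow_sketch {
    unsigned long long dpg_bitmap;   /* bit i == datapath with index i */
};

static void
dp_group_add(struct flow_sketch *f, int dp_index,
             unsigned long long extra_bitmap)
{
    if (dp_index >= 0) {             /* bitmap_set1(bitmap, od->index) */
        f->dpg_bitmap |= 1ULL << dp_index;
    }
    f->dpg_bitmap |= extra_bitmap;   /* bitmap_or(bitmap, dp_bitmap, len) */
}
```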
> -
> -/* This global variable collects the lflows generated by do_ovn_lflow_add().
> - * start_collecting_lflows() will enable the lflow collection and the calls to
> - * do_ovn_lflow_add (or the macros ovn_lflow_add_...) will add generated lflows
> - * to the list. end_collecting_lflows() will disable it. */
> -static thread_local struct ovs_list collected_lflows;
> -static thread_local bool collecting_lflows = false;
> -
> -static void
> -start_collecting_lflows(void)
> -{
> -    ovs_assert(!collecting_lflows);
> -    ovs_list_init(&collected_lflows);
> -    collecting_lflows = true;
> -}
> -
> -static void
> -end_collecting_lflows(void)
> -{
> -    ovs_assert(collecting_lflows);
> -    collecting_lflows = false;
> -}
> -
> -/* Adds a row with the specified contents to the Logical_Flow table.
> - * Version to use when hash bucket locking is NOT required. */
> -static void
> -do_ovn_lflow_add(struct hmap *lflow_map, const struct ovn_datapath *od,
> -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> -                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> -                 const char *match, const char *actions, const char *io_port,
> -                 const struct ovsdb_idl_row *stage_hint,
> -                 const char *where, const char *ctrl_meter)
> -    OVS_REQUIRES(fake_hash_mutex)
> -{
> -
> -    struct ovn_lflow *old_lflow;
> -    struct ovn_lflow *lflow;
> -
> -    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> -    ovs_assert(bitmap_len);
> -
> -    if (collecting_lflows) {
> -        ovs_assert(od);
> -        ovs_assert(!dp_bitmap);
> -    } else {
> -        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
> -                                   actions, ctrl_meter, hash);
> -        if (old_lflow) {
> -            ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> -                                            bitmap_len);
> -            return;
> -        }
> -    }
> -
> -    lflow = xmalloc(sizeof *lflow);
> -    /* While adding new logical flows we're not setting single datapath, but
> -     * collecting a group.  'od' will be updated later for all flows with only
> -     * one datapath in a group, so it could be hashed correctly. */
> -    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> -                   xstrdup(match), xstrdup(actions),
> -                   io_port ? xstrdup(io_port) : NULL,
> -                   nullable_xstrdup(ctrl_meter),
> -                   ovn_lflow_hint(stage_hint), where);
> -
> -    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> -
> -    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> -        hmap_insert(lflow_map, &lflow->hmap_node, hash);
> -    } else {
> -        hmap_insert_fast(lflow_map, &lflow->hmap_node, hash);
> -        thread_lflow_counter++;
> -    }
> -
> -    if (collecting_lflows) {
> -        ovs_list_insert(&collected_lflows, &lflow->list_node);
> -    }
> -}
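The dedup step in do_ovn_lflow_add() above (now moving into lflow-mgr.c) is worth spelling out, since it is what the "hashed without the logical datapaths" change in the commit message enables: before allocating a new flow, look for an existing one with identical stage/priority/match/actions, with the datapath deliberately excluded from the comparison, and if found only OR the new datapath into its group. A sketch under those assumptions, with a linear table standing in for the real hmap and made-up names:

```c
#include <assert.h>
#include <string.h>

struct lf {
    int stage, priority;
    const char *match, *actions;
    unsigned long long dpg_bitmap;
};

static struct lf *
lflow_add(struct lf *tbl, int *n, int stage, int priority,
          const char *match, const char *actions, int dp_index)
{
    for (int i = 0; i < *n; i++) {
        if (tbl[i].stage == stage && tbl[i].priority == priority
            && !strcmp(tbl[i].match, match)
            && !strcmp(tbl[i].actions, actions)) {
            tbl[i].dpg_bitmap |= 1ULL << dp_index;  /* reuse, grow group */
            return &tbl[i];
        }
    }
    tbl[*n] = (struct lf) { stage, priority, match, actions,
                            1ULL << dp_index };     /* genuinely new flow */
    return &tbl[(*n)++];
}
```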
> -
> -/* Adds a row with the specified contents to the Logical_Flow table. */
> -static void
> -ovn_lflow_add_at(struct hmap *lflow_map, const struct ovn_datapath *od,
> -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> -                 enum ovn_stage stage, uint16_t priority,
> -                 const char *match, const char *actions, const char *io_port,
> -                 const char *ctrl_meter,
> -                 const struct ovsdb_idl_row *stage_hint, const char *where)
> -    OVS_EXCLUDED(fake_hash_mutex)
> -{
> -    struct ovs_mutex *hash_lock;
> -    uint32_t hash;
> -
> -    ovs_assert(!od ||
> -               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> -
> -    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> -                                 ovn_stage_get_pipeline(stage),
> -                                 priority, match,
> -                                 actions);
> -
> -    hash_lock = lflow_hash_lock(lflow_map, hash);
> -    do_ovn_lflow_add(lflow_map, od, dp_bitmap, dp_bitmap_len, hash, stage,
> -                     priority, match, actions, io_port, stage_hint, where,
> -                     ctrl_meter);
> -    lflow_hash_unlock(hash_lock);
> -}
> -
> -static void
> -__ovn_lflow_add_default_drop(struct hmap *lflow_map,
> -                             struct ovn_datapath *od,
> -                             enum ovn_stage stage,
> -                             const char *where)
> -{
> -        ovn_lflow_add_at(lflow_map, od, NULL, 0, stage, 0, "1",
> -                         debug_drop_action(),
> -                         NULL, NULL, NULL, where );
> -}
> -
> -/* Adds a row with the specified contents to the Logical_Flow table. */
> -#define ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> -                                  STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     IN_OUT_PORT, CTRL_METER, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add_with_hint(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                                ACTIONS, STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     NULL, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add_with_dp_group(LFLOW_MAP, DP_BITMAP, DP_BITMAP_LEN, \
> -                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> -                                    STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> -                     PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> -                     OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE)                    \
> -    __ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE, OVS_SOURCE_LOCATOR)
> -
> -
> -/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> - * the IN_OUT_PORT argument, which tells the lport name that appears in the
> - * MATCH, which helps ovn-controller to bypass lflows parsing when the lport is
> - * not local to the chassis. The criteria of the lport to be added using this
> - * argument:
> - *
> - * - For ingress pipeline, the lport that is used to match "inport".
> - * - For egress pipeline, the lport that is used to match "outport".
> - *
> - * For now, only LS pipelines should use this macro.  */
> -#define ovn_lflow_add_with_lport_and_hint(LFLOW_MAP, OD, STAGE, PRIORITY, \
> -                                          MATCH, ACTIONS, IN_OUT_PORT, \
> -                                          STAGE_HINT) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     IN_OUT_PORT, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_add(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                     NULL, NULL, NULL, OVS_SOURCE_LOCATOR)
> -
> -#define ovn_lflow_metered(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> -                          CTRL_METER) \
> -    ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> -                              ACTIONS, NULL, CTRL_METER, NULL)
> -
> -static struct ovn_lflow *
> -ovn_lflow_find(const struct hmap *lflows, const struct ovn_datapath *od,
> -               enum ovn_stage stage, uint16_t priority,
> -               const char *match, const char *actions, const char *ctrl_meter,
> -               uint32_t hash)
> -{
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> -        if (ovn_lflow_equal(lflow, od, stage, priority, match, actions,
> -                            ctrl_meter)) {
> -            return lflow;
> -        }
> -    }
> -    return NULL;
> -}
> -
> -static void
> -ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
> -{
> -    if (lflow) {
> -        if (lflows) {
> -            hmap_remove(lflows, &lflow->hmap_node);
> -        }
> -        bitmap_free(lflow->dpg_bitmap);
> -        free(lflow->match);
> -        free(lflow->actions);
> -        free(lflow->io_port);
> -        free(lflow->stage_hint);
> -        free(lflow->ctrl_meter);
> -        struct lflow_ref_node *l;
> -        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
> -            ovs_list_remove(&l->lflow_list_node);
> -            ovs_list_remove(&l->ref_list_node);
> -            free(l);
> -        }
> -        free(lflow);
> -    }
> -}
> -
> -static void
> -link_ovn_port_to_lflows(struct ovn_port *op, struct ovs_list *lflows)
> -{
> -    struct ovn_lflow *f;
> -    LIST_FOR_EACH (f, list_node, lflows) {
> -        struct lflow_ref_node *lfrn = xmalloc(sizeof *lfrn);
> -        lfrn->lflow = f;
> -        ovs_list_insert(&op->lflows, &lfrn->lflow_list_node);
> -        ovs_list_insert(&f->referenced_by, &lfrn->ref_list_node);
> -    }
> -}
> +thread_local size_t thread_lflow_counter = 0;
>  
>  static bool
>  build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
> @@ -6604,8 +5953,8 @@ build_dhcpv6_action(struct ovn_port *op, struct in6_addr *offer_ip,
>   * build_lswitch_lflows_admission_control() handles the port security.
>   */
>  static void
> -build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> -                                struct ds *actions, struct ds *match)
> +build_lswitch_port_sec_op(struct ovn_port *op, struct lflow_table *lflows,
> +                          struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
>  
> @@ -6621,13 +5970,13 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>          ovn_lflow_add_with_lport_and_hint(
>              lflows, op->od, S_SWITCH_IN_CHECK_PORT_SEC,
>              100, ds_cstr(match), REGBIT_PORT_SEC_DROP" = 1; next;",
> -            op->key, &op->nbsp->header_);
> +            op->key, &op->nbsp->header_, op->lflow_ref);
>  
>          ds_clear(match);
>          ds_put_format(match, "outport == %s", op->json_key);
>          ovn_lflow_add_with_lport_and_hint(
>              lflows, op->od, S_SWITCH_IN_L2_UNKNOWN, 50, ds_cstr(match),
> -            debug_drop_action(), op->key, &op->nbsp->header_);
> +            debug_drop_action(), op->key, &op->nbsp->header_, op->lflow_ref);
>          return;
>      }
>  
> @@ -6643,14 +5992,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>      } else if (queue_id) {
>          ds_put_cstr(actions,
>                      REGBIT_PORT_SEC_DROP" = check_in_port_sec(); next;");
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>  
>          if (!lsp_is_localnet(op->nbsp) && !op->od->n_localnet_ports) {
>              return;
> @@ -6665,7 +6016,8 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>              ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                                S_SWITCH_OUT_APPLY_PORT_SEC, 100,
>                                                ds_cstr(match), ds_cstr(actions),
> -                                              op->key, &op->nbsp->header_);
> +                                              op->key, &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (op->od->n_localnet_ports) {
>              ds_put_format(match, "outport == %s && inport == %s",
>                            op->od->localnet_ports[0]->json_key,
> @@ -6674,15 +6026,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
>                      S_SWITCH_OUT_APPLY_PORT_SEC, 110,
>                      ds_cstr(match), ds_cstr(actions),
>                      op->od->localnet_ports[0]->key,
> -                    &op->od->localnet_ports[0]->nbsp->header_);
> +                    &op->od->localnet_ports[0]->nbsp->header_,
> +                    op->lflow_ref);
>          }
>      }
>  }
>  
>  static void
>  build_lswitch_learn_fdb_op(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *actions, struct ds *match)
> +    struct ovn_port *op, struct lflow_table *lflows,
> +    struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
>  
> @@ -6699,7 +6052,8 @@ build_lswitch_learn_fdb_op(
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                            S_SWITCH_IN_LOOKUP_FDB, 100,
>                                            ds_cstr(match), ds_cstr(actions),
> -                                          op->key, &op->nbsp->header_);
> +                                          op->key, &op->nbsp->header_,
> +                                          op->lflow_ref);
>  
>          ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
>          ds_clear(actions);
> @@ -6707,13 +6061,14 @@ build_lswitch_learn_fdb_op(
>          ovn_lflow_add_with_lport_and_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB,
>                                            100, ds_cstr(match),
>                                            ds_cstr(actions), op->key,
> -                                          &op->nbsp->header_);
> +                                          &op->nbsp->header_,
> +                                          op->lflow_ref);
>      }
>  }
>  
>  static void
>  build_lswitch_learn_fdb_od(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +    struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
> @@ -6727,7 +6082,7 @@ build_lswitch_learn_fdb_od(
>   *                 (priority 100). */
>  static void
>  build_lswitch_output_port_sec_od(struct ovn_datapath *od,
> -                              struct hmap *lflows)
> +                                 struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_OUT_CHECK_PORT_SEC, 100,
> @@ -6745,7 +6100,7 @@ static void
>  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>                           bool has_stateful_acl, enum ovn_stage in_stage,
>                           enum ovn_stage out_stage, uint16_t priority,
> -                         struct hmap *lflows)
> +                         struct lflow_table *lflows)
>  {
>      /* Can't use ct() for router ports. Consider the following configuration:
>       * lp1(10.0.0.2) on hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
> @@ -6767,10 +6122,10 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>  
>      ovn_lflow_add_with_lport_and_hint(lflows, od, in_stage, priority,
>                                        ingress_match, ingress_action,
> -                                      op->key, &op->nbsp->header_);
> +                                      op->key, &op->nbsp->header_, NULL);
>      ovn_lflow_add_with_lport_and_hint(lflows, od, out_stage, priority,
>                                        egress_match, egress_action,
> -                                      op->key, &op->nbsp->header_);
> +                                      op->key, &op->nbsp->header_, NULL);
>  
>      free(ingress_match);
>      free(egress_match);
> @@ -6779,7 +6134,7 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
>  static void
>  build_stateless_filter(const struct ovn_datapath *od,
>                         const struct nbrec_acl *acl,
> -                       struct hmap *lflows)
> +                       struct lflow_table *lflows)
>  {
>      const char *action = REGBIT_ACL_STATELESS" = 1; next;";
>      if (!strcmp(acl->direction, "from-lport")) {
> @@ -6800,7 +6155,7 @@ build_stateless_filter(const struct ovn_datapath *od,
>  static void
>  build_stateless_filters(const struct ovn_datapath *od,
>                          const struct ls_port_group_table *ls_port_groups,
> -                        struct hmap *lflows)
> +                        struct lflow_table *lflows)
>  {
>      for (size_t i = 0; i < od->nbs->n_acls; i++) {
>          const struct nbrec_acl *acl = od->nbs->acls[i];
> @@ -6828,7 +6183,7 @@ build_stateless_filters(const struct ovn_datapath *od,
>  }
>  
>  static void
> -build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
> +build_pre_acls(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
>       * allowed by default. */
> @@ -6843,16 +6198,17 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
>  }
>  
>  static void
> -build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
> -                             const struct ls_port_group_table *ls_port_groups,
> -                             struct hmap *lflows)
> +build_ls_stateful_rec_pre_acls(
> +    const struct ls_stateful_record *ls_stateful_rec,
> +    const struct ls_port_group_table *ls_port_groups,
> +    struct lflow_table *lflows)
>  {
> -    const struct ovn_datapath *od = ls_sful_rec->od;
> +    const struct ovn_datapath *od = ls_stateful_rec->od;
>  
>      /* If there are any stateful ACL rules in this datapath, we may
>       * send IP packets for some (allow) filters through the conntrack action,
>       * which handles defragmentation, in order to match L4 headers. */
> -    if (ls_sful_rec->has_stateful_acl) {
> +    if (ls_stateful_rec->has_stateful_acl) {
>          for (size_t i = 0; i < od->n_router_ports; i++) {
>              struct ovn_port *op = od->router_ports[i];
>              if (op->enable_router_port_acl) {
> @@ -6901,7 +6257,7 @@ build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
>                        REGBIT_CONNTRACK_DEFRAG" = 1; next;");
>          ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 100, "ip",
>                        REGBIT_CONNTRACK_DEFRAG" = 1; next;");
> -    } else if (ls_sful_rec->has_lb_vip) {
> +    } else if (ls_stateful_rec->has_lb_vip) {
>          /* We'll build stateless filters if there are LB rules so that
>           * the stateless flows are not tracked in pre-lb. */
>           build_stateless_filters(od, ls_port_groups, lflows);
> @@ -6968,7 +6324,7 @@ build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,
>  static void
>  build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
>                                    const struct shash *meter_groups,
> -                                  struct hmap *lflows)
> +                                  struct lflow_table *lflows)
>  {
>      struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
>      if (!mcast_sw_info->enabled
> @@ -7002,7 +6358,7 @@ build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
>  
>  static void
>  build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
> -             struct hmap *lflows)
> +             struct lflow_table *lflows)
>  {
>      /* Handle IGMP/MLD packets crossing AZs. */
>      build_interconn_mcast_snoop_flows(od, meter_groups, lflows);
> @@ -7038,7 +6394,7 @@ build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
>  
>  static void
>  build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
> -                             struct hmap *lflows)
> +                             struct lflow_table *lflows)
>  {
>      const struct ovn_datapath *od = ls_stateful_rec->od;
>  
> @@ -7104,7 +6460,7 @@ build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
>  static void
>  build_pre_stateful(struct ovn_datapath *od,
>                     const struct chassis_features *features,
> -                   struct hmap *lflows)
> +                   struct lflow_table *lflows)
>  {
>      /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
>       * allowed by default. */
> @@ -7134,11 +6490,11 @@ build_pre_stateful(struct ovn_datapath *od,
>  }
>  
>  static void
> -build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
> +build_acl_hints(const struct ls_stateful_record *ls_stateful_rec,
>                  const struct chassis_features *features,
> -                struct hmap *lflows)
> +                struct lflow_table *lflows)
>  {
> -    const struct ovn_datapath *od = ls_sful_rec->od;
> +    const struct ovn_datapath *od = ls_stateful_rec->od;
>  
>      /* This stage builds hints for the IN/OUT_ACL stage. Based on various
>       * combinations of ct flags packets may hit only a subset of the logical
> @@ -7163,13 +6519,14 @@ build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
>          const char *match;
>  
>          /* In any case, advance to the next stage. */
> -        if (!ls_sful_rec->has_acls && !ls_sful_rec->has_lb_vip) {
> +        if (!ls_stateful_rec->has_acls && !ls_stateful_rec->has_lb_vip) {
>              ovn_lflow_add(lflows, od, stage, UINT16_MAX, "1", "next;");
>          } else {
>              ovn_lflow_add(lflows, od, stage, 0, "1", "next;");
>          }
>  
> -        if (!ls_sful_rec->has_stateful_acl && !ls_sful_rec->has_lb_vip) {
> +        if (!ls_stateful_rec->has_stateful_acl &&
> +            !ls_stateful_rec->has_lb_vip) {
>              continue;
>          }
>  
> @@ -7305,7 +6662,7 @@ build_acl_log(struct ds *actions, const struct nbrec_acl *acl,
>  }
>  
>  static void
> -consider_acl(struct hmap *lflows, const struct ovn_datapath *od,
> +consider_acl(struct lflow_table *lflows, const struct ovn_datapath *od,
>               const struct nbrec_acl *acl, bool has_stateful,
>               bool ct_masked_mark, const struct shash *meter_groups,
>               uint64_t max_acl_tier, struct ds *match, struct ds *actions)
> @@ -7533,7 +6890,7 @@ ovn_update_ipv6_options(struct hmap *lr_ports)
>  
>  static void
>  build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
> -                        struct hmap *lflows,
> +                        struct lflow_table *lflows,
>                          const char *default_acl_action,
>                          const struct shash *meter_groups,
>                          struct ds *match,
> @@ -7610,7 +6967,8 @@ build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
>  }
>  
>  static void
> -build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
> +build_acl_log_related_flows(const struct ovn_datapath *od,
> +                            struct lflow_table *lflows,
>                              const struct nbrec_acl *acl, bool has_stateful,
>                              bool ct_masked_mark,
>                              const struct shash *meter_groups,
> @@ -7685,7 +7043,7 @@ build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
>  static void
>  build_acls(const struct ls_stateful_record *ls_stateful_rec,
>             const struct chassis_features *features,
> -           struct hmap *lflows,
> +           struct lflow_table *lflows,
>             const struct ls_port_group_table *ls_port_groups,
>             const struct shash *meter_groups)
>  {
> @@ -7931,7 +7289,7 @@ build_acls(const struct ls_stateful_record *ls_stateful_rec,
>  }
>  
>  static void
> -build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> +build_qos(struct ovn_datapath *od, struct lflow_table *lflows) {
>      struct ds action = DS_EMPTY_INITIALIZER;
>  
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> @@ -7992,7 +7350,7 @@ build_qos(struct ovn_datapath *od, struct hmap *lflows) {
>  }
>  
>  static void
> -build_lb_rules_pre_stateful(struct hmap *lflows,
> +build_lb_rules_pre_stateful(struct lflow_table *lflows,
>                              struct ovn_lb_datapaths *lb_dps,
>                              bool ct_lb_mark,
>                              const struct ovn_datapaths *ls_datapaths,
> @@ -8094,7 +7452,8 @@ build_lb_rules_pre_stateful(struct hmap *lflows,
>   *
>   */
>  static void
> -build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
> +build_lb_affinity_lr_flows(struct lflow_table *lflows,
> +                           const struct ovn_northd_lb *lb,
>                             struct ovn_lb_vip *lb_vip, char *new_lb_match,
>                             char *lb_action, const unsigned long *dp_bitmap,
>                             const struct ovn_datapaths *lr_datapaths)
> @@ -8280,7 +7639,7 @@ build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
>   *
>   */
>  static void
> -build_lb_affinity_ls_flows(struct hmap *lflows,
> +build_lb_affinity_ls_flows(struct lflow_table *lflows,
>                             struct ovn_lb_datapaths *lb_dps,
>                             struct ovn_lb_vip *lb_vip,
>                             const struct ovn_datapaths *ls_datapaths)
> @@ -8423,7 +7782,7 @@ build_lb_affinity_ls_flows(struct hmap *lflows,
>  
>  static void
>  build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
> @@ -8432,7 +7791,7 @@ build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
>  
>  static void
>  build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
> @@ -8440,7 +7799,7 @@ build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
>  }
>  
>  static void
> -build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
> +build_lb_rules(struct lflow_table *lflows, struct ovn_lb_datapaths *lb_dps,
>                 const struct ovn_datapaths *ls_datapaths,
>                 const struct chassis_features *features, struct ds *match,
>                 struct ds *action, const struct shash *meter_groups,
> @@ -8520,7 +7879,7 @@ build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
>  static void
>  build_stateful(struct ovn_datapath *od,
>                 const struct chassis_features *features,
> -               struct hmap *lflows)
> +               struct lflow_table *lflows)
>  {
>      const char *ct_block_action = features->ct_no_masked_label
>                                    ? "ct_mark.blocked"
> @@ -8570,7 +7929,7 @@ build_stateful(struct ovn_datapath *od,
>  
>  static void
>  build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
> -                 struct hmap *lflows)
> +                 struct lflow_table *lflows)
>  {
>      const struct ovn_datapath *od = ls_stateful_rec->od;
>  
> @@ -8629,7 +7988,7 @@ build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
>  }
>  
>  static void
> -build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
> +build_vtep_hairpin(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      if (!od->has_vtep_lports) {
>          /* There is no need in these flows if datapath has no vtep lports. */
> @@ -8677,7 +8036,7 @@ build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
>  
>  /* Build logical flows for the forwarding groups */
>  static void
> -build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
> +build_fwd_group_lflows(struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      if (!od->nbs->n_forwarding_groups) {
> @@ -8858,7 +8217,8 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
>                                          uint32_t priority,
>                                          const struct ovn_datapath *od,
>                                          const struct lr_nat_record *lrnat_rec,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows,
> +                                        struct lflow_ref *lflow_ref)
>  {
>      struct ds eth_src = DS_EMPTY_INITIALIZER;
>      struct ds match = DS_EMPTY_INITIALIZER;
> @@ -8882,8 +8242,10 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
>      ds_put_format(&match,
>                    "eth.src == %s && (arp.op == 1 || rarp.op == 3 || nd_ns)",
>                    ds_cstr(&eth_src));
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, priority, ds_cstr(&match),
> -                  "outport = \""MC_FLOOD_L2"\"; output;");
> +    ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> +                                 ds_cstr(&match),
> +                                 "outport = \""MC_FLOOD_L2"\"; output;",
> +                                 lflow_ref);
>  
>      ds_destroy(&eth_src);
>      ds_destroy(&match);
> @@ -8948,11 +8310,11 @@ lrouter_port_ipv6_reachable(const struct ovn_port *op,
>   * switching domain as regular broadcast.
>   */
>  static void
> -build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
> -                                 struct ovn_port *patch_op,
> -                                 const struct ovn_datapath *od,
> -                                 uint32_t priority, struct hmap *lflows,
> -                                 const struct ovsdb_idl_row *stage_hint)
> +build_lswitch_rport_arp_req_flow(
> +    const char *ips, int addr_family, struct ovn_port *patch_op,
> +    const struct ovn_datapath *od, uint32_t priority,
> +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> +    struct lflow_ref *lflow_ref)
>  {
>      struct ds match   = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -8966,14 +8328,17 @@ build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
>          ds_put_format(&actions, "clone {outport = %s; output; }; "
>                                  "outport = \""MC_FLOOD_L2"\"; output;",
>                        patch_op->json_key);
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> -                                priority, ds_cstr(&match),
> -                                ds_cstr(&actions), stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> +                                          priority, ds_cstr(&match),
> +                                          ds_cstr(&actions), stage_hint,
> +                                          lflow_ref);
>      } else {
>          ds_put_format(&actions, "outport = %s; output;", patch_op->json_key);
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> -                                ds_cstr(&match), ds_cstr(&actions),
> -                                stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> +                                          priority, ds_cstr(&match),
> +                                          ds_cstr(&actions),
> +                                          stage_hint,
> +                                          lflow_ref);
>      }
>  
>      ds_destroy(&match);
> @@ -8991,7 +8356,7 @@ static void
>  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
>                                    struct ovn_datapath *sw_od,
>                                    struct ovn_port *sw_op,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct ovsdb_idl_row *stage_hint)
>  {
>      if (!op || !op->nbrp) {
> @@ -9009,12 +8374,12 @@ build_lswitch_rport_arp_req_flows(struct ovn_port *op,
>      for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          build_lswitch_rport_arp_req_flow(
>              op->lrp_networks.ipv4_addrs[i].addr_s, AF_INET, sw_op, sw_od, 80,
> -            lflows, stage_hint);
> +            lflows, stage_hint, sw_op->lflow_ref);
>      }
>      for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
>          build_lswitch_rport_arp_req_flow(
>              op->lrp_networks.ipv6_addrs[i].addr_s, AF_INET6, sw_op, sw_od, 80,
> -            lflows, stage_hint);
> +            lflows, stage_hint, sw_op->lflow_ref);
>      }
>  }
>  
> @@ -9029,7 +8394,8 @@ static void
>  build_lswitch_rport_arp_req_flows_for_lbnats(
>      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
>      const struct ovn_datapath *sw_od, struct ovn_port *sw_op,
> -    struct hmap *lflows, const struct ovsdb_idl_row *stage_hint)
> +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> +    struct lflow_ref *lflow_ref)
>  {
>      if (!op || !op->nbrp) {
>          return;
> @@ -9057,7 +8423,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                  lrouter_port_ipv4_reachable(op, ipv4_addr)) {
>                  build_lswitch_rport_arp_req_flow(
>                      ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>          SSET_FOR_EACH (ip_addr, &lr_stateful_rec->lb_ips->ips_v6_reachable) {
> @@ -9070,7 +8436,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                  lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
>                  build_lswitch_rport_arp_req_flow(
>                      ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9085,7 +8451,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>      if (sw_od->n_router_ports != sw_od->nbs->n_ports) {
>          build_lswitch_rport_arp_req_self_orig_flow(op, 75, sw_od,
>                                                     lr_stateful_rec->lrnat_rec,
> -                                                   lflows);
> +                                                   lflows, lflow_ref);
>      }
>  
>      for (size_t i = 0; i < lr_stateful_rec->lrnat_rec->n_nat_entries; i++) {
> @@ -9109,14 +8475,14 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          } else {
>              if (!sset_contains(&lr_stateful_rec->lb_ips->ips_v4,
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9143,7 +8509,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          } else {
>              if (!lr_stateful_rec ||
> @@ -9151,7 +8517,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
>                                 nat->external_ip)) {
>                  build_lswitch_rport_arp_req_flow(
>                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> -                    stage_hint);
> +                    stage_hint, lflow_ref);
>              }
>          }
>      }
> @@ -9162,7 +8528,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                             struct lport_addresses *lsp_addrs,
>                             struct ovn_port *inport, bool is_external,
>                             const struct shash *meter_groups,
> -                           struct hmap *lflows)
> +                           struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>  
> @@ -9193,7 +8559,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                                op->json_key);
>              }
>  
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
>                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
>                                        ds_cstr(&match),
>                                        ds_cstr(&options_action),
> @@ -9201,7 +8567,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                                        copp_meter_get(COPP_DHCPV4_OPTS,
>                                                       op->od->nbs->copp,
>                                                       meter_groups),
> -                                      &op->nbsp->dhcpv4_options->header_);
> +                                      &op->nbsp->dhcpv4_options->header_,
> +                                      op->lflow_ref);
>              ds_clear(&match);
>  
>              /* If REGBIT_DHCP_OPTS_RESULT is set, it means the
> @@ -9220,7 +8587,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>              ovn_lflow_add_with_lport_and_hint(
>                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
>                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> -                &op->nbsp->dhcpv4_options->header_);
> +                &op->nbsp->dhcpv4_options->header_,
> +                op->lflow_ref);
>              ds_destroy(&options_action);
>              ds_destroy(&response_action);
>              ds_destroy(&ipv4_addr_match);
> @@ -9247,7 +8615,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
>                  ovn_lflow_add_with_lport_and_hint(
>                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
>                      ds_cstr(&match),dhcp_actions, op->key,
> -                    &op->nbsp->dhcpv4_options->header_);
> +                    &op->nbsp->dhcpv4_options->header_,
> +                    op->lflow_ref);
>              }
>              break;
>          }
> @@ -9260,7 +8629,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                             struct lport_addresses *lsp_addrs,
>                             struct ovn_port *inport, bool is_external,
>                             const struct shash *meter_groups,
> -                           struct hmap *lflows)
> +                           struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>  
> @@ -9282,7 +8651,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                                op->json_key);
>              }
>  
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
>                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
>                                        ds_cstr(&match),
>                                        ds_cstr(&options_action),
> @@ -9290,7 +8659,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                                        copp_meter_get(COPP_DHCPV6_OPTS,
>                                                       op->od->nbs->copp,
>                                                       meter_groups),
> -                                      &op->nbsp->dhcpv6_options->header_);
> +                                      &op->nbsp->dhcpv6_options->header_,
> +                                      op->lflow_ref);
>  
>              /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the
>               * put_dhcpv6_opts action is successful */
> @@ -9298,7 +8668,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>              ovn_lflow_add_with_lport_and_hint(
>                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
>                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> -                &op->nbsp->dhcpv6_options->header_);
> +                &op->nbsp->dhcpv6_options->header_, op->lflow_ref);
>              ds_destroy(&options_action);
>              ds_destroy(&response_action);
>  
> @@ -9330,7 +8700,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>                  ovn_lflow_add_with_lport_and_hint(
>                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
>                      ds_cstr(&match),dhcp6_actions, op->key,
> -                    &op->nbsp->dhcpv6_options->header_);
> +                    &op->nbsp->dhcpv6_options->header_,
> +                    op->lflow_ref);
>              }
>              break;
>          }
> @@ -9341,7 +8712,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
>  static void
>  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                                                   const struct ovn_port *port,
> -                                                 struct hmap *lflows)
> +                                                 struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>  
> @@ -9361,7 +8732,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                      ovn_lflow_add_with_lport_and_hint(
>                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
>                          ds_cstr(&match),  debug_drop_action(), port->key,
> -                        &op->nbsp->header_);
> +                        &op->nbsp->header_, op->lflow_ref);
>                  }
>                  for (size_t l = 0; l < rp->lsp_addrs[k].n_ipv6_addrs; l++) {
>                      ds_clear(&match);
> @@ -9377,7 +8748,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                      ovn_lflow_add_with_lport_and_hint(
>                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
>                          ds_cstr(&match), debug_drop_action(), port->key,
> -                        &op->nbsp->header_);
> +                        &op->nbsp->header_, op->lflow_ref);
>                  }
>  
>                  ds_clear(&match);
> @@ -9393,7 +8764,8 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>                                                    100, ds_cstr(&match),
>                                                    debug_drop_action(),
>                                                    port->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>          }
>      }
> @@ -9408,7 +8780,7 @@ is_vlan_transparent(const struct ovn_datapath *od)
>  
>  static void
>  build_lswitch_lflows_l2_unknown(struct ovn_datapath *od,
> -                                struct hmap *lflows)
> +                                struct lflow_table *lflows)
>  {
>      /* Ingress table 25/26: Destination lookup for unknown MACs. */
>      if (od->has_unknown) {
> @@ -9429,7 +8801,7 @@ static void
>  build_lswitch_lflows_pre_acl_and_acl(
>      struct ovn_datapath *od,
>      const struct chassis_features *features,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbs);
> @@ -9445,7 +8817,7 @@ build_lswitch_lflows_pre_acl_and_acl(
>   * 100). */
>  static void
>  build_lswitch_lflows_admission_control(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      /* Logical VLANs not supported. */
> @@ -9473,7 +8845,7 @@ build_lswitch_lflows_admission_control(struct ovn_datapath *od,
>  
>  static void
>  build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
> -                                          struct hmap *lflows,
> +                                          struct lflow_table *lflows,
>                                            struct ds *match)
>  {
>      ovs_assert(op->nbsp);
> @@ -9485,14 +8857,14 @@ build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
>      ovn_lflow_add_with_lport_and_hint(lflows, op->od,
>                                        S_SWITCH_IN_ARP_ND_RSP, 100,
>                                        ds_cstr(match), "next;", op->key,
> -                                      &op->nbsp->header_);
> +                                      &op->nbsp->header_, op->lflow_ref);
>  }
>  
>  /* Ingress table 19: ARP/ND responder, reply for known IPs.
>   * (priority 50). */
>  static void
>  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> -                                         struct hmap *lflows,
> +                                         struct lflow_table *lflows,
>                                           const struct hmap *ls_ports,
>                                           const struct shash *meter_groups,
>                                           struct ds *actions,
> @@ -9577,7 +8949,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                S_SWITCH_IN_ARP_ND_RSP, 100,
>                                                ds_cstr(match),
>                                                ds_cstr(actions), vparent,
> -                                              &vp->nbsp->header_);
> +                                              &vp->nbsp->header_,
> +                                              op->lflow_ref);
>          }
>  
>          free(tokstr);
> @@ -9621,11 +8994,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                      "output;",
>                      op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
>                      op->lsp_addrs[i].ipv4_addrs[j].addr_s);
> -                ovn_lflow_add_with_hint(lflows, op->od,
> -                                        S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                        ds_cstr(match),
> -                                        ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                                  S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                                  ds_cstr(match),
> +                                                  ds_cstr(actions),
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>  
>                  /* Do not reply to an ARP request from the port that owns
>                   * the address (otherwise a DHCP client that ARPs to check
> @@ -9644,7 +9018,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                    S_SWITCH_IN_ARP_ND_RSP,
>                                                    100, ds_cstr(match),
>                                                    "next;", op->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>  
>              /* For ND solicitations, we need to listen for both the
> @@ -9674,15 +9049,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
>                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
>                          op->lsp_addrs[i].ea_s);
> -                ovn_lflow_add_with_hint__(lflows, op->od,
> -                                          S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                          ds_cstr(match),
> -                                          ds_cstr(actions),
> -                                          NULL,
> -                                          copp_meter_get(COPP_ND_NA,
> -                                              op->od->nbs->copp,
> -                                              meter_groups),
> -                                          &op->nbsp->header_);
> +                ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> +                                                    S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                                    ds_cstr(match),
> +                                                    ds_cstr(actions),
> +                                                    NULL,
> +                                                    copp_meter_get(COPP_ND_NA,
> +                                                        op->od->nbs->copp,
> +                                                        meter_groups),
> +                                                    &op->nbsp->header_,
> +                                                    op->lflow_ref);
>  
>                  /* Do not reply to a solicitation from the port that owns
>                   * the address (otherwise DAD detection will fail). */
> @@ -9691,7 +9067,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                                                    S_SWITCH_IN_ARP_ND_RSP,
>                                                    100, ds_cstr(match),
>                                                    "next;", op->key,
> -                                                  &op->nbsp->header_);
> +                                                  &op->nbsp->header_,
> +                                                  op->lflow_ref);
>              }
>          }
>      }
> @@ -9737,8 +9114,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                  ea_s,
>                  ea_s);
>  
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
> -                30, ds_cstr(match), ds_cstr(actions), &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_ARP_ND_RSP,
> +                                              30, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          }
>  
>          /* Add IPv6 NDP responses.
> @@ -9781,15 +9162,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>                      lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
>                      ea_s,
>                      ea_s);
> -            ovn_lflow_add_with_hint__(lflows, op->od,
> -                                      S_SWITCH_IN_ARP_ND_RSP, 30,
> -                                      ds_cstr(match),
> -                                      ds_cstr(actions),
> -                                      NULL,
> -                                      copp_meter_get(COPP_ND_NA,
> -                                          op->od->nbs->copp,
> -                                          meter_groups),
> -                                      &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> +                                                S_SWITCH_IN_ARP_ND_RSP, 30,
> +                                                ds_cstr(match),
> +                                                ds_cstr(actions),
> +                                                NULL,
> +                                                copp_meter_get(COPP_ND_NA,
> +                                                    op->od->nbs->copp,
> +                                                    meter_groups),
> +                                                &op->nbsp->header_,
> +                                                op->lflow_ref);
>              ds_destroy(&ip6_dst_match);
>              ds_destroy(&nd_target_match);
>          }
> @@ -9800,7 +9182,7 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
>   * (priority 0)*/
>  static void
>  build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
> @@ -9811,7 +9193,7 @@ build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
>  static void
>  build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
>                                       const struct hmap *ls_ports,
> -                                     struct hmap *lflows,
> +                                     struct lflow_table *lflows,
>                                       struct ds *actions,
>                                       struct ds *match)
>  {
> @@ -9887,7 +9269,7 @@ build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
>   * priority 100 flows. */
>  static void
>  build_lswitch_dhcp_options_and_response(struct ovn_port *op,
> -                                        struct hmap *lflows,
> +                                        struct lflow_table *lflows,
>                                          const struct shash *meter_groups)
>  {
>      ovs_assert(op->nbsp);
> @@ -9942,7 +9324,7 @@ build_lswitch_dhcp_options_and_response(struct ovn_port *op,
>   * (priority 0). */
>  static void
>  build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> +                                        struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbs);
>      ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
> @@ -9957,7 +9339,7 @@ build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
>  */
>  static void
>  build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
> -                                      struct hmap *lflows,
> +                                      struct lflow_table *lflows,
>                                        const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbs);
> @@ -9988,7 +9370,7 @@ build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
>   * binding the external ports. */
>  static void
>  build_lswitch_external_port(struct ovn_port *op,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
>      if (!lsp_is_external(op->nbsp)) {
> @@ -10004,7 +9386,7 @@ build_lswitch_external_port(struct ovn_port *op,
>   * (priority 70 - 100). */
>  static void
>  build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
> -                                        struct hmap *lflows,
> +                                        struct lflow_table *lflows,
>                                          struct ds *actions,
>                                          const struct shash *meter_groups)
>  {
> @@ -10097,7 +9479,7 @@ build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
>   * (priority 90). */
>  static void
>  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
> -                                struct hmap *lflows,
> +                                struct lflow_table *lflows,
>                                  struct ds *actions,
>                                  struct ds *match)
>  {
> @@ -10177,7 +9559,8 @@ build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
>  
>  /* Ingress table 25: Destination lookup, unicast handling (priority 50), */
>  static void
> -build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> +build_lswitch_ip_unicast_lookup(struct ovn_port *op,
> +                                struct lflow_table *lflows,
>                                  struct ds *actions, struct ds *match)
>  {
>      ovs_assert(op->nbsp);
> @@ -10210,10 +9593,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>  
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                    50, ds_cstr(match),
> -                                    ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP,
> +                                              50, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
>              continue;
>          } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
> @@ -10228,10 +9613,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>  
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                    50, ds_cstr(match),
> -                                    ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP,
> +                                              50, ds_cstr(match),
> +                                              ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else if (!strcmp(op->nbsp->addresses[i], "router")) {
>              if (!op->peer || !op->peer->nbrp
>                  || !ovs_scan(op->peer->nbrp->mac,
> @@ -10283,10 +9670,11 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>  
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od,
> -                                    S_SWITCH_IN_L2_LKUP, 50,
> -                                    ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                              S_SWITCH_IN_L2_LKUP, 50,
> +                                              ds_cstr(match), ds_cstr(actions),
> +                                              &op->nbsp->header_,
> +                                              op->lflow_ref);
>          } else {
>              static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
>  
> @@ -10301,7 +9689,8 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
>  static void
>  build_lswitch_ip_unicast_lookup_for_nats(
>      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
> -    struct hmap *lflows, struct ds *match, struct ds *actions)
> +    struct lflow_table *lflows, struct ds *match, struct ds *actions,
> +    struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(op->nbsp);
>  
> @@ -10334,11 +9723,12 @@ build_lswitch_ip_unicast_lookup_for_nats(
>  
>              ds_clear(actions);
>              ds_put_format(actions, action, op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od,
> +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
>                                      S_SWITCH_IN_L2_LKUP, 50,
>                                      ds_cstr(match),
>                                      ds_cstr(actions),
> -                                    &op->nbsp->header_);
> +                                    &op->nbsp->header_,
> +                                    lflow_ref);
>          }
>      }
>  }
> @@ -10578,7 +9968,7 @@ get_outport_for_routing_policy_nexthop(struct ovn_datapath *od,
>  }
>  
>  static void
> -build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                            const struct hmap *lr_ports,
>                            const struct nbrec_logical_router_policy *rule,
>                            const struct ovsdb_idl_row *stage_hint)
> @@ -10643,7 +10033,8 @@ build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
>  }
>  
>  static void
> -build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
> +build_ecmp_routing_policy_flows(struct lflow_table *lflows,
> +                                struct ovn_datapath *od,
>                                  const struct hmap *lr_ports,
>                                  const struct nbrec_logical_router_policy *rule,
>                                  uint16_t ecmp_group_id)
> @@ -10779,7 +10170,7 @@ get_route_table_id(struct simap *route_tables, const char *route_table_name)
>  }
>  
>  static void
> -build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
> +build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows,
>                          struct nbrec_logical_router_port *lrp,
>                          struct simap *route_tables)
>  {
> @@ -11190,7 +10581,7 @@ find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports,
>  }
>  
>  static void
> -add_ecmp_symmetric_reply_flows(struct hmap *lflows,
> +add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
>                                 struct ovn_datapath *od,
>                                 bool ct_masked_mark,
>                                 const char *port_ip,
> @@ -11355,7 +10746,7 @@ add_ecmp_symmetric_reply_flows(struct hmap *lflows,
>  }
>  
>  static void
> -build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                        bool ct_masked_mark, const struct hmap *lr_ports,
>                        struct ecmp_groups_node *eg)
>  
> @@ -11442,12 +10833,12 @@ build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
>  }
>  
>  static void
> -add_route(struct hmap *lflows, struct ovn_datapath *od,
> +add_route(struct lflow_table *lflows, struct ovn_datapath *od,
>            const struct ovn_port *op, const char *lrp_addr_s,
>            const char *network_s, int plen, const char *gateway,
>            bool is_src_route, const uint32_t rtb_id,
>            const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
> -          int ofs)
> +          int ofs, struct lflow_ref *lflow_ref)
>  {
>      bool is_ipv4 = strchr(network_s, '.') ? true : false;
>      struct ds match = DS_EMPTY_INITIALIZER;
> @@ -11490,14 +10881,17 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
>          ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions));
>      }
>  
> -    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
> -                            ds_cstr(&match), ds_cstr(&actions),
> -                            stage_hint);
> +    ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_ROUTER_IN_IP_ROUTING,
> +                                      priority, ds_cstr(&match),
> +                                      ds_cstr(&actions), stage_hint,
> +                                      lflow_ref);
>      if (op && op->has_bfd) {
>          ds_put_format(&match, " && udp.dst == 3784");
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_ROUTING,
> -                                priority + 1, ds_cstr(&match),
> -                                ds_cstr(&common_actions), stage_hint);
> +        ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> +                                          S_ROUTER_IN_IP_ROUTING,
> +                                          priority + 1, ds_cstr(&match),
> +                                          ds_cstr(&common_actions),
> +                                          stage_hint, lflow_ref);
>      }
>      ds_destroy(&match);
>      ds_destroy(&common_actions);
> @@ -11505,7 +10899,7 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
>  }
>  
>  static void
> -build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> +build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
>                          const struct hmap *lr_ports,
>                          const struct parsed_route *route_)
>  {
> @@ -11531,7 +10925,7 @@ build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
>      add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
>                lrp_addr_s, prefix_s, route_->plen, route->nexthop,
>                route_->is_src_route, route_->route_table_id, &route->header_,
> -              route_->is_discard_route, ofs);
> +              route_->is_discard_route, ofs, NULL);
>  
>      free(prefix_s);
>  }
> @@ -11594,7 +10988,7 @@ struct lrouter_nat_lb_flows_ctx {
>  
>      int prio;
>  
> -    struct hmap *lflows;
> +    struct lflow_table *lflows;
>      const struct shash *meter_groups;
>  };
>  
> @@ -11726,7 +11120,7 @@ build_lrouter_nat_flows_for_lb(
>      struct ovn_northd_lb_vip *vips_nb,
>      const struct ovn_datapaths *lr_datapaths,
>      const struct lr_stateful_table *lr_stateful_table,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      struct ds *match, struct ds *action,
>      const struct shash *meter_groups,
>      const struct chassis_features *features,
> @@ -11895,7 +11289,7 @@ build_lrouter_nat_flows_for_lb(
>  
>  static void
>  build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                           struct hmap *lflows,
> +                           struct lflow_table *lflows,
>                             const struct shash *meter_groups,
>                             const struct ovn_datapaths *ls_datapaths,
>                             const struct chassis_features *features,
> @@ -11956,7 +11350,7 @@ build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
>   */
>  static void
>  build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct ovn_datapaths *lr_datapaths,
>                                    struct ds *match)
>  {
> @@ -11982,7 +11376,7 @@ build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
>  
>  static void
>  build_lrouter_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> -                           struct hmap *lflows,
> +                           struct lflow_table *lflows,
>                             const struct shash *meter_groups,
>                             const struct ovn_datapaths *lr_datapaths,
>                             const struct lr_stateful_table *lr_stateful_table,
> @@ -12140,7 +11534,7 @@ lrouter_dnat_and_snat_is_stateless(const struct nbrec_nat *nat)
>   */
>  static inline void
>  lrouter_nat_add_ext_ip_match(const struct ovn_datapath *od,
> -                             struct hmap *lflows, struct ds *match,
> +                             struct lflow_table *lflows, struct ds *match,
>                               const struct nbrec_nat *nat,
>                               bool is_v6, bool is_src, int cidr_bits)
>  {
> @@ -12207,7 +11601,7 @@ build_lrouter_arp_flow(const struct ovn_datapath *od, struct ovn_port *op,
>                         const char *ip_address, const char *eth_addr,
>                         struct ds *extra_match, bool drop, uint16_t priority,
>                         const struct ovsdb_idl_row *hint,
> -                       struct hmap *lflows)
> +                       struct lflow_table *lflows)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -12257,7 +11651,8 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
>                        const char *sn_ip_address, const char *eth_addr,
>                        struct ds *extra_match, bool drop, uint16_t priority,
>                        const struct ovsdb_idl_row *hint,
> -                      struct hmap *lflows, const struct shash *meter_groups)
> +                      struct lflow_table *lflows,
> +                      const struct shash *meter_groups)
>  {
>      struct ds match = DS_EMPTY_INITIALIZER;
>      struct ds actions = DS_EMPTY_INITIALIZER;
> @@ -12308,7 +11703,7 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
>  static void
>  build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
>                                struct ovn_nat *nat_entry,
> -                              struct hmap *lflows,
> +                              struct lflow_table *lflows,
>                                const struct shash *meter_groups)
>  {
>      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> @@ -12331,7 +11726,7 @@ build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
>  static void
>  build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
>                                     struct ovn_nat *nat_entry,
> -                                   struct hmap *lflows,
> +                                   struct lflow_table *lflows,
>                                     const struct shash *meter_groups)
>  {
>      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> @@ -12405,7 +11800,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              enum ovn_stage stage,
>                              uint16_t priority, bool drop_snat_ip,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      struct ds match_ips = DS_EMPTY_INITIALIZER;
>  
> @@ -12470,7 +11865,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
>  }
>  
>  static void
> -build_lrouter_force_snat_flows(struct hmap *lflows,
> +build_lrouter_force_snat_flows(struct lflow_table *lflows,
>                                 const struct ovn_datapath *od,
>                                 const char *ip_version, const char *ip_addr,
>                                 const char *context)
> @@ -12499,7 +11894,7 @@ build_lrouter_force_snat_flows(struct hmap *lflows,
>  static void
>  build_lrouter_force_snat_flows_op(struct ovn_port *op,
>                                    const struct lr_nat_record *lrnat_rec,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12571,7 +11966,7 @@ build_lrouter_force_snat_flows_op(struct ovn_port *op,
>  }
>  
>  static void
> -build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
> +build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
>                          const struct shash *meter_groups)
>  {
>      if (!op->has_bfd) {
> @@ -12626,7 +12021,7 @@ build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
>   */
>  static void
>  build_adm_ctrl_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      /* Logical VLANs not supported.
> @@ -12670,7 +12065,7 @@ build_gateway_get_l2_hdr_size(struct ovn_port *op)
>   * function.
>   */
>  static void OVS_PRINTF_FORMAT(9, 10)
> -build_gateway_mtu_flow(struct hmap *lflows, struct ovn_port *op,
> +build_gateway_mtu_flow(struct lflow_table *lflows, struct ovn_port *op,
>                         enum ovn_stage stage, uint16_t prio_low,
>                         uint16_t prio_high, struct ds *match,
>                         struct ds *actions, const struct ovsdb_idl_row *hint,
> @@ -12731,7 +12126,7 @@ consider_l3dgw_port_is_centralized(struct ovn_port *op)
>   */
>  static void
>  build_adm_ctrl_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12785,7 +12180,7 @@ build_adm_ctrl_flows_for_lrouter_port(
>   * lflows for logical routers. */
>  static void
>  build_neigh_learning_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -12916,7 +12311,7 @@ build_neigh_learning_flows_for_lrouter(
>   * for logical router ports. */
>  static void
>  build_neigh_learning_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -12978,7 +12373,7 @@ build_neigh_learning_flows_for_lrouter_port(
>   * Adv (RA) options and response. */
>  static void
>  build_ND_RA_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -13093,7 +12488,8 @@ build_ND_RA_flows_for_lrouter_port(
>  /* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
>   * responder, by default goto next. (priority 0). */
>  static void
> -build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> +build_ND_RA_flows_for_lrouter(struct ovn_datapath *od,
> +                              struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
> @@ -13104,7 +12500,7 @@ build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
>   * by default goto next. (priority 0). */
>  static void
>  build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> +                                       struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1",
> @@ -13132,21 +12528,23 @@ build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
>   */
>  static void
>  build_ip_routing_flows_for_lrp(
> -        struct ovn_port *op, struct hmap *lflows)
> +        struct ovn_port *op, struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbrp);
>      for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
>                    op->lrp_networks.ipv4_addrs[i].network_s,
>                    op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
> -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> +                  NULL);
>      }
>  
>      for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
>          add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
>                    op->lrp_networks.ipv6_addrs[i].network_s,
>                    op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
> -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> +                  NULL);
>      }
>  }
>  
> @@ -13159,8 +12557,9 @@ build_ip_routing_flows_for_lrp(
>   */
>  static void
>  build_ip_routing_flows_for_router_type_lsp(
> -        struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> -        const struct hmap *lr_ports, struct hmap *lflows)
> +    struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> +    const struct hmap *lr_ports, struct lflow_table *lflows,
> +    struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(op->nbsp);
>      if (!lsp_is_router(op->nbsp)) {
> @@ -13196,7 +12595,8 @@ build_ip_routing_flows_for_router_type_lsp(
>                              laddrs->ipv4_addrs[k].network_s,
>                              laddrs->ipv4_addrs[k].plen, NULL, false, 0,
>                              &peer->nbrp->header_, false,
> -                            ROUTE_PRIO_OFFSET_CONNECTED);
> +                            ROUTE_PRIO_OFFSET_CONNECTED,
> +                            lflow_ref);
>                  }
>              }
>              destroy_routable_addresses(&ra);
> @@ -13208,7 +12608,7 @@ build_ip_routing_flows_for_router_type_lsp(
>  static void
>  build_static_route_flows_for_lrouter(
>          struct ovn_datapath *od, const struct chassis_features *features,
> -        struct hmap *lflows, const struct hmap *lr_ports,
> +        struct lflow_table *lflows, const struct hmap *lr_ports,
>          const struct hmap *bfd_connections)
>  {
>      ovs_assert(od->nbr);
> @@ -13272,7 +12672,7 @@ build_static_route_flows_for_lrouter(
>   */
>  static void
>  build_mcast_lookup_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
> @@ -13373,7 +12773,7 @@ build_mcast_lookup_flows_for_lrouter(
>   * advances to the next table for ARP/ND resolution. */
>  static void
>  build_ingress_policy_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          const struct hmap *lr_ports)
>  {
>      ovs_assert(od->nbr);
> @@ -13407,7 +12807,7 @@ build_ingress_policy_flows_for_lrouter(
>  /* Local router ingress table ARP_RESOLVE: ARP Resolution. */
>  static void
>  build_arp_resolve_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      /* Multicast packets already have the outport set so just advance to
> @@ -13425,10 +12825,12 @@ build_arp_resolve_flows_for_lrouter(
>  }
>  
>  static void
> -routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
> +routable_addresses_to_lflows(struct lflow_table *lflows,
> +                             struct ovn_port *router_port,
>                               struct ovn_port *peer,
>                               const struct lr_stateful_record *lr_stateful_rec,
> -                             struct ds *match, struct ds *actions)
> +                             struct ds *match, struct ds *actions,
> +                             struct lflow_ref *lflow_ref)
>  {
>      struct ovn_port_routable_addresses ra =
>          get_op_routable_addresses(router_port, lr_stateful_rec);
> @@ -13452,8 +12854,9 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
>  
>          ds_clear(actions);
>          ds_put_format(actions, "eth.dst = %s; next;", ra.laddrs[i].ea_s);
> -        ovn_lflow_add(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE, 100,
> -                      ds_cstr(match), ds_cstr(actions));
> +        ovn_lflow_add_with_lflow_ref(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE,
> +                                     100, ds_cstr(match), ds_cstr(actions),
> +                                     lflow_ref);
>      }
>      destroy_routable_addresses(&ra);
>  }
> @@ -13470,7 +12873,8 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
>  
>  /* This function adds ARP resolve flows related to a LRP. */
>  static void
> -build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
> +build_arp_resolve_flows_for_lrp(struct ovn_port *op,
> +                                struct lflow_table *lflows,
>                                  struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -13545,7 +12949,7 @@ build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
>  /* This function adds ARP resolve flows related to a LSP. */
>  static void
>  build_arp_resolve_flows_for_lsp(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          const struct hmap *lr_ports,
>          struct ds *match, struct ds *actions)
>  {
> @@ -13587,11 +12991,12 @@ build_arp_resolve_flows_for_lsp(
>  
>                      ds_clear(actions);
>                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                              S_ROUTER_IN_ARP_RESOLVE, 100,
>                                              ds_cstr(match),
>                                              ds_cstr(actions),
> -                                            &op->nbsp->header_);
> +                                            &op->nbsp->header_,
> +                                            op->lflow_ref);
>                  }
>              }
>  
> @@ -13618,11 +13023,12 @@ build_arp_resolve_flows_for_lsp(
>  
>                      ds_clear(actions);
>                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                              S_ROUTER_IN_ARP_RESOLVE, 100,
>                                              ds_cstr(match),
>                                              ds_cstr(actions),
> -                                            &op->nbsp->header_);
> +                                            &op->nbsp->header_,
> +                                            op->lflow_ref);
>                  }
>              }
>          }
> @@ -13666,10 +13072,11 @@ build_arp_resolve_flows_for_lsp(
>                  ds_clear(actions);
>                  ds_put_format(actions, "eth.dst = %s; next;",
>                                            router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                          S_ROUTER_IN_ARP_RESOLVE, 100,
>                                          ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                                        &op->nbsp->header_,
> +                                        op->lflow_ref);
>              }
>  
>              if (router_port->lrp_networks.n_ipv6_addrs) {
> @@ -13682,10 +13089,11 @@ build_arp_resolve_flows_for_lsp(
>                  ds_clear(actions);
>                  ds_put_format(actions, "eth.dst = %s; next;",
>                                router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
>                                          S_ROUTER_IN_ARP_RESOLVE, 100,
>                                          ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                                        &op->nbsp->header_,
> +                                        op->lflow_ref);
>              }
>          }
>      }
> @@ -13693,10 +13101,11 @@ build_arp_resolve_flows_for_lsp(
>  
>  static void
>  build_arp_resolve_flows_for_lsp_routable_addresses(
> -        struct ovn_port *op, struct hmap *lflows,
> -        const struct hmap *lr_ports,
> -        const struct lr_stateful_table *lr_stateful_table,
> -        struct ds *match, struct ds *actions)
> +    struct ovn_port *op, struct lflow_table *lflows,
> +    const struct hmap *lr_ports,
> +    const struct lr_stateful_table *lr_stateful_table,
> +    struct ds *match, struct ds *actions,
> +    struct lflow_ref *lflow_ref)
>  {
>      if (!lsp_is_router(op->nbsp)) {
>          return;
> @@ -13730,13 +13139,15 @@ build_arp_resolve_flows_for_lsp_routable_addresses(
>              lr_stateful_rec = lr_stateful_table_find_by_index(
>                  lr_stateful_table, router_port->od->index);
>              routable_addresses_to_lflows(lflows, router_port, peer,
> -                                         lr_stateful_rec, match, actions);
> +                                         lr_stateful_rec, match, actions,
> +                                         lflow_ref);
>          }
>      }
>  }
>  
>  static void
> -build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
> +build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu,
> +                            struct lflow_table *lflows,
>                              const struct shash *meter_groups, struct ds *match,
>                              struct ds *actions, enum ovn_stage stage,
>                              struct ovn_port *outport)
> @@ -13829,7 +13240,7 @@ build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
>  
>  static void
>  build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct hmap *lr_ports,
>                                    const struct shash *meter_groups,
>                                    struct ds *match,
> @@ -13879,7 +13290,7 @@ build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
>   * */
>  static void
>  build_check_pkt_len_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          const struct hmap *lr_ports,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
> @@ -13906,7 +13317,7 @@ build_check_pkt_len_flows_for_lrouter(
>  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
>  static void
>  build_gateway_redirect_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
> @@ -13950,8 +13361,8 @@ build_gateway_redirect_flows_for_lrouter(
>  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
>  static void
>  build_lr_gateway_redirect_flows_for_nats(
> -    const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> -    struct hmap *lflows, struct ds *match, struct ds *actions)
> +        const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> +        struct lflow_table *lflows, struct ds *match, struct ds *actions)
>  {
>      ovs_assert(od->nbr);
>      for (size_t i = 0; i < od->n_l3dgw_ports; i++) {
> @@ -14020,7 +13431,7 @@ build_lr_gateway_redirect_flows_for_nats(
>   * and sends an ARP/IPv6 NA request (priority 100). */
>  static void
>  build_arp_request_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> +        struct ovn_datapath *od, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -14098,7 +13509,7 @@ build_arp_request_flows_for_lrouter(
>   */
>  static void
>  build_egress_delivery_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions)
>  {
>      ovs_assert(op->nbrp);
> @@ -14140,7 +13551,7 @@ build_egress_delivery_flows_for_lrouter_port(
>  
>  static void
>  build_misc_local_traffic_drop_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> +        struct ovn_datapath *od, struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>      /* Allow IGMP and MLD packets (with TTL = 1) if the router is
> @@ -14222,7 +13633,7 @@ build_misc_local_traffic_drop_flows_for_lrouter(
>  
>  static void
>  build_dhcpv6_reply_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match)
>  {
>      ovs_assert(op->nbrp);
> @@ -14242,7 +13653,7 @@ build_dhcpv6_reply_flows_for_lrouter_port(
>  
>  static void
>  build_ipv6_input_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> +        struct ovn_port *op, struct lflow_table *lflows,
>          struct ds *match, struct ds *actions,
>          const struct shash *meter_groups)
>  {
> @@ -14411,7 +13822,7 @@ build_ipv6_input_flows_for_lrouter_port(
>  static void
>  build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
>                                    const struct lr_nat_record *lrnat_rec,
> -                                  struct hmap *lflows,
> +                                  struct lflow_table *lflows,
>                                    const struct shash *meter_groups)
>  {
>      ovs_assert(od->nbr);
> @@ -14463,7 +13874,7 @@ build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
>  /* Logical router ingress table 3: IP Input for IPv4. */
>  static void
>  build_lrouter_ipv4_ip_input(struct ovn_port *op,
> -                            struct hmap *lflows,
> +                            struct lflow_table *lflows,
>                              struct ds *match, struct ds *actions,
>                              const struct shash *meter_groups)
>  {
> @@ -14667,7 +14078,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
>  /* Logical router ingress table 3: IP Input for IPv4. */
>  static void
>  build_lrouter_ipv4_ip_input_for_lbnats(
> -    struct ovn_port *op, struct hmap *lflows,
> +    struct ovn_port *op, struct lflow_table *lflows,
>      const struct lr_stateful_record *lr_stateful_rec,
>      struct ds *match, const struct shash *meter_groups)
>  {
> @@ -14787,7 +14198,7 @@ build_lrouter_in_unsnat_match(const struct ovn_datapath *od,
>  }
>  
>  static void
> -build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_stateless_flow(struct lflow_table *lflows,
>                                         const struct ovn_datapath *od,
>                                         const struct nbrec_nat *nat,
>                                         struct ds *match,
> @@ -14809,7 +14220,7 @@ build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_in_czone_flow(struct lflow_table *lflows,
>                                        const struct ovn_datapath *od,
>                                        const struct nbrec_nat *nat,
>                                        struct ds *match, bool distributed_nat,
> @@ -14843,7 +14254,7 @@ build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_in_unsnat_flow(struct hmap *lflows,
> +build_lrouter_in_unsnat_flow(struct lflow_table *lflows,
>                               const struct ovn_datapath *od,
>                               const struct nbrec_nat *nat, struct ds *match,
>                               bool distributed_nat, bool is_v6,
> @@ -14865,7 +14276,7 @@ build_lrouter_in_unsnat_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_in_dnat_flow(struct hmap *lflows,
> +build_lrouter_in_dnat_flow(struct lflow_table *lflows,
>                             const struct ovn_datapath *od,
>                             const struct lr_nat_record *lrnat_rec,
>                             const struct nbrec_nat *nat, struct ds *match,
> @@ -14937,7 +14348,7 @@ build_lrouter_in_dnat_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_undnat_flow(struct hmap *lflows,
> +build_lrouter_out_undnat_flow(struct lflow_table *lflows,
>                                const struct ovn_datapath *od,
>                                const struct nbrec_nat *nat, struct ds *match,
>                                struct ds *actions, bool distributed_nat,
> @@ -14988,7 +14399,7 @@ build_lrouter_out_undnat_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_is_dnat_local(struct hmap *lflows,
> +build_lrouter_out_is_dnat_local(struct lflow_table *lflows,
>                                  const struct ovn_datapath *od,
>                                  const struct nbrec_nat *nat, struct ds *match,
>                                  struct ds *actions, bool distributed_nat,
> @@ -15019,7 +14430,7 @@ build_lrouter_out_is_dnat_local(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_snat_match(struct hmap *lflows,
> +build_lrouter_out_snat_match(struct lflow_table *lflows,
>                               const struct ovn_datapath *od,
>                               const struct nbrec_nat *nat, struct ds *match,
>                               bool distributed_nat, int cidr_bits, bool is_v6,
> @@ -15048,7 +14459,7 @@ build_lrouter_out_snat_match(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
> +build_lrouter_out_snat_stateless_flow(struct lflow_table *lflows,
>                                        const struct ovn_datapath *od,
>                                        const struct nbrec_nat *nat,
>                                        struct ds *match, struct ds *actions,
> @@ -15091,7 +14502,7 @@ build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
> +build_lrouter_out_snat_in_czone_flow(struct lflow_table *lflows,
>                                       const struct ovn_datapath *od,
>                                       const struct nbrec_nat *nat,
>                                       struct ds *match,
> @@ -15153,7 +14564,7 @@ build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_out_snat_flow(struct hmap *lflows,
> +build_lrouter_out_snat_flow(struct lflow_table *lflows,
>                              const struct ovn_datapath *od,
>                              const struct nbrec_nat *nat, struct ds *match,
>                              struct ds *actions, bool distributed_nat,
> @@ -15199,7 +14610,7 @@ build_lrouter_out_snat_flow(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
> +build_lrouter_ingress_nat_check_pkt_len(struct lflow_table *lflows,
>                                          const struct nbrec_nat *nat,
>                                          const struct ovn_datapath *od,
>                                          bool is_v6, struct ds *match,
> @@ -15271,7 +14682,7 @@ build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
>  }
>  
>  static void
> -build_lrouter_ingress_flow(struct hmap *lflows,
> +build_lrouter_ingress_flow(struct lflow_table *lflows,
>                             const struct ovn_datapath *od,
>                             const struct nbrec_nat *nat, struct ds *match,
>                             struct ds *actions, struct eth_addr mac,
> @@ -15451,7 +14862,7 @@ lrouter_check_nat_entry(const struct ovn_datapath *od,
>  
>  /* NAT, Defrag and load balancing. */
>  static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
> -                                                     struct hmap *lflows)
> +                                                struct lflow_table *lflows)
>  {
>      ovs_assert(od->nbr);
>  
> @@ -15476,7 +14887,8 @@ static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
>  
>  static void
>  build_lrouter_nat_defrag_and_lb(
> -    const struct lr_stateful_record *lr_stateful_rec, struct hmap *lflows,
> +    const struct lr_stateful_record *lr_stateful_rec,
> +    struct lflow_table *lflows,
>      const struct hmap *ls_ports, const struct hmap *lr_ports,
>      struct ds *match, struct ds *actions,
>      const struct shash *meter_groups,
> @@ -15858,31 +15270,30 @@ build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              const struct lr_stateful_table *lr_stateful_table,
>                              const struct hmap *lr_ports,
> -                            struct hmap *lflows,
> +                            struct lflow_table *lflows,
>                              struct ds *match,
> -                            struct ds *actions)
> +                            struct ds *actions,
> +                            struct lflow_ref *lflow_ref)
>  {
>      ovs_assert(lsp->nbsp);
>      ovs_assert(lsp->peer);
> -    start_collecting_lflows();
>      build_lswitch_rport_arp_req_flows_for_lbnats(
>          lsp->peer, lr_stateful_rec, lsp->od, lsp,
> -        lflows, &lsp->nbsp->header_);
> +        lflows, &lsp->nbsp->header_, lflow_ref);
>      build_ip_routing_flows_for_router_type_lsp(lsp, lr_stateful_table,
> -                                               lr_ports, lflows);
> +                                               lr_ports, lflows,
> +                                               lflow_ref);
>      build_arp_resolve_flows_for_lsp_routable_addresses(
> -        lsp, lflows, lr_ports, lr_stateful_table, match, actions);
> +        lsp, lflows, lr_ports, lr_stateful_table, match, actions, lflow_ref);
>      build_lswitch_ip_unicast_lookup_for_nats(lsp, lr_stateful_rec, lflows,
> -                                             match, actions);
> -    link_ovn_port_to_lflows(lsp, &collected_lflows);
> -    end_collecting_lflows();
> +                                             match, actions, lflow_ref);
>  }
>  
>  static void
>  build_lbnat_lflows_iterate_by_lsp(
>      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
>      const struct hmap *lr_ports, struct ds *match, struct ds *actions,
> -    struct hmap *lflows)
> +    struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
>  
> @@ -15895,8 +15306,9 @@ build_lbnat_lflows_iterate_by_lsp(
>                                                        op->peer->od->index);
>      ovs_assert(lr_stateful_rec);
>  
> -    build_lsp_lflows_for_lbnats(op, lr_stateful_rec, lr_stateful_table,
> -                                lr_ports, lflows, match, actions);
> +    build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
> +                                lr_stateful_table, lr_ports, lflows,
> +                                match, actions, op->stateful_lflow_ref);
>  }
>  
>  static void
> @@ -15904,7 +15316,7 @@ build_lrp_lflows_for_lbnats(struct ovn_port *op,
>                              const struct lr_stateful_record *lr_stateful_rec,
>                              const struct shash *meter_groups,
>                              struct ds *match, struct ds *actions,
> -                            struct hmap *lflows)
> +                            struct lflow_table *lflows)
>  {
>      /* Drop IP traffic destined to router owned IPs except if the IP is
>       * also a SNAT IP. Those are dropped later, in stage
> @@ -15939,7 +15351,7 @@ static void
>  build_lbnat_lflows_iterate_by_lrp(
>      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
>      const struct shash *meter_groups, struct ds *match,
> -    struct ds *actions, struct hmap *lflows)
> +    struct ds *actions, struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbrp);
>  
> @@ -15954,7 +15366,7 @@ build_lbnat_lflows_iterate_by_lrp(
>  
>  static void
>  build_lr_stateful_flows(const struct lr_stateful_record *lr_stateful_rec,
> -                        struct hmap *lflows,
> +                        struct lflow_table *lflows,
>                          const struct hmap *ls_ports,
>                          const struct hmap *lr_ports,
>                          struct ds *match,
> @@ -15978,7 +15390,7 @@ build_ls_stateful_flows(const struct ls_stateful_record *ls_stateful_rec,
>                        const struct ls_port_group_table *ls_pgs,
>                        const struct chassis_features *features,
>                        const struct shash *meter_groups,
> -                      struct hmap *lflows)
> +                      struct lflow_table *lflows)
>  {
>      ovs_assert(ls_stateful_rec->od);
>  
> @@ -15997,7 +15409,7 @@ struct lswitch_flow_build_info {
>      const struct ls_port_group_table *ls_port_groups;
>      const struct lr_stateful_table *lr_stateful_table;
>      const struct ls_stateful_table *ls_stateful_table;
> -    struct hmap *lflows;
> +    struct lflow_table *lflows;
>      struct hmap *igmp_groups;
>      const struct shash *meter_groups;
>      const struct hmap *lb_dps_map;
> @@ -16080,10 +15492,9 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
>                                           const struct shash *meter_groups,
>                                           struct ds *match,
>                                           struct ds *actions,
> -                                         struct hmap *lflows)
> +                                         struct lflow_table *lflows)
>  {
>      ovs_assert(op->nbsp);
> -    start_collecting_lflows();
>  
>      /* Build Logical Switch Flows. */
>      build_lswitch_port_sec_op(op, lflows, actions, match);
> @@ -16098,9 +15509,6 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
>  
>      /* Build Logical Router Flows. */
>      build_arp_resolve_flows_for_lsp(op, lflows, lr_ports, match, actions);
> -
> -    link_ovn_port_to_lflows(op, &collected_lflows);
> -    end_collecting_lflows();
>  }
>  
>  /* Helper function to combine all lflow generation which is iterated by logical
> @@ -16307,7 +15715,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
>      /* Do nothing */
>  }
>  
> -/* Fixes the hmap size (hmap->n) after parallel building the lflow_map when
> +/* Fixes the hmap size (hmap->n) after parallel building the lflow_table when
>   * dp-groups is enabled, because in that case all threads are updating the
>   * global lflow hmap. Although the lflow_hash_lock prevents currently inserting
>   * to the same hash bucket, the hmap->n is updated currently by all threads and
> @@ -16317,7 +15725,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
>   * after the worker threads complete the tasks in each iteration before any
>   * future operations on the lflow map. */
>  static void
> -fix_flow_map_size(struct hmap *lflow_map,
> +fix_flow_table_size(struct lflow_table *lflow_table,
>                    struct lswitch_flow_build_info *lsiv,
>                    size_t n_lsiv)
>  {
> @@ -16325,7 +15733,7 @@ fix_flow_map_size(struct hmap *lflow_map,
>      for (size_t i = 0; i < n_lsiv; i++) {
>          total += lsiv[i].thread_lflow_counter;
>      }
> -    lflow_map->n = total;
> +    lflow_table_set_size(lflow_table, total);
>  }
>  
>  static void
> @@ -16337,7 +15745,7 @@ build_lswitch_and_lrouter_flows(
>      const struct ls_port_group_table *ls_pgs,
>      const struct lr_stateful_table *lr_stateful_table,
>      const struct ls_stateful_table *ls_stateful_table,
> -    struct hmap *lflows,
> +    struct lflow_table *lflows,
>      struct hmap *igmp_groups,
>      const struct shash *meter_groups,
>      const struct hmap *lb_dps_map,
> @@ -16384,7 +15792,7 @@ build_lswitch_and_lrouter_flows(
>  
>          /* Run thread pool. */
>          run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
> -        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
> +        fix_flow_table_size(lflows, lsiv, build_lflows_pool->size);
>  
>          for (index = 0; index < build_lflows_pool->size; index++) {
>              ds_destroy(&lsiv[index].match);
> @@ -16498,24 +15906,6 @@ build_lswitch_and_lrouter_flows(
>      free(svc_check_match);
>  }
>  
> -static ssize_t max_seen_lflow_size = 128;
> -
> -void
> -lflow_data_init(struct lflow_data *data)
> -{
> -    fast_hmap_size_for(&data->lflows, max_seen_lflow_size);
> -}
> -
> -void
> -lflow_data_destroy(struct lflow_data *data)
> -{
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &data->lflows) {
> -        ovn_lflow_destroy(&data->lflows, lflow);
> -    }
> -    hmap_destroy(&data->lflows);
> -}
> -
>  void run_update_worker_pool(int n_threads)
>  {
>      /* If number of threads has been updated (or initially set),
> @@ -16561,7 +15951,7 @@ create_sb_multicast_group(struct ovsdb_idl_txn *ovnsb_txn,
>   * constructing their contents based on the OVN_NB database. */
>  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>                    struct lflow_input *input_data,
> -                  struct hmap *lflows)
> +                  struct lflow_table *lflows)
>  {
>      struct hmap mcast_groups;
>      struct hmap igmp_groups;
> @@ -16592,281 +15982,26 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>      }
>  
>      /* Parallel build may result in a suboptimal hash. Resize the
> -     * hash to a correct size before doing lookups */
> -
> -    hmap_expand(lflows);
> -
> -    if (hmap_count(lflows) > max_seen_lflow_size) {
> -        max_seen_lflow_size = hmap_count(lflows);
> -    }
> -
> -    stopwatch_start(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> -    /* Collecting all unique datapath groups. */
> -    struct hmap ls_dp_groups = HMAP_INITIALIZER(&ls_dp_groups);
> -    struct hmap lr_dp_groups = HMAP_INITIALIZER(&lr_dp_groups);
> -    struct hmap single_dp_lflows;
> -
> -    /* Single dp_flows will never grow bigger than lflows,
> -     * thus the two hmaps will remain the same size regardless
> -     * of how many elements we remove from lflows and add to
> -     * single_dp_lflows.
> -     * Note - lflows is always sized for at least 128 flows.
> -     */
> -    fast_hmap_size_for(&single_dp_lflows, max_seen_lflow_size);
> -
> -    struct ovn_lflow *lflow;
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> -        struct ovn_datapath **datapaths_array;
> -        size_t n_datapaths;
> -
> -        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> -            n_datapaths = ods_size(input_data->ls_datapaths);
> -            datapaths_array = input_data->ls_datapaths->array;
> -        } else {
> -            n_datapaths = ods_size(input_data->lr_datapaths);
> -            datapaths_array = input_data->lr_datapaths->array;
> -        }
> -
> -        lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> -
> -        ovs_assert(lflow->n_ods);
> -
> -        if (lflow->n_ods == 1) {
> -            /* There is only one datapath, so it should be moved out of the
> -             * group to a single 'od'. */
> -            size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> -                                       n_datapaths);
> -
> -            bitmap_set0(lflow->dpg_bitmap, index);
> -            lflow->od = datapaths_array[index];
> -
> -            /* Logical flow should be re-hashed to allow lookups. */
> -            uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> -            /* Remove from lflows. */
> -            hmap_remove(lflows, &lflow->hmap_node);
> -            hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> -                                                  hash);
> -            /* Add to single_dp_lflows. */
> -            hmap_insert_fast(&single_dp_lflows, &lflow->hmap_node, hash);
> -        }
> -    }
> -
> -    /* Merge multiple and single dp hashes. */
> -
> -    fast_hmap_merge(lflows, &single_dp_lflows);
> -
> -    hmap_destroy(&single_dp_lflows);
> -
> -    stopwatch_stop(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> +     * lflow table to a correct size before doing lookups */
> +    lflow_table_expand(lflows);
> +
>      stopwatch_start(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> -
> -    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> -    /* Push changes to the Logical_Flow table to database. */
> -    const struct sbrec_logical_flow *sbflow;
> -    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow,
> -                                     input_data->sbrec_logical_flow_table) {
> -        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> -        struct ovn_datapath *logical_datapath_od = NULL;
> -        size_t i;
> -
> -        /* Find one valid datapath to get the datapath type. */
> -        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> -        if (dp) {
> -            logical_datapath_od = ovn_datapath_from_sbrec(
> -                                        &input_data->ls_datapaths->datapaths,
> -                                        &input_data->lr_datapaths->datapaths,
> -                                        dp);
> -            if (logical_datapath_od
> -                && ovn_datapath_is_stale(logical_datapath_od)) {
> -                logical_datapath_od = NULL;
> -            }
> -        }
> -        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> -            logical_datapath_od = ovn_datapath_from_sbrec(
> -                                        &input_data->ls_datapaths->datapaths,
> -                                        &input_data->lr_datapaths->datapaths,
> -                                        dp_group->datapaths[i]);
> -            if (logical_datapath_od
> -                && !ovn_datapath_is_stale(logical_datapath_od)) {
> -                break;
> -            }
> -            logical_datapath_od = NULL;
> -        }
> -
> -        if (!logical_datapath_od) {
> -            /* This lflow has no valid logical datapaths. */
> -            sbrec_logical_flow_delete(sbflow);
> -            continue;
> -        }
> -
> -        enum ovn_pipeline pipeline
> -            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> -
> -        lflow = ovn_lflow_find(
> -            lflows, dp_group ? NULL : logical_datapath_od,
> -            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> -                            pipeline, sbflow->table_id),
> -            sbflow->priority, sbflow->match, sbflow->actions,
> -            sbflow->controller_meter, sbflow->hash);
> -        if (lflow) {
> -            struct hmap *dp_groups;
> -            size_t n_datapaths;
> -            bool is_switch;
> -
> -            lflow->sb_uuid = sbflow->header_.uuid;
> -            is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> -            if (is_switch) {
> -                n_datapaths = ods_size(input_data->ls_datapaths);
> -                dp_groups = &ls_dp_groups;
> -            } else {
> -                n_datapaths = ods_size(input_data->lr_datapaths);
> -                dp_groups = &lr_dp_groups;
> -            }
> -            if (input_data->ovn_internal_version_changed) {
> -                const char *stage_name = smap_get_def(&sbflow->external_ids,
> -                                                  "stage-name", "");
> -                const char *stage_hint = smap_get_def(&sbflow->external_ids,
> -                                                  "stage-hint", "");
> -                const char *source = smap_get_def(&sbflow->external_ids,
> -                                                  "source", "");
> -
> -                if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> -                    sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                     "stage-name", ovn_stage_to_str(lflow->stage));
> -                }
> -                if (lflow->stage_hint) {
> -                    if (strcmp(stage_hint, lflow->stage_hint)) {
> -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                        "stage-hint", lflow->stage_hint);
> -                    }
> -                }
> -                if (lflow->where) {
> -                    if (strcmp(source, lflow->where)) {
> -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> -                        "source", lflow->where);
> -                    }
> -                }
> -            }
> -
> -            if (lflow->od) {
> -                sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -                sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> -            } else {
> -                lflow->dpg = ovn_dp_group_get_or_create(
> -                                ovnsb_txn, dp_groups, dp_group,
> -                                lflow->n_ods, lflow->dpg_bitmap,
> -                                n_datapaths, is_switch,
> -                                input_data->ls_datapaths,
> -                                input_data->lr_datapaths);
> -
> -                sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> -                sbrec_logical_flow_set_logical_dp_group(sbflow,
> -                                                        lflow->dpg->dp_group);
> -            }
> -
> -            /* This lflow updated.  Not needed anymore. */
> -            hmap_remove(lflows, &lflow->hmap_node);
> -            hmap_insert(&lflows_temp, &lflow->hmap_node,
> -                        hmap_node_hash(&lflow->hmap_node));
> -        } else {
> -            sbrec_logical_flow_delete(sbflow);
> -        }
> -    }
> -
> -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> -        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> -        uint8_t table = ovn_stage_get_table(lflow->stage);
> -        struct hmap *dp_groups;
> -        size_t n_datapaths;
> -        bool is_switch;
> -
> -        is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> -        if (is_switch) {
> -            n_datapaths = ods_size(input_data->ls_datapaths);
> -            dp_groups = &ls_dp_groups;
> -        } else {
> -            n_datapaths = ods_size(input_data->lr_datapaths);
> -            dp_groups = &lr_dp_groups;
> -        }
> -
> -        lflow->sb_uuid = uuid_random();
> -        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> -                                                        &lflow->sb_uuid);
> -        if (lflow->od) {
> -            sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -        } else {
> -            lflow->dpg = ovn_dp_group_get_or_create(
> -                                ovnsb_txn, dp_groups, NULL,
> -                                lflow->n_ods, lflow->dpg_bitmap,
> -                                n_datapaths, is_switch,
> -                                input_data->ls_datapaths,
> -                                input_data->lr_datapaths);
> -
> -            sbrec_logical_flow_set_logical_dp_group(sbflow,
> -                                                    lflow->dpg->dp_group);
> -        }
> -
> -        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> -        sbrec_logical_flow_set_table_id(sbflow, table);
> -        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> -        sbrec_logical_flow_set_match(sbflow, lflow->match);
> -        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> -        if (lflow->io_port) {
> -            struct smap tags = SMAP_INITIALIZER(&tags);
> -            smap_add(&tags, "in_out_port", lflow->io_port);
> -            sbrec_logical_flow_set_tags(sbflow, &tags);
> -            smap_destroy(&tags);
> -        }
> -        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> -
> -        /* Trim the source locator lflow->where, which looks something like
> -         * "ovn/northd/northd.c:1234", down to just the part following the
> -         * last slash, e.g. "northd.c:1234". */
> -        const char *slash = strrchr(lflow->where, '/');
> -#if _WIN32
> -        const char *backslash = strrchr(lflow->where, '\\');
> -        if (!slash || backslash > slash) {
> -            slash = backslash;
> -        }
> -#endif
> -        const char *where = slash ? slash + 1 : lflow->where;
> -
> -        struct smap ids = SMAP_INITIALIZER(&ids);
> -        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> -        smap_add(&ids, "source", where);
> -        if (lflow->stage_hint) {
> -            smap_add(&ids, "stage-hint", lflow->stage_hint);
> -        }
> -        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> -        smap_destroy(&ids);
> -        hmap_remove(lflows, &lflow->hmap_node);
> -        hmap_insert(&lflows_temp, &lflow->hmap_node,
> -                    hmap_node_hash(&lflow->hmap_node));
> -    }
> -    hmap_swap(lflows, &lflows_temp);
> -    hmap_destroy(&lflows_temp);
> +    lflow_table_sync_to_sb(lflows, ovnsb_txn, input_data->ls_datapaths,
> +                           input_data->lr_datapaths,
> +                           input_data->ovn_internal_version_changed,
> +                           input_data->sbrec_logical_flow_table,
> +                           input_data->sbrec_logical_dp_group_table);
>  
>      stopwatch_stop(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> -    struct ovn_dp_group *dpg;
> -    HMAP_FOR_EACH_POP (dpg, node, &ls_dp_groups) {
> -        bitmap_free(dpg->bitmap);
> -        free(dpg);
> -    }
> -    hmap_destroy(&ls_dp_groups);
> -    HMAP_FOR_EACH_POP (dpg, node, &lr_dp_groups) {
> -        bitmap_free(dpg->bitmap);
> -        free(dpg);
> -    }
> -    hmap_destroy(&lr_dp_groups);
>  
>      /* Push changes to the Multicast_Group table to database. */
>      const struct sbrec_multicast_group *sbmc;
> -    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (sbmc,
> -                                input_data->sbrec_multicast_group_table) {
> +    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (
> +            sbmc, input_data->sbrec_multicast_group_table) {
>          struct ovn_datapath *od = ovn_datapath_from_sbrec(
> -                                       &input_data->ls_datapaths->datapaths,
> -                                       &input_data->lr_datapaths->datapaths,
> -                                       sbmc->datapath);
> +            &input_data->ls_datapaths->datapaths,
> +            &input_data->lr_datapaths->datapaths,
> +            sbmc->datapath);
>  
>          if (!od || ovn_datapath_is_stale(od)) {
>              sbrec_multicast_group_delete(sbmc);
> @@ -16906,120 +16041,22 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>      hmap_destroy(&mcast_groups);
>  }
>  
> -static void
> -sync_lsp_lflows_to_sb(struct ovsdb_idl_txn *ovnsb_txn,
> -                      struct lflow_input *lflow_input,
> -                      struct hmap *lflows,
> -                      struct ovn_lflow *lflow)
> -{
> -    size_t n_datapaths;
> -    struct ovn_datapath **datapaths_array;
> -    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> -        n_datapaths = ods_size(lflow_input->ls_datapaths);
> -        datapaths_array = lflow_input->ls_datapaths->array;
> -    } else {
> -        n_datapaths = ods_size(lflow_input->lr_datapaths);
> -        datapaths_array = lflow_input->lr_datapaths->array;
> -    }
> -    uint32_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> -    ovs_assert(n_ods == 1);
> -    /* There is only one datapath, so it should be moved out of the
> -     * group to a single 'od'. */
> -    size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> -                               n_datapaths);
> -
> -    bitmap_set0(lflow->dpg_bitmap, index);
> -    lflow->od = datapaths_array[index];
> -
> -    /* Logical flow should be re-hashed to allow lookups. */
> -    uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> -    /* Remove from lflows. */
> -    hmap_remove(lflows, &lflow->hmap_node);
> -    hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> -                                          hash);
> -    /* Add back. */
> -    hmap_insert(lflows, &lflow->hmap_node, hash);
> -
> -    /* Sync to SB. */
> -    const struct sbrec_logical_flow *sbflow;
> -    /* Note: uuid_random acquires a global mutex. If we parallelize the sync to
> -     * SB this may become a bottleneck. */
> -    lflow->sb_uuid = uuid_random();
> -    sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> -                                                    &lflow->sb_uuid);
> -    const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> -    uint8_t table = ovn_stage_get_table(lflow->stage);
> -    sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> -    sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> -    sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> -    sbrec_logical_flow_set_table_id(sbflow, table);
> -    sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> -    sbrec_logical_flow_set_match(sbflow, lflow->match);
> -    sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> -    if (lflow->io_port) {
> -        struct smap tags = SMAP_INITIALIZER(&tags);
> -        smap_add(&tags, "in_out_port", lflow->io_port);
> -        sbrec_logical_flow_set_tags(sbflow, &tags);
> -        smap_destroy(&tags);
> -    }
> -    sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> -    /* Trim the source locator lflow->where, which looks something like
> -     * "ovn/northd/northd.c:1234", down to just the part following the
> -     * last slash, e.g. "northd.c:1234". */
> -    const char *slash = strrchr(lflow->where, '/');
> -#if _WIN32
> -    const char *backslash = strrchr(lflow->where, '\\');
> -    if (!slash || backslash > slash) {
> -        slash = backslash;
> -    }
> -#endif
> -    const char *where = slash ? slash + 1 : lflow->where;
> -
> -    struct smap ids = SMAP_INITIALIZER(&ids);
> -    smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> -    smap_add(&ids, "source", where);
> -    if (lflow->stage_hint) {
> -        smap_add(&ids, "stage-hint", lflow->stage_hint);
> -    }
> -    sbrec_logical_flow_set_external_ids(sbflow, &ids);
> -    smap_destroy(&ids);
> -}
> -
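The source-locator trimming that the removed blocks above duplicated (and that this patch consolidates into lflow-mgr.c) can be sketched as a standalone helper. This is a minimal sketch; the function name is illustrative, not part of the OVN API:

```c
#include <string.h>

/* Sketch of the "where" trimming shown above: reduce a source locator
 * such as "ovn/northd/northd.c:1234" to just "northd.c:1234" by keeping
 * only the text after the last path separator.  Windows builds also
 * treat '\\' as a separator. */
static const char *
trim_source_locator(const char *where)
{
    const char *slash = strrchr(where, '/');
#ifdef _WIN32
    const char *backslash = strrchr(where, '\\');
    if (!slash || backslash > slash) {
        slash = backslash;
    }
#endif
    return slash ? slash + 1 : where;
}
```

The trimmed string is what ends up in the SB Logical_Flow external_ids:source key, so operators see a short file:line instead of a full build path.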
> -static bool
> -delete_lflow_for_lsp(struct ovn_port *op, bool is_update,
> -                     const struct sbrec_logical_flow_table *sb_lflow_table,
> -                     struct hmap *lflows)
> -{
> -    struct lflow_ref_node *lfrn;
> -    const char *operation = is_update ? "updated" : "deleted";
> -    LIST_FOR_EACH_SAFE (lfrn, lflow_list_node, &op->lflows) {
> -        VLOG_DBG("Deleting SB lflow "UUID_FMT" for %s port %s",
> -                 UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> -
> -        const struct sbrec_logical_flow *sblflow =
> -            sbrec_logical_flow_table_get_for_uuid(sb_lflow_table,
> -                                              &lfrn->lflow->sb_uuid);
> -        if (sblflow) {
> -            sbrec_logical_flow_delete(sblflow);
> -        } else {
> -            static struct vlog_rate_limit rl =
> -                VLOG_RATE_LIMIT_INIT(1, 1);
> -            VLOG_WARN_RL(&rl, "SB lflow "UUID_FMT" not found when handling "
> -                         "%s port %s. Recompute.",
> -                         UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> -            return false;
> -        }
> +void
> +lflow_reset_northd_refs(struct lflow_input *lflow_input)
> +{
> +    struct ovn_port *op;
>  
> -        ovn_lflow_destroy(lflows, lfrn->lflow);
> +    HMAP_FOR_EACH (op, key_node, lflow_input->ls_ports) {
> +        lflow_ref_clear(op->lflow_ref);
> +        lflow_ref_clear(op->stateful_lflow_ref);
>      }
> -    return true;
>  }
>  
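The clear/regenerate/sync cycle that lflow_reset_northd_refs() and the handlers below rely on can be modeled schematically. The structs and functions here are toy stand-ins under assumed semantics, not the real lflow-mgr API:

```c
#include <stddef.h>

/* Toy model of the per-port lflow_ref life cycle: clear the port's
 * references, regenerate its flows, then sync only those flows to the
 * (pretend) southbound database. */
struct toy_lflow_ref {
    size_t n_flows;     /* flows currently attributed to this port */
    size_t n_synced;    /* flows pushed to the pretend SB database */
};

static void
toy_ref_clear(struct toy_lflow_ref *ref)
{
    ref->n_flows = 0;               /* stand-in for lflow_ref_clear() */
}

static void
toy_generate_flows(struct toy_lflow_ref *ref, size_t n)
{
    ref->n_flows += n;              /* stand-in for ovn_lflow_add() calls */
}

static void
toy_ref_sync(struct toy_lflow_ref *ref)
{
    ref->n_synced = ref->n_flows;   /* stand-in for lflow_ref_sync_lflows() */
}
```

The point of the model is the ordering: a port's references must be cleared before regeneration, and only the regenerated set is synced, so stale SB rows for that port disappear without a full recompute.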
>  bool
>  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                   struct tracked_ovn_ports *trk_lsps,
>                                   struct lflow_input *lflow_input,
> -                                 struct hmap *lflows)
> +                                 struct lflow_table *lflows)
>  {
>      struct hmapx_node *hmapx_node;
>      struct ovn_port *op;
> @@ -17028,13 +16065,15 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>          op = hmapx_node->data;
>          /* Make sure 'op' is an lsp and not lrp. */
>          ovs_assert(op->nbsp);
> -
> -        if (!delete_lflow_for_lsp(op, false,
> -                                  lflow_input->sbrec_logical_flow_table,
> -                                  lflows)) {
> -                return false;
> -            }
> -
> +        bool handled = lflow_ref_resync_flows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (!handled) {
> +            return false;
> +        }
>          /* No need to update SB multicast groups, thanks to weak
>           * references. */
>      }
> @@ -17043,13 +16082,8 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>          op = hmapx_node->data;
>          /* Make sure 'op' is an lsp and not lrp. */
>          ovs_assert(op->nbsp);
> -
> -        /* Delete old lflows. */
> -        if (!delete_lflow_for_lsp(op, true,
> -                                  lflow_input->sbrec_logical_flow_table,
> -                                  lflows)) {
> -            return false;
> -        }
> +        /* Clear old lflows. */
> +        lflow_ref_unlink_lflows(op->lflow_ref);
>  
>          /* Generate new lflows. */
>          struct ds match = DS_EMPTY_INITIALIZER;
> @@ -17060,16 +16094,42 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                                   &match, &actions,
>                                                   lflows);
>  
> +        /* Sync the new flows to SB. */
> +        bool handled = lflow_ref_sync_lflows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (!handled) {
> +            return false;
> +        }
> +
>          if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
>              const struct lr_stateful_record *lr_stateful_rec =
>                  lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
>                                                  op->peer->od->index);
>              ovs_assert(lr_stateful_rec);
>  
> +            /* Clear old lflows. */
> +            lflow_ref_unlink_lflows(op->stateful_lflow_ref);
> +
> +            /* Generate new lflows. */
>              build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
>                                          lflow_input->lr_stateful_table,
>                                          lflow_input->lr_ports,
> -                                        lflows, &match, &actions);
> +                                        lflows, &match, &actions,
> +                                        op->stateful_lflow_ref);
> +            handled = lflow_ref_sync_lflows(
> +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> +                lflow_input->ls_datapaths,
> +                lflow_input->lr_datapaths,
> +                lflow_input->ovn_internal_version_changed,
> +                lflow_input->sbrec_logical_flow_table,
> +                lflow_input->sbrec_logical_dp_group_table);
> +            if (!handled) {
> +                return false;
> +            }
>          }
>  
>          ds_destroy(&match);
> @@ -17077,13 +16137,6 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>  
>          /* SB port_binding is not deleted, so don't update SB multicast
>           * groups. */
> -
> -        /* Sync the new flows to SB. */
> -        struct lflow_ref_node *lfrn;
> -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> -                                  lfrn->lflow);
> -        }
>      }
>  
>      HMAPX_FOR_EACH (hmapx_node, &trk_lsps->created) {
> @@ -17108,6 +16161,17 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                                   lflow_input->meter_groups,
>                                                   &match, &actions, lflows);
>  
> +        /* Sync the newly added flows to SB. */
> +        bool handled = lflow_ref_sync_lflows(
> +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> +            lflow_input->lr_datapaths,
> +            lflow_input->ovn_internal_version_changed,
> +            lflow_input->sbrec_logical_flow_table,
> +            lflow_input->sbrec_logical_dp_group_table);
> +        if (!handled) {
> +            return false;
> +        }
> +
>          if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
>              const struct lr_stateful_record *lr_stateful_rec =
>                  lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
> @@ -17117,7 +16181,18 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>              build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
>                                          lflow_input->lr_stateful_table,
>                                          lflow_input->lr_ports,
> -                                        lflows, &match, &actions);
> +                                        lflows, &match, &actions,
> +                                        op->stateful_lflow_ref);
> +            handled = lflow_ref_sync_lflows(
> +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> +                lflow_input->ls_datapaths,
> +                lflow_input->lr_datapaths,
> +                lflow_input->ovn_internal_version_changed,
> +                lflow_input->sbrec_logical_flow_table,
> +                lflow_input->sbrec_logical_dp_group_table);
> +            if (!handled) {
> +                return false;
> +            }
>          }
>  
>          ds_destroy(&match);
> @@ -17146,13 +16221,6 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>              sbrec_multicast_group_update_ports_addvalue(sbmc_unknown,
>                                                          op->sb);
>          }
> -
> -        /* Sync the newly added flows to SB. */
> -        struct lflow_ref_node *lfrn;
> -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> -                                    lfrn->lflow);
> -        }
>      }
>  
>      return true;
> diff --git a/northd/northd.h b/northd/northd.h
> index 88406bffee..42b4eee607 100644
> --- a/northd/northd.h
> +++ b/northd/northd.h
> @@ -23,6 +23,7 @@
>  #include "northd/en-port-group.h"
>  #include "northd/ipam.h"
>  #include "openvswitch/hmap.h"
> +#include "ovs-thread.h"
>  
>  struct northd_input {
>      /* Northbound table references */
> @@ -164,13 +165,6 @@ struct northd_data {
>      struct northd_tracked_data trk_data;
>  };
>  
> -struct lflow_data {
> -    struct hmap lflows;
> -};
> -
> -void lflow_data_init(struct lflow_data *);
> -void lflow_data_destroy(struct lflow_data *);
> -
>  struct lr_nat_table;
>  
>  struct lflow_input {
> @@ -182,6 +176,7 @@ struct lflow_input {
>      const struct sbrec_logical_flow_table *sbrec_logical_flow_table;
>      const struct sbrec_multicast_group_table *sbrec_multicast_group_table;
>      const struct sbrec_igmp_group_table *sbrec_igmp_group_table;
> +    const struct sbrec_logical_dp_group_table *sbrec_logical_dp_group_table;
>  
>      /* Indexes */
>      struct ovsdb_idl_index *sbrec_mcast_group_by_name_dp;
> @@ -201,6 +196,15 @@ struct lflow_input {
>      bool ovn_internal_version_changed;
>  };
>  
> +extern int parallelization_state;
> +enum {
> +    STATE_NULL,               /* parallelization is off */
> +    STATE_INIT_HASH_SIZES,    /* parallelization is on; hash sizing needed */
> +    STATE_USE_PARALLELIZATION /* parallelization is on */
> +};
> +
> +extern thread_local size_t thread_lflow_counter;
> +
>  /*
>   * Multicast snooping and querier per datapath configuration.
>   */
> @@ -344,6 +348,179 @@ struct ovn_datapath {
>  const struct ovn_datapath *ovn_datapath_find(const struct hmap *datapaths,
>                                               const struct uuid *uuid);
>  
> +struct ovn_datapath *ovn_datapath_from_sbrec(
> +    const struct hmap *ls_datapaths, const struct hmap *lr_datapaths,
> +    const struct sbrec_datapath_binding *);
> +
> +static inline bool
> +ovn_datapath_is_stale(const struct ovn_datapath *od)
> +{
> +    return !od->nbr && !od->nbs;
> +}
> +
> +/* Pipeline stages. */
> +
> +/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> +enum ovn_datapath_type {
> +    DP_SWITCH,                  /* OVN logical switch. */
> +    DP_ROUTER                   /* OVN logical router. */
> +};
> +
> +/* Returns an "enum ovn_stage" built from the arguments.
> + *
> + * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> + * functions can't be used in enums or switch cases.) */
> +#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> +    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> +
> +/* A stage within an OVN logical switch or router.
> + *
> + * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> + * or router, whether the stage is part of the ingress or egress pipeline, and
> + * the table within that pipeline.  The first three components are combined to
> + * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> + * S_ROUTER_OUT_DELIVERY. */
> +enum ovn_stage {
> +#define PIPELINE_STAGES                                                   \
> +    /* Logical switch ingress stages. */                                  \
> +    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> +    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> +    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB,     2, "ls_in_lookup_fdb")    \
> +    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> +    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> +                   "ls_in_acl_after_lb_eval")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> +                   "ls_in_acl_after_lb_action")  \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> +    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> +                                                                          \
> +    /* Logical switch egress stages. */                                   \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> +    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> +    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> +    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> +                                                                      \
> +    /* Logical router ingress stages. */                              \
> +    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> +    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> +    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> +    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> +    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> +    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> +                                                                      \
> +    /* Logical router egress stages. */                               \
> +    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> +                   "lr_out_chk_dnat_local")                                  \
> +    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> +    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> +    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> +    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> +    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> +    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> +
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> +    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> +        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +};
> +
> +enum ovn_datapath_type ovn_stage_to_datapath_type(enum ovn_stage stage);
> +
> +
> +/* Returns 'od''s datapath type. */
> +static inline enum ovn_datapath_type
> +ovn_datapath_get_type(const struct ovn_datapath *od)
> +{
> +    return od->nbs ? DP_SWITCH : DP_ROUTER;
> +}
> +
> +/* Returns an "enum ovn_stage" built from the arguments. */
> +static inline enum ovn_stage
> +ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> +                uint8_t table)
> +{
> +    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> +}
> +
> +/* Returns the pipeline to which 'stage' belongs. */
> +static inline enum ovn_pipeline
> +ovn_stage_get_pipeline(enum ovn_stage stage)
> +{
> +    return (stage >> 8) & 1;
> +}
> +
> +/* Returns the pipeline name to which 'stage' belongs. */
> +static inline const char *
> +ovn_stage_get_pipeline_name(enum ovn_stage stage)
> +{
> +    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> +}
> +
> +/* Returns the table to which 'stage' belongs. */
> +static inline uint8_t
> +ovn_stage_get_table(enum ovn_stage stage)
> +{
> +    return stage & 0xff;
> +}
> +
> +/* Returns a string name for 'stage'. */
> +static inline const char *
> +ovn_stage_to_str(enum ovn_stage stage)
> +{
> +    switch (stage) {
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> +        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +        default: return "<unknown>";
> +    }
> +}
> +
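The OVN_STAGE_BUILD layout above packs the datapath type into bit 9, the pipeline into bit 8, and the table number into the low 8 bits, and the accessors invert it. A minimal standalone model of that round trip (the suffixed enum names are illustrative, chosen to avoid clashing with the real enums):

```c
#include <stdint.h>

/* Standalone model of the stage encoding shown above:
 * bit 9 = datapath type, bit 8 = pipeline, bits 0-7 = table. */
enum toy_dp_type { TOY_DP_SWITCH = 0, TOY_DP_ROUTER = 1 };
enum toy_pipeline { TOY_P_IN = 0, TOY_P_OUT = 1 };

#define TOY_STAGE_BUILD(DP, PIPE, TABLE) \
    (((DP) << 9) | ((PIPE) << 8) | (TABLE))

static inline enum toy_dp_type
toy_stage_dp(uint32_t stage)
{
    return (stage >> 9) & 1;
}

static inline enum toy_pipeline
toy_stage_pipeline(uint32_t stage)
{
    return (stage >> 8) & 1;
}

static inline uint8_t
toy_stage_table(uint32_t stage)
{
    return stage & 0xff;
}
```

Because the table field is 8 bits wide, each pipeline can hold up to 256 tables, matching the uint8_t table id stored in the SB Logical_Flow row.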
>  /* A logical switch port or logical router port.
>   *
>   * In steady state, an ovn_port points to a northbound Logical_Switch_Port
> @@ -434,8 +611,10 @@ struct ovn_port {
>      /* Temporarily used for traversing a list (or hmap) of ports. */
>      bool visited;
>  
> -    /* List of struct lflow_ref_node that points to the lflows generated by
> -     * this ovn_port.
> +    /* Only used for router-type LSPs whose peer is an l3dgw_port. */
> +    bool enable_router_port_acl;
> +
> +    /* Reference of lflows generated for this ovn_port.
>       *
>       * This data is initialized and destroyed by the en_northd node, but
>       * populated and used only by the en_lflow node. Ideally this data should
> @@ -453,11 +632,16 @@ struct ovn_port {
>       * Adding the list here is more straightforward. The drawback is that we
>       * need to keep in mind that this data belongs to en_lflow node, so never
>       * access it from any other nodes.
> +     *
> +     * 'lflow_ref' is used to reference generic logical flows generated for
> +     *  this ovn_port.
> +     *
> +     * 'stateful_lflow_ref' is used for logical switch ports of type
> +     * 'patch/router' to reference logical flows generated for this ovn_port
> +     *  from the 'lr_stateful' record of the peer port's datapath.
>       */
> -    struct ovs_list lflows;
> -
> -    /* Only used for the router type LSP whose peer is l3dgw_port */
> -    bool enable_router_port_acl;
> +    struct lflow_ref *lflow_ref;
> +    struct lflow_ref *stateful_lflow_ref;
>  };
>  
>  void ovnnb_db_run(struct northd_input *input_data,
> @@ -480,13 +664,17 @@ void northd_destroy(struct northd_data *data);
>  void northd_init(struct northd_data *data);
>  void northd_indices_create(struct northd_data *data,
>                             struct ovsdb_idl *ovnsb_idl);
> +
> +struct lflow_table;
>  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
>                    struct lflow_input *input_data,
> -                  struct hmap *lflows);
> +                  struct lflow_table *);
> +void lflow_reset_northd_refs(struct lflow_input *);
> +
>  bool lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
>                                        struct tracked_ovn_ports *,
>                                        struct lflow_input *,
> -                                      struct hmap *lflows);
> +                                      struct lflow_table *lflows);
>  bool northd_handle_sb_port_binding_changes(
>      const struct sbrec_port_binding_table *, struct hmap *ls_ports,
>      struct hmap *lr_ports);
> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> index 084d675644..e0e60f3559 100644
> --- a/northd/ovn-northd.c
> +++ b/northd/ovn-northd.c
> @@ -848,6 +848,10 @@ main(int argc, char *argv[])
>          ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
>                               &sbrec_port_group_columns[i]);
>      }
> +    for (size_t i = 0; i < SBREC_LOGICAL_DP_GROUP_N_COLUMNS; i++) {
> +        ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
> +                             &sbrec_logical_dp_group_columns[i]);
> +    }
>  
>      unixctl_command_register("sb-connection-status", "", 0, 0,
>                               ovn_conn_show, ovnsb_idl_loop.idl);
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index 1825fd3e18..f7d47fc7e4 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -11267,6 +11267,222 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
>  AT_CLEANUP
>  ])
>  
> +OVN_FOR_EACH_NORTHD_NO_HV([
> +AT_SETUP([Load balancer incremental processing for multiple LBs with same VIPs])
> +ovn_start
> +
> +check ovn-nbctl ls-add sw0
> +check ovn-nbctl ls-add sw1
> +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl --wait=sb lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +sw0_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw1 lb2
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> > +# Clear the SB:Logical_Flow.logical_dp_group column of all the
> +# logical flows and then modify the NB:Load_balancer.  ovn-northd
> +# should resync the logical flows.
> +for l in $(ovn-sbctl --bare --columns _uuid list logical_flow)
> +do
> +    ovn-sbctl clear logical_flow $l logical_dp_group
> +done
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb set load_balancer lb1 options:foo=bar
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> +
> +# Add back the vip to lb2.
> +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +
> +# Create additional logical switches and associate lb1 to sw0, sw1 and sw2
> +# and associate lb2 to sw3, sw4 and sw5
> +check ovn-nbctl ls-add sw2
> +check ovn-nbctl ls-add sw3
> +check ovn-nbctl ls-add sw4
> +check ovn-nbctl ls-add sw5
> +check ovn-nbctl --wait=sb ls-lb-del sw1 lb2
> +check ovn-nbctl ls-lb-add sw1 lb1
> +check ovn-nbctl ls-lb-add sw2 lb1
> +check ovn-nbctl ls-lb-add sw3 lb2
> +check ovn-nbctl ls-lb-add sw4 lb2
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb ls-lb-add sw5 lb2
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +sw1_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
> +sw2_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw2)
> +sw3_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw3)
> +sw4_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw4)
> +sw5_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw5)
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +echo "dpgrp_dps - $dpgrp_dps"
> +
> > +# Clear the vips for lb2.  The lb logical flow's dp group should contain
> > +# only the sw0, sw1 and sw2 uuids.
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [1], [ignore])
> +
> +# Clear the vips for lb1.  The logical flow should be deleted.
> +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> +
> +AT_CHECK([ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid], [1], [ignore], [ignore])
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +AT_CHECK([test "$lb_lflow_uuid" = ""])
> +
> +
> > +# Now add back the vips, create another lb with the same vips, and
> > +# associate it with sw0 and sw1.
> +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> +check ovn-nbctl --wait=sb lb-add lb3 10.0.0.10:80 10.0.0.3:80
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +
> +check ovn-nbctl ls-lb-add sw0 lb3
> +check ovn-nbctl --wait=sb ls-lb-add sw1 lb3
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +# Now clear lb1 vips.
> > +# Since lb3 is associated with sw0 and sw1, the logical flow dp group
> > +# should reference sw0 and sw1, but not sw2.
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +echo "dpgrp dps - $dpgrp_dps"
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> > +# Now clear lb3 vips.  The logical flow dp group
> > +# should reference only sw3, sw4 and sw5 because lb2 is
> > +# associated with them.
> +
> +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> +check ovn-nbctl --wait=sb clear load_balancer lb3 vips
> +check_engine_stats lflow recompute nocompute
> +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> +
> +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dp" = ""])
> +
> +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> +
> +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> +
> +echo "dpgrp dps - $dpgrp_dps"
> +
> +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> +
> +AT_CLEANUP
> +])
> +
>  OVN_FOR_EACH_NORTHD_NO_HV([
>  AT_SETUP([Logical router incremental processing for NAT])
>  

Regards,
Dumitru
Numan Siddique Jan. 18, 2024, 9:39 p.m. UTC | #2
On Wed, Jan 17, 2024 at 6:23 PM Dumitru Ceara <dceara@redhat.com> wrote:
>
> On 1/11/24 16:31, numans@ovn.org wrote:
> > From: Numan Siddique <numans@ovn.org>
> >
> > ovn_lflow_add() and other related functions/macros are now moved
> > into a separate module - lflow-mgr.c.  This module maintains a
> > table 'struct lflow_table' for the logical flows.  lflow table
> > maintains a hmap to store the logical flows.
> >
> > It also maintains the logical switch and router dp groups.
> >
> > Previous commits which added lflow incremental processing for
> > the VIF logical ports, stored the references to
> > the logical ports' lflows using 'struct lflow_ref_list'.  This
> > struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> > It is modified a bit to store resource-to-lflow references.
> >
> > Example usage of 'struct lflow_ref'.
> >
> > 'struct ovn_port' maintains two instances of lflow_ref, i.e.
> >
> > struct ovn_port {
> >    ...
> >    ...
> >    struct lflow_ref *lflow_ref;
> >    struct lflow_ref *stateful_lflow_ref;
> > };
> >
> > All the logical flows generated by
> > build_lswitch_and_lrouter_iterate_by_lsp() use ovn_port->lflow_ref.
> >
> > All the logical flows generated by build_lsp_lflows_for_lbnats()
> > use ovn_port->stateful_lflow_ref.
> >
> > When handling the ovn_port changes incrementally, the lflows referenced
> > in 'struct ovn_port' are cleared, regenerated, and synced to the
> > SB logical flows.
> >
> > eg.
> >
> > lflow_ref_clear_lflows(op->lflow_ref);
> > build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> > lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
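The clear/rebuild/sync cycle quoted above can be modeled with a tiny stand-alone sketch.  The function names mirror the real lflow_ref API, but the bodies are hypothetical single-counter stand-ins, not the actual northd implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the lflow_ref API named above: the names mirror the
 * real ones, but the bodies are hypothetical, not the real northd code. */
struct lflow_ref {
    size_t n_lflows;            /* Number of lflows currently referenced. */
};

static void
lflow_ref_clear_lflows(struct lflow_ref *ref)
{
    ref->n_lflows = 0;          /* Drop all references before regenerating. */
}

static void
build_lswitch_and_lrouter_iterate_by_lsp(struct lflow_ref *ref)
{
    ref->n_lflows += 2;         /* Pretend this port contributes two lflows. */
}

static size_t
lflow_ref_sync_lflows_to_sb(const struct lflow_ref *ref)
{
    return ref->n_lflows;       /* Pretend this many SB rows were synced. */
}
```

Clearing before rebuilding is what keeps repeated incremental runs idempotent: a second clear/build/sync cycle syncs the same two lflows rather than four.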
> >
> > This patch makes a few more changes:
> >   -  Logical flows are now hashed without the logical
> >      datapaths.  If a logical flow is referenced by just one
> >      datapath, we don't rehash it.
> >
> >   -  The synthetic 'hash' column of sbrec_logical_flow now
> >      doesn't use the logical datapath.  This means that
> >      when ovn-northd is updated/upgraded and has this commit,
> >      all the logical flows with 'logical_datapath' column
> >      set will get deleted and re-added causing some disruptions.
> >
> >   -  With the commit [1] which added I-P support for logical
> >      port changes, multiple logical flows with same match 'M'
> >      and actions 'A' are generated and stored without the
> >      dp groups, which was not the case prior to
> >      that patch.
> >      One example to generate these lflows is:
> >              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> >
> >      Now with this patch we go back to the earlier way, i.e.
> >      one logical flow with logical_dp_group set.
> >
> >   -  With this patch, any update to a logical port which
> >      doesn't result in new logical flows will not cause
> >      deletion and re-addition of the same logical flows.
> >      Eg.
> >      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
> >      will be a no-op to the SB logical flow table.
> >
> > [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow' node.")
> >
> > Signed-off-by: Numan Siddique <numans@ovn.org>
> > ---
>
> Hi Numan,
>
> This is a huge patch and I'm not sure I can thoroughly review it.  I do
> think there are a few improvements we can make (even if we don't split
> it in smaller parts).  Hopefully those simplify the code a bit.  Please
> see inline.
>
> >  lib/ovn-util.c           |   18 +-
> >  lib/ovn-util.h           |    2 -
> >  northd/automake.mk       |    4 +-
> >  northd/en-lflow.c        |   24 +-
> >  northd/en-lflow.h        |    6 +
> >  northd/inc-proc-northd.c |    4 +-
> >  northd/lflow-mgr.c       | 1261 +++++++++++++++++++++++++
> >  northd/lflow-mgr.h       |  186 ++++
> >  northd/northd.c          | 1918 ++++++++++----------------------------
> >  northd/northd.h          |  218 ++++-
> >  northd/ovn-northd.c      |    4 +
> >  tests/ovn-northd.at      |  216 +++++
> >  12 files changed, 2395 insertions(+), 1466 deletions(-)
> >  create mode 100644 northd/lflow-mgr.c
> >  create mode 100644 northd/lflow-mgr.h
> >
> > diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> > index c8b89cc216..ba29baea63 100644
> > --- a/lib/ovn-util.c
> > +++ b/lib/ovn-util.c
> > @@ -620,13 +620,10 @@ ovn_pipeline_from_name(const char *pipeline)
> >  uint32_t
> >  sbrec_logical_flow_hash(const struct sbrec_logical_flow *lf)
> >  {
> > -    const struct sbrec_datapath_binding *ld = lf->logical_datapath;
> > -    uint32_t hash = ovn_logical_flow_hash(lf->table_id,
> > -                                          ovn_pipeline_from_name(lf->pipeline),
> > -                                          lf->priority, lf->match,
> > -                                          lf->actions);
> > -
> > -    return ld ? ovn_logical_flow_hash_datapath(&ld->header_.uuid, hash) : hash;
> > +    return ovn_logical_flow_hash(lf->table_id,
> > +                                 ovn_pipeline_from_name(lf->pipeline),
> > +                                 lf->priority, lf->match,
> > +                                 lf->actions);
> >  }
> >
> >  uint32_t
> > @@ -639,13 +636,6 @@ ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
> >      return hash_string(actions, hash);
> >  }
> >
> > -uint32_t
> > -ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> > -                               uint32_t hash)
> > -{
> > -    return hash_add(hash, uuid_hash(logical_datapath));
> > -}
> > -
> >
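A hypothetical miniature of the hashing change above: since the datapath is no longer folded into the hash, two otherwise-identical flows on different datapaths now hash equal, which is what lets them share a single SB row via a datapath group.  `struct toy_flow` and `toy_flow_hash` are illustrative names, not the real API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of the flow hash: the datapath key is a field of
 * the flow but is deliberately NOT hashed, mirroring the removal of
 * ovn_logical_flow_hash_datapath() above. */
struct toy_flow {
    uint32_t dp_key;            /* Datapath: excluded from the hash. */
    uint8_t table_id;
    uint16_t priority;
    uint32_t match_hash;        /* Stand-in for hashing the match string. */
};

static uint32_t
toy_flow_hash(const struct toy_flow *f)
{
    uint32_t h = f->table_id;
    h = h * 33 + f->priority;
    h = h * 33 + f->match_hash;
    return h;
}
```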
> >  struct tnlid_node {
> >      struct hmap_node hmap_node;
> > diff --git a/lib/ovn-util.h b/lib/ovn-util.h
> > index d245d57d56..1c430027c6 100644
> > --- a/lib/ovn-util.h
> > +++ b/lib/ovn-util.h
> > @@ -145,8 +145,6 @@ uint32_t sbrec_logical_flow_hash(const struct sbrec_logical_flow *);
> >  uint32_t ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
> >                                 uint16_t priority,
> >                                 const char *match, const char *actions);
> > -uint32_t ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
> > -                                        uint32_t hash);
> >  void ovn_conn_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
> >                     const char *argv[] OVS_UNUSED, void *idl_);
> >
> > diff --git a/northd/automake.mk b/northd/automake.mk
> > index a178541759..7c6d56a4ff 100644
> > --- a/northd/automake.mk
> > +++ b/northd/automake.mk
> > @@ -33,7 +33,9 @@ northd_ovn_northd_SOURCES = \
> >       northd/inc-proc-northd.c \
> >       northd/inc-proc-northd.h \
> >       northd/ipam.c \
> > -     northd/ipam.h
> > +     northd/ipam.h \
> > +     northd/lflow-mgr.c \
> > +     northd/lflow-mgr.h
> >  northd_ovn_northd_LDADD = \
> >       lib/libovn.la \
> >       $(OVSDB_LIBDIR)/libovsdb.la \
> > diff --git a/northd/en-lflow.c b/northd/en-lflow.c
> > index eb6b2a8666..fef9a1352d 100644
> > --- a/northd/en-lflow.c
> > +++ b/northd/en-lflow.c
> > @@ -24,6 +24,7 @@
> >  #include "en-ls-stateful.h"
> >  #include "en-northd.h"
> >  #include "en-meters.h"
> > +#include "lflow-mgr.h"
> >
> >  #include "lib/inc-proc-eng.h"
> >  #include "northd.h"
> > @@ -58,6 +59,8 @@ lflow_get_input_data(struct engine_node *node,
> >          EN_OVSDB_GET(engine_get_input("SB_multicast_group", node));
> >      lflow_input->sbrec_igmp_group_table =
> >          EN_OVSDB_GET(engine_get_input("SB_igmp_group", node));
> > +    lflow_input->sbrec_logical_dp_group_table =
> > +        EN_OVSDB_GET(engine_get_input("SB_logical_dp_group", node));
> >
> >      lflow_input->sbrec_mcast_group_by_name_dp =
> >             engine_ovsdb_node_get_index(
> > @@ -90,17 +93,19 @@ void en_lflow_run(struct engine_node *node, void *data)
> >      struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections);
> >      lflow_input.bfd_connections = &bfd_connections;
> >
> > +    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
> > +
> >      struct lflow_data *lflow_data = data;
> > -    lflow_data_destroy(lflow_data);
> > -    lflow_data_init(lflow_data);
> > +    lflow_table_clear(lflow_data->lflow_table);
> > +    lflow_reset_northd_refs(&lflow_input);
> >
> > -    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
> >      build_bfd_table(eng_ctx->ovnsb_idl_txn,
> >                      lflow_input.nbrec_bfd_table,
> >                      lflow_input.sbrec_bfd_table,
> >                      lflow_input.lr_ports,
> >                      &bfd_connections);
> > -    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, &lflow_data->lflows);
> > +    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input,
> > +                 lflow_data->lflow_table);
> >      bfd_cleanup_connections(lflow_input.nbrec_bfd_table,
> >                              &bfd_connections);
> >      hmap_destroy(&bfd_connections);
> > @@ -131,7 +136,7 @@ lflow_northd_handler(struct engine_node *node,
> >
> >      if (!lflow_handle_northd_port_changes(eng_ctx->ovnsb_idl_txn,
> >                                  &northd_data->trk_data.trk_lsps,
> > -                                &lflow_input, &lflow_data->lflows)) {
> > +                                &lflow_input, lflow_data->lflow_table)) {
> >          return false;
> >      }
> >
> > @@ -160,11 +165,14 @@ void *en_lflow_init(struct engine_node *node OVS_UNUSED,
> >                       struct engine_arg *arg OVS_UNUSED)
> >  {
> >      struct lflow_data *data = xmalloc(sizeof *data);
> > -    lflow_data_init(data);
> > +    data->lflow_table = lflow_table_alloc();
> > +    lflow_table_init(data->lflow_table);
> >      return data;
> >  }
> >
> > -void en_lflow_cleanup(void *data)
> > +void en_lflow_cleanup(void *data_)
> >  {
> > -    lflow_data_destroy(data);
> > +    struct lflow_data *data = data_;
> > +    lflow_table_destroy(data->lflow_table);
> > +    data->lflow_table = NULL;
>
> It's not really needed to set lflow_table to NULL, right?

Ack.

>
> >  }
> > diff --git a/northd/en-lflow.h b/northd/en-lflow.h
> > index 5417b2faff..f7325c56b1 100644
> > --- a/northd/en-lflow.h
> > +++ b/northd/en-lflow.h
> > @@ -9,6 +9,12 @@
> >
> >  #include "lib/inc-proc-eng.h"
> >
> > +struct lflow_table;
> > +
> > +struct lflow_data {
> > +    struct lflow_table *lflow_table;
> > +};
> > +
> >  void en_lflow_run(struct engine_node *node, void *data);
> >  void *en_lflow_init(struct engine_node *node, struct engine_arg *arg);
> >  void en_lflow_cleanup(void *data);
> > diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
> > index 9ce4279ee8..0e17bfe2e6 100644
> > --- a/northd/inc-proc-northd.c
> > +++ b/northd/inc-proc-northd.c
> > @@ -99,7 +99,8 @@ static unixctl_cb_func chassis_features_list;
> >      SB_NODE(bfd, "bfd") \
> >      SB_NODE(fdb, "fdb") \
> >      SB_NODE(static_mac_binding, "static_mac_binding") \
> > -    SB_NODE(chassis_template_var, "chassis_template_var")
> > +    SB_NODE(chassis_template_var, "chassis_template_var") \
> > +    SB_NODE(logical_dp_group, "logical_dp_group")
> >
> >  enum sb_engine_node {
> >  #define SB_NODE(NAME, NAME_STR) SB_##NAME,
> > @@ -229,6 +230,7 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
> >      engine_add_input(&en_lflow, &en_sb_igmp_group, NULL);
> >      engine_add_input(&en_lflow, &en_lr_stateful, NULL);
> >      engine_add_input(&en_lflow, &en_ls_stateful, NULL);
> > +    engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL);
> >      engine_add_input(&en_lflow, &en_northd, lflow_northd_handler);
> >      engine_add_input(&en_lflow, &en_port_group, lflow_port_group_handler);
> >
> > diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
> > new file mode 100644
> > index 0000000000..3cf9696f6e
> > --- /dev/null
> > +++ b/northd/lflow-mgr.c
> > @@ -0,0 +1,1261 @@
> > +/*
> > + * Copyright (c) 2023, Red Hat, Inc.
>
> 2024

Ack.


>
> > + *
> > + * Licensed under the Apache License, Version 2.0 (the "License");
> > + * you may not use this file except in compliance with the License.
> > + * You may obtain a copy of the License at:
> > + *
> > + *     http://www.apache.org/licenses/LICENSE-2.0
> > + *
> > + * Unless required by applicable law or agreed to in writing, software
> > + * distributed under the License is distributed on an "AS IS" BASIS,
> > + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > + * See the License for the specific language governing permissions and
> > + * limitations under the License.
> > + */
> > +
> > +#include <config.h>
> > +
> > +#include <getopt.h>
> > +#include <stdlib.h>
> > +#include <stdio.h>
>
> None of these three includes is needed.
>
> > +
> > +/* OVS includes */
> > +#include "include/openvswitch/hmap.h"
> > +#include "include/openvswitch/thread.h"
> > +#include "lib/bitmap.h"
> > +#include "lib/uuidset.h"
> > +#include "openvswitch/util.h"
> > +#include "openvswitch/vlog.h"
> > +#include "ovs-thread.h"
> > +#include "stopwatch.h"
>
> stopwatch.h is also not needed.
>
> > +
> > +/* OVN includes */
> > +#include "debug.h"
> > +#include "lflow-mgr.h"
> > +#include "lib/ovn-parallel-hmap.h"
> > +#include "northd.h"
>
> Actually, the whole include list can be reduced to:
>
> #include <config.h>
>
> /* OVS includes */
> #include "include/openvswitch/thread.h"
> #include "lib/bitmap.h"
> #include "openvswitch/vlog.h"
>
> /* OVN includes */
> #include "debug.h"
> #include "lflow-mgr.h"
> #include "lib/ovn-parallel-hmap.h"

Makes sense.  Thanks.  I'll do this in v6.


>
> > +
> > +VLOG_DEFINE_THIS_MODULE(lflow_mgr);
> > +
> > +/* Static function declarations. */
> > +struct ovn_lflow;
> > +
> > +static void ovn_lflow_init(struct ovn_lflow *, struct ovn_datapath *od,
> > +                           size_t dp_bitmap_len, enum ovn_stage stage,
> > +                           uint16_t priority, char *match,
> > +                           char *actions, char *io_port,
> > +                           char *ctrl_meter, char *stage_hint,
> > +                           const char *where);
> > +static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> > +                                        enum ovn_stage stage,
> > +                                        uint16_t priority, const char *match,
> > +                                        const char *actions,
> > +                                        const char *ctrl_meter, uint32_t hash);
> > +static void ovn_lflow_destroy(struct lflow_table *lflow_table,
> > +                              struct ovn_lflow *lflow);
> > +static char *ovn_lflow_hint(const struct ovsdb_idl_row *row);
> > +
> > +static struct ovn_lflow *do_ovn_lflow_add(
> > +    struct lflow_table *, const struct ovn_datapath *,
> > +    const unsigned long *dp_bitmap, size_t dp_bitmap_len, uint32_t hash,
> > +    enum ovn_stage stage, uint16_t priority, const char *match,
> > +    const char *actions, const char *io_port,
> > +    const char *ctrl_meter,
> > +    const struct ovsdb_idl_row *stage_hint,
> > +    const char *where);
> > +
> > +
> > +static struct ovs_mutex *lflow_hash_lock(const struct hmap *lflow_table,
> > +                                         uint32_t hash);
> > +static void lflow_hash_unlock(struct ovs_mutex *hash_lock);
> > +
> > +static struct ovn_dp_group *ovn_dp_group_get(
> > +    struct hmap *dp_groups, size_t desired_n,
> > +    const unsigned long *desired_bitmap,
> > +    size_t bitmap_len);
> > +static struct ovn_dp_group *ovn_dp_group_create(
> > +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> > +    struct sbrec_logical_dp_group *, size_t desired_n,
> > +    const unsigned long *desired_bitmap,
> > +    size_t bitmap_len, bool is_switch,
> > +    const struct ovn_datapaths *ls_datapaths,
> > +    const struct ovn_datapaths *lr_datapaths);
> > +static struct ovn_dp_group *ovn_dp_group_get(
> > +    struct hmap *dp_groups, size_t desired_n,
> > +    const unsigned long *desired_bitmap,
> > +    size_t bitmap_len);
> > +static struct sbrec_logical_dp_group *ovn_sb_insert_or_update_logical_dp_group(
> > +    struct ovsdb_idl_txn *ovnsb_txn,
> > +    struct sbrec_logical_dp_group *,
> > +    const unsigned long *dpg_bitmap,
> > +    const struct ovn_datapaths *);
> > +static struct ovn_dp_group *ovn_dp_group_find(const struct hmap *dp_groups,
> > +                                              const unsigned long *dpg_bitmap,
> > +                                              size_t bitmap_len,
> > +                                              uint32_t hash);
> > +static void inc_ovn_dp_group_ref(struct ovn_dp_group *);
> > +static void dec_ovn_dp_group_ref(struct hmap *dp_groups,
> > +                                 struct ovn_dp_group *);
> > +static void ovn_dp_group_add_with_reference(struct ovn_lflow *,
> > +                                            const struct ovn_datapath *od,
> > +                                            const unsigned long *dp_bitmap,
> > +                                            size_t bitmap_len);
> > +
> > +static bool lflow_ref_sync_lflows__(
> > +    struct lflow_ref  *, struct lflow_table *,
> > +    struct ovsdb_idl_txn *ovnsb_txn,
> > +    const struct ovn_datapaths *ls_datapaths,
> > +    const struct ovn_datapaths *lr_datapaths,
> > +    bool ovn_internal_version_changed,
> > +    const struct sbrec_logical_flow_table *,
> > +    const struct sbrec_logical_dp_group_table *);
> > +static bool sync_lflow_to_sb(struct ovn_lflow *,
> > +                             struct ovsdb_idl_txn *ovnsb_txn,
> > +                             struct lflow_table *,
> > +                             const struct ovn_datapaths *ls_datapaths,
> > +                             const struct ovn_datapaths *lr_datapaths,
> > +                             bool ovn_internal_version_changed,
> > +                             const struct sbrec_logical_flow *sbflow,
> > +                             const struct sbrec_logical_dp_group_table *);
> > +
> > +extern int parallelization_state;
> > +extern thread_local size_t thread_lflow_counter;
> > +
> > +struct dp_refcnt;
> > +static struct dp_refcnt *dp_refcnt_find(struct hmap *dp_refcnts_map,
> > +                                        size_t dp_index);
> > +static void inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
> > +static bool dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
> > +static void ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *);
> > +static struct lflow_ref_node *lflow_ref_node_find(struct hmap *lflow_ref_nodes,
> > +                                                  struct ovn_lflow *lflow,
> > +                                                  uint32_t lflow_hash);
> > +static void lflow_ref_node_destroy(struct lflow_ref_node *,
> > +                                   struct hmap *lflow_ref_nodes);
> > +
> > +static bool lflow_hash_lock_initialized = false;
> > +/* The lflow_hash_lock is a mutex array that protects updates to the shared
> > + * lflow table across threads when parallel lflow build and dp-group are both
> > + * enabled.  To avoid high contention between threads, a big array of mutexes
> > + * is used instead of just one.  This is possible because when parallel build
> > + * is used we only use hmap_insert_fast() to update the hmap, which does not
> > + * touch the bucket array but only the list in a single bucket.  We only need
> > + * to make sure that when adding lflows to the same hash bucket, the same
> > + * lock is used, so that no two threads can add to the bucket at the same
> > + * time.  It is ok for the same lock to protect multiple buckets, so a
> > + * fixed-size mutex array is used instead of a 1-1 mapping to the hash
> > + * buckets.  This simplifies the implementation while effectively reducing
> > + * lock contention, because the chance of different threads contending for
> > + * the same lock among the large number of locks is very low. */
> > +#define LFLOW_HASH_LOCK_MASK 0xFFFF
> > +static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
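The lock-selection rule the comment describes can be sketched as follows (`lflow_hash_lock_index` is a hypothetical helper; the real code indexes `lflow_hash_locks[]` directly inside `lflow_hash_lock()`):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of mapping a flow hash to one of (LFLOW_HASH_LOCK_MASK + 1) locks.
 * Equal hashes always pick the same lock, which is the only property the
 * parallel build needs; distinct hashes may share a lock, which is safe. */
#define LFLOW_HASH_LOCK_MASK 0xFFFF

static uint32_t
lflow_hash_lock_index(uint32_t hash)
{
    return hash & LFLOW_HASH_LOCK_MASK;
}
```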
> > +
> > +/* Full thread safety analysis is not possible with hash locks, because
> > + * they are taken conditionally based on the 'parallelization_state' and
> > + * a flow hash.  Also, the order in which two hash locks are taken is not
> > + * predictable during the static analysis.
> > + *
> > + * Since the order of taking two locks depends on a random hash, to avoid
> > + * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> > + * of hash locks is similar to a single mutex.
> > + *
> > + * Using a fake mutex to partially simulate thread safety restrictions, as
> > + * if it were actually a single mutex.
> > + *
> > + * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> > + * nature of the lock.  Unlike other attributes, it applies to the
> > + * implementation and not to the interface.  So, we can define a function
> > + * that acquires the lock without analysing the way it does that.
> > + */
> > +extern struct ovs_mutex fake_hash_mutex;
> > +
> > +/* Represents a logical ovn flow (lflow).
> > + *
> > + * A logical flow with match 'M' and actions 'A' - L(M, A) is created
> > + * when the lflow engine node (northd.c) calls lflow_table_add_lflow
> > + * (or one of the helper macros ovn_lflow_add_*).
> > + *
> > + * Each lflow is stored in the lflow_table (see 'struct lflow_table' below)
> > + * and possibly referenced by zero or more lflow_refs
> > + * (see 'struct lflow_ref' and 'struct lflow_ref_node' below).
> > + *
> > + */
> > +struct ovn_lflow {
> > +    struct hmap_node hmap_node;
> > +
> > +    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> > +    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> > +    enum ovn_stage stage;
> > +    uint16_t priority;
> > +    char *match;
> > +    char *actions;
> > +    char *io_port;
> > +    char *stage_hint;
> > +    char *ctrl_meter;
> > +    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> > +                                  * 'dpg_bitmap'. */
> > +    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> > +    const char *where;
> > +
> > +    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> > +    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> > +    struct hmap dp_refcnts_map; /* Maintains the number of times this ovn_lflow
> > +                                 * is referenced by a given datapath.
> > +                                 * Contains 'struct dp_refcnt' in the map. */
> > +};
> > +
> > +/* Logical flow table. */
> > +struct lflow_table {
> > +    struct hmap entries; /* hmap of lflows. */
> > +    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
> > +    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
> > +    ssize_t max_seen_lflow_size;
> > +};
> > +
> > +struct lflow_table *
> > +lflow_table_alloc(void)
> > +{
> > +    struct lflow_table *lflow_table = xzalloc(sizeof *lflow_table);
> > +    lflow_table->max_seen_lflow_size = 128;
> > +
> > +    return lflow_table;
> > +}
> > +
> > +void
> > +lflow_table_init(struct lflow_table *lflow_table)
> > +{
> > +    fast_hmap_size_for(&lflow_table->entries,
> > +                       lflow_table->max_seen_lflow_size);
> > +    ovn_dp_groups_init(&lflow_table->ls_dp_groups);
> > +    ovn_dp_groups_init(&lflow_table->lr_dp_groups);
> > +}
> > +
> > +void
> > +lflow_table_clear(struct lflow_table *lflow_table)
> > +{
> > +    struct ovn_lflow *lflow;
> > +    HMAP_FOR_EACH_POP (lflow, hmap_node, &lflow_table->entries) {
> > +        ovn_lflow_destroy(NULL, lflow);
> > +    }
> > +
> > +    ovn_dp_groups_clear(&lflow_table->ls_dp_groups);
> > +    ovn_dp_groups_clear(&lflow_table->lr_dp_groups);
> > +}
> > +
> > +void
> > +lflow_table_destroy(struct lflow_table *lflow_table)
> > +{
> > +    lflow_table_clear(lflow_table);
> > +    hmap_destroy(&lflow_table->entries);
> > +    ovn_dp_groups_destroy(&lflow_table->ls_dp_groups);
> > +    ovn_dp_groups_destroy(&lflow_table->lr_dp_groups);
> > +    free(lflow_table);
> > +}
> > +
> > +void
> > +lflow_table_expand(struct lflow_table *lflow_table)
> > +{
> > +    hmap_expand(&lflow_table->entries);
> > +
> > +    if (hmap_count(&lflow_table->entries) >
> > +            lflow_table->max_seen_lflow_size) {
> > +        lflow_table->max_seen_lflow_size = hmap_count(&lflow_table->entries);
> > +    }
> > +}
> > +
> > +void
> > +lflow_table_set_size(struct lflow_table *lflow_table, size_t size)
> > +{
> > +    lflow_table->entries.n = size;
> > +}
> > +
> > +void
> > +lflow_table_sync_to_sb(struct lflow_table *lflow_table,
> > +                       struct ovsdb_idl_txn *ovnsb_txn,
> > +                       const struct ovn_datapaths *ls_datapaths,
> > +                       const struct ovn_datapaths *lr_datapaths,
> > +                       bool ovn_internal_version_changed,
> > +                       const struct sbrec_logical_flow_table *sb_flow_table,
> > +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> > +{
> > +    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> > +    struct hmap *lflows = &lflow_table->entries;
> > +    struct ovn_lflow *lflow;
> > +
> > +    /* Push changes to the Logical_Flow table to database. */
> > +    const struct sbrec_logical_flow *sbflow;
> > +    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow, sb_flow_table) {
> > +        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> > +        struct ovn_datapath *logical_datapath_od = NULL;
> > +        size_t i;
> > +
> > +        /* Find one valid datapath to get the datapath type. */
> > +        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> > +        if (dp) {
> > +            logical_datapath_od = ovn_datapath_from_sbrec(
> > +                &ls_datapaths->datapaths, &lr_datapaths->datapaths, dp);
> > +            if (logical_datapath_od
> > +                && ovn_datapath_is_stale(logical_datapath_od)) {
> > +                logical_datapath_od = NULL;
> > +            }
> > +        }
> > +        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> > +            logical_datapath_od = ovn_datapath_from_sbrec(
> > +                &ls_datapaths->datapaths, &lr_datapaths->datapaths,
> > +                dp_group->datapaths[i]);
> > +            if (logical_datapath_od
> > +                && !ovn_datapath_is_stale(logical_datapath_od)) {
> > +                break;
> > +            }
> > +            logical_datapath_od = NULL;
> > +        }
> > +
> > +        if (!logical_datapath_od) {
> > +            /* This lflow has no valid logical datapaths. */
> > +            sbrec_logical_flow_delete(sbflow);
> > +            continue;
> > +        }
> > +
> > +        enum ovn_pipeline pipeline
> > +            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> > +
> > +        lflow = ovn_lflow_find(
> > +            lflows,
> > +            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> > +                            pipeline, sbflow->table_id),
> > +            sbflow->priority, sbflow->match, sbflow->actions,
> > +            sbflow->controller_meter, sbflow->hash);
> > +        if (lflow) {
> > +            sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> > +                             lr_datapaths, ovn_internal_version_changed,
> > +                             sbflow, dpgrp_table);
> > +
> > +            hmap_remove(lflows, &lflow->hmap_node);
> > +            hmap_insert(&lflows_temp, &lflow->hmap_node,
> > +                        hmap_node_hash(&lflow->hmap_node));
> > +        } else {
> > +            sbrec_logical_flow_delete(sbflow);
> > +        }
> > +    }
> > +
> > +    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> > +        sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> > +                         lr_datapaths, ovn_internal_version_changed,
> > +                         NULL, dpgrp_table);
> > +
> > +        hmap_remove(lflows, &lflow->hmap_node);
> > +        hmap_insert(&lflows_temp, &lflow->hmap_node,
> > +                    hmap_node_hash(&lflow->hmap_node));
> > +    }
> > +    hmap_swap(lflows, &lflows_temp);
> > +    hmap_destroy(&lflows_temp);
> > +}
> > +
> > +/* lflow ref */
> > +struct lflow_ref {
> > +    /* head of the list 'struct lflow_ref_node'. */
> > +    struct ovs_list lflows_ref_list;
>
> Why do we need this additional list?  AFAICT we always insert both in
> the lflow_ref_nodes hmap and in this list.  Can't we walk the whole hmap
> every time we need to walk all struct lflow_ref_node?

Good point.  Thanks for the suggestion.   The list can be removed.
I'll address it in v6.
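To illustrate why the hmap alone is enough, here is a self-contained
sketch (demo types only, not the OVS hmap API): a chained hash set can
visit and free every member by walking its own buckets, so no parallel
list is needed just to enumerate the nodes.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only, not OVS code: a minimal chained hash set
 * whose clear() walks the buckets directly. */

#define N_BUCKETS 16

struct demo_node {
    unsigned int hash;
    int value;
    struct demo_node *next;     /* Bucket chain. */
};

struct demo_set {
    struct demo_node *buckets[N_BUCKETS];
    size_t count;
};

static void
demo_set_init(struct demo_set *set)
{
    memset(set, 0, sizeof *set);
}

static void
demo_set_insert(struct demo_set *set, int value)
{
    /* Knuth multiplicative hash, just for the demo. */
    unsigned int hash = (unsigned int) value * 2654435761u;
    struct demo_node *node = malloc(sizeof *node);

    node->hash = hash;
    node->value = value;
    node->next = set->buckets[hash % N_BUCKETS];
    set->buckets[hash % N_BUCKETS] = node;
    set->count++;
}

/* Visits and frees every node using only the bucket array; no side
 * list is required to enumerate the members.  Returns the number of
 * nodes freed. */
static size_t
demo_set_clear(struct demo_set *set)
{
    size_t freed = 0;

    for (size_t i = 0; i < N_BUCKETS; i++) {
        struct demo_node *node = set->buckets[i];
        while (node) {
            struct demo_node *next = node->next;
            free(node);
            freed++;
            node = next;
        }
        set->buckets[i] = NULL;
    }
    set->count = 0;
    return freed;
}
```

The real code would use HMAP_FOR_EACH_POP / HMAP_FOR_EACH_SAFE on
lflow_ref_nodes instead, but the ownership story is the same.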

>
> > +
> > +    /* hmap of 'struct lflow_ref_node'.  This is used to ensure
> > +     * that there are no duplicates in 'lflow_ref_list' above. */
> > +    struct hmap lflow_ref_nodes;
> > +};
> > +
> > +struct lflow_ref_node {
> > +    /* hmap node in the hmap - 'struct lflow_ref->lflow_ref_nodes' */
> > +    struct hmap_node ref_node;
> > +
> > +    /* This list follows different lflows referenced by the
> > +     * 'struct lflow_ref'. List head is lflow_ref->lflows_ref_list. */
> > +    struct ovs_list lflow_list_node;
> > +    /* This list follows different objects that reference the same lflow. List
> > +     * head is ovn_lflow->referenced_by. */
> > +    struct ovs_list ref_list_node;
> > +    /* The lflow. */
> > +    struct ovn_lflow *lflow;
> > +
> > +    /* Index id of the datapath this lflow_ref_node belongs to. */
> > +    size_t dp_index;
> > +
> > +    /* Indicates if the lflow_ref_node for an lflow - L(M, A) is linked
> > +     * to datapath(s) or not.
> > +     * It is set to true when an lflow L(M, A) is referenced by an lflow ref
> > +     * in lflow_table_add_lflow().  It is set to false when
> > +     * lflow_ref_unlink_lflows() unlinks it from the datapath. */
> > +    bool linked;
> > +};
> > +
> > +struct lflow_ref *
> > +lflow_ref_create(void)
> > +{
> > +    struct lflow_ref *lflow_ref = xzalloc(sizeof *lflow_ref);
> > +    ovs_list_init(&lflow_ref->lflows_ref_list);
> > +    hmap_init(&lflow_ref->lflow_ref_nodes);
> > +    return lflow_ref;
> > +}
> > +
> > +void
> > +lflow_ref_destroy(struct lflow_ref *lflow_ref)
> > +{
> > +    struct lflow_ref_node *l;
> > +
> > +    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
> > +        lflow_ref_node_destroy(l, NULL);
> > +    }
> > +
> > +    hmap_destroy(&lflow_ref->lflow_ref_nodes);
> > +    free(lflow_ref);
>
> The whole body of the function is actually:
>
> lflow_ref_clear(lflow_ref);
> hmap_destroy(&lflow_ref->lflow_ref_nodes);
> free(lflow_ref);

Ack.

>
> > +}
> > +
> > +void
> > +lflow_ref_clear(struct lflow_ref *lflow_ref)
> > +{
> > +    struct lflow_ref_node *l;
> > +    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
> > +        lflow_ref_node_destroy(l, NULL);
>
> Why not pass &lflow_ref->lflow_ref_nodes instead of NULL and avoid the
> hmap_clear() below?  On a second look, maybe it's faster to do as you
> did but I guess it just looks weird to me.

I agree it's weird.  In v6 I'll remove the list from lflow_ref and this
will be cleaner.

>
> > +    }
> > +
> > +    hmap_clear(&lflow_ref->lflow_ref_nodes);
> > +}
> > +
> > +/* Unlinks the lflows referenced by the 'lflow_ref'.
> > + * For each lflow_ref_node (lrn) in the lflow_ref, it basically clears
> > + * the datapath id (lrn->dp_index) from the lrn->lflow's dpg bitmap.
> > + */
> > +void
> > +lflow_ref_unlink_lflows(struct lflow_ref *lflow_ref)
> > +{
> > +    struct lflow_ref_node *lrn;
>
> In some places we use 'struct lflow_ref_node *lrn' in others just
> 'struct lflow_ref_node *l'.  I'd prefer if we're consistent.  What if we
> use lrn everywhere?

Ack.


>
> > +
> > +    LIST_FOR_EACH (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
> > +        if (dec_dp_refcnt(&lrn->lflow->dp_refcnts_map,
> > +                          lrn->dp_index)) {
> > +            bitmap_set0(lrn->lflow->dpg_bitmap, lrn->dp_index);
> > +        }
> > +
> > +        lrn->linked = false;
> > +    }
> > +}
> > +
> > +bool
> > +lflow_ref_resync_flows(struct lflow_ref *lflow_ref,
> > +                       struct lflow_table *lflow_table,
> > +                       struct ovsdb_idl_txn *ovnsb_txn,
> > +                       const struct ovn_datapaths *ls_datapaths,
> > +                       const struct ovn_datapaths *lr_datapaths,
> > +                       bool ovn_internal_version_changed,
> > +                       const struct sbrec_logical_flow_table *sbflow_table,
> > +                       const struct sbrec_logical_dp_group_table *dpgrp_table)
> > +{
> > +    lflow_ref_unlink_lflows(lflow_ref);
> > +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> > +                                   ls_datapaths, lr_datapaths,
> > +                                   ovn_internal_version_changed, sbflow_table,
> > +                                   dpgrp_table);
> > +}
> > +
> > +bool
> > +lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
> > +                      struct lflow_table *lflow_table,
> > +                      struct ovsdb_idl_txn *ovnsb_txn,
> > +                      const struct ovn_datapaths *ls_datapaths,
> > +                      const struct ovn_datapaths *lr_datapaths,
> > +                      bool ovn_internal_version_changed,
> > +                      const struct sbrec_logical_flow_table *sbflow_table,
> > +                      const struct sbrec_logical_dp_group_table *dpgrp_table)
> > +{
> > +    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
> > +                                   ls_datapaths, lr_datapaths,
> > +                                   ovn_internal_version_changed, sbflow_table,
> > +                                   dpgrp_table);
> > +}
> > +
> > +void
> > +lflow_table_add_lflow(struct lflow_table *lflow_table,
> > +                      const struct ovn_datapath *od,
> > +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > +                      enum ovn_stage stage, uint16_t priority,
> > +                      const char *match, const char *actions,
> > +                      const char *io_port, const char *ctrl_meter,
> > +                      const struct ovsdb_idl_row *stage_hint,
> > +                      const char *where,
> > +                      struct lflow_ref *lflow_ref)
> > +    OVS_EXCLUDED(fake_hash_mutex)
> > +{
> > +    struct ovs_mutex *hash_lock;
> > +    uint32_t hash;
> > +
> > +    ovs_assert(!od ||
> > +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> > +
> > +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> > +                                 ovn_stage_get_pipeline(stage),
> > +                                 priority, match,
> > +                                 actions);
> > +
> > +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> > +    struct ovn_lflow *lflow =
> > +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> > +                         dp_bitmap_len, hash, stage,
> > +                         priority, match, actions,
> > +                         io_port, ctrl_meter, stage_hint, where);
> > +
> > +    if (lflow_ref) {
> > +        /* lflow referencing is only supported if 'od' is not NULL. */
> > +        ovs_assert(od);
> > +
> > +        struct lflow_ref_node *lrn =
> > +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> > +        if (!lrn) {
> > +            lrn = xzalloc(sizeof *lrn);
> > +            lrn->lflow = lflow;
> > +            lrn->dp_index = od->index;
> > +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> > +                            &lrn->lflow_list_node);
> > +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> > +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> > +
> > +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> > +        }
> > +
> > +        lrn->linked = true;
> > +    }
> > +
> > +    lflow_hash_unlock(hash_lock);
> > +}
> > +
> > +void
> > +lflow_table_add_lflow_default_drop(struct lflow_table *lflow_table,
> > +                                   const struct ovn_datapath *od,
> > +                                   enum ovn_stage stage,
> > +                                   const char *where,
> > +                                   struct lflow_ref *lflow_ref)
> > +{
> > +    lflow_table_add_lflow(lflow_table, od, NULL, 0, stage, 0, "1",
> > +                          debug_drop_action(), NULL, NULL, NULL,
> > +                          where, lflow_ref);
> > +}
> > +
> > +/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> > + * doesn't exist, creates a new one and adds it to 'dp_groups'.
> > + * If 'sb_group' is provided, the function will try to re-use this group by
> > + * either taking it directly, or by modifying it, if it's not already in use. */
> > +struct ovn_dp_group *
> > +ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> > +                           struct hmap *dp_groups,
> > +                           struct sbrec_logical_dp_group *sb_group,
> > +                           size_t desired_n,
> > +                           const unsigned long *desired_bitmap,
> > +                           size_t bitmap_len,
> > +                           bool is_switch,
> > +                           const struct ovn_datapaths *ls_datapaths,
> > +                           const struct ovn_datapaths *lr_datapaths)
> > +{
> > +    struct ovn_dp_group *dpg;
> > +
> > +    dpg = ovn_dp_group_get(dp_groups, desired_n, desired_bitmap, bitmap_len);
> > +    if (dpg) {
> > +        return dpg;
> > +    }
> > +
> > +    return ovn_dp_group_create(ovnsb_txn, dp_groups, sb_group, desired_n,
> > +                               desired_bitmap, bitmap_len, is_switch,
> > +                               ls_datapaths, lr_datapaths);
> > +}
> > +
> > +void
> > +ovn_dp_groups_clear(struct hmap *dp_groups)
> > +{
> > +    struct ovn_dp_group *dpg;
> > +    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
> > +        bitmap_free(dpg->bitmap);
> > +        free(dpg);
>
> This is duplicated in dec_ovn_dp_group_ref().  Also, should we assert
> that all refcounts are 0 here?

I don't think we can, since ovn_dp_groups_clear() is called whenever a
full recompute happens.  In that case we are destroying and rebuilding
the internal data, and the ref counts may not be 0.


>
> I think we need a "ovn_dp_group_destroy()" function to avoid duplicating
> things.

Ack.


>
> > +    }
> > +}
> > +
> > +void
> > +ovn_dp_groups_destroy(struct hmap *dp_groups)
> > +{
> > +    ovn_dp_groups_clear(dp_groups);
> > +    hmap_destroy(dp_groups);
> > +}
> > +
> > +void
> > +lflow_hash_lock_init(void)
> > +{
> > +    if (!lflow_hash_lock_initialized) {
> > +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> > +            ovs_mutex_init(&lflow_hash_locks[i]);
> > +        }
> > +        lflow_hash_lock_initialized = true;
> > +    }
> > +}
> > +
> > +void
> > +lflow_hash_lock_destroy(void)
> > +{
> > +    if (lflow_hash_lock_initialized) {
> > +        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> > +            ovs_mutex_destroy(&lflow_hash_locks[i]);
> > +        }
> > +    }
> > +    lflow_hash_lock_initialized = false;
> > +}
> > +
> > +/* static functions. */
> > +static void
> > +ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> > +               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> > +               char *match, char *actions, char *io_port, char *ctrl_meter,
> > +               char *stage_hint, const char *where)
> > +{
> > +    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> > +    lflow->od = od;
> > +    lflow->stage = stage;
> > +    lflow->priority = priority;
> > +    lflow->match = match;
> > +    lflow->actions = actions;
> > +    lflow->io_port = io_port;
> > +    lflow->stage_hint = stage_hint;
> > +    lflow->ctrl_meter = ctrl_meter;
> > +    lflow->dpg = NULL;
> > +    lflow->where = where;
> > +    lflow->sb_uuid = UUID_ZERO;
> > +    hmap_init(&lflow->dp_refcnts_map);
> > +    ovs_list_init(&lflow->referenced_by);
> > +}
> > +
> > +static struct ovs_mutex *
> > +lflow_hash_lock(const struct hmap *lflow_table, uint32_t hash)
> > +    OVS_ACQUIRES(fake_hash_mutex)
> > +    OVS_NO_THREAD_SAFETY_ANALYSIS
> > +{
> > +    struct ovs_mutex *hash_lock = NULL;
> > +
> > +    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> > +        hash_lock =
> > +            &lflow_hash_locks[hash & lflow_table->mask & LFLOW_HASH_LOCK_MASK];
> > +        ovs_mutex_lock(hash_lock);
> > +    }
> > +    return hash_lock;
> > +}
> > +
> > +static void
> > +lflow_hash_unlock(struct ovs_mutex *hash_lock)
> > +    OVS_RELEASES(fake_hash_mutex)
> > +    OVS_NO_THREAD_SAFETY_ANALYSIS
> > +{
> > +    if (hash_lock) {
> > +        ovs_mutex_unlock(hash_lock);
> > +    }
> > +}
> > +
> > +static bool
> > +ovn_lflow_equal(const struct ovn_lflow *a, enum ovn_stage stage,
> > +                uint16_t priority, const char *match,
> > +                const char *actions, const char *ctrl_meter)
> > +{
> > +    return (a->stage == stage
> > +            && a->priority == priority
> > +            && !strcmp(a->match, match)
> > +            && !strcmp(a->actions, actions)
> > +            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> > +}
> > +
> > +static struct ovn_lflow *
> > +ovn_lflow_find(const struct hmap *lflows,
> > +               enum ovn_stage stage, uint16_t priority,
> > +               const char *match, const char *actions,
> > +               const char *ctrl_meter, uint32_t hash)
> > +{
> > +    struct ovn_lflow *lflow;
> > +    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> > +        if (ovn_lflow_equal(lflow, stage, priority, match, actions,
> > +                            ctrl_meter)) {
> > +            return lflow;
> > +        }
> > +    }
> > +    return NULL;
> > +}
> > +
> > +static char *
> > +ovn_lflow_hint(const struct ovsdb_idl_row *row)
> > +{
> > +    if (!row) {
> > +        return NULL;
> > +    }
> > +    return xasprintf("%08x", row->uuid.parts[0]);
> > +}
> > +
> > +static void
> > +ovn_lflow_destroy(struct lflow_table *lflow_table, struct ovn_lflow *lflow)
> > +{
> > +    if (lflow) {
> > +        if (lflow_table) {
> > +            hmap_remove(&lflow_table->entries, &lflow->hmap_node);
> > +        }
> > +        bitmap_free(lflow->dpg_bitmap);
> > +        free(lflow->match);
> > +        free(lflow->actions);
> > +        free(lflow->io_port);
> > +        free(lflow->stage_hint);
> > +        free(lflow->ctrl_meter);
> > +        ovn_lflow_clear_dp_refcnts_map(lflow);
> > +        struct lflow_ref_node *l;
> > +        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
> > +            lflow_ref_node_destroy(l, NULL);
>
> This is very risky in my opinion.  We end up with 'l' being freed while
> there still might be a 'lflow_ref_nodes' hmap that points to it.  I
> think we should just store a backpointer from lflow_ref_node to the
> 'struct lflow_ref' that contains it?
>


> I think that then we won't need to do all the tricks we do of passing
> NULL instead of the hmap in lots of places.

Ack.  Sounds good.  I removed the list and the tests passed.  I'll
include it in v6.
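A minimal, self-contained sketch of the back-pointer idea (a demo
linked list stands in for the real hmap, and all names here are made
up): each node records which container owns it, so its destructor can
always detach itself without the caller passing the owner (or NULL)
around.

```c
#include <stdlib.h>

struct demo_node;

/* Stand-in for the container holding lflow_ref_nodes. */
struct demo_owner {
    struct demo_node *head;
    size_t n;
};

struct demo_node {
    struct demo_node *prev, *next;  /* Membership in owner's list. */
    struct demo_owner *owner;       /* Back-pointer to the container. */
};

static struct demo_node *
demo_node_create(struct demo_owner *owner)
{
    struct demo_node *node = calloc(1, sizeof *node);

    node->owner = owner;
    node->next = owner->head;
    if (owner->head) {
        owner->head->prev = node;
    }
    owner->head = node;
    owner->n++;
    return node;
}

/* Finds the owning container via the back-pointer and detaches from
 * it before freeing, so no dangling entry can be left behind. */
static void
demo_node_destroy(struct demo_node *node)
{
    struct demo_owner *owner = node->owner;

    if (node->prev) {
        node->prev->next = node->next;
    } else {
        owner->head = node->next;
    }
    if (node->next) {
        node->next->prev = node->prev;
    }
    owner->n--;
    free(node);
}
```

With a back-pointer like this in lflow_ref_node, ovn_lflow_destroy()
could remove each node from its lflow_ref's hmap directly instead of
being handed NULL.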

>
> > +        }
> > +        free(lflow);
> > +    }
> > +}
> > +
> > +static struct ovn_lflow *
> > +do_ovn_lflow_add(struct lflow_table *lflow_table,
> > +                 const struct ovn_datapath *od,
> > +                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > +                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> > +                 const char *match, const char *actions,
> > +                 const char *io_port, const char *ctrl_meter,
> > +                 const struct ovsdb_idl_row *stage_hint,
> > +                 const char *where)
> > +    OVS_REQUIRES(fake_hash_mutex)
> > +{
> > +    struct ovn_lflow *old_lflow;
> > +    struct ovn_lflow *lflow;
> > +
> > +    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> > +    ovs_assert(bitmap_len);
> > +
> > +    old_lflow = ovn_lflow_find(&lflow_table->entries, stage,
> > +                               priority, match, actions, ctrl_meter, hash);
> > +    if (old_lflow) {
> > +        ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> > +                                        bitmap_len);
> > +        return old_lflow;
> > +    }
> > +
> > +    lflow = xzalloc(sizeof *lflow);
> > +    /* While adding new logical flows we're not setting single datapath, but
> > +     * collecting a group.  'od' will be updated later for all flows with only
> > +     * one datapath in a group, so it could be hashed correctly. */
> > +    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> > +                   xstrdup(match), xstrdup(actions),
> > +                   io_port ? xstrdup(io_port) : NULL,
> > +                   nullable_xstrdup(ctrl_meter),
> > +                   ovn_lflow_hint(stage_hint), where);
> > +
> > +    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> > +
> > +    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> > +        hmap_insert(&lflow_table->entries, &lflow->hmap_node, hash);
> > +    } else {
> > +        hmap_insert_fast(&lflow_table->entries, &lflow->hmap_node,
> > +                         hash);
> > +        thread_lflow_counter++;
> > +    }
> > +
> > +    return lflow;
> > +}
> > +
> > +static bool
> > +sync_lflow_to_sb(struct ovn_lflow *lflow,
> > +                 struct ovsdb_idl_txn *ovnsb_txn,
> > +                 struct lflow_table *lflow_table,
> > +                 const struct ovn_datapaths *ls_datapaths,
> > +                 const struct ovn_datapaths *lr_datapaths,
> > +                 bool ovn_internal_version_changed,
> > +                 const struct sbrec_logical_flow *sbflow,
> > +                 const struct sbrec_logical_dp_group_table *sb_dpgrp_table)
> > +{
> > +    struct sbrec_logical_dp_group *sbrec_dp_group = NULL;
> > +    struct ovn_dp_group *pre_sync_dpg = lflow->dpg;
> > +    struct ovn_datapath **datapaths_array;
> > +    struct hmap *dp_groups;
> > +    size_t n_datapaths;
> > +    bool is_switch;
> > +
> > +    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> > +        n_datapaths = ods_size(ls_datapaths);
> > +        datapaths_array = ls_datapaths->array;
> > +        dp_groups = &lflow_table->ls_dp_groups;
> > +        is_switch = true;
> > +    } else {
> > +        n_datapaths = ods_size(lr_datapaths);
> > +        datapaths_array = lr_datapaths->array;
> > +        dp_groups = &lflow_table->lr_dp_groups;
> > +        is_switch = false;
> > +    }
> > +
> > +    lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> > +    ovs_assert(lflow->n_ods);
> > +
> > +    if (lflow->n_ods == 1) {
> > +        /* There is only one datapath, so it should be moved out of the
> > +         * group to a single 'od'. */
> > +        size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> > +                                    n_datapaths);
> > +
> > +        lflow->od = datapaths_array[index];
> > +        lflow->dpg = NULL;
> > +    } else {
> > +        lflow->od = NULL;
> > +    }
> > +
> > +    if (!sbflow) {
> > +        lflow->sb_uuid = uuid_random();
> > +        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> > +                                                        &lflow->sb_uuid);
> > +        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> > +        uint8_t table = ovn_stage_get_table(lflow->stage);
> > +        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> > +        sbrec_logical_flow_set_table_id(sbflow, table);
> > +        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> > +        sbrec_logical_flow_set_match(sbflow, lflow->match);
> > +        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> > +        if (lflow->io_port) {
> > +            struct smap tags = SMAP_INITIALIZER(&tags);
> > +            smap_add(&tags, "in_out_port", lflow->io_port);
> > +            sbrec_logical_flow_set_tags(sbflow, &tags);
> > +            smap_destroy(&tags);
> > +        }
> > +        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> > +
> > +        /* Trim the source locator lflow->where, which looks something like
> > +         * "ovn/northd/northd.c:1234", down to just the part following the
> > +         * last slash, e.g. "northd.c:1234". */
> > +        const char *slash = strrchr(lflow->where, '/');
> > +#if _WIN32
> > +        const char *backslash = strrchr(lflow->where, '\\');
> > +        if (!slash || backslash > slash) {
> > +            slash = backslash;
> > +        }
> > +#endif
> > +        const char *where = slash ? slash + 1 : lflow->where;
> > +
> > +        struct smap ids = SMAP_INITIALIZER(&ids);
> > +        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> > +        smap_add(&ids, "source", where);
> > +        if (lflow->stage_hint) {
> > +            smap_add(&ids, "stage-hint", lflow->stage_hint);
> > +        }
> > +        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> > +        smap_destroy(&ids);
> > +
> > +    } else {
> > +        lflow->sb_uuid = sbflow->header_.uuid;
> > +        sbrec_dp_group = sbflow->logical_dp_group;
> > +
> > +        if (ovn_internal_version_changed) {
> > +            const char *stage_name = smap_get_def(&sbflow->external_ids,
> > +                                                  "stage-name", "");
> > +            const char *stage_hint = smap_get_def(&sbflow->external_ids,
> > +                                                  "stage-hint", "");
> > +            const char *source = smap_get_def(&sbflow->external_ids,
> > +                                              "source", "");
> > +
> > +            if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> > +                sbrec_logical_flow_update_external_ids_setkey(
> > +                    sbflow, "stage-name", ovn_stage_to_str(lflow->stage));
> > +            }
> > +            if (lflow->stage_hint) {
> > +                if (strcmp(stage_hint, lflow->stage_hint)) {
> > +                    sbrec_logical_flow_update_external_ids_setkey(
> > +                        sbflow, "stage-hint", lflow->stage_hint);
> > +                }
> > +            }
> > +            if (lflow->where) {
> > +                /* Trim the source locator lflow->where, which looks something
> > +                 * like "ovn/northd/northd.c:1234", down to just the part
> > +                 * following the last slash, e.g. "northd.c:1234". */
> > +                const char *slash = strrchr(lflow->where, '/');
> > +#if _WIN32
> > +                const char *backslash = strrchr(lflow->where, '\\');
> > +                if (!slash || backslash > slash) {
> > +                    slash = backslash;
> > +                }
> > +#endif
> > +                const char *where = slash ? slash + 1 : lflow->where;
> > +
> > +                if (strcmp(source, where)) {
> > +                    sbrec_logical_flow_update_external_ids_setkey(
> > +                        sbflow, "source", where);
> > +                }
> > +            }
> > +        }
> > +    }
> > +
> > +    if (lflow->od) {
> > +        sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> > +        sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> > +    } else {
> > +        sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> > +        lflow->dpg = ovn_dp_group_get(dp_groups, lflow->n_ods,
> > +                                      lflow->dpg_bitmap,
> > +                                      n_datapaths);
> > +        if (lflow->dpg) {
> > +            /* Update the dpg's sb dp_group. */
> > +            lflow->dpg->dp_group = sbrec_logical_dp_group_table_get_for_uuid(
> > +                sb_dpgrp_table,
> > +                &lflow->dpg->dpg_uuid);
> > +
> > +            if (!lflow->dpg->dp_group) {
> > +                /* Ideally this should not happen.  But it can still happen
> > +                 * due to 2 reasons:
> > +                 * 1. There is a bug in the dp_group management.  We should
> > +                 *    perhaps assert here.
> > +                 * 2. A User or CMS may delete the logical_dp_groups in SB DB
> > +                 *    or clear the SB:Logical_flow.logical_dp_groups column
> > +                 *    (intentionally or accidentally)
> > +                 *
> > +                 * Because of (2) it is better to return false instead of
> > +                 * asserting, so that we can recover from the inconsistent
> > +                 * SB DB.
> > +                 */
> > +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> > +                VLOG_WARN_RL(&rl, "SB Logical flow ["UUID_FMT"]'s "
> > +                            "logical_dp_group column is not set "
> > +                            "(which is unexpected).  It should have been "
> > +                            "referencing the dp group ["UUID_FMT"]",
> > +                            UUID_ARGS(&sbflow->header_.uuid),
> > +                            UUID_ARGS(&lflow->dpg->dpg_uuid));
> > +                return false;
> > +            }
> > +        } else {
> > +            lflow->dpg = ovn_dp_group_create(
> > +                                ovnsb_txn, dp_groups, sbrec_dp_group,
> > +                                lflow->n_ods, lflow->dpg_bitmap,
> > +                                n_datapaths, is_switch,
> > +                                ls_datapaths,
> > +                                lr_datapaths);
> > +        }
> > +        sbrec_logical_flow_set_logical_dp_group(sbflow,
> > +                                                lflow->dpg->dp_group);
> > +    }
> > +
> > +    if (pre_sync_dpg != lflow->dpg) {
> > +        if (lflow->dpg) {
> > +            inc_ovn_dp_group_ref(lflow->dpg);
> > +        }
> > +        if (pre_sync_dpg) {
> > +           dec_ovn_dp_group_ref(dp_groups, pre_sync_dpg);
> > +        }
> > +    }
> > +
> > +    return true;
> > +}
> > +
> > +static struct ovn_dp_group *
> > +ovn_dp_group_find(const struct hmap *dp_groups,
> > +                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> > +                  uint32_t hash)
> > +{
> > +    struct ovn_dp_group *dpg;
> > +
> > +    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> > +        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> > +            return dpg;
> > +        }
> > +    }
> > +    return NULL;
> > +}
> > +
> > +static void
> > +inc_ovn_dp_group_ref(struct ovn_dp_group *dpg)
> > +{
> > +    dpg->refcnt++;
> > +}
> > +
> > +static void
> > +dec_ovn_dp_group_ref(struct hmap *dp_groups, struct ovn_dp_group *dpg)
> > +{
> > +    dpg->refcnt--;
> > +
> > +    if (!dpg->refcnt) {
> > +        hmap_remove(dp_groups, &dpg->node);
> > +        free(dpg->bitmap);
>
> This should be bitmap_free().
>
> > +        free(dpg);
> > +    }
> > +}
>
> To simplify callers I'd write these two functions as follows (inspired a
> bit from json.c in OVS):
>
> static void
> ovn_dp_group_use(struct ovn_dp_group *dpg)
> {
>     if (dpg) {
>         dpg->refcnt++;
>     }
> }
>
> static void
> ovn_dp_group_release(struct hmap *dp_groups, struct ovn_dp_group *dpg)
> {
>     if (dpg && !--dpg->refcnt) {
>         hmap_remove(dp_groups, &dpg->node);
>         bitmap_free(dpg->bitmap);
>         free(dpg);
>     }
> }
>

Ack
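The NULL-tolerant shape also lets the call sites drop their `if (lflow->dpg)` guards.  For the archives, a standalone sketch of the pattern (plain structs and libc allocation stand in for the OVS hmap and bitmap helpers; all names here are illustrative, not the tree's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Minimal stand-in for 'struct ovn_dp_group': just a refcount and an
 * owned allocation, so the lifetime rules are visible. */
struct dp_group {
    unsigned long *bitmap;  /* Owned; freed on last release. */
    size_t refcnt;
};

static struct dp_group *
dp_group_alloc(void)
{
    struct dp_group *dpg = calloc(1, sizeof *dpg);
    dpg->bitmap = calloc(1, sizeof *dpg->bitmap);
    return dpg;
}

/* Takes a reference.  Safe on NULL, so callers need no guard. */
static void
dp_group_use(struct dp_group *dpg)
{
    if (dpg) {
        dpg->refcnt++;
    }
}

/* Drops a reference; frees the group when the last one goes away.
 * Returns true iff the group was freed.  Safe on NULL. */
static bool
dp_group_release(struct dp_group *dpg)
{
    if (dpg && !--dpg->refcnt) {
        /* The real code would also hmap_remove() the node here. */
        free(dpg->bitmap);
        free(dpg);
        return true;
    }
    return false;
}
```

With this, the `pre_sync_dpg != lflow->dpg` branch above reduces to two unconditional calls.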

> > +
> > +static struct sbrec_logical_dp_group *
> > +ovn_sb_insert_or_update_logical_dp_group(
> > +                            struct ovsdb_idl_txn *ovnsb_txn,
> > +                            struct sbrec_logical_dp_group *dp_group,
> > +                            const unsigned long *dpg_bitmap,
> > +                            const struct ovn_datapaths *datapaths)
> > +{
> > +    const struct sbrec_datapath_binding **sb;
> > +    size_t n = 0, index;
> > +
> > +    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> > +    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> > +        sb[n++] = datapaths->array[index]->sb;
> > +    }
> > +    if (!dp_group) {
> > +        struct uuid dpg_uuid = uuid_random();
> > +        dp_group = sbrec_logical_dp_group_insert_persist_uuid(
> > +            ovnsb_txn, &dpg_uuid);
> > +    }
> > +    sbrec_logical_dp_group_set_datapaths(
> > +        dp_group, (struct sbrec_datapath_binding **) sb, n);
> > +    free(sb);
> > +
> > +    return dp_group;
> > +}
> > +
> > +static struct ovn_dp_group *
> > +ovn_dp_group_get(struct hmap *dp_groups, size_t desired_n,
> > +                 const unsigned long *desired_bitmap,
> > +                 size_t bitmap_len)
> > +{
> > +    uint32_t hash;
> > +
> > +    hash = hash_int(desired_n, 0);
> > +    return ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> > +}
> > +
> > +/* Creates a new datapath group and adds it to 'dp_groups'.
> > + * If 'sb_group' is provided, the function will try to re-use that group,
> > + * either by taking it directly or by modifying it if it's not already in
> > + * use.  Callers should check with ovn_dp_group_get() before calling this
> > + * function. */
> > +static struct ovn_dp_group *
> > +ovn_dp_group_create(struct ovsdb_idl_txn *ovnsb_txn,
> > +                    struct hmap *dp_groups,
> > +                    struct sbrec_logical_dp_group *sb_group,
> > +                    size_t desired_n,
> > +                    const unsigned long *desired_bitmap,
> > +                    size_t bitmap_len,
> > +                    bool is_switch,
> > +                    const struct ovn_datapaths *ls_datapaths,
> > +                    const struct ovn_datapaths *lr_datapaths)
> > +{
> > +    struct ovn_dp_group *dpg;
> > +
> > +    bool update_dp_group = false, can_modify = false;
> > +    unsigned long *dpg_bitmap;
> > +    size_t i, n = 0;
> > +
> > +    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> > +    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> > +        struct ovn_datapath *datapath_od;
> > +
> > +        datapath_od = ovn_datapath_from_sbrec(
> > +                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> > +                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> > +                        sb_group->datapaths[i]);
> > +        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> > +            break;
> > +        }
> > +        bitmap_set1(dpg_bitmap, datapath_od->index);
> > +        n++;
> > +    }
> > +    if (!sb_group || i != sb_group->n_datapaths) {
> > +        /* No group or stale group.  Not going to be used. */
> > +        update_dp_group = true;
> > +        can_modify = true;
> > +    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> > +        /* The group in Sb is different. */
> > +        update_dp_group = true;
> > +        /* We can modify existing group if it's not already in use. */
> > +        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> > +                                        bitmap_len, hash_int(n, 0));
> > +    }
> > +
> > +    bitmap_free(dpg_bitmap);
> > +
> > +    dpg = xzalloc(sizeof *dpg);
> > +    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> > +    if (!update_dp_group) {
> > +        dpg->dp_group = sb_group;
> > +    } else {
> > +        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> > +                            ovnsb_txn,
> > +                            can_modify ? sb_group : NULL,
> > +                            desired_bitmap,
> > +                            is_switch ? ls_datapaths : lr_datapaths);
> > +    }
> > +    dpg->dpg_uuid = dpg->dp_group->header_.uuid;
> > +    hmap_insert(dp_groups, &dpg->node, hash_int(desired_n, 0));
> > +
> > +    return dpg;
> > +}
> > +
> > +/* Adds an OVN datapath to the datapath group of an existing logical flow.
> > + * Version to use when hash bucket locking is NOT required or the corresponding
> > + * hash lock is already taken. */
> > +static void
> > +ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> > +                                const struct ovn_datapath *od,
> > +                                const unsigned long *dp_bitmap,
> > +                                size_t bitmap_len)
> > +    OVS_REQUIRES(fake_hash_mutex)
> > +{
> > +    if (od) {
> > +        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> > +    }
> > +    if (dp_bitmap) {
> > +        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> > +    }
> > +}
> > +
> > +static bool
> > +lflow_ref_sync_lflows__(struct lflow_ref  *lflow_ref,
> > +                        struct lflow_table *lflow_table,
> > +                        struct ovsdb_idl_txn *ovnsb_txn,
> > +                        const struct ovn_datapaths *ls_datapaths,
> > +                        const struct ovn_datapaths *lr_datapaths,
> > +                        bool ovn_internal_version_changed,
> > +                        const struct sbrec_logical_flow_table *sbflow_table,
> > +                        const struct sbrec_logical_dp_group_table *dpgrp_table)
> > +{
> > +    struct lflow_ref_node *lrn;
> > +    struct ovn_lflow *lflow;
> > +    LIST_FOR_EACH_SAFE (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
> > +        lflow = lrn->lflow;
> > +        const struct sbrec_logical_flow *sblflow =
> > +            sbrec_logical_flow_table_get_for_uuid(sbflow_table,
> > +                                                  &lflow->sb_uuid);
> > +
> > +        struct hmap *dp_groups = NULL;
> > +        size_t n_datapaths;
> > +        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> > +            dp_groups = &lflow_table->ls_dp_groups;
> > +            n_datapaths = ods_size(ls_datapaths);
> > +        } else {
> > +            dp_groups = &lflow_table->lr_dp_groups;
> > +            n_datapaths = ods_size(lr_datapaths);
> > +        }
> > +
> > +        size_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> > +
> > +        if (n_ods) {
> > +            if (!sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
> > +                                  lr_datapaths, ovn_internal_version_changed,
> > +                                  sblflow, dpgrp_table)) {
> > +                return false;
> > +            }
> > +        }
> > +
> > +        if (!lrn->linked) {
> > +            lflow_ref_node_destroy(lrn, &lflow_ref->lflow_ref_nodes);
> > +
> > +            if (ovs_list_is_empty(&lflow->referenced_by)) {
> > +                if (lflow->dpg) {
> > +                    dec_ovn_dp_group_ref(dp_groups, lflow->dpg);
> > +                }
> > +                ovn_lflow_destroy(lflow_table, lflow);
> > +
> > +                if (sblflow) {
> > +                    sbrec_logical_flow_delete(sblflow);
> > +                }
> > +            }
> > +        }
> > +    }
> > +
> > +    return true;
> > +}
> > +
> > +/* Used for datapath reference counting for a given 'struct ovn_lflow'.
> > + * See the hmap 'dp_refcnts_map' in 'struct ovn_lflow'.
> > + * A given lflow L(M, A), with match M and actions A, can be referenced
> > + * by multiple lflow_refs for the same datapath.
> > + * E.g. two lflow_refs - op->lflow_ref and op->stateful_lflow_ref of a
> > + * datapath - can each hold a reference to the same lflow L(M, A).  In
> > + * this case it is important to maintain this reference count so that
> > + * the sync to the SB DB logical_flow is correct. */
> > +struct dp_refcnt {
> > +    struct hmap_node key_node;
> > +
> > +    size_t dp_index; /* datapath index.  Also used as hmap key. */
> > +    size_t refcnt;  /* reference counter. */
>
> Nit: please align the comments.

Ack.

>
> > +};
> > +
> > +static struct dp_refcnt *
> > +dp_refcnt_find(struct hmap *dp_refcnts_map, size_t dp_index)
> > +{
> > +    struct dp_refcnt *dp_refcnt;
> > +    HMAP_FOR_EACH_WITH_HASH (dp_refcnt, key_node, dp_index, dp_refcnts_map) {
> > +        if (dp_refcnt->dp_index == dp_index) {
> > +            return dp_refcnt;
> > +        }
> > +    }
> > +
> > +    return NULL;
> > +}
> > +
> > +static void
> > +inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
> > +{
> > +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> > +
> > +    if (!dp_refcnt) {
> > +        dp_refcnt = xzalloc(sizeof *dp_refcnt);
> > +        dp_refcnt->dp_index = dp_index;
> > +
> > +        hmap_insert(dp_refcnts_map, &dp_refcnt->key_node, dp_index);
> > +    }
> > +
> > +    dp_refcnt->refcnt++;
> > +}
> > +
> > +/* Decrements the datapath's refcnt in the 'dp_refcnts_map' if it exists
> > + * and returns true if the refcnt drops to 0 or if no refcnt exists. */
> > +static bool
> > +dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
> > +{
> > +    bool retval = true;
> > +
> > +    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
> > +    if (dp_refcnt) {
> > +        dp_refcnt->refcnt--;
> > +
> > +        if (!dp_refcnt->refcnt) {
> > +            hmap_remove(dp_refcnts_map, &dp_refcnt->key_node);
> > +            free(dp_refcnt);
> > +        } else {
> > +            retval = false;
>
> We can just 'return false;' here.
>
> > +        }
> > +    }
> > +
> > +    return retval;
>
> And 'return true;' here.

Ack
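Right, the early returns make the three outcomes explicit.  A standalone sketch of that shape (a flat array stands in for 'dp_refcnts_map' and the index is assumed small; purely illustrative, not the tree's code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_DPS 8

/* Per-datapath reference counts; a zero entry means "no refcnt
 * exists", mirroring a missing node in dp_refcnts_map. */
static size_t dp_refcnts[MAX_DPS];

static void
inc_dp_refcnt(size_t dp_index)
{
    dp_refcnts[dp_index]++;
}

/* Returns true if no refcnt exists or the refcnt dropped to 0,
 * i.e. the caller may release the per-datapath state. */
static bool
dec_dp_refcnt(size_t dp_index)
{
    if (!dp_refcnts[dp_index]) {
        return true;     /* Nothing tracked for this datapath. */
    }
    if (--dp_refcnts[dp_index]) {
        return false;    /* Still referenced elsewhere. */
    }
    return true;         /* Last reference just went away. */
}
```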

>
> > +}
> > +
> > +static void
> > +ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *lflow)
> > +{
> > +    struct dp_refcnt *dp_refcnt;
> > +
> > +    HMAP_FOR_EACH_POP (dp_refcnt, key_node, &lflow->dp_refcnts_map) {
> > +        free(dp_refcnt);
> > +    }
> > +
> > +    hmap_destroy(&lflow->dp_refcnts_map);
> > +}
> > +
> > +static struct lflow_ref_node *
> > +lflow_ref_node_find(struct hmap *lflow_ref_nodes, struct ovn_lflow *lflow,
> > +                    uint32_t lflow_hash)
> > +{
> > +    struct lflow_ref_node *lrn;
> > +    HMAP_FOR_EACH_WITH_HASH (lrn, ref_node, lflow_hash, lflow_ref_nodes) {
> > +        if (lrn->lflow == lflow) {
> > +            return lrn;
> > +        }
> > +    }
> > +
> > +    return NULL;
> > +}
> > +
> > +static void
> > +lflow_ref_node_destroy(struct lflow_ref_node *lrn,
> > +                       struct hmap *lflow_ref_nodes)
> > +{
> > +    if (lflow_ref_nodes) {
> > +        hmap_remove(lflow_ref_nodes, &lrn->ref_node);
> > +    }
> > +    ovs_list_remove(&lrn->lflow_list_node);
> > +    ovs_list_remove(&lrn->ref_list_node);
> > +    free(lrn);
> > +}
> > diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
> > new file mode 100644
> > index 0000000000..5a0cc28965
> > --- /dev/null
> > +++ b/northd/lflow-mgr.h
> > @@ -0,0 +1,186 @@
> > +/*
> > + * Copyright (c) 2023, Red Hat, Inc.
>
> 2024
>
> > + *
> > + * Licensed under the Apache License, Version 2.0 (the "License");
> > + * you may not use this file except in compliance with the License.
> > + * You may obtain a copy of the License at:
> > + *
> > + *     http://www.apache.org/licenses/LICENSE-2.0
> > + *
> > + * Unless required by applicable law or agreed to in writing, software
> > + * distributed under the License is distributed on an "AS IS" BASIS,
> > + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > + * See the License for the specific language governing permissions and
> > + * limitations under the License.
> > + */
> > +#ifndef LFLOW_MGR_H
> > +#define LFLOW_MGR_H 1
> > +
> > +#include "include/openvswitch/hmap.h"
> > +#include "include/openvswitch/uuid.h"
> > +
> > +#include "northd.h"
> > +
> > +struct ovsdb_idl_txn;
> > +struct ovn_datapath;
> > +struct ovsdb_idl_row;
> > +
> > +/* lflow map which stores the logical flows. */
> > +struct lflow_table;
> > +struct lflow_table *lflow_table_alloc(void);
> > +void lflow_table_init(struct lflow_table *);
> > +void lflow_table_clear(struct lflow_table *);
> > +void lflow_table_destroy(struct lflow_table *);
> > +void lflow_table_expand(struct lflow_table *);
> > +void lflow_table_set_size(struct lflow_table *, size_t);
> > +void lflow_table_sync_to_sb(struct lflow_table *,
> > +                            struct ovsdb_idl_txn *ovnsb_txn,
> > +                            const struct ovn_datapaths *ls_datapaths,
> > +                            const struct ovn_datapaths *lr_datapaths,
> > +                            bool ovn_internal_version_changed,
> > +                            const struct sbrec_logical_flow_table *,
> > +                            const struct sbrec_logical_dp_group_table *);
> > +
> > +void lflow_hash_lock_init(void);
> > +void lflow_hash_lock_destroy(void);
> > +
> > +/* The lflow manager maintains logical flow references for a resource
> > + * (like a logical port or datapath). */
> > +struct lflow_ref;
> > +
> > +struct lflow_ref *lflow_ref_create(void);
> > +void lflow_ref_destroy(struct lflow_ref *);
> > +void lflow_ref_clear(struct lflow_ref *lflow_ref);
> > +void lflow_ref_unlink_lflows(struct lflow_ref *);
> > +bool lflow_ref_resync_flows(struct lflow_ref *,
> > +                            struct lflow_table *lflow_table,
> > +                            struct ovsdb_idl_txn *ovnsb_txn,
> > +                            const struct ovn_datapaths *ls_datapaths,
> > +                            const struct ovn_datapaths *lr_datapaths,
> > +                            bool ovn_internal_version_changed,
> > +                            const struct sbrec_logical_flow_table *,
> > +                            const struct sbrec_logical_dp_group_table *);
> > +bool lflow_ref_sync_lflows(struct lflow_ref *,
> > +                           struct lflow_table *lflow_table,
> > +                           struct ovsdb_idl_txn *ovnsb_txn,
> > +                           const struct ovn_datapaths *ls_datapaths,
> > +                           const struct ovn_datapaths *lr_datapaths,
> > +                           bool ovn_internal_version_changed,
> > +                           const struct sbrec_logical_flow_table *,
> > +                           const struct sbrec_logical_dp_group_table *);
> > +
> > +
> > +void lflow_table_add_lflow(struct lflow_table *, const struct ovn_datapath *,
> > +                           const unsigned long *dp_bitmap,
> > +                           size_t dp_bitmap_len, enum ovn_stage stage,
> > +                           uint16_t priority, const char *match,
> > +                           const char *actions, const char *io_port,
> > +                           const char *ctrl_meter,
> > +                           const struct ovsdb_idl_row *stage_hint,
> > +                           const char *where, struct lflow_ref *);
> > +void lflow_table_add_lflow_default_drop(struct lflow_table *,
> > +                                        const struct ovn_datapath *,
> > +                                        enum ovn_stage stage,
> > +                                        const char *where,
> > +                                        struct lflow_ref *);
> > +
> > +/* Adds a row with the specified contents to the Logical_Flow table. */
> > +#define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> > +                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> > +                                  STAGE_HINT) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> > +                          OVS_SOURCE_LOCATOR, NULL)
> > +
> > +#define ovn_lflow_add_with_lflow_ref_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> > +                                            MATCH, ACTIONS, IN_OUT_PORT, \
> > +                                            CTRL_METER, STAGE_HINT, LFLOW_REF)\
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
> > +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> > +
> > +#define ovn_lflow_add_with_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> > +                                ACTIONS, STAGE_HINT) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> > +                          OVS_SOURCE_LOCATOR, NULL)
> > +
> > +#define ovn_lflow_add_with_lflow_ref_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> > +                                          MATCH, ACTIONS, STAGE_HINT, \
> > +                                          LFLOW_REF) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, NULL, NULL, STAGE_HINT,  \
> > +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> > +
> > +#define ovn_lflow_add_with_dp_group(LFLOW_TABLE, DP_BITMAP, DP_BITMAP_LEN, \
> > +                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> > +                                    STAGE_HINT) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> > +                          PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> > +                          OVS_SOURCE_LOCATOR, NULL)
> > +
> > +#define ovn_lflow_add_default_drop(LFLOW_TABLE, OD, STAGE)                    \
> > +    lflow_table_add_lflow_default_drop(LFLOW_TABLE, OD, STAGE, \
> > +                                       OVS_SOURCE_LOCATOR, NULL)
> > +
> > +
> > +/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> > + * the IN_OUT_PORT argument, which gives the lport name that appears in the
> > + * MATCH.  This helps ovn-controller bypass lflow parsing when the lport is
> > + * not local to the chassis.  The criteria for the lport passed via this
> > + * argument:
> > + *
> > + * - For ingress pipeline, the lport that is used to match "inport".
> > + * - For egress pipeline, the lport that is used to match "outport".
> > + *
> > + * For now, only LS pipelines should use this macro.  */
> > +#define ovn_lflow_add_with_lport_and_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
> > +                                          MATCH, ACTIONS, IN_OUT_PORT, \
> > +                                          STAGE_HINT, LFLOW_REF) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, IN_OUT_PORT, NULL, STAGE_HINT, \
> > +                          OVS_SOURCE_LOCATOR, LFLOW_REF)
> > +
> > +#define ovn_lflow_add(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, NULL)
> > +
> > +#define ovn_lflow_add_with_lflow_ref(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> > +                                     ACTIONS, LFLOW_REF) \
> > +    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
> > +                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, \
> > +                          LFLOW_REF)
> > +
> > +#define ovn_lflow_metered(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> > +                          CTRL_METER) \
> > +    ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
> > +                              ACTIONS, NULL, CTRL_METER, NULL)
> > +
> > +struct sbrec_logical_dp_group;
> > +
> > +struct ovn_dp_group {
> > +    unsigned long *bitmap;
> > +    const struct sbrec_logical_dp_group *dp_group;
> > +    struct uuid dpg_uuid;
> > +    struct hmap_node node;
> > +    size_t refcnt;
> > +};
> > +
> > +static inline void
> > +ovn_dp_groups_init(struct hmap *dp_groups)
> > +{
> > +    hmap_init(dp_groups);
> > +}
> > +
> > +void ovn_dp_groups_clear(struct hmap *dp_groups);
> > +void ovn_dp_groups_destroy(struct hmap *dp_groups);
> > +struct ovn_dp_group *ovn_dp_group_get_or_create(
> > +    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
> > +    struct sbrec_logical_dp_group *sb_group,
> > +    size_t desired_n, const unsigned long *desired_bitmap,
> > +    size_t bitmap_len, bool is_switch,
> > +    const struct ovn_datapaths *ls_datapaths,
> > +    const struct ovn_datapaths *lr_datapaths);
> > +
> > +#endif /* LFLOW_MGR_H */
> > \ No newline at end of file
> > diff --git a/northd/northd.c b/northd/northd.c
> > index 68f2b0bab4..f41df83c40 100644
> > --- a/northd/northd.c
> > +++ b/northd/northd.c
> > @@ -41,6 +41,7 @@
> >  #include "lib/ovn-sb-idl.h"
> >  #include "lib/ovn-util.h"
> >  #include "lib/lb.h"
> > +#include "lflow-mgr.h"
> >  #include "memory.h"
> >  #include "northd.h"
> >  #include "en-lb-data.h"
> > @@ -68,7 +69,7 @@
> >  VLOG_DEFINE_THIS_MODULE(northd);
> >
> >  static bool controller_event_en;
> > -static bool lflow_hash_lock_initialized = false;
> > +
> >
> >  static bool check_lsp_is_up;
> >
> > @@ -97,116 +98,6 @@ static bool default_acl_drop;
> >
> >  #define MAX_OVN_TAGS 4096
> >
> > -/* Pipeline stages. */
> > -
> > -/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> > -enum ovn_datapath_type {
> > -    DP_SWITCH,                  /* OVN logical switch. */
> > -    DP_ROUTER                   /* OVN logical router. */
> > -};
> > -
> > -/* Returns an "enum ovn_stage" built from the arguments.
> > - *
> > - * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> > - * functions can't be used in enums or switch cases.) */
> > -#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> > -    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> > -
> > -/* A stage within an OVN logical switch or router.
> > - *
> > - * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> > - * or router, whether the stage is part of the ingress or egress pipeline, and
> > - * the table within that pipeline.  The first three components are combined to
> > - * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> > - * S_ROUTER_OUT_DELIVERY. */
> > -enum ovn_stage {
> > -#define PIPELINE_STAGES                                                   \
> > -    /* Logical switch ingress stages. */                                  \
> > -    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> > -    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> > -    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
> > -    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> > -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> > -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> > -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> > -    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> > -    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> > -    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> > -    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> > -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> > -    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> > -    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> > -    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> > -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> > -                   "ls_in_acl_after_lb_eval")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> > -                   "ls_in_acl_after_lb_action")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> > -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> > -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> > -    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> > -    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> > -    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> > -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> > -    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> > -                                                                          \
> > -    /* Logical switch egress stages. */                                   \
> > -    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> > -    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> > -    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> > -    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> > -    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> > -    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> > -    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> > -    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> > -    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> > -    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> > -    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> > -                                                                      \
> > -    /* Logical router ingress stages. */                              \
> > -    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> > -    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> > -    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> > -    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> > -    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> > -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> > -    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> > -    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> > -    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> > -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> > -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> > -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> > -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> > -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> > -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> > -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> > -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> > -                                                                      \
> > -    /* Logical router egress stages. */                               \
> > -    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> > -                   "lr_out_chk_dnat_local")                                  \
> > -    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> > -    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> > -    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> > -    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> > -    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> > -    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> > -
> > -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> > -    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> > -        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> > -    PIPELINE_STAGES
> > -#undef PIPELINE_STAGE
> > -};
> >
> >  /* Due to various hard-coded priorities need to implement ACLs, the
> >   * northbound database supports a smaller range of ACL priorities than
> > @@ -391,51 +282,9 @@ enum ovn_stage {
> >  #define ROUTE_PRIO_OFFSET_STATIC 1
> >  #define ROUTE_PRIO_OFFSET_CONNECTED 2
> >
> > -/* Returns an "enum ovn_stage" built from the arguments. */
> > -static enum ovn_stage
> > -ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> > -                uint8_t table)
> > -{
> > -    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> > -}
> > -
> > -/* Returns the pipeline to which 'stage' belongs. */
> > -static enum ovn_pipeline
> > -ovn_stage_get_pipeline(enum ovn_stage stage)
> > -{
> > -    return (stage >> 8) & 1;
> > -}
> > -
> > -/* Returns the pipeline name to which 'stage' belongs. */
> > -static const char *
> > -ovn_stage_get_pipeline_name(enum ovn_stage stage)
> > -{
> > -    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> > -}
> > -
> > -/* Returns the table to which 'stage' belongs. */
> > -static uint8_t
> > -ovn_stage_get_table(enum ovn_stage stage)
> > -{
> > -    return stage & 0xff;
> > -}
> > -
> > -/* Returns a string name for 'stage'. */
> > -static const char *
> > -ovn_stage_to_str(enum ovn_stage stage)
> > -{
> > -    switch (stage) {
> > -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> > -        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> > -    PIPELINE_STAGES
> > -#undef PIPELINE_STAGE
> > -        default: return "<unknown>";
> > -    }
> > -}
> > -
> >  /* Returns the type of the datapath to which a flow with the given 'stage' may
> >   * be added. */
> > -static enum ovn_datapath_type
> > +enum ovn_datapath_type
> >  ovn_stage_to_datapath_type(enum ovn_stage stage)
> >  {
> >      switch (stage) {
> > @@ -680,13 +529,6 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
> >      }
> >  }
> >
> > -/* Returns 'od''s datapath type. */
> > -static enum ovn_datapath_type
> > -ovn_datapath_get_type(const struct ovn_datapath *od)
> > -{
> > -    return od->nbs ? DP_SWITCH : DP_ROUTER;
> > -}
> > -
> >  static struct ovn_datapath *
> >  ovn_datapath_find_(const struct hmap *datapaths,
> >                     const struct uuid *uuid)
> > @@ -722,13 +564,7 @@ ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key)
> >      return NULL;
> >  }
> >
> > -static bool
> > -ovn_datapath_is_stale(const struct ovn_datapath *od)
> > -{
> > -    return !od->nbr && !od->nbs;
> > -}
> > -
> > -static struct ovn_datapath *
> > +struct ovn_datapath *
> >  ovn_datapath_from_sbrec(const struct hmap *ls_datapaths,
> >                          const struct hmap *lr_datapaths,
> >                          const struct sbrec_datapath_binding *sb)
> > @@ -1297,19 +1133,6 @@ struct ovn_port_routable_addresses {
> >      size_t n_addrs;
> >  };
> >
> > -/* A node that maintains link between an object (such as an ovn_port) and
> > - * a lflow. */
> > -struct lflow_ref_node {
> > -    /* This list follows different lflows referenced by the same object. List
> > -     * head is, for example, ovn_port->lflows.  */
> > -    struct ovs_list lflow_list_node;
> > -    /* This list follows different objects that reference the same lflow. List
> > -     * head is ovn_lflow->referenced_by. */
> > -    struct ovs_list ref_list_node;
> > -    /* The lflow. */
> > -    struct ovn_lflow *lflow;
> > -};
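
For context, the `lflow_ref_node` being removed here links one object to one lflow and sits on two lists at once, so either side can drop the link in O(1) when it is destroyed. Below is a minimal standalone sketch of that dual-membership pattern (hypothetical names, a tiny hand-rolled list in place of the `ovs_list` API):

```c
#include <assert.h>
#include <stdlib.h>

/* Tiny circular doubly-linked list, standing in for ovs_list. */
struct list { struct list *prev, *next; };

static void list_init(struct list *l) { l->prev = l->next = l; }

static void list_push(struct list *head, struct list *n) {
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

static void list_remove(struct list *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

static size_t list_size(const struct list *head) {
    size_t n = 0;
    for (const struct list *p = head->next; p != head; p = p->next) {
        n++;
    }
    return n;
}

/* One node per (object, lflow) link; it is simultaneously a member of
 * the object's reference list and the lflow's referenced_by list. */
struct ref_node {
    struct list in_object;   /* linked into the object's list */
    struct list in_lflow;    /* linked into the lflow's list */
};

static struct ref_node *link_ref(struct list *obj_refs,
                                 struct list *lflow_refs) {
    struct ref_node *rn = malloc(sizeof *rn);
    list_push(obj_refs, &rn->in_object);
    list_push(lflow_refs, &rn->in_lflow);
    return rn;
}

/* Unlinks from both sides at once, which is what the destroy paths
 * (port side and lflow side) each rely on. */
static void unlink_ref(struct ref_node *rn) {
    list_remove(&rn->in_object);
    list_remove(&rn->in_lflow);
    free(rn);
}
```

The same node appearing on both lists is what lets `ovn_port_destroy_orphan()` and `ovn_lflow_destroy()` each free the link without consulting the other owner.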
> > -
> >  static bool lsp_can_be_inc_processed(const struct nbrec_logical_switch_port *);
> >
> >  static bool
> > @@ -1389,6 +1212,8 @@ ovn_port_set_nb(struct ovn_port *op,
> >      init_mcast_port_info(&op->mcast_info, op->nbsp, op->nbrp);
> >  }
> >
> > +static bool lsp_is_router(const struct nbrec_logical_switch_port *nbsp);
> > +
> >  static struct ovn_port *
> >  ovn_port_create(struct hmap *ports, const char *key,
> >                  const struct nbrec_logical_switch_port *nbsp,
> > @@ -1407,12 +1232,14 @@ ovn_port_create(struct hmap *ports, const char *key,
> >      op->l3dgw_port = op->cr_port = NULL;
> >      hmap_insert(ports, &op->key_node, hash_string(op->key, 0));
> >
> > -    ovs_list_init(&op->lflows);
> > +    op->lflow_ref = lflow_ref_create();
> > +    op->stateful_lflow_ref = lflow_ref_create();
> > +
> >      return op;
> >  }
> >
> >  static void
> > -ovn_port_destroy_orphan(struct ovn_port *port)
> > +ovn_port_cleanup(struct ovn_port *port)
> >  {
> >      if (port->tunnel_key) {
> >          ovs_assert(port->od);
> > @@ -1422,6 +1249,8 @@ ovn_port_destroy_orphan(struct ovn_port *port)
> >          destroy_lport_addresses(&port->lsp_addrs[i]);
> >      }
> >      free(port->lsp_addrs);
> > +    port->n_lsp_addrs = 0;
> > +    port->lsp_addrs = NULL;
> >
> >      if (port->peer) {
> >          port->peer->peer = NULL;
> > @@ -1431,18 +1260,22 @@ ovn_port_destroy_orphan(struct ovn_port *port)
> >          destroy_lport_addresses(&port->ps_addrs[i]);
> >      }
> >      free(port->ps_addrs);
> > +    port->ps_addrs = NULL;
> > +    port->n_ps_addrs = 0;
> >
> >      destroy_lport_addresses(&port->lrp_networks);
> >      destroy_lport_addresses(&port->proxy_arp_addrs);
> > +}
> > +
> > +static void
> > +ovn_port_destroy_orphan(struct ovn_port *port)
> > +{
> > +    ovn_port_cleanup(port);
> >      free(port->json_key);
> >      free(port->key);
> > +    lflow_ref_destroy(port->lflow_ref);
> > +    lflow_ref_destroy(port->stateful_lflow_ref);
> >
> > -    struct lflow_ref_node *l;
> > -    LIST_FOR_EACH_SAFE (l, lflow_list_node, &port->lflows) {
> > -        ovs_list_remove(&l->lflow_list_node);
> > -        ovs_list_remove(&l->ref_list_node);
> > -        free(l);
> > -    }
> >      free(port);
> >  }
> >
> > @@ -3911,124 +3744,6 @@ build_lb_port_related_data(
> >      build_lswitch_lbs_from_lrouter(lr_datapaths, lb_dps_map, lb_group_dps_map);
> >  }
> >
> > -
> > -struct ovn_dp_group {
> > -    unsigned long *bitmap;
> > -    struct sbrec_logical_dp_group *dp_group;
> > -    struct hmap_node node;
> > -};
> > -
> > -static struct ovn_dp_group *
> > -ovn_dp_group_find(const struct hmap *dp_groups,
> > -                  const unsigned long *dpg_bitmap, size_t bitmap_len,
> > -                  uint32_t hash)
> > -{
> > -    struct ovn_dp_group *dpg;
> > -
> > -    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
> > -        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
> > -            return dpg;
> > -        }
> > -    }
> > -    return NULL;
> > -}
> > -
> > -static struct sbrec_logical_dp_group *
> > -ovn_sb_insert_or_update_logical_dp_group(
> > -                            struct ovsdb_idl_txn *ovnsb_txn,
> > -                            struct sbrec_logical_dp_group *dp_group,
> > -                            const unsigned long *dpg_bitmap,
> > -                            const struct ovn_datapaths *datapaths)
> > -{
> > -    const struct sbrec_datapath_binding **sb;
> > -    size_t n = 0, index;
> > -
> > -    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
> > -    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
> > -        sb[n++] = datapaths->array[index]->sb;
> > -    }
> > -    if (!dp_group) {
> > -        dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
> > -    }
> > -    sbrec_logical_dp_group_set_datapaths(
> > -        dp_group, (struct sbrec_datapath_binding **) sb, n);
> > -    free(sb);
> > -
> > -    return dp_group;
> > -}
> > -
> > -/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
> > - * doesn't exist, creates a new one and adds it to 'dp_groups'.
> > - * If 'sb_group' is provided, function will try to re-use this group by
> > - * either taking it directly, or by modifying, if it's not already in use. */
> > -static struct ovn_dp_group *
> > -ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
> > -                           struct hmap *dp_groups,
> > -                           struct sbrec_logical_dp_group *sb_group,
> > -                           size_t desired_n,
> > -                           const unsigned long *desired_bitmap,
> > -                           size_t bitmap_len,
> > -                           bool is_switch,
> > -                           const struct ovn_datapaths *ls_datapaths,
> > -                           const struct ovn_datapaths *lr_datapaths)
> > -{
> > -    struct ovn_dp_group *dpg;
> > -    uint32_t hash;
> > -
> > -    hash = hash_int(desired_n, 0);
> > -    dpg = ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
> > -    if (dpg) {
> > -        return dpg;
> > -    }
> > -
> > -    bool update_dp_group = false, can_modify = false;
> > -    unsigned long *dpg_bitmap;
> > -    size_t i, n = 0;
> > -
> > -    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
> > -    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
> > -        struct ovn_datapath *datapath_od;
> > -
> > -        datapath_od = ovn_datapath_from_sbrec(
> > -                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
> > -                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
> > -                        sb_group->datapaths[i]);
> > -        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
> > -            break;
> > -        }
> > -        bitmap_set1(dpg_bitmap, datapath_od->index);
> > -        n++;
> > -    }
> > -    if (!sb_group || i != sb_group->n_datapaths) {
> > -        /* No group or stale group.  Not going to be used. */
> > -        update_dp_group = true;
> > -        can_modify = true;
> > -    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
> > -        /* The group in Sb is different. */
> > -        update_dp_group = true;
> > -        /* We can modify existing group if it's not already in use. */
> > -        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
> > -                                        bitmap_len, hash_int(n, 0));
> > -    }
> > -
> > -    bitmap_free(dpg_bitmap);
> > -
> > -    dpg = xzalloc(sizeof *dpg);
> > -    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
> > -    if (!update_dp_group) {
> > -        dpg->dp_group = sb_group;
> > -    } else {
> > -        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
> > -                            ovnsb_txn,
> > -                            can_modify ? sb_group : NULL,
> > -                            desired_bitmap,
> > -                            is_switch ? ls_datapaths : lr_datapaths);
> > -    }
> > -    hmap_insert(dp_groups, &dpg->node, hash);
> > -
> > -    return dpg;
> > -}
> > -
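
The `ovn_dp_group_find()` / `ovn_dp_group_get_or_create()` pair removed above dedupes datapath groups by full-bitmap equality, while hashing only on the number of datapaths in the group. A rough sketch of that keying scheme (hypothetical simplified types, a fixed-size chained table standing in for the hmap, and none of the Southbound reuse logic):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define BITMAP_WORDS 4   /* illustration only; OVN sizes this to ods_size() */
#define N_BUCKETS 64

struct dp_group {
    unsigned long bitmap[BITMAP_WORDS];
    struct dp_group *next;              /* bucket chain */
};

static struct dp_group *buckets[N_BUCKETS];

/* Popcount of the bitmap; the analog of hash_int(desired_n, 0). */
static size_t count_ones(const unsigned long *bm) {
    size_t n = 0;
    for (int i = 0; i < BITMAP_WORDS; i++) {
        for (unsigned long w = bm[i]; w; w &= w - 1) {
            n++;
        }
    }
    return n;
}

/* Like ovn_dp_group_find(): pick the bucket from the datapath count,
 * then compare the full bitmap to resolve collisions. */
static struct dp_group *group_find(const unsigned long *bm) {
    size_t bucket = count_ones(bm) % N_BUCKETS;
    for (struct dp_group *g = buckets[bucket]; g; g = g->next) {
        if (!memcmp(g->bitmap, bm, sizeof g->bitmap)) {
            return g;
        }
    }
    return NULL;
}

/* Like ovn_dp_group_get_or_create(), minus the SB group reuse. */
static struct dp_group *group_get_or_create(const unsigned long *bm) {
    struct dp_group *g = group_find(bm);
    if (g) {
        return g;
    }
    g = calloc(1, sizeof *g);
    memcpy(g->bitmap, bm, sizeof g->bitmap);
    size_t bucket = count_ones(bm) % N_BUCKETS;
    g->next = buckets[bucket];
    buckets[bucket] = g;
    return g;
}
```

Hashing on the count rather than the bitmap contents keeps the hash cheap; correctness rests entirely on the `memcmp`-style full comparison in the find path.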
> >  struct sb_lb {
> >      struct hmap_node hmap_node;
> >
> > @@ -4846,28 +4561,20 @@ ovn_port_find_in_datapath(struct ovn_datapath *od,
> >      return NULL;
> >  }
> >
> > -static struct ovn_port *
> > -ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> > -               const char *key, const struct nbrec_logical_switch_port *nbsp,
> > -               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> > -               struct ovs_list *lflows,
> > -               const struct sbrec_mirror_table *sbrec_mirror_table,
> > -               const struct sbrec_chassis_table *sbrec_chassis_table,
> > -               struct ovsdb_idl_index *sbrec_chassis_by_name,
> > -               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> > +static bool
> > +ls_port_init(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> > +             struct hmap *ls_ports, struct ovn_datapath *od,
> > +             const struct sbrec_port_binding *sb,
> > +             const struct sbrec_mirror_table *sbrec_mirror_table,
> > +             const struct sbrec_chassis_table *sbrec_chassis_table,
> > +             struct ovsdb_idl_index *sbrec_chassis_by_name,
> > +             struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> >  {
> > -    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> > -                                          NULL);
> > -    parse_lsp_addrs(op);
> >      op->od = od;
> > -    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> > -    if (lflows) {
> > -        ovs_list_splice(&op->lflows, lflows->next, lflows);
> > -    }
> > -
> > +    parse_lsp_addrs(op);
> >      /* Assign explicitly requested tunnel ids first. */
> >      if (!ovn_port_assign_requested_tnl_id(sbrec_chassis_table, op)) {
> > -        return NULL;
> > +        return false;
> >      }
> >      if (sb) {
> >          op->sb = sb;
> > @@ -4884,14 +4591,57 @@ ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> >      }
> >      /* Assign new tunnel ids where needed. */
> >      if (!ovn_port_allocate_key(sbrec_chassis_table, ls_ports, op)) {
> > -        return NULL;
> > +        return false;
> >      }
> >      ovn_port_update_sbrec(ovnsb_txn, sbrec_chassis_by_name,
> >                            sbrec_chassis_by_hostname, NULL, sbrec_mirror_table,
> >                            op, NULL, NULL);
> > +    return true;
> > +}
> > +
> > +static struct ovn_port *
> > +ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
> > +               const char *key, const struct nbrec_logical_switch_port *nbsp,
> > +               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
> > +               const struct sbrec_mirror_table *sbrec_mirror_table,
> > +               const struct sbrec_chassis_table *sbrec_chassis_table,
> > +               struct ovsdb_idl_index *sbrec_chassis_by_name,
> > +               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> > +{
> > +    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
> > +                                          NULL);
> > +    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
> > +    if (!ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> > +                      sbrec_mirror_table, sbrec_chassis_table,
> > +                      sbrec_chassis_by_name, sbrec_chassis_by_hostname)) {
> > +        ovn_port_destroy(ls_ports, op);
> > +        return NULL;
> > +    }
> > +
> >      return op;
> >  }
> >
> > +static bool
> > +ls_port_reinit(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
> > +                struct hmap *ls_ports,
> > +                const struct nbrec_logical_switch_port *nbsp,
> > +                const struct nbrec_logical_router_port *nbrp,
> > +                struct ovn_datapath *od,
> > +                const struct sbrec_port_binding *sb,
> > +                const struct sbrec_mirror_table *sbrec_mirror_table,
> > +                const struct sbrec_chassis_table *sbrec_chassis_table,
> > +                struct ovsdb_idl_index *sbrec_chassis_by_name,
> > +                struct ovsdb_idl_index *sbrec_chassis_by_hostname)
> > +{
> > +    ovn_port_cleanup(op);
> > +    op->sb = sb;
> > +    ovn_port_set_nb(op, nbsp, nbrp);
> > +    op->l3dgw_port = op->cr_port = NULL;
> > +    return ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
> > +                        sbrec_mirror_table, sbrec_chassis_table,
> > +                        sbrec_chassis_by_name, sbrec_chassis_by_hostname);
> > +}
> > +
> >  /* Returns true if the logical switch has changes which can be
> >   * incrementally handled.
> >   * Presently supports i-p for the below changes:
> > @@ -5031,7 +4781,7 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
> >                  goto fail;
> >              }
> >              op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> > -                                new_nbsp->name, new_nbsp, od, NULL, NULL,
> > +                                new_nbsp->name, new_nbsp, od, NULL,
> >                                  ni->sbrec_mirror_table,
> >                                  ni->sbrec_chassis_table,
> >                                  ni->sbrec_chassis_by_name,
> > @@ -5062,17 +4812,12 @@ ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
> >                  op->visited = true;
> >                  continue;
> >              }
> > -            struct ovs_list lflows = OVS_LIST_INITIALIZER(&lflows);
> > -            ovs_list_splice(&lflows, op->lflows.next, &op->lflows);
> > -            ovn_port_destroy(&nd->ls_ports, op);
> > -            op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
> > -                                new_nbsp->name, new_nbsp, od, sb, &lflows,
> > -                                ni->sbrec_mirror_table,
> > +            if (!ls_port_reinit(op, ovnsb_idl_txn, &nd->ls_ports,
> > +                                new_nbsp, NULL,
> > +                                od, sb, ni->sbrec_mirror_table,
> >                                  ni->sbrec_chassis_table,
> >                                  ni->sbrec_chassis_by_name,
> > -                                ni->sbrec_chassis_by_hostname);
> > -            ovs_assert(ovs_list_is_empty(&lflows));
> > -            if (!op) {
> > +                                ni->sbrec_chassis_by_hostname)) {
> >                  goto fail;
> >              }
> >              add_op_to_northd_tracked_ports(&trk_lsps->updated, op);
> > @@ -6017,170 +5762,7 @@ ovn_igmp_group_destroy(struct hmap *igmp_groups,
> >   * function of most of the northbound database.
> >   */
> >
> > -struct ovn_lflow {
> > -    struct hmap_node hmap_node;
> > -    struct ovs_list list_node;   /* For temporary list of lflows. Don't remove
> > -                                    at destroy. */
> > -
> > -    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
> > -    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
> > -    enum ovn_stage stage;
> > -    uint16_t priority;
> > -    char *match;
> > -    char *actions;
> > -    char *io_port;
> > -    char *stage_hint;
> > -    char *ctrl_meter;
> > -    size_t n_ods;                /* Number of datapaths referenced by 'od' and
> > -                                  * 'dpg_bitmap'. */
> > -    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
> > -
> > -    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
> > -    const char *where;
> > -
> > -    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
> > -};
> > -
> > -static void ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow);
> > -static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
> > -                                        const struct ovn_datapath *od,
> > -                                        enum ovn_stage stage,
> > -                                        uint16_t priority, const char *match,
> > -                                        const char *actions,
> > -                                        const char *ctrl_meter, uint32_t hash);
> > -
> > -static char *
> > -ovn_lflow_hint(const struct ovsdb_idl_row *row)
> > -{
> > -    if (!row) {
> > -        return NULL;
> > -    }
> > -    return xasprintf("%08x", row->uuid.parts[0]);
> > -}
> > -
> > -static bool
> > -ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
> > -                enum ovn_stage stage, uint16_t priority, const char *match,
> > -                const char *actions, const char *ctrl_meter)
> > -{
> > -    return (a->od == od
> > -            && a->stage == stage
> > -            && a->priority == priority
> > -            && !strcmp(a->match, match)
> > -            && !strcmp(a->actions, actions)
> > -            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
> > -}
> > -
> > -enum {
> > -    STATE_NULL,               /* parallelization is off */
> > -    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> > -    STATE_USE_PARALLELIZATION /* parallelization is on */
> > -};
> > -static int parallelization_state = STATE_NULL;
> > -
> > -static void
> > -ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
> > -               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
> > -               char *match, char *actions, char *io_port, char *ctrl_meter,
> > -               char *stage_hint, const char *where)
> > -{
> > -    ovs_list_init(&lflow->list_node);
> > -    ovs_list_init(&lflow->referenced_by);
> > -    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
> > -    lflow->od = od;
> > -    lflow->stage = stage;
> > -    lflow->priority = priority;
> > -    lflow->match = match;
> > -    lflow->actions = actions;
> > -    lflow->io_port = io_port;
> > -    lflow->stage_hint = stage_hint;
> > -    lflow->ctrl_meter = ctrl_meter;
> > -    lflow->dpg = NULL;
> > -    lflow->where = where;
> > -    lflow->sb_uuid = UUID_ZERO;
> > -}
> > -
> > -/* The lflow_hash_lock is a mutex array that protects updates to the shared
> > - * lflow table across threads when parallel lflow build and dp-group are both
> > - * enabled. To avoid high contention between threads, a big array of mutexes
> > - * are used instead of just one. This is possible because when parallel build
> > - * is used we only use hmap_insert_fast() to update the hmap, which would not
> > - * touch the bucket array but only the list in a single bucket. We only need to
> > - * make sure that when adding lflows to the same hash bucket, the same lock is
> > - * used, so that no two threads can add to the bucket at the same time.  It is
> > - * ok that the same lock is used to protect multiple buckets, so a fixed sized
> > - * mutex array is used instead of a 1-1 mapping to the hash buckets. This
> > - * simplifies the implementation while effectively reducing lock contention
> > - * because the chance that different threads contending the same lock amongst
> > - * the big number of locks is very low. */
> > -#define LFLOW_HASH_LOCK_MASK 0xFFFF
> > -static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
> > -
> > -static void
> > -lflow_hash_lock_init(void)
> > -{
> > -    if (!lflow_hash_lock_initialized) {
> > -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> > -            ovs_mutex_init(&lflow_hash_locks[i]);
> > -        }
> > -        lflow_hash_lock_initialized = true;
> > -    }
> > -}
> > -
> > -static void
> > -lflow_hash_lock_destroy(void)
> > -{
> > -    if (lflow_hash_lock_initialized) {
> > -        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
> > -            ovs_mutex_destroy(&lflow_hash_locks[i]);
> > -        }
> > -    }
> > -    lflow_hash_lock_initialized = false;
> > -}
> > -
> > -/* Full thread safety analysis is not possible with hash locks, because
> > - * they are taken conditionally based on the 'parallelization_state' and
> > - * a flow hash.  Also, the order in which two hash locks are taken is not
> > - * predictable during the static analysis.
> > - *
> > - * Since the order of taking two locks depends on a random hash, to avoid
> > - * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
> > - * of hash locks is similar to a single mutex.
> > - *
> > - * Using a fake mutex to partially simulate thread safety restrictions, as
> > - * if it were actually a single mutex.
> > - *
> > - * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
> > - * nature of the lock.  Unlike other attributes, it applies to the
> > - * implementation and not to the interface.  So, we can define a function
> > - * that acquires the lock without analysing the way it does that.
> > - */
> > -extern struct ovs_mutex fake_hash_mutex;
> > -
> > -static struct ovs_mutex *
> > -lflow_hash_lock(const struct hmap *lflow_map, uint32_t hash)
> > -    OVS_ACQUIRES(fake_hash_mutex)
> > -    OVS_NO_THREAD_SAFETY_ANALYSIS
> > -{
> > -    struct ovs_mutex *hash_lock = NULL;
> > -
> > -    if (parallelization_state == STATE_USE_PARALLELIZATION) {
> > -        hash_lock =
> > -            &lflow_hash_locks[hash & lflow_map->mask & LFLOW_HASH_LOCK_MASK];
> > -        ovs_mutex_lock(hash_lock);
> > -    }
> > -    return hash_lock;
> > -}
> > -
> > -static void
> > -lflow_hash_unlock(struct ovs_mutex *hash_lock)
> > -    OVS_RELEASES(fake_hash_mutex)
> > -    OVS_NO_THREAD_SAFETY_ANALYSIS
> > -{
> > -    if (hash_lock) {
> > -        ovs_mutex_unlock(hash_lock);
> > -    }
> > -}
> > +int parallelization_state = STATE_NULL;
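
The hash-lock comment in the code removed above describes striped locking: a fixed pool of mutexes indexed by the flow hash, so that two threads inserting into the same hmap bucket always contend on the same lock while unrelated hashes usually do not. A minimal pthreads sketch of the indexing (names hypothetical; the real code additionally ANDs with the hmap's own mask):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define STRIPE_MASK 0xFF   /* OVN uses 0xFFFF; smaller here for brevity */

static pthread_mutex_t stripes[STRIPE_MASK + 1];

/* Equal hashes always map to the same stripe, which is the only
 * property bucket-level insertion needs. */
static uint32_t stripe_index(uint32_t hash) {
    return hash & STRIPE_MASK;
}

static void stripes_init(void) {
    for (size_t i = 0; i <= STRIPE_MASK; i++) {
        pthread_mutex_init(&stripes[i], NULL);
    }
}

static pthread_mutex_t *stripe_lock(uint32_t hash) {
    pthread_mutex_t *m = &stripes[stripe_index(hash)];
    pthread_mutex_lock(m);
    return m;
}

static void stripe_unlock(pthread_mutex_t *m) {
    pthread_mutex_unlock(m);
}
```

Since a stripe may guard many buckets, two stripes must never be nested, which is exactly the restriction the `fake_hash_mutex` annotation in the removed code modeled for the thread-safety analyzer.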
> >
> >
> >  /* This thread-local var is used for parallel lflow building when dp-groups is
> > @@ -6193,240 +5775,7 @@ lflow_hash_unlock(struct ovs_mutex *hash_lock)
> >   * threads are collected to fix the lflow hmap's size (by the function
> >   * fix_flow_map_size()).
> >   * */
> > -static thread_local size_t thread_lflow_counter = 0;
> > -
> > -/* Adds an OVN datapath to a datapath group of existing logical flow.
> > - * Version to use when hash bucket locking is NOT required or the corresponding
> > - * hash lock is already taken. */
> > -static void
> > -ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
> > -                                const struct ovn_datapath *od,
> > -                                const unsigned long *dp_bitmap,
> > -                                size_t bitmap_len)
> > -    OVS_REQUIRES(fake_hash_mutex)
> > -{
> > -    if (od) {
> > -        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
> > -    }
> > -    if (dp_bitmap) {
> > -        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
> > -    }
> > -}
> > -
> > -/* This global variable collects the lflows generated by do_ovn_lflow_add().
> > - * start_collecting_lflows() will enable the lflow collection and the calls to
> > - * do_ovn_lflow_add (or the macros ovn_lflow_add_...) will add generated lflows
> > - * to the list. end_collecting_lflows() will disable it. */
> > -static thread_local struct ovs_list collected_lflows;
> > -static thread_local bool collecting_lflows = false;
> > -
> > -static void
> > -start_collecting_lflows(void)
> > -{
> > -    ovs_assert(!collecting_lflows);
> > -    ovs_list_init(&collected_lflows);
> > -    collecting_lflows = true;
> > -}
> > -
> > -static void
> > -end_collecting_lflows(void)
> > -{
> > -    ovs_assert(collecting_lflows);
> > -    collecting_lflows = false;
> > -}
> > -
> > -/* Adds a row with the specified contents to the Logical_Flow table.
> > - * Version to use when hash bucket locking is NOT required. */
> > -static void
> > -do_ovn_lflow_add(struct hmap *lflow_map, const struct ovn_datapath *od,
> > -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > -                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
> > -                 const char *match, const char *actions, const char *io_port,
> > -                 const struct ovsdb_idl_row *stage_hint,
> > -                 const char *where, const char *ctrl_meter)
> > -    OVS_REQUIRES(fake_hash_mutex)
> > -{
> > -
> > -    struct ovn_lflow *old_lflow;
> > -    struct ovn_lflow *lflow;
> > -
> > -    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
> > -    ovs_assert(bitmap_len);
> > -
> > -    if (collecting_lflows) {
> > -        ovs_assert(od);
> > -        ovs_assert(!dp_bitmap);
> > -    } else {
> > -        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
> > -                                   actions, ctrl_meter, hash);
> > -        if (old_lflow) {
> > -            ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
> > -                                            bitmap_len);
> > -            return;
> > -        }
> > -    }
> > -
> > -    lflow = xmalloc(sizeof *lflow);
> > -    /* While adding new logical flows we're not setting single datapath, but
> > -     * collecting a group.  'od' will be updated later for all flows with only
> > -     * one datapath in a group, so it could be hashed correctly. */
> > -    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
> > -                   xstrdup(match), xstrdup(actions),
> > -                   io_port ? xstrdup(io_port) : NULL,
> > -                   nullable_xstrdup(ctrl_meter),
> > -                   ovn_lflow_hint(stage_hint), where);
> > -
> > -    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
> > -
> > -    if (parallelization_state != STATE_USE_PARALLELIZATION) {
> > -        hmap_insert(lflow_map, &lflow->hmap_node, hash);
> > -    } else {
> > -        hmap_insert_fast(lflow_map, &lflow->hmap_node, hash);
> > -        thread_lflow_counter++;
> > -    }
> > -
> > -    if (collecting_lflows) {
> > -        ovs_list_insert(&collected_lflows, &lflow->list_node);
> > -    }
> > -}
> > -
> > -/* Adds a row with the specified contents to the Logical_Flow table. */
> > -static void
> > -ovn_lflow_add_at(struct hmap *lflow_map, const struct ovn_datapath *od,
> > -                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > -                 enum ovn_stage stage, uint16_t priority,
> > -                 const char *match, const char *actions, const char *io_port,
> > -                 const char *ctrl_meter,
> > -                 const struct ovsdb_idl_row *stage_hint, const char *where)
> > -    OVS_EXCLUDED(fake_hash_mutex)
> > -{
> > -    struct ovs_mutex *hash_lock;
> > -    uint32_t hash;
> > -
> > -    ovs_assert(!od ||
> > -               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> > -
> > -    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> > -                                 ovn_stage_get_pipeline(stage),
> > -                                 priority, match,
> > -                                 actions);
> > -
> > -    hash_lock = lflow_hash_lock(lflow_map, hash);
> > -    do_ovn_lflow_add(lflow_map, od, dp_bitmap, dp_bitmap_len, hash, stage,
> > -                     priority, match, actions, io_port, stage_hint, where,
> > -                     ctrl_meter);
> > -    lflow_hash_unlock(hash_lock);
> > -}
> > -
> > -static void
> > -__ovn_lflow_add_default_drop(struct hmap *lflow_map,
> > -                             struct ovn_datapath *od,
> > -                             enum ovn_stage stage,
> > -                             const char *where)
> > -{
> > -        ovn_lflow_add_at(lflow_map, od, NULL, 0, stage, 0, "1",
> > -                         debug_drop_action(),
> > -                         NULL, NULL, NULL, where);
> > -}
> > -
> > -/* Adds a row with the specified contents to the Logical_Flow table. */
> > -#define ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> > -                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
> > -                                  STAGE_HINT) \
> > -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                     IN_OUT_PORT, CTRL_METER, STAGE_HINT, OVS_SOURCE_LOCATOR)
> > -
> > -#define ovn_lflow_add_with_hint(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> > -                                ACTIONS, STAGE_HINT) \
> > -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                     NULL, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> > -
> > -#define ovn_lflow_add_with_dp_group(LFLOW_MAP, DP_BITMAP, DP_BITMAP_LEN, \
> > -                                    STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                                    STAGE_HINT) \
> > -    ovn_lflow_add_at(LFLOW_MAP, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
> > -                     PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
> > -                     OVS_SOURCE_LOCATOR)
> > -
> > -#define ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE)                    \
> > -    __ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE, OVS_SOURCE_LOCATOR)
> > -
> > -
> > -/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
> > - * the IN_OUT_PORT argument, which specifies the lport name that appears in
> > - * the MATCH. This helps ovn-controller bypass lflow parsing when the lport
> > - * is not local to the chassis. The criteria for choosing the lport via this
> > - * argument:
> > - *
> > - * - For ingress pipeline, the lport that is used to match "inport".
> > - * - For egress pipeline, the lport that is used to match "outport".
> > - *
> > - * For now, only LS pipelines should use this macro.  */
> > -#define ovn_lflow_add_with_lport_and_hint(LFLOW_MAP, OD, STAGE, PRIORITY, \
> > -                                          MATCH, ACTIONS, IN_OUT_PORT, \
> > -                                          STAGE_HINT) \
> > -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                     IN_OUT_PORT, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
> > -
> > -#define ovn_lflow_add(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
> > -    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                     NULL, NULL, NULL, OVS_SOURCE_LOCATOR)
> > -
> > -#define ovn_lflow_metered(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
> > -                          CTRL_METER) \
> > -    ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
> > -                              ACTIONS, NULL, CTRL_METER, NULL)
> > -
> > -static struct ovn_lflow *
> > -ovn_lflow_find(const struct hmap *lflows, const struct ovn_datapath *od,
> > -               enum ovn_stage stage, uint16_t priority,
> > -               const char *match, const char *actions, const char *ctrl_meter,
> > -               uint32_t hash)
> > -{
> > -    struct ovn_lflow *lflow;
> > -    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
> > -        if (ovn_lflow_equal(lflow, od, stage, priority, match, actions,
> > -                            ctrl_meter)) {
> > -            return lflow;
> > -        }
> > -    }
> > -    return NULL;
> > -}
> > -
> > -static void
> > -ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
> > -{
> > -    if (lflow) {
> > -        if (lflows) {
> > -            hmap_remove(lflows, &lflow->hmap_node);
> > -        }
> > -        bitmap_free(lflow->dpg_bitmap);
> > -        free(lflow->match);
> > -        free(lflow->actions);
> > -        free(lflow->io_port);
> > -        free(lflow->stage_hint);
> > -        free(lflow->ctrl_meter);
> > -        struct lflow_ref_node *l;
> > -        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
> > -            ovs_list_remove(&l->lflow_list_node);
> > -            ovs_list_remove(&l->ref_list_node);
> > -            free(l);
> > -        }
> > -        free(lflow);
> > -    }
> > -}
> > -
> > -static void
> > -link_ovn_port_to_lflows(struct ovn_port *op, struct ovs_list *lflows)
> > -{
> > -    struct ovn_lflow *f;
> > -    LIST_FOR_EACH (f, list_node, lflows) {
> > -        struct lflow_ref_node *lfrn = xmalloc(sizeof *lfrn);
> > -        lfrn->lflow = f;
> > -        ovs_list_insert(&op->lflows, &lfrn->lflow_list_node);
> > -        ovs_list_insert(&f->referenced_by, &lfrn->ref_list_node);
> > -    }
> > -}
> > +thread_local size_t thread_lflow_counter = 0;
> >
> >  static bool
> >  build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
> > @@ -6604,8 +5953,8 @@ build_dhcpv6_action(struct ovn_port *op, struct in6_addr *offer_ip,
> >   * build_lswitch_lflows_admission_control() handles the port security.
> >   */
> >  static void
> > -build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> > -                                struct ds *actions, struct ds *match)
> > +build_lswitch_port_sec_op(struct ovn_port *op, struct lflow_table *lflows,
> > +                          struct ds *actions, struct ds *match)
> >  {
> >      ovs_assert(op->nbsp);
> >
> > @@ -6621,13 +5970,13 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> >          ovn_lflow_add_with_lport_and_hint(
> >              lflows, op->od, S_SWITCH_IN_CHECK_PORT_SEC,
> >              100, ds_cstr(match), REGBIT_PORT_SEC_DROP" = 1; next;",
> > -            op->key, &op->nbsp->header_);
> > +            op->key, &op->nbsp->header_, op->lflow_ref);
> >
> >          ds_clear(match);
> >          ds_put_format(match, "outport == %s", op->json_key);
> >          ovn_lflow_add_with_lport_and_hint(
> >              lflows, op->od, S_SWITCH_IN_L2_UNKNOWN, 50, ds_cstr(match),
> > -            debug_drop_action(), op->key, &op->nbsp->header_);
> > +            debug_drop_action(), op->key, &op->nbsp->header_, op->lflow_ref);
> >          return;
> >      }
> >
> > @@ -6643,14 +5992,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> >          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
> >                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
> >                                            ds_cstr(match), ds_cstr(actions),
> > -                                          op->key, &op->nbsp->header_);
> > +                                          op->key, &op->nbsp->header_,
> > +                                          op->lflow_ref);
> >      } else if (queue_id) {
> >          ds_put_cstr(actions,
> >                      REGBIT_PORT_SEC_DROP" = check_in_port_sec(); next;");
> >          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
> >                                            S_SWITCH_IN_CHECK_PORT_SEC, 70,
> >                                            ds_cstr(match), ds_cstr(actions),
> > -                                          op->key, &op->nbsp->header_);
> > +                                          op->key, &op->nbsp->header_,
> > +                                          op->lflow_ref);
> >
> >          if (!lsp_is_localnet(op->nbsp) && !op->od->n_localnet_ports) {
> >              return;
> > @@ -6665,7 +6016,8 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> >              ovn_lflow_add_with_lport_and_hint(lflows, op->od,
> >                                                S_SWITCH_OUT_APPLY_PORT_SEC, 100,
> >                                                ds_cstr(match), ds_cstr(actions),
> > -                                              op->key, &op->nbsp->header_);
> > +                                              op->key, &op->nbsp->header_,
> > +                                              op->lflow_ref);
> >          } else if (op->od->n_localnet_ports) {
> >              ds_put_format(match, "outport == %s && inport == %s",
> >                            op->od->localnet_ports[0]->json_key,
> > @@ -6674,15 +6026,16 @@ build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
> >                      S_SWITCH_OUT_APPLY_PORT_SEC, 110,
> >                      ds_cstr(match), ds_cstr(actions),
> >                      op->od->localnet_ports[0]->key,
> > -                    &op->od->localnet_ports[0]->nbsp->header_);
> > +                    &op->od->localnet_ports[0]->nbsp->header_,
> > +                    op->lflow_ref);
> >          }
> >      }
> >  }
> >
> >  static void
> >  build_lswitch_learn_fdb_op(
> > -        struct ovn_port *op, struct hmap *lflows,
> > -        struct ds *actions, struct ds *match)
> > +    struct ovn_port *op, struct lflow_table *lflows,
> > +    struct ds *actions, struct ds *match)
> >  {
> >      ovs_assert(op->nbsp);
> >
> > @@ -6699,7 +6052,8 @@ build_lswitch_learn_fdb_op(
> >          ovn_lflow_add_with_lport_and_hint(lflows, op->od,
> >                                            S_SWITCH_IN_LOOKUP_FDB, 100,
> >                                            ds_cstr(match), ds_cstr(actions),
> > -                                          op->key, &op->nbsp->header_);
> > +                                          op->key, &op->nbsp->header_,
> > +                                          op->lflow_ref);
> >
> >          ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
> >          ds_clear(actions);
> > @@ -6707,13 +6061,14 @@ build_lswitch_learn_fdb_op(
> >          ovn_lflow_add_with_lport_and_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB,
> >                                            100, ds_cstr(match),
> >                                            ds_cstr(actions), op->key,
> > -                                          &op->nbsp->header_);
> > +                                          &op->nbsp->header_,
> > +                                          op->lflow_ref);
> >      }
> >  }
> >
> >  static void
> >  build_lswitch_learn_fdb_od(
> > -        struct ovn_datapath *od, struct hmap *lflows)
> > +    struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
> > @@ -6727,7 +6082,7 @@ build_lswitch_learn_fdb_od(
> >   *                 (priority 100). */
> >  static void
> >  build_lswitch_output_port_sec_od(struct ovn_datapath *od,
> > -                              struct hmap *lflows)
> > +                                 struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      ovn_lflow_add(lflows, od, S_SWITCH_OUT_CHECK_PORT_SEC, 100,
> > @@ -6745,7 +6100,7 @@ static void
> >  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
> >                           bool has_stateful_acl, enum ovn_stage in_stage,
> >                           enum ovn_stage out_stage, uint16_t priority,
> > -                         struct hmap *lflows)
> > +                         struct lflow_table *lflows)
> >  {
> >      /* Can't use ct() for router ports. Consider the following configuration:
> >       * lp1(10.0.0.2) on hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
> > @@ -6767,10 +6122,10 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
> >
> >      ovn_lflow_add_with_lport_and_hint(lflows, od, in_stage, priority,
> >                                        ingress_match, ingress_action,
> > -                                      op->key, &op->nbsp->header_);
> > +                                      op->key, &op->nbsp->header_, NULL);
> >      ovn_lflow_add_with_lport_and_hint(lflows, od, out_stage, priority,
> >                                        egress_match, egress_action,
> > -                                      op->key, &op->nbsp->header_);
> > +                                      op->key, &op->nbsp->header_, NULL);
> >
> >      free(ingress_match);
> >      free(egress_match);
> > @@ -6779,7 +6134,7 @@ skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
> >  static void
> >  build_stateless_filter(const struct ovn_datapath *od,
> >                         const struct nbrec_acl *acl,
> > -                       struct hmap *lflows)
> > +                       struct lflow_table *lflows)
> >  {
> >      const char *action = REGBIT_ACL_STATELESS" = 1; next;";
> >      if (!strcmp(acl->direction, "from-lport")) {
> > @@ -6800,7 +6155,7 @@ build_stateless_filter(const struct ovn_datapath *od,
> >  static void
> >  build_stateless_filters(const struct ovn_datapath *od,
> >                          const struct ls_port_group_table *ls_port_groups,
> > -                        struct hmap *lflows)
> > +                        struct lflow_table *lflows)
> >  {
> >      for (size_t i = 0; i < od->nbs->n_acls; i++) {
> >          const struct nbrec_acl *acl = od->nbs->acls[i];
> > @@ -6828,7 +6183,7 @@ build_stateless_filters(const struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
> > +build_pre_acls(struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
> >       * allowed by default. */
> > @@ -6843,16 +6198,17 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
> >  }
> >
> >  static void
> > -build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
> > -                             const struct ls_port_group_table *ls_port_groups,
> > -                             struct hmap *lflows)
> > +build_ls_stateful_rec_pre_acls(
> > +    const struct ls_stateful_record *ls_stateful_rec,
> > +    const struct ls_port_group_table *ls_port_groups,
> > +    struct lflow_table *lflows)
> >  {
> > -    const struct ovn_datapath *od = ls_sful_rec->od;
> > +    const struct ovn_datapath *od = ls_stateful_rec->od;
> >
> >      /* If there are any stateful ACL rules in this datapath, we may
> >       * send IP packets for some (allow) filters through the conntrack action,
> >       * which handles defragmentation, in order to match L4 headers. */
> > -    if (ls_sful_rec->has_stateful_acl) {
> > +    if (ls_stateful_rec->has_stateful_acl) {
> >          for (size_t i = 0; i < od->n_router_ports; i++) {
> >              struct ovn_port *op = od->router_ports[i];
> >              if (op->enable_router_port_acl) {
> > @@ -6901,7 +6257,7 @@ build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
> >                        REGBIT_CONNTRACK_DEFRAG" = 1; next;");
> >          ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 100, "ip",
> >                        REGBIT_CONNTRACK_DEFRAG" = 1; next;");
> > -    } else if (ls_sful_rec->has_lb_vip) {
> > +    } else if (ls_stateful_rec->has_lb_vip) {
> >          /* We'll build stateless filters if there are LB rules so that
> >           * the stateless flows are not tracked in pre-lb. */
> >           build_stateless_filters(od, ls_port_groups, lflows);
> > @@ -6968,7 +6324,7 @@ build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,
> >  static void
> >  build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
> >                                    const struct shash *meter_groups,
> > -                                  struct hmap *lflows)
> > +                                  struct lflow_table *lflows)
> >  {
> >      struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
> >      if (!mcast_sw_info->enabled
> > @@ -7002,7 +6358,7 @@ build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
> >
> >  static void
> >  build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
> > -             struct hmap *lflows)
> > +             struct lflow_table *lflows)
> >  {
> >      /* Handle IGMP/MLD packets crossing AZs. */
> >      build_interconn_mcast_snoop_flows(od, meter_groups, lflows);
> > @@ -7038,7 +6394,7 @@ build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
> >
> >  static void
> >  build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
> > -                             struct hmap *lflows)
> > +                             struct lflow_table *lflows)
> >  {
> >      const struct ovn_datapath *od = ls_stateful_rec->od;
> >
> > @@ -7104,7 +6460,7 @@ build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
> >  static void
> >  build_pre_stateful(struct ovn_datapath *od,
> >                     const struct chassis_features *features,
> > -                   struct hmap *lflows)
> > +                   struct lflow_table *lflows)
> >  {
> >      /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
> >       * allowed by default. */
> > @@ -7134,11 +6490,11 @@ build_pre_stateful(struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
> > +build_acl_hints(const struct ls_stateful_record *ls_stateful_rec,
> >                  const struct chassis_features *features,
> > -                struct hmap *lflows)
> > +                struct lflow_table *lflows)
> >  {
> > -    const struct ovn_datapath *od = ls_sful_rec->od;
> > +    const struct ovn_datapath *od = ls_stateful_rec->od;
> >
> >      /* This stage builds hints for the IN/OUT_ACL stage. Based on various
> >       * combinations of ct flags packets may hit only a subset of the logical
> > @@ -7163,13 +6519,14 @@ build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
> >          const char *match;
> >
> >          /* In any case, advance to the next stage. */
> > -        if (!ls_sful_rec->has_acls && !ls_sful_rec->has_lb_vip) {
> > +        if (!ls_stateful_rec->has_acls && !ls_stateful_rec->has_lb_vip) {
> >              ovn_lflow_add(lflows, od, stage, UINT16_MAX, "1", "next;");
> >          } else {
> >              ovn_lflow_add(lflows, od, stage, 0, "1", "next;");
> >          }
> >
> > -        if (!ls_sful_rec->has_stateful_acl && !ls_sful_rec->has_lb_vip) {
> > +        if (!ls_stateful_rec->has_stateful_acl &&
> > +            !ls_stateful_rec->has_lb_vip) {
> >              continue;
> >          }
> >
> > @@ -7305,7 +6662,7 @@ build_acl_log(struct ds *actions, const struct nbrec_acl *acl,
> >  }
> >
> >  static void
> > -consider_acl(struct hmap *lflows, const struct ovn_datapath *od,
> > +consider_acl(struct lflow_table *lflows, const struct ovn_datapath *od,
> >               const struct nbrec_acl *acl, bool has_stateful,
> >               bool ct_masked_mark, const struct shash *meter_groups,
> >               uint64_t max_acl_tier, struct ds *match, struct ds *actions)
> > @@ -7533,7 +6890,7 @@ ovn_update_ipv6_options(struct hmap *lr_ports)
> >
> >  static void
> >  build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
> > -                        struct hmap *lflows,
> > +                        struct lflow_table *lflows,
> >                          const char *default_acl_action,
> >                          const struct shash *meter_groups,
> >                          struct ds *match,
> > @@ -7610,7 +6967,8 @@ build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
> >  }
> >
> >  static void
> > -build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
> > +build_acl_log_related_flows(const struct ovn_datapath *od,
> > +                            struct lflow_table *lflows,
> >                              const struct nbrec_acl *acl, bool has_stateful,
> >                              bool ct_masked_mark,
> >                              const struct shash *meter_groups,
> > @@ -7685,7 +7043,7 @@ build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
> >  static void
> >  build_acls(const struct ls_stateful_record *ls_stateful_rec,
> >             const struct chassis_features *features,
> > -           struct hmap *lflows,
> > +           struct lflow_table *lflows,
> >             const struct ls_port_group_table *ls_port_groups,
> >             const struct shash *meter_groups)
> >  {
> > @@ -7931,7 +7289,7 @@ build_acls(const struct ls_stateful_record *ls_stateful_rec,
> >  }
> >
> >  static void
> > -build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> > +build_qos(struct ovn_datapath *od, struct lflow_table *lflows) {
> >      struct ds action = DS_EMPTY_INITIALIZER;
> >
> >      ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> > @@ -7992,7 +7350,7 @@ build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> >  }
> >
> >  static void
> > -build_lb_rules_pre_stateful(struct hmap *lflows,
> > +build_lb_rules_pre_stateful(struct lflow_table *lflows,
> >                              struct ovn_lb_datapaths *lb_dps,
> >                              bool ct_lb_mark,
> >                              const struct ovn_datapaths *ls_datapaths,
> > @@ -8094,7 +7452,8 @@ build_lb_rules_pre_stateful(struct hmap *lflows,
> >   *
> >   */
> >  static void
> > -build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
> > +build_lb_affinity_lr_flows(struct lflow_table *lflows,
> > +                           const struct ovn_northd_lb *lb,
> >                             struct ovn_lb_vip *lb_vip, char *new_lb_match,
> >                             char *lb_action, const unsigned long *dp_bitmap,
> >                             const struct ovn_datapaths *lr_datapaths)
> > @@ -8280,7 +7639,7 @@ build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
> >   *
> >   */
> >  static void
> > -build_lb_affinity_ls_flows(struct hmap *lflows,
> > +build_lb_affinity_ls_flows(struct lflow_table *lflows,
> >                             struct ovn_lb_datapaths *lb_dps,
> >                             struct ovn_lb_vip *lb_vip,
> >                             const struct ovn_datapaths *ls_datapaths)
> > @@ -8423,7 +7782,7 @@ build_lb_affinity_ls_flows(struct hmap *lflows,
> >
> >  static void
> >  build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
> > -                                        struct hmap *lflows)
> > +                                        struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
> > @@ -8432,7 +7791,7 @@ build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
> >
> >  static void
> >  build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
> > -                                        struct hmap *lflows)
> > +                                        struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
> > @@ -8440,7 +7799,7 @@ build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
> > +build_lb_rules(struct lflow_table *lflows, struct ovn_lb_datapaths *lb_dps,
> >                 const struct ovn_datapaths *ls_datapaths,
> >                 const struct chassis_features *features, struct ds *match,
> >                 struct ds *action, const struct shash *meter_groups,
> > @@ -8520,7 +7879,7 @@ build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
> >  static void
> >  build_stateful(struct ovn_datapath *od,
> >                 const struct chassis_features *features,
> > -               struct hmap *lflows)
> > +               struct lflow_table *lflows)
> >  {
> >      const char *ct_block_action = features->ct_no_masked_label
> >                                    ? "ct_mark.blocked"
> > @@ -8570,7 +7929,7 @@ build_stateful(struct ovn_datapath *od,
> >
> >  static void
> >  build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
> > -                 struct hmap *lflows)
> > +                 struct lflow_table *lflows)
> >  {
> >      const struct ovn_datapath *od = ls_stateful_rec->od;
> >
> > @@ -8629,7 +7988,7 @@ build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
> >  }
> >
> >  static void
> > -build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
> > +build_vtep_hairpin(struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      if (!od->has_vtep_lports) {
> >          /* There is no need in these flows if datapath has no vtep lports. */
> > @@ -8677,7 +8036,7 @@ build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
> >
> >  /* Build logical flows for the forwarding groups */
> >  static void
> > -build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
> > +build_fwd_group_lflows(struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      if (!od->nbs->n_forwarding_groups) {
> > @@ -8858,7 +8217,8 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
> >                                          uint32_t priority,
> >                                          const struct ovn_datapath *od,
> >                                          const struct lr_nat_record *lrnat_rec,
> > -                                        struct hmap *lflows)
> > +                                        struct lflow_table *lflows,
> > +                                        struct lflow_ref *lflow_ref)
> >  {
> >      struct ds eth_src = DS_EMPTY_INITIALIZER;
> >      struct ds match = DS_EMPTY_INITIALIZER;
> > @@ -8882,8 +8242,10 @@ build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
> >      ds_put_format(&match,
> >                    "eth.src == %s && (arp.op == 1 || rarp.op == 3 || nd_ns)",
> >                    ds_cstr(&eth_src));
> > -    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, priority, ds_cstr(&match),
> > -                  "outport = \""MC_FLOOD_L2"\"; output;");
> > +    ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> > +                                 ds_cstr(&match),
> > +                                 "outport = \""MC_FLOOD_L2"\"; output;",
> > +                                 lflow_ref);
> >
> >      ds_destroy(&eth_src);
> >      ds_destroy(&match);
> > @@ -8948,11 +8310,11 @@ lrouter_port_ipv6_reachable(const struct ovn_port *op,
> >   * switching domain as regular broadcast.
> >   */
> >  static void
> > -build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
> > -                                 struct ovn_port *patch_op,
> > -                                 const struct ovn_datapath *od,
> > -                                 uint32_t priority, struct hmap *lflows,
> > -                                 const struct ovsdb_idl_row *stage_hint)
> > +build_lswitch_rport_arp_req_flow(
> > +    const char *ips, int addr_family, struct ovn_port *patch_op,
> > +    const struct ovn_datapath *od, uint32_t priority,
> > +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> > +    struct lflow_ref *lflow_ref)
> >  {
> >      struct ds match   = DS_EMPTY_INITIALIZER;
> >      struct ds actions = DS_EMPTY_INITIALIZER;
> > @@ -8966,14 +8328,17 @@ build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
> >          ds_put_format(&actions, "clone {outport = %s; output; }; "
> >                                  "outport = \""MC_FLOOD_L2"\"; output;",
> >                        patch_op->json_key);
> > -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> > -                                priority, ds_cstr(&match),
> > -                                ds_cstr(&actions), stage_hint);
> > +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> > +                                          priority, ds_cstr(&match),
> > +                                          ds_cstr(&actions), stage_hint,
> > +                                          lflow_ref);
> >      } else {
> >          ds_put_format(&actions, "outport = %s; output;", patch_op->json_key);
> > -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
> > -                                ds_cstr(&match), ds_cstr(&actions),
> > -                                stage_hint);
> > +        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
> > +                                          priority, ds_cstr(&match),
> > +                                          ds_cstr(&actions),
> > +                                          stage_hint,
> > +                                          lflow_ref);
> >      }
> >
> >      ds_destroy(&match);
> > @@ -8991,7 +8356,7 @@ static void
> >  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
> >                                    struct ovn_datapath *sw_od,
> >                                    struct ovn_port *sw_op,
> > -                                  struct hmap *lflows,
> > +                                  struct lflow_table *lflows,
> >                                    const struct ovsdb_idl_row *stage_hint)
> >  {
> >      if (!op || !op->nbrp) {
> > @@ -9009,12 +8374,12 @@ build_lswitch_rport_arp_req_flows(struct ovn_port *op,
> >      for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> >          build_lswitch_rport_arp_req_flow(
> >              op->lrp_networks.ipv4_addrs[i].addr_s, AF_INET, sw_op, sw_od, 80,
> > -            lflows, stage_hint);
> > +            lflows, stage_hint, sw_op->lflow_ref);
> >      }
> >      for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> >          build_lswitch_rport_arp_req_flow(
> >              op->lrp_networks.ipv6_addrs[i].addr_s, AF_INET6, sw_op, sw_od, 80,
> > -            lflows, stage_hint);
> > +            lflows, stage_hint, sw_op->lflow_ref);
> >      }
> >  }
> >
> > @@ -9029,7 +8394,8 @@ static void
> >  build_lswitch_rport_arp_req_flows_for_lbnats(
> >      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
> >      const struct ovn_datapath *sw_od, struct ovn_port *sw_op,
> > -    struct hmap *lflows, const struct ovsdb_idl_row *stage_hint)
> > +    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
> > +    struct lflow_ref *lflow_ref)
> >  {
> >      if (!op || !op->nbrp) {
> >          return;
> > @@ -9057,7 +8423,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >                  lrouter_port_ipv4_reachable(op, ipv4_addr)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          }
> >          SSET_FOR_EACH (ip_addr, &lr_stateful_rec->lb_ips->ips_v6_reachable) {
> > @@ -9070,7 +8436,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >                  lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          }
> >      }
> > @@ -9085,7 +8451,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >      if (sw_od->n_router_ports != sw_od->nbs->n_ports) {
> >          build_lswitch_rport_arp_req_self_orig_flow(op, 75, sw_od,
> >                                                     lr_stateful_rec->lrnat_rec,
> > -                                                   lflows);
> > +                                                   lflows, lflow_ref);
> >      }
> >
> >      for (size_t i = 0; i < lr_stateful_rec->lrnat_rec->n_nat_entries; i++) {
> > @@ -9109,14 +8475,14 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >                                 nat->external_ip)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          } else {
> >              if (!sset_contains(&lr_stateful_rec->lb_ips->ips_v4,
> >                                 nat->external_ip)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          }
> >      }
> > @@ -9143,7 +8509,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >                                 nat->external_ip)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          } else {
> >              if (!lr_stateful_rec ||
> > @@ -9151,7 +8517,7 @@ build_lswitch_rport_arp_req_flows_for_lbnats(
> >                                 nat->external_ip)) {
> >                  build_lswitch_rport_arp_req_flow(
> >                      nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
> > -                    stage_hint);
> > +                    stage_hint, lflow_ref);
> >              }
> >          }
> >      }
> > @@ -9162,7 +8528,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
> >                             struct lport_addresses *lsp_addrs,
> >                             struct ovn_port *inport, bool is_external,
> >                             const struct shash *meter_groups,
> > -                           struct hmap *lflows)
> > +                           struct lflow_table *lflows)
> >  {
> >      struct ds match = DS_EMPTY_INITIALIZER;
> >
> > @@ -9193,7 +8559,7 @@ build_dhcpv4_options_flows(struct ovn_port *op,
> >                                op->json_key);
> >              }
> >
> > -            ovn_lflow_add_with_hint__(lflows, op->od,
> > +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> >                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
> >                                        ds_cstr(&match),
> >                                        ds_cstr(&options_action),
> > @@ -9201,7 +8567,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
> >                                        copp_meter_get(COPP_DHCPV4_OPTS,
> >                                                       op->od->nbs->copp,
> >                                                       meter_groups),
> > -                                      &op->nbsp->dhcpv4_options->header_);
> > +                                      &op->nbsp->dhcpv4_options->header_,
> > +                                      op->lflow_ref);
> >              ds_clear(&match);
> >
> >              /* If REGBIT_DHCP_OPTS_RESULT is set, it means the
> > @@ -9220,7 +8587,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
> >              ovn_lflow_add_with_lport_and_hint(
> >                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
> >                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> > -                &op->nbsp->dhcpv4_options->header_);
> > +                &op->nbsp->dhcpv4_options->header_,
> > +                op->lflow_ref);
> >              ds_destroy(&options_action);
> >              ds_destroy(&response_action);
> >              ds_destroy(&ipv4_addr_match);
> > @@ -9247,7 +8615,8 @@ build_dhcpv4_options_flows(struct ovn_port *op,
> >                  ovn_lflow_add_with_lport_and_hint(
> >                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
> >                      ds_cstr(&match),dhcp_actions, op->key,
> > -                    &op->nbsp->dhcpv4_options->header_);
> > +                    &op->nbsp->dhcpv4_options->header_,
> > +                    op->lflow_ref);
> >              }
> >              break;
> >          }
> > @@ -9260,7 +8629,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >                             struct lport_addresses *lsp_addrs,
> >                             struct ovn_port *inport, bool is_external,
> >                             const struct shash *meter_groups,
> > -                           struct hmap *lflows)
> > +                           struct lflow_table *lflows)
> >  {
> >      struct ds match = DS_EMPTY_INITIALIZER;
> >
> > @@ -9282,7 +8651,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >                                op->json_key);
> >              }
> >
> > -            ovn_lflow_add_with_hint__(lflows, op->od,
> > +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> >                                        S_SWITCH_IN_DHCP_OPTIONS, 100,
> >                                        ds_cstr(&match),
> >                                        ds_cstr(&options_action),
> > @@ -9290,7 +8659,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >                                        copp_meter_get(COPP_DHCPV6_OPTS,
> >                                                       op->od->nbs->copp,
> >                                                       meter_groups),
> > -                                      &op->nbsp->dhcpv6_options->header_);
> > +                                      &op->nbsp->dhcpv6_options->header_,
> > +                                      op->lflow_ref);
> >
> >              /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the
> >               * put_dhcpv6_opts action is successful */
> > @@ -9298,7 +8668,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >              ovn_lflow_add_with_lport_and_hint(
> >                  lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
> >                  ds_cstr(&match), ds_cstr(&response_action), inport->key,
> > -                &op->nbsp->dhcpv6_options->header_);
> > +                &op->nbsp->dhcpv6_options->header_, op->lflow_ref);
> >              ds_destroy(&options_action);
> >              ds_destroy(&response_action);
> >
> > @@ -9330,7 +8700,8 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >                  ovn_lflow_add_with_lport_and_hint(
> >                      lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
> >                      ds_cstr(&match),dhcp6_actions, op->key,
> > -                    &op->nbsp->dhcpv6_options->header_);
> > +                    &op->nbsp->dhcpv6_options->header_,
> > +                    op->lflow_ref);
> >              }
> >              break;
> >          }
> > @@ -9341,7 +8712,7 @@ build_dhcpv6_options_flows(struct ovn_port *op,
> >  static void
> >  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
> >                                                   const struct ovn_port *port,
> > -                                                 struct hmap *lflows)
> > +                                                 struct lflow_table *lflows)
> >  {
> >      struct ds match = DS_EMPTY_INITIALIZER;
> >
> > @@ -9361,7 +8732,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
> >                      ovn_lflow_add_with_lport_and_hint(
> >                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
> >                          ds_cstr(&match),  debug_drop_action(), port->key,
> > -                        &op->nbsp->header_);
> > +                        &op->nbsp->header_, op->lflow_ref);
> >                  }
> >                  for (size_t l = 0; l < rp->lsp_addrs[k].n_ipv6_addrs; l++) {
> >                      ds_clear(&match);
> > @@ -9377,7 +8748,7 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
> >                      ovn_lflow_add_with_lport_and_hint(
> >                          lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
> >                          ds_cstr(&match), debug_drop_action(), port->key,
> > -                        &op->nbsp->header_);
> > +                        &op->nbsp->header_, op->lflow_ref);
> >                  }
> >
> >                  ds_clear(&match);
> > @@ -9393,7 +8764,8 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
> >                                                    100, ds_cstr(&match),
> >                                                    debug_drop_action(),
> >                                                    port->key,
> > -                                                  &op->nbsp->header_);
> > +                                                  &op->nbsp->header_,
> > +                                                  op->lflow_ref);
> >              }
> >          }
> >      }
> > @@ -9408,7 +8780,7 @@ is_vlan_transparent(const struct ovn_datapath *od)
> >
> >  static void
> >  build_lswitch_lflows_l2_unknown(struct ovn_datapath *od,
> > -                                struct hmap *lflows)
> > +                                struct lflow_table *lflows)
> >  {
> >      /* Ingress table 25/26: Destination lookup for unknown MACs. */
> >      if (od->has_unknown) {
> > @@ -9429,7 +8801,7 @@ static void
> >  build_lswitch_lflows_pre_acl_and_acl(
> >      struct ovn_datapath *od,
> >      const struct chassis_features *features,
> > -    struct hmap *lflows,
> > +    struct lflow_table *lflows,
> >      const struct shash *meter_groups)
> >  {
> >      ovs_assert(od->nbs);
> > @@ -9445,7 +8817,7 @@ build_lswitch_lflows_pre_acl_and_acl(
> >   * 100). */
> >  static void
> >  build_lswitch_lflows_admission_control(struct ovn_datapath *od,
> > -                                       struct hmap *lflows)
> > +                                       struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      /* Logical VLANs not supported. */
> > @@ -9473,7 +8845,7 @@ build_lswitch_lflows_admission_control(struct ovn_datapath *od,
> >
> >  static void
> >  build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
> > -                                          struct hmap *lflows,
> > +                                          struct lflow_table *lflows,
> >                                            struct ds *match)
> >  {
> >      ovs_assert(op->nbsp);
> > @@ -9485,14 +8857,14 @@ build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
> >      ovn_lflow_add_with_lport_and_hint(lflows, op->od,
> >                                        S_SWITCH_IN_ARP_ND_RSP, 100,
> >                                        ds_cstr(match), "next;", op->key,
> > -                                      &op->nbsp->header_);
> > +                                      &op->nbsp->header_, op->lflow_ref);
> >  }
> >
> >  /* Ingress table 19: ARP/ND responder, reply for known IPs.
> >   * (priority 50). */
> >  static void
> >  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> > -                                         struct hmap *lflows,
> > +                                         struct lflow_table *lflows,
> >                                           const struct hmap *ls_ports,
> >                                           const struct shash *meter_groups,
> >                                           struct ds *actions,
> > @@ -9577,7 +8949,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                                                S_SWITCH_IN_ARP_ND_RSP, 100,
> >                                                ds_cstr(match),
> >                                                ds_cstr(actions), vparent,
> > -                                              &vp->nbsp->header_);
> > +                                              &vp->nbsp->header_,
> > +                                              op->lflow_ref);
> >          }
> >
> >          free(tokstr);
> > @@ -9621,11 +8994,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                      "output;",
> >                      op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
> >                      op->lsp_addrs[i].ipv4_addrs[j].addr_s);
> > -                ovn_lflow_add_with_hint(lflows, op->od,
> > -                                        S_SWITCH_IN_ARP_ND_RSP, 50,
> > -                                        ds_cstr(match),
> > -                                        ds_cstr(actions),
> > -                                        &op->nbsp->header_);
> > +                ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                                  S_SWITCH_IN_ARP_ND_RSP, 50,
> > +                                                  ds_cstr(match),
> > +                                                  ds_cstr(actions),
> > +                                                  &op->nbsp->header_,
> > +                                                  op->lflow_ref);
> >
> >                  /* Do not reply to an ARP request from the port that owns
> >                   * the address (otherwise a DHCP client that ARPs to check
> > @@ -9644,7 +9018,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                                                    S_SWITCH_IN_ARP_ND_RSP,
> >                                                    100, ds_cstr(match),
> >                                                    "next;", op->key,
> > -                                                  &op->nbsp->header_);
> > +                                                  &op->nbsp->header_,
> > +                                                  op->lflow_ref);
> >              }
> >
> >              /* For ND solicitations, we need to listen for both the
> > @@ -9674,15 +9049,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
> >                          op->lsp_addrs[i].ipv6_addrs[j].addr_s,
> >                          op->lsp_addrs[i].ea_s);
> > -                ovn_lflow_add_with_hint__(lflows, op->od,
> > -                                          S_SWITCH_IN_ARP_ND_RSP, 50,
> > -                                          ds_cstr(match),
> > -                                          ds_cstr(actions),
> > -                                          NULL,
> > -                                          copp_meter_get(COPP_ND_NA,
> > -                                              op->od->nbs->copp,
> > -                                              meter_groups),
> > -                                          &op->nbsp->header_);
> > +                ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> > +                                                    S_SWITCH_IN_ARP_ND_RSP, 50,
> > +                                                    ds_cstr(match),
> > +                                                    ds_cstr(actions),
> > +                                                    NULL,
> > +                                                    copp_meter_get(COPP_ND_NA,
> > +                                                        op->od->nbs->copp,
> > +                                                        meter_groups),
> > +                                                    &op->nbsp->header_,
> > +                                                    op->lflow_ref);
> >
> >                  /* Do not reply to a solicitation from the port that owns
> >                   * the address (otherwise DAD detection will fail). */
> > @@ -9691,7 +9067,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                                                    S_SWITCH_IN_ARP_ND_RSP,
> >                                                    100, ds_cstr(match),
> >                                                    "next;", op->key,
> > -                                                  &op->nbsp->header_);
> > +                                                  &op->nbsp->header_,
> > +                                                  op->lflow_ref);
> >              }
> >          }
> >      }
> > @@ -9737,8 +9114,12 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                  ea_s,
> >                  ea_s);
> >
> > -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
> > -                30, ds_cstr(match), ds_cstr(actions), &op->nbsp->header_);
> > +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                              S_SWITCH_IN_ARP_ND_RSP,
> > +                                              30, ds_cstr(match),
> > +                                              ds_cstr(actions),
> > +                                              &op->nbsp->header_,
> > +                                              op->lflow_ref);
> >          }
> >
> >          /* Add IPv6 NDP responses.
> > @@ -9781,15 +9162,16 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >                      lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
> >                      ea_s,
> >                      ea_s);
> > -            ovn_lflow_add_with_hint__(lflows, op->od,
> > -                                      S_SWITCH_IN_ARP_ND_RSP, 30,
> > -                                      ds_cstr(match),
> > -                                      ds_cstr(actions),
> > -                                      NULL,
> > -                                      copp_meter_get(COPP_ND_NA,
> > -                                          op->od->nbs->copp,
> > -                                          meter_groups),
> > -                                      &op->nbsp->header_);
> > +            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
> > +                                                S_SWITCH_IN_ARP_ND_RSP, 30,
> > +                                                ds_cstr(match),
> > +                                                ds_cstr(actions),
> > +                                                NULL,
> > +                                                copp_meter_get(COPP_ND_NA,
> > +                                                    op->od->nbs->copp,
> > +                                                    meter_groups),
> > +                                                &op->nbsp->header_,
> > +                                                op->lflow_ref);
> >              ds_destroy(&ip6_dst_match);
> >              ds_destroy(&nd_target_match);
> >          }
> > @@ -9800,7 +9182,7 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> >   * (priority 0)*/
> >  static void
> >  build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
> > -                                       struct hmap *lflows)
> > +                                       struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
> > @@ -9811,7 +9193,7 @@ build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
> >  static void
> >  build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
> >                                       const struct hmap *ls_ports,
> > -                                     struct hmap *lflows,
> > +                                     struct lflow_table *lflows,
> >                                       struct ds *actions,
> >                                       struct ds *match)
> >  {
> > @@ -9887,7 +9269,7 @@ build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
> >   * priority 100 flows. */
> >  static void
> >  build_lswitch_dhcp_options_and_response(struct ovn_port *op,
> > -                                        struct hmap *lflows,
> > +                                        struct lflow_table *lflows,
> >                                          const struct shash *meter_groups)
> >  {
> >      ovs_assert(op->nbsp);
> > @@ -9942,7 +9324,7 @@ build_lswitch_dhcp_options_and_response(struct ovn_port *op,
> >   * (priority 0). */
> >  static void
> >  build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
> > -                                        struct hmap *lflows)
> > +                                        struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbs);
> >      ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
> > @@ -9957,7 +9339,7 @@ build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
> >  */
> >  static void
> >  build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
> > -                                      struct hmap *lflows,
> > +                                      struct lflow_table *lflows,
> >                                        const struct shash *meter_groups)
> >  {
> >      ovs_assert(od->nbs);
> > @@ -9988,7 +9370,7 @@ build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
> >   * binding the external ports. */
> >  static void
> >  build_lswitch_external_port(struct ovn_port *op,
> > -                            struct hmap *lflows)
> > +                            struct lflow_table *lflows)
> >  {
> >      ovs_assert(op->nbsp);
> >      if (!lsp_is_external(op->nbsp)) {
> > @@ -10004,7 +9386,7 @@ build_lswitch_external_port(struct ovn_port *op,
> >   * (priority 70 - 100). */
> >  static void
> >  build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
> > -                                        struct hmap *lflows,
> > +                                        struct lflow_table *lflows,
> >                                          struct ds *actions,
> >                                          const struct shash *meter_groups)
> >  {
> > @@ -10097,7 +9479,7 @@ build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
> >   * (priority 90). */
> >  static void
> >  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
> > -                                struct hmap *lflows,
> > +                                struct lflow_table *lflows,
> >                                  struct ds *actions,
> >                                  struct ds *match)
> >  {
> > @@ -10177,7 +9559,8 @@ build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
> >
> >  /* Ingress table 25: Destination lookup, unicast handling (priority 50), */
> >  static void
> > -build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> > +build_lswitch_ip_unicast_lookup(struct ovn_port *op,
> > +                                struct lflow_table *lflows,
> >                                  struct ds *actions, struct ds *match)
> >  {
> >      ovs_assert(op->nbsp);
> > @@ -10210,10 +9593,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> >
> >              ds_clear(actions);
> >              ds_put_format(actions, action, op->json_key);
> > -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> > -                                    50, ds_cstr(match),
> > -                                    ds_cstr(actions),
> > -                                    &op->nbsp->header_);
> > +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                              S_SWITCH_IN_L2_LKUP,
> > +                                              50, ds_cstr(match),
> > +                                              ds_cstr(actions),
> > +                                              &op->nbsp->header_,
> > +                                              op->lflow_ref);
> >          } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
> >              continue;
> >          } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
> > @@ -10228,10 +9613,12 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> >
> >              ds_clear(actions);
> >              ds_put_format(actions, action, op->json_key);
> > -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> > -                                    50, ds_cstr(match),
> > -                                    ds_cstr(actions),
> > -                                    &op->nbsp->header_);
> > +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                              S_SWITCH_IN_L2_LKUP,
> > +                                              50, ds_cstr(match),
> > +                                              ds_cstr(actions),
> > +                                              &op->nbsp->header_,
> > +                                              op->lflow_ref);
> >          } else if (!strcmp(op->nbsp->addresses[i], "router")) {
> >              if (!op->peer || !op->peer->nbrp
> >                  || !ovs_scan(op->peer->nbrp->mac,
> > @@ -10283,10 +9670,11 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> >
> >              ds_clear(actions);
> >              ds_put_format(actions, action, op->json_key);
> > -            ovn_lflow_add_with_hint(lflows, op->od,
> > -                                    S_SWITCH_IN_L2_LKUP, 50,
> > -                                    ds_cstr(match), ds_cstr(actions),
> > -                                    &op->nbsp->header_);
> > +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                              S_SWITCH_IN_L2_LKUP, 50,
> > +                                              ds_cstr(match), ds_cstr(actions),
> > +                                              &op->nbsp->header_,
> > +                                              op->lflow_ref);
> >          } else {
> >              static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> >
> > @@ -10301,7 +9689,8 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
> >  static void
> >  build_lswitch_ip_unicast_lookup_for_nats(
> >      struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
> > -    struct hmap *lflows, struct ds *match, struct ds *actions)
> > +    struct lflow_table *lflows, struct ds *match, struct ds *actions,
> > +    struct lflow_ref *lflow_ref)
> >  {
> >      ovs_assert(op->nbsp);
> >
> > @@ -10334,11 +9723,12 @@ build_lswitch_ip_unicast_lookup_for_nats(
> >
> >              ds_clear(actions);
> >              ds_put_format(actions, action, op->json_key);
> > -            ovn_lflow_add_with_hint(lflows, op->od,
> > +            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> >                                      S_SWITCH_IN_L2_LKUP, 50,
> >                                      ds_cstr(match),
> >                                      ds_cstr(actions),
> > -                                    &op->nbsp->header_);
> > +                                    &op->nbsp->header_,
> > +                                    lflow_ref);
> >          }
> >      }
> >  }
> > @@ -10578,7 +9968,7 @@ get_outport_for_routing_policy_nexthop(struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
> > +build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
> >                            const struct hmap *lr_ports,
> >                            const struct nbrec_logical_router_policy *rule,
> >                            const struct ovsdb_idl_row *stage_hint)
> > @@ -10643,7 +10033,8 @@ build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
> > +build_ecmp_routing_policy_flows(struct lflow_table *lflows,
> > +                                struct ovn_datapath *od,
> >                                  const struct hmap *lr_ports,
> >                                  const struct nbrec_logical_router_policy *rule,
> >                                  uint16_t ecmp_group_id)
> > @@ -10779,7 +10170,7 @@ get_route_table_id(struct simap *route_tables, const char *route_table_name)
> >  }
> >
> >  static void
> > -build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
> > +build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows,
> >                          struct nbrec_logical_router_port *lrp,
> >                          struct simap *route_tables)
> >  {
> > @@ -11190,7 +10581,7 @@ find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports,
> >  }
> >
> >  static void
> > -add_ecmp_symmetric_reply_flows(struct hmap *lflows,
> > +add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
> >                                 struct ovn_datapath *od,
> >                                 bool ct_masked_mark,
> >                                 const char *port_ip,
> > @@ -11355,7 +10746,7 @@ add_ecmp_symmetric_reply_flows(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> > +build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
> >                        bool ct_masked_mark, const struct hmap *lr_ports,
> >                        struct ecmp_groups_node *eg)
> >
> > @@ -11442,12 +10833,12 @@ build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -add_route(struct hmap *lflows, struct ovn_datapath *od,
> > +add_route(struct lflow_table *lflows, struct ovn_datapath *od,
> >            const struct ovn_port *op, const char *lrp_addr_s,
> >            const char *network_s, int plen, const char *gateway,
> >            bool is_src_route, const uint32_t rtb_id,
> >            const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
> > -          int ofs)
> > +          int ofs, struct lflow_ref *lflow_ref)
> >  {
> >      bool is_ipv4 = strchr(network_s, '.') ? true : false;
> >      struct ds match = DS_EMPTY_INITIALIZER;
> > @@ -11490,14 +10881,17 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
> >          ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions));
> >      }
> >
> > -    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
> > -                            ds_cstr(&match), ds_cstr(&actions),
> > -                            stage_hint);
> > +    ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_ROUTER_IN_IP_ROUTING,
> > +                                      priority, ds_cstr(&match),
> > +                                      ds_cstr(&actions), stage_hint,
> > +                                      lflow_ref);
> >      if (op && op->has_bfd) {
> >          ds_put_format(&match, " && udp.dst == 3784");
> > -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_ROUTING,
> > -                                priority + 1, ds_cstr(&match),
> > -                                ds_cstr(&common_actions), stage_hint);
> > +        ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
> > +                                          S_ROUTER_IN_IP_ROUTING,
> > +                                          priority + 1, ds_cstr(&match),
> > +                                          ds_cstr(&common_actions),
> > +                                          stage_hint, lflow_ref);
> >      }
> >      ds_destroy(&match);
> >      ds_destroy(&common_actions);
> > @@ -11505,7 +10899,7 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> > +build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
> >                          const struct hmap *lr_ports,
> >                          const struct parsed_route *route_)
> >  {
> > @@ -11531,7 +10925,7 @@ build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> >      add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
> >                lrp_addr_s, prefix_s, route_->plen, route->nexthop,
> >                route_->is_src_route, route_->route_table_id, &route->header_,
> > -              route_->is_discard_route, ofs);
> > +              route_->is_discard_route, ofs, NULL);
> >
> >      free(prefix_s);
> >  }
> > @@ -11594,7 +10988,7 @@ struct lrouter_nat_lb_flows_ctx {
> >
> >      int prio;
> >
> > -    struct hmap *lflows;
> > +    struct lflow_table *lflows;
> >      const struct shash *meter_groups;
> >  };
> >
> > @@ -11726,7 +11120,7 @@ build_lrouter_nat_flows_for_lb(
> >      struct ovn_northd_lb_vip *vips_nb,
> >      const struct ovn_datapaths *lr_datapaths,
> >      const struct lr_stateful_table *lr_stateful_table,
> > -    struct hmap *lflows,
> > +    struct lflow_table *lflows,
> >      struct ds *match, struct ds *action,
> >      const struct shash *meter_groups,
> >      const struct chassis_features *features,
> > @@ -11895,7 +11289,7 @@ build_lrouter_nat_flows_for_lb(
> >
> >  static void
> >  build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> > -                           struct hmap *lflows,
> > +                           struct lflow_table *lflows,
> >                             const struct shash *meter_groups,
> >                             const struct ovn_datapaths *ls_datapaths,
> >                             const struct chassis_features *features,
> > @@ -11956,7 +11350,7 @@ build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> >   */
> >  static void
> >  build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> > -                                  struct hmap *lflows,
> > +                                  struct lflow_table *lflows,
> >                                    const struct ovn_datapaths *lr_datapaths,
> >                                    struct ds *match)
> >  {
> > @@ -11982,7 +11376,7 @@ build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> >
> >  static void
> >  build_lrouter_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
> > -                           struct hmap *lflows,
> > +                           struct lflow_table *lflows,
> >                             const struct shash *meter_groups,
> >                             const struct ovn_datapaths *lr_datapaths,
> >                             const struct lr_stateful_table *lr_stateful_table,
> > @@ -12140,7 +11534,7 @@ lrouter_dnat_and_snat_is_stateless(const struct nbrec_nat *nat)
> >   */
> >  static inline void
> >  lrouter_nat_add_ext_ip_match(const struct ovn_datapath *od,
> > -                             struct hmap *lflows, struct ds *match,
> > +                             struct lflow_table *lflows, struct ds *match,
> >                               const struct nbrec_nat *nat,
> >                               bool is_v6, bool is_src, int cidr_bits)
> >  {
> > @@ -12207,7 +11601,7 @@ build_lrouter_arp_flow(const struct ovn_datapath *od, struct ovn_port *op,
> >                         const char *ip_address, const char *eth_addr,
> >                         struct ds *extra_match, bool drop, uint16_t priority,
> >                         const struct ovsdb_idl_row *hint,
> > -                       struct hmap *lflows)
> > +                       struct lflow_table *lflows)
> >  {
> >      struct ds match = DS_EMPTY_INITIALIZER;
> >      struct ds actions = DS_EMPTY_INITIALIZER;
> > @@ -12257,7 +11651,8 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
> >                        const char *sn_ip_address, const char *eth_addr,
> >                        struct ds *extra_match, bool drop, uint16_t priority,
> >                        const struct ovsdb_idl_row *hint,
> > -                      struct hmap *lflows, const struct shash *meter_groups)
> > +                      struct lflow_table *lflows,
> > +                      const struct shash *meter_groups)
> >  {
> >      struct ds match = DS_EMPTY_INITIALIZER;
> >      struct ds actions = DS_EMPTY_INITIALIZER;
> > @@ -12308,7 +11703,7 @@ build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
> >  static void
> >  build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
> >                                struct ovn_nat *nat_entry,
> > -                              struct hmap *lflows,
> > +                              struct lflow_table *lflows,
> >                                const struct shash *meter_groups)
> >  {
> >      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> > @@ -12331,7 +11726,7 @@ build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
> >  static void
> >  build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
> >                                     struct ovn_nat *nat_entry,
> > -                                   struct hmap *lflows,
> > +                                   struct lflow_table *lflows,
> >                                     const struct shash *meter_groups)
> >  {
> >      struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
> > @@ -12405,7 +11800,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
> >                              const struct lr_stateful_record *lr_stateful_rec,
> >                              enum ovn_stage stage,
> >                              uint16_t priority, bool drop_snat_ip,
> > -                            struct hmap *lflows)
> > +                            struct lflow_table *lflows)
> >  {
> >      struct ds match_ips = DS_EMPTY_INITIALIZER;
> >
> > @@ -12470,7 +11865,7 @@ build_lrouter_drop_own_dest(struct ovn_port *op,
> >  }
> >
> >  static void
> > -build_lrouter_force_snat_flows(struct hmap *lflows,
> > +build_lrouter_force_snat_flows(struct lflow_table *lflows,
> >                                 const struct ovn_datapath *od,
> >                                 const char *ip_version, const char *ip_addr,
> >                                 const char *context)
> > @@ -12499,7 +11894,7 @@ build_lrouter_force_snat_flows(struct hmap *lflows,
> >  static void
> >  build_lrouter_force_snat_flows_op(struct ovn_port *op,
> >                                    const struct lr_nat_record *lrnat_rec,
> > -                                  struct hmap *lflows,
> > +                                  struct lflow_table *lflows,
> >                                    struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -12571,7 +11966,7 @@ build_lrouter_force_snat_flows_op(struct ovn_port *op,
> >  }
> >
> >  static void
> > -build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
> > +build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
> >                          const struct shash *meter_groups)
> >  {
> >      if (!op->has_bfd) {
> > @@ -12626,7 +12021,7 @@ build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
> >   */
> >  static void
> >  build_adm_ctrl_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows)
> > +        struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      /* Logical VLANs not supported.
> > @@ -12670,7 +12065,7 @@ build_gateway_get_l2_hdr_size(struct ovn_port *op)
> >   * function.
> >   */
> >  static void OVS_PRINTF_FORMAT(9, 10)
> > -build_gateway_mtu_flow(struct hmap *lflows, struct ovn_port *op,
> > +build_gateway_mtu_flow(struct lflow_table *lflows, struct ovn_port *op,
> >                         enum ovn_stage stage, uint16_t prio_low,
> >                         uint16_t prio_high, struct ds *match,
> >                         struct ds *actions, const struct ovsdb_idl_row *hint,
> > @@ -12731,7 +12126,7 @@ consider_l3dgw_port_is_centralized(struct ovn_port *op)
> >   */
> >  static void
> >  build_adm_ctrl_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -12785,7 +12180,7 @@ build_adm_ctrl_flows_for_lrouter_port(
> >   * lflows for logical routers. */
> >  static void
> >  build_neigh_learning_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions,
> >          const struct shash *meter_groups)
> >  {
> > @@ -12916,7 +12311,7 @@ build_neigh_learning_flows_for_lrouter(
> >   * for logical router ports. */
> >  static void
> >  build_neigh_learning_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -12978,7 +12373,7 @@ build_neigh_learning_flows_for_lrouter_port(
> >   * Adv (RA) options and response. */
> >  static void
> >  build_ND_RA_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions,
> >          const struct shash *meter_groups)
> >  {
> > @@ -13093,7 +12488,8 @@ build_ND_RA_flows_for_lrouter_port(
> >  /* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
> >   * responder, by default goto next. (priority 0). */
> >  static void
> > -build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> > +build_ND_RA_flows_for_lrouter(struct ovn_datapath *od,
> > +                              struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
> > @@ -13104,7 +12500,7 @@ build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> >   * by default goto next. (priority 0). */
> >  static void
> >  build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
> > -                                       struct hmap *lflows)
> > +                                       struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1",
> > @@ -13132,21 +12528,23 @@ build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
> >   */
> >  static void
> >  build_ip_routing_flows_for_lrp(
> > -        struct ovn_port *op, struct hmap *lflows)
> > +        struct ovn_port *op, struct lflow_table *lflows)
> >  {
> >      ovs_assert(op->nbrp);
> >      for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> >          add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
> >                    op->lrp_networks.ipv4_addrs[i].network_s,
> >                    op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
> > -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> > +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> > +                  NULL);
> >      }
> >
> >      for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> >          add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
> >                    op->lrp_networks.ipv6_addrs[i].network_s,
> >                    op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
> > -                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
> > +                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
> > +                  NULL);
> >      }
> >  }
> >
> > @@ -13159,8 +12557,9 @@ build_ip_routing_flows_for_lrp(
> >   */
> >  static void
> >  build_ip_routing_flows_for_router_type_lsp(
> > -        struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> > -        const struct hmap *lr_ports, struct hmap *lflows)
> > +    struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> > +    const struct hmap *lr_ports, struct lflow_table *lflows,
> > +    struct lflow_ref *lflow_ref)
> >  {
> >      ovs_assert(op->nbsp);
> >      if (!lsp_is_router(op->nbsp)) {
> > @@ -13196,7 +12595,8 @@ build_ip_routing_flows_for_router_type_lsp(
> >                              laddrs->ipv4_addrs[k].network_s,
> >                              laddrs->ipv4_addrs[k].plen, NULL, false, 0,
> >                              &peer->nbrp->header_, false,
> > -                            ROUTE_PRIO_OFFSET_CONNECTED);
> > +                            ROUTE_PRIO_OFFSET_CONNECTED,
> > +                            lflow_ref);
> >                  }
> >              }
> >              destroy_routable_addresses(&ra);
> > @@ -13208,7 +12608,7 @@ build_ip_routing_flows_for_router_type_lsp(
> >  static void
> >  build_static_route_flows_for_lrouter(
> >          struct ovn_datapath *od, const struct chassis_features *features,
> > -        struct hmap *lflows, const struct hmap *lr_ports,
> > +        struct lflow_table *lflows, const struct hmap *lr_ports,
> >          const struct hmap *bfd_connections)
> >  {
> >      ovs_assert(od->nbr);
> > @@ -13272,7 +12672,7 @@ build_static_route_flows_for_lrouter(
> >   */
> >  static void
> >  build_mcast_lookup_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(od->nbr);
> > @@ -13373,7 +12773,7 @@ build_mcast_lookup_flows_for_lrouter(
> >   * advances to the next table for ARP/ND resolution. */
> >  static void
> >  build_ingress_policy_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          const struct hmap *lr_ports)
> >  {
> >      ovs_assert(od->nbr);
> > @@ -13407,7 +12807,7 @@ build_ingress_policy_flows_for_lrouter(
> >  /* Local router ingress table ARP_RESOLVE: ARP Resolution. */
> >  static void
> >  build_arp_resolve_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows)
> > +        struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      /* Multicast packets already have the outport set so just advance to
> > @@ -13425,10 +12825,12 @@ build_arp_resolve_flows_for_lrouter(
> >  }
> >
> >  static void
> > -routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
> > +routable_addresses_to_lflows(struct lflow_table *lflows,
> > +                             struct ovn_port *router_port,
> >                               struct ovn_port *peer,
> >                               const struct lr_stateful_record *lr_stateful_rec,
> > -                             struct ds *match, struct ds *actions)
> > +                             struct ds *match, struct ds *actions,
> > +                             struct lflow_ref *lflow_ref)
> >  {
> >      struct ovn_port_routable_addresses ra =
> >          get_op_routable_addresses(router_port, lr_stateful_rec);
> > @@ -13452,8 +12854,9 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
> >
> >          ds_clear(actions);
> >          ds_put_format(actions, "eth.dst = %s; next;", ra.laddrs[i].ea_s);
> > -        ovn_lflow_add(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE, 100,
> > -                      ds_cstr(match), ds_cstr(actions));
> > +        ovn_lflow_add_with_lflow_ref(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE,
> > +                                     100, ds_cstr(match), ds_cstr(actions),
> > +                                     lflow_ref);
> >      }
> >      destroy_routable_addresses(&ra);
> >  }
> > @@ -13470,7 +12873,8 @@ routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
> >
> >  /* This function adds ARP resolve flows related to a LRP. */
> >  static void
> > -build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
> > +build_arp_resolve_flows_for_lrp(struct ovn_port *op,
> > +                                struct lflow_table *lflows,
> >                                  struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -13545,7 +12949,7 @@ build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
> >  /* This function adds ARP resolve flows related to a LSP. */
> >  static void
> >  build_arp_resolve_flows_for_lsp(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          const struct hmap *lr_ports,
> >          struct ds *match, struct ds *actions)
> >  {
> > @@ -13587,11 +12991,12 @@ build_arp_resolve_flows_for_lsp(
> >
> >                      ds_clear(actions);
> >                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> > -                    ovn_lflow_add_with_hint(lflows, peer->od,
> > +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
> >                                              S_ROUTER_IN_ARP_RESOLVE, 100,
> >                                              ds_cstr(match),
> >                                              ds_cstr(actions),
> > -                                            &op->nbsp->header_);
> > +                                            &op->nbsp->header_,
> > +                                            op->lflow_ref);
> >                  }
> >              }
> >
> > @@ -13618,11 +13023,12 @@ build_arp_resolve_flows_for_lsp(
> >
> >                      ds_clear(actions);
> >                      ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> > -                    ovn_lflow_add_with_hint(lflows, peer->od,
> > +                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
> >                                              S_ROUTER_IN_ARP_RESOLVE, 100,
> >                                              ds_cstr(match),
> >                                              ds_cstr(actions),
> > -                                            &op->nbsp->header_);
> > +                                            &op->nbsp->header_,
> > +                                            op->lflow_ref);
> >                  }
> >              }
> >          }
> > @@ -13666,10 +13072,11 @@ build_arp_resolve_flows_for_lsp(
> >                  ds_clear(actions);
> >                  ds_put_format(actions, "eth.dst = %s; next;",
> >                                            router_port->lrp_networks.ea_s);
> > -                ovn_lflow_add_with_hint(lflows, peer->od,
> > +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
> >                                          S_ROUTER_IN_ARP_RESOLVE, 100,
> >                                          ds_cstr(match), ds_cstr(actions),
> > -                                        &op->nbsp->header_);
> > +                                        &op->nbsp->header_,
> > +                                        op->lflow_ref);
> >              }
> >
> >              if (router_port->lrp_networks.n_ipv6_addrs) {
> > @@ -13682,10 +13089,11 @@ build_arp_resolve_flows_for_lsp(
> >                  ds_clear(actions);
> >                  ds_put_format(actions, "eth.dst = %s; next;",
> >                                router_port->lrp_networks.ea_s);
> > -                ovn_lflow_add_with_hint(lflows, peer->od,
> > +                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
> >                                          S_ROUTER_IN_ARP_RESOLVE, 100,
> >                                          ds_cstr(match), ds_cstr(actions),
> > -                                        &op->nbsp->header_);
> > +                                        &op->nbsp->header_,
> > +                                        op->lflow_ref);
> >              }
> >          }
> >      }
> > @@ -13693,10 +13101,11 @@ build_arp_resolve_flows_for_lsp(
> >
> >  static void
> >  build_arp_resolve_flows_for_lsp_routable_addresses(
> > -        struct ovn_port *op, struct hmap *lflows,
> > -        const struct hmap *lr_ports,
> > -        const struct lr_stateful_table *lr_stateful_table,
> > -        struct ds *match, struct ds *actions)
> > +    struct ovn_port *op, struct lflow_table *lflows,
> > +    const struct hmap *lr_ports,
> > +    const struct lr_stateful_table *lr_stateful_table,
> > +    struct ds *match, struct ds *actions,
> > +    struct lflow_ref *lflow_ref)
> >  {
> >      if (!lsp_is_router(op->nbsp)) {
> >          return;
> > @@ -13730,13 +13139,15 @@ build_arp_resolve_flows_for_lsp_routable_addresses(
> >              lr_stateful_rec = lr_stateful_table_find_by_index(
> >                  lr_stateful_table, router_port->od->index);
> >              routable_addresses_to_lflows(lflows, router_port, peer,
> > -                                         lr_stateful_rec, match, actions);
> > +                                         lr_stateful_rec, match, actions,
> > +                                         lflow_ref);
> >          }
> >      }
> >  }
> >
> >  static void
> > -build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
> > +build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu,
> > +                            struct lflow_table *lflows,
> >                              const struct shash *meter_groups, struct ds *match,
> >                              struct ds *actions, enum ovn_stage stage,
> >                              struct ovn_port *outport)
> > @@ -13829,7 +13240,7 @@ build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
> >
> >  static void
> >  build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
> > -                                  struct hmap *lflows,
> > +                                  struct lflow_table *lflows,
> >                                    const struct hmap *lr_ports,
> >                                    const struct shash *meter_groups,
> >                                    struct ds *match,
> > @@ -13879,7 +13290,7 @@ build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
> >   * */
> >  static void
> >  build_check_pkt_len_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          const struct hmap *lr_ports,
> >          struct ds *match, struct ds *actions,
> >          const struct shash *meter_groups)
> > @@ -13906,7 +13317,7 @@ build_check_pkt_len_flows_for_lrouter(
> >  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
> >  static void
> >  build_gateway_redirect_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(od->nbr);
> > @@ -13950,8 +13361,8 @@ build_gateway_redirect_flows_for_lrouter(
> >  /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
> >  static void
> >  build_lr_gateway_redirect_flows_for_nats(
> > -    const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> > -    struct hmap *lflows, struct ds *match, struct ds *actions)
> > +        const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
> > +        struct lflow_table *lflows, struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(od->nbr);
> >      for (size_t i = 0; i < od->n_l3dgw_ports; i++) {
> > @@ -14020,7 +13431,7 @@ build_lr_gateway_redirect_flows_for_nats(
> >   * and sends an ARP/IPv6 NA request (priority 100). */
> >  static void
> >  build_arp_request_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows,
> > +        struct ovn_datapath *od, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions,
> >          const struct shash *meter_groups)
> >  {
> > @@ -14098,7 +13509,7 @@ build_arp_request_flows_for_lrouter(
> >   */
> >  static void
> >  build_egress_delivery_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -14140,7 +13551,7 @@ build_egress_delivery_flows_for_lrouter_port(
> >
> >  static void
> >  build_misc_local_traffic_drop_flows_for_lrouter(
> > -        struct ovn_datapath *od, struct hmap *lflows)
> > +        struct ovn_datapath *od, struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >      /* Allow IGMP and MLD packets (with TTL = 1) if the router is
> > @@ -14222,7 +13633,7 @@ build_misc_local_traffic_drop_flows_for_lrouter(
> >
> >  static void
> >  build_dhcpv6_reply_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match)
> >  {
> >      ovs_assert(op->nbrp);
> > @@ -14242,7 +13653,7 @@ build_dhcpv6_reply_flows_for_lrouter_port(
> >
> >  static void
> >  build_ipv6_input_flows_for_lrouter_port(
> > -        struct ovn_port *op, struct hmap *lflows,
> > +        struct ovn_port *op, struct lflow_table *lflows,
> >          struct ds *match, struct ds *actions,
> >          const struct shash *meter_groups)
> >  {
> > @@ -14411,7 +13822,7 @@ build_ipv6_input_flows_for_lrouter_port(
> >  static void
> >  build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
> >                                    const struct lr_nat_record *lrnat_rec,
> > -                                  struct hmap *lflows,
> > +                                  struct lflow_table *lflows,
> >                                    const struct shash *meter_groups)
> >  {
> >      ovs_assert(od->nbr);
> > @@ -14463,7 +13874,7 @@ build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
> >  /* Logical router ingress table 3: IP Input for IPv4. */
> >  static void
> >  build_lrouter_ipv4_ip_input(struct ovn_port *op,
> > -                            struct hmap *lflows,
> > +                            struct lflow_table *lflows,
> >                              struct ds *match, struct ds *actions,
> >                              const struct shash *meter_groups)
> >  {
> > @@ -14667,7 +14078,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
> >  /* Logical router ingress table 3: IP Input for IPv4. */
> >  static void
> >  build_lrouter_ipv4_ip_input_for_lbnats(
> > -    struct ovn_port *op, struct hmap *lflows,
> > +    struct ovn_port *op, struct lflow_table *lflows,
> >      const struct lr_stateful_record *lr_stateful_rec,
> >      struct ds *match, const struct shash *meter_groups)
> >  {
> > @@ -14787,7 +14198,7 @@ build_lrouter_in_unsnat_match(const struct ovn_datapath *od,
> >  }
> >
> >  static void
> > -build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
> > +build_lrouter_in_unsnat_stateless_flow(struct lflow_table *lflows,
> >                                         const struct ovn_datapath *od,
> >                                         const struct nbrec_nat *nat,
> >                                         struct ds *match,
> > @@ -14809,7 +14220,7 @@ build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
> > +build_lrouter_in_unsnat_in_czone_flow(struct lflow_table *lflows,
> >                                        const struct ovn_datapath *od,
> >                                        const struct nbrec_nat *nat,
> >                                        struct ds *match, bool distributed_nat,
> > @@ -14843,7 +14254,7 @@ build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_in_unsnat_flow(struct hmap *lflows,
> > +build_lrouter_in_unsnat_flow(struct lflow_table *lflows,
> >                               const struct ovn_datapath *od,
> >                               const struct nbrec_nat *nat, struct ds *match,
> >                               bool distributed_nat, bool is_v6,
> > @@ -14865,7 +14276,7 @@ build_lrouter_in_unsnat_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_in_dnat_flow(struct hmap *lflows,
> > +build_lrouter_in_dnat_flow(struct lflow_table *lflows,
> >                             const struct ovn_datapath *od,
> >                             const struct lr_nat_record *lrnat_rec,
> >                             const struct nbrec_nat *nat, struct ds *match,
> > @@ -14937,7 +14348,7 @@ build_lrouter_in_dnat_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_undnat_flow(struct hmap *lflows,
> > +build_lrouter_out_undnat_flow(struct lflow_table *lflows,
> >                                const struct ovn_datapath *od,
> >                                const struct nbrec_nat *nat, struct ds *match,
> >                                struct ds *actions, bool distributed_nat,
> > @@ -14988,7 +14399,7 @@ build_lrouter_out_undnat_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_is_dnat_local(struct hmap *lflows,
> > +build_lrouter_out_is_dnat_local(struct lflow_table *lflows,
> >                                  const struct ovn_datapath *od,
> >                                  const struct nbrec_nat *nat, struct ds *match,
> >                                  struct ds *actions, bool distributed_nat,
> > @@ -15019,7 +14430,7 @@ build_lrouter_out_is_dnat_local(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_snat_match(struct hmap *lflows,
> > +build_lrouter_out_snat_match(struct lflow_table *lflows,
> >                               const struct ovn_datapath *od,
> >                               const struct nbrec_nat *nat, struct ds *match,
> >                               bool distributed_nat, int cidr_bits, bool is_v6,
> > @@ -15048,7 +14459,7 @@ build_lrouter_out_snat_match(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
> > +build_lrouter_out_snat_stateless_flow(struct lflow_table *lflows,
> >                                        const struct ovn_datapath *od,
> >                                        const struct nbrec_nat *nat,
> >                                        struct ds *match, struct ds *actions,
> > @@ -15091,7 +14502,7 @@ build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
> > +build_lrouter_out_snat_in_czone_flow(struct lflow_table *lflows,
> >                                       const struct ovn_datapath *od,
> >                                       const struct nbrec_nat *nat,
> >                                       struct ds *match,
> > @@ -15153,7 +14564,7 @@ build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_out_snat_flow(struct hmap *lflows,
> > +build_lrouter_out_snat_flow(struct lflow_table *lflows,
> >                              const struct ovn_datapath *od,
> >                              const struct nbrec_nat *nat, struct ds *match,
> >                              struct ds *actions, bool distributed_nat,
> > @@ -15199,7 +14610,7 @@ build_lrouter_out_snat_flow(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
> > +build_lrouter_ingress_nat_check_pkt_len(struct lflow_table *lflows,
> >                                          const struct nbrec_nat *nat,
> >                                          const struct ovn_datapath *od,
> >                                          bool is_v6, struct ds *match,
> > @@ -15271,7 +14682,7 @@ build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
> >  }
> >
> >  static void
> > -build_lrouter_ingress_flow(struct hmap *lflows,
> > +build_lrouter_ingress_flow(struct lflow_table *lflows,
> >                             const struct ovn_datapath *od,
> >                             const struct nbrec_nat *nat, struct ds *match,
> >                             struct ds *actions, struct eth_addr mac,
> > @@ -15451,7 +14862,7 @@ lrouter_check_nat_entry(const struct ovn_datapath *od,
> >
> >  /* NAT, Defrag and load balancing. */
> >  static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
> > -                                                     struct hmap *lflows)
> > +                                                struct lflow_table *lflows)
> >  {
> >      ovs_assert(od->nbr);
> >
> > @@ -15476,7 +14887,8 @@ static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
> >
> >  static void
> >  build_lrouter_nat_defrag_and_lb(
> > -    const struct lr_stateful_record *lr_stateful_rec, struct hmap *lflows,
> > +    const struct lr_stateful_record *lr_stateful_rec,
> > +    struct lflow_table *lflows,
> >      const struct hmap *ls_ports, const struct hmap *lr_ports,
> >      struct ds *match, struct ds *actions,
> >      const struct shash *meter_groups,
> > @@ -15858,31 +15270,30 @@ build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
> >                              const struct lr_stateful_record *lr_stateful_rec,
> >                              const struct lr_stateful_table *lr_stateful_table,
> >                              const struct hmap *lr_ports,
> > -                            struct hmap *lflows,
> > +                            struct lflow_table *lflows,
> >                              struct ds *match,
> > -                            struct ds *actions)
> > +                            struct ds *actions,
> > +                            struct lflow_ref *lflow_ref)
> >  {
> >      ovs_assert(lsp->nbsp);
> >      ovs_assert(lsp->peer);
> > -    start_collecting_lflows();
> >      build_lswitch_rport_arp_req_flows_for_lbnats(
> >          lsp->peer, lr_stateful_rec, lsp->od, lsp,
> > -        lflows, &lsp->nbsp->header_);
> > +        lflows, &lsp->nbsp->header_, lflow_ref);
> >      build_ip_routing_flows_for_router_type_lsp(lsp, lr_stateful_table,
> > -                                               lr_ports, lflows);
> > +                                               lr_ports, lflows,
> > +                                               lflow_ref);
> >      build_arp_resolve_flows_for_lsp_routable_addresses(
> > -        lsp, lflows, lr_ports, lr_stateful_table, match, actions);
> > +        lsp, lflows, lr_ports, lr_stateful_table, match, actions, lflow_ref);
> >      build_lswitch_ip_unicast_lookup_for_nats(lsp, lr_stateful_rec, lflows,
> > -                                             match, actions);
> > -    link_ovn_port_to_lflows(lsp, &collected_lflows);
> > -    end_collecting_lflows();
> > +                                             match, actions, lflow_ref);
> >  }
> >
> >  static void
> >  build_lbnat_lflows_iterate_by_lsp(
> >      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> >      const struct hmap *lr_ports, struct ds *match, struct ds *actions,
> > -    struct hmap *lflows)
> > +    struct lflow_table *lflows)
> >  {
> >      ovs_assert(op->nbsp);
> >
> > @@ -15895,8 +15306,9 @@ build_lbnat_lflows_iterate_by_lsp(
> >                                                        op->peer->od->index);
> >      ovs_assert(lr_stateful_rec);
> >
> > -    build_lsp_lflows_for_lbnats(op, lr_stateful_rec, lr_stateful_table,
> > -                                lr_ports, lflows, match, actions);
> > +    build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
> > +                                lr_stateful_table, lr_ports, lflows,
> > +                                match, actions, op->stateful_lflow_ref);
> >  }
> >
> >  static void
> > @@ -15904,7 +15316,7 @@ build_lrp_lflows_for_lbnats(struct ovn_port *op,
> >                              const struct lr_stateful_record *lr_stateful_rec,
> >                              const struct shash *meter_groups,
> >                              struct ds *match, struct ds *actions,
> > -                            struct hmap *lflows)
> > +                            struct lflow_table *lflows)
> >  {
> >      /* Drop IP traffic destined to router owned IPs except if the IP is
> >       * also a SNAT IP. Those are dropped later, in stage
> > @@ -15939,7 +15351,7 @@ static void
> >  build_lbnat_lflows_iterate_by_lrp(
> >      struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
> >      const struct shash *meter_groups, struct ds *match,
> > -    struct ds *actions, struct hmap *lflows)
> > +    struct ds *actions, struct lflow_table *lflows)
> >  {
> >      ovs_assert(op->nbrp);
> >
> > @@ -15954,7 +15366,7 @@ build_lbnat_lflows_iterate_by_lrp(
> >
> >  static void
> >  build_lr_stateful_flows(const struct lr_stateful_record *lr_stateful_rec,
> > -                        struct hmap *lflows,
> > +                        struct lflow_table *lflows,
> >                          const struct hmap *ls_ports,
> >                          const struct hmap *lr_ports,
> >                          struct ds *match,
> > @@ -15978,7 +15390,7 @@ build_ls_stateful_flows(const struct ls_stateful_record *ls_stateful_rec,
> >                        const struct ls_port_group_table *ls_pgs,
> >                        const struct chassis_features *features,
> >                        const struct shash *meter_groups,
> > -                      struct hmap *lflows)
> > +                      struct lflow_table *lflows)
> >  {
> >      ovs_assert(ls_stateful_rec->od);
> >
> > @@ -15997,7 +15409,7 @@ struct lswitch_flow_build_info {
> >      const struct ls_port_group_table *ls_port_groups;
> >      const struct lr_stateful_table *lr_stateful_table;
> >      const struct ls_stateful_table *ls_stateful_table;
> > -    struct hmap *lflows;
> > +    struct lflow_table *lflows;
> >      struct hmap *igmp_groups;
> >      const struct shash *meter_groups;
> >      const struct hmap *lb_dps_map;
> > @@ -16080,10 +15492,9 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
> >                                           const struct shash *meter_groups,
> >                                           struct ds *match,
> >                                           struct ds *actions,
> > -                                         struct hmap *lflows)
> > +                                         struct lflow_table *lflows)
> >  {
> >      ovs_assert(op->nbsp);
> > -    start_collecting_lflows();
> >
> >      /* Build Logical Switch Flows. */
> >      build_lswitch_port_sec_op(op, lflows, actions, match);
> > @@ -16098,9 +15509,6 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
> >
> >      /* Build Logical Router Flows. */
> >      build_arp_resolve_flows_for_lsp(op, lflows, lr_ports, match, actions);
> > -
> > -    link_ovn_port_to_lflows(op, &collected_lflows);
> > -    end_collecting_lflows();
> >  }
> >
> >  /* Helper function to combine all lflow generation which is iterated by logical
> > @@ -16307,7 +15715,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
> >      /* Do nothing */
> >  }
> >
> > -/* Fixes the hmap size (hmap->n) after parallel building the lflow_map when
> > +/* Fixes the lflow table's hmap size (hmap->n) after parallel building when
> >   * dp-groups is enabled, because in that case all threads are updating the
> >   * global lflow hmap. Although the lflow_hash_lock prevents currently inserting
> >   * to the same hash bucket, the hmap->n is updated currently by all threads and
> > @@ -16317,7 +15725,7 @@ noop_callback(struct worker_pool *pool OVS_UNUSED,
> >   * after the worker threads complete the tasks in each iteration before any
> >   * future operations on the lflow map. */
> >  static void
> > -fix_flow_map_size(struct hmap *lflow_map,
> > +fix_flow_table_size(struct lflow_table *lflow_table,
> >                    struct lswitch_flow_build_info *lsiv,
> >                    size_t n_lsiv)
> >  {
> > @@ -16325,7 +15733,7 @@ fix_flow_map_size(struct hmap *lflow_map,
> >      for (size_t i = 0; i < n_lsiv; i++) {
> >          total += lsiv[i].thread_lflow_counter;
> >      }
> > -    lflow_map->n = total;
> > +    lflow_table_set_size(lflow_table, total);
> >  }
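The renamed `fix_flow_table_size()` keeps the same aggregation scheme as before: each worker thread counts its own insertions, and the main thread sums the counters once the pool drains. A minimal sketch of that summation, with `struct build_info` as a simplified stand-in for `struct lswitch_flow_build_info`:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct lswitch_flow_build_info: only the
 * per-thread insertion counter matters for the size fix-up. */
struct build_info {
    size_t thread_lflow_counter;
};

/* Sum the per-thread counters to recover the true element count that
 * the concurrently updated hmap->n could not track reliably. */
static size_t
sum_thread_counters(const struct build_info *lsiv, size_t n_lsiv)
{
    size_t total = 0;
    for (size_t i = 0; i < n_lsiv; i++) {
        total += lsiv[i].thread_lflow_counter;
    }
    return total;
}
```

The result is what `lflow_table_set_size()` installs as the table's element count before any further map operations.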
> >
> >  static void
> > @@ -16337,7 +15745,7 @@ build_lswitch_and_lrouter_flows(
> >      const struct ls_port_group_table *ls_pgs,
> >      const struct lr_stateful_table *lr_stateful_table,
> >      const struct ls_stateful_table *ls_stateful_table,
> > -    struct hmap *lflows,
> > +    struct lflow_table *lflows,
> >      struct hmap *igmp_groups,
> >      const struct shash *meter_groups,
> >      const struct hmap *lb_dps_map,
> > @@ -16384,7 +15792,7 @@ build_lswitch_and_lrouter_flows(
> >
> >          /* Run thread pool. */
> >          run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
> > -        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
> > +        fix_flow_table_size(lflows, lsiv, build_lflows_pool->size);
> >
> >          for (index = 0; index < build_lflows_pool->size; index++) {
> >              ds_destroy(&lsiv[index].match);
> > @@ -16498,24 +15906,6 @@ build_lswitch_and_lrouter_flows(
> >      free(svc_check_match);
> >  }
> >
> > -static ssize_t max_seen_lflow_size = 128;
> > -
> > -void
> > -lflow_data_init(struct lflow_data *data)
> > -{
> > -    fast_hmap_size_for(&data->lflows, max_seen_lflow_size);
> > -}
> > -
> > -void
> > -lflow_data_destroy(struct lflow_data *data)
> > -{
> > -    struct ovn_lflow *lflow;
> > -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &data->lflows) {
> > -        ovn_lflow_destroy(&data->lflows, lflow);
> > -    }
> > -    hmap_destroy(&data->lflows);
> > -}
> > -
> >  void run_update_worker_pool(int n_threads)
> >  {
> >      /* If number of threads has been updated (or initially set),
> > @@ -16561,7 +15951,7 @@ create_sb_multicast_group(struct ovsdb_idl_txn *ovnsb_txn,
> >   * constructing their contents based on the OVN_NB database. */
> >  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
> >                    struct lflow_input *input_data,
> > -                  struct hmap *lflows)
> > +                  struct lflow_table *lflows)
> >  {
> >      struct hmap mcast_groups;
> >      struct hmap igmp_groups;
> > @@ -16592,281 +15982,26 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
> >      }
> >
> >      /* Parallel build may result in a suboptimal hash. Resize the
> > -     * hash to a correct size before doing lookups */
> > -
> > -    hmap_expand(lflows);
> > -
> > -    if (hmap_count(lflows) > max_seen_lflow_size) {
> > -        max_seen_lflow_size = hmap_count(lflows);
> > -    }
> > -
> > -    stopwatch_start(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> > -    /* Collecting all unique datapath groups. */
> > -    struct hmap ls_dp_groups = HMAP_INITIALIZER(&ls_dp_groups);
> > -    struct hmap lr_dp_groups = HMAP_INITIALIZER(&lr_dp_groups);
> > -    struct hmap single_dp_lflows;
> > -
> > -    /* Single dp_flows will never grow bigger than lflows,
> > -     * thus the two hmaps will remain the same size regardless
> > -     * of how many elements we remove from lflows and add to
> > -     * single_dp_lflows.
> > -     * Note - lflows is always sized for at least 128 flows.
> > -     */
> > -    fast_hmap_size_for(&single_dp_lflows, max_seen_lflow_size);
> > -
> > -    struct ovn_lflow *lflow;
> > -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> > -        struct ovn_datapath **datapaths_array;
> > -        size_t n_datapaths;
> > -
> > -        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> > -            n_datapaths = ods_size(input_data->ls_datapaths);
> > -            datapaths_array = input_data->ls_datapaths->array;
> > -        } else {
> > -            n_datapaths = ods_size(input_data->lr_datapaths);
> > -            datapaths_array = input_data->lr_datapaths->array;
> > -        }
> > -
> > -        lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> > -
> > -        ovs_assert(lflow->n_ods);
> > -
> > -        if (lflow->n_ods == 1) {
> > -            /* There is only one datapath, so it should be moved out of the
> > -             * group to a single 'od'. */
> > -            size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> > -                                       n_datapaths);
> > -
> > -            bitmap_set0(lflow->dpg_bitmap, index);
> > -            lflow->od = datapaths_array[index];
> > -
> > -            /* Logical flow should be re-hashed to allow lookups. */
> > -            uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> > -            /* Remove from lflows. */
> > -            hmap_remove(lflows, &lflow->hmap_node);
> > -            hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> > -                                                  hash);
> > -            /* Add to single_dp_lflows. */
> > -            hmap_insert_fast(&single_dp_lflows, &lflow->hmap_node, hash);
> > -        }
> > -    }
> > -
> > -    /* Merge multiple and single dp hashes. */
> > -
> > -    fast_hmap_merge(lflows, &single_dp_lflows);
> > -
> > -    hmap_destroy(&single_dp_lflows);
> > -
> > -    stopwatch_stop(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
> > +     * lflow table to a correct size before doing lookups */
> > +    lflow_table_expand(lflows);
> > +
> >      stopwatch_start(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> > -
> > -    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
> > -    /* Push changes to the Logical_Flow table to database. */
> > -    const struct sbrec_logical_flow *sbflow;
> > -    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow,
> > -                                     input_data->sbrec_logical_flow_table) {
> > -        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
> > -        struct ovn_datapath *logical_datapath_od = NULL;
> > -        size_t i;
> > -
> > -        /* Find one valid datapath to get the datapath type. */
> > -        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
> > -        if (dp) {
> > -            logical_datapath_od = ovn_datapath_from_sbrec(
> > -                                        &input_data->ls_datapaths->datapaths,
> > -                                        &input_data->lr_datapaths->datapaths,
> > -                                        dp);
> > -            if (logical_datapath_od
> > -                && ovn_datapath_is_stale(logical_datapath_od)) {
> > -                logical_datapath_od = NULL;
> > -            }
> > -        }
> > -        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> > -            logical_datapath_od = ovn_datapath_from_sbrec(
> > -                                        &input_data->ls_datapaths->datapaths,
> > -                                        &input_data->lr_datapaths->datapaths,
> > -                                        dp_group->datapaths[i]);
> > -            if (logical_datapath_od
> > -                && !ovn_datapath_is_stale(logical_datapath_od)) {
> > -                break;
> > -            }
> > -            logical_datapath_od = NULL;
> > -        }
> > -
> > -        if (!logical_datapath_od) {
> > -            /* This lflow has no valid logical datapaths. */
> > -            sbrec_logical_flow_delete(sbflow);
> > -            continue;
> > -        }
> > -
> > -        enum ovn_pipeline pipeline
> > -            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
> > -
> > -        lflow = ovn_lflow_find(
> > -            lflows, dp_group ? NULL : logical_datapath_od,
> > -            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
> > -                            pipeline, sbflow->table_id),
> > -            sbflow->priority, sbflow->match, sbflow->actions,
> > -            sbflow->controller_meter, sbflow->hash);
> > -        if (lflow) {
> > -            struct hmap *dp_groups;
> > -            size_t n_datapaths;
> > -            bool is_switch;
> > -
> > -            lflow->sb_uuid = sbflow->header_.uuid;
> > -            is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> > -            if (is_switch) {
> > -                n_datapaths = ods_size(input_data->ls_datapaths);
> > -                dp_groups = &ls_dp_groups;
> > -            } else {
> > -                n_datapaths = ods_size(input_data->lr_datapaths);
> > -                dp_groups = &lr_dp_groups;
> > -            }
> > -            if (input_data->ovn_internal_version_changed) {
> > -                const char *stage_name = smap_get_def(&sbflow->external_ids,
> > -                                                  "stage-name", "");
> > -                const char *stage_hint = smap_get_def(&sbflow->external_ids,
> > -                                                  "stage-hint", "");
> > -                const char *source = smap_get_def(&sbflow->external_ids,
> > -                                                  "source", "");
> > -
> > -                if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
> > -                    sbrec_logical_flow_update_external_ids_setkey(sbflow,
> > -                     "stage-name", ovn_stage_to_str(lflow->stage));
> > -                }
> > -                if (lflow->stage_hint) {
> > -                    if (strcmp(stage_hint, lflow->stage_hint)) {
> > -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> > -                        "stage-hint", lflow->stage_hint);
> > -                    }
> > -                }
> > -                if (lflow->where) {
> > -                    if (strcmp(source, lflow->where)) {
> > -                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
> > -                        "source", lflow->where);
> > -                    }
> > -                }
> > -            }
> > -
> > -            if (lflow->od) {
> > -                sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> > -                sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> > -            } else {
> > -                lflow->dpg = ovn_dp_group_get_or_create(
> > -                                ovnsb_txn, dp_groups, dp_group,
> > -                                lflow->n_ods, lflow->dpg_bitmap,
> > -                                n_datapaths, is_switch,
> > -                                input_data->ls_datapaths,
> > -                                input_data->lr_datapaths);
> > -
> > -                sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
> > -                sbrec_logical_flow_set_logical_dp_group(sbflow,
> > -                                                        lflow->dpg->dp_group);
> > -            }
> > -
> > -            /* This lflow updated.  Not needed anymore. */
> > -            hmap_remove(lflows, &lflow->hmap_node);
> > -            hmap_insert(&lflows_temp, &lflow->hmap_node,
> > -                        hmap_node_hash(&lflow->hmap_node));
> > -        } else {
> > -            sbrec_logical_flow_delete(sbflow);
> > -        }
> > -    }
> > -
> > -    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
> > -        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> > -        uint8_t table = ovn_stage_get_table(lflow->stage);
> > -        struct hmap *dp_groups;
> > -        size_t n_datapaths;
> > -        bool is_switch;
> > -
> > -        is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
> > -        if (is_switch) {
> > -            n_datapaths = ods_size(input_data->ls_datapaths);
> > -            dp_groups = &ls_dp_groups;
> > -        } else {
> > -            n_datapaths = ods_size(input_data->lr_datapaths);
> > -            dp_groups = &lr_dp_groups;
> > -        }
> > -
> > -        lflow->sb_uuid = uuid_random();
> > -        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> > -                                                        &lflow->sb_uuid);
> > -        if (lflow->od) {
> > -            sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> > -        } else {
> > -            lflow->dpg = ovn_dp_group_get_or_create(
> > -                                ovnsb_txn, dp_groups, NULL,
> > -                                lflow->n_ods, lflow->dpg_bitmap,
> > -                                n_datapaths, is_switch,
> > -                                input_data->ls_datapaths,
> > -                                input_data->lr_datapaths);
> > -
> > -            sbrec_logical_flow_set_logical_dp_group(sbflow,
> > -                                                    lflow->dpg->dp_group);
> > -        }
> > -
> > -        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> > -        sbrec_logical_flow_set_table_id(sbflow, table);
> > -        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> > -        sbrec_logical_flow_set_match(sbflow, lflow->match);
> > -        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> > -        if (lflow->io_port) {
> > -            struct smap tags = SMAP_INITIALIZER(&tags);
> > -            smap_add(&tags, "in_out_port", lflow->io_port);
> > -            sbrec_logical_flow_set_tags(sbflow, &tags);
> > -            smap_destroy(&tags);
> > -        }
> > -        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> > -
> > -        /* Trim the source locator lflow->where, which looks something like
> > -         * "ovn/northd/northd.c:1234", down to just the part following the
> > -         * last slash, e.g. "northd.c:1234". */
> > -        const char *slash = strrchr(lflow->where, '/');
> > -#if _WIN32
> > -        const char *backslash = strrchr(lflow->where, '\\');
> > -        if (!slash || backslash > slash) {
> > -            slash = backslash;
> > -        }
> > -#endif
> > -        const char *where = slash ? slash + 1 : lflow->where;
> > -
> > -        struct smap ids = SMAP_INITIALIZER(&ids);
> > -        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> > -        smap_add(&ids, "source", where);
> > -        if (lflow->stage_hint) {
> > -            smap_add(&ids, "stage-hint", lflow->stage_hint);
> > -        }
> > -        sbrec_logical_flow_set_external_ids(sbflow, &ids);
> > -        smap_destroy(&ids);
> > -        hmap_remove(lflows, &lflow->hmap_node);
> > -        hmap_insert(&lflows_temp, &lflow->hmap_node,
> > -                    hmap_node_hash(&lflow->hmap_node));
> > -    }
> > -    hmap_swap(lflows, &lflows_temp);
> > -    hmap_destroy(&lflows_temp);
> > +    lflow_table_sync_to_sb(lflows, ovnsb_txn, input_data->ls_datapaths,
> > +                           input_data->lr_datapaths,
> > +                           input_data->ovn_internal_version_changed,
> > +                           input_data->sbrec_logical_flow_table,
> > +                           input_data->sbrec_logical_dp_group_table);
> >
> >      stopwatch_stop(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
> > -    struct ovn_dp_group *dpg;
> > -    HMAP_FOR_EACH_POP (dpg, node, &ls_dp_groups) {
> > -        bitmap_free(dpg->bitmap);
> > -        free(dpg);
> > -    }
> > -    hmap_destroy(&ls_dp_groups);
> > -    HMAP_FOR_EACH_POP (dpg, node, &lr_dp_groups) {
> > -        bitmap_free(dpg->bitmap);
> > -        free(dpg);
> > -    }
> > -    hmap_destroy(&lr_dp_groups);
> >
> >      /* Push changes to the Multicast_Group table to database. */
> >      const struct sbrec_multicast_group *sbmc;
> > -    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (sbmc,
> > -                                input_data->sbrec_multicast_group_table) {
> > +    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (
> > +            sbmc, input_data->sbrec_multicast_group_table) {
> >          struct ovn_datapath *od = ovn_datapath_from_sbrec(
> > -                                       &input_data->ls_datapaths->datapaths,
> > -                                       &input_data->lr_datapaths->datapaths,
> > -                                       sbmc->datapath);
> > +            &input_data->ls_datapaths->datapaths,
> > +            &input_data->lr_datapaths->datapaths,
> > +            sbmc->datapath);
> >
> >          if (!od || ovn_datapath_is_stale(od)) {
> >              sbrec_multicast_group_delete(sbmc);
> > @@ -16906,120 +16041,22 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
> >      hmap_destroy(&mcast_groups);
> >  }
> >
> > -static void
> > -sync_lsp_lflows_to_sb(struct ovsdb_idl_txn *ovnsb_txn,
> > -                      struct lflow_input *lflow_input,
> > -                      struct hmap *lflows,
> > -                      struct ovn_lflow *lflow)
> > -{
> > -    size_t n_datapaths;
> > -    struct ovn_datapath **datapaths_array;
> > -    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
> > -        n_datapaths = ods_size(lflow_input->ls_datapaths);
> > -        datapaths_array = lflow_input->ls_datapaths->array;
> > -    } else {
> > -        n_datapaths = ods_size(lflow_input->lr_datapaths);
> > -        datapaths_array = lflow_input->lr_datapaths->array;
> > -    }
> > -    uint32_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
> > -    ovs_assert(n_ods == 1);
> > -    /* There is only one datapath, so it should be moved out of the
> > -     * group to a single 'od'. */
> > -    size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
> > -                               n_datapaths);
> > -
> > -    bitmap_set0(lflow->dpg_bitmap, index);
> > -    lflow->od = datapaths_array[index];
> > -
> > -    /* Logical flow should be re-hashed to allow lookups. */
> > -    uint32_t hash = hmap_node_hash(&lflow->hmap_node);
> > -    /* Remove from lflows. */
> > -    hmap_remove(lflows, &lflow->hmap_node);
> > -    hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
> > -                                          hash);
> > -    /* Add back. */
> > -    hmap_insert(lflows, &lflow->hmap_node, hash);
> > -
> > -    /* Sync to SB. */
> > -    const struct sbrec_logical_flow *sbflow;
> > -    /* Note: uuid_random acquires a global mutex. If we parallelize the sync to
> > -     * SB this may become a bottleneck. */
> > -    lflow->sb_uuid = uuid_random();
> > -    sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
> > -                                                    &lflow->sb_uuid);
> > -    const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
> > -    uint8_t table = ovn_stage_get_table(lflow->stage);
> > -    sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
> > -    sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
> > -    sbrec_logical_flow_set_pipeline(sbflow, pipeline);
> > -    sbrec_logical_flow_set_table_id(sbflow, table);
> > -    sbrec_logical_flow_set_priority(sbflow, lflow->priority);
> > -    sbrec_logical_flow_set_match(sbflow, lflow->match);
> > -    sbrec_logical_flow_set_actions(sbflow, lflow->actions);
> > -    if (lflow->io_port) {
> > -        struct smap tags = SMAP_INITIALIZER(&tags);
> > -        smap_add(&tags, "in_out_port", lflow->io_port);
> > -        sbrec_logical_flow_set_tags(sbflow, &tags);
> > -        smap_destroy(&tags);
> > -    }
> > -    sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
> > -    /* Trim the source locator lflow->where, which looks something like
> > -     * "ovn/northd/northd.c:1234", down to just the part following the
> > -     * last slash, e.g. "northd.c:1234". */
> > -    const char *slash = strrchr(lflow->where, '/');
> > -#if _WIN32
> > -    const char *backslash = strrchr(lflow->where, '\\');
> > -    if (!slash || backslash > slash) {
> > -        slash = backslash;
> > -    }
> > -#endif
> > -    const char *where = slash ? slash + 1 : lflow->where;
> > -
> > -    struct smap ids = SMAP_INITIALIZER(&ids);
> > -    smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
> > -    smap_add(&ids, "source", where);
> > -    if (lflow->stage_hint) {
> > -        smap_add(&ids, "stage-hint", lflow->stage_hint);
> > -    }
> > -    sbrec_logical_flow_set_external_ids(sbflow, &ids);
> > -    smap_destroy(&ids);
> > -}
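The source-locator trimming that `sync_lsp_lflows_to_sb()` performed (and which now lives in the lflow-mgr.c sync path) is easy to exercise standalone. A self-contained version of just that helper, using `#ifdef _WIN32` for the backslash branch:

```c
#include <assert.h>
#include <string.h>

/* Trim a source locator such as "ovn/northd/northd.c:1234" down to the
 * part after the last path separator, e.g. "northd.c:1234", matching
 * the "source" external-id written to the SB Logical_Flow row. */
static const char *
trim_source_locator(const char *where)
{
    const char *slash = strrchr(where, '/');
#ifdef _WIN32
    /* On Windows, the locator may use backslashes instead. */
    const char *backslash = strrchr(where, '\\');
    if (!slash || backslash > slash) {
        slash = backslash;
    }
#endif
    return slash ? slash + 1 : where;
}
```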
> > -
> > -static bool
> > -delete_lflow_for_lsp(struct ovn_port *op, bool is_update,
> > -                     const struct sbrec_logical_flow_table *sb_lflow_table,
> > -                     struct hmap *lflows)
> > -{
> > -    struct lflow_ref_node *lfrn;
> > -    const char *operation = is_update ? "updated" : "deleted";
> > -    LIST_FOR_EACH_SAFE (lfrn, lflow_list_node, &op->lflows) {
> > -        VLOG_DBG("Deleting SB lflow "UUID_FMT" for %s port %s",
> > -                 UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> > -
> > -        const struct sbrec_logical_flow *sblflow =
> > -            sbrec_logical_flow_table_get_for_uuid(sb_lflow_table,
> > -                                              &lfrn->lflow->sb_uuid);
> > -        if (sblflow) {
> > -            sbrec_logical_flow_delete(sblflow);
> > -        } else {
> > -            static struct vlog_rate_limit rl =
> > -                VLOG_RATE_LIMIT_INIT(1, 1);
> > -            VLOG_WARN_RL(&rl, "SB lflow "UUID_FMT" not found when handling "
> > -                         "%s port %s. Recompute.",
> > -                         UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
> > -            return false;
> > -        }
> > +void
> > +lflow_reset_northd_refs(struct lflow_input *lflow_input)
> > +{
> > +    struct ovn_port *op;
> >
> > -        ovn_lflow_destroy(lflows, lfrn->lflow);
> > +    HMAP_FOR_EACH (op, key_node, lflow_input->ls_ports) {
> > +        lflow_ref_clear(op->lflow_ref);
> > +        lflow_ref_clear(op->stateful_lflow_ref);
> >      }
> > -    return true;
> >  }
> >
> >  bool
> >  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >                                   struct tracked_ovn_ports *trk_lsps,
> >                                   struct lflow_input *lflow_input,
> > -                                 struct hmap *lflows)
> > +                                 struct lflow_table *lflows)
> >  {
> >      struct hmapx_node *hmapx_node;
> >      struct ovn_port *op;
> > @@ -17028,13 +16065,15 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >          op = hmapx_node->data;
> >          /* Make sure 'op' is an lsp and not lrp. */
> >          ovs_assert(op->nbsp);
> > -
> > -        if (!delete_lflow_for_lsp(op, false,
> > -                                  lflow_input->sbrec_logical_flow_table,
> > -                                  lflows)) {
> > -                return false;
> > -            }
> > -
> > +        bool handled = lflow_ref_resync_flows(
> > +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> > +            lflow_input->lr_datapaths,
> > +            lflow_input->ovn_internal_version_changed,
> > +            lflow_input->sbrec_logical_flow_table,
> > +            lflow_input->sbrec_logical_dp_group_table);
> > +        if (!handled) {
> > +            return false;
> > +        }
> >          /* No need to update SB multicast groups, thanks to weak
> >           * references. */
> >      }
> > @@ -17043,13 +16082,8 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >          op = hmapx_node->data;
> >          /* Make sure 'op' is an lsp and not lrp. */
> >          ovs_assert(op->nbsp);
> > -
> > -        /* Delete old lflows. */
> > -        if (!delete_lflow_for_lsp(op, true,
> > -                                  lflow_input->sbrec_logical_flow_table,
> > -                                  lflows)) {
> > -            return false;
> > -        }
> > +        /* Clear old lflows. */
> > +        lflow_ref_unlink_lflows(op->lflow_ref);
> >
> >          /* Generate new lflows. */
> >          struct ds match = DS_EMPTY_INITIALIZER;
> > @@ -17060,16 +16094,42 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >                                                   &match, &actions,
> >                                                   lflows);
> >
> > +        /* Sync the new flows to SB. */
> > +        bool handled = lflow_ref_sync_lflows(
> > +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> > +            lflow_input->lr_datapaths,
> > +            lflow_input->ovn_internal_version_changed,
> > +            lflow_input->sbrec_logical_flow_table,
> > +            lflow_input->sbrec_logical_dp_group_table);
> > +        if (!handled) {
> > +            return false;
> > +        }
> > +
> >          if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
> >              const struct lr_stateful_record *lr_stateful_rec =
> >                  lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
> >                                                  op->peer->od->index);
> >              ovs_assert(lr_stateful_rec);
> >
> > +            /* Clear old lflows. */
> > +            lflow_ref_unlink_lflows(op->stateful_lflow_ref);
> > +
> > +            /* Generate new lflows. */
> >              build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
> >                                          lflow_input->lr_stateful_table,
> >                                          lflow_input->lr_ports,
> > -                                        lflows, &match, &actions);
> > +                                        lflows, &match, &actions,
> > +                                        op->stateful_lflow_ref);
> > +            handled = lflow_ref_sync_lflows(
> > +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> > +                lflow_input->ls_datapaths,
> > +                lflow_input->lr_datapaths,
> > +                lflow_input->ovn_internal_version_changed,
> > +                lflow_input->sbrec_logical_flow_table,
> > +                lflow_input->sbrec_logical_dp_group_table);
> > +            if (!handled) {
> > +                return false;
> > +            }
> >          }
> >
> >          ds_destroy(&match);
> > @@ -17077,13 +16137,6 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >
> >          /* SB port_binding is not deleted, so don't update SB multicast
> >           * groups. */
> > -
> > -        /* Sync the new flows to SB. */
> > -        struct lflow_ref_node *lfrn;
> > -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> > -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> > -                                  lfrn->lflow);
> > -        }
> >      }
> >
> >      HMAPX_FOR_EACH (hmapx_node, &trk_lsps->created) {
> > @@ -17108,6 +16161,17 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >                                                   lflow_input->meter_groups,
> >                                                   &match, &actions, lflows);
> >
> > +        /* Sync the newly added flows to SB. */
> > +        bool handled = lflow_ref_sync_lflows(
> > +            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
> > +            lflow_input->lr_datapaths,
> > +            lflow_input->ovn_internal_version_changed,
> > +            lflow_input->sbrec_logical_flow_table,
> > +            lflow_input->sbrec_logical_dp_group_table);
> > +        if (!handled) {
> > +            return false;
> > +        }
> > +
> >          if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
> >              const struct lr_stateful_record *lr_stateful_rec =
> >                  lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
> > @@ -17117,7 +16181,18 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >              build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
> >                                          lflow_input->lr_stateful_table,
> >                                          lflow_input->lr_ports,
> > -                                        lflows, &match, &actions);
> > +                                        lflows, &match, &actions,
> > +                                        op->stateful_lflow_ref);
> > +            handled = lflow_ref_sync_lflows(
> > +                op->stateful_lflow_ref, lflows, ovnsb_txn,
> > +                lflow_input->ls_datapaths,
> > +                lflow_input->lr_datapaths,
> > +                lflow_input->ovn_internal_version_changed,
> > +                lflow_input->sbrec_logical_flow_table,
> > +                lflow_input->sbrec_logical_dp_group_table);
> > +            if (!handled) {
> > +                return false;
> > +            }
> >          }
> >
> >          ds_destroy(&match);
> > @@ -17146,13 +16221,6 @@ lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >              sbrec_multicast_group_update_ports_addvalue(sbmc_unknown,
> >                                                          op->sb);
> >          }
> > -
> > -        /* Sync the newly added flows to SB. */
> > -        struct lflow_ref_node *lfrn;
> > -        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
> > -            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
> > -                                    lfrn->lflow);
> > -        }
> >      }
> >
> >      return true;
> > diff --git a/northd/northd.h b/northd/northd.h
> > index 88406bffee..42b4eee607 100644
> > --- a/northd/northd.h
> > +++ b/northd/northd.h
> > @@ -23,6 +23,7 @@
> >  #include "northd/en-port-group.h"
> >  #include "northd/ipam.h"
> >  #include "openvswitch/hmap.h"
> > +#include "ovs-thread.h"
> >
> >  struct northd_input {
> >      /* Northbound table references */
> > @@ -164,13 +165,6 @@ struct northd_data {
> >      struct northd_tracked_data trk_data;
> >  };
> >
> > -struct lflow_data {
> > -    struct hmap lflows;
> > -};
> > -
> > -void lflow_data_init(struct lflow_data *);
> > -void lflow_data_destroy(struct lflow_data *);
> > -
> >  struct lr_nat_table;
> >
> >  struct lflow_input {
> > @@ -182,6 +176,7 @@ struct lflow_input {
> >      const struct sbrec_logical_flow_table *sbrec_logical_flow_table;
> >      const struct sbrec_multicast_group_table *sbrec_multicast_group_table;
> >      const struct sbrec_igmp_group_table *sbrec_igmp_group_table;
> > +    const struct sbrec_logical_dp_group_table *sbrec_logical_dp_group_table;
> >
> >      /* Indexes */
> >      struct ovsdb_idl_index *sbrec_mcast_group_by_name_dp;
> > @@ -201,6 +196,15 @@ struct lflow_input {
> >      bool ovn_internal_version_changed;
> >  };
> >
> > +extern int parallelization_state;
> > +enum {
> > +    STATE_NULL,               /* parallelization is off */
> > +    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
> > +    STATE_USE_PARALLELIZATION /* parallelization is on */
> > +};
> > +
> > +extern thread_local size_t thread_lflow_counter;
> > +
> >  /*
> >   * Multicast snooping and querier per datapath configuration.
> >   */
> > @@ -344,6 +348,179 @@ struct ovn_datapath {
> >  const struct ovn_datapath *ovn_datapath_find(const struct hmap *datapaths,
> >                                               const struct uuid *uuid);
> >
> > +struct ovn_datapath *ovn_datapath_from_sbrec(
> > +    const struct hmap *ls_datapaths, const struct hmap *lr_datapaths,
> > +    const struct sbrec_datapath_binding *);
> > +
> > +static inline bool
> > +ovn_datapath_is_stale(const struct ovn_datapath *od)
> > +{
> > +    return !od->nbr && !od->nbs;
> > +}
> > +
> > +/* Pipeline stages. */
> > +
> > +/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> > +enum ovn_datapath_type {
> > +    DP_SWITCH,                  /* OVN logical switch. */
> > +    DP_ROUTER                   /* OVN logical router. */
> > +};
> > +
> > +/* Returns an "enum ovn_stage" built from the arguments.
> > + *
> > + * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> > + * functions can't be used in enums or switch cases.) */
> > +#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> > +    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> > +
> > +/* A stage within an OVN logical switch or router.
> > + *
> > + * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> > + * or router, whether the stage is part of the ingress or egress pipeline, and
> > + * the table within that pipeline.  The first three components are combined to
> > + * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> > + * S_ROUTER_OUT_DELIVERY. */
> > +enum ovn_stage {
> > +#define PIPELINE_STAGES                                                   \
> > +    /* Logical switch ingress stages. */                                  \
> > +    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
> > +    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
> > +    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB,     2, "ls_in_lookup_fdb")    \
> > +    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
> > +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> > +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> > +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
> > +    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
> > +    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
> > +    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> > +    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> > +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
> > +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
> > +    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
> > +    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
> > +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
> > +                   "ls_in_acl_after_lb_eval")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
> > +                   "ls_in_acl_after_lb_action")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
> > +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
> > +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
> > +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
> > +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
> > +    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
> > +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
> > +    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
> > +                                                                          \
> > +    /* Logical switch egress stages. */                                   \
> > +    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
> > +    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
> > +    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> > +    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> > +    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
> > +    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
> > +    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
> > +    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
> > +    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
> > +    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
> > +    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
> > +                                                                      \
> > +    /* Logical router ingress stages. */                              \
> > +    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> > +    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> > +    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> > +    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> > +    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> > +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> > +    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> > +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> > +    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> > +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> > +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> > +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> > +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> > +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> > +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> > +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> > +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
> > +                                                                      \
> > +    /* Logical router egress stages. */                               \
> > +    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
> > +                   "lr_out_chk_dnat_local")                                  \
> > +    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
> > +    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
> > +    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
> > +    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
> > +    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
> > +    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
> > +
> > +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> > +    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> > +        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> > +    PIPELINE_STAGES
> > +#undef PIPELINE_STAGE
> > +};
> > +
> > +enum ovn_datapath_type ovn_stage_to_datapath_type(enum ovn_stage stage);
> > +
> > +
> > +/* Returns 'od''s datapath type. */
> > +static inline enum ovn_datapath_type
> > +ovn_datapath_get_type(const struct ovn_datapath *od)
> > +{
> > +    return od->nbs ? DP_SWITCH : DP_ROUTER;
> > +}
> > +
> > +/* Returns an "enum ovn_stage" built from the arguments. */
> > +static inline enum ovn_stage
> > +ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> > +                uint8_t table)
> > +{
> > +    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> > +}
> > +
> > +/* Returns the pipeline to which 'stage' belongs. */
> > +static inline enum ovn_pipeline
> > +ovn_stage_get_pipeline(enum ovn_stage stage)
> > +{
> > +    return (stage >> 8) & 1;
> > +}
> > +
> > +/* Returns the pipeline name to which 'stage' belongs. */
> > +static inline const char *
> > +ovn_stage_get_pipeline_name(enum ovn_stage stage)
> > +{
> > +    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> > +}
> > +
> > +/* Returns the table to which 'stage' belongs. */
> > +static inline uint8_t
> > +ovn_stage_get_table(enum ovn_stage stage)
> > +{
> > +    return stage & 0xff;
> > +}
> > +
> > +/* Returns a string name for 'stage'. */
> > +static inline const char *
> > +ovn_stage_to_str(enum ovn_stage stage)
> > +{
> > +    switch (stage) {
> > +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> > +        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> > +    PIPELINE_STAGES
> > +#undef PIPELINE_STAGE
> > +        default: return "<unknown>";
> > +    }
> > +}
> > +
> >  /* A logical switch port or logical router port.
> >   *
> >   * In steady state, an ovn_port points to a northbound Logical_Switch_Port
> > @@ -434,8 +611,10 @@ struct ovn_port {
> >      /* Temporarily used for traversing a list (or hmap) of ports. */
> >      bool visited;
> >
> > -    /* List of struct lflow_ref_node that points to the lflows generated by
> > -     * this ovn_port.
> > +    /* Only used for the router type LSP whose peer is l3dgw_port */
> > +    bool enable_router_port_acl;
> > +
> > +    /* Reference of lflows generated for this ovn_port.
> >       *
> >       * This data is initialized and destroyed by the en_northd node, but
> >       * populated and used only by the en_lflow node. Ideally this data should
> > @@ -453,11 +632,16 @@ struct ovn_port {
> >       * Adding the list here is more straightforward. The drawback is that we
> >       * need to keep in mind that this data belongs to en_lflow node, so never
> >       * access it from any other nodes.
> > +     *
> > +     * 'lflow_ref' is used to reference generic logical flows generated for
> > +     *  this ovn_port.
> > +     *
> > +     * 'stateful_lflow_ref' is used for logical switch ports of type
> > +     * 'patch/router' to reference logical flows generated for this ovn_port
> > +     *  from the 'lr_stateful' record of the peer port's datapath.
> >       */
> > -    struct ovs_list lflows;
> > -
> > -    /* Only used for the router type LSP whose peer is l3dgw_port */
> > -    bool enable_router_port_acl;
> > +    struct lflow_ref *lflow_ref;
> > +    struct lflow_ref *stateful_lflow_ref;
> >  };
> >
> >  void ovnnb_db_run(struct northd_input *input_data,
> > @@ -480,13 +664,17 @@ void northd_destroy(struct northd_data *data);
> >  void northd_init(struct northd_data *data);
> >  void northd_indices_create(struct northd_data *data,
> >                             struct ovsdb_idl *ovnsb_idl);
> > +
> > +struct lflow_table;
> >  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
> >                    struct lflow_input *input_data,
> > -                  struct hmap *lflows);
> > +                  struct lflow_table *);
> > +void lflow_reset_northd_refs(struct lflow_input *);
> > +
> >  bool lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
> >                                        struct tracked_ovn_ports *,
> >                                        struct lflow_input *,
> > -                                      struct hmap *lflows);
> > +                                      struct lflow_table *lflows);
> >  bool northd_handle_sb_port_binding_changes(
> >      const struct sbrec_port_binding_table *, struct hmap *ls_ports,
> >      struct hmap *lr_ports);
> > diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> > index 084d675644..e0e60f3559 100644
> > --- a/northd/ovn-northd.c
> > +++ b/northd/ovn-northd.c
> > @@ -848,6 +848,10 @@ main(int argc, char *argv[])
> >          ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
> >                               &sbrec_port_group_columns[i]);
> >      }
> > +    for (size_t i = 0; i < SBREC_LOGICAL_DP_GROUP_N_COLUMNS; i++) {
> > +        ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
> > +                             &sbrec_logical_dp_group_columns[i]);
> > +    }
> >
> >      unixctl_command_register("sb-connection-status", "", 0, 0,
> >                               ovn_conn_show, ovnsb_idl_loop.idl);
> > diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> > index 1825fd3e18..f7d47fc7e4 100644
> > --- a/tests/ovn-northd.at
> > +++ b/tests/ovn-northd.at
> > @@ -11267,6 +11267,222 @@ CHECK_NO_CHANGE_AFTER_RECOMPUTE
> >  AT_CLEANUP
> >  ])
> >
> > +OVN_FOR_EACH_NORTHD_NO_HV([
> > +AT_SETUP([Load balancer incremental processing for multiple LBs with same VIPs])
> > +ovn_start
> > +
> > +check ovn-nbctl ls-add sw0
> > +check ovn-nbctl ls-add sw1
> > +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> > +check ovn-nbctl --wait=sb lb-add lb2 10.0.0.10:80 10.0.0.3:80
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> > +sw0_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb ls-lb-add sw1 lb2
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = ""])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +# Clear the SB:Logical_Flow.logical_dp_groups column of all the
> > +# logical flows and then modify the NB:Load_balancer.  ovn-northd
> > +# should resync the logical flows.
> > +for l in $(ovn-sbctl --bare --columns _uuid list logical_flow)
> > +do
> > +    ovn-sbctl clear logical_flow $l logical_dp_group
> > +done
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb set load_balancer lb1 options:foo=bar
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" = ""])
> > +
> > +# Add back the vip to lb2.
> > +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> > +
> > +# Create additional logical switches, associate lb1 with sw0, sw1 and sw2,
> > +# and associate lb2 with sw3, sw4 and sw5.
> > +check ovn-nbctl ls-add sw2
> > +check ovn-nbctl ls-add sw3
> > +check ovn-nbctl ls-add sw4
> > +check ovn-nbctl ls-add sw5
> > +check ovn-nbctl --wait=sb ls-lb-del sw1 lb2
> > +check ovn-nbctl ls-lb-add sw1 lb1
> > +check ovn-nbctl ls-lb-add sw2 lb1
> > +check ovn-nbctl ls-lb-add sw3 lb2
> > +check ovn-nbctl ls-lb-add sw4 lb2
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb ls-lb-add sw5 lb2
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = ""])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +sw1_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
> > +sw2_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw2)
> > +sw3_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw3)
> > +sw4_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw4)
> > +sw5_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw5)
> > +
> > +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> > +
> > +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> > +
> > +echo "dpgrp_dps - $dpgrp_dps"
> > +
> > +# Clear the vips for lb2.  The lb logical flow's dp group should have
> > +# only the sw0, sw1 and sw2 uuids.
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb clear load_balancer lb2 vips
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> > +
> > +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [1], [ignore])
> > +
> > +# Clear the vips for lb1.  The logical flow should be deleted.
> > +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> > +
> > +AT_CHECK([ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid], [1], [ignore], [ignore])
> > +
> > +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> > +AT_CHECK([test "$lb_lflow_uuid" = ""])
> > +
> > +
> > +# Now add back the vips, create another lb with the same vips, and
> > +# associate it with sw0 and sw1.
> > +check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
> > +check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
> > +check ovn-nbctl --wait=sb lb-add lb3 10.0.0.10:80 10.0.0.3:80
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +
> > +check ovn-nbctl ls-lb-add sw0 lb3
> > +check ovn-nbctl --wait=sb ls-lb-add sw1 lb3
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = ""])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> > +
> > +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> > +
> > +# Now clear lb1 vips.
> > +# Since lb3 is associated with sw0 and sw1, the logical flow dp group
> > +# should have a reference to sw0 and sw1, but not to sw2.
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb clear load_balancer lb1 vips
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = ""])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> > +
> > +echo "dpgrp dps - $dpgrp_dps"
> > +
> > +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> > +
> > +# Now clear lb3 vips.  The logical flow dp group
> > +# should have a reference only to sw3, sw4 and sw5 because lb2 is
> > +# associated with them.
> > +
> > +check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
> > +check ovn-nbctl --wait=sb clear load_balancer lb3 vips
> > +check_engine_stats lflow recompute nocompute
> > +CHECK_NO_CHANGE_AFTER_RECOMPUTE
> > +
> > +lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dp" = ""])
> > +
> > +lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
> > +AT_CHECK([test "$lb_lflow_dpgrp" != ""])
> > +
> > +dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
> > +
> > +echo "dpgrp dps - $dpgrp_dps"
> > +
> > +AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
> > +AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
> > +
> > +AT_CLEANUP
> > +])
> > +
> >  OVN_FOR_EACH_NORTHD_NO_HV([
> >  AT_SETUP([Logical router incremental processing for NAT])
> >
>
> Regards,
> Dumitru
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
Dumitru Ceara Jan. 19, 2024, 10:50 a.m. UTC | #3
On 1/11/24 16:31, numans@ovn.org wrote:
> +
> +void
> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> +                      const struct ovn_datapath *od,
> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> +                      enum ovn_stage stage, uint16_t priority,
> +                      const char *match, const char *actions,
> +                      const char *io_port, const char *ctrl_meter,
> +                      const struct ovsdb_idl_row *stage_hint,
> +                      const char *where,
> +                      struct lflow_ref *lflow_ref)
> +    OVS_EXCLUDED(fake_hash_mutex)
> +{
> +    struct ovs_mutex *hash_lock;
> +    uint32_t hash;
> +
> +    ovs_assert(!od ||
> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> +
> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> +                                 ovn_stage_get_pipeline(stage),
> +                                 priority, match,
> +                                 actions);
> +
> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> +    struct ovn_lflow *lflow =
> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> +                         dp_bitmap_len, hash, stage,
> +                         priority, match, actions,
> +                         io_port, ctrl_meter, stage_hint, where);
> +
> +    if (lflow_ref) {
> +        /* lflow referencing is only supported if 'od' is not NULL. */
> +        ovs_assert(od);
> +
> +        struct lflow_ref_node *lrn =
> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> +        if (!lrn) {
> +            lrn = xzalloc(sizeof *lrn);
> +            lrn->lflow = lflow;
> +            lrn->dp_index = od->index;
> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> +                            &lrn->lflow_list_node);
> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> +
> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> +        }
> +
> +        lrn->linked = true;
> +    }
> +
> +    lflow_hash_unlock(hash_lock);
> +
> +}
> +

This part is not thread safe.

If two threads try to add logical flows that have different hashes and
lflow_ref is not NULL, we're going to have a race condition when
inserting into the &lflow_ref->lflow_ref_nodes hash map, because the two
threads will take different locks.

That might corrupt the map.

I guess, if we don't want to cause more performance degradation we
should maintain as many lflow_ref instances as we do hash_locks, i.e.,
LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?

Wdyt?

Regards,
Dumitru
Dumitru Ceara Jan. 19, 2024, 1:13 p.m. UTC | #4
On 1/18/24 22:39, Numan Siddique wrote:
>>> +void
>>> +ovn_dp_groups_clear(struct hmap *dp_groups)
>>> +{
>>> +    struct ovn_dp_group *dpg;
>>> +    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
>>> +        bitmap_free(dpg->bitmap);
>>> +        free(dpg);
>> This is duplicated in dec_ovn_dp_group_ref().  Also, should we assert
>> that all refcounts are 0 here?
> I don't think we can, since ovn_dp_groups_clear() is called whenever a
> full recompute happens.  In that case we are destroying the internal
> data and rebuilding it, and the ref counts may not be 0.

Hmm, OK, I think that might be fine.

Thanks for checking.

Regards,
Dumitru
Han Zhou Jan. 24, 2024, 5:01 a.m. UTC | #5
On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> On 1/11/24 16:31, numans@ovn.org wrote:
> > +
> > +void
> > +lflow_table_add_lflow(struct lflow_table *lflow_table,
> > +                      const struct ovn_datapath *od,
> > +                      const unsigned long *dp_bitmap, size_t
dp_bitmap_len,
> > +                      enum ovn_stage stage, uint16_t priority,
> > +                      const char *match, const char *actions,
> > +                      const char *io_port, const char *ctrl_meter,
> > +                      const struct ovsdb_idl_row *stage_hint,
> > +                      const char *where,
> > +                      struct lflow_ref *lflow_ref)
> > +    OVS_EXCLUDED(fake_hash_mutex)
> > +{
> > +    struct ovs_mutex *hash_lock;
> > +    uint32_t hash;
> > +
> > +    ovs_assert(!od ||
> > +               ovn_stage_to_datapath_type(stage) ==
ovn_datapath_get_type(od));
> > +
> > +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> > +                                 ovn_stage_get_pipeline(stage),
> > +                                 priority, match,
> > +                                 actions);
> > +
> > +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> > +    struct ovn_lflow *lflow =
> > +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> > +                         dp_bitmap_len, hash, stage,
> > +                         priority, match, actions,
> > +                         io_port, ctrl_meter, stage_hint, where);
> > +
> > +    if (lflow_ref) {
> > +        /* lflow referencing is only supported if 'od' is not NULL. */
> > +        ovs_assert(od);
> > +
> > +        struct lflow_ref_node *lrn =
> > +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> > +        if (!lrn) {
> > +            lrn = xzalloc(sizeof *lrn);
> > +            lrn->lflow = lflow;
> > +            lrn->dp_index = od->index;
> > +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> > +                            &lrn->lflow_list_node);
> > +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> > +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> > +
> > +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> > +        }
> > +
> > +        lrn->linked = true;
> > +    }
> > +
> > +    lflow_hash_unlock(hash_lock);
> > +
> > +}
> > +
>
> This part is not thread safe.
>
> If two threads try to add logical flows that have different hashes and
> lflow_ref is not NULL we're going to have a race condition when
> inserting to the &lflow_ref->lflow_ref_nodes hash map because the two
> threads will take different locks.
>

I think it is safe because a lflow_ref is always associated with an object,
e.g. port, datapath, lb, etc., and lflow generation for a single such
object is never executed in parallel, which is how the parallel lflow build
is designed.
Does it make sense?
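The ownership model described above can be sketched in miniature: the parallel build partitions objects (ports, datapaths, lbs, ...) across threads, so each per-object ref structure is only ever touched by one thread even without a lock of its own. A toy model under that assumption (names are illustrative, not northd's; in the real build each stride loop runs in its own worker thread):

```c
#include <stddef.h>

#define N_PORTS 64
#define N_THREADS 4

struct mini_port {
    int n_refs;                 /* stand-in for the per-port lflow_ref */
};

/* Body of one worker: stride partitioning guarantees that no two
 * workers ever visit the same port, so the per-port data needs no lock
 * of its own (only the shared lflow table does). */
static void
worker(struct mini_port *ports, size_t thread_id)
{
    for (size_t i = thread_id; i < N_PORTS; i += N_THREADS) {
        ports[i].n_refs++;      /* safe: port i belongs to this worker */
    }
}
```

Running all `N_THREADS` workers visits every port exactly once, which is why the per-object lflow_ref mutation is single-threaded in practice.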

Thanks,
Han

> That might corrupt the map.
>
> I guess, if we don't want to cause more performance degradation we
> should maintain as many lflow_ref instances as we do hash_locks, i.e.,
> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
>
> Wdyt?
>
> Regards,
> Dumitru
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
Dumitru Ceara Jan. 24, 2024, 12:23 p.m. UTC | #6
On 1/24/24 06:01, Han Zhou wrote:
> On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>
>> On 1/11/24 16:31, numans@ovn.org wrote:
>>> +
>>> +void
>>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
>>> +                      const struct ovn_datapath *od,
>>> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
>>> +                      enum ovn_stage stage, uint16_t priority,
>>> +                      const char *match, const char *actions,
>>> +                      const char *io_port, const char *ctrl_meter,
>>> +                      const struct ovsdb_idl_row *stage_hint,
>>> +                      const char *where,
>>> +                      struct lflow_ref *lflow_ref)
>>> +    OVS_EXCLUDED(fake_hash_mutex)
>>> +{
>>> +    struct ovs_mutex *hash_lock;
>>> +    uint32_t hash;
>>> +
>>> +    ovs_assert(!od ||
>>> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
>>> +
>>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
>>> +                                 ovn_stage_get_pipeline(stage),
>>> +                                 priority, match,
>>> +                                 actions);
>>> +
>>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
>>> +    struct ovn_lflow *lflow =
>>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
>>> +                         dp_bitmap_len, hash, stage,
>>> +                         priority, match, actions,
>>> +                         io_port, ctrl_meter, stage_hint, where);
>>> +
>>> +    if (lflow_ref) {
>>> +        /* lflow referencing is only supported if 'od' is not NULL. */
>>> +        ovs_assert(od);
>>> +
>>> +        struct lflow_ref_node *lrn =
>>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
>>> +        if (!lrn) {
>>> +            lrn = xzalloc(sizeof *lrn);
>>> +            lrn->lflow = lflow;
>>> +            lrn->dp_index = od->index;
>>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
>>> +                            &lrn->lflow_list_node);
>>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
>>> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
>>> +
>>> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
>>> +        }
>>> +
>>> +        lrn->linked = true;
>>> +    }
>>> +
>>> +    lflow_hash_unlock(hash_lock);
>>> +
>>> +}
>>> +
>>
>> This part is not thread safe.
>>
>> If two threads try to add logical flows that have different hashes and
>> lflow_ref is not NULL we're going to have a race condition when
>> inserting to the &lflow_ref->lflow_ref_nodes hash map because the two
>> threads will take different locks.
>>
> 
> I think it is safe because a lflow_ref is always associated with an object,
> e.g. port, datapath, lb, etc., and lflow generation for a single such
> object is never executed in parallel, which is how the parallel lflow build
> is designed.
> Does it make sense?

It happens that it's safe in this current patch set because indeed we
always process individual ports, datapaths, lbs, etc, in the same
thread.  However, this code (lflow_table_add_lflow()) is generic and
there's nothing (not even a comment) that would warn developers in the
future about the potential race if the lflow_ref is shared.

I spoke to Numan offline a bit about this and I think the current plan
is to leave it as is and add proper locking as a follow up (or in v7).
But I think we still need a clear comment here warning users about this.
 Maybe we should add a comment where the lflow_ref structure is defined too.
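One hedged sketch of the kind of warning comment being proposed for the structure definition (the wording is illustrative, not final text; the stub types below merely stand in for `lib/hmap.h` and `lib/openvswitch/list.h` so the fragment is self-contained):

```c
/* Stand-ins for the real OVS types, for illustration only. */
struct hmap { void *buckets; };
struct ovs_list { struct ovs_list *prev, *next; };

/* Stores the lflows referenced by a given object (e.g. a logical port
 * or load balancer).
 *
 * Thread safety: lflow_table_add_lflow() takes only the lflow table's
 * per-hash lock; it takes no lock on 'lflow_ref' itself.  A lflow_ref
 * must therefore only be passed to lflow_table_add_lflow() from the
 * thread that owns the corresponding object.  Passing another object's
 * lflow_ref (e.g. op->peer->lflow_ref while processing 'op') can
 * corrupt 'lflow_ref_nodes' if that object is processed in parallel. */
struct lflow_ref {
    struct hmap lflow_ref_nodes;     /* Stores 'struct lflow_ref_node'. */
    struct ovs_list lflows_ref_list; /* List of lflow_ref_node. */
};
```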

What do you think?

Regards,
Dumitru

> 
> Thanks,
> Han
> 
>> That might corrupt the map.
>>
>> I guess, if we don't want to cause more performance degradation we
>> should maintain as many lflow_ref instances as we do hash_locks, i.e.,
>> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
>>
>> Wdyt?
>>
>> Regards,
>> Dumitru
>>
>> _______________________________________________
>> dev mailing list
>> dev@openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
Han Zhou Jan. 25, 2024, 3:53 a.m. UTC | #7
On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> On 1/24/24 06:01, Han Zhou wrote:
> > On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >>
> >> On 1/11/24 16:31, numans@ovn.org wrote:
> >>> +
> >>> +void
> >>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> >>> +                      const struct ovn_datapath *od,
> >>> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> >>> +                      enum ovn_stage stage, uint16_t priority,
> >>> +                      const char *match, const char *actions,
> >>> +                      const char *io_port, const char *ctrl_meter,
> >>> +                      const struct ovsdb_idl_row *stage_hint,
> >>> +                      const char *where,
> >>> +                      struct lflow_ref *lflow_ref)
> >>> +    OVS_EXCLUDED(fake_hash_mutex)
> >>> +{
> >>> +    struct ovs_mutex *hash_lock;
> >>> +    uint32_t hash;
> >>> +
> >>> +    ovs_assert(!od ||
> >>> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> >>> +
> >>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> >>> +                                 ovn_stage_get_pipeline(stage),
> >>> +                                 priority, match,
> >>> +                                 actions);
> >>> +
> >>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> >>> +    struct ovn_lflow *lflow =
> >>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> >>> +                         dp_bitmap_len, hash, stage,
> >>> +                         priority, match, actions,
> >>> +                         io_port, ctrl_meter, stage_hint, where);
> >>> +
> >>> +    if (lflow_ref) {
> >>> +        /* lflow referencing is only supported if 'od' is not NULL. */
> >>> +        ovs_assert(od);
> >>> +
> >>> +        struct lflow_ref_node *lrn =
> >>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> >>> +        if (!lrn) {
> >>> +            lrn = xzalloc(sizeof *lrn);
> >>> +            lrn->lflow = lflow;
> >>> +            lrn->dp_index = od->index;
> >>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> >>> +                            &lrn->lflow_list_node);
> >>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> >>> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> >>> +
> >>> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> >>> +        }
> >>> +
> >>> +        lrn->linked = true;
> >>> +    }
> >>> +
> >>> +    lflow_hash_unlock(hash_lock);
> >>> +
> >>> +}
> >>> +
> >>
> >> This part is not thread safe.
> >>
> >> If two threads try to add logical flows that have different hashes and
> >> lflow_ref is not NULL we're going to have a race condition when
> >> inserting to the &lflow_ref->lflow_ref_nodes hash map because the two
> >> threads will take different locks.
> >>
> >
> > I think it is safe because a lflow_ref is always associated with an object,
> > e.g. port, datapath, lb, etc., and lflow generation for a single such
> > object is never executed in parallel, which is how the parallel lflow build
> > is designed.
> > Does it make sense?
>
> It happens that it's safe in this current patch set because indeed we
> always process individual ports, datapaths, lbs, etc, in the same
> thread.  However, this code (lflow_table_add_lflow()) is generic and
> there's nothing (not even a comment) that would warn developers in the
> future about the potential race if the lflow_ref is shared.
>
> I spoke to Numan offline a bit about this and I think the current plan
> is to leave it as is and add proper locking as a follow up (or in v7).
> But I think we still need a clear comment here warning users about this.
>  Maybe we should add a comment where the lflow_ref structure is defined too.
>
> What do you think?

I totally agree with you about adding comments to explain the thread safety
considerations, and make it clear that the lflow_ref should always be
associated with the object that is being processed by the thread.
With regard to any follow up change for proper locking, I am not sure what
scenario is your concern. I think if we always make sure the lflow_ref
passed in is the one owned by the object then the current locking is
sufficient. And this is the natural case.

However, I did think about a situation where there might be a potential
problem in the future when we need to maintain references for more than one
input for the same lflow. For the "main" input, which is the object that
the thread is iterating and generating lflow for, will have its lflow_ref
passed in this function, but we might also need to maintain the reference
of the lflow for a secondary input (or even third). In that case it is not
just the locking issue, but firstly we need to have a proper way to pass in
the secondary lflow_ref, which is what I had mentioned in the review
comments for v3:
https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html.
(the last paragraph). Regardless of that, it is not a problem for now, and
I hope there is no need to add references for more inputs for the I-P that
matters for production use cases.

Thanks,
Han

>
> Regards,
> Dumitru
>
> >
> > Thanks,
> > Han
> >
> >> That might corrupt the map.
> >>
> >> I guess, if we don't want to cause more performance degradation we
> >> should maintain as many lflow_ref instances as we do hash_locks, i.e.,
> >> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
> >>
> >> Wdyt?
> >>
> >> Regards,
> >> Dumitru
> >>
> >> _______________________________________________
> >> dev mailing list
> >> dev@openvswitch.org
> >> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> >
>
Numan Siddique Jan. 25, 2024, 4:39 a.m. UTC | #8
On Wed, Jan 24, 2024 at 10:53 PM Han Zhou <hzhou@ovn.org> wrote:
>
> On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >
> > On 1/24/24 06:01, Han Zhou wrote:
> > > On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com> wrote:
> > >>
> > >> On 1/11/24 16:31, numans@ovn.org wrote:
> > >>> +
> > >>> +void
> > >>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> > >>> +                      const struct ovn_datapath *od,
> > >>> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > >>> +                      enum ovn_stage stage, uint16_t priority,
> > >>> +                      const char *match, const char *actions,
> > >>> +                      const char *io_port, const char *ctrl_meter,
> > >>> +                      const struct ovsdb_idl_row *stage_hint,
> > >>> +                      const char *where,
> > >>> +                      struct lflow_ref *lflow_ref)
> > >>> +    OVS_EXCLUDED(fake_hash_mutex)
> > >>> +{
> > >>> +    struct ovs_mutex *hash_lock;
> > >>> +    uint32_t hash;
> > >>> +
> > >>> +    ovs_assert(!od ||
> > >>> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> > >>> +
> > >>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> > >>> +                                 ovn_stage_get_pipeline(stage),
> > >>> +                                 priority, match,
> > >>> +                                 actions);
> > >>> +
> > >>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> > >>> +    struct ovn_lflow *lflow =
> > >>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> > >>> +                         dp_bitmap_len, hash, stage,
> > >>> +                         priority, match, actions,
> > >>> +                         io_port, ctrl_meter, stage_hint, where);
> > >>> +
> > >>> +    if (lflow_ref) {
> > >>> +        /* lflow referencing is only supported if 'od' is not NULL. */
> > >>> +        ovs_assert(od);
> > >>> +
> > >>> +        struct lflow_ref_node *lrn =
> > >>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> > >>> +        if (!lrn) {
> > >>> +            lrn = xzalloc(sizeof *lrn);
> > >>> +            lrn->lflow = lflow;
> > >>> +            lrn->dp_index = od->index;
> > >>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> > >>> +                            &lrn->lflow_list_node);
> > >>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> > >>> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> > >>> +
> > >>> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> > >>> +        }
> > >>> +
> > >>> +        lrn->linked = true;
> > >>> +    }
> > >>> +
> > >>> +    lflow_hash_unlock(hash_lock);
> > >>> +
> > >>> +}
> > >>> +
> > >>
> > >> This part is not thread safe.
> > >>
> > >> If two threads try to add logical flows that have different hashes and
> > >> lflow_ref is not NULL we're going to have a race condition when
> > >> inserting to the &lflow_ref->lflow_ref_nodes hash map because the two
> > >> threads will take different locks.
> > >>
> > >
> > > I think it is safe because a lflow_ref is always associated with an object,
> > > e.g. port, datapath, lb, etc., and lflow generation for a single such
> > > object is never executed in parallel, which is how the parallel lflow build
> > > Does it make sense?
> >
> > It happens that it's safe in this current patch set because indeed we
> > always process individual ports, datapaths, lbs, etc, in the same
> > thread.  However, this code (lflow_table_add_lflow()) is generic and
> > there's nothing (not even a comment) that would warn developers in the
> > future about the potential race if the lflow_ref is shared.
> >
> > I spoke to Numan offline a bit about this and I think the current plan
> > is to leave it as is and add proper locking as a follow up (or in v7).
> > But I think we still need a clear comment here warning users about this.
> >  Maybe we should add a comment where the lflow_ref structure is defined too.
> >
> > What do you think?
>
> I totally agree with you about adding comments to explain the thread safety
> considerations, and make it clear that the lflow_ref should always be
> associated with the object that is being processed by the thread.
> With regard to any follow up change for proper locking, I am not sure what
> scenario is your concern. I think if we always make sure the lflow_ref
> passed in is the one owned by the object then the current locking is
> sufficient. And this is the natural case.
>
> However, I did think about a situation where there might be a potential
> problem in the future when we need to maintain references for more than one
> input for the same lflow. For the "main" input, which is the object that
> the thread is iterating and generating lflow for, will have its lflow_ref
> passed in this function, but we might also need to maintain the reference
> of the lflow for a secondary input (or even third). In that case it is not
> just the locking issue, but firstly we need to have a proper way to pass in
> the secondary lflow_ref, which is what I had mentioned in the review
> comments for v3:
> https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html.
> (the last paragraph). Regardless of that, it is not a problem for now, and
> I hope there is no need to add references for more inputs for the I-P that
> matters for production use cases.
>

I'll update the next version with the proper documentation.

@Han Regarding your comment that we may have a future requirement to
add a logical flow to 2 or more lflow_refs,  I doubt we would have
such a requirement or scenario.

Because calling lflow_table_add_lflow(..., lflow_ref1, lflow_ref2)
is the same as calling lflow_table_add_lflow() twice,

i.e.
lflow_table_add_lflow(..., lflow_ref1)
lflow_table_add_lflow(..., lflow_ref2)

In the second call,  since the ovn_lflow already exists in the lflow
table, it will just be referenced in lflow_ref2.
It would be a problem for sure if these lflow_refs (ref1 and ref2)
belong to different objects.
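The dedup behaviour described here (the second call finds the existing ovn_lflow and only adds a reference node, and a repeated call with the same lflow_ref is a no-op) can be modelled in miniature. All names below are illustrative toys, not northd's actual API:

```c
#include <stddef.h>
#include <string.h>

#define MAX_FLOWS 8
#define MAX_REFS 8

/* Toy flow table keyed by match string. */
struct toy_table {
    const char *matches[MAX_FLOWS];
    size_t n_flows;
};

/* Toy per-object ref list: indices of flows this object references. */
struct toy_ref {
    size_t flow_idx[MAX_REFS];
    size_t n;
};

static size_t
toy_find_or_add(struct toy_table *t, const char *match)
{
    for (size_t i = 0; i < t->n_flows; i++) {
        if (!strcmp(t->matches[i], match)) {
            return i;                  /* like an ovn_lflow_find() hit */
        }
    }
    t->matches[t->n_flows] = match;    /* like do_ovn_lflow_add() */
    return t->n_flows++;
}

static void
toy_add_lflow(struct toy_table *t, const char *match, struct toy_ref *ref)
{
    size_t idx = toy_find_or_add(t, match);
    if (ref) {
        for (size_t i = 0; i < ref->n; i++) {
            if (ref->flow_idx[i] == idx) {
                return;                /* like a lflow_ref_node_find() hit */
            }
        }
        ref->flow_idx[ref->n++] = idx;
    }
}
```

Adding the same match with two different refs yields one flow referenced twice; adding it again with the same ref changes nothing.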

However I do see a scenario where lflow_table_add_lflow() could be
called with a different lflow_ref object.

Few scenarios I could think of

1.  While generating logical flows for a logical switch port (of type
router) 'P1',  there is a possibility that a contributor (most likely
by mistake) may call something like

lflow_table_add_lflow(...,  P1->peer->lflow_ref)

Even in this case,  we don't generate logical router port flows in
parallel with logical switch ports and
hence P1->peer->lflow_ref may not be modified by multiple threads.
But it's a possibility.  And I think Dumitru
is perhaps concerned about such scenarios.

2.  While generating load balancer flows we may also generate logical
flows for the routers and store them in the od->lflow_ref  (If we
happen to support I-P for router ports)
     i.e  HMAP_FOR_EACH_PARALLEL (lb, lb_maps) {
                generate_lflows_for_lb(lb, lb->lflow_ref)
                FOR_EACH_ROUTER_OF_LB(lr, lb) {
                    generate_lb_lflows_for_router(lr, lr->lflow_ref)
                }
           }

     In this case, it is possible that a logical router 'lr' may be
associated with multiple lbs and this may corrupt the lr->lflow_ref.


We have 2 choices  (after this patch series is accepted :))
  a.  Don't do anything.  The documentation added by this patch series
(which I'll update in the next version) is good enough.  Whenever such
scenarios arise, the contributor will add the thread-safe code or make
sure not to use the lflow_ref of other objects while generating lflows.
In scenario 2 above, the contributor should not generate lflows for the
routers in the lb_maps loop; instead they should be generated in the
lr_datapaths loop or lr_stateful loop.

  b.  As a follow up patch,  make lflow_table_add_lflow() thread safe
for lflow_ref so that we are covered for future scenarios.
The downside of this is that we have to maintain hash locks for each
lflow_ref and reconcile the lflow_ref hmap after all the threads
finish the lflow generation.  It does incur some cost (both memory
and CPU wise), but only during recomputes, which should not be too
frequent.


Let me know what you think.  I'm fine with both, but personally prefer
(a) as I don't see such scenarios in the near future.


Thanks
Numan




> Thanks,
> Han
>
> >
> > Regards,
> > Dumitru
> >
> > >
> > > Thanks,
> > > Han
> > >
> > >> That might corrupt the map.
> > >>
> > >> I guess, if we don't want to cause more performance degradation we
> > >> should maintain as many lflow_ref instances as we do hash_locks, i.e.,
> > >> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
> > >>
> > >> Wdyt?
> > >>
> > >> Regards,
> > >> Dumitru
> > >>
> > >> _______________________________________________
> > >> dev mailing list
> > >> dev@openvswitch.org
> > >> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> > >
> >
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
Han Zhou Jan. 25, 2024, 5:44 a.m. UTC | #9
On Wed, Jan 24, 2024 at 8:39 PM Numan Siddique <numans@ovn.org> wrote:
>
> On Wed, Jan 24, 2024 at 10:53 PM Han Zhou <hzhou@ovn.org> wrote:
> >
> > On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
> > >
> > > On 1/24/24 06:01, Han Zhou wrote:
> > > > On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com> wrote:
> > > >>
> > > >> On 1/11/24 16:31, numans@ovn.org wrote:
> > > >>> +
> > > >>> +void
> > > >>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> > > >>> +                      const struct ovn_datapath *od,
> > > >>> +                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
> > > >>> +                      enum ovn_stage stage, uint16_t priority,
> > > >>> +                      const char *match, const char *actions,
> > > >>> +                      const char *io_port, const char *ctrl_meter,
> > > >>> +                      const struct ovsdb_idl_row *stage_hint,
> > > >>> +                      const char *where,
> > > >>> +                      struct lflow_ref *lflow_ref)
> > > >>> +    OVS_EXCLUDED(fake_hash_mutex)
> > > >>> +{
> > > >>> +    struct ovs_mutex *hash_lock;
> > > >>> +    uint32_t hash;
> > > >>> +
> > > >>> +    ovs_assert(!od ||
> > > >>> +               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
> > > >>> +
> > > >>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> > > >>> +                                 ovn_stage_get_pipeline(stage),
> > > >>> +                                 priority, match,
> > > >>> +                                 actions);
> > > >>> +
> > > >>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> > > >>> +    struct ovn_lflow *lflow =
> > > >>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> > > >>> +                         dp_bitmap_len, hash, stage,
> > > >>> +                         priority, match, actions,
> > > >>> +                         io_port, ctrl_meter, stage_hint, where);
> > > >>> +
> > > >>> +    if (lflow_ref) {
> > > >>> +        /* lflow referencing is only supported if 'od' is not NULL. */
> > > >>> +        ovs_assert(od);
> > > >>> +
> > > >>> +        struct lflow_ref_node *lrn =
> > > >>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
> > > >>> +        if (!lrn) {
> > > >>> +            lrn = xzalloc(sizeof *lrn);
> > > >>> +            lrn->lflow = lflow;
> > > >>> +            lrn->dp_index = od->index;
> > > >>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> > > >>> +                            &lrn->lflow_list_node);
> > > >>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> > > >>> +            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
> > > >>> +
> > > >>> +            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
> > > >>> +        }
> > > >>> +
> > > >>> +        lrn->linked = true;
> > > >>> +    }
> > > >>> +
> > > >>> +    lflow_hash_unlock(hash_lock);
> > > >>> +
> > > >>> +}
> > > >>> +
> > > >>
> > > >> This part is not thread safe.
> > > >>
> > > >> If two threads try to add logical flows that have different hashes and
> > > >> lflow_ref is not NULL we're going to have a race condition when
> > > >> inserting to the &lflow_ref->lflow_ref_nodes hash map because the two
> > > >> threads will take different locks.
> > > >>
> > > >
> > > > I think it is safe because a lflow_ref is always associated with an object,
> > > > e.g. port, datapath, lb, etc., and lflow generation for a single such
> > > > object is never executed in parallel, which is how the parallel lflow build
> > > > is designed.
> > > > Does it make sense?
> > >
> > > It happens that it's safe in this current patch set because indeed we
> > > always process individual ports, datapaths, lbs, etc, in the same
> > > thread.  However, this code (lflow_table_add_lflow()) is generic and
> > > there's nothing (not even a comment) that would warn developers in the
> > > future about the potential race if the lflow_ref is shared.
> > >
> > > I spoke to Numan offline a bit about this and I think the current plan
> > > is to leave it as is and add proper locking as a follow up (or in v7).
> > > But I think we still need a clear comment here warning users about this.
> > >  Maybe we should add a comment where the lflow_ref structure is defined too.
> > >
> > > What do you think?
> >
> > I totally agree with you about adding comments to explain the thread safety
> > considerations, and make it clear that the lflow_ref should always be
> > associated with the object that is being processed by the thread.
> > With regard to any follow up change for proper locking, I am not sure what
> > scenario is your concern. I think if we always make sure the lflow_ref
> > passed in is the one owned by the object then the current locking is
> > sufficient. And this is the natural case.
> >
> > However, I did think about a situation where there might be a potential
> > problem in the future when we need to maintain references for more than one
> > input for the same lflow. For the "main" input, which is the object that
> > the thread is iterating and generating lflow for, will have its lflow_ref
> > passed in this function, but we might also need to maintain the reference
> > of the lflow for a secondary input (or even third). In that case it is not
> > just the locking issue, but firstly we need to have a proper way to pass in
> > the secondary lflow_ref, which is what I had mentioned in the review
> > comments for v3:
> > https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html.
> > (the last paragraph). Regardless of that, it is not a problem for now, and
> > I hope there is no need to add references for more inputs for the I-P that
> > matters for production use cases.
> >
>
> I'll update the next version with the proper documentation.
>
> @Han Regarding your comment that we may have the requirement in the
> future to add a logical flow to
> 2 or more lflow_ref's,   I doubt if we would have such a requirement
> or a scenario.
>
> Because calling lflow_table_add_lflow(,..., lflow_ref1,  lflow_ref2)
> is same as calling lflow_table_add_lflow twice
>
> i.e
> lflow_table_add_lflow(,..., lflow_ref1)
> lflow_table_add_lflow(,..., lflow_ref2)
>
> In the second call,  since the ovn_lflow already exists in the lflow
> table, it will be just referenced in the lflow_ref2.
> It would be a problem for sure if these lflow_refs (ref1 and ref2)
> belong to different objects.
>

The scenario you mentioned here is more like duplicated lflows generated
when processing different objects, which have different lflow_refs; that
is not the scenario I was talking about.
I am thinking about the case when a lflow is generated, there are multiple
inputs. For example (extremely simplified just to express the idea), when
generating lflows for a LB object, we also consult other objects such as
datapath. Now we implemented I-P for LB, so for the LB object, for each
lflow generated we will call lflow_table_add_lflow(..., lb->lflow_ref) to
maintain the reference from the LB object to the lflow. However, if in the
future we want to do I-P for datapaths, which means when a datapath is
deleted or updated we need to quickly find the lflows that is linked to the
datapath, we need to maintain the reference from the datapath to the same
lflow that is generated by the same call lflow_table_add_lflow(...,
lb->lflow_ref, datapath->lflow_ref). Or, do you suggest even in this case
we just call the function twice, with the second call's only purpose being
to add the reference to the secondary input? Hmm, it would actually work as
if we are trying to add a redundant lflow, although wasting some cycles for
the ovn_lflow_find(). Of course the locking issue would arise in this case.
Anyway, I don't have a realistic example to prove this is a requirement of
I-P for solving real production scale/performance issues. In the example I
provided, datapath I-P seems to be a non-goal as far as I can tell for now.
So I am not too worried.

> However I do see a scenario where lflow_table_add_lflow() could be
> called with a different lflow_ref object.
>
> Few scenarios I could think of
>
> 1.  While generating logical flows for a logical switch port (of type
> router) 'P1',  there is a possibility (most likely a mistake from
> a contributor) may call something like
>
> lflow_table_add_lflow(....,  P1->peer->lflow_ref)
>
> Even in this case,  we don't generate logical router port flows in
> parallel with logical switch ports and
> hence P1->peer->lflow_ref may not be modified by multiple threads.
> But it's a possibility.  And I think Dumitru
> is perhaps concerned about such scenarios.
>
In this case it does seem to be a problem, because it is possible that
thread1 is processing P1 while thread2 is processing P1->peer, and now both
threads can access P1->peer->lflow_ref. We should make it clear that when
processing an object, we only add lflow_ref for that object.

> 2.  While generating load balancer flows we may also generate logical
> flows for the routers and store them in the od->lflow_ref  (If we
> happen to support I-P for router ports)
>      i.e  HMAP_FOR_EACH_PARALLEL (lb, lb_maps) {
>                 generate_lflows_for_lb(lb, lb->lflow_ref)
>                 FOR_EACH_ROUTER_OF_LB(lr, lb) {
>                     generate_lb_lflows_for_router(lr, lr->lflow_ref)
>                 }
>            }
>
>      In this case, it is possible that a logical router 'lr' may be
> associated with multiple lbs and this may corrupt the lr->lflow_ref.
>
>
> We have 2 choices  (after this patch series is accepted :))
>   a.  Don't do anything.  The documentation added by this patch series
> (which I'll update in the next version)  is good enough.
>        and whenever such scenarios arise,  the contributor will add
> the thread_safe code or make sure that he/she will not use the
> lflow_ref of other objects while
>        generating lflows.  In the scenario 2 above,  the contributor
> should not generate lflows for the routers in the lb_maps loop.
> Instead should generate
>        them in the lr_datapaths loop or lr_stateful loop.
>
>   b.  As a follow up patch,  make the lflow_table_add_lflow()
> thread_safe for lflow_ref so that we are covered for the future
> scenarios.
>        The downside of this is that we have to maintain hash locks for
> each lflow_ref and reconcile the lflow_ref hmap after all the threads
> finish the lflow generation.
>        It does incur some cost (both memory and cpu wise).  But it
> will be only during recomputes.  Which should not be too frequent.
>

I'd prefer solution (a) with proper documentation to emphasize the
rule/assumption.

Thanks,
Han

>
> Let me know what you think.  I'm fine with both, but personally prefer
> (a) as I don't see such scenarios in the near future.
>
>
> Thanks
> Numan
>
>
>
>
> > Thanks,
> > Han
> >
> > >
> > > Regards,
> > > Dumitru
> > >
> > > >
> > > > Thanks,
> > > > Han
> > > >
> > > >> That might corrupt the map.
> > > >>
> > > >> I guess, if we don't want to cause more performance degradation we
> > > >> should maintain as many lflow_ref instances as we do hash_locks,
i.e.,
> > > >> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
> > > >>
> > > >> Wdyt?
> > > >>
> > > >> Regards,
> > > >> Dumitru
> > > >>
> > > >> _______________________________________________
> > > >> dev mailing list
> > > >> dev@openvswitch.org
> > > >> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> > > >
> > >
> > _______________________________________________
> > dev mailing list
> > dev@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev
Han Zhou Jan. 25, 2024, 6:07 a.m. UTC | #10
On Thu, Jan 11, 2024 at 7:32 AM <numans@ovn.org> wrote:
>
> From: Numan Siddique <numans@ovn.org>
>
> ovn_lflow_add() and other related functions/macros are now moved
> into a separate module - lflow-mgr.c.  This module maintains a
> table 'struct lflow_table' for the logical flows.  lflow table
> maintains a hmap to store the logical flows.
>
> It also maintains the logical switch and router dp groups.
>
> Previous commits which added lflow incremental processing for
> the VIF logical ports, stored the references to
> the logical ports' lflows using 'struct lflow_ref_list'.  This
> struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> It is  modified a bit to store the resource to lflow references.
>
> Example usage of 'struct lflow_ref'.
>
> 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
>
> struct ovn_port {
>    ...
>    ...
>    struct lflow_ref *lflow_ref;
>    struct lflow_ref *stateful_lflow_ref;

Hi Numan,

In addition to the lock discussion with you and Dumitru, I still want to
discuss another thing of this patch regarding the second lflow_ref:
stateful_lflow_ref.
I understand that you added this to achieve finer grained I-P especially
for router ports. I am wondering how much performance gain is from this.
For my understanding it shouldn't matter much since each ovn_port should be
associated with a very limited number of lflows. Could you provide more
insight/data on this? I think it would be better to keep things simple
(i.e. one object, one lflow_ref list) unless the benefit is obvious.

I am also trying to run another performance regression test for recompute,
since I am a little concerned about the DP refcnt hmap associated with each
lflow. I understand it's necessary to handle the duplicated lflow cases,
but it looks heavy and removes the opportunities for more efficient bitmap
operations. Let me know if you have already evaluated its performance.

Thanks,
Han

> };
>
> All the logical flows generated by
> build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
>
> All the logical flows generated by build_lsp_lflows_for_lbnats()
> uses the ovn_port->stateful_lflow_ref.
>
> When handling the ovn_port changes incrementally, the lflows referenced
> in 'struct ovn_port' are cleared and regenerated and synced to the
> SB logical flows.
>
> eg.
>
> lflow_ref_clear_lflows(op->lflow_ref);
> build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
>
> This patch does few more changes:
>   -  Logical flows are now hashed without the logical
>      datapaths.  If a logical flow is referenced by just one
>      datapath, we don't rehash it.
>
>   -  The synthetic 'hash' column of sbrec_logical_flow now
>      doesn't use the logical datapath.  This means that
>      when ovn-northd is updated/upgraded and has this commit,
>      all the logical flows with 'logical_datapath' column
>      set will get deleted and re-added causing some disruptions.
>
>   -  With the commit [1] which added I-P support for logical
>      port changes, multiple logical flows with same match 'M'
>      and actions 'A' are generated and stored without the
>      dp groups, which was not the case prior to
>      that patch.
>      One example to generate these lflows is:
>              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
>              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
>
>      Now with this patch we go back to the earlier way.  i.e
>      one logical flow with logical_dp_groups set.
>
>   -  With this patch any updates to a logical port which
>      doesn't result in new logical flows will not result in
>      deletion and addition of same logical flows.
>      Eg.
>      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
>      will be a no-op to the SB logical flow table.
>
> [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow'
node.")
>
> Signed-off-by: Numan Siddique <numans@ovn.org>
Han Zhou Jan. 25, 2024, 6:15 a.m. UTC | #11
On Wed, Jan 24, 2024 at 10:07 PM Han Zhou <hzhou@ovn.org> wrote:
>
>
>
> On Thu, Jan 11, 2024 at 7:32 AM <numans@ovn.org> wrote:
> >
> > From: Numan Siddique <numans@ovn.org>
> >
> > ovn_lflow_add() and other related functions/macros are now moved
> > into a separate module - lflow-mgr.c.  This module maintains a
> > table 'struct lflow_table' for the logical flows.  lflow table
> > maintains a hmap to store the logical flows.
> >
> > It also maintains the logical switch and router dp groups.
> >
> > Previous commits which added lflow incremental processing for
> > the VIF logical ports, stored the references to
> > the logical ports' lflows using 'struct lflow_ref_list'.  This
> > struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> > It is  modified a bit to store the resource to lflow references.
> >
> > Example usage of 'struct lflow_ref'.
> >
> > 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
> >
> > struct ovn_port {
> >    ...
> >    ...
> >    struct lflow_ref *lflow_ref;
> >    struct lflow_ref *stateful_lflow_ref;
>
> Hi Numan,
>
> In addition to the lock discussion with you and Dumitru, I still want to
discuss another thing of this patch regarding the second lflow_ref:
stateful_lflow_ref.
> I understand that you added this to achieve finer grained I-P especially
for router ports. I am wondering how much performance gain is from this.
For my understanding it shouldn't matter much since each ovn_port should be
associated with a very limited number of lflows. Could you provide more
insight/data on this? I think it would be better to keep things simple
(i.e. one object, one lflow_ref list) unless the benefit is obvious.
>
> I am also trying to run another performance regression test for
recompute, since I am a little concerned about the DP refcnt hmap
associated with each lflow. I understand it's necessary to handle the
duplicated lflow cases, but it looks heavy and removes the opportunities
for more efficient bitmap operations. Let me know if you have already
evaluated its performance.
>
> Thanks,
> Han
>

Sorry that I forgot to mention another minor comment for the struct
lflow_ref_node. It would be helpful to add more comments about the typical
life cycle of the lflow_ref_node, so that it is easier to understand why an
hmap is used in lflow_ref, and why 'linked' is needed in lflow_ref_node. For
my understanding the life cycle is something like:
1. created and linked at lflow_table_add_lflow()
2. unlinked when handling a change of the object that references it.
3. it may be re-linked when handling the same object change.
4. it is used to sync the lflow change (e.g. adding/removing DPs) to SB.
5. it is destroyed after syncing an unlinked lflow_ref to SB.

I think it would help people understand the code more easily. What do you
think?

Thanks,
Han

> > };
> >
> > All the logical flows generated by
> > build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
> >
> > All the logical flows generated by build_lsp_lflows_for_lbnats()
> > uses the ovn_port->stateful_lflow_ref.
> >
> > When handling the ovn_port changes incrementally, the lflows referenced
> > in 'struct ovn_port' are cleared and regenerated and synced to the
> > SB logical flows.
> >
> > eg.
> >
> > lflow_ref_clear_lflows(op->lflow_ref);
> > build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> > lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
> >
> > This patch does few more changes:
> >   -  Logical flows are now hashed without the logical
> >      datapaths.  If a logical flow is referenced by just one
> >      datapath, we don't rehash it.
> >
> >   -  The synthetic 'hash' column of sbrec_logical_flow now
> >      doesn't use the logical datapath.  This means that
> >      when ovn-northd is updated/upgraded and has this commit,
> >      all the logical flows with 'logical_datapath' column
> >      set will get deleted and re-added causing some disruptions.
> >
> >   -  With the commit [1] which added I-P support for logical
> >      port changes, multiple logical flows with same match 'M'
> >      and actions 'A' are generated and stored without the
> >      dp groups, which was not the case prior to
> >      that patch.
> >      One example to generate these lflows is:
> >              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> >
> >      Now with this patch we go back to the earlier way.  i.e
> >      one logical flow with logical_dp_groups set.
> >
> >   -  With this patch any updates to a logical port which
> >      doesn't result in new logical flows will not result in
> >      deletion and addition of same logical flows.
> >      Eg.
> >      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
> >      will be a no-op to the SB logical flow table.
> >
> > [1] - 8bbd678("northd: Incremental processing of VIF additions in
'lflow' node.")
> >
> > Signed-off-by: Numan Siddique <numans@ovn.org>
>
Dumitru Ceara Jan. 25, 2024, 9:21 a.m. UTC | #12
On 1/25/24 06:44, Han Zhou wrote:
> On Wed, Jan 24, 2024 at 8:39 PM Numan Siddique <numans@ovn.org> wrote:
>>
>> On Wed, Jan 24, 2024 at 10:53 PM Han Zhou <hzhou@ovn.org> wrote:
>>>
>>> On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>>
>>>> On 1/24/24 06:01, Han Zhou wrote:
>>>>> On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com>
> wrote:
>>>>>>
>>>>>> On 1/11/24 16:31, numans@ovn.org wrote:
>>>>>>> +
>>>>>>> +void
>>>>>>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
>>>>>>> +                      const struct ovn_datapath *od,
>>>>>>> +                      const unsigned long *dp_bitmap, size_t
>>>>> dp_bitmap_len,
>>>>>>> +                      enum ovn_stage stage, uint16_t priority,
>>>>>>> +                      const char *match, const char *actions,
>>>>>>> +                      const char *io_port, const char
> *ctrl_meter,
>>>>>>> +                      const struct ovsdb_idl_row *stage_hint,
>>>>>>> +                      const char *where,
>>>>>>> +                      struct lflow_ref *lflow_ref)
>>>>>>> +    OVS_EXCLUDED(fake_hash_mutex)
>>>>>>> +{
>>>>>>> +    struct ovs_mutex *hash_lock;
>>>>>>> +    uint32_t hash;
>>>>>>> +
>>>>>>> +    ovs_assert(!od ||
>>>>>>> +               ovn_stage_to_datapath_type(stage) ==
>>>>> ovn_datapath_get_type(od));
>>>>>>> +
>>>>>>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
>>>>>>> +                                 ovn_stage_get_pipeline(stage),
>>>>>>> +                                 priority, match,
>>>>>>> +                                 actions);
>>>>>>> +
>>>>>>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
>>>>>>> +    struct ovn_lflow *lflow =
>>>>>>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
>>>>>>> +                         dp_bitmap_len, hash, stage,
>>>>>>> +                         priority, match, actions,
>>>>>>> +                         io_port, ctrl_meter, stage_hint, where);
>>>>>>> +
>>>>>>> +    if (lflow_ref) {
>>>>>>> +        /* lflow referencing is only supported if 'od' is not
> NULL.
>>> */
>>>>>>> +        ovs_assert(od);
>>>>>>> +
>>>>>>> +        struct lflow_ref_node *lrn =
>>>>>>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes,
> lflow,
>>>>> hash);
>>>>>>> +        if (!lrn) {
>>>>>>> +            lrn = xzalloc(sizeof *lrn);
>>>>>>> +            lrn->lflow = lflow;
>>>>>>> +            lrn->dp_index = od->index;
>>>>>>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
>>>>>>> +                            &lrn->lflow_list_node);
>>>>>>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
>>>>>>> +            ovs_list_insert(&lflow->referenced_by,
>>>>> &lrn->ref_list_node);
>>>>>>> +
>>>>>>> +            hmap_insert(&lflow_ref->lflow_ref_nodes,
> &lrn->ref_node,
>>>>> hash);
>>>>>>> +        }
>>>>>>> +
>>>>>>> +        lrn->linked = true;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    lflow_hash_unlock(hash_lock);
>>>>>>> +
>>>>>>> +}
>>>>>>> +
>>>>>>
>>>>>> This part is not thread safe.
>>>>>>
>>>>>> If two threads try to add logical flows that have different hashes
> and
>>>>>> lflow_ref is not NULL we're going to have a race condition when
>>>>>> inserting to the &lflow_ref->lflow_ref_nodes hash map because the
> two
>>>>>> threads will take different locks.
>>>>>>
>>>>>
>>>>> I think it is safe because a lflow_ref is always associated with an
>>> object,
>>>>> e.g. port, datapath, lb, etc., and lflow generation for a single
> such
>>>>> object is never executed in parallel, which is how the parallel
> lflow
>>> build
>>>>> is designed.
>>>>> Does it make sense?
>>>>
>>>> It happens that it's safe in this current patch set because indeed we
>>>> always process individual ports, datapaths, lbs, etc, in the same
>>>> thread.  However, this code (lflow_table_add_lflow()) is generic and
>>>> there's nothing (not even a comment) that would warn developers in the
>>>> future about the potential race if the lflow_ref is shared.
>>>>
>>>> I spoke to Numan offline a bit about this and I think the current plan
>>>> is to leave it as is and add proper locking as a follow up (or in v7).
>>>> But I think we still need a clear comment here warning users about
> this.
>>>>  Maybe we should add a comment where the lflow_ref structure is
> defined
>>> too.
>>>>
>>>> What do you think?
>>>
>>> I totally agree with you about adding comments to explain the thread
> safety
>>> considerations, and make it clear that the lflow_ref should always be
>>> associated with the object that is being processed by the thread.
>>> With regard to any follow up change for proper locking, I am not sure
> what
>>> scenario is your concern. I think if we always make sure the lflow_ref
>>> passed in is the one owned by the object then the current locking is
>>> sufficient. And this is the natural case.
>>>
>>> However, I did think about a situation where there might be a potential
>>> problem in the future when we need to maintain references for more than
> one
>>> input for the same lflow. For the "main" input, which is the object that
>>> the thread is iterating and generating lflow for, will have its
> lflow_ref
>>> passed in this function, but we might also need to maintain the
> reference
>>> of the lflow for a secondary input (or even third). In that case it is
> not
>>> just the locking issue, but firstly we need to have a proper way to
> pass in
>>> the secondary lflow_ref, which is what I had mentioned in the review
>>> comments for v3:
>>> https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html
> .
>>> (the last paragraph). Regardless of that, it is not a problem for now,
> and
>>> I hope there is no need to add references for more inputs for the I-P
> that
>>> matters for production use cases.
>>>
>>
>> I'll update the next version with the proper documentation.
>>
>> @Han Regarding your comment that we may have the requirement in the
>> future to add a logical flow to
>> 2 or more lflow_ref's,   I doubt if we would have such a requirement
>> or a scenario.
>>
>> Because calling lflow_table_add_lflow(,..., lflow_ref1,  lflow_ref2)
>> is same as calling lflow_table_add_lflow twice
>>
>> i.e
>> lflow_table_add_lflow(,..., lflow_ref1)
>> lflow_table_add_lflow(,..., lflow_ref2)
>>
>> In the second call,  since the ovn_lflow already exists in the lflow
>> table, it will be just referenced in the lflow_ref2.
>> It would be a problem for sure if these lflow_refs (ref1 and ref2)
>> belong to different objects.
>>
> 
> The scenario you mentioned here is more like duplicated lflows generated
> when processing different objects, which has different lflow_refs, which is
> not the scenario I was talking about.
> I am thinking about the case when a lflow is generated, there are multiple
> inputs. For example (extremely simplified just to express the idea), when
> generating lflows for a LB object, we also consult other objects such as
> datapath. Now we implemented I-P for LB, so for the LB object, for each
> lflow generated we will call lflow_table_add_lflow(..., lb->lflow_ref) to
> maintain the reference from the LB object to the lflow. However, if in the
> future we want to do I-P for datapaths, which means when a datapath is
> deleted or updated we need to quickly find the lflows that is linked to the
> datapath, we need to maintain the reference from the datapath to the same
> lflow that is generated by the same call lflow_table_add_lflow(...,
> lb->lflow_ref, datapath->lflow_ref). Or, do you suggest even in this case
> we just call the function twice, with the second call's only purpose being
> to add the reference to the secondary input? Hmm, it would actually work as
> if we are trying to add a redundant lflow, although wasting some cycles for
> the ovn_lflow_find(). Of course the locking issue would arise in this case.
> Anyway, I don't have a realistic example to prove this is a requirement of
> I-P for solving real production scale/performance issues. In the example I
> provided, datapath I-P seems to be a non-goal as far as I can tell for now.
> So I am not too worried.
> 
>> However I do see a scenario where lflow_table_add_lflow() could be
>> called with a different lflow_ref object.
>>
>> Few scenarios I could think of
>>
>> 1.  While generating logical flows for a logical switch port (of type
>> router) 'P1',  there is a possibility (most likely a mistake from
>> a contributor) may call something like
>>
>> lflow_table_add_lflow(....,  P1->peer->lflow_ref)
>>
>> Even in this case,  we don't generate logical router port flows in
>> parallel with logical switch ports and
>> hence P1->peer->lflow_ref may not be modified by multiple threads.
>> But it's a possibility.  And I think Dumitru
>> is perhaps concerned about such scenarios.
>>
> In this case it does seem to be a problem, because it is possible that
> thread1 is processing P1 while thread2 is processing P1->peer, and now both
> threads can access P1->peer->lflow_ref. We should make it clear that when
> processing an object, we only add lflow_ref for that object.
> 
>> 2.  While generating load balancer flows we may also generate logical
>> flows for the routers and store them in the od->lflow_ref  (If we
>> happen to support I-P for router ports)
>>      i.e  HMAP_FOR_EACH_PARALLEL (lb, lb_maps) {
>>                 generate_lflows_for_lb(lb, lb->lflow_ref)
>>                 FOR_EACH_ROUTER_OF_LB(lr, lb) {
>>                     generate_lb_lflows_for_router(lr, lr->lflow_ref)
>>                 }
>>            }
>>
>>      In this case, it is possible that a logical router 'lr' may be
>> associated with multiple lbs and this may corrupt the lr->lflow_ref.
>>
>>
>> We have 2 choices  (after this patch series is accepted :))
>>   a.  Don't do anything.  The documentation added by this patch series
>> (which I'll update in the next version)  is good enough.
>>        and whenever such scenarios arise,  the contributor will add
>> the thread_safe code or make sure that he/she will not use the
>> lflow_ref of other objects while
>>        generating lflows.  In the scenario 2 above,  the contributor
>> should not generate lflows for the routers in the lb_maps loop.
>> Instead should generate
>>        them in the lr_datapaths loop or lr_stateful loop.
>>
>>   b.  As a follow up patch,  make the lflow_table_add_lflow()
>> thread_safe for lflow_ref so that we are covered for the future
>> scenarios.
>>        The downside of this is that we have to maintain hash locks for
>> each lflow_ref and reconcile the lflow_ref hmap after all the threads
>> finish the lflow generation.
>>        It does incur some cost (both memory and cpu wise).  But it
>> will be only during recomputes.  Which should not be too frequent.
>>
> 
> I'd prefer solution (a) with proper documentation to emphasize the
> rule/assumption.
> 

Would it make sense to also try to detect whether a lflow_ref is used
from multiple threads?  E.g., store the id of the last thread that
accessed it and assert that it's the same as the thread currently
accessing the lflow ref?

Just to make sure we crash explicitly in case such bugs are
inadvertently introduced?

Regards,
Dumitru

> Thanks,
> Han
> 
>>
>> Let me know what you think.  I'm fine with both, but personally prefer
>> (a) as I don't see such scenarios in the near future.
>>
>>
>> Thanks
>> Numan
>>
>>
>>
>>
>>> Thanks,
>>> Han
>>>
>>>>
>>>> Regards,
>>>> Dumitru
>>>>
>>>>>
>>>>> Thanks,
>>>>> Han
>>>>>
>>>>>> That might corrupt the map.
>>>>>>
>>>>>> I guess, if we don't want to cause more performance degradation we
>>>>>> should maintain as many lflow_ref instances as we do hash_locks,
> i.e.,
>>>>>> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
>>>>>>
>>>>>> Wdyt?
>>>>>>
>>>>>> Regards,
>>>>>> Dumitru
>>>>>>
>>>>>> _______________________________________________
>>>>>> dev mailing list
>>>>>> dev@openvswitch.org
>>>>>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>>>>>
>>>>
>>> _______________________________________________
>>> dev mailing list
>>> dev@openvswitch.org
>>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
Numan Siddique Jan. 25, 2024, 3:40 p.m. UTC | #13
On Thu, Jan 25, 2024 at 4:21 AM Dumitru Ceara <dceara@redhat.com> wrote:
>
> On 1/25/24 06:44, Han Zhou wrote:
> > On Wed, Jan 24, 2024 at 8:39 PM Numan Siddique <numans@ovn.org> wrote:
> >>
> >> On Wed, Jan 24, 2024 at 10:53 PM Han Zhou <hzhou@ovn.org> wrote:
> >>>
> >>> On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
> >>>>
> >>>> On 1/24/24 06:01, Han Zhou wrote:
> >>>>> On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com>
> > wrote:
> >>>>>>
> >>>>>> On 1/11/24 16:31, numans@ovn.org wrote:
> >>>>>>> +
> >>>>>>> +void
> >>>>>>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
> >>>>>>> +                      const struct ovn_datapath *od,
> >>>>>>> +                      const unsigned long *dp_bitmap, size_t
> >>>>> dp_bitmap_len,
> >>>>>>> +                      enum ovn_stage stage, uint16_t priority,
> >>>>>>> +                      const char *match, const char *actions,
> >>>>>>> +                      const char *io_port, const char
> > *ctrl_meter,
> >>>>>>> +                      const struct ovsdb_idl_row *stage_hint,
> >>>>>>> +                      const char *where,
> >>>>>>> +                      struct lflow_ref *lflow_ref)
> >>>>>>> +    OVS_EXCLUDED(fake_hash_mutex)
> >>>>>>> +{
> >>>>>>> +    struct ovs_mutex *hash_lock;
> >>>>>>> +    uint32_t hash;
> >>>>>>> +
> >>>>>>> +    ovs_assert(!od ||
> >>>>>>> +               ovn_stage_to_datapath_type(stage) ==
> >>>>> ovn_datapath_get_type(od));
> >>>>>>> +
> >>>>>>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
> >>>>>>> +                                 ovn_stage_get_pipeline(stage),
> >>>>>>> +                                 priority, match,
> >>>>>>> +                                 actions);
> >>>>>>> +
> >>>>>>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
> >>>>>>> +    struct ovn_lflow *lflow =
> >>>>>>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
> >>>>>>> +                         dp_bitmap_len, hash, stage,
> >>>>>>> +                         priority, match, actions,
> >>>>>>> +                         io_port, ctrl_meter, stage_hint, where);
> >>>>>>> +
> >>>>>>> +    if (lflow_ref) {
> >>>>>>> +        /* lflow referencing is only supported if 'od' is not
> > NULL.
> >>> */
> >>>>>>> +        ovs_assert(od);
> >>>>>>> +
> >>>>>>> +        struct lflow_ref_node *lrn =
> >>>>>>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes,
> > lflow,
> >>>>> hash);
> >>>>>>> +        if (!lrn) {
> >>>>>>> +            lrn = xzalloc(sizeof *lrn);
> >>>>>>> +            lrn->lflow = lflow;
> >>>>>>> +            lrn->dp_index = od->index;
> >>>>>>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
> >>>>>>> +                            &lrn->lflow_list_node);
> >>>>>>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
> >>>>>>> +            ovs_list_insert(&lflow->referenced_by,
> >>>>> &lrn->ref_list_node);
> >>>>>>> +
> >>>>>>> +            hmap_insert(&lflow_ref->lflow_ref_nodes,
> > &lrn->ref_node,
> >>>>> hash);
> >>>>>>> +        }
> >>>>>>> +
> >>>>>>> +        lrn->linked = true;
> >>>>>>> +    }
> >>>>>>> +
> >>>>>>> +    lflow_hash_unlock(hash_lock);
> >>>>>>> +
> >>>>>>> +}
> >>>>>>> +
> >>>>>>
> >>>>>> This part is not thread safe.
> >>>>>>
> >>>>>> If two threads try to add logical flows that have different hashes
> > and
> >>>>>> lflow_ref is not NULL we're going to have a race condition when
> >>>>>> inserting to the &lflow_ref->lflow_ref_nodes hash map because the
> > two
> >>>>>> threads will take different locks.
> >>>>>>
> >>>>>
> >>>>> I think it is safe because a lflow_ref is always associated with an
> >>> object,
> >>>>> e.g. port, datapath, lb, etc., and lflow generation for a single
> > such
> >>>>> object is never executed in parallel, which is how the parallel
> > lflow
> >>> build
> >>>>> is designed.
> >>>>> Does it make sense?
> >>>>
> >>>> It happens that it's safe in this current patch set because indeed we
> >>>> always process individual ports, datapaths, lbs, etc, in the same
> >>>> thread.  However, this code (lflow_table_add_lflow()) is generic and
> >>>> there's nothing (not even a comment) that would warn developers in the
> >>>> future about the potential race if the lflow_ref is shared.
> >>>>
> >>>> I spoke to Numan offline a bit about this and I think the current plan
> >>>> is to leave it as is and add proper locking as a follow up (or in v7).
> >>>> But I think we still need a clear comment here warning users about
> > this.
> >>>>  Maybe we should add a comment where the lflow_ref structure is
> > defined
> >>> too.
> >>>>
> >>>> What do you think?
> >>>
> >>> I totally agree with you about adding comments to explain the thread
> > safety
> >>> considerations, and make it clear that the lflow_ref should always be
> >>> associated with the object that is being processed by the thread.
> >>> With regard to any follow up change for proper locking, I am not sure
> > what
> >>> scenario is your concern. I think if we always make sure the lflow_ref
> >>> passed in is the one owned by the object then the current locking is
> >>> sufficient. And this is the natural case.
> >>>
> >>> However, I did think about a situation where there might be a potential
> >>> problem in the future when we need to maintain references for more than
> > one
> >>> input for the same lflow. For the "main" input, which is the object that
> >>> the thread is iterating and generating lflow for, will have its
> > lflow_ref
> >>> passed in this function, but we might also need to maintain the
> > reference
> >>> of the lflow for a secondary input (or even third). In that case it is
> > not
> >>> just the locking issue, but firstly we need to have a proper way to
> > pass in
> >>> the secondary lflow_ref, which is what I had mentioned in the review
> >>> comments for v3:
> >>> https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html
> > .
> >>> (the last paragraph). Regardless of that, it is not a problem for now,
> > and
> >>> I hope there is no need to add references for more inputs for the I-P
> > that
> >>> matters for production use cases.
> >>>
> >>
> >> I'll update the next version with the proper documentation.
> >>
> >> @Han Regarding your comment that we may have the requirement in the
> >> future to add a logical flow to
> >> 2 or more lflow_ref's,   I doubt if we would have such a requirement
> >> or a scenario.
> >>
> >> Because calling lflow_table_add_lflow(,..., lflow_ref1,  lflow_ref2)
> >> is same as calling lflow_table_add_lflow twice
> >>
> >> i.e
> >> lflow_table_add_lflow(,..., lflow_ref1)
> >> lflow_table_add_lflow(,..., lflow_ref2)
> >>
> >> In the second call,  since the ovn_lflow already exists in the lflow
> >> table, it will be just referenced in the lflow_ref2.
> >> It would be a problem for sure if these lflow_refs (ref1 and ref2)
> >> belong to different objects.
> >>
> >
> > The scenario you mentioned here is more like duplicated lflows generated
> > when processing different objects, which has different lflow_refs, which is
> > not the scenario I was talking about.
> > I am thinking about the case when a lflow is generated, there are multiple
> > inputs. For example (extremely simplified just to express the idea), when
> > generating lflows for a LB object, we also consult other objects such as
> > datapath. Now we implemented I-P for LB, so for the LB object, for each
> > lflow generated we will call lflow_table_add_lflow(..., lb->lflow_ref) to
> > maintain the reference from the LB object to the lflow. However, if in the
> > future we want to do I-P for datapaths, which means when a datapath is
> > deleted or updated we need to quickly find the lflows that is linked to the
> > datapath, we need to maintain the reference from the datapath to the same
> > lflow that is generated by the same call lflow_table_add_lflow(...,
> > lb->lflow_ref, datapath->lflow_ref). Or, do you suggest even in this case
> > we just call the function twice, with the second call's only purpose being
> > to add the reference to the secondary input? Hmm, it would actually work as
> > if we are trying to add a redundant lflow, although wasting some cycles for
> > the ovn_lflow_find(). Of course the locking issue would arise in this case.
> > Anyway, I don't have a realistic example to prove this is a requirement of
> > I-P for solving real production scale/performance issues. In the example I
> > provided, datapath I-P seems to be a non-goal as far as I can tell for now.
> > So I am not too worried.
> >
> >> However I do see a scenario where lflow_table_add_lflow() could be
> >> called with a different lflow_ref object.
> >>
> >> Few scenarios I could think of
> >>
> >> 1.  While generating logical flows for a logical switch port (of type
> >> router) 'P1',  there is a possibility (most likely a mistake from
> >> a contributor) may call something like
> >>
> >> lflow_table_add_lflow(....,  P1->peer->lflow_ref)
> >>
> >> Even in this case,  we don't generate logical router port flows in
> >> parallel with logical switch ports and
> >> hence P1->peer->lflow_ref may not be modified by multiple threads.
> >> But it's a possibility.  And I think Dumitru
> >> is perhaps concerned about such scenarios.
> >>
> > In this case it does seem to be a problem, because it is possible that
> > thread1 is processing P1 while thread2 is processing P1->peer, and now both
> > threads can access P1->peer->lflow_ref. We should make it clear that when
> > processing an object, we only add lflow_ref for that object.
> >
> >> 2.  While generating load balancer flows we may also generate logical
> >> flows for the routers and store them in the od->lflow_ref  (If we
> >> happen to support I-P for router ports)
> >>      i.e  HMAP_FOR_EACH_PARALLEL (lb, lb_maps) {
> >>                 generate_lflows_for_lb(lb, lb->lflow_ref)
> >>                 FOR_EACH_ROUTER_OF_LB(lr, lb) {
> >>                     generate_lb_lflows_for_router(lr, lr->lflow_ref)
> >>                 }
> >>            }
> >>
> >>      In this case, it is possible that a logical router 'lr' may be
> >> associated with multiple lbs and this may corrupt the lr->lflow_ref.
> >>
> >>
> >> We have 2 choices  (after this patch series is accepted :))
> >>   a.  Don't do anything.  The documentation added by this patch series
> >> (which I'll update in the next version)  is good enough.
> >>        and whenever such scenarios arise,  the contributor will add
> >> the thread_safe code or make sure that he/she will not use the
> >> lflow_ref of other objects while
> >>        generating lflows.  In the scenario 2 above,  the contributor
> >> should not generate lflows for the routers in the lb_maps loop.
> >> Instead should generate
> >>        them in the lr_datapaths loop or lr_stateful loop.
> >>
> >>   b.  As a follow up patch,  make the lflow_table_add_lflow()
> >> thread_safe for lflow_ref so that we are covered for the future
> >> scenarios.
> >>        The downside of this is that we have to maintain hash locks for
> >> each lflow_ref and reconcile the lflow_ref hmap after all the threads
> >> finish the lflow generation.
> >>        It does incur some cost (both memory and cpu wise).  But it
> >> will be only during recomputes.  Which should not be too frequent.
> >>
> >
> > I'd prefer solution (a) with proper documentation to emphasize the
> > rule/assumption.
> >
>
> Would it make sense to also try to detect whether a lflow_ref is used
> from multiple threads?  E.g., store the id of the last thread that
> accessed it and assert that it's the same with the thread currently
> accessing the lflow ref?
>
> Just to make sure we crash explicitly in case such bugs are
> inadvertently introduced?
>

I think it would be hard to make that distinction here because I-P is
handled in a single thread.  How would lflow_ref.c distinguish between
a full recompute with parallel threads and single-threaded I-P?

Numan

> Regards,
> Dumitru
>
> > Thanks,
> > Han
> >
> >>
> >> Let me know what you think.  I'm fine with both, but personally prefer
> >> (a) as I don't see such scenarios in the near future.
> >>
> >>
> >> Thanks
> >> Numan
> >>
> >>
> >>
> >>
> >>> Thanks,
> >>> Han
> >>>
> >>>>
> >>>> Regards,
> >>>> Dumitru
> >>>>
> >>>>>
> >>>>> Thanks,
> >>>>> Han
> >>>>>
> >>>>>> That might corrupt the map.
> >>>>>>
> >>>>>> I guess, if we don't want to cause more performance degradation we
> >>>>>> should maintain as many lflow_ref instances as we do hash_locks,
> > i.e.,
> >>>>>> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
> >>>>>>
> >>>>>> Wdyt?
> >>>>>>
> >>>>>> Regards,
> >>>>>> Dumitru
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> dev mailing list
> >>>>>> dev@openvswitch.org
> >>>>>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> >>>>>
> >>>>
> >
>
Dumitru Ceara Jan. 26, 2024, 9:33 a.m. UTC | #14
On 1/25/24 16:40, Numan Siddique wrote:
> On Thu, Jan 25, 2024 at 4:21 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>
>> On 1/25/24 06:44, Han Zhou wrote:
>>> On Wed, Jan 24, 2024 at 8:39 PM Numan Siddique <numans@ovn.org> wrote:
>>>>
>>>> On Wed, Jan 24, 2024 at 10:53 PM Han Zhou <hzhou@ovn.org> wrote:
>>>>>
>>>>> On Wed, Jan 24, 2024 at 4:23 AM Dumitru Ceara <dceara@redhat.com> wrote:
>>>>>>
>>>>>> On 1/24/24 06:01, Han Zhou wrote:
>>>>>>> On Fri, Jan 19, 2024 at 2:50 AM Dumitru Ceara <dceara@redhat.com>
>>> wrote:
>>>>>>>>
>>>>>>>> On 1/11/24 16:31, numans@ovn.org wrote:
>>>>>>>>> +
>>>>>>>>> +void
>>>>>>>>> +lflow_table_add_lflow(struct lflow_table *lflow_table,
>>>>>>>>> +                      const struct ovn_datapath *od,
>>>>>>>>> +                      const unsigned long *dp_bitmap, size_t
>>>>>>> dp_bitmap_len,
>>>>>>>>> +                      enum ovn_stage stage, uint16_t priority,
>>>>>>>>> +                      const char *match, const char *actions,
>>>>>>>>> +                      const char *io_port, const char
>>> *ctrl_meter,
>>>>>>>>> +                      const struct ovsdb_idl_row *stage_hint,
>>>>>>>>> +                      const char *where,
>>>>>>>>> +                      struct lflow_ref *lflow_ref)
>>>>>>>>> +    OVS_EXCLUDED(fake_hash_mutex)
>>>>>>>>> +{
>>>>>>>>> +    struct ovs_mutex *hash_lock;
>>>>>>>>> +    uint32_t hash;
>>>>>>>>> +
>>>>>>>>> +    ovs_assert(!od ||
>>>>>>>>> +               ovn_stage_to_datapath_type(stage) ==
>>>>>>> ovn_datapath_get_type(od));
>>>>>>>>> +
>>>>>>>>> +    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
>>>>>>>>> +                                 ovn_stage_get_pipeline(stage),
>>>>>>>>> +                                 priority, match,
>>>>>>>>> +                                 actions);
>>>>>>>>> +
>>>>>>>>> +    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
>>>>>>>>> +    struct ovn_lflow *lflow =
>>>>>>>>> +        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
>>>>>>>>> +                         dp_bitmap_len, hash, stage,
>>>>>>>>> +                         priority, match, actions,
>>>>>>>>> +                         io_port, ctrl_meter, stage_hint, where);
>>>>>>>>> +
>>>>>>>>> +    if (lflow_ref) {
>>>>>>>>> +        /* lflow referencing is only supported if 'od' is not
>>> NULL.
>>>>> */
>>>>>>>>> +        ovs_assert(od);
>>>>>>>>> +
>>>>>>>>> +        struct lflow_ref_node *lrn =
>>>>>>>>> +            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes,
>>> lflow,
>>>>>>> hash);
>>>>>>>>> +        if (!lrn) {
>>>>>>>>> +            lrn = xzalloc(sizeof *lrn);
>>>>>>>>> +            lrn->lflow = lflow;
>>>>>>>>> +            lrn->dp_index = od->index;
>>>>>>>>> +            ovs_list_insert(&lflow_ref->lflows_ref_list,
>>>>>>>>> +                            &lrn->lflow_list_node);
>>>>>>>>> +            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
>>>>>>>>> +            ovs_list_insert(&lflow->referenced_by,
>>>>>>> &lrn->ref_list_node);
>>>>>>>>> +
>>>>>>>>> +            hmap_insert(&lflow_ref->lflow_ref_nodes,
>>> &lrn->ref_node,
>>>>>>> hash);
>>>>>>>>> +        }
>>>>>>>>> +
>>>>>>>>> +        lrn->linked = true;
>>>>>>>>> +    }
>>>>>>>>> +
>>>>>>>>> +    lflow_hash_unlock(hash_lock);
>>>>>>>>> +
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>
>>>>>>>> This part is not thread safe.
>>>>>>>>
>>>>>>>> If two threads try to add logical flows that have different hashes
>>> and
>>>>>>>> lflow_ref is not NULL we're going to have a race condition when
>>>>>>>> inserting to the &lflow_ref->lflow_ref_nodes hash map because the
>>> two
>>>>>>>> threads will take different locks.
>>>>>>>>
>>>>>>>
>>>>>>> I think it is safe because a lflow_ref is always associated with an
>>>>> object,
>>>>>>> e.g. port, datapath, lb, etc., and lflow generation for a single
>>> such
>>>>>>> object is never executed in parallel, which is how the parallel
>>> lflow
>>>>> build
>>>>>>> is designed.
>>>>>>> Does it make sense?
>>>>>>
>>>>>> It happens that it's safe in this current patch set because indeed we
>>>>>> always process individual ports, datapaths, lbs, etc, in the same
>>>>>> thread.  However, this code (lflow_table_add_lflow()) is generic and
>>>>>> there's nothing (not even a comment) that would warn developers in the
>>>>>> future about the potential race if the lflow_ref is shared.
>>>>>>
>>>>>> I spoke to Numan offline a bit about this and I think the current plan
>>>>>> is to leave it as is and add proper locking as a follow up (or in v7).
>>>>>> But I think we still need a clear comment here warning users about
>>> this.
>>>>>>  Maybe we should add a comment where the lflow_ref structure is
>>> defined
>>>>> too.
>>>>>>
>>>>>> What do you think?
>>>>>
>>>>> I totally agree with you about adding comments to explain the thread
>>> safety
>>>>> considerations, and make it clear that the lflow_ref should always be
>>>>> associated with the object that is being processed by the thread.
>>>>> With regard to any follow up change for proper locking, I am not sure
>>> what
>>>>> scenario is your concern. I think if we always make sure the lflow_ref
>>>>> passed in is the one owned by the object then the current locking is
>>>>> sufficient. And this is the natural case.
>>>>>
>>>>> However, I did think about a situation where there might be a potential
>>>>> problem in the future when we need to maintain references for more than
>>> one
>>>>> input for the same lflow. For the "main" input, which is the object that
>>>>> the thread is iterating and generating lflow for, will have its
>>> lflow_ref
>>>>> passed in this function, but we might also need to maintain the
>>> reference
>>>>> of the lflow for a secondary input (or even third). In that case it is
>>> not
>>>>> just the locking issue, but firstly we need to have a proper way to
>>> pass in
>>>>> the secondary lflow_ref, which is what I had mentioned in the review
>>>>> comments for v3:
>>>>> https://mail.openvswitch.org/pipermail/ovs-dev/2023-December/410269.html
>>> .
>>>>> (the last paragraph). Regardless of that, it is not a problem for now,
>>> and
>>>>> I hope there is no need to add references for more inputs for the I-P
>>> that
>>>>> matters for production use cases.
>>>>>
>>>>
>>>> I'll update the next version with the proper documentation.
>>>>
>>>> @Han Regarding your comment that we may have the requirement in the
>>>> future to add a logical flow to
>>>> 2 or more lflow_ref's,   I doubt if we would have such a requirement
>>>> or a scenario.
>>>>
>>>> Because calling lflow_table_add_lflow(,..., lflow_ref1,  lflow_ref2)
>>>> is same as calling lflow_table_add_lflow twice
>>>>
>>>> i.e
>>>> lflow_table_add_lflow(,..., lflow_ref1)
>>>> lflow_table_add_lflow(,..., lflow_ref2)
>>>>
>>>> In the second call,  since the ovn_lflow already exists in the lflow
>>>> table, it will be just referenced in the lflow_ref2.
>>>> It would be a problem for sure if these lflow_refs (ref1 and ref2)
>>>> belong to different objects.
>>>>
>>>
>>> The scenario you mentioned here is more like duplicated lflows generated
>>> when processing different objects, which has different lflow_refs, which is
>>> not the scenario I was talking about.
>>> I am thinking about the case when a lflow is generated, there are multiple
>>> inputs. For example (extremely simplified just to express the idea), when
>>> generating lflows for a LB object, we also consult other objects such as
>>> datapath. Now we implemented I-P for LB, so for the LB object, for each
>>> lflow generated we will call lflow_table_add_lflow(..., lb->lflow_ref) to
>>> maintain the reference from the LB object to the lflow. However, if in the
>>> future we want to do I-P for datapaths, which means when a datapath is
>>> deleted or updated we need to quickly find the lflows that are linked to the
>>> datapath, we need to maintain the reference from the datapath to the same
>>> lflow that is generated by the same call lflow_table_add_lflow(...,
>>> lb->lflow_ref, datapath->lflow_ref). Or, do you suggest even in this case
>>> we just call the function twice, with the second call's only purpose being
>>> to add the reference to the secondary input? Hmm, it would actually work as
>>> if we are trying to add a redundant lflow, although wasting some cycles for
>>> the ovn_lflow_find(). Of course the locking issue would arise in this case.
>>> Anyway, I don't have a realistic example to prove this is a requirement of
>>> I-P for solving real production scale/performance issues. In the example I
>>> provided, datapath I-P seems to be a non-goal as far as I can tell for now.
>>> So I am not too worried.
>>>
>>>> However I do see a scenario where lflow_table_add_lflow() could be
>>>> called with a different lflow_ref object.
>>>>
>>>> Few scenarios I could think of
>>>>
>>>> 1.  While generating logical flows for a logical switch port (of type
>>>> router) 'P1',  there is a possibility that a contributor (most likely
>>>> by mistake) may call something like
>>>>
>>>> lflow_table_add_lflow(....,  P1->peer->lflow_ref)
>>>>
>>>> Even in this case,  we don't generate logical router port flows in
>>>> parallel with logical switch ports and
>>>> hence P1->peer->lflow_ref may not be modified by multiple threads.
>>>> But it's a possibility.  And I think Dumitru
>>>> is perhaps concerned about such scenarios.
>>>>
>>> In this case it does seem to be a problem, because it is possible that
>>> thread1 is processing P1 while thread2 is processing P1->peer, and now both
>>> threads can access P1->peer->lflow_ref. We should make it clear that when
>>> processing an object, we only add lflow_ref for that object.
>>>
>>>> 2.  While generating load balancer flows we may also generate logical
>>>> flows for the routers and store them in the od->lflow_ref  (If we
>>>> happen to support I-P for router ports)
>>>>      i.e  HMAP_FOR_EACH_PARALLEL (lb, lb_maps) {
>>>>                 generate_lflows_for_lb(lb, lb->lflow_ref)
>>>>                 FOR_EACH_ROUTER_OF_LB(lr, lb) {
>>>>                     generate_lb_lflows_for_router(lr, lr->lflow_ref)
>>>>                 }
>>>>            }
>>>>
>>>>      In this case, it is possible that a logical router 'lr' may be
>>>> associated with multiple lbs and this may corrupt the lr->lflow_ref.
>>>>
>>>>
>>>> We have 2 choices  (after this patch series is accepted :))
>>>>   a.  Don't do anything.  The documentation added by this patch series
>>>> (which I'll update in the next version)  is good enough.
>>>>        and whenever such scenarios arise,  the contributor will add
>>>> the thread_safe code or make sure that he/she will not use the
>>>> lflow_ref of other objects while
>>>>        generating lflows.  In the scenario 2 above,  the contributor
>>>> should not generate lflows for the routers in the lb_maps loop.
>>>> Instead should generate
>>>>        them in the lr_datapaths loop or lr_stateful loop.
>>>>
>>>>   b.  As a follow up patch,  make the lflow_table_add_lflow()
>>>> thread_safe for lflow_ref so that we are covered for the future
>>>> scenarios.
>>>>        The downside of this is that we have to maintain hash locks for
>>>> each lflow_ref and reconcile the lflow_ref hmap after all the threads
>>>> finish the lflow generation.
>>>>        It does incur some cost (both memory and cpu wise).  But it
>>>> will be only during recomputes.  Which should not be too frequent.
>>>>
>>>
>>> I'd prefer solution (a) with proper documentation to emphasize the
>>> rule/assumption.
>>>
>>
>> Would it make sense to also try to detect whether a lflow_ref is used
>> from multiple threads?  E.g., store the id of the last thread that
>> accessed it and assert that it's the same with the thread currently
>> accessing the lflow ref?
>>
>> Just to make sure we crash explicitly in case such bugs are
>> inadvertently introduced?
>>
> 
> I think it would be hard to make that distinction here because I-P is
> handled in a single thread.  How would lflow_ref.c distinguish between
> a full recompute with parallel threads and single-threaded I-P?
> 

We could distinguish between recompute and incremental updates but that
would probably add a connection between lflow-mgr and the I-P engine.
That's not so nice indeed.

Let's add a clear comment warning people about the right way to use this.

Regards,
Dumitru
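
One possible shape for the warning comment agreed on here, as it might sit
above the struct definition in lflow-mgr.c (the wording is illustrative only,
not the actual upstream text):

```c
/* Thread-safety contract (illustrative wording):
 *
 * struct lflow_ref is NOT protected by the lflow table's hash locks.
 * During a full recompute, lflows are generated by multiple threads in
 * parallel, and lflow_hash_lock() in lflow_table_add_lflow() guards only
 * the shared lflow table, not the lflow_ref supplied by the caller.
 *
 * Callers must therefore pass only the lflow_ref owned by the object the
 * current thread is processing (e.g. op->lflow_ref while processing
 * 'op').  Never pass the lflow_ref of another object, such as
 * op->peer->lflow_ref: that object may be processed concurrently by a
 * different thread, and the unsynchronized hmap/list inserts would
 * corrupt its lflow_ref. */
struct lflow_ref;
```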

> Numan
> 
>> Regards,
>> Dumitru
>>
>>> Thanks,
>>> Han
>>>
>>>>
>>>> Let me know what you think.  I'm fine with both, but personally prefer
>>>> (a) as I don't see such scenarios in the near future.
>>>>
>>>>
>>>> Thanks
>>>> Numan
>>>>
>>>>
>>>>
>>>>
>>>>> Thanks,
>>>>> Han
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Dumitru
>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Han
>>>>>>>
>>>>>>>> That might corrupt the map.
>>>>>>>>
>>>>>>>> I guess, if we don't want to cause more performance degradation we
>>>>>>>> should maintain as many lflow_ref instances as we do hash_locks,
>>> i.e.,
>>>>>>>> LFLOW_HASH_LOCK_MASK + 1.  Will that even be possible?
>>>>>>>>
>>>>>>>> Wdyt?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Dumitru
>>>>>>>>
>>>>>>>
>>>>>>
>>>
>>
>
Numan Siddique Jan. 30, 2024, 3:11 a.m. UTC | #15
On Thu, Jan 25, 2024 at 1:08 AM Han Zhou <hzhou@ovn.org> wrote:
>
> On Thu, Jan 11, 2024 at 7:32 AM <numans@ovn.org> wrote:
> >
> > From: Numan Siddique <numans@ovn.org>
> >
> > ovn_lflow_add() and other related functions/macros are now moved
> > into a separate module - lflow-mgr.c.  This module maintains a
> > table 'struct lflow_table' for the logical flows.  lflow table
> > maintains a hmap to store the logical flows.
> >
> > It also maintains the logical switch and router dp groups.
> >
> > Previous commits which added lflow incremental processing for
> > the VIF logical ports, stored the references to
> > the logical ports' lflows using 'struct lflow_ref_list'.  This
> > struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> > It is  modified a bit to store the resource to lflow references.
> >
> > Example usage of 'struct lflow_ref'.
> >
> > 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
> >
> > struct ovn_port {
> >    ...
> >    ...
> >    struct lflow_ref *lflow_ref;
> >    struct lflow_ref *stateful_lflow_ref;
>
> Hi Numan,
>
> In addition to the lock discussion with you and Dumitru, I still want to
> discuss another aspect of this patch regarding the second lflow_ref:
> stateful_lflow_ref.
> I understand that you added this to achieve finer grained I-P especially
> for router ports. I am wondering how much performance gain is from this.
> From my understanding it shouldn't matter much since each ovn_port should be
> associated with a very limited number of lflows. Could you provide more
> insight/data on this? I think it would be better to keep things simple
> (i.e. one object, one lflow_ref list) unless the benefit is obvious.
>
> I am also trying to run another performance regression test for recompute,
> since I am a little concerned about the DP refcnt hmap associated with each
> lflow. I understand it's necessary to handle the duplicated lflow cases,
> but it looks heavy and removes the opportunities for more efficient bitmap
> operations. Let me know if you have already evaluated its performance.
>

Hi Han,

I did some testing on a large scaled OVN database.
The NB database has
     - 1000 logical switches
     - 500 routers
     - 35253 load balancers
     - Most of these load balancers are associated with all the
logical switches and routers.

When I run this command for example
   - ovn-nbctl set load_balancer 0d647ff9-4e49-4570-a05d-db670873b7ef
options:foo=bar

It results in changes to all the logical routers. And the function
lflow_handle_lr_stateful_changes() is called.
If you see this function, for each changed router, it also loops
through the router ports and calls
     - build_lbnat_lflows_iterate_by_lrp()
     - build_lbnat_lflows_iterate_by_lsp()

With your suggestion we also need to call
build_lswitch_and_lrouter_iterate_by_lsp() and
build_lbnat_lflows_iterate_by_lrp().

I measured the number of lflows referenced in op->lflow_ref and
op->stateful_lflow_ref
for each logical switch port and router port pair.  Total
lflows in lflow_ref
(of all router ports and their peer switch ports) were 23000 and total
lflows in stateful_lflow_ref
were just 4000.

So with just one lflow_ref (as per the suggestion), a small update to a
load balancer like the one above would result in generating 27000
logical flows as compared to just 4000.

I think it has a considerable cost in terms of CPU.  And perhaps it
would matter more
when ovn-northd runs in a DPU.  My preference would be to have a
separate lflow ref
for stateful flows.

Thanks
Numan


> Thanks,
> Han
>
> > };
> >
> > All the logical flows generated by
> > build_lswitch_and_lrouter_iterate_by_lsp() uses the ovn_port->lflow_ref.
> >
> > All the logical flows generated by build_lsp_lflows_for_lbnats()
> > uses the ovn_port->stateful_lflow_ref.
> >
> > When handling the ovn_port changes incrementally, the lflows referenced
> > in 'struct ovn_port' are cleared and regenerated and synced to the
> > SB logical flows.
> >
> > eg.
> >
> > lflow_ref_clear_lflows(op->lflow_ref);
> > build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> > lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
> >
> > This patch does few more changes:
> >   -  Logical flows are now hashed without the logical
> >      datapaths.  If a logical flow is referenced by just one
> >      datapath, we don't rehash it.
> >
> >   -  The synthetic 'hash' column of sbrec_logical_flow now
> >      doesn't use the logical datapath.  This means that
> >      when ovn-northd is updated/upgraded and has this commit,
> >      all the logical flows with 'logical_datapath' column
> >      set will get deleted and re-added causing some disruptions.
> >
> >   -  With the commit [1] which added I-P support for logical
> >      port changes, multiple logical flows with same match 'M'
> >      and actions 'A' are generated and stored without the
> >      dp groups, which was not the case prior to
> >      that patch.
> >      One example to generate these lflows is:
> >              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> >              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> >
> >      Now with this patch we go back to the earlier way.  i.e
> >      one logical flow with logical_dp_groups set.
> >
> >   -  With this patch any updates to a logical port which
> >      doesn't result in new logical flows will not result in
> >      deletion and addition of same logical flows.
> >      Eg.
> >      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
> >      will be a no-op to the SB logical flow table.
> >
> > [1] - 8bbd678("northd: Incremental processing of VIF additions in 'lflow'
> node.")
> >
> > Signed-off-by: Numan Siddique <numans@ovn.org>
Han Zhou Jan. 30, 2024, 5:03 a.m. UTC | #16
On Mon, Jan 29, 2024 at 7:11 PM Numan Siddique <numans@ovn.org> wrote:

> On Thu, Jan 25, 2024 at 1:08 AM Han Zhou <hzhou@ovn.org> wrote:
> >
> > On Thu, Jan 11, 2024 at 7:32 AM <numans@ovn.org> wrote:
> > >
> > > From: Numan Siddique <numans@ovn.org>
> > >
> > > ovn_lflow_add() and other related functions/macros are now moved
> > > into a separate module - lflow-mgr.c.  This module maintains a
> > > table 'struct lflow_table' for the logical flows.  lflow table
> > > maintains a hmap to store the logical flows.
> > >
> > > It also maintains the logical switch and router dp groups.
> > >
> > > Previous commits which added lflow incremental processing for
> > > the VIF logical ports, stored the references to
> > > the logical ports' lflows using 'struct lflow_ref_list'.  This
> > > struct is renamed to 'struct lflow_ref' and is part of lflow-mgr.c.
> > > It is  modified a bit to store the resource to lflow references.
> > >
> > > Example usage of 'struct lflow_ref'.
> > >
> > > 'struct ovn_port' maintains 2 instances of lflow_ref.  i,e
> > >
> > > struct ovn_port {
> > >    ...
> > >    ...
> > >    struct lflow_ref *lflow_ref;
> > >    struct lflow_ref *stateful_lflow_ref;
> >
> > Hi Numan,
> >
> > In addition to the lock discussion with you and Dumitru, I still want to
> > discuss another aspect of this patch regarding the second lflow_ref:
> > stateful_lflow_ref.
> > I understand that you added this to achieve finer grained I-P especially
> > for router ports. I am wondering how much performance gain is from this.
> > From my understanding it shouldn't matter much since each ovn_port should
> be
> > associated with a very limited number of lflows. Could you provide more
> > insight/data on this? I think it would be better to keep things simple
> > (i.e. one object, one lflow_ref list) unless the benefit is obvious.
> >
> > I am also trying to run another performance regression test for
> recompute,
> > since I am a little concerned about the DP refcnt hmap associated with
> each
> > lflow. I understand it's necessary to handle the duplicated lflow cases,
> > but it looks heavy and removes the opportunities for more efficient
> bitmap
> > operations. Let me know if you have already evaluated its
> performance.
> >
>
> Hi Han,
>
> I did some testing on a large scaled OVN database.
> The NB database has
>      - 1000 logical switches
>      - 500 routers
>      - 35253 load balancers
>      - Most of these load balancers are associated with all the
> logical switches and routers.
>
> When I run this command for example
>    - ovn-nbctl set load_balancer 0d647ff9-4e49-4570-a05d-db670873b7ef
> options:foo=bar
>
> It results in changes to all the logical routers. And the function
> lflow_handle_lr_stateful_changes() is called.
> If you see this function, for each changed router, it also loops
> through the router ports and calls
>      - build_lbnat_lflows_iterate_by_lrp()
>      - build_lbnat_lflows_iterate_by_lsp()
>
> With your suggestion we also need to call
> build_lswitch_and_lrouter_iterate_by_lsp() and
> build_lbnat_lflows_iterate_by_lrp().
>
> I measured the number of lflows referenced in op->lflow_ref and
> op->stateful_lflow_ref
> for each logical switch port and router port pair.  Total
> lflows in lflow_ref
> (of all router ports and their peer switch ports) were 23000 and total
> lflows in stateful_lflow_ref
> were just 4000.
>
> So with just one lflow_ref (as per the suggestion), a small update to a
> load balancer like the one above would result in generating 27000
> logical flows as compared to just 4000.
>
> I think it has a considerable cost in terms of CPU.  And perhaps it
> would matter more
> when ovn-northd runs in a DPU.  My preference would be to have a
> separate lflow ref
> for stateful flows.
>
> Thanks
> Numan


Thanks for the data points! It makes sense to use separate lists.

Regards,
Han

>
>
>
> > Thanks,
> > Han
> >
> > > };
> > >
> > > All the logical flows generated by
> > > build_lswitch_and_lrouter_iterate_by_lsp() uses the
> ovn_port->lflow_ref.
> > >
> > > All the logical flows generated by build_lsp_lflows_for_lbnats()
> > > uses the ovn_port->stateful_lflow_ref.
> > >
> > > When handling the ovn_port changes incrementally, the lflows referenced
> > > in 'struct ovn_port' are cleared and regenerated and synced to the
> > > SB logical flows.
> > >
> > > eg.
> > >
> > > lflow_ref_clear_lflows(op->lflow_ref);
> > > build_lswitch_and_lrouter_iterate_by_lsp(op, ...);
> > > lflow_ref_sync_lflows_to_sb(op->lflow_ref, ...);
> > >
> > > This patch does few more changes:
> > >   -  Logical flows are now hashed without the logical
> > >      datapaths.  If a logical flow is referenced by just one
> > >      datapath, we don't rehash it.
> > >
> > >   -  The synthetic 'hash' column of sbrec_logical_flow now
> > >      doesn't use the logical datapath.  This means that
> > >      when ovn-northd is updated/upgraded and has this commit,
> > >      all the logical flows with 'logical_datapath' column
> > >      set will get deleted and re-added causing some disruptions.
> > >
> > >   -  With the commit [1] which added I-P support for logical
> > >      port changes, multiple logical flows with same match 'M'
> > >      and actions 'A' are generated and stored without the
> > >      dp groups, which was not the case prior to
> > >      that patch.
> > >      One example to generate these lflows is:
> > >              ovn-nbctl lsp-set-addresses sw0p1 "MAC1 IP1"
> > >              ovn-nbctl lsp-set-addresses sw1p1 "MAC1 IP1"
> > >              ovn-nbctl lsp-set-addresses sw2p1 "MAC1 IP1"
> > >
> > >      Now with this patch we go back to the earlier way.  i.e
> > >      one logical flow with logical_dp_groups set.
> > >
> > >   -  With this patch any updates to a logical port which
> > >      doesn't result in new logical flows will not result in
> > >      deletion and addition of same logical flows.
> > >      Eg.
> > >      ovn-nbctl set logical_switch_port sw0p1 external_ids:foo=bar
> > >      will be a no-op to the SB logical flow table.
> > >
> > > [1] - 8bbd678("northd: Incremental processing of VIF additions in
> 'lflow'
> > node.")
> > >
> > > Signed-off-by: Numan Siddique <numans@ovn.org>
>
Patch

diff --git a/lib/ovn-util.c b/lib/ovn-util.c
index c8b89cc216..ba29baea63 100644
--- a/lib/ovn-util.c
+++ b/lib/ovn-util.c
@@ -620,13 +620,10 @@  ovn_pipeline_from_name(const char *pipeline)
 uint32_t
 sbrec_logical_flow_hash(const struct sbrec_logical_flow *lf)
 {
-    const struct sbrec_datapath_binding *ld = lf->logical_datapath;
-    uint32_t hash = ovn_logical_flow_hash(lf->table_id,
-                                          ovn_pipeline_from_name(lf->pipeline),
-                                          lf->priority, lf->match,
-                                          lf->actions);
-
-    return ld ? ovn_logical_flow_hash_datapath(&ld->header_.uuid, hash) : hash;
+    return ovn_logical_flow_hash(lf->table_id,
+                                 ovn_pipeline_from_name(lf->pipeline),
+                                 lf->priority, lf->match,
+                                 lf->actions);
 }
 
 uint32_t
@@ -639,13 +636,6 @@  ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
     return hash_string(actions, hash);
 }
 
-uint32_t
-ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
-                               uint32_t hash)
-{
-    return hash_add(hash, uuid_hash(logical_datapath));
-}
-
 
 struct tnlid_node {
     struct hmap_node hmap_node;
diff --git a/lib/ovn-util.h b/lib/ovn-util.h
index d245d57d56..1c430027c6 100644
--- a/lib/ovn-util.h
+++ b/lib/ovn-util.h
@@ -145,8 +145,6 @@  uint32_t sbrec_logical_flow_hash(const struct sbrec_logical_flow *);
 uint32_t ovn_logical_flow_hash(uint8_t table_id, enum ovn_pipeline pipeline,
                                uint16_t priority,
                                const char *match, const char *actions);
-uint32_t ovn_logical_flow_hash_datapath(const struct uuid *logical_datapath,
-                                        uint32_t hash);
 void ovn_conn_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
                    const char *argv[] OVS_UNUSED, void *idl_);
 
diff --git a/northd/automake.mk b/northd/automake.mk
index a178541759..7c6d56a4ff 100644
--- a/northd/automake.mk
+++ b/northd/automake.mk
@@ -33,7 +33,9 @@  northd_ovn_northd_SOURCES = \
 	northd/inc-proc-northd.c \
 	northd/inc-proc-northd.h \
 	northd/ipam.c \
-	northd/ipam.h
+	northd/ipam.h \
+	northd/lflow-mgr.c \
+	northd/lflow-mgr.h
 northd_ovn_northd_LDADD = \
 	lib/libovn.la \
 	$(OVSDB_LIBDIR)/libovsdb.la \
diff --git a/northd/en-lflow.c b/northd/en-lflow.c
index eb6b2a8666..fef9a1352d 100644
--- a/northd/en-lflow.c
+++ b/northd/en-lflow.c
@@ -24,6 +24,7 @@ 
 #include "en-ls-stateful.h"
 #include "en-northd.h"
 #include "en-meters.h"
+#include "lflow-mgr.h"
 
 #include "lib/inc-proc-eng.h"
 #include "northd.h"
@@ -58,6 +59,8 @@  lflow_get_input_data(struct engine_node *node,
         EN_OVSDB_GET(engine_get_input("SB_multicast_group", node));
     lflow_input->sbrec_igmp_group_table =
         EN_OVSDB_GET(engine_get_input("SB_igmp_group", node));
+    lflow_input->sbrec_logical_dp_group_table =
+        EN_OVSDB_GET(engine_get_input("SB_logical_dp_group", node));
 
     lflow_input->sbrec_mcast_group_by_name_dp =
            engine_ovsdb_node_get_index(
@@ -90,17 +93,19 @@  void en_lflow_run(struct engine_node *node, void *data)
     struct hmap bfd_connections = HMAP_INITIALIZER(&bfd_connections);
     lflow_input.bfd_connections = &bfd_connections;
 
+    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
+
     struct lflow_data *lflow_data = data;
-    lflow_data_destroy(lflow_data);
-    lflow_data_init(lflow_data);
+    lflow_table_clear(lflow_data->lflow_table);
+    lflow_reset_northd_refs(&lflow_input);
 
-    stopwatch_start(BUILD_LFLOWS_STOPWATCH_NAME, time_msec());
     build_bfd_table(eng_ctx->ovnsb_idl_txn,
                     lflow_input.nbrec_bfd_table,
                     lflow_input.sbrec_bfd_table,
                     lflow_input.lr_ports,
                     &bfd_connections);
-    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input, &lflow_data->lflows);
+    build_lflows(eng_ctx->ovnsb_idl_txn, &lflow_input,
+                 lflow_data->lflow_table);
     bfd_cleanup_connections(lflow_input.nbrec_bfd_table,
                             &bfd_connections);
     hmap_destroy(&bfd_connections);
@@ -131,7 +136,7 @@  lflow_northd_handler(struct engine_node *node,
 
     if (!lflow_handle_northd_port_changes(eng_ctx->ovnsb_idl_txn,
                                 &northd_data->trk_data.trk_lsps,
-                                &lflow_input, &lflow_data->lflows)) {
+                                &lflow_input, lflow_data->lflow_table)) {
         return false;
     }
 
@@ -160,11 +165,14 @@  void *en_lflow_init(struct engine_node *node OVS_UNUSED,
                      struct engine_arg *arg OVS_UNUSED)
 {
     struct lflow_data *data = xmalloc(sizeof *data);
-    lflow_data_init(data);
+    data->lflow_table = lflow_table_alloc();
+    lflow_table_init(data->lflow_table);
     return data;
 }
 
-void en_lflow_cleanup(void *data)
+void en_lflow_cleanup(void *data_)
 {
-    lflow_data_destroy(data);
+    struct lflow_data *data = data_;
+    lflow_table_destroy(data->lflow_table);
+    data->lflow_table = NULL;
 }
diff --git a/northd/en-lflow.h b/northd/en-lflow.h
index 5417b2faff..f7325c56b1 100644
--- a/northd/en-lflow.h
+++ b/northd/en-lflow.h
@@ -9,6 +9,12 @@ 
 
 #include "lib/inc-proc-eng.h"
 
+struct lflow_table;
+
+struct lflow_data {
+    struct lflow_table *lflow_table;
+};
+
 void en_lflow_run(struct engine_node *node, void *data);
 void *en_lflow_init(struct engine_node *node, struct engine_arg *arg);
 void en_lflow_cleanup(void *data);
diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
index 9ce4279ee8..0e17bfe2e6 100644
--- a/northd/inc-proc-northd.c
+++ b/northd/inc-proc-northd.c
@@ -99,7 +99,8 @@  static unixctl_cb_func chassis_features_list;
     SB_NODE(bfd, "bfd") \
     SB_NODE(fdb, "fdb") \
     SB_NODE(static_mac_binding, "static_mac_binding") \
-    SB_NODE(chassis_template_var, "chassis_template_var")
+    SB_NODE(chassis_template_var, "chassis_template_var") \
+    SB_NODE(logical_dp_group, "logical_dp_group")
 
 enum sb_engine_node {
 #define SB_NODE(NAME, NAME_STR) SB_##NAME,
@@ -229,6 +230,7 @@  void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
     engine_add_input(&en_lflow, &en_sb_igmp_group, NULL);
     engine_add_input(&en_lflow, &en_lr_stateful, NULL);
     engine_add_input(&en_lflow, &en_ls_stateful, NULL);
+    engine_add_input(&en_lflow, &en_sb_logical_dp_group, NULL);
     engine_add_input(&en_lflow, &en_northd, lflow_northd_handler);
     engine_add_input(&en_lflow, &en_port_group, lflow_port_group_handler);
 
diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
new file mode 100644
index 0000000000..3cf9696f6e
--- /dev/null
+++ b/northd/lflow-mgr.c
@@ -0,0 +1,1261 @@ 
+/*
+ * Copyright (c) 2023, Red Hat, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at:
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <config.h>
+
+#include <getopt.h>
+#include <stdlib.h>
+#include <stdio.h>
+
+/* OVS includes */
+#include "include/openvswitch/hmap.h"
+#include "include/openvswitch/thread.h"
+#include "lib/bitmap.h"
+#include "lib/uuidset.h"
+#include "openvswitch/util.h"
+#include "openvswitch/vlog.h"
+#include "ovs-thread.h"
+#include "stopwatch.h"
+
+/* OVN includes */
+#include "debug.h"
+#include "lflow-mgr.h"
+#include "lib/ovn-parallel-hmap.h"
+#include "northd.h"
+
+VLOG_DEFINE_THIS_MODULE(lflow_mgr);
+
+/* Static function declarations. */
+struct ovn_lflow;
+
+static void ovn_lflow_init(struct ovn_lflow *, struct ovn_datapath *od,
+                           size_t dp_bitmap_len, enum ovn_stage stage,
+                           uint16_t priority, char *match,
+                           char *actions, char *io_port,
+                           char *ctrl_meter, char *stage_hint,
+                           const char *where);
+static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
+                                        enum ovn_stage stage,
+                                        uint16_t priority, const char *match,
+                                        const char *actions,
+                                        const char *ctrl_meter, uint32_t hash);
+static void ovn_lflow_destroy(struct lflow_table *lflow_table,
+                              struct ovn_lflow *lflow);
+static char *ovn_lflow_hint(const struct ovsdb_idl_row *row);
+
+static struct ovn_lflow *do_ovn_lflow_add(
+    struct lflow_table *, const struct ovn_datapath *,
+    const unsigned long *dp_bitmap, size_t dp_bitmap_len, uint32_t hash,
+    enum ovn_stage stage, uint16_t priority, const char *match,
+    const char *actions, const char *io_port,
+    const char *ctrl_meter,
+    const struct ovsdb_idl_row *stage_hint,
+    const char *where);
+
+
+static struct ovs_mutex *lflow_hash_lock(const struct hmap *lflow_table,
+                                         uint32_t hash);
+static void lflow_hash_unlock(struct ovs_mutex *hash_lock);
+
+static struct ovn_dp_group *ovn_dp_group_get(
+    struct hmap *dp_groups, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len);
+static struct ovn_dp_group *ovn_dp_group_create(
+    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
+    struct sbrec_logical_dp_group *, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len, bool is_switch,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths);
+static struct ovn_dp_group *ovn_dp_group_get(
+    struct hmap *dp_groups, size_t desired_n,
+    const unsigned long *desired_bitmap,
+    size_t bitmap_len);
+static struct sbrec_logical_dp_group *ovn_sb_insert_or_update_logical_dp_group(
+    struct ovsdb_idl_txn *ovnsb_txn,
+    struct sbrec_logical_dp_group *,
+    const unsigned long *dpg_bitmap,
+    const struct ovn_datapaths *);
+static struct ovn_dp_group *ovn_dp_group_find(const struct hmap *dp_groups,
+                                              const unsigned long *dpg_bitmap,
+                                              size_t bitmap_len,
+                                              uint32_t hash);
+static void inc_ovn_dp_group_ref(struct ovn_dp_group *);
+static void dec_ovn_dp_group_ref(struct hmap *dp_groups,
+                                 struct ovn_dp_group *);
+static void ovn_dp_group_add_with_reference(struct ovn_lflow *,
+                                            const struct ovn_datapath *od,
+                                            const unsigned long *dp_bitmap,
+                                            size_t bitmap_len);
+
+static bool lflow_ref_sync_lflows__(
+    struct lflow_ref  *, struct lflow_table *,
+    struct ovsdb_idl_txn *ovnsb_txn,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths,
+    bool ovn_internal_version_changed,
+    const struct sbrec_logical_flow_table *,
+    const struct sbrec_logical_dp_group_table *);
+static bool sync_lflow_to_sb(struct ovn_lflow *,
+                             struct ovsdb_idl_txn *ovnsb_txn,
+                             struct lflow_table *,
+                             const struct ovn_datapaths *ls_datapaths,
+                             const struct ovn_datapaths *lr_datapaths,
+                             bool ovn_internal_version_changed,
+                             const struct sbrec_logical_flow *sbflow,
+                             const struct sbrec_logical_dp_group_table *);
+
+extern int parallelization_state;
+extern thread_local size_t thread_lflow_counter;
+
+struct dp_refcnt;
+static struct dp_refcnt *dp_refcnt_find(struct hmap *dp_refcnts_map,
+                                        size_t dp_index);
+static void inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
+static bool dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index);
+static void ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *);
+static struct lflow_ref_node *lflow_ref_node_find(struct hmap *lflow_ref_nodes,
+                                                  struct ovn_lflow *lflow,
+                                                  uint32_t lflow_hash);
+static void lflow_ref_node_destroy(struct lflow_ref_node *,
+                                   struct hmap *lflow_ref_nodes);
+
+static bool lflow_hash_lock_initialized = false;
+/* The lflow_hash_lock is a mutex array that protects updates to the shared
+ * lflow table across threads when parallel lflow build and dp-group are both
+ * enabled. To avoid high contention between threads, a big array of mutexes
+ * is used instead of just one. This is possible because when parallel build
+ * is used we only use hmap_insert_fast() to update the hmap, which would not
+ * touch the bucket array but only the list in a single bucket. We only need to
+ * make sure that when adding lflows to the same hash bucket, the same lock is
+ * used, so that no two threads can add to the bucket at the same time.  It is
+ * ok that the same lock is used to protect multiple buckets, so a fixed sized
+ * mutex array is used instead of 1-1 mapping to the hash buckets. This
+ * simplifies the implementation while effectively reducing lock contention,
+ * because the chance that different threads contend for the same lock among
+ * the large number of locks is very low. */
+#define LFLOW_HASH_LOCK_MASK 0xFFFF
+static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
+
+/* Full thread safety analysis is not possible with hash locks, because
+ * they are taken conditionally based on the 'parallelization_state' and
+ * a flow hash.  Also, the order in which two hash locks are taken is not
+ * predictable during the static analysis.
+ *
+ * Since the order of taking two locks depends on a random hash, to avoid
+ * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
+ * of hash locks is similar to a single mutex.
+ *
+ * Using a fake mutex to partially simulate thread safety restrictions, as
+ * if it were actually a single mutex.
+ *
+ * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
+ * nature of the lock.  Unlike other attributes, it applies to the
+ * implementation and not to the interface.  So, we can define a function
+ * that acquires the lock without analysing the way it does that.
+ */
+extern struct ovs_mutex fake_hash_mutex;
+
+/* Represents a logical ovn flow (lflow).
+ *
+ * A logical flow with match 'M' and actions 'A' - L(M, A) is created
+ * when the lflow engine node (northd.c) calls lflow_table_add_lflow()
+ * (or one of the helper macros ovn_lflow_add_*).
+ *
+ * Each lflow is stored in the lflow_table (see 'struct lflow_table' below)
+ * and possibly referenced by zero or more lflow_refs
+ * (see 'struct lflow_ref' and 'struct lflow_ref_node' below).
+ *
+ */
+struct ovn_lflow {
+    struct hmap_node hmap_node;
+
+    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
+    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
+    enum ovn_stage stage;
+    uint16_t priority;
+    char *match;
+    char *actions;
+    char *io_port;
+    char *stage_hint;
+    char *ctrl_meter;
+    size_t n_ods;                /* Number of datapaths referenced by 'od' and
+                                  * 'dpg_bitmap'. */
+    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
+    const char *where;
+
+    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
+    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
+    struct hmap dp_refcnts_map; /* Maintains the number of times this ovn_lflow
+                                 * is referenced by a given datapath.
+                                 * Contains 'struct dp_refcnt' in the map. */
+};
+
+/* Logical flow table. */
+struct lflow_table {
+    struct hmap entries; /* hmap of lflows. */
+    struct hmap ls_dp_groups; /* hmap of logical switch dp groups. */
+    struct hmap lr_dp_groups; /* hmap of logical router dp groups. */
+    ssize_t max_seen_lflow_size;
+};
+
+struct lflow_table *
+lflow_table_alloc(void)
+{
+    struct lflow_table *lflow_table = xzalloc(sizeof *lflow_table);
+    lflow_table->max_seen_lflow_size = 128;
+
+    return lflow_table;
+}
+
+void
+lflow_table_init(struct lflow_table *lflow_table)
+{
+    fast_hmap_size_for(&lflow_table->entries,
+                       lflow_table->max_seen_lflow_size);
+    ovn_dp_groups_init(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_init(&lflow_table->lr_dp_groups);
+}
+
+void
+lflow_table_clear(struct lflow_table *lflow_table)
+{
+    struct ovn_lflow *lflow;
+    HMAP_FOR_EACH_POP (lflow, hmap_node, &lflow_table->entries) {
+        ovn_lflow_destroy(NULL, lflow);
+    }
+
+    ovn_dp_groups_clear(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_clear(&lflow_table->lr_dp_groups);
+}
+
+void
+lflow_table_destroy(struct lflow_table *lflow_table)
+{
+    lflow_table_clear(lflow_table);
+    hmap_destroy(&lflow_table->entries);
+    ovn_dp_groups_destroy(&lflow_table->ls_dp_groups);
+    ovn_dp_groups_destroy(&lflow_table->lr_dp_groups);
+    free(lflow_table);
+}
+
+void
+lflow_table_expand(struct lflow_table *lflow_table)
+{
+    hmap_expand(&lflow_table->entries);
+
+    if (hmap_count(&lflow_table->entries) >
+            lflow_table->max_seen_lflow_size) {
+        lflow_table->max_seen_lflow_size = hmap_count(&lflow_table->entries);
+    }
+}
+
+void
+lflow_table_set_size(struct lflow_table *lflow_table, size_t size)
+{
+    lflow_table->entries.n = size;
+}
+
+void
+lflow_table_sync_to_sb(struct lflow_table *lflow_table,
+                       struct ovsdb_idl_txn *ovnsb_txn,
+                       const struct ovn_datapaths *ls_datapaths,
+                       const struct ovn_datapaths *lr_datapaths,
+                       bool ovn_internal_version_changed,
+                       const struct sbrec_logical_flow_table *sb_flow_table,
+                       const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
+    struct hmap *lflows = &lflow_table->entries;
+    struct ovn_lflow *lflow;
+
+    /* Push changes to the Logical_Flow table to database. */
+    const struct sbrec_logical_flow *sbflow;
+    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow, sb_flow_table) {
+        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
+        struct ovn_datapath *logical_datapath_od = NULL;
+        size_t i;
+
+        /* Find one valid datapath to get the datapath type. */
+        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
+        if (dp) {
+            logical_datapath_od = ovn_datapath_from_sbrec(
+                &ls_datapaths->datapaths, &lr_datapaths->datapaths, dp);
+            if (logical_datapath_od
+                && ovn_datapath_is_stale(logical_datapath_od)) {
+                logical_datapath_od = NULL;
+            }
+        }
+        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
+            logical_datapath_od = ovn_datapath_from_sbrec(
+                &ls_datapaths->datapaths, &lr_datapaths->datapaths,
+                dp_group->datapaths[i]);
+            if (logical_datapath_od
+                && !ovn_datapath_is_stale(logical_datapath_od)) {
+                break;
+            }
+            logical_datapath_od = NULL;
+        }
+
+        if (!logical_datapath_od) {
+            /* This lflow has no valid logical datapaths. */
+            sbrec_logical_flow_delete(sbflow);
+            continue;
+        }
+
+        enum ovn_pipeline pipeline
+            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
+
+        lflow = ovn_lflow_find(
+            lflows,
+            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
+                            pipeline, sbflow->table_id),
+            sbflow->priority, sbflow->match, sbflow->actions,
+            sbflow->controller_meter, sbflow->hash);
+        if (lflow) {
+            sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                             lr_datapaths, ovn_internal_version_changed,
+                             sbflow, dpgrp_table);
+
+            hmap_remove(lflows, &lflow->hmap_node);
+            hmap_insert(&lflows_temp, &lflow->hmap_node,
+                        hmap_node_hash(&lflow->hmap_node));
+        } else {
+            sbrec_logical_flow_delete(sbflow);
+        }
+    }
+
+    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
+        sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                         lr_datapaths, ovn_internal_version_changed,
+                         NULL, dpgrp_table);
+
+        hmap_remove(lflows, &lflow->hmap_node);
+        hmap_insert(&lflows_temp, &lflow->hmap_node,
+                    hmap_node_hash(&lflow->hmap_node));
+    }
+    hmap_swap(lflows, &lflows_temp);
+    hmap_destroy(&lflows_temp);
+}
+
+/* lflow ref */
+struct lflow_ref {
+    /* Head of the list of 'struct lflow_ref_node'. */
+    struct ovs_list lflows_ref_list;
+
+    /* hmap of 'struct lflow_ref_node'.  This is used to ensure
+     * that there are no duplicates in 'lflows_ref_list' above. */
+    struct hmap lflow_ref_nodes;
+};
+
+struct lflow_ref_node {
+    /* hmap node in the hmap - 'struct lflow_ref->lflow_ref_nodes' */
+    struct hmap_node ref_node;
+
+    /* This list follows different lflows referenced by the
+     * 'struct lflow_ref'. List head is lflow_ref->lflows_ref_list. */
+    struct ovs_list lflow_list_node;
+    /* This list follows different objects that reference the same lflow. List
+     * head is ovn_lflow->referenced_by. */
+    struct ovs_list ref_list_node;
+    /* The lflow. */
+    struct ovn_lflow *lflow;
+
+    /* Index id of the datapath this lflow_ref_node belongs to. */
+    size_t dp_index;
+
+    /* Indicates if the lflow_ref_node for an lflow - L(M, A) is linked
+     * to datapath(s) or not.
+     * It is set to true when an lflow L(M, A) is referenced by an lflow ref
+     * in lflow_table_add_lflow().  It is set to false when it is unlinked
+     * from the datapath when lflow_ref_unlink_lflows() is called. */
+    bool linked;
+};
+
+struct lflow_ref *
+lflow_ref_create(void)
+{
+    struct lflow_ref *lflow_ref = xzalloc(sizeof *lflow_ref);
+    ovs_list_init(&lflow_ref->lflows_ref_list);
+    hmap_init(&lflow_ref->lflow_ref_nodes);
+    return lflow_ref;
+}
+
+void
+lflow_ref_destroy(struct lflow_ref *lflow_ref)
+{
+    struct lflow_ref_node *l;
+
+    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
+        lflow_ref_node_destroy(l, NULL);
+    }
+
+    hmap_destroy(&lflow_ref->lflow_ref_nodes);
+    free(lflow_ref);
+}
+
+void
+lflow_ref_clear(struct lflow_ref *lflow_ref)
+{
+    struct lflow_ref_node *l;
+    LIST_FOR_EACH_SAFE (l, lflow_list_node, &lflow_ref->lflows_ref_list) {
+        lflow_ref_node_destroy(l, NULL);
+    }
+
+    hmap_clear(&lflow_ref->lflow_ref_nodes);
+}
+
+/* Unlinks the lflows referenced by the 'lflow_ref'.
+ * For each lflow_ref_node (lrn) in the lflow_ref, it clears the bit for
+ * the datapath (lrn->dp_index) in the lrn->lflow's dpg bitmap once the
+ * per-datapath refcount drops to zero.
+ */
+void
+lflow_ref_unlink_lflows(struct lflow_ref *lflow_ref)
+{
+    struct lflow_ref_node *lrn;
+
+    LIST_FOR_EACH (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
+        if (dec_dp_refcnt(&lrn->lflow->dp_refcnts_map,
+                          lrn->dp_index)) {
+            bitmap_set0(lrn->lflow->dpg_bitmap, lrn->dp_index);
+        }
+
+        lrn->linked = false;
+    }
+}
+
+bool
+lflow_ref_resync_flows(struct lflow_ref *lflow_ref,
+                       struct lflow_table *lflow_table,
+                       struct ovsdb_idl_txn *ovnsb_txn,
+                       const struct ovn_datapaths *ls_datapaths,
+                       const struct ovn_datapaths *lr_datapaths,
+                       bool ovn_internal_version_changed,
+                       const struct sbrec_logical_flow_table *sbflow_table,
+                       const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    lflow_ref_unlink_lflows(lflow_ref);
+    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
+                                   ls_datapaths, lr_datapaths,
+                                   ovn_internal_version_changed, sbflow_table,
+                                   dpgrp_table);
+}
+
+bool
+lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
+                      struct lflow_table *lflow_table,
+                      struct ovsdb_idl_txn *ovnsb_txn,
+                      const struct ovn_datapaths *ls_datapaths,
+                      const struct ovn_datapaths *lr_datapaths,
+                      bool ovn_internal_version_changed,
+                      const struct sbrec_logical_flow_table *sbflow_table,
+                      const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    return lflow_ref_sync_lflows__(lflow_ref, lflow_table, ovnsb_txn,
+                                   ls_datapaths, lr_datapaths,
+                                   ovn_internal_version_changed, sbflow_table,
+                                   dpgrp_table);
+}
+
+void
+lflow_table_add_lflow(struct lflow_table *lflow_table,
+                      const struct ovn_datapath *od,
+                      const unsigned long *dp_bitmap, size_t dp_bitmap_len,
+                      enum ovn_stage stage, uint16_t priority,
+                      const char *match, const char *actions,
+                      const char *io_port, const char *ctrl_meter,
+                      const struct ovsdb_idl_row *stage_hint,
+                      const char *where,
+                      struct lflow_ref *lflow_ref)
+    OVS_EXCLUDED(fake_hash_mutex)
+{
+    struct ovs_mutex *hash_lock;
+    uint32_t hash;
+
+    ovs_assert(!od ||
+               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
+
+    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
+                                 ovn_stage_get_pipeline(stage),
+                                 priority, match,
+                                 actions);
+
+    hash_lock = lflow_hash_lock(&lflow_table->entries, hash);
+    struct ovn_lflow *lflow =
+        do_ovn_lflow_add(lflow_table, od, dp_bitmap,
+                         dp_bitmap_len, hash, stage,
+                         priority, match, actions,
+                         io_port, ctrl_meter, stage_hint, where);
+
+    if (lflow_ref) {
+        /* lflow referencing is only supported if 'od' is not NULL. */
+        ovs_assert(od);
+
+        struct lflow_ref_node *lrn =
+            lflow_ref_node_find(&lflow_ref->lflow_ref_nodes, lflow, hash);
+        if (!lrn) {
+            lrn = xzalloc(sizeof *lrn);
+            lrn->lflow = lflow;
+            lrn->dp_index = od->index;
+            ovs_list_insert(&lflow_ref->lflows_ref_list,
+                            &lrn->lflow_list_node);
+            inc_dp_refcnt(&lflow->dp_refcnts_map, lrn->dp_index);
+            ovs_list_insert(&lflow->referenced_by, &lrn->ref_list_node);
+
+            hmap_insert(&lflow_ref->lflow_ref_nodes, &lrn->ref_node, hash);
+        }
+
+        lrn->linked = true;
+    }
+
+    lflow_hash_unlock(hash_lock);
+
+}
+
+void
+lflow_table_add_lflow_default_drop(struct lflow_table *lflow_table,
+                                   const struct ovn_datapath *od,
+                                   enum ovn_stage stage,
+                                   const char *where,
+                                   struct lflow_ref *lflow_ref)
+{
+    lflow_table_add_lflow(lflow_table, od, NULL, 0, stage, 0, "1",
+                          debug_drop_action(), NULL, NULL, NULL,
+                          where, lflow_ref);
+}
+
+/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
+ * doesn't exist, creates a new one and adds it to 'dp_groups'.
+ * If 'sb_group' is provided, the function will try to re-use this group by
+ * either taking it directly or by modifying it, if it's not already in use. */
+struct ovn_dp_group *
+ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
+                           struct hmap *dp_groups,
+                           struct sbrec_logical_dp_group *sb_group,
+                           size_t desired_n,
+                           const unsigned long *desired_bitmap,
+                           size_t bitmap_len,
+                           bool is_switch,
+                           const struct ovn_datapaths *ls_datapaths,
+                           const struct ovn_datapaths *lr_datapaths)
+{
+    struct ovn_dp_group *dpg;
+
+    dpg = ovn_dp_group_get(dp_groups, desired_n, desired_bitmap, bitmap_len);
+    if (dpg) {
+        return dpg;
+    }
+
+    return ovn_dp_group_create(ovnsb_txn, dp_groups, sb_group, desired_n,
+                               desired_bitmap, bitmap_len, is_switch,
+                               ls_datapaths, lr_datapaths);
+}
+
+void
+ovn_dp_groups_clear(struct hmap *dp_groups)
+{
+    struct ovn_dp_group *dpg;
+    HMAP_FOR_EACH_POP (dpg, node, dp_groups) {
+        bitmap_free(dpg->bitmap);
+        free(dpg);
+    }
+}
+
+void
+ovn_dp_groups_destroy(struct hmap *dp_groups)
+{
+    ovn_dp_groups_clear(dp_groups);
+    hmap_destroy(dp_groups);
+}
+
+void
+lflow_hash_lock_init(void)
+{
+    if (!lflow_hash_lock_initialized) {
+        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
+            ovs_mutex_init(&lflow_hash_locks[i]);
+        }
+        lflow_hash_lock_initialized = true;
+    }
+}
+
+void
+lflow_hash_lock_destroy(void)
+{
+    if (lflow_hash_lock_initialized) {
+        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
+            ovs_mutex_destroy(&lflow_hash_locks[i]);
+        }
+    }
+    lflow_hash_lock_initialized = false;
+}
+
+/* static functions. */
+static void
+ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
+               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
+               char *match, char *actions, char *io_port, char *ctrl_meter,
+               char *stage_hint, const char *where)
+{
+    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
+    lflow->od = od;
+    lflow->stage = stage;
+    lflow->priority = priority;
+    lflow->match = match;
+    lflow->actions = actions;
+    lflow->io_port = io_port;
+    lflow->stage_hint = stage_hint;
+    lflow->ctrl_meter = ctrl_meter;
+    lflow->dpg = NULL;
+    lflow->where = where;
+    lflow->sb_uuid = UUID_ZERO;
+    hmap_init(&lflow->dp_refcnts_map);
+    ovs_list_init(&lflow->referenced_by);
+}
+
+static struct ovs_mutex *
+lflow_hash_lock(const struct hmap *lflow_table, uint32_t hash)
+    OVS_ACQUIRES(fake_hash_mutex)
+    OVS_NO_THREAD_SAFETY_ANALYSIS
+{
+    struct ovs_mutex *hash_lock = NULL;
+
+    if (parallelization_state == STATE_USE_PARALLELIZATION) {
+        hash_lock =
+            &lflow_hash_locks[hash & lflow_table->mask & LFLOW_HASH_LOCK_MASK];
+        ovs_mutex_lock(hash_lock);
+    }
+    return hash_lock;
+}
+
+static void
+lflow_hash_unlock(struct ovs_mutex *hash_lock)
+    OVS_RELEASES(fake_hash_mutex)
+    OVS_NO_THREAD_SAFETY_ANALYSIS
+{
+    if (hash_lock) {
+        ovs_mutex_unlock(hash_lock);
+    }
+}
+
+static bool
+ovn_lflow_equal(const struct ovn_lflow *a, enum ovn_stage stage,
+                uint16_t priority, const char *match,
+                const char *actions, const char *ctrl_meter)
+{
+    return (a->stage == stage
+            && a->priority == priority
+            && !strcmp(a->match, match)
+            && !strcmp(a->actions, actions)
+            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
+}
+
+static struct ovn_lflow *
+ovn_lflow_find(const struct hmap *lflows,
+               enum ovn_stage stage, uint16_t priority,
+               const char *match, const char *actions,
+               const char *ctrl_meter, uint32_t hash)
+{
+    struct ovn_lflow *lflow;
+    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
+        if (ovn_lflow_equal(lflow, stage, priority, match, actions,
+                            ctrl_meter)) {
+            return lflow;
+        }
+    }
+    return NULL;
+}
+
+static char *
+ovn_lflow_hint(const struct ovsdb_idl_row *row)
+{
+    if (!row) {
+        return NULL;
+    }
+    return xasprintf("%08x", row->uuid.parts[0]);
+}
+
+static void
+ovn_lflow_destroy(struct lflow_table *lflow_table, struct ovn_lflow *lflow)
+{
+    if (lflow) {
+        if (lflow_table) {
+            hmap_remove(&lflow_table->entries, &lflow->hmap_node);
+        }
+        bitmap_free(lflow->dpg_bitmap);
+        free(lflow->match);
+        free(lflow->actions);
+        free(lflow->io_port);
+        free(lflow->stage_hint);
+        free(lflow->ctrl_meter);
+        ovn_lflow_clear_dp_refcnts_map(lflow);
+        struct lflow_ref_node *l;
+        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
+            lflow_ref_node_destroy(l, NULL);
+        }
+        free(lflow);
+    }
+}
+
+static struct ovn_lflow *
+do_ovn_lflow_add(struct lflow_table *lflow_table,
+                 const struct ovn_datapath *od,
+                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
+                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
+                 const char *match, const char *actions,
+                 const char *io_port, const char *ctrl_meter,
+                 const struct ovsdb_idl_row *stage_hint,
+                 const char *where)
+    OVS_REQUIRES(fake_hash_mutex)
+{
+    struct ovn_lflow *old_lflow;
+    struct ovn_lflow *lflow;
+
+    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
+    ovs_assert(bitmap_len);
+
+    old_lflow = ovn_lflow_find(&lflow_table->entries, stage,
+                               priority, match, actions, ctrl_meter, hash);
+    if (old_lflow) {
+        ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
+                                        bitmap_len);
+        return old_lflow;
+    }
+
+    lflow = xzalloc(sizeof *lflow);
+    /* While adding new logical flows we're not setting a single datapath,
+     * but collecting a group.  'od' will be updated later for all flows with
+     * only one datapath in a group, so that it can be hashed correctly. */
+    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
+                   xstrdup(match), xstrdup(actions),
+                   io_port ? xstrdup(io_port) : NULL,
+                   nullable_xstrdup(ctrl_meter),
+                   ovn_lflow_hint(stage_hint), where);
+
+    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
+
+    if (parallelization_state != STATE_USE_PARALLELIZATION) {
+        hmap_insert(&lflow_table->entries, &lflow->hmap_node, hash);
+    } else {
+        hmap_insert_fast(&lflow_table->entries, &lflow->hmap_node,
+                         hash);
+        thread_lflow_counter++;
+    }
+
+    return lflow;
+}
+
+static bool
+sync_lflow_to_sb(struct ovn_lflow *lflow,
+                 struct ovsdb_idl_txn *ovnsb_txn,
+                 struct lflow_table *lflow_table,
+                 const struct ovn_datapaths *ls_datapaths,
+                 const struct ovn_datapaths *lr_datapaths,
+                 bool ovn_internal_version_changed,
+                 const struct sbrec_logical_flow *sbflow,
+                 const struct sbrec_logical_dp_group_table *sb_dpgrp_table)
+{
+    struct sbrec_logical_dp_group *sbrec_dp_group = NULL;
+    struct ovn_dp_group *pre_sync_dpg = lflow->dpg;
+    struct ovn_datapath **datapaths_array;
+    struct hmap *dp_groups;
+    size_t n_datapaths;
+    bool is_switch;
+
+    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
+        n_datapaths = ods_size(ls_datapaths);
+        datapaths_array = ls_datapaths->array;
+        dp_groups = &lflow_table->ls_dp_groups;
+        is_switch = true;
+    } else {
+        n_datapaths = ods_size(lr_datapaths);
+        datapaths_array = lr_datapaths->array;
+        dp_groups = &lflow_table->lr_dp_groups;
+        is_switch = false;
+    }
+
+    lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
+    ovs_assert(lflow->n_ods);
+
+    if (lflow->n_ods == 1) {
+        /* There is only one datapath, so it should be moved out of the
+         * group to a single 'od'. */
+        size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
+                                    n_datapaths);
+
+        lflow->od = datapaths_array[index];
+        lflow->dpg = NULL;
+    } else {
+        lflow->od = NULL;
+    }
+
+    if (!sbflow) {
+        lflow->sb_uuid = uuid_random();
+        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
+                                                        &lflow->sb_uuid);
+        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
+        uint8_t table = ovn_stage_get_table(lflow->stage);
+        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
+        sbrec_logical_flow_set_table_id(sbflow, table);
+        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
+        sbrec_logical_flow_set_match(sbflow, lflow->match);
+        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
+        if (lflow->io_port) {
+            struct smap tags = SMAP_INITIALIZER(&tags);
+            smap_add(&tags, "in_out_port", lflow->io_port);
+            sbrec_logical_flow_set_tags(sbflow, &tags);
+            smap_destroy(&tags);
+        }
+        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
+
+        /* Trim the source locator lflow->where, which looks something like
+         * "ovn/northd/northd.c:1234", down to just the part following the
+         * last slash, e.g. "northd.c:1234". */
+        const char *slash = strrchr(lflow->where, '/');
+#if _WIN32
+        const char *backslash = strrchr(lflow->where, '\\');
+        if (!slash || backslash > slash) {
+            slash = backslash;
+        }
+#endif
+        const char *where = slash ? slash + 1 : lflow->where;
+
+        struct smap ids = SMAP_INITIALIZER(&ids);
+        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
+        smap_add(&ids, "source", where);
+        if (lflow->stage_hint) {
+            smap_add(&ids, "stage-hint", lflow->stage_hint);
+        }
+        sbrec_logical_flow_set_external_ids(sbflow, &ids);
+        smap_destroy(&ids);
+
+    } else {
+        lflow->sb_uuid = sbflow->header_.uuid;
+        sbrec_dp_group = sbflow->logical_dp_group;
+
+        if (ovn_internal_version_changed) {
+            const char *stage_name = smap_get_def(&sbflow->external_ids,
+                                                  "stage-name", "");
+            const char *stage_hint = smap_get_def(&sbflow->external_ids,
+                                                  "stage-hint", "");
+            const char *source = smap_get_def(&sbflow->external_ids,
+                                              "source", "");
+
+            if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
+                sbrec_logical_flow_update_external_ids_setkey(
+                    sbflow, "stage-name", ovn_stage_to_str(lflow->stage));
+            }
+            if (lflow->stage_hint) {
+                if (strcmp(stage_hint, lflow->stage_hint)) {
+                    sbrec_logical_flow_update_external_ids_setkey(
+                        sbflow, "stage-hint", lflow->stage_hint);
+                }
+            }
+            if (lflow->where) {
+
+                /* Trim the source locator lflow->where, which looks something
+                 * like "ovn/northd/northd.c:1234", down to just the part
+                 * following the last slash, e.g. "northd.c:1234". */
+                const char *slash = strrchr(lflow->where, '/');
+#if _WIN32
+                const char *backslash = strrchr(lflow->where, '\\');
+                if (!slash || backslash > slash) {
+                    slash = backslash;
+                }
+#endif
+                const char *where = slash ? slash + 1 : lflow->where;
+
+                if (strcmp(source, where)) {
+                    sbrec_logical_flow_update_external_ids_setkey(
+                        sbflow, "source", where);
+                }
+            }
+        }
+    }
+
+    if (lflow->od) {
+        sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
+        sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
+    } else {
+        sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
+        lflow->dpg = ovn_dp_group_get(dp_groups, lflow->n_ods,
+                                      lflow->dpg_bitmap,
+                                      n_datapaths);
+        if (lflow->dpg) {
+            /* Update the dpg's sb dp_group. */
+            lflow->dpg->dp_group = sbrec_logical_dp_group_table_get_for_uuid(
+                sb_dpgrp_table,
+                &lflow->dpg->dpg_uuid);
+
+            if (!lflow->dpg->dp_group) {
+                /* Ideally this should not happen.  But it can still happen
+                 * for two reasons:
+                 * 1. There is a bug in the dp_group management.  We should
+                 *    perhaps assert here.
+                 * 2. A user or CMS may delete the Logical_DP_Group rows in
+                 *    the SB DB or clear the SB:Logical_Flow.logical_dp_group
+                 *    column (intentionally or accidentally).
+                 *
+                 * Because of (2) it is better to return false instead of
+                 * asserting, so that we recover from the inconsistent SB DB.
+                 */
+                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+                VLOG_WARN_RL(&rl, "SB Logical flow ["UUID_FMT"]'s "
+                            "logical_dp_group column is not set "
+                            "(which is unexpected).  It should have been "
+                            "referencing the dp group ["UUID_FMT"]",
+                            UUID_ARGS(&sbflow->header_.uuid),
+                            UUID_ARGS(&lflow->dpg->dpg_uuid));
+                return false;
+            }
+        } else {
+            lflow->dpg = ovn_dp_group_create(
+                                ovnsb_txn, dp_groups, sbrec_dp_group,
+                                lflow->n_ods, lflow->dpg_bitmap,
+                                n_datapaths, is_switch,
+                                ls_datapaths,
+                                lr_datapaths);
+        }
+        sbrec_logical_flow_set_logical_dp_group(sbflow,
+                                                lflow->dpg->dp_group);
+    }
+
+    if (pre_sync_dpg != lflow->dpg) {
+        if (lflow->dpg) {
+            inc_ovn_dp_group_ref(lflow->dpg);
+        }
+        if (pre_sync_dpg) {
+           dec_ovn_dp_group_ref(dp_groups, pre_sync_dpg);
+        }
+    }
+
+    return true;
+}
+
+static struct ovn_dp_group *
+ovn_dp_group_find(const struct hmap *dp_groups,
+                  const unsigned long *dpg_bitmap, size_t bitmap_len,
+                  uint32_t hash)
+{
+    struct ovn_dp_group *dpg;
+
+    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
+        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
+            return dpg;
+        }
+    }
+    return NULL;
+}
+
+static void
+inc_ovn_dp_group_ref(struct ovn_dp_group *dpg)
+{
+    dpg->refcnt++;
+}
+
+static void
+dec_ovn_dp_group_ref(struct hmap *dp_groups, struct ovn_dp_group *dpg)
+{
+    dpg->refcnt--;
+
+    if (!dpg->refcnt) {
+        hmap_remove(dp_groups, &dpg->node);
+        free(dpg->bitmap);
+        free(dpg);
+    }
+}
+
+static struct sbrec_logical_dp_group *
+ovn_sb_insert_or_update_logical_dp_group(
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            struct sbrec_logical_dp_group *dp_group,
+                            const unsigned long *dpg_bitmap,
+                            const struct ovn_datapaths *datapaths)
+{
+    const struct sbrec_datapath_binding **sb;
+    size_t n = 0, index;
+
+    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
+    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
+        sb[n++] = datapaths->array[index]->sb;
+    }
+    if (!dp_group) {
+        struct uuid dpg_uuid = uuid_random();
+        dp_group = sbrec_logical_dp_group_insert_persist_uuid(
+            ovnsb_txn, &dpg_uuid);
+    }
+    sbrec_logical_dp_group_set_datapaths(
+        dp_group, (struct sbrec_datapath_binding **) sb, n);
+    free(sb);
+
+    return dp_group;
+}
+
+static struct ovn_dp_group *
+ovn_dp_group_get(struct hmap *dp_groups, size_t desired_n,
+                 const unsigned long *desired_bitmap,
+                 size_t bitmap_len)
+{
+    uint32_t hash;
+
+    hash = hash_int(desired_n, 0);
+    return ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
+}
+
+/* Creates a new datapath group and adds it to 'dp_groups'.
+ * If 'sb_group' is provided, the function will try to re-use it, either by
+ * taking it directly or by modifying it if it's not already in use.
+ * The caller should call ovn_dp_group_get() before calling this function. */
+static struct ovn_dp_group *
+ovn_dp_group_create(struct ovsdb_idl_txn *ovnsb_txn,
+                    struct hmap *dp_groups,
+                    struct sbrec_logical_dp_group *sb_group,
+                    size_t desired_n,
+                    const unsigned long *desired_bitmap,
+                    size_t bitmap_len,
+                    bool is_switch,
+                    const struct ovn_datapaths *ls_datapaths,
+                    const struct ovn_datapaths *lr_datapaths)
+{
+    struct ovn_dp_group *dpg;
+
+    bool update_dp_group = false, can_modify = false;
+    unsigned long *dpg_bitmap;
+    size_t i, n = 0;
+
+    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
+    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
+        struct ovn_datapath *datapath_od;
+
+        datapath_od = ovn_datapath_from_sbrec(
+                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
+                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
+                        sb_group->datapaths[i]);
+        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
+            break;
+        }
+        bitmap_set1(dpg_bitmap, datapath_od->index);
+        n++;
+    }
+    if (!sb_group || i != sb_group->n_datapaths) {
+        /* No group or stale group.  Not going to be used. */
+        update_dp_group = true;
+        can_modify = true;
+    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
+        /* The group in Sb is different. */
+        update_dp_group = true;
+        /* We can modify existing group if it's not already in use. */
+        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
+                                        bitmap_len, hash_int(n, 0));
+    }
+
+    bitmap_free(dpg_bitmap);
+
+    dpg = xzalloc(sizeof *dpg);
+    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
+    if (!update_dp_group) {
+        dpg->dp_group = sb_group;
+    } else {
+        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
+                            ovnsb_txn,
+                            can_modify ? sb_group : NULL,
+                            desired_bitmap,
+                            is_switch ? ls_datapaths : lr_datapaths);
+    }
+    dpg->dpg_uuid = dpg->dp_group->header_.uuid;
+    hmap_insert(dp_groups, &dpg->node, hash_int(desired_n, 0));
+
+    return dpg;
+}
+
+/* Adds an OVN datapath to a datapath group of existing logical flow.
+ * Version to use when hash bucket locking is NOT required or the corresponding
+ * hash lock is already taken. */
+static void
+ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
+                                const struct ovn_datapath *od,
+                                const unsigned long *dp_bitmap,
+                                size_t bitmap_len)
+    OVS_REQUIRES(fake_hash_mutex)
+{
+    if (od) {
+        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
+    }
+    if (dp_bitmap) {
+        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
+    }
+}
+
+static bool
+lflow_ref_sync_lflows__(struct lflow_ref  *lflow_ref,
+                        struct lflow_table *lflow_table,
+                        struct ovsdb_idl_txn *ovnsb_txn,
+                        const struct ovn_datapaths *ls_datapaths,
+                        const struct ovn_datapaths *lr_datapaths,
+                        bool ovn_internal_version_changed,
+                        const struct sbrec_logical_flow_table *sbflow_table,
+                        const struct sbrec_logical_dp_group_table *dpgrp_table)
+{
+    struct lflow_ref_node *lrn;
+    struct ovn_lflow *lflow;
+    LIST_FOR_EACH_SAFE (lrn, lflow_list_node, &lflow_ref->lflows_ref_list) {
+        lflow = lrn->lflow;
+        const struct sbrec_logical_flow *sblflow =
+            sbrec_logical_flow_table_get_for_uuid(sbflow_table,
+                                                  &lflow->sb_uuid);
+
+        struct hmap *dp_groups = NULL;
+        size_t n_datapaths;
+        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
+            dp_groups = &lflow_table->ls_dp_groups;
+            n_datapaths = ods_size(ls_datapaths);
+        } else {
+            dp_groups = &lflow_table->lr_dp_groups;
+            n_datapaths = ods_size(lr_datapaths);
+        }
+
+        size_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
+
+        if (n_ods) {
+            if (!sync_lflow_to_sb(lflow, ovnsb_txn, lflow_table, ls_datapaths,
+                                  lr_datapaths, ovn_internal_version_changed,
+                                  sblflow, dpgrp_table)) {
+                return false;
+            }
+        }
+
+        if (!lrn->linked) {
+            lflow_ref_node_destroy(lrn, &lflow_ref->lflow_ref_nodes);
+
+            if (ovs_list_is_empty(&lflow->referenced_by)) {
+                if (lflow->dpg) {
+                    dec_ovn_dp_group_ref(dp_groups, lflow->dpg);
+                }
+                ovn_lflow_destroy(lflow_table, lflow);
+
+                if (sblflow) {
+                    sbrec_logical_flow_delete(sblflow);
+                }
+            }
+        }
+    }
+
+    return true;
+}
+
+/* Used for the datapath reference counting for a given 'struct ovn_lflow'.
+ * See the hmap 'dp_refcnts_map' in 'struct ovn_lflow'.
+ * A given lflow L(M, A) with match M and actions A can be referenced by
+ * multiple lflow_refs for the same datapath.
+ * E.g. two lflow_refs - op->lflow_ref and op->stateful_lflow_ref of a
+ * datapath can have a reference to the same lflow L(M, A).  In this case
+ * it is important to maintain this reference count so that the sync to
+ * the SB DB logical_flow is correct. */
+struct dp_refcnt {
+    struct hmap_node key_node;
+
+    size_t dp_index; /* datapath index.  Also used as hmap key. */
+    size_t refcnt;  /* reference counter. */
+};
+
+static struct dp_refcnt *
+dp_refcnt_find(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    struct dp_refcnt *dp_refcnt;
+    HMAP_FOR_EACH_WITH_HASH (dp_refcnt, key_node, dp_index, dp_refcnts_map) {
+        if (dp_refcnt->dp_index == dp_index) {
+            return dp_refcnt;
+        }
+    }
+
+    return NULL;
+}
+
+static void
+inc_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
+
+    if (!dp_refcnt) {
+        dp_refcnt = xzalloc(sizeof *dp_refcnt);
+        dp_refcnt->dp_index = dp_index;
+
+        hmap_insert(dp_refcnts_map, &dp_refcnt->key_node, dp_index);
+    }
+
+    dp_refcnt->refcnt++;
+}
+
+/* Decrements the datapath's refcnt in the 'dp_refcnts_map' if it exists
+ * and returns true if the refcnt is 0 or if the dp refcnt doesn't exist. */
+static bool
+dec_dp_refcnt(struct hmap *dp_refcnts_map, size_t dp_index)
+{
+    bool retval = true;
+
+    struct dp_refcnt *dp_refcnt = dp_refcnt_find(dp_refcnts_map, dp_index);
+    if (dp_refcnt) {
+        dp_refcnt->refcnt--;
+
+        if (!dp_refcnt->refcnt) {
+            hmap_remove(dp_refcnts_map, &dp_refcnt->key_node);
+            free(dp_refcnt);
+        } else {
+            retval = false;
+        }
+    }
+
+    return retval;
+}
+
+static void
+ovn_lflow_clear_dp_refcnts_map(struct ovn_lflow *lflow)
+{
+    struct dp_refcnt *dp_refcnt;
+
+    HMAP_FOR_EACH_POP (dp_refcnt, key_node, &lflow->dp_refcnts_map) {
+        free(dp_refcnt);
+    }
+
+    hmap_destroy(&lflow->dp_refcnts_map);
+}
+
+static struct lflow_ref_node *
+lflow_ref_node_find(struct hmap *lflow_ref_nodes, struct ovn_lflow *lflow,
+                    uint32_t lflow_hash)
+{
+    struct lflow_ref_node *lrn;
+    HMAP_FOR_EACH_WITH_HASH (lrn, ref_node, lflow_hash, lflow_ref_nodes) {
+        if (lrn->lflow == lflow) {
+            return lrn;
+        }
+    }
+
+    return NULL;
+}
+
+static void
+lflow_ref_node_destroy(struct lflow_ref_node *lrn,
+                       struct hmap *lflow_ref_nodes)
+{
+    if (lflow_ref_nodes) {
+        hmap_remove(lflow_ref_nodes, &lrn->ref_node);
+    }
+    ovs_list_remove(&lrn->lflow_list_node);
+    ovs_list_remove(&lrn->ref_list_node);
+    free(lrn);
+}
diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
new file mode 100644
index 0000000000..5a0cc28965
--- /dev/null
+++ b/northd/lflow-mgr.h
@@ -0,0 +1,186 @@ 
+/*
+ * Copyright (c) 2023, Red Hat, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at:
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef LFLOW_MGR_H
+#define LFLOW_MGR_H 1
+
+#include "include/openvswitch/hmap.h"
+#include "include/openvswitch/uuid.h"
+
+#include "northd.h"
+
+struct ovsdb_idl_txn;
+struct ovn_datapath;
+struct ovsdb_idl_row;
+
+/* lflow map which stores the logical flows. */
+struct lflow_table;
+struct lflow_table *lflow_table_alloc(void);
+void lflow_table_init(struct lflow_table *);
+void lflow_table_clear(struct lflow_table *);
+void lflow_table_destroy(struct lflow_table *);
+void lflow_table_expand(struct lflow_table *);
+void lflow_table_set_size(struct lflow_table *, size_t);
+void lflow_table_sync_to_sb(struct lflow_table *,
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            const struct ovn_datapaths *ls_datapaths,
+                            const struct ovn_datapaths *lr_datapaths,
+                            bool ovn_internal_version_changed,
+                            const struct sbrec_logical_flow_table *,
+                            const struct sbrec_logical_dp_group_table *);
+
+void lflow_hash_lock_init(void);
+void lflow_hash_lock_destroy(void);
+
+/* lflow mgr manages logical flows for a resource (like logical port
+ * or datapath). */
+struct lflow_ref;
+
+struct lflow_ref *lflow_ref_create(void);
+void lflow_ref_destroy(struct lflow_ref *);
+void lflow_ref_clear(struct lflow_ref *lflow_ref);
+void lflow_ref_unlink_lflows(struct lflow_ref *);
+bool lflow_ref_resync_flows(struct lflow_ref *,
+                            struct lflow_table *lflow_table,
+                            struct ovsdb_idl_txn *ovnsb_txn,
+                            const struct ovn_datapaths *ls_datapaths,
+                            const struct ovn_datapaths *lr_datapaths,
+                            bool ovn_internal_version_changed,
+                            const struct sbrec_logical_flow_table *,
+                            const struct sbrec_logical_dp_group_table *);
+bool lflow_ref_sync_lflows(struct lflow_ref *,
+                           struct lflow_table *lflow_table,
+                           struct ovsdb_idl_txn *ovnsb_txn,
+                           const struct ovn_datapaths *ls_datapaths,
+                           const struct ovn_datapaths *lr_datapaths,
+                           bool ovn_internal_version_changed,
+                           const struct sbrec_logical_flow_table *,
+                           const struct sbrec_logical_dp_group_table *);
+
+
+void lflow_table_add_lflow(struct lflow_table *, const struct ovn_datapath *,
+                           const unsigned long *dp_bitmap,
+                           size_t dp_bitmap_len, enum ovn_stage stage,
+                           uint16_t priority, const char *match,
+                           const char *actions, const char *io_port,
+                           const char *ctrl_meter,
+                           const struct ovsdb_idl_row *stage_hint,
+                           const char *where, struct lflow_ref *);
+void lflow_table_add_lflow_default_drop(struct lflow_table *,
+                                        const struct ovn_datapath *,
+                                        enum ovn_stage stage,
+                                        const char *where,
+                                        struct lflow_ref *);
+
+/* Adds a row with the specified contents to the Logical_Flow table. */
+#define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
+                                  STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                            MATCH, ACTIONS, IN_OUT_PORT, \
+                                            CTRL_METER, STAGE_HINT, LFLOW_REF)\
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, CTRL_METER, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
+
+#define ovn_lflow_add_with_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                ACTIONS, STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, STAGE_HINT,  \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                          MATCH, ACTIONS, STAGE_HINT, \
+                                          LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, STAGE_HINT,  \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
+
+#define ovn_lflow_add_with_dp_group(LFLOW_TABLE, DP_BITMAP, DP_BITMAP_LEN, \
+                                    STAGE, PRIORITY, MATCH, ACTIONS, \
+                                    STAGE_HINT) \
+    lflow_table_add_lflow(LFLOW_TABLE, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
+                          PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_default_drop(LFLOW_TABLE, OD, STAGE)                    \
+    lflow_table_add_lflow_default_drop(LFLOW_TABLE, OD, STAGE, \
+                                       OVS_SOURCE_LOCATOR, NULL)
+
+
+/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
+ * the IN_OUT_PORT argument, which names the lport that appears in the MATCH.
+ * This helps ovn-controller bypass parsing of the lflow when the lport is
+ * not local to the chassis.  The criteria for the lport passed via this
+ * argument:
+ *
+ * - For the ingress pipeline, the lport that is used to match "inport".
+ * - For the egress pipeline, the lport that is used to match "outport".
+ *
+ * For now, only LS pipelines should use this macro.  */
+#define ovn_lflow_add_with_lport_and_hint(LFLOW_TABLE, OD, STAGE, PRIORITY, \
+                                          MATCH, ACTIONS, IN_OUT_PORT, \
+                                          STAGE_HINT, LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, IN_OUT_PORT, NULL, STAGE_HINT, \
+                          OVS_SOURCE_LOCATOR, LFLOW_REF)
+
+#define ovn_lflow_add(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, NULL)
+
+#define ovn_lflow_add_with_lflow_ref(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                                     ACTIONS, LFLOW_REF) \
+    lflow_table_add_lflow(LFLOW_TABLE, OD, NULL, 0, STAGE, PRIORITY, MATCH, \
+                          ACTIONS, NULL, NULL, NULL, OVS_SOURCE_LOCATOR, \
+                          LFLOW_REF)
+
+#define ovn_lflow_metered(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
+                          CTRL_METER) \
+    ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY, MATCH, \
+                              ACTIONS, NULL, CTRL_METER, NULL)
+
+struct sbrec_logical_dp_group;
+
+struct ovn_dp_group {
+    unsigned long *bitmap;
+    const struct sbrec_logical_dp_group *dp_group;
+    struct uuid dpg_uuid;
+    struct hmap_node node;
+    size_t refcnt;
+};
+
+static inline void
+ovn_dp_groups_init(struct hmap *dp_groups)
+{
+    hmap_init(dp_groups);
+}
+
+void ovn_dp_groups_clear(struct hmap *dp_groups);
+void ovn_dp_groups_destroy(struct hmap *dp_groups);
+struct ovn_dp_group *ovn_dp_group_get_or_create(
+    struct ovsdb_idl_txn *ovnsb_txn, struct hmap *dp_groups,
+    struct sbrec_logical_dp_group *sb_group,
+    size_t desired_n, const unsigned long *desired_bitmap,
+    size_t bitmap_len, bool is_switch,
+    const struct ovn_datapaths *ls_datapaths,
+    const struct ovn_datapaths *lr_datapaths);
+
+#endif /* LFLOW_MGR_H */
\ No newline at end of file
diff --git a/northd/northd.c b/northd/northd.c
index 68f2b0bab4..f41df83c40 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -41,6 +41,7 @@ 
 #include "lib/ovn-sb-idl.h"
 #include "lib/ovn-util.h"
 #include "lib/lb.h"
+#include "lflow-mgr.h"
 #include "memory.h"
 #include "northd.h"
 #include "en-lb-data.h"
@@ -68,7 +69,7 @@ 
 VLOG_DEFINE_THIS_MODULE(northd);
 
 static bool controller_event_en;
-static bool lflow_hash_lock_initialized = false;
+
 
 static bool check_lsp_is_up;
 
@@ -97,116 +98,6 @@  static bool default_acl_drop;
 
 #define MAX_OVN_TAGS 4096
 
-/* Pipeline stages. */
-
-/* The two purposes for which ovn-northd uses OVN logical datapaths. */
-enum ovn_datapath_type {
-    DP_SWITCH,                  /* OVN logical switch. */
-    DP_ROUTER                   /* OVN logical router. */
-};
-
-/* Returns an "enum ovn_stage" built from the arguments.
- *
- * (It's better to use ovn_stage_build() for type-safety reasons, but inline
- * functions can't be used in enums or switch cases.) */
-#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
-    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
-
-/* A stage within an OVN logical switch or router.
- *
- * An "enum ovn_stage" indicates whether the stage is part of a logical switch
- * or router, whether the stage is part of the ingress or egress pipeline, and
- * the table within that pipeline.  The first three components are combined to
- * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
- * S_ROUTER_OUT_DELIVERY. */
-enum ovn_stage {
-#define PIPELINE_STAGES                                                   \
-    /* Logical switch ingress stages. */                                  \
-    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
-    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
-    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
-    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
-    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
-    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
-    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
-    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
-    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
-    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
-    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
-    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
-                   "ls_in_acl_after_lb_eval")  \
-    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
-                   "ls_in_acl_after_lb_action")  \
-    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
-    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
-    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
-    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
-    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
-    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
-    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
-    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
-    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
-                                                                          \
-    /* Logical switch egress stages. */                                   \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
-    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
-    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
-    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
-    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
-    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
-    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
-    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
-                                                                      \
-    /* Logical router ingress stages. */                              \
-    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
-    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
-    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
-    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
-    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
-    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
-    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
-    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
-    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
-    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
-    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
-    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
-    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
-    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
-    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
-    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
-    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
-    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
-    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
-    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
-                                                                      \
-    /* Logical router egress stages. */                               \
-    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
-                   "lr_out_chk_dnat_local")                                  \
-    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
-    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
-    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
-    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
-    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
-    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
-
-#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
-    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
-        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
-    PIPELINE_STAGES
-#undef PIPELINE_STAGE
-};
 
 /* Due to various hard-coded priorities need to implement ACLs, the
  * northbound database supports a smaller range of ACL priorities than
@@ -391,51 +282,9 @@  enum ovn_stage {
 #define ROUTE_PRIO_OFFSET_STATIC 1
 #define ROUTE_PRIO_OFFSET_CONNECTED 2
 
-/* Returns an "enum ovn_stage" built from the arguments. */
-static enum ovn_stage
-ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
-                uint8_t table)
-{
-    return OVN_STAGE_BUILD(dp_type, pipeline, table);
-}
-
-/* Returns the pipeline to which 'stage' belongs. */
-static enum ovn_pipeline
-ovn_stage_get_pipeline(enum ovn_stage stage)
-{
-    return (stage >> 8) & 1;
-}
-
-/* Returns the pipeline name to which 'stage' belongs. */
-static const char *
-ovn_stage_get_pipeline_name(enum ovn_stage stage)
-{
-    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
-}
-
-/* Returns the table to which 'stage' belongs. */
-static uint8_t
-ovn_stage_get_table(enum ovn_stage stage)
-{
-    return stage & 0xff;
-}
-
-/* Returns a string name for 'stage'. */
-static const char *
-ovn_stage_to_str(enum ovn_stage stage)
-{
-    switch (stage) {
-#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
-        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
-    PIPELINE_STAGES
-#undef PIPELINE_STAGE
-        default: return "<unknown>";
-    }
-}
-
 /* Returns the type of the datapath to which a flow with the given 'stage' may
  * be added. */
-static enum ovn_datapath_type
+enum ovn_datapath_type
 ovn_stage_to_datapath_type(enum ovn_stage stage)
 {
     switch (stage) {
@@ -680,13 +529,6 @@  ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
     }
 }
 
-/* Returns 'od''s datapath type. */
-static enum ovn_datapath_type
-ovn_datapath_get_type(const struct ovn_datapath *od)
-{
-    return od->nbs ? DP_SWITCH : DP_ROUTER;
-}
-
 static struct ovn_datapath *
 ovn_datapath_find_(const struct hmap *datapaths,
                    const struct uuid *uuid)
@@ -722,13 +564,7 @@  ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key)
     return NULL;
 }
 
-static bool
-ovn_datapath_is_stale(const struct ovn_datapath *od)
-{
-    return !od->nbr && !od->nbs;
-}
-
-static struct ovn_datapath *
+struct ovn_datapath *
 ovn_datapath_from_sbrec(const struct hmap *ls_datapaths,
                         const struct hmap *lr_datapaths,
                         const struct sbrec_datapath_binding *sb)
@@ -1297,19 +1133,6 @@  struct ovn_port_routable_addresses {
     size_t n_addrs;
 };
 
-/* A node that maintains link between an object (such as an ovn_port) and
- * a lflow. */
-struct lflow_ref_node {
-    /* This list follows different lflows referenced by the same object. List
-     * head is, for example, ovn_port->lflows.  */
-    struct ovs_list lflow_list_node;
-    /* This list follows different objects that reference the same lflow. List
-     * head is ovn_lflow->referenced_by. */
-    struct ovs_list ref_list_node;
-    /* The lflow. */
-    struct ovn_lflow *lflow;
-};
-
 static bool lsp_can_be_inc_processed(const struct nbrec_logical_switch_port *);
 
 static bool
@@ -1389,6 +1212,8 @@  ovn_port_set_nb(struct ovn_port *op,
     init_mcast_port_info(&op->mcast_info, op->nbsp, op->nbrp);
 }
 
+static bool lsp_is_router(const struct nbrec_logical_switch_port *nbsp);
+
 static struct ovn_port *
 ovn_port_create(struct hmap *ports, const char *key,
                 const struct nbrec_logical_switch_port *nbsp,
@@ -1407,12 +1232,14 @@  ovn_port_create(struct hmap *ports, const char *key,
     op->l3dgw_port = op->cr_port = NULL;
     hmap_insert(ports, &op->key_node, hash_string(op->key, 0));
 
-    ovs_list_init(&op->lflows);
+    op->lflow_ref = lflow_ref_create();
+    op->stateful_lflow_ref = lflow_ref_create();
+
     return op;
 }
 
 static void
-ovn_port_destroy_orphan(struct ovn_port *port)
+ovn_port_cleanup(struct ovn_port *port)
 {
     if (port->tunnel_key) {
         ovs_assert(port->od);
@@ -1422,6 +1249,8 @@  ovn_port_destroy_orphan(struct ovn_port *port)
         destroy_lport_addresses(&port->lsp_addrs[i]);
     }
     free(port->lsp_addrs);
+    port->n_lsp_addrs = 0;
+    port->lsp_addrs = NULL;
 
     if (port->peer) {
         port->peer->peer = NULL;
@@ -1431,18 +1260,22 @@  ovn_port_destroy_orphan(struct ovn_port *port)
         destroy_lport_addresses(&port->ps_addrs[i]);
     }
     free(port->ps_addrs);
+    port->ps_addrs = NULL;
+    port->n_ps_addrs = 0;
 
     destroy_lport_addresses(&port->lrp_networks);
     destroy_lport_addresses(&port->proxy_arp_addrs);
+}
+
+static void
+ovn_port_destroy_orphan(struct ovn_port *port)
+{
+    ovn_port_cleanup(port);
     free(port->json_key);
     free(port->key);
+    lflow_ref_destroy(port->lflow_ref);
+    lflow_ref_destroy(port->stateful_lflow_ref);
 
-    struct lflow_ref_node *l;
-    LIST_FOR_EACH_SAFE (l, lflow_list_node, &port->lflows) {
-        ovs_list_remove(&l->lflow_list_node);
-        ovs_list_remove(&l->ref_list_node);
-        free(l);
-    }
     free(port);
 }
 
@@ -3911,124 +3744,6 @@  build_lb_port_related_data(
     build_lswitch_lbs_from_lrouter(lr_datapaths, lb_dps_map, lb_group_dps_map);
 }
 
-
-struct ovn_dp_group {
-    unsigned long *bitmap;
-    struct sbrec_logical_dp_group *dp_group;
-    struct hmap_node node;
-};
-
-static struct ovn_dp_group *
-ovn_dp_group_find(const struct hmap *dp_groups,
-                  const unsigned long *dpg_bitmap, size_t bitmap_len,
-                  uint32_t hash)
-{
-    struct ovn_dp_group *dpg;
-
-    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
-        if (bitmap_equal(dpg->bitmap, dpg_bitmap, bitmap_len)) {
-            return dpg;
-        }
-    }
-    return NULL;
-}
-
-static struct sbrec_logical_dp_group *
-ovn_sb_insert_or_update_logical_dp_group(
-                            struct ovsdb_idl_txn *ovnsb_txn,
-                            struct sbrec_logical_dp_group *dp_group,
-                            const unsigned long *dpg_bitmap,
-                            const struct ovn_datapaths *datapaths)
-{
-    const struct sbrec_datapath_binding **sb;
-    size_t n = 0, index;
-
-    sb = xmalloc(bitmap_count1(dpg_bitmap, ods_size(datapaths)) * sizeof *sb);
-    BITMAP_FOR_EACH_1 (index, ods_size(datapaths), dpg_bitmap) {
-        sb[n++] = datapaths->array[index]->sb;
-    }
-    if (!dp_group) {
-        dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
-    }
-    sbrec_logical_dp_group_set_datapaths(
-        dp_group, (struct sbrec_datapath_binding **) sb, n);
-    free(sb);
-
-    return dp_group;
-}
-
-/* Given a desired bitmap, finds a datapath group in 'dp_groups'.  If it
- * doesn't exist, creates a new one and adds it to 'dp_groups'.
- * If 'sb_group' is provided, the function will try to re-use this group by
- * either taking it directly, or by modifying it, if it's not already in use. */
-static struct ovn_dp_group *
-ovn_dp_group_get_or_create(struct ovsdb_idl_txn *ovnsb_txn,
-                           struct hmap *dp_groups,
-                           struct sbrec_logical_dp_group *sb_group,
-                           size_t desired_n,
-                           const unsigned long *desired_bitmap,
-                           size_t bitmap_len,
-                           bool is_switch,
-                           const struct ovn_datapaths *ls_datapaths,
-                           const struct ovn_datapaths *lr_datapaths)
-{
-    struct ovn_dp_group *dpg;
-    uint32_t hash;
-
-    hash = hash_int(desired_n, 0);
-    dpg = ovn_dp_group_find(dp_groups, desired_bitmap, bitmap_len, hash);
-    if (dpg) {
-        return dpg;
-    }
-
-    bool update_dp_group = false, can_modify = false;
-    unsigned long *dpg_bitmap;
-    size_t i, n = 0;
-
-    dpg_bitmap = sb_group ? bitmap_allocate(bitmap_len) : NULL;
-    for (i = 0; sb_group && i < sb_group->n_datapaths; i++) {
-        struct ovn_datapath *datapath_od;
-
-        datapath_od = ovn_datapath_from_sbrec(
-                        ls_datapaths ? &ls_datapaths->datapaths : NULL,
-                        lr_datapaths ? &lr_datapaths->datapaths : NULL,
-                        sb_group->datapaths[i]);
-        if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
-            break;
-        }
-        bitmap_set1(dpg_bitmap, datapath_od->index);
-        n++;
-    }
-    if (!sb_group || i != sb_group->n_datapaths) {
-        /* No group or stale group.  Not going to be used. */
-        update_dp_group = true;
-        can_modify = true;
-    } else if (!bitmap_equal(dpg_bitmap, desired_bitmap, bitmap_len)) {
-        /* The group in Sb is different. */
-        update_dp_group = true;
-        /* We can modify existing group if it's not already in use. */
-        can_modify = !ovn_dp_group_find(dp_groups, dpg_bitmap,
-                                        bitmap_len, hash_int(n, 0));
-    }
-
-    bitmap_free(dpg_bitmap);
-
-    dpg = xzalloc(sizeof *dpg);
-    dpg->bitmap = bitmap_clone(desired_bitmap, bitmap_len);
-    if (!update_dp_group) {
-        dpg->dp_group = sb_group;
-    } else {
-        dpg->dp_group = ovn_sb_insert_or_update_logical_dp_group(
-                            ovnsb_txn,
-                            can_modify ? sb_group : NULL,
-                            desired_bitmap,
-                            is_switch ? ls_datapaths : lr_datapaths);
-    }
-    hmap_insert(dp_groups, &dpg->node, hash);
-
-    return dpg;
-}
-
 struct sb_lb {
     struct hmap_node hmap_node;
 
@@ -4846,28 +4561,20 @@  ovn_port_find_in_datapath(struct ovn_datapath *od,
     return NULL;
 }
 
-static struct ovn_port *
-ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
-               const char *key, const struct nbrec_logical_switch_port *nbsp,
-               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
-               struct ovs_list *lflows,
-               const struct sbrec_mirror_table *sbrec_mirror_table,
-               const struct sbrec_chassis_table *sbrec_chassis_table,
-               struct ovsdb_idl_index *sbrec_chassis_by_name,
-               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
+static bool
+ls_port_init(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
+             struct hmap *ls_ports, struct ovn_datapath *od,
+             const struct sbrec_port_binding *sb,
+             const struct sbrec_mirror_table *sbrec_mirror_table,
+             const struct sbrec_chassis_table *sbrec_chassis_table,
+             struct ovsdb_idl_index *sbrec_chassis_by_name,
+             struct ovsdb_idl_index *sbrec_chassis_by_hostname)
 {
-    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
-                                          NULL);
-    parse_lsp_addrs(op);
     op->od = od;
-    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
-    if (lflows) {
-        ovs_list_splice(&op->lflows, lflows->next, lflows);
-    }
-
+    parse_lsp_addrs(op);
     /* Assign explicitly requested tunnel ids first. */
     if (!ovn_port_assign_requested_tnl_id(sbrec_chassis_table, op)) {
-        return NULL;
+        return false;
     }
     if (sb) {
         op->sb = sb;
@@ -4884,14 +4591,57 @@  ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
     }
     /* Assign new tunnel ids where needed. */
     if (!ovn_port_allocate_key(sbrec_chassis_table, ls_ports, op)) {
-        return NULL;
+        return false;
     }
     ovn_port_update_sbrec(ovnsb_txn, sbrec_chassis_by_name,
                           sbrec_chassis_by_hostname, NULL, sbrec_mirror_table,
                           op, NULL, NULL);
+    return true;
+}
+
+static struct ovn_port *
+ls_port_create(struct ovsdb_idl_txn *ovnsb_txn, struct hmap *ls_ports,
+               const char *key, const struct nbrec_logical_switch_port *nbsp,
+               struct ovn_datapath *od, const struct sbrec_port_binding *sb,
+               const struct sbrec_mirror_table *sbrec_mirror_table,
+               const struct sbrec_chassis_table *sbrec_chassis_table,
+               struct ovsdb_idl_index *sbrec_chassis_by_name,
+               struct ovsdb_idl_index *sbrec_chassis_by_hostname)
+{
+    struct ovn_port *op = ovn_port_create(ls_ports, key, nbsp, NULL,
+                                          NULL);
+    hmap_insert(&od->ports, &op->dp_node, hmap_node_hash(&op->key_node));
+    if (!ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
+                      sbrec_mirror_table, sbrec_chassis_table,
+                      sbrec_chassis_by_name, sbrec_chassis_by_hostname)) {
+        ovn_port_destroy(ls_ports, op);
+        return NULL;
+    }
+
     return op;
 }
 
+static bool
+ls_port_reinit(struct ovn_port *op, struct ovsdb_idl_txn *ovnsb_txn,
+                struct hmap *ls_ports,
+                const struct nbrec_logical_switch_port *nbsp,
+                const struct nbrec_logical_router_port *nbrp,
+                struct ovn_datapath *od,
+                const struct sbrec_port_binding *sb,
+                const struct sbrec_mirror_table *sbrec_mirror_table,
+                const struct sbrec_chassis_table *sbrec_chassis_table,
+                struct ovsdb_idl_index *sbrec_chassis_by_name,
+                struct ovsdb_idl_index *sbrec_chassis_by_hostname)
+{
+    ovn_port_cleanup(op);
+    op->sb = sb;
+    ovn_port_set_nb(op, nbsp, nbrp);
+    op->l3dgw_port = op->cr_port = NULL;
+    return ls_port_init(op, ovnsb_txn, ls_ports, od, sb,
+                        sbrec_mirror_table, sbrec_chassis_table,
+                        sbrec_chassis_by_name, sbrec_chassis_by_hostname);
+}
+
 /* Returns true if the logical switch has changes which can be
  * incrementally handled.
  * Presently supports i-p for the below changes:
@@ -5031,7 +4781,7 @@  ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
                 goto fail;
             }
             op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
-                                new_nbsp->name, new_nbsp, od, NULL, NULL,
+                                new_nbsp->name, new_nbsp, od, NULL,
                                 ni->sbrec_mirror_table,
                                 ni->sbrec_chassis_table,
                                 ni->sbrec_chassis_by_name,
@@ -5062,17 +4812,12 @@  ls_handle_lsp_changes(struct ovsdb_idl_txn *ovnsb_idl_txn,
                 op->visited = true;
                 continue;
             }
-            struct ovs_list lflows = OVS_LIST_INITIALIZER(&lflows);
-            ovs_list_splice(&lflows, op->lflows.next, &op->lflows);
-            ovn_port_destroy(&nd->ls_ports, op);
-            op = ls_port_create(ovnsb_idl_txn, &nd->ls_ports,
-                                new_nbsp->name, new_nbsp, od, sb, &lflows,
-                                ni->sbrec_mirror_table,
+            if (!ls_port_reinit(op, ovnsb_idl_txn, &nd->ls_ports,
+                                new_nbsp, NULL,
+                                od, sb, ni->sbrec_mirror_table,
                                 ni->sbrec_chassis_table,
                                 ni->sbrec_chassis_by_name,
-                                ni->sbrec_chassis_by_hostname);
-            ovs_assert(ovs_list_is_empty(&lflows));
-            if (!op) {
+                                ni->sbrec_chassis_by_hostname)) {
                 goto fail;
             }
             add_op_to_northd_tracked_ports(&trk_lsps->updated, op);
@@ -6017,170 +5762,7 @@  ovn_igmp_group_destroy(struct hmap *igmp_groups,
  * function of most of the northbound database.
  */
 
-struct ovn_lflow {
-    struct hmap_node hmap_node;
-    struct ovs_list list_node;   /* For temporary list of lflows. Don't remove
-                                    at destroy. */
-
-    struct ovn_datapath *od;     /* 'logical_datapath' in SB schema.  */
-    unsigned long *dpg_bitmap;   /* Bitmap of all datapaths by their 'index'.*/
-    enum ovn_stage stage;
-    uint16_t priority;
-    char *match;
-    char *actions;
-    char *io_port;
-    char *stage_hint;
-    char *ctrl_meter;
-    size_t n_ods;                /* Number of datapaths referenced by 'od' and
-                                  * 'dpg_bitmap'. */
-    struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group. */
-
-    struct ovs_list referenced_by;  /* List of struct lflow_ref_node. */
-    const char *where;
-
-    struct uuid sb_uuid;         /* SB DB row uuid, specified by northd. */
-};
-
-static void ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow);
-static struct ovn_lflow *ovn_lflow_find(const struct hmap *lflows,
-                                        const struct ovn_datapath *od,
-                                        enum ovn_stage stage,
-                                        uint16_t priority, const char *match,
-                                        const char *actions,
-                                        const char *ctrl_meter, uint32_t hash);
-
-static char *
-ovn_lflow_hint(const struct ovsdb_idl_row *row)
-{
-    if (!row) {
-        return NULL;
-    }
-    return xasprintf("%08x", row->uuid.parts[0]);
-}
-
-static bool
-ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
-                enum ovn_stage stage, uint16_t priority, const char *match,
-                const char *actions, const char *ctrl_meter)
-{
-    return (a->od == od
-            && a->stage == stage
-            && a->priority == priority
-            && !strcmp(a->match, match)
-            && !strcmp(a->actions, actions)
-            && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
-}
-
-enum {
-    STATE_NULL,               /* parallelization is off */
-    STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
-    STATE_USE_PARALLELIZATION /* parallelization is on */
-};
-static int parallelization_state = STATE_NULL;
-
-static void
-ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
-               size_t dp_bitmap_len, enum ovn_stage stage, uint16_t priority,
-               char *match, char *actions, char *io_port, char *ctrl_meter,
-               char *stage_hint, const char *where)
-{
-    ovs_list_init(&lflow->list_node);
-    ovs_list_init(&lflow->referenced_by);
-    lflow->dpg_bitmap = bitmap_allocate(dp_bitmap_len);
-    lflow->od = od;
-    lflow->stage = stage;
-    lflow->priority = priority;
-    lflow->match = match;
-    lflow->actions = actions;
-    lflow->io_port = io_port;
-    lflow->stage_hint = stage_hint;
-    lflow->ctrl_meter = ctrl_meter;
-    lflow->dpg = NULL;
-    lflow->where = where;
-    lflow->sb_uuid = UUID_ZERO;
-}
-
-/* The lflow_hash_lock is a mutex array that protects updates to the shared
- * lflow table across threads when parallel lflow build and dp-group are both
- * enabled. To avoid high contention between threads, a big array of mutexes
- * are used instead of just one. This is possible because when parallel build
- * is used we only use hmap_insert_fast() to update the hmap, which would not
- * touch the bucket array but only the list in a single bucket. We only need to
- * make sure that when adding lflows to the same hash bucket, the same lock is
- * used, so that no two threads can add to the bucket at the same time.  It is
- * ok that the same lock is used to protect multiple buckets, so a fixed sized
- * mutex array is used instead of 1-1 mapping to the hash buckets. This
- * simplifies the implementation while effectively reducing lock contention,
- * because the chance of different threads contending for the same lock among
- * the large number of locks is very low. */
-#define LFLOW_HASH_LOCK_MASK 0xFFFF
-static struct ovs_mutex lflow_hash_locks[LFLOW_HASH_LOCK_MASK + 1];
-
-static void
-lflow_hash_lock_init(void)
-{
-    if (!lflow_hash_lock_initialized) {
-        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
-            ovs_mutex_init(&lflow_hash_locks[i]);
-        }
-        lflow_hash_lock_initialized = true;
-    }
-}
-
-static void
-lflow_hash_lock_destroy(void)
-{
-    if (lflow_hash_lock_initialized) {
-        for (size_t i = 0; i < LFLOW_HASH_LOCK_MASK + 1; i++) {
-            ovs_mutex_destroy(&lflow_hash_locks[i]);
-        }
-    }
-    lflow_hash_lock_initialized = false;
-}
-
-/* Full thread safety analysis is not possible with hash locks, because
- * they are taken conditionally based on the 'parallelization_state' and
- * a flow hash.  Also, the order in which two hash locks are taken is not
- * predictable during the static analysis.
- *
- * Since the order of taking two locks depends on a random hash, to avoid
- * ABBA deadlocks, no two hash locks can be nested.  In that sense an array
- * of hash locks is similar to a single mutex.
- *
- * Using a fake mutex to partially simulate thread safety restrictions, as
- * if it were actually a single mutex.
- *
- * OVS_NO_THREAD_SAFETY_ANALYSIS below allows us to ignore conditional
- * nature of the lock.  Unlike other attributes, it applies to the
- * implementation and not to the interface.  So, we can define a function
- * that acquires the lock without analysing the way it does that.
- */
-extern struct ovs_mutex fake_hash_mutex;
-
-static struct ovs_mutex *
-lflow_hash_lock(const struct hmap *lflow_map, uint32_t hash)
-    OVS_ACQUIRES(fake_hash_mutex)
-    OVS_NO_THREAD_SAFETY_ANALYSIS
-{
-    struct ovs_mutex *hash_lock = NULL;
-
-    if (parallelization_state == STATE_USE_PARALLELIZATION) {
-        hash_lock =
-            &lflow_hash_locks[hash & lflow_map->mask & LFLOW_HASH_LOCK_MASK];
-        ovs_mutex_lock(hash_lock);
-    }
-    return hash_lock;
-}
-
-static void
-lflow_hash_unlock(struct ovs_mutex *hash_lock)
-    OVS_RELEASES(fake_hash_mutex)
-    OVS_NO_THREAD_SAFETY_ANALYSIS
-{
-    if (hash_lock) {
-        ovs_mutex_unlock(hash_lock);
-    }
-}
+int parallelization_state = STATE_NULL;
 
 
 /* This thread-local var is used for parallel lflow building when dp-groups is
@@ -6193,240 +5775,7 @@  lflow_hash_unlock(struct ovs_mutex *hash_lock)
  * threads are collected to fix the lflow hmap's size (by the function
  * fix_flow_map_size()).
  * */
-static thread_local size_t thread_lflow_counter = 0;
-
-/* Adds an OVN datapath to a datapath group of existing logical flow.
- * Version to use when hash bucket locking is NOT required or the corresponding
- * hash lock is already taken. */
-static void
-ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
-                                const struct ovn_datapath *od,
-                                const unsigned long *dp_bitmap,
-                                size_t bitmap_len)
-    OVS_REQUIRES(fake_hash_mutex)
-{
-    if (od) {
-        bitmap_set1(lflow_ref->dpg_bitmap, od->index);
-    }
-    if (dp_bitmap) {
-        bitmap_or(lflow_ref->dpg_bitmap, dp_bitmap, bitmap_len);
-    }
-}
-
-/* This global variable collects the lflows generated by do_ovn_lflow_add().
- * start_collecting_lflows() will enable the lflow collection and the calls to
- * do_ovn_lflow_add (or the macros ovn_lflow_add_...) will add generated lflows
- * to the list. end_collecting_lflows() will disable it. */
-static thread_local struct ovs_list collected_lflows;
-static thread_local bool collecting_lflows = false;
-
-static void
-start_collecting_lflows(void)
-{
-    ovs_assert(!collecting_lflows);
-    ovs_list_init(&collected_lflows);
-    collecting_lflows = true;
-}
-
-static void
-end_collecting_lflows(void)
-{
-    ovs_assert(collecting_lflows);
-    collecting_lflows = false;
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table.
- * Version to use when hash bucket locking is NOT required. */
-static void
-do_ovn_lflow_add(struct hmap *lflow_map, const struct ovn_datapath *od,
-                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
-                 uint32_t hash, enum ovn_stage stage, uint16_t priority,
-                 const char *match, const char *actions, const char *io_port,
-                 const struct ovsdb_idl_row *stage_hint,
-                 const char *where, const char *ctrl_meter)
-    OVS_REQUIRES(fake_hash_mutex)
-{
-
-    struct ovn_lflow *old_lflow;
-    struct ovn_lflow *lflow;
-
-    size_t bitmap_len = od ? ods_size(od->datapaths) : dp_bitmap_len;
-    ovs_assert(bitmap_len);
-
-    if (collecting_lflows) {
-        ovs_assert(od);
-        ovs_assert(!dp_bitmap);
-    } else {
-        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
-                                   actions, ctrl_meter, hash);
-        if (old_lflow) {
-            ovn_dp_group_add_with_reference(old_lflow, od, dp_bitmap,
-                                            bitmap_len);
-            return;
-        }
-    }
-
-    lflow = xmalloc(sizeof *lflow);
-    /* While adding new logical flows we're not setting single datapath, but
-     * collecting a group.  'od' will be updated later for all flows with only
-     * one datapath in a group, so it could be hashed correctly. */
-    ovn_lflow_init(lflow, NULL, bitmap_len, stage, priority,
-                   xstrdup(match), xstrdup(actions),
-                   io_port ? xstrdup(io_port) : NULL,
-                   nullable_xstrdup(ctrl_meter),
-                   ovn_lflow_hint(stage_hint), where);
-
-    ovn_dp_group_add_with_reference(lflow, od, dp_bitmap, bitmap_len);
-
-    if (parallelization_state != STATE_USE_PARALLELIZATION) {
-        hmap_insert(lflow_map, &lflow->hmap_node, hash);
-    } else {
-        hmap_insert_fast(lflow_map, &lflow->hmap_node, hash);
-        thread_lflow_counter++;
-    }
-
-    if (collecting_lflows) {
-        ovs_list_insert(&collected_lflows, &lflow->list_node);
-    }
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table. */
-static void
-ovn_lflow_add_at(struct hmap *lflow_map, const struct ovn_datapath *od,
-                 const unsigned long *dp_bitmap, size_t dp_bitmap_len,
-                 enum ovn_stage stage, uint16_t priority,
-                 const char *match, const char *actions, const char *io_port,
-                 const char *ctrl_meter,
-                 const struct ovsdb_idl_row *stage_hint, const char *where)
-    OVS_EXCLUDED(fake_hash_mutex)
-{
-    struct ovs_mutex *hash_lock;
-    uint32_t hash;
-
-    ovs_assert(!od ||
-               ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
-
-    hash = ovn_logical_flow_hash(ovn_stage_get_table(stage),
-                                 ovn_stage_get_pipeline(stage),
-                                 priority, match,
-                                 actions);
-
-    hash_lock = lflow_hash_lock(lflow_map, hash);
-    do_ovn_lflow_add(lflow_map, od, dp_bitmap, dp_bitmap_len, hash, stage,
-                     priority, match, actions, io_port, stage_hint, where,
-                     ctrl_meter);
-    lflow_hash_unlock(hash_lock);
-}
-
-static void
-__ovn_lflow_add_default_drop(struct hmap *lflow_map,
-                             struct ovn_datapath *od,
-                             enum ovn_stage stage,
-                             const char *where)
-{
-        ovn_lflow_add_at(lflow_map, od, NULL, 0, stage, 0, "1",
-                         debug_drop_action(),
-                         NULL, NULL, NULL, where );
-}
-
-/* Adds a row with the specified contents to the Logical_Flow table. */
-#define ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                                  ACTIONS, IN_OUT_PORT, CTRL_METER, \
-                                  STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     IN_OUT_PORT, CTRL_METER, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_with_hint(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                                ACTIONS, STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     NULL, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_with_dp_group(LFLOW_MAP, DP_BITMAP, DP_BITMAP_LEN, \
-                                    STAGE, PRIORITY, MATCH, ACTIONS, \
-                                    STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, NULL, DP_BITMAP, DP_BITMAP_LEN, STAGE, \
-                     PRIORITY, MATCH, ACTIONS, NULL, NULL, STAGE_HINT, \
-                     OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE)                    \
-    __ovn_lflow_add_default_drop(LFLOW_MAP, OD, STAGE, OVS_SOURCE_LOCATOR)
-
-
-/* This macro is similar to ovn_lflow_add_with_hint, except that it requires
- * the IN_OUT_PORT argument, which tells the lport name that appears in the
- * MATCH, which helps ovn-controller to bypass lflows parsing when the lport is
- * not local to the chassis. The criteria of the lport to be added using this
- * argument:
- *
- * - For ingress pipeline, the lport that is used to match "inport".
- * - For egress pipeline, the lport that is used to match "outport".
- *
- * For now, only LS pipelines should use this macro.  */
-#define ovn_lflow_add_with_lport_and_hint(LFLOW_MAP, OD, STAGE, PRIORITY, \
-                                          MATCH, ACTIONS, IN_OUT_PORT, \
-                                          STAGE_HINT) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     IN_OUT_PORT, NULL, STAGE_HINT, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_add(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS) \
-    ovn_lflow_add_at(LFLOW_MAP, OD, NULL, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
-                     NULL, NULL, NULL, OVS_SOURCE_LOCATOR)
-
-#define ovn_lflow_metered(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, ACTIONS, \
-                          CTRL_METER) \
-    ovn_lflow_add_with_hint__(LFLOW_MAP, OD, STAGE, PRIORITY, MATCH, \
-                              ACTIONS, NULL, CTRL_METER, NULL)
-
-static struct ovn_lflow *
-ovn_lflow_find(const struct hmap *lflows, const struct ovn_datapath *od,
-               enum ovn_stage stage, uint16_t priority,
-               const char *match, const char *actions, const char *ctrl_meter,
-               uint32_t hash)
-{
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_WITH_HASH (lflow, hmap_node, hash, lflows) {
-        if (ovn_lflow_equal(lflow, od, stage, priority, match, actions,
-                            ctrl_meter)) {
-            return lflow;
-        }
-    }
-    return NULL;
-}
-
-static void
-ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
-{
-    if (lflow) {
-        if (lflows) {
-            hmap_remove(lflows, &lflow->hmap_node);
-        }
-        bitmap_free(lflow->dpg_bitmap);
-        free(lflow->match);
-        free(lflow->actions);
-        free(lflow->io_port);
-        free(lflow->stage_hint);
-        free(lflow->ctrl_meter);
-        struct lflow_ref_node *l;
-        LIST_FOR_EACH_SAFE (l, ref_list_node, &lflow->referenced_by) {
-            ovs_list_remove(&l->lflow_list_node);
-            ovs_list_remove(&l->ref_list_node);
-            free(l);
-        }
-        free(lflow);
-    }
-}
-
-static void
-link_ovn_port_to_lflows(struct ovn_port *op, struct ovs_list *lflows)
-{
-    struct ovn_lflow *f;
-    LIST_FOR_EACH (f, list_node, lflows) {
-        struct lflow_ref_node *lfrn = xmalloc(sizeof *lfrn);
-        lfrn->lflow = f;
-        ovs_list_insert(&op->lflows, &lfrn->lflow_list_node);
-        ovs_list_insert(&f->referenced_by, &lfrn->ref_list_node);
-    }
-}
+thread_local size_t thread_lflow_counter = 0;
 
 static bool
 build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
@@ -6604,8 +5953,8 @@  build_dhcpv6_action(struct ovn_port *op, struct in6_addr *offer_ip,
  * build_lswitch_lflows_admission_control() handles the port security.
  */
 static void
-build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
-                                struct ds *actions, struct ds *match)
+build_lswitch_port_sec_op(struct ovn_port *op, struct lflow_table *lflows,
+                          struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
 
@@ -6621,13 +5970,13 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
         ovn_lflow_add_with_lport_and_hint(
             lflows, op->od, S_SWITCH_IN_CHECK_PORT_SEC,
             100, ds_cstr(match), REGBIT_PORT_SEC_DROP" = 1; next;",
-            op->key, &op->nbsp->header_);
+            op->key, &op->nbsp->header_, op->lflow_ref);
 
         ds_clear(match);
         ds_put_format(match, "outport == %s", op->json_key);
         ovn_lflow_add_with_lport_and_hint(
             lflows, op->od, S_SWITCH_IN_L2_UNKNOWN, 50, ds_cstr(match),
-            debug_drop_action(), op->key, &op->nbsp->header_);
+            debug_drop_action(), op->key, &op->nbsp->header_, op->lflow_ref);
         return;
     }
 
@@ -6643,14 +5992,16 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_CHECK_PORT_SEC, 70,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
     } else if (queue_id) {
         ds_put_cstr(actions,
                     REGBIT_PORT_SEC_DROP" = check_in_port_sec(); next;");
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_CHECK_PORT_SEC, 70,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
 
         if (!lsp_is_localnet(op->nbsp) && !op->od->n_localnet_ports) {
             return;
@@ -6665,7 +6016,8 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
             ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                               S_SWITCH_OUT_APPLY_PORT_SEC, 100,
                                               ds_cstr(match), ds_cstr(actions),
-                                              op->key, &op->nbsp->header_);
+                                              op->key, &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (op->od->n_localnet_ports) {
             ds_put_format(match, "outport == %s && inport == %s",
                           op->od->localnet_ports[0]->json_key,
@@ -6674,15 +6026,16 @@  build_lswitch_port_sec_op(struct ovn_port *op, struct hmap *lflows,
                     S_SWITCH_OUT_APPLY_PORT_SEC, 110,
                     ds_cstr(match), ds_cstr(actions),
                     op->od->localnet_ports[0]->key,
-                    &op->od->localnet_ports[0]->nbsp->header_);
+                    &op->od->localnet_ports[0]->nbsp->header_,
+                    op->lflow_ref);
         }
     }
 }
 
 static void
 build_lswitch_learn_fdb_op(
-        struct ovn_port *op, struct hmap *lflows,
-        struct ds *actions, struct ds *match)
+    struct ovn_port *op, struct lflow_table *lflows,
+    struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
 
@@ -6699,7 +6052,8 @@  build_lswitch_learn_fdb_op(
         ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                           S_SWITCH_IN_LOOKUP_FDB, 100,
                                           ds_cstr(match), ds_cstr(actions),
-                                          op->key, &op->nbsp->header_);
+                                          op->key, &op->nbsp->header_,
+                                          op->lflow_ref);
 
         ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
         ds_clear(actions);
@@ -6707,13 +6061,14 @@  build_lswitch_learn_fdb_op(
         ovn_lflow_add_with_lport_and_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB,
                                           100, ds_cstr(match),
                                           ds_cstr(actions), op->key,
-                                          &op->nbsp->header_);
+                                          &op->nbsp->header_,
+                                          op->lflow_ref);
     }
 }
 
 static void
 build_lswitch_learn_fdb_od(
-        struct ovn_datapath *od, struct hmap *lflows)
+    struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
@@ -6727,7 +6082,7 @@  build_lswitch_learn_fdb_od(
  *                 (priority 100). */
 static void
 build_lswitch_output_port_sec_od(struct ovn_datapath *od,
-                              struct hmap *lflows)
+                                 struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_OUT_CHECK_PORT_SEC, 100,
@@ -6745,7 +6100,7 @@  static void
 skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
                          bool has_stateful_acl, enum ovn_stage in_stage,
                          enum ovn_stage out_stage, uint16_t priority,
-                         struct hmap *lflows)
+                         struct lflow_table *lflows)
 {
     /* Can't use ct() for router ports. Consider the following configuration:
      * lp1(10.0.0.2) on hostA--ls1--lr0--ls2--lp2(10.0.1.2) on hostB, For a
@@ -6767,10 +6122,10 @@  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
 
     ovn_lflow_add_with_lport_and_hint(lflows, od, in_stage, priority,
                                       ingress_match, ingress_action,
-                                      op->key, &op->nbsp->header_);
+                                      op->key, &op->nbsp->header_, NULL);
     ovn_lflow_add_with_lport_and_hint(lflows, od, out_stage, priority,
                                       egress_match, egress_action,
-                                      op->key, &op->nbsp->header_);
+                                      op->key, &op->nbsp->header_, NULL);
 
     free(ingress_match);
     free(egress_match);
@@ -6779,7 +6134,7 @@  skip_port_from_conntrack(const struct ovn_datapath *od, struct ovn_port *op,
 static void
 build_stateless_filter(const struct ovn_datapath *od,
                        const struct nbrec_acl *acl,
-                       struct hmap *lflows)
+                       struct lflow_table *lflows)
 {
     const char *action = REGBIT_ACL_STATELESS" = 1; next;";
     if (!strcmp(acl->direction, "from-lport")) {
@@ -6800,7 +6155,7 @@  build_stateless_filter(const struct ovn_datapath *od,
 static void
 build_stateless_filters(const struct ovn_datapath *od,
                         const struct ls_port_group_table *ls_port_groups,
-                        struct hmap *lflows)
+                        struct lflow_table *lflows)
 {
     for (size_t i = 0; i < od->nbs->n_acls; i++) {
         const struct nbrec_acl *acl = od->nbs->acls[i];
@@ -6828,7 +6183,7 @@  build_stateless_filters(const struct ovn_datapath *od,
 }
 
 static void
-build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
+build_pre_acls(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
      * allowed by default. */
@@ -6843,16 +6198,17 @@  build_pre_acls(struct ovn_datapath *od, struct hmap *lflows)
 }
 
 static void
-build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
-                             const struct ls_port_group_table *ls_port_groups,
-                             struct hmap *lflows)
+build_ls_stateful_rec_pre_acls(
+    const struct ls_stateful_record *ls_stateful_rec,
+    const struct ls_port_group_table *ls_port_groups,
+    struct lflow_table *lflows)
 {
-    const struct ovn_datapath *od = ls_sful_rec->od;
+    const struct ovn_datapath *od = ls_stateful_rec->od;
 
     /* If there are any stateful ACL rules in this datapath, we may
      * send IP packets for some (allow) filters through the conntrack action,
      * which handles defragmentation, in order to match L4 headers. */
-    if (ls_sful_rec->has_stateful_acl) {
+    if (ls_stateful_rec->has_stateful_acl) {
         for (size_t i = 0; i < od->n_router_ports; i++) {
             struct ovn_port *op = od->router_ports[i];
             if (op->enable_router_port_acl) {
@@ -6901,7 +6257,7 @@  build_ls_stateful_rec_pre_acls(const struct ls_stateful_record *ls_sful_rec,
                       REGBIT_CONNTRACK_DEFRAG" = 1; next;");
         ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 100, "ip",
                       REGBIT_CONNTRACK_DEFRAG" = 1; next;");
-    } else if (ls_sful_rec->has_lb_vip) {
+    } else if (ls_stateful_rec->has_lb_vip) {
         /* We'll build stateless filters if there are LB rules so that
          * the stateless flows are not tracked in pre-lb. */
          build_stateless_filters(od, ls_port_groups, lflows);
@@ -6968,7 +6324,7 @@  build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,
 static void
 build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
                                   const struct shash *meter_groups,
-                                  struct hmap *lflows)
+                                  struct lflow_table *lflows)
 {
     struct mcast_switch_info *mcast_sw_info = &od->mcast_info.sw;
     if (!mcast_sw_info->enabled
@@ -7002,7 +6358,7 @@  build_interconn_mcast_snoop_flows(struct ovn_datapath *od,
 
 static void
 build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
-             struct hmap *lflows)
+             struct lflow_table *lflows)
 {
     /* Handle IGMP/MLD packets crossing AZs. */
     build_interconn_mcast_snoop_flows(od, meter_groups, lflows);
@@ -7038,7 +6394,7 @@  build_pre_lb(struct ovn_datapath *od, const struct shash *meter_groups,
 
 static void
 build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
-                             struct hmap *lflows)
+                             struct lflow_table *lflows)
 {
     const struct ovn_datapath *od = ls_stateful_rec->od;
 
@@ -7104,7 +6460,7 @@  build_ls_stateful_rec_pre_lb(const struct ls_stateful_record *ls_stateful_rec,
 static void
 build_pre_stateful(struct ovn_datapath *od,
                    const struct chassis_features *features,
-                   struct hmap *lflows)
+                   struct lflow_table *lflows)
 {
     /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
      * allowed by default. */
@@ -7134,11 +6490,11 @@  build_pre_stateful(struct ovn_datapath *od,
 }
 
 static void
-build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
+build_acl_hints(const struct ls_stateful_record *ls_stateful_rec,
                 const struct chassis_features *features,
-                struct hmap *lflows)
+                struct lflow_table *lflows)
 {
-    const struct ovn_datapath *od = ls_sful_rec->od;
+    const struct ovn_datapath *od = ls_stateful_rec->od;
 
     /* This stage builds hints for the IN/OUT_ACL stage. Based on various
      * combinations of ct flags packets may hit only a subset of the logical
@@ -7163,13 +6519,14 @@  build_acl_hints(const struct ls_stateful_record *ls_sful_rec,
         const char *match;
 
         /* In any case, advance to the next stage. */
-        if (!ls_sful_rec->has_acls && !ls_sful_rec->has_lb_vip) {
+        if (!ls_stateful_rec->has_acls && !ls_stateful_rec->has_lb_vip) {
             ovn_lflow_add(lflows, od, stage, UINT16_MAX, "1", "next;");
         } else {
             ovn_lflow_add(lflows, od, stage, 0, "1", "next;");
         }
 
-        if (!ls_sful_rec->has_stateful_acl && !ls_sful_rec->has_lb_vip) {
+        if (!ls_stateful_rec->has_stateful_acl &&
+            !ls_stateful_rec->has_lb_vip) {
             continue;
         }
 
@@ -7305,7 +6662,7 @@  build_acl_log(struct ds *actions, const struct nbrec_acl *acl,
 }
 
 static void
-consider_acl(struct hmap *lflows, const struct ovn_datapath *od,
+consider_acl(struct lflow_table *lflows, const struct ovn_datapath *od,
              const struct nbrec_acl *acl, bool has_stateful,
              bool ct_masked_mark, const struct shash *meter_groups,
              uint64_t max_acl_tier, struct ds *match, struct ds *actions)
@@ -7533,7 +6890,7 @@  ovn_update_ipv6_options(struct hmap *lr_ports)
 
 static void
 build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
-                        struct hmap *lflows,
+                        struct lflow_table *lflows,
                         const char *default_acl_action,
                         const struct shash *meter_groups,
                         struct ds *match,
@@ -7610,7 +6967,8 @@  build_acl_action_lflows(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
+build_acl_log_related_flows(const struct ovn_datapath *od,
+                            struct lflow_table *lflows,
                             const struct nbrec_acl *acl, bool has_stateful,
                             bool ct_masked_mark,
                             const struct shash *meter_groups,
@@ -7685,7 +7043,7 @@  build_acl_log_related_flows(const struct ovn_datapath *od, struct hmap *lflows,
 static void
 build_acls(const struct ls_stateful_record *ls_stateful_rec,
            const struct chassis_features *features,
-           struct hmap *lflows,
+           struct lflow_table *lflows,
            const struct ls_port_group_table *ls_port_groups,
            const struct shash *meter_groups)
 {
@@ -7931,7 +7289,7 @@  build_acls(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_qos(struct ovn_datapath *od, struct hmap *lflows) {
+build_qos(struct ovn_datapath *od, struct lflow_table *lflows) {
     struct ds action = DS_EMPTY_INITIALIZER;
 
     ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
@@ -7992,7 +7350,7 @@  build_qos(struct ovn_datapath *od, struct hmap *lflows) {
 }
 
 static void
-build_lb_rules_pre_stateful(struct hmap *lflows,
+build_lb_rules_pre_stateful(struct lflow_table *lflows,
                             struct ovn_lb_datapaths *lb_dps,
                             bool ct_lb_mark,
                             const struct ovn_datapaths *ls_datapaths,
@@ -8094,7 +7452,8 @@  build_lb_rules_pre_stateful(struct hmap *lflows,
  *
  */
 static void
-build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
+build_lb_affinity_lr_flows(struct lflow_table *lflows,
+                           const struct ovn_northd_lb *lb,
                            struct ovn_lb_vip *lb_vip, char *new_lb_match,
                            char *lb_action, const unsigned long *dp_bitmap,
                            const struct ovn_datapaths *lr_datapaths)
@@ -8280,7 +7639,7 @@  build_lb_affinity_lr_flows(struct hmap *lflows, const struct ovn_northd_lb *lb,
  *
  */
 static void
-build_lb_affinity_ls_flows(struct hmap *lflows,
+build_lb_affinity_ls_flows(struct lflow_table *lflows,
                            struct ovn_lb_datapaths *lb_dps,
                            struct ovn_lb_vip *lb_vip,
                            const struct ovn_datapaths *ls_datapaths)
@@ -8423,7 +7782,7 @@  build_lb_affinity_ls_flows(struct hmap *lflows,
 
 static void
 build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
@@ -8432,7 +7791,7 @@  build_lswitch_lb_affinity_default_flows(struct ovn_datapath *od,
 
 static void
 build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
@@ -8440,7 +7799,7 @@  build_lrouter_lb_affinity_default_flows(struct ovn_datapath *od,
 }
 
 static void
-build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
+build_lb_rules(struct lflow_table *lflows, struct ovn_lb_datapaths *lb_dps,
                const struct ovn_datapaths *ls_datapaths,
                const struct chassis_features *features, struct ds *match,
                struct ds *action, const struct shash *meter_groups,
@@ -8520,7 +7879,7 @@  build_lb_rules(struct hmap *lflows, struct ovn_lb_datapaths *lb_dps,
 static void
 build_stateful(struct ovn_datapath *od,
                const struct chassis_features *features,
-               struct hmap *lflows)
+               struct lflow_table *lflows)
 {
     const char *ct_block_action = features->ct_no_masked_label
                                   ? "ct_mark.blocked"
@@ -8570,7 +7929,7 @@  build_stateful(struct ovn_datapath *od,
 
 static void
 build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
-                 struct hmap *lflows)
+                 struct lflow_table *lflows)
 {
     const struct ovn_datapath *od = ls_stateful_rec->od;
 
@@ -8629,7 +7988,7 @@  build_lb_hairpin(const struct ls_stateful_record *ls_stateful_rec,
 }
 
 static void
-build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
+build_vtep_hairpin(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     if (!od->has_vtep_lports) {
         /* There is no need in these flows if datapath has no vtep lports. */
@@ -8677,7 +8036,7 @@  build_vtep_hairpin(struct ovn_datapath *od, struct hmap *lflows)
 
 /* Build logical flows for the forwarding groups */
 static void
-build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
+build_fwd_group_lflows(struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     if (!od->nbs->n_forwarding_groups) {
@@ -8858,7 +8217,8 @@  build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
                                         uint32_t priority,
                                         const struct ovn_datapath *od,
                                         const struct lr_nat_record *lrnat_rec,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows,
+                                        struct lflow_ref *lflow_ref)
 {
     struct ds eth_src = DS_EMPTY_INITIALIZER;
     struct ds match = DS_EMPTY_INITIALIZER;
@@ -8882,8 +8242,10 @@  build_lswitch_rport_arp_req_self_orig_flow(struct ovn_port *op,
     ds_put_format(&match,
                   "eth.src == %s && (arp.op == 1 || rarp.op == 3 || nd_ns)",
                   ds_cstr(&eth_src));
-    ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, priority, ds_cstr(&match),
-                  "outport = \""MC_FLOOD_L2"\"; output;");
+    ovn_lflow_add_with_lflow_ref(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
+                                 ds_cstr(&match),
+                                 "outport = \""MC_FLOOD_L2"\"; output;",
+                                 lflow_ref);
 
     ds_destroy(&eth_src);
     ds_destroy(&match);
@@ -8948,11 +8310,11 @@  lrouter_port_ipv6_reachable(const struct ovn_port *op,
  * switching domain as regular broadcast.
  */
 static void
-build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
-                                 struct ovn_port *patch_op,
-                                 const struct ovn_datapath *od,
-                                 uint32_t priority, struct hmap *lflows,
-                                 const struct ovsdb_idl_row *stage_hint)
+build_lswitch_rport_arp_req_flow(
+    const char *ips, int addr_family, struct ovn_port *patch_op,
+    const struct ovn_datapath *od, uint32_t priority,
+    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
+    struct lflow_ref *lflow_ref)
 {
     struct ds match   = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -8966,14 +8328,17 @@  build_lswitch_rport_arp_req_flow(const char *ips, int addr_family,
         ds_put_format(&actions, "clone {outport = %s; output; }; "
                                 "outport = \""MC_FLOOD_L2"\"; output;",
                       patch_op->json_key);
-        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
-                                priority, ds_cstr(&match),
-                                ds_cstr(&actions), stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
+                                          priority, ds_cstr(&match),
+                                          ds_cstr(&actions), stage_hint,
+                                          lflow_ref);
     } else {
         ds_put_format(&actions, "outport = %s; output;", patch_op->json_key);
-        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_L2_LKUP, priority,
-                                ds_cstr(&match), ds_cstr(&actions),
-                                stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_SWITCH_IN_L2_LKUP,
+                                          priority, ds_cstr(&match),
+                                          ds_cstr(&actions),
+                                          stage_hint,
+                                          lflow_ref);
     }
 
     ds_destroy(&match);
@@ -8991,7 +8356,7 @@  static void
 build_lswitch_rport_arp_req_flows(struct ovn_port *op,
                                   struct ovn_datapath *sw_od,
                                   struct ovn_port *sw_op,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct ovsdb_idl_row *stage_hint)
 {
     if (!op || !op->nbrp) {
@@ -9009,12 +8374,12 @@  build_lswitch_rport_arp_req_flows(struct ovn_port *op,
     for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
         build_lswitch_rport_arp_req_flow(
             op->lrp_networks.ipv4_addrs[i].addr_s, AF_INET, sw_op, sw_od, 80,
-            lflows, stage_hint);
+            lflows, stage_hint, sw_op->lflow_ref);
     }
     for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
         build_lswitch_rport_arp_req_flow(
             op->lrp_networks.ipv6_addrs[i].addr_s, AF_INET6, sw_op, sw_od, 80,
-            lflows, stage_hint);
+            lflows, stage_hint, sw_op->lflow_ref);
     }
 }
 
@@ -9029,7 +8394,8 @@  static void
 build_lswitch_rport_arp_req_flows_for_lbnats(
     struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
     const struct ovn_datapath *sw_od, struct ovn_port *sw_op,
-    struct hmap *lflows, const struct ovsdb_idl_row *stage_hint)
+    struct lflow_table *lflows, const struct ovsdb_idl_row *stage_hint,
+    struct lflow_ref *lflow_ref)
 {
     if (!op || !op->nbrp) {
         return;
@@ -9057,7 +8423,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                 lrouter_port_ipv4_reachable(op, ipv4_addr)) {
                 build_lswitch_rport_arp_req_flow(
                     ip_addr, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
         SSET_FOR_EACH (ip_addr, &lr_stateful_rec->lb_ips->ips_v6_reachable) {
@@ -9070,7 +8436,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                 lrouter_port_ipv6_reachable(op, &ipv6_addr)) {
                 build_lswitch_rport_arp_req_flow(
                     ip_addr, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9085,7 +8451,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
     if (sw_od->n_router_ports != sw_od->nbs->n_ports) {
         build_lswitch_rport_arp_req_self_orig_flow(op, 75, sw_od,
                                                    lr_stateful_rec->lrnat_rec,
-                                                   lflows);
+                                                   lflows, lflow_ref);
     }
 
     for (size_t i = 0; i < lr_stateful_rec->lrnat_rec->n_nat_entries; i++) {
@@ -9109,14 +8475,14 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         } else {
             if (!sset_contains(&lr_stateful_rec->lb_ips->ips_v4,
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9143,7 +8509,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET6, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         } else {
             if (!lr_stateful_rec ||
@@ -9151,7 +8517,7 @@  build_lswitch_rport_arp_req_flows_for_lbnats(
                                nat->external_ip)) {
                 build_lswitch_rport_arp_req_flow(
                     nat->external_ip, AF_INET, sw_op, sw_od, 80, lflows,
-                    stage_hint);
+                    stage_hint, lflow_ref);
             }
         }
     }
@@ -9162,7 +8528,7 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                            struct lport_addresses *lsp_addrs,
                            struct ovn_port *inport, bool is_external,
                            const struct shash *meter_groups,
-                           struct hmap *lflows)
+                           struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9193,7 +8559,7 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                               op->json_key);
             }
 
-            ovn_lflow_add_with_hint__(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
                                       S_SWITCH_IN_DHCP_OPTIONS, 100,
                                       ds_cstr(&match),
                                       ds_cstr(&options_action),
@@ -9201,7 +8567,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                                       copp_meter_get(COPP_DHCPV4_OPTS,
                                                      op->od->nbs->copp,
                                                      meter_groups),
-                                      &op->nbsp->dhcpv4_options->header_);
+                                      &op->nbsp->dhcpv4_options->header_,
+                                      op->lflow_ref);
             ds_clear(&match);
 
             /* If REGBIT_DHCP_OPTS_RESULT is set, it means the
@@ -9220,7 +8587,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
             ovn_lflow_add_with_lport_and_hint(
                 lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
                 ds_cstr(&match), ds_cstr(&response_action), inport->key,
-                &op->nbsp->dhcpv4_options->header_);
+                &op->nbsp->dhcpv4_options->header_,
+                op->lflow_ref);
             ds_destroy(&options_action);
             ds_destroy(&response_action);
             ds_destroy(&ipv4_addr_match);
@@ -9247,7 +8615,8 @@  build_dhcpv4_options_flows(struct ovn_port *op,
                 ovn_lflow_add_with_lport_and_hint(
                     lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
                     ds_cstr(&match),dhcp_actions, op->key,
-                    &op->nbsp->dhcpv4_options->header_);
+                    &op->nbsp->dhcpv4_options->header_,
+                    op->lflow_ref);
             }
             break;
         }
@@ -9260,7 +8629,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                            struct lport_addresses *lsp_addrs,
                            struct ovn_port *inport, bool is_external,
                            const struct shash *meter_groups,
-                           struct hmap *lflows)
+                           struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9282,7 +8651,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                               op->json_key);
             }
 
-            ovn_lflow_add_with_hint__(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
                                       S_SWITCH_IN_DHCP_OPTIONS, 100,
                                       ds_cstr(&match),
                                       ds_cstr(&options_action),
@@ -9290,7 +8659,8 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                                       copp_meter_get(COPP_DHCPV6_OPTS,
                                                      op->od->nbs->copp,
                                                      meter_groups),
-                                      &op->nbsp->dhcpv6_options->header_);
+                                      &op->nbsp->dhcpv6_options->header_,
+                                      op->lflow_ref);
 
             /* If REGBIT_DHCP_OPTS_RESULT is set to 1, it means the
              * put_dhcpv6_opts action is successful */
@@ -9298,7 +8668,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
             ovn_lflow_add_with_lport_and_hint(
                 lflows, op->od, S_SWITCH_IN_DHCP_RESPONSE, 100,
                 ds_cstr(&match), ds_cstr(&response_action), inport->key,
-                &op->nbsp->dhcpv6_options->header_);
+                &op->nbsp->dhcpv6_options->header_, op->lflow_ref);
             ds_destroy(&options_action);
             ds_destroy(&response_action);
 
@@ -9330,7 +8700,8 @@  build_dhcpv6_options_flows(struct ovn_port *op,
                 ovn_lflow_add_with_lport_and_hint(
                     lflows, op->od, S_SWITCH_OUT_ACL_EVAL, 34000,
                     ds_cstr(&match),dhcp6_actions, op->key,
-                    &op->nbsp->dhcpv6_options->header_);
+                    &op->nbsp->dhcpv6_options->header_,
+                    op->lflow_ref);
             }
             break;
         }
@@ -9341,7 +8712,7 @@  build_dhcpv6_options_flows(struct ovn_port *op,
 static void
 build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                                                  const struct ovn_port *port,
-                                                 struct hmap *lflows)
+                                                 struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
 
@@ -9361,7 +8732,7 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                     ovn_lflow_add_with_lport_and_hint(
                         lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
                         ds_cstr(&match),  debug_drop_action(), port->key,
-                        &op->nbsp->header_);
+                        &op->nbsp->header_, op->lflow_ref);
                 }
                 for (size_t l = 0; l < rp->lsp_addrs[k].n_ipv6_addrs; l++) {
                     ds_clear(&match);
@@ -9377,7 +8748,7 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                     ovn_lflow_add_with_lport_and_hint(
                         lflows, op->od, S_SWITCH_IN_EXTERNAL_PORT, 100,
                         ds_cstr(&match), debug_drop_action(), port->key,
-                        &op->nbsp->header_);
+                        &op->nbsp->header_, op->lflow_ref);
                 }
 
                 ds_clear(&match);
@@ -9393,7 +8764,8 @@  build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
                                                   100, ds_cstr(&match),
                                                   debug_drop_action(),
                                                   port->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
         }
     }
@@ -9408,7 +8780,7 @@  is_vlan_transparent(const struct ovn_datapath *od)
 
 static void
 build_lswitch_lflows_l2_unknown(struct ovn_datapath *od,
-                                struct hmap *lflows)
+                                struct lflow_table *lflows)
 {
     /* Ingress table 25/26: Destination lookup for unknown MACs. */
     if (od->has_unknown) {
@@ -9429,7 +8801,7 @@  static void
 build_lswitch_lflows_pre_acl_and_acl(
     struct ovn_datapath *od,
     const struct chassis_features *features,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     const struct shash *meter_groups)
 {
     ovs_assert(od->nbs);
@@ -9445,7 +8817,7 @@  build_lswitch_lflows_pre_acl_and_acl(
  * 100). */
 static void
 build_lswitch_lflows_admission_control(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     /* Logical VLANs not supported. */
@@ -9473,7 +8845,7 @@  build_lswitch_lflows_admission_control(struct ovn_datapath *od,
 
 static void
 build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
-                                          struct hmap *lflows,
+                                          struct lflow_table *lflows,
                                           struct ds *match)
 {
     ovs_assert(op->nbsp);
@@ -9485,14 +8857,14 @@  build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
     ovn_lflow_add_with_lport_and_hint(lflows, op->od,
                                       S_SWITCH_IN_ARP_ND_RSP, 100,
                                       ds_cstr(match), "next;", op->key,
-                                      &op->nbsp->header_);
+                                      &op->nbsp->header_, op->lflow_ref);
 }
 
 /* Ingress table 19: ARP/ND responder, reply for known IPs.
  * (priority 50). */
 static void
 build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
-                                         struct hmap *lflows,
+                                         struct lflow_table *lflows,
                                          const struct hmap *ls_ports,
                                          const struct shash *meter_groups,
                                          struct ds *actions,
@@ -9577,7 +8949,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                               S_SWITCH_IN_ARP_ND_RSP, 100,
                                               ds_cstr(match),
                                               ds_cstr(actions), vparent,
-                                              &vp->nbsp->header_);
+                                              &vp->nbsp->header_,
+                                              op->lflow_ref);
         }
 
         free(tokstr);
@@ -9621,11 +8994,12 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                     "output;",
                     op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
                     op->lsp_addrs[i].ipv4_addrs[j].addr_s);
-                ovn_lflow_add_with_hint(lflows, op->od,
-                                        S_SWITCH_IN_ARP_ND_RSP, 50,
-                                        ds_cstr(match),
-                                        ds_cstr(actions),
-                                        &op->nbsp->header_);
+                ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                                  S_SWITCH_IN_ARP_ND_RSP, 50,
+                                                  ds_cstr(match),
+                                                  ds_cstr(actions),
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
 
                 /* Do not reply to an ARP request from the port that owns
                  * the address (otherwise a DHCP client that ARPs to check
@@ -9644,7 +9018,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                                   S_SWITCH_IN_ARP_ND_RSP,
                                                   100, ds_cstr(match),
                                                   "next;", op->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
 
             /* For ND solicitations, we need to listen for both the
@@ -9674,15 +9049,16 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                         op->lsp_addrs[i].ipv6_addrs[j].addr_s,
                         op->lsp_addrs[i].ipv6_addrs[j].addr_s,
                         op->lsp_addrs[i].ea_s);
-                ovn_lflow_add_with_hint__(lflows, op->od,
-                                          S_SWITCH_IN_ARP_ND_RSP, 50,
-                                          ds_cstr(match),
-                                          ds_cstr(actions),
-                                          NULL,
-                                          copp_meter_get(COPP_ND_NA,
-                                              op->od->nbs->copp,
-                                              meter_groups),
-                                          &op->nbsp->header_);
+                ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
+                                                    S_SWITCH_IN_ARP_ND_RSP, 50,
+                                                    ds_cstr(match),
+                                                    ds_cstr(actions),
+                                                    NULL,
+                                                    copp_meter_get(COPP_ND_NA,
+                                                        op->od->nbs->copp,
+                                                        meter_groups),
+                                                    &op->nbsp->header_,
+                                                    op->lflow_ref);
 
                 /* Do not reply to a solicitation from the port that owns
                  * the address (otherwise DAD detection will fail). */
@@ -9691,7 +9067,8 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                                                   S_SWITCH_IN_ARP_ND_RSP,
                                                   100, ds_cstr(match),
                                                   "next;", op->key,
-                                                  &op->nbsp->header_);
+                                                  &op->nbsp->header_,
+                                                  op->lflow_ref);
             }
         }
     }
@@ -9737,8 +9114,12 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                 ea_s,
                 ea_s);
 
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
-                30, ds_cstr(match), ds_cstr(actions), &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_ARP_ND_RSP,
+                                              30, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         }
 
         /* Add IPv6 NDP responses.
@@ -9781,15 +9162,16 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
                     lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
                     ea_s,
                     ea_s);
-            ovn_lflow_add_with_hint__(lflows, op->od,
-                                      S_SWITCH_IN_ARP_ND_RSP, 30,
-                                      ds_cstr(match),
-                                      ds_cstr(actions),
-                                      NULL,
-                                      copp_meter_get(COPP_ND_NA,
-                                          op->od->nbs->copp,
-                                          meter_groups),
-                                      &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint__(lflows, op->od,
+                                                S_SWITCH_IN_ARP_ND_RSP, 30,
+                                                ds_cstr(match),
+                                                ds_cstr(actions),
+                                                NULL,
+                                                copp_meter_get(COPP_ND_NA,
+                                                    op->od->nbs->copp,
+                                                    meter_groups),
+                                                &op->nbsp->header_,
+                                                op->lflow_ref);
             ds_destroy(&ip6_dst_match);
             ds_destroy(&nd_target_match);
         }
@@ -9800,7 +9182,7 @@  build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
  * (priority 0)*/
 static void
 build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
@@ -9811,7 +9193,7 @@  build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
 static void
 build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
                                      const struct hmap *ls_ports,
-                                     struct hmap *lflows,
+                                     struct lflow_table *lflows,
                                      struct ds *actions,
                                      struct ds *match)
 {
@@ -9887,7 +9269,7 @@  build_lswitch_arp_nd_service_monitor(const struct ovn_northd_lb *lb,
  * priority 100 flows. */
 static void
 build_lswitch_dhcp_options_and_response(struct ovn_port *op,
-                                        struct hmap *lflows,
+                                        struct lflow_table *lflows,
                                         const struct shash *meter_groups)
 {
     ovs_assert(op->nbsp);
@@ -9942,7 +9324,7 @@  build_lswitch_dhcp_options_and_response(struct ovn_port *op,
  * (priority 0). */
 static void
 build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
-                                        struct hmap *lflows)
+                                        struct lflow_table *lflows)
 {
     ovs_assert(od->nbs);
     ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
@@ -9957,7 +9339,7 @@  build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
 */
 static void
 build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
-                                      struct hmap *lflows,
+                                      struct lflow_table *lflows,
                                       const struct shash *meter_groups)
 {
     ovs_assert(od->nbs);
@@ -9988,7 +9370,7 @@  build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
  * binding the external ports. */
 static void
 build_lswitch_external_port(struct ovn_port *op,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
     if (!lsp_is_external(op->nbsp)) {
@@ -10004,7 +9386,7 @@  build_lswitch_external_port(struct ovn_port *op,
  * (priority 70 - 100). */
 static void
 build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
-                                        struct hmap *lflows,
+                                        struct lflow_table *lflows,
                                         struct ds *actions,
                                         const struct shash *meter_groups)
 {
@@ -10097,7 +9479,7 @@  build_lswitch_destination_lookup_bmcast(struct ovn_datapath *od,
  * (priority 90). */
 static void
 build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
-                                struct hmap *lflows,
+                                struct lflow_table *lflows,
                                 struct ds *actions,
                                 struct ds *match)
 {
@@ -10177,7 +9559,8 @@  build_lswitch_ip_mcast_igmp_mld(struct ovn_igmp_group *igmp_group,
 
 /* Ingress table 25: Destination lookup, unicast handling (priority 50), */
 static void
-build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
+build_lswitch_ip_unicast_lookup(struct ovn_port *op,
+                                struct lflow_table *lflows,
                                 struct ds *actions, struct ds *match)
 {
     ovs_assert(op->nbsp);
@@ -10210,10 +9593,12 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
-                                    50, ds_cstr(match),
-                                    ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP,
+                                              50, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
             continue;
         } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
@@ -10228,10 +9613,12 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
-                                    50, ds_cstr(match),
-                                    ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP,
+                                              50, ds_cstr(match),
+                                              ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else if (!strcmp(op->nbsp->addresses[i], "router")) {
             if (!op->peer || !op->peer->nbrp
                 || !ovs_scan(op->peer->nbrp->mac,
@@ -10283,10 +9670,11 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od,
-                                    S_SWITCH_IN_L2_LKUP, 50,
-                                    ds_cstr(match), ds_cstr(actions),
-                                    &op->nbsp->header_);
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                              S_SWITCH_IN_L2_LKUP, 50,
+                                              ds_cstr(match), ds_cstr(actions),
+                                              &op->nbsp->header_,
+                                              op->lflow_ref);
         } else {
             static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
 
@@ -10301,7 +9689,8 @@  build_lswitch_ip_unicast_lookup(struct ovn_port *op, struct hmap *lflows,
 static void
 build_lswitch_ip_unicast_lookup_for_nats(
     struct ovn_port *op, const struct lr_stateful_record *lr_stateful_rec,
-    struct hmap *lflows, struct ds *match, struct ds *actions)
+    struct lflow_table *lflows, struct ds *match, struct ds *actions,
+    struct lflow_ref *lflow_ref)
 {
     ovs_assert(op->nbsp);
 
@@ -10334,11 +9723,12 @@  build_lswitch_ip_unicast_lookup_for_nats(
 
             ds_clear(actions);
             ds_put_format(actions, action, op->json_key);
-            ovn_lflow_add_with_hint(lflows, op->od,
+            ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
                                     S_SWITCH_IN_L2_LKUP, 50,
                                     ds_cstr(match),
                                     ds_cstr(actions),
-                                    &op->nbsp->header_);
+                                    &op->nbsp->header_,
+                                    lflow_ref);
         }
     }
 }
@@ -10578,7 +9968,7 @@  get_outport_for_routing_policy_nexthop(struct ovn_datapath *od,
 }
 
 static void
-build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                           const struct hmap *lr_ports,
                           const struct nbrec_logical_router_policy *rule,
                           const struct ovsdb_idl_row *stage_hint)
@@ -10643,7 +10033,8 @@  build_routing_policy_flow(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-build_ecmp_routing_policy_flows(struct hmap *lflows, struct ovn_datapath *od,
+build_ecmp_routing_policy_flows(struct lflow_table *lflows,
+                                struct ovn_datapath *od,
                                 const struct hmap *lr_ports,
                                 const struct nbrec_logical_router_policy *rule,
                                 uint16_t ecmp_group_id)
@@ -10779,7 +10170,7 @@  get_route_table_id(struct simap *route_tables, const char *route_table_name)
 }
 
 static void
-build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
+build_route_table_lflow(struct ovn_datapath *od, struct lflow_table *lflows,
                         struct nbrec_logical_router_port *lrp,
                         struct simap *route_tables)
 {
@@ -11190,7 +10581,7 @@  find_static_route_outport(struct ovn_datapath *od, const struct hmap *lr_ports,
 }
 
 static void
-add_ecmp_symmetric_reply_flows(struct hmap *lflows,
+add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
                                struct ovn_datapath *od,
                                bool ct_masked_mark,
                                const char *port_ip,
@@ -11355,7 +10746,7 @@  add_ecmp_symmetric_reply_flows(struct hmap *lflows,
 }
 
 static void
-build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                       bool ct_masked_mark, const struct hmap *lr_ports,
                       struct ecmp_groups_node *eg)
 
@@ -11442,12 +10833,12 @@  build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-add_route(struct hmap *lflows, struct ovn_datapath *od,
+add_route(struct lflow_table *lflows, struct ovn_datapath *od,
           const struct ovn_port *op, const char *lrp_addr_s,
           const char *network_s, int plen, const char *gateway,
           bool is_src_route, const uint32_t rtb_id,
           const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
-          int ofs)
+          int ofs, struct lflow_ref *lflow_ref)
 {
     bool is_ipv4 = strchr(network_s, '.') ? true : false;
     struct ds match = DS_EMPTY_INITIALIZER;
@@ -11490,14 +10881,17 @@  add_route(struct hmap *lflows, struct ovn_datapath *od,
         ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions));
     }
 
-    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
-                            ds_cstr(&match), ds_cstr(&actions),
-                            stage_hint);
+    ovn_lflow_add_with_lflow_ref_hint(lflows, od, S_ROUTER_IN_IP_ROUTING,
+                                      priority, ds_cstr(&match),
+                                      ds_cstr(&actions), stage_hint,
+                                      lflow_ref);
     if (op && op->has_bfd) {
         ds_put_format(&match, " && udp.dst == 3784");
-        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_ROUTING,
-                                priority + 1, ds_cstr(&match),
-                                ds_cstr(&common_actions), stage_hint);
+        ovn_lflow_add_with_lflow_ref_hint(lflows, op->od,
+                                          S_ROUTER_IN_IP_ROUTING,
+                                          priority + 1, ds_cstr(&match),
+                                          ds_cstr(&common_actions),
+                                          stage_hint, lflow_ref);
     }
     ds_destroy(&match);
     ds_destroy(&common_actions);
@@ -11505,7 +10899,7 @@  add_route(struct hmap *lflows, struct ovn_datapath *od,
 }
 
 static void
-build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
+build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                         const struct hmap *lr_ports,
                         const struct parsed_route *route_)
 {
@@ -11531,7 +10925,7 @@  build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
     add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
               lrp_addr_s, prefix_s, route_->plen, route->nexthop,
               route_->is_src_route, route_->route_table_id, &route->header_,
-              route_->is_discard_route, ofs);
+              route_->is_discard_route, ofs, NULL);
 
     free(prefix_s);
 }
@@ -11594,7 +10988,7 @@  struct lrouter_nat_lb_flows_ctx {
 
     int prio;
 
-    struct hmap *lflows;
+    struct lflow_table *lflows;
     const struct shash *meter_groups;
 };
 
@@ -11726,7 +11120,7 @@  build_lrouter_nat_flows_for_lb(
     struct ovn_northd_lb_vip *vips_nb,
     const struct ovn_datapaths *lr_datapaths,
     const struct lr_stateful_table *lr_stateful_table,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     struct ds *match, struct ds *action,
     const struct shash *meter_groups,
     const struct chassis_features *features,
@@ -11895,7 +11289,7 @@  build_lrouter_nat_flows_for_lb(
 
 static void
 build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                           struct hmap *lflows,
+                           struct lflow_table *lflows,
                            const struct shash *meter_groups,
                            const struct ovn_datapaths *ls_datapaths,
                            const struct chassis_features *features,
@@ -11956,7 +11350,7 @@  build_lswitch_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
  */
 static void
 build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct ovn_datapaths *lr_datapaths,
                                   struct ds *match)
 {
@@ -11982,7 +11376,7 @@  build_lrouter_defrag_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
 
 static void
 build_lrouter_flows_for_lb(struct ovn_lb_datapaths *lb_dps,
-                           struct hmap *lflows,
+                           struct lflow_table *lflows,
                            const struct shash *meter_groups,
                            const struct ovn_datapaths *lr_datapaths,
                            const struct lr_stateful_table *lr_stateful_table,
@@ -12140,7 +11534,7 @@  lrouter_dnat_and_snat_is_stateless(const struct nbrec_nat *nat)
  */
 static inline void
 lrouter_nat_add_ext_ip_match(const struct ovn_datapath *od,
-                             struct hmap *lflows, struct ds *match,
+                             struct lflow_table *lflows, struct ds *match,
                              const struct nbrec_nat *nat,
                              bool is_v6, bool is_src, int cidr_bits)
 {
@@ -12207,7 +11601,7 @@  build_lrouter_arp_flow(const struct ovn_datapath *od, struct ovn_port *op,
                        const char *ip_address, const char *eth_addr,
                        struct ds *extra_match, bool drop, uint16_t priority,
                        const struct ovsdb_idl_row *hint,
-                       struct hmap *lflows)
+                       struct lflow_table *lflows)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -12257,7 +11651,8 @@  build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
                       const char *sn_ip_address, const char *eth_addr,
                       struct ds *extra_match, bool drop, uint16_t priority,
                       const struct ovsdb_idl_row *hint,
-                      struct hmap *lflows, const struct shash *meter_groups)
+                      struct lflow_table *lflows,
+                      const struct shash *meter_groups)
 {
     struct ds match = DS_EMPTY_INITIALIZER;
     struct ds actions = DS_EMPTY_INITIALIZER;
@@ -12308,7 +11703,7 @@  build_lrouter_nd_flow(const struct ovn_datapath *od, struct ovn_port *op,
 static void
 build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
                               struct ovn_nat *nat_entry,
-                              struct hmap *lflows,
+                              struct lflow_table *lflows,
                               const struct shash *meter_groups)
 {
     struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
@@ -12331,7 +11726,7 @@  build_lrouter_nat_arp_nd_flow(const struct ovn_datapath *od,
 static void
 build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
                                    struct ovn_nat *nat_entry,
-                                   struct hmap *lflows,
+                                   struct lflow_table *lflows,
                                    const struct shash *meter_groups)
 {
     struct lport_addresses *ext_addrs = &nat_entry->ext_addrs;
@@ -12405,7 +11800,7 @@  build_lrouter_drop_own_dest(struct ovn_port *op,
                             const struct lr_stateful_record *lr_stateful_rec,
                             enum ovn_stage stage,
                             uint16_t priority, bool drop_snat_ip,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     struct ds match_ips = DS_EMPTY_INITIALIZER;
 
@@ -12470,7 +11865,7 @@  build_lrouter_drop_own_dest(struct ovn_port *op,
 }
 
 static void
-build_lrouter_force_snat_flows(struct hmap *lflows,
+build_lrouter_force_snat_flows(struct lflow_table *lflows,
                                const struct ovn_datapath *od,
                                const char *ip_version, const char *ip_addr,
                                const char *context)
@@ -12499,7 +11894,7 @@  build_lrouter_force_snat_flows(struct hmap *lflows,
 static void
 build_lrouter_force_snat_flows_op(struct ovn_port *op,
                                   const struct lr_nat_record *lrnat_rec,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12571,7 +11966,7 @@  build_lrouter_force_snat_flows_op(struct ovn_port *op,
 }
 
 static void
-build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
+build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
                         const struct shash *meter_groups)
 {
     if (!op->has_bfd) {
@@ -12626,7 +12021,7 @@  build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op,
  */
 static void
 build_adm_ctrl_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     /* Logical VLANs not supported.
@@ -12670,7 +12065,7 @@  build_gateway_get_l2_hdr_size(struct ovn_port *op)
  * function.
  */
 static void OVS_PRINTF_FORMAT(9, 10)
-build_gateway_mtu_flow(struct hmap *lflows, struct ovn_port *op,
+build_gateway_mtu_flow(struct lflow_table *lflows, struct ovn_port *op,
                        enum ovn_stage stage, uint16_t prio_low,
                        uint16_t prio_high, struct ds *match,
                        struct ds *actions, const struct ovsdb_idl_row *hint,
@@ -12731,7 +12126,7 @@  consider_l3dgw_port_is_centralized(struct ovn_port *op)
  */
 static void
 build_adm_ctrl_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12785,7 +12180,7 @@  build_adm_ctrl_flows_for_lrouter_port(
  * lflows for logical routers. */
 static void
 build_neigh_learning_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -12916,7 +12311,7 @@  build_neigh_learning_flows_for_lrouter(
  * for logical router ports. */
 static void
 build_neigh_learning_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -12978,7 +12373,7 @@  build_neigh_learning_flows_for_lrouter_port(
  * Adv (RA) options and response. */
 static void
 build_ND_RA_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -13093,7 +12488,8 @@  build_ND_RA_flows_for_lrouter_port(
 /* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
  * responder, by default goto next. (priority 0). */
 static void
-build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
+build_ND_RA_flows_for_lrouter(struct ovn_datapath *od,
+                              struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
@@ -13104,7 +12500,7 @@  build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
  * by default goto next. (priority 0). */
 static void
 build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
-                                       struct hmap *lflows)
+                                       struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1",
@@ -13132,21 +12528,23 @@  build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
  */
 static void
 build_ip_routing_flows_for_lrp(
-        struct ovn_port *op, struct hmap *lflows)
+        struct ovn_port *op, struct lflow_table *lflows)
 {
     ovs_assert(op->nbrp);
     for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
         add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
                   op->lrp_networks.ipv4_addrs[i].network_s,
                   op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
-                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
+                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
+                  NULL);
     }
 
     for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
         add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
                   op->lrp_networks.ipv6_addrs[i].network_s,
                   op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
-                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED);
+                  &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED,
+                  NULL);
     }
 }
 
@@ -13159,8 +12557,9 @@  build_ip_routing_flows_for_lrp(
  */
 static void
 build_ip_routing_flows_for_router_type_lsp(
-        struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
-        const struct hmap *lr_ports, struct hmap *lflows)
+    struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
+    const struct hmap *lr_ports, struct lflow_table *lflows,
+    struct lflow_ref *lflow_ref)
 {
     ovs_assert(op->nbsp);
     if (!lsp_is_router(op->nbsp)) {
@@ -13196,7 +12595,8 @@  build_ip_routing_flows_for_router_type_lsp(
                             laddrs->ipv4_addrs[k].network_s,
                             laddrs->ipv4_addrs[k].plen, NULL, false, 0,
                             &peer->nbrp->header_, false,
-                            ROUTE_PRIO_OFFSET_CONNECTED);
+                            ROUTE_PRIO_OFFSET_CONNECTED,
+                            lflow_ref);
                 }
             }
             destroy_routable_addresses(&ra);
@@ -13208,7 +12608,7 @@  build_ip_routing_flows_for_router_type_lsp(
 static void
 build_static_route_flows_for_lrouter(
         struct ovn_datapath *od, const struct chassis_features *features,
-        struct hmap *lflows, const struct hmap *lr_ports,
+        struct lflow_table *lflows, const struct hmap *lr_ports,
         const struct hmap *bfd_connections)
 {
     ovs_assert(od->nbr);
@@ -13272,7 +12672,7 @@  build_static_route_flows_for_lrouter(
  */
 static void
 build_mcast_lookup_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
@@ -13373,7 +12773,7 @@  build_mcast_lookup_flows_for_lrouter(
  * advances to the next table for ARP/ND resolution. */
 static void
 build_ingress_policy_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         const struct hmap *lr_ports)
 {
     ovs_assert(od->nbr);
@@ -13407,7 +12807,7 @@  build_ingress_policy_flows_for_lrouter(
 /* Local router ingress table ARP_RESOLVE: ARP Resolution. */
 static void
 build_arp_resolve_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     /* Multicast packets already have the outport set so just advance to
@@ -13425,10 +12825,12 @@  build_arp_resolve_flows_for_lrouter(
 }
 
 static void
-routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
+routable_addresses_to_lflows(struct lflow_table *lflows,
+                             struct ovn_port *router_port,
                              struct ovn_port *peer,
                              const struct lr_stateful_record *lr_stateful_rec,
-                             struct ds *match, struct ds *actions)
+                             struct ds *match, struct ds *actions,
+                             struct lflow_ref *lflow_ref)
 {
     struct ovn_port_routable_addresses ra =
         get_op_routable_addresses(router_port, lr_stateful_rec);
@@ -13452,8 +12854,9 @@  routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
 
         ds_clear(actions);
         ds_put_format(actions, "eth.dst = %s; next;", ra.laddrs[i].ea_s);
-        ovn_lflow_add(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE, 100,
-                      ds_cstr(match), ds_cstr(actions));
+        ovn_lflow_add_with_lflow_ref(lflows, peer->od, S_ROUTER_IN_ARP_RESOLVE,
+                                     100, ds_cstr(match), ds_cstr(actions),
+                                     lflow_ref);
     }
     destroy_routable_addresses(&ra);
 }
@@ -13470,7 +12873,8 @@  routable_addresses_to_lflows(struct hmap *lflows, struct ovn_port *router_port,
 
 /* This function adds ARP resolve flows related to a LRP. */
 static void
-build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
+build_arp_resolve_flows_for_lrp(struct ovn_port *op,
+                                struct lflow_table *lflows,
                                 struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -13545,7 +12949,7 @@  build_arp_resolve_flows_for_lrp(struct ovn_port *op, struct hmap *lflows,
 /* This function adds ARP resolve flows related to a LSP. */
 static void
 build_arp_resolve_flows_for_lsp(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         const struct hmap *lr_ports,
         struct ds *match, struct ds *actions)
 {
@@ -13587,11 +12991,12 @@  build_arp_resolve_flows_for_lsp(
 
                     ds_clear(actions);
                     ds_put_format(actions, "eth.dst = %s; next;", ea_s);
-                    ovn_lflow_add_with_hint(lflows, peer->od,
+                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                             S_ROUTER_IN_ARP_RESOLVE, 100,
                                             ds_cstr(match),
                                             ds_cstr(actions),
-                                            &op->nbsp->header_);
+                                            &op->nbsp->header_,
+                                            op->lflow_ref);
                 }
             }
 
@@ -13618,11 +13023,12 @@  build_arp_resolve_flows_for_lsp(
 
                     ds_clear(actions);
                     ds_put_format(actions, "eth.dst = %s; next;", ea_s);
-                    ovn_lflow_add_with_hint(lflows, peer->od,
+                    ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                             S_ROUTER_IN_ARP_RESOLVE, 100,
                                             ds_cstr(match),
                                             ds_cstr(actions),
-                                            &op->nbsp->header_);
+                                            &op->nbsp->header_,
+                                            op->lflow_ref);
                 }
             }
         }
@@ -13666,10 +13072,11 @@  build_arp_resolve_flows_for_lsp(
                 ds_clear(actions);
                 ds_put_format(actions, "eth.dst = %s; next;",
                                           router_port->lrp_networks.ea_s);
-                ovn_lflow_add_with_hint(lflows, peer->od,
+                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                         S_ROUTER_IN_ARP_RESOLVE, 100,
                                         ds_cstr(match), ds_cstr(actions),
-                                        &op->nbsp->header_);
+                                        &op->nbsp->header_,
+                                        op->lflow_ref);
             }
 
             if (router_port->lrp_networks.n_ipv6_addrs) {
@@ -13682,10 +13089,11 @@  build_arp_resolve_flows_for_lsp(
                 ds_clear(actions);
                 ds_put_format(actions, "eth.dst = %s; next;",
                               router_port->lrp_networks.ea_s);
-                ovn_lflow_add_with_hint(lflows, peer->od,
+                ovn_lflow_add_with_lflow_ref_hint(lflows, peer->od,
                                         S_ROUTER_IN_ARP_RESOLVE, 100,
                                         ds_cstr(match), ds_cstr(actions),
-                                        &op->nbsp->header_);
+                                        &op->nbsp->header_,
+                                        op->lflow_ref);
             }
         }
     }
@@ -13693,10 +13101,11 @@  build_arp_resolve_flows_for_lsp(
 
 static void
 build_arp_resolve_flows_for_lsp_routable_addresses(
-        struct ovn_port *op, struct hmap *lflows,
-        const struct hmap *lr_ports,
-        const struct lr_stateful_table *lr_stateful_table,
-        struct ds *match, struct ds *actions)
+    struct ovn_port *op, struct lflow_table *lflows,
+    const struct hmap *lr_ports,
+    const struct lr_stateful_table *lr_stateful_table,
+    struct ds *match, struct ds *actions,
+    struct lflow_ref *lflow_ref)
 {
     if (!lsp_is_router(op->nbsp)) {
         return;
@@ -13730,13 +13139,15 @@  build_arp_resolve_flows_for_lsp_routable_addresses(
             lr_stateful_rec = lr_stateful_table_find_by_index(
                 lr_stateful_table, router_port->od->index);
             routable_addresses_to_lflows(lflows, router_port, peer,
-                                         lr_stateful_rec, match, actions);
+                                         lr_stateful_rec, match, actions,
+                                         lflow_ref);
         }
     }
 }
 
 static void
-build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
+build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu,
+                            struct lflow_table *lflows,
                             const struct shash *meter_groups, struct ds *match,
                             struct ds *actions, enum ovn_stage stage,
                             struct ovn_port *outport)
@@ -13829,7 +13240,7 @@  build_icmperr_pkt_big_flows(struct ovn_port *op, int mtu, struct hmap *lflows,
 
 static void
 build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct hmap *lr_ports,
                                   const struct shash *meter_groups,
                                   struct ds *match,
@@ -13879,7 +13290,7 @@  build_check_pkt_len_flows_for_lrp(struct ovn_port *op,
  * */
 static void
 build_check_pkt_len_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         const struct hmap *lr_ports,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
@@ -13906,7 +13317,7 @@  build_check_pkt_len_flows_for_lrouter(
 /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
 static void
 build_gateway_redirect_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
@@ -13950,8 +13361,8 @@  build_gateway_redirect_flows_for_lrouter(
 /* Logical router ingress table GW_REDIRECT: Gateway redirect. */
 static void
 build_lr_gateway_redirect_flows_for_nats(
-    const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
-    struct hmap *lflows, struct ds *match, struct ds *actions)
+        const struct ovn_datapath *od, const struct lr_nat_record *lrnat_rec,
+        struct lflow_table *lflows, struct ds *match, struct ds *actions)
 {
     ovs_assert(od->nbr);
     for (size_t i = 0; i < od->n_l3dgw_ports; i++) {
@@ -14020,7 +13431,7 @@  build_lr_gateway_redirect_flows_for_nats(
  * and sends an ARP/IPv6 NA request (priority 100). */
 static void
 build_arp_request_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows,
+        struct ovn_datapath *od, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -14098,7 +13509,7 @@  build_arp_request_flows_for_lrouter(
  */
 static void
 build_egress_delivery_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions)
 {
     ovs_assert(op->nbrp);
@@ -14140,7 +13551,7 @@  build_egress_delivery_flows_for_lrouter_port(
 
 static void
 build_misc_local_traffic_drop_flows_for_lrouter(
-        struct ovn_datapath *od, struct hmap *lflows)
+        struct ovn_datapath *od, struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
     /* Allow IGMP and MLD packets (with TTL = 1) if the router is
@@ -14222,7 +13633,7 @@  build_misc_local_traffic_drop_flows_for_lrouter(
 
 static void
 build_dhcpv6_reply_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match)
 {
     ovs_assert(op->nbrp);
@@ -14242,7 +13653,7 @@  build_dhcpv6_reply_flows_for_lrouter_port(
 
 static void
 build_ipv6_input_flows_for_lrouter_port(
-        struct ovn_port *op, struct hmap *lflows,
+        struct ovn_port *op, struct lflow_table *lflows,
         struct ds *match, struct ds *actions,
         const struct shash *meter_groups)
 {
@@ -14411,7 +13822,7 @@  build_ipv6_input_flows_for_lrouter_port(
 static void
 build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
                                   const struct lr_nat_record *lrnat_rec,
-                                  struct hmap *lflows,
+                                  struct lflow_table *lflows,
                                   const struct shash *meter_groups)
 {
     ovs_assert(od->nbr);
@@ -14463,7 +13874,7 @@  build_lrouter_arp_nd_for_datapath(const struct ovn_datapath *od,
 /* Logical router ingress table 3: IP Input for IPv4. */
 static void
 build_lrouter_ipv4_ip_input(struct ovn_port *op,
-                            struct hmap *lflows,
+                            struct lflow_table *lflows,
                             struct ds *match, struct ds *actions,
                             const struct shash *meter_groups)
 {
@@ -14667,7 +14078,7 @@  build_lrouter_ipv4_ip_input(struct ovn_port *op,
 /* Logical router ingress table 3: IP Input for IPv4. */
 static void
 build_lrouter_ipv4_ip_input_for_lbnats(
-    struct ovn_port *op, struct hmap *lflows,
+    struct ovn_port *op, struct lflow_table *lflows,
     const struct lr_stateful_record *lr_stateful_rec,
     struct ds *match, const struct shash *meter_groups)
 {
@@ -14787,7 +14198,7 @@  build_lrouter_in_unsnat_match(const struct ovn_datapath *od,
 }
 
 static void
-build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_stateless_flow(struct lflow_table *lflows,
                                        const struct ovn_datapath *od,
                                        const struct nbrec_nat *nat,
                                        struct ds *match,
@@ -14809,7 +14220,7 @@  build_lrouter_in_unsnat_stateless_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_in_czone_flow(struct lflow_table *lflows,
                                       const struct ovn_datapath *od,
                                       const struct nbrec_nat *nat,
                                       struct ds *match, bool distributed_nat,
@@ -14843,7 +14254,7 @@  build_lrouter_in_unsnat_in_czone_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_unsnat_flow(struct hmap *lflows,
+build_lrouter_in_unsnat_flow(struct lflow_table *lflows,
                              const struct ovn_datapath *od,
                              const struct nbrec_nat *nat, struct ds *match,
                              bool distributed_nat, bool is_v6,
@@ -14865,7 +14276,7 @@  build_lrouter_in_unsnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_in_dnat_flow(struct hmap *lflows,
+build_lrouter_in_dnat_flow(struct lflow_table *lflows,
                            const struct ovn_datapath *od,
                            const struct lr_nat_record *lrnat_rec,
                            const struct nbrec_nat *nat, struct ds *match,
@@ -14937,7 +14348,7 @@  build_lrouter_in_dnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_undnat_flow(struct hmap *lflows,
+build_lrouter_out_undnat_flow(struct lflow_table *lflows,
                               const struct ovn_datapath *od,
                               const struct nbrec_nat *nat, struct ds *match,
                               struct ds *actions, bool distributed_nat,
@@ -14988,7 +14399,7 @@  build_lrouter_out_undnat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_is_dnat_local(struct hmap *lflows,
+build_lrouter_out_is_dnat_local(struct lflow_table *lflows,
                                 const struct ovn_datapath *od,
                                 const struct nbrec_nat *nat, struct ds *match,
                                 struct ds *actions, bool distributed_nat,
@@ -15019,7 +14430,7 @@  build_lrouter_out_is_dnat_local(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_match(struct hmap *lflows,
+build_lrouter_out_snat_match(struct lflow_table *lflows,
                              const struct ovn_datapath *od,
                              const struct nbrec_nat *nat, struct ds *match,
                              bool distributed_nat, int cidr_bits, bool is_v6,
@@ -15048,7 +14459,7 @@  build_lrouter_out_snat_match(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
+build_lrouter_out_snat_stateless_flow(struct lflow_table *lflows,
                                       const struct ovn_datapath *od,
                                       const struct nbrec_nat *nat,
                                       struct ds *match, struct ds *actions,
@@ -15091,7 +14502,7 @@  build_lrouter_out_snat_stateless_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
+build_lrouter_out_snat_in_czone_flow(struct lflow_table *lflows,
                                      const struct ovn_datapath *od,
                                      const struct nbrec_nat *nat,
                                      struct ds *match,
@@ -15153,7 +14564,7 @@  build_lrouter_out_snat_in_czone_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_out_snat_flow(struct hmap *lflows,
+build_lrouter_out_snat_flow(struct lflow_table *lflows,
                             const struct ovn_datapath *od,
                             const struct nbrec_nat *nat, struct ds *match,
                             struct ds *actions, bool distributed_nat,
@@ -15199,7 +14610,7 @@  build_lrouter_out_snat_flow(struct hmap *lflows,
 }
 
 static void
-build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
+build_lrouter_ingress_nat_check_pkt_len(struct lflow_table *lflows,
                                         const struct nbrec_nat *nat,
                                         const struct ovn_datapath *od,
                                         bool is_v6, struct ds *match,
@@ -15271,7 +14682,7 @@  build_lrouter_ingress_nat_check_pkt_len(struct hmap *lflows,
 }
 
 static void
-build_lrouter_ingress_flow(struct hmap *lflows,
+build_lrouter_ingress_flow(struct lflow_table *lflows,
                            const struct ovn_datapath *od,
                            const struct nbrec_nat *nat, struct ds *match,
                            struct ds *actions, struct eth_addr mac,
@@ -15451,7 +14862,7 @@  lrouter_check_nat_entry(const struct ovn_datapath *od,
 
 /* NAT, Defrag and load balancing. */
 static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
-                                                     struct hmap *lflows)
+                                                struct lflow_table *lflows)
 {
     ovs_assert(od->nbr);
 
@@ -15476,7 +14887,8 @@  static void build_lr_nat_defrag_and_lb_default_flows(struct ovn_datapath *od,
 
 static void
 build_lrouter_nat_defrag_and_lb(
-    const struct lr_stateful_record *lr_stateful_rec, struct hmap *lflows,
+    const struct lr_stateful_record *lr_stateful_rec,
+    struct lflow_table *lflows,
     const struct hmap *ls_ports, const struct hmap *lr_ports,
     struct ds *match, struct ds *actions,
     const struct shash *meter_groups,
@@ -15858,31 +15270,30 @@  build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
                             const struct lr_stateful_record *lr_stateful_rec,
                             const struct lr_stateful_table *lr_stateful_table,
                             const struct hmap *lr_ports,
-                            struct hmap *lflows,
+                            struct lflow_table *lflows,
                             struct ds *match,
-                            struct ds *actions)
+                            struct ds *actions,
+                            struct lflow_ref *lflow_ref)
 {
     ovs_assert(lsp->nbsp);
     ovs_assert(lsp->peer);
-    start_collecting_lflows();
     build_lswitch_rport_arp_req_flows_for_lbnats(
         lsp->peer, lr_stateful_rec, lsp->od, lsp,
-        lflows, &lsp->nbsp->header_);
+        lflows, &lsp->nbsp->header_, lflow_ref);
     build_ip_routing_flows_for_router_type_lsp(lsp, lr_stateful_table,
-                                               lr_ports, lflows);
+                                               lr_ports, lflows,
+                                               lflow_ref);
     build_arp_resolve_flows_for_lsp_routable_addresses(
-        lsp, lflows, lr_ports, lr_stateful_table, match, actions);
+        lsp, lflows, lr_ports, lr_stateful_table, match, actions, lflow_ref);
     build_lswitch_ip_unicast_lookup_for_nats(lsp, lr_stateful_rec, lflows,
-                                             match, actions);
-    link_ovn_port_to_lflows(lsp, &collected_lflows);
-    end_collecting_lflows();
+                                             match, actions, lflow_ref);
 }
 
 static void
 build_lbnat_lflows_iterate_by_lsp(
     struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
     const struct hmap *lr_ports, struct ds *match, struct ds *actions,
-    struct hmap *lflows)
+    struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
 
@@ -15895,8 +15306,9 @@  build_lbnat_lflows_iterate_by_lsp(
                                                       op->peer->od->index);
     ovs_assert(lr_stateful_rec);
 
-    build_lsp_lflows_for_lbnats(op, lr_stateful_rec, lr_stateful_table,
-                                lr_ports, lflows, match, actions);
+    build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
+                                lr_stateful_table, lr_ports, lflows,
+                                match, actions, op->stateful_lflow_ref);
 }
 
 static void
@@ -15904,7 +15316,7 @@  build_lrp_lflows_for_lbnats(struct ovn_port *op,
                             const struct lr_stateful_record *lr_stateful_rec,
                             const struct shash *meter_groups,
                             struct ds *match, struct ds *actions,
-                            struct hmap *lflows)
+                            struct lflow_table *lflows)
 {
     /* Drop IP traffic destined to router owned IPs except if the IP is
      * also a SNAT IP. Those are dropped later, in stage
@@ -15939,7 +15351,7 @@  static void
 build_lbnat_lflows_iterate_by_lrp(
     struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
     const struct shash *meter_groups, struct ds *match,
-    struct ds *actions, struct hmap *lflows)
+    struct ds *actions, struct lflow_table *lflows)
 {
     ovs_assert(op->nbrp);
 
@@ -15954,7 +15366,7 @@  build_lbnat_lflows_iterate_by_lrp(
 
 static void
 build_lr_stateful_flows(const struct lr_stateful_record *lr_stateful_rec,
-                        struct hmap *lflows,
+                        struct lflow_table *lflows,
                         const struct hmap *ls_ports,
                         const struct hmap *lr_ports,
                         struct ds *match,
@@ -15978,7 +15390,7 @@  build_ls_stateful_flows(const struct ls_stateful_record *ls_stateful_rec,
                       const struct ls_port_group_table *ls_pgs,
                       const struct chassis_features *features,
                       const struct shash *meter_groups,
-                      struct hmap *lflows)
+                      struct lflow_table *lflows)
 {
     ovs_assert(ls_stateful_rec->od);
 
@@ -15997,7 +15409,7 @@  struct lswitch_flow_build_info {
     const struct ls_port_group_table *ls_port_groups;
     const struct lr_stateful_table *lr_stateful_table;
     const struct ls_stateful_table *ls_stateful_table;
-    struct hmap *lflows;
+    struct lflow_table *lflows;
     struct hmap *igmp_groups;
     const struct shash *meter_groups;
     const struct hmap *lb_dps_map;
@@ -16080,10 +15492,9 @@  build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
                                          const struct shash *meter_groups,
                                          struct ds *match,
                                          struct ds *actions,
-                                         struct hmap *lflows)
+                                         struct lflow_table *lflows)
 {
     ovs_assert(op->nbsp);
-    start_collecting_lflows();
 
     /* Build Logical Switch Flows. */
     build_lswitch_port_sec_op(op, lflows, actions, match);
@@ -16098,9 +15509,6 @@  build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,
 
     /* Build Logical Router Flows. */
     build_arp_resolve_flows_for_lsp(op, lflows, lr_ports, match, actions);
-
-    link_ovn_port_to_lflows(op, &collected_lflows);
-    end_collecting_lflows();
 }
 
 /* Helper function to combine all lflow generation which is iterated by logical
@@ -16307,7 +15715,7 @@  noop_callback(struct worker_pool *pool OVS_UNUSED,
     /* Do nothing */
 }
 
-/* Fixes the hmap size (hmap->n) after parallel building the lflow_map when
+/* Fixes the hmap size (hmap->n) after parallel building the lflow_table when
  * dp-groups is enabled, because in that case all threads are updating the
  * global lflow hmap. Although the lflow_hash_lock prevents currently inserting
  * to the same hash bucket, the hmap->n is updated currently by all threads and
@@ -16317,7 +15725,7 @@  noop_callback(struct worker_pool *pool OVS_UNUSED,
  * after the worker threads complete the tasks in each iteration before any
  * future operations on the lflow map. */
 static void
-fix_flow_map_size(struct hmap *lflow_map,
+fix_flow_table_size(struct lflow_table *lflow_table,
                   struct lswitch_flow_build_info *lsiv,
                   size_t n_lsiv)
 {
@@ -16325,7 +15733,7 @@  fix_flow_map_size(struct hmap *lflow_map,
     for (size_t i = 0; i < n_lsiv; i++) {
         total += lsiv[i].thread_lflow_counter;
     }
-    lflow_map->n = total;
+    lflow_table_set_size(lflow_table, total);
 }
 
 static void
@@ -16337,7 +15745,7 @@  build_lswitch_and_lrouter_flows(
     const struct ls_port_group_table *ls_pgs,
     const struct lr_stateful_table *lr_stateful_table,
     const struct ls_stateful_table *ls_stateful_table,
-    struct hmap *lflows,
+    struct lflow_table *lflows,
     struct hmap *igmp_groups,
     const struct shash *meter_groups,
     const struct hmap *lb_dps_map,
@@ -16384,7 +15792,7 @@  build_lswitch_and_lrouter_flows(
 
         /* Run thread pool. */
         run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
-        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
+        fix_flow_table_size(lflows, lsiv, build_lflows_pool->size);
 
         for (index = 0; index < build_lflows_pool->size; index++) {
             ds_destroy(&lsiv[index].match);
@@ -16498,24 +15906,6 @@  build_lswitch_and_lrouter_flows(
     free(svc_check_match);
 }
 
-static ssize_t max_seen_lflow_size = 128;
-
-void
-lflow_data_init(struct lflow_data *data)
-{
-    fast_hmap_size_for(&data->lflows, max_seen_lflow_size);
-}
-
-void
-lflow_data_destroy(struct lflow_data *data)
-{
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, &data->lflows) {
-        ovn_lflow_destroy(&data->lflows, lflow);
-    }
-    hmap_destroy(&data->lflows);
-}
-
 void run_update_worker_pool(int n_threads)
 {
     /* If number of threads has been updated (or initially set),
@@ -16561,7 +15951,7 @@  create_sb_multicast_group(struct ovsdb_idl_txn *ovnsb_txn,
  * constructing their contents based on the OVN_NB database. */
 void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
                   struct lflow_input *input_data,
-                  struct hmap *lflows)
+                  struct lflow_table *lflows)
 {
     struct hmap mcast_groups;
     struct hmap igmp_groups;
@@ -16592,281 +15982,26 @@  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
     }
 
     /* Parallel build may result in a suboptimal hash. Resize the
-     * hash to a correct size before doing lookups */
-
-    hmap_expand(lflows);
-
-    if (hmap_count(lflows) > max_seen_lflow_size) {
-        max_seen_lflow_size = hmap_count(lflows);
-    }
-
-    stopwatch_start(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
-    /* Collecting all unique datapath groups. */
-    struct hmap ls_dp_groups = HMAP_INITIALIZER(&ls_dp_groups);
-    struct hmap lr_dp_groups = HMAP_INITIALIZER(&lr_dp_groups);
-    struct hmap single_dp_lflows;
-
-    /* Single dp_flows will never grow bigger than lflows,
-     * thus the two hmaps will remain the same size regardless
-     * of how many elements we remove from lflows and add to
-     * single_dp_lflows.
-     * Note - lflows is always sized for at least 128 flows.
-     */
-    fast_hmap_size_for(&single_dp_lflows, max_seen_lflow_size);
-
-    struct ovn_lflow *lflow;
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
-        struct ovn_datapath **datapaths_array;
-        size_t n_datapaths;
-
-        if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
-            n_datapaths = ods_size(input_data->ls_datapaths);
-            datapaths_array = input_data->ls_datapaths->array;
-        } else {
-            n_datapaths = ods_size(input_data->lr_datapaths);
-            datapaths_array = input_data->lr_datapaths->array;
-        }
-
-        lflow->n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
-
-        ovs_assert(lflow->n_ods);
-
-        if (lflow->n_ods == 1) {
-            /* There is only one datapath, so it should be moved out of the
-             * group to a single 'od'. */
-            size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
-                                       n_datapaths);
-
-            bitmap_set0(lflow->dpg_bitmap, index);
-            lflow->od = datapaths_array[index];
-
-            /* Logical flow should be re-hashed to allow lookups. */
-            uint32_t hash = hmap_node_hash(&lflow->hmap_node);
-            /* Remove from lflows. */
-            hmap_remove(lflows, &lflow->hmap_node);
-            hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
-                                                  hash);
-            /* Add to single_dp_lflows. */
-            hmap_insert_fast(&single_dp_lflows, &lflow->hmap_node, hash);
-        }
-    }
-
-    /* Merge multiple and single dp hashes. */
-
-    fast_hmap_merge(lflows, &single_dp_lflows);
-
-    hmap_destroy(&single_dp_lflows);
-
-    stopwatch_stop(LFLOWS_DP_GROUPS_STOPWATCH_NAME, time_msec());
+     * lflow map to a correct size before doing lookups */
+    lflow_table_expand(lflows);
+
     stopwatch_start(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
-
-    struct hmap lflows_temp = HMAP_INITIALIZER(&lflows_temp);
-    /* Push changes to the Logical_Flow table to database. */
-    const struct sbrec_logical_flow *sbflow;
-    SBREC_LOGICAL_FLOW_TABLE_FOR_EACH_SAFE (sbflow,
-                                     input_data->sbrec_logical_flow_table) {
-        struct sbrec_logical_dp_group *dp_group = sbflow->logical_dp_group;
-        struct ovn_datapath *logical_datapath_od = NULL;
-        size_t i;
-
-        /* Find one valid datapath to get the datapath type. */
-        struct sbrec_datapath_binding *dp = sbflow->logical_datapath;
-        if (dp) {
-            logical_datapath_od = ovn_datapath_from_sbrec(
-                                        &input_data->ls_datapaths->datapaths,
-                                        &input_data->lr_datapaths->datapaths,
-                                        dp);
-            if (logical_datapath_od
-                && ovn_datapath_is_stale(logical_datapath_od)) {
-                logical_datapath_od = NULL;
-            }
-        }
-        for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
-            logical_datapath_od = ovn_datapath_from_sbrec(
-                                        &input_data->ls_datapaths->datapaths,
-                                        &input_data->lr_datapaths->datapaths,
-                                        dp_group->datapaths[i]);
-            if (logical_datapath_od
-                && !ovn_datapath_is_stale(logical_datapath_od)) {
-                break;
-            }
-            logical_datapath_od = NULL;
-        }
-
-        if (!logical_datapath_od) {
-            /* This lflow has no valid logical datapaths. */
-            sbrec_logical_flow_delete(sbflow);
-            continue;
-        }
-
-        enum ovn_pipeline pipeline
-            = !strcmp(sbflow->pipeline, "ingress") ? P_IN : P_OUT;
-
-        lflow = ovn_lflow_find(
-            lflows, dp_group ? NULL : logical_datapath_od,
-            ovn_stage_build(ovn_datapath_get_type(logical_datapath_od),
-                            pipeline, sbflow->table_id),
-            sbflow->priority, sbflow->match, sbflow->actions,
-            sbflow->controller_meter, sbflow->hash);
-        if (lflow) {
-            struct hmap *dp_groups;
-            size_t n_datapaths;
-            bool is_switch;
-
-            lflow->sb_uuid = sbflow->header_.uuid;
-            is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
-            if (is_switch) {
-                n_datapaths = ods_size(input_data->ls_datapaths);
-                dp_groups = &ls_dp_groups;
-            } else {
-                n_datapaths = ods_size(input_data->lr_datapaths);
-                dp_groups = &lr_dp_groups;
-            }
-            if (input_data->ovn_internal_version_changed) {
-                const char *stage_name = smap_get_def(&sbflow->external_ids,
-                                                  "stage-name", "");
-                const char *stage_hint = smap_get_def(&sbflow->external_ids,
-                                                  "stage-hint", "");
-                const char *source = smap_get_def(&sbflow->external_ids,
-                                                  "source", "");
-
-                if (strcmp(stage_name, ovn_stage_to_str(lflow->stage))) {
-                    sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                     "stage-name", ovn_stage_to_str(lflow->stage));
-                }
-                if (lflow->stage_hint) {
-                    if (strcmp(stage_hint, lflow->stage_hint)) {
-                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                        "stage-hint", lflow->stage_hint);
-                    }
-                }
-                if (lflow->where) {
-                    if (strcmp(source, lflow->where)) {
-                        sbrec_logical_flow_update_external_ids_setkey(sbflow,
-                        "source", lflow->where);
-                    }
-                }
-            }
-
-            if (lflow->od) {
-                sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-                sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
-            } else {
-                lflow->dpg = ovn_dp_group_get_or_create(
-                                ovnsb_txn, dp_groups, dp_group,
-                                lflow->n_ods, lflow->dpg_bitmap,
-                                n_datapaths, is_switch,
-                                input_data->ls_datapaths,
-                                input_data->lr_datapaths);
-
-                sbrec_logical_flow_set_logical_datapath(sbflow, NULL);
-                sbrec_logical_flow_set_logical_dp_group(sbflow,
-                                                        lflow->dpg->dp_group);
-            }
-
-            /* This lflow updated.  Not needed anymore. */
-            hmap_remove(lflows, &lflow->hmap_node);
-            hmap_insert(&lflows_temp, &lflow->hmap_node,
-                        hmap_node_hash(&lflow->hmap_node));
-        } else {
-            sbrec_logical_flow_delete(sbflow);
-        }
-    }
-
-    HMAP_FOR_EACH_SAFE (lflow, hmap_node, lflows) {
-        const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
-        uint8_t table = ovn_stage_get_table(lflow->stage);
-        struct hmap *dp_groups;
-        size_t n_datapaths;
-        bool is_switch;
-
-        is_switch = ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH;
-        if (is_switch) {
-            n_datapaths = ods_size(input_data->ls_datapaths);
-            dp_groups = &ls_dp_groups;
-        } else {
-            n_datapaths = ods_size(input_data->lr_datapaths);
-            dp_groups = &lr_dp_groups;
-        }
-
-        lflow->sb_uuid = uuid_random();
-        sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
-                                                        &lflow->sb_uuid);
-        if (lflow->od) {
-            sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-        } else {
-            lflow->dpg = ovn_dp_group_get_or_create(
-                                ovnsb_txn, dp_groups, NULL,
-                                lflow->n_ods, lflow->dpg_bitmap,
-                                n_datapaths, is_switch,
-                                input_data->ls_datapaths,
-                                input_data->lr_datapaths);
-
-            sbrec_logical_flow_set_logical_dp_group(sbflow,
-                                                    lflow->dpg->dp_group);
-        }
-
-        sbrec_logical_flow_set_pipeline(sbflow, pipeline);
-        sbrec_logical_flow_set_table_id(sbflow, table);
-        sbrec_logical_flow_set_priority(sbflow, lflow->priority);
-        sbrec_logical_flow_set_match(sbflow, lflow->match);
-        sbrec_logical_flow_set_actions(sbflow, lflow->actions);
-        if (lflow->io_port) {
-            struct smap tags = SMAP_INITIALIZER(&tags);
-            smap_add(&tags, "in_out_port", lflow->io_port);
-            sbrec_logical_flow_set_tags(sbflow, &tags);
-            smap_destroy(&tags);
-        }
-        sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
-
-        /* Trim the source locator lflow->where, which looks something like
-         * "ovn/northd/northd.c:1234", down to just the part following the
-         * last slash, e.g. "northd.c:1234". */
-        const char *slash = strrchr(lflow->where, '/');
-#if _WIN32
-        const char *backslash = strrchr(lflow->where, '\\');
-        if (!slash || backslash > slash) {
-            slash = backslash;
-        }
-#endif
-        const char *where = slash ? slash + 1 : lflow->where;
-
-        struct smap ids = SMAP_INITIALIZER(&ids);
-        smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
-        smap_add(&ids, "source", where);
-        if (lflow->stage_hint) {
-            smap_add(&ids, "stage-hint", lflow->stage_hint);
-        }
-        sbrec_logical_flow_set_external_ids(sbflow, &ids);
-        smap_destroy(&ids);
-        hmap_remove(lflows, &lflow->hmap_node);
-        hmap_insert(&lflows_temp, &lflow->hmap_node,
-                    hmap_node_hash(&lflow->hmap_node));
-    }
-    hmap_swap(lflows, &lflows_temp);
-    hmap_destroy(&lflows_temp);
+    lflow_table_sync_to_sb(lflows, ovnsb_txn, input_data->ls_datapaths,
+                           input_data->lr_datapaths,
+                           input_data->ovn_internal_version_changed,
+                           input_data->sbrec_logical_flow_table,
+                           input_data->sbrec_logical_dp_group_table);
 
     stopwatch_stop(LFLOWS_TO_SB_STOPWATCH_NAME, time_msec());
-    struct ovn_dp_group *dpg;
-    HMAP_FOR_EACH_POP (dpg, node, &ls_dp_groups) {
-        bitmap_free(dpg->bitmap);
-        free(dpg);
-    }
-    hmap_destroy(&ls_dp_groups);
-    HMAP_FOR_EACH_POP (dpg, node, &lr_dp_groups) {
-        bitmap_free(dpg->bitmap);
-        free(dpg);
-    }
-    hmap_destroy(&lr_dp_groups);
 
     /* Push changes to the Multicast_Group table to database. */
     const struct sbrec_multicast_group *sbmc;
-    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (sbmc,
-                                input_data->sbrec_multicast_group_table) {
+    SBREC_MULTICAST_GROUP_TABLE_FOR_EACH_SAFE (
+            sbmc, input_data->sbrec_multicast_group_table) {
         struct ovn_datapath *od = ovn_datapath_from_sbrec(
-                                       &input_data->ls_datapaths->datapaths,
-                                       &input_data->lr_datapaths->datapaths,
-                                       sbmc->datapath);
+            &input_data->ls_datapaths->datapaths,
+            &input_data->lr_datapaths->datapaths,
+            sbmc->datapath);
 
         if (!od || ovn_datapath_is_stale(od)) {
             sbrec_multicast_group_delete(sbmc);
@@ -16906,120 +16041,22 @@  void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
     hmap_destroy(&mcast_groups);
 }
 
-static void
-sync_lsp_lflows_to_sb(struct ovsdb_idl_txn *ovnsb_txn,
-                      struct lflow_input *lflow_input,
-                      struct hmap *lflows,
-                      struct ovn_lflow *lflow)
-{
-    size_t n_datapaths;
-    struct ovn_datapath **datapaths_array;
-    if (ovn_stage_to_datapath_type(lflow->stage) == DP_SWITCH) {
-        n_datapaths = ods_size(lflow_input->ls_datapaths);
-        datapaths_array = lflow_input->ls_datapaths->array;
-    } else {
-        n_datapaths = ods_size(lflow_input->lr_datapaths);
-        datapaths_array = lflow_input->lr_datapaths->array;
-    }
-    uint32_t n_ods = bitmap_count1(lflow->dpg_bitmap, n_datapaths);
-    ovs_assert(n_ods == 1);
-    /* There is only one datapath, so it should be moved out of the
-     * group to a single 'od'. */
-    size_t index = bitmap_scan(lflow->dpg_bitmap, true, 0,
-                               n_datapaths);
-
-    bitmap_set0(lflow->dpg_bitmap, index);
-    lflow->od = datapaths_array[index];
-
-    /* Logical flow should be re-hashed to allow lookups. */
-    uint32_t hash = hmap_node_hash(&lflow->hmap_node);
-    /* Remove from lflows. */
-    hmap_remove(lflows, &lflow->hmap_node);
-    hash = ovn_logical_flow_hash_datapath(&lflow->od->sb->header_.uuid,
-                                          hash);
-    /* Add back. */
-    hmap_insert(lflows, &lflow->hmap_node, hash);
-
-    /* Sync to SB. */
-    const struct sbrec_logical_flow *sbflow;
-    /* Note: uuid_random acquires a global mutex. If we parallelize the sync to
-     * SB this may become a bottleneck. */
-    lflow->sb_uuid = uuid_random();
-    sbflow = sbrec_logical_flow_insert_persist_uuid(ovnsb_txn,
-                                                    &lflow->sb_uuid);
-    const char *pipeline = ovn_stage_get_pipeline_name(lflow->stage);
-    uint8_t table = ovn_stage_get_table(lflow->stage);
-    sbrec_logical_flow_set_logical_datapath(sbflow, lflow->od->sb);
-    sbrec_logical_flow_set_logical_dp_group(sbflow, NULL);
-    sbrec_logical_flow_set_pipeline(sbflow, pipeline);
-    sbrec_logical_flow_set_table_id(sbflow, table);
-    sbrec_logical_flow_set_priority(sbflow, lflow->priority);
-    sbrec_logical_flow_set_match(sbflow, lflow->match);
-    sbrec_logical_flow_set_actions(sbflow, lflow->actions);
-    if (lflow->io_port) {
-        struct smap tags = SMAP_INITIALIZER(&tags);
-        smap_add(&tags, "in_out_port", lflow->io_port);
-        sbrec_logical_flow_set_tags(sbflow, &tags);
-        smap_destroy(&tags);
-    }
-    sbrec_logical_flow_set_controller_meter(sbflow, lflow->ctrl_meter);
-    /* Trim the source locator lflow->where, which looks something like
-     * "ovn/northd/northd.c:1234", down to just the part following the
-     * last slash, e.g. "northd.c:1234". */
-    const char *slash = strrchr(lflow->where, '/');
-#if _WIN32
-    const char *backslash = strrchr(lflow->where, '\\');
-    if (!slash || backslash > slash) {
-        slash = backslash;
-    }
-#endif
-    const char *where = slash ? slash + 1 : lflow->where;
-
-    struct smap ids = SMAP_INITIALIZER(&ids);
-    smap_add(&ids, "stage-name", ovn_stage_to_str(lflow->stage));
-    smap_add(&ids, "source", where);
-    if (lflow->stage_hint) {
-        smap_add(&ids, "stage-hint", lflow->stage_hint);
-    }
-    sbrec_logical_flow_set_external_ids(sbflow, &ids);
-    smap_destroy(&ids);
-}
-
-static bool
-delete_lflow_for_lsp(struct ovn_port *op, bool is_update,
-                     const struct sbrec_logical_flow_table *sb_lflow_table,
-                     struct hmap *lflows)
-{
-    struct lflow_ref_node *lfrn;
-    const char *operation = is_update ? "updated" : "deleted";
-    LIST_FOR_EACH_SAFE (lfrn, lflow_list_node, &op->lflows) {
-        VLOG_DBG("Deleting SB lflow "UUID_FMT" for %s port %s",
-                 UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
-
-        const struct sbrec_logical_flow *sblflow =
-            sbrec_logical_flow_table_get_for_uuid(sb_lflow_table,
-                                              &lfrn->lflow->sb_uuid);
-        if (sblflow) {
-            sbrec_logical_flow_delete(sblflow);
-        } else {
-            static struct vlog_rate_limit rl =
-                VLOG_RATE_LIMIT_INIT(1, 1);
-            VLOG_WARN_RL(&rl, "SB lflow "UUID_FMT" not found when handling "
-                         "%s port %s. Recompute.",
-                         UUID_ARGS(&lfrn->lflow->sb_uuid), operation, op->key);
-            return false;
-        }
+void
+lflow_reset_northd_refs(struct lflow_input *lflow_input)
+{
+    struct ovn_port *op;
 
-        ovn_lflow_destroy(lflows, lfrn->lflow);
+    HMAP_FOR_EACH (op, key_node, lflow_input->ls_ports) {
+        lflow_ref_clear(op->lflow_ref);
+        lflow_ref_clear(op->stateful_lflow_ref);
     }
-    return true;
 }
 
 bool
 lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                  struct tracked_ovn_ports *trk_lsps,
                                  struct lflow_input *lflow_input,
-                                 struct hmap *lflows)
+                                 struct lflow_table *lflows)
 {
     struct hmapx_node *hmapx_node;
     struct ovn_port *op;
@@ -17028,13 +16065,15 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
         op = hmapx_node->data;
         /* Make sure 'op' is an lsp and not lrp. */
         ovs_assert(op->nbsp);
-
-        if (!delete_lflow_for_lsp(op, false,
-                                  lflow_input->sbrec_logical_flow_table,
-                                  lflows)) {
-                return false;
-            }
-
+        bool handled = lflow_ref_resync_flows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (!handled) {
+            return false;
+        }
         /* No need to update SB multicast groups, thanks to weak
          * references. */
     }
@@ -17043,13 +16082,8 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
         op = hmapx_node->data;
         /* Make sure 'op' is an lsp and not lrp. */
         ovs_assert(op->nbsp);
-
-        /* Delete old lflows. */
-        if (!delete_lflow_for_lsp(op, true,
-                                  lflow_input->sbrec_logical_flow_table,
-                                  lflows)) {
-            return false;
-        }
+        /* Clear old lflows. */
+        lflow_ref_unlink_lflows(op->lflow_ref);
 
         /* Generate new lflows. */
         struct ds match = DS_EMPTY_INITIALIZER;
@@ -17060,16 +16094,42 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                                  &match, &actions,
                                                  lflows);
 
+        /* Sync the new flows to SB. */
+        bool handled = lflow_ref_sync_lflows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (!handled) {
+            return false;
+        }
+
         if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
             const struct lr_stateful_record *lr_stateful_rec =
                 lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
                                                 op->peer->od->index);
             ovs_assert(lr_stateful_rec);
 
+            /* Clear old lflows. */
+            lflow_ref_unlink_lflows(op->stateful_lflow_ref);
+
+            /* Generate new lflows. */
             build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
                                         lflow_input->lr_stateful_table,
                                         lflow_input->lr_ports,
-                                        lflows, &match, &actions);
+                                        lflows, &match, &actions,
+                                        op->stateful_lflow_ref);
+            handled = lflow_ref_sync_lflows(
+                op->stateful_lflow_ref, lflows, ovnsb_txn,
+                lflow_input->ls_datapaths,
+                lflow_input->lr_datapaths,
+                lflow_input->ovn_internal_version_changed,
+                lflow_input->sbrec_logical_flow_table,
+                lflow_input->sbrec_logical_dp_group_table);
+            if (!handled) {
+                return false;
+            }
         }
 
         ds_destroy(&match);
@@ -17077,13 +16137,6 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
 
         /* SB port_binding is not deleted, so don't update SB multicast
          * groups. */
-
-        /* Sync the new flows to SB. */
-        struct lflow_ref_node *lfrn;
-        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
-            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
-                                  lfrn->lflow);
-        }
     }
 
     HMAPX_FOR_EACH (hmapx_node, &trk_lsps->created) {
@@ -17108,6 +16161,17 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                                  lflow_input->meter_groups,
                                                  &match, &actions, lflows);
 
+        /* Sync the newly added flows to SB. */
+        bool handled = lflow_ref_sync_lflows(
+            op->lflow_ref, lflows, ovnsb_txn, lflow_input->ls_datapaths,
+            lflow_input->lr_datapaths,
+            lflow_input->ovn_internal_version_changed,
+            lflow_input->sbrec_logical_flow_table,
+            lflow_input->sbrec_logical_dp_group_table);
+        if (!handled) {
+            return false;
+        }
+
         if (lsp_is_router(op->nbsp) && op->peer && op->peer->od->nbr) {
             const struct lr_stateful_record *lr_stateful_rec =
                 lr_stateful_table_find_by_index(lflow_input->lr_stateful_table,
@@ -17117,7 +16181,18 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
             build_lsp_lflows_for_lbnats(op, lr_stateful_rec,
                                         lflow_input->lr_stateful_table,
                                         lflow_input->lr_ports,
-                                        lflows, &match, &actions);
+                                        lflows, &match, &actions,
+                                        op->stateful_lflow_ref);
+            handled = lflow_ref_sync_lflows(
+                op->stateful_lflow_ref, lflows, ovnsb_txn,
+                lflow_input->ls_datapaths,
+                lflow_input->lr_datapaths,
+                lflow_input->ovn_internal_version_changed,
+                lflow_input->sbrec_logical_flow_table,
+                lflow_input->sbrec_logical_dp_group_table);
+            if (!handled) {
+                return false;
+            }
         }
 
         ds_destroy(&match);
@@ -17146,13 +16221,6 @@  lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
             sbrec_multicast_group_update_ports_addvalue(sbmc_unknown,
                                                         op->sb);
         }
-
-        /* Sync the newly added flows to SB. */
-        struct lflow_ref_node *lfrn;
-        LIST_FOR_EACH (lfrn, lflow_list_node, &op->lflows) {
-            sync_lsp_lflows_to_sb(ovnsb_txn, lflow_input, lflows,
-                                    lfrn->lflow);
-        }
     }
 
     return true;
diff --git a/northd/northd.h b/northd/northd.h
index 88406bffee..42b4eee607 100644
--- a/northd/northd.h
+++ b/northd/northd.h
@@ -23,6 +23,7 @@ 
 #include "northd/en-port-group.h"
 #include "northd/ipam.h"
 #include "openvswitch/hmap.h"
+#include "ovs-thread.h"
 
 struct northd_input {
     /* Northbound table references */
@@ -164,13 +165,6 @@  struct northd_data {
     struct northd_tracked_data trk_data;
 };
 
-struct lflow_data {
-    struct hmap lflows;
-};
-
-void lflow_data_init(struct lflow_data *);
-void lflow_data_destroy(struct lflow_data *);
-
 struct lr_nat_table;
 
 struct lflow_input {
@@ -182,6 +176,7 @@  struct lflow_input {
     const struct sbrec_logical_flow_table *sbrec_logical_flow_table;
     const struct sbrec_multicast_group_table *sbrec_multicast_group_table;
     const struct sbrec_igmp_group_table *sbrec_igmp_group_table;
+    const struct sbrec_logical_dp_group_table *sbrec_logical_dp_group_table;
 
     /* Indexes */
     struct ovsdb_idl_index *sbrec_mcast_group_by_name_dp;
@@ -201,6 +196,15 @@  struct lflow_input {
     bool ovn_internal_version_changed;
 };
 
+extern int parallelization_state;
+enum {
+    STATE_NULL,               /* parallelization is off */
+    STATE_INIT_HASH_SIZES,    /* parallelization is on; hash sizing needed */
+    STATE_USE_PARALLELIZATION /* parallelization is on */
+};
+
+extern thread_local size_t thread_lflow_counter;
+
 /*
  * Multicast snooping and querier per datapath configuration.
  */
@@ -344,6 +348,179 @@  struct ovn_datapath {
 const struct ovn_datapath *ovn_datapath_find(const struct hmap *datapaths,
                                              const struct uuid *uuid);
 
+struct ovn_datapath *ovn_datapath_from_sbrec(
+    const struct hmap *ls_datapaths, const struct hmap *lr_datapaths,
+    const struct sbrec_datapath_binding *);
+
+static inline bool
+ovn_datapath_is_stale(const struct ovn_datapath *od)
+{
+    return !od->nbr && !od->nbs;
+}
+
+/* Pipeline stages. */
+
+/* The two purposes for which ovn-northd uses OVN logical datapaths. */
+enum ovn_datapath_type {
+    DP_SWITCH,                  /* OVN logical switch. */
+    DP_ROUTER                   /* OVN logical router. */
+};
+
+/* Returns an "enum ovn_stage" built from the arguments.
+ *
+ * (It's better to use ovn_stage_build() for type-safety reasons, but inline
+ * functions can't be used in enums or switch cases.) */
+#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
+    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
+
+/* A stage within an OVN logical switch or router.
+ *
+ * An "enum ovn_stage" indicates whether the stage is part of a logical switch
+ * or router, whether the stage is part of the ingress or egress pipeline, and
+ * the table within that pipeline.  The first three components are combined to
+ * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
+ * S_ROUTER_OUT_DELIVERY. */
+enum ovn_stage {
+#define PIPELINE_STAGES                                                   \
+    /* Logical switch ingress stages. */                                  \
+    PIPELINE_STAGE(SWITCH, IN,  CHECK_PORT_SEC, 0, "ls_in_check_port_sec")   \
+    PIPELINE_STAGE(SWITCH, IN,  APPLY_PORT_SEC, 1, "ls_in_apply_port_sec")   \
+    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    2, "ls_in_lookup_fdb")    \
+    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        3, "ls_in_put_fdb")       \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       7, "ls_in_acl_hint")      \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_EVAL,       8, "ls_in_acl_eval")      \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_ACTION,     9, "ls_in_acl_action")    \
+    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
+    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
+    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  12, "ls_in_lb_aff_check")  \
+    PIPELINE_STAGE(SWITCH, IN,  LB,            13, "ls_in_lb")            \
+    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  14, "ls_in_lb_aff_learn")  \
+    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   15, "ls_in_pre_hairpin")   \
+    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   16, "ls_in_nat_hairpin")   \
+    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       17, "ls_in_hairpin")       \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_EVAL,  18, \
+                   "ls_in_acl_after_lb_eval")  \
+    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB_ACTION,  19, \
+                   "ls_in_acl_after_lb_action")  \
+    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      20, "ls_in_stateful")      \
+    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    21, "ls_in_arp_rsp")       \
+    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  22, "ls_in_dhcp_options")  \
+    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 23, "ls_in_dhcp_response") \
+    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    24, "ls_in_dns_lookup")    \
+    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  25, "ls_in_dns_response")  \
+    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 26, "ls_in_external_port") \
+    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       27, "ls_in_l2_lkup")       \
+    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    28, "ls_in_l2_unknown")    \
+                                                                          \
+    /* Logical switch egress stages. */                                   \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      0, "ls_out_pre_acl")        \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       1, "ls_out_pre_lb")         \
+    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_EVAL,     4, "ls_out_acl_eval")       \
+    PIPELINE_STAGE(SWITCH, OUT, ACL_ACTION,   5, "ls_out_acl_action")     \
+    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     6, "ls_out_qos_mark")       \
+    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    7, "ls_out_qos_meter")      \
+    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     8, "ls_out_stateful")       \
+    PIPELINE_STAGE(SWITCH, OUT, CHECK_PORT_SEC,  9, "ls_out_check_port_sec") \
+    PIPELINE_STAGE(SWITCH, OUT, APPLY_PORT_SEC, 10, "ls_out_apply_port_sec") \
+                                                                      \
+    /* Logical router ingress stages. */                              \
+    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
+    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
+    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
+    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
+    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
+    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
+    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
+    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
+    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
+    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
+    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
+    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
+    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
+    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
+    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
+    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
+    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
+    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
+    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
+    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
+                                                                      \
+    /* Logical router egress stages. */                               \
+    PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       \
+                   "lr_out_chk_dnat_local")                                  \
+    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,             1, "lr_out_undnat")      \
+    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT,        2, "lr_out_post_undnat") \
+    PIPELINE_STAGE(ROUTER, OUT, SNAT,               3, "lr_out_snat")        \
+    PIPELINE_STAGE(ROUTER, OUT, POST_SNAT,          4, "lr_out_post_snat")   \
+    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,           5, "lr_out_egr_loop")    \
+    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,           6, "lr_out_delivery")
+
+#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
+    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
+        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
+    PIPELINE_STAGES
+#undef PIPELINE_STAGE
+};
+
+enum ovn_datapath_type ovn_stage_to_datapath_type(enum ovn_stage stage);
+
+
+/* Returns 'od''s datapath type. */
+static inline enum ovn_datapath_type
+ovn_datapath_get_type(const struct ovn_datapath *od)
+{
+    return od->nbs ? DP_SWITCH : DP_ROUTER;
+}
+
+/* Returns an "enum ovn_stage" built from the arguments. */
+static inline enum ovn_stage
+ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
+                uint8_t table)
+{
+    return OVN_STAGE_BUILD(dp_type, pipeline, table);
+}
+
+/* Returns the pipeline to which 'stage' belongs. */
+static inline enum ovn_pipeline
+ovn_stage_get_pipeline(enum ovn_stage stage)
+{
+    return (stage >> 8) & 1;
+}
+
+/* Returns the pipeline name to which 'stage' belongs. */
+static inline const char *
+ovn_stage_get_pipeline_name(enum ovn_stage stage)
+{
+    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
+}
+
+/* Returns the table to which 'stage' belongs. */
+static inline uint8_t
+ovn_stage_get_table(enum ovn_stage stage)
+{
+    return stage & 0xff;
+}
+
+/* Returns a string name for 'stage'. */
+static inline const char *
+ovn_stage_to_str(enum ovn_stage stage)
+{
+    switch (stage) {
+#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
+        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
+    PIPELINE_STAGES
+#undef PIPELINE_STAGE
+        default: return "<unknown>";
+    }
+}
+
 /* A logical switch port or logical router port.
  *
  * In steady state, an ovn_port points to a northbound Logical_Switch_Port
@@ -434,8 +611,10 @@  struct ovn_port {
     /* Temporarily used for traversing a list (or hmap) of ports. */
     bool visited;
 
-    /* List of struct lflow_ref_node that points to the lflows generated by
-     * this ovn_port.
+    /* Only used for router-type LSPs whose peer is the l3dgw_port. */
+    bool enable_router_port_acl;
+
+    /* References to the lflows generated for this ovn_port.
      *
      * This data is initialized and destroyed by the en_northd node, but
      * populated and used only by the en_lflow node. Ideally this data should
@@ -453,11 +632,16 @@  struct ovn_port {
      * Adding the list here is more straightforward. The drawback is that we
      * need to keep in mind that this data belongs to en_lflow node, so never
      * access it from any other nodes.
+     *
+     * 'lflow_ref' is used to reference the generic logical flows generated
+     * for this ovn_port.
+     *
+     * 'stateful_lflow_ref' is used for logical switch ports of type
+     * 'patch/router' to reference the logical flows generated for this
+     * ovn_port from the 'lr_stateful' record of the peer port's datapath.
      */
-    struct ovs_list lflows;
-
-    /* Only used for the router type LSP whose peer is l3dgw_port */
-    bool enable_router_port_acl;
+    struct lflow_ref *lflow_ref;
+    struct lflow_ref *stateful_lflow_ref;
 };
 
 void ovnnb_db_run(struct northd_input *input_data,
@@ -480,13 +664,17 @@  void northd_destroy(struct northd_data *data);
 void northd_init(struct northd_data *data);
 void northd_indices_create(struct northd_data *data,
                            struct ovsdb_idl *ovnsb_idl);
+
+struct lflow_table;
 void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
                   struct lflow_input *input_data,
-                  struct hmap *lflows);
+                  struct lflow_table *);
+void lflow_reset_northd_refs(struct lflow_input *);
+
 bool lflow_handle_northd_port_changes(struct ovsdb_idl_txn *ovnsb_txn,
                                       struct tracked_ovn_ports *,
                                       struct lflow_input *,
-                                      struct hmap *lflows);
+                                      struct lflow_table *lflows);
 bool northd_handle_sb_port_binding_changes(
     const struct sbrec_port_binding_table *, struct hmap *ls_ports,
     struct hmap *lr_ports);
diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
index 084d675644..e0e60f3559 100644
--- a/northd/ovn-northd.c
+++ b/northd/ovn-northd.c
@@ -848,6 +848,10 @@  main(int argc, char *argv[])
         ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
                              &sbrec_port_group_columns[i]);
     }
+    for (size_t i = 0; i < SBREC_LOGICAL_DP_GROUP_N_COLUMNS; i++) {
+        ovsdb_idl_omit_alert(ovnsb_idl_loop.idl,
+                             &sbrec_logical_dp_group_columns[i]);
+    }
 
     unixctl_command_register("sb-connection-status", "", 0, 0,
                              ovn_conn_show, ovnsb_idl_loop.idl);
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 1825fd3e18..f7d47fc7e4 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -11267,6 +11267,222 @@  CHECK_NO_CHANGE_AFTER_RECOMPUTE
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([Load balancer incremental processing for multiple LBs with same VIPs])
+ovn_start
+
+check ovn-nbctl ls-add sw0
+check ovn-nbctl ls-add sw1
+check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl --wait=sb lb-add lb2 10.0.0.10:80 10.0.0.3:80
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw0 lb1
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+sw0_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" = ""])
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw1 lb2
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+# Clear the SB:Logical_Flow.logical_dp_groups column of all the
+# logical flows and then modify the NB:Load_balancer.  ovn-northd
+# should resync the logical flows.
+for l in $(ovn-sbctl --bare --columns _uuid list logical_flow)
+do
+    ovn-sbctl clear logical_flow $l logical_dp_group
+done
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb set load_balancer lb1 options:foo=bar
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb2 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = "$sw0_uuid"])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" = ""])
+
+# Add back the vip to lb2.
+check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
+
+# Create additional logical switches and associate lb1 to sw0, sw1 and sw2
+# and associate lb2 to sw3, sw4 and sw5
+check ovn-nbctl ls-add sw2
+check ovn-nbctl ls-add sw3
+check ovn-nbctl ls-add sw4
+check ovn-nbctl ls-add sw5
+check ovn-nbctl --wait=sb ls-lb-del sw1 lb2
+check ovn-nbctl ls-lb-add sw1 lb1
+check ovn-nbctl ls-lb-add sw2 lb1
+check ovn-nbctl ls-lb-add sw3 lb2
+check ovn-nbctl ls-lb-add sw4 lb2
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb ls-lb-add sw5 lb2
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+sw1_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
+sw2_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw2)
+sw3_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw3)
+sw4_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw4)
+sw5_uuid=$(fetch_column Datapath_Binding _uuid external_ids:name=sw5)
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+echo "dpgrp_dps - $dpgrp_dps"
+
+# Clear the vips for lb2.  The lb logical flow's dp group should now
+# reference only the sw0, sw1 and sw2 datapaths.
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb2 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [1], [ignore])
+
+# Clear the vips for lb1.  The logical flow should be deleted.
+check ovn-nbctl --wait=sb clear load_balancer lb1 vips
+
+AT_CHECK([ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid], [1], [ignore], [ignore])
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+AT_CHECK([test "$lb_lflow_uuid" = ""])
+
+
+# Now add back the vips, create another lb (lb3) with the same vips, and
+# associate it with sw0 and sw1.
+check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl lb-add lb2 10.0.0.10:80 10.0.0.3:80
+check ovn-nbctl --wait=sb lb-add lb3 10.0.0.10:80 10.0.0.3:80
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+
+check ovn-nbctl ls-lb-add sw0 lb3
+check ovn-nbctl --wait=sb ls-lb-add sw1 lb3
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_uuid=$(fetch_column Logical_flow _uuid match='"ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80"')
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+# Now clear lb1 vips.
+# Since lb3 is associated with sw0 and sw1, the logical flow's dp group
+# should reference sw0 and sw1, but not sw2.
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb1 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+echo "dpgrp dps - $dpgrp_dps"
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+# Now clear lb3 vips.  The logical flow's dp group should now
+# reference only sw3, sw4 and sw5, because lb2 is still
+# associated with them.
+
+check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
+check ovn-nbctl --wait=sb clear load_balancer lb3 vips
+check_engine_stats lflow recompute nocompute
+CHECK_NO_CHANGE_AFTER_RECOMPUTE
+
+lb_lflow_dp=$(ovn-sbctl --bare --columns logical_datapath list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dp" = ""])
+
+lb_lflow_dpgrp=$(ovn-sbctl --bare --columns logical_dp_group list logical_flow $lb_lflow_uuid)
+AT_CHECK([test "$lb_lflow_dpgrp" != ""])
+
+dpgrp_dps=$(ovn-sbctl --bare --columns datapaths list logical_dp_group $lb_lflow_dpgrp)
+
+echo "dpgrp dps - $dpgrp_dps"
+
+AT_CHECK([echo $dpgrp_dps | grep $sw0_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw1_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw2_uuid], [1], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw3_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw4_uuid], [0], [ignore])
+AT_CHECK([echo $dpgrp_dps | grep $sw5_uuid], [0], [ignore])
+
+AT_CLEANUP
+])
+
 OVN_FOR_EACH_NORTHD_NO_HV([
 AT_SETUP([Logical router incremental processing for NAT])