From patchwork Wed May 19 22:02:43 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Daniel Jurgens
X-Patchwork-Id: 1481314
From: Daniel Jurgens
To: kernel-team@lists.ubuntu.com
Cc: vlad@nvidia.com, danielj@nvidia.com
Subject: [SRU][F:linux-bluefield][PATCH v2 2/2] netfilter: flowtable: Make nf_flow_table_offload_add/del_cb inline
Date: Thu, 20 May 2021 01:02:43 +0300
Message-Id: <1621461763-176157-3-git-send-email-danielj@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1621461763-176157-1-git-send-email-danielj@nvidia.com>
References: <1621461763-176157-1-git-send-email-danielj@nvidia.com>

From: Alaa Hleihel

BugLink: https://bugs.launchpad.net/bugs/1927246

Currently, nf_flow_table_offload_add/del_cb are exported by the
nf_flow_table module, so any module using them has a hard dependency on
nf_flow_table and requires it to be loaded at all times. This can lead
to unnecessary overhead on systems that do not use this API.

To relax the hard dependency between the modules, unexport these
functions and make them static inline.

Fixes: 978703f42549 ("netfilter: flowtable: Add API for registering to flow table events")
Signed-off-by: Alaa Hleihel
Reviewed-by: Roi Dayan
Reviewed-by: Marcelo Ricardo Leitner
Signed-off-by: David S. Miller
(backported from commit 505ee3a1cab96785fbc2c7cdb41ab677ec270c3c)
Signed-off-by: Daniel Jurgens
---
v2: Apply after "netfilter: flowtable: Free block_cb when being deleted"
to avoid merge conflict for the maintainers.
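Not part of the patch, for reviewers' context: a minimal sketch of how a
consumer module might register and unregister a callback with the
now-inline helpers. The example_* names are hypothetical; flow_setup_cb_t
and the two helpers are taken from <net/netfilter/nf_flow_table.h>, so
this code compiles into the caller and, as the commit message notes, adds
no module dependency on nf_flow_table.

#include <linux/netdevice.h>
#include <net/netfilter/nf_flow_table.h>

/* Hypothetical block callback matching the flow_setup_cb_t signature. */
static int example_setup_cb(enum tc_setup_type type, void *type_data,
			    void *cb_priv)
{
	/* Handle flow offload commands for the bound flowtable here. */
	return 0;
}

static int example_bind(struct nf_flowtable *flow_table, void *priv)
{
	/* Inlined into this module; returns -EEXIST if already registered. */
	return nf_flow_table_offload_add_cb(flow_table, example_setup_cb, priv);
}

static void example_unbind(struct nf_flowtable *flow_table, void *priv)
{
	nf_flow_table_offload_del_cb(flow_table, example_setup_cb, priv);
}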
 include/net/netfilter/nf_flow_table.h | 49 ++++++++++++++++++++++++++++++++---
 net/netfilter/nf_flow_table_core.c    | 46 --------------------------------
 2 files changed, 45 insertions(+), 50 deletions(-)

diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index 0c11d3a..f3c8f18 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -165,10 +165,51 @@ struct nf_flow_route {
 struct flow_offload *flow_offload_alloc(struct nf_conn *ct);
 void flow_offload_free(struct flow_offload *flow);
 
-int nf_flow_table_offload_add_cb(struct nf_flowtable *flow_table,
-				 flow_setup_cb_t *cb, void *cb_priv);
-void nf_flow_table_offload_del_cb(struct nf_flowtable *flow_table,
-				  flow_setup_cb_t *cb, void *cb_priv);
+static inline int
+nf_flow_table_offload_add_cb(struct nf_flowtable *flow_table,
+			     flow_setup_cb_t *cb, void *cb_priv)
+{
+	struct flow_block *block = &flow_table->flow_block;
+	struct flow_block_cb *block_cb;
+	int err = 0;
+
+	mutex_lock(&flow_table->flow_block_lock);
+	block_cb = flow_block_cb_lookup(block, cb, cb_priv);
+	if (block_cb) {
+		err = -EEXIST;
+		goto unlock;
+	}
+
+	block_cb = flow_block_cb_alloc(cb, cb_priv, cb_priv, NULL);
+	if (IS_ERR(block_cb)) {
+		err = PTR_ERR(block_cb);
+		goto unlock;
+	}
+
+	list_add_tail(&block_cb->list, &block->cb_list);
+
+unlock:
+	mutex_unlock(&flow_table->flow_block_lock);
+	return err;
+}
+
+static inline void
+nf_flow_table_offload_del_cb(struct nf_flowtable *flow_table,
+			     flow_setup_cb_t *cb, void *cb_priv)
+{
+	struct flow_block *block = &flow_table->flow_block;
+	struct flow_block_cb *block_cb;
+
+	mutex_lock(&flow_table->flow_block_lock);
+	block_cb = flow_block_cb_lookup(block, cb, cb_priv);
+	if (block_cb) {
+		list_del(&block_cb->list);
+		flow_block_cb_free(block_cb);
+	} else {
+		WARN_ON(true);
+	}
+	mutex_unlock(&flow_table->flow_block_lock);
+}
 
 int flow_offload_route_init(struct flow_offload *flow,
 			    const struct nf_flow_route *route);
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index e0c0af2..70bd67b 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -390,52 +390,6 @@ static void nf_flow_offload_work_gc(struct work_struct *work)
 	queue_delayed_work(system_power_efficient_wq, &flow_table->gc_work, HZ);
 }
 
-int nf_flow_table_offload_add_cb(struct nf_flowtable *flow_table,
-				 flow_setup_cb_t *cb, void *cb_priv)
-{
-	struct flow_block *block = &flow_table->flow_block;
-	struct flow_block_cb *block_cb;
-	int err = 0;
-
-	mutex_lock(&flow_table->flow_block_lock);
-	block_cb = flow_block_cb_lookup(block, cb, cb_priv);
-	if (block_cb) {
-		err = -EEXIST;
-		goto unlock;
-	}
-
-	block_cb = flow_block_cb_alloc(cb, cb_priv, cb_priv, NULL);
-	if (IS_ERR(block_cb)) {
-		err = PTR_ERR(block_cb);
-		goto unlock;
-	}
-
-	list_add_tail(&block_cb->list, &block->cb_list);
-
-unlock:
-	mutex_unlock(&flow_table->flow_block_lock);
-	return err;
-}
-EXPORT_SYMBOL_GPL(nf_flow_table_offload_add_cb);
-
-void nf_flow_table_offload_del_cb(struct nf_flowtable *flow_table,
-				  flow_setup_cb_t *cb, void *cb_priv)
-{
-	struct flow_block *block = &flow_table->flow_block;
-	struct flow_block_cb *block_cb;
-
-	mutex_lock(&flow_table->flow_block_lock);
-	block_cb = flow_block_cb_lookup(block, cb, cb_priv);
-	if (block_cb) {
-		list_del(&block_cb->list);
-		flow_block_cb_free(block_cb);
-	} else {
-		WARN_ON(true);
-	}
-	mutex_unlock(&flow_table->flow_block_lock);
-}
-EXPORT_SYMBOL_GPL(nf_flow_table_offload_del_cb);
-
 static int nf_flow_nat_port_tcp(struct sk_buff *skb, unsigned int thoff,
 				__be16 port, __be16 new_port)
 {