From patchwork Thu Aug 22 12:43:45 2019
X-Patchwork-Submitter: Vlad Buslov
X-Patchwork-Id: 1151587
X-Patchwork-Delegate: davem@davemloft.net
From: Vlad Buslov
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, jakub.kicinski@netronome.com, pablo@netfilter.org,
    Vlad Buslov, Jiri Pirko
Subject: [PATCH net-next 02/10] net: sched: change tcf block offload counter
 type to atomic_t
Date: Thu, 22 Aug 2019 15:43:45 +0300
Message-Id: <20190822124353.16902-3-vladbu@mellanox.com>
In-Reply-To: <20190822124353.16902-1-vladbu@mellanox.com>
References: <20190822124353.16902-1-vladbu@mellanox.com>

As a preparation for running proto ops functions without rtnl lock, change
offload counter type to atomic. This is necessary to allow updating the
counter by multiple concurrent users when offloading filters to hardware
from unlocked classifiers.
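
For context, the sketch below is a minimal userspace C11 analogue of the
pattern the patch moves to; it is not kernel code and not part of the patch,
and NTHREADS, NITERS and worker() are illustrative names only. Several
threads increment and decrement a shared counter without any external lock,
which is only well-defined because the counter is an atomic type.

/*
 * Userspace C11 analogue of concurrent counter updates: every thread does a
 * balanced number of atomic increments and decrements, so the final value is
 * deterministically zero even though the threads run unserialized.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

static atomic_uint offloadcnt;	/* analogue of tcf_block::offloadcnt */

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < NITERS; i++) {
		atomic_fetch_add(&offloadcnt, 1);	/* analogue of atomic_inc() */
		atomic_fetch_sub(&offloadcnt, 1);	/* analogue of atomic_dec() */
	}
	return NULL;
}

int main(void)
{
	pthread_t th[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&th[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(th[i], NULL);

	printf("offloadcnt = %u\n", atomic_load(&offloadcnt));	/* prints 0 */
	return 0;
}

Build with "cc -std=c11 -pthread"; with a plain unsigned int and ++/--
instead of the atomic operations, the same program would be a data race.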
Signed-off-by: Vlad Buslov
Acked-by: Jiri Pirko
---
 include/net/sch_generic.h | 7 ++++---
 net/sched/cls_api.c       | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index a3eaf5f9d28f..d778c502decd 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -401,7 +402,7 @@ struct tcf_block {
 	struct flow_block flow_block;
 	struct list_head owner_list;
 	bool keep_dst;
-	unsigned int offloadcnt; /* Number of oddloaded filters */
+	atomic_t offloadcnt; /* Number of oddloaded filters */
 	unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
 	struct {
 		struct tcf_chain *chain;
@@ -443,7 +444,7 @@ static inline void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
 	if (*flags & TCA_CLS_FLAGS_IN_HW)
 		return;
 	*flags |= TCA_CLS_FLAGS_IN_HW;
-	block->offloadcnt++;
+	atomic_inc(&block->offloadcnt);
 }
 
 static inline void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
@@ -451,7 +452,7 @@ static inline void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
 	if (!(*flags & TCA_CLS_FLAGS_IN_HW))
 		return;
 	*flags &= ~TCA_CLS_FLAGS_IN_HW;
-	block->offloadcnt--;
+	atomic_dec(&block->offloadcnt);
 }
 
 static inline void
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index a3b07c6e3f53..8502bd006b37 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -629,7 +629,7 @@ static void tc_indr_block_call(struct tcf_block *block,
 
 static bool tcf_block_offload_in_use(struct tcf_block *block)
 {
-	return block->offloadcnt;
+	return atomic_read(&block->offloadcnt);
 }
 
 static int tcf_block_offload_cmd(struct tcf_block *block,
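
For reference, the flag-guarded counting that tcf_block_offload_inc() and
tcf_block_offload_dec() implement can be exercised standalone. The userspace
sketch below mirrors that logic with C11 atomics; the struct, flag bit and
function names are simplified stand-ins chosen for illustration, not the
kernel definitions or uapi values.

/*
 * Standalone userspace sketch (C11) of the flag-guarded counting the helpers
 * in this patch implement: the per-filter IN_HW bit makes inc/dec idempotent,
 * so the block-wide atomic counter only moves on the first offload and the
 * last removal of a given filter.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FLAG_IN_HW (1U << 0)	/* stand-in for TCA_CLS_FLAGS_IN_HW */

struct blk {
	atomic_uint offloadcnt;	/* stand-in for tcf_block::offloadcnt */
};

static void offload_inc(struct blk *b, uint32_t *flags)
{
	if (*flags & FLAG_IN_HW)	/* already counted for this filter */
		return;
	*flags |= FLAG_IN_HW;
	atomic_fetch_add(&b->offloadcnt, 1);
}

static void offload_dec(struct blk *b, uint32_t *flags)
{
	if (!(*flags & FLAG_IN_HW))
		return;
	*flags &= ~FLAG_IN_HW;
	atomic_fetch_sub(&b->offloadcnt, 1);
}

static int offload_in_use(struct blk *b)
{
	/* analogue of tcf_block_offload_in_use() */
	return atomic_load(&b->offloadcnt) != 0;
}

int main(void)
{
	struct blk b = { 0 };
	uint32_t filter_flags = 0;

	offload_inc(&b, &filter_flags);
	offload_inc(&b, &filter_flags);	/* no-op: IN_HW already set */
	assert(offload_in_use(&b));

	offload_dec(&b, &filter_flags);
	assert(!offload_in_use(&b));
	printf("offloadcnt balanced\n");
	return 0;
}

The IN_HW bit keeps the per-filter inc/dec idempotent, so the counter tracks
the number of filters currently in hardware; with the counter now atomic,
those transitions stay correct even when several classifiers report offload
state concurrently, which is what tcf_block_offload_in_use() relies on.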