From patchwork Thu Dec 19 03:17:39 2013
X-Patchwork-Submitter: "fan.du"
X-Patchwork-Id: 303152
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <52B26553.9070103@windriver.com>
Date: Thu, 19 Dec 2013 11:17:39 +0800
From: Fan Du
To: Eric Dumazet
CC: Cong Wang
Subject: [PATCHv3 net-next] xfrm: Namespacify xfrm_policy_sk_bundles
In-Reply-To: <1387419308.19078.343.camel@edumazet-glaptop2.roam.corp.google.com>
X-Mailing-List: netdev@vger.kernel.org

On Dec 19, 2013, 10:15, Eric Dumazet wrote:
>> I think an analogy of llist_add_batch for the updating part will be ok for this:
>>
>> 	struct dst_entry *first;
>> 	do {
>> 		xdst->u.dst.next = first = ACCESS_ONCE(xfrm_policy_sk_bundles);
>> 	} while (cmpxchg(&xfrm_policy_sk_bundles, first, &xdst->u.dst) != first);
>>
>> And the deleting part:
>>
>> 	head = xchg(&net->xfrm.xfrm_policy_sk_bundles, NULL);
>
> The point is to _use_ the helpers, so that code review is easy.
>
> llist.h is safe, it was extensively discussed and adopted.
>
> If you want to get rid of the spinlock, then use llist_del_all() for the
> deleting, and llist_add() for the addition.
>
> Really, it's that simple.

Ok, I will use the common API you suggested; the lock is then gone, at the
cost of struct xfrm_dst growing a bit, by sizeof(struct llist_node).

From b4c5fc86861abd98866111a7b1b51dc13d546a0c Mon Sep 17 00:00:00 2001
From: Fan Du
Date: Thu, 19 Dec 2013 11:03:20 +0800
Subject: [PATCHv3 net-next] xfrm: Namespacify xfrm_policy_sk_bundles

xfrm_policy_sk_bundles, protected by net->xfrm.xfrm_policy_sk_bundle_lock,
should be put into the netns xfrm structure; otherwise
xfrm_policy_sk_bundles can be corrupted from a different net namespace.

Moreover, the current xfrm_policy_sk_bundle_lock is used in the two
scenarios below:

1. xfrm_lookup (process context) vs __xfrm_garbage_collect (softirq context)
2. xfrm_lookup (process context) vs __xfrm_garbage_collect (process
   context, when the SPD changes or a device goes down)

We can use a lockless list to cater to both scenarios, as suggested by
Eric Dumazet.

Signed-off-by: Fan Du
Assisted-by: Eric Dumazet
---
v2: Fix incorrect commit log.
v3: Drop xchg, use llist instead, as advised by Eric Dumazet.
    Build tested only.
---
 include/net/netns/xfrm.h |    2 +-
 include/net/xfrm.h       |    1 +
 net/xfrm/xfrm_policy.c   |   26 ++++++--------------------
 3 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
index 1006a26..22f4f90 100644
--- a/include/net/netns/xfrm.h
+++ b/include/net/netns/xfrm.h
@@ -58,9 +58,9 @@ struct netns_xfrm {
 	struct dst_ops		xfrm6_dst_ops;
 #endif
 	spinlock_t		xfrm_state_lock;
-	spinlock_t		xfrm_policy_sk_bundle_lock;
 	rwlock_t		xfrm_policy_lock;
 	struct mutex		xfrm_cfg_mutex;
+	struct llist_head	xp_sk_bundles_list;
 };

 #endif

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 59f5d0a..05296ab 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -957,6 +957,7 @@ struct xfrm_dst {
 	u32 child_mtu_cached;
 	u32 route_cookie;
 	u32 path_cookie;
+	struct llist_node xdst_llist;
 };

 #ifdef CONFIG_XFRM

diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index a7487f3..fb286af 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -39,8 +39,6 @@
 #define XFRM_QUEUE_TMO_MAX ((unsigned)(60*HZ))
 #define XFRM_MAX_QUEUE_LEN	100

-static struct dst_entry *xfrm_policy_sk_bundles;
-
 static DEFINE_SPINLOCK(xfrm_policy_afinfo_lock);
 static struct xfrm_policy_afinfo __rcu *xfrm_policy_afinfo[NPROTO]
						__read_mostly;
@@ -2108,12 +2106,7 @@ struct dst_entry *xfrm_lookup(struct net *net, struct dst_entry *dst_orig,
 			}

 			dst_hold(&xdst->u.dst);
-
-			spin_lock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
-			xdst->u.dst.next = xfrm_policy_sk_bundles;
-			xfrm_policy_sk_bundles = &xdst->u.dst;
-			spin_unlock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
-
+			llist_add(&xdst->xdst_llist, &net->xfrm.xp_sk_bundles_list);
 			route = xdst->route;
 		}
 	}
@@ -2549,18 +2542,12 @@ static struct dst_entry *xfrm_negative_advice(struct dst_entry *dst)

 static void __xfrm_garbage_collect(struct net *net)
 {
-	struct dst_entry *head, *next;
-
-	spin_lock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
-	head = xfrm_policy_sk_bundles;
-	xfrm_policy_sk_bundles = NULL;
-	spin_unlock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
+	struct llist_node *head;
+	struct xfrm_dst *xdst;

-	while (head) {
-		next = head->next;
-		dst_free(head);
-		head = next;
-	}
+	head = llist_del_all(&net->xfrm.xp_sk_bundles_list);
+	llist_for_each_entry(xdst, head, xdst_llist)
+		dst_free(&xdst->u.dst);
 }

 void xfrm_garbage_collect(struct net *net)
@@ -2942,7 +2929,6 @@ static int __net_init xfrm_net_init(struct net *net)
 	/* Initialize the per-net locks here */
 	spin_lock_init(&net->xfrm.xfrm_state_lock);
 	rwlock_init(&net->xfrm.xfrm_policy_lock);
-	spin_lock_init(&net->xfrm.xfrm_policy_sk_bundle_lock);
 	mutex_init(&net->xfrm.xfrm_cfg_mutex);

 	return 0;