From patchwork Wed Feb 11 13:06:23 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Stancek
X-Patchwork-Id: 438771
X-Patchwork-Delegate: davem@davemloft.net
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by ozlabs.org (Postfix) with ESMTP id C83BE1401AB
	for ; Thu, 12 Feb 2015 00:06:50 +1100 (AEDT)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752831AbbBKNGd (ORCPT );
	Wed, 11 Feb 2015 08:06:33 -0500
Received: from mx1.redhat.com ([209.132.183.28]:41772 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752746AbbBKNGa (ORCPT );
	Wed, 11 Feb 2015 08:06:30 -0500
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id t1BD6S3f026537
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Wed, 11 Feb 2015 08:06:28 -0500
Received: from dustball.brq.redhat.com (dustball.brq.redhat.com [10.34.26.57])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id t1BD6QPC011688; Wed, 11 Feb 2015 08:06:27 -0500
From: Jan Stancek
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, jstancek@redhat.com
Subject: [PATCH] ipv6: fix possible deadlock in ip6_fl_purge / ip6_fl_gc
Date: Wed, 11 Feb 2015 14:06:23 +0100
Message-Id:
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

Use spin_lock_bh in ip6_fl_purge() to prevent the following potential
deadlock scenario between ip6_fl_purge() and the ip6_fl_gc() timer.

=================================
[ INFO: inconsistent lock state ]
3.19.0 #1 Not tainted
---------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/5/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
 (ip6_fl_lock){+.?...}, at: [] ip6_fl_gc+0x2d/0x180
{SOFTIRQ-ON-W} state was registered at:
  [] __lock_acquire+0x4a0/0x10b0
  [] lock_acquire+0xc4/0x2b0
  [] _raw_spin_lock+0x3d/0x80
  [] ip6_flowlabel_net_exit+0x28/0x110
  [] ops_exit_list.isra.1+0x39/0x60
  [] cleanup_net+0x100/0x1e0
  [] process_one_work+0x20a/0x830
  [] worker_thread+0x11b/0x460
  [] kthread+0x104/0x120
  [] ret_from_fork+0x7c/0xb0
irq event stamp: 84640
hardirqs last enabled at (84640): [] _raw_spin_unlock_irq+0x30/0x50
hardirqs last disabled at (84639): [] _raw_spin_lock_irq+0x1f/0x80
softirqs last enabled at (84628): [] _local_bh_enable+0x21/0x50
softirqs last disabled at (84629): [] irq_exit+0x12d/0x150

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(ip6_fl_lock);
  <Interrupt>
    lock(ip6_fl_lock);

 *** DEADLOCK ***

Signed-off-by: Jan Stancek
---
 net/ipv6/ip6_flowlabel.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index 2f780cb..f45d6db 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -172,7 +172,7 @@ static void __net_exit ip6_fl_purge(struct net *net)
 {
 	int i;
 
-	spin_lock(&ip6_fl_lock);
+	spin_lock_bh(&ip6_fl_lock);
 	for (i = 0; i <= FL_HASH_MASK; i++) {
 		struct ip6_flowlabel *fl;
 		struct ip6_flowlabel __rcu **flp;
@@ -190,7 +190,7 @@ static void __net_exit ip6_fl_purge(struct net *net)
 			flp = &fl->next;
 		}
 	}
-	spin_unlock(&ip6_fl_lock);
+	spin_unlock_bh(&ip6_fl_lock);
 }
 
 static struct ip6_flowlabel *fl_intern(struct net *net,