From patchwork Fri Apr 20 06:01:01 2012
X-Patchwork-Submitter: Yong Zhang
X-Patchwork-Id: 153938
X-Patchwork-Delegate: davem@davemloft.net
Date: Fri, 20 Apr 2012 14:01:01 +0800
From: Yong Zhang
To: Stephen Boyd
Cc: linux-kernel@vger.kernel.org, Tejun Heo, netdev@vger.kernel.org, Ben Dooks
Subject: Re: [PATCH 1/2] workqueue: Catch more locking problems with flush_work()
Message-ID: <20120420060101.GA16563@zhy>
In-Reply-To: <20120420052633.GA16219@zhy>
References: <1334805958-29119-1-git-send-email-sboyd@codeaurora.org>
 <20120419081002.GB3963@zhy> <4F905B30.4080501@codeaurora.org>
 <20120420052633.GA16219@zhy>
X-Mailing-List: netdev@vger.kernel.org

On Fri, Apr 20, 2012 at 01:26:33PM +0800, Yong Zhang wrote:
> On Thu, Apr 19, 2012 at 11:36:32AM -0700, Stephen Boyd wrote:
> > Does looking at the second patch help? Basically schedule_work() can run
> > the callback right between the time the mutex is acquired and
> > flush_work() is called:
> >
> > CPU0                          CPU1
> >
> > schedule_work()               mutex_lock(&mutex)
> >
> > my_work()                     flush_work()
> >   mutex_lock(&mutex)
> >
>
> Got your point. It is a problem. But your patch could introduce a false
> positive, since when flush_work() is called that very work may have
> finished running already.
>
> So I think we need the lock_map_acquire()/lock_map_release() only when
> the work is being processed, no?

But start_flush_work() has already tried to take care of this issue, except
that it doesn't add work->lockdep_map into the chain. So does the patch
below help?
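(To spell out the caller pattern being discussed, here is a minimal made-up
sketch; my_mutex, my_work_fn and my_caller are illustrative names only, not
anything from Stephen's patch:)

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(my_mutex);

static void my_work_fn(struct work_struct *work)
{
	mutex_lock(&my_mutex);
	/* ... touch data protected by my_mutex ... */
	mutex_unlock(&my_mutex);
}
static DECLARE_WORK(my_work, my_work_fn);

static void my_caller(void)
{
	schedule_work(&my_work);

	mutex_lock(&my_mutex);
	/*
	 * If my_work_fn() has already started and is blocked on my_mutex,
	 * flush_work() below waits for it while we still hold the mutex:
	 * an ABBA deadlock that lockdep should be able to report.
	 */
	flush_work(&my_work);
	mutex_unlock(&my_mutex);
}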
Thanks,
Yong

---
From: Yong Zhang
Date: Fri, 20 Apr 2012 13:44:16 +0800
Subject: [PATCH] workqueue:lockdep: make flush_work notice deadlock

Connect the lock chain by acquiring work->lockdep_map when the
to-be-flushed work is running.

Signed-off-by: Yong Zhang
Reported-by: Stephen Boyd
---
 kernel/workqueue.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc867e8..c096b05 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2461,6 +2461,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		lock_map_acquire(&cwq->wq->lockdep_map);
 	else
 		lock_map_acquire_read(&cwq->wq->lockdep_map);
+	lock_map_acquire(&work->lockdep_map);
+	lock_map_release(&work->lockdep_map);
 	lock_map_release(&cwq->wq->lockdep_map);
 
 	return true;
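FWIW, the empty acquire()/release() pair on work->lockdep_map is just the
usual lockdep idiom for "this context may end up waiting on that lock class"
without actually holding anything. And since process_one_work() already takes
the work's lockdep_map around the callback, a lock that is held across
flush_work() and also taken inside the work function should then show up to
lockdep as a cycle.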