From patchwork Mon Sep 5 13:08:32 2016
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 665806
X-Patchwork-Delegate: davem@davemloft.net
Date: Mon, 5 Sep 2016 09:08:32 -0400
From: Tejun Heo
To: Dmitry Vyukov
Cc: Jiri Slaby, Marcel Holtmann, Gustavo Padovan, Johan Hedberg,
 "David S. Miller", linux-bluetooth, netdev, LKML, syzkaller,
 Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet,
 Takashi Iwai
Miller" , linux-bluetooth , netdev , LKML , syzkaller , Kostya Serebryany , Alexander Potapenko , Sasha Levin , Eric Dumazet , Takashi Iwai Subject: Re: net/bluetooth: workqueue destruction WARNING in hci_unregister_dev Message-ID: <20160905130832.GD20784@mtj.duckdns.org> References: <56C70618.3010902@suse.cz> <20160302154507.GC4282@mtj.duckdns.org> <56D7FFE1.90900@suse.cz> <20160311171205.GB24046@htj.duckdns.org> <56EA9C4D.2080803@suse.cz> <20160318205231.GO20028@mtj.duckdns.org> <56F01A1C.40208@suse.cz> <56F0FDCE.1040701@suse.cz> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.6.2 (2016-07-01) Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Hello, On Sat, Sep 03, 2016 at 12:58:33PM +0200, Dmitry Vyukov wrote: > > I've seen it only several times in several months, so I don't it will > > be helpful. > > > Bad news: I hit it again. > On 0f98f121e1670eaa2a2fbb675e07d6ba7f0e146f of linux-next, so I have > bf389cabb3b8079c23f9762e62b05f291e2d5e99. Hmmm... we're not getting anywhere. I've applied the following patch to wq/for-4.8-fixes so that when this happens the next time we can actually tell what the hell is going wrong. Thanks. ------ 8< ------ From 278930ada88c972d20025b0f20def27b1a09dff7 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Mon, 5 Sep 2016 08:54:06 -0400 Subject: [PATCH] workqueue: dump workqueue state on sanity check failures in destroy_workqueue() destroy_workqueue() performs a number of sanity checks to ensure that the workqueue is empty before proceeding with destruction. However, it's not always easy to tell what's going on just from the warning message. Let's dump workqueue state after sanity check failures to help debugging. Signed-off-by: Tejun Heo Link: http://lkml.kernel.org/r/CACT4Y+Zs6vkjHo9qHb4TrEiz3S4+quvvVQ9VWvj2Mx6pETGb9Q@mail.gmail.com Cc: Dmitry Vyukov --- kernel/workqueue.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/kernel/workqueue.c b/kernel/workqueue.c index ef071ca..4eaec8b8 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -4021,6 +4021,7 @@ void destroy_workqueue(struct workqueue_struct *wq) for (i = 0; i < WORK_NR_COLORS; i++) { if (WARN_ON(pwq->nr_in_flight[i])) { mutex_unlock(&wq->mutex); + show_workqueue_state(); return; } } @@ -4029,6 +4030,7 @@ void destroy_workqueue(struct workqueue_struct *wq) WARN_ON(pwq->nr_active) || WARN_ON(!list_empty(&pwq->delayed_works))) { mutex_unlock(&wq->mutex); + show_workqueue_state(); return; } }