From patchwork Wed Mar 21 11:00:49 2018
X-Patchwork-Submitter: Marta Rybczynska
X-Patchwork-Id: 888698
Date: Wed, 21 Mar 2018 12:00:49 +0100 (CET)
From: Marta Rybczynska
To: keith.busch@intel.com, axboe@fb.com, hch@lst.de, sagi@grimberg.me,
    linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
    bhelgaas@google.com, linux-pci@vger.kernel.org
Cc: Pierre-Yves Kerbrat
Message-ID: <744877924.5841545.1521630049567.JavaMail.zimbra@kalray.eu>
Subject: [RFC PATCH] nvme: avoid race-conditions when enabling devices

The NVMe driver uses threads for the work at device reset, including
enabling the PCIe device.
When multiple NVMe devices are initialized, their reset work may be
scheduled in parallel. Then pci_enable_device_mem() can be called in
parallel on multiple cores.

This leads to a loop that enables all upstream bridges in
pci_enable_bridge(). pci_enable_bridge() performs multiple operations,
including __pci_set_master() and architecture-specific functions that
in turn call pci_enable_resources(). Both __pci_set_master() and
pci_enable_resources() read the PCI_COMMAND field in PCI config space
and change it. This is done as a read/modify/write.

Imagine that the PCIe tree looks like:

A - B - switch - C - D
              \- E - F

D and F are two NVMe disks, and none of the devices from B down are
enabled or have bus mastering set. If their reset work is scheduled in
parallel, the two modifications of PCI_COMMAND may happen in parallel
without locking, and the system may end up with part of the PCIe tree
not enabled.

The problem may also happen if another device is initialized in
parallel with an NVMe disk.

To avoid the issue, this fix moves pci_enable_device_mem() to the probe
part of the driver, which runs sequentially.

Signed-off-by: Marta Rybczynska
Signed-off-by: Pierre-Yves Kerbrat
---
 drivers/nvme/host/pci.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b6f43b7..af53854 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2515,6 +2515,14 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));
 
+	/*
+	 * Enable the device now to make sure that all accesses to bridges above
+	 * are done without races
+	 */
+	result = pci_enable_device_mem(pdev);
+	if (result)
+		goto release_pools;
+
 	nvme_reset_ctrl(&dev->ctrl);
 
 	return 0;
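
For illustration only, not part of the patch above: a minimal sketch of the
read/modify/write pattern that __pci_set_master() and pci_enable_resources()
apply to a bridge's PCI_COMMAND register, together with one hypothetical
interleaving on the shared bridge B from the diagram. The helper
rmw_pci_command() and the exact bits shown are assumptions made for this
example, not code taken from drivers/pci.

#include <linux/pci.h>

/*
 * Hypothetical helper mirroring the unlocked read/modify/write that the
 * enable paths perform on a bridge's PCI_COMMAND register.
 */
static void rmw_pci_command(struct pci_dev *bridge, u16 enable_bits)
{
	u16 cmd;

	pci_read_config_word(bridge, PCI_COMMAND, &cmd);  /* read   */
	cmd |= enable_bits;                               /* modify */
	pci_write_config_word(bridge, PCI_COMMAND, cmd);  /* write  */
}

/*
 * One possible interleaving when the reset work of D and F both enable
 * the shared bridge B without locking (assuming PCI_COMMAND starts at 0):
 *
 *   CPU0 (reset work of D)                  CPU1 (reset work of F)
 *   rmw_pci_command(B, PCI_COMMAND_MEMORY)
 *     read  PCI_COMMAND -> 0x0000
 *                                           rmw_pci_command(B, PCI_COMMAND_MASTER)
 *                                             read  PCI_COMMAND -> 0x0000
 *     write PCI_COMMAND <- 0x0002
 *                                             write PCI_COMMAND <- 0x0004
 *
 * CPU1's write is based on a stale value and drops the bit CPU0 just set,
 * so B ends up with memory decoding disabled. Doing the enable from the
 * sequential probe path avoids this interleaving.
 */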