From patchwork Tue Nov 23 16:15:12 2010
X-Patchwork-Submitter: Vladislav Zolotarov
X-Patchwork-Id: 72693
X-Patchwork-Delegate: davem@davemloft.net
Subject: [PATCH net-next] bnx2x: Resolving a possible dead-lock situation
From: "Vladislav Zolotarov"
To: "Dave Miller"
Cc: "Eilon Greenstein", "netdev list"
Date: Tue, 23 Nov 2010 18:15:12 +0200
Message-ID: <1290528912.15453.3.camel@lb-tlvb-vladz>
X-Mailing-List: netdev@vger.kernel.org

There is a possible deadlock between sch_direct_xmit(), which runs in
softirq context, and bnx2x_tx_int() when the latter is called from the
ethtool self-test flow (syscall context). To prevent the deadlock,
disable bottom halves on the local CPU when taking the tx_lock from
bnx2x_tx_int(), i.e. use __netif_tx_lock_bh(txq).

The path in bnx2x_tx_int() where the tx_lock is taken should be hit
very rarely, so the performance penalty of this change should be
minimal.

Signed-off-by: Vladislav Zolotarov
Signed-off-by: Eilon Greenstein
---
 drivers/net/bnx2x/bnx2x_cmn.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x_cmn.c b/drivers/net/bnx2x/bnx2x_cmn.c
index 94d5f59..5189788 100644
--- a/drivers/net/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/bnx2x/bnx2x_cmn.c
@@ -144,14 +144,14 @@ int bnx2x_tx_int(struct bnx2x_fastpath *fp)
 		 * stops the queue */
 
-		__netif_tx_lock(txq, smp_processor_id());
+		__netif_tx_lock_bh(txq);
 
 		if ((netif_tx_queue_stopped(txq)) &&
 		    (bp->state == BNX2X_STATE_OPEN) &&
 		    (bnx2x_tx_avail(fp) >= MAX_SKB_FRAGS + 3))
 			netif_tx_wake_queue(txq);
 
-		__netif_tx_unlock(txq);
+		__netif_tx_unlock_bh(txq);
 	}
 	return 0;
 }
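
[ Note appended after the patch, not part of the commit message or the
  diff: a rough sketch of the deadlock being avoided. The call chains
  are simplified; sch_direct_xmit() and HARD_TX_LOCK() are the generic
  networking-core names of that era, not bnx2x code. ]

	CPU0, syscall context (ethtool self-test):
	    bnx2x_tx_int(fp)
	        __netif_tx_lock(txq, smp_processor_id()); /* takes txq->_xmit_lock */
	            <a softirq fires on the same CPU while the lock is held>

	CPU0, softirq context:
	    sch_direct_xmit()
	        HARD_TX_LOCK(dev, txq, smp_processor_id()); /* spins on the same
	                                                       txq->_xmit_lock forever */

  Taking the lock with __netif_tx_lock_bh(txq) disables bottom halves on
  the local CPU first, so the softirq transmit path can no longer preempt
  the lock holder and spin on a lock that will never be released.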