From patchwork Fri Apr 1 13:21:18 2016
X-Patchwork-Submitter: Marcin Wojtas
X-Patchwork-Id: 604822
X-Patchwork-Delegate: davem@davemloft.net
From: Marcin Wojtas
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 netdev@vger.kernel.org
Cc: davem@davemloft.net, linux@arm.linux.org.uk,
 sebastian.hesselbarth@gmail.com, andrew@lunn.ch, jason@lakedaemon.net,
 thomas.petazzoni@free-electrons.com, gregory.clement@free-electrons.com,
 nadavh@marvell.com, alior@marvell.com, nitroshift@yahoo.com,
 mw@semihalf.com, jaz@semihalf.com
Subject: [PATCH] net: mvneta: fix changing MTU when using per-cpu processing
Date: Fri, 1 Apr 2016 15:21:18 +0200
Message-Id: <1459516878-2802-1-git-send-email-mw@semihalf.com>
X-Mailing-List: netdev@vger.kernel.org

After enabling per-cpu processing it turned out that, under heavy load,
changing the MTU could leave all of the port's interrupts masked, making
it impossible to transmit data after the change.

This commit fixes the issue by disabling per-CPU interrupts for the time
during which the TXQs and RXQs are reconfigured.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index fee6a91..a433de9 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3083,6 +3083,20 @@ static int mvneta_check_mtu_valid(struct net_device *dev, int mtu)
 	return mtu;
 }
 
+static void mvneta_percpu_enable(void *arg)
+{
+	struct mvneta_port *pp = arg;
+
+	enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
+}
+
+static void mvneta_percpu_disable(void *arg)
+{
+	struct mvneta_port *pp = arg;
+
+	disable_percpu_irq(pp->dev->irq);
+}
+
 /* Change the device mtu */
 static int mvneta_change_mtu(struct net_device *dev, int mtu)
 {
@@ -3107,6 +3121,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 	 * reallocation of the queues
 	 */
 	mvneta_stop_dev(pp);
+	on_each_cpu(mvneta_percpu_disable, pp, true);
 
 	mvneta_cleanup_txqs(pp);
 	mvneta_cleanup_rxqs(pp);
@@ -3130,6 +3145,7 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		return ret;
 	}
 
+	on_each_cpu(mvneta_percpu_enable, pp, true);
 	mvneta_start_dev(pp);
 	mvneta_port_up(pp);
 
@@ -3283,20 +3299,6 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
 	pp->phy_dev = NULL;
 }
 
-static void mvneta_percpu_enable(void *arg)
-{
-	struct mvneta_port *pp = arg;
-
-	enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
-}
-
-static void mvneta_percpu_disable(void *arg)
-{
-	struct mvneta_port *pp = arg;
-
-	disable_percpu_irq(pp->dev->irq);
-}
-
 /* Electing a CPU must be done in an atomic way: it should be done
  * after or before the removal/insertion of a CPU and this function is
  * not reentrant.