From patchwork Fri Jul 3 14:25:51 2015
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 491104
X-Patchwork-Delegate: davem@davemloft.net
From: Maxime Ripard
To: Thomas Gleixner, Gregory Clement, Jason Cooper, Andrew Lunn,
    Sebastian Hesselbarth, Thomas Petazzoni, "David S. Miller"
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Maxime Ripard
Subject: [PATCH 6/6] net: mvneta: Statically assign queues to CPUs
Date: Fri, 3 Jul 2015 16:25:51 +0200
Message-Id: <1435933551-28696-7-git-send-email-maxime.ripard@free-electrons.com>
X-Mailer: git-send-email 2.4.5
In-Reply-To: <1435933551-28696-1-git-send-email-maxime.ripard@free-electrons.com>
References: <1435933551-28696-1-git-send-email-maxime.ripard@free-electrons.com>
X-Mailing-List: netdev@vger.kernel.org

Since the switch to per-CPU interrupts, we lost the ability to set which
CPU was going to receive our RX interrupt; it is now simply the CPU on
which the mvneta_open function happened to run.

We can now assign our queues to their respective CPUs and make sure that
only this CPU handles our traffic.

This also paves the way for changing that assignment at runtime, and
later on for supporting RSS.

Signed-off-by: Maxime Ripard
---
 drivers/net/ethernet/marvell/mvneta.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 0d21b8a779d9..658d713abc18 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2630,6 +2630,13 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
 	pp->phy_dev = NULL;
 }
 
+static void mvneta_percpu_enable(void *arg)
+{
+	struct mvneta_port *pp = arg;
+
+	enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
+}
+
 static int mvneta_open(struct net_device *dev)
 {
 	struct mvneta_port *pp = netdev_priv(dev);
@@ -2655,6 +2662,19 @@ static int mvneta_open(struct net_device *dev)
 		goto err_cleanup_txqs;
 	}
 
+	/*
+	 * Even though the documentation says that request_percpu_irq
+	 * doesn't enable the interrupts automatically, it actually
+	 * does so on the local CPU.
+	 *
+	 * Make sure it's disabled.
+	 */
+	disable_percpu_irq(pp->dev->irq);
+
+	/* Enable per-CPU interrupt on the one CPU we care about */
+	smp_call_function_single(rxq_def % num_online_cpus(),
+				 mvneta_percpu_enable, pp, true);
+
 	/* In default link is down */
 	netif_carrier_off(pp->dev);
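
For context, enable_percpu_irq() and disable_percpu_irq() only act on the CPU
they are called from, which is why the hunk above bounces the enable over to
the target CPU with smp_call_function_single(). The snippet below is a minimal
standalone sketch of that pattern, not part of the patch; my_percpu_enable(),
my_enable_on_cpu() and their parameters are hypothetical names used only for
illustration.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/smp.h>

static void my_percpu_enable(void *arg)
{
	unsigned int irq = *(unsigned int *)arg;

	/* Runs on whichever CPU smp_call_function_single() selected */
	enable_percpu_irq(irq, IRQ_TYPE_NONE);
}

static void my_enable_on_cpu(unsigned int irq, int target_cpu)
{
	/* Undo the implicit local enable done by request_percpu_irq() */
	disable_percpu_irq(irq);

	/*
	 * Enable the interrupt only on the CPU meant to handle it.
	 * wait=true, so passing a pointer to the local "irq" is safe.
	 */
	smp_call_function_single(target_cpu % num_online_cpus(),
				 my_percpu_enable, &irq, true);
}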