From patchwork Wed Apr 24 11:42:55 2013
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 239169
X-Patchwork-Delegate: davem@davemloft.net
From: Viresh Kumar
To: tj@kernel.org
Cc: davem@davemloft.net, airlied@redhat.com, axboe@kernel.dk,
	tglx@linutronix.de, peterz@infradead.org, mingo@redhat.com,
	rostedt@goodmis.org, linux-rt-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, robin.randhawa@arm.com,
	Steve.Bannister@arm.com, Liviu.Dudau@arm.com,
	charles.garcia-tobin@arm.com, arvind.chauhan@arm.com,
	linaro-kernel@lists.linaro.org, patches@linaro.org,
	Viresh Kumar , netdev@vger.kernel.org
Subject: [PATCH V5 3/5] PHYLIB: queue work on system_power_efficient_wq
Date: Wed, 24 Apr 2013 17:12:55 +0530

Phylib uses workqueues for multiple purposes. There is no real dependency
on running these work items on the CPU that queued them. On an idle
system, an otherwise idle CPU is observed to wake up many times just to
service this work. It would be better to run the work on a CPU the
scheduler believes to be the most appropriate one.

This patch replaces system_wq with system_power_efficient_wq for PHYLIB.

Cc: David S. Miller
Cc: netdev@vger.kernel.org
Signed-off-by: Viresh Kumar
Acked-by: David S. Miller
---
 drivers/net/phy/phy.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index c14f147..984c0b5 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -439,7 +439,7 @@ void phy_start_machine(struct phy_device *phydev,
 {
 	phydev->adjust_state = handler;
 
-	schedule_delayed_work(&phydev->state_queue, HZ);
+	queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, HZ);
 }
 
 /**
@@ -500,7 +500,7 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
 	disable_irq_nosync(irq);
 	atomic_inc(&phydev->irq_disable);
 
-	schedule_work(&phydev->phy_queue);
+	queue_work(system_power_efficient_wq, &phydev->phy_queue);
 
 	return IRQ_HANDLED;
 }
@@ -655,7 +655,7 @@ static void phy_change(struct work_struct *work)
 
 	/* reschedule state queue work to run as soon as possible */
 	cancel_delayed_work_sync(&phydev->state_queue);
-	schedule_delayed_work(&phydev->state_queue, 0);
+	queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, 0);
 
 	return;
 
@@ -918,7 +918,8 @@ void phy_state_machine(struct work_struct *work)
 	if (err < 0)
 		phy_error(phydev);
 
-	schedule_delayed_work(&phydev->state_queue, PHY_STATE_TIME * HZ);
+	queue_delayed_work(system_power_efficient_wq, &phydev->state_queue,
+			PHY_STATE_TIME * HZ);
 }
 
 static inline void mmd_phy_indirect(struct mii_bus *bus, int prtad, int devad,
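
The conversion above follows a general pattern: any work item with no CPU-affinity
requirement can be moved from system_wq to system_power_efficient_wq, which behaves
identically unless CONFIG_WQ_POWER_EFFICIENT (or the workqueue.power_efficient boot
parameter) is set, in which case it is unbound and the scheduler picks the target CPU.
A minimal sketch of the pattern in a hypothetical driver — struct my_dev and
my_poll() are illustrative, not part of phylib:

	#include <linux/workqueue.h>

	/* Hypothetical driver state; not from this patch. */
	struct my_dev {
		struct delayed_work poll_work;
	};

	static void my_poll(struct work_struct *work)
	{
		struct my_dev *dev = container_of(work, struct my_dev,
						  poll_work.work);

		/* ... service the device ... */

		/*
		 * Re-arm on the power-efficient workqueue instead of calling
		 * schedule_delayed_work() (which targets system_wq), so an
		 * idle CPU need not wake just to run this poll.
		 */
		queue_delayed_work(system_power_efficient_wq,
				   &dev->poll_work, HZ);
	}

Work items that do rely on running on the queueing CPU (e.g. per-CPU statistics)
should not be converted this way.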