From patchwork Wed Dec 2 09:58:02 2009
Date: Wed, 2 Dec 2009 15:28:02 +0530
From: Arun R Bharadwaj
To: Peter Zijlstra, Benjamin Herrenschmidt, Ingo Molnar,
	Vaidyanathan Srinivasan, Dipankar Sarma, Balbir Singh,
	Venkatesh Pallipadi, Arun Bharadwaj
Subject: [v10 PATCH 3/9]: cpuidle: implement a list based approach to register a set of idle routines
Message-ID: <20091202095802.GD27251@linux.vnet.ibm.com>
References: <20091202095427.GA27251@linux.vnet.ibm.com>
In-Reply-To: <20091202095427.GA27251@linux.vnet.ibm.com>
Reply-To: arun@linux.vnet.ibm.com
Cc: linux-arch@vger.kernel.org, linux-acpi@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org

* Arun R Bharadwaj [2009-12-02 15:24:27]:

Implement a list-based registration mechanism for architectures that have
multiple sets of idle routines to register. Currently, on x86 this is done
by simply setting pm_idle = idle_routine, and managing the pm_idle pointer
is messy.

To give an example of how this mechanism works: on x86 the idle routine is
initially selected from the set of poll/mwait/c1e/default idle loops, and
the selected loop is registered with cpuidle as a cpuidle device with a
single idle state. Once ACPI comes up, it registers another set of idle
states on top of this one. Likewise, if a module registers yet another set
of idle loops, that set is added to the head of the list (a rough
driver-side sketch of such a registration follows the patch below). This
provides a clean way of registering and unregistering idle state routines.
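To make the intended semantics concrete, here is a small stand-alone sketch
in plain user-space C (an illustration only, not part of the patch; the
names idle_set, register_idle_set and unregister_idle_set are invented for
the example). The set at the head of the per-cpu list is the one in use:
registering a new set hides the previous one, and unregistering the head
re-exposes whatever was registered before it.

/* Illustrative sketch only -- NOT part of the patch. */
#include <stdio.h>

struct idle_set {
	const char *name;		/* e.g. "arch default", "acpi_idle" */
	struct idle_set *next;		/* next (older) set on the list */
};

static struct idle_set *active;		/* head of the list == set in use */

static void register_idle_set(struct idle_set *set)
{
	set->next = active;		/* push on top, hiding the old set */
	active = set;
}

static void unregister_idle_set(struct idle_set *set)
{
	struct idle_set **p;

	for (p = &active; *p; p = &(*p)->next) {
		if (*p == set) {
			*p = set->next;	/* unlink; older set takes over */
			break;
		}
	}
}

int main(void)
{
	struct idle_set arch_default = { "arch default idle", NULL };
	struct idle_set acpi = { "acpi_idle", NULL };

	register_idle_set(&arch_default);	/* boot: arch default loop */
	register_idle_set(&acpi);		/* ACPI comes up later */
	printf("in use: %s\n", active->name);	/* acpi_idle */

	unregister_idle_set(&acpi);		/* that set goes away again */
	printf("in use: %s\n", active->name);	/* back to arch default */
	return 0;
}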
By contrast, in the current implementation pm_idle only points to the idle
routine that is in use; the previously used routine has to be remembered
separately, and when a module registers or unregisters an idle routine this
bookkeeping becomes confusing.

Signed-off-by: Arun R Bharadwaj
---
 drivers/cpuidle/cpuidle.c |   54 ++++++++++++++++++++++++++++++++++++++++------
 include/linux/cpuidle.h   |    1 
 2 files changed, 48 insertions(+), 7 deletions(-)

Index: linux.trees.git/drivers/cpuidle/cpuidle.c
===================================================================
--- linux.trees.git.orig/drivers/cpuidle/cpuidle.c
+++ linux.trees.git/drivers/cpuidle/cpuidle.c
@@ -22,6 +22,7 @@
 #include "cpuidle.h"
 
 DEFINE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
+DEFINE_PER_CPU(struct list_head, cpuidle_devices_list);
 
 DEFINE_MUTEX(cpuidle_lock);
 
@@ -129,6 +130,45 @@ void cpuidle_resume_and_unlock(void)
 
 EXPORT_SYMBOL_GPL(cpuidle_resume_and_unlock);
 
+int cpuidle_add_to_list(struct cpuidle_device *dev)
+{
+	int ret, cpu = dev->cpu;
+	struct cpuidle_device *old_dev;
+
+	if (!list_empty(&per_cpu(cpuidle_devices_list, cpu))) {
+		old_dev = list_first_entry(&per_cpu(cpuidle_devices_list, cpu),
+					struct cpuidle_device, idle_list);
+		cpuidle_remove_state_sysfs(old_dev);
+	}
+
+	list_add(&dev->idle_list, &per_cpu(cpuidle_devices_list, cpu));
+	ret = cpuidle_add_state_sysfs(dev);
+	return ret;
+}
+
+void cpuidle_remove_from_list(struct cpuidle_device *dev)
+{
+	struct cpuidle_device *temp_dev;
+	struct list_head *pos;
+	int ret, cpu = dev->cpu;
+
+	list_for_each(pos, &per_cpu(cpuidle_devices_list, cpu)) {
+		temp_dev = container_of(pos, struct cpuidle_device, idle_list);
+		if (dev == temp_dev) {
+			list_del(&temp_dev->idle_list);
+			cpuidle_remove_state_sysfs(temp_dev);
+			break;
+		}
+	}
+
+	if (!list_empty(&per_cpu(cpuidle_devices_list, cpu))) {
+		temp_dev = list_first_entry(&per_cpu(cpuidle_devices_list, cpu),
+					struct cpuidle_device, idle_list);
+		ret = cpuidle_add_state_sysfs(temp_dev);
+	}
+	cpuidle_kick_cpus();
+}
+
 /**
  * cpuidle_enable_device - enables idle PM for a CPU
  * @dev: the CPU
@@ -153,9 +193,6 @@ int cpuidle_enable_device(struct cpuidle
 			return ret;
 	}
 
-	if ((ret = cpuidle_add_state_sysfs(dev)))
-		return ret;
-
 	if (cpuidle_curr_governor->enable &&
 	    (ret = cpuidle_curr_governor->enable(dev)))
 		goto fail_sysfs;
@@ -174,7 +211,7 @@ int cpuidle_enable_device(struct cpuidle
 	return 0;
 
 fail_sysfs:
-	cpuidle_remove_state_sysfs(dev);
+	cpuidle_remove_from_list(dev);
 
 	return ret;
 }
@@ -199,8 +236,6 @@ void cpuidle_disable_device(struct cpuid
 
 	if (cpuidle_curr_governor->disable)
 		cpuidle_curr_governor->disable(dev);
-
-	cpuidle_remove_state_sysfs(dev);
 }
 
 EXPORT_SYMBOL_GPL(cpuidle_disable_device);
@@ -271,6 +306,7 @@ int cpuidle_register_device(struct cpuid
 	}
 
 	cpuidle_enable_device(dev);
+	cpuidle_add_to_list(dev);
 
 	mutex_unlock(&cpuidle_lock);
 
@@ -292,6 +328,7 @@ void cpuidle_unregister_device(struct cp
 	cpuidle_pause_and_lock();
 
 	cpuidle_disable_device(dev);
+	cpuidle_remove_from_list(dev);
 
 	per_cpu(cpuidle_devices, dev->cpu) = NULL;
 
@@ -342,12 +379,15 @@ static inline void latency_notifier_init
  */
 static int __init cpuidle_init(void)
 {
-	int ret;
+	int ret, cpu;
 
 	ret = cpuidle_add_class_sysfs(&cpu_sysdev_class);
 	if (ret)
 		return ret;
 
+	for_each_possible_cpu(cpu)
+		INIT_LIST_HEAD(&per_cpu(cpuidle_devices_list, cpu));
+
 	latency_notifier_init(&cpuidle_latency_notifier);
 
 	return 0;
Index: linux.trees.git/include/linux/cpuidle.h
===================================================================
--- linux.trees.git.orig/include/linux/cpuidle.h
+++ linux.trees.git/include/linux/cpuidle.h
@@ -92,6 +92,7 @@ struct cpuidle_device {
 	struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX];
 	struct cpuidle_state	*last_state;
 
+	struct list_head	idle_list;
 	struct kobject		kobj;
 	struct completion	kobj_unregister;
 	void			*governor_data;
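For context, the sketch below shows roughly how a module supplying "another
set of idle loops" would end up on this list: it fills in a cpuidle_device
and registers it through the existing interface, which with this patch also
calls cpuidle_add_to_list(). This is an illustration only, not part of the
patch: my_idle_enter(), the state name and latency numbers are placeholders,
and a real driver would allocate one device per possible CPU rather than a
single static one.

/* Rough sketch (not from the patch): pushing one more set of idle states. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/cpuidle.h>

static int my_idle_enter(struct cpuidle_device *dev,
			 struct cpuidle_state *state)
{
	/* enter the low-power state, return the time spent idle (us) */
	return 0;
}

static struct cpuidle_device my_idle_dev;

static int __init my_idle_init(void)
{
	struct cpuidle_state *st = &my_idle_dev.states[0];

	snprintf(st->name, CPUIDLE_NAME_LEN, "MYC1");
	snprintf(st->desc, CPUIDLE_DESC_LEN, "example idle state");
	st->exit_latency = 1;			/* us, placeholder */
	st->target_residency = 1;		/* us, placeholder */
	st->flags = CPUIDLE_FLAG_TIME_VALID;
	st->enter = my_idle_enter;

	my_idle_dev.state_count = 1;
	my_idle_dev.cpu = 0;			/* one device per CPU in real code */

	/* cpuidle_register_device() now also adds us to the per-cpu list,
	 * so this set becomes the active one until it is unregistered. */
	return cpuidle_register_device(&my_idle_dev);
}

static void __exit my_idle_exit(void)
{
	/* unregistering pops us off the list; the previous set takes over */
	cpuidle_unregister_device(&my_idle_dev);
}

module_init(my_idle_init);
module_exit(my_idle_exit);
MODULE_LICENSE("GPL");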