From patchwork Tue Jul 10 02:33:25 2012
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 170018
X-Patchwork-Delegate: davem@davemloft.net
Subject: [set4 resend PATCH 1/5] async: introduce 'async_domain' type
To: jgarzik@pobox.com, JBottomley@parallels.com
From: Dan Williams
Cc: linux-scsi@vger.kernel.org, Eldad Zack, Mark Brown,
    linux-ide@vger.kernel.org, Arjan van de Ven, Liam Girdwood
Date: Mon, 09 Jul 2012 19:33:25 -0700
Message-ID: <20120710023325.26249.73156.stgit@dwillia2-linux.jf.intel.com>
In-Reply-To: <20120710023241.26249.13718.stgit@dwillia2-linux.jf.intel.com>
References: <20120710023241.26249.13718.stgit@dwillia2-linux.jf.intel.com>
User-Agent: StGit/0.16-1-g7004
X-Mailing-List: linux-ide@vger.kernel.org

This is in preparation for teaching async_synchronize_full() to sync
all pending async work, not just the async_running domain.  This
conversion is functionally equivalent; it only embeds the existing
list in a new async_domain type.

The .registered attribute is used in a later patch to distinguish
between domains that want to be flushed by async_synchronize_full()
and those that only expect async_synchronize_{full|cookie}_domain to
be used for flushing.
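As an illustration only (not part of the patch), a minimal sketch of how a
caller declares and flushes a private domain once the new type is in place;
the function and variable names below are hypothetical:

	#include <linux/async.h>

	static void my_async_fn(void *data, async_cookie_t cookie)
	{
		/* work that may run in parallel with other async calls */
	}

	static void example_probe(void *my_data)
	{
		/* private domain: not flushed by async_synchronize_full() */
		ASYNC_DOMAIN_EXCLUSIVE(domain);

		async_schedule_domain(my_async_fn, my_data, &domain);

		/* wait only for work scheduled on this domain */
		async_synchronize_full_domain(&domain);
	}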
Cc: Liam Girdwood
Cc: James Bottomley
Acked-by: Arjan van de Ven
Acked-by: Mark Brown
Tested-by: Eldad Zack
Signed-off-by: Dan Williams
---
 drivers/regulator/core.c      |    2 +-
 drivers/scsi/libsas/sas_ata.c |    2 +-
 drivers/scsi/scsi.c           |    3 ++-
 drivers/scsi/scsi_priv.h      |    2 +-
 include/linux/async.h         |   35 +++++++++++++++++++++++++++++++----
 kernel/async.c                |   35 +++++++++++++++++------------------
 sound/soc/soc-dapm.c          |    2 +-
 7 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index 09a737c..4293aae 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -2744,7 +2744,7 @@ static void regulator_bulk_enable_async(void *data, async_cookie_t cookie)
 int regulator_bulk_enable(int num_consumers,
			  struct regulator_bulk_data *consumers)
 {
-	LIST_HEAD(async_domain);
+	ASYNC_DOMAIN_EXCLUSIVE(async_domain);
	int i;
	int ret = 0;
diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
index bec3bc8..a59fcdc 100644
--- a/drivers/scsi/libsas/sas_ata.c
+++ b/drivers/scsi/libsas/sas_ata.c
@@ -742,7 +742,7 @@ static void async_sas_ata_eh(void *data, async_cookie_t cookie)
 void sas_ata_strategy_handler(struct Scsi_Host *shost)
 {
	struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
-	LIST_HEAD(async);
+	ASYNC_DOMAIN_EXCLUSIVE(async);
	int i;
	/* it's ok to defer revalidation events during ata eh, these
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index bbbc9c9..4cade88 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include <linux/async.h>
 #include
 #include
@@ -91,7 +92,7 @@ EXPORT_SYMBOL(scsi_logging_level);
 #endif
 /* sd, scsi core and power management need to coordinate flushing async actions */
-LIST_HEAD(scsi_sd_probe_domain);
+ASYNC_DOMAIN(scsi_sd_probe_domain);
 EXPORT_SYMBOL(scsi_sd_probe_domain);
 /* NB: These are exposed through /proc/scsi/scsi and form part of the ABI.
diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
index 13d74da..4bd25ec 100644
--- a/drivers/scsi/scsi_priv.h
+++ b/drivers/scsi/scsi_priv.h
@@ -163,7 +163,7 @@ static inline int scsi_autopm_get_host(struct Scsi_Host *h) { return 0; }
 static inline void scsi_autopm_put_host(struct Scsi_Host *h) {}
 #endif /* CONFIG_PM_RUNTIME */
-extern struct list_head scsi_sd_probe_domain;
+extern struct async_domain scsi_sd_probe_domain;
 /*
  * internal scsi timeout functions: for use by mid-layer and transport
diff --git a/include/linux/async.h b/include/linux/async.h
index 68a9530..364e7ff 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -9,19 +9,46 @@
  * as published by the Free Software Foundation; version 2
  * of the License.
  */
+#ifndef __ASYNC_H__
+#define __ASYNC_H__
 #include
 #include
 typedef u64 async_cookie_t;
 typedef void (async_func_ptr) (void *data, async_cookie_t cookie);
+struct async_domain {
+	struct list_head node;
+	struct list_head domain;
+	int count;
+	unsigned registered:1;
+};
+
+/*
+ * domain participates in global async_synchronize_full
+ */
+#define ASYNC_DOMAIN(_name) \
+	struct async_domain _name = { .node = LIST_HEAD_INIT(_name.node), \
+				      .domain = LIST_HEAD_INIT(_name.domain), \
+				      .count = 0, \
+				      .registered = 1 }
+
+/*
+ * domain is free to go out of scope as soon as all pending work is
+ * complete, this domain does not participate in async_synchronize_full
+ */
+#define ASYNC_DOMAIN_EXCLUSIVE(_name) \
+	struct async_domain _name = { .node = LIST_HEAD_INIT(_name.node), \
+				      .domain = LIST_HEAD_INIT(_name.domain), \
+				      .count = 0, \
+				      .registered = 0 }
 extern async_cookie_t async_schedule(async_func_ptr *ptr, void *data);
 extern async_cookie_t async_schedule_domain(async_func_ptr *ptr, void *data,
-					    struct list_head *list);
+					    struct async_domain *domain);
 extern void async_synchronize_full(void);
-extern void async_synchronize_full_domain(struct list_head *list);
+extern void async_synchronize_full_domain(struct async_domain *domain);
 extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
-					    struct list_head *list);
-
+					    struct async_domain *domain);
+#endif
diff --git a/kernel/async.c b/kernel/async.c
index bd0c168..ba5491d 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -62,7 +62,7 @@ static async_cookie_t next_cookie = 1;
 #define MAX_WORK	32768
 static LIST_HEAD(async_pending);
-static LIST_HEAD(async_running);
+static ASYNC_DOMAIN(async_running);
 static DEFINE_SPINLOCK(async_lock);
 struct async_entry {
@@ -71,7 +71,7 @@ struct async_entry {
	async_cookie_t		cookie;
	async_func_ptr		*func;
	void			*data;
-	struct list_head	*running;
+	struct async_domain	*running;
 };
 static DECLARE_WAIT_QUEUE_HEAD(async_done);
@@ -82,13 +82,12 @@ static atomic_t entry_count;
 /*
  * MUST be called with the lock held!
  */
-static async_cookie_t __lowest_in_progress(struct list_head *running)
+static async_cookie_t __lowest_in_progress(struct async_domain *running)
 {
	struct async_entry *entry;
-	if (!list_empty(running)) {
-		entry = list_first_entry(running,
-					 struct async_entry, list);
+	if (!list_empty(&running->domain)) {
+		entry = list_first_entry(&running->domain, typeof(*entry), list);
		return entry->cookie;
	}
@@ -99,7 +98,7 @@ static async_cookie_t __lowest_in_progress(struct list_head *running)
	return next_cookie;	/* "infinity" value */
 }
-static async_cookie_t lowest_in_progress(struct list_head *running)
+static async_cookie_t lowest_in_progress(struct async_domain *running)
 {
	unsigned long flags;
	async_cookie_t ret;
@@ -119,10 +118,11 @@ static void async_run_entry_fn(struct work_struct *work)
		container_of(work, struct async_entry, work);
	unsigned long flags;
	ktime_t uninitialized_var(calltime), delta, rettime;
+	struct async_domain *running = entry->running;
	/* 1) move self to the running queue */
	spin_lock_irqsave(&async_lock, flags);
-	list_move_tail(&entry->list, entry->running);
+	list_move_tail(&entry->list, &running->domain);
	spin_unlock_irqrestore(&async_lock, flags);
	/* 2) run (and print duration) */
@@ -156,7 +156,7 @@ static void async_run_entry_fn(struct work_struct *work)
	wake_up(&async_done);
 }
-static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct list_head *running)
+static async_cookie_t __async_schedule(async_func_ptr *ptr, void *data, struct async_domain *running)
 {
	struct async_entry *entry;
	unsigned long flags;
@@ -223,7 +223,7 @@ EXPORT_SYMBOL_GPL(async_schedule);
  * Note: This function may be called from atomic or non-atomic contexts.
  */
 async_cookie_t async_schedule_domain(async_func_ptr *ptr, void *data,
-				     struct list_head *running)
+				     struct async_domain *running)
 {
	return __async_schedule(ptr, data, running);
 }
@@ -238,20 +238,20 @@ void async_synchronize_full(void)
 {
	do {
		async_synchronize_cookie(next_cookie);
-	} while (!list_empty(&async_running) || !list_empty(&async_pending));
+	} while (!list_empty(&async_running.domain) || !list_empty(&async_pending));
 }
 EXPORT_SYMBOL_GPL(async_synchronize_full);
 /**
  * async_synchronize_full_domain - synchronize all asynchronous function within a certain domain
- * @list: running list to synchronize on
+ * @domain: running list to synchronize on
  *
  * This function waits until all asynchronous function calls for the
- * synchronization domain specified by the running list @list have been done.
+ * synchronization domain specified by the running list @domain have been done.
  */
-void async_synchronize_full_domain(struct list_head *list)
+void async_synchronize_full_domain(struct async_domain *domain)
 {
-	async_synchronize_cookie_domain(next_cookie, list);
+	async_synchronize_cookie_domain(next_cookie, domain);
 }
 EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
@@ -261,11 +261,10 @@ EXPORT_SYMBOL_GPL(async_synchronize_full_domain);
  * @running: running list to synchronize on
  *
  * This function waits until all asynchronous function calls for the
- * synchronization domain specified by the running list @list submitted
+ * synchronization domain specified by running list @running submitted
  * prior to @cookie have been done.
  */
-void async_synchronize_cookie_domain(async_cookie_t cookie,
-				     struct list_head *running)
+void async_synchronize_cookie_domain(async_cookie_t cookie, struct async_domain *running)
 {
	ktime_t uninitialized_var(starttime), delta, endtime;
diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
index 89eae93..fa1e312 100644
--- a/sound/soc/soc-dapm.c
+++ b/sound/soc/soc-dapm.c
@@ -1545,7 +1545,7 @@ static int dapm_power_widgets(struct snd_soc_dapm_context *dapm, int event)
	struct snd_soc_dapm_context *d;
	LIST_HEAD(up_list);
	LIST_HEAD(down_list);
-	LIST_HEAD(async_domain);
+	ASYNC_DOMAIN_EXCLUSIVE(async_domain);
	enum snd_soc_bias_level bias;
	trace_snd_soc_dapm_start(card);