From patchwork Thu Sep 11 23:38:27 2008
X-Patchwork-Submitter: Andre Detsch
X-Patchwork-Id: 258
X-Patchwork-Delegate: jk@ozlabs.org
From: Andre Detsch
To: cbe-oss-dev@ozlabs.org
Cc: LukeBrowning@us.ibm.com, Jeremy Kerr
Date: Thu, 11 Sep 2008 20:38:27 -0300
Message-Id: <200809112038.27534.adetsch@br.ibm.com>
In-Reply-To: <200809111955.28780.adetsch@br.ibm.com>
References: <200809111955.28780.adetsch@br.ibm.com>
Subject: [Cbe-oss-dev] [PATCH 11/11] powerpc/spufs: Implement SPU affinity on
 top of gang scheduling
List-Id: Discussion about Open
 Source Software for the Cell Broadband Engine

SPU affinity, originally implemented before we had gang scheduling, was
disabled after gang scheduling was introduced. This patch re-enables SPU
affinity, making it fit the new scheduling algorithm.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>

diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c
index 8326034..c34e53f 100644
--- a/arch/powerpc/platforms/cell/spufs/sched.c
+++ b/arch/powerpc/platforms/cell/spufs/sched.c
@@ -443,31 +443,20 @@ static struct spu *ctx_location(struct spu *ref, int offset, int node)
 	return spu;
 }
 
-/*
- * affinity_check is called each time a context is going to be scheduled.
- * It returns the spu ptr on which the context must run.
- */
-static int has_affinity(struct spu_gang *gang)
+static void set_affinity(struct spu_gang *gang)
 {
-	if (list_empty(&gang->aff_list_head))
-		return 0;
-
-	/*
-	 * TODO: fix SPU Affinity to work with gang scheduling.
-	 */
-
-	if (atomic_read(&gang->aff_sched_count) == 0)
-		gang->aff_ref_spu = NULL;
+	BUG_ON(list_empty(&gang->aff_list_head));
 
-	if (!gang->aff_ref_spu) {
-		if (!(gang->aff_flags & AFF_MERGED))
-			aff_merge_remaining_ctxs(gang);
-		if (!(gang->aff_flags & AFF_OFFSETS_SET))
-			aff_set_offsets(gang);
-		aff_set_ref_point_location(gang);
-	}
+	if (!(gang->aff_flags & AFF_MERGED))
+		aff_merge_remaining_ctxs(gang);
+	if (!(gang->aff_flags & AFF_OFFSETS_SET))
+		aff_set_offsets(gang);
+	aff_set_ref_point_location(gang);
+}
 
-	return gang->aff_ref_spu != NULL;
+static int has_affinity(struct spu_gang *gang)
+{
+	return !list_empty(&gang->aff_list_head);
 }
 
 /**
@@ -486,9 +475,6 @@ static void spu_unbind_context(struct spu *spu, struct spu_context *ctx)
 	if (spu->ctx->flags & SPU_CREATE_NOSCHED)
 		atomic_dec(&cbe_spu_info[spu->node].reserved_spus);
 
-	if (ctx->gang)
-		atomic_dec_if_positive(&ctx->gang->aff_sched_count);
-
 	spu_switch_notify(spu, NULL);
 	spu_unmap_mappings(ctx);
 	spu_save(&ctx->csa, spu);
@@ -582,6 +568,15 @@ static struct spu *spu_bind(struct spu_gang *gang,
 		if (!node_allowed(gang, node))
 			continue;
 
+		if (has_affinity(gang)) {
+			spin_lock(&cbe_spu_info[node].list_lock);
+			spu = ctx_location(gang->aff_ref_spu, ctx->aff_offset,
+					   node);
+			if (spu && spu->alloc_state == SPU_FREE)
+				goto found;
+			spin_unlock(&cbe_spu_info[node].list_lock);
+		}
+
 		spin_lock(&cbe_spu_info[node].list_lock);
 		list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
 			if ((spu->alloc_state == SPU_FREE) &&
@@ -608,6 +603,9 @@ static void __spu_schedule(struct spu_gang *gang, int node_chosen)
 
 	spu_del_from_rq(gang);
 
+	if (has_affinity(gang))
+		set_affinity(gang);
+
 	list_for_each_entry(ctx, &gang->list, gang_list) {
 		mutex_lock(&ctx->state_mutex);
 		BUG_ON(ctx->spu);
@@ -657,6 +655,18 @@ static int spu_get_idle(struct spu_gang *gang, int node)
 
 	spu_context_nospu_trace(spu_get_idle__enter, gang);
 
 	/* TO DO: SPU affinity scheduling.
 	 */
+#if 0
+	if (has_affinity(gang)) {
+		aff_ref_spu = ctx->gang->aff_ref_spu;
+		node = aff_ref_spu->node;
+
+		mutex_lock(&cbe_spu_info[node].list_mutex);
+		spu = ctx_location(aff_ref_spu, ctx->aff_offset, node);
+		if (spu && spu->alloc_state == SPU_FREE)
+			goto found;
+		mutex_unlock(&cbe_spu_info[node].list_mutex);
+	}
+#endif
 
 	mode = SPU_RESERVE;

diff --git a/arch/powerpc/platforms/cell/spufs/spufs.h b/arch/powerpc/platforms/cell/spufs/spufs.h
index 6afc514..907baf9 100644
--- a/arch/powerpc/platforms/cell/spufs/spufs.h
+++ b/arch/powerpc/platforms/cell/spufs/spufs.h
@@ -178,7 +178,6 @@ struct spu_gang {
 	struct mutex aff_mutex;
 	int aff_flags;
 	struct spu *aff_ref_spu;
-	atomic_t aff_sched_count;
 
 	/* spu scheduler statistics for zombie ctxts */
 	struct {