From patchwork Thu Oct 29 00:44:04 2015
X-Patchwork-Submitter: Anton Blanchard
X-Patchwork-Id: 537633
From: Anton Blanchard
To: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au,
	mikey@neuling.org, cyrilbur@gmail.com, scottwood@freescale.com
Cc: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 12/19] powerpc: Create msr_check_and_{set,clear}()
Date: Thu, 29 Oct 2015 11:44:04 +1100
Message-Id: <1446079451-8774-13-git-send-email-anton@samba.org>
In-Reply-To: <1446079451-8774-1-git-send-email-anton@samba.org>
References: <1446079451-8774-1-git-send-email-anton@samba.org>
X-Mailer: git-send-email 2.5.0
List-Id: Linux on PowerPC Developers Mail List

Create helper functions to set and clear MSR bits after first checking
whether they are already in the requested state. Centralising the MSR
manipulation in these helpers will make it easy to avoid the MSR writes
entirely in a subsequent optimisation.
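For illustration, here is a minimal stand-alone sketch of the check-before-write
pattern the two helpers follow. This is not the kernel code itself: fake_msr and
the stub mfmsr()/mtmsr_isync() below are user-space stand-ins for the real MSR
access, MSR_FP is a placeholder bit value, and the MSR_VSX interplay handled by
the real helpers is omitted.

#include <stdio.h>

/* Placeholder bit value, not the real MSR definition. */
#define MSR_FP	0x1UL

static unsigned long fake_msr;			/* stand-in for the real MSR */

static unsigned long mfmsr(void)		/* stub: read the "MSR" */
{
	return fake_msr;
}

static void mtmsr_isync(unsigned long val)	/* stub: the costly write we want to skip */
{
	fake_msr = val;
	printf("MSR write: %#lx\n", val);
}

static void msr_check_and_set(unsigned long bits)
{
	unsigned long oldmsr = mfmsr();
	unsigned long newmsr = oldmsr | bits;

	/* Only pay for the mtmsr/isync when a bit actually changes. */
	if (oldmsr != newmsr)
		mtmsr_isync(newmsr);
}

static void msr_check_and_clear(unsigned long bits)
{
	unsigned long oldmsr = mfmsr();
	unsigned long newmsr = oldmsr & ~bits;

	if (oldmsr != newmsr)
		mtmsr_isync(newmsr);
}

int main(void)
{
	msr_check_and_set(MSR_FP);	/* writes: MSR_FP was clear */
	msr_check_and_set(MSR_FP);	/* no write: already set */
	msr_check_and_clear(MSR_FP);	/* writes: MSR_FP was set */
	msr_check_and_clear(MSR_FP);	/* no write: already clear */
	return 0;
}

Callers such as giveup_fpu() then bracket the register save with
msr_check_and_set(MSR_FP) / msr_check_and_clear(MSR_FP), as the diff below shows.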
Signed-off-by: Anton Blanchard
---
 arch/powerpc/kernel/process.c | 107 ++++++++++++++++++++----------------------
 1 file changed, 52 insertions(+), 55 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 0cb6276..5cdd35c 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -87,23 +87,46 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
 static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
 
-#ifdef CONFIG_PPC_FPU
-void giveup_fpu(struct task_struct *tsk)
+static void msr_check_and_set(unsigned long bits)
 {
-	u64 oldmsr = mfmsr();
-	u64 newmsr;
+	unsigned long oldmsr = mfmsr();
+	unsigned long newmsr;
 
-	check_if_tm_restore_required(tsk);
+	newmsr = oldmsr | bits;
 
-	newmsr = oldmsr | MSR_FP;
 #ifdef CONFIG_VSX
-	if (cpu_has_feature(CPU_FTR_VSX))
+	if (cpu_has_feature(CPU_FTR_VSX) && (bits & MSR_FP))
 		newmsr |= MSR_VSX;
 #endif
+
 	if (oldmsr != newmsr)
 		mtmsr_isync(newmsr);
+}
 
+static void msr_check_and_clear(unsigned long bits)
+{
+	unsigned long oldmsr = mfmsr();
+	unsigned long newmsr;
+
+	newmsr = oldmsr & ~bits;
+
+#ifdef CONFIG_VSX
+	if (cpu_has_feature(CPU_FTR_VSX) && (bits & MSR_FP))
+		newmsr &= ~MSR_VSX;
+#endif
+
+	if (oldmsr != newmsr)
+		mtmsr_isync(newmsr);
+}
+
+#ifdef CONFIG_PPC_FPU
+void giveup_fpu(struct task_struct *tsk)
+{
+	check_if_tm_restore_required(tsk);
+
+	msr_check_and_set(MSR_FP);
 	__giveup_fpu(tsk);
+	msr_check_and_clear(MSR_FP);
 }
 EXPORT_SYMBOL(giveup_fpu);
 
@@ -144,30 +167,21 @@ void enable_kernel_fp(void)
 {
 	WARN_ON(preemptible());
 
-	if (current->thread.regs && (current->thread.regs->msr & MSR_FP)) {
-		giveup_fpu(current);
-	} else {
-		u64 oldmsr = mfmsr();
+	msr_check_and_set(MSR_FP);
 
-		if (!(oldmsr & MSR_FP))
-			mtmsr_isync(oldmsr | MSR_FP);
-	}
+	if (current->thread.regs && (current->thread.regs->msr & MSR_FP))
+		__giveup_fpu(current);
 }
 EXPORT_SYMBOL(enable_kernel_fp);
 
 #ifdef CONFIG_ALTIVEC
 void giveup_altivec(struct task_struct *tsk)
 {
-	u64 oldmsr = mfmsr();
-	u64 newmsr;
-
 	check_if_tm_restore_required(tsk);
 
-	newmsr = oldmsr | MSR_VEC;
-	if (oldmsr != newmsr)
-		mtmsr_isync(newmsr);
-
+	msr_check_and_set(MSR_VEC);
 	__giveup_altivec(tsk);
+	msr_check_and_clear(MSR_VEC);
 }
 EXPORT_SYMBOL(giveup_altivec);
 
@@ -175,14 +189,10 @@ void enable_kernel_altivec(void)
 {
 	WARN_ON(preemptible());
 
-	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC)) {
-		giveup_altivec(current);
-	} else {
-		u64 oldmsr = mfmsr();
+	msr_check_and_set(MSR_VEC);
 
-		if (!(oldmsr & MSR_VEC))
-			mtmsr_isync(oldmsr | MSR_VEC);
-	}
+	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
+		__giveup_altivec(current);
 }
 EXPORT_SYMBOL(enable_kernel_altivec);
 
@@ -207,20 +217,15 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 #ifdef CONFIG_VSX
 void giveup_vsx(struct task_struct *tsk)
 {
-	u64 oldmsr = mfmsr();
-	u64 newmsr;
-
 	check_if_tm_restore_required(tsk);
 
-	newmsr = oldmsr | (MSR_FP|MSR_VEC|MSR_VSX);
-	if (oldmsr != newmsr)
-		mtmsr_isync(newmsr);
-
+	msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
 	if (tsk->thread.regs->msr & MSR_FP)
 		__giveup_fpu(tsk);
 	if (tsk->thread.regs->msr & MSR_VEC)
 		__giveup_altivec(tsk);
 	__giveup_vsx(tsk);
+	msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
 }
 EXPORT_SYMBOL(giveup_vsx);
 
@@ -228,13 +233,14 @@ void enable_kernel_vsx(void)
 {
 	WARN_ON(preemptible());
 
-	if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
-		giveup_vsx(current);
-	} else {
-		u64 oldmsr = mfmsr();
+	msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
 
-		if (!(oldmsr & MSR_VSX))
-			mtmsr_isync(oldmsr | MSR_VSX);
+	if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
+		if (current->thread.regs->msr & MSR_FP)
+			__giveup_fpu(current);
+		if (current->thread.regs->msr & MSR_VEC)
+			__giveup_altivec(current);
+		__giveup_vsx(current);
 	}
 }
 EXPORT_SYMBOL(enable_kernel_vsx);
 
@@ -256,16 +262,11 @@ EXPORT_SYMBOL_GPL(flush_vsx_to_thread);
 #ifdef CONFIG_SPE
 void giveup_spe(struct task_struct *tsk)
 {
-	u64 oldmsr = mfmsr();
-	u64 newmsr;
-
 	check_if_tm_restore_required(tsk);
 
-	newmsr = oldmsr | MSR_SPE;
-	if (oldmsr != newmsr)
-		mtmsr_isync(newmsr);
-
+	msr_check_and_set(MSR_SPE);
 	__giveup_spe(tsk);
+	msr_check_and_clear(MSR_SPE);
 }
 EXPORT_SYMBOL(giveup_spe);
 
@@ -273,14 +274,10 @@ void enable_kernel_spe(void)
 {
 	WARN_ON(preemptible());
 
-	if (current->thread.regs && (current->thread.regs->msr & MSR_SPE)) {
-		giveup_spe(current);
-	} else {
-		u64 oldmsr = mfmsr();
+	msr_check_and_set(MSR_SPE);
 
-		if (!(oldmsr & MSR_SPE))
-			mtmsr_isync(oldmsr | MSR_SPE);
-	}
+	if (current->thread.regs && (current->thread.regs->msr & MSR_SPE))
+		__giveup_spe(current);
 }
 EXPORT_SYMBOL(enable_kernel_spe);