From patchwork Mon Mar 16 11:33:16 2015
X-Patchwork-Submitter: Kevin Hao
X-Patchwork-Id: 450509
X-Patchwork-Delegate: michael@ellerman.id.au
From: Kevin Hao
To: linuxppc-dev@lists.ozlabs.org
Cc: Paul Mackerras
Subject: [PATCH 2/3] powerpc: spinlock: refactor code wrapped by PPC_HAS_LOCK_OWNER
Date: Mon, 16 Mar 2015 19:33:16 +0800
Message-Id: <1426505597-1042-3-git-send-email-haokexin@gmail.com>
In-Reply-To: <1426505597-1042-1-git-send-email-haokexin@gmail.com>
References: <1426505597-1042-1-git-send-email-haokexin@gmail.com>
List-Id: Linux on PowerPC Developers Mail List

Move all of the code wrapped by CONFIG_PPC_HAS_LOCK_OWNER to one place.
No functional change.
Signed-off-by: Kevin Hao
---
 arch/powerpc/include/asm/spinlock.h | 71 ++++++++++++++++---------------------
 1 file changed, 30 insertions(+), 41 deletions(-)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 38f40ea63a8c..cbc9511df409 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -30,6 +30,20 @@
 
 #define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
 
+/*
+ * On a system with shared processors (that is, where a physical
+ * processor is multiplexed between several virtual processors),
+ * there is no point spinning on a lock if the holder of the lock
+ * isn't currently scheduled on a physical processor. Instead
+ * we detect this situation and ask the hypervisor to give the
+ * rest of our timeslice to the lock holder.
+ *
+ * So that we can tell which virtual processor is holding a lock,
+ * we put 0x80000000 | smp_processor_id() in the lock when it is
+ * held. Conveniently, we have a word in the paca that holds this
+ * value.
+ */
+
 #ifdef CONFIG_PPC_HAS_LOCK_OWNER
 /* use 0x800000yy when locked, where yy == CPU number */
 #ifdef __BIG_ENDIAN__
@@ -37,9 +51,22 @@
 #else
 #define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
 #endif
-#else
-#define LOCK_TOKEN	1
-#endif
+#define WRLOCK_TOKEN	LOCK_TOKEN	/* it's negative */
+
+/* We only yield to the hypervisor if we are in shared processor mode */
+#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
+extern void __spin_yield(arch_spinlock_t *lock);
+extern void __rw_yield(arch_rwlock_t *lock);
+extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
+#else /* CONFIG_PPC_HAS_LOCK_OWNER */
+#define LOCK_TOKEN	1
+#define WRLOCK_TOKEN	(-1)
+#define SHARED_PROCESSOR	0
+#define __spin_yield(x)	barrier()
+#define __rw_yield(x)	barrier()
+#define arch_spin_unlock_wait(lock) \
+	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
+#endif /* CONFIG_PPC_HAS_LOCK_OWNER */
 
 #if defined(CONFIG_PPC64) && defined(CONFIG_SMP)
 #define CLEAR_IO_SYNC	(get_paca()->io_sync = 0)
@@ -95,31 +122,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	return __arch_spin_trylock(lock) == 0;
 }
 
-/*
- * On a system with shared processors (that is, where a physical
- * processor is multiplexed between several virtual processors),
- * there is no point spinning on a lock if the holder of the lock
- * isn't currently scheduled on a physical processor. Instead
- * we detect this situation and ask the hypervisor to give the
- * rest of our timeslice to the lock holder.
- *
- * So that we can tell which virtual processor is holding a lock,
- * we put 0x80000000 | smp_processor_id() in the lock when it is
- * held. Conveniently, we have a word in the paca that holds this
- * value.
- */
-
-#if defined(CONFIG_PPC_HAS_LOCK_OWNER)
-/* We only yield to the hypervisor if we are in shared processor mode */
-#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
-extern void __spin_yield(arch_spinlock_t *lock);
-extern void __rw_yield(arch_rwlock_t *lock);
-#else /* SPLPAR */
-#define __spin_yield(x)	barrier()
-#define __rw_yield(x)	barrier()
-#define SHARED_PROCESSOR	0
-#endif
-
 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	CLEAR_IO_SYNC;
@@ -164,13 +166,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 	lock->slock = 0;
 }
 
-#ifdef CONFIG_PPC_HAS_LOCK_OWNER
-extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
-#else
-#define arch_spin_unlock_wait(lock) \
-	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
-#endif
-
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
@@ -191,12 +186,6 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
 #define __DO_SIGN_EXTEND
 #endif
 
-#ifdef CONFIG_PPC_HAS_LOCK_OWNER
-#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
-#else
-#define WRLOCK_TOKEN		(-1)
-#endif
-
 /*
  * This returns the old value in the lock + 1,
 * so we got a read lock if the return value is > 0.