From patchwork Fri Jul 3 07:35:11 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322156
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 1/6] powerpc/powernv: must include hvcall.h to get PAPR defines
Date: Fri, 3 Jul 2020 17:35:11 +1000
Message-Id: <20200703073516.1354108-2-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

An include that currently provides these defines indirectly goes away
in a later patch of this series, which breaks compilation without this
explicit include.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/pci-ioda-tce.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index f923359d8afc..8eba6ece7808 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -15,6 +15,7 @@
 
 #include <asm/iommu.h>
 #include <asm/tce.h>
+#include <asm/hvcall.h> /* share error returns with PAPR */
 #include "pci.h"
 
 unsigned long pnv_ioda_parse_tce_sizes(struct pnv_phb *phb)
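Context for the "share error returns with PAPR" comment above: hvcall.h is
where the PAPR hcall return codes live, and the TCE code reuses them as
kernel-internal return values. A minimal sketch of that convention follows;
the helper and its caller are hypothetical, and only the two constants are
taken from the asm/hvcall.h of this era.

/* Hypothetical sketch: reusing PAPR hcall return codes as error returns.
 * Only H_SUCCESS and H_TOO_HARD are real names (values as in asm/hvcall.h);
 * example_tce_xchg is invented for illustration.
 */
#define H_SUCCESS	0
#define H_TOO_HARD	9999

static long example_tce_xchg(unsigned long *tce_slot, unsigned long new_tce)
{
	if (!tce_slot)
		return H_TOO_HARD;	/* tell the caller to take the slow path */
	*tce_slot = new_tce;		/* real code would use an atomic exchange */
	return H_SUCCESS;
}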
From patchwork Fri Jul 3 07:35:12 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322157
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
Date: Fri, 3 Jul 2020 17:35:12 +1000
Message-Id: <20200703073516.1354108-3-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/paravirt.h | 61 +++++++++++++++++++++++++++++
 arch/powerpc/include/asm/spinlock.h | 24 +-----------
 arch/powerpc/lib/locks.c            | 12 +++---
 3 files changed, 68 insertions(+), 29 deletions(-)
 create mode 100644 arch/powerpc/include/asm/paravirt.h

diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
new file mode 100644
index 000000000000..7a8546660a63
--- /dev/null
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_PARAVIRT_H
+#define __ASM_PARAVIRT_H
+#ifdef __KERNEL__
+
+#include <linux/jump_label.h>
+#include <asm/smp.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#include <asm/hvcall.h>
+#endif
+
+#ifdef CONFIG_PPC_SPLPAR
+DECLARE_STATIC_KEY_FALSE(shared_processor);
+
+static inline bool is_shared_processor(void)
+{
+	return static_branch_unlikely(&shared_processor);
+}
+
+/* If bit 0 is set, the cpu has been preempted */
+static inline u32 yield_count_of(int cpu)
+{
+	__be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count);
+	return be32_to_cpu(yield_count);
+}
+
+static inline void yield_to_preempted(int cpu, u32 yield_count)
+{
+	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+}
+#else
+static inline bool is_shared_processor(void)
+{
+	return false;
+}
+
+static inline u32 yield_count_of(int cpu)
+{
+	return 0;
+}
+
+extern void ___bad_yield_to_preempted(void);
+static inline void yield_to_preempted(int cpu, u32 yield_count)
+{
+	___bad_yield_to_preempted(); /* This would be a bug */
+}
+#endif
+
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	if (!is_shared_processor())
+		return false;
+	if (yield_count_of(cpu) & 1)
+		return true;
+	return false;
+}
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_PARAVIRT_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 2d620896cdae..79be9bb10bbb 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -15,11 +15,10 @@
  *
  * (the type definitions are in asm/spinlock_types.h)
  */
-#include <linux/jump_label.h>
 #include <linux/irqflags.h>
+#include <asm/paravirt.h>
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
-#include <asm/hvcall.h>
 #endif
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
@@ -35,18 +34,6 @@
 #define LOCK_TOKEN	1
 #endif
 
-#ifdef CONFIG_PPC_PSERIES
-DECLARE_STATIC_KEY_FALSE(shared_processor);
-
-#define vcpu_is_preempted vcpu_is_preempted
-static inline bool vcpu_is_preempted(int cpu)
-{
-	if (!static_branch_unlikely(&shared_processor))
-		return false;
-	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
-}
-#endif
-
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
 	return lock.slock == 0;
@@ -110,15 +97,6 @@
 static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
 static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
 #endif
 
-static inline bool is_shared_processor(void)
-{
-#ifdef CONFIG_PPC_SPLPAR
-	return static_branch_unlikely(&shared_processor);
-#else
-	return false;
-#endif
-}
-
 static inline void spin_yield(arch_spinlock_t *lock)
 {
 	if (is_shared_processor())
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 6440d5943c00..04165b7a163f 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -27,14 +27,14 @@ void splpar_spin_yield(arch_spinlock_t *lock)
 		return;
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+	yield_count = yield_count_of(holder_cpu);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	rmb();
 	if (lock->slock != lock_value)
 		return;		/* something has changed */
-	plpar_hcall_norets(H_CONFER,
-		get_hard_smp_processor_id(holder_cpu), yield_count);
+	yield_to_preempted(holder_cpu, yield_count);
 }
 EXPORT_SYMBOL_GPL(splpar_spin_yield);
 
@@ -53,13 +53,13 @@ void splpar_rw_yield(arch_rwlock_t *rw)
 		return;		/* no write lock at present */
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+	yield_count = yield_count_of(holder_cpu);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	rmb();
 	if (rw->lock != lock_value)
 		return;		/* something has changed */
-	plpar_hcall_norets(H_CONFER,
-		get_hard_smp_processor_id(holder_cpu), yield_count);
+	yield_to_preempted(holder_cpu, yield_count);
 }
 #endif
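A note on the yield-count protocol the helpers above encapsulate: the
hypervisor bumps the count on every dispatch and preemption, so an odd value
means "currently preempted", and callers pass the count they observed to
H_CONFER so a stale confer is ignored. A minimal user-space sketch of the
same convention, with hypothetical names (the real count lives in the lppaca
and is maintained by the hypervisor):

/* Hypothetical sketch of the dispatch/yield-count convention. */
#include <stdbool.h>
#include <stdint.h>

static uint32_t fake_yield_count[64];	/* stand-in for lppaca_of(cpu).yield_count */

static bool sketch_vcpu_is_preempted(int cpu)
{
	return fake_yield_count[cpu] & 1;	/* bit 0 set: preempted */
}

static bool sketch_confer_is_still_valid(int cpu, uint32_t seen_count)
{
	/* The hypervisor compares the caller's observed count with the
	 * current one and drops the confer if the target vCPU has been
	 * re-dispatched in the meantime.
	 */
	return fake_yield_count[cpu] == seen_count;
}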
From patchwork Fri Jul 3 07:35:13 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322158
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 3/6] powerpc: move spinlock implementation to simple_spinlock
Date: Fri, 3 Jul 2020 17:35:13 +1000
Message-Id: <20200703073516.1354108-4-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

To prepare for queued spinlocks. This is a simple rename, except for
updating the preprocessor guard name and a file reference.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/simple_spinlock.h       | 292 ++++++++++++++++++
 arch/powerpc/include/asm/simple_spinlock_types.h |  21 ++
 arch/powerpc/include/asm/spinlock.h              | 285 +----------------
 arch/powerpc/include/asm/spinlock_types.h        |  12 +-
 4 files changed, 315 insertions(+), 295 deletions(-)
 create mode 100644 arch/powerpc/include/asm/simple_spinlock.h
 create mode 100644 arch/powerpc/include/asm/simple_spinlock_types.h

diff --git a/arch/powerpc/include/asm/simple_spinlock.h b/arch/powerpc/include/asm/simple_spinlock.h
new file mode 100644
index 000000000000..e048c041c4a9
--- /dev/null
+++ b/arch/powerpc/include/asm/simple_spinlock.h
@@ -0,0 +1,292 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_SIMPLE_SPINLOCK_H
+#define __ASM_SIMPLE_SPINLOCK_H
+#ifdef __KERNEL__
+
+/*
+ * Simple spin lock operations.
+ *
+ * Copyright (C) 2001-2004 Paul Mackerras <paulus@au.ibm.com>, IBM
+ * Copyright (C) 2001 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2002 Dave Engebretsen <engebret@us.ibm.com>, IBM
+ *	Rework to support virtual processors
+ *
+ * Type of int is used as a full 64b word is not necessary.
+ *
+ * (the type definitions are in asm/simple_spinlock_types.h)
+ */
+#include <linux/irqflags.h>
+#include <asm/paravirt.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#endif
+#include <asm/synch.h>
+#include <asm/ppc-opcode.h>
+
+#ifdef CONFIG_PPC64
+/* use 0x800000yy when locked, where yy == CPU number */
+#ifdef __BIG_ENDIAN__
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
+#else
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
+#endif
+#else
+#define LOCK_TOKEN	1
+#endif
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	return lock.slock == 0;
+}
+
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	smp_mb();
+	return !arch_spin_value_unlocked(*lock);
+}
+
+/*
+ * This returns the old value in the lock, so we succeeded
+ * in getting the lock if the return value is 0.
+ */
+static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
+{
+	unsigned long tmp, token;
+
+	token = LOCK_TOKEN;
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%2,1) "\n\
+	cmpwi		0,%0,0\n\
+	bne-		2f\n\
+	stwcx.		%1,0,%2\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"
+	: "=&r" (tmp)
+	: "r" (token), "r" (&lock->slock)
+	: "cr0", "memory");
+
+	return tmp;
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	return __arch_spin_trylock(lock) == 0;
+}
+
+/*
+ * On a system with shared processors (that is, where a physical
+ * processor is multiplexed between several virtual processors),
+ * there is no point spinning on a lock if the holder of the lock
+ * isn't currently scheduled on a physical processor. Instead
+ * we detect this situation and ask the hypervisor to give the
+ * rest of our timeslice to the lock holder.
+ *
+ * So that we can tell which virtual processor is holding a lock,
+ * we put 0x80000000 | smp_processor_id() in the lock when it is
+ * held. Conveniently, we have a word in the paca that holds this
+ * value.
+ */
+
+#if defined(CONFIG_PPC_SPLPAR)
+/* We only yield to the hypervisor if we are in shared processor mode */
+void splpar_spin_yield(arch_spinlock_t *lock);
+void splpar_rw_yield(arch_rwlock_t *lock);
+#else /* SPLPAR */
+static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
+static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
+#endif
+
+static inline void spin_yield(arch_spinlock_t *lock)
+{
+	if (is_shared_processor())
+		splpar_spin_yield(lock);
+	else
+		barrier();
+}
+
+static inline void rw_yield(arch_rwlock_t *lock)
+{
+	if (is_shared_processor())
+		splpar_rw_yield(lock);
+	else
+		barrier();
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (likely(__arch_spin_trylock(lock) == 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_spin_yield(lock);
+		} while (unlikely(lock->slock != 0));
+		HMT_medium();
+	}
+}
+
+static inline
+void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
+{
+	unsigned long flags_dis;
+
+	while (1) {
+		if (likely(__arch_spin_trylock(lock) == 0))
+			break;
+		local_save_flags(flags_dis);
+		local_irq_restore(flags);
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_spin_yield(lock);
+		} while (unlikely(lock->slock != 0));
+		HMT_medium();
+		local_irq_restore(flags_dis);
+	}
+}
+#define arch_spin_lock_flags arch_spin_lock_flags
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__("# arch_spin_unlock\n\t"
+				PPC_RELEASE_BARRIER: : :"memory");
+	lock->slock = 0;
+}
+
+/*
+ * Read-write spinlocks, allowing multiple readers
+ * but only one writer.
+ *
+ * NOTE! it is quite common to have readers in interrupts
+ * but no interrupt writers. For those circumstances we
+ * can "mix" irq-safe locks - any writer needs to get a
+ * irq-safe write-lock, but readers can get non-irqsafe
+ * read-locks.
+ */
+
+#ifdef CONFIG_PPC64
+#define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
+#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
+#else
+#define __DO_SIGN_EXTEND
+#define WRLOCK_TOKEN		(-1)
+#endif
+
+/*
+ * This returns the old value in the lock + 1,
+ * so we got a read lock if the return value is > 0.
+ */
+static inline long __arch_read_trylock(arch_rwlock_t *rw)
+{
+	long tmp;
+
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%1,1) "\n"
+	__DO_SIGN_EXTEND
+"	addic.		%0,%0,1\n\
+	ble-		2f\n"
+"	stwcx.		%0,0,%1\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"	: "=&r" (tmp)
+	: "r" (&rw->lock)
+	: "cr0", "xer", "memory");
+
+	return tmp;
+}
+
+/*
+ * This returns the old value in the lock,
+ * so we got the write lock if the return value is 0.
+ */
+static inline long __arch_write_trylock(arch_rwlock_t *rw)
+{
+	long tmp, token;
+
+	token = WRLOCK_TOKEN;
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%2,1) "\n\
+	cmpwi		0,%0,0\n\
+	bne-		2f\n"
+"	stwcx.		%1,0,%2\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"	: "=&r" (tmp)
+	: "r" (token), "r" (&rw->lock)
+	: "cr0", "memory");
+
+	return tmp;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *rw)
+{
+	while (1) {
+		if (likely(__arch_read_trylock(rw) > 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_rw_yield(rw);
+		} while (unlikely(rw->lock < 0));
+		HMT_medium();
+	}
+}
+
+static inline void arch_write_lock(arch_rwlock_t *rw)
+{
+	while (1) {
+		if (likely(__arch_write_trylock(rw) == 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_rw_yield(rw);
+		} while (unlikely(rw->lock != 0));
+		HMT_medium();
+	}
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *rw)
+{
+	return __arch_read_trylock(rw) > 0;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *rw)
+{
+	return __arch_write_trylock(rw) == 0;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *rw)
+{
+	long tmp;
+
+	__asm__ __volatile__(
+	"# read_unlock\n\t"
+	PPC_RELEASE_BARRIER
+"1:	lwarx		%0,0,%1\n\
+	addic		%0,%0,-1\n"
+"	stwcx.		%0,0,%1\n\
+	bne-		1b"
+	: "=&r"(tmp)
+	: "r"(&rw->lock)
+	: "cr0", "xer", "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *rw)
+{
+	__asm__ __volatile__("# write_unlock\n\t"
+				PPC_RELEASE_BARRIER: : :"memory");
+	rw->lock = 0;
+}
+
+#define arch_spin_relax(lock)	spin_yield(lock)
+#define arch_read_relax(lock)	rw_yield(lock)
+#define arch_write_relax(lock)	rw_yield(lock)
+
+/* See include/linux/spinlock.h */
+#define smp_mb__after_spinlock()	smp_mb()
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SIMPLE_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/simple_spinlock_types.h b/arch/powerpc/include/asm/simple_spinlock_types.h
new file mode 100644
index 000000000000..7c2b48ce62dc
--- /dev/null
+++ b/arch/powerpc/include/asm/simple_spinlock_types.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+#define _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int slock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile signed int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 79be9bb10bbb..21357fe05fe0 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,290 +3,7 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__
 
-/*
- * Simple spin lock operations.
- *
- * Copyright (C) 2001-2004 Paul Mackerras <paulus@au.ibm.com>, IBM
- * Copyright (C) 2001 Anton Blanchard <anton@au.ibm.com>, IBM
- * Copyright (C) 2002 Dave Engebretsen <engebret@us.ibm.com>, IBM
- *	Rework to support virtual processors
- *
- * Type of int is used as a full 64b word is not necessary.
- *
- * (the type definitions are in asm/spinlock_types.h)
- */
-#include <linux/irqflags.h>
-#include <asm/paravirt.h>
-#ifdef CONFIG_PPC64
-#include <asm/paca.h>
-#endif
-#include <asm/synch.h>
-#include <asm/ppc-opcode.h>
-
-#ifdef CONFIG_PPC64
-/* use 0x800000yy when locked, where yy == CPU number */
-#ifdef __BIG_ENDIAN__
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
-#else
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
-#endif
-#else
-#define LOCK_TOKEN	1
-#endif
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	return lock.slock == 0;
-}
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	smp_mb();
-	return !arch_spin_value_unlocked(*lock);
-}
-
-/*
- * This returns the old value in the lock, so we succeeded
- * in getting the lock if the return value is 0.
- */
-static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
-{
-	unsigned long tmp, token;
-
-	token = LOCK_TOKEN;
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%2,1) "\n\
-	cmpwi		0,%0,0\n\
-	bne-		2f\n\
-	stwcx.		%1,0,%2\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"
-	: "=&r" (tmp)
-	: "r" (token), "r" (&lock->slock)
-	: "cr0", "memory");
-
-	return tmp;
-}
-
-static inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __arch_spin_trylock(lock) == 0;
-}
-
-/*
- * On a system with shared processors (that is, where a physical
- * processor is multiplexed between several virtual processors),
- * there is no point spinning on a lock if the holder of the lock
- * isn't currently scheduled on a physical processor. Instead
- * we detect this situation and ask the hypervisor to give the
- * rest of our timeslice to the lock holder.
- *
- * So that we can tell which virtual processor is holding a lock,
- * we put 0x80000000 | smp_processor_id() in the lock when it is
- * held. Conveniently, we have a word in the paca that holds this
- * value.
- */
-
-#if defined(CONFIG_PPC_SPLPAR)
-/* We only yield to the hypervisor if we are in shared processor mode */
-void splpar_spin_yield(arch_spinlock_t *lock);
-void splpar_rw_yield(arch_rwlock_t *lock);
-#else /* SPLPAR */
-static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
-static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
-#endif
-
-static inline void spin_yield(arch_spinlock_t *lock)
-{
-	if (is_shared_processor())
-		splpar_spin_yield(lock);
-	else
-		barrier();
-}
-
-static inline void rw_yield(arch_rwlock_t *lock)
-{
-	if (is_shared_processor())
-		splpar_rw_yield(lock);
-	else
-		barrier();
-}
-
-static inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	while (1) {
-		if (likely(__arch_spin_trylock(lock) == 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
-		} while (unlikely(lock->slock != 0));
-		HMT_medium();
-	}
-}
-
-static inline
-void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	unsigned long flags_dis;
-
-	while (1) {
-		if (likely(__arch_spin_trylock(lock) == 0))
-			break;
-		local_save_flags(flags_dis);
-		local_irq_restore(flags);
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
-		} while (unlikely(lock->slock != 0));
-		HMT_medium();
-		local_irq_restore(flags_dis);
-	}
-}
-#define arch_spin_lock_flags arch_spin_lock_flags
-
-static inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__asm__ __volatile__("# arch_spin_unlock\n\t"
-				PPC_RELEASE_BARRIER: : :"memory");
-	lock->slock = 0;
-}
-
-/*
- * Read-write spinlocks, allowing multiple readers
- * but only one writer.
- *
- * NOTE! it is quite common to have readers in interrupts
- * but no interrupt writers. For those circumstances we
- * can "mix" irq-safe locks - any writer needs to get a
- * irq-safe write-lock, but readers can get non-irqsafe
- * read-locks.
- */
-
-#ifdef CONFIG_PPC64
-#define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
-#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
-#else
-#define __DO_SIGN_EXTEND
-#define WRLOCK_TOKEN		(-1)
-#endif
-
-/*
- * This returns the old value in the lock + 1,
- * so we got a read lock if the return value is > 0.
- */
-static inline long __arch_read_trylock(arch_rwlock_t *rw)
-{
-	long tmp;
-
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%1,1) "\n"
-	__DO_SIGN_EXTEND
-"	addic.		%0,%0,1\n\
-	ble-		2f\n"
-"	stwcx.		%0,0,%1\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"	: "=&r" (tmp)
-	: "r" (&rw->lock)
-	: "cr0", "xer", "memory");
-
-	return tmp;
-}
-
-/*
- * This returns the old value in the lock,
- * so we got the write lock if the return value is 0.
- */
-static inline long __arch_write_trylock(arch_rwlock_t *rw)
-{
-	long tmp, token;
-
-	token = WRLOCK_TOKEN;
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%2,1) "\n\
-	cmpwi		0,%0,0\n\
-	bne-		2f\n"
-"	stwcx.		%1,0,%2\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"	: "=&r" (tmp)
-	: "r" (token), "r" (&rw->lock)
-	: "cr0", "memory");
-
-	return tmp;
-}
-
-static inline void arch_read_lock(arch_rwlock_t *rw)
-{
-	while (1) {
-		if (likely(__arch_read_trylock(rw) > 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
-		} while (unlikely(rw->lock < 0));
-		HMT_medium();
-	}
-}
-
-static inline void arch_write_lock(arch_rwlock_t *rw)
-{
-	while (1) {
-		if (likely(__arch_write_trylock(rw) == 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
-		} while (unlikely(rw->lock != 0));
-		HMT_medium();
-	}
-}
-
-static inline int arch_read_trylock(arch_rwlock_t *rw)
-{
-	return __arch_read_trylock(rw) > 0;
-}
-
-static inline int arch_write_trylock(arch_rwlock_t *rw)
-{
-	return __arch_write_trylock(rw) == 0;
-}
-
-static inline void arch_read_unlock(arch_rwlock_t *rw)
-{
-	long tmp;
-
-	__asm__ __volatile__(
-	"# read_unlock\n\t"
-	PPC_RELEASE_BARRIER
-"1:	lwarx		%0,0,%1\n\
-	addic		%0,%0,-1\n"
-"	stwcx.		%0,0,%1\n\
-	bne-		1b"
-	: "=&r"(tmp)
-	: "r"(&rw->lock)
-	: "cr0", "xer", "memory");
-}
-
-static inline void arch_write_unlock(arch_rwlock_t *rw)
-{
-	__asm__ __volatile__("# write_unlock\n\t"
-				PPC_RELEASE_BARRIER: : :"memory");
-	rw->lock = 0;
-}
-
-#define arch_spin_relax(lock)	spin_yield(lock)
-#define arch_read_relax(lock)	rw_yield(lock)
-#define arch_write_relax(lock)	rw_yield(lock)
-
-/* See include/linux/spinlock.h */
-#define smp_mb__after_spinlock()	smp_mb()
+#include <asm/simple_spinlock.h>
 
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
index 87adaf13b7e8..3906f52dae65 100644
--- a/arch/powerpc/include/asm/spinlock_types.h
+++ b/arch/powerpc/include/asm/spinlock_types.h
@@ -6,16 +6,6 @@
 # error "please don't include this file directly"
 #endif
 
-typedef struct {
-	volatile unsigned int slock;
-} arch_spinlock_t;
-
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
-
-typedef struct {
-	volatile signed int lock;
-} arch_rwlock_t;
-
-#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+#include <asm/simple_spinlock_types.h>
 
 #endif
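A note on the lock word convention the comments above describe (storing
0x80000000 | smp_processor_id() while held): it is what lets
splpar_spin_yield() in lib/locks.c recover the holder with
lock_value & 0xffff and confer its timeslice to that vCPU. A small hedged
sketch of just the encoding, with invented names:

/* Hypothetical sketch of the simple-spinlock lock word encoding. */
#include <stdint.h>

#define LOCK_HELD_FLAG	0x80000000u	/* top bit marks the lock as held */

/* Encode the holder as __arch_spin_trylock's token does. */
static uint32_t sketch_lock_token(unsigned int cpu)
{
	return LOCK_HELD_FLAG | cpu;
}

/* Decode as splpar_spin_yield() does: holder in the low 16 bits. */
static unsigned int sketch_lock_holder(uint32_t lock_value)
{
	return lock_value & 0xffff;
}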
From patchwork Fri Jul 3 07:35:14 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322159
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 4/6] powerpc/64s: implement queued spinlocks and rwlocks
Date: Fri, 3 Jul 2020 17:35:14 +1000
Message-Id: <20200703073516.1354108-5-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

These have shown significantly improved performance and fairness when
spinlock contention is moderate to high on very large systems.

[ Numbers hopefully forthcoming after more testing, but initial
  results look good ]

Thanks to the fast path, single threaded performance is not noticeably
hurt.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/Kconfig                      | 13 ++++++++++++
 arch/powerpc/include/asm/Kbuild           |  2 ++
 arch/powerpc/include/asm/qspinlock.h      | 25 +++++++++++++++++++++
 arch/powerpc/include/asm/spinlock.h       |  5 +++++
 arch/powerpc/include/asm/spinlock_types.h |  5 +++++
 arch/powerpc/lib/Makefile                 |  3 +++
 include/asm-generic/qspinlock.h           |  2 ++
 7 files changed, 55 insertions(+)
 create mode 100644 arch/powerpc/include/asm/qspinlock.h

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9fa23eb320ff..b17575109876 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,6 +145,8 @@ config PPC
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
+	select ARCH_USE_QUEUED_RWLOCKS		if PPC_QUEUED_SPINLOCKS
+	select ARCH_USE_QUEUED_SPINLOCKS	if PPC_QUEUED_SPINLOCKS
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select ARCH_WEAK_RELEASE_ACQUIRE
 	select BINFMT_ELF
@@ -490,6 +492,17 @@ config HOTPLUG_CPU
 
 	  Say N if you are unsure.
 
+config PPC_QUEUED_SPINLOCKS
+	bool "Queued spinlocks"
+	depends on SMP
+	default "y" if PPC_BOOK3S_64
+	help
+	  Say Y here to use queued spinlocks, which are more complex but
+	  give better scalability and fairness on large SMP and NUMA
+	  systems.
+
+	  If unsure, say "Y" if you have lots of cores, otherwise "N".
+
 config ARCH_CPU_PROBE_RELEASE
 	def_bool y
 	depends on HOTPLUG_CPU
diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild
index dadbcf3a0b1e..1dd8b6adff5e 100644
--- a/arch/powerpc/include/asm/Kbuild
+++ b/arch/powerpc/include/asm/Kbuild
@@ -6,5 +6,7 @@ generated-y += syscall_table_spu.h
 generic-y += export.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
+generic-y += qrwlock.h
+generic-y += qspinlock.h
 generic-y += vtime.h
 generic-y += early_ioremap.h
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
new file mode 100644
index 000000000000..c49e33e24edd
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_QSPINLOCK_H
+#define _ASM_POWERPC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#define _Q_PENDING_LOOPS	(1 << 9) /* not tuned */
+
+#define smp_mb__after_spinlock()	smp_mb()
+
+static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
+{
+	/*
+	 * This barrier was added to simple spinlocks by commit 51d7d5205d338,
+	 * but it should now be possible to remove it, as arm64 has done with
+	 * commit c6f5d02b6a0f.
+	 */
+	smp_mb();
+	return atomic_read(&lock->val);
+}
+#define queued_spin_is_locked queued_spin_is_locked
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 21357fe05fe0..434615f1d761 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,7 +3,12 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__
 
+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#include <asm/qrwlock.h>
+#else
 #include <asm/simple_spinlock.h>
+#endif
 
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
index 3906f52dae65..c5d742f18021 100644
--- a/arch/powerpc/include/asm/spinlock_types.h
+++ b/arch/powerpc/include/asm/spinlock_types.h
@@ -6,6 +6,11 @@
 # error "please don't include this file directly"
 #endif
 
+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm-generic/qspinlock_types.h>
+#include <asm-generic/qrwlock_types.h>
+#else
 #include <asm/simple_spinlock_types.h>
+#endif
 
 #endif
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 5e994cda8e40..d66a645503eb 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -41,7 +41,10 @@ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
 obj64-y	+= copypage_64.o copyuser_64.o mem_64.o hweight_64.o \
 	   memcpy_64.o memcpy_mcsafe_64.o
 
+ifndef CONFIG_PPC_QUEUED_SPINLOCKS
 obj64-$(CONFIG_SMP)	+= locks.o
+endif
+
 obj64-$(CONFIG_ALTIVEC)	+= vmx-helper.o
 obj64-$(CONFIG_KPROBES_SANITY_TEST) += test_emulate_step.o \
 				       test_emulate_step_exec_instr.o
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fde943d180e0..fb0a814d4395 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -12,6 +12,7 @@
 
 #include <linux/atomic.h>
 
+#ifndef queued_spin_is_locked
 /**
  * queued_spin_is_locked - is the spinlock locked?
  * @lock: Pointer to queued spinlock structure
@@ -25,6 +26,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
+#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
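To make the "queued" behaviour concrete: a queued (MCS-style) lock makes
each waiter spin on its own node and hands the lock off in FIFO order,
which is where the fairness on large systems comes from. A stand-alone toy
sketch of that idea in C11 follows; it is not the kernel implementation
(the kernel's qspinlock additionally packs the queue state into a single
32-bit word), just the underlying technique.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;			/* true while this waiter must spin */
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;	/* NULL when unlocked and uncontended */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, true, memory_order_relaxed);

	/* Join the queue; the previous tail (if any) is our predecessor. */
	prev = atomic_exchange_explicit(&lock->tail, node, memory_order_acq_rel);
	if (!prev)
		return;				/* queue was empty: lock acquired */

	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (atomic_load_explicit(&node->locked, memory_order_acquire))
		;				/* spin on our own node only */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		/* No visible successor: try to swing the tail back to empty. */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong_explicit(&lock->tail,
				&expected, NULL,
				memory_order_release, memory_order_relaxed))
			return;
		/* A successor is enqueueing; wait for its next pointer. */
		while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
			;
	}
	atomic_store_explicit(&next->locked, false, memory_order_release);	/* FIFO handoff */
}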
From patchwork Fri Jul 3 07:35:15 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322160
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
Date: Fri, 3 Jul 2020 17:35:15 +1000
Message-Id: <20200703073516.1354108-6-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/paravirt.h           | 28 ++++++++++
 arch/powerpc/include/asm/qspinlock.h          | 55 +++++++++++++++++++
 arch/powerpc/include/asm/qspinlock_paravirt.h |  5 ++
 arch/powerpc/platforms/pseries/Kconfig        |  5 ++
 arch/powerpc/platforms/pseries/setup.c        |  6 +-
 include/asm-generic/qspinlock.h               |  2 +
 6 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h

diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 7a8546660a63..f2d51f929cf5 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {
@@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	___bad_yield_to_preempted(); /* This would be a bug */
 }
+
+extern void ___bad_yield_to_any(void);
+static inline void yield_to_any(void)
+{
+	___bad_yield_to_any(); /* This would be a bug */
+}
+
+extern void ___bad_prod_cpu(void);
+static inline void prod_cpu(int cpu)
+{
+	___bad_prod_cpu(); /* This would be a bug */
+}
+
 #endif
 
 #define vcpu_is_preempted vcpu_is_preempted
@@ -57,5 +80,10 @@ static inline bool vcpu_is_preempted(int cpu)
 	return false;
 }
 
+static inline bool pv_is_native_spin_unlock(void)
+{
+	return !is_shared_processor();
+}
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_PARAVIRT_H */
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index c49e33e24edd..0960a0de2467 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -3,9 +3,36 @@
 #define _ASM_POWERPC_QSPINLOCK_H
 
 #include <asm-generic/qspinlock_types.h>
+#include <asm/paravirt.h>
 
 #define _Q_PENDING_LOOPS	(1 << 9) /* not tuned */
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+static __always_inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	if (!is_shared_processor())
+		native_queued_spin_lock_slowpath(lock, val);
+	else
+		__pv_queued_spin_lock_slowpath(lock, val);
+}
+#else
+extern void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+#endif
+
+static __always_inline void queued_spin_lock(struct qspinlock *lock)
+{
+	u32 val = 0;
+
+	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
+		return;
+
+	queued_spin_lock_slowpath(lock, val);
+}
+#define queued_spin_lock queued_spin_lock
+
 #define smp_mb__after_spinlock()	smp_mb()
 
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
@@ -20,6 +47,34 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 }
 #define queued_spin_is_locked queued_spin_is_locked
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define SPIN_THRESHOLD (1<<15) /* not tuned */
+
+static __always_inline void pv_wait(u8 *ptr, u8 val)
+{
+	if (*ptr != val)
+		return;
+	yield_to_any();
+	/*
+	 * We could pass in a CPU here if waiting in the queue and yield to
+	 * the previous CPU in the queue.
+	 */
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+	prod_cpu(cpu);
+}
+
+extern void __pv_init_lock_hash(void);
+
+static inline void pv_spinlocks_init(void)
+{
+	__pv_init_lock_hash();
+}
+
+#endif
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/qspinlock_paravirt.h b/arch/powerpc/include/asm/qspinlock_paravirt.h
new file mode 100644
index 000000000000..6dbdb8a4f84f
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_QSPINLOCK_PARAVIRT_H
+#define __ASM_QSPINLOCK_PARAVIRT_H
+
+#endif /* __ASM_QSPINLOCK_PARAVIRT_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 24c18362e5ea..756e727b383f 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -25,9 +25,14 @@ config PPC_PSERIES
 	select SWIOTLB
 	default y
 
+config PARAVIRT_SPINLOCKS
+	bool
+	default n
+
 config PPC_SPLPAR
 	depends on PPC_PSERIES
 	bool "Support for shared-processor logical partitions"
+	select PARAVIRT_SPINLOCKS if PPC_QUEUED_SPINLOCKS
 	help
 	  Enabling this option will make the kernel run more efficiently
 	  on logically-partitioned pSeries systems which use shared
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 2db8469e475f..747a203d9453 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -771,8 +771,12 @@ static void __init pSeries_setup_arch(void)
 	if (firmware_has_feature(FW_FEATURE_LPAR)) {
 		vpa_init(boot_cpuid);
 
-		if (lppaca_shared_proc(get_lppaca()))
+		if (lppaca_shared_proc(get_lppaca())) {
 			static_branch_enable(&shared_processor);
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+			pv_spinlocks_init();
+#endif
+		}
 
 		ppc_md.power_save = pseries_lpar_idle;
 		ppc_md.enable_pmcs = pseries_lpar_enable_pmcs;
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fb0a814d4395..38ca14e79a86 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -69,6 +69,7 @@ static __always_inline int queued_spin_trylock(struct qspinlock *lock)
 
 extern void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
 
+#ifndef queued_spin_lock
 /**
  * queued_spin_lock - acquire a queued spinlock
  * @lock: Pointer to queued spinlock structure
@@ -82,6 +83,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 
 	queued_spin_lock_slowpath(lock, val);
 }
+#endif
 
 #ifndef queued_spin_unlock
 /**
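The pv_wait()/pv_kick() pair above is the hook contract the generic
paravirt qspinlock slowpath uses: spin for a bounded number of iterations,
then give the timeslice away until the lock holder kicks the waiter. A
hedged user-space sketch of that contract, with the hcalls replaced by
hypothetical stubs:

#include <stdatomic.h>
#include <stdint.h>

#define SPIN_THRESHOLD (1 << 15)	/* matches the "not tuned" value above */

/* Hypothetical stand-ins for the H_CONFER(-1, 0) and H_PROD hcalls. */
static void fake_yield_to_any(void) { }
static void fake_prod(int cpu) { (void)cpu; }

/* Wait until *ptr changes from val: bounded spin, then yield. */
static void sketch_pv_wait(_Atomic uint8_t *ptr, uint8_t val)
{
	for (int i = 0; i < SPIN_THRESHOLD; i++)
		if (atomic_load_explicit(ptr, memory_order_acquire) != val)
			return;
	if (atomic_load_explicit(ptr, memory_order_acquire) == val)
		fake_yield_to_any();	/* confer our timeslice away */
}

static void sketch_pv_kick(int cpu)
{
	fake_prod(cpu);			/* make the target vCPU runnable again */
}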
From patchwork Fri Jul 3 07:35:16 2020
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1322161
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH v2 6/6] powerpc/qspinlock: optimised atomic_try_cmpxchg_lock that adds the lock hint
Date: Fri, 3 Jul 2020 17:35:16 +1000
Message-Id: <20200703073516.1354108-7-npiggin@gmail.com>
In-Reply-To: <20200703073516.1354108-1-npiggin@gmail.com>
References: <20200703073516.1354108-1-npiggin@gmail.com>

This brings the behaviour of the uncontended fast path back to roughly
equivalent to simple spinlocks -- a single atomic op with lock hint.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/atomic.h    | 28 ++++++++++++++++++++++++++++
 arch/powerpc/include/asm/qspinlock.h |  2 +-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 498785ffc25f..f6a3d145ffb7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -193,6 +193,34 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
 
+/*
+ * Don't want to override the generic atomic_try_cmpxchg_acquire, because
+ * we add a lock hint to the lwarx, which may not be wanted for the
+ * _acquire case (and is not used by the other _acquire variants so it
+ * would be a surprise).
+ */
+static __always_inline bool
+atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
+{
+	int r, o = *old;
+
+	__asm__ __volatile__ (
+"1:\t"	PPC_LWARX(%0,0,%2,1) "	# atomic_try_cmpxchg_lock		\n"
+"	cmpw	0,%0,%3							\n"
+"	bne-	2f							\n"
+"	stwcx.	%4,0,%2							\n"
+"	bne-	1b							\n"
+"\t"	PPC_ACQUIRE_BARRIER "						\n"
+"2:									\n"
+	: "=&r" (r), "+m" (v->counter)
+	: "r" (&v->counter), "r" (o), "r" (new)
+	: "cr0", "memory");
+
+	if (unlikely(r != o))
+		*old = r;
+	return likely(r == o);
+}
+
 /**
  * atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index 0960a0de2467..beb6aa4628e7 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -26,7 +26,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 {
 	u32 val = 0;
 
-	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
+	if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
 		return;
 
 	queued_spin_lock_slowpath(lock, val);
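A note on the try_cmpxchg calling convention used above: on failure it
writes the observed value back through *old, so the caller can retry or
hand that value to a slowpath without reloading. A stand-alone sketch of
the same contract with C11 atomics (the generic idiom, not the powerpc
asm; sketch_lock_fastpath is invented for illustration):

#include <stdatomic.h>
#include <stdbool.h>

/* Same contract as atomic_try_cmpxchg_lock: on failure, *old is
 * updated to the value actually observed in the atomic variable.
 */
static bool sketch_try_cmpxchg(atomic_int *v, int *old, int new)
{
	return atomic_compare_exchange_strong_explicit(v, old, new,
			memory_order_acquire, memory_order_relaxed);
}

static void sketch_lock_fastpath(atomic_int *lock_val)
{
	int val = 0;	/* expect unlocked */

	if (sketch_try_cmpxchg(lock_val, &val, 1))
		return;	/* fast path: 0 -> locked in one atomic op */

	/* val now holds the observed word; the real queued_spin_lock()
	 * passes it to queued_spin_lock_slowpath() at this point. */
}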