From patchwork Thu Oct 5 21:40:50 2017
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 822100
From: Mathieu Desnoyers
To: "Paul E. McKenney", Peter Zijlstra
Cc: linux-arch@vger.kernel.org, Avi Kivity, Maged Michael, Boqun Feng,
	Dave Watson, Alan Stern, linux-kernel@vger.kernel.org, Will Deacon,
	Andrew Hunter, Nicholas Piggin, Ingo Molnar, Mathieu Desnoyers,
	Alexander Viro, Andy Lutomirski, Paul Mackerras,
	linuxppc-dev@lists.ozlabs.org, gromer@google.com
Subject: [RFC PATCH for 4.14 2/2] Fix: membarrier: Handle CLONE_VM + !CLONE_THREAD correctly on powerpc
Date: Thu, 5 Oct 2017 17:40:50 -0400
Message-Id: <20171005214050.32700-2-mathieu.desnoyers@efficios.com>
In-Reply-To: <20171005214050.32700-1-mathieu.desnoyers@efficios.com>
References: <20171005214050.32700-1-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.11.0
List-Id: Linux on PowerPC Developers Mail List

Threads targeting the same VM but belonging to different thread groups
are a tricky case. It has a few consequences:

It turns out that we cannot rely on get_nr_threads(p) to count the
number of threads using a VM. We can instead use
(atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1) to skip
the synchronize_sched() for cases where the VM only has a single user,
and that user only has a single thread.

It also turns out that we cannot use for_each_thread() to set thread
flags in all threads using a VM, as it only iterates over the thread
group. Therefore, test the membarrier state variable directly rather
than relying on thread flags.
This means membarrier_register_private_expedited() needs to set the
MEMBARRIER_STATE_SWITCH_MM flag, issue synchronize_sched(), and only
then set MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY, which allows
private expedited membarrier commands to succeed.
membarrier_arch_switch_mm() now tests for the
MEMBARRIER_STATE_SWITCH_MM flag.

Reported-by: Peter Zijlstra
Signed-off-by: Mathieu Desnoyers
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: gromer@google.com
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Alan Stern
CC: Will Deacon
CC: Andy Lutomirski
CC: Ingo Molnar
CC: Alexander Viro
CC: Nicholas Piggin
CC: linuxppc-dev@lists.ozlabs.org
CC: linux-arch@vger.kernel.org
---
 arch/powerpc/include/asm/membarrier.h | 21 ++-------------------
 arch/powerpc/kernel/membarrier.c      | 17 ++++-------------
 include/linux/mm_types.h              |  2 +-
 include/linux/sched/mm.h              | 28 ++++++----------------------
 kernel/fork.c                         |  2 --
 kernel/sched/membarrier.c             | 16 +++++++++++++---
 6 files changed, 26 insertions(+), 60 deletions(-)

diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
index 61152a7a3cf9..0951646253d9 100644
--- a/arch/powerpc/include/asm/membarrier.h
+++ b/arch/powerpc/include/asm/membarrier.h
@@ -11,8 +11,8 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 	 * when switching from userspace to kernel is not needed after
 	 * store to rq->curr.
 	 */
-	if (likely(!test_ti_thread_flag(task_thread_info(tsk),
-			TIF_MEMBARRIER_PRIVATE_EXPEDITED) || !prev))
+	if (likely(!(atomic_read(&next->membarrier_state)
+			& MEMBARRIER_STATE_SWITCH_MM) || !prev))
 		return;
 
 	/*
@@ -21,23 +21,6 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 	 */
 	smp_mb();
 }
-static inline void membarrier_arch_fork(struct task_struct *t,
-		unsigned long clone_flags)
-{
-	/*
-	 * Coherence of TIF_MEMBARRIER_PRIVATE_EXPEDITED against thread
-	 * fork is protected by siglock. membarrier_arch_fork is called
-	 * with siglock held.
-	 */
-	if (test_thread_flag(TIF_MEMBARRIER_PRIVATE_EXPEDITED))
-		set_ti_thread_flag(task_thread_info(t),
-				TIF_MEMBARRIER_PRIVATE_EXPEDITED);
-}
-static inline void membarrier_arch_execve(struct task_struct *t)
-{
-	clear_ti_thread_flag(task_thread_info(t),
-			TIF_MEMBARRIER_PRIVATE_EXPEDITED);
-}
 
 void membarrier_arch_register_private_expedited(struct task_struct *t);
 #endif /* _ASM_POWERPC_MEMBARRIER_H */
diff --git a/arch/powerpc/kernel/membarrier.c b/arch/powerpc/kernel/membarrier.c
index b0d79a5f5981..4795ad59b833 100644
--- a/arch/powerpc/kernel/membarrier.c
+++ b/arch/powerpc/kernel/membarrier.c
@@ -19,24 +19,15 @@
 #include
 #include
 #include
+#include
 
 void membarrier_arch_register_private_expedited(struct task_struct *p)
 {
-	struct task_struct *t;
+	struct mm_struct *mm = p->mm;
 
-	if (get_nr_threads(p) == 1) {
-		set_thread_flag(TIF_MEMBARRIER_PRIVATE_EXPEDITED);
+	atomic_or(MEMBARRIER_STATE_SWITCH_MM, &mm->membarrier_state);
+	if (atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)
 		return;
-	}
-	/*
-	 * Coherence of TIF_MEMBARRIER_PRIVATE_EXPEDITED against thread
-	 * fork is protected by siglock.
-	 */
-	spin_lock(&p->sighand->siglock);
-	for_each_thread(p, t)
-		set_ti_thread_flag(task_thread_info(t),
-				TIF_MEMBARRIER_PRIVATE_EXPEDITED);
-	spin_unlock(&p->sighand->siglock);
 	/*
 	 * Ensure all future scheduler executions will observe the new
 	 * thread flag state for this process.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5e0fe8ce053b..1861ea8dba77 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -446,7 +446,7 @@ struct mm_struct {
 	struct core_state *core_state; /* coredumping support */
 
 #ifdef CONFIG_MEMBARRIER
-	int membarrier_private_expedited;
+	atomic_t membarrier_state;
 #endif
 #ifdef CONFIG_AIO
 	spinlock_t			ioctx_lock;
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index b2767ecb21a8..2fb6089efcd7 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -212,35 +212,23 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 
 #ifdef CONFIG_MEMBARRIER
+enum {
+	MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY	= (1U << 0),
+	MEMBARRIER_STATE_SWITCH_MM			= (1U << 1),
+};
+
 #ifdef CONFIG_ARCH_HAS_MEMBARRIER_HOOKS
 #include
 #else
-static inline void membarrier_arch_fork(struct task_struct *t,
-		unsigned long clone_flags)
-{
-}
-static inline void membarrier_arch_execve(struct task_struct *t)
-{
-}
 static inline void membarrier_arch_register_private_expedited(
 		struct task_struct *p)
 {
 }
 #endif
 
-static inline void membarrier_fork(struct task_struct *t,
-		unsigned long clone_flags)
-{
-	/*
-	 * Prior copy_mm() copies the membarrier_private_expedited field
-	 * from current->mm to t->mm.
-	 */
-	membarrier_arch_fork(t, clone_flags);
-}
 static inline void membarrier_execve(struct task_struct *t)
 {
-	t->mm->membarrier_private_expedited = 0;
-	membarrier_arch_execve(t);
+	atomic_set(&t->mm->membarrier_state, 0);
 }
 #else
 #ifdef CONFIG_ARCH_HAS_MEMBARRIER_HOOKS
@@ -249,10 +237,6 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 {
 }
 #endif
-static inline void membarrier_fork(struct task_struct *t,
-		unsigned long clone_flags)
-{
-}
 static inline void membarrier_execve(struct task_struct *t)
 {
 }
diff --git a/kernel/fork.c b/kernel/fork.c
index bd4a93915e08..10646182440f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1840,8 +1840,6 @@ static __latent_entropy struct task_struct *copy_process(
 	 */
 	copy_seccomp(p);
 
-	membarrier_fork(p, clone_flags);
-
 	/*
 	 * Process group and session signals need to be delivered to just the
 	 * parent before the fork or both the parent and the child after the
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index f06949a279ca..23bd256f8458 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 #include "sched.h"	/* for cpu_rq(). */
 
@@ -40,7 +41,8 @@ static int membarrier_private_expedited(void)
 	bool fallback = false;
 	cpumask_var_t tmpmask;
 
-	if (!READ_ONCE(current->mm->membarrier_private_expedited))
+	if (!(atomic_read(&current->mm->membarrier_state)
+			& MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
 		return -EPERM;
 
 	if (num_online_cpus() == 1)
@@ -104,11 +106,19 @@ static int membarrier_private_expedited(void)
 static void membarrier_register_private_expedited(void)
 {
 	struct task_struct *p = current;
+	struct mm_struct *mm = p->mm;
 
-	if (READ_ONCE(p->mm->membarrier_private_expedited))
+	/*
+	 * We need to consider threads belonging to different thread
+	 * groups, which use the same mm. (CLONE_VM but not
+	 * CLONE_THREAD).
+	 */
+	if (atomic_read(&mm->membarrier_state)
+			& MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY)
 		return;
 	membarrier_arch_register_private_expedited(p);
-	WRITE_ONCE(p->mm->membarrier_private_expedited, 1);
+	atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY,
+			&mm->membarrier_state);
 }
 
 /**