From patchwork Wed Jun 16 03:21:11 2021
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 1492628
From: Andy Lutomirski <luto@kernel.org>
To: x86@kernel.org
Subject: [PATCH 6/8] powerpc/membarrier: Remove special barrier on mm switch
Date: Tue, 15 Jun 2021 20:21:11 -0700
X-Mailer: git-send-email 2.31.1
Cc: linux-mm@kvack.org, Peter Zijlstra, LKML, Nicholas Piggin,
    Dave Hansen, Paul Mackerras, Andy Lutomirski, Mathieu Desnoyers,
    Andrew Morton, linuxppc-dev@lists.ozlabs.org
powerpc did the following on some, but not all, paths through
switch_mm_irqs_off():

	/*
	 * Only need the full barrier when switching between processes.
	 * Barrier when switching from kernel to userspace is not
	 * required here, given that it is implied by mmdrop(). Barrier
	 * when switching from userspace to kernel is not needed after
	 * store to rq->curr.
	 */
	if (likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

This is puzzling: if !prev, then one might expect that we are switching
from kernel to user, not user to kernel, which is inconsistent with the
comment.  But this is all nonsense, because the one and only caller
would never have prev == NULL and would, in fact, OOPS if prev == NULL.

In any event, this code is unnecessary, since the new generic
membarrier_finish_switch_mm() provides the same barrier without arch
help.

Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Cc: Mathieu Desnoyers
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
 arch/powerpc/include/asm/membarrier.h | 27 ---------------------------
 arch/powerpc/mm/mmu_context.c         |  2 --
 2 files changed, 29 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/membarrier.h

diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
deleted file mode 100644
index 6e20bb5c74ea..000000000000
--- a/arch/powerpc/include/asm/membarrier.h
+++ /dev/null
@@ -1,27 +0,0 @@
-#ifndef _ASM_POWERPC_MEMBARRIER_H
-#define _ASM_POWERPC_MEMBARRIER_H
-
-static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
-					     struct mm_struct *next,
-					     struct task_struct *tsk)
-{
-	/*
-	 * Only need the full barrier when switching between processes.
-	 * Barrier when switching from kernel to userspace is not
-	 * required here, given that it is implied by mmdrop(). Barrier
-	 * when switching from userspace to kernel is not needed after
-	 * store to rq->curr.
-	 */
-	if (likely(!(atomic_read(&next->membarrier_state) &
-		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
-		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
-		return;
-
-	/*
-	 * The membarrier system call requires a full memory barrier
-	 * after storing to rq->curr, before going back to user-space.
-	 */
-	smp_mb();
-}
-
-#endif /* _ASM_POWERPC_MEMBARRIER_H */
diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
index a857af401738..8daa95b3162b 100644
--- a/arch/powerpc/mm/mmu_context.c
+++ b/arch/powerpc/mm/mmu_context.c
@@ -85,8 +85,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 
 	if (new_on_cpu)
 		radix_kvm_prefetch_workaround(next);
-	else
-		membarrier_arch_switch_mm(prev, next, tsk);
 
 	/*
 	 * The actual HW switching method differs between the various
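
A minimal sketch of the generic replacement mentioned above, assuming
the membarrier_finish_switch_mm() helper introduced earlier in this
series boils down to the same state check and barrier as the deleted
powerpc code (illustration only, not the exact code from that patch):

	/*
	 * Sketch: issue the full barrier required by expedited
	 * membarrier after rq->curr has been updated, but only for
	 * mms that actually have expedited membarrier armed.
	 */
	static inline void membarrier_finish_switch_mm(struct mm_struct *mm)
	{
		if (atomic_read(&mm->membarrier_state) &
		    (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		     MEMBARRIER_STATE_GLOBAL_EXPEDITED))
			smp_mb();
	}

Since that check depends only on the mm, the core scheduler can apply
it for every architecture, which is why the powerpc-specific hook
becomes redundant.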