From patchwork Thu Jan 9 01:32:41 2014
X-Patchwork-Submitter: Scott Wood
X-Patchwork-Id: 308487
X-Patchwork-Delegate: scottwood@freescale.com
From: Scott Wood <scottwood@freescale.com>
To: <linuxppc-dev@lists.ozlabs.org>
Cc: Scott Wood <scottwood@freescale.com>
Subject: [PATCH v4 1/3] powerpc: add barrier after writing kernel PTE
Date: Wed, 8 Jan 2014 19:32:41 -0600
Message-ID: <1389231163-11175-1-git-send-email-scottwood@freescale.com>

There is no barrier between something like ioremap() writing to a PTE, and
returning the value to a caller that may then store the pointer in a place
that is visible to other CPUs.  Such callers generally don't perform
barriers of their own.

Even if callers of ioremap() and similar functions did use barriers, the
most logical choice would be smp_wmb(), which is not architecturally
sufficient when BookE hardware tablewalk is used.  A full sync is specified
by the architecture.

For userspace mappings, OTOH, we generally already have an lwsync due to
locking, and if we occasionally take a spurious fault due to not having a
full sync with hardware tablewalk, it will not be fatal because we will
retry rather than oops.
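
For illustration only (not part of the patch): a minimal sketch of the
ordering hazard described above, assuming a hypothetical driver whose probe
function publishes an ioremap()ed pointer through a shared variable.  The
names my_probe and shared_regs are made up for this sketch; only ioremap()
and the kernel PTE write path (map_page()/map_kernel_page()) come from the
patch.  Without a full sync after the PTE store, a reader on another CPU
using BookE hardware tablewalk could dereference shared_regs before the new
mapping is guaranteed to be visible.

#include <linux/io.h>
#include <linux/pci.h>

static void __iomem *shared_regs;	/* read by code on other CPUs */

/* Hypothetical driver probe, for illustration only. */
static int my_probe(struct pci_dev *pdev)
{
	void __iomem *regs;

	/* ioremap() ends up in map_page()/map_kernel_page(), writing a PTE. */
	regs = ioremap(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0));
	if (!regs)
		return -ENOMEM;

	/*
	 * Publish the pointer with a plain store: there is no barrier here,
	 * so ordering between the PTE write and this store relies entirely
	 * on the barrier added at the end of map_page()/map_kernel_page().
	 */
	shared_regs = regs;

	return 0;
}
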
Signed-off-by: Scott Wood <scottwood@freescale.com>
---
v4: no change

 arch/powerpc/mm/pgtable_32.c |  1 +
 arch/powerpc/mm/pgtable_64.c | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 5b96017..343a87f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -299,6 +299,7 @@ int map_page(unsigned long va, phys_addr_t pa, int flags)
 		set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT,
 						     __pgprot(flags)));
 	}
+	smp_wmb();
 	return err;
 }
 
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 02e8681..7551382 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -153,6 +153,18 @@ int map_kernel_page(unsigned long ea, unsigned long pa, int flags)
 		}
 #endif /* !CONFIG_PPC_MMU_NOHASH */
 	}
+
+#ifdef CONFIG_PPC_BOOK3E_64
+	/*
+	 * With hardware tablewalk, a sync is needed to ensure that
+	 * subsequent accesses see the PTE we just wrote.  Unlike userspace
+	 * mappings, we can't tolerate spurious faults, so make sure
+	 * the new PTE will be seen the first time.
+	 */
+	mb();
+#else
+	smp_wmb();
+#endif
 	return 0;
 }