From patchwork Thu Aug 3 21:23:48 2017
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 797453
X-Patchwork-Delegate: davem@davemloft.net
From: Pavel Tatashin
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org
Subject: [v5 10/15] x86/kasan: explicitly zero kasan shadow memory
Date: Thu, 3 Aug 2017 17:23:48 -0400
Message-Id: <1501795433-982645-11-git-send-email-pasha.tatashin@oracle.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1501795433-982645-1-git-send-email-pasha.tatashin@oracle.com>
References: <1501795433-982645-1-git-send-email-pasha.tatashin@oracle.com>

To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory. We must explicitly zero
the memory that is allocated by vmemmap_populate() for kasan, as this
memory does not go through the struct page initialization path.
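
As background (not part of the patch itself): the ranges being zeroed by
the patch are shadow addresses. kasan_mem_to_shadow() maps a kernel
address to its shadow byte roughly as sketched below; one shadow byte
covers an 8-byte granule, so zeroing the shadow range for a region marks
that whole region as addressable. The constant names follow the kernel's
KASAN code, but take this as an approximate sketch rather than the exact
implementation.

/*
 * Sketch: translate a kernel address into the address of the shadow
 * byte that tracks it. Shifting right by KASAN_SHADOW_SCALE_SHIFT (3)
 * selects one shadow byte per 8 bytes of memory; the arch-defined
 * KASAN_SHADOW_OFFSET relocates the result into the shadow region.
 */
static inline void *mem_to_shadow_sketch(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}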
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
 arch/x86/mm/kasan_init_64.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 02c9d7553409..7d06cf0b0b6e 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -84,6 +84,28 @@ static struct notifier_block kasan_die_notifier = {
 };
 #endif
 
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void
+zero_vemmap_populated_memory(void)
+{
+	u64 i, start, end;
+
+	for (i = 0; i < E820_MAX_ENTRIES && pfn_mapped[i].end; i++) {
+		void *kaddr_start = pfn_to_kaddr(pfn_mapped[i].start);
+		void *kaddr_end = pfn_to_kaddr(pfn_mapped[i].end);
+
+		start = (u64)kasan_mem_to_shadow(kaddr_start);
+		end = (u64)kasan_mem_to_shadow(kaddr_end);
+		memset((void *)start, 0, end - start);
+	}
+	start = (u64)kasan_mem_to_shadow(_stext);
+	end = (u64)kasan_mem_to_shadow(_end);
+	memset((void *)start, 0, end - start);
+}
+
 void __init kasan_early_init(void)
 {
 	int i;
@@ -156,6 +178,13 @@ void __init kasan_init(void)
 		pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
 		set_pte(&kasan_zero_pte[i], pte);
 	}
+
+	/*
+	 * vmemmap_populate does not zero the memory, so we need to zero it
+	 * explicitly
+	 */
+	zero_vemmap_populated_memory();
+
 	/* Flush TLBs again to be sure that write protection applied. */
 	__flush_tlb_all();
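
To see why leaving the shadow un-zeroed would break things (again not
part of the patch, only a simplified sketch): KASAN consults one shadow
byte per access, treating 0 as "all 8 bytes of the granule addressable"
and a nonzero value as a partially or fully poisoned granule. If
vmemmap_populate() hands KASAN shadow pages with stale, random contents,
ordinary loads and stores would be reported as poisoned, i.e. false
positives. The helper name and the bare constant 7 (granule size minus
one) below are illustrative assumptions, not kernel identifiers.

/*
 * Simplified sketch of KASAN's validity check for a 1-byte access:
 * shadow_byte == 0: the whole 8-byte granule is addressable;
 * 1..7: only the first shadow_byte bytes of the granule are addressable;
 * negative: the granule is poisoned (redzone, freed object, etc.).
 */
static bool access_is_valid_sketch(unsigned long addr, s8 shadow_byte)
{
	if (shadow_byte == 0)
		return true;
	/* 7 == granule size - 1 */
	return (s8)(addr & 7) < shadow_byte;
}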