From patchwork Mon Aug  7 20:38:45 2017
X-Patchwork-Submitter: Pavel Tatashin <pasha.tatashin@oracle.com>
X-Patchwork-Id: 798875
X-Patchwork-Delegate: davem@davemloft.net
From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org, ard.biesheuvel@linaro.org, will.deacon@arm.com,
    catalin.marinas@arm.com, sam@ravnborg.org
Subject: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory
Date: Mon, 7 Aug 2017 16:38:45 -0400
Message-Id: <1502138329-123460-12-git-send-email-pasha.tatashin@oracle.com>
In-Reply-To: <1502138329-123460-1-git-send-email-pasha.tatashin@oracle.com>
References: <1502138329-123460-1-git-send-email-pasha.tatashin@oracle.com>

To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory. We must therefore
explicitly zero the memory that vmemmap_populate() allocates for kasan,
as this memory does not go through the struct page initialization path.
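For reference, the patch widens each shadow range to whole SWAPPER_BLOCK_SIZE
blocks before zeroing, because vmemmap_populate() allocates and maps backing
memory at that granularity. Below is a minimal userspace sketch of that
rounding arithmetic, assuming the 2 MiB block size used with 4K pages (16K
and 64K page configurations use PAGE_SIZE instead); the addresses are
hypothetical, not real shadow addresses:

#include <stdint.h>
#include <stdio.h>

/* Power-of-two rounding helpers mirroring the kernel's round_down/round_up */
#define round_down(x, y)  ((x) & ~((uint64_t)(y) - 1))
#define round_up(x, y)    (((x) + (y) - 1) & ~((uint64_t)(y) - 1))

int main(void)
{
	const uint64_t block = 2UL << 20;       /* assumed SWAPPER_BLOCK_SIZE: 2 MiB */
	uint64_t start = 0xffff200000123456ULL; /* hypothetical shadow start */
	uint64_t end   = 0xffff200000abcdefULL; /* hypothetical shadow end */

	/* Widen [start, end) to whole mapped blocks before the memset */
	start = round_down(start, block);
	end = round_up(end, block);

	printf("zero %llu bytes at %#llx\n",
	       (unsigned long long)(end - start), (unsigned long long)start);
	return 0;
}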
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
 arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..e78a9ecbb687 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
 		set_pgd(pgd_offset_k(start), __pgd(0));
 }
 
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void __init
+zero_vmemmap_populated_memory(void)
+{
+	struct memblock_region *reg;
+	u64 start, end;
+
+	for_each_memblock(memory, reg) {
+		start = __phys_to_virt(reg->base);
+		end = __phys_to_virt(reg->base + reg->size);
+
+		if (start >= end)
+			break;
+
+		start = (u64)kasan_mem_to_shadow((void *)start);
+		end = (u64)kasan_mem_to_shadow((void *)end);
+
+		/* Round to the start and end of the mapped pages */
+		start = round_down(start, SWAPPER_BLOCK_SIZE);
+		end = round_up(end, SWAPPER_BLOCK_SIZE);
+		memset((void *)start, 0, end - start);
+	}
+
+	start = (u64)kasan_mem_to_shadow(_text);
+	end = (u64)kasan_mem_to_shadow(_end);
+
+	/* Round to the start and end of the mapped pages */
+	start = round_down(start, SWAPPER_BLOCK_SIZE);
+	end = round_up(end, SWAPPER_BLOCK_SIZE);
+	memset((void *)start, 0, end - start);
+}
+
 void __init kasan_init(void)
 {
 	u64 kimg_shadow_start, kimg_shadow_end;
@@ -205,8 +240,15 @@ void __init kasan_init(void)
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
+
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	/*
+	 * vmemmap_populate does not zero the memory, so we need to zero it
+	 * explicitly.
+	 */
+	zero_vmemmap_populated_memory();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
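As background for the shadow ranges being zeroed above: kasan_mem_to_shadow()
compresses every 8 bytes of kernel address space into one shadow byte. A
sketch of that translation follows; KASAN_SHADOW_OFFSET is a placeholder
constant here (the real arm64 value depends on the configured VA size and is
an assumption for illustration only):

#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* 8 bytes of memory per shadow byte */
/* Placeholder offset; the real arm64 value is configuration dependent */
#define KASAN_SHADOW_OFFSET 0xdfff200000000000ULL

static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)(((uint64_t)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET);
}

Because of this 8-to-1 compression, one SWAPPER_BLOCK_SIZE block of shadow
covers eight blocks of real memory, and vmemmap_populate() maps shadow in
whole blocks; the round_down/round_up in zero_vmemmap_populated_memory()
therefore zeroes everything that was actually allocated, not just the bytes
the exact shadow range asks for.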