From patchwork Fri Oct 13 17:32:11 2017
From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org, ard.biesheuvel@linaro.org, mark.rutland@arm.com,
    will.deacon@arm.com, catalin.marinas@arm.com, sam@ravnborg.org,
    mgorman@techsingularity.net, akpm@linux-foundation.org,
    steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    bob.picco@oracle.com
Subject: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
Date: Fri, 13 Oct 2017 13:32:11 -0400
Message-Id: <20171013173214.27300-9-pasha.tatashin@oracle.com>
In-Reply-To: <20171013173214.27300-1-pasha.tatashin@oracle.com>
References: <20171013173214.27300-1-pasha.tatashin@oracle.com>

During early boot, kasan uses vmemmap_populate() to establish its shadow
memory. But that interface is intended for populating struct pages. With
this series, vmemmap memory is no longer zeroed during allocation, while
kasan expects its shadow memory to be zeroed. Resolve the mismatch by
adding kasan_map_populate(), a new interface that allocates and maps kasan
shadow memory and also zeroes it.
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm64/mm/kasan_init.c | 72 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 66 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..cb4af2951c90 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -28,6 +28,66 @@
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
+/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
+static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
+                                        int node)
+{
+        unsigned long addr, pfn, next;
+        unsigned long long size;
+        pgd_t *pgd;
+        pud_t *pud;
+        pmd_t *pmd;
+        pte_t *pte;
+        int ret;
+
+        ret = vmemmap_populate(start, end, node);
+        /*
+         * We might have partially populated memory, so check for no entries,
+         * and zero only those that actually exist.
+         */
+        for (addr = start; addr < end; addr = next) {
+                pgd = pgd_offset_k(addr);
+                if (pgd_none(*pgd)) {
+                        next = pgd_addr_end(addr, end);
+                        continue;
+                }
+
+                pud = pud_offset(pgd, addr);
+                if (pud_none(*pud)) {
+                        next = pud_addr_end(addr, end);
+                        continue;
+                }
+                if (pud_sect(*pud)) {
+                        /* This is PUD size page */
+                        next = pud_addr_end(addr, end);
+                        size = PUD_SIZE;
+                        pfn = pud_pfn(*pud);
+                } else {
+                        pmd = pmd_offset(pud, addr);
+                        if (pmd_none(*pmd)) {
+                                next = pmd_addr_end(addr, end);
+                                continue;
+                        }
+                        if (pmd_sect(*pmd)) {
+                                /* This is PMD size page */
+                                next = pmd_addr_end(addr, end);
+                                size = PMD_SIZE;
+                                pfn = pmd_pfn(*pmd);
+                        } else {
+                                pte = pte_offset_kernel(pmd, addr);
+                                next = addr + PAGE_SIZE;
+                                if (pte_none(*pte))
+                                        continue;
+                                /* This is base size page */
+                                size = PAGE_SIZE;
+                                pfn = pte_pfn(*pte);
+                        }
+                }
+                memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
+        }
+        return ret;
+}
+
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -161,11 +221,11 @@ void __init kasan_init(void)
 
         clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-        vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-                         pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+        kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+                           pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
         /*
-         * vmemmap_populate() has populated the shadow region that covers the
+         * kasan_map_populate() has populated the shadow region that covers the
          * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
          * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
          * kasan_populate_zero_shadow() from replacing the page table entries
@@ -191,9 +251,9 @@ void __init kasan_init(void)
                 if (start >= end)
                         break;
 
-                vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-                                 (unsigned long)kasan_mem_to_shadow(end),
-                                 pfn_to_nid(virt_to_pfn(start)));
+                kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+                                   (unsigned long)kasan_mem_to_shadow(end),
+                                   pfn_to_nid(virt_to_pfn(start)));
         }
 
         /*
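
A usage-level sketch may help reviewers see the intended call pattern:
populate, then zero exactly what was actually mapped. The caller below is
hypothetical (example_populate_shadow() and its panic message are invented
for illustration; the real call sites are the two kasan_init() hunks above):

/* Hypothetical caller: map and zero kasan shadow for [mem_start, mem_end) */
static void __init example_populate_shadow(void *mem_start, void *mem_end,
                                           int node)
{
        unsigned long shadow_start =
                (unsigned long)kasan_mem_to_shadow(mem_start);
        unsigned long shadow_end =
                (unsigned long)kasan_mem_to_shadow(mem_end);

        /*
         * Unlike a bare vmemmap_populate() call, kasan_map_populate()
         * zeroes whatever it actually mapped, so the shadow is clean
         * even though vmemmap allocations are no longer pre-zeroed.
         */
        if (kasan_map_populate(shadow_start, shadow_end, node))
                panic("kasan: failed to populate shadow region\n");
}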