From patchwork Fri May 5 17:03:14 2017
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 759091
X-Patchwork-Delegate: davem@davemloft.net
From: Pavel Tatashin
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, borntraeger@de.ibm.com,
	heiko.carstens@de.ibm.com, davem@davemloft.net
Subject: [v3 7/9] x86: teach x86 not to zero struct pages memory
Date: Fri, 5 May 2017 13:03:14 -0400
Message-Id: <1494003796-748672-8-git-send-email-pasha.tatashin@oracle.com>
In-Reply-To: <1494003796-748672-1-git-send-email-pasha.tatashin@oracle.com>
References: <1494003796-748672-1-git-send-email-pasha.tatashin@oracle.com>

When the deferred struct page initialization feature is used, most
"struct page"s are initialized after the other CPUs have started, so
that work benefits from running in parallel. However, all of the memory
allocated for "struct page"s is still zeroed by the boot CPU.

This patch addresses that on x86 by deferring the zeroing of
"struct page"s until they are initialized.

Signed-off-by: Pavel Tatashin
Reviewed-by: Shannon Nelson
---
 arch/x86/mm/init_64.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 839e5d4..332a21e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1276,7 +1276,7 @@ static int __meminit vmemmap_populate_hugepages(unsigned long start,
 			void *p;
 
 			p = __vmemmap_alloc_block_buf(PMD_SIZE, node, altmap,
-						      true);
+						      VMEMMAP_ZERO);
 			if (p) {
 				pte_t entry;
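
For context, and not part of this patch: the diff replaces the hard-coded
"true" zeroing argument of __vmemmap_alloc_block_buf() with VMEMMAP_ZERO,
which earlier patches in this series presumably define so that the buffer
is only pre-zeroed when deferred struct page initialization is not in use.
A minimal sketch of such a definition, assuming CONFIG_DEFERRED_STRUCT_PAGE_INIT
is the deciding option and that the deferred init path zeroes each struct
page as it is initialized (the exact header location and macro values are
assumptions, not taken from this mail):

	/*
	 * Sketch only: when struct pages are zeroed later, in the
	 * (possibly deferred) initialization path, there is no need to
	 * zero the vmemmap buffer at allocation time on the boot CPU.
	 */
	#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
	#define VMEMMAP_ZERO	false
	#else
	#define VMEMMAP_ZERO	true
	#endif

With a definition along these lines, the call site above keeps its existing
behaviour when deferred initialization is disabled, and skips the boot-CPU
zeroing of the "struct page" area when it is enabled.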