From patchwork Thu Sep 14 22:35:12 2017
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 813981
X-Patchwork-Delegate: davem@davemloft.net
From: Pavel Tatashin
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
    heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
    mhocko@kernel.org, ard.biesheuvel@linaro.org, will.deacon@arm.com,
    catalin.marinas@arm.com, sam@ravnborg.org, mgorman@techsingularity.net,
    Steven.Sistare@oracle.com, daniel.m.jordan@oracle.com, bob.picco@oracle.com
Subject: [PATCH v8 06/11] mm: zero struct pages during initialization
Date: Thu, 14 Sep 2017 18:35:12 -0400
Message-Id: <20170914223517.8242-7-pasha.tatashin@oracle.com>
In-Reply-To: <20170914223517.8242-1-pasha.tatashin@oracle.com>
References: <20170914223517.8242-1-pasha.tatashin@oracle.com>
X-Mailer: git-send-email 2.14.1

Zero each struct page as part of initializing its other fields in
__init_single_page().

Single-thread performance, collected on an Intel(R) Xeon(R) CPU E7-8895 v3
@ 2.60GHz with 1T of memory (268400646 pages in 8 nodes):

                         BASE            FIX
sparse_init        11.244671836s   0.007199623s
zone_sizes_init     4.879775891s   8.355182299s
                   -----------------------------
Total              16.124447727s   8.362381922s

sparse_init is where the memory for struct pages was zeroed; this patch
moves the zeroing into __init_single_page(), which is called from
zone_sizes_init(). zone_sizes_init() becomes slower because it now does
the zeroing, but the total time roughly halves, because each struct page
is now touched once, during initialization, instead of being zeroed in a
separate pass first.
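[Editorial illustration, not part of the posted patch: the effect above can
be sketched in userspace. The struct layout, sizes, and names below are made
up; this is not kernel code. It contrasts zeroing the whole array up front
and then initializing it against zeroing each entry inline during
initialization, where each cache line is touched once instead of twice. It
is only a rough demo: the first-run pass also pays the page-fault cost, so a
careful benchmark would use separate buffers.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in for struct page: 64 bytes on LP64, like the real thing. */
struct fake_page {
	unsigned long flags;
	unsigned long pad[7];
};

#define NPAGES (1UL << 22)	/* 4M entries, 256 MiB of metadata */

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	struct fake_page *map = malloc(NPAGES * sizeof(*map));
	unsigned long i;
	double t;

	if (!map)
		return 1;

	/* BASE: zero everything in one big pass, then initialize. */
	t = now();
	memset(map, 0, NPAGES * sizeof(*map));
	for (i = 0; i < NPAGES; i++)
		map[i].flags = i;
	printf("two-pass: %.3fs\n", now() - t);

	/* FIX: zero and initialize each entry in a single pass. */
	t = now();
	for (i = 0; i < NPAGES; i++) {
		memset(&map[i], 0, sizeof(map[i]));
		map[i].flags = i;
	}
	printf("one-pass: %.3fs\n", now() - t);

	free(map);
	return 0;
}]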
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
Acked-by: Michal Hocko
---
 include/linux/mm.h | 9 +++++++++
 mm/page_alloc.c    | 1 +
 2 files changed, 10 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f8c10d336e42..50b74d628243 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -94,6 +94,15 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define mm_forbids_zeropage(X)	(0)
 #endif
 
+/*
+ * On some architectures it is expensive to call memset() for small sizes.
+ * Those architectures should provide their own implementation of "struct page"
+ * zeroing by defining this macro in <asm/pgtable.h>.
+ */
+#ifndef mm_zero_struct_page
+#define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
+#endif
+
 /*
  * Default maximum number of active map areas, this limits the number of vmas
  * per mm struct. Users can overwrite this number by sysctl but there is a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8dbd405ed94..4b630ee91430 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1170,6 +1170,7 @@ static void free_one_page(struct zone *zone,
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid)
 {
+	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
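[Editorial note: the new comment in <linux/mm.h> invites architectures where
a small memset() call is expensive to supply their own mm_zero_struct_page.
This patch adds no such override (a later patch in this series adds an
optimized sparc64 version); the following is a purely hypothetical sketch of
what one could look like, assuming a 64-byte struct page and using unrolled
word stores in place of the memset() call:

#define mm_zero_struct_page(pp) do {				\
	unsigned long *_w = (unsigned long *)(pp);		\
								\
	/* The unrolled stores below assume 64 bytes. */	\
	BUILD_BUG_ON(sizeof(struct page) != 64);		\
	_w[0] = 0; _w[1] = 0; _w[2] = 0; _w[3] = 0;		\
	_w[4] = 0; _w[5] = 0; _w[6] = 0; _w[7] = 0;		\
} while (0)

Because <linux/mm.h> defines the fallback only under #ifndef, an arch
version simply has to be visible, via the arch's <asm/pgtable.h>, before
that point.]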