From patchwork Wed Apr 29 12:11:16 2020
X-Patchwork-Submitter: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 1279280
From: Mike Rapoport <rppt@kernel.org>
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/20] m68k: mm: simplify detection of memory zone boundaries
Date: Wed, 29 Apr 2020 15:11:16 +0300
Message-Id: <20200429121126.17989-11-rppt@kernel.org>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200429121126.17989-1-rppt@kernel.org>
References: <20200429121126.17989-1-rppt@kernel.org>
List-Id: Linux on PowerPC Developers Mail List
Cc: Rich Felker, linux-ia64@vger.kernel.org, linux-doc@vger.kernel.org,
    Catalin Marinas, Heiko Carstens, Michal Hocko, "James E.J. Bottomley",
    Max Filippov, Guo Ren, linux-csky@vger.kernel.org,
    linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-hexagon@vger.kernel.org, linux-riscv@lists.infradead.org,
    Mike Rapoport, Greg Ungerer, linux-arch@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-c6x-dev@linux-c6x.org, Baoquan He,
    Jonathan Corbet, linux-sh@vger.kernel.org, Helge Deller, x86@kernel.org,
    Russell King, Ley Foon Tan, Yoshinori Sato, Geert Uytterhoeven,
    linux-arm-kernel@lists.infradead.org, Mark Salter, Matt Turner,
    linux-snps-arc@lists.infradead.org, uclinux-h8-devel@lists.sourceforge.jp,
    linux-xtensa@linux-xtensa.org, linux-alpha@vger.kernel.org,
    linux-um@lists.infradead.org, linux-m68k@lists.linux-m68k.org, Tony Luck,
    Qian Cai, Greentime Hu, Paul Walmsley, Stafford Horne, Guan Xuetao,
    Hoan Tran, Michal Simek, Thomas Bogendoerfer, Brian Cain, Nick Hu,
    linux-mm@kvack.org, Vineet Gupta, linux-mips@vger.kernel.org,
    openrisc@lists.librecores.org, Richard Weinberger, Andrew Morton,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller"

From: Mike Rapoport

The free_area_init() function only requires the maximal PFN for each of
the supported zones rather than the calculation of the actual zone
sizes and the sizes of the holes between the zones. After the removal
of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all
architectures. Using this function instead of free_area_init_node()
simplifies the detection of the zone boundaries.

Signed-off-by: Mike Rapoport
---
A short illustrative sketch of the two calling conventions is appended
below the diff; it is not part of the patch.

 arch/m68k/mm/motorola.c | 11 +++++------
 arch/m68k/mm/sun3mmu.c  | 10 +++-------
 2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 84ab5963cabb..904c2a663977 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -365,7 +365,7 @@ static void __init map_node(int node)
  */
 void __init paging_init(void)
 {
-	unsigned long zones_size[MAX_NR_ZONES] = { 0, };
+	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
 	unsigned long min_addr, max_addr;
 	unsigned long addr;
 	int i;
@@ -448,11 +448,10 @@ void __init paging_init(void)
 #ifdef DEBUG
 	printk ("before free_area_init\n");
 #endif
-	for (i = 0; i < m68k_num_memory; i++) {
-		zones_size[ZONE_DMA] = m68k_memory[i].size >> PAGE_SHIFT;
-		free_area_init_node(i, zones_size,
-				    m68k_memory[i].addr >> PAGE_SHIFT, NULL);
+	for (i = 0; i < m68k_num_memory; i++)
 		if (node_present_pages(i))
 			node_set_state(i, N_NORMAL_MEMORY);
-	}
+
+	max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
+	free_area_init(max_zone_pfn);
 }
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index eca1c46bb90a..5d8d956d9329 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -42,7 +42,7 @@ void __init paging_init(void)
 	unsigned long address;
 	unsigned long next_pgtable;
 	unsigned long bootmem_end;
-	unsigned long zones_size[MAX_NR_ZONES] = { 0, };
+	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
 	unsigned long size;
 
 	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
@@ -89,14 +89,10 @@ void __init paging_init(void)
 	current->mm = NULL;
 
 	/* memory sizing is a hack stolen from motorola.c..  hope it works for us */
-	zones_size[ZONE_DMA] = ((unsigned long)high_memory - PAGE_OFFSET) >> PAGE_SHIFT;
+	max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
 
 	/* I really wish I knew why the following change made things better...  -- Sam */
-/*	free_area_init(zones_size); */
-	free_area_init_node(0, zones_size,
-			   (__pa(PAGE_OFFSET) >> PAGE_SHIFT) + 1, NULL);
+	free_area_init(max_zone_pfn);
 
 
 }
-
-
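
Below is a small, self-contained userspace sketch of the bookkeeping
change described in the commit message. It is illustration only: the
bank table, the PAGE_SHIFT value and the zone indices are invented for
the example, and it mirrors the shape of the old free_area_init_node()
call versus the new free_area_init(max_zone_pfn) call rather than
copying the kernel code line for line.

/*
 * Illustration only -- not kernel code. The bank layout and constants
 * are made up; the point is the contrast in bookkeeping: the old path
 * needed a zones_size[] plus a start PFN per node, the new path only
 * needs the highest PFN of each zone.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define MAX_NR_ZONES	3
#define ZONE_DMA	0

struct mem_bank {
	unsigned long addr;	/* physical start of the bank */
	unsigned long size;	/* bank size in bytes */
};

/* stand-in for a per-board memory bank table such as m68k_memory[] */
static const struct mem_bank banks[] = {
	{ 0x00000000UL, 32UL << 20 },
	{ 0x02000000UL, 32UL << 20 },
};

#define NUM_BANKS (sizeof(banks) / sizeof(banks[0]))

int main(void)
{
	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
	unsigned long end_of_dram = 0;
	unsigned long i;

	/* Old style: a zones_size[]/start-PFN pair computed per node. */
	for (i = 0; i < NUM_BANKS; i++) {
		unsigned long zone_pages = banks[i].size >> PAGE_SHIFT;
		unsigned long start_pfn = banks[i].addr >> PAGE_SHIFT;

		printf("node %lu: start_pfn=%lu ZONE_DMA pages=%lu\n",
		       i, start_pfn, zone_pages);
	}

	/* New style: only the end of DRAM matters, expressed as a PFN. */
	for (i = 0; i < NUM_BANKS; i++)
		if (banks[i].addr + banks[i].size > end_of_dram)
			end_of_dram = banks[i].addr + banks[i].size;

	max_zone_pfn[ZONE_DMA] = end_of_dram >> PAGE_SHIFT;
	printf("max_zone_pfn[ZONE_DMA]=%lu\n", max_zone_pfn[ZONE_DMA]);

	return 0;
}

The per-node zones_size[] arrays can go away because, once the generic
code is given only the per-zone ceiling PFNs, it derives the node
extents and the holes between zones from memblock on its own.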