From patchwork Mon Aug 5 17:06:04 2013
X-Patchwork-Submitter: Kirill Tkhai
X-Patchwork-Id: 264713
X-Patchwork-Delegate: davem@davemloft.net
From: Kirill Tkhai
To: "sparclinux@vger.kernel.org"
Cc: David Miller
Subject: [PATCH] sparc64: Flush locked TLB entries used to map initmem
Message-Id: <48681375722364@web3g.yandex.ru>
Date: Mon, 05 Aug 2013 21:06:04 +0400

Check whether __init_begin <= (some 4MB-aligned address) < __init_end.
If so, flush the corresponding locked 4MB TLB entry.

Move the BSS to the middle of the image so that initmem becomes the last
range of the mapped kernel area.

This is not a likely case, but it can sometimes be useful.

I use flush_tlb_kernel_range(), which flushes 8K pages sequentially; only
the first flush actually has an effect, since it already removes the whole
locked 4MB entry. It does not seem necessary to introduce a new function
for a single use at boot time.
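For illustration only (not part of the patch), here is a stand-alone
user-space sketch of the arithmetic behind that check; the addresses in
main() are made up:

/* Stand-alone model of the check: a locked 4MB entry is flushed only
 * when some 4MB-aligned address A satisfies __init_begin <= A < __init_end.
 */
#include <stdio.h>

#define SZ_4M		(1UL << 22)
/* power-of-two round_up(), same result as the kernel macro here */
#define round_up(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static void check(unsigned long init_begin, unsigned long init_end)
{
	unsigned long begin = round_up(init_begin, SZ_4M);
	unsigned long end = round_up(init_end, SZ_4M);

	if (begin >= end)
		printf("[%016lx, %016lx): nothing to flush\n",
		       init_begin, init_end);
	else
		printf("[%016lx, %016lx): flush [%016lx, %016lx)\n",
		       init_begin, init_end, begin, end);
}

int main(void)
{
	/* initmem shares its 4MB page with earlier kernel data:
	 * no 4MB-aligned address falls inside it, nothing to flush */
	check(0x0000000000590000UL, 0x00000000005a8000UL);
	/* initmem spans the 4MB boundary at 0x800000: the locked entry
	 * there maps initmem only (BSS now precedes it), so flush it */
	check(0x00000000007f0000UL, 0x0000000000828000UL);
	return 0;
}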
Signed-off-by: Kirill Tkhai
CC: David Miller
---
 arch/sparc/kernel/head_64.S     |    4 ++--
 arch/sparc/kernel/vmlinux.lds.S |    3 ++-
 arch/sparc/mm/init_64.c         |   17 +++++++++++++++++
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index 26b706a..81dbb74 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -681,8 +681,8 @@ tlb_fixup_done:
 	/* Clear the bss */
 	sethi	%hi(__bss_start), %o0
 	or	%o0, %lo(__bss_start), %o0
-	sethi	%hi(_end), %o1
-	or	%o1, %lo(_end), %o1
+	sethi	%hi(__bss_stop), %o1
+	or	%o1, %lo(__bss_stop), %o1
 	call	__bzero
 	 sub	%o1, %o0, %o1
 
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 0bacceb..be57226 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -64,6 +64,8 @@ SECTIONS
 	/* End of data section */
 	_edata = .;
 
+	BSS_SECTION(0, 0, 0)
+
 	.fixup : {
 		__start___fixup = .;
 		*(.fixup)
@@ -141,7 +143,6 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 	__init_end = .;
 
-	BSS_SECTION(0, 0, 0)
 	_end = . ;
 
 	STABS_DEBUG
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index ed82eda..52bfcf8 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2079,6 +2079,21 @@ void __init mem_init(void)
 		cheetah_ecache_flush_init();
 }
 
+/* Flush locked TLB entry (entries) if it is used to map initmem range
+ * only, i.e. __init_begin <= (some 4Mb aligned address) < __init_end.
+ */
+static void flush_initmem_tlb_entries(void)
+{
+	unsigned long begin = round_up((unsigned long)(__init_begin), 1 << 22);
+	unsigned long end = round_up((unsigned long)(__init_end), 1 << 22);
+
+	if (begin >= end)
+		return;
+
+	pr_info("TLB: Flushing initmem range [%016lx, %016lx]\n", begin, end-1);
+	flush_tlb_kernel_range(begin, end);
+}
+
 void free_initmem(void)
 {
 	unsigned long addr, initend;
@@ -2108,6 +2123,8 @@ void free_initmem(void)
 		if (do_free)
 			free_reserved_page(virt_to_page(page));
 	}
+
+	flush_initmem_tlb_entries();
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD