From patchwork Tue Mar 24 10:40:04 2020
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 1260568
X-Patchwork-Delegate: davem@davemloft.net
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon, "David S. Miller", Peter Zijlstra, Nick Desaulniers, Matt Fleming, sparclinux@vger.kernel.org
Subject: [PATCH 3/4] sparc32: mm: Change pgtable_t type to pte_t * instead of struct page *
Date: Tue, 24 Mar 2020 10:40:04 +0000
Message-Id: <20200324104005.11279-4-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200324104005.11279-1-will@kernel.org>
References: <20200324104005.11279-1-will@kernel.org>

Change the 'pgtable_t' type for sparc32 so that it represents the
uncached virtual address of the PTE table, rather than the underlying
'struct page'. This allows us to free page table allocations smaller
than a page.
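As a rough sketch of the new convention (illustration only; the helper
name below is hypothetical and not added by this patch), a pgtable_t is
now the nocache virtual address returned by pte_alloc_one_kernel(), and
the corresponding 'struct page' is looked up only where the page-table
constructor/destructor accounting needs it, mirroring what
pte_alloc_one() and pte_free() do in this patch:

	/*
	 * Illustration only: convert a pgtable_t (now a pte_t * in the
	 * srmmu nocache area) back to its struct page. This mirrors the
	 * lookup done in pte_alloc_one()/pte_free(); the helper itself
	 * is hypothetical and is not introduced by this patch.
	 */
	static struct page *ptep_to_page(pgtable_t ptep)
	{
		unsigned long pa = __nocache_pa((unsigned long)ptep);

		return pfn_to_page(pa >> PAGE_SHIFT);
	}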
Miller" Cc: Peter Zijlstra Signed-off-by: Will Deacon --- arch/sparc/include/asm/page_32.h | 2 +- arch/sparc/include/asm/pgalloc_32.h | 6 +++--- arch/sparc/include/asm/pgtable_32.h | 11 +++++++++++ arch/sparc/mm/srmmu.c | 29 +++++++++-------------------- 4 files changed, 24 insertions(+), 24 deletions(-) diff --git a/arch/sparc/include/asm/page_32.h b/arch/sparc/include/asm/page_32.h index e76e6c006e10..721fa99abc73 100644 --- a/arch/sparc/include/asm/page_32.h +++ b/arch/sparc/include/asm/page_32.h @@ -106,7 +106,7 @@ typedef unsigned long iopgprot_t; #endif -typedef struct page *pgtable_t; +typedef pte_t *pgtable_t; #define TASK_UNMAPPED_BASE 0x50000000 diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h index 99c032424946..b772384871e9 100644 --- a/arch/sparc/include/asm/pgalloc_32.h +++ b/arch/sparc/include/asm/pgalloc_32.h @@ -50,11 +50,11 @@ static inline void free_pmd_fast(pmd_t * pmd) #define pmd_free(mm, pmd) free_pmd_fast(pmd) #define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) -void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct page *ptep); -#define pmd_pgtable(pmd) pmd_page(pmd) +#define pmd_populate(mm, pmd, pte) pmd_set(pmd, pte) +#define pmd_pgtable(pmd) (pgtable_t)__pmd_page(pmd) void pmd_set(pmd_t *pmdp, pte_t *ptep); -#define pmd_populate_kernel(MM, PMD, PTE) pmd_set(PMD, PTE) +#define pmd_populate_kernel pmd_populate pgtable_t pte_alloc_one(struct mm_struct *mm); diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index 785dc9a95e70..03f4b84dc8fd 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -135,6 +135,17 @@ static inline struct page *pmd_page(pmd_t pmd) return pfn_to_page((pmd_val(pmd) & SRMMU_PTD_PMASK) >> (PAGE_SHIFT-4)); } +static inline unsigned long __pmd_page(pmd_t pmd) +{ + unsigned long v; + + if (srmmu_device_memory(pmd_val(pmd))) + BUG(); + + v = pmd_val(pmd) & SRMMU_PTD_PMASK; + return (unsigned long)__nocache_va(v << 4); +} + static inline unsigned long pud_page_vaddr(pud_t pud) { if (srmmu_device_memory(pud_val(pud))) { diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 7fa096422bd0..dfd883b53054 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -140,12 +140,6 @@ void pmd_set(pmd_t *pmdp, pte_t *ptep) set_pte((pte_t *)&pmd_val(*pmdp), __pte(SRMMU_ET_PTD | ptp)); } -void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct page *ptep) -{ - unsigned long ptp = page_to_pfn(ptep) << (PAGE_SHIFT-4); /* watch for overflow */ - set_pte((pte_t *)&pmd_val(*pmdp), __pte(SRMMU_ET_PTD | ptp)); -} - /* Find an entry in the third-level page table.. 
 pte_t *pte_offset_kernel(pmd_t *dir, unsigned long address)
 {
@@ -364,31 +358,26 @@ pgd_t *get_pgd_fast(void)
  */
 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	unsigned long pte;
+	pte_t *ptep;
 	struct page *page;
 
-	if ((pte = (unsigned long)pte_alloc_one_kernel(mm)) == 0)
+	if ((ptep = pte_alloc_one_kernel(mm)) == 0)
 		return NULL;
-	page = pfn_to_page(__nocache_pa(pte) >> PAGE_SHIFT);
+	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	if (!pgtable_pte_page_ctor(page)) {
 		__free_page(page);
 		return NULL;
 	}
-	return page;
+	return ptep;
 }
 
-void pte_free(struct mm_struct *mm, pgtable_t pte)
+void pte_free(struct mm_struct *mm, pgtable_t ptep)
 {
-	unsigned long p;
-
-	pgtable_pte_page_dtor(pte);
-	p = (unsigned long)page_address(pte);	/* Cached address (for test) */
-	if (p == 0)
-		BUG();
-	p = page_to_pfn(pte) << PAGE_SHIFT;	/* Physical address */
+	struct page *page;
 
-	/* free non cached virtual address*/
-	srmmu_free_nocache(__nocache_va(p), SRMMU_PTE_TABLE_SIZE);
+	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
+	pgtable_pte_page_dtor(page);
+	srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE);
 }
 
 /* context handling - a dynamically sized pool is used */