From patchwork Fri Oct 11 10:23:53 2024
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 1996048
From: "Tiwei Bie" <tiwei.btw@antgroup.com>
To: richard@nod.at, anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net
Cc: linux-um@lists.infradead.org, "Tiwei Bie", "Benjamin Berg"
Subject: [PATCH v2 1/2] um: Abandon the _PAGE_NEWPROT bit
Date: Fri, 11 Oct 2024 18:23:53 +0800
Message-Id: <20241011102354.1682626-2-tiwei.btw@antgroup.com>
In-Reply-To: <20241011102354.1682626-1-tiwei.btw@antgroup.com>
References: <20241011102354.1682626-1-tiwei.btw@antgroup.com>

When a PTE is updated in the page table, the _PAGE_NEWPAGE bit will
always be set, and the corresponding page will always be mapped or
unmapped depending on whether the PTE is present. The check on the
_PAGE_NEWPROT bit is therefore not reachable. Abandoning it allows us
to simplify the code and remove the unreachable paths.
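The invariant can be modeled in a few lines of plain C (a standalone
sketch with made-up names, not the kernel code): because set_pte()
always sets _PAGE_NEWPAGE, the sync pass only ever has to mmap a
present page with its current protection or munmap an absent one; a
separate "protection changed" state is never consulted.

/*
 * Standalone model of the invariant described above (illustrative
 * names only, not the kernel code).
 */
#include <stdio.h>

#define MODEL_PRESENT 0x001
#define MODEL_NEWPAGE 0x002	/* "this entry still needs syncing" */

static unsigned long model_set_pte(unsigned long pte)
{
	return pte | MODEL_NEWPAGE;	/* every PTE write marks it for sync */
}

static void model_sync_pte(unsigned long *pte)
{
	if (!(*pte & MODEL_NEWPAGE))
		return;				/* already up to date */
	if (*pte & MODEL_PRESENT)
		puts("mmap(addr, PAGE_SIZE, prot, fd, offset)");
	else
		puts("munmap(addr, PAGE_SIZE)");
	*pte &= ~MODEL_NEWPAGE;			/* pte_mkuptodate() */
}

int main(void)
{
	unsigned long pte = model_set_pte(MODEL_PRESENT);

	model_sync_pte(&pte);			/* page becomes mapped */
	pte = model_set_pte(pte & ~MODEL_PRESENT);
	model_sync_pte(&pte);			/* page gets unmapped */
	return 0;
}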
Reviewed-by: Benjamin Berg
Signed-off-by: Tiwei Bie
---
 arch/um/include/asm/pgtable.h           | 37 +++-----------
 arch/um/include/asm/tlbflush.h          |  4 +-
 arch/um/include/shared/os.h             |  2 -
 arch/um/include/shared/skas/stub-data.h |  1 -
 arch/um/kernel/skas/stub.c              | 10 ----
 arch/um/kernel/tlb.c                    | 66 +++++++++++--------
 arch/um/os-Linux/skas/mem.c             | 21 --------
 7 files changed, 36 insertions(+), 105 deletions(-)

diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index bd7a9593705f..6b1317400190 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -12,7 +12,6 @@
 
 #define _PAGE_PRESENT	0x001
 #define _PAGE_NEWPAGE	0x002
-#define _PAGE_NEWPROT	0x004
 #define _PAGE_RW	0x020
 #define _PAGE_USER	0x040
 #define _PAGE_ACCESSED	0x080
@@ -151,23 +150,12 @@ static inline int pte_newpage(pte_t pte)
 	return pte_get_bits(pte, _PAGE_NEWPAGE);
 }
 
-static inline int pte_newprot(pte_t pte)
-{
-	return(pte_present(pte) && (pte_get_bits(pte, _PAGE_NEWPROT)));
-}
-
 /*
  * =================================
  * Flags setting section.
  * =================================
  */
 
-static inline pte_t pte_mknewprot(pte_t pte)
-{
-	pte_set_bits(pte, _PAGE_NEWPROT);
-	return(pte);
-}
-
 static inline pte_t pte_mkclean(pte_t pte)
 {
 	pte_clear_bits(pte, _PAGE_DIRTY);
@@ -182,19 +170,14 @@ static inline pte_t pte_mkold(pte_t pte)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
-	if (likely(pte_get_bits(pte, _PAGE_RW)))
-		pte_clear_bits(pte, _PAGE_RW);
-	else
-		return pte;
-	return(pte_mknewprot(pte));
+	pte_clear_bits(pte, _PAGE_RW);
+	return pte;
 }
 
 static inline pte_t pte_mkread(pte_t pte)
 {
-	if (unlikely(pte_get_bits(pte, _PAGE_USER)))
-		return pte;
 	pte_set_bits(pte, _PAGE_USER);
-	return(pte_mknewprot(pte));
+	return pte;
 }
 
 static inline pte_t pte_mkdirty(pte_t pte)
@@ -211,18 +194,14 @@ static inline pte_t pte_mkyoung(pte_t pte)
 
 static inline pte_t pte_mkwrite_novma(pte_t pte)
 {
-	if (unlikely(pte_get_bits(pte, _PAGE_RW)))
-		return pte;
 	pte_set_bits(pte, _PAGE_RW);
-	return(pte_mknewprot(pte));
+	return pte;
 }
 
 static inline pte_t pte_mkuptodate(pte_t pte)
 {
 	pte_clear_bits(pte, _PAGE_NEWPAGE);
-	if(pte_present(pte))
-		pte_clear_bits(pte, _PAGE_NEWPROT);
-	return(pte);
+	return pte;
 }
 
 static inline pte_t pte_mknewpage(pte_t pte)
@@ -236,12 +215,10 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval)
 	pte_copy(*pteptr, pteval);
 
 	/* If it's a swap entry, it needs to be marked _PAGE_NEWPAGE so
-	 * fix_range knows to unmap it.  _PAGE_NEWPROT is specific to
-	 * mapped pages.
+	 * update_pte_range knows to unmap it.
 	 */
 
 	*pteptr = pte_mknewpage(*pteptr);
-	if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
 }
 
 #define PFN_PTE_SHIFT	PAGE_SHIFT
@@ -298,8 +275,6 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
 ({ pte_t pte;					\
 						\
 	pte_set_val(pte, page_to_phys(page), (pgprot));	\
-	if (pte_present(pte))				\
-		pte_mknewprot(pte_mknewpage(pte));	\
 	pte;})
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)

diff --git a/arch/um/include/asm/tlbflush.h b/arch/um/include/asm/tlbflush.h
index db997976b6ea..13a3009942be 100644
--- a/arch/um/include/asm/tlbflush.h
+++ b/arch/um/include/asm/tlbflush.h
@@ -9,8 +9,8 @@
 #include
 
 /*
- * In UML, we need to sync the TLB over by using mmap/munmap/mprotect syscalls
- * from the process handling the MM (which can be the kernel itself).
+ * In UML, we need to sync the TLB over by using mmap/munmap syscalls from
+ * the process handling the MM (which can be the kernel itself).
  *
  * To track updates, we can hook into set_ptes and flush_tlb_*.
 * With set_ptes we catch all PTE transitions where memory that was
 * unusable becomes usable.

diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
index bf539fee7831..09f8201de5db 100644
--- a/arch/um/include/shared/os.h
+++ b/arch/um/include/shared/os.h
@@ -279,8 +279,6 @@ int map(struct mm_id *mm_idp, unsigned long virt,
	unsigned long len, int prot, int phys_fd,
	unsigned long long offset);
 int unmap(struct mm_id *mm_idp, unsigned long addr, unsigned long len);
-int protect(struct mm_id *mm_idp, unsigned long addr,
-	    unsigned long len, unsigned int prot);
 
 /* skas/process.c */
 extern int is_skas_winch(int pid, int fd, void *data);

diff --git a/arch/um/include/shared/skas/stub-data.h b/arch/um/include/shared/skas/stub-data.h
index 3fbdda727373..81a4cace032c 100644
--- a/arch/um/include/shared/skas/stub-data.h
+++ b/arch/um/include/shared/skas/stub-data.h
@@ -30,7 +30,6 @@ enum stub_syscall_type {
 	STUB_SYSCALL_UNSET = 0,
 	STUB_SYSCALL_MMAP,
 	STUB_SYSCALL_MUNMAP,
-	STUB_SYSCALL_MPROTECT,
 };
 
 struct stub_syscall {

diff --git a/arch/um/kernel/skas/stub.c b/arch/um/kernel/skas/stub.c
index 5d52ffa682dc..796fc266d3bb 100644
--- a/arch/um/kernel/skas/stub.c
+++ b/arch/um/kernel/skas/stub.c
@@ -35,16 +35,6 @@ static __always_inline int syscall_handler(struct stub_data *d)
 				return -1;
 			}
 			break;
-		case STUB_SYSCALL_MPROTECT:
-			res = stub_syscall3(__NR_mprotect,
-					    sc->mem.addr, sc->mem.length,
-					    sc->mem.prot);
-			if (res) {
-				d->err = res;
-				d->syscall_data_len = i;
-				return -1;
-			}
-			break;
 		default:
 			d->err = -95; /* EOPNOTSUPP */
 			d->syscall_data_len = i;

diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
index 548af31d4111..23c1f550cd7c 100644
--- a/arch/um/kernel/tlb.c
+++ b/arch/um/kernel/tlb.c
@@ -23,9 +23,6 @@ struct vm_ops {
 		    int phys_fd, unsigned long long offset);
 	int (*unmap)(struct mm_id *mm_idp,
 		     unsigned long virt, unsigned long len);
-	int (*mprotect)(struct mm_id *mm_idp,
-			unsigned long virt, unsigned long len,
-			unsigned int prot);
 };
 
 static int kern_map(struct mm_id *mm_idp,
@@ -44,15 +41,6 @@ static int kern_unmap(struct mm_id *mm_idp,
 	return os_unmap_memory((void *)virt, len);
 }
 
-static int kern_mprotect(struct mm_id *mm_idp,
-			 unsigned long virt, unsigned long len,
-			 unsigned int prot)
-{
-	return os_protect_memory((void *)virt, len,
-				 prot & UM_PROT_READ, prot & UM_PROT_WRITE,
-				 1);
-}
-
 void report_enomem(void)
 {
 	printk(KERN_ERR "UML ran out of memory on the host side! "
@@ -65,33 +53,37 @@ static inline int update_pte_range(pmd_t *pmd, unsigned long addr,
 				   struct vm_ops *ops)
 {
 	pte_t *pte;
-	int r, w, x, prot, ret = 0;
+	int ret = 0;
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		r = pte_read(*pte);
-		w = pte_write(*pte);
-		x = pte_exec(*pte);
-		if (!pte_young(*pte)) {
-			r = 0;
-			w = 0;
-		} else if (!pte_dirty(*pte))
-			w = 0;
-
-		prot = ((r ? UM_PROT_READ : 0) | (w ? UM_PROT_WRITE : 0) |
-			(x ? UM_PROT_EXEC : 0));
-		if (pte_newpage(*pte)) {
-			if (pte_present(*pte)) {
-				__u64 offset;
-				unsigned long phys = pte_val(*pte) & PAGE_MASK;
-				int fd = phys_mapping(phys, &offset);
-
-				ret = ops->mmap(ops->mm_idp, addr, PAGE_SIZE,
-						prot, fd, offset);
-			} else
-				ret = ops->unmap(ops->mm_idp, addr, PAGE_SIZE);
-		} else if (pte_newprot(*pte))
-			ret = ops->mprotect(ops->mm_idp, addr, PAGE_SIZE, prot);
+		if (!pte_newpage(*pte))
+			continue;
+
+		if (pte_present(*pte)) {
+			__u64 offset;
+			unsigned long phys = pte_val(*pte) & PAGE_MASK;
+			int fd = phys_mapping(phys, &offset);
+			int r, w, x, prot;
+
+			r = pte_read(*pte);
+			w = pte_write(*pte);
+			x = pte_exec(*pte);
+			if (!pte_young(*pte)) {
+				r = 0;
+				w = 0;
+			} else if (!pte_dirty(*pte))
+				w = 0;
+
+			prot = (r ? UM_PROT_READ : 0) |
+			       (w ? UM_PROT_WRITE : 0) |
+			       (x ? UM_PROT_EXEC : 0);
+
+			ret = ops->mmap(ops->mm_idp, addr, PAGE_SIZE,
+					prot, fd, offset);
+		} else
+			ret = ops->unmap(ops->mm_idp, addr, PAGE_SIZE);
+
 		*pte = pte_mkuptodate(*pte);
 	} while (pte++, addr += PAGE_SIZE, ((addr < end) && !ret));
 	return ret;
@@ -180,11 +172,9 @@ int um_tlb_sync(struct mm_struct *mm)
 	if (mm == &init_mm) {
 		ops.mmap = kern_map;
 		ops.unmap = kern_unmap;
-		ops.mprotect = kern_mprotect;
 	} else {
 		ops.mmap = map;
 		ops.unmap = unmap;
-		ops.mprotect = protect;
 	}
 
 	pgd = pgd_offset(mm, addr);

diff --git a/arch/um/os-Linux/skas/mem.c b/arch/um/os-Linux/skas/mem.c
index 9a13ac23c606..d7f1814b0e5a 100644
--- a/arch/um/os-Linux/skas/mem.c
+++ b/arch/um/os-Linux/skas/mem.c
@@ -217,24 +217,3 @@ int unmap(struct mm_id *mm_idp, unsigned long addr, unsigned long len)
 
 	return 0;
 }
-
-int protect(struct mm_id *mm_idp, unsigned long addr, unsigned long len,
-	    unsigned int prot)
-{
-	struct stub_syscall *sc;
-
-	/* Compress with previous syscall if that is possible */
-	sc = syscall_stub_get_previous(mm_idp, STUB_SYSCALL_MPROTECT, addr);
-	if (sc && sc->mem.prot == prot) {
-		sc->mem.length += len;
-		return 0;
-	}
-
-	sc = syscall_stub_alloc(mm_idp);
-	sc->syscall = STUB_SYSCALL_MPROTECT;
-	sc->mem.addr = addr;
-	sc->mem.length = len;
-	sc->mem.prot = prot;
-
-	return 0;
-}
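The protect() helper removed above also shows the stub-syscall batching
pattern that map() and unmap() still use: a request that merely extends
the previous pending request of the same kind is merged rather than
queued anew. A simplified model of that pattern (hypothetical types and
names; the real queue lives in stub-data.h):

/*
 * Simplified model of the "compress with previous syscall" batching
 * seen in the removed protect() above. Illustrative only.
 */
#include <stddef.h>

enum { CALL_MMAP, CALL_MUNMAP };

struct pending_call {
	int kind;
	unsigned long addr;
	unsigned long len;
};

static struct pending_call queue[16];
static size_t queued;

static void enqueue(int kind, unsigned long addr, unsigned long len)
{
	if (queued > 0) {
		struct pending_call *prev = &queue[queued - 1];

		/* "Compress with previous syscall if that is possible" */
		if (prev->kind == kind && prev->addr + prev->len == addr) {
			prev->len += len;
			return;
		}
	}

	if (queued == sizeof(queue) / sizeof(queue[0]))
		return;	/* queue full; a real implementation would flush here */

	queue[queued].kind = kind;
	queue[queued].addr = addr;
	queue[queued].len = len;
	queued++;
}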
From patchwork Fri Oct 11 10:23:54 2024
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 1996047
From: "Tiwei Bie" <tiwei.btw@antgroup.com>
To: richard@nod.at, anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net
Cc: linux-um@lists.infradead.org, "Tiwei Bie", "Benjamin Berg"
Subject: [PATCH v2 2/2] um: Rename _PAGE_NEWPAGE to _PAGE_NEEDSYNC
Date: Fri, 11 Oct 2024 18:23:54 +0800
Message-Id: <20241011102354.1682626-3-tiwei.btw@antgroup.com>
In-Reply-To: <20241011102354.1682626-1-tiwei.btw@antgroup.com>
References: <20241011102354.1682626-1-tiwei.btw@antgroup.com>
The _PAGE_NEWPAGE bit does not really indicate that this is a new page,
but rather whether this entry needs to be synced. Renaming it to
_PAGE_NEEDSYNC makes it clearer how everything ties together.

Suggested-by: Benjamin Berg
Signed-off-by: Tiwei Bie
---
 arch/um/include/asm/page.h           |  2 +-
 arch/um/include/asm/pgtable-2level.h |  2 +-
 arch/um/include/asm/pgtable-4level.h | 14 +++++-----
 arch/um/include/asm/pgtable.h        | 38 ++++++++++++++--------------
 arch/um/kernel/tlb.c                 | 10 ++++----
 5 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/um/include/asm/page.h b/arch/um/include/asm/page.h
index f0ad80fc8c10..99aa459c7c6c 100644
--- a/arch/um/include/asm/page.h
+++ b/arch/um/include/asm/page.h
@@ -56,7 +56,7 @@ typedef struct { unsigned long pud; } pud_t;
 #define pte_set_bits(p, bits)	((p).pte |= (bits))
 #define pte_clear_bits(p, bits)	((p).pte &= ~(bits))
 #define pte_copy(to, from)	((to).pte = (from).pte)
-#define pte_is_zero(p)	(!((p).pte & ~_PAGE_NEWPAGE))
+#define pte_is_zero(p)	(!((p).pte & ~_PAGE_NEEDSYNC))
 #define pte_set_val(p, phys, prot) (p).pte = (phys | pgprot_val(prot))
 
 typedef unsigned long phys_t;

diff --git a/arch/um/include/asm/pgtable-2level.h b/arch/um/include/asm/pgtable-2level.h
index 8256ecc5b919..ab0c8dd86564 100644
--- a/arch/um/include/asm/pgtable-2level.h
+++ b/arch/um/include/asm/pgtable-2level.h
@@ -31,7 +31,7 @@
 	printk("%s:%d: bad pgd %p(%08lx).\n", __FILE__, __LINE__, &(e), \
 	       pgd_val(e))
 
-static inline int pgd_newpage(pgd_t pgd) { return 0; }
+static inline int pgd_needsync(pgd_t pgd) { return 0; }
 static inline void pgd_mkuptodate(pgd_t pgd) { }
 
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))

diff --git a/arch/um/include/asm/pgtable-4level.h b/arch/um/include/asm/pgtable-4level.h
index f912fcc16b7a..0d279caee93c 100644
--- a/arch/um/include/asm/pgtable-4level.h
+++ b/arch/um/include/asm/pgtable-4level.h
@@ -55,7 +55,7 @@
 	printk("%s:%d: bad pgd %p(%016lx).\n", __FILE__, __LINE__, &(e), \
 	       pgd_val(e))
 
-#define pud_none(x)	(!(pud_val(x) & ~_PAGE_NEWPAGE))
+#define pud_none(x)	(!(pud_val(x) & ~_PAGE_NEEDSYNC))
 #define pud_bad(x)	((pud_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
 #define pud_present(x)	(pud_val(x) & _PAGE_PRESENT)
 #define pud_populate(mm, pud, pmd) \
@@ -63,7 +63,7 @@
 
 #define set_pud(pudptr, pudval) (*(pudptr) = (pudval))
-#define p4d_none(x)	(!(p4d_val(x) & ~_PAGE_NEWPAGE))
+#define p4d_none(x)	(!(p4d_val(x) & ~_PAGE_NEEDSYNC))
 #define p4d_bad(x)	((p4d_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
 #define p4d_present(x)	(p4d_val(x) & _PAGE_PRESENT)
 #define p4d_populate(mm, p4d, pud) \
@@ -72,23 +72,23 @@
 
 #define set_p4d(p4dptr, p4dval) (*(p4dptr) = (p4dval))
 
-static inline int pgd_newpage(pgd_t pgd)
+static inline int pgd_needsync(pgd_t pgd)
 {
-	return(pgd_val(pgd) & _PAGE_NEWPAGE);
+	return pgd_val(pgd) & _PAGE_NEEDSYNC;
 }
 
-static inline void pgd_mkuptodate(pgd_t pgd) { pgd_val(pgd) &= ~_PAGE_NEWPAGE; }
+static inline void pgd_mkuptodate(pgd_t pgd) { pgd_val(pgd) &= ~_PAGE_NEEDSYNC; }
 
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
 
 static inline void pud_clear (pud_t *pud)
 {
-	set_pud(pud, __pud(_PAGE_NEWPAGE));
+	set_pud(pud, __pud(_PAGE_NEEDSYNC));
 }
 
 static inline void p4d_clear (p4d_t *p4d)
 {
-	set_p4d(p4d, __p4d(_PAGE_NEWPAGE));
+	set_p4d(p4d, __p4d(_PAGE_NEEDSYNC));
 }
 
 #define pud_page(pud) phys_to_page(pud_val(pud) & PAGE_MASK)

diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index 6b1317400190..12bfc58a8580 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -11,7 +11,7 @@
 #include
 
 #define _PAGE_PRESENT	0x001
-#define _PAGE_NEWPAGE	0x002
+#define _PAGE_NEEDSYNC	0x002
 #define _PAGE_RW	0x020
 #define _PAGE_USER	0x040
 #define _PAGE_ACCESSED	0x080
@@ -79,22 +79,22 @@ extern unsigned long end_iomem;
  */
 #define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page)
 
-#define pte_clear(mm,addr,xp) pte_set_val(*(xp), (phys_t) 0, __pgprot(_PAGE_NEWPAGE))
+#define pte_clear(mm, addr, xp) pte_set_val(*(xp), (phys_t) 0, __pgprot(_PAGE_NEEDSYNC))
 
-#define pmd_none(x)	(!((unsigned long)pmd_val(x) & ~_PAGE_NEWPAGE))
+#define pmd_none(x)	(!((unsigned long)pmd_val(x) & ~_PAGE_NEEDSYNC))
 #define pmd_bad(x)	((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
 #define pmd_present(x)	(pmd_val(x) & _PAGE_PRESENT)
-#define pmd_clear(xp)	do { pmd_val(*(xp)) = _PAGE_NEWPAGE; } while (0)
+#define pmd_clear(xp)	do { pmd_val(*(xp)) = _PAGE_NEEDSYNC; } while (0)
 
-#define pmd_newpage(x)  (pmd_val(x) & _PAGE_NEWPAGE)
-#define pmd_mkuptodate(x) (pmd_val(x) &= ~_PAGE_NEWPAGE)
+#define pmd_needsync(x)	(pmd_val(x) & _PAGE_NEEDSYNC)
+#define pmd_mkuptodate(x) (pmd_val(x) &= ~_PAGE_NEEDSYNC)
 
-#define pud_newpage(x)  (pud_val(x) & _PAGE_NEWPAGE)
-#define pud_mkuptodate(x) (pud_val(x) &= ~_PAGE_NEWPAGE)
+#define pud_needsync(x)	(pud_val(x) & _PAGE_NEEDSYNC)
+#define pud_mkuptodate(x) (pud_val(x) &= ~_PAGE_NEEDSYNC)
 
-#define p4d_newpage(x)  (p4d_val(x) & _PAGE_NEWPAGE)
-#define p4d_mkuptodate(x) (p4d_val(x) &= ~_PAGE_NEWPAGE)
+#define p4d_needsync(x)	(p4d_val(x) & _PAGE_NEEDSYNC)
+#define p4d_mkuptodate(x) (p4d_val(x) &= ~_PAGE_NEEDSYNC)
 
 #define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd) phys_to_page(pmd_val(pmd) & PAGE_MASK)
@@ -145,9 +145,9 @@ static inline int pte_young(pte_t pte)
 	return pte_get_bits(pte, _PAGE_ACCESSED);
 }
 
-static inline int pte_newpage(pte_t pte)
+static inline int pte_needsync(pte_t pte)
 {
-	return pte_get_bits(pte, _PAGE_NEWPAGE);
+	return pte_get_bits(pte, _PAGE_NEEDSYNC);
 }
 
 /*
@@ -200,13 +200,13 @@ static inline pte_t pte_mkwrite_novma(pte_t pte)
 
 static inline pte_t pte_mkuptodate(pte_t pte)
 {
-	pte_clear_bits(pte, _PAGE_NEWPAGE);
+	pte_clear_bits(pte, _PAGE_NEEDSYNC);
 	return pte;
 }
 
-static inline pte_t pte_mknewpage(pte_t pte)
+static inline pte_t pte_mkneedsync(pte_t pte)
 {
-	pte_set_bits(pte, _PAGE_NEWPAGE);
+	pte_set_bits(pte, _PAGE_NEEDSYNC);
 	return(pte);
 }
 
@@ -214,11 +214,11 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval)
 {
 	pte_copy(*pteptr, pteval);
 
-	/* If it's a swap entry, it needs to be marked _PAGE_NEWPAGE so
+	/* If it's a swap entry, it needs to be marked _PAGE_NEEDSYNC so
 	 * update_pte_range knows to unmap it.
 	 */
 
-	*pteptr = pte_mknewpage(*pteptr);
+	*pteptr = pte_mkneedsync(*pteptr);
 }
 
 #define PFN_PTE_SHIFT	PAGE_SHIFT
@@ -258,7 +258,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_PTE_SAME
 static inline int pte_same(pte_t pte_a, pte_t pte_b)
 {
-	return !((pte_val(pte_a) ^ pte_val(pte_b)) & ~_PAGE_NEWPAGE);
+	return !((pte_val(pte_a) ^ pte_val(pte_b)) & ~_PAGE_NEEDSYNC);
 }
 
 /*
@@ -308,7 +308,7 @@ extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
  *   <--------------- offset ----------------> E < type -> 0 0 0 1 0
  *
  *   E is the exclusive marker that is not stored in swap entries.
- *   _PAGE_NEWPAGE (bit 1) is always set to 1 in set_pte().
+ *   _PAGE_NEEDSYNC (bit 1) is always set to 1 in set_pte().
  */
 #define __swp_type(x)		(((x).val >> 5) & 0x1f)
 #define __swp_offset(x)		((x).val >> 11)

diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
index 23c1f550cd7c..cf7e0d4407f2 100644
--- a/arch/um/kernel/tlb.c
+++ b/arch/um/kernel/tlb.c
@@ -57,7 +57,7 @@ static inline int update_pte_range(pmd_t *pmd, unsigned long addr,
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		if (!pte_newpage(*pte))
+		if (!pte_needsync(*pte))
 			continue;
 
 		if (pte_present(*pte)) {
@@ -101,7 +101,7 @@ static inline int update_pmd_range(pud_t *pud, unsigned long addr,
 	do {
 		next = pmd_addr_end(addr, end);
 		if (!pmd_present(*pmd)) {
-			if (pmd_newpage(*pmd)) {
+			if (pmd_needsync(*pmd)) {
 				ret = ops->unmap(ops->mm_idp, addr,
 						 next - addr);
 				pmd_mkuptodate(*pmd);
@@ -124,7 +124,7 @@ static inline int update_pud_range(p4d_t *p4d, unsigned long addr,
 	do {
 		next = pud_addr_end(addr, end);
 		if (!pud_present(*pud)) {
-			if (pud_newpage(*pud)) {
+			if (pud_needsync(*pud)) {
 				ret = ops->unmap(ops->mm_idp, addr,
 						 next - addr);
 				pud_mkuptodate(*pud);
@@ -147,7 +147,7 @@ static inline int update_p4d_range(pgd_t *pgd, unsigned long addr,
 	do {
 		next = p4d_addr_end(addr, end);
 		if (!p4d_present(*p4d)) {
-			if (p4d_newpage(*p4d)) {
+			if (p4d_needsync(*p4d)) {
 				ret = ops->unmap(ops->mm_idp, addr,
 						 next - addr);
 				p4d_mkuptodate(*p4d);
@@ -181,7 +181,7 @@ int um_tlb_sync(struct mm_struct *mm)
 	do {
 		next = pgd_addr_end(addr, mm->context.sync_tlb_range_to);
 		if (!pgd_present(*pgd)) {
-			if (pgd_newpage(*pgd)) {
+			if (pgd_needsync(*pgd)) {
 				ret = ops.unmap(ops.mm_idp, addr,
 						next - addr);
 				pgd_mkuptodate(*pgd);
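As an aside on the swap-entry layout touched by the pgtable.h hunk
above, the documented bit positions can be sanity-checked with a tiny
standalone program. This is illustrative only: make_swp() is a
hypothetical packer mirroring the __swp_type/__swp_offset extractors
shown in the diff, where the type occupies bits 5-9, the offset starts
at bit 11, and bit 1 stays reserved for _PAGE_NEEDSYNC.

/* Round-trip check of the swap-entry bit layout (illustrative only). */
#include <assert.h>

#define MODEL_SWP_TYPE(val)	(((val) >> 5) & 0x1f)
#define MODEL_SWP_OFFSET(val)	((val) >> 11)

static unsigned long make_swp(unsigned long type, unsigned long offset)
{
	return (type << 5) | (offset << 11);
}

int main(void)
{
	unsigned long entry = make_swp(3, 0x1234);

	assert(MODEL_SWP_TYPE(entry) == 3);
	assert(MODEL_SWP_OFFSET(entry) == 0x1234);
	return 0;
}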