From patchwork Thu Jul 13 09:51:55 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 1807189
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Kefeng Wang, x86@kernel.org, loongarch@lists.linux.dev, Peter Zijlstra,
    Catalin Marinas, Dave Hansen, WANG Xuerui, Will Deacon, Alexander Gordeev,
    linux-s390@vger.kernel.org, Huacai Chen, Russell King, Ingo Molnar,
    Gerald Schaefer, Christian Borntraeger, Albert Ou, Vasily Gorbik,
    Heiko Carstens, Nicholas Piggin, Borislav Petkov, Andy Lutomirski,
    Paul Walmsley, Thomas Gleixner, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    Palmer Dabbelt, Sven Schnelle, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH rfc -next 00/10] mm: convert to generic VMA lock-based page fault
Date: Thu, 13 Jul 2023 17:51:55 +0800
Message-ID: <20230713095155.189443-1-wangkefeng.wang@huawei.com>

Add a generic VMA lock-based page fault handler in the mm core and convert
the architectures to use it, which eliminates the
duplicated code in each architecture's fault path. With a shared handler,
new features and bugfixes no longer have to be applied to every architecture
separately; this series also fixes the riscv fault path, which missed the
change from commit 38b3aec8e8d2 ("mm: drop per-VMA lock when returning
VM_FAULT_RETRY or VM_FAULT_COMPLETED"). Finally, the feature is enabled on
ARM32 and LoongArch as well.

The series is based on next-20230713 and is build-tested only (no LoongArch
compiler was available, so LoongArch is excluded from the build test). A
rough, illustrative sketch of the intended helper is included after the
diffstat below.

Kefeng Wang (10):
  mm: add a generic VMA lock-based page fault handler
  x86: mm: use try_vma_locked_page_fault()
  arm64: mm: use try_vma_locked_page_fault()
  s390: mm: use try_vma_locked_page_fault()
  powerpc: mm: use try_vma_locked_page_fault()
  riscv: mm: use try_vma_locked_page_fault()
  ARM: mm: try VMA lock-based page fault handling first
  loongarch: mm: cleanup __do_page_fault()
  loongarch: mm: add access_error() helper
  loongarch: mm: try VMA lock-based page fault handling first

 arch/arm/Kconfig          |  1 +
 arch/arm/mm/fault.c       | 15 ++++++-
 arch/arm64/mm/fault.c     | 28 +++---------
 arch/loongarch/Kconfig    |  1 +
 arch/loongarch/mm/fault.c | 92 ++++++++++++++++++++++++---------------
 arch/powerpc/mm/fault.c   | 54 ++++++++++-------------
 arch/riscv/mm/fault.c     | 38 +++++++---------
 arch/s390/mm/fault.c      | 23 +++-------
 arch/x86/mm/fault.c       | 39 +++++++----------
 include/linux/mm.h        | 28 ++++++++++++
 mm/memory.c               | 42 ++++++++++++++++++
 11 files changed, 206 insertions(+), 155 deletions(-)
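
For reviewers who have not followed the per-VMA lock work, the sketch below
shows roughly the shape such a generic helper could take, expressed only in
terms of the per-VMA lock API that already exists in next-20230713
(lock_vma_under_rcu(), vma_end_read(), FAULT_FLAG_VMA_LOCK and the
VMA_LOCK_* vmstat events). It is an illustration, not the interface added
by patch 1: the bool-return-plus-out-parameter convention and the placement
of the access_error()-style hook are assumptions and may differ from the
actual implementation.

/*
 * Illustrative sketch only -- the real try_vma_locked_page_fault() in
 * patch 1 may have a different signature and return convention.
 *
 * Returns false when the VMA-locked path was not attempted or had to be
 * abandoned, in which case the caller should fall back to the usual
 * mmap_read_lock() path; returns true (with *fault filled in) when the
 * fault was handled under the per-VMA lock.
 */
#include <linux/mm.h>
#include <linux/vmstat.h>

static bool try_vma_locked_page_fault(struct mm_struct *mm,
				      unsigned long address,
				      unsigned int flags,
				      struct pt_regs *regs,
				      vm_fault_t *fault)
{
	struct vm_area_struct *vma;

	/* Only user faults are attempted under the per-VMA lock. */
	if (!(flags & FAULT_FLAG_USER))
		return false;

	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		return false;

	/*
	 * A per-architecture access_error()-style hook would be checked
	 * here; on a permission mismatch the helper would do
	 * vma_end_read(vma) and return false to retry under mmap_lock.
	 */

	*fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);

	/*
	 * Since commit 38b3aec8e8d2, handle_mm_fault() already drops the
	 * VMA lock when it returns VM_FAULT_RETRY or VM_FAULT_COMPLETED,
	 * so only release it for the other outcomes.
	 */
	if (!(*fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

	if (*fault & VM_FAULT_RETRY) {
		/*
		 * Fall back to the mmap_lock path (the caller still needs
		 * its usual fault_signal_pending() quick exit first).
		 */
		count_vm_vma_lock_event(VMA_LOCK_RETRY);
		return false;
	}

	count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
	return true;
}

An architecture's fault handler would then call this before taking
mmap_read_lock() and only fall through to the existing mmap_lock-protected
retry loop when it returns false; the per-architecture conversions in this
series are expected to follow that pattern.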