From patchwork Tue Oct 22 11:10:32 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000197
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com,
 hch@infradead.org, djwong@kernel.org, david@fromorbit.com,
 zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com,
 chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 01/27] ext4: remove writable userspace mappings before truncating page cache
Date: Tue, 22 Oct 2024 19:10:32 +0800
Message-ID: <20241022111059.2566137-2-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

When zeroing a range of folios on a filesystem whose block size is
smaller than the page size, the file's mapped partial blocks within one
page will be marked as unwritten. We should remove writable userspace
mappings to ensure that ext4_page_mkwrite() is called during subsequent
write access to these folios; otherwise, data written by subsequent mmap
writes may not be saved to disk.

 $mkfs.ext4 -b 1024 /dev/vdb
 $mount /dev/vdb /mnt
 $xfs_io -t -f -c "pwrite -S 0x58 0 4096" -c "mmap -rw 0 4096" \
               -c "mwrite -S 0x5a 2048 2048" -c "fzero 2048 2048" \
               -c "mwrite -S 0x59 2048 2048" -c "close" /mnt/foo

 $od -Ax -t x1z /mnt/foo
 000000 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
 *
 000800 59 59 59 59 59 59 59 59 59 59 59 59 59 59 59 59
 *
 001000

 $umount /mnt && mount /dev/vdb /mnt
 $od -Ax -t x1z /mnt/foo
 000000 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
 *
 000800 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 *
 001000
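
For reference, the xfs_io reproducer above corresponds to roughly the
following plain-syscall sequence (an illustrative sketch only, not part
of the patch; the file path and sizes are taken from the commands above
and all error handling is omitted):

 #define _GNU_SOURCE
 #include <fcntl.h>
 #include <string.h>
 #include <sys/mman.h>
 #include <unistd.h>
 #include <linux/falloc.h>

 int main(void)
 {
 	/* /mnt/foo sits on a 1k-block ext4 filesystem, the page size is 4k */
 	int fd = open("/mnt/foo", O_RDWR | O_CREAT | O_TRUNC, 0644);
 	char buf[4096];
 	char *p;

 	memset(buf, 0x58, sizeof(buf));
 	pwrite(fd, buf, 4096, 0);                         /* pwrite -S 0x58 0 4096 */
 	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
 		 MAP_SHARED, fd, 0);                      /* mmap -rw 0 4096 */
 	memset(p + 2048, 0x5a, 2048);                     /* mwrite -S 0x5a 2048 2048 */
 	fallocate(fd, FALLOC_FL_ZERO_RANGE, 2048, 2048);  /* fzero 2048 2048 */
 	memset(p + 2048, 0x59, 2048);                     /* mwrite -S 0x59 2048 2048 */
 	/*
 	 * Without this fix the last memset() does not trigger
 	 * ext4_page_mkwrite(), so the 0x59 data is lost after a remount.
 	 */
 	munmap(p, 4096);
 	close(fd);
 	return 0;
 }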
Signed-off-by: Zhang Yi
---
 fs/ext4/ext4.h    |  2 ++
 fs/ext4/extents.c |  1 +
 fs/ext4/inode.c   | 41 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 44b0d418143c..6d0267afd4c1 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3020,6 +3020,8 @@ extern int ext4_inode_attach_jinode(struct inode *inode);
 extern int ext4_can_truncate(struct inode *inode);
 extern int ext4_truncate(struct inode *);
 extern int ext4_break_layouts(struct inode *);
+extern void ext4_truncate_folios_range(struct inode *inode, loff_t start,
+				       loff_t end);
 extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
 extern void ext4_set_inode_flags(struct inode *, bool init);
 extern int ext4_alloc_da_blocks(struct inode *inode);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 34e25eee6521..2a054c3689f0 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4677,6 +4677,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	}
 
 	/* Now release the pages and zero block aligned part of pages */
+	ext4_truncate_folios_range(inode, start, end);
 	truncate_pagecache_range(inode, start, end - 1);
 
 	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 54bdd4884fe6..8b34e79112d5 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3870,6 +3871,46 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
 	return ret;
 }
 
+static inline void ext4_truncate_folio(struct inode *inode,
+				       loff_t start, loff_t end)
+{
+	unsigned long blocksize = i_blocksize(inode);
+	struct folio *folio;
+
+	if (round_up(start, blocksize) >= round_down(end, blocksize))
+		return;
+
+	folio = filemap_lock_folio(inode->i_mapping, start >> PAGE_SHIFT);
+	if (IS_ERR(folio))
+		return;
+
+	if (folio_mkclean(folio))
+		folio_mark_dirty(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+}
+
+/*
+ * When truncating a range of folios, if the block size is less than the
+ * page size, the file's mapped partial blocks within one page could be
+ * freed or converted to unwritten. We should call this function to remove
+ * writable userspace mappings so that ext4_page_mkwrite() can be called
+ * during subsequent write access to these folios.
+ */
+void ext4_truncate_folios_range(struct inode *inode, loff_t start, loff_t end)
+{
+	unsigned long blocksize = i_blocksize(inode);
+
+	if (end > inode->i_size)
+		end = inode->i_size;
+	if (start >= end || blocksize >= PAGE_SIZE)
+		return;
+
+	ext4_truncate_folio(inode, start, min(round_up(start, PAGE_SIZE), end));
+	if (end > round_up(start, PAGE_SIZE))
+		ext4_truncate_folio(inode, round_down(end, PAGE_SIZE), end);
+}
+
 static void ext4_wait_dax_page(struct inode *inode)
 {
 	filemap_invalidate_unlock(inode->i_mapping);

From patchwork Tue Oct 22 11:10:33 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000194
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca,
 jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org,
 david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com,
 yangerkun@huawei.com
Subject: [PATCH 02/27] ext4: don't explicit update times in ext4_fallocate()
Date: Tue, 22 Oct 2024 19:10:33 +0800
Message-ID: <20241022111059.2566137-3-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

After commit ad5cd4f4ee4d ("ext4: fix fallocate to use file_modified to
update permissions consistently"), we can update mtime and ctime
appropriately through file_modified() when doing zero range, collapse
range, insert range and punch hole, hence there is no need to explicitly
update times in those paths; just drop them.
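
For context, a simplified sketch of the call flow that already refreshes
the timestamps (not taken from the patch; the indirection inside
file_modified() is elided):

 	/*
 	 * ext4_fallocate()
 	 *   -> ext4_zero_range() / ext4_punch_hole() /
 	 *      ext4_collapse_range() / ext4_insert_range()
 	 *        -> file_modified()   (updates mtime/ctime when needed)
 	 *
 	 * so the explicit inode_set_mtime_to_ts() calls removed below
 	 * were redundant.
 	 */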
Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/ext4/extents.c | 4 ----
 fs/ext4/inode.c   | 1 -
 2 files changed, 5 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 2a054c3689f0..aa07b5ddaff8 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4679,7 +4679,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	/* Now release the pages and zero block aligned part of pages */
 	ext4_truncate_folios_range(inode, start, end);
 	truncate_pagecache_range(inode, start, end - 1);
-	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 
 	ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags);
@@ -4704,7 +4703,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 		goto out_mutex;
 	}
 
-	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 	if (new_size)
 		ext4_update_inode_size(inode, new_size);
 	ret = ext4_mark_inode_dirty(handle, inode);
@@ -5440,7 +5438,6 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)
 	up_write(&EXT4_I(inode)->i_data_sem);
 	if (IS_SYNC(inode))
 		ext4_handle_sync(handle);
-	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 	ret = ext4_mark_inode_dirty(handle, inode);
 	ext4_update_inode_fsync_trans(handle, inode, 1);
@@ -5550,7 +5547,6 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
 	/* Expand file to avoid data loss if there is error while shifting */
 	inode->i_size += len;
 	EXT4_I(inode)->i_disksize += len;
-	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 	ret = ext4_mark_inode_dirty(handle, inode);
 	if (ret)
 		goto out_stop;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 8b34e79112d5..f8796f7b0f94 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4085,7 +4085,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 	if (IS_SYNC(inode))
 		ext4_handle_sync(handle);
-	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
 	ret2 = ext4_mark_inode_dirty(handle, inode);
 	if (unlikely(ret2))
 		ret = ret2;

From patchwork Tue Oct 22 11:10:34 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000199
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com,
 hch@infradead.org, djwong@kernel.org, david@fromorbit.com,
 zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com,
 chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 03/27] ext4: don't write back data before punch hole in nojournal mode
Date: Tue, 22 Oct 2024 19:10:34 +0800
Message-ID: <20241022111059.2566137-4-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

There is no need to write back all data before punching a hole in
data=ordered|writeback mode, since the data will be dropped soon after
the space is removed, so just remove the filemap_write_and_wait_range()
call in these modes. However, in data=journal mode we do need to write
dirty pages out before discarding the page cache, because a crash before
the transaction that frees the data commits could otherwise expose old,
stale data.
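
Schematically, the resulting behaviour (an illustrative summary only,
keyed off the ext4_should_journal_data() check added in the diff below):

 	data=ordered / data=writeback:
 		drop the page cache over the punched range directly; the
 		cached data is about to be discarded anyway, so no
 		writeback is needed.

 	data=journal:
 		filemap_write_and_wait_range() (and checkpoint) first, so
 		that a crash before the block-freeing transaction commits
 		cannot expose old, stale data.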
Signed-off-by: Zhang Yi
---
 fs/ext4/inode.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index f8796f7b0f94..94b923afcd9c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3965,17 +3965,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 
 	trace_ext4_punch_hole(inode, offset, length, 0);
 
-	/*
-	 * Write out all dirty pages to avoid race conditions
-	 * Then release them.
-	 */
-	if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
-		ret = filemap_write_and_wait_range(mapping, offset,
-						   offset + length - 1);
-		if (ret)
-			return ret;
-	}
-
 	inode_lock(inode);
 
 	/* No need to punch hole beyond i_size */
@@ -4037,6 +4026,21 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 		ret = ext4_update_disksize_before_punch(inode, offset, length);
 		if (ret)
 			goto out_dio;
+
+		/*
+		 * For journalled data we need to write (and checkpoint) pages
+		 * before discarding page cache to avoid inconsitent data on
+		 * disk in case of crash before punching trans is committed.
+		 */
+		if (ext4_should_journal_data(inode)) {
+			ret = filemap_write_and_wait_range(mapping,
+					first_block_offset, last_block_offset);
+			if (ret)
+				goto out_dio;
+		}
+
+		ext4_truncate_folios_range(inode, first_block_offset,
+					   last_block_offset + 1);
 		truncate_pagecache_range(inode, first_block_offset,
 					 last_block_offset);
 	}

From patchwork Tue Oct 22 11:10:35 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000201
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com,
 hch@infradead.org, djwong@kernel.org, david@fromorbit.com,
 zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com,
 chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 04/27] ext4: refactor ext4_punch_hole()
Date: Tue, 22 Oct 2024 19:10:35 +0800
Message-ID: <20241022111059.2566137-5-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

The current implementation of ext4_punch_hole() contains complex
position calculations and stale error tags. To improve the code's
clarity and maintainability, clean up the code and improve its
readability by:
 a) simplifying and renaming variables;
 b) eliminating unnecessary position calculations;
 c) writing back all data in data=journal mode, and dropping the page
    cache from the original offset to the end, rather than using aligned
    blocks;
 d) renaming the stale error tags.
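
As a worked example of the new block-range computation (the numbers are
illustrative only and assume a 1024-byte block size, blkbits = 10):

 	offset = 1536, length = 3072           =>  end = 4608
 	start_lblk = round_up(1536, 1024) >> 10 = 2
 	end_lblk   = 4608 >> 10                 = 4

so whole blocks 2 and 3 are removed by ext4_ext_remove_space() or
ext4_ind_remove_space(), while the partial blocks at either edge are
zeroed by ext4_zero_partial_blocks().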
Signed-off-by: Zhang Yi
---
 fs/ext4/inode.c | 140 +++++++++++++++++++++---------------------------
 1 file changed, 62 insertions(+), 78 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 94b923afcd9c..1d128333bd06 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3955,13 +3955,14 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 {
 	struct inode *inode = file_inode(file);
 	struct super_block *sb = inode->i_sb;
-	ext4_lblk_t first_block, stop_block;
+	ext4_lblk_t start_lblk, end_lblk;
 	struct address_space *mapping = inode->i_mapping;
-	loff_t first_block_offset, last_block_offset, max_length;
-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+	loff_t max_end = EXT4_SB(sb)->s_bitmap_maxbytes - sb->s_blocksize;
+	loff_t end = offset + length;
+	unsigned long blocksize = i_blocksize(inode);
 	handle_t *handle;
 	unsigned int credits;
-	int ret = 0, ret2 = 0;
+	int ret = 0;
 
 	trace_ext4_punch_hole(inode, offset, length, 0);
 
@@ -3969,36 +3970,27 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 
 	/* No need to punch hole beyond i_size */
 	if (offset >= inode->i_size)
-		goto out_mutex;
+		goto out;
 
 	/*
-	 * If the hole extends beyond i_size, set the hole
-	 * to end after the page that contains i_size
+	 * If the hole extends beyond i_size, set the hole to end after
+	 * the page that contains i_size, and also make sure that the hole
+	 * within one block before last range.
 	 */
-	if (offset + length > inode->i_size) {
-		length = inode->i_size +
-		   PAGE_SIZE - (inode->i_size & (PAGE_SIZE - 1)) -
-		   offset;
-	}
+	if (end > inode->i_size)
+		end = round_up(inode->i_size, PAGE_SIZE);
+	if (end > max_end)
+		end = max_end;
+	length = end - offset;
 
 	/*
-	 * For punch hole the length + offset needs to be within one block
-	 * before last range. Adjust the length if it goes beyond that limit.
+	 * Attach jinode to inode for jbd2 if we do any zeroing of partial
+	 * block.
 	 */
-	max_length = sbi->s_bitmap_maxbytes - inode->i_sb->s_blocksize;
-	if (offset + length > max_length)
-		length = max_length - offset;
-
-	if (offset & (sb->s_blocksize - 1) ||
-	    (offset + length) & (sb->s_blocksize - 1)) {
-		/*
-		 * Attach jinode to inode for jbd2 if we do any zeroing of
-		 * partial block
-		 */
+	if (offset & (blocksize - 1) || end & (blocksize - 1)) {
 		ret = ext4_inode_attach_jinode(inode);
 		if (ret < 0)
-			goto out_mutex;
-
+			goto out;
 	}
 
 	/* Wait all existing dio workers, newcomers will block on i_rwsem */
@@ -4006,7 +3998,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 
 	ret = file_modified(file);
 	if (ret)
-		goto out_mutex;
+		goto out;
 
 	/*
 	 * Prevent page faults from reinstantiating pages we have released from
@@ -4016,34 +4008,24 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
-		goto out_dio;
-
-	first_block_offset = round_up(offset, sb->s_blocksize);
-	last_block_offset = round_down((offset + length), sb->s_blocksize) - 1;
+		goto out_invalidate_lock;
 
-	/* Now release the pages and zero block aligned part of pages*/
-	if (last_block_offset > first_block_offset) {
+	/*
+	 * For journalled data we need to write (and checkpoint) pages
+	 * before discarding page cache to avoid inconsitent data on
+	 * disk in case of crash before punching trans is committed.
+	 */
+	if (ext4_should_journal_data(inode)) {
+		ret = filemap_write_and_wait_range(mapping, offset, end - 1);
+	} else {
 		ret = ext4_update_disksize_before_punch(inode, offset, length);
-		if (ret)
-			goto out_dio;
-
-		/*
-		 * For journalled data we need to write (and checkpoint) pages
-		 * before discarding page cache to avoid inconsitent data on
-		 * disk in case of crash before punching trans is committed.
-		 */
-		if (ext4_should_journal_data(inode)) {
-			ret = filemap_write_and_wait_range(mapping,
-					first_block_offset, last_block_offset);
-			if (ret)
-				goto out_dio;
-		}
-
-		ext4_truncate_folios_range(inode, first_block_offset,
-					   last_block_offset + 1);
-		truncate_pagecache_range(inode, first_block_offset,
-					 last_block_offset);
+		ext4_truncate_folios_range(inode, offset, end);
 	}
+	if (ret)
+		goto out_invalidate_lock;
+
+	/* Now release the pages and zero block aligned part of pages*/
+	truncate_pagecache_range(inode, offset, end - 1);
 
 	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
 		credits = ext4_writepage_trans_blocks(inode);
@@ -4053,52 +4035,54 @@
 	if (IS_ERR(handle)) {
 		ret = PTR_ERR(handle);
 		ext4_std_error(sb, ret);
-		goto out_dio;
+		goto out_invalidate_lock;
 	}
 
-	ret = ext4_zero_partial_blocks(handle, inode, offset,
-				       length);
+	ret = ext4_zero_partial_blocks(handle, inode, offset, length);
 	if (ret)
-		goto out_stop;
-
-	first_block = (offset + sb->s_blocksize - 1) >>
-		EXT4_BLOCK_SIZE_BITS(sb);
-	stop_block = (offset + length) >> EXT4_BLOCK_SIZE_BITS(sb);
+		goto out_handle;
 
 	/* If there are blocks to remove, do it */
-	if (stop_block > first_block) {
-		ext4_lblk_t hole_len = stop_block - first_block;
+	start_lblk = round_up(offset, blocksize) >> inode->i_blkbits;
+	end_lblk = end >> inode->i_blkbits;
+
+	if (end_lblk > start_lblk) {
+		ext4_lblk_t hole_len = end_lblk - start_lblk;
 
 		down_write(&EXT4_I(inode)->i_data_sem);
 		ext4_discard_preallocations(inode);
 
-		ext4_es_remove_extent(inode, first_block, hole_len);
+		ext4_es_remove_extent(inode, start_lblk, hole_len);
 
 		if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
-			ret = ext4_ext_remove_space(inode, first_block,
-						    stop_block - 1);
+			ret = ext4_ext_remove_space(inode, start_lblk,
						    end_lblk - 1);
 		else
-			ret = ext4_ind_remove_space(handle, inode, first_block,
-						    stop_block);
+			ret = ext4_ind_remove_space(handle, inode, start_lblk,
+						    end_lblk);
+		if (ret) {
+			up_write(&EXT4_I(inode)->i_data_sem);
+			goto out_handle;
+		}
 
-		ext4_es_insert_extent(inode, first_block, hole_len, ~0,
+		ext4_es_insert_extent(inode, start_lblk, hole_len, ~0,
 				      EXTENT_STATUS_HOLE, 0);
 		up_write(&EXT4_I(inode)->i_data_sem);
 	}
-	ext4_fc_track_range(handle, inode, first_block, stop_block);
+	ext4_fc_track_range(handle, inode, start_lblk, end_lblk);
+
+	ret = ext4_mark_inode_dirty(handle, inode);
+	if (unlikely(ret))
+		goto out_handle;
+
+	ext4_update_inode_fsync_trans(handle, inode, 1);
 	if (IS_SYNC(inode))
 		ext4_handle_sync(handle);
-
-	ret2 = ext4_mark_inode_dirty(handle, inode);
-	if (unlikely(ret2))
-		ret = ret2;
-	if (ret >= 0)
-		ext4_update_inode_fsync_trans(handle, inode, 1);
-out_stop:
+out_handle:
 	ext4_journal_stop(handle);
-out_dio:
+out_invalidate_lock:
 	filemap_invalidate_unlock(mapping);
-out_mutex:
+out:
 	inode_unlock(inode);
 	return ret;
 }

From patchwork Tue Oct 22 11:10:36 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000202
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com,
 hch@infradead.org, djwong@kernel.org, david@fromorbit.com,
 zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com,
 chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 05/27] ext4: refactor ext4_zero_range()
Date: Tue, 22 Oct 2024 19:10:36 +0800
Message-ID: <20241022111059.2566137-6-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

The current implementation of ext4_zero_range() contains complex
position calculations and stale error tags. To improve the code's
clarity and maintainability, clean up the code and improve its
readability by:
 a) simplifying and renaming variables, making the style the same as
    ext4_punch_hole();
 b) eliminating unnecessary position calculations, writing back all data
    in data=journal mode, and dropping the page cache from the original
    offset to the end, rather than using aligned blocks;
 c) renaming the stale out_mutex tags.

Signed-off-by: Zhang Yi
---
 fs/ext4/extents.c | 161 +++++++++++++++++++---------------------------
 1 file changed, 65 insertions(+), 96 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index aa07b5ddaff8..f843342e5164 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4565,40 +4565,15 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = file->f_mapping;
 	handle_t *handle = NULL;
-	unsigned int max_blocks;
 	loff_t new_size = 0;
-	int ret = 0;
-	int flags;
-	int credits;
-	int partial_begin, partial_end;
-	loff_t start, end;
-	ext4_lblk_t lblk;
+	loff_t end = offset + len;
+	ext4_lblk_t start_lblk, end_lblk;
+	unsigned int blocksize = i_blocksize(inode);
 	unsigned int blkbits = inode->i_blkbits;
+	int ret, flags, credits;
 
 	trace_ext4_zero_range(inode, offset, len, mode);
 
-	/*
-	 * Round up offset. This is not fallocate, we need to zero out
-	 * blocks, so convert interior block aligned part of the range to
-	 * unwritten and possibly manually zero out unaligned parts of the
-	 * range. Here, start and partial_begin are inclusive, end and
-	 * partial_end are exclusive.
-	 */
-	start = round_up(offset, 1 << blkbits);
-	end = round_down((offset + len), 1 << blkbits);
-
-	if (start < offset || end > offset + len)
-		return -EINVAL;
-	partial_begin = offset & ((1 << blkbits) - 1);
-	partial_end = (offset + len) & ((1 << blkbits) - 1);
-
-	lblk = start >> blkbits;
-	max_blocks = (end >> blkbits);
-	if (max_blocks < lblk)
-		max_blocks = 0;
-	else
-		max_blocks -= lblk;
-
 	inode_lock(inode);
 
 	/*
@@ -4606,88 +4581,78 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	 */
 	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
 		ret = -EOPNOTSUPP;
-		goto out_mutex;
+		goto out;
 	}
 
 	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
-	    (offset + len > inode->i_size ||
-	     offset + len > EXT4_I(inode)->i_disksize)) {
-		new_size = offset + len;
+	    (end > inode->i_size || end > EXT4_I(inode)->i_disksize)) {
+		new_size = end;
 		ret = inode_newsize_ok(inode, new_size);
 		if (ret)
-			goto out_mutex;
+			goto out;
 	}
 
-	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
-
 	/* Wait all existing dio workers, newcomers will block on i_rwsem */
 	inode_dio_wait(inode);
 
 	ret = file_modified(file);
 	if (ret)
-		goto out_mutex;
-
-	/* Preallocate the range including the unaligned edges */
-	if (partial_begin || partial_end) {
-		ret = ext4_alloc_file_blocks(file,
-				round_down(offset, 1 << blkbits) >> blkbits,
-				(round_up((offset + len), 1 << blkbits) -
-				 round_down(offset, 1 << blkbits)) >> blkbits,
-				new_size, flags);
-		if (ret)
-			goto out_mutex;
-
-	}
-
-	/* Zero range excluding the unaligned edges */
-	if (max_blocks > 0) {
-		flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
-			  EXT4_EX_NOCACHE);
+		goto out;
 
-		/*
-		 * Prevent page faults from reinstantiating pages we have
-		 * released from page cache.
-		 */
-		filemap_invalidate_lock(mapping);
+	/*
+	 * Prevent page faults from reinstantiating pages we have released
+	 * from page cache.
+	 */
+	filemap_invalidate_lock(mapping);
 
-		ret = ext4_break_layouts(inode);
-		if (ret) {
-			filemap_invalidate_unlock(mapping);
-			goto out_mutex;
-		}
+	ret = ext4_break_layouts(inode);
+	if (ret)
+		goto out_invalidate_lock;
 
+	/*
+	 * For journalled data we need to write (and checkpoint) pages before
+	 * discarding page cache to avoid inconsitent data on disk in case of
+	 * crash before zeroing trans is committed.
+	 */
+	if (ext4_should_journal_data(inode)) {
+		ret = filemap_write_and_wait_range(mapping, offset, end - 1);
+	} else {
 		ret = ext4_update_disksize_before_punch(inode, offset, len);
-		if (ret) {
-			filemap_invalidate_unlock(mapping);
-			goto out_mutex;
-		}
+		ext4_truncate_folios_range(inode, offset, end);
+	}
+	if (ret)
+		goto out_invalidate_lock;
 
-		/*
-		 * For journalled data we need to write (and checkpoint) pages
-		 * before discarding page cache to avoid inconsitent data on
-		 * disk in case of crash before zeroing trans is committed.
- */ - if (ext4_should_journal_data(inode)) { - ret = filemap_write_and_wait_range(mapping, start, - end - 1); - if (ret) { - filemap_invalidate_unlock(mapping); - goto out_mutex; - } - } + /* Now release the pages and zero block aligned part of pages */ + truncate_pagecache_range(inode, offset, end - 1); - /* Now release the pages and zero block aligned part of pages */ - ext4_truncate_folios_range(inode, start, end); - truncate_pagecache_range(inode, start, end - 1); + flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT; + /* Preallocate the range including the unaligned edges */ + if (offset & (blocksize - 1) || end & (blocksize - 1)) { + ext4_lblk_t alloc_lblk = offset >> blkbits; + ext4_lblk_t len_lblk = EXT4_MAX_BLOCKS(len, offset, blkbits); - ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, - flags); - filemap_invalidate_unlock(mapping); + ret = ext4_alloc_file_blocks(file, alloc_lblk, len_lblk, + new_size, flags); if (ret) - goto out_mutex; + goto out_invalidate_lock; } - if (!partial_begin && !partial_end) - goto out_mutex; + + /* Zero range excluding the unaligned edges */ + start_lblk = round_up(offset, blocksize) >> blkbits; + end_lblk = end >> blkbits; + if (end_lblk > start_lblk) { + ext4_lblk_t zero_blks = end_lblk - start_lblk; + + flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN | EXT4_EX_NOCACHE); + ret = ext4_alloc_file_blocks(file, start_lblk, zero_blks, + new_size, flags); + if (ret) + goto out_invalidate_lock; + } + /* Finish zeroing out if it doesn't contain partial block */ + if (!(offset & (blocksize - 1)) && !(end & (blocksize - 1))) + goto out_invalidate_lock; /* * In worst case we have to writeout two nonadjacent unwritten @@ -4700,25 +4665,29 @@ static long ext4_zero_range(struct file *file, loff_t offset, if (IS_ERR(handle)) { ret = PTR_ERR(handle); ext4_std_error(inode->i_sb, ret); - goto out_mutex; + goto out_invalidate_lock; } + /* Zero out partial block at the edges of the range */ + ret = ext4_zero_partial_blocks(handle, inode, offset, len); + if (ret) + goto out_handle; + if (new_size) ext4_update_inode_size(inode, new_size); ret = ext4_mark_inode_dirty(handle, inode); if (unlikely(ret)) goto out_handle; - /* Zero out partial block at the edges of the range */ - ret = ext4_zero_partial_blocks(handle, inode, offset, len); - if (ret >= 0) - ext4_update_inode_fsync_trans(handle, inode, 1); + ext4_update_inode_fsync_trans(handle, inode, 1); if (file->f_flags & O_SYNC) ext4_handle_sync(handle); out_handle: ext4_journal_stop(handle); -out_mutex: +out_invalidate_lock: + filemap_invalidate_unlock(mapping); +out: inode_unlock(inode); return ret; } From patchwork Tue Oct 22 11:10:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000203 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=2404:9400:2221:ea00::3; helo=mail.ozlabs.org; envelope-from=srs0=cdxa=rs=vger.kernel.org=linux-ext4+bounces-4694-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXck05bwfz1xwf for ; Tue, 22 Oct 2024 14:15:04 +1100 
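Not part of the patch: a minimal userspace sketch of the case the reworked ext4_zero_range() above has to cover, a FALLOC_FL_ZERO_RANGE request whose edges are not block aligned, so the partial blocks at both ends are zeroed in place while only the interior, block-aligned part is converted to unwritten extents. The file name and the offset/length values are arbitrary.

/* zero_range_demo.c - build with: cc -o zero_range_demo zero_range_demo.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Give the file some size so the zeroed range lies inside i_size. */
	if (ftruncate(fd, 1024 * 1024)) {
		perror("ftruncate");
		return 1;
	}
	/*
	 * Offset 1000 and length 8192 are deliberately unaligned; ext4 has
	 * to zero the partial blocks at both edges and converts only the
	 * interior, block-aligned part to unwritten extents.
	 */
	if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 1000, 8192)) {
		perror("fallocate(FALLOC_FL_ZERO_RANGE)");
		return 1;
	}
	close(fd);
	return 0;
}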
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 06/27] ext4: refactor ext4_collapse_range() Date: Tue, 22 Oct 2024 19:10:37 +0800 Message-ID: <20241022111059.2566137-7-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> MIME-Version: 1.0 X-CM-SenderInfo:
d1lo6xhdqjqx5xdzvxpfor3voofrz/ From: Zhang Yi Simplify ext4_collapse_range() and align its code style with that of ext4_zero_range() and ext4_punch_hole(). Refactor it by: a) renaming variables, b) removing redundant input parameter checks and moving the remaining checks under i_rwsem in preparation for future refactoring, and c) renaming the three stale error tags. Signed-off-by: Zhang Yi --- fs/ext4/extents.c | 103 +++++++++++++++++++++------------------------- 1 file changed, 48 insertions(+), 55 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index f843342e5164..a4e95f3b5f09 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -5295,43 +5295,36 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) struct inode *inode = file_inode(file); struct super_block *sb = inode->i_sb; struct address_space *mapping = inode->i_mapping; - ext4_lblk_t punch_start, punch_stop; + loff_t end = offset + len; + ext4_lblk_t start_lblk, end_lblk; handle_t *handle; unsigned int credits; - loff_t new_size, ioffset; + loff_t start, new_size; int ret; - /* - * We need to test this early because xfstests assumes that a - * collapse range of (0, 1) will return EOPNOTSUPP if the file - * system does not support collapse range. - */ - if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) - return -EOPNOTSUPP; + trace_ext4_collapse_range(inode, offset, len); - /* Collapse range works only on fs cluster size aligned regions. */ - if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) - return -EINVAL; + inode_lock(inode); - trace_ext4_collapse_range(inode, offset, len); + /* Currently just for extent based files */ + if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { + ret = -EOPNOTSUPP; + goto out; + } - punch_start = offset >> EXT4_BLOCK_SIZE_BITS(sb); - punch_stop = (offset + len) >> EXT4_BLOCK_SIZE_BITS(sb); + /* Collapse range works only on fs cluster size aligned regions. */ + if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) { + ret = -EINVAL; + goto out; + } - inode_lock(inode); /* * There is no need to overlap collapse range with EOF, in which case * it is effectively a truncate operation */ - if (offset + len >= inode->i_size) { + if (end >= inode->i_size) { ret = -EINVAL; - goto out_mutex; - } - - /* Currently just for extent based files */ - if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { - ret = -EOPNOTSUPP; - goto out_mutex; + goto out; } /* Wait for existing dio to complete */ @@ -5339,7 +5332,7 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) ret = file_modified(file); if (ret) - goto out_mutex; + goto out; /* * Prevent page faults from reinstantiating pages we have released from @@ -5349,55 +5342,52 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) ret = ext4_break_layouts(inode); if (ret) - goto out_mmap; + goto out_invalidate_lock; /* + * Write tail of the last page before removed range and data that + * will be shifted since they will get removed from the page cache + * below. We are also protected from pages becoming dirty by + * i_rwsem and invalidate_lock. * Need to round down offset to be aligned with page size boundary * for page size > block size. */ - ioffset = round_down(offset, PAGE_SIZE); - /* - * Write tail of the last page before removed range since it will get - * removed from the page cache below. 
- */ - ret = filemap_write_and_wait_range(mapping, ioffset, offset); - if (ret) - goto out_mmap; - /* - * Write data that will be shifted to preserve them when discarding - * page cache below. We are also protected from pages becoming dirty - * by i_rwsem and invalidate_lock. - */ - ret = filemap_write_and_wait_range(mapping, offset + len, - LLONG_MAX); + start = round_down(offset, PAGE_SIZE); + ret = filemap_write_and_wait_range(mapping, start, offset); + if (!ret) + ret = filemap_write_and_wait_range(mapping, end, LLONG_MAX); if (ret) - goto out_mmap; - truncate_pagecache(inode, ioffset); + goto out_invalidate_lock; + + truncate_pagecache(inode, start); credits = ext4_writepage_trans_blocks(inode); handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits); if (IS_ERR(handle)) { ret = PTR_ERR(handle); - goto out_mmap; + goto out_invalidate_lock; } ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle); + start_lblk = offset >> inode->i_blkbits; + end_lblk = (offset + len) >> inode->i_blkbits; + down_write(&EXT4_I(inode)->i_data_sem); ext4_discard_preallocations(inode); - ext4_es_remove_extent(inode, punch_start, EXT_MAX_BLOCKS - punch_start); + ext4_es_remove_extent(inode, start_lblk, EXT_MAX_BLOCKS - start_lblk); - ret = ext4_ext_remove_space(inode, punch_start, punch_stop - 1); + ret = ext4_ext_remove_space(inode, start_lblk, end_lblk - 1); if (ret) { up_write(&EXT4_I(inode)->i_data_sem); - goto out_stop; + goto out_handle; } ext4_discard_preallocations(inode); - ret = ext4_ext_shift_extents(inode, handle, punch_stop, - punch_stop - punch_start, SHIFT_LEFT); + ret = ext4_ext_shift_extents(inode, handle, end_lblk, + end_lblk - start_lblk, SHIFT_LEFT); if (ret) { up_write(&EXT4_I(inode)->i_data_sem); - goto out_stop; + goto out_handle; } new_size = inode->i_size - len; @@ -5405,16 +5395,19 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) EXT4_I(inode)->i_disksize = new_size; up_write(&EXT4_I(inode)->i_data_sem); - if (IS_SYNC(inode)) - ext4_handle_sync(handle); ret = ext4_mark_inode_dirty(handle, inode); + if (ret) + goto out_handle; + ext4_update_inode_fsync_trans(handle, inode, 1); + if (IS_SYNC(inode)) + ext4_handle_sync(handle); -out_stop: +out_handle: ext4_journal_stop(handle); -out_mmap: +out_invalidate_lock: filemap_invalidate_unlock(mapping); -out_mutex: +out: inode_unlock(inode); return ret; } From patchwork Tue Oct 22 11:10:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000208 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=2404:9400:2221:ea00::3; helo=mail.ozlabs.org; envelope-from=srs0=xdxh=rs=vger.kernel.org=linux-ext4+bounces-4699-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXckm2zsWz1xw0 for ; Tue, 22 Oct 2024 14:15:44 +1100 (AEDT) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by gandalf.ozlabs.org (Postfix) with ESMTP id 4XXckl09tCz4w2H for ; Tue, 22 Oct 2024 14:15:43 +1100 (AEDT) Received: by gandalf.ozlabs.org 
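Again not part of the patch: a small userspace sketch of the constraints that ext4_collapse_range() above now checks under i_rwsem, namely that the range must be aligned to the filesystem cluster size and must end before i_size. The 4 KiB block/cluster size and the file name are assumptions.

/* collapse_demo.c - run on an ext4 file with a 4 KiB cluster size */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

#define BLK 4096	/* assumed fs block/cluster size */

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, 64 * BLK)) {
		perror("ftruncate");
		return 1;
	}
	/* Aligned range that ends well before i_size: expected to succeed. */
	if (fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 4 * BLK, 8 * BLK))
		perror("collapse, aligned");
	/* Unaligned length: ext4 rejects this with EINVAL. */
	if (fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 4 * BLK, BLK + 1))
		perror("collapse, unaligned");
	close(fd);
	return 0;
}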
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 07/27] ext4: refactor ext4_insert_range() Date: Tue, 22 Oct 2024 19:10:38 +0800 Message-ID: <20241022111059.2566137-8-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> MIME-Version: 1.0

From: Zhang Yi Simplify ext4_insert_range() and align its code style with that of ext4_collapse_range().
Refactor it by: a) renaming variables, b) removing redundant input parameter checks and moving the remaining checks under i_rwsem in preparation for future refactoring, and c) renaming the three stale error tags. Signed-off-by: Zhang Yi --- fs/ext4/extents.c | 101 ++++++++++++++++++++++------------------------ 1 file changed, 48 insertions(+), 53 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index a4e95f3b5f09..4e35c2415e9b 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -5428,45 +5428,37 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) handle_t *handle; struct ext4_ext_path *path; struct ext4_extent *extent; - ext4_lblk_t offset_lblk, len_lblk, ee_start_lblk = 0; + ext4_lblk_t start_lblk, len_lblk, ee_start_lblk = 0; unsigned int credits, ee_len; - int ret = 0, depth, split_flag = 0; - loff_t ioffset; - - /* - * We need to test this early because xfstests assumes that an - * insert range of (0, 1) will return EOPNOTSUPP if the file - * system does not support insert range. - */ - if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) - return -EOPNOTSUPP; - - /* Insert range works only on fs cluster size aligned regions. */ - if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) - return -EINVAL; + int ret, depth, split_flag = 0; + loff_t start; trace_ext4_insert_range(inode, offset, len); - offset_lblk = offset >> EXT4_BLOCK_SIZE_BITS(sb); - len_lblk = len >> EXT4_BLOCK_SIZE_BITS(sb); - inode_lock(inode); + /* Currently just for extent based files */ if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { ret = -EOPNOTSUPP; - goto out_mutex; + goto out; } - /* Check whether the maximum file size would be exceeded */ - if (len > inode->i_sb->s_maxbytes - inode->i_size) { - ret = -EFBIG; - goto out_mutex; + /* Insert range works only on fs cluster size aligned regions. */ + if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) { + ret = -EINVAL; + goto out; } /* Offset must be less than i_size */ if (offset >= inode->i_size) { ret = -EINVAL; - goto out_mutex; + goto out; + } + + /* Check whether the maximum file size would be exceeded */ + if (len > inode->i_sb->s_maxbytes - inode->i_size) { + ret = -EFBIG; + goto out; } /* Wait for existing dio to complete */ @@ -5474,7 +5466,7 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) ret = file_modified(file); if (ret) - goto out_mutex; + goto out; /* * Prevent page faults from reinstantiating pages we have released from @@ -5484,25 +5476,24 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) ret = ext4_break_layouts(inode); if (ret) - goto out_mmap; + goto out_invalidate_lock; /* - * Need to round down to align start offset to page size boundary - * for page size > block size. + * Write out all dirty pages. Need to round down to align start offset + * to page size boundary for page size > block size. 
*/ - ioffset = round_down(offset, PAGE_SIZE); - /* Write out all dirty pages */ - ret = filemap_write_and_wait_range(inode->i_mapping, ioffset, - LLONG_MAX); + start = round_down(offset, PAGE_SIZE); + ret = filemap_write_and_wait_range(mapping, start, LLONG_MAX); if (ret) - goto out_mmap; - truncate_pagecache(inode, ioffset); + goto out_invalidate_lock; + + truncate_pagecache(inode, start); credits = ext4_writepage_trans_blocks(inode); handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits); if (IS_ERR(handle)) { ret = PTR_ERR(handle); - goto out_mmap; + goto out_invalidate_lock; } ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle); @@ -5511,16 +5502,19 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) EXT4_I(inode)->i_disksize += len; ret = ext4_mark_inode_dirty(handle, inode); if (ret) - goto out_stop; + goto out_handle; + + start_lblk = offset >> inode->i_blkbits; + len_lblk = len >> inode->i_blkbits; down_write(&EXT4_I(inode)->i_data_sem); ext4_discard_preallocations(inode); - path = ext4_find_extent(inode, offset_lblk, NULL, 0); + path = ext4_find_extent(inode, start_lblk, NULL, 0); if (IS_ERR(path)) { up_write(&EXT4_I(inode)->i_data_sem); ret = PTR_ERR(path); - goto out_stop; + goto out_handle; } depth = ext_depth(inode); @@ -5530,16 +5524,16 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) ee_len = ext4_ext_get_actual_len(extent); /* - * If offset_lblk is not the starting block of extent, split - * the extent @offset_lblk + * If start_lblk is not the starting block of extent, split + * the extent @start_lblk */ - if ((offset_lblk > ee_start_lblk) && - (offset_lblk < (ee_start_lblk + ee_len))) { + if ((start_lblk > ee_start_lblk) && + (start_lblk < (ee_start_lblk + ee_len))) { if (ext4_ext_is_unwritten(extent)) split_flag = EXT4_EXT_MARK_UNWRIT1 | EXT4_EXT_MARK_UNWRIT2; path = ext4_split_extent_at(handle, inode, path, - offset_lblk, split_flag, + start_lblk, split_flag, EXT4_EX_NOCACHE | EXT4_GET_BLOCKS_PRE_IO | EXT4_GET_BLOCKS_METADATA_NOFAIL); @@ -5548,31 +5542,32 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) if (IS_ERR(path)) { up_write(&EXT4_I(inode)->i_data_sem); ret = PTR_ERR(path); - goto out_stop; + goto out_handle; } } ext4_free_ext_path(path); - ext4_es_remove_extent(inode, offset_lblk, EXT_MAX_BLOCKS - offset_lblk); + ext4_es_remove_extent(inode, start_lblk, EXT_MAX_BLOCKS - start_lblk); /* - * if offset_lblk lies in a hole which is at start of file, use + * if start_lblk lies in a hole which is at start of file, use * ee_start_lblk to shift extents */ ret = ext4_ext_shift_extents(inode, handle, - max(ee_start_lblk, offset_lblk), len_lblk, SHIFT_RIGHT); - + max(ee_start_lblk, start_lblk), len_lblk, SHIFT_RIGHT); up_write(&EXT4_I(inode)->i_data_sem); + if (ret) + goto out_handle; + + ext4_update_inode_fsync_trans(handle, inode, 1); if (IS_SYNC(inode)) ext4_handle_sync(handle); - if (ret >= 0) - ext4_update_inode_fsync_trans(handle, inode, 1); -out_stop: +out_handle: ext4_journal_stop(handle); -out_mmap: +out_invalidate_lock: filemap_invalidate_unlock(mapping); -out_mutex: +out: inode_unlock(inode); return ret; } From patchwork Tue Oct 22 11:10:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000204 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender 
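Likewise, a small userspace sketch (not part of the patch) of the checks ext4_insert_range() above performs under i_rwsem: the range must be cluster aligned, the offset must lie below i_size, and the resulting size must not exceed the filesystem maximum. The 4 KiB block size and the file name are assumptions.

/* insert_demo.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

#define BLK 4096	/* assumed fs block/cluster size */

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, 16 * BLK)) {
		perror("ftruncate");
		return 1;
	}
	/* Aligned insert at an offset below i_size: expected to succeed. */
	if (fallocate(fd, FALLOC_FL_INSERT_RANGE, 4 * BLK, 8 * BLK))
		perror("insert, aligned");
	/*
	 * The file is now 24 blocks long; an offset at EOF is rejected with
	 * EINVAL because the offset must be strictly below i_size.
	 */
	if (fallocate(fd, FALLOC_FL_INSERT_RANGE, 24 * BLK, 4 * BLK))
		perror("insert, offset at EOF");
	close(fd);
	return 0;
}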
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 08/27] ext4: factor out ext4_do_fallocate() Date: Tue, 22 Oct 2024 19:10:39 +0800 Message-ID: <20241022111059.2566137-9-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> MIME-Version: 1.0
From: Zhang Yi

The real work of a plain preallocation request is currently open coded in ext4_fallocate(). Factor it out into a new helper, ext4_do_fallocate(), so that it is handled like the other cases (e.g. ext4_zero_range()) that ext4_fallocate() dispatches to. This makes the code clearer; no functional changes.

Signed-off-by: Zhang Yi Reviewed-by: Jan Kara --- fs/ext4/extents.c | 125 ++++++++++++++++++++++------------------------ 1 file changed, 60 insertions(+), 65 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 4e35c2415e9b..2f727104f53d 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4692,6 +4692,58 @@ static long ext4_zero_range(struct file *file, loff_t offset, return ret; } +static long ext4_do_fallocate(struct file *file, loff_t offset, + loff_t len, int mode) +{ + struct inode *inode = file_inode(file); + loff_t end = offset + len; + loff_t new_size = 0; + ext4_lblk_t start_lblk, len_lblk; + int ret; + + trace_ext4_fallocate_enter(inode, offset, len, mode); + + start_lblk = offset >> inode->i_blkbits; + len_lblk = EXT4_MAX_BLOCKS(len, offset, inode->i_blkbits); + + inode_lock(inode); + + /* We only support preallocation for extent-based files only. */ + if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { + ret = -EOPNOTSUPP; + goto out; + } + + if (!(mode & FALLOC_FL_KEEP_SIZE) && + (end > inode->i_size || end > EXT4_I(inode)->i_disksize)) { + new_size = end; + ret = inode_newsize_ok(inode, new_size); + if (ret) + goto out; + } + + /* Wait all existing dio workers, newcomers will block on i_rwsem */ + inode_dio_wait(inode); + + ret = file_modified(file); + if (ret) + goto out; + + ret = ext4_alloc_file_blocks(file, start_lblk, len_lblk, new_size, + EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT); + if (ret) + goto out; + + if (file->f_flags & O_SYNC && EXT4_SB(inode->i_sb)->s_journal) { + ret = ext4_fc_commit(EXT4_SB(inode->i_sb)->s_journal, + EXT4_I(inode)->i_sync_tid); + } +out: + inode_unlock(inode); + trace_ext4_fallocate_exit(inode, offset, len_lblk, ret); + return ret; +} + /* * preallocate space for a file. This implements ext4's fallocate file * operation, which gets called from sys_fallocate system call.
@@ -4702,12 +4754,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) { struct inode *inode = file_inode(file); - loff_t new_size = 0; - unsigned int max_blocks; - int ret = 0; - int flags; - ext4_lblk_t lblk; - unsigned int blkbits = inode->i_blkbits; + int ret; /* * Encrypted inodes can't handle collapse range or insert @@ -4729,71 +4776,19 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) ret = ext4_convert_inline_data(inode); inode_unlock(inode); if (ret) - goto exit; + return ret; - if (mode & FALLOC_FL_PUNCH_HOLE) { + if (mode & FALLOC_FL_PUNCH_HOLE) ret = ext4_punch_hole(file, offset, len); - goto exit; - } - - if (mode & FALLOC_FL_COLLAPSE_RANGE) { + else if (mode & FALLOC_FL_COLLAPSE_RANGE) ret = ext4_collapse_range(file, offset, len); - goto exit; - } - - if (mode & FALLOC_FL_INSERT_RANGE) { + else if (mode & FALLOC_FL_INSERT_RANGE) ret = ext4_insert_range(file, offset, len); - goto exit; - } - - if (mode & FALLOC_FL_ZERO_RANGE) { + else if (mode & FALLOC_FL_ZERO_RANGE) ret = ext4_zero_range(file, offset, len, mode); - goto exit; - } - trace_ext4_fallocate_enter(inode, offset, len, mode); - lblk = offset >> blkbits; - - max_blocks = EXT4_MAX_BLOCKS(len, offset, blkbits); - flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT; - - inode_lock(inode); - - /* - * We only support preallocation for extent-based files only - */ - if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { - ret = -EOPNOTSUPP; - goto out; - } - - if (!(mode & FALLOC_FL_KEEP_SIZE) && - (offset + len > inode->i_size || - offset + len > EXT4_I(inode)->i_disksize)) { - new_size = offset + len; - ret = inode_newsize_ok(inode, new_size); - if (ret) - goto out; - } - - /* Wait all existing dio workers, newcomers will block on i_rwsem */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - goto out; - - ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags); - if (ret) - goto out; + else + ret = ext4_do_fallocate(file, offset, len, mode); - if (file->f_flags & O_SYNC && EXT4_SB(inode->i_sb)->s_journal) { - ret = ext4_fc_commit(EXT4_SB(inode->i_sb)->s_journal, - EXT4_I(inode)->i_sync_tid); - } -out: - inode_unlock(inode); - trace_ext4_fallocate_exit(inode, offset, max_blocks, ret); -exit: return ret; } From patchwork Tue Oct 22 11:10:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000196 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=2404:9400:2221:ea00::3; helo=mail.ozlabs.org; envelope-from=srs0=kuro=rs=vger.kernel.org=linux-ext4+bounces-4687-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXchK2LYHz1xwf for ; Tue, 22 Oct 2024 14:13:37 +1100 (AEDT) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by gandalf.ozlabs.org (Postfix) with ESMTP id 4XXchD0QMsz4w2L for ; Tue, 22 Oct 2024 14:13:32 +1100 (AEDT) Received: by gandalf.ozlabs.org (Postfix) id 
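To illustrate the shape ext4_fallocate() takes after the patch above, here is a standalone sketch with stub helpers (plain userspace C, not kernel code): every mode bit is routed to one helper and plain preallocation becomes just another helper instead of open-coded logic. The do_*() names are made up; only the FALLOC_FL_* mode bits are real.

#include <linux/falloc.h>
#include <stdio.h>

/* Stubs standing in for ext4_punch_hole(), ext4_collapse_range(), etc. */
static long do_punch_hole(long long off, long long len)     { (void)off; (void)len; return 0; }
static long do_collapse_range(long long off, long long len) { (void)off; (void)len; return 0; }
static long do_insert_range(long long off, long long len)   { (void)off; (void)len; return 0; }
static long do_zero_range(long long off, long long len)     { (void)off; (void)len; return 0; }
static long do_fallocate(long long off, long long len)      { (void)off; (void)len; return 0; }

/* The same if/else-if dispatch the refactored ext4_fallocate() uses. */
static long dispatch_fallocate(int mode, long long off, long long len)
{
	if (mode & FALLOC_FL_PUNCH_HOLE)
		return do_punch_hole(off, len);
	else if (mode & FALLOC_FL_COLLAPSE_RANGE)
		return do_collapse_range(off, len);
	else if (mode & FALLOC_FL_INSERT_RANGE)
		return do_insert_range(off, len);
	else if (mode & FALLOC_FL_ZERO_RANGE)
		return do_zero_range(off, len);
	return do_fallocate(off, len);
}

int main(void)
{
	printf("%ld\n", dispatch_fallocate(FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
					   0, 4096));
	return 0;
}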
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 09/27] ext4: move out inode_lock into ext4_fallocate() Date: Tue, 22 Oct 2024 19:10:40 +0800 Message-ID: <20241022111059.2566137-10-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> MIME-Version: 1.0

From: Zhang Yi Currently, all five sub-functions of ext4_fallocate() acquire the inode's i_rwsem at the beginning and release it before exiting.
This process can be simplified by factoring out the management of i_rwsem into the ext4_fallocate() function. Signed-off-by: Zhang Yi --- fs/ext4/extents.c | 90 +++++++++++++++-------------------------------- fs/ext4/inode.c | 13 +++---- 2 files changed, 33 insertions(+), 70 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 2f727104f53d..a2db4e85790f 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4573,23 +4573,18 @@ static long ext4_zero_range(struct file *file, loff_t offset, int ret, flags, credits; trace_ext4_zero_range(inode, offset, len, mode); + WARN_ON_ONCE(!inode_is_locked(inode)); - inode_lock(inode); - - /* - * Indirect files do not support unwritten extents - */ - if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { - ret = -EOPNOTSUPP; - goto out; - } + /* Indirect files do not support unwritten extents */ + if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) + return -EOPNOTSUPP; if (!(mode & FALLOC_FL_KEEP_SIZE) && (end > inode->i_size || end > EXT4_I(inode)->i_disksize)) { new_size = end; ret = inode_newsize_ok(inode, new_size); if (ret) - goto out; + return ret; } /* Wait all existing dio workers, newcomers will block on i_rwsem */ @@ -4597,7 +4592,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, ret = file_modified(file); if (ret) - goto out; + return ret; /* * Prevent page faults from reinstantiating pages we have released @@ -4687,8 +4682,6 @@ static long ext4_zero_range(struct file *file, loff_t offset, ext4_journal_stop(handle); out_invalidate_lock: filemap_invalidate_unlock(mapping); -out: - inode_unlock(inode); return ret; } @@ -4702,12 +4695,11 @@ static long ext4_do_fallocate(struct file *file, loff_t offset, int ret; trace_ext4_fallocate_enter(inode, offset, len, mode); + WARN_ON_ONCE(!inode_is_locked(inode)); start_lblk = offset >> inode->i_blkbits; len_lblk = EXT4_MAX_BLOCKS(len, offset, inode->i_blkbits); - inode_lock(inode); - /* We only support preallocation for extent-based files only. */ if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { ret = -EOPNOTSUPP; @@ -4739,7 +4731,6 @@ static long ext4_do_fallocate(struct file *file, loff_t offset, EXT4_I(inode)->i_sync_tid); } out: - inode_unlock(inode); trace_ext4_fallocate_exit(inode, offset, len_lblk, ret); return ret; } @@ -4774,9 +4765,8 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) inode_lock(inode); ret = ext4_convert_inline_data(inode); - inode_unlock(inode); if (ret) - return ret; + goto out; if (mode & FALLOC_FL_PUNCH_HOLE) ret = ext4_punch_hole(file, offset, len); @@ -4788,7 +4778,8 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) ret = ext4_zero_range(file, offset, len, mode); else ret = ext4_do_fallocate(file, offset, len, mode); - +out: + inode_unlock(inode); return ret; } @@ -5298,36 +5289,27 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) int ret; trace_ext4_collapse_range(inode, offset, len); - - inode_lock(inode); + WARN_ON_ONCE(!inode_is_locked(inode)); /* Currently just for extent based files */ - if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { - ret = -EOPNOTSUPP; - goto out; - } - + if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) + return -EOPNOTSUPP; /* Collapse range works only on fs cluster size aligned regions. 
*/ - if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) { - ret = -EINVAL; - goto out; - } - + if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) + return -EINVAL; /* * There is no need to overlap collapse range with EOF, in which case * it is effectively a truncate operation */ - if (end >= inode->i_size) { - ret = -EINVAL; - goto out; - } + if (end >= inode->i_size) + return -EINVAL; /* Wait for existing dio to complete */ inode_dio_wait(inode); ret = file_modified(file); if (ret) - goto out; + return ret; /* * Prevent page faults from reinstantiating pages we have released from @@ -5402,8 +5384,6 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) ext4_journal_stop(handle); out_invalidate_lock: filemap_invalidate_unlock(mapping); -out: - inode_unlock(inode); return ret; } @@ -5429,39 +5409,27 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) loff_t start; trace_ext4_insert_range(inode, offset, len); - - inode_lock(inode); + WARN_ON_ONCE(!inode_is_locked(inode)); /* Currently just for extent based files */ - if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { - ret = -EOPNOTSUPP; - goto out; - } - + if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) + return -EOPNOTSUPP; /* Insert range works only on fs cluster size aligned regions. */ - if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) { - ret = -EINVAL; - goto out; - } - + if (!IS_ALIGNED(offset | len, EXT4_CLUSTER_SIZE(sb))) + return -EINVAL; /* Offset must be less than i_size */ - if (offset >= inode->i_size) { - ret = -EINVAL; - goto out; - } - + if (offset >= inode->i_size) + return -EINVAL; /* Check whether the maximum file size would be exceeded */ - if (len > inode->i_sb->s_maxbytes - inode->i_size) { - ret = -EFBIG; - goto out; - } + if (len > inode->i_sb->s_maxbytes - inode->i_size) + return -EFBIG; /* Wait for existing dio to complete */ inode_dio_wait(inode); ret = file_modified(file); if (ret) - goto out; + return ret; /* * Prevent page faults from reinstantiating pages we have released from @@ -5562,8 +5530,6 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) ext4_journal_stop(handle); out_invalidate_lock: filemap_invalidate_unlock(mapping); -out: - inode_unlock(inode); return ret; } diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 1d128333bd06..bea19cd6e676 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3962,15 +3962,14 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) unsigned long blocksize = i_blocksize(inode); handle_t *handle; unsigned int credits; - int ret = 0; + int ret; trace_ext4_punch_hole(inode, offset, length, 0); - - inode_lock(inode); + WARN_ON_ONCE(!inode_is_locked(inode)); /* No need to punch hole beyond i_size */ if (offset >= inode->i_size) - goto out; + return 0; /* * If the hole extends beyond i_size, set the hole to end after @@ -3990,7 +3989,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) if (offset & (blocksize - 1) || end & (blocksize - 1)) { ret = ext4_inode_attach_jinode(inode); if (ret < 0) - goto out; + return ret; } /* Wait all existing dio workers, newcomers will block on i_rwsem */ @@ -3998,7 +3997,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) ret = file_modified(file); if (ret) - goto out; + return ret; /* * Prevent page faults from reinstantiating pages we have released from @@ -4082,8 +4081,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) ext4_journal_stop(handle); 
out_invalidate_lock: filemap_invalidate_unlock(mapping); -out: - inode_unlock(inode); return ret; }
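The locking pattern described above -- take i_rwsem once in the top-level ext4_fallocate() and have each helper only assert that the lock is held -- can be illustrated with a minimal, self-contained userspace C sketch. The names and the mutex stand-in are invented for illustration; this is not the kernel code:

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
static int inode_locked;        /* stand-in for inode_is_locked(inode) */

/* Helper: runs with the lock already held, so early returns need no unlock. */
static int do_zero_range(void)
{
        assert(inode_locked);   /* mirrors WARN_ON_ONCE(!inode_is_locked(inode)) */
        return 0;
}

/* Top-level entry: the only place that takes and releases the lock. */
static int do_fallocate(void)
{
        int ret;

        pthread_mutex_lock(&inode_lock);
        inode_locked = 1;
        ret = do_zero_range();
        inode_locked = 0;
        pthread_mutex_unlock(&inode_lock);
        return ret;
}

int main(void)
{
        return do_fallocate();
}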
From patchwork Tue Oct 22 11:10:41 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000205 From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 10/27] ext4: move out common parts into ext4_fallocate() Date: Tue, 22 Oct 2024 19:10:41 +0800 Message-ID: <20241022111059.2566137-11-yi.zhang@huaweicloud.com> In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi Currently, the zero range, punch hole, collapse range, and insert range operations all first wait for any existing direct I/O workers to complete, and then acquire the mapping's invalidate lock before performing the actual work. These common steps are nearly identical, so we can simplify the code by factoring them out into ext4_fallocate(). Signed-off-by: Zhang Yi --- fs/ext4/extents.c | 121 ++++++++++++++++------------------------------ fs/ext4/inode.c | 23 +-------- 2 files changed, 43 insertions(+), 101 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index a2db4e85790f..d5067d5aa449 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4587,23 +4587,6 @@ static long ext4_zero_range(struct file *file, loff_t offset, return ret; } - /* Wait all existing dio workers, newcomers will block on i_rwsem */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - return ret; - - /* - * Prevent page faults from reinstantiating pages we have released - * from page cache.
- */ - filemap_invalidate_lock(mapping); - - ret = ext4_break_layouts(inode); - if (ret) - goto out_invalidate_lock; - /* * For journalled data we need to write (and checkpoint) pages before * discarding page cache to avoid inconsitent data on disk in case of @@ -4616,7 +4599,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, ext4_truncate_folios_range(inode, offset, end); } if (ret) - goto out_invalidate_lock; + return ret; /* Now release the pages and zero block aligned part of pages */ truncate_pagecache_range(inode, offset, end - 1); @@ -4630,7 +4613,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, ret = ext4_alloc_file_blocks(file, alloc_lblk, len_lblk, new_size, flags); if (ret) - goto out_invalidate_lock; + return ret; } /* Zero range excluding the unaligned edges */ @@ -4643,11 +4626,11 @@ static long ext4_zero_range(struct file *file, loff_t offset, ret = ext4_alloc_file_blocks(file, start_lblk, zero_blks, new_size, flags); if (ret) - goto out_invalidate_lock; + return ret; } /* Finish zeroing out if it doesn't contain partial block */ if (!(offset & (blocksize - 1)) && !(end & (blocksize - 1))) - goto out_invalidate_lock; + return ret; /* * In worst case we have to writeout two nonadjacent unwritten @@ -4660,7 +4643,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, if (IS_ERR(handle)) { ret = PTR_ERR(handle); ext4_std_error(inode->i_sb, ret); - goto out_invalidate_lock; + return ret; } /* Zero out partial block at the edges of the range */ @@ -4680,8 +4663,6 @@ static long ext4_zero_range(struct file *file, loff_t offset, out_handle: ext4_journal_stop(handle); -out_invalidate_lock: - filemap_invalidate_unlock(mapping); return ret; } @@ -4714,13 +4695,6 @@ static long ext4_do_fallocate(struct file *file, loff_t offset, goto out; } - /* Wait all existing dio workers, newcomers will block on i_rwsem */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - goto out; - ret = ext4_alloc_file_blocks(file, start_lblk, len_lblk, new_size, EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT); if (ret) @@ -4745,6 +4719,7 @@ static long ext4_do_fallocate(struct file *file, loff_t offset, long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) { struct inode *inode = file_inode(file); + struct address_space *mapping = file->f_mapping; int ret; /* @@ -4768,6 +4743,29 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) if (ret) goto out; + /* Wait all existing dio workers, newcomers will block on i_rwsem */ + inode_dio_wait(inode); + + ret = file_modified(file); + if (ret) + return ret; + + if ((mode & FALLOC_FL_MODE_MASK) == FALLOC_FL_ALLOCATE_RANGE) { + ret = ext4_do_fallocate(file, offset, len, mode); + goto out; + } + + /* + * Follow-up operations will drop page cache, hold invalidate lock + * to prevent page faults from reinstantiating pages we have + * released from page cache. 
+ */ + filemap_invalidate_lock(mapping); + + ret = ext4_break_layouts(inode); + if (ret) + goto out_invalidate_lock; + if (mode & FALLOC_FL_PUNCH_HOLE) ret = ext4_punch_hole(file, offset, len); else if (mode & FALLOC_FL_COLLAPSE_RANGE) @@ -4777,7 +4775,10 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) else if (mode & FALLOC_FL_ZERO_RANGE) ret = ext4_zero_range(file, offset, len, mode); else - ret = ext4_do_fallocate(file, offset, len, mode); + ret = -EOPNOTSUPP; + +out_invalidate_lock: + filemap_invalidate_unlock(mapping); out: inode_unlock(inode); return ret; @@ -5304,23 +5305,6 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) if (end >= inode->i_size) return -EINVAL; - /* Wait for existing dio to complete */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - return ret; - - /* - * Prevent page faults from reinstantiating pages we have released from - * page cache. - */ - filemap_invalidate_lock(mapping); - - ret = ext4_break_layouts(inode); - if (ret) - goto out_invalidate_lock; - /* * Write tail of the last page before removed range and data that * will be shifted since they will get removed from the page cache @@ -5334,16 +5318,15 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) if (!ret) ret = filemap_write_and_wait_range(mapping, end, LLONG_MAX); if (ret) - goto out_invalidate_lock; + return ret; truncate_pagecache(inode, start); credits = ext4_writepage_trans_blocks(inode); handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits); - if (IS_ERR(handle)) { - ret = PTR_ERR(handle); - goto out_invalidate_lock; - } + if (IS_ERR(handle)) + return PTR_ERR(handle); + ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle); start_lblk = offset >> inode->i_blkbits; @@ -5382,8 +5365,6 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len) out_handle: ext4_journal_stop(handle); -out_invalidate_lock: - filemap_invalidate_unlock(mapping); return ret; } @@ -5424,23 +5405,6 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) if (len > inode->i_sb->s_maxbytes - inode->i_size) return -EFBIG; - /* Wait for existing dio to complete */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - return ret; - - /* - * Prevent page faults from reinstantiating pages we have released from - * page cache. - */ - filemap_invalidate_lock(mapping); - - ret = ext4_break_layouts(inode); - if (ret) - goto out_invalidate_lock; - /* * Write out all dirty pages. Need to round down to align start offset * to page size boundary for page size > block size. 
@@ -5448,16 +5412,15 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) start = round_down(offset, PAGE_SIZE); ret = filemap_write_and_wait_range(mapping, start, LLONG_MAX); if (ret) - goto out_invalidate_lock; + return ret; truncate_pagecache(inode, start); credits = ext4_writepage_trans_blocks(inode); handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits); - if (IS_ERR(handle)) { - ret = PTR_ERR(handle); - goto out_invalidate_lock; - } + if (IS_ERR(handle)) + return PTR_ERR(handle); + ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_FALLOC_RANGE, handle); /* Expand file to avoid data loss if there is error while shifting */ @@ -5528,8 +5491,6 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len) out_handle: ext4_journal_stop(handle); -out_invalidate_lock: - filemap_invalidate_unlock(mapping); return ret; } diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index bea19cd6e676..1ccf84a64b7b 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3992,23 +3992,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) return ret; } - /* Wait all existing dio workers, newcomers will block on i_rwsem */ - inode_dio_wait(inode); - - ret = file_modified(file); - if (ret) - return ret; - - /* - * Prevent page faults from reinstantiating pages we have released from - * page cache. - */ - filemap_invalidate_lock(mapping); - - ret = ext4_break_layouts(inode); - if (ret) - goto out_invalidate_lock; - /* * For journalled data we need to write (and checkpoint) pages * before discarding page cache to avoid inconsitent data on @@ -4021,7 +4004,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) ext4_truncate_folios_range(inode, offset, end); } if (ret) - goto out_invalidate_lock; + return ret; /* Now release the pages and zero block aligned part of pages*/ truncate_pagecache_range(inode, offset, end - 1); @@ -4034,7 +4017,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) if (IS_ERR(handle)) { ret = PTR_ERR(handle); ext4_std_error(sb, ret); - goto out_invalidate_lock; + return ret; } ret = ext4_zero_partial_blocks(handle, inode, offset, length); @@ -4079,8 +4062,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) ext4_handle_sync(handle); out_handle: ext4_journal_stop(handle); -out_invalidate_lock: - filemap_invalidate_unlock(mapping); return ret; } From patchwork Tue Oct 22 11:10:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000198 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=2404:9400:2221:ea00::3; helo=mail.ozlabs.org; envelope-from=srs0=/srt=rs=vger.kernel.org=linux-ext4+bounces-4689-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXchq3W7Sz1xwf for ; Tue, 22 Oct 2024 14:14:03 +1100 (AEDT) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by gandalf.ozlabs.org (Postfix) with ESMTP id 4XXchp0xlhz4w2L for ; Tue, 22 Oct 2024 14:14:02 +1100 (AEDT) 
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 11/27] ext4: use reserved metadata blocks when splitting extent on endio Date: Tue, 22 Oct 2024 19:10:42 +0800 Message-ID: <20241022111059.2566137-12-yi.zhang@huaweicloud.com> In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> From: Zhang Yi When performing buffered writes, we may need to split and convert an unwritten extent into a written one during the end I/O process.
However, we do not reserve space specifically for these metadata changes; we only reserve 2% of space or 4096 blocks. To address this, we use EXT4_GET_BLOCKS_PRE_IO to potentially split extents in advance and EXT4_GET_BLOCKS_METADATA_NOFAIL to utilize reserved space if necessary. These two approaches reduce the likelihood of running out of space and losing data. However, they are merely best efforts: we could still run out of space, and since there is not much difference between converting an extent during writeback and converting it during end I/O, postponing the conversion does not increase the risk of losing data. Therefore, also use EXT4_GET_BLOCKS_METADATA_NOFAIL in ext4_convert_unwritten_extents_endio() to prepare for the buffered I/O iomap conversion, which may perform extent conversion during the end I/O process. Signed-off-by: Zhang Yi --- fs/ext4/extents.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index d5067d5aa449..33bc2cc5aff4 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -3767,6 +3767,8 @@ ext4_convert_unwritten_extents_endio(handle_t *handle, struct inode *inode, * illegal. */ if (ee_block != map->m_lblk || ee_len > map->m_len) { + int flags = EXT4_GET_BLOCKS_CONVERT | + EXT4_GET_BLOCKS_METADATA_NOFAIL; #ifdef CONFIG_EXT4_DEBUG ext4_warning(inode->i_sb, "Inode (%ld) finished: extent logical block %llu," " len %u; IO logical block %llu, len %u", @@ -3774,7 +3776,7 @@ ext4_convert_unwritten_extents_endio(handle_t *handle, struct inode *inode, (unsigned long long)map->m_lblk, map->m_len); #endif path = ext4_split_convert_extents(handle, inode, map, path, - EXT4_GET_BLOCKS_CONVERT, NULL); + flags, NULL); if (IS_ERR(path)) return path;
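The "fall back to the reserved metadata pool" behaviour discussed above can be sketched in a few lines of self-contained userspace C. The flag names, pool sizes, and split_extent() helper are invented stand-ins for illustration, not the ext4 API:

#include <stdio.h>

#define GET_BLOCKS_CONVERT         0x1  /* convert an unwritten extent */
#define GET_BLOCKS_METADATA_NOFAIL 0x2  /* may dip into reserved metadata blocks */

static long free_blocks;                /* pretend the filesystem is full */
static long reserved_blocks = 4096;     /* e.g. 2% of space or 4096 blocks */

/* Splitting an extent needs a metadata block; with the NOFAIL-style flag the
 * allocation may fall back to the reserved pool instead of failing. */
static int split_extent(int flags)
{
        if (free_blocks > 0) {
                free_blocks--;
                return 0;
        }
        if ((flags & GET_BLOCKS_METADATA_NOFAIL) && reserved_blocks > 0) {
                reserved_blocks--;
                return 0;
        }
        return -1;                      /* would be -ENOSPC */
}

int main(void)
{
        int flags = GET_BLOCKS_CONVERT | GET_BLOCKS_METADATA_NOFAIL;

        printf("end-IO split %s\n", split_extent(flags) ? "failed" : "succeeded");
        return 0;
}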
From patchwork Tue Oct 22 11:10:43 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000209 From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 12/27] ext4: introduce seq counter for the extent status entry Date: Tue, 22 Oct 2024 19:10:43 +0800 Message-ID: <20241022111059.2566137-13-yi.zhang@huaweicloud.com> In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> From: Zhang Yi In iomap_write_iter(), the iomap buffered write framework does not hold any locks between querying the inode's extent mapping info and performing the page cache write. As a result, the extent mapping can be changed by concurrent I/O in flight. In iomap_writepage_map(), the writeback process faces a similar problem: concurrent changes can invalidate the extent mapping before the I/O is submitted. Therefore, both of these paths must recheck the mapping info after acquiring the folio lock. To address this, similar to XFS, introduce an extent sequence number to serve as a validity cookie for the extent, and increment it whenever the extent status tree changes, thereby preparing for the buffered write iomap conversion. It also adjusts the trace code style to make checkpatch.pl happy.
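The validity-cookie pattern described above can be illustrated with a minimal, self-contained userspace C sketch. The names (es_seq, cookie_still_valid, struct mapping) are invented for illustration and are not the ext4 implementation:

#include <stdio.h>
#include <stdbool.h>

static unsigned int es_seq;              /* bumped on every extent status change */

struct mapping { long lblk; long len; };

static void extent_tree_changed(void)    /* e.g. an extent inserted or removed */
{
        es_seq++;
}

static bool cookie_still_valid(unsigned int cookie)
{
        return cookie == es_seq;
}

int main(void)
{
        unsigned int cookie = es_seq;            /* sample seq with the mapping */
        struct mapping map = { .lblk = 0, .len = 8 };

        extent_tree_changed();                   /* concurrent I/O changes extents */

        /* After taking the folio lock, the cached mapping must be revalidated. */
        if (!cookie_still_valid(cookie))
                printf("mapping [%ld/%ld] is stale, re-query before writing\n",
                       map.lblk, map.len);
        return 0;
}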
Signed-off-by: Zhang Yi --- fs/ext4/ext4.h | 1 + fs/ext4/extents_status.c | 13 ++++++++- fs/ext4/super.c | 1 + include/trace/events/ext4.h | 57 +++++++++++++++++++++---------------- 4 files changed, 46 insertions(+), 26 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 6d0267afd4c1..44f6867d3037 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1123,6 +1123,7 @@ struct ext4_inode_info { ext4_lblk_t i_es_shrink_lblk; /* Offset where we start searching for extents to shrink. Protected by i_es_lock */ + unsigned int i_es_seq; /* Change counter for extents */ /* ialloc */ ext4_group_t i_last_alloc_group; diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c index c786691dabd3..bea4f87db502 100644 --- a/fs/ext4/extents_status.c +++ b/fs/ext4/extents_status.c @@ -204,6 +204,13 @@ static inline ext4_lblk_t ext4_es_end(struct extent_status *es) return es->es_lblk + es->es_len - 1; } +static inline void ext4_es_inc_seq(struct inode *inode) +{ + struct ext4_inode_info *ei = EXT4_I(inode); + + WRITE_ONCE(ei->i_es_seq, READ_ONCE(ei->i_es_seq) + 1); +} + /* * search through the tree for an delayed extent with a given offset. If * it can't be found, try to find next extent. @@ -872,6 +879,7 @@ void ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk, BUG_ON(end < lblk); WARN_ON_ONCE(status & EXTENT_STATUS_DELAYED); + ext4_es_inc_seq(inode); newes.es_lblk = lblk; newes.es_len = len; ext4_es_store_pblock_status(&newes, pblk, status); @@ -1519,13 +1527,15 @@ void ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk, if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY) return; - trace_ext4_es_remove_extent(inode, lblk, len); es_debug("remove [%u/%u) from extent status tree of inode %lu\n", lblk, len, inode->i_ino); if (!len) return; + ext4_es_inc_seq(inode); + trace_ext4_es_remove_extent(inode, lblk, len); + end = lblk + len - 1; BUG_ON(end < lblk); @@ -2107,6 +2117,7 @@ void ext4_es_insert_delayed_extent(struct inode *inode, ext4_lblk_t lblk, WARN_ON_ONCE((EXT4_B2C(sbi, lblk) == EXT4_B2C(sbi, end)) && end_allocated); + ext4_es_inc_seq(inode); newes.es_lblk = lblk; newes.es_len = len; ext4_es_store_pblock_status(&newes, ~0, EXTENT_STATUS_DELAYED); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 16a4ce704460..a01e0bbe57c8 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1409,6 +1409,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb) ei->i_es_all_nr = 0; ei->i_es_shk_nr = 0; ei->i_es_shrink_lblk = 0; + ei->i_es_seq = 0; ei->i_reserved_data_blocks = 0; spin_lock_init(&(ei->i_block_reservation_lock)); ext4_init_pending_tree(&ei->i_pending_tree); diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h index 156908641e68..6f2bf9035216 100644 --- a/include/trace/events/ext4.h +++ b/include/trace/events/ext4.h @@ -2176,12 +2176,13 @@ DECLARE_EVENT_CLASS(ext4__es_extent, TP_ARGS(inode, es), TP_STRUCT__entry( - __field( dev_t, dev ) - __field( ino_t, ino ) - __field( ext4_lblk_t, lblk ) - __field( ext4_lblk_t, len ) - __field( ext4_fsblk_t, pblk ) - __field( char, status ) + __field(dev_t, dev) + __field(ino_t, ino) + __field(ext4_lblk_t, lblk) + __field(ext4_lblk_t, len) + __field(ext4_fsblk_t, pblk) + __field(char, status) + __field(unsigned int, seq) ), TP_fast_assign( @@ -2191,13 +2192,15 @@ DECLARE_EVENT_CLASS(ext4__es_extent, __entry->len = es->es_len; __entry->pblk = ext4_es_show_pblock(es); __entry->status = ext4_es_status(es); + __entry->seq = EXT4_I(inode)->i_es_seq; ), - TP_printk("dev %d,%d ino %lu es [%u/%u) mapped 
%llu status %s", + TP_printk("dev %d,%d ino %lu es [%u/%u) mapped %llu status %s seq %u", MAJOR(__entry->dev), MINOR(__entry->dev), (unsigned long) __entry->ino, __entry->lblk, __entry->len, - __entry->pblk, show_extent_status(__entry->status)) + __entry->pblk, show_extent_status(__entry->status), + __entry->seq) ); DEFINE_EVENT(ext4__es_extent, ext4_es_insert_extent, @@ -2218,10 +2221,11 @@ TRACE_EVENT(ext4_es_remove_extent, TP_ARGS(inode, lblk, len), TP_STRUCT__entry( - __field( dev_t, dev ) - __field( ino_t, ino ) - __field( loff_t, lblk ) - __field( loff_t, len ) + __field(dev_t, dev) + __field(ino_t, ino) + __field(loff_t, lblk) + __field(loff_t, len) + __field(unsigned int, seq) ), TP_fast_assign( @@ -2229,12 +2233,13 @@ TRACE_EVENT(ext4_es_remove_extent, __entry->ino = inode->i_ino; __entry->lblk = lblk; __entry->len = len; + __entry->seq = EXT4_I(inode)->i_es_seq; ), - TP_printk("dev %d,%d ino %lu es [%lld/%lld)", + TP_printk("dev %d,%d ino %lu es [%lld/%lld) seq %u", MAJOR(__entry->dev), MINOR(__entry->dev), (unsigned long) __entry->ino, - __entry->lblk, __entry->len) + __entry->lblk, __entry->len, __entry->seq) ); TRACE_EVENT(ext4_es_find_extent_range_enter, @@ -2486,14 +2491,15 @@ TRACE_EVENT(ext4_es_insert_delayed_extent, TP_ARGS(inode, es, lclu_allocated, end_allocated), TP_STRUCT__entry( - __field( dev_t, dev ) - __field( ino_t, ino ) - __field( ext4_lblk_t, lblk ) - __field( ext4_lblk_t, len ) - __field( ext4_fsblk_t, pblk ) - __field( char, status ) - __field( bool, lclu_allocated ) - __field( bool, end_allocated ) + __field(dev_t, dev) + __field(ino_t, ino) + __field(ext4_lblk_t, lblk) + __field(ext4_lblk_t, len) + __field(ext4_fsblk_t, pblk) + __field(char, status) + __field(bool, lclu_allocated) + __field(bool, end_allocated) + __field(unsigned int, seq) ), TP_fast_assign( @@ -2505,15 +2511,16 @@ TRACE_EVENT(ext4_es_insert_delayed_extent, __entry->status = ext4_es_status(es); __entry->lclu_allocated = lclu_allocated; __entry->end_allocated = end_allocated; + __entry->seq = EXT4_I(inode)->i_es_seq; ), - TP_printk("dev %d,%d ino %lu es [%u/%u) mapped %llu status %s " - "allocated %d %d", + TP_printk("dev %d,%d ino %lu es [%u/%u) mapped %llu status %s allocated %d %d seq %u", MAJOR(__entry->dev), MINOR(__entry->dev), (unsigned long) __entry->ino, __entry->lblk, __entry->len, __entry->pblk, show_extent_status(__entry->status), - __entry->lclu_allocated, __entry->end_allocated) + __entry->lclu_allocated, __entry->end_allocated, + __entry->seq) ); /* fsmap traces */ From patchwork Tue Oct 22 11:10:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000207 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=2404:9400:2221:ea00::3; helo=mail.ozlabs.org; envelope-from=srs0=elyz=rs=vger.kernel.org=linux-ext4+bounces-4698-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXckh04Jdz1xw0 for ; Tue, 22 Oct 2024 14:15:40 +1100 (AEDT) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by 
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 13/27] ext4: add a new iomap aops for regular file's buffered IO path Date: Tue, 22 Oct 2024 19:10:44 +0800 Message-ID: <20241022111059.2566137-14-yi.zhang@huaweicloud.com> In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> From: Zhang Yi This patch starts support for iomap in the buffered I/O path of ext4 regular files.
First, it introduces a new iomap address space operations table, ext4_iomap_aops. Additionally, it adds an inode state flag, EXT4_STATE_BUFFERED_IOMAP, which indicates that the inode uses the iomap path instead of the original buffer_head path for buffered I/O. Most callbacks of ext4_iomap_aops can directly utilize generic implementations; the remaining functions .read_folio(), .readahead(), and .writepages() will be implemented in later patches. Signed-off-by: Zhang Yi --- fs/ext4/ext4.h | 1 + fs/ext4/inode.c | 32 ++++++++++++++++++++++++++++++++ 2 files changed, 33 insertions(+) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 44f6867d3037..ee170196bfff 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1916,6 +1916,7 @@ enum { EXT4_STATE_VERITY_IN_PROGRESS, /* building fs-verity Merkle tree */ EXT4_STATE_FC_COMMITTING, /* Fast commit ongoing */ EXT4_STATE_ORPHAN_FILE, /* Inode orphaned in orphan file */ + EXT4_STATE_BUFFERED_IOMAP, /* Inode use iomap for buffered IO */ }; #define EXT4_INODE_BIT_FNS(name, field, offset) \ diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 1ccf84a64b7b..b233f36efefa 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3526,6 +3526,22 @@ const struct iomap_ops ext4_iomap_report_ops = { .iomap_begin = ext4_iomap_begin_report, }; +static int ext4_iomap_read_folio(struct file *file, struct folio *folio) +{ + return 0; +} + +static void ext4_iomap_readahead(struct readahead_control *rac) +{ + +} + +static int ext4_iomap_writepages(struct address_space *mapping, + struct writeback_control *wbc) +{ + return 0; +} + /* * For data=journal mode, folio should be marked dirty only when it was * writeably mapped. When that happens, it was already attached to the @@ -3612,6 +3628,20 @@ static const struct address_space_operations ext4_da_aops = { .swap_activate = ext4_iomap_swap_activate, }; +static const struct address_space_operations ext4_iomap_aops = { + .read_folio = ext4_iomap_read_folio, + .readahead = ext4_iomap_readahead, + .writepages = ext4_iomap_writepages, + .dirty_folio = iomap_dirty_folio, + .bmap = ext4_bmap, + .invalidate_folio = iomap_invalidate_folio, + .release_folio = iomap_release_folio, + .migrate_folio = filemap_migrate_folio, + .is_partially_uptodate = iomap_is_partially_uptodate, + .error_remove_folio = generic_error_remove_folio, + .swap_activate = ext4_iomap_swap_activate, +}; + static const struct address_space_operations ext4_dax_aops = { .writepages = ext4_dax_writepages, .dirty_folio = noop_dirty_folio, @@ -3633,6 +3663,8 @@ void ext4_set_aops(struct inode *inode) } if (IS_DAX(inode)) inode->i_mapping->a_ops = &ext4_dax_aops; + else if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) + inode->i_mapping->a_ops = &ext4_iomap_aops; else if (test_opt(inode->i_sb, DELALLOC)) inode->i_mapping->a_ops = &ext4_da_aops; else
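The mechanism this patch describes -- a per-inode state bit that selects which set of address-space operations gets installed -- can be modelled with a small, self-contained userspace C sketch. The structures and flag here are invented stand-ins, not the ext4 types:

#include <stdio.h>

struct aops { const char *name; };

static const struct aops buffer_head_aops = { "buffer_head buffered I/O path" };
static const struct aops iomap_aops       = { "iomap buffered I/O path" };

#define STATE_BUFFERED_IOMAP 0x1        /* stand-in for EXT4_STATE_BUFFERED_IOMAP */

struct inode {
        unsigned int state;
        const struct aops *a_ops;
};

/* Pick the operations table from the per-inode state flag. */
static void set_aops(struct inode *inode)
{
        if (inode->state & STATE_BUFFERED_IOMAP)
                inode->a_ops = &iomap_aops;
        else
                inode->a_ops = &buffer_head_aops;
}

int main(void)
{
        struct inode inode = { .state = STATE_BUFFERED_IOMAP };

        set_aops(&inode);
        printf("inode uses the %s\n", inode.a_ops->name);
        return 0;
}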
smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729566783; cv=none; b=epPpTKZe+sNEsb6iJtOXxNHKKlQR+Y16fEp/bvTQCkZB9oW6BebaNbypqr/gqVZWKpV76jkWiI0OOH34IyleTYXFWSv35HtbORtbszUZkES+xHFFToHGnP4hRKLrBjmPXpE2pZsRQFEaCE2qNgO+5+iCRIzy3jXHgbZB8ZbIt+w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729566783; c=relaxed/simple; bh=mfpbSaLpTZWPmr7Qv9kGMuWgxyR1PGeQlGnSV9/uK1M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=TYhS0NATSfBhQ4zAs9wpB6ZbA7mya340rL5HIai0logD0bomUbUWtLc7vDPn/+uKGEbGdQYvXaq4kdzScU4wQcrU2/u1q5HpbXBsaDMtREZ+CFY51SBLY5Me8wq70gnn/YvXIVpePjfmOFRPyk+jnE3EBNFoVFEpqd2jikB+XGI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com; spf=none smtp.mailfrom=huaweicloud.com; arc=none smtp.client-ip=45.249.212.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=huaweicloud.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=huaweicloud.com Received: from mail.maildlp.com (unknown [172.19.93.142]) by dggsgout11.his.huawei.com (SkyGuard) with ESMTP id 4XXcgK66cyz4f3jkk; Tue, 22 Oct 2024 11:12:45 +0800 (CST) Received: from mail02.huawei.com (unknown [10.116.40.128]) by mail.maildlp.com (Postfix) with ESMTP id 227051A018D; Tue, 22 Oct 2024 11:12:58 +0800 (CST) Received: from huaweicloud.com (unknown [10.175.112.188]) by APP4 (Coremail) with SMTP id gCh0CgCXysYlGBdnPSwWEw--.716S18; Tue, 22 Oct 2024 11:12:57 +0800 (CST) From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 14/27] ext4: implement buffered read iomap path Date: Tue, 22 Oct 2024 19:10:45 +0800 Message-ID: <20241022111059.2566137-15-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-ext4@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CM-TRANSID: gCh0CgCXysYlGBdnPSwWEw--.716S18 X-Coremail-Antispam: 1UD129KBjvJXoW7CFyDXF1UXrW3Zw4UurWkXrb_yoW8KrWrpF 98KFy5GF47XrnI9a1SgFZrJr1Fk3WxtF45ZrWfWasxuFyYkrW2gay0gFyYvF1Yq3yxAr10 qr4jkr1xWF1UArDanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUQl14x267AKxVWrJVCq3wAFc2x0x2IEx4CE42xK8VAvwI8IcIk0 rVWrJVCq3wAFIxvE14AKwVWUJVWUGwA2jI8I6cxK62vIxIIY0VWUZVW8XwA2048vs2IY02 0E87I2jVAFwI0_JF0E3s1l82xGYIkIc2x26xkF7I0E14v26ryj6s0DM28lY4IEw2IIxxk0 rwA2F7IY1VAKz4vEj48ve4kI8wA2z4x0Y4vE2Ix0cI8IcVAFwI0_tr0E3s1l84ACjcxK6x IIjxv20xvEc7CjxVAFwI0_Cr1j6rxdM28EF7xvwVC2z280aVAFwI0_GcCE3s1l84ACjcxK 6I8E87Iv6xkF7I0E14v26rxl6s0DM2AIxVAIcxkEcVAq07x20xvEncxIr21l5I8CrVACY4 xI64kE6c02F40Ex7xfMcIj6xIIjxv20xvE14v26r106r15McIj6I8E87Iv67AKxVWUJVW8 JwAm72CE4IkC6x0Yz7v_Jr0_Gr1lF7xvr2IYc2Ij64vIr41lF7I21c0EjII2zVCS5cI20V AGYxC7M4IIrI8v6xkF7I0E8cxan2IY04v7MxkF7I0En4kS14v26r1q6r43MxAIw28IcxkI 7VAKI48JMxC20s026xCaFVCjc4AY6r1j6r4UMI8I3I0E5I8CrVAFwI0_Jr0_Jr4lx2IqxV Cjr7xvwVAFwI0_JrI_JrWlx4CE17CEb7AF67AKxVW8ZVWrXwCIc40Y0x0EwIxGrwCI42IY 
From: Zhang Yi

Introduce a new iomap_ops, ext4_iomap_buffered_read_ops, to implement the iomap read paths, specifically the .read_folio() and .readahead() callbacks of ext4_iomap_aops. Its .iomap_begin() handler invokes ext4_map_blocks() to query the extent mapping status of the read range and then converts the mapping information to an iomap.

Signed-off-by: Zhang Yi
---
 fs/ext4/inode.c | 37 +++++++++++++++++++++++++++++++++++-- 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index b233f36efefa..f0bc4b58ac4f 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3526,14 +3526,47 @@ const struct iomap_ops ext4_iomap_report_ops = { .iomap_begin = ext4_iomap_begin_report, }; -static int ext4_iomap_read_folio(struct file *file, struct folio *folio) +static int ext4_iomap_buffered_read_begin(struct inode *inode, loff_t offset, + loff_t length, unsigned int flags, struct iomap *iomap, + struct iomap *srcmap) { + int ret; + struct ext4_map_blocks map; + u8 blkbits = inode->i_blkbits; + + if (unlikely(ext4_forced_shutdown(inode->i_sb))) + return -EIO; + if ((offset >> blkbits) > EXT4_MAX_LOGICAL_BLOCK) + return -EINVAL; + /* Inline data support is not yet available. */ + if (WARN_ON_ONCE(ext4_has_inline_data(inode))) + return -ERANGE; + + /* Calculate the first and last logical blocks respectively. */ + map.m_lblk = offset >> blkbits; + map.m_len = min_t(loff_t, (offset + length - 1) >> blkbits, + EXT4_MAX_LOGICAL_BLOCK) - map.m_lblk + 1; + + ret = ext4_map_blocks(NULL, inode, &map, 0); + if (ret < 0) + return ret; + + ext4_set_iomap(inode, iomap, &map, offset, length, flags); return 0; } -static void ext4_iomap_readahead(struct readahead_control *rac) +const struct iomap_ops ext4_iomap_buffered_read_ops = { + .iomap_begin = ext4_iomap_buffered_read_begin, +}; + +static int ext4_iomap_read_folio(struct file *file, struct folio *folio) { + return iomap_read_folio(folio, &ext4_iomap_buffered_read_ops); +} +static void ext4_iomap_readahead(struct readahead_control *rac) +{ + iomap_readahead(rac, &ext4_iomap_buffered_read_ops); } static int ext4_iomap_writepages(struct address_space *mapping, struct writeback_control *wbc)
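To make the byte-to-block conversion in the read path concrete, here is a worked example of the calculation performed by ext4_iomap_buffered_read_begin() above, assuming a 4K block size (blkbits == 12) and ignoring the EXT4_MAX_LOGICAL_BLOCK clamp:

	/* read request: offset = 6144 bytes, length = 10000 bytes */
	map.m_lblk = 6144 >> 12;				/* first logical block = 1 */
	map.m_len  = ((6144 + 10000 - 1) >> 12) - 1 + 1;	/* last block = 3, so 3 blocks */
	/* The resulting mapping covers bytes 4096..16383, which fully contains
	 * the requested range; ext4_set_iomap() then translates the
	 * ext4_map_blocks() result into the iomap consumed by the generic
	 * iomap_read_folio()/iomap_readahead() helpers. */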
From patchwork Tue Oct 22 11:10:46 2024
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Subject: [PATCH 15/27] ext4: implement buffered write iomap path
Date: Tue, 22 Oct 2024 19:10:46 +0800
Message-ID: <20241022111059.2566137-16-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Introduce two new iomap_ops: ext4_iomap_buffered_write_ops and ext4_iomap_buffered_da_write_ops to implement the iomap write path.
These operations invoke ext4_da_map_blocks() to map delayed allocation extents and introduce ext4_iomap_get_blocks() to directly allocate blocks in non-delayed allocation mode. Additionally, implement ext4_iomap_valid() to check the validity of extent mapping. There are two key differences between the buffer_head write path and the iomap write path: 1) In the iomap write path, we always allocate unwritten extents for new blocks, which means we consistently enable dioread_nolock. Therefore, we do not need to truncate blocks for short writes and write failure. 2) The iomap write frame maps multi-blocks in the ->iomap_begin() function, so we must remove the stale delayed allocation range from the short writes and write failure. Otherwise, this could result in a range of delayed extents being covered by a clean folio, leading to inaccurate space reservation. Signed-off-by: Zhang Yi --- fs/ext4/ext4.h | 3 + fs/ext4/file.c | 19 +++++- fs/ext4/inode.c | 155 +++++++++++++++++++++++++++++++++++++++++++++--- 3 files changed, 169 insertions(+), 8 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index ee170196bfff..a09f96ef17d8 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -2985,6 +2985,7 @@ int ext4_walk_page_buffers(handle_t *handle, struct buffer_head *bh)); int do_journal_get_write_access(handle_t *handle, struct inode *inode, struct buffer_head *bh); +int ext4_nonda_switch(struct super_block *sb); #define FALL_BACK_TO_NONDELALLOC 1 #define CONVERT_INLINE_DATA 2 @@ -3845,6 +3846,8 @@ static inline void ext4_clear_io_unwritten_flag(ext4_io_end_t *io_end) extern const struct iomap_ops ext4_iomap_ops; extern const struct iomap_ops ext4_iomap_overwrite_ops; extern const struct iomap_ops ext4_iomap_report_ops; +extern const struct iomap_ops ext4_iomap_buffered_write_ops; +extern const struct iomap_ops ext4_iomap_buffered_da_write_ops; static inline int ext4_buffer_uptodate(struct buffer_head *bh) { diff --git a/fs/ext4/file.c b/fs/ext4/file.c index f14aed14b9cf..92471865b4e5 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -282,6 +282,20 @@ static ssize_t ext4_write_checks(struct kiocb *iocb, struct iov_iter *from) return count; } +static ssize_t ext4_iomap_buffered_write(struct kiocb *iocb, + struct iov_iter *from) +{ + struct inode *inode = file_inode(iocb->ki_filp); + const struct iomap_ops *iomap_ops; + + if (test_opt(inode->i_sb, DELALLOC) && !ext4_nonda_switch(inode->i_sb)) + iomap_ops = &ext4_iomap_buffered_da_write_ops; + else + iomap_ops = &ext4_iomap_buffered_write_ops; + + return iomap_file_buffered_write(iocb, from, iomap_ops, NULL); +} + static ssize_t ext4_buffered_write_iter(struct kiocb *iocb, struct iov_iter *from) { @@ -296,7 +310,10 @@ static ssize_t ext4_buffered_write_iter(struct kiocb *iocb, if (ret <= 0) goto out; - ret = generic_perform_write(iocb, from); + if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) + ret = ext4_iomap_buffered_write(iocb, from); + else + ret = generic_perform_write(iocb, from); out: inode_unlock(inode); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index f0bc4b58ac4f..23cbcaab0a56 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -2862,7 +2862,7 @@ static int ext4_dax_writepages(struct address_space *mapping, return ret; } -static int ext4_nonda_switch(struct super_block *sb) +int ext4_nonda_switch(struct super_block *sb) { s64 free_clusters, dirty_clusters; struct ext4_sb_info *sbi = EXT4_SB(sb); @@ -3257,6 +3257,15 @@ static bool ext4_inode_datasync_dirty(struct inode *inode) return inode->i_state & I_DIRTY_DATASYNC; } 
+static bool ext4_iomap_valid(struct inode *inode, const struct iomap *iomap) +{ + return iomap->validity_cookie == READ_ONCE(EXT4_I(inode)->i_es_seq); +} + +static const struct iomap_folio_ops ext4_iomap_folio_ops = { + .iomap_valid = ext4_iomap_valid, +}; + static void ext4_set_iomap(struct inode *inode, struct iomap *iomap, struct ext4_map_blocks *map, loff_t offset, loff_t length, unsigned int flags) @@ -3287,6 +3296,9 @@ static void ext4_set_iomap(struct inode *inode, struct iomap *iomap, !ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) iomap->flags |= IOMAP_F_MERGED; + iomap->validity_cookie = READ_ONCE(EXT4_I(inode)->i_es_seq); + iomap->folio_ops = &ext4_iomap_folio_ops; + /* * Flags passed to ext4_map_blocks() for direct I/O writes can result * in m_flags having both EXT4_MAP_MAPPED and EXT4_MAP_UNWRITTEN bits @@ -3526,11 +3538,57 @@ const struct iomap_ops ext4_iomap_report_ops = { .iomap_begin = ext4_iomap_begin_report, }; -static int ext4_iomap_buffered_read_begin(struct inode *inode, loff_t offset, - loff_t length, unsigned int flags, struct iomap *iomap, - struct iomap *srcmap) +static int ext4_iomap_get_blocks(struct inode *inode, + struct ext4_map_blocks *map) { - int ret; + loff_t i_size = i_size_read(inode); + handle_t *handle; + int ret, needed_blocks; + + /* + * Check if the blocks have already been allocated, this could + * avoid initiating a new journal transaction and return the + * mapping information directly. + */ + if ((map->m_lblk + map->m_len) <= + round_up(i_size, i_blocksize(inode)) >> inode->i_blkbits) { + ret = ext4_map_blocks(NULL, inode, map, 0); + if (ret < 0) + return ret; + if (map->m_flags & (EXT4_MAP_MAPPED | EXT4_MAP_UNWRITTEN | + EXT4_MAP_DELAYED)) + return 0; + } + + /* + * Reserve one block more for addition to orphan list in case + * we allocate blocks but write fails for some reason. + */ + needed_blocks = ext4_writepage_trans_blocks(inode) + 1; + handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks); + if (IS_ERR(handle)) + return PTR_ERR(handle); + + ret = ext4_map_blocks(handle, inode, map, + EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT); + /* + * We need to stop handle here due to a potential deadlock caused + * by the subsequent call to balance_dirty_pages(). This function + * may wait for the dirty pages to be written back, which could + * initiate another handle and cause it to wait for the first + * handle to complete. + */ + ext4_journal_stop(handle); + + return ret; +} + +static int ext4_iomap_buffered_begin(struct inode *inode, loff_t offset, + loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap, + bool delalloc) +{ + int ret, retries = 0; struct ext4_map_blocks map; u8 blkbits = inode->i_blkbits; @@ -3541,13 +3599,23 @@ static int ext4_iomap_buffered_read_begin(struct inode *inode, loff_t offset, /* Inline data support is not yet available. */ if (WARN_ON_ONCE(ext4_has_inline_data(inode))) return -ERANGE; - +retry: /* Calculate the first and last logical blocks respectively. 
*/ map.m_lblk = offset >> blkbits; map.m_len = min_t(loff_t, (offset + length - 1) >> blkbits, EXT4_MAX_LOGICAL_BLOCK) - map.m_lblk + 1; + if (flags & IOMAP_WRITE) { + if (delalloc) + ret = ext4_da_map_blocks(inode, &map); + else + ret = ext4_iomap_get_blocks(inode, &map); - ret = ext4_map_blocks(NULL, inode, &map, 0); + if (ret == -ENOSPC && + ext4_should_retry_alloc(inode->i_sb, &retries)) + goto retry; + } else { + ret = ext4_map_blocks(NULL, inode, &map, 0); + } if (ret < 0) return ret; @@ -3555,6 +3623,79 @@ static int ext4_iomap_buffered_read_begin(struct inode *inode, loff_t offset, return 0; } +static int ext4_iomap_buffered_read_begin(struct inode *inode, + loff_t offset, loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap) +{ + return ext4_iomap_buffered_begin(inode, offset, length, flags, + iomap, srcmap, false); +} + +static int ext4_iomap_buffered_write_begin(struct inode *inode, + loff_t offset, loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap) +{ + return ext4_iomap_buffered_begin(inode, offset, length, flags, + iomap, srcmap, false); +} + +static int ext4_iomap_buffered_da_write_begin(struct inode *inode, + loff_t offset, loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap) +{ + return ext4_iomap_buffered_begin(inode, offset, length, flags, + iomap, srcmap, true); +} + +/* + * Drop the staled delayed allocation range from the write failure, + * including both start and end blocks. If not, we could leave a range + * of delayed extents covered by a clean folio, it could lead to + * inaccurate space reservation. + */ +static void ext4_iomap_punch_delalloc(struct inode *inode, loff_t offset, + loff_t length, struct iomap *iomap) +{ + down_write(&EXT4_I(inode)->i_data_sem); + ext4_es_remove_extent(inode, offset >> inode->i_blkbits, + DIV_ROUND_UP_ULL(length, EXT4_BLOCK_SIZE(inode->i_sb))); + up_write(&EXT4_I(inode)->i_data_sem); +} + +static int ext4_iomap_buffered_da_write_end(struct inode *inode, loff_t offset, + loff_t length, ssize_t written, + unsigned int flags, + struct iomap *iomap) +{ + loff_t start_byte, end_byte; + + /* If we didn't reserve the blocks, we're not allowed to punch them. 
*/ + if (iomap->type != IOMAP_DELALLOC || !(iomap->flags & IOMAP_F_NEW)) + return 0; + + /* Nothing to do if we've written the entire delalloc extent */ + start_byte = iomap_last_written_block(inode, offset, written); + end_byte = round_up(offset + length, i_blocksize(inode)); + if (start_byte >= end_byte) + return 0; + + filemap_invalidate_lock(inode->i_mapping); + iomap_write_delalloc_release(inode, start_byte, end_byte, flags, + iomap, ext4_iomap_punch_delalloc); + filemap_invalidate_unlock(inode->i_mapping); + return 0; +} + + +const struct iomap_ops ext4_iomap_buffered_write_ops = { + .iomap_begin = ext4_iomap_buffered_write_begin, +}; + +const struct iomap_ops ext4_iomap_buffered_da_write_ops = { + .iomap_begin = ext4_iomap_buffered_da_write_begin, + .iomap_end = ext4_iomap_buffered_da_write_end, +}; + const struct iomap_ops ext4_iomap_buffered_read_ops = { .iomap_begin = ext4_iomap_buffered_read_begin, };
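To illustrate the short-write cleanup in ext4_iomap_buffered_da_write_end() above, consider a worked example, assuming a 4K block size and assuming iomap_last_written_block() rounds offset + written up to the block size (so any partially written block is kept):

	/* ->iomap_begin() reserved a new delalloc mapping for offset = 0,
	 * length = 16384, but the copy-in stopped short at written = 5000. */
	start_byte = iomap_last_written_block(inode, 0, 5000);	/* 8192  */
	end_byte   = round_up(0 + 16384, 4096);			/* 16384 */
	/* start_byte < end_byte, so the delalloc reservation backing bytes
	 * 8192..16383 (blocks 2 and 3) is released via
	 * ext4_iomap_punch_delalloc(), while blocks 0 and 1, which actually
	 * received data, stay reserved and will be written back later. */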
From patchwork Tue Oct 22 11:10:47 2024
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Subject: [PATCH 16/27] ext4: don't order data for inode with EXT4_STATE_BUFFERED_IOMAP
Date: Tue, 22 Oct 2024 19:10:47 +0800
Message-ID: <20241022111059.2566137-17-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

In the iomap buffered I/O path, there is no risk of exposing stale data because we always allocate unwritten extents for newly allocated blocks; the extent is converted to written only when the I/O completes. Therefore, we do not need to order data in this mode.

Signed-off-by: Zhang Yi
---
 fs/ext4/ext4_jbd2.h | 8 ++++++++ 1 file changed, 8 insertions(+)

diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h index 0c77697d5e90..9dca10027032 100644 --- a/fs/ext4/ext4_jbd2.h +++ b/fs/ext4/ext4_jbd2.h @@ -467,6 +467,14 @@ static inline int ext4_should_journal_data(struct inode *inode) static inline int ext4_should_order_data(struct inode *inode) { + /* + * There is no need to order data for inodes on the iomap buffered I/O + * path since it always allocates unwritten extents for newly allocated + * blocks and there is no risk of exposing stale data.
+ */ + if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) + return 0; + return ext4_inode_journal_mode(inode) & EXT4_INODE_ORDERED_DATA_MODE; }
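For reference, the practical effect is at the ordered-mode call sites: with EXT4_STATE_BUFFERED_IOMAP set, ext4_should_order_data() now returns 0, so the written range is no longer added to the journal's ordered list. The snippet below is an illustrative call-site shape, not code changed by this patch:

	if (ext4_should_order_data(inode))
		err = ext4_jbd2_inode_add_write(handle, inode, pos, len);
	/* With EXT4_STATE_BUFFERED_IOMAP set this branch is skipped, because
	 * allocating unwritten extents already guarantees that stale block
	 * contents can never become visible before the data lands on disk. */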
From patchwork Tue Oct 22 11:10:48 2024
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Subject: [PATCH 17/27] ext4: implement writeback iomap path
Date: Tue, 22 Oct 2024 19:10:48 +0800
Message-ID: <20241022111059.2566137-18-yi.zhang@huaweicloud.com>
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi

Implement ext4_iomap_writepages(), introduce ext4_writeback_ops, and create an end I/O extent conversion worker to implement the iomap buffered write-back path.

In the ->map_blocks() handler, we first query the longest range of existing mapped extents. If the block range has not already been allocated, we attempt to allocate a range of blocks that is as long as possible in order to minimize the number of block mappings. This allocation is based on the write-back length and the delalloc extent length, rather than allocating for a single folio at a time. In the ->prepare_ioend() handler, we register the end I/O worker to convert unwritten extents into written extents.

There are three key differences between the buffer_head write-back path and the iomap write-back path (see the worked example after this list):

1) Since we aim to allocate a block range as long as possible within the writeback length on each invocation of ->map_blocks(), we may allocate a long range but end up writing less in certain corner cases. Therefore, we cannot convert the extent to written in advance within ->map_blocks(). Fortunately, there is minimal risk of losing data between split extents during write-back and end I/O processing. We defer the conversion to the end I/O worker, where we can accurately determine the actual written length. Besides, we should remove the warning in ext4_convert_unwritten_extents_endio().

2) Since we do not order data, the journal thread is not required to write back data. Besides, we also do not need to use the reserved handle when converting the unwritten extent in the end I/O worker; we can start a normal handle directly.

3) We can also delay updating i_disksize until the end of the I/O, which prevents the exposure of zero data that could otherwise occur after a system crash during buffered append writes in the buffer_head write path.
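As a worked example of the allocation sizing described in point 1), assume a 4K block size, one dirty 16K folio at file offset 0 (dirty_len = 4 blocks), an unbounded writeback (wbc->range_end = LLONG_MAX), a delalloc extent of 1024 blocks starting at block 0, and MAX_WRITEPAGES_EXTENT_LEN assumed to be 2048 blocks; the numbers follow the ->map_blocks() code added below:

	index = 0;					/* first dirty block               */
	len   = UINT_MAX;				/* effectively unbounded writeback */
	map.m_lblk = 0;
	map.m_len  = min(MAX_WRITEPAGES_EXTENT_LEN, len);	/* clamped to 2048 */
	/* The query then finds a 1024-block delalloc extent at block 0, so
	 * ext4_iomap_map_one_extent() allocates up to all 1024 blocks as a
	 * single unwritten extent even though only 4 blocks are dirty in this
	 * folio; the unwritten->written conversion happens later, in the end
	 * I/O worker, and only for the range that was actually written. */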
Signed-off-by: Zhang Yi --- fs/ext4/ext4.h | 4 + fs/ext4/extents.c | 22 +++--- fs/ext4/inode.c | 188 +++++++++++++++++++++++++++++++++++++++++++++- fs/ext4/page-io.c | 105 ++++++++++++++++++++++++++ fs/ext4/super.c | 2 + 5 files changed, 311 insertions(+), 10 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index a09f96ef17d8..d4d594d97634 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1151,6 +1151,8 @@ struct ext4_inode_info { */ struct list_head i_rsv_conversion_list; struct work_struct i_rsv_conversion_work; + struct list_head i_iomap_ioend_list; + struct work_struct i_iomap_ioend_work; spinlock_t i_block_reservation_lock; @@ -3773,6 +3775,8 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *page, size_t len); extern struct ext4_io_end_vec *ext4_alloc_io_end_vec(ext4_io_end_t *io_end); extern struct ext4_io_end_vec *ext4_last_io_end_vec(ext4_io_end_t *io_end); +extern void ext4_iomap_end_io(struct work_struct *work); +extern void ext4_iomap_end_bio(struct bio *bio); /* mmp.c */ extern int ext4_multi_mount_protect(struct super_block *, ext4_fsblk_t); diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 33bc2cc5aff4..4b30e6f0a634 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -3760,20 +3760,24 @@ ext4_convert_unwritten_extents_endio(handle_t *handle, struct inode *inode, ext_debug(inode, "logical block %llu, max_blocks %u\n", (unsigned long long)ee_block, ee_len); - /* If extent is larger than requested it is a clear sign that we still - * have some extent state machine issues left. So extent_split is still - * required. - * TODO: Once all related issues will be fixed this situation should be - * illegal. + /* + * If the extent is larger than requested, we should split it here. + * For inodes using the iomap buffered I/O path, we do not split in + * advance during the write-back process. Therefore, we may need to + * perform the split during the end I/O process here. However, + * other inodes should not require this action. 
*/ if (ee_block != map->m_lblk || ee_len > map->m_len) { int flags = EXT4_GET_BLOCKS_CONVERT | EXT4_GET_BLOCKS_METADATA_NOFAIL; #ifdef CONFIG_EXT4_DEBUG - ext4_warning(inode->i_sb, "Inode (%ld) finished: extent logical block %llu," - " len %u; IO logical block %llu, len %u", - inode->i_ino, (unsigned long long)ee_block, ee_len, - (unsigned long long)map->m_lblk, map->m_len); + if (!ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) { + ext4_warning(inode->i_sb, + "Inode (%ld) finished: extent logical block %llu, len %u; IO logical block %llu, len %u", + inode->i_ino, (unsigned long long)ee_block, + ee_len, (unsigned long long)map->m_lblk, + map->m_len); + } #endif path = ext4_split_convert_extents(handle, inode, map, path, flags, NULL); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 23cbcaab0a56..a260942fd2dd 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -44,6 +44,7 @@ #include #include "ext4_jbd2.h" +#include "ext4_extents.h" #include "xattr.h" #include "acl.h" #include "truncate.h" @@ -3710,10 +3711,195 @@ static void ext4_iomap_readahead(struct readahead_control *rac) iomap_readahead(rac, &ext4_iomap_buffered_read_ops); } +struct ext4_writeback_ctx { + struct iomap_writepage_ctx ctx; + struct writeback_control *wbc; + unsigned int data_seq; +}; + +static int ext4_iomap_map_one_extent(struct inode *inode, + struct ext4_map_blocks *map) +{ + struct extent_status es; + handle_t *handle = NULL; + int credits, map_flags; + int retval; + + credits = ext4_da_writepages_trans_blocks(inode); + handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, credits); + if (IS_ERR(handle)) + return PTR_ERR(handle); + + map->m_flags = 0; + /* + * It is necessary to look up extent and map blocks under i_data_sem + * in write mode, otherwise, the delalloc extent may become stale + * during concurrent truncate operations. + */ + down_write(&EXT4_I(inode)->i_data_sem); + if (likely(ext4_es_lookup_extent(inode, map->m_lblk, NULL, &es))) { + retval = es.es_len - (map->m_lblk - es.es_lblk); + map->m_len = min_t(unsigned int, retval, map->m_len); + + if (ext4_es_is_delayed(&es)) { + map->m_flags |= EXT4_MAP_DELAYED; + trace_ext4_da_write_pages_extent(inode, map); + /* + * Call ext4_map_create_blocks() to allocate any + * delayed allocation blocks. It is possible that + * we're going to need more metadata blocks, however + * we must not fail because we're in writeback and + * there is nothing we can do so it might result in + * data loss. So use reserved blocks to allocate + * metadata if possible. + */ + map_flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT | + EXT4_GET_BLOCKS_METADATA_NOFAIL; + + retval = ext4_map_create_blocks(handle, inode, map, + map_flags); + goto out; + } + if (unlikely(ext4_es_is_hole(&es))) + goto out; + + /* Found written or unwritten extent. */ + map->m_pblk = ext4_es_pblock(&es) + map->m_lblk - + es.es_lblk; + map->m_flags = ext4_es_is_written(&es) ? + EXT4_MAP_MAPPED : EXT4_MAP_UNWRITTEN; + goto out; + } + + retval = ext4_map_query_blocks(handle, inode, map); +out: + up_write(&EXT4_I(inode)->i_data_sem); + ext4_journal_stop(handle); + return retval < 0 ? 
retval : 0; +} + +static int ext4_iomap_map_blocks(struct iomap_writepage_ctx *wpc, + struct inode *inode, loff_t offset, + unsigned int dirty_len) +{ + struct ext4_writeback_ctx *ewpc = + container_of(wpc, struct ext4_writeback_ctx, ctx); + struct super_block *sb = inode->i_sb; + struct journal_s *journal = EXT4_SB(sb)->s_journal; + struct ext4_inode_info *ei = EXT4_I(inode); + struct ext4_map_blocks map; + unsigned int blkbits = inode->i_blkbits; + unsigned int index = offset >> blkbits; + unsigned int end, len; + int ret; + + if (unlikely(ext4_forced_shutdown(inode->i_sb))) + return -EIO; + + /* Check validity of the cached writeback mapping. */ + if (offset >= wpc->iomap.offset && + offset < wpc->iomap.offset + wpc->iomap.length && + ewpc->data_seq == READ_ONCE(ei->i_es_seq)) + return 0; + + end = min_t(unsigned int, (ewpc->wbc->range_end >> blkbits), + (UINT_MAX - 1)); + len = (end > index + dirty_len) ? end - index + 1 : dirty_len; + +retry: + map.m_lblk = index; + map.m_len = min_t(unsigned int, MAX_WRITEPAGES_EXTENT_LEN, len); + ret = ext4_map_blocks(NULL, inode, &map, 0); + if (ret < 0) + return ret; + + /* + * The map is not a delalloc extent, it must either be a hole + * or an extent which have already been allocated. + */ + if (!(map.m_flags & EXT4_MAP_DELAYED)) + goto out; + + /* Map one delalloc extent. */ + ret = ext4_iomap_map_one_extent(inode, &map); + if (ret < 0) { + if (ext4_forced_shutdown(sb)) + return ret; + + /* + * Retry transient ENOSPC errors, if + * ext4_count_free_blocks() is non-zero, a commit + * should free up blocks. + */ + if (ret == -ENOSPC && journal && ext4_count_free_clusters(sb)) { + jbd2_journal_force_commit_nested(journal); + goto retry; + } + + ext4_msg(sb, KERN_CRIT, + "Delayed block allocation failed for inode %lu at logical offset %llu with max blocks %u with error %d", + inode->i_ino, (unsigned long long)map.m_lblk, + (unsigned int)map.m_len, -ret); + ext4_msg(sb, KERN_CRIT, + "This should not happen!! Data will be lost\n"); + if (ret == -ENOSPC) + ext4_print_free_blocks(inode); + return ret; + } +out: + ewpc->data_seq = READ_ONCE(ei->i_es_seq); + ext4_set_iomap(inode, &wpc->iomap, &map, offset, + map.m_len << blkbits, 0); + return 0; +} + +static int ext4_iomap_prepare_ioend(struct iomap_ioend *ioend, int status) +{ + struct ext4_inode_info *ei = EXT4_I(ioend->io_inode); + + /* Need to convert unwritten extents when I/Os are completed. 
*/ + if (ioend->io_type == IOMAP_UNWRITTEN || + ioend->io_offset + ioend->io_size > READ_ONCE(ei->i_disksize)) + ioend->io_bio.bi_end_io = ext4_iomap_end_bio; + + return status; +} + +static void ext4_iomap_discard_folio(struct folio *folio, loff_t pos) +{ + struct inode *inode = folio->mapping->host; + loff_t length = folio_pos(folio) + folio_size(folio) - pos; + + ext4_iomap_punch_delalloc(inode, pos, length, NULL); +} + +static const struct iomap_writeback_ops ext4_writeback_ops = { + .map_blocks = ext4_iomap_map_blocks, + .prepare_ioend = ext4_iomap_prepare_ioend, + .discard_folio = ext4_iomap_discard_folio, +}; + static int ext4_iomap_writepages(struct address_space *mapping, struct writeback_control *wbc) { - return 0; + struct inode *inode = mapping->host; + struct super_block *sb = inode->i_sb; + long nr = wbc->nr_to_write; + int alloc_ctx, ret; + struct ext4_writeback_ctx ewpc = { + .wbc = wbc, + }; + + if (unlikely(ext4_forced_shutdown(sb))) + return -EIO; + + alloc_ctx = ext4_writepages_down_read(sb); + trace_ext4_writepages(inode, wbc); + ret = iomap_writepages(mapping, wbc, &ewpc.ctx, &ext4_writeback_ops); + trace_ext4_writepages_result(inode, wbc, ret, nr - wbc->nr_to_write); + ext4_writepages_up_read(sb, alloc_ctx); + + return ret; } /* diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index ad5543866d21..659ee0fb7cea 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -562,3 +563,107 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio, return 0; } + +static void ext4_iomap_finish_ioend(struct iomap_ioend *ioend) +{ + struct inode *inode = ioend->io_inode; + struct ext4_inode_info *ei = EXT4_I(inode); + loff_t pos = ioend->io_offset; + size_t size = ioend->io_size; + loff_t new_disksize; + handle_t *handle; + int credits; + int ret, err; + + ret = blk_status_to_errno(ioend->io_bio.bi_status); + if (unlikely(ret)) + goto out; + + /* + * We may need to convert up to one extent per block in + * the page and we may dirty the inode. + */ + credits = ext4_chunk_trans_blocks(inode, + EXT4_MAX_BLOCKS(size, pos, inode->i_blkbits)); + handle = ext4_journal_start(inode, EXT4_HT_EXT_CONVERT, credits); + if (IS_ERR(handle)) { + ret = PTR_ERR(handle); + goto out_err; + } + + if (ioend->io_type == IOMAP_UNWRITTEN) { + ret = ext4_convert_unwritten_extents(handle, inode, pos, size); + if (ret) + goto out_journal; + } + + /* + * Update on-disk size after IO is completed. Races with + * truncate are avoided by checking i_size under i_data_sem. + */ + new_disksize = pos + size; + if (new_disksize > READ_ONCE(ei->i_disksize)) { + down_write(&ei->i_data_sem); + new_disksize = min(new_disksize, i_size_read(inode)); + if (new_disksize > ei->i_disksize) + ei->i_disksize = new_disksize; + up_write(&ei->i_data_sem); + ret = ext4_mark_inode_dirty(handle, inode); + if (ret) + EXT4_ERROR_INODE_ERR(inode, -ret, + "Failed to mark inode dirty"); + } + +out_journal: + err = ext4_journal_stop(handle); + if (!ret) + ret = err; +out_err: + if (ret < 0 && !ext4_forced_shutdown(inode->i_sb)) { + ext4_msg(inode->i_sb, KERN_EMERG, + "failed to convert unwritten extents to written extents or update inode size -- potential data loss! 
(inode %lu, error %d)", + inode->i_ino, ret); + } +out: + iomap_finish_ioends(ioend, ret); +} + +/* + * Work on buffered iomap completed IO, to convert unwritten extents to + * mapped extents + */ +void ext4_iomap_end_io(struct work_struct *work) +{ + struct ext4_inode_info *ei = container_of(work, struct ext4_inode_info, + i_iomap_ioend_work); + struct iomap_ioend *ioend; + struct list_head ioend_list; + unsigned long flags; + + spin_lock_irqsave(&ei->i_completed_io_lock, flags); + list_replace_init(&ei->i_iomap_ioend_list, &ioend_list); + spin_unlock_irqrestore(&ei->i_completed_io_lock, flags); + + iomap_sort_ioends(&ioend_list); + while (!list_empty(&ioend_list)) { + ioend = list_entry(ioend_list.next, struct iomap_ioend, io_list); + list_del_init(&ioend->io_list); + iomap_ioend_try_merge(ioend, &ioend_list); + ext4_iomap_finish_ioend(ioend); + } +} + +void ext4_iomap_end_bio(struct bio *bio) +{ + struct iomap_ioend *ioend = iomap_ioend_from_bio(bio); + struct ext4_inode_info *ei = EXT4_I(ioend->io_inode); + struct ext4_sb_info *sbi = EXT4_SB(ioend->io_inode->i_sb); + unsigned long flags; + + /* Only reserved conversions from writeback should enter here */ + spin_lock_irqsave(&ei->i_completed_io_lock, flags); + if (list_empty(&ei->i_iomap_ioend_list)) + queue_work(sbi->rsv_conversion_wq, &ei->i_iomap_ioend_work); + list_add_tail(&ioend->io_list, &ei->i_iomap_ioend_list); + spin_unlock_irqrestore(&ei->i_completed_io_lock, flags); +} diff --git a/fs/ext4/super.c b/fs/ext4/super.c index a01e0bbe57c8..56baadec27e0 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1419,11 +1419,13 @@ static struct inode *ext4_alloc_inode(struct super_block *sb) #endif ei->jinode = NULL; INIT_LIST_HEAD(&ei->i_rsv_conversion_list); + INIT_LIST_HEAD(&ei->i_iomap_ioend_list); spin_lock_init(&ei->i_completed_io_lock); ei->i_sync_tid = 0; ei->i_datasync_tid = 0; atomic_set(&ei->i_unwritten, 0); INIT_WORK(&ei->i_rsv_conversion_work, ext4_end_io_rsv_work); + INIT_WORK(&ei->i_iomap_ioend_work, ext4_iomap_end_io); ext4_fc_init_inode(&ei->vfs_inode); mutex_init(&ei->i_fc_lock); return &ei->vfs_inode; From patchwork Tue Oct 22 11:10:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000211 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org (client-ip=150.107.74.76; helo=mail.ozlabs.org; envelope-from=srs0=4mkm=rs=vger.kernel.org=linux-ext4+bounces-4702-patchwork-incoming=ozlabs.org@ozlabs.org; receiver=patchwork.ozlabs.org) Received: from mail.ozlabs.org (gandalf.ozlabs.org [150.107.74.76]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4XXcmG2SX7z1xw0 for ; Tue, 22 Oct 2024 14:17:02 +1100 (AEDT) Received: from mail.ozlabs.org (mail.ozlabs.org [IPv6:2404:9400:2221:ea00::3]) by gandalf.ozlabs.org (Postfix) with ESMTP id 4XXcmD6n2qz4w2H for ; Tue, 22 Oct 2024 14:17:00 +1100 (AEDT) Received: by gandalf.ozlabs.org (Postfix) id 4XXcmD6kQQz4wnr; Tue, 22 Oct 2024 14:17:00 +1100 (AEDT) Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: gandalf.ozlabs.org; arc=pass smtp.remote-ip="2604:1380:45d1:ec00::1" arc.chain=subspace.kernel.org ARC-Seal: i=2; a=rsa-sha256; d=ozlabs.org; s=201707; t=1729567020; 
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 18/27] ext4: implement mmap iomap path Date: Tue, 22 Oct 2024 19:10:49 +0800 Message-ID: <20241022111059.2566137-19-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi
Introduce ext4_iomap_page_mkwrite() to implement the mmap iomap path. It invokes iomap_page_mkwrite() and passes ext4_iomap_buffered_[da_]write_ops to dirty the folio and map blocks; almost all other work is handled by iomap_page_mkwrite().
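For context, the write fault only reaches this handler because it is wired up as the ->page_mkwrite callback in the file's vm_operations_struct; a minimal sketch of that wiring, assuming the usual ext4 arrangement (illustrative, not quoted from this series):

/*
 * Illustrative sketch of how a write fault reaches ext4_page_mkwrite():
 * mmap() installs the file's vm_ops, and a write to a present, read-only
 * page triggers ->page_mkwrite, where ext4 now picks the iomap ops.
 */
static const struct vm_operations_struct ext4_file_vm_ops_sketch = {
	.fault		= ext4_filemap_fault,	/* read fault */
	.map_pages	= filemap_map_pages,	/* fault-around */
	.page_mkwrite	= ext4_page_mkwrite,	/* write fault on a present page */
};

With the dispatch added here, inodes flagged EXT4_STATE_BUFFERED_IOMAP are routed to iomap_page_mkwrite(), while other inodes keep using the existing buffer_head based path.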
Signed-off-by: Zhang Yi --- fs/ext4/inode.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index a260942fd2dd..0a9b73534257 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -6478,6 +6478,23 @@ static int ext4_bh_unmapped(handle_t *handle, struct inode *inode, return !buffer_mapped(bh); } +static vm_fault_t ext4_iomap_page_mkwrite(struct vm_fault *vmf) +{ + struct inode *inode = file_inode(vmf->vma->vm_file); + const struct iomap_ops *iomap_ops; + + /* + * ext4_nonda_switch() could writeback this folio, so have to + * call it before lock folio. + */ + if (test_opt(inode->i_sb, DELALLOC) && !ext4_nonda_switch(inode->i_sb)) + iomap_ops = &ext4_iomap_buffered_da_write_ops; + else + iomap_ops = &ext4_iomap_buffered_write_ops; + + return iomap_page_mkwrite(vmf, iomap_ops); +} + vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; @@ -6501,6 +6518,11 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf) filemap_invalidate_lock_shared(mapping); + if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) { + ret = ext4_iomap_page_mkwrite(vmf); + goto out; + } + err = ext4_convert_inline_data(inode); if (err) goto out_ret;
From patchwork Tue Oct 22 11:10:50 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000212
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 19/27] ext4: do not always order data when partial zeroing out a block Date: Tue, 22 Oct 2024 19:10:50 +0800 Message-ID: <20241022111059.2566137-20-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi
When zeroing out a partial block during a partial truncate, a zeroing range, or a hole punch, only the partial truncate needs the data to be ordered, because only it risks exposing stale data: if a crash occurs just after the i_disksize transaction has been submitted but before the zeroed data is written out, the tail block will retain stale data, which could be exposed by the next expanding truncate operation. Zeroing a range and punching a hole do not have this risk. Therefore, move ext4_jbd2_inode_add_write() out to ext4_truncate() and order the data only for the partial truncate.
Signed-off-by: Zhang Yi --- fs/ext4/inode.c | 50 +++++++++++++++++++++++++++++++++++++------------ 1 file changed, 38 insertions(+), 12 deletions(-) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 0a9b73534257..97be75cde481 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4038,7 +4038,9 @@ void ext4_set_aops(struct inode *inode) * racing writeback can come later and flush the stale pagecache to disk.
*/ static int __ext4_block_zero_page_range(handle_t *handle, - struct address_space *mapping, loff_t from, loff_t length) + struct address_space *mapping, + loff_t from, loff_t length, + bool *did_zero) { ext4_fsblk_t index = from >> PAGE_SHIFT; unsigned offset = from & (PAGE_SIZE-1); @@ -4116,14 +4118,16 @@ static int __ext4_block_zero_page_range(handle_t *handle, if (ext4_should_journal_data(inode)) { err = ext4_dirty_journalled_data(handle, bh); + if (err) + goto unlock; } else { err = 0; mark_buffer_dirty(bh); - if (ext4_should_order_data(inode)) - err = ext4_jbd2_inode_add_write(handle, inode, from, - length); } + if (did_zero) + *did_zero = true; + unlock: folio_unlock(folio); folio_put(folio); @@ -4138,7 +4142,9 @@ static int __ext4_block_zero_page_range(handle_t *handle, * that corresponds to 'from' */ static int ext4_block_zero_page_range(handle_t *handle, - struct address_space *mapping, loff_t from, loff_t length) + struct address_space *mapping, + loff_t from, loff_t length, + bool *did_zero) { struct inode *inode = mapping->host; unsigned offset = from & (PAGE_SIZE-1); @@ -4156,7 +4162,8 @@ static int ext4_block_zero_page_range(handle_t *handle, return dax_zero_range(inode, from, length, NULL, &ext4_iomap_ops); } - return __ext4_block_zero_page_range(handle, mapping, from, length); + return __ext4_block_zero_page_range(handle, mapping, from, length, + did_zero); } /* @@ -4166,12 +4173,15 @@ static int ext4_block_zero_page_range(handle_t *handle, * of that block so it doesn't yield old data if the file is later grown. */ static int ext4_block_truncate_page(handle_t *handle, - struct address_space *mapping, loff_t from) + struct address_space *mapping, loff_t from, + loff_t *zero_len) { unsigned offset = from & (PAGE_SIZE-1); unsigned length; unsigned blocksize; struct inode *inode = mapping->host; + bool did_zero = false; + int ret; /* If we are processing an encrypted inode during orphan list handling */ if (IS_ENCRYPTED(inode) && !fscrypt_has_encryption_key(inode)) @@ -4180,7 +4190,13 @@ static int ext4_block_truncate_page(handle_t *handle, blocksize = inode->i_sb->s_blocksize; length = blocksize - (offset & (blocksize - 1)); - return ext4_block_zero_page_range(handle, mapping, from, length); + ret = ext4_block_zero_page_range(handle, mapping, from, length, + &did_zero); + if (ret) + return ret; + + *zero_len = length; + return 0; } int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, @@ -4203,13 +4219,14 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, if (start == end && (partial_start || (partial_end != sb->s_blocksize - 1))) { err = ext4_block_zero_page_range(handle, mapping, - lstart, length); + lstart, length, NULL); return err; } /* Handle partial zero out on the start of the range */ if (partial_start) { err = ext4_block_zero_page_range(handle, mapping, - lstart, sb->s_blocksize); + lstart, sb->s_blocksize, + NULL); if (err) return err; } @@ -4217,7 +4234,7 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, if (partial_end != sb->s_blocksize - 1) err = ext4_block_zero_page_range(handle, mapping, byte_end - partial_end, - partial_end + 1); + partial_end + 1, NULL); return err; } @@ -4517,6 +4534,7 @@ int ext4_truncate(struct inode *inode) int err = 0, err2; handle_t *handle; struct address_space *mapping = inode->i_mapping; + loff_t zero_len = 0; /* * There is a possibility that we're either freeing the inode @@ -4560,7 +4578,15 @@ int ext4_truncate(struct inode *inode) } if (inode->i_size & 
(inode->i_sb->s_blocksize - 1)) - ext4_block_truncate_page(handle, mapping, inode->i_size); + ext4_block_truncate_page(handle, mapping, inode->i_size, + &zero_len); + + if (zero_len && ext4_should_order_data(inode)) { + err = ext4_jbd2_inode_add_write(handle, inode, inode->i_size, + zero_len); + if (err) + goto out_stop; + } /* * We add the inode to the orphan list, so that if this
From patchwork Tue Oct 22 11:10:51 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000214
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 20/27] ext4: do not start handle if unnecessary while partial zeroing out a block Date: Tue, 22 Oct 2024 19:10:51 +0800 Message-ID: <20241022111059.2566137-21-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi
When zeroing out a partial block in __ext4_block_zero_page_range() during a partial truncate, a zeroing range, or a hole punch, we only need to start a handle in data=journal mode, because only then must the zeroed data block be logged; the handle is not needed in the other modes. Therefore, we can start the handle in ext4_block_zero_page_range() and avoid performing the zeroing under a running handle in data=ordered or data=writeback mode. This change is essential for the conversion to iomap buffered I/O, as it helps prevent a potential deadlock: after we switch to iomap_zero_range() to zero out a partial block in later patches, iomap_zero_range() may write out dirty folios and wait for the I/O to complete before zeroing. However, we cannot wait for I/O completion under a running handle, because the end-I/O path may in turn wait for this handle to stop if the running transaction has begun to commit or the journal is running out of space. Therefore, postpone starting the handle in the partial truncation, zeroing range, and hole punching paths, in preparation for the buffered write iomap conversion.
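Condensed into a hedged sketch (the helper name below is hypothetical; the real change keeps the existing __ext4_block_zero_page_range() structure shown in the diff that follows), the idea is that the handle is opened only when the zeroed buffer must be logged, so no handle is ever held across a possible writeback wait:

/* Hedged sketch, not the literal patch: scope the handle to data=journal. */
static int zero_partial_block_sketch(struct inode *inode, loff_t from, loff_t length)
{
	handle_t *handle = NULL;
	int err = 0;

	if (ext4_should_journal_data(inode)) {
		/* Only data=journal needs the zeroed buffer in the journal. */
		handle = ext4_journal_start(inode, EXT4_HT_MISC, 1);
		if (IS_ERR(handle))
			return PTR_ERR(handle);
	}

	/* ... map the buffer, zero the folio range, dirty it ... */

	if (handle)
		ext4_journal_stop(handle);
	return err;
}

Callers such as truncate, zero range, and hole punch can then perform the zeroing before they start their own transactions, which is what the signature changes below make possible.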
Signed-off-by: Zhang Yi --- fs/ext4/ext4.h | 4 +-- fs/ext4/extents.c | 22 ++++++--------- fs/ext4/inode.c | 70 +++++++++++++++++++++++++---------------------- 3 files changed, 47 insertions(+), 49 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index d4d594d97634..e1b7f7024f07 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -3034,8 +3034,8 @@ extern void ext4_set_aops(struct inode *inode); extern int ext4_writepage_trans_blocks(struct inode *); extern int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode); extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks); -extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, - loff_t lstart, loff_t lend); +extern int ext4_zero_partial_blocks(struct inode *inode, + loff_t lstart, loff_t lend); extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf); extern qsize_t *ext4_get_reserved_space(struct inode *inode); extern int ext4_get_projid(struct inode *inode, kprojid_t *projid); diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 4b30e6f0a634..20e56cd17847 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4576,7 +4576,7 @@ static long ext4_zero_range(struct file *file, loff_t offset, ext4_lblk_t start_lblk, end_lblk; unsigned int blocksize = i_blocksize(inode); unsigned int blkbits = inode->i_blkbits; - int ret, flags, credits; + int ret, flags; trace_ext4_zero_range(inode, offset, len, mode); WARN_ON_ONCE(!inode_is_locked(inode)); @@ -4638,27 +4638,21 @@ static long ext4_zero_range(struct file *file, loff_t offset, if (!(offset & (blocksize - 1)) && !(end & (blocksize - 1))) return ret; - /* - * In worst case we have to writeout two nonadjacent unwritten - * blocks and update the inode - */ - credits = (2 * ext4_ext_index_trans_blocks(inode, 2)) + 1; - if (ext4_should_journal_data(inode)) - credits += 2; - handle = ext4_journal_start(inode, EXT4_HT_MISC, credits); + /* Zero out partial block at the edges of the range */ + ret = ext4_zero_partial_blocks(inode, offset, len); + if (ret) + return ret; + + handle = ext4_journal_start(inode, EXT4_HT_INODE, 2); if (IS_ERR(handle)) { ret = PTR_ERR(handle); ext4_std_error(inode->i_sb, ret); return ret; } - /* Zero out partial block at the edges of the range */ - ret = ext4_zero_partial_blocks(handle, inode, offset, len); - if (ret) - goto out_handle; - if (new_size) ext4_update_inode_size(inode, new_size); + ret = ext4_mark_inode_dirty(handle, inode); if (unlikely(ret)) goto out_handle; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 97be75cde481..34701afe61c2 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4037,8 +4037,7 @@ void ext4_set_aops(struct inode *inode) * ext4_punch_hole, etc) which needs to be properly zeroed out. Otherwise a * racing writeback can come later and flush the stale pagecache to disk. 
*/ -static int __ext4_block_zero_page_range(handle_t *handle, - struct address_space *mapping, +static int __ext4_block_zero_page_range(struct address_space *mapping, loff_t from, loff_t length, bool *did_zero) { @@ -4046,16 +4045,25 @@ static int __ext4_block_zero_page_range(handle_t *handle, unsigned offset = from & (PAGE_SIZE-1); unsigned blocksize, pos; ext4_lblk_t iblock; + handle_t *handle; struct inode *inode = mapping->host; struct buffer_head *bh; struct folio *folio; int err = 0; + if (ext4_should_journal_data(inode)) { + handle = ext4_journal_start(inode, EXT4_HT_MISC, 1); + if (IS_ERR(handle)) + return PTR_ERR(handle); + } + folio = __filemap_get_folio(mapping, from >> PAGE_SHIFT, FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mapping_gfp_constraint(mapping, ~__GFP_FS)); - if (IS_ERR(folio)) - return PTR_ERR(folio); + if (IS_ERR(folio)) { + err = PTR_ERR(folio); + goto out; + } blocksize = inode->i_sb->s_blocksize; @@ -4106,22 +4114,24 @@ static int __ext4_block_zero_page_range(handle_t *handle, } } } + if (ext4_should_journal_data(inode)) { BUFFER_TRACE(bh, "get write access"); err = ext4_journal_get_write_access(handle, inode->i_sb, bh, EXT4_JTR_NONE); if (err) goto unlock; - } - folio_zero_range(folio, offset, length); - BUFFER_TRACE(bh, "zeroed end of block"); - if (ext4_should_journal_data(inode)) { + folio_zero_range(folio, offset, length); + BUFFER_TRACE(bh, "zeroed end of block"); + err = ext4_dirty_journalled_data(handle, bh); if (err) goto unlock; } else { - err = 0; + folio_zero_range(folio, offset, length); + BUFFER_TRACE(bh, "zeroed end of block"); + mark_buffer_dirty(bh); } @@ -4131,6 +4141,9 @@ static int __ext4_block_zero_page_range(handle_t *handle, unlock: folio_unlock(folio); folio_put(folio); +out: + if (ext4_should_journal_data(inode)) + ext4_journal_stop(handle); return err; } @@ -4141,8 +4154,7 @@ static int __ext4_block_zero_page_range(handle_t *handle, * the end of the block it will be shortened to end of the block * that corresponds to 'from' */ -static int ext4_block_zero_page_range(handle_t *handle, - struct address_space *mapping, +static int ext4_block_zero_page_range(struct address_space *mapping, loff_t from, loff_t length, bool *did_zero) { @@ -4162,8 +4174,7 @@ static int ext4_block_zero_page_range(handle_t *handle, return dax_zero_range(inode, from, length, NULL, &ext4_iomap_ops); } - return __ext4_block_zero_page_range(handle, mapping, from, length, - did_zero); + return __ext4_block_zero_page_range(mapping, from, length, did_zero); } /* @@ -4172,8 +4183,7 @@ static int ext4_block_zero_page_range(handle_t *handle, * This required during truncate. We need to physically zero the tail end * of that block so it doesn't yield old data if the file is later grown. 
*/ -static int ext4_block_truncate_page(handle_t *handle, - struct address_space *mapping, loff_t from, +static int ext4_block_truncate_page(struct address_space *mapping, loff_t from, loff_t *zero_len) { unsigned offset = from & (PAGE_SIZE-1); @@ -4190,8 +4200,7 @@ static int ext4_block_truncate_page(handle_t *handle, blocksize = inode->i_sb->s_blocksize; length = blocksize - (offset & (blocksize - 1)); - ret = ext4_block_zero_page_range(handle, mapping, from, length, - &did_zero); + ret = ext4_block_zero_page_range(mapping, from, length, &did_zero); if (ret) return ret; @@ -4199,8 +4208,7 @@ static int ext4_block_truncate_page(handle_t *handle, return 0; } -int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, - loff_t lstart, loff_t length) +int ext4_zero_partial_blocks(struct inode *inode, loff_t lstart, loff_t length) { struct super_block *sb = inode->i_sb; struct address_space *mapping = inode->i_mapping; @@ -4218,21 +4226,19 @@ int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode, /* Handle partial zero within the single block */ if (start == end && (partial_start || (partial_end != sb->s_blocksize - 1))) { - err = ext4_block_zero_page_range(handle, mapping, - lstart, length, NULL); + err = ext4_block_zero_page_range(mapping, lstart, length, NULL); return err; } /* Handle partial zero out on the start of the range */ if (partial_start) { - err = ext4_block_zero_page_range(handle, mapping, - lstart, sb->s_blocksize, - NULL); + err = ext4_block_zero_page_range(mapping, lstart, + sb->s_blocksize, NULL); if (err) return err; } /* Handle partial zero out on the end of the range */ if (partial_end != sb->s_blocksize - 1) - err = ext4_block_zero_page_range(handle, mapping, + err = ext4_block_zero_page_range(mapping, byte_end - partial_end, partial_end + 1, NULL); return err; @@ -4418,6 +4424,10 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) /* Now release the pages and zero block aligned part of pages*/ truncate_pagecache_range(inode, offset, end - 1); + ret = ext4_zero_partial_blocks(inode, offset, length); + if (ret) + return ret; + if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) credits = ext4_writepage_trans_blocks(inode); else @@ -4429,10 +4439,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) return ret; } - ret = ext4_zero_partial_blocks(handle, inode, offset, length); - if (ret) - goto out_handle; - /* If there are blocks to remove, do it */ start_lblk = round_up(offset, blocksize) >> inode->i_blkbits; end_lblk = end >> inode->i_blkbits; @@ -4564,6 +4570,8 @@ int ext4_truncate(struct inode *inode) err = ext4_inode_attach_jinode(inode); if (err) goto out_trace; + + ext4_block_truncate_page(mapping, inode->i_size, &zero_len); } if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) @@ -4577,10 +4585,6 @@ int ext4_truncate(struct inode *inode) goto out_trace; } - if (inode->i_size & (inode->i_sb->s_blocksize - 1)) - ext4_block_truncate_page(handle, mapping, inode->i_size, - &zero_len); - if (zero_len && ext4_should_order_data(inode)) { err = ext4_jbd2_inode_add_write(handle, inode, inode->i_size, zero_len); From patchwork Tue Oct 22 11:10:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000215 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=ozlabs.org 
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 21/27] ext4: implement zero_range iomap path Date: Tue, 22 Oct 2024 19:10:52 +0800 Message-ID: <20241022111059.2566137-22-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi
Introduce ext4_iomap_zero_range() to implement the zero_range iomap path. Currently, this function directly invokes iomap_zero_range() to zero out a mapped partial block during truncate down, zeroing range, and punching hole; almost all operations are handled by iomap_zero_range(). One important aspect to consider is the truncate-down operation. Since we do not order the data, it is essential to write out the zeroed data before the i_disksize update transaction is committed. Otherwise, stale data may be left over in the last block, which could be exposed during the next expanding truncate operation.
Signed-off-by: Zhang Yi --- fs/ext4/inode.c | 25 +++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 34701afe61c2..50e4afd17e93 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4147,6 +4147,13 @@ static int __ext4_block_zero_page_range(struct address_space *mapping, return err; } +static int ext4_iomap_zero_range(struct inode *inode, loff_t from, + loff_t length, bool *did_zero) +{ + return iomap_zero_range(inode, from, length, did_zero, + &ext4_iomap_buffered_write_ops); +} + /* * ext4_block_zero_page_range() zeros out a mapping of length 'length' * starting from file offset 'from'. The range to be zero'd must @@ -4173,6 +4180,8 @@ static int ext4_block_zero_page_range(struct address_space *mapping, if (IS_DAX(inode)) { return dax_zero_range(inode, from, length, NULL, &ext4_iomap_ops); + } else if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) { + return ext4_iomap_zero_range(inode, from, length, did_zero); } return __ext4_block_zero_page_range(mapping, from, length, did_zero); } @@ -4572,6 +4581,22 @@ int ext4_truncate(struct inode *inode) goto out_trace; ext4_block_truncate_page(mapping, inode->i_size, &zero_len); + /* + * inode with an iomap buffered I/O path does not order data, + * so it is necessary to write out zeroed data before the + * updating i_disksize transaction is committed. Otherwise, + * stale data may remain in the last block, which could be + * exposed during the next expand truncate operation.
+ */ + if (zero_len && ext4_test_inode_state(inode, + EXT4_STATE_BUFFERED_IOMAP)) { + loff_t zero_end = inode->i_size + zero_len; + + err = filemap_write_and_wait_range(mapping, + inode->i_size, zero_end - 1); + if (err) + goto out_trace; + } } if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
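Taken together, the ordering the commit message depends on for a truncate down of an iomap inode is roughly the following; this is a hedged recap with a hypothetical wrapper name, not code from the series:

/* Hedged recap of the truncate-down ordering, not the literal patch. */
static int truncate_tail_ordering_sketch(struct inode *inode, loff_t new_size,
					 loff_t zero_len)
{
	struct address_space *mapping = inode->i_mapping;
	int err;

	/* 1) Zero the tail of the last block through the iomap path. */
	err = iomap_zero_range(inode, new_size, zero_len, NULL,
			       &ext4_iomap_buffered_write_ops);
	if (err)
		return err;

	/* 2) Flush the zeroed range; nothing orders it against the journal. */
	err = filemap_write_and_wait_range(mapping, new_size,
					   new_size + zero_len - 1);
	if (err)
		return err;

	/* 3) Only now may the transaction that shrinks i_disksize commit. */
	return 0;
}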
From patchwork Tue Oct 22 11:10:53 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000216
From: Zhang Yi To: linux-ext4@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org, djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com, yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com Subject: [PATCH 22/27] ext4: disable online defrag when inode using iomap buffered I/O path Date: Tue, 22 Oct 2024 19:10:53 +0800 Message-ID: <20241022111059.2566137-23-yi.zhang@huaweicloud.com> X-Mailer: git-send-email 2.46.1 In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com> References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi
Online defragmentation does not yet support inodes using the iomap buffered I/O path, as it still relies on ext4_get_block() to get blocks and copy data. Therefore, we must disable it for the time being.
Signed-off-by: Zhang Yi --- fs/ext4/move_extent.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c index b64661ea6e0e..508e342b4a1d 100644 --- a/fs/ext4/move_extent.c +++ b/fs/ext4/move_extent.c @@ -610,6 +610,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk, return -EOPNOTSUPP; } + if (ext4_test_inode_state(orig_inode, EXT4_STATE_BUFFERED_IOMAP) || + ext4_test_inode_state(donor_inode, EXT4_STATE_BUFFERED_IOMAP)) { + ext4_msg(orig_inode->i_sb, KERN_ERR, + "Online defrag not supported for inode with iomap buffered IO path"); + return -EOPNOTSUPP; + } + /* Protect orig and donor inodes against a truncate */ lock_two_nondirectories(orig_inode, donor_inode);
From patchwork Tue Oct 22 11:10:54 2024 X-Patchwork-Submitter: Zhang Yi X-Patchwork-Id: 2000217
From patchwork Tue Oct 22 11:10:54 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000217
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org,
 djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 23/27] ext4: disable inode journal mode when using iomap buffered I/O path
Date: Tue, 22 Oct 2024 19:10:54 +0800
Message-ID: <20241022111059.2566137-24-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.46.1
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Inodes in data=journal mode are not supported by the iomap buffered I/O
path, so just disable data journalling for an inode if
EXT4_STATE_BUFFERED_IOMAP is set.
Signed-off-by: Zhang Yi
---
 fs/ext4/ext4_jbd2.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
index da4a82456383..367a29babe09 100644
--- a/fs/ext4/ext4_jbd2.c
+++ b/fs/ext4/ext4_jbd2.c
@@ -16,7 +16,8 @@ int ext4_inode_journal_mode(struct inode *inode)
 	    ext4_test_inode_flag(inode, EXT4_INODE_EA_INODE) ||
 	    test_opt(inode->i_sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA ||
 	    (ext4_test_inode_flag(inode, EXT4_INODE_JOURNAL_DATA) &&
-	     !test_opt(inode->i_sb, DELALLOC))) {
+	     !test_opt(inode->i_sb, DELALLOC) &&
+	     !ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP))) {
 		/* We do not support data journalling for encrypted data */
 		if (S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode))
 			return EXT4_INODE_ORDERED_DATA_MODE;	/* ordered */
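The per-inode journal-data flag that this check now overrides is what
"chattr +j" sets via FS_IOC_SETFLAGS. A hedged illustration of that
userspace side follows; it is not part of the patch and the helper name
is made up. After this change, a file on the iomap buffered I/O path
keeps ordered-data semantics even with the flag set.

/*
 * Illustration only (not part of the patch): setting the per-inode
 * journal-data flag, as "chattr +j" does. With this change, an inode on
 * the iomap buffered I/O path keeps ordered-mode semantics even when
 * this flag is set, because ext4_inode_journal_mode() now ignores
 * EXT4_INODE_JOURNAL_DATA for such inodes.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_JOURNAL_DATA_FL */

static int set_journal_data(const char *path)
{
        int fd, attr, ret = -1;

        fd = open(path, O_RDONLY);
        if (fd < 0)
                return -1;

        if (ioctl(fd, FS_IOC_GETFLAGS, &attr) == 0) {
                attr |= FS_JOURNAL_DATA_FL;     /* same bit "chattr +j" sets */
                ret = ioctl(fd, FS_IOC_SETFLAGS, &attr);
        }
        if (ret)
                perror("FS_IOC_SETFLAGS");

        close(fd);
        return ret;
}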
From patchwork Tue Oct 22 11:10:55 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000219
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org,
 djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 24/27] ext4: partially enable iomap for the buffered I/O path of regular files
Date: Tue, 22 Oct 2024 19:10:55 +0800
Message-ID: <20241022111059.2566137-25-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.46.1
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Partially enable iomap for the buffered I/O path of regular files under
the default mount options. This covers the default filesystem features
and the bigalloc feature; it does not yet support inline data,
fs_verity, fs_crypt, online defrag, or data=journal mode. Support for
some of these features will be added gradually in the future. The
filesystem automatically falls back to the buffer_head path when any of
these mount options or features are enabled.
Signed-off-by: Zhang Yi
---
 fs/ext4/ext4.h   |  1 +
 fs/ext4/ialloc.c |  3 +++
 fs/ext4/inode.c  | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index e1b7f7024f07..0096191b454c 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2987,6 +2987,7 @@ int ext4_walk_page_buffers(handle_t *handle,
 				     struct buffer_head *bh));
 int do_journal_get_write_access(handle_t *handle, struct inode *inode,
 				struct buffer_head *bh);
+bool ext4_should_use_buffered_iomap(struct inode *inode);
 int ext4_nonda_switch(struct super_block *sb);
 #define FALL_BACK_TO_NONDELALLOC 1
 #define CONVERT_INLINE_DATA	 2
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 7f1a5f90dbbd..2e3e257b9808 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -1333,6 +1333,9 @@ struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
 		}
 	}
 
+	if (ext4_should_use_buffered_iomap(inode))
+		ext4_set_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
+
 	ext4_update_inode_fsync_trans(handle, inode, 1);
 	err = ext4_mark_inode_dirty(handle, inode);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 50e4afd17e93..512094dc4117 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -776,6 +776,8 @@ static int _ext4_get_block(struct inode *inode, sector_t iblock,
 	if (ext4_has_inline_data(inode))
 		return -ERANGE;
+	if (WARN_ON(ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)))
+		return -EINVAL;
 
 	map.m_lblk = iblock;
 	map.m_len = bh->b_size >> inode->i_blkbits;
@@ -2572,6 +2574,9 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
 
 	trace_ext4_writepages(inode, wbc);
 
+	if (WARN_ON(ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)))
+		return -EINVAL;
+
 	/*
 	 * No pages to write? This is mainly a kludge to avoid starting
 	 * a transaction for special inodes like journal inode on last iput()
@@ -5144,6 +5149,30 @@ static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
 	return NULL;
 }
 
+bool ext4_should_use_buffered_iomap(struct inode *inode)
+{
+	struct super_block *sb = inode->i_sb;
+
+	if (ext4_has_feature_inline_data(sb))
+		return false;
+	if (ext4_has_feature_verity(sb))
+		return false;
+	if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
+		return false;
+	if (!S_ISREG(inode->i_mode))
+		return false;
+	if (IS_DAX(inode))
+		return false;
+	if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+		return false;
+	if (ext4_test_inode_flag(inode, EXT4_INODE_EA_INODE))
+		return false;
+	if (ext4_test_inode_flag(inode, EXT4_INODE_ENCRYPT))
+		return false;
+
+	return true;
+}
+
 struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 			  ext4_iget_flags flags, const char *function,
 			  unsigned int line)
@@ -5408,6 +5437,9 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 	if (ret)
 		goto bad_inode;
 
+	if (ext4_should_use_buffered_iomap(inode))
+		ext4_set_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
+
 	if (S_ISREG(inode->i_mode)) {
 		inode->i_op = &ext4_file_inode_operations;
 		inode->i_fop = &ext4_file_operations;
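Setting EXT4_STATE_BUFFERED_IOMAP at inode-initialization time only has
an effect because earlier patches in this series key the
address_space_operations selection off this bit. A rough sketch of that
consumer, for orientation only: the function name is hypothetical, and
ext4_iomap_aops is assumed from elsewhere in the series rather than
introduced here.

/*
 * Rough sketch, for orientation only (not code from this patch): how the
 * per-inode state bit set in __ext4_new_inode()/__ext4_iget() is expected
 * to steer the aops selection performed earlier in this series. The
 * function name is hypothetical; ext4_iomap_aops is assumed from those
 * patches, and everything else keeps the existing buffer_head selection.
 */
static void ext4_select_aops_sketch(struct inode *inode)
{
        if (ext4_test_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP)) {
                /* iomap-based buffered I/O path */
                inode->i_mapping->a_ops = &ext4_iomap_aops;
                return;
        }

        /* otherwise fall back to the existing buffer_head based selection */
        ext4_set_aops(inode);
}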
From patchwork Tue Oct 22 11:10:56 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000220
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org,
 djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 25/27] ext4: enable large folio for regular file with iomap buffered I/O path
Date: Tue, 22 Oct 2024 19:10:56 +0800
Message-ID: <20241022111059.2566137-26-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.46.1
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
From: Zhang Yi

Since we have converted the buffered I/O path to iomap for regular
files, we can enable large folio support as well. This should result in
significant performance gains for large I/O operations.

Signed-off-by: Zhang Yi
---
 fs/ext4/ialloc.c | 4 +++-
 fs/ext4/inode.c  | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 2e3e257b9808..6ff03fb74867 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -1333,8 +1333,10 @@ struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
 		}
 	}
 
-	if (ext4_should_use_buffered_iomap(inode))
+	if (ext4_should_use_buffered_iomap(inode)) {
 		ext4_set_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
+		mapping_set_large_folios(inode->i_mapping);
+	}
 
 	ext4_update_inode_fsync_trans(handle, inode, 1);
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 512094dc4117..97abc88e6658 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -5437,8 +5437,10 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 	if (ret)
 		goto bad_inode;
 
-	if (ext4_should_use_buffered_iomap(inode))
+	if (ext4_should_use_buffered_iomap(inode)) {
 		ext4_set_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
+		mapping_set_large_folios(inode->i_mapping);
+	}
 
 	if (S_ISREG(inode->i_mode)) {
 		inode->i_op = &ext4_file_inode_operations;
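Both hunks above repeat the same two-step pattern: mark the inode, then
enable large folios before the mapping sees any folios. If the pattern
keeps growing, it could be folded into a small helper; below is a sketch
of that idea only, not part of the patch, and the helper name is made up.

/*
 * Sketch only (not part of the patch): the pattern duplicated in
 * __ext4_new_inode() and __ext4_iget(), collected in one place. The
 * helper name is hypothetical. mapping_set_large_folios() must run
 * before the mapping is populated, which both call sites satisfy
 * because the inode is still being set up at that point.
 */
static void ext4_enable_buffered_iomap_sketch(struct inode *inode)
{
        if (!ext4_should_use_buffered_iomap(inode))
                return;

        ext4_set_inode_state(inode, EXT4_STATE_BUFFERED_IOMAP);
        mapping_set_large_folios(inode->i_mapping);
}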
From patchwork Tue Oct 22 11:10:57 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000222
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org,
 djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 26/27] ext4: change mount options code style
Date: Tue, 22 Oct 2024 19:10:57 +0800
Message-ID: <20241022111059.2566137-27-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.46.1
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Just remove the space between the macro name and the opening parenthesis
to satisfy the checkpatch.pl script, so it does not complain when we add
new mount options in the subsequent patch. No functional changes.
Signed-off-by: Zhang Yi
---
 fs/ext4/super.c | 175 +++++++++++++++++++++++-------------------------
 1 file changed, 84 insertions(+), 91 deletions(-)

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 56baadec27e0..89955081c4fe 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1723,101 +1723,94 @@ static const struct constant_table ext4_param_dax[] = {
  * separate for now.
  */
 static const struct fs_parameter_spec ext4_param_specs[] = {
-	fsparam_flag ("bsddf", Opt_bsd_df),
-	fsparam_flag ("minixdf", Opt_minix_df),
-	fsparam_flag ("grpid", Opt_grpid),
-	fsparam_flag ("bsdgroups", Opt_grpid),
-	fsparam_flag ("nogrpid", Opt_nogrpid),
-	fsparam_flag ("sysvgroups", Opt_nogrpid),
-	fsparam_gid ("resgid", Opt_resgid),
-	fsparam_uid ("resuid", Opt_resuid),
-	fsparam_u32 ("sb", Opt_sb),
-	fsparam_enum ("errors", Opt_errors, ext4_param_errors),
-	fsparam_flag ("nouid32", Opt_nouid32),
-	fsparam_flag ("debug", Opt_debug),
-	fsparam_flag ("oldalloc", Opt_removed),
-	fsparam_flag ("orlov", Opt_removed),
-	fsparam_flag ("user_xattr", Opt_user_xattr),
-	fsparam_flag ("acl", Opt_acl),
-	fsparam_flag ("norecovery", Opt_noload),
-	fsparam_flag ("noload", Opt_noload),
-	fsparam_flag ("bh", Opt_removed),
-	fsparam_flag ("nobh", Opt_removed),
-	fsparam_u32 ("commit", Opt_commit),
-	fsparam_u32 ("min_batch_time", Opt_min_batch_time),
-	fsparam_u32 ("max_batch_time", Opt_max_batch_time),
-	fsparam_u32 ("journal_dev", Opt_journal_dev),
-	fsparam_bdev ("journal_path", Opt_journal_path),
-	fsparam_flag ("journal_checksum", Opt_journal_checksum),
-	fsparam_flag ("nojournal_checksum", Opt_nojournal_checksum),
-	fsparam_flag ("journal_async_commit",Opt_journal_async_commit),
-	fsparam_flag ("abort", Opt_abort),
-	fsparam_enum ("data", Opt_data, ext4_param_data),
-	fsparam_enum ("data_err", Opt_data_err,
+	fsparam_flag("bsddf", Opt_bsd_df),
+	fsparam_flag("minixdf", Opt_minix_df),
+	fsparam_flag("grpid", Opt_grpid),
+	fsparam_flag("bsdgroups", Opt_grpid),
+	fsparam_flag("nogrpid", Opt_nogrpid),
+	fsparam_flag("sysvgroups", Opt_nogrpid),
+	fsparam_gid("resgid", Opt_resgid),
+	fsparam_uid("resuid", Opt_resuid),
+	fsparam_u32("sb", Opt_sb),
+	fsparam_enum("errors", Opt_errors, ext4_param_errors),
+	fsparam_flag("nouid32", Opt_nouid32),
+	fsparam_flag("debug", Opt_debug),
+	fsparam_flag("oldalloc", Opt_removed),
+	fsparam_flag("orlov", Opt_removed),
+	fsparam_flag("user_xattr", Opt_user_xattr),
+	fsparam_flag("acl", Opt_acl),
+	fsparam_flag("norecovery", Opt_noload),
+	fsparam_flag("noload", Opt_noload),
+	fsparam_flag("bh", Opt_removed),
+	fsparam_flag("nobh", Opt_removed),
+	fsparam_u32("commit", Opt_commit),
+	fsparam_u32("min_batch_time", Opt_min_batch_time),
+	fsparam_u32("max_batch_time", Opt_max_batch_time),
+	fsparam_u32("journal_dev", Opt_journal_dev),
+	fsparam_bdev("journal_path", Opt_journal_path),
+	fsparam_flag("journal_checksum", Opt_journal_checksum),
+	fsparam_flag("nojournal_checksum", Opt_nojournal_checksum),
+	fsparam_flag("journal_async_commit", Opt_journal_async_commit),
+	fsparam_flag("abort", Opt_abort),
+	fsparam_enum("data", Opt_data, ext4_param_data),
+	fsparam_enum("data_err", Opt_data_err,
 			ext4_param_data_err),
-	fsparam_string_empty
-			("usrjquota", Opt_usrjquota),
-	fsparam_string_empty
-			("grpjquota", Opt_grpjquota),
-	fsparam_enum ("jqfmt", Opt_jqfmt, ext4_param_jqfmt),
-	fsparam_flag ("grpquota", Opt_grpquota),
-	fsparam_flag ("quota", Opt_quota),
-	fsparam_flag ("noquota", Opt_noquota),
-	fsparam_flag ("usrquota", Opt_usrquota),
-	fsparam_flag ("prjquota", Opt_prjquota),
-	fsparam_flag ("barrier", Opt_barrier),
-	fsparam_u32 ("barrier", Opt_barrier),
-	fsparam_flag ("nobarrier", Opt_nobarrier),
-	fsparam_flag ("i_version", Opt_removed),
-	fsparam_flag ("dax", Opt_dax),
-	fsparam_enum ("dax", Opt_dax_type, ext4_param_dax),
-	fsparam_u32 ("stripe", Opt_stripe),
-	fsparam_flag ("delalloc", Opt_delalloc),
-	fsparam_flag ("nodelalloc", Opt_nodelalloc),
-	fsparam_flag ("warn_on_error", Opt_warn_on_error),
-	fsparam_flag ("nowarn_on_error", Opt_nowarn_on_error),
-	fsparam_u32 ("debug_want_extra_isize",
-			Opt_debug_want_extra_isize),
-	fsparam_flag ("mblk_io_submit", Opt_removed),
-	fsparam_flag ("nomblk_io_submit", Opt_removed),
-	fsparam_flag ("block_validity", Opt_block_validity),
-	fsparam_flag ("noblock_validity", Opt_noblock_validity),
-	fsparam_u32 ("inode_readahead_blks",
-			Opt_inode_readahead_blks),
-	fsparam_u32 ("journal_ioprio", Opt_journal_ioprio),
-	fsparam_u32 ("auto_da_alloc", Opt_auto_da_alloc),
-	fsparam_flag ("auto_da_alloc", Opt_auto_da_alloc),
-	fsparam_flag ("noauto_da_alloc", Opt_noauto_da_alloc),
-	fsparam_flag ("dioread_nolock", Opt_dioread_nolock),
-	fsparam_flag ("nodioread_nolock", Opt_dioread_lock),
-	fsparam_flag ("dioread_lock", Opt_dioread_lock),
-	fsparam_flag ("discard", Opt_discard),
-	fsparam_flag ("nodiscard", Opt_nodiscard),
-	fsparam_u32 ("init_itable", Opt_init_itable),
-	fsparam_flag ("init_itable", Opt_init_itable),
-	fsparam_flag ("noinit_itable", Opt_noinit_itable),
+	fsparam_string_empty("usrjquota", Opt_usrjquota),
+	fsparam_string_empty("grpjquota", Opt_grpjquota),
+	fsparam_enum("jqfmt", Opt_jqfmt, ext4_param_jqfmt),
+	fsparam_flag("grpquota", Opt_grpquota),
+	fsparam_flag("quota", Opt_quota),
+	fsparam_flag("noquota", Opt_noquota),
+	fsparam_flag("usrquota", Opt_usrquota),
+	fsparam_flag("prjquota", Opt_prjquota),
+	fsparam_flag("barrier", Opt_barrier),
+	fsparam_u32("barrier", Opt_barrier),
+	fsparam_flag("nobarrier", Opt_nobarrier),
+	fsparam_flag("i_version", Opt_removed),
+	fsparam_flag("dax", Opt_dax),
+	fsparam_enum("dax", Opt_dax_type, ext4_param_dax),
+	fsparam_u32("stripe", Opt_stripe),
+	fsparam_flag("delalloc", Opt_delalloc),
+	fsparam_flag("nodelalloc", Opt_nodelalloc),
+	fsparam_flag("warn_on_error", Opt_warn_on_error),
+	fsparam_flag("nowarn_on_error", Opt_nowarn_on_error),
+	fsparam_u32("debug_want_extra_isize", Opt_debug_want_extra_isize),
+	fsparam_flag("mblk_io_submit", Opt_removed),
+	fsparam_flag("nomblk_io_submit", Opt_removed),
+	fsparam_flag("block_validity", Opt_block_validity),
+	fsparam_flag("noblock_validity", Opt_noblock_validity),
+	fsparam_u32("inode_readahead_blks", Opt_inode_readahead_blks),
+	fsparam_u32("journal_ioprio", Opt_journal_ioprio),
+	fsparam_u32("auto_da_alloc", Opt_auto_da_alloc),
+	fsparam_flag("auto_da_alloc", Opt_auto_da_alloc),
+	fsparam_flag("noauto_da_alloc", Opt_noauto_da_alloc),
+	fsparam_flag("dioread_nolock", Opt_dioread_nolock),
+	fsparam_flag("nodioread_nolock", Opt_dioread_lock),
+	fsparam_flag("dioread_lock", Opt_dioread_lock),
+	fsparam_flag("discard", Opt_discard),
+	fsparam_flag("nodiscard", Opt_nodiscard),
+	fsparam_u32("init_itable", Opt_init_itable),
+	fsparam_flag("init_itable", Opt_init_itable),
+	fsparam_flag("noinit_itable", Opt_noinit_itable),
 #ifdef CONFIG_EXT4_DEBUG
-	fsparam_flag ("fc_debug_force", Opt_fc_debug_force),
-	fsparam_u32 ("fc_debug_max_replay", Opt_fc_debug_max_replay),
+	fsparam_flag("fc_debug_force", Opt_fc_debug_force),
+	fsparam_u32("fc_debug_max_replay", Opt_fc_debug_max_replay),
 #endif
-	fsparam_u32 ("max_dir_size_kb",
-			Opt_max_dir_size_kb),
-	fsparam_flag ("test_dummy_encryption",
-			Opt_test_dummy_encryption),
-	fsparam_string ("test_dummy_encryption",
-			Opt_test_dummy_encryption),
-	fsparam_flag ("inlinecrypt", Opt_inlinecrypt),
-	fsparam_flag ("nombcache", Opt_nombcache),
-	fsparam_flag ("no_mbcache", Opt_nombcache),	/* for backward compatibility */
-	fsparam_flag ("prefetch_block_bitmaps",
-			Opt_removed),
-	fsparam_flag ("no_prefetch_block_bitmaps",
+	fsparam_u32("max_dir_size_kb", Opt_max_dir_size_kb),
+	fsparam_flag("test_dummy_encryption", Opt_test_dummy_encryption),
+	fsparam_string("test_dummy_encryption", Opt_test_dummy_encryption),
+	fsparam_flag("inlinecrypt", Opt_inlinecrypt),
+	fsparam_flag("nombcache", Opt_nombcache),
+	fsparam_flag("no_mbcache", Opt_nombcache),	/* for backward compatibility */
+	fsparam_flag("prefetch_block_bitmaps", Opt_removed),
+	fsparam_flag("no_prefetch_block_bitmaps",
 			Opt_no_prefetch_block_bitmaps),
-	fsparam_s32 ("mb_optimize_scan", Opt_mb_optimize_scan),
-	fsparam_string ("check", Opt_removed),	/* mount option from ext2/3 */
-	fsparam_flag ("nocheck", Opt_removed),	/* mount option from ext2/3 */
-	fsparam_flag ("reservation", Opt_removed),	/* mount option from ext2/3 */
-	fsparam_flag ("noreservation", Opt_removed),	/* mount option from ext2/3 */
-	fsparam_u32 ("journal", Opt_removed),	/* mount option from ext2/3 */
+	fsparam_s32("mb_optimize_scan", Opt_mb_optimize_scan),
+	fsparam_string("check", Opt_removed),	/* mount option from ext2/3 */
+	fsparam_flag("nocheck", Opt_removed),	/* mount option from ext2/3 */
+	fsparam_flag("reservation", Opt_removed),	/* mount option from ext2/3 */
+	fsparam_flag("noreservation", Opt_removed),	/* mount option from ext2/3 */
+	fsparam_u32("journal", Opt_removed),	/* mount option from ext2/3 */
 	{}
 };
From patchwork Tue Oct 22 11:10:58 2024
X-Patchwork-Submitter: Zhang Yi
X-Patchwork-Id: 2000221
From: Zhang Yi
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
 adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com, hch@infradead.org,
 djwong@kernel.org, david@fromorbit.com, zokeefe@google.com, yi.zhang@huawei.com,
 yi.zhang@huaweicloud.com, chengzhihao1@huawei.com, yukuai3@huawei.com, yangerkun@huawei.com
Subject: [PATCH 27/27] ext4: introduce a mount option for iomap buffered I/O path
Date: Tue, 22 Oct 2024 19:10:58 +0800
Message-ID: <20241022111059.2566137-28-yi.zhang@huaweicloud.com>
X-Mailer: git-send-email 2.46.1
In-Reply-To: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>
References: <20241022111059.2566137-1-yi.zhang@huaweicloud.com>

From: Zhang Yi

Introduce the buffered_iomap and nobuffered_iomap mount options to switch
the iomap buffered I/O path for regular files on and off. The iomap path
remains disabled by default until it supports a more comprehensive set of
features.
Signed-off-by: Zhang Yi
---
 fs/ext4/ext4.h  | 1 +
 fs/ext4/inode.c | 2 ++
 fs/ext4/super.c | 7 +++++++
 3 files changed, 10 insertions(+)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 0096191b454c..c2a44530e026 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1257,6 +1257,7 @@ struct ext4_inode_info {
 						 * scanning in mballoc
 						 */
 #define EXT4_MOUNT2_ABORT		0x00000100 /* Abort filesystem */
+#define EXT4_MOUNT2_BUFFERED_IOMAP	0x00000200 /* Use iomap for buffered IO */
 
 #define clear_opt(sb, opt)		EXT4_SB(sb)->s_mount_opt &= \
 						~EXT4_MOUNT_##opt
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 97abc88e6658..b6e041a423f9 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -5153,6 +5153,8 @@ bool ext4_should_use_buffered_iomap(struct inode *inode)
 {
 	struct super_block *sb = inode->i_sb;
 
+	if (!test_opt2(sb, BUFFERED_IOMAP))
+		return false;
 	if (ext4_has_feature_inline_data(sb))
 		return false;
 	if (ext4_has_feature_verity(sb))
 		return false;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 89955081c4fe..435a866359d9 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1675,6 +1675,7 @@ enum {
 	Opt_discard, Opt_nodiscard, Opt_init_itable, Opt_noinit_itable,
 	Opt_max_dir_size_kb, Opt_nojournal_checksum, Opt_nombcache,
 	Opt_no_prefetch_block_bitmaps, Opt_mb_optimize_scan,
+	Opt_buffered_iomap, Opt_nobuffered_iomap,
 	Opt_errors, Opt_data, Opt_data_err, Opt_jqfmt, Opt_dax_type,
 #ifdef CONFIG_EXT4_DEBUG
 	Opt_fc_debug_max_replay, Opt_fc_debug_force
@@ -1806,6 +1807,8 @@ static const struct fs_parameter_spec ext4_param_specs[] = {
 	fsparam_flag("no_prefetch_block_bitmaps",
 			Opt_no_prefetch_block_bitmaps),
 	fsparam_s32("mb_optimize_scan", Opt_mb_optimize_scan),
+	fsparam_flag("buffered_iomap", Opt_buffered_iomap),
+	fsparam_flag("nobuffered_iomap", Opt_nobuffered_iomap),
 	fsparam_string("check", Opt_removed),	/* mount option from ext2/3 */
 	fsparam_flag("nocheck", Opt_removed),	/* mount option from ext2/3 */
 	fsparam_flag("reservation", Opt_removed),	/* mount option from ext2/3 */
@@ -1900,6 +1903,10 @@ static const struct mount_opts {
 	{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
 	{Opt_no_prefetch_block_bitmaps, EXT4_MOUNT_NO_PREFETCH_BLOCK_BITMAPS,
 	 MOPT_SET},
+	{Opt_buffered_iomap, EXT4_MOUNT2_BUFFERED_IOMAP,
+	 MOPT_SET | MOPT_2 | MOPT_EXT4_ONLY},
+	{Opt_nobuffered_iomap, EXT4_MOUNT2_BUFFERED_IOMAP,
+	 MOPT_CLEAR | MOPT_2 | MOPT_EXT4_ONLY},
 #ifdef CONFIG_EXT4_DEBUG
 	{Opt_fc_debug_force, EXT4_MOUNT2_JOURNAL_FAST_COMMIT,
 	 MOPT_SET | MOPT_2 | MOPT_EXT4_ONLY},
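Because the new flag lives in the second mount-option word (hence MOPT_2
in the mount_opts entries above), callers query it with test_opt2()
rather than test_opt(). The short illustration below paraphrases the
existing helper macros in fs/ext4/ext4.h; it is not code added by this
patch, and the wrapper function is hypothetical, shown only to make the
expansion explicit.

/*
 * Illustration, paraphrased from the existing helpers in fs/ext4/ext4.h
 * (not code added by this patch): s_mount_opt2 holds the MOUNT2 flags,
 * and the EXT4_MOUNT2_##opt token pasting in test_opt2() is why the new
 * flag must be named EXT4_MOUNT2_BUFFERED_IOMAP.
 */
#define test_opt2(sb, opt)	(EXT4_SB(sb)->s_mount_opt2 & EXT4_MOUNT2_##opt)

/* Hypothetical wrapper showing what the new check expands to. */
static bool ext4_buffered_iomap_enabled(struct super_block *sb)
{
	/* test_opt2(sb, BUFFERED_IOMAP) is equivalent to: */
	return (EXT4_SB(sb)->s_mount_opt2 & EXT4_MOUNT2_BUFFERED_IOMAP) != 0;
}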