
[3/3] ext4: fix a potential assertion failure due to improperly dirtied buffer

Message ID 20240829085407.3331490-4-zhangshida@kylinos.cn
State Superseded
Series Fix an error caused by improperly dirtied buffer

Commit Message

zhangshida Aug. 29, 2024, 8:54 a.m. UTC
From: Shida Zhang <zhangshida@kylinos.cn>

On an old kernel version (4.19, ext3, data=journal, pagesize=64k),
an assertion failure is occasionally triggered by the line below:
-----------
jbd2_journal_commit_transaction
{
...
J_ASSERT_BH(bh, !buffer_dirty(bh));
/*
* The buffer on BJ_Forget list and not jbddirty means
...
}
-----------

The same issue also applies to the latest kernel version.

When blocksize < pagesize and we truncate a file, there can be buffers beyond
i_size in the tail page of the mapping. These buffers will be filed onto the
transaction's BJ_Forget list by ext4_journalled_invalidatepage() during
truncation. When the transaction doing the truncate starts committing, we can
grow the file again. This calls __block_write_begin(), which allocates new
blocks under these buffers in the tail page, and we go through the branch:

                        if (buffer_new(bh)) {
                                clean_bdev_bh_alias(bh);
                                if (folio_test_uptodate(folio)) {
                                        clear_buffer_new(bh);
                                        set_buffer_uptodate(bh);
                                        mark_buffer_dirty(bh);
                                        continue;
                                }
                                ...
                        }

Hence buffers on the BJ_Forget list of the committing transaction get marked
dirty, and this triggers the jbd2 assertion.
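
For reference, a minimal userspace sketch of the sequence that can hit the
race is below. It is hypothetical and untested (not the original reproducer);
it assumes a 64k-page kernel and a 4k-block filesystem mounted with
-o data=journal on /mnt, and it relies on the journal commit of the truncate
racing with the following write:
-----------
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* error handling omitted for brevity */
	int fd = open("/mnt/testfile", O_RDWR | O_CREAT, 0644);
	char buf[4096] = { 0 };

	for (;;) {
		/* populate blocks in the tail of the first 64k page */
		pwrite(fd, buf, sizeof(buf), 60 * 1024);
		/* truncate: buffers beyond i_size go to BJ_Forget */
		ftruncate(fd, 4 * 1024);
		/* grow the file again while that transaction commits */
		pwrite(fd, buf, sizeof(buf), 32 * 1024);
	}
}
-----------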

Teach ext4_block_write_begin() to properly handle files with data
journalling by avoiding dirtying the buffers directly. Instead of
folio_zero_new_buffers() we use ext4_journalled_zero_new_buffers(), which
takes care of the journalling. We also don't need to mark new uptodate
buffers as dirty in ext4_block_write_begin(); that will be done either by
block_commit_write() in case of success or by folio_zero_new_buffers() in
case of failure.

Reported-by: Baolin Liu <liubaolin@kylinos.cn>
Suggested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Shida Zhang <zhangshida@kylinos.cn>
---
 fs/ext4/ext4.h   |  3 ++-
 fs/ext4/inline.c |  7 ++++---
 fs/ext4/inode.c  | 41 +++++++++++++++++++++++++++++++++--------
 3 files changed, 39 insertions(+), 12 deletions(-)

Comments

Jan Kara Aug. 29, 2024, 9:30 a.m. UTC | #1
On Thu 29-08-24 16:54:07, zhangshida wrote:
> From: Shida Zhang <zhangshida@kylinos.cn>
> 
> On an old kernel version (4.19, ext3, data=journal, pagesize=64k),
> an assertion failure is occasionally triggered by the line below:
> -----------
> jbd2_journal_commit_transaction
> {
> ...
> J_ASSERT_BH(bh, !buffer_dirty(bh));
> /*
> * The buffer on BJ_Forget list and not jbddirty means
> ...
> }
> -----------
> 
> The same issue also applies to the latest kernel version.
> 
> When blocksize < pagesize and we truncate a file, there can be buffers beyond
> i_size in the tail page of the mapping. These buffers will be filed onto the
> transaction's BJ_Forget list by ext4_journalled_invalidatepage() during
> truncation. When the transaction doing the truncate starts committing, we can
> grow the file again. This calls __block_write_begin(), which allocates new
> blocks under these buffers in the tail page, and we go through the branch:
> 
>                         if (buffer_new(bh)) {
>                                 clean_bdev_bh_alias(bh);
>                                 if (folio_test_uptodate(folio)) {
>                                         clear_buffer_new(bh);
>                                         set_buffer_uptodate(bh);
>                                         mark_buffer_dirty(bh);
>                                         continue;
>                                 }
>                                 ...
>                         }
> 
> Hence buffers on the BJ_Forget list of the committing transaction get marked
> dirty, and this triggers the jbd2 assertion.
> 
> Teach ext4_block_write_begin() to properly handle files with data
> journalling by avoiding dirtying the buffers directly. Instead of
> folio_zero_new_buffers() we use ext4_journalled_zero_new_buffers(), which
> takes care of the journalling. We also don't need to mark new uptodate
> buffers as dirty in ext4_block_write_begin(); that will be done either by
> block_commit_write() in case of success or by folio_zero_new_buffers() in
> case of failure.
> 
> Reported-by: Baolin Liu <liubaolin@kylinos.cn>
> Suggested-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Shida Zhang <zhangshida@kylinos.cn>

One small comment below, but regardless of whether you decide to address it
or not, feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

> @@ -1083,11 +1090,22 @@ int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
>  			err = get_block(inode, block, bh, 1);
>  			if (err)
>  				break;
> +			/*
> +			 * We may be zeroing partial buffers or all new
> +			 * buffers in case of failure. Prepare JBD2 for
> +			 * that.
> +			 */
> +			if (should_journal_data)
> +				do_journal_get_write_access(handle, inode, bh);

Thanks for adding comments! I also mentioned this hunk can be moved inside
the if (buffer_new(bh)) check below to make it more obvious that this is
indeed about handling of newly allocated buffers. But this is just a nit
and the comment explains it well enough, so I don't insist.
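
For illustration, the placement I mean would be roughly (just a sketch of the
same hunk moved down, not compile-tested):

	if (buffer_new(bh)) {
		/*
		 * We may be zeroing partial buffers or all new
		 * buffers in case of failure. Prepare JBD2 for
		 * that.
		 */
		if (should_journal_data)
			do_journal_get_write_access(handle, inode, bh);
		if (folio_test_uptodate(folio)) {
			...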

>  			if (buffer_new(bh)) {
>  				if (folio_test_uptodate(folio)) {
> -					clear_buffer_new(bh);
> +					/*
> +					 * Unlike __block_write_begin() we leave
> +					 * dirtying of new uptodate buffers to
> +					 * ->write_end() time or
> +					 * folio_zero_new_buffers().
> +					 */
>  					set_buffer_uptodate(bh);
> -					mark_buffer_dirty(bh);
>  					continue;
>  				}
>  				if (block_end > to || block_start < from)

Thanks!

								Honza
zhangshida Aug. 30, 2024, 2:03 a.m. UTC | #2
On Thu, Aug 29, 2024 at 17:30, Jan Kara <jack@suse.cz> wrote:
>
> On Thu 29-08-24 16:54:07, zhangshida wrote:
> > From: Shida Zhang <zhangshida@kylinos.cn>
> >
> > On an old kernel version (4.19, ext3, data=journal, pagesize=64k),
> > an assertion failure is occasionally triggered by the line below:
> > -----------
> > jbd2_journal_commit_transaction
> > {
> > ...
> > J_ASSERT_BH(bh, !buffer_dirty(bh));
> > /*
> > * The buffer on BJ_Forget list and not jbddirty means
> > ...
> > }
> > -----------
> >
> > The same issue also applies to the latest kernel version.
> >
> > When blocksize < pagesize and we truncate a file, there can be buffers beyond
> > i_size in the tail page of the mapping. These buffers will be filed onto the
> > transaction's BJ_Forget list by ext4_journalled_invalidatepage() during
> > truncation. When the transaction doing the truncate starts committing, we can
> > grow the file again. This calls __block_write_begin(), which allocates new
> > blocks under these buffers in the tail page, and we go through the branch:
> >
> >                         if (buffer_new(bh)) {
> >                                 clean_bdev_bh_alias(bh);
> >                                 if (folio_test_uptodate(folio)) {
> >                                         clear_buffer_new(bh);
> >                                         set_buffer_uptodate(bh);
> >                                         mark_buffer_dirty(bh);
> >                                         continue;
> >                                 }
> >                                 ...
> >                         }
> >
> > Hence buffers on the BJ_Forget list of the committing transaction get marked
> > dirty, and this triggers the jbd2 assertion.
> >
> > Teach ext4_block_write_begin() to properly handle files with data
> > journalling by avoiding dirtying the buffers directly. Instead of
> > folio_zero_new_buffers() we use ext4_journalled_zero_new_buffers(), which
> > takes care of the journalling. We also don't need to mark new uptodate
> > buffers as dirty in ext4_block_write_begin(); that will be done either by
> > block_commit_write() in case of success or by folio_zero_new_buffers() in
> > case of failure.
> >
> > Reported-by: Baolin Liu <liubaolin@kylinos.cn>
> > Suggested-by: Jan Kara <jack@suse.cz>
> > Signed-off-by: Shida Zhang <zhangshida@kylinos.cn>
>
> One small comment below, but regardless of whether you decide to address it
> or not, feel free to add:
>
> Reviewed-by: Jan Kara <jack@suse.cz>
>
> > @@ -1083,11 +1090,22 @@ int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
> >                       err = get_block(inode, block, bh, 1);
> >                       if (err)
> >                               break;
> > +                     /*
> > +                      * We may be zeroing partial buffers or all new
> > +                      * buffers in case of failure. Prepare JBD2 for
> > +                      * that.
> > +                      */
> > +                     if (should_journal_data)
> > +                             do_journal_get_write_access(handle, inode, bh);
>
> Thanks for adding comments! I also mentioned this hunk can be moved inside
> the if (buffer_new(bh)) check below to make it more obvious that this is
> indeed about handling of newly allocated buffers. But this is just a nit
> and the comment explains it well enough, so I don't insist.
>

Feel free to tell me if you have other issues/nits/ideas.
Because even with your detailed explanation, I may still take it the wrong
way. :p

And thanks for your patience.

-Stephen

> >                       if (buffer_new(bh)) {
> >                               if (folio_test_uptodate(folio)) {
> > -                                     clear_buffer_new(bh);
> > +                                     /*
> > +                                      * Unlike __block_write_begin() we leave
> > +                                      * dirtying of new uptodate buffers to
> > +                                      * ->write_end() time or
> > +                                      * folio_zero_new_buffers().
> > +                                      */
> >                                       set_buffer_uptodate(bh);
> > -                                     mark_buffer_dirty(bh);
> >                                       continue;
> >                               }
> >                               if (block_end > to || block_start < from)
>
> Thanks!
>
>                                                                 Honza
>
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR

Patch

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 5f8257b68190..b653bd423b11 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3851,7 +3851,8 @@  static inline int ext4_buffer_uptodate(struct buffer_head *bh)
 	return buffer_uptodate(bh);
 }
 
-extern int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
+extern int ext4_block_write_begin(handle_t *handle, struct folio *folio,
+				  loff_t pos, unsigned len,
 				  get_block_t *get_block);
 #endif	/* __KERNEL__ */
 
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 0a1a8431e281..8d5599d5af27 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -601,10 +601,11 @@  static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 		goto out;
 
 	if (ext4_should_dioread_nolock(inode)) {
-		ret = ext4_block_write_begin(folio, from, to,
+		ret = ext4_block_write_begin(handle, folio, from, to,
 					     ext4_get_block_unwritten);
 	} else
-		ret = ext4_block_write_begin(folio, from, to, ext4_get_block);
+		ret = ext4_block_write_begin(handle, folio, from, to,
+					     ext4_get_block);
 
 	if (!ret && ext4_should_journal_data(inode)) {
 		ret = ext4_walk_page_buffers(handle, inode,
@@ -856,7 +857,7 @@  static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
 			goto out;
 	}
 
-	ret = ext4_block_write_begin(folio, 0, inline_size,
+	ret = ext4_block_write_begin(NULL, folio, 0, inline_size,
 				     ext4_da_get_block_prep);
 	if (ret) {
 		up_read(&EXT4_I(inode)->xattr_sem);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4964c67e029e..bc26200b2852 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -49,6 +49,11 @@ 
 
 #include <trace/events/ext4.h>
 
+static void ext4_journalled_zero_new_buffers(handle_t *handle,
+					    struct inode *inode,
+					    struct folio *folio,
+					    unsigned from, unsigned to);
+
 static __u32 ext4_inode_csum(struct inode *inode, struct ext4_inode *raw,
 			      struct ext4_inode_info *ei)
 {
@@ -1041,7 +1046,8 @@  int do_journal_get_write_access(handle_t *handle, struct inode *inode,
 	return ret;
 }
 
-int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
+int ext4_block_write_begin(handle_t *handle, struct folio *folio,
+			   loff_t pos, unsigned len,
 			   get_block_t *get_block)
 {
 	unsigned from = pos & (PAGE_SIZE - 1);
@@ -1055,6 +1061,7 @@  int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
 	struct buffer_head *bh, *head, *wait[2];
 	int nr_wait = 0;
 	int i;
+	bool should_journal_data = ext4_should_journal_data(inode);
 
 	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(from > PAGE_SIZE);
@@ -1083,11 +1090,22 @@  int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
 			err = get_block(inode, block, bh, 1);
 			if (err)
 				break;
+			/*
+			 * We may be zeroing partial buffers or all new
+			 * buffers in case of failure. Prepare JBD2 for
+			 * that.
+			 */
+			if (should_journal_data)
+				do_journal_get_write_access(handle, inode, bh);
 			if (buffer_new(bh)) {
 				if (folio_test_uptodate(folio)) {
-					clear_buffer_new(bh);
+					/*
+					 * Unlike __block_write_begin() we leave
+					 * dirtying of new uptodate buffers to
+					 * ->write_end() time or
+					 * folio_zero_new_buffers().
+					 */
 					set_buffer_uptodate(bh);
-					mark_buffer_dirty(bh);
 					continue;
 				}
 				if (block_end > to || block_start < from)
@@ -1117,7 +1135,11 @@  int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
 			err = -EIO;
 	}
 	if (unlikely(err)) {
-		folio_zero_new_buffers(folio, from, to);
+		if (should_journal_data)
+			ext4_journalled_zero_new_buffers(handle, inode, folio,
+							 from, to);
+		else
+			folio_zero_new_buffers(folio, from, to);
 	} else if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 		for (i = 0; i < nr_wait; i++) {
 			int err2;
@@ -1215,10 +1237,11 @@  static int ext4_write_begin(struct file *file, struct address_space *mapping,
 	folio_wait_stable(folio);
 
 	if (ext4_should_dioread_nolock(inode))
-		ret = ext4_block_write_begin(folio, pos, len,
+		ret = ext4_block_write_begin(handle, folio, pos, len,
 					     ext4_get_block_unwritten);
 	else
-		ret = ext4_block_write_begin(folio, pos, len, ext4_get_block);
+		ret = ext4_block_write_begin(handle, folio, pos, len,
+					     ext4_get_block);
 	if (!ret && ext4_should_journal_data(inode)) {
 		ret = ext4_walk_page_buffers(handle, inode,
 					     folio_buffers(folio), from, to,
@@ -2951,7 +2974,8 @@  static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
 
-	ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
+	ret = ext4_block_write_begin(NULL, folio, pos, len,
+				     ext4_da_get_block_prep);
 	if (ret < 0) {
 		folio_unlock(folio);
 		folio_put(folio);
@@ -6205,7 +6229,8 @@  vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 		if (folio_pos(folio) + len > size)
 			len = size - folio_pos(folio);
 
-		err = __block_write_begin(&folio->page, 0, len, ext4_get_block);
+		err = ext4_block_write_begin(handle, folio, 0, len,
+					     ext4_get_block);
 		if (!err) {
 			ret = VM_FAULT_SIGBUS;
 			if (ext4_journal_folio_buffers(handle, folio, len))