[V2] jbd2: use rhashtable for revoke records during replay

Message ID 20241105034428.578701-1-dongyangli@ddn.com
State New
Series [V2] jbd2: use rhashtable for revoke records during replay

Commit Message

Li Dongyang Nov. 5, 2024, 3:44 a.m. UTC
Using a resizable hashtable should improve journal replay time when
we have millions of revoke records.
Note that the rhashtable is used during replay only;
the list-based hash table is still used during regular processing,
as removal with list_del() is less expensive there.

before:
1048576 records - 95 seconds
2097152 records - 580 seconds

after:
1048576 records - 2 seconds
2097152 records - 3 seconds
4194304 records - 7 seconds

Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
Signed-off-by: Li Dongyang <dongyangli@ddn.com>
---
v1->v2:
include rhashtable header in jbd2.h
---
 fs/jbd2/recovery.c   |  4 +++
 fs/jbd2/revoke.c     | 65 +++++++++++++++++++++++++++++++-------------
 include/linux/jbd2.h |  7 +++++
 3 files changed, 57 insertions(+), 19 deletions(-)

Comments

Jan Kara Nov. 8, 2024, 10:33 a.m. UTC | #1
On Tue 05-11-24 14:44:28, Li Dongyang wrote:
> Using a resizable hashtable should improve journal replay time when
> we have millions of revoke records.
> Note that the rhashtable is used during replay only;
> the list-based hash table is still used during regular processing,
> as removal with list_del() is less expensive there.
> 
> before:
> 1048576 records - 95 seconds
> 2097152 records - 580 seconds

These are really high numbers of revoke records. Deleting a couple GB of
metadata doesn't happen so easily. Are they from a real workload or just
a stress test?
 
> after:
> 1048576 records - 2 seconds
> 2097152 records - 3 seconds
> 4194304 records - 7 seconds

The gains are very nice :).

> Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
> Signed-off-by: Li Dongyang <dongyangli@ddn.com>

> diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
> index 667f67342c52..d9287439171c 100644
> --- a/fs/jbd2/recovery.c
> +++ b/fs/jbd2/recovery.c
> @@ -294,6 +294,10 @@ int jbd2_journal_recover(journal_t *journal)
>  	memset(&info, 0, sizeof(info));
>  	sb = journal->j_superblock;
>  
> +	err = jbd2_journal_init_recovery_revoke(journal);
> +	if (err)
> +		return err;
> +
>  	/*
>  	 * The journal superblock's s_start field (the current log head)
>  	 * is always zero if, and only if, the journal was cleanly
> diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
> index 4556e4689024..d6e96099e9c9 100644
> --- a/fs/jbd2/revoke.c
> +++ b/fs/jbd2/revoke.c
> @@ -90,6 +90,7 @@
>  #include <linux/bio.h>
>  #include <linux/log2.h>
>  #include <linux/hash.h>
> +#include <linux/rhashtable.h>
>  #endif
>  
>  static struct kmem_cache *jbd2_revoke_record_cache;
> @@ -101,7 +102,10 @@ static struct kmem_cache *jbd2_revoke_table_cache;
>  
>  struct jbd2_revoke_record_s
>  {
> -	struct list_head  hash;
> +	union {
> +		struct list_head  hash;
> +		struct rhash_head linkage;
> +	};
>  	tid_t		  sequence;	/* Used for recovery only */
>  	unsigned long long	  blocknr;
>  };
> @@ -680,13 +684,22 @@ static void flush_descriptor(journal_t *journal,
>   * single block.
>   */
>  
> +static const struct rhashtable_params revoke_rhashtable_params = {
> +	.key_len     = sizeof(unsigned long long),
> +	.key_offset  = offsetof(struct jbd2_revoke_record_s, blocknr),
> +	.head_offset = offsetof(struct jbd2_revoke_record_s, linkage),
> +};
> +

I'd probably view your performance results as: "JOURNAL_REVOKE_DEFAULT_HASH
is just too small for replays of a journal with huge numbers of revoked
blocks". Or did you observe that JOURNAL_REVOKE_DEFAULT_HASH is causing
performance issues also during normal operation when we track there revokes
for the current transaction?
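
(For reference, a back-of-the-envelope sketch of that reading, assuming the
stock JOURNAL_REVOKE_DEFAULT_HASH of 256 buckets from mainline
fs/jbd2/revoke.c:)

    /* fs/jbd2/revoke.c (mainline) */
    #define JOURNAL_REVOKE_DEFAULT_HASH 256

    /*
     * ~2097152 records over 256 buckets leaves ~8192 records per bucket
     * list; every find_revoke_record() call during replay walks such a
     * list linearly, so filling the table costs roughly
     * O(records^2 / buckets) in total, which would fit the reported
     * 580 seconds.
     */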

If my interpretation is correct, then rhashtable is an unnecessarily huge
hammer for this. Firstly, as the big hash is needed only during replay,
there's no concurrent access to the data structure. Secondly, we just fill
the data structure in the PASS_REVOKE scan and then use it. Thirdly, we
know the number of elements we need to store in the table in advance (well,
currently we don't but it's trivial to modify PASS_SCAN to get that
number). 

So rather than playing with rhashtable, I'd modify PASS_SCAN to sum up
the number of revoke records we're going to process and then prepare a static
hash of appropriate size for replay (we can just use the standard hashing
fs/jbd2/revoke.c uses, just with differently sized hash table allocated for
replay and point journal->j_revoke to it). And once recovery completes
jbd2_journal_clear_revoke() can free the table and point journal->j_revoke
back to the original table. What do you think?
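
(A minimal sketch of what this could look like, assuming a hypothetical
nr_revokes count collected during PASS_SCAN and reusing the existing static
helpers in fs/jbd2/revoke.c; the names and the restore step are illustrative
only, not a tested implementation:)

    /* Sketch: size a plain list-based revoke table from the revoke
     * record count gathered during PASS_SCAN and let replay use it. */
    int jbd2_journal_init_recovery_revoke(journal_t *journal,
                                          unsigned int nr_revokes)
    {
            struct jbd2_revoke_table_s *table;
            unsigned int hash_size;

            /* Roughly one bucket per expected record; keep it a power of
             * two as the existing hashing expects, and never go below
             * the default size. */
            if (nr_revokes < JOURNAL_REVOKE_DEFAULT_HASH)
                    hash_size = JOURNAL_REVOKE_DEFAULT_HASH;
            else
                    hash_size = roundup_pow_of_two(nr_revokes);

            table = jbd2_journal_init_revoke_table(hash_size);
            if (!table)
                    return -ENOMEM;
            journal->j_replay_revoke_table = table; /* hypothetical field */
            journal->j_revoke = table;
            return 0;
    }

    /* Once recovery completes, jbd2_journal_clear_revoke() would free
     * the records and this table, then restore
     * journal->j_revoke = journal->j_revoke_table[1]. */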

								Honza
Theodore Ts'o Nov. 8, 2024, 4:11 p.m. UTC | #2
On Fri, Nov 08, 2024 at 11:33:58AM +0100, Jan Kara wrote:
> > 1048576 records - 95 seconds
> > 2097152 records - 580 seconds
> 
> These are really high numbers of revoke records. Deleting a couple GB of
> metadata doesn't happen so easily. Are they from a real workload or just
> a stress test?

For context, the background of this is that this has been an
out-of-tree patch that's been around for a very long time, for use with
Lustre servers, where apparently this very large number of revoke
records is a real thing.

> If my interpretation is correct, then rhashtable is an unnecessarily
> huge hammer for this. Firstly, as the big hash is needed only during
> replay, there's no concurrent access to the data
> structure. Secondly, we just fill the data structure in the
> PASS_REVOKE scan and then use it. Thirdly, we know the number of
> elements we need to store in the table in advance (well, currently
> we don't but it's trivial to modify PASS_SCAN to get that number).
> 
> So rather than playing with rhashtable, I'd modify PASS_SCAN to sum
> up the number of revoke records we're going to process and then prepare
> a static hash of appropriate size for replay (we can just use the
> standard hashing fs/jbd2/revoke.c uses, just with differently sized
> hash table allocated for replay and point journal->j_revoke to
> it). And once recovery completes jbd2_journal_clear_revoke() can
> free the table and point journal->j_revoke back to the original
> table. What do you think?

Hmm, that's a really nice idea; Andreas, what do you think?

     	      	     	  		 - Ted
Zhang Yi Nov. 9, 2024, 3:12 a.m. UTC | #3
On 2024/11/8 18:33, Jan Kara wrote:
> On Tue 05-11-24 14:44:28, Li Dongyang wrote:
>> Using a resizable hashtable should improve journal replay time when
>> we have millions of revoke records.
>> Note that the rhashtable is used during replay only;
>> the list-based hash table is still used during regular processing,
>> as removal with list_del() is less expensive there.
>>
>> before:
>> 1048576 records - 95 seconds
>> 2097152 records - 580 seconds
> 
> These are really high numbers of revoke records. Deleting a couple GB of
> metadata doesn't happen so easily. Are they from a real workload or just
> a stress test?
>  
>> after:
>> 1048576 records - 2 seconds
>> 2097152 records - 3 seconds
>> 4194304 records - 7 seconds
> 
> The gains are very nice :).
> 
>> Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
>> Signed-off-by: Li Dongyang <dongyangli@ddn.com>
> 
>> diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
>> index 667f67342c52..d9287439171c 100644
>> --- a/fs/jbd2/recovery.c
>> +++ b/fs/jbd2/recovery.c
>> @@ -294,6 +294,10 @@ int jbd2_journal_recover(journal_t *journal)
>>  	memset(&info, 0, sizeof(info));
>>  	sb = journal->j_superblock;
>>  
>> +	err = jbd2_journal_init_recovery_revoke(journal);
>> +	if (err)
>> +		return err;
>> +
>>  	/*
>>  	 * The journal superblock's s_start field (the current log head)
>>  	 * is always zero if, and only if, the journal was cleanly
>> diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
>> index 4556e4689024..d6e96099e9c9 100644
>> --- a/fs/jbd2/revoke.c
>> +++ b/fs/jbd2/revoke.c
>> @@ -90,6 +90,7 @@
>>  #include <linux/bio.h>
>>  #include <linux/log2.h>
>>  #include <linux/hash.h>
>> +#include <linux/rhashtable.h>
>>  #endif
>>  
>>  static struct kmem_cache *jbd2_revoke_record_cache;
>> @@ -101,7 +102,10 @@ static struct kmem_cache *jbd2_revoke_table_cache;
>>  
>>  struct jbd2_revoke_record_s
>>  {
>> -	struct list_head  hash;
>> +	union {
>> +		struct list_head  hash;
>> +		struct rhash_head linkage;
>> +	};
>>  	tid_t		  sequence;	/* Used for recovery only */
>>  	unsigned long long	  blocknr;
>>  };
>> @@ -680,13 +684,22 @@ static void flush_descriptor(journal_t *journal,
>>   * single block.
>>   */
>>  
>> +static const struct rhashtable_params revoke_rhashtable_params = {
>> +	.key_len     = sizeof(unsigned long long),
>> +	.key_offset  = offsetof(struct jbd2_revoke_record_s, blocknr),
>> +	.head_offset = offsetof(struct jbd2_revoke_record_s, linkage),
>> +};
>> +
> 
> I'd probably view your performance results as: "JOURNAL_REVOKE_DEFAULT_HASH
> is just too small for replays of a journal with huge numbers of revoked
> blocks". Or did you observe that JOURNAL_REVOKE_DEFAULT_HASH is causing
> performance issues also during normal operation when we track there revokes
> for the current transaction?
> 
> If my interpretation is correct, then rhashtable is an unnecessarily huge
> hammer for this. Firstly, as the big hash is needed only during replay,
> there's no concurrent access to the data structure. Secondly, we just fill
> the data structure in the PASS_REVOKE scan and then use it. Thirdly, we
> know the number of elements we need to store in the table in advance (well,
> currently we don't but it's trivial to modify PASS_SCAN to get that
> number). 
> 
> So rather than playing with rhashtable, I'd modify PASS_SCAN to sum up
> the number of revoke records we're going to process and then prepare a static
> hash of appropriate size for replay (we can just use the standard hashing
> fs/jbd2/revoke.c uses, just with differently sized hash table allocated for
> replay and point journal->j_revoke to it). And once recovery completes
> jbd2_journal_clear_revoke() can free the table and point journal->j_revoke
> back to the original table. What do you think?
> 

Sounds reasonable to me. I'd vote for this approach: it is really simple
and clear, and I believe it can achieve gains similar to the rhashtable.

Thanks,
Yi.
Andreas Dilger Nov. 12, 2024, 6:44 p.m. UTC | #4
On Nov 8, 2024, at 9:11 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> 
> On Fri, Nov 08, 2024 at 11:33:58AM +0100, Jan Kara wrote:
>>> 1048576 records - 95 seconds
>>> 2097152 records - 580 seconds
>> 
>> These are really high numbers of revoke records. Deleting a couple GB of
>> metadata doesn't happen so easily. Are they from a real workload or just
>> a stress test?
> 
> For context, the background of this is that this has been an
> out-of-tree patch that's been around for a very long time, for use with
> Lustre servers, where apparently this very large number of revoke
> records is a real thing.

Yes, we've seen this in production if there was a crash after deleting
many millions of log records.  This causes remount to take potentially
several hours before completing (and this was made worse by HA causing
failovers due to mount being "stuck" doing the journal replay).

>> If my interpretation is correct, then rhashtable is an unnecessarily
>> huge hammer for this. Firstly, as the big hash is needed only during
>> replay, there's no concurrent access to the data
>> structure. Secondly, we just fill the data structure in the
>> PASS_REVOKE scan and then use it. Thirdly, we know the number of
>> elements we need to store in the table in advance (well, currently
>> we don't but it's trivial to modify PASS_SCAN to get that number).
>> 
>> So rather than playing with rhashtable, I'd modify PASS_SCAN to sum
>> up the number of revoke records we're going to process and then prepare
>> a static hash of appropriate size for replay (we can just use the
>> standard hashing fs/jbd2/revoke.c uses, just with differently sized
>> hash table allocated for replay and point journal->j_revoke to
>> it). And once recovery completes jbd2_journal_clear_revoke() can
>> free the table and point journal->j_revoke back to the original
>> table. What do you think?
> 
> Hmm, that's a really nice idea; Andreas, what do you think?

Implementing code to manually count and resize the recovery hashtable
will also have its own complexity, including possible allocation size
limits for a huge hash table.  That could be worked around with kvmalloc(),
but IMHO this essentially starts "open coding" exactly what rhashtable
was designed to avoid.
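
(To put a number on the allocation-size concern, a rough illustration
assuming a 64-bit kernel, with kvmalloc_array()/kvfree() as the usual
workaround:)

    /*
     * 4194304 records at one bucket each means 4M * sizeof(struct
     * list_head) = 64 MB of bucket heads on 64-bit, well past what
     * kmalloc() typically hands out as a single contiguous chunk
     * (usually 4 MB), hence something like:
     */
    struct list_head *hash_table;

    hash_table = kvmalloc_array(hash_size, sizeof(struct list_head),
                                GFP_KERNEL);
    if (!hash_table)
            return -ENOMEM;
    /* ... use as the replay revoke table's buckets ... */
    kvfree(hash_table);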

Since the rhashtable is only used during the journal recovery phase,
IMHO it isn't adding any complexity to the normal operational code, but
if there was ongoing overhead then this would be a different discussion.

Cheers, Andreas
Jan Kara Nov. 13, 2024, 2:47 p.m. UTC | #5
On Tue 12-11-24 11:44:11, Andreas Dilger wrote:
> On Nov 8, 2024, at 9:11 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> > 
> > On Fri, Nov 08, 2024 at 11:33:58AM +0100, Jan Kara wrote:
> >>> 1048576 records - 95 seconds
> >>> 2097152 records - 580 seconds
> >> 
> >> These are really high numbers of revoke records. Deleting a couple GB of
> >> metadata doesn't happen so easily. Are they from a real workload or just
> >> a stress test?
> > 
> > For context, the background of this is that this has been an
> > out-of-tree patch that's been around for a very long time, for use with
> > Lustre servers, where apparently this very large number of revoke
> > records is a real thing.
> 
> Yes, we've seen this in production if there was a crash after deleting
> many millions of log records.  This causes remount to take potentially
> several hours before completing (and this was made worse by HA causing
> failovers due to mount being "stuck" doing the journal replay).

Thanks for clarification!

> >> If my interpretation is correct, then rhashtable is an unnecessarily
> >> huge hammer for this. Firstly, as the big hash is needed only during
> >> replay, there's no concurrent access to the data
> >> structure. Secondly, we just fill the data structure in the
> >> PASS_REVOKE scan and then use it. Thirdly, we know the number of
> >> elements we need to store in the table in advance (well, currently
> >> we don't but it's trivial to modify PASS_SCAN to get that number).
> >> 
> >> So rather than playing with rhashtable, I'd modify PASS_SCAN to sum
> >> up the number of revoke records we're going to process and then prepare
> >> a static hash of appropriate size for replay (we can just use the
> >> standard hashing fs/jbd2/revoke.c uses, just with differently sized
> >> hash table allocated for replay and point journal->j_revoke to
> >> it). And once recovery completes jbd2_journal_clear_revoke() can
> >> free the table and point journal->j_revoke back to the original
> >> table. What do you think?
> > 
> > Hmm, that's a really nice idea; Andreas, what do you think?
> 
> Implementing code to manually count and resize the recovery hashtable
> will also have its own complexity, including possible allocation size
> limits for a huge hash table.  That could be worked around with kvmalloc(),
> but IMHO this essentially starts "open coding" exactly what rhashtable
> was designed to avoid.

Well, I'd say the result is much simpler than rhashtable code since you
don't need all that dynamic reallocation and complex locking. Attached is a
patch that implements my suggestion. I'd say it is simpler than having two
types of revoke block hashing depending on whether we are doing recovery or
running the journal. I've tested it and it seems to work fine (including
replay of a journal with sufficiently many revoke blocks) but I'm not sure
I can do meaningful performance testing (I cannot quite reproduce the
slow replay times even when shutting down the filesystem after deleting
1000000 directories). So can you please give it a spin?

								Honza

Patch

diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
index 667f67342c52..d9287439171c 100644
--- a/fs/jbd2/recovery.c
+++ b/fs/jbd2/recovery.c
@@ -294,6 +294,10 @@  int jbd2_journal_recover(journal_t *journal)
 	memset(&info, 0, sizeof(info));
 	sb = journal->j_superblock;
 
+	err = jbd2_journal_init_recovery_revoke(journal);
+	if (err)
+		return err;
+
 	/*
 	 * The journal superblock's s_start field (the current log head)
 	 * is always zero if, and only if, the journal was cleanly
diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
index 4556e4689024..d6e96099e9c9 100644
--- a/fs/jbd2/revoke.c
+++ b/fs/jbd2/revoke.c
@@ -90,6 +90,7 @@ 
 #include <linux/bio.h>
 #include <linux/log2.h>
 #include <linux/hash.h>
+#include <linux/rhashtable.h>
 #endif
 
 static struct kmem_cache *jbd2_revoke_record_cache;
@@ -101,7 +102,10 @@  static struct kmem_cache *jbd2_revoke_table_cache;
 
 struct jbd2_revoke_record_s
 {
-	struct list_head  hash;
+	union {
+		struct list_head  hash;
+		struct rhash_head linkage;
+	};
 	tid_t		  sequence;	/* Used for recovery only */
 	unsigned long long	  blocknr;
 };
@@ -680,13 +684,22 @@  static void flush_descriptor(journal_t *journal,
  * single block.
  */
 
+static const struct rhashtable_params revoke_rhashtable_params = {
+	.key_len     = sizeof(unsigned long long),
+	.key_offset  = offsetof(struct jbd2_revoke_record_s, blocknr),
+	.head_offset = offsetof(struct jbd2_revoke_record_s, linkage),
+};
+
 int jbd2_journal_set_revoke(journal_t *journal,
 		       unsigned long long blocknr,
 		       tid_t sequence)
 {
 	struct jbd2_revoke_record_s *record;
+	gfp_t gfp_mask = GFP_NOFS;
+	int err;
 
-	record = find_revoke_record(journal, blocknr);
+	record = rhashtable_lookup(&journal->j_revoke_rhtable, &blocknr,
+				   revoke_rhashtable_params);
 	if (record) {
 		/* If we have multiple occurrences, only record the
 		 * latest sequence number in the hashed record */
@@ -694,7 +707,22 @@  int jbd2_journal_set_revoke(journal_t *journal,
 			record->sequence = sequence;
 		return 0;
 	}
-	return insert_revoke_hash(journal, blocknr, sequence);
+
+	if (journal_oom_retry)
+		gfp_mask |= __GFP_NOFAIL;
+	record = kmem_cache_alloc(jbd2_revoke_record_cache, gfp_mask);
+	if (!record)
+		return -ENOMEM;
+
+	record->sequence = sequence;
+	record->blocknr = blocknr;
+	err = rhashtable_lookup_insert_fast(&journal->j_revoke_rhtable,
+					    &record->linkage,
+					    revoke_rhashtable_params);
+	if (err)
+		kmem_cache_free(jbd2_revoke_record_cache, record);
+
+	return err;
 }
 
 /*
@@ -710,7 +738,8 @@  int jbd2_journal_test_revoke(journal_t *journal,
 {
 	struct jbd2_revoke_record_s *record;
 
-	record = find_revoke_record(journal, blocknr);
+	record = rhashtable_lookup(&journal->j_revoke_rhtable, &blocknr,
+				   revoke_rhashtable_params);
 	if (!record)
 		return 0;
 	if (tid_gt(sequence, record->sequence))
@@ -718,6 +747,17 @@  int jbd2_journal_test_revoke(journal_t *journal,
 	return 1;
 }
 
+int jbd2_journal_init_recovery_revoke(journal_t *journal)
+{
+	return rhashtable_init(&journal->j_revoke_rhtable,
+			       &revoke_rhashtable_params);
+}
+
+static void jbd2_revoke_record_free(void *ptr, void *arg)
+{
+	kmem_cache_free(jbd2_revoke_record_cache, ptr);
+}
+
 /*
  * Finally, once recovery is over, we need to clear the revoke table so
  * that it can be reused by the running filesystem.
@@ -725,19 +765,6 @@  int jbd2_journal_test_revoke(journal_t *journal,
 
 void jbd2_journal_clear_revoke(journal_t *journal)
 {
-	int i;
-	struct list_head *hash_list;
-	struct jbd2_revoke_record_s *record;
-	struct jbd2_revoke_table_s *revoke;
-
-	revoke = journal->j_revoke;
-
-	for (i = 0; i < revoke->hash_size; i++) {
-		hash_list = &revoke->hash_table[i];
-		while (!list_empty(hash_list)) {
-			record = (struct jbd2_revoke_record_s*) hash_list->next;
-			list_del(&record->hash);
-			kmem_cache_free(jbd2_revoke_record_cache, record);
-		}
-	}
+	rhashtable_free_and_destroy(&journal->j_revoke_rhtable,
+				    jbd2_revoke_record_free, NULL);
 }
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index 8aef9bb6ad57..2b0aa1e159b8 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -28,6 +28,7 @@ 
 #include <linux/slab.h>
 #include <linux/bit_spinlock.h>
 #include <linux/blkdev.h>
+#include <linux/rhashtable-types.h>
 #include <crypto/hash.h>
 #endif
 
@@ -1122,6 +1123,11 @@  struct journal_s
 	 */
 	struct jbd2_revoke_table_s *j_revoke_table[2];
 
+	/**
+	 * @j_revoke_rhtable:	rhashtable for revoke records during recovery
+	 */
+	struct rhashtable	j_revoke_rhtable;
+
 	/**
 	 * @j_wbuf: Array of bhs for jbd2_journal_commit_transaction.
 	 */
@@ -1644,6 +1650,7 @@  extern void	   jbd2_journal_write_revoke_records(transaction_t *transaction,
 /* Recovery revoke support */
 extern int	jbd2_journal_set_revoke(journal_t *, unsigned long long, tid_t);
 extern int	jbd2_journal_test_revoke(journal_t *, unsigned long long, tid_t);
+extern int	jbd2_journal_init_recovery_revoke(journal_t *);
 extern void	jbd2_journal_clear_revoke(journal_t *);
 extern void	jbd2_journal_switch_revoke_table(journal_t *journal);
 extern void	jbd2_clear_buffer_revoked_flags(journal_t *journal);