| Message ID | aff746f3dbce54f5ea807928c2286edfd6e9976e.1724145714.git.ojaswin@linux.ibm.com |
|---|---|
| State | Superseded |
| Series | [v2,1/2] ext4: Check stripe size compatibility on remount as well |
on 8/20/2024 5:27 PM, Ojaswin Mujoo wrote:
> Although we have checks to make sure s_stripe is a multiple of cluster
> size, in case we accidentally end up with a scenario where this is not
> the case, use EXT4_NUM_B2C() so that we don't end up with unexpected
> cases where EXT4_B2C(stripe) becomes 0.
>
> Also make the is_stripe_aligned check in regular_allocator a bit more
> robust while we are at it. This should ideally have no functional change
> unless we have a bug somewhere causing (stripe % cluster_size != 0).
>
> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>

Looks good to me. Feel free to add:

Reviewed-by: Kemeng Shi <shikemeng@huaweicloud.com>

> ---
>  fs/ext4/mballoc.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
> index 9dda9cd68ab2..99d1a8c730e0 100644
> --- a/fs/ext4/mballoc.c
> +++ b/fs/ext4/mballoc.c
> @@ -2553,7 +2553,7 @@ void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
>  	do_div(a, sbi->s_stripe);
>  	i = (a * sbi->s_stripe) - first_group_block;
>  
> -	stripe = EXT4_B2C(sbi, sbi->s_stripe);
> +	stripe = EXT4_NUM_B2C(sbi, sbi->s_stripe);
>  	i = EXT4_B2C(sbi, i);
>  	while (i < EXT4_CLUSTERS_PER_GROUP(sb)) {
>  		if (!mb_test_bit(i, bitmap)) {
> @@ -2928,9 +2928,11 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
>  		if (cr == CR_POWER2_ALIGNED)
>  			ext4_mb_simple_scan_group(ac, &e4b);
>  		else {
> -			bool is_stripe_aligned = sbi->s_stripe &&
> +			bool is_stripe_aligned =
> +				(sbi->s_stripe >=
> +				 sbi->s_cluster_ratio) &&
>  				!(ac->ac_g_ex.fe_len %
> -				  EXT4_B2C(sbi, sbi->s_stripe));
> +				  EXT4_NUM_B2C(sbi, sbi->s_stripe));
>  
>  			if ((cr == CR_GOAL_LEN_FAST ||
>  			    cr == CR_BEST_AVAIL_LEN) &&
> @@ -3707,7 +3709,7 @@ int ext4_mb_init(struct super_block *sb)
>  	 */
>  	if (sbi->s_stripe > 1) {
>  		sbi->s_mb_group_prealloc = roundup(
> -			sbi->s_mb_group_prealloc, EXT4_B2C(sbi, sbi->s_stripe));
> +			sbi->s_mb_group_prealloc, EXT4_NUM_B2C(sbi, sbi->s_stripe));
>  	}
>  
>  	sbi->s_locality_groups = alloc_percpu(struct ext4_locality_group);
Ojaswin Mujoo <ojaswin@linux.ibm.com> writes:

> Although we have checks to make sure s_stripe is a multiple of cluster
> size, in case we accidentally end up with a scenario where this is not
> the case, use EXT4_NUM_B2C() so that we don't end up with unexpected
> cases where EXT4_B2C(stripe) becomes 0.

The man page of the stripe=n mount option says...

    stripe=n
           Number of file system blocks that mballoc will try to use
           for allocation size and alignment. For RAID5/6 systems
           this should be the number of data disks * RAID chunk size
           in file system blocks.

... So stripe is anyway the no. of filesystem blocks. Making it
EXT4_NUM_B2C() makes sense to me.

However, there is one more user that remains in ext4_mb_find_by_goal(),
right?

-ritesh

>
> Also make the is_stripe_aligned check in regular_allocator a bit more
> robust while we are at it. This should ideally have no functional change
> unless we have a bug somewhere causing (stripe % cluster_size != 0).
>
> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> ---
>  fs/ext4/mballoc.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
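The rounding difference Ritesh's reading supports: EXT4_B2C() truncates a block
quantity down to clusters, while EXT4_NUM_B2C() rounds a block count up to the
number of clusters needed to hold it. Below is a minimal stand-alone sketch
that mirrors the two macros as defined in fs/ext4/ext4.h (check the tree you
are on); the sbi_stub struct and the 8-block stripe on a 16-blocks-per-cluster
bigalloc filesystem are illustrative stand-ins, not the real ext4_sb_info:

    #include <stdio.h>

    /* Stand-in for ext4_sb_info with only the two fields the macros use.
     * s_cluster_bits is log2(blocks per cluster); s_cluster_ratio is
     * 1 << s_cluster_bits. */
    struct sbi_stub {
    	unsigned int s_cluster_bits;
    	unsigned int s_cluster_ratio;
    };

    /* Block number -> cluster containing it (truncates, i.e. rounds down). */
    #define EXT4_B2C(sbi, blk)	((blk) >> (sbi)->s_cluster_bits)
    /* Block count -> clusters needed to hold it (rounds up). */
    #define EXT4_NUM_B2C(sbi, blks) \
    	(((blks) + (sbi)->s_cluster_ratio - 1) >> (sbi)->s_cluster_bits)

    int main(void)
    {
    	/* bigalloc-style fs: 16 blocks per cluster, stripe of 8 blocks */
    	struct sbi_stub sbi = { .s_cluster_bits = 4, .s_cluster_ratio = 16 };
    	unsigned int stripe = 8;

    	/* Rounds down to 0: the stripe "vanishes" in cluster units. */
    	printf("EXT4_B2C     = %u\n", EXT4_B2C(&sbi, stripe));
    	/* Rounds up to 1: at least one cluster, so a non-zero divisor. */
    	printf("EXT4_NUM_B2C = %u\n", EXT4_NUM_B2C(&sbi, stripe));
    	return 0;
    }

This prints 0 and 1: rounding down makes any later "x % EXT4_B2C(...)" a
division by zero, while rounding up keeps the divisor non-zero.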
On Wed, Aug 28, 2024 at 02:58:13PM +0530, Ritesh Harjani wrote:
> Ojaswin Mujoo <ojaswin@linux.ibm.com> writes:
>
> > Although we have checks to make sure s_stripe is a multiple of cluster
> > size, in case we accidentally end up with a scenario where this is not
> > the case, use EXT4_NUM_B2C() so that we don't end up with unexpected
> > cases where EXT4_B2C(stripe) becomes 0.
>
> The man page of the stripe=n mount option says...
>
>     stripe=n
>            Number of file system blocks that mballoc will try to use
>            for allocation size and alignment. For RAID5/6 systems
>            this should be the number of data disks * RAID chunk size
>            in file system blocks.
>
> ... So stripe is anyway the no. of filesystem blocks. Making it
> EXT4_NUM_B2C() makes sense to me.
>
> However, there is one more user that remains in ext4_mb_find_by_goal(),
> right?

Oh right, I'll fix that in v3. Thanks!

> -ritesh
>
> >
> > Also make the is_stripe_aligned check in regular_allocator a bit more
> > robust while we are at it. This should ideally have no functional change
> > unless we have a bug somewhere causing (stripe % cluster_size != 0).
> >
> > Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> > ---
> >  fs/ext4/mballoc.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 9dda9cd68ab2..99d1a8c730e0 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2553,7 +2553,7 @@ void ext4_mb_scan_aligned(struct ext4_allocation_context *ac,
 	do_div(a, sbi->s_stripe);
 	i = (a * sbi->s_stripe) - first_group_block;
 
-	stripe = EXT4_B2C(sbi, sbi->s_stripe);
+	stripe = EXT4_NUM_B2C(sbi, sbi->s_stripe);
 	i = EXT4_B2C(sbi, i);
 	while (i < EXT4_CLUSTERS_PER_GROUP(sb)) {
 		if (!mb_test_bit(i, bitmap)) {
@@ -2928,9 +2928,11 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 		if (cr == CR_POWER2_ALIGNED)
 			ext4_mb_simple_scan_group(ac, &e4b);
 		else {
-			bool is_stripe_aligned = sbi->s_stripe &&
+			bool is_stripe_aligned =
+				(sbi->s_stripe >=
+				 sbi->s_cluster_ratio) &&
 				!(ac->ac_g_ex.fe_len %
-				  EXT4_B2C(sbi, sbi->s_stripe));
+				  EXT4_NUM_B2C(sbi, sbi->s_stripe));
 
 			if ((cr == CR_GOAL_LEN_FAST ||
 			    cr == CR_BEST_AVAIL_LEN) &&
@@ -3707,7 +3709,7 @@ int ext4_mb_init(struct super_block *sb)
 	 */
 	if (sbi->s_stripe > 1) {
 		sbi->s_mb_group_prealloc = roundup(
-			sbi->s_mb_group_prealloc, EXT4_B2C(sbi, sbi->s_stripe));
+			sbi->s_mb_group_prealloc, EXT4_NUM_B2C(sbi, sbi->s_stripe));
 	}
 
 	sbi->s_locality_groups = alloc_percpu(struct ext4_locality_group);
Although we have checks to make sure s_stripe is a multiple of cluster
size, in case we accidentally end up with a scenario where this is not
the case, use EXT4_NUM_B2C() so that we don't end up with unexpected
cases where EXT4_B2C(stripe) becomes 0.

Also make the is_stripe_aligned check in regular_allocator a bit more
robust while we are at it. This should ideally have no functional change
unless we have a bug somewhere causing (stripe % cluster_size != 0).

Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
---
 fs/ext4/mballoc.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
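As a worked example of the is_stripe_aligned hunk: the sketch below uses plain
integers standing in for sbi->s_stripe, sbi->s_cluster_ratio and
ac->ac_g_ex.fe_len, with made-up values chosen to trigger the edge case the
patch guards against (a stripe smaller than one cluster). It is an
illustration of the predicate, not the kernel code itself:

    #include <stdbool.h>
    #include <stdio.h>

    /* Open-coded equivalents of the two ext4 macros, for illustration. */
    static unsigned int b2c(unsigned int blk, unsigned int ratio)
    {
    	return blk / ratio;                /* EXT4_B2C: rounds down */
    }

    static unsigned int num_b2c(unsigned int blks, unsigned int ratio)
    {
    	return (blks + ratio - 1) / ratio; /* EXT4_NUM_B2C: rounds up */
    }

    int main(void)
    {
    	unsigned int s_stripe = 8;         /* blocks; misconfigured:   */
    	unsigned int s_cluster_ratio = 16; /* smaller than one cluster */
    	unsigned int fe_len = 4;           /* goal length, in clusters */

    	/* Old predicate: s_stripe && !(fe_len % EXT4_B2C(s_stripe)).
    	 * b2c(8, 16) == 0 here, so evaluating it would divide by zero. */
    	printf("EXT4_B2C(s_stripe) = %u\n", b2c(s_stripe, s_cluster_ratio));

    	/* New predicate: require a stripe of at least one full cluster,
    	 * and use the rounded-up conversion as the divisor. */
    	bool is_stripe_aligned = (s_stripe >= s_cluster_ratio) &&
    		!(fe_len % num_b2c(s_stripe, s_cluster_ratio));
    	printf("is_stripe_aligned  = %d\n", is_stripe_aligned); /* 0 */
    	return 0;
    }

With a sane configuration (s_stripe a multiple of s_cluster_ratio) both the
old and new predicates agree, which is why the patch claims no functional
change outside the buggy case.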