
[RFC,v1,10/10] book3s64/hash: Disable kfence if not early init

Message ID fd77730375a0ab6ae0a89f934385750b239d3113.1722408881.git.ritesh.list@gmail.com (mailing list archive)
State Changes Requested
Series: book3s64/hash: Improve kfence support

Checks

Context Check Description
snowpatch_ozlabs/github-powerpc_ppctests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_selftests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_sparse success Successfully ran 4 jobs.
snowpatch_ozlabs/github-powerpc_clang success Successfully ran 5 jobs.
snowpatch_ozlabs/github-powerpc_kernel_qemu success Successfully ran 21 jobs.

Commit Message

Ritesh Harjani (IBM) July 31, 2024, 7:56 a.m. UTC
Enable kfence on book3s64 hash only when early init is enabled.
This is because kfence can cause the kernel linear map to be mapped
at PAGE_SIZE granularity instead of 16M (which we likely don't want).

Also, there is currently no way to:
1. Create multiple page size entries for the SLB used for the kernel
   linear map.
2. Easily retrieve the hash slot details after the page table mapping
   for the kernel linear map has been set up. So even if kfence
   allocates its pool in late init, we won't be able to get the hash
   slot details for the kfence linear map.

Thus this patch disables kfence on hash if kfence early init is not
enabled.

Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 arch/powerpc/mm/book3s64/hash_utils.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--
2.45.2
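The gating described in the commit message can be modeled as a small stand-alone sketch. The variable names mirror the kernel code (`kfence_early_init`, `debug_pagealloc`), but this is an illustrative model of the two conditions the patch changes, not the actual kernel implementation.

```c
#include <stdbool.h>

/* Stand-ins for the kernel state the patch consults. */
static bool kfence_cfg;         /* models IS_ENABLED(CONFIG_KFENCE)      */
static bool kfence_early_init;  /* models the kernel's kfence_early_init */
static bool debug_pagealloc;    /* models debug_pagealloc_enabled()      */

/*
 * Mirrors the reworked condition in htab_init_page_sizes(): the kernel
 * linear map may use a large (16M) page size only when neither
 * debug_pagealloc nor an early-initialised kfence forces PAGE_SIZE
 * mappings.
 */
static bool linear_map_can_use_16m(void)
{
    return !debug_pagealloc && !(kfence_cfg && kfence_early_init);
}

/*
 * Mirrors the guard added to hash_kfence_alloc_pool(): the kfence pool
 * is only allocated when kfence was initialised early; a late-init
 * kfence is effectively disabled on hash.
 */
static bool kfence_pool_allocated(void)
{
    return kfence_cfg && kfence_early_init;
}
```

With kfence built in but not early-initialised, the pool is skipped and the linear map keeps its 16M mapping; flipping `kfence_early_init` on trades the 16M mapping for a usable pool, which is exactly the trade-off the patch encodes.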

Comments

Christophe Leroy Aug. 14, 2024, 5:18 p.m. UTC | #1
On 31/07/2024 at 09:56, Ritesh Harjani (IBM) wrote:
> 
> Enable kfence on book3s64 hash only when early init is enabled.
> This is because, kfence could cause the kernel linear map to be mapped
> at PAGE_SIZE level instead of 16M (which I guess we don't want).
> 
> Also currently there is no way to -
> 1. Make multiple page size entries for the SLB used for kernel linear
>     map.
> 2. No easy way of getting the hash slot details after the page table
>     mapping for kernel linear setup. So even if kfence allocate the
>     pool in late init, we won't be able to get the hash slot details in
>     kfence linear map.
> 
> Thus this patch disables kfence on hash if kfence early init is not
> enabled.
> 
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
>   arch/powerpc/mm/book3s64/hash_utils.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index c66b9921fc7d..759dbcbf1483 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -410,6 +410,8 @@ static phys_addr_t kfence_pool;
> 
>   static inline void hash_kfence_alloc_pool(void)
>   {
> +       if (!kfence_early_init)
> +               goto err;
> 
>          // allocate linear map for kfence within RMA region
>          linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
> @@ -1074,7 +1076,8 @@ static void __init htab_init_page_sizes(void)
>          bool aligned = true;
>          init_hpte_page_sizes();
> 
> -       if (!debug_pagealloc_enabled_or_kfence()) {
> +       if (!debug_pagealloc_enabled() &&
> +           !(IS_ENABLED(CONFIG_KFENCE) && kfence_early_init)) {

Looks complex, can we do simpler?

>                  /*
>                   * Pick a size for the linear mapping. Currently, we only
>                   * support 16M, 1M and 4K which is the default
> --
> 2.45.2
>
Ritesh Harjani (IBM) Sept. 4, 2024, 6:44 p.m. UTC | #2
Christophe Leroy <christophe.leroy@csgroup.eu> writes:

> On 31/07/2024 at 09:56, Ritesh Harjani (IBM) wrote:
>> 
>> Enable kfence on book3s64 hash only when early init is enabled.
>> This is because, kfence could cause the kernel linear map to be mapped
>> at PAGE_SIZE level instead of 16M (which I guess we don't want).
>> 
>> Also currently there is no way to -
>> 1. Make multiple page size entries for the SLB used for kernel linear
>>     map.
>> 2. No easy way of getting the hash slot details after the page table
>>     mapping for kernel linear setup. So even if kfence allocate the
>>     pool in late init, we won't be able to get the hash slot details in
>>     kfence linear map.
>> 
>> Thus this patch disables kfence on hash if kfence early init is not
>> enabled.
>> 
>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> ---
>>   arch/powerpc/mm/book3s64/hash_utils.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>> 
>> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
>> index c66b9921fc7d..759dbcbf1483 100644
>> --- a/arch/powerpc/mm/book3s64/hash_utils.c
>> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
>> @@ -410,6 +410,8 @@ static phys_addr_t kfence_pool;
>> 
>>   static inline void hash_kfence_alloc_pool(void)
>>   {
>> +       if (!kfence_early_init)
>> +               goto err;
>> 
>>          // allocate linear map for kfence within RMA region
>>          linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
>> @@ -1074,7 +1076,8 @@ static void __init htab_init_page_sizes(void)
>>          bool aligned = true;
>>          init_hpte_page_sizes();
>> 
>> -       if (!debug_pagealloc_enabled_or_kfence()) {
>> +       if (!debug_pagealloc_enabled() &&
>> +           !(IS_ENABLED(CONFIG_KFENCE) && kfence_early_init)) {
>
> Looks complex, can we do simpler?
>

Yes, kfence_early_init needs cleanup anyway. Will make it simpler.

Thanks for the review!

-ritesh
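One way the check could be simplified along the lines discussed above is to fold the kfence early-init state into a single helper, so the call site in htab_init_page_sizes() stays one readable condition. This is a hypothetical sketch of that direction, not the actual follow-up patch; the helper names (`hash_supports_debug_pagealloc_or_kfence`, `kfence_early_init_enabled`) are illustrative.

```c
#include <stdbool.h>

/* Stand-ins for kernel state; illustrative only. */
static bool kfence_cfg;         /* models IS_ENABLED(CONFIG_KFENCE) */
static bool kfence_early_init;
static bool debug_pagealloc;    /* models debug_pagealloc_enabled() */

/* Hypothetical helper: is kfence both built and early-initialised? */
static bool kfence_early_init_enabled(void)
{
    return kfence_cfg && kfence_early_init;
}

/*
 * Hypothetical replacement for the open-coded compound condition:
 * PAGE_SIZE linear mapping is needed iff either feature demands it.
 */
static bool hash_supports_debug_pagealloc_or_kfence(void)
{
    return debug_pagealloc || kfence_early_init_enabled();
}
```

The call site then becomes `if (!hash_supports_debug_pagealloc_or_kfence()) { ... }`, which reads like the original `debug_pagealloc_enabled_or_kfence()` while still honouring the early-init gate.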

Patch

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index c66b9921fc7d..759dbcbf1483 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -410,6 +410,8 @@ static phys_addr_t kfence_pool;

 static inline void hash_kfence_alloc_pool(void)
 {
+	if (!kfence_early_init)
+		goto err;

 	// allocate linear map for kfence within RMA region
 	linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
@@ -1074,7 +1076,8 @@ static void __init htab_init_page_sizes(void)
 	bool aligned = true;
 	init_hpte_page_sizes();

-	if (!debug_pagealloc_enabled_or_kfence()) {
+	if (!debug_pagealloc_enabled() &&
+	    !(IS_ENABLED(CONFIG_KFENCE) && kfence_early_init)) {
 		/*
 		 * Pick a size for the linear mapping. Currently, we only
 		 * support 16M, 1M and 4K which is the default