
powerpc/code-patching: Disable KASAN in __patch_instructions()

Message ID 20240213043638.168048-1-bgray@linux.ibm.com
State Superseded
Series powerpc/code-patching: Disable KASAN in __patch_instructions()

Checks

Context Check Description
snowpatch_ozlabs/github-powerpc_ppctests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_selftests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_sparse success Successfully ran 4 jobs.
snowpatch_ozlabs/github-powerpc_clang success Successfully ran 6 jobs.
snowpatch_ozlabs/github-powerpc_kernel_qemu success Successfully ran 23 jobs.

Commit Message

Benjamin Gray Feb. 13, 2024, 4:36 a.m. UTC
The memset/memcpy functions are by default instrumented by KASAN, which
complains about user memory access when using a poking page in
userspace.

Using a userspace address is expected though, so don't instrument this
function with KASAN.

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>

---

I tried to replace the memsetN calls with __memsetN, but we appear to
disable the non-instrumented variants of these when KASAN is enabled.
Christophe, might you know more here?

The cost of just suppressing reports for this section shouldn't be
significant; KASAN detects the access, but exits before it starts
preparing the report itself. So for the most part it behaves like any
other KASAN-instrumented function.
---
 arch/powerpc/lib/code-patching.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Christophe Leroy Feb. 19, 2024, 7:47 a.m. UTC | #1
On 13/02/2024 at 05:36, Benjamin Gray wrote:
> The memset/memcpy functions are by default instrumented by KASAN, which
> complains about user memory access when using a poking page in
> userspace.
> 
> Using a userspace address is expected though, so don't instrument this
> function with KASAN.

memcpy/memset should never be used to access user memory; we have
copy_to_user() and clear_user() for that.

A few weeks ago I sent a KASAN report I got from the same function. But 
I got it on PPC32, which doesn't use userspace for that. See 
https://lore.kernel.org/all/2000a30f-214a-4b20-b0b5-348e987d6a0e@csgroup.eu/T/#u

So I have the feeling that your patch may be hiding another problem. 
The PPC32 report for sure will be hidden if your patch gets applied, 
although your explanation doesn't fit.

Christophe

> 
> Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
> 
> ---
> 
> I tried to replace the memsetN calls with __memsetN, but we appear to
> disable the non-instrumented variants of these when KASAN is enabled.
> Christophe, might you know more here?
> 
> The cost of just suppressing reports for this section shouldn't be
> significant; KASAN detects the access, but exits before it starts
> preparing the report itself. So for the most part it behaves like any
> other KASAN-instrumented function.
> ---
>   arch/powerpc/lib/code-patching.c | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index c6ab46156cda..24989594578a 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -3,6 +3,7 @@
>    *  Copyright 2008 Michael Ellerman, IBM Corporation.
>    */
>   
> +#include <linux/kasan.h>
>   #include <linux/kprobes.h>
>   #include <linux/mmu_context.h>
>   #include <linux/random.h>
> @@ -377,6 +378,7 @@ static int __patch_instructions(u32 *patch_addr, u32 *code, size_t len, bool rep
>   	unsigned long start = (unsigned long)patch_addr;
>   
>   	/* Repeat instruction */
> +	kasan_disable_current();
>   	if (repeat_instr) {
>   		ppc_inst_t instr = ppc_inst_read(code);
>   
> @@ -392,6 +394,7 @@ static int __patch_instructions(u32 *patch_addr, u32 *code, size_t len, bool rep
>   	} else {
>   		memcpy(patch_addr, code, len);
>   	}
> +	kasan_enable_current();
>   
>   	smp_wmb();	/* smp write barrier */
>   	flush_icache_range(start, start + len);

Patch

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index c6ab46156cda..24989594578a 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -3,6 +3,7 @@ 
  *  Copyright 2008 Michael Ellerman, IBM Corporation.
  */
 
+#include <linux/kasan.h>
 #include <linux/kprobes.h>
 #include <linux/mmu_context.h>
 #include <linux/random.h>
@@ -377,6 +378,7 @@  static int __patch_instructions(u32 *patch_addr, u32 *code, size_t len, bool rep
 	unsigned long start = (unsigned long)patch_addr;
 
 	/* Repeat instruction */
+	kasan_disable_current();
 	if (repeat_instr) {
 		ppc_inst_t instr = ppc_inst_read(code);
 
@@ -392,6 +394,7 @@  static int __patch_instructions(u32 *patch_addr, u32 *code, size_t len, bool rep
 	} else {
 		memcpy(patch_addr, code, len);
 	}
+	kasan_enable_current();
 
 	smp_wmb();	/* smp write barrier */
 	flush_icache_range(start, start + len);