[v3,3/5] powerpc/64: Convert patch_instruction() to patch_u32()

Message ID 20240325055302.876434-4-bgray@linux.ibm.com (mailing list archive)
State Superseded, archived
Series Add generic data patching functions

Commit Message

Benjamin Gray March 25, 2024, 5:53 a.m. UTC
This use of patch_instruction() operates on 32-bit data, and can fail
if the data happens to look like a prefixed instruction and the extra
write crosses a page boundary. Use patch_u32() to fix the write size.
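
For illustration, a minimal sketch (not the kernel implementation; the
constant and field width come from the Power ISA 3.1 definition of
prefixed instructions) of why a plain data word can trigger an 8-byte
write:

	/*
	 * A prefixed instruction is 8 bytes, and its first word has
	 * primary opcode 1 (OP_PREFIX, the top 6 bits). A patching
	 * routine keyed on the instruction encoding emits an 8-byte
	 * write for any 32-bit value whose top 6 bits happen to equal
	 * 1, and the extra 4 bytes may land on the next, unwritable
	 * page.
	 */
	static bool looks_like_prefix(u32 word)
	{
		return (word >> 26) == 1;	/* OP_PREFIX */
	}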

Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>

---

v2: * Added the fixes tag; it seems appropriate even if the subject does
      mention a more robust solution being required.

patch_u64() should be more efficient, but judging from the bug report
it doesn't seem like the data is doubleword aligned.
---
 arch/powerpc/kernel/module_64.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
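
For reference, a hedged sketch of the patch_u64() conversion discussed
above, assuming &entry->funcdata is doubleword aligned and that
patch_u64() takes (u64 *, u64), mirroring the patch_u32() call used
here; this is not part of the patch:

	/* func_desc_t is 8 or 16 bytes, so this loop runs once or twice */
	for (i = 0; i < sizeof(func_desc_t) / sizeof(u64); i++) {
		if (patch_u64(((u64 *)&entry->funcdata) + i, ((u64 *)&desc)[i]))
			return 0;
	}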

Comments

Naveen N Rao April 23, 2024, 9:39 a.m. UTC | #1
On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> This use of patch_instruction() is working on 32 bit data, and can fail
> if the data looks like a prefixed instruction and the extra write
> crosses a page boundary. Use patch_u32() to fix the write size.
> 
> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
> 
> ---
> 
> v2: * Added the fixes tag, it seems appropriate even if the subject does
>       mention a more robust solution being required.
> 
> patch_u64() should be more efficient, but judging from the bug report
> it doesn't seem like the data is doubleword aligned.

Asking again, is that still the case? It looks like at least the first 
fix below can be converted to patch_u64().

- Naveen

> ---
>  arch/powerpc/kernel/module_64.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
> index 7112adc597a8..e9bab599d0c2 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
>  	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
>  	desc = func_desc(addr);
>  	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
> -		if (patch_instruction(((u32 *)&entry->funcdata) + i,
> -				      ppc_inst(((u32 *)(&desc))[i])))
> +		if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
>  			return 0;
>  	}
>  
> -	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
> +	if (patch_u32(&entry->magic, STUB_MAGIC))
>  		return 0;
>  
>  	return 1;
> -- 
> 2.44.0
>
Benjamin Gray May 14, 2024, 2:59 a.m. UTC | #2
On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> > This use of patch_instruction() is working on 32 bit data, and can fail
> > if the data looks like a prefixed instruction and the extra write
> > crosses a page boundary. Use patch_u32() to fix the write size.
> > 
> > Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
> > Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> > Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
> > 
> > ---
> > 
> > v2: * Added the fixes tag, it seems appropriate even if the subject does
> >       mention a more robust solution being required.
> > 
> > patch_u64() should be more efficient, but judging from the bug report
> > it doesn't seem like the data is doubleword aligned.
> 
> Asking again, is that still the case? It looks like at least the first
> fix below can be converted to patch_u64().
> 
> - Naveen

Sorry, I think I forgot this question last time. Reading the commit
descriptions you linked, I don't see any mention of "entry->funcdata
will always be doubleword aligned because XYZ". If the patch makes it
doubleword aligned anyway, I wouldn't be confident asserting all
callers will always do this without looking into it a lot more.

Perhaps a separate series could optimise it with appropriate
justification/assertions to catch bad alignment. But I think leaving it
out of this series is fine because the original works in words, so it's
not regressing anything.
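
Something like the following is the kind of assertion meant here
(hypothetical, not part of this series), using the kernel's existing
IS_ALIGNED() and WARN_ON_ONCE() helpers:

	/* Catch a misaligned funcdata before attempting a doubleword patch */
	if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)&entry->funcdata, sizeof(u64))))
		return 0;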

> 
> > ---
> >  arch/powerpc/kernel/module_64.c | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
> > index 7112adc597a8..e9bab599d0c2 100644
> > --- a/arch/powerpc/kernel/module_64.c
> > +++ b/arch/powerpc/kernel/module_64.c
> > @@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
> >  	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
> >  	desc = func_desc(addr);
> >  	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
> > -		if (patch_instruction(((u32 *)&entry->funcdata) + i,
> > -				      ppc_inst(((u32 *)(&desc))[i])))
> > +		if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
> >  			return 0;
> >  	}
> >  
> > -	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
> > +	if (patch_u32(&entry->magic, STUB_MAGIC))
> >  		return 0;
> >  
> >  	return 1;
> > -- 
> > 2.44.0
> >
Christophe Leroy May 14, 2024, 4:39 a.m. UTC | #3
On 14/05/2024 at 04:59, Benjamin Gray wrote:
> On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
>> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
>>> This use of patch_instruction() is working on 32 bit data, and can fail
>>> if the data looks like a prefixed instruction and the extra write
>>> crosses a page boundary. Use patch_u32() to fix the write size.
>>>
>>> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
>>> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
>>> Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
>>>
>>> ---
>>>
>>> v2: * Added the fixes tag, it seems appropriate even if the subject does
>>>        mention a more robust solution being required.
>>>
>>> patch_u64() should be more efficient, but judging from the bug report
>>> it doesn't seem like the data is doubleword aligned.
>>
>> Asking again, is that still the case? It looks like at least the first
>> fix below can be converted to patch_u64().
>>
>> - Naveen
> 
> Sorry, I think I forgot this question last time. Reading the commit
> descriptions you linked, I don't see any mention of "entry->funcdata
> will always be doubleword aligned because XYZ". If the patch makes it
> doubleword aligned anyway, I wouldn't be confident asserting all
> callers will always do this without looking into it a lot more.
> 
> Perhaps a separate series could optimise it with appropriate
> justification/assertions to catch bad alignment. But I think leaving it
> out of this series is fine because the original works in words, so it's
> not regressing anything.

As far as I can see, the struct is 64-bit aligned by definition, so the 
funcdata field is aligned too, as there are just 8x u32 before it:

struct ppc64_stub_entry {
	/*
	 * 28 byte jump instruction sequence (7 instructions) that can
	 * hold ppc64_stub_insns or stub_insns. Must be 8-byte aligned
	 * with PCREL kernels that use prefix instructions in the stub.
	 */
	u32 jump[7];
	/* Used by ftrace to identify stubs */
	u32 magic;
	/* Data for the above code */
	func_desc_t funcdata;
} __aligned(8);
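
Concretely, offsetof(struct ppc64_stub_entry, funcdata) is 7*4 + 4 = 32
bytes, a multiple of 8, so together with __aligned(8) the field is
always doubleword aligned. A hypothetical compile-time check (not in
the current code) could pin that down:

	/* Fails the build if funcdata ever loses doubleword alignment */
	BUILD_BUG_ON(offsetof(struct ppc64_stub_entry, funcdata) % __alignof__(u64));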

> 
>>
>>> ---
>>>   arch/powerpc/kernel/module_64.c | 5 ++---
>>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
>>> index 7112adc597a8..e9bab599d0c2 100644
>>> --- a/arch/powerpc/kernel/module_64.c
>>> +++ b/arch/powerpc/kernel/module_64.c
>>> @@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
>>>   	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
>>>   	desc = func_desc(addr);
>>>   	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
>>> -		if (patch_instruction(((u32 *)&entry->funcdata) + i,
>>> -				      ppc_inst(((u32 *)(&desc))[i])))
>>> +		if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
>>>   			return 0;
>>>   	}
>>>   
>>> -	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
>>> +	if (patch_u32(&entry->magic, STUB_MAGIC))
>>>   		return 0;
>>>   
>>>   	return 1;
>>> -- 
>>> 2.44.0
>>>
>
Naveen N Rao May 14, 2024, 10:42 a.m. UTC | #4
On Tue, May 14, 2024 at 04:39:30AM GMT, Christophe Leroy wrote:
> 
> 
> On 14/05/2024 at 04:59, Benjamin Gray wrote:
> > On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
> >> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> >>> This use of patch_instruction() is working on 32 bit data, and can fail
> >>> if the data looks like a prefixed instruction and the extra write
> >>> crosses a page boundary. Use patch_u32() to fix the write size.
> >>>
> >>> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
> >>> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> >>> Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
> >>>
> >>> ---
> >>>
> >>> v2: * Added the fixes tag, it seems appropriate even if the subject does
> >>>        mention a more robust solution being required.
> >>>
> >>> patch_u64() should be more efficient, but judging from the bug report
> >>> it doesn't seem like the data is doubleword aligned.
> >>
> >> Asking again, is that still the case? It looks like at least the first
> >> fix below can be converted to patch_u64().
> >>
> >> - Naveen
> > 
> > Sorry, I think I forgot this question last time. Reading the commit
> > descriptions you linked, I don't see any mention of "entry->funcdata
> > will always be doubleword aligned because XYZ". If the patch makes it
> > doubleword aligned anyway, I wouldn't be confident asserting all
> > callers will always do this without looking into it a lot more.

No worries. I was asking primarily to check if you had noticed a 
specific issue with alignment.

As Christophe mentions, the structure is aligned. It is primarily 
allocated in a separate stubs section for modules. Looking at it more 
closely, though, I wonder if we need the below:

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index cccb1f78e058..0226d73a0007 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -428,8 +428,11 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
 
        /* Find .toc and .stubs sections, symtab and strtab */
        for (i = 1; i < hdr->e_shnum; i++) {
-               if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0)
+               if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0) {
                        me->arch.stubs_section = i;
+                       if (sechdrs[i].sh_addralign < 8)
+                               sechdrs[i].sh_addralign = 8;
+               }
 #ifdef CONFIG_PPC_KERNEL_PCREL
                else if (strcmp(secstrings + sechdrs[i].sh_name, ".data..percpu") == 0)
                        me->arch.pcpu_section = i;
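
(For context: the module loader lays out sections honouring
sh_addralign, so raising it to 8 here would guarantee the .stubs
section, and with it every ppc64_stub_entry, starts doubleword aligned
regardless of what the toolchain emitted.)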

> > 
> > Perhaps a separate series could optimise it with appropriate
> > justification/assertions to catch bad alignment. But I think leaving it
> > out of this series is fine because the original works in words, so it's
> > not regressing anything.

That should be fine.

> 
> As far as I can see, the struct is 64 bits aligned by definition so 
> funcdata field is aligned too as there are just 8x u32 before it:
> 
> struct ppc64_stub_entry {
> 	/*
> 	 * 28 byte jump instruction sequence (7 instructions) that can
> 	 * hold ppc64_stub_insns or stub_insns. Must be 8-byte aligned
> 	 * with PCREL kernels that use prefix instructions in the stub.
> 	 */
> 	u32 jump[7];
> 	/* Used by ftrace to identify stubs */
> 	u32 magic;
> 	/* Data for the above code */
> 	func_desc_t funcdata;
> } __aligned(8);
> 

Thanks,
Naveen
Patch

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 7112adc597a8..e9bab599d0c2 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
 	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
 	desc = func_desc(addr);
 	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
-		if (patch_instruction(((u32 *)&entry->funcdata) + i,
-				      ppc_inst(((u32 *)(&desc))[i])))
+		if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
 			return 0;
 	}
 
-	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
+	if (patch_u32(&entry->magic, STUB_MAGIC))
 		return 0;
 
 	return 1;