On Tue, May 14, 2024 at 04:39:30AM GMT, Christophe Leroy wrote:
> 
> 
> On 14/05/2024 04:59, Benjamin Gray wrote:
> > On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
> >> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> >>> This use of patch_instruction() is working on 32-bit data, and
> >>> can fail if the data looks like a prefixed instruction and the
> >>> extra write crosses a page boundary. Use patch_u32() to fix the
> >>> write size.
> >>>
> >>> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
> >>> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> >>> Signed-off-by: Benjamin Gray <bg...@linux.ibm.com>
> >>>
> >>> ---
> >>>
> >>> v2: * Added the fixes tag; it seems appropriate even if the
> >>>        subject does mention a more robust solution being required.
> >>>
> >>> patch_u64() should be more efficient, but judging from the bug
> >>> report it doesn't seem like the data is doubleword aligned.
> >>
> >> Asking again, is that still the case? It looks like at least the
> >> first fix below can be converted to patch_u64().
> >>
> >> - Naveen
> > 
> > Sorry, I think I forgot this question last time. Reading the commit
> > descriptions you linked, I don't see any mention of "entry->funcdata
> > will always be doubleword aligned because XYZ". If the patch makes it
> > doubleword aligned anyway, I wouldn't be confident asserting all
> > callers will always do this without looking into it a lot more.

No worries. I was asking primarily to check if you had noticed a 
specific issue with alignment.

As Christophe mentions, the structure is aligned. It is primarily 
allocated in a separate stubs section for modules. Looking at it more 
closely though, I wonder if we need the below:

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index cccb1f78e058..0226d73a0007 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -428,8 +428,11 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
 
        /* Find .toc and .stubs sections, symtab and strtab */
        for (i = 1; i < hdr->e_shnum; i++) {
-               if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0)
+               if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0) {
                        me->arch.stubs_section = i;
+                       if (sechdrs[i].sh_addralign < 8)
+                               sechdrs[i].sh_addralign = 8;
+               }
 #ifdef CONFIG_PPC_KERNEL_PCREL
                else if (strcmp(secstrings + sechdrs[i].sh_name, ".data..percpu") == 0)
                        me->arch.pcpu_section = i;
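
For context on why bumping sh_addralign is sufficient: when the module
loader lays out sections, each section's start offset is rounded up to
its sh_addralign, so forcing .stubs to an alignment of 8 guarantees the
first stub entry is doubleword aligned. A simplified sketch of that
layout step (an approximation for illustration, not the exact kernel
code):

static unsigned long layout_one_section(unsigned long running_size,
					unsigned long sh_addralign,
					unsigned long sh_size)
{
	/* ELF treats an sh_addralign of 0 as no constraint, i.e. 1. */
	unsigned long align = sh_addralign ? sh_addralign : 1;
	/* Round the running size up to the section's alignment. */
	unsigned long start = (running_size + align - 1) & ~(align - 1);

	/* The section occupies [start, start + sh_size). */
	return start + sh_size;
}

Each stub entry must also keep that alignment itself, which holds
because the entry size is a multiple of 8 (see the struct below).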

> > 
> > Perhaps a separate series could optimise it with appropriate
> > justification/assertions to catch bad alignment. But I think leaving it
> > out of this series is fine because the original works in words, so it's
> > not regressing anything.

That should be fine.
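
For reference, the shape of the conversion under discussion, sketched
from the commit description (the exact hunks in module_64.c may differ;
entry->magic is just one illustrative 32-bit field from the stub struct
quoted below):

/*
 * Before: a 4-byte data write done via patch_instruction(). If the
 * value happens to look like a prefixed instruction, the write becomes
 * 8 bytes, and the extra 4 bytes can cross a page boundary.
 */
patch_instruction((u32 *)&entry->magic, ppc_inst(STUB_MAGIC));

/*
 * After: patch_u32() always writes exactly 4 bytes, matching the size
 * of the data being patched.
 */
patch_u32(&entry->magic, STUB_MAGIC);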

> 
> As far as I can see, the struct is 64-bit aligned by definition, so 
> the funcdata field is aligned too, as there are just 8x u32 before it:
> 
> struct ppc64_stub_entry {
>       /*
>        * 28 byte jump instruction sequence (7 instructions) that can
>        * hold ppc64_stub_insns or stub_insns. Must be 8-byte aligned
>        * with PCREL kernels that use prefix instructions in the stub.
>        */
>       u32 jump[7];
>       /* Used by ftrace to identify stubs */
>       u32 magic;
>       /* Data for the above code */
>       func_desc_t funcdata;
> } __aligned(8);
> 
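
Right. A minimal standalone sketch making those assumptions explicit
(func_desc_t stubbed out here for illustration; in the kernel these
checks would be BUILD_BUG_ON()s):

#include <stddef.h>
#include <stdint.h>

/* Stand-in for the kernel's func_desc_t (a single address on ELFv2). */
typedef struct {
	uint64_t addr;
} func_desc_t;

struct ppc64_stub_entry {
	uint32_t jump[7];	/* 28-byte jump instruction sequence */
	uint32_t magic;		/* pads funcdata out to offset 32 */
	func_desc_t funcdata;
} __attribute__((aligned(8)));

/* funcdata sits at offset 32, so an aligned entry gives an aligned
 * funcdata. */
_Static_assert(offsetof(struct ppc64_stub_entry, funcdata) % 8 == 0,
	       "funcdata must be doubleword aligned");

/* The entry size is a multiple of 8, so arrays of stubs stay aligned. */
_Static_assert(sizeof(struct ppc64_stub_entry) % 8 == 0,
	       "consecutive stub entries must stay aligned");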

Thanks,
Naveen
