This use of patch_instruction() works on 32-bit data and can fail when
the data happens to look like a prefixed instruction, in which case the
extra write can cross a page boundary. Use patch_u32() to fix the write
size.
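
For illustration, a minimal userspace sketch (not kernel code; the
looks_like_prefixed() helper here is a made-up stand-in for the kernel's
prefixed-instruction test) of how arbitrary data can be mistaken for a
prefixed instruction: the primary opcode of a prefix word is 1, so any
data word with that bit pattern would be patched with an 8-byte write:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define OP_PREFIX 1

/* Top 6 bits of the word are the primary opcode field. */
static bool looks_like_prefixed(uint32_t word)
{
	return (word >> 26) == OP_PREFIX;
}

int main(void)
{
	uint32_t data = 0x04000000;	/* raw data, opcode field == 1 */

	/* An 8-byte write for this "instruction" may cross a page. */
	printf("treated as prefixed: %s\n",
	       looks_like_prefixed(data) ? "yes" : "no");
	return 0;
}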

Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
Signed-off-by: Benjamin Gray <bg...@linux.ibm.com>

---

v2: * Added the fixes tag. It seems appropriate even though the linked
      discussion mentions that a more robust solution is still required.

patch_u64() would be more efficient, but judging from the bug report
the data does not appear to be doubleword aligned.
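
As a side note, a userspace sketch (the address and helper are made up
for illustration) of the alignment gate a patch_u64() conversion would
need before replacing the two 4-byte writes with one 8-byte write:

#include <stdint.h>
#include <stdio.h>

/* True if the address could take a single doubleword write. */
static int dword_aligned(uintptr_t addr)
{
	return (addr % sizeof(uint64_t)) == 0;
}

int main(void)
{
	/* e.g. a stub funcdata slot that is only 4-byte aligned */
	uintptr_t addr = 0x100000f4;

	printf("%#lx doubleword aligned: %d\n",
	       (unsigned long)addr, dword_aligned(addr));
	return 0;
}
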
---
 arch/powerpc/kernel/module_64.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 7112adc597a8..e9bab599d0c2 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
        // func_desc_t is 8 bytes if ABIv2, else 16 bytes
        desc = func_desc(addr);
        for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
-               if (patch_instruction(((u32 *)&entry->funcdata) + i,
-                                     ppc_inst(((u32 *)(&desc))[i])))
+               if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
                        return 0;
        }
 
-       if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
+       if (patch_u32(&entry->magic, STUB_MAGIC))
                return 0;
 
        return 1;
-- 
2.45.0
