With STRICT_KERNEL_RWX enabled in a relocatable kernel under the hash
MMU, if the kernel is loaded at a position that is not 16M aligned, it
miscalculates its ALIGN*()s and things go horribly wrong.

We can easily avoid this when selecting the linear mapping size, so do
so and print a warning.  I tested this with various alignments, and as
long as the load position is 64K aligned (the base requirement for
powerpc) it works fine.

Signed-off-by: Russell Currey <rus...@russell.cc>
---
 arch/powerpc/mm/book3s64/hash_utils.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
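As an aside for reviewers, here is a minimal user-space sketch (not
kernel code; the helper name and example addresses are made up) of the
alignment test this patch adds: a load position only supports the 16M
linear mapping page size if it is a multiple of 16M, while 64K
alignment alone is not enough.

#include <stdio.h>
#include <stdbool.h>

#define SZ_16M 0x1000000UL	/* 16M, same constant as the patch's check */

/* Hypothetical helper mirroring the "_stext % 0x1000000" test. */
static bool is_16m_aligned(unsigned long addr)
{
	return (addr % SZ_16M) == 0;
}

int main(void)
{
	/* Made-up load positions: one 16M aligned, one only 64K aligned. */
	unsigned long positions[] = { 0x2000000UL, 0x2010000UL };

	for (unsigned int i = 0; i < sizeof(positions) / sizeof(positions[0]); i++)
		printf("0x%lx: %s\n", positions[i],
		       is_16m_aligned(positions[i]) ?
		       "use 16M linear mapping" :
		       "fall back to 1M or 4K");
	return 0;
}

The 64K base alignment requirement itself is untouched; the patch only
downgrades the linear mapping page size when 16M alignment is not met.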

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index b30435c7d804..523d4d39d11e 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -652,6 +652,7 @@ static void init_hpte_page_sizes(void)
 
 static void __init htab_init_page_sizes(void)
 {
+       bool aligned = true;
        init_hpte_page_sizes();
 
        if (!debug_pagealloc_enabled()) {
@@ -659,7 +660,15 @@ static void __init htab_init_page_sizes(void)
                 * Pick a size for the linear mapping. Currently, we only
                 * support 16M, 1M and 4K which is the default
                 */
-               if (mmu_psize_defs[MMU_PAGE_16M].shift)
+               if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) &&
+                   (unsigned long)_stext % 0x1000000) {
+                       if (mmu_psize_defs[MMU_PAGE_16M].shift)
+                               pr_warn("Kernel not 16M aligned, "
+                                       "disabling 16M linear map alignment\n");
+                       aligned = false;
+               }
+
+               if (mmu_psize_defs[MMU_PAGE_16M].shift && aligned)
                        mmu_linear_psize = MMU_PAGE_16M;
                else if (mmu_psize_defs[MMU_PAGE_1M].shift)
                        mmu_linear_psize = MMU_PAGE_1M;
-- 
2.24.1
