In memory region KASLR, __PHYSICAL_MASK_SHIFT is used to calculate the initial size of the direct mapping region. This was correct in the old code, where __PHYSICAL_MASK_SHIFT was equal to MAX_PHYSMEM_BITS (46 bits) and only 4-level paging mode was supported.
Later, in commit:

  b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")

__PHYSICAL_MASK_SHIFT was changed to always be 52 bits, regardless of whether 4-level or 5-level paging is in use. This is wrong for 4-level paging, since it can greatly weaken KASLR randomness there.

For KASLR, the sum of the actual RAM size and CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING is compared with the maximum RAM size the system can support, and the smaller of the two is used as the amount of space reserved for the direct mapping region. The maximum RAM supported in 4-level mode is 64 TB, according to MAX_PHYSMEM_BITS. However, when __PHYSICAL_MASK_SHIFT is mistakenly used, the code compares against 4 PB instead.

E.g. in a system with 64 TB of RAM, 74 TB is reserved (64 TB plus the 10 TB of CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING), whereas the algorithm is supposed to reserve only 64 TB. The extra 10 TB should instead be left available for randomization. So MAX_PHYSMEM_BITS should be used here instead.

Fix it by replacing __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

Acked-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Reviewed-by: Thomas Garnier <thgar...@google.com>
Signed-off-by: Baoquan He <b...@redhat.com>
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 9a8756517504..387d4ed25d7c 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -94,7 +94,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
--
2.17.2
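
As a side note, below is a minimal user-space sketch (not part of the patch) of the size calculation. It hard-codes the 4-level values TB_SHIFT = 40 and MAX_PHYSMEM_BITS = 46, plus the post-b83ce5ee91471d __PHYSICAL_MASK_SHIFT = 52, purely to illustrate the 4 PB vs. 64 TB difference described above:

/*
 * Illustrative user-space sketch, not kernel code. Constants are the
 * 4-level paging values, hard-coded here only for demonstration.
 */
#include <stdio.h>

#define TB_SHIFT		40
#define __PHYSICAL_MASK_SHIFT	52	/* always 52 after b83ce5ee91471d */
#define MAX_PHYSMEM_BITS	46	/* 4-level paging */

int main(void)
{
	/* Size used before the fix: 1 << (52 - 40) = 4096 TB = 4 PB */
	unsigned long before = 1UL << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
	/* Size used after the fix: 1 << (46 - 40) = 64 TB */
	unsigned long after  = 1UL << (MAX_PHYSMEM_BITS - TB_SHIFT);

	printf("__PHYSICAL_MASK_SHIFT based size: %lu TB\n", before);
	printf("MAX_PHYSMEM_BITS based size:      %lu TB\n", after);
	return 0;
}

With a 4 PB upper bound, RAM size + padding (74 TB in the example above) is always the smaller value and thus always wins the comparison, which is why the extra 10 TB ends up reserved instead of being available for randomization.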