On Mon, May 2, 2016 at 2:58 PM, Dave Hansen <dave.han...@linux.intel.com> wrote:
> On 05/02/2016 02:41 PM, Thomas Garnier wrote:
>> Minor change that allows early boot physical mapping of PUD-level virtual
>> addresses. This change prepares for using different virtual addresses for
>> KASLR memory randomization. It has no impact on the default configuration.
> ...
>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>> index 89d9747..6adfbce 100644
>> --- a/arch/x86/mm/init_64.c
>> +++ b/arch/x86/mm/init_64.c
>> @@ -526,10 +526,10 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
>>  {
>>       unsigned long pages = 0, next;
>>       unsigned long last_map_addr = end;
>> -     int i = pud_index(addr);
>> +     int i = pud_index((unsigned long)__va(addr));
>>
>>       for (; i < PTRS_PER_PUD; i++, addr = next) {
>> -             pud_t *pud = pud_page + pud_index(addr);
>> +             pud_t *pud = pud_page + pud_index((unsigned long)__va(addr));
>>               pmd_t *pmd;
>>               pgprot_t prot = PAGE_KERNEL;
>
> pud_index() is supposed to take a virtual address.  We were passing a
> physical address in here, and it all just worked because PAGE_OFFSET is
> PUD-aligned.  Now that you are moving PAGE_OFFSET around a bit and not
> PUD-aligning it, this breaks.  Right?
>
> Could you spell this out a bit more in the changelog?

Sure, will do on next iteration.
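
For reference, here is a small userspace sketch of the index arithmetic
(the physical address and offsets below are made-up examples, not the
kernel's real PAGE_OFFSET values): pud_index() just masks out bits 30..38
of the address, so passing a physical address lands on the same slot as
__va(addr) only while the phys-to-virt offset keeps those bits unchanged;
once the offset is randomized at a finer granularity, the two diverge.

/*
 * Illustrative only: mirrors the x86_64 4-level paging constants, but
 * the addresses are made up and not the kernel's actual PAGE_OFFSET.
 */
#include <stdio.h>

#define PUD_SHIFT	30
#define PTRS_PER_PUD	512UL

/* pud_index() expects a *virtual* address. */
static unsigned long pud_index(unsigned long address)
{
	return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
}

int main(void)
{
	/* Physical address just below a 1GB (PUD) boundary. */
	unsigned long pa = 0x7fe00000UL;
	/* PUD-aligned offset, like the default direct-mapping base. */
	unsigned long aligned_off = 0xffff880000000000UL;
	/* Same offset shifted by 2MB, i.e. no longer PUD-aligned. */
	unsigned long unaligned_off = 0xffff880000200000UL;

	/* Same slot: pud_index(pa) happens to equal pud_index(va). */
	printf("aligned:   %lu vs %lu\n",
	       pud_index(pa), pud_index(pa + aligned_off));

	/* Different slot: the shortcut breaks, hence the explicit
	 * __va() conversion in the patch. */
	printf("unaligned: %lu vs %lu\n",
	       pud_index(pa), pud_index(pa + unaligned_off));
	return 0;
}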

Thanks,
Thomas
