On 11/24/2017 07:27 PM, Andy Lutomirski wrote:

>>> +     cpu_entry_area_begin = (void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM));
>>> +     cpu_entry_area_end = (void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE);
>>> +
>>>       kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
>>> -                     (void *)KASAN_SHADOW_END);
>>> +                                kasan_mem_to_shadow(cpu_entry_area_begin));
>>> +
>>> +     kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(cpu_entry_area_begin),
>>> +                           (unsigned long)kasan_mem_to_shadow(cpu_entry_area_end),
>>> +             0);
>>> +
>>> +     kasan_populate_zero_shadow(kasan_mem_to_shadow(cpu_entry_area_end),
>>
>> It seems we need to round_up kasan_mem_to_shadow(cpu_entry_area_end) to the
>> next page (or alternatively - round_up(cpu_entry_area_end,
>> KASAN_SHADOW_SCALE_SIZE*PAGE_SIZE)). Otherwise, kasan_populate_zero_shadow()
>> will overpopulate the last shadow page of the cpu_entry area with
>> kasan_zero_page.
>>
>> We don't necessarily need to
>> round_down(kasan_mem_to_shadow(cpu_entry_area_begin), PAGE_SIZE), because
>> kasan_populate_zero_shadow() will not populate the last 'incomplete' page,
>> and kasan_populate_shadow() does round_down() internally, which is exactly
>> what we want here. But it might be better to round_down() explicitly anyway,
>> to avoid relying on such subtle implementation details.
> 
> Any chance you could send a fixup patch or a replacement patch?  You
> obviously understand this code *way* better than I do.
> 
> Or you could do my table-based approach and fix it permanently... :)
> 

Perhaps I'll look at the table-based approach later. I've sent you a fixed
patch for now, so as not to slow you down.
