On 2016-01-15 09:32, Philippe Gerum wrote:
> On 01/14/2016 06:34 PM, Henning Schild wrote:
>> Hey,
>>
>> The 4.1 kernel supports mapping I/O memory using huge pages; see
>> commits 0f616be120c632c818faaea9adcb8f05a7a8601f through
>> 6b6378355b925050eb6fa966742d8c2d65ff0d83.
>>
>> In ipipe, memory that gets ioremapped is pinned using
>> __ipipe_pin_mapping_globally; however, in the x86_64 case that function
>> uses vmalloc_sync_one, which must only be used on 4k pages.
>>
>> We found the problem when running the kernel in a VirtualBox VM, where
>> the paravirtualized PCI device has enough iomem to trigger huge-page
>> mappings. When loading the device driver, you get a BUG caused by
>> __ipipe_pin_mapping_globally.
>>
>> I will work on a fix for the problem. But I would also like to
>> understand the initial purpose of the pinning. Is it even supposed to
>> work for I/O memory as well? It looks like a way to commit address-space
>> changes right down into the page tables, to avoid page faults in the
>> kernel address space. Probably for more predictable timing ...
>>
> 
> This is for pinning the page table entries referencing kernel mappings,
> so that we don't get minor faults when treading over kernel memory;
> the alternative would be a fault fixup path that is both compatible
> with primary-domain execution and cheaper than tracking the pgds.

I suppose a critical scenario would already be a real-time driver that
accesses I/O regions from its interrupt handler or from the context of a
real-time task.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux

