>>> On 02.02.16 at 15:33, wrote:
> On 02/02/16 13:24, Jan Beulich wrote:
>> On 01.02.16 at 16:00, wrote:
>>> On 01/02/16 09:14, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm/p2m.c
>>>> +++ b/xen/arch/x86/mm/p2m.c
>>>> @@ -899,48 +899,64 @@ void p2m_change_type_range(struct domain
>>>>      p2m_unlock(p2m);
>>>>  }
>>>>
>>>> -/* Returns: 0 for success, -errno for failure */
>>>> +/*
>>>> + * Returns:
>>>> + * 0 for success,

When mapping large BARs (e.g. the frame buffer of a graphics card) the
overhead of establishing such mappings using only 4k pages has,
particularly after the XSA-125 fix, become unacceptable. Alter the
XEN_DOMCTL_memory_mapping semantics once again, so that there's no
longer a fixed amount of guest