Hi Stefano,
On 10/08/15 13:03, Stefano Stabellini wrote:
>> +			xen_pfn = xen_page_to_pfn(page);
>> +		}
>> +		fn(pfn_to_gfn(xen_pfn++), data);
>
> What is the purpose of incrementing xen_pfn here?
Because the Linux page is split into multiple xen_pfn, we have to
increment xen_pfn to get the next 4KB Xen frame within the same Linux
page on each iteration of the loop.
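
For reference, the arithmetic behind this is roughly the following
(a sketch only, not the helper from the patch; XEN_PAGE_SHIFT and the
exact definitions are assumptions, only XEN_PFN_PER_PAGE,
xen_page_to_pfn() and pfn_to_gfn() appear in the quoted diff):

	#include <linux/mm.h>

	#define XEN_PAGE_SHIFT		12	/* 4KB Xen frame */
	#define XEN_PFN_PER_PAGE	(PAGE_SIZE >> XEN_PAGE_SHIFT)	/* 16 with 64KB pages */

	/*
	 * One 64KB struct page covers XEN_PFN_PER_PAGE consecutive 4KB
	 * Xen frames, so the k-th frame of a page is simply
	 * xen_page_to_pfn(page) + k. That is why the loop can keep a
	 * single xen_pfn and increment it on every iteration.
	 */
	static unsigned long xen_page_to_pfn(struct page *page)
	{
		/* First 4KB Xen frame backing this Linux page. */
		return page_to_pfn(page) << (PAGE_SHIFT - XEN_PAGE_SHIFT);
	}
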
On 07/08/15 17:46, Julien Grall wrote:
> The hypercall interface (as well as the toolstack) always uses 4KB
> page granularity. When the toolstack asks to map a series of guest
> PFNs in a batch, it expects the pages to be mapped contiguously in
> its virtual memory.
>
> When Linux is using 64KB page granularity, the privcmd driver will
> have to map multiple 4KB Xen PFNs within a single Linux page.

Hi Stefano,
On 10/08/15 13:57, Stefano Stabellini wrote:
> On Mon, 10 Aug 2015, David Vrabel wrote:
>> On 10/08/15 13:03, Stefano Stabellini wrote:
>>> On Fri, 7 Aug 2015, Julien Grall wrote:
>>>> -	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
>>>> -	return rc < 0 ? rc : err;

On Mon, 10 Aug 2015, David Vrabel wrote:
> On 10/08/15 13:03, Stefano Stabellini wrote:
> > On Fri, 7 Aug 2015, Julien Grall wrote:
> >> -	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
> >> -	return rc < 0 ? rc : err;
> >> +	for (i = 0; i < nr_gfn; i++) {
> >> +		if ((i % XEN_PFN_PER_PAGE) == 0) {

On 10/08/15 13:03, Stefano Stabellini wrote:
> On Fri, 7 Aug 2015, Julien Grall wrote:
>> -	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
>> -	return rc < 0 ? rc : err;
>> +	for (i = 0; i < nr_gfn; i++) {
>> +		if ((i % XEN_PFN_PER_PAGE) == 0) {
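
Taking the two quoted hunks together, the replacement loop has roughly
the following shape (a reconstruction for readability, not the exact
patch; the helper name and the page = pages[i / XEN_PFN_PER_PAGE] line
are assumptions):

	/* Sketch: call fn() once per 4KB gfn backed by 64KB Linux pages. */
	static void xen_for_each_gfn(struct page **pages, unsigned int nr_gfn,
				     void (*fn)(unsigned long gfn, void *data),
				     void *data)
	{
		unsigned long xen_pfn = 0;
		struct page *page;
		unsigned int i;

		for (i = 0; i < nr_gfn; i++) {
			/* Move to the next Linux page every XEN_PFN_PER_PAGE gfns. */
			if ((i % XEN_PFN_PER_PAGE) == 0) {
				page = pages[i / XEN_PFN_PER_PAGE];	/* assumed */
				xen_pfn = xen_page_to_pfn(page);
			}
			/* 4KB Xen frames within one Linux page are consecutive. */
			fn(pfn_to_gfn(xen_pfn++), data);
		}
	}
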
On Fri, 7 Aug 2015, Julien Grall wrote:
> The hypercall interface (as well as the toolstack) always uses 4KB
> page granularity. When the toolstack asks to map a series of guest
> PFNs in a batch, it expects the pages to be mapped contiguously in
> its virtual memory.
>
> When Linux is using 64KB page granularity, the privcmd driver will
> have to map multiple 4KB Xen PFNs within a single Linux page.

The hypercall interface (as well as the toolstack) always uses 4KB
page granularity. When the toolstack asks to map a series of guest
PFNs in a batch, it expects the pages to be mapped contiguously in
its virtual memory.

When Linux is using 64KB page granularity, the privcmd driver will
have to map multiple 4KB Xen PFNs within a single Linux page.
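
To make the granularity mismatch concrete, the arithmetic is roughly
the following (a sketch only; XEN_PFN_PER_PAGE comes from this series,
while gfn_slot_page() and gfn_slot_offset() are hypothetical helpers
named here just for illustration):

	#include <linux/mm.h>

	/* Hypercall granularity is fixed at 4KB, whatever PAGE_SIZE is. */
	#define XEN_PAGE_SIZE		4096UL
	#define XEN_PFN_PER_PAGE	(PAGE_SIZE / XEN_PAGE_SIZE)	/* 16 with 64KB pages */

	/* Hypothetical helper: guest frame i is backed by Linux page i / 16. */
	static inline struct page *gfn_slot_page(struct page **pages, unsigned int i)
	{
		return pages[i / XEN_PFN_PER_PAGE];
	}

	/* Hypothetical helper: byte offset of the 4KB slot inside that page. */
	static inline unsigned long gfn_slot_offset(unsigned int i)
	{
		return (i % XEN_PFN_PER_PAGE) * XEN_PAGE_SIZE;
	}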