>>> On 13.05.16 at 17:35, wrote:
> On 05/13/2016 11:09 AM, Jan Beulich wrote:
> On 13.05.16 at 16:50, wrote:
> [...]
> @@ -1468,6 +1505,69 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          }
>          break;
>
> +    case XENMEM_sharing_op_bulk_share:
> +    {
> +        unsigned long max_sgfn [...]
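For context, the hunk under discussion adds a new case to the memop switch in
mem_sharing_memop(). Below is a rough sketch of what that case plausibly looks
like, reassembled from the fragments quoted in this thread (max_sgfn, the rc
check, bulk->start); names such as mso.u.bulk and the out label are assumptions,
not the patch's literal text:

    case XENMEM_sharing_op_bulk_share:
    {
        unsigned long max_sgfn, max_cgfn;
        struct domain *cd;

        rc = -EINVAL;
        if ( !mem_sharing_enabled(d) )
            goto out;

        /* Look up (and RCU-lock) the client domain to deduplicate against. */
        rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain, &cd);
        if ( rc )
            goto out;

        /*
         * A gfn-to-same-gfn bulk share only makes sense if both domains
         * have the same maximum gpfn and the start gfn is in range.
         */
        max_sgfn = domain_get_maximum_gpfn(d);
        max_cgfn = domain_get_maximum_gpfn(cd);
        if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
        {
            rc = -EINVAL;
            rcu_unlock_domain(cd);
            goto out;
        }

        rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
        /* rc > 0 would mean bulk_share() was preempted; see the
         * continuation sketch further down for how that is restarted. */

        rcu_unlock_domain(cd);
    }
    break;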
>>> On 13.05.16 at 16:50, wrote:
> On Fri, May 13, 2016 at 6:00 AM, Jan Beulich wrote:
>> On 12.05.16 at 17:25, wrote:
>>> +            if ( !rc )
>>> +                mem_sharing_share_pages(d, bulk->start, sh, cd,
>>> +                                        bulk->start, ch);
>>
>> You shouldn't be ignoring errors here.
>
> The [...]
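Jan's point is that the return value of mem_sharing_share_pages() is dropped on
the floor. A minimal sketch of the corrected call site, keeping the variable
names from the quoted fragment (whether a failure should abort the whole loop
or merely be recorded is exactly what the follow-up discussion is about):

        if ( !rc )
        {
            rc = mem_sharing_share_pages(d, bulk->start, sh, cd,
                                         bulk->start, ch);
            /* Propagate the error instead of silently continuing. */
            if ( rc )
                return rc;
        }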
>>> On 12.05.16 at 17:25, wrote:
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1294,6 +1294,43 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      [...]
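The truncated helper above is the heart of the series: a loop over the whole
gfn space that nominates and shares each page pair inside the hypervisor. A
sketch of the likely shape, assuming the bulk op struct carries a resumable
start gfn; the struct name and the skip-vs-fail policy are assumptions:

static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
                      struct mem_sharing_op_bulk *bulk)
{
    int rc = 0;
    shr_handle_t sh, ch;

    while ( bulk->start <= max )
    {
        /* Individual pages may legitimately be unsharable; skip those. */
        if ( mem_sharing_nominate_page(d, bulk->start, 0, &sh) )
            goto next;
        if ( mem_sharing_nominate_page(cd, bulk->start, 0, &ch) )
            goto next;

        rc = mem_sharing_share_pages(d, bulk->start, sh, cd,
                                     bulk->start, ch);
        if ( rc )
            break;

    next:
        bulk->start++;

        /* Long-running loop: offer to be preempted; a positive return
         * value tells the caller to set up a hypercall continuation. */
        if ( bulk->start <= max && hypercall_preempt_check() )
        {
            rc = 1;
            break;
        }
    }

    return rc;
}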
Currently mem-sharing can be performed on a page-by-page basis from the control
domain. However, when completely deduplicating (cloning) a VM, this requires
at least 3 hypercalls per page. As the user has to loop through all pages up
to max_gpfn, this process is very slow and wasteful.

This patch introduces a bulk mem-sharing memop, so that the looping over all
pages happens inside the hypervisor and an entire domain can be deduplicated
with a single (continuable) hypercall.
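For a sense of the cost being removed: with only the page-by-page interface, a
toolstack deduplicating a clone had to issue two nominations plus one share
call per gfn. Roughly, using the libxc memshr wrappers (exact signatures vary
between Xen versions, so treat this as an illustration):

#include <xenctrl.h>

/* Illustration only: deduplicate each gfn of clone cd against the same
 * gfn of source d, three hypercalls at a time. */
static int dedup_all(xc_interface *xch, uint32_t d, uint32_t cd)
{
    xen_pfn_t max_gpfn;
    unsigned long gfn;
    uint64_t sh, ch;

    if ( xc_domain_maximum_gpfn(xch, d, &max_gpfn) < 0 )
        return -1;

    for ( gfn = 0; gfn <= max_gpfn; ++gfn )
    {
        if ( xc_memshr_nominate_gfn(xch, d, gfn, &sh) )   /* hypercall 1 */
            continue;
        if ( xc_memshr_nominate_gfn(xch, cd, gfn, &ch) )  /* hypercall 2 */
            continue;
        xc_memshr_share_gfns(xch, d, gfn, sh,
                             cd, gfn, ch);                /* hypercall 3 */
    }

    return 0;
}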
>>> On 16.10.15 at 19:02, wrote:
> On Fri, Oct 16, 2015 at 12:46 AM, Jan Beulich wrote:
>> >>> On 15.10.15 at 20:09, wrote:
>> > +            rc = -EFAULT;
>> > +        else
>> > +            rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
>> > +                                               [...]
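The fragment above is the standard Xen preemption idiom: when the worker
reports it was interrupted, the updated op (carrying the next gfn to process)
is copied back to the guest handle first, and only then is a continuation
queued that re-enters the same memop. Schematically (the "lh" format string
and the mso naming are assumptions based on other mem_sharing continuations):

        if ( rc > 0 )
        {
            /* Preserve progress in the guest-visible op first... */
            if ( __copy_to_guest(arg, &mso, 1) )
                rc = -EFAULT;
            else
                /* ...then arrange for XENMEM_sharing_op to be re-issued,
                 * resuming at the saved start gfn. */
                rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                   "lh", XENMEM_sharing_op,
                                                   arg);
        }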
On Fri, Oct 16, 2015 at 12:46 AM, Jan Beulich wrote:
> >>> On 15.10.15 at 20:09, wrote:
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1293,6 +1293,42 @@ int relinquish_shared_pages(struct domain *d)
> >      return rc;
> >  }
> >
> > +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> > +                      [...]