Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-09 Thread Peter Xu
On Wed, Aug 09, 2017 at 10:05:19AM +0200, Juan Quintela wrote:
> Peter Xu  wrote:
> > On Tue, Aug 08, 2017 at 06:06:04PM +0200, Juan Quintela wrote:
> >> Peter Xu  wrote:
> >> > On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
> >> >
> >> > [...]
> >> >
> >> >>  static int multifd_send_page(uint8_t *address)
> >> >>  {
> >> >> -int i;
> >> >> +int i, j;
> >> >>  MultiFDSendParams *p = NULL; /* make happy gcc */
> >> >> +static multifd_pages_t pages;
> >> >> +static bool once;
> >> >> +
> >> >> +if (!once) {
> >> >> +multifd_init_group(&pages);
> >> >> +once = true;
> >> >
> >> > Would it be good to put the "pages" into multifd_send_state? One is to
> >> > stick globals together; another benefit is that we can remove the
> >> > "once" here: we can then init the "pages" when init multifd_send_state
> >> > struct (but maybe with a better name?...).
> >> 
> >> I did it to be able to free it.
> >
> > Free it? But they are static variables, so how can we free them?
> >
> > (I thought the only way to free it is putting it into
> >  multifd_send_state...)
> >
> > Something I must have missed here. :(
> 
> I did the change that you suggested in response to a comment from Dave
> that asked where I freed it.  I see that my sentence was ambiguous.

Oh! Then it's clear now. Thanks!

(Sorry I may have missed some of the emails in the threads)

-- 
Peter Xu



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-09 Thread Juan Quintela
Peter Xu  wrote:
> On Tue, Aug 08, 2017 at 06:06:04PM +0200, Juan Quintela wrote:
>> Peter Xu  wrote:
>> > On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
>> >
>> > [...]
>> >
>> >>  static int multifd_send_page(uint8_t *address)
>> >>  {
>> >> -int i;
>> >> +int i, j;
>> >>  MultiFDSendParams *p = NULL; /* make happy gcc */
>> >> +static multifd_pages_t pages;
>> >> +static bool once;
>> >> +
>> >> +if (!once) {
>> >> +multifd_init_group(&pages);
>> >> +once = true;
>> >
>> > Would it be good to put the "pages" into multifd_send_state? One is to
>> > stick globals together; another benefit is that we can remove the
>> > "once" here: we can then init the "pages" when init multifd_send_state
>> > struct (but maybe with a better name?...).
>> 
>> I did it to be able to free it.
>
> Free it? But they are static variables, so how can we free them?
>
> (I thought the only way to free it is putting it into
>  multifd_send_state...)
>
> Something I must have missed here. :(

I did the change that you suggested in response to a comment from Dave
that asked where I freed it.  I see that my sentence was ambiguous.
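
For the archives, a minimal sketch of that arrangement, assuming the
teardown lives in a multifd_save_cleanup()-style hook (the exact names
here are illustrative, not necessarily what the final series uses):

    /* "pages" moves into the send state instead of being a
     * function-local static, so setup and teardown are explicit
     * (other fields of the state elided) */
    struct MultiFDSendState {
        MultiFDSendParams *params;
        QemuSemaphore sem;
        QemuMutex mutex;
        multifd_pages_t pages;   /* was "static multifd_pages_t pages" */
    };

    /* in multifd_save_setup(): */
    multifd_init_group(&multifd_send_state->pages);

    /* in the (assumed) multifd_save_cleanup(): */
    g_free(multifd_send_state->pages.iov);
    multifd_send_state->pages.iov = NULL;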

>
>> 
>> > (there are similar static variables in multifd_recv_page() as well, if
>> >  this one applies, then we can possibly use multifd_recv_state for
>> >  that one)
>> 
>> Also there.
>> 
>> >> +}
>> >> +
>> >> +pages.iov[pages.num].iov_base = address;
>> >> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>> >> +pages.num++;
>> >> +
>> >> +if (pages.num < (pages.size - 1)) {
>> >> +return UINT16_MAX;
>> >
>> > Nit: shall we define something for readability?  Like:
>> >
>> > #define  MULTIFD_FD_INVALID  UINT16_MAX
>> 
>> Also done.
>> 
>> MULTIFD_CONTINUE
>> 
>> But I am open to changes.
>
> It's clear enough at least to me. Thanks!

Thanks, Juan.



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-09 Thread Peter Xu
On Tue, Aug 08, 2017 at 06:06:04PM +0200, Juan Quintela wrote:
> Peter Xu  wrote:
> > On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
> >
> > [...]
> >
> >>  static int multifd_send_page(uint8_t *address)
> >>  {
> >> -int i;
> >> +int i, j;
> >>  MultiFDSendParams *p = NULL; /* make happy gcc */
> >> +static multifd_pages_t pages;
> >> +static bool once;
> >> +
> >> +if (!once) {
> >> +multifd_init_group(&pages);
> >> +once = true;
> >
> > Would it be good to put the "pages" into multifd_send_state? One is to
> > stick globals together; another benefit is that we can remove the
> > "once" here: we can then init the "pages" when init multifd_send_state
> > struct (but maybe with a better name?...).
> 
> I did it to be able to free it.

Free it? But they are static variables, so how can we free them?

(I thought the only way to free it is putting it into
 multifd_send_state...)

Something I must have missed here. :(

> 
> > (there are similar static variables in multifd_recv_page() as well, if
> >  this one applies, then we can possibly use multifd_recv_state for
> >  that one)
> 
> Also there.
> 
> >> +}
> >> +
> >> +pages.iov[pages.num].iov_base = address;
> >> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
> >> +pages.num++;
> >> +
> >> +if (pages.num < (pages.size - 1)) {
> >> +return UINT16_MAX;
> >
> > Nit: shall we define something for readability?  Like:
> >
> > #define  MULTIFD_FD_INVALID  UINT16_MAX
> 
> Also done.
> 
> MULTIFD_CONTINUE
> 
> But I am open to changes.

It's clear enough at least to me. Thanks!

-- 
Peter Xu



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-08 Thread Juan Quintela
Peter Xu  wrote:
> On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
>
> [...]
>
>>  static int multifd_send_page(uint8_t *address)
>>  {
>> -int i;
>> +int i, j;
>>  MultiFDSendParams *p = NULL; /* make happy gcc */
>> +static multifd_pages_t pages;
>> +static bool once;
>> +
>> +if (!once) {
>> +multifd_init_group(&pages);
>> +once = true;
>
> Would it be good to put the "pages" into multifd_send_state? One is to
> stick globals together; another benefit is that we can remove the
> "once" here: we can then init the "pages" when init multifd_send_state
> struct (but maybe with a better name?...).

I did it to be able to free it.

> (there are similar static variables in multifd_recv_page() as well, if
>  this one applies, then we can possibly use multifd_recv_state for
>  that one)

Also there.

>> +}
>> +
>> +pages.iov[pages.num].iov_base = address;
>> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>> +pages.num++;
>> +
>> +if (pages.num < (pages.size - 1)) {
>> +return UINT16_MAX;
>
> Nit: shall we define something for readability?  Like:
>
> #define  MULTIFD_FD_INVALID  UINT16_MAX

Also done.

MULTIFD_CONTINUE

But I am open to changes.
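
For reference, a sketch of how the named constant reads in context,
assuming the return-value convention of this version survives:

    /* not a real fd_num: the group is not full yet, so the caller
     * should keep queueing pages instead of kicking a thread */
    #define MULTIFD_CONTINUE UINT16_MAX

    if (pages.num < (pages.size - 1)) {
        return MULTIFD_CONTINUE;
    }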


>> +}
>>  
>>  qemu_sem_wait(&multifd_send_state->sem);
>>  qemu_mutex_lock(&multifd_send_state->mutex);
>> @@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
>>  }
>>  qemu_mutex_unlock(&multifd_send_state->mutex);
>>  qemu_mutex_lock(&p->mutex);
>> -p->address = address;
>> +p->pages.num = pages.num;
>> +for (j = 0; j < pages.size; j++) {
>> +p->pages.iov[j].iov_base = pages.iov[j].iov_base;
>> +p->pages.iov[j].iov_len = pages.iov[j].iov_len;
>> +}
>> +pages.num = 0;
>>  qemu_mutex_unlock(&p->mutex);
>>  qemu_sem_post(&p->sem);
>>  
>> -- 
>> 2.9.4
>> 



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-08 Thread Juan Quintela
"Dr. David Alan Gilbert"  wrote:
> * Juan Quintela (quint...@redhat.com) wrote:
>> We now send several pages at a time each time that we wake up a thread.
>> 
>> Signed-off-by: Juan Quintela 
>> 
>> --
>> 
>> Use iovecs instead of creating the equivalent.
>> ---
>>  migration/ram.c | 46 --
>>  1 file changed, 40 insertions(+), 6 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 2bf3fa7..90e1bcb 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -362,6 +362,13 @@ static void compress_threads_save_setup(void)
>>  
>>  /* Multiple fd's */
>>  
>> +
>> +typedef struct {
>> +int num;
>> +int size;
>
> size_t ?

Done.
>
>> +struct iovec *iov;
>> +} multifd_pages_t;
>> +
>>  struct MultiFDSendParams {
>>  /* not changed */
>>  uint8_t id;
>> @@ -371,7 +378,7 @@ struct MultiFDSendParams {
>>  QemuMutex mutex;
>>  /* protected by param mutex */
>>  bool quit;
>> -uint8_t *address;
>> +multifd_pages_t pages;
>>  /* protected by multifd mutex */
>>  bool done;
>>  };
>> @@ -459,8 +466,8 @@ static void *multifd_send_thread(void *opaque)
>>  qemu_mutex_unlock(&p->mutex);
>>  break;
>>  }
>> -if (p->address) {
>> -p->address = 0;
>> +if (p->pages.num) {
>> +p->pages.num = 0;
>>  qemu_mutex_unlock(&p->mutex);
>>  qemu_mutex_lock(&multifd_send_state->mutex);
>>  p->done = true;
>> @@ -475,6 +482,13 @@ static void *multifd_send_thread(void *opaque)
>>  return NULL;
>>  }
>>  
>> +static void multifd_init_group(multifd_pages_t *pages)
>> +{
>> +pages->num = 0;
>> +pages->size = migrate_multifd_group();
>> +pages->iov = g_malloc0(pages->size * sizeof(struct iovec));
>
> Does that get freed anywhere?

Ooops.  Now it does.

>> +}
>> +
>>  int multifd_save_setup(void)
>>  {
>>  int thread_count;
>> @@ -498,7 +512,7 @@ int multifd_save_setup(void)
>>  p->quit = false;
>>  p->id = i;
>>  p->done = true;
>> -p->address = 0;
>> +multifd_init_group(&p->pages);
>>  p->c = socket_send_channel_create();
>>  if (!p->c) {
>>  error_report("Error creating a send channel");
>> @@ -515,8 +529,23 @@ int multifd_save_setup(void)
>>  
>>  static int multifd_send_page(uint8_t *address)
>>  {
>> -int i;
>> +int i, j;
>>  MultiFDSendParams *p = NULL; /* make happy gcc */
>> +static multifd_pages_t pages;
>> +static bool once;
>> +
>> +if (!once) {
>> +multifd_init_group(&pages);
>> +once = true;
>> +}
>> +
>> +pages.iov[pages.num].iov_base = address;
>> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>> +pages.num++;
>> +
>> +if (pages.num < (pages.size - 1)) {
>> +return UINT16_MAX;
>
> That's a very odd magic constant to return.
> What's your intention?
>
>> +}
>>  
>>  qemu_sem_wait(&multifd_send_state->sem);
>>  qemu_mutex_lock(&multifd_send_state->mutex);
>> @@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
>>  }
>>  qemu_mutex_unlock(&multifd_send_state->mutex);
>>  qemu_mutex_lock(&p->mutex);
>> -p->address = address;
>> +p->pages.num = pages.num;
>> +for (j = 0; j < pages.size; j++) {
>> +p->pages.iov[j].iov_base = pages.iov[j].iov_base;
>> +p->pages.iov[j].iov_len = pages.iov[j].iov_len;
>> +}
>
> It would seem more logical to update p->pages.num last
>
> This is also a little odd in that iov_len is never really used,
> it's always TARGET_PAGE_SIZE.

Changed to iov_copy() per Peter's suggestion.  And iov_len is used in the
QIO send functions, so we have the right value.
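
A sketch of that change, assuming iov_copy() from include/qemu/iov.h
(it copies the iovec entries themselves, not the data they point at):

    /* replaces the element-by-element copy loop; sketch only */
    p->pages.num = iov_copy(p->pages.iov, pages.size,
                            pages.iov, pages.num,
                            0, pages.num * TARGET_PAGE_SIZE);
    pages.num = 0;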

>> +pages.num = 0;
>>  qemu_mutex_unlock(&p->mutex);
>>  qemu_sem_post(&p->sem);
>
> What makes sure that any final chunk of pages that was less
> than the group size is sent at the end?

See the last_page boolean in a following patch.  This was the wrong place
for it.
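
Roughly the idea, as a sketch (the real form is in the later patch):
the caller flags the final page, which forces a send of a partially
filled group:

    static int multifd_send_page(uint8_t *address, bool last_page)
    {
        pages.iov[pages.num].iov_base = address;
        pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
        pages.num++;

        if (!last_page && (pages.num < (pages.size - 1))) {
            return MULTIFD_CONTINUE;
        }
        /* ...otherwise hand the (possibly short) group to a send
         * thread exactly as before... */
    }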

Thanks, Juan.



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-08-08 Thread Juan Quintela
"Daniel P. Berrange"  wrote:
> On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
>> We now send several pages at a time each time that we wake up a thread.
>> 
>> Signed-off-by: Juan Quintela 
>> 
>> --
>> 
>> Use iovecs instead of creating the equivalent.
>> ---
>>  migration/ram.c | 46 --
>>  1 file changed, 40 insertions(+), 6 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 2bf3fa7..90e1bcb 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>
>> +static void multifd_init_group(multifd_pages_t *pages)
>> +{
>> +pages->num = 0;
>> +pages->size = migrate_multifd_group();
>> +pages->iov = g_malloc0(pages->size * sizeof(struct iovec));
>
> Use g_new() so that it checks for overflow in the size calculation.

Done, thanks.
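
For the record, the resulting line is presumably something like the
following; g_new0() keeps the zero-fill of g_malloc0() while checking
the count * sizeof multiplication for overflow:

    -pages->iov = g_malloc0(pages->size * sizeof(struct iovec));
    +pages->iov = g_new0(struct iovec, pages->size);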



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-07-20 Thread Peter Xu
On Thu, Jul 20, 2017 at 05:49:47PM +0800, Peter Xu wrote:
> On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
> 
> [...]
> 
> >  static int multifd_send_page(uint8_t *address)
> >  {
> > -int i;
> > +int i, j;
> >  MultiFDSendParams *p = NULL; /* make happy gcc */
> > +static multifd_pages_t pages;
> > +static bool once;
> > +
> > +if (!once) {
> > +multifd_init_group(&pages);
> > +once = true;
> 
> Would it be good to put the "pages" into multifd_send_state? One is to
> stick globals together; another benefit is that we can remove the
> "once" here: we can then init the "pages" when init multifd_send_state
> struct (but maybe with a better name?...).
> 
> (there are similar static variables in multifd_recv_page() as well, if
>  this one applies, then we can possibly use multifd_recv_state for
>  that one)
> 
> > +}
> > +
> > +pages.iov[pages.num].iov_base = address;
> > +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
> > +pages.num++;
> > +
> > +if (pages.num < (pages.size - 1)) {
> > +return UINT16_MAX;
> 
> Nit: shall we define something for readability?  Like:
> 
> #define  MULTIFD_FD_INVALID  UINT16_MAX

Sorry, I misunderstood. INVALID may not suit here. Maybe
MULTIFD_FD_CONTINUE?

(afaiu we send this before we send the real fd_num for the chunk)

-- 
Peter Xu



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-07-20 Thread Peter Xu
On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:

[...]

>  static int multifd_send_page(uint8_t *address)
>  {
> -int i;
> +int i, j;
>  MultiFDSendParams *p = NULL; /* make happy gcc */
> +static multifd_pages_t pages;
> +static bool once;
> +
> +if (!once) {
> +multifd_init_group(&pages);
> +once = true;

Would it be good to put the "pages" into multifd_send_state? One is to
stick globals together; another benefit is that we can remove the
"once" here: we can then init the "pages" when init multifd_send_state
struct (but maybe with a better name?...).

(there are similar static variables in multifd_recv_page() as well, if
 this one applies, then we can possibly use multifd_recv_state for
 that one)

> +}
> +
> +pages.iov[pages.num].iov_base = address;
> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
> +pages.num++;
> +
> +if (pages.num < (pages.size - 1)) {
> +return UINT16_MAX;

Nit: shall we define something for readability?  Like:

#define  MULTIFD_FD_INVALID  UINT16_MAX

> +}
>  
>  qemu_sem_wait(&multifd_send_state->sem);
>  qemu_mutex_lock(&multifd_send_state->mutex);
> @@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
>  }
>  qemu_mutex_unlock(&multifd_send_state->mutex);
>  qemu_mutex_lock(&p->mutex);
> -p->address = address;
> +p->pages.num = pages.num;
> +for (j = 0; j < pages.size; j++) {
> +p->pages.iov[j].iov_base = pages.iov[j].iov_base;
> +p->pages.iov[j].iov_len = pages.iov[j].iov_len;
> +}
> +pages.num = 0;
>  qemu_mutex_unlock(&p->mutex);
>  qemu_sem_post(&p->sem);
>  
> -- 
> 2.9.4
> 

-- 
Peter Xu



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-07-20 Thread Dr. David Alan Gilbert
* Juan Quintela (quint...@redhat.com) wrote:
> We now send several pages at a time each time that we wake up a thread.
> 
> Signed-off-by: Juan Quintela 
> 
> --
> 
> Use iovecs instead of creating the equivalent.
> ---
>  migration/ram.c | 46 --
>  1 file changed, 40 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 2bf3fa7..90e1bcb 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -362,6 +362,13 @@ static void compress_threads_save_setup(void)
>  
>  /* Multiple fd's */
>  
> +
> +typedef struct {
> +int num;
> +int size;

size_t ?

> +struct iovec *iov;
> +} multifd_pages_t;
> +
>  struct MultiFDSendParams {
>  /* not changed */
>  uint8_t id;
> @@ -371,7 +378,7 @@ struct MultiFDSendParams {
>  QemuMutex mutex;
>  /* protected by param mutex */
>  bool quit;
> -uint8_t *address;
> +multifd_pages_t pages;
>  /* protected by multifd mutex */
>  bool done;
>  };
> @@ -459,8 +466,8 @@ static void *multifd_send_thread(void *opaque)
>  qemu_mutex_unlock(&p->mutex);
>  break;
>  }
> -if (p->address) {
> -p->address = 0;
> +if (p->pages.num) {
> +p->pages.num = 0;
>  qemu_mutex_unlock(&p->mutex);
>  qemu_mutex_lock(&multifd_send_state->mutex);
>  p->done = true;
> @@ -475,6 +482,13 @@ static void *multifd_send_thread(void *opaque)
>  return NULL;
>  }
>  
> +static void multifd_init_group(multifd_pages_t *pages)
> +{
> +pages->num = 0;
> +pages->size = migrate_multifd_group();
> +pages->iov = g_malloc0(pages->size * sizeof(struct iovec));

Does that get freed anywhere?

> +}
> +
>  int multifd_save_setup(void)
>  {
>  int thread_count;
> @@ -498,7 +512,7 @@ int multifd_save_setup(void)
>  p->quit = false;
>  p->id = i;
>  p->done = true;
> -p->address = 0;
> +multifd_init_group(&p->pages);
>  p->c = socket_send_channel_create();
>  if (!p->c) {
>  error_report("Error creating a send channel");
> @@ -515,8 +529,23 @@ int multifd_save_setup(void)
>  
>  static int multifd_send_page(uint8_t *address)
>  {
> -int i;
> +int i, j;
>  MultiFDSendParams *p = NULL; /* make happy gcc */
> +static multifd_pages_t pages;
> +static bool once;
> +
> +if (!once) {
> +multifd_init_group(&pages);
> +once = true;
> +}
> +
> +pages.iov[pages.num].iov_base = address;
> +pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
> +pages.num++;
> +
> +if (pages.num < (pages.size - 1)) {
> +return UINT16_MAX;

That's a very odd magic constant to return.
What's your intention?

> +}
>  
>  qemu_sem_wait(&multifd_send_state->sem);
>  qemu_mutex_lock(&multifd_send_state->mutex);
> @@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
>  }
>  qemu_mutex_unlock(&multifd_send_state->mutex);
>  qemu_mutex_lock(&p->mutex);
> -p->address = address;
> +p->pages.num = pages.num;
> +for (j = 0; j < pages.size; j++) {
> +p->pages.iov[j].iov_base = pages.iov[j].iov_base;
> +p->pages.iov[j].iov_len = pages.iov[j].iov_len;
> +}

It would seem more logical to update p->pages.num last

This is also a little odd in that iov_len is never really used,
it's always TARGET_PAGE_SIZE.
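
(Say, something like the following, copying only the filled entries
and publishing the count afterwards:

    for (j = 0; j < pages.num; j++) {
        p->pages.iov[j] = pages.iov[j];   /* struct assignment */
    }
    p->pages.num = pages.num;             /* update the count last */
)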

> +pages.num = 0;
>  qemu_mutex_unlock(&p->mutex);
>  qemu_sem_post(&p->sem);

What makes sure that any final chunk of pages that was less
than the group size is sent at the end?

Dave

> -- 
> 2.9.4
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-07-19 Thread Daniel P. Berrange
On Mon, Jul 17, 2017 at 03:42:32PM +0200, Juan Quintela wrote:
> We now send several pages at a time each time that we wake up a thread.
> 
> Signed-off-by: Juan Quintela 
> 
> --
> 
> Use iovecs instead of creating the equivalent.
> ---
>  migration/ram.c | 46 --
>  1 file changed, 40 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 2bf3fa7..90e1bcb 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c

> +static void multifd_init_group(multifd_pages_t *pages)
> +{
> +pages->num = 0;
> +pages->size = migrate_multifd_group();
> +pages->iov = g_malloc0(pages->size * sizeof(struct iovec));

Use g_new() so that it checks for overflow in the size calculation.

> +}
> +

Regards,
Daniel
-- 
|: https://berrange.com      -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org       -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



[Qemu-devel] [PATCH v5 11/17] migration: Really use multiple pages at a time

2017-07-17 Thread Juan Quintela
We now send several pages at a time each time that we wake up a thread.

Signed-off-by: Juan Quintela 

--

Use iovecs instead of creating the equivalent.
---
 migration/ram.c | 46 --
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 2bf3fa7..90e1bcb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -362,6 +362,13 @@ static void compress_threads_save_setup(void)
 
 /* Multiple fd's */
 
+
+typedef struct {
+int num;
+int size;
+struct iovec *iov;
+} multifd_pages_t;
+
 struct MultiFDSendParams {
 /* not changed */
 uint8_t id;
@@ -371,7 +378,7 @@ struct MultiFDSendParams {
 QemuMutex mutex;
 /* protected by param mutex */
 bool quit;
-uint8_t *address;
+multifd_pages_t pages;
 /* protected by multifd mutex */
 bool done;
 };
@@ -459,8 +466,8 @@ static void *multifd_send_thread(void *opaque)
 qemu_mutex_unlock(&p->mutex);
 break;
 }
-if (p->address) {
-p->address = 0;
+if (p->pages.num) {
+p->pages.num = 0;
 qemu_mutex_unlock(&p->mutex);
 qemu_mutex_lock(&multifd_send_state->mutex);
 p->done = true;
@@ -475,6 +482,13 @@ static void *multifd_send_thread(void *opaque)
 return NULL;
 }
 
+static void multifd_init_group(multifd_pages_t *pages)
+{
+pages->num = 0;
+pages->size = migrate_multifd_group();
+pages->iov = g_malloc0(pages->size * sizeof(struct iovec));
+}
+
 int multifd_save_setup(void)
 {
 int thread_count;
@@ -498,7 +512,7 @@ int multifd_save_setup(void)
 p->quit = false;
 p->id = i;
 p->done = true;
-p->address = 0;
+multifd_init_group(&p->pages);
 p->c = socket_send_channel_create();
 if (!p->c) {
 error_report("Error creating a send channel");
@@ -515,8 +529,23 @@ int multifd_save_setup(void)
 
 static int multifd_send_page(uint8_t *address)
 {
-int i;
+int i, j;
 MultiFDSendParams *p = NULL; /* make happy gcc */
+static multifd_pages_t pages;
+static bool once;
+
+if (!once) {
+multifd_init_group(&pages);
+once = true;
+}
+
+pages.iov[pages.num].iov_base = address;
+pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
+pages.num++;
+
+if (pages.num < (pages.size - 1)) {
+return UINT16_MAX;
+}
 
 qemu_sem_wait(&multifd_send_state->sem);
 qemu_mutex_lock(&multifd_send_state->mutex);
@@ -530,7 +559,12 @@ static int multifd_send_page(uint8_t *address)
 }
 qemu_mutex_unlock(&multifd_send_state->mutex);
 qemu_mutex_lock(&p->mutex);
-p->address = address;
+p->pages.num = pages.num;
+for (j = 0; j < pages.size; j++) {
+p->pages.iov[j].iov_base = pages.iov[j].iov_base;
+p->pages.iov[j].iov_len = pages.iov[j].iov_len;
+}
+pages.num = 0;
 qemu_mutex_unlock(&p->mutex);
 qemu_sem_post(&p->sem);
 
-- 
2.9.4