Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-20 Thread Paolo Bonzini


On 19/01/2016 04:17, Li, Liang Z wrote:
> > Paolo is right: for a VM on the destination, QEMU may write to the VM's
> > memory before the VM starts.
> > So your assumption that "VM's RAM pages are initialized to zero" is
> > incorrect.
> > This patch will break live migration.
> 
> Which portion of the VM's RAM pages will be written by QEMU? Do you have
> any exact information?
> I can't wait for Paolo's response.

It is basically anything that uses rom_add_file_fixed or
rom_add_blob_fixed with an address that points into RAM.
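
For illustration, a hypothetical board snippet showing that pattern (the
function name, blob contents, and load address here are made up; only
rom_add_blob_fixed() is the real loader API):

#include "qemu/osdep.h"
#include "hw/loader.h"

static const uint8_t setup_blob[16] = { 0x55, 0xaa };

static void example_board_init(void)
{
    /* Register the blob with the ROM loader; on reset QEMU copies it into
     * guest RAM at the given address.  On the destination this copy
     * happens before the incoming migration stream is applied, so that
     * page is already non-zero even though the guest never wrote it. */
    rom_add_blob_fixed("example-setup", setup_blob, sizeof(setup_blob),
                       0x1000);
}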

Paolo



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-20 Thread Li, Liang Z
> > > This patch will break live migration.
> >
> > Which portion of the VM's RAM pages will be written by QEMU? Do you have
> > any exact information?
> > I can't wait for Paolo's response.
> 
> It is basically anything that uses rom_add_file_fixed or rom_add_blob_fixed
> with an address that points into RAM.
> 
> Paolo

Thanks a lot!

Liang




Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Hailiang Zhang

On 2016/1/16 2:57, Dr. David Alan Gilbert wrote:

* Liang Li (liang.z...@intel.com) wrote:

Now that the VM's RAM pages are initialized to zero (the VM's RAM is
allocated with mmap() and the MAP_ANONYMOUS option, or with mmap() without
MAP_SHARED if hugetlbfs is used), there is no need to send the zero page
header to the destination.

For a guest that uses only a small portion of its RAM, this change can
avoid allocating all of the guest's RAM pages on the destination node
after live migration. Another benefit is that the destination QEMU can
save lots of CPU cycles on zero page checking.


I think this would break postcopy, because the zero pages wouldn't be
filled in, so accessing them would still generate a userfault.
So you'd have to disable this optimisation if postcopy is enabled
(even during the precopy bulk stage).

Also, are you sure about the benefits?
  The destination guest's RAM should not be allocated on receiving a zero
page; see ram_handle_compressed, it doesn't write to the page if
it's zero, so it shouldn't cause an allocation.  I think you're probably
correct about the zero page test on the destination, I wonder if we
can speed that up.



Yes, we have already optimized the zero page allocation on the destination,
but this patch can reduce the amount of data transferred and the time
spent on zero page checking, which can reduce the total migration time.



Dave



Signed-off-by: Liang Li 
---
  migration/ram.c | 10 ++++++----
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4e606ab..c4821d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         acct_info.dup_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        if (!ram_bulk_stage) {
+            *bytes_transferred += save_page_header(f, block, offset |
+                                                   RAM_SAVE_FLAG_COMPRESS);
+            qemu_put_byte(f, 0);
+            *bytes_transferred += 1;
+        }
         pages = 1;
     }

--
1.9.1


--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK








Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Hailiang Zhang

Hi,

On 2016/1/15 18:24, Li, Liang Z wrote:

It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage, we don't send the newly zeroed pages, and there
will be an error.



If we are not in ram_bulk_stage, we still send the header; could you explain
why it's wrong?

Liang



I made a mistake; yes, this patch can speed up live migration, and the
effect will be more obvious when there are many zero pages.
I like this idea. Did you test it with postcopy? Does it break postcopy?

Thanks,
zhanghailiang


For a guest that uses only a small portion of its RAM, this change can
avoid allocating all of the guest's RAM pages on the destination node
after live migration. Another benefit is that the destination QEMU can
save lots of CPU cycles on zero page checking.

Signed-off-by: Liang Li 
---
   migration/ram.c | 10 ++++++----
   1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4e606ab..c4821d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         acct_info.dup_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        if (!ram_bulk_stage) {
+            *bytes_transferred += save_page_header(f, block, offset |
+                                                   RAM_SAVE_FLAG_COMPRESS);
+            qemu_put_byte(f, 0);
+            *bytes_transferred += 1;
+        }
         pages = 1;
     }















Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Dr. David Alan Gilbert
* Li, Liang Z (liang.z...@intel.com) wrote:
> > * Liang Li (liang.z...@intel.com) wrote:
> > > Now that the VM's RAM pages are initialized to zero (the VM's RAM is
> > > allocated with mmap() and the MAP_ANONYMOUS option, or with mmap()
> > > without MAP_SHARED if hugetlbfs is used), there is no need to send the
> > > zero page header to the destination.
> > >
> > > For a guest that uses only a small portion of its RAM, this change can
> > > avoid allocating all of the guest's RAM pages on the destination node
> > > after live migration. Another benefit is that the destination QEMU can
> > > save lots of CPU cycles on zero page checking.
> > 
> > I think this would break postcopy, because the zero pages wouldn't be filled
> > in, so accessing them would still generate a userfault.
> > So you'd have to disable this optimisation if postcopy is enabled (even 
> > during
> > the precopy bulk stage).
> > 
> > Also, are you sure about the benefits?
> >  The destination guest's RAM should not be allocated on receiving a zero
> > page; see ram_handle_compressed, it doesn't write to the page if it's
> > zero, so it shouldn't cause an allocation.  I think you're probably
> > correct about the zero page test on the destination, I wonder if we can
> > speed that up.
> > 
> > Dave
> 
> I have tested the performance: with an 8GB guest just booted, this patch
> can reduce the total live migration time by about 10%.
> Unfortunately, Paolo said this patch would break live migration in some
> cases.
> 
> For the zero page test on the destination, if the page really is a zero
> page, the test is faster than writing a whole page of zeroes.

There shouldn't be a write on the destination though; it checks whether
the page is already zero and does the write only if it's non-zero, and
it should rarely be non-zero.

Dave

> 
> Liang
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Li, Liang Z
> On 2016/1/15 18:24, Li, Liang Z wrote:
> >> It seems that this patch is incorrect: if non-zero pages are zeroed
> >> again during !ram_bulk_stage, we don't send the newly zeroed pages,
> >> and there will be an error.
> >>
> >
> > If we are not in ram_bulk_stage, we still send the header; could you
> > explain why it's wrong?
> >
> > Liang
> >
> 
> I made a mistake; yes, this patch can speed up live migration, and the
> effect will be more obvious when there are many zero pages.
> I like this idea. Did you test it with postcopy? Does it break postcopy?
> 

Not yet. I saw Dave's comments; it will break postcopy, but that's not hard
to fix (one possible shape of the fix is sketched below).
A more important thing is Paolo's comment: I don't know in which cases this
patch will break live migration. Do you have any idea about this?
Hopefully QEMU doesn't write data to the RAMBlock 'pc.ram'.
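
A minimal sketch of such a fix, assuming migrate_postcopy_ram() is the
appropriate capability check (untested, illustrative only):

    if (is_zero_range(p, TARGET_PAGE_SIZE)) {
        acct_info.dup_pages++;
        /* Elide the zero page header only in the bulk stage, and never
         * when postcopy is enabled: postcopy needs every page populated
         * on the destination to avoid later userfaults. */
        if (!ram_bulk_stage || migrate_postcopy_ram()) {
            *bytes_transferred += save_page_header(f, block,
                                                   offset | RAM_SAVE_FLAG_COMPRESS);
            qemu_put_byte(f, 0);
            *bytes_transferred += 1;
        }
        pages = 1;
    }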

Liang

> Thanks,
> zhanghailiang
> 



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Hailiang Zhang

On 2016/1/19 11:11, Hailiang Zhang wrote:

On 2016/1/19 9:26, Li, Liang Z wrote:

On 2016/1/15 18:24, Li, Liang Z wrote:

It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage, we don't send the newly zeroed pages, and there
will be an error.



If we are not in ram_bulk_stage, we still send the header; could you explain
why it's wrong?


Liang



I made a mistake; yes, this patch can speed up live migration, and the
effect will be more obvious when there are many zero pages.
I like this idea. Did you test it with postcopy? Does it break postcopy?



Not yet. I saw Dave's comments; it will break postcopy, but that's not hard
to fix.
A more important thing is Paolo's comment: I don't know in which cases this
patch will break live migration. Do you have any idea about this?
Hopefully QEMU doesn't write data to the RAMBlock 'pc.ram'.



Paolo is right: for a VM on the destination, QEMU may write to the VM's
memory before the VM starts.
So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
This patch will break live migration.



Actually, someone did this before and it caused a migration bug; see
commit f1c72795af573b24a7da5eb52375c9aba8a37972, and the fix is
commit 9ef051e5536b6368a1076046ec6c4ec4ac12b5c6,
Revert "migration: do not sent zero pages in bulk stage".


Liang


Thanks,
zhanghailiang












Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Li, Liang Z
> > Not yet. I saw Dave's comments; it will break postcopy, but that's not
> > hard to fix.
> > A more important thing is Paolo's comment: I don't know in which cases
> > this patch will break live migration. Do you have any idea about this?
> > Hopefully QEMU doesn't write data to the RAMBlock 'pc.ram'.
> >
> 
> Paolo is right: for a VM on the destination, QEMU may write to the VM's
> memory before the VM starts.
> So your assumption that "VM's RAM pages are initialized to zero" is
> incorrect.
> This patch will break live migration.
> 

Which portion of the VM's RAM pages will be written by QEMU? Do you have
any exact information?
I can't wait for Paolo's response.

Liang



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Li, Liang Z
> Actually, someone did this before and it caused a migration bug; see
> commit f1c72795af573b24a7da5eb52375c9aba8a37972, and the fix is
> commit 9ef051e5536b6368a1076046ec6c4ec4ac12b5c6,
> Revert "migration: do not sent zero pages in bulk stage"

Thanks for the information; I hadn't noticed that before. Maybe there is a
workaround instead of reverting; I need to investigate further.

Liang



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-18 Thread Hailiang Zhang

On 2016/1/19 9:26, Li, Liang Z wrote:

On 2016/1/15 18:24, Li, Liang Z wrote:

It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage, we don't send the newly zeroed pages, and there
will be an error.



If we are not in ram_bulk_stage, we still send the header; could you explain
why it's wrong?


Liang



I made a mistake; yes, this patch can speed up live migration, and the
effect will be more obvious when there are many zero pages.
I like this idea. Did you test it with postcopy? Does it break postcopy?



Not yet. I saw Dave's comments; it will break postcopy, but that's not hard
to fix.
A more important thing is Paolo's comment: I don't know in which cases this
patch will break live migration. Do you have any idea about this?
Hopefully QEMU doesn't write data to the RAMBlock 'pc.ram'.



Paolo is right: for a VM on the destination, QEMU may write to the VM's
memory before the VM starts.
So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
This patch will break live migration.


Liang


Thanks,
zhanghailiang










Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-16 Thread Li, Liang Z
> On 15/01/2016 10:48, Liang Li wrote:
> > Now that the VM's RAM pages are initialized to zero (the VM's RAM is
> > allocated with mmap() and the MAP_ANONYMOUS option, or with mmap()
> > without MAP_SHARED if hugetlbfs is used), there is no need to send the
> > zero page header to the destination.
> >
> > For a guest that uses only a small portion of its RAM, this change can
> > avoid allocating all of the guest's RAM pages on the destination node
> > after live migration. Another benefit is that the destination QEMU can
> > save lots of CPU cycles on zero page checking.
> >
> > Signed-off-by: Liang Li 
> 
> This does not work.  Depending on the board, some pages are written by
> QEMU before the guest starts.  If the guest rewrites them with zeroes, this
> change breaks migration.
> 
> Paolo

Hi Paolo,

   Luckily I CC'd you. Could you give an example of a case in which this
patch will break migration?
Then I can understand your comments better. Much appreciated!


Liang


> 
> > ---
> >  migration/ram.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4e606ab..c4821d1 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> >  
> >      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> >          acct_info.dup_pages++;
> > -        *bytes_transferred += save_page_header(f, block,
> > -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> > -        qemu_put_byte(f, 0);
> > -        *bytes_transferred += 1;
> > +        if (!ram_bulk_stage) {
> > +            *bytes_transferred += save_page_header(f, block, offset |
> > +                                                   RAM_SAVE_FLAG_COMPRESS);
> > +            qemu_put_byte(f, 0);
> > +            *bytes_transferred += 1;
> > +        }
> >          pages = 1;
> >      }
> >
> >



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-16 Thread Li, Liang Z
> * Liang Li (liang.z...@intel.com) wrote:
> > Now that the VM's RAM pages are initialized to zero (the VM's RAM is
> > allocated with mmap() and the MAP_ANONYMOUS option, or with mmap()
> > without MAP_SHARED if hugetlbfs is used), there is no need to send the
> > zero page header to the destination.
> >
> > For a guest that uses only a small portion of its RAM, this change can
> > avoid allocating all of the guest's RAM pages on the destination node
> > after live migration. Another benefit is that the destination QEMU can
> > save lots of CPU cycles on zero page checking.
> 
> I think this would break postcopy, because the zero pages wouldn't be filled
> in, so accessing them would still generate a userfault.
> So you'd have to disable this optimisation if postcopy is enabled (even during
> the precopy bulk stage).
> 
> Also, are you sure about the benefits?
>  The destination guest's RAM should not be allocated on receiving a zero
> page; see ram_handle_compressed, it doesn't write to the page if it's
> zero, so it shouldn't cause an allocation.  I think you're probably
> correct about the zero page test on the destination, I wonder if we can
> speed that up.
> 
> Dave

I have tested the performance: with an 8GB guest just booted, this patch can
reduce the total live migration time by about 10%.
Unfortunately, Paolo said this patch would break live migration in some cases.

For the zero page test on the destination, if the page really is a zero page,
the test is faster than writing a whole page of zeroes.
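
A standalone sketch of why the read-only test wins (illustrative only;
QEMU's real check is the optimized is_zero_range()/buffer_is_zero() path):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A read-only scan never dirties the page, so an untouched anonymous page
 * stays backed by the kernel's shared zero page and nothing is allocated.
 * A memset(page, 0, size) would write-fault and allocate a private copy.
 * Assumes p is 8-byte aligned, as page-aligned buffers are. */
static bool page_is_zero(const uint8_t *p, size_t size)
{
    const uint64_t *q = (const uint64_t *)p;
    for (size_t i = 0; i < size / sizeof(*q); i++) {
        if (q[i]) {
            return false;
        }
    }
    return true;
}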

Liang





Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-15 Thread Hailiang Zhang

On 2016/1/15 17:48, Liang Li wrote:

Now that the VM's RAM pages are initialized to zero (the VM's RAM is
allocated with mmap() and the MAP_ANONYMOUS option, or with mmap() without
MAP_SHARED if hugetlbfs is used), there is no need to send the zero page
header to the destination.



It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage, we don't send the newly zeroed pages, and there
will be an error.


For a guest that uses only a small portion of its RAM, this change can
avoid allocating all of the guest's RAM pages on the destination node
after live migration. Another benefit is that the destination QEMU can
save lots of CPU cycles on zero page checking.

Signed-off-by: Liang Li 
---
  migration/ram.c | 10 ++++++----
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4e606ab..c4821d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         acct_info.dup_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        if (!ram_bulk_stage) {
+            *bytes_transferred += save_page_header(f, block, offset |
+                                                   RAM_SAVE_FLAG_COMPRESS);
+            qemu_put_byte(f, 0);
+            *bytes_transferred += 1;
+        }
         pages = 1;
     }








Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-15 Thread Li, Liang Z
> It seems that this patch is incorrect: if non-zero pages are zeroed again
> during !ram_bulk_stage, we don't send the newly zeroed pages, and there
> will be an error.
> 

If we are not in ram_bulk_stage, we still send the header; could you explain
why it's wrong?

Liang

> > For a guest that uses only a small portion of its RAM, this change can
> > avoid allocating all of the guest's RAM pages on the destination node
> > after live migration. Another benefit is that the destination QEMU can
> > save lots of CPU cycles on zero page checking.
> >
> > Signed-off-by: Liang Li 
> > ---
> >   migration/ram.c | 10 ++++++----
> >   1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4e606ab..c4821d1 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> >  
> >      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> >          acct_info.dup_pages++;
> > -        *bytes_transferred += save_page_header(f, block,
> > -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> > -        qemu_put_byte(f, 0);
> > -        *bytes_transferred += 1;
> > +        if (!ram_bulk_stage) {
> > +            *bytes_transferred += save_page_header(f, block, offset |
> > +                                                   RAM_SAVE_FLAG_COMPRESS);
> > +            qemu_put_byte(f, 0);
> > +            *bytes_transferred += 1;
> > +        }
> >          pages = 1;
> >      }
> >
> >
> 
> 




Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-15 Thread Paolo Bonzini


On 15/01/2016 10:48, Liang Li wrote:
> Now that the VM's RAM pages are initialized to zero (the VM's RAM is
> allocated with mmap() and the MAP_ANONYMOUS option, or with mmap() without
> MAP_SHARED if hugetlbfs is used), there is no need to send the zero page
> header to the destination.
> 
> For a guest that uses only a small portion of its RAM, this change can
> avoid allocating all of the guest's RAM pages on the destination node
> after live migration. Another benefit is that the destination QEMU can
> save lots of CPU cycles on zero page checking.
> 
> Signed-off-by: Liang Li 

This does not work.  Depending on the board, some pages are written by
QEMU before the guest starts.  If the guest rewrites them with zeroes,
this change breaks migration.

Paolo

> ---
>  migration/ram.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          acct_info.dup_pages++;
> -        *bytes_transferred += save_page_header(f, block,
> -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> -        qemu_put_byte(f, 0);
> -        *bytes_transferred += 1;
> +        if (!ram_bulk_stage) {
> +            *bytes_transferred += save_page_header(f, block, offset |
> +                                                   RAM_SAVE_FLAG_COMPRESS);
> +            qemu_put_byte(f, 0);
> +            *bytes_transferred += 1;
> +        }
>          pages = 1;
>      }
>  
> 



Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-15 Thread Dr. David Alan Gilbert
* Liang Li (liang.z...@intel.com) wrote:
> Now that the VM's RAM pages are initialized to zero (the VM's RAM is
> allocated with mmap() and the MAP_ANONYMOUS option, or with mmap() without
> MAP_SHARED if hugetlbfs is used), there is no need to send the zero page
> header to the destination.
> 
> For a guest that uses only a small portion of its RAM, this change can
> avoid allocating all of the guest's RAM pages on the destination node
> after live migration. Another benefit is that the destination QEMU can
> save lots of CPU cycles on zero page checking.

I think this would break postcopy, because the zero pages wouldn't be
filled in, so accessing them would still generate a userfault.
So you'd have to disable this optimisation if postcopy is enabled
(even during the precopy bulk stage).

Also, are you sure about the benefits?
 The destination guest's RAM should not be allocated on receiving a zero
page; see ram_handle_compressed, it doesn't write to the page if
it's zero, so it shouldn't cause an allocation.  I think you're probably
correct about the zero page test on the destination, I wonder if we
can speed that up.
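
For reference, the destination-side path in question looks roughly like
this (paraphrased from migration/ram.c of this era):

void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
{
    /* Write only if the fill byte is non-zero or the page already holds
     * non-zero data; a freshly mapped anonymous page passes the read-only
     * zero test, so no write happens and nothing is allocated. */
    if (ch != 0 || !is_zero_range(host, size)) {
        memset(host, ch, size);
    }
}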

Dave

> 
> Signed-off-by: Liang Li 
> ---
>  migration/ram.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          acct_info.dup_pages++;
> -        *bytes_transferred += save_page_header(f, block,
> -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> -        qemu_put_byte(f, 0);
> -        *bytes_transferred += 1;
> +        if (!ram_bulk_stage) {
> +            *bytes_transferred += save_page_header(f, block, offset |
> +                                                   RAM_SAVE_FLAG_COMPRESS);
> +            qemu_put_byte(f, 0);
> +            *bytes_transferred += 1;
> +        }
>          pages = 1;
>      }
>  
> -- 
> 1.9.1
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



[Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage

2016-01-15 Thread Liang Li
Now that the VM's RAM pages are initialized to zero (the VM's RAM is
allocated with mmap() and the MAP_ANONYMOUS option, or with mmap() without
MAP_SHARED if hugetlbfs is used), there is no need to send the zero page
header to the destination.
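
As a standalone illustration of that zero-fill guarantee (a minimal sketch,
not part of the patch itself):

#include <assert.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Anonymous private mappings are zero-filled by the kernel; this is
     * the property the paragraph above relies on for fresh guest RAM. */
    size_t len = 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);

    static const unsigned char zeroes[4096];
    assert(memcmp(p, zeroes, len) == 0);   /* a fresh page reads as zero */

    munmap(p, len);
    return 0;
}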

For a guest that uses only a small portion of its RAM, this change can
avoid allocating all of the guest's RAM pages on the destination node
after live migration. Another benefit is that the destination QEMU can
save lots of CPU cycles on zero page checking.

Signed-off-by: Liang Li 
---
 migration/ram.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4e606ab..c4821d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         acct_info.dup_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        if (!ram_bulk_stage) {
+            *bytes_transferred += save_page_header(f, block, offset |
+                                                   RAM_SAVE_FLAG_COMPRESS);
+            qemu_put_byte(f, 0);
+            *bytes_transferred += 1;
+        }
         pages = 1;
     }
 
-- 
1.9.1