> It seems that this patch is incorrect: if non-zero pages are zeroed again
> during !ram_bulk_stage, we don't send the newly zeroed page, so there will
> be an error.
> 

If we are not in ram_bulk_stage, the header is still sent. Could you explain
why it's wrong?
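
To make the intent explicit, here is the patched hunk again with the
reasoning spelled out in comments. This is just an annotated sketch of the
logic, and it assumes the destination's guest RAM starts out as untouched,
zero-filled anonymous memory, which is what lets the bulk stage skip these
pages:

    if (is_zero_range(p, TARGET_PAGE_SIZE)) {
        acct_info.dup_pages++;
        /* During ram_bulk_stage (the first complete pass over guest RAM),
         * the destination has not written to this page yet, so it is still
         * zero-filled anonymous memory. Sending nothing leaves it correct
         * and avoids faulting the page in on the destination.
         *
         * After the bulk stage, the page may already hold non-zero data on
         * the destination, so the zero-page header must be sent to
         * overwrite the stale contents.
         */
        if (!ram_bulk_stage) {
            *bytes_transferred += save_page_header(f, block,
                                                   offset | RAM_SAVE_FLAG_COMPRESS);
            qemu_put_byte(f, 0);
            *bytes_transferred += 1;
        }
        pages = 1;
    }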

Liang

> > For a guest that uses only a small portion of its RAM, this change can
> > avoid allocating all of the guest's RAM pages on the destination node
> > after live migration. Another benefit is that the destination QEMU can
> > save lots of CPU cycles on zero page checking.
> >
> > Signed-off-by: Liang Li <liang.z...@intel.com>
> > ---
> >   migration/ram.c | 10 ++++++----
> >   1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4e606ab..c4821d1 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> >
> >       if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> >           acct_info.dup_pages++;
> > -        *bytes_transferred += save_page_header(f, block,
> > -                                               offset | RAM_SAVE_FLAG_COMPRESS);
> > -        qemu_put_byte(f, 0);
> > -        *bytes_transferred += 1;
> > +        if (!ram_bulk_stage) {
> > +            *bytes_transferred += save_page_header(f, block, offset |
> > +                                                   RAM_SAVE_FLAG_COMPRESS);
> > +            qemu_put_byte(f, 0);
> > +            *bytes_transferred += 1;
> > +        }
> >           pages = 1;
> >       }
> >