On Wed, 2018-01-10 at 20:17 +0000, Stefan Hajnoczi wrote:
> On Wed, Jan 10, 2018 at 8:15 PM, Dr. David Alan Gilbert
> <dgilb...@redhat.com> wrote:
> > 
> > * Stefan Hajnoczi (stefa...@gmail.com) wrote:
> > > 
> > > On Tue, Jan 9, 2018 at 7:55 PM, Dr. David Alan Gilbert
> > > <dgilb...@redhat.com> wrote:
> > > > 
> > > > > 
> > > > > Certain guest operations like rebooting or zeroing memory
> > > > > will defeat
> > > > > the incremental guest RAM snapshot feature.  It's worth
> > > > > thinking about
> > > > > these cases to make sure this feature would be worth it in
> > > > > real use
> > > > > cases.
> > > > But those probably wouldn't upset an NVDimm?
> > > If the guest dirties all RAM then the incremental snapshot
> > > feature
> > > degrades to a full snapshot.  I'm asking if there are common
> > > operations where that happens.
> > > 
> > > I seem to remember Windows guests zero all pages on cold
> > > boot.  Maybe
> > > that's not the case anymore.
> > > 
> > > Worth checking before embarking on this feature because it could
> > > be a
> > > waste of effort if it turns out real-world guests dirty all
> > > memory in
> > > common cases.
> > Right, but I'm hoping that there's some magic somewhere where an
> > NVDimm doesn't
> > get zero'd because of a cold boot since that would seem to make it
> > volatile.
> This feature isn't specific to NVDIMM though.  It would be equally
> useful for regular RAM.
> 
> Stefan
> 

Thanks for all your advice.
I have already done a lot of investigation and written some code. My
thinking is as follows:
1. As a first step, I will use the is_active() callback to make the
feature apply only to NVDIMM memory regions and only to snapshot
saving, not to live migration. I understand it could work for all
kinds of memory, but keeping dirty log tracking enabled all the time
may hurt guest performance. For live migration all the data needs to
be copied anyway, so it does not seem to benefit from this approach.
2. Saving and loading are relatively easy to do, while deleting a
snapshot point needs a lot of work. The current framework simply
deletes all the data of a snapshot point in one shot. I want to add
reference counting to the QCOW2 L1/L2 tables so that when a cluster's
data is depended on by other snapshot points, the data is kept when
deleting.

I will also check later whether a cold boot zeros all pages.
 
Thanks,
Junyan