RE: [PATCH v0 0/4] background snapshot
Hi David,

Thanks for cc'ing me, it is really exciting to know that the
write-protect feature has finally been merged. Apart from live memory
snapshot, I'm thinking about whether we can use it to realize real
memory throttling in migration, since we can still come across dirty
pages failing to converge with the current CPU throttle method. We may
use the write-protect capability to slow down the speed of access to
the guest's memory, in order to slow down the dirtying of pages... I'll
look into it.

Besides, I'll follow this snapshot series and see if I can do some work
to help get this feature polished enough to be accepted as quickly as
possible. ;)

Thanks,
Hailiang

> -----Original Message-----
> From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com]
> Sent: Tuesday, July 28, 2020 1:00 AM
> To: Denis Plotnikov ; da...@redhat.com; Zhanghailiang
> Cc: qemu-devel@nongnu.org; pbonz...@redhat.com; quint...@redhat.com;
> ebl...@redhat.com; arm...@redhat.com; pet...@redhat.com; d...@openvz.org
> Subject: Re: [PATCH v0 0/4] background snapshot
>
> * Denis Plotnikov (dplotni...@virtuozzo.com) wrote:
> > Currently there is no way to make a VM snapshot without pausing the
> > VM for the whole time until the snapshot is done. So, the problem is
> > the VM downtime on snapshotting. The downtime value depends on the
> > vmstate size, the major part of which is RAM, and on the performance
> > of the disk used for saving the snapshot.
> >
> > The series proposes a way to reduce the VM snapshot downtime. This
> > is done by saving RAM, the major part of the vmstate, in the
> > background while the VM is running.
> >
> > The background snapshot uses the Linux UFFD write-protected mode for
> > intercepting memory page accesses. UFFD write-protected mode was
> > added in Linux v5.7. If UFFD write-protected mode isn't available,
> > the background snapshot refuses to run.
>
> Hi Denis,
>   I see Peter has responded to most of your patches, but just wanted
> to say thank you; but also to cc in a couple of other people: David
> Hildenbrand (who is interested in unusual memory stuff) and
> zhanghailiang, who works on COLO, which also does snapshotting and had
> long wanted to use WP.
>
> 2/4 was a bit big for my liking; please try and do it in smaller
> chunks!
>
> Dave
>
> > How to use:
> > 1. enable the background snapshot capability
> >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> >    background-snapshot on
> >
> > 2. stop the vm
> >    virsh qemu-monitor-command vm --hmp stop
> >
> > 3. Start the external migration to a file
> >    virsh qemu-monitor-command cent78-bs --hmp migrate
> >    exec:'cat > ./vm_state'
> >
> > 4. Wait for the migration to finish and check that the migration is
> >    in the "completed" state.
> >
> > Denis Plotnikov (4):
> >   bitops: add some atomic versions of bitmap operations
> >   migration: add background snapshot capability
> >   migration: add background snapshot
> >   background snapshot: add trace events for page fault processing
> >
> >  qapi/migration.json     |   7 +-
> >  include/exec/ramblock.h |   8 +
> >  include/exec/ramlist.h  |   2 +
> >  include/qemu/bitops.h   |  25 ++
> >  migration/migration.h   |   1 +
> >  migration/ram.h         |  19 +-
> >  migration/savevm.h      |   3 +
> >  migration/migration.c   | 142 +-
> >  migration/ram.c         | 582 ++--
> >  migration/savevm.c      |   1 -
> >  migration/trace-events  |   2 +
> >  11 files changed, 771 insertions(+), 21 deletions(-)
> >
> > --
> > 2.17.0
>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
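Hailiang's throttling idea could, in rough strokes, look like the loop
below: once guest RAM is write-protected through userfaultfd, each
write fault is resolved only after a short delay, which caps the
guest's dirty-page rate. This is a hypothetical sketch of the idea, not
code from any posted series; the function name and delay parameter are
made up for illustration.

    #include <linux/userfaultfd.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Hypothetical: delay each write-protect fault before resolving
     * it, so a guest dirtying memory at full speed is slowed down. */
    static void throttle_fault_loop(int uffd, uint64_t page_size,
                                    unsigned int delay_us)
    {
        struct uffd_msg msg;

        while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
            if (msg.event != UFFD_EVENT_PAGEFAULT ||
                !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP)) {
                continue;
            }

            usleep(delay_us); /* the throttle itself */

            /* Drop write protection so the faulting vCPU proceeds. */
            struct uffdio_writeprotect wp = {
                .range = {
                    .start = msg.arg.pagefault.address & ~(page_size - 1),
                    .len   = page_size,
                },
                .mode = 0, /* 0 clears the write protection */
            };
            ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
        }
    }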
Re: [PATCH v0 0/4] background snapshot
* Denis Plotnikov (dplotni...@virtuozzo.com) wrote:
> Currently there is no way to make a VM snapshot without pausing the VM
> for the whole time until the snapshot is done. So, the problem is the
> VM downtime on snapshotting. The downtime value depends on the vmstate
> size, the major part of which is RAM, and on the performance of the
> disk used for saving the snapshot.
>
> The series proposes a way to reduce the VM snapshot downtime. This is
> done by saving RAM, the major part of the vmstate, in the background
> while the VM is running.
>
> The background snapshot uses the Linux UFFD write-protected mode for
> intercepting memory page accesses. UFFD write-protected mode was added
> in Linux v5.7. If UFFD write-protected mode isn't available, the
> background snapshot refuses to run.

Hi Denis,
  I see Peter has responded to most of your patches, but just wanted to
say thank you; but also to cc in a couple of other people: David
Hildenbrand (who is interested in unusual memory stuff) and
zhanghailiang, who works on COLO, which also does snapshotting and had
long wanted to use WP.

2/4 was a bit big for my liking; please try and do it in smaller chunks!

Dave

> How to use:
> 1. enable the background snapshot capability
>    virsh qemu-monitor-command vm --hmp migrate_set_capability
>    background-snapshot on
>
> 2. stop the vm
>    virsh qemu-monitor-command vm --hmp stop
>
> 3. Start the external migration to a file
>    virsh qemu-monitor-command cent78-bs --hmp migrate
>    exec:'cat > ./vm_state'
>
> 4. Wait for the migration to finish and check that the migration is in
>    the "completed" state.
>
> Denis Plotnikov (4):
>   bitops: add some atomic versions of bitmap operations
>   migration: add background snapshot capability
>   migration: add background snapshot
>   background snapshot: add trace events for page fault processing
>
>  qapi/migration.json     |   7 +-
>  include/exec/ramblock.h |   8 +
>  include/exec/ramlist.h  |   2 +
>  include/qemu/bitops.h   |  25 ++
>  migration/migration.h   |   1 +
>  migration/ram.h         |  19 +-
>  migration/savevm.h      |   3 +
>  migration/migration.c   | 142 +-
>  migration/ram.c         | 582 ++--
>  migration/savevm.c      |   1 -
>  migration/trace-events  |   2 +
>  11 files changed, 771 insertions(+), 21 deletions(-)
>
> --
> 2.17.0

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
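For context on the interception mechanism the cover letter describes, a
minimal sketch of how one RAM range can be armed for write-protect
tracking with the userfaultfd API: first register the range for WP
events, then mark it write-protected. The function name and the omitted
error handling are illustrative; this is not the series' actual code.

    #include <linux/userfaultfd.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    static int wp_arm_range(int uffd, void *host, uint64_t len)
    {
        /* Ask for write-protect fault events on this range... */
        struct uffdio_register reg = {
            .range = { .start = (uintptr_t)host, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_WP,
        };
        if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
            return -1;
        }

        /* ...and actually mark every page write-protected. From now
         * on a guest write faults into user space, where the page can
         * be saved before the protection is removed again. */
        struct uffdio_writeprotect wp = {
            .range = { .start = (uintptr_t)host, .len = len },
            .mode  = UFFDIO_WRITEPROTECT_MODE_WP,
        };
        return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
    }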
Re: [PATCH v0 0/4] background snapshot
On Fri, Jul 24, 2020 at 11:06:17AM +0300, Denis Plotnikov wrote:
>
> On 23.07.2020 20:39, Peter Xu wrote:
> > On Thu, Jul 23, 2020 at 11:03:55AM +0300, Denis Plotnikov wrote:
> > >
> > > On 22.07.2020 19:30, Peter Xu wrote:
> > > > On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:
> > > > >
> > > > > On 22.07.2020 18:42, Denis Plotnikov wrote:
> > > > > >
> > > > > > On 22.07.2020 17:50, Peter Xu wrote:
> > > > > > > Hi, Denis,
> > > > > > Hi, Peter
> > > > > > > ...
> > > > > > > > How to use:
> > > > > > > > 1. enable the background snapshot capability
> > > > > > > >    virsh qemu-monitor-command vm --hmp
> > > > > > > >    migrate_set_capability background-snapshot on
> > > > > > > >
> > > > > > > > 2. stop the vm
> > > > > > > >    virsh qemu-monitor-command vm --hmp stop
> > > > > > > >
> > > > > > > > 3. Start the external migration to a file
> > > > > > > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > > > > > > >    exec:'cat > ./vm_state'
> > > > > > > >
> > > > > > > > 4. Wait for the migration to finish and check that the
> > > > > > > >    migration is in the "completed" state.
> > > > > > > Thanks for your continued work on this project! I have two
> > > > > > > high-level questions before digging into the patches.
> > > > > > >
> > > > > > > Firstly, is step 2 required? Can we use a single QMP
> > > > > > > command to take snapshots (which can still be a "migrate"
> > > > > > > command)?
> > > > > > With this series it is required, but steps 2 and 3 should be
> > > > > > merged into a single one.
> > > > I'm not sure whether you're talking about the disk snapshot
> > > > operations; anyway, yeah, it'll definitely be good if we merge
> > > > them into one in the next version.
> > >
> > > After thinking for a while, I remembered why I split these two
> > > steps. The VM snapshot consists of two parts: disk snapshot(s) and
> > > the vmstate. With the migrate command we save the vmstate only.
> > > So, the steps to save the whole VM snapshot are the following:
> > >
> > > 2. stop the vm
> > >    virsh qemu-monitor-command vm --hmp stop
> > >
> > > 2.1. Make a disk snapshot, something like
> > >    virsh qemu-monitor-command vm --hmp snapshot_blkdev
> > >    drive-scsi0-0-0-0 ./new_data
> > >
> > > 3. Start the external migration to a file
> > >    virsh qemu-monitor-command vm --hmp migrate
> > >    exec:'cat > ./vm_state'
> > >
> > > In this example, the VM snapshot consists of two files: vm_state
> > > and the disk file. new_data will contain all new disk data written
> > > since [2.1] was executed.
> > But that's slightly different from the current interface of savevm
> > and loadvm, which only requires a snapshot name, am I right?
>
> Yes
>
> > Now we need both a snapshot name (of the vmstate) and the name of
> > the new snapshot image.
>
> Yes
>
> > I'm not familiar with qemu image snapshots... my understanding is
> > that the current snapshot (save_snapshot) uses internal image
> > snapshots, while in this proposal you want the live snapshot to use
> > external snapshots.
>
> Correct, I want to add the ability to make an external live snapshot.
> (live = async RAM writing)
>
> > Is there any criteria for making this decision/change?
>
> Internal snapshot is supported by qcow2 and sheepdog (I've never heard
> of anyone using the latter).
> Because of the qcow2 internal snapshot design, it's quite complex to
> implement a "background" snapshot there.
> More details here:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg705116.html
> So, I decided to start with external snapshots, to implement and prove
> out the memory access intercepting part first.
> Once that's done for external snapshots, we can start to approach
> internal snapshots.

Fair enough.  Let's start with external snapshot then.

Thanks,

--
Peter Xu
Re: [PATCH v0 0/4] background snapshot
On 23.07.2020 20:39, Peter Xu wrote:
> On Thu, Jul 23, 2020 at 11:03:55AM +0300, Denis Plotnikov wrote:
> >
> > On 22.07.2020 19:30, Peter Xu wrote:
> > > On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:
> > > >
> > > > On 22.07.2020 18:42, Denis Plotnikov wrote:
> > > > >
> > > > > On 22.07.2020 17:50, Peter Xu wrote:
> > > > > > Hi, Denis,
> > > > > Hi, Peter
> > > > > > ...
> > > > > > > How to use:
> > > > > > > 1. enable the background snapshot capability
> > > > > > >    virsh qemu-monitor-command vm --hmp
> > > > > > >    migrate_set_capability background-snapshot on
> > > > > > >
> > > > > > > 2. stop the vm
> > > > > > >    virsh qemu-monitor-command vm --hmp stop
> > > > > > >
> > > > > > > 3. Start the external migration to a file
> > > > > > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > > > > > >    exec:'cat > ./vm_state'
> > > > > > >
> > > > > > > 4. Wait for the migration to finish and check that the
> > > > > > >    migration is in the "completed" state.
> > > > > > Thanks for your continued work on this project! I have two
> > > > > > high-level questions before digging into the patches.
> > > > > >
> > > > > > Firstly, is step 2 required? Can we use a single QMP command
> > > > > > to take snapshots (which can still be a "migrate" command)?
> > > > > With this series it is required, but steps 2 and 3 should be
> > > > > merged into a single one.
> > > I'm not sure whether you're talking about the disk snapshot
> > > operations; anyway, yeah, it'll definitely be good if we merge
> > > them into one in the next version.
> >
> > After thinking for a while, I remembered why I split these two
> > steps. The VM snapshot consists of two parts: disk snapshot(s) and
> > the vmstate. With the migrate command we save the vmstate only. So,
> > the steps to save the whole VM snapshot are the following:
> >
> > 2. stop the vm
> >    virsh qemu-monitor-command vm --hmp stop
> >
> > 2.1. Make a disk snapshot, something like
> >    virsh qemu-monitor-command vm --hmp snapshot_blkdev
> >    drive-scsi0-0-0-0 ./new_data
> >
> > 3. Start the external migration to a file
> >    virsh qemu-monitor-command vm --hmp migrate exec:'cat > ./vm_state'
> >
> > In this example, the VM snapshot consists of two files: vm_state and
> > the disk file. new_data will contain all new disk data written since
> > [2.1] was executed.
> But that's slightly different from the current interface of savevm and
> loadvm, which only requires a snapshot name, am I right?

Yes

> Now we need both a snapshot name (of the vmstate) and the name of the
> new snapshot image.

Yes

> I'm not familiar with qemu image snapshots... my understanding is that
> the current snapshot (save_snapshot) uses internal image snapshots,
> while in this proposal you want the live snapshot to use external
> snapshots.

Correct, I want to add the ability to make an external live snapshot.
(live = async RAM writing)

> Is there any criteria for making this decision/change?

Internal snapshot is supported by qcow2 and sheepdog (I've never heard
of anyone using the latter).
Because of the qcow2 internal snapshot design, it's quite complex to
implement a "background" snapshot there.
More details here:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg705116.html
So, I decided to start with external snapshots, to implement and prove
out the memory access intercepting part first.
Once that's done for external snapshots, we can start to approach
internal snapshots.

Thanks,
Denis
Re: [PATCH v0 0/4] background snapshot
On Thu, Jul 23, 2020 at 11:03:55AM +0300, Denis Plotnikov wrote:
>
> On 22.07.2020 19:30, Peter Xu wrote:
> > On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:
> > >
> > > On 22.07.2020 18:42, Denis Plotnikov wrote:
> > > >
> > > > On 22.07.2020 17:50, Peter Xu wrote:
> > > > > Hi, Denis,
> > > > Hi, Peter
> > > > > ...
> > > > > > How to use:
> > > > > > 1. enable the background snapshot capability
> > > > > >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> > > > > >    background-snapshot on
> > > > > >
> > > > > > 2. stop the vm
> > > > > >    virsh qemu-monitor-command vm --hmp stop
> > > > > >
> > > > > > 3. Start the external migration to a file
> > > > > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > > > > >    exec:'cat > ./vm_state'
> > > > > >
> > > > > > 4. Wait for the migration to finish and check that the
> > > > > >    migration is in the "completed" state.
> > > > > Thanks for your continued work on this project! I have two
> > > > > high-level questions before digging into the patches.
> > > > >
> > > > > Firstly, is step 2 required? Can we use a single QMP command to
> > > > > take snapshots (which can still be a "migrate" command)?
> > > > With this series it is required, but steps 2 and 3 should be
> > > > merged into a single one.
> > I'm not sure whether you're talking about the disk snapshot
> > operations; anyway, yeah, it'll definitely be good if we merge them
> > into one in the next version.
>
> After thinking for a while, I remembered why I split these two steps.
> The VM snapshot consists of two parts: disk snapshot(s) and the
> vmstate. With the migrate command we save the vmstate only. So, the
> steps to save the whole VM snapshot are the following:
>
> 2. stop the vm
>    virsh qemu-monitor-command vm --hmp stop
>
> 2.1. Make a disk snapshot, something like
>    virsh qemu-monitor-command vm --hmp snapshot_blkdev
>    drive-scsi0-0-0-0 ./new_data
>
> 3. Start the external migration to a file
>    virsh qemu-monitor-command vm --hmp migrate exec:'cat > ./vm_state'
>
> In this example, the VM snapshot consists of two files: vm_state and
> the disk file. new_data will contain all new disk data written since
> [2.1] was executed.

But that's slightly different from the current interface of savevm and
loadvm, which only requires a snapshot name, am I right?  Now we need
both a snapshot name (of the vmstate) and the name of the new snapshot
image.

I'm not familiar with qemu image snapshots... my understanding is that
the current snapshot (save_snapshot) uses internal image snapshots,
while in this proposal you want the live snapshot to use external
snapshots.  Is there any criteria for making this decision/change?

Thanks,

--
Peter Xu
Re: [PATCH v0 0/4] background snapshot
On 22.07.2020 19:30, Peter Xu wrote:
> On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:
> >
> > On 22.07.2020 18:42, Denis Plotnikov wrote:
> > >
> > > On 22.07.2020 17:50, Peter Xu wrote:
> > > > Hi, Denis,
> > > Hi, Peter
> > > > ...
> > > > > How to use:
> > > > > 1. enable the background snapshot capability
> > > > >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> > > > >    background-snapshot on
> > > > >
> > > > > 2. stop the vm
> > > > >    virsh qemu-monitor-command vm --hmp stop
> > > > >
> > > > > 3. Start the external migration to a file
> > > > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > > > >    exec:'cat > ./vm_state'
> > > > >
> > > > > 4. Wait for the migration to finish and check that the
> > > > >    migration is in the "completed" state.
> > > > Thanks for your continued work on this project! I have two
> > > > high-level questions before digging into the patches.
> > > >
> > > > Firstly, is step 2 required? Can we use a single QMP command to
> > > > take snapshots (which can still be a "migrate" command)?
> > > With this series it is required, but steps 2 and 3 should be
> > > merged into a single one.
> I'm not sure whether you're talking about the disk snapshot
> operations; anyway, yeah, it'll definitely be good if we merge them
> into one in the next version.

After thinking for a while, I remembered why I split these two steps.
The VM snapshot consists of two parts: disk snapshot(s) and the
vmstate. With the migrate command we save the vmstate only. So, the
steps to save the whole VM snapshot are the following:

2. stop the vm
   virsh qemu-monitor-command vm --hmp stop

2.1. Make a disk snapshot, something like
   virsh qemu-monitor-command vm --hmp snapshot_blkdev drive-scsi0-0-0-0
   ./new_data

3. Start the external migration to a file
   virsh qemu-monitor-command vm --hmp migrate exec:'cat > ./vm_state'

In this example, the VM snapshot consists of two files: vm_state and
the disk file. new_data will contain all new disk data written since
[2.1] was executed.

> > > > Meanwhile, we might also want to check the type of backend RAM.
> > > > E.g., shmem and hugetlbfs are still not supported for uffd-wp
> > > > (which I'm still working on). I didn't check explicitly whether
> > > > we'll simply fail the migration in those cases, since the uffd
> > > > ioctls will fail for those kinds of RAM. It would be okay if we
> > > > handle all the ioctl failures gracefully,
> > > The ioctl's result is processed, but the patch has a flaw - it
> > > ignores the result of the ioctl check. Need to fix it.
> > It happens here:
> >
> > +int ram_write_tracking_start(void)
> > +{
> > +    if (page_fault_thread_start()) {
> > +        return -1;
> > +    }
> > +
> > +    ram_block_list_create();
> > +    ram_block_list_set_readonly(); << this returns -1 in case of
> > +                                      failure, but I just ignore it
> > +                                      here
> > +
> > +    return 0;
> > +}
> >
> > > > or it would be even better if we directly fail when we want to
> > > > enable the live snapshot capability for a guest that contains
> > > > other types of RAM besides private anonymous memory.
> > > I agree, but to know whether shmem or hugetlbfs are supported by
> > > the current kernel, we should execute the ioctl for all memory
> > > regions when the capability is enabled.
> Yes, that seems to be a better solution, so we don't care about the
> type of RAM backend anymore but check directly with the uffd ioctls.
> With these checks, it'll even be fine to ignore the above retcode, or
> just assert, because we've already checked that before that point.
>
> Thanks,
Re: [PATCH v0 0/4] background snapshot
On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:
>
> On 22.07.2020 18:42, Denis Plotnikov wrote:
> >
> > On 22.07.2020 17:50, Peter Xu wrote:
> > > Hi, Denis,
> > Hi, Peter
> > > ...
> > > > How to use:
> > > > 1. enable the background snapshot capability
> > > >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> > > >    background-snapshot on
> > > >
> > > > 2. stop the vm
> > > >    virsh qemu-monitor-command vm --hmp stop
> > > >
> > > > 3. Start the external migration to a file
> > > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > > >    exec:'cat > ./vm_state'
> > > >
> > > > 4. Wait for the migration to finish and check that the migration
> > > >    is in the "completed" state.
> > > Thanks for your continued work on this project! I have two
> > > high-level questions before digging into the patches.
> > >
> > > Firstly, is step 2 required? Can we use a single QMP command to
> > > take snapshots (which can still be a "migrate" command)?
> > With this series it is required, but steps 2 and 3 should be merged
> > into a single one.

I'm not sure whether you're talking about the disk snapshot operations;
anyway, yeah, it'll definitely be good if we merge them into one in the
next version.

> > > Meanwhile, we might also want to check the type of backend RAM.
> > > E.g., shmem and hugetlbfs are still not supported for uffd-wp
> > > (which I'm still working on). I didn't check explicitly whether
> > > we'll simply fail the migration in those cases, since the uffd
> > > ioctls will fail for those kinds of RAM. It would be okay if we
> > > handle all the ioctl failures gracefully,
> > The ioctl's result is processed, but the patch has a flaw - it
> > ignores the result of the ioctl check. Need to fix it.
> It happens here:
>
> +int ram_write_tracking_start(void)
> +{
> +    if (page_fault_thread_start()) {
> +        return -1;
> +    }
> +
> +    ram_block_list_create();
> +    ram_block_list_set_readonly(); << this returns -1 in case of
> +                                      failure, but I just ignore it
> +                                      here
> +
> +    return 0;
> +}
>
> > > or it would be even better if we directly fail when we want to
> > > enable the live snapshot capability for a guest that contains
> > > other types of RAM besides private anonymous memory.
> > I agree, but to know whether shmem or hugetlbfs are supported by the
> > current kernel, we should execute the ioctl for all memory regions
> > when the capability is enabled.

Yes, that seems to be a better solution, so we don't care about the
type of RAM backend anymore but check directly with the uffd ioctls.
With these checks, it'll even be fine to ignore the above retcode, or
just assert, because we've already checked that before that point.

Thanks,

--
Peter Xu
Re: [PATCH v0 0/4] background snapshot
On 22.07.2020 18:42, Denis Plotnikov wrote:
>
> On 22.07.2020 17:50, Peter Xu wrote:
> > Hi, Denis,
> Hi, Peter
> > ...
> > > How to use:
> > > 1. enable the background snapshot capability
> > >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> > >    background-snapshot on
> > >
> > > 2. stop the vm
> > >    virsh qemu-monitor-command vm --hmp stop
> > >
> > > 3. Start the external migration to a file
> > >    virsh qemu-monitor-command cent78-bs --hmp migrate
> > >    exec:'cat > ./vm_state'
> > >
> > > 4. Wait for the migration to finish and check that the migration
> > >    is in the "completed" state.
> > Thanks for your continued work on this project! I have two
> > high-level questions before digging into the patches.
> >
> > Firstly, is step 2 required? Can we use a single QMP command to take
> > snapshots (which can still be a "migrate" command)?
>
> With this series it is required, but steps 2 and 3 should be merged
> into a single one.
>
> > Meanwhile, we might also want to check the type of backend RAM.
> > E.g., shmem and hugetlbfs are still not supported for uffd-wp (which
> > I'm still working on). I didn't check explicitly whether we'll
> > simply fail the migration in those cases, since the uffd ioctls will
> > fail for those kinds of RAM. It would be okay if we handle all the
> > ioctl failures gracefully,
>
> The ioctl's result is processed, but the patch has a flaw - it ignores
> the result of the ioctl check. Need to fix it.

It happens here:

+int ram_write_tracking_start(void)
+{
+    if (page_fault_thread_start()) {
+        return -1;
+    }
+
+    ram_block_list_create();
+    ram_block_list_set_readonly(); << this returns -1 in case of
+                                      failure, but I just ignore it here
+
+    return 0;
+}

> > or it would be even better if we directly fail when we want to
> > enable the live snapshot capability for a guest that contains other
> > types of RAM besides private anonymous memory.
>
> I agree, but to know whether shmem or hugetlbfs are supported by the
> current kernel, we should execute the ioctl for all memory regions
> when the capability is enabled.
>
> Thanks,
> Denis
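The fix Denis describes would presumably amount to checking that return
value. A sketch, assuming only what the quoted hunk shows (the helper
names and the -1-on-failure convention):

    int ram_write_tracking_start(void)
    {
        if (page_fault_thread_start()) {
            return -1;
        }

        ram_block_list_create();

        /* Propagate the failure instead of silently ignoring it. */
        if (ram_block_list_set_readonly()) {
            return -1;
        }

        return 0;
    }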
Re: [PATCH v0 0/4] background snapshot
On 22.07.2020 17:50, Peter Xu wrote:
> Hi, Denis,
Hi, Peter
> ...
> > How to use:
> > 1. enable the background snapshot capability
> >    virsh qemu-monitor-command vm --hmp migrate_set_capability
> >    background-snapshot on
> >
> > 2. stop the vm
> >    virsh qemu-monitor-command vm --hmp stop
> >
> > 3. Start the external migration to a file
> >    virsh qemu-monitor-command cent78-bs --hmp migrate
> >    exec:'cat > ./vm_state'
> >
> > 4. Wait for the migration to finish and check that the migration is
> >    in the "completed" state.
> Thanks for your continued work on this project! I have two high-level
> questions before digging into the patches.
>
> Firstly, is step 2 required? Can we use a single QMP command to take
> snapshots (which can still be a "migrate" command)?

With this series it is required, but steps 2 and 3 should be merged
into a single one.

> Meanwhile, we might also want to check the type of backend RAM. E.g.,
> shmem and hugetlbfs are still not supported for uffd-wp (which I'm
> still working on). I didn't check explicitly whether we'll simply fail
> the migration in those cases, since the uffd ioctls will fail for
> those kinds of RAM. It would be okay if we handle all the ioctl
> failures gracefully,

The ioctl's result is processed, but the patch has a flaw - it ignores
the result of the ioctl check. Need to fix it.

> or it would be even better if we directly fail when we want to enable
> the live snapshot capability for a guest that contains other types of
> RAM besides private anonymous memory.

I agree, but to know whether shmem or hugetlbfs are supported by the
current kernel, we should execute the ioctl for all memory regions when
the capability is enabled.

Thanks,
Denis
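A sketch of the probe Denis proposes here: when the capability is
enabled, try to register every RAM region with userfaultfd in
write-protect mode and reject the capability if the kernel refuses any
of them (as it would at the time for shmem- or hugetlbfs-backed
regions). The region structure and function name are invented for
illustration; QEMU's real RAM block list looks different.

    #include <linux/userfaultfd.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Invented stand-in for the real list of RAM blocks. */
    struct ram_region {
        void *host_addr;
        uint64_t length;
    };

    static bool uffd_wp_usable_for_all_ram(int uffd,
                                           const struct ram_region *r,
                                           size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            struct uffdio_register reg = {
                .range = { .start = (uintptr_t)r[i].host_addr,
                           .len   = r[i].length },
                .mode  = UFFDIO_REGISTER_MODE_WP,
            };

            /* Registration fails, or omits the write-protect ioctl
             * from the returned set, for backends uffd-wp can't
             * handle. */
            if (ioctl(uffd, UFFDIO_REGISTER, &reg) ||
                !(reg.ioctls & (1ULL << _UFFDIO_WRITEPROTECT))) {
                return false;
            }

            /* This is only a probe: undo the registration again. */
            struct uffdio_range range = reg.range;
            ioctl(uffd, UFFDIO_UNREGISTER, &range);
        }
        return true;
    }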
Re: [PATCH v0 0/4] background snapshot
Hi, Denis,

On Wed, Jul 22, 2020 at 11:11:29AM +0300, Denis Plotnikov wrote:
> Currently there is no way to make a VM snapshot without pausing the VM
> for the whole time until the snapshot is done. So, the problem is the
> VM downtime on snapshotting. The downtime value depends on the vmstate
> size, the major part of which is RAM, and on the performance of the
> disk used for saving the snapshot.
>
> The series proposes a way to reduce the VM snapshot downtime. This is
> done by saving RAM, the major part of the vmstate, in the background
> while the VM is running.
>
> The background snapshot uses the Linux UFFD write-protected mode for
> intercepting memory page accesses. UFFD write-protected mode was added
> in Linux v5.7. If UFFD write-protected mode isn't available, the
> background snapshot refuses to run.
>
> How to use:
> 1. enable the background snapshot capability
>    virsh qemu-monitor-command vm --hmp migrate_set_capability
>    background-snapshot on
>
> 2. stop the vm
>    virsh qemu-monitor-command vm --hmp stop
>
> 3. Start the external migration to a file
>    virsh qemu-monitor-command cent78-bs --hmp migrate
>    exec:'cat > ./vm_state'
>
> 4. Wait for the migration to finish and check that the migration is in
>    the "completed" state.

Thanks for your continued work on this project! I have two high-level
questions before digging into the patches.

Firstly, is step 2 required? Can we use a single QMP command to take
snapshots (which can still be a "migrate" command)?

Meanwhile, we might also want to check the type of backend RAM. E.g.,
shmem and hugetlbfs are still not supported for uffd-wp (which I'm
still working on). I didn't check explicitly whether we'll simply fail
the migration in those cases, since the uffd ioctls will fail for those
kinds of RAM. It would be okay if we handle all the ioctl failures
gracefully, or it would be even better if we directly fail when we want
to enable the live snapshot capability for a guest that contains other
types of RAM besides private anonymous memory.

Thanks,

--
Peter Xu
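On the availability check mentioned in the cover letter ("If UFFD
write-protected mode isn't available, the background snapshot refuses
to run"): a minimal sketch of how such a probe against the running
kernel can look, assuming nothing about the series' code beyond the
userfaultfd API itself; the function name is illustrative.

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static bool uffd_wp_available(void)
    {
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0) {
            return false; /* no userfaultfd support at all */
        }

        /* UFFDIO_API fails if a requested feature is unsupported, so
         * asking for write-protect faults doubles as the probe. */
        struct uffdio_api api = {
            .api      = UFFD_API,
            .features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
        };
        bool ok = (ioctl(uffd, UFFDIO_API, &api) == 0);

        close(uffd);
        return ok;
    }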