On Mon, Jan 11, 2016 at 05:47:06PM +0100, Kevin Wolf wrote:
> Am 24.12.2015 um 06:41 hat Denis V. Lunev geschrieben:
> > On 12/24/2015 02:19 AM, Max Reitz wrote:
> > > So the benefits of a qcow2 flag are only minor ones. However, I
> > > personally believe that automatic unlock on crash is a very minor
> > > benefit as well. That should never happen in practice anyway, and a
> > > crashing qemu is such a great inconvenience that I as a user wouldn't
> > > really mind having to unlock the image afterwards.
> >
> > IMHO you are wrong. This is VERY important. The situation would be
> > exactly the same after a node poweroff, which could happen and really
> > does happen in real life from time to time.
> >
> > In these cases VMs should start automatically and ASAP if configured
> > this way. Any manual interaction here is a REAL pain.
>
> Yes. Your management tool should be able to cope with it.
>
> > > In fact, libvirt could even do that manually, couldn't it? If qemu
> > > crashes, it just invokes qemu-img force-unlock on any qcow2 image
> > > which was attached R/W to the VM.
> >
> > In the situation above libvirt does not have the information, or this
> > information could be unreliable.
>
> That would be a libvirt bug then. Did you check?
>
> A good management tool knows which VMs it had running before a host
> crash. For all I know, libvirt does.
Dealing with recovery after a host crash is out of scope for libvirt.
This is the responsibility of the higher-level management tool, which
should be using some kind of reliable clustering & fencing technology
to ensure safety (i.e. via STONITH capability) even in the face of
misbehaving hardware.

Regards,
Daniel

--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|