On 12/24/2015 02:19 AM, Max Reitz wrote:
On 23.12.2015 04:14, Fam Zheng wrote:
On Tue, 12/22 17:46, Kevin Wolf wrote:
Enough innocent images have died because users called 'qemu-img snapshot' while
the VM was still running. Educating the users doesn't seem to be a working
strategy, so this series adds locking to qcow2 that refuses to access the image
read-write from two processes.

Eric, this will require a libvirt update to deal with qemu crashes which leave
locked images behind. The simplest conceivable way would be to unconditionally
override the lock in libvirt whenever the option is present. In that case,
libvirt VMs would be protected against concurrent non-libvirt accesses, but not
the other way round. If you want more than that, libvirt would have to check
somehow if it was its own VM that used the image and left the lock behind. I
imagine that can't be too hard either.
The motivation is great, but I'm not sure I like the side-effect that an
unclean shutdown will require a "forced" open, because it makes using qcow2 in
development cumbersome,
How so?

Just extend your "x86_64-softmmu/qemu-system-x86_64
-these-options-will-make-qemu-crash" invocation by "; qemu-img
force-unlock foo.qcow2".

and like you said, management/user also needs to handle
this explicitly. This is a bit of a personal preference, but it's strong enough
that I want to speak up.
Well, I personally always had the opposite preference. I see Denis's
series works on Windows, too, which is good. However, it won't work with
any backend, which a qcow2 flag will.

Also, Denis's/Olga's series will by default not lock the file. This is
not an issue if one uses libvirt to run a VM; but there are people who
invoke qemu directly and then try to run qemu-img concurrently, and I
doubt those people will manually set the locking option. This might be
addressed by automatically setting the option if a certain format like
qcow2 is used, but it may be pretty difficult to get that implemented
nicely.

So the benefits of a qcow2 flag are only minor ones. However, I
personally believe that automatic unlock on crash is a very minor
benefit as well. That should never happen in practice anyway, and a
crashing qemu is such a great inconvenience that I as a user wouldn't
really mind having to unlock the image afterwards.
IMHO you are wrong. This is VERY important. The situation would be exactly
the same after a node poweroff, which can happen and really does happen in
real life from time to time.

In these cases, VMs should start automatically and as soon as possible if
configured that way. Any manual intervention here is a REAL pain.

In fact, libvirt could even do that manually, couldn't it? If qemu
crashes, it just invokes qemu-img force-unlock on any qcow2 image which
was attached R/W to the VM.

In the situation above, libvirt does not have that information, or the
information it has could be unreliable.

As an alternative, can we introduce .bdrv_flock() in protocol drivers, with
similar semantics to flock(2) or lockf(3)? That way all formats can benefit,
and a program crash will automatically drop the lock.
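The key property Fam is pointing at can be demonstrated outside of QEMU. Below is a minimal sketch (in Python, purely illustrative; the .bdrv_flock name is a proposal, and none of this is QEMU code) of the flock(2) semantics such a callback would rely on: the kernel releases an advisory lock automatically when the process holding it exits, so a crashed process leaves no stale lock behind.

```python
import fcntl
import os
import tempfile

# Sketch, assuming POSIX flock(2) semantics: an advisory lock is dropped
# by the kernel when its holder dies, even without any cleanup code.

def try_lock(fd):
    """Non-blocking exclusive flock; returns True if the lock was taken."""
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError:
        return False

image = tempfile.NamedTemporaryFile(delete=False)  # stand-in for the image file
image.close()

locked_r, locked_w = os.pipe()  # child -> parent: "lock is held"
crash_r, crash_w = os.pipe()    # parent -> child: "crash now"

pid = os.fork()
if pid == 0:
    # Child plays the running QEMU: lock the image, then die abruptly
    # without releasing anything (a simulated crash).
    fd = os.open(image.name, os.O_RDWR)
    try_lock(fd)
    os.write(locked_w, b"x")
    os.read(crash_r, 1)
    os._exit(1)  # no cleanup runs; the kernel drops the lock for us

os.read(locked_r, 1)            # wait until the child holds the lock
fd = os.open(image.name, os.O_RDWR)
while_running = try_lock(fd)    # contended: the "VM" still holds it

os.write(crash_w, b"x")         # let the child "crash"
os.waitpid(pid, 0)
after_crash = try_lock(fd)      # lock was released on process death

print(while_running, after_crash)
os.unlink(image.name)
```

The second try_lock succeeds without any manual force-unlock step, which is exactly the behavior being contrasted with a persistent qcow2 header flag.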
Making other formats benefit from addressing this issue is a good point,
but it too is a minor point. Formats other than qcow2 and raw are only
supported for compatibility anyway, and we don't need this feature for raw.
I would like to have this covered by flock; this has indeed been working for
years with Parallels.


I feel like most of the question which approach to take revolves around
"But what if qemu crashes?". You (and others) are right in that having
to manually unlock the image then is cumbersome, however, I think that:
(1) qemu should never crash anyway.
(2) If qemu does crash, having to unlock the image is probably the
     least of your worries.
(3) If you are using libvirt, it should actually be possible for
     libvirt to automatically force-unlock images on qemu crash.

This is why I don't think that keeping a locked image behind on qemu
crash is actually an issue.

Max

Please see above. Node failure and unexpected power loss really matter.

Den
