On 12/23/2015 03:34 PM, Daniel P. Berrange wrote:
> On Wed, Dec 23, 2015 at 03:15:50PM +0300, Roman Kagan wrote:
>> On Wed, Dec 23, 2015 at 10:47:22AM +0000, Daniel P. Berrange wrote:
>>> On Wed, Dec 23, 2015 at 11:14:12AM +0800, Fam Zheng wrote:
>>>> As an alternative, can we introduce .bdrv_flock() in protocol drivers, with
>>>> similar semantics to flock(2) or lockf(3)? That way all formats can benefit,
>>>> and a program crash will automatically drop the lock.
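
[For reference, the flock(2) semantics being proposed look roughly like the
minimal sketch below; the image name and error handling are illustrative,
not from any posted patch:

    #include <fcntl.h>     /* open() */
    #include <stdio.h>     /* perror() */
    #include <sys/file.h>  /* flock() */

    int main(void)
    {
        int fd = open("disk.qcow2", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Exclusive advisory lock, non-blocking: fails with EWOULDBLOCK
         * if another process already holds the lock. */
        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
            perror("flock (image already in use?)");
            return 1;
        }
        /* ... work on the image ... */
        /* The kernel drops the lock automatically when the process exits
         * or crashes, so no stale lock is left behind. */
        return 0;
    }

The automatic cleanup on crash is the key property: the lock lives in the
kernel, not in a lock file that somebody has to remove afterwards.]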
>>> FWIW, the libvirt locking daemon (virtlockd) will already attempt to take
>>> out locks using fcntl()/lockf() on all disk images associated with a VM.
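
[Such an fcntl() lock looks roughly like the sketch below (the standard
POSIX API, not virtlockd's actual code); POSIX record locks also work over
NFS, and they too are released automatically when the holding process dies:

    #include <fcntl.h>   /* fcntl(), struct flock */
    #include <unistd.h>  /* SEEK_SET */

    /* Try to take an exclusive lock on the whole file without blocking.
     * Returns 0 on success, -1 if another process holds the lock.
     * Caveat: the lock is dropped if the process closes *any* fd that
     * refers to this file, not just this one. */
    static int lock_image(int fd)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,   /* exclusive (write) lock */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,         /* 0 means "to end of file" */
        };
        return fcntl(fd, F_SETLK, &fl);
    }
]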
>> Is it even possible without QEMU cooperating?  In particular in complex
>> cases with e.g. backing chains?
>>
>> This was exactly the reason why we designed the "lock" option to take an
>> argument describing the locking mechanism to be used (see the tentative
>> patchset Denis posted in this thread).  The only one currently
>> implemented is flock()-based; however, it can be extended to other
>> mechanisms like network / cluster / SAN lock managers, etc.  In
>> particular, it can be made to talk to virtlockd.
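
[To make the idea concrete, a pluggable mechanism could be shaped something
like the sketch below; the interface and names are hypothetical, not taken
from Denis's patchset:

    /* Hypothetical per-mechanism lock backend; all names are made up.
     * A "lock=<name>" option would select one of these at open time. */
    typedef struct ImageLockDriver {
        const char *name;   /* e.g. "flock", "virtlockd", "sanlock" */
        int (*lock)(const char *image_path);
        int (*unlock)(const char *image_path);
    } ImageLockDriver;

flock() then becomes merely the first backend; network / cluster / SAN lock
managers, or a virtlockd client, would slot in behind the same option.]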
> NB, libvirt generally considers QEMU to be untrustworthy, which is
> another reason why we use virtlockd to acquire the locks *prior*
> to granting QEMU any access to the file(s). On this basis we would
> not really trust QEMU to acquire/release locks itself by talking
> to virtlockd. Indeed, we'd not really trust QEMU locking at all, no
> matter what mechanism it used - we want a strong guarantee of locking
> regardless of whether QEMU is broken / compromised.
>
> Regards,
> Daniel
This is not the case we are trying to solve here. Here the customer
accidentally called 'qemu-img snapshot' on an in-use image and faced his
doom: a ruined image.

How would we be able to find the proper virtlockd in the case of a network
filesystem shared among a swarm of clients? That daemon is local to the
host. Filesystem locking can be used in the hope that the setup is
consistent.

Den
