Hi. There's a connected discussion on the sheepdog list about locking, and I have a patch there which could complement this one quite well.
Sheepdog is a distributed, replicated block store being developed (primarily) for Qemu. Images have a mandatory exclusive locking requirement, enforced by the cluster manager. Without this, the replication scheme breaks down and you can end up with inconsistent copies of the block image.

The initial release of sheepdog took these locks in the block driver bdrv_open() and bdrv_close() hooks. They also added a bdrv_closeall() and ensured it was called in all the usual qemu exit paths to avoid stray locks. (The rarer case of crashing hosts or crashing qemus will have to be handled externally, and is 'to do'.)

The problem was that this prevented live migration, because both ends wanted to open the image at once, even though only one would be using it at a time. To get around this, we introduced bdrv_claim() and bdrv_release() block driver hooks, which can be used to claim and release the lock on vm start and stop. Since at most one end of a live migration is running at any one time, this is sufficient to allow migration for Sheepdog devices. It would also allow live migration of virtual machines on any other shared block devices with mandatory exclusive locking, such as the locks introduced by your patch.

My patch in the sheepdog qemu branch is here:

http://sheepdog.git.sourceforge.net/git/gitweb.cgi?p=sheepdog/qemu-kvm;a=commitdiff;h=fd9d7c739831cf04e5a913a89bdca680ba028ada

I was intending to rebase it against current qemu head for more detailed review over the next couple of weeks, after it's had a little more testing. But given that your patch now provides a second use case for it outside of sheepdog, this seems like a timely moment to ask for any more general feedback about our proposed approach.

Cheers,
Chris.