On 05/06/2015 03:27 PM, Michael Schwager wrote:

>> which can give more accurate numbers when doing things like
>> qcow2 format atop an LVM block device).
>>
> 
> 
> ...This has me intrigued. We are running qcow2 format in files, and the
> instructions I've seen have you convert qcow2 to raw and then dd it into
> LVM. But is there an advantage to running qcow2 on LVM? Is there an
> example of this in action?

VDSM does this.  You gain the advantages of qcow2 (storage snapshots,
efficient zero-cluster representation) with less overhead than qcow2
on a file system (the filesystem layer is cut out) and with guaranteed
host resources (you know the LVM volume's size is reserved for the
guest and won't grow without your action, unlike a file system where
automatic growth of a qcow2 file competes for space with every other
file in the file system).  It's still less efficient than running raw,
but it's as close as you can get to bare-metal speeds while still
gaining the benefits of the format.

When using qcow2-on-LVM, you are responsible for making sure the LVM
volume is large enough.  Often (thanks to qcow2's efficient
zero-cluster representation, and particularly if your guest supports
TRIM on clusters its file system is not using) the guest will see more
virtual space than the actual LVM volume provides: a guest that needs
10G of storage but isn't fully using it can sometimes get by with a 5G
LVM volume, for example.  The converse is also possible: a guest that
can only see 10G of storage, but where you have lots of qcow2
snapshots, may require 20G of LVM storage to track all the qcow2
metadata.
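
For concreteness, here is a rough sketch of setting that up by hand
(the volume group and LV names below are made up for illustration;
VDSM automates the equivalent steps):

  # carve out a 5G LV, then format it as a qcow2 image with a 10G
  # virtual size - qemu-img is happy to write the qcow2 header
  # directly onto a block device
  lvcreate --name guest1 --size 5G myvg
  qemu-img create -f qcow2 /dev/myvg/guest1 10G

  # 'qemu-img info' now reports a 10G virtual size, while 'lvs'
  # still shows only the 5G actually reserved on the host
  qemu-img info /dev/myvg/guest1
  lvs myvg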

Another use case for qcow2 on LVM is that you can initially
oversubscribe your storage: qemu has a mode where it will gracefully
pause the guest if it runs out of the space needed to manage the qcow2
data, at which point you can enlarge the LVM volume and resume the
guest; the guest never knows that you were shuffling the size of the
host storage.  (Note that this covers two separate cases: one where
you are merely giving the host more room to store qcow2 metadata while
leaving the guest size unchanged, such as starting a guest that sees a
10G disk on an LVM volume that is initially 5G and later enlarging
that volume to 6G while the guest still sees a 10G disk; the other
where you are actually resizing the amount of storage allocated to the
guest, as if you had hot-swapped in a larger disk, and where some
guests can then resize their file system on the fly to take advantage
of the larger disk.)  Related to this, I'm currently working on a
patch for libvirt that takes advantage of a recent qemu feature for
setting a threshold that triggers an event, so that you could request
notification when the LVM volume is X% full and give yourself time to
enlarge it before you reach the point where the guest would be paused
because the volume was full.
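
A rough sketch of the recovery side (again with made-up names for the
volume group, domain, and disk target, and assuming the domain's disk
uses an ENOSPC error policy so qemu pauses the guest rather than
failing the I/O):

  # the guest paused because the LV filled up: give the host another
  # gigabyte of room for qcow2 data, then let the guest continue (it
  # still sees the same 10G disk)
  lvextend --size +1G /dev/myvg/guest1
  virsh resume guest1

  # the other case: actually grow the guest-visible disk, as if a
  # larger disk had been hot-swapped in (grow the LV first so the
  # qcow2 metadata has room to match)
  lvextend --size +15G /dev/myvg/guest1
  virsh blockresize guest1 vda 20G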

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

