Hi,

On 08/04/2015 12:51 AM, Bosson VZ wrote:
> Hello,
>
> Yes, we have 200+ containers, all running on top of simfs in clusters. Most of
> the limitations of simfs mentioned above are not a problem for us, as we use
> clustered LVM on top of DRBD storage. Every container has its own file-system
> sitting on an LVM block device. This layout is universal for our OpenVZ and
> KVM guests (where a raw block device is passed to the guest). Different
> containers use different file-systems (ext4, xfs, nfs, ...), and some
> containers share part of their file-system hierarchy with others in the
> cluster using ocfs2/gfs clustered file-systems. On non-cluster hosts, we
> sometimes use host-guest bind-mounts as a way to share some data. Thanks to
> DRBD, live migration is also possible with minimal downtime.
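
If I understand that layout correctly, each container's volume is created
roughly along these lines (the volume group and container ID below are just
made-up examples), with simfs then layered on top of the private area by vzctl:

  lvcreate -L 100G -n ct101 vg_cluster    # LV on the clustered VG backed by DRBD
  mkfs.ext4 /dev/vg_cluster/ct101         # or mkfs.xfs, depending on the container
  mount /dev/vg_cluster/ct101 /vz/private/101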

The solution you described looks very interesting. How do you reclaim unused disk space from a container back to the host system? More specifically: suppose that, when a container was created, you devoted a 100GB LVM block device to it, and later you decide that 10GB would be enough. What do you do to return those unused 90GB to the host (or to other containers)? I guess the only way is to fully shut down the container, shrink its filesystem, then shrink the LVM block device, and then start it up again.
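
Roughly the following, I suppose (a sketch only, assuming an ext4 filesystem
and the same made-up VG/LV and container names as above; xfs cannot be shrunk
at all):

  vzctl stop 101
  umount /dev/vg_cluster/ct101              # if still mounted at /vz/private/101
  e2fsck -f /dev/vg_cluster/ct101           # mandatory before shrinking ext4
  resize2fs /dev/vg_cluster/ct101 10G       # shrink the filesystem first
  lvreduce -L 10G /dev/vg_cluster/ct101     # then shrink the LV to match
  mount /dev/vg_cluster/ct101 /vz/private/101
  vzctl start 101

The important part is that the filesystem must never end up larger than the
logical volume underneath it.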


> The bossonvz libvirt driver at the moment only supports simfs (as that's what
> we are using nowadays).
>
> Is it somehow possible to use ploop in a cluster and share the virtual
> file-system between hosts?

You can keep ploop image files on shared storage if its client filesystem is FUSE-based. For other filesystems more work is needed, but it should be doable.


> How can a host->guest bind-mount be achieved without simfs?


The same way as for simfs: "mount --bind /shared_space /vz/root/101/mnt"? It shouldn't matter whether the mount point "/vz/root/101" came from simfs or from ploop. Am I missing something obvious?
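
If the bind-mount should be re-created automatically on every container start,
the usual way (if I remember the convention correctly) is a per-container mount
script; a minimal sketch for CTID 101 with the legacy vzctl config layout:

  #!/bin/bash
  # /etc/vz/conf/101.mount -- run by vzctl when the container is mounted
  . /etc/vz/vz.conf           # global OpenVZ config
  . ${VE_CONFFILE}            # this container's config
  # after sourcing the two files above, VE_ROOT points at the container root
  mount -n --bind /shared_space ${VE_ROOT}/mnt

The script has to be executable, and again it should not matter whether the
container root itself comes from simfs or from a ploop image.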

Thanks,
Maxim
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
