On 18.05.2011 17:52, Serge Hallyn wrote:
>
> Why do you call it not easy?  Because you don't have spare partitions to
> dedicate to a pv?  Or because you're not used to using lvm?
>
> If the former, then you could use a loopback filesystem instead of
> an LVM.  I assume that'll impact performance, but I've not tested it
> to see by how much.
>
> If the latter, then in the next few months I intend to push some
> stuff to lxc to integrate LVM usage.  Daniel had comments on
> my first patches so it'll likely change, but what I'm using right
> now lets me just do lxc-lvmcreate in place of lxc-create to create
> an lvm-backed lxc partition, and 'lxc-clone -s -o c1 -n c2' lets me
> create container c2 with an lvm snapshot of c1's rootfs.
> (See http://s3hh.wordpress.com/2011/03/30/lxc-lvm-clone/ and
> http://s3hh.wordpress.com/2011/03/30/one-more-lxc-clone-update/)
>
> There's no cgroup to do what you want, though.
>
>
I might be wrong, but I think the biggest disadvantage (show-stopper) of 
LVM/loopback is that the partition or image consumes its full size even 
when not a single file is actually stored in the filesystem. For example, 
imagine you have a 500 GB hard disk and want to create 50 vservers with a 
50 GB disk-space limit each. This is not possible with LVM or loopback 
devices, because you would need 2500 GB of storage. Yet it's a very common 
use case, because on average only 5% of the vservers will actually use 
their 50 GB, so you would never run out of space if the space were 
allocated on demand.
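
To make the pre-allocation concrete, here is roughly what creating one such
50 GB backend looks like (the volume group name, LV name and image paths
below are only example names):

  # vg0, c1 and the paths are just example names
  # classic (non-thin) LVM: the full 50 GB is reserved in the volume group immediately
  lvcreate -L 50G -n c1 vg0
  mkfs.ext4 /dev/vg0/c1

  # non-sparse loopback image: 50 GB of disk space is consumed up front
  dd if=/dev/zero of=/var/lib/lxc/c1.img bs=1M count=51200
  mkfs.ext4 -F /var/lib/lxc/c1.img
  mount -o loop /var/lib/lxc/c1.img /var/lib/lxc/c1/rootfs

Repeat that 50 times and you have committed 2500 GB on a 500 GB disk,
regardless of how little data the containers actually write.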

Corin

