On Fri 2011-05-20 (08:22), Ulli Horlacher wrote:
> How can I determine how much memory a container uses currently?
Found it by myself :-)
It is rss in memory.stat
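For reference, memory.stat is a plain "key value" file, so the rss line can be pulled out with awk. A minimal sketch using simulated file content (on a real host, read your container's /cgroup/&lt;name&gt;/memory.stat instead):

```shell
# Simulated memory.stat content; on a real host: cat /cgroup/flupp/memory.stat
memory_stat='cache 1048576
rss 52428800
mapped_file 0'

# rss is the resident set size (bytes) actually used by the container's processes.
rss=$(printf '%s\n' "$memory_stat" | awk '$1 == "rss" {print $2}')
echo "rss: $rss bytes"
```

Unlike memory.usage_in_bytes, the rss line excludes page cache, which is why it gives a more realistic number.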
--
Ullrich Horlacher          Server and Workstation Systems
Computing Center (Rechenzentrum)   E-Mail: horlac...@rus.uni-stuttgar
How can I determine how much memory a container uses currently?
memory.usage_in_bytes and memory.memsw.usage_in_bytes show far too much,
four times more than the host uses in total.
root@vms2:# cat /cgroup/flupp/memory.usage_in_bytes
4181008384
root@vms2:# cat /cgroup/flupp/memory.memsw.usage_in_
Quoting Francois-Xavier Bourlet (francois-xavier.bour...@dotcloud.com):
> and what about using xfs quota by project? has somebody tried it?
Not me, sounds promising though - best suggestion yet. Someone let
us know if it works :)
thanks,
-serge
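A sketch of what trying XFS project quotas might look like (untested in this thread; paths, names, and the project id are examples, and the filesystem must be mounted with the prjquota option):

```shell
# Assumed setup: /srv is an XFS filesystem mounted with -o prjquota.
# Map project id 42 to the container's rootfs directory (example path).
echo "42:/srv/container1" >> /etc/projects
echo "container1:42"      >> /etc/projid

# Initialize the project and set a 10 GB hard block limit on it.
xfs_quota -x -c 'project -s container1' /srv
xfs_quota -x -c 'limit -p bhard=10g container1' /srv

# Report per-project usage against the limits.
xfs_quota -x -c 'report -p' /srv
```

This caps a single container's directory tree without giving each container its own partition, which is exactly the problem raised earlier in the thread.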
--
and what about using xfs quota by project? has somebody tried it?
On Thu, May 19, 2011 at 8:04 AM, Serge Hallyn
wrote:
> Quoting Corin Langosch (cor...@gmx.de):
>> On 19.05.2011 11:18, Ulli Horlacher wrote:
>> After some time users install data on their vservers and so the
>> snapshots grow over time
Quoting Corin Langosch (cor...@gmx.de):
> On 19.05.2011 11:18, Ulli Horlacher wrote:
> After some time users install data on their vservers and so the
> snapshots grow over time.
>
> disc: 500 GB (one big lvm partition)
> lvm volume: 10 GB (has vserver base system installation)
> snapshot 1: 5 GB
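The layout quoted above can be reproduced with LVM commands roughly like this (volume group and volume names are examples; note that a snapshot's -L size caps how many changed blocks it can absorb before it becomes invalid, which is why growing snapshots are a concern):

```shell
# Example names; adjust vg0 and the volume names to your setup.
lvcreate -L 10G -n vserver-base vg0                 # base volume with the vserver installation
lvcreate -s -L 5G -n snap1 /dev/vg0/vserver-base    # snapshot, limited to 5 GB of changed blocks
lvs vg0                                             # shows size and snapshot allocation per volume
```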
I've used ZFS on Fuse before, with OpenVZ. The performance was horrible, but
the flexibility outweighs the cons for small setups. No space was lost, thanks
to volume management as well as deduplication support.
For serious setups, I'd recommend exporting ZFS over NFS from a Nexenta host
(or Free
On 19.05.2011 11:18, Ulli Horlacher wrote:
>
> But the underlying partition must be big enough to contain all LXC
> containers! How do you prevent a single container from allocating all free
> disk space?
I had no time to consult the man pages or to just give it a try. Have you
tried it? But I guess
On Thu 2011-05-19 (10:35), Corin Langosch wrote:
> > But how do you set up quotas for the snapshots?
> > One can limit the size of the whole LVM container, but this is the same as
> > using a regular disk partition (for all LXC containers).
>
> I'm by no means an lvm expert, but I would have gues
On 19.05.2011 09:59, Ulli Horlacher wrote:
>
> But how do you set up quotas for the snapshots?
> One can limit the size of the whole LVM container, but this is the same as
> using a regular disk partition (for all LXC containers).
I'm by no means an lvm expert, but I would have guessed from Hallyn
On Wed, 18 May 2011, Serge Hallyn wrote:
> dd if=/dev/zero of=/srv/container1.rootfs.img bs=1M skip=1 count=1
That ought to be seek=1, not skip. (you skip the input, seek the
output)
I'm not a fan of this though - if you create the image file(s) using dd
there is a good chance it's g
On Wed 2011-05-18 (13:59), Serge Hallyn wrote:
> For LVM volumes, you can create one canonical container which takes
> up the space, then create other containers as snapshots of it. The
> snapshot containers won't take up space until the container starts
> changing blocks.
But how do you set up
On 18.05.2011 20:59, Serge Hallyn wrote:
> Certainly not for loopback. Just make sure to create it as having
> a big hole in the middle, something like
>
> dd if=/dev/zero of=/srv/container1.rootfs.img bs=1M skip=1 count=1
>
Cool, I didn't know I could use sparse files for that. Good to know
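Putting the seek correction from earlier in the thread together, a small self-contained demonstration of a sparse image file (sizes and the /tmp path are examples):

```shell
# Create a ~64 MiB sparse image: seek past the hole on the OUTPUT,
# then write a single 1 MiB block at the end (seek=, not skip=).
img=$(mktemp /tmp/rootfs.XXXXXX.img)
dd if=/dev/zero of="$img" bs=1M seek=63 count=1 status=none

ls -l "$img"    # apparent size: 64 MiB
du -k "$img"    # actual allocation: only the final written block
rm -f "$img"
```

The apparent size (ls -l) is what mkfs will see when formatting the image, while du shows that almost no disk blocks are allocated until the container starts writing into the hole.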
I have written a script "lxc" which is a superset of some of the lxc-*
programs and adds some extra features. Maybe it is useful for others:
http://fex.rus.uni-stuttgart.de/download/lxc
root@vms2:~# lxc -h
usage: lxc option
options: -l list containers
-p list all container processes