On 18/09/13 16:43, Gary Ballantyne wrote:
> I am pretty sure that XFS needs to be *initially* mounted with the quota
> option --- but after rebooting I have lost the uquota.
Update:
If I create an ordinary (not lvm-backed) container, then shuffle things
around so that /var/lib/lxc/vm0/
Hi All
I have a container running over an XFS logical volume, and would like to
employ user-level disk quota.
This doesn't work, but it seems like I need something like:
mount -o remount,uquota /var/lib/lxc/vm0/rootfs/
The change seems to stick:
/dev/mapper/lxc-vm0 on /var/lib/lxc/vm0/rootfs t
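XFS only honors quota options at the initial mount, which is why a remounted uquota vanishes after a reboot. A minimal sketch of a persistent setup, assuming the device and mount point shown above:

# /etc/fstab -- apply uquota at the initial mount rather than via remount:
/dev/mapper/lxc-vm0  /var/lib/lxc/vm0/rootfs  xfs  defaults,uquota  0  0

# remount from scratch and confirm user quota accounting is on:
umount /var/lib/lxc/vm0/rootfs
mount /var/lib/lxc/vm0/rootfs
xfs_quota -x -c state /var/lib/lxc/vm0/rootfs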
On 14/03/13 16:31, Serge Hallyn wrote:
> Looks to me like the problem is a conflict between memory cgroup and
> xen:
Thanks Serge. This is the distro:
http://cloud-images.ubuntu.com/releases/raring/alpha-2/ (ami-c842608d).
And a stable version of quantal before that.
I will start by looking for
Hi All
I have an intermittent, but crippling, problem on a raring EC2 instance
(also on quantal). It's a (raring) lvm-backed container --- I use cgroups
directly (via /sys/fs) and iptables in the instance (not sure if that's
relevant at all).
Occasionally, when stopping or starting the container
On Fri, 1 Feb 2013 10:24:13 -0600
Serge Hallyn wrote:
>
> Did you actually test with a memory hog program? I just noticed there
> appears to be a bug in that if I
>
> d=/sys/fs/cgroup/memory/a
> mkdir $d
> echo 1 > $d/memory.use_hierarchy
> echo 5000 > $d/memory.limit_in_bytes
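The same knobs answer the "limit all containers collectively" question further down this list: applied to the parent lxc group, they cap every container at once. A sketch, assuming cgroup v1 and that use_hierarchy is enabled before any child group exists:

d=/sys/fs/cgroup/memory/lxc
echo 1 > $d/memory.use_hierarchy    # only settable while the group has no children
echo 4G > $d/memory.limit_in_bytes  # collective cap for everything under lxc/
cat $d/memory.limit_in_bytes        # the kernel reports the limit back in bytes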
On 01/02/13 02:33, lxc-users-requ...@lists.sourceforge.net wrote:
On 2013-01-31 07:41, Gary Ballantyne wrote:
> # echo '64M' > /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes
> # cat /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes (returns 67108864)
Dear Gary,
what'
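On Serge's "did you test with a memory hog" point: one crude test that needs no extra tooling (the head/tail pair is a suggestion here, not from the thread) is to pipe /dev/zero through tail. Since /dev/zero contains no newlines, tail must buffer the entire stream, so it allocates far past a 64M limit and should be OOM-killed if the cgroup is enforcing:

# run from a shell inside the limited group:
head -c 200M /dev/zero | tail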
On 12/01/13 08:49, Stéphane Graber wrote:
On 01/11/2013 01:17 PM, Gary Ballantyne wrote:
Hello All
I understand that I can limit the RAM of a single container via
lxc.cgroup.memory.limit_in_bytes. But, is there a way to limit the total
RAM available to all containers (without limiting each individually)?
Hello All
I understand that I can limit the RAM of a single container via
lxc.cgroup.memory.limit_in_bytes. But, is there a way to limit the total
RAM available to all containers (without limiting each individually)?
E.g., say we have 4G available. Rather than specifying a maximum number
of
Hi
I use "lxc.aa_profile = unconfined" to get the NFS client to work in a
container (precise host and container).
Is that the best approach?
Thanks
Gary
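Unconfined works, but it drops all AppArmor mediation for the container. A narrower sketch (the profile name and file path are hypothetical) is to extend the stock profile with an nfs mount rule, which requires an AppArmor build with mount mediation:

# /etc/apparmor.d/lxc/lxc-default-with-nfs (hypothetical name):
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs,
}

# reload the profile, then point the container config at it:
apparmor_parser -r /etc/apparmor.d/lxc/lxc-default-with-nfs
lxc.aa_profile = lxc-container-default-with-nfs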
Hi
Does anyone have experience with cachefilesd they can share?
I have an Ubuntu precise host and container (latest EC2 beta). I
installed cachefilesd on both host and container. cachefilesd starts on
the host, but not in the container.
The container's syslog complained about access to /dev/cac
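/dev/cachefiles is a misc device with a dynamically assigned minor, and containers typically get neither the node nor device-cgroup permission for it. A hedged sketch of the check (the minor 61 below is only an example; whether cachefilesd is even meaningful inside a container is a separate question, since fscache state is kernel-wide):

# on the host, find the minor the kernel assigned:
grep cachefiles /proc/misc
# create the node in the container, and allow it in the container config:
mknod /dev/cachefiles c 10 61
lxc.cgroup.devices.allow = c 10:61 rwm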
Hello List
Various templates have differing fstab definitions (at least for
Ubuntu). For example, [1] includes only /proc and /sys, [2] further adds
/dev/pts, and [3] further adds /var/lock and /var/run.
Could someone please explain the pros/cons of including more than /proc
and /sys? (which
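For concreteness, the fuller variant looks roughly like this in lxc fstab syntax (relative target paths are resolved against the container rootfs; exact options vary by template):

proc    proc     proc    nodev,noexec,nosuid  0 0
sysfs   sys      sysfs   defaults             0 0
devpts  dev/pts  devpts  defaults             0 0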
On 08/12/11 19:39, Daniel Lezcano wrote:
> On 12/08/2011 12:38 AM, Joseph Heck wrote:
>> I've been seeing a pause in the whole networking stack when starting
>> and stopping LXC - it seems to be somewhat intermittent, but happens
>> reasonably consistently the first time I start up the LXC.
>>
>> I
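One common cause of exactly this first-start pause (an assumption here; the thread is truncated before any diagnosis) is the bridge's STP forwarding delay: a freshly attached veth port sits in the learning state for several seconds before traffic flows. A quick check and workaround, assuming the bridge is br0:

brctl showstp br0 | grep -i 'forward delay'
brctl setfd br0 0    # start forwarding immediately on new ports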
Hello All,
Is there any known way for a non-root user who is ssh'd into a
container to attack the host (e.g. read a file, reboot the machine ...)?
From what I have read, the (potential) trouble seems to be with root
users. Is that true?
Many thanks,
Gary
On 16/08/11 06:52, Andre Nathan wrote:
> Hi Gary
>
> On Tue, 2011-08-16 at 06:38 +1200, Gary Ballantyne wrote:
>> Unfortunately, I am still getting the same errors with a little over 40
>> containers.
> I also had this problem. It was solved after Daniel suggested that I
On 15/08/11 19:52, Jäkel, Guido wrote:
>> Hi
>>
>> Going back through the list, I couldn't find whether this has been resolved.
>>
>> I had a similar problem today with a little over 40 containers:
>>
>> # lxc-start -n gary
>> lxc-start: Too many open files - failed to inotify_init
>> lxc-start: failed to add utmp handler to mainloop
Hi
Going back through the list, I couldn't find whether this has been resolved.
I had a similar problem today with a little over 40 containers:
# lxc-start -n gary
lxc-start: Too many open files - failed to inotify_init
lxc-start: failed to add utmp handler to mainloop
lxc-start: mainloop exited
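Each running container holds an inotify instance for utmp tracking, and inotify_init fails with EMFILE ("Too many open files") once the per-user instance cap is reached, independent of the open-file rlimit. A hedged sketch of the usual check and bump (the cap commonly defaults to 128):

sysctl fs.inotify.max_user_instances          # see the current cap
sysctl -w fs.inotify.max_user_instances=1024  # raise it now
echo 'fs.inotify.max_user_instances = 1024' >> /etc/sysctl.conf  # persist across reboots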
On 2/6/2011 3:56 PM, John Drescher wrote:
>> Is this important if, say, a malicious user has access to a container?
>> Or, can a container be configured such that they could do little harm?
>
> You can easily make a container have its own filesystem and no access
> to the host's filesystem or devices
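As a concrete illustration of that whitelisting (the major/minor numbers are the standard Linux assignments, not specific to this thread), the usual container config pattern denies everything and then allows a handful of nodes:

lxc.cgroup.devices.deny = a             # start from nothing
lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm    # /dev/zero
lxc.cgroup.devices.allow = c 1:8 rwm    # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm    # /dev/urandom
lxc.cgroup.devices.allow = c 5:1 rwm    # /dev/console
lxc.cgroup.devices.allow = c 136:* rwm  # /dev/pts/*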
On 2/6/2011 10:44 AM, Daniel Lezcano wrote:
> On 02/04/2011 07:24 PM, Andre Nathan wrote:
>> Hello
>>
>> Is it possible to have everything inside a container (including init,
>> getty and whatever daemons are installed) being run as a normal user?
>> That is, can I have a container with no root user
On 2/3/2011 1:47 PM, Trent W. Buck wrote:
> Gary Ballantyne writes:
>
>> # /usr/bin/lxc-execute -n foo -f
>> /usr/share/doc/lxc/examples/lxc-veth.conf /bin/bash
>>
>> The container fired up, and I could ping to/from the host. However, when
>> I left
On 2/2/2011 1:13 PM, Trent W. Buck wrote:
> Gary Ballantyne writes:
>
>> Would greatly appreciate any help getting the sshd template working on
>> my Ubuntu 9.10 host.
>
> I recommend you upgrade to 10.04 LTS and try again. 9.10 will be
> end-of-lifed by Canonical
On 2/1/2011 11:05 PM, Daniel Lezcano wrote:
> On 02/01/2011 12:16 AM, Gary Ballantyne wrote:
>> Hi
>>
>> Would greatly appreciate any help getting the sshd template working on
>> my Ubuntu 9.10 host.
>>
>> I can ssh to and from the container and hos
Hi
Would greatly appreciate any help getting the sshd template working on
my Ubuntu 9.10 host.
I can ssh to and from the container and host when the container is
generated by:
lxc-execute -n foo2 -f /usr/share/doc/lxc/examples/lxc/lxc-veth-gb.conf
/bin/bash
Here I have slightly modified the example
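For reference, the example config being modified is a short veth setup along these lines (the bridge name br0 and the address are assumptions; the bridge must already exist on the host):

lxc.utsname = foo2
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 10.0.3.2/24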