I am trying to use lxc in a production environment, but I cannot limit the guest's
network bandwidth. I followed the instructions below, but they do not take effect.
How do you limit the guest's network bandwidth?
# tc qdisc add dev virbr0 root handle 10: htb
# tc filter add dev virbr0 parent 10: p
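For reference, a minimal sketch of a complete HTB setup on the bridge. The interface name `virbr0` comes from the commands above; the guest address `192.168.122.10` and the 10 Mbit/s rate are hypothetical placeholders. Note that a root qdisc alone does nothing: HTB needs at least one class to shape into, and a filter to steer matching traffic into that class.

```shell
# Sketch only: cap traffic toward a guest at 192.168.122.10 (assumed
# address) to 10 Mbit/s via the bridge virbr0. Run as root.

# Root HTB qdisc; unclassified traffic falls through to class 10:999.
tc qdisc add dev virbr0 root handle 10: htb default 999

# Default class for all other traffic (effectively unshaped here).
tc class add dev virbr0 parent 10: classid 10:999 htb rate 1000mbit

# Class that caps the guest's bandwidth.
tc class add dev virbr0 parent 10: classid 10:1 htb rate 10mbit ceil 10mbit

# u32 filter: steer packets destined for the guest into the capped class.
tc filter add dev virbr0 parent 10: protocol ip prio 1 \
    u32 match ip dst 192.168.122.10/32 flowid 10:1
```

One caveat: shaping on `virbr0` with a destination match limits traffic flowing *toward* the guest. To limit the guest's outbound bandwidth, attach the qdisc to the container's host-side veth interface instead (or use ingress policing); which device to shape depends on which direction you want to cap.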
On Tue, 23 Oct 2012 20:03:33 +0200
olx69 wrote:
> >> > to be more precise, after the root/password prompt I get the
> >> > option:
> >> >
> >> > Would you like to enter a security context? [N]
> >>
> >> Looks like an SELinux problem? Can you try disabling SELinux in the
> >> host (and possibly in the guest as well) with "setenforce 0".
On 23.10.2012 20:10, olx69 wrote:
> In the lxc container I can now run
>
> [root@pgsql ~]# sestatus
> SELinux status:         enabled
> SELinuxfs mount:        /selinux
> Current mode:           enforcing
> Mode from config file:  disabled
> Policy version:         24
> Policy from config
>FWIW in my experience d
On 10/23/2012 12:05 AM, Michael H. Warfield wrote:
> On Mon, 2012-10-22 at 16:21 -0500, Serge Hallyn wrote:
>> Quoting Michael H. Warfield (m...@wittsend.com):
>>> On Mon, 2012-10-22 at 15:14 -0500, Serge Hallyn wrote:
How about just a devtmpfs? We actually now do this by default (as
On 10/23/2012 12:29 AM, Ulli Horlacher wrote:
> On Mon 2012-10-22 (14:53), Stéphane Graber wrote:
>
>> All in all, that's somewhere around 300-400 containers I'm managing
>
> How do you handle a host (hardware) failure?
Everything that runs in the container is in a configuration management
system.