Hi
Not sure if it's a good idea to announce this one day before my
vacation, but here it goes :)
I've written Ruby bindings for liblxc, available at
https://github.com/andrenth/ruby-lxc
Currently there are no docs, but a look at the unit tests can give an
idea of how to use the library. The b
On 09/18/2013 03:48 PM, Serge Hallyn wrote:
> Double-d'oh. The package in raring-proposed doesn't yet have the needed
> fix, which is below. It's in upstream git. Do you mind opening a new
> bug so we can SRU this?
Done: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1227313
Cheers,
Andre
Hi Serge
On 09/18/2013 01:55 PM, Serge Hallyn wrote:
> An unfortunate known bug - try the package in raring-proposed.
> (You'll need lxc-start to be running unconfined as well, but if
> that worked for you in precise I assume you already have that).
I am using that package (I reported those IPv6
Hello
In Ubuntu 12.04 I used to be able to create containers with this line in
the container's fstab:
proc /var/lib/lxc/test/rootfs/proc proc ro,nodev,noexec,nosuid 0 0
Now in 13.04 I get the following error:
$ sudo lxc-start -n test -f /var/lib/lxc/test/lxc.conf -lDEBUG -L /dev/stdout
lxc-star
Hello
I'm doing some tests with lxc and cgroup memory limits. From what I
understand from the documentation, the lxc.cgroup.memory.limit_in_bytes
limit is applied against the sum of the 'cache', 'rss' and 'mapped_file'
fields in the /sys/fs/cgroup/lxc/$container/memory.stat file; that is,
if that
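To double-check my reading of the numbers, I sum those three fields by hand. A throwaway sketch with made-up values (against a real container you would point awk at /sys/fs/cgroup/lxc/$container/memory.stat instead of the sample file):

```shell
# Sum the cache, rss and mapped_file fields of a memory.stat-style file.
# The sample values below are invented for illustration.
cat > /tmp/memory.stat.sample <<'EOF'
cache 1048576
rss 2097152
mapped_file 524288
swap 0
EOF
awk '$1 == "cache" || $1 == "rss" || $1 == "mapped_file" { sum += $2 }
     END { print sum }' /tmp/memory.stat.sample
```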
Hello
Reading the list I found this thread from 2011:
http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg01475.html
It describes how memory is shared among containers when one uses bind
mounts for binaries and libraries.
Does anyone know how the memory limit cgroup setting is aff
Hi Gary
On Tue, 2011-08-16 at 06:38 +1200, Gary Ballantyne wrote:
> Unfortunately, I am still getting the same errors with a little over 40
> containers.
I also had this problem. It was solved after Daniel suggested that I
increase the following sysctl setting:
fs.inotify.max_user_instances
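For the archives, the setting can be made persistent in sysctl.conf; the file name and value below are illustrative guesses, not a tested recommendation:

```
# /etc/sysctl.d/60-inotify.conf (file name and value are illustrative)
fs.inotify.max_user_instances = 1024
```

It can then be applied with `sudo sysctl -p /etc/sysctl.d/60-inotify.conf`.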
H
On Thu, 2011-08-04 at 08:44 -0300, Andre Nathan wrote:
> Is there a way then to just disable the networking part of Smack? The
> IPv6 rule I was trying to add was just to have unlabeled netw
Hello Casey
On Wed, 2011-08-03 at 22:21 -0700, Casey Schaufler wrote:
> Thus, IPv6 support for Smack is much harder than IPv4 support
> for Smack was. The difference is not between IPv6 and IPv4,
> rather it is the difference between IPsec and CIPSO.
Is there a way then to just disable the networ
Hi Mike
On Wed, 2011-08-03 at 17:52 -0400, Michael H. Warfield wrote:
> That's v4 syntax. Does it not work at all? Did you try this:
>
> echo ::/0 @ > /smack/netlabel
>
> Not having tried this myself at all, I'm just asking. If it doesn't
> work, that needs to be fixed but it's a SMACK bug.
Hi Olivier
On Wed, 2011-08-03 at 19:48 +0200, Mauras Olivier wrote:
> You're right, it won't work out of the box, sorry I forgot the network
> part.
>
> echo 0.0.0.0/0 @ > /smack/netlabel
Apparently this doesn't support IPv6... do you happen to know of a
workaround?
Thanks again,
Andre
--
Hi Olivier
On Tue, 2011-08-02 at 12:13 +0200, Mauras Olivier wrote:
> Here's a practical example:
> # smack_label.py -w -r /srv/lxc/lxc1 lxc1
> # echo "lxc1" > /proc/self/attr/current
> # lxc-start -n lxc1
> # echo "_" > /proc/self/attr/current
Does networking inside the containers work for you w
> container write to the host ;)
>
>
> To summarize, by default only setting a different label - without any
> complex configuration at all - to your containers will ensure you that
> a root inside a container could only have minimal impact and/or no
> impact on the host.
Hi Olivier
On Sun, 2011-07-31 at 16:42 +0200, Mauras Olivier wrote:
> Furthermore system has SMACK enabled - Simplified Mandatory Access
> Control - a label based MAC.
> Each LXC container has its files and processes labeled differently -
> Labels which can't write the host system default label, s
On Wed, 2011-03-02 at 14:24 +0100, Daniel Lezcano wrote:
> > I could paste my configuration files if you think it'd help you
> > reproducing the issue.
>
> Yes, please :)
Ok. The test host has a br0 interface which is not attached to any
physical interface:
auto br0
iface br0 inet static
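For anyone trying to reproduce this, a standalone bridge (no physical port attached) in /etc/network/interfaces looks roughly like the sketch below; the address is a placeholder, not my actual setup:

```
auto br0
iface br0 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
```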
On Mon, 2011-02-28 at 20:03 +0100, Daniel Lezcano wrote:
> I will try to reproduce the problem on my server (may take a couple of
> days to put in place).
I could paste my configuration files if you think it'd help you
reproducing the issue.
Thanks
Andre
---
Daniel,
Do you think trying a different network configuration for the containers
could help? I'm trying to get macvlan to work right now...
Thanks
Andre
--
On Mon, 2011-02-28 at 13:19 +0100, Daniel Lezcano wrote:
> Hmm, that sounds really weird ... What is the kernel version ?
I tried on 2.6.35 (ubuntu 10.10 "server" kernel) and 2.6.38 (recompiled
from the ubuntu natty package).
-
On Mon, 2011-02-28 at 11:22 +0100, Daniel Lezcano wrote:
> I am waiting some feedbacks from Andre with the kernel boot option. So
> he should say if he is facing the same problem with dnsmasq or not.
Sorry for the delay. I lost access to the machine during the weekend. I
had no luck with the kern
Hi Daniel
On Sat, 2011-02-26 at 18:34 +0100, Daniel Lezcano wrote:
> > The neighbor table still overflows though. Do you think increasing the
> > number even more would be worth a try?
>
> I don't know. Are the message occurrences the same? You can try to
> increase the number to see if it finish
On Sat, 2011-02-26 at 09:13 +0100, Daniel Lezcano wrote:
> > How many cpus do you have on your hardware ?
>
I'm running this on a dual quad-core system, so 8 cores total.
> Can you add to the kernel boot option:
>
> rhash_entries=2097152
>
>
> You should have in your console output:
>
On Fri, 2011-02-25 at 20:06 +0100, Geordy Korte wrote:
> Maybe a really stupid question... but why would you want to run that many
> containers?
The idea is to use containers as a lighter-weight approach to provide
isolation between customers (compared to hardware virtualization).
Andre
-
On Fri, 2011-02-25 at 13:13 -0300, Andre Nathan wrote:
> > Google says you can setup these tables with the following values if you
> > encounter this problem.
> >
> > echo 256 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
> > echo 512 > /proc/sys/net/ipv4/ne
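To keep them across reboots, the same neighbour-table thresholds can go in sysctl.conf; the values below just scale the quoted suggestion and are guesses for a many-container host, not something I have benchmarked:

```
# /etc/sysctl.d/60-neigh.conf (values are illustrative)
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
```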
On Fri, 2011-02-25 at 16:47 +0100, Daniel Lezcano wrote:
> Mmh, I don't remember exactly what I did (that was last year). But you
> are right, the containers were spawned one after the other. I think I
> was doing lxc-wait -n <name> -s RUNNING before running the next
> container. As I have 8 cores,
On Fri, 2011-02-25 at 08:06 +0100, Daniel Lezcano wrote:
> I did exactly the same configuration and ran 1024 containers.
By the way, how did you handle the start-up of that many containers? The
load average goes up very quickly unless I add a "sleep 1" between
lxc-start calls...
Did you have "Ne
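The throttling I'm doing is essentially the loop below (the "test%04d" naming is just an example, and I'm assuming lxc-start's -d daemon flag is available in your version):

```shell
#!/bin/sh
# Start containers one at a time, waiting for each to reach RUNNING
# and sleeping briefly so the load average doesn't spike.
# The "test%04d" naming scheme is hypothetical.
for i in $(seq 1 1000); do
    name=$(printf 'test%04d' "$i")
    lxc-start -n "$name" -d
    lxc-wait -n "$name" -s RUNNING
    sleep 1
done
```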
Hi Daniel
On Fri, 2011-02-25 at 08:06 +0100, Daniel Lezcano wrote:
> I did exactly the same configuration and ran 1024 containers.
> I didn't have to modify any ulimits for the container AFAIR, only to
> tweak /proc/sys limits.
Other than /proc/sys/fs/inotify/max_user_instances, do you remember
Hello
My container setup uses read-only bind-mounts from the host's key
directories (/bin, /sbin, /lib, /usr, parts of /etc and so on). It all
works fine in single-container tests.
Today I wrote a script that creates and starts a thousand containers,
all using the scheme above with the bind-mount
And of course I forgot the bugtracker link. Here it is.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=30655
On Thu, 2011-02-17 at 13:40 -0200, Andre Nathan wrote:
> Hello
>
> In order to minimize security risks, I'm running my ubuntu containers
> with a patched ver
Hello
In order to minimize security risks, I'm running my ubuntu containers
with a patched version of cron that allows it to run as an unprivileged
user. This, of course, only works if all the jobs are run by the same
user, but since my containers are single-user only it's no issue.
Incidentally,
Hello
I'm trying to setup bandwidth limiting for my container, using the
examples in RedHat's whitepaper:
http://vger.kernel.org/netconf2009_slides/Network%20Control%20Group%20Whitepaper.odt
The container is configured with the following line:
lxc.cgroup.net_cls.classid = 0x10002
And I ha
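In case it helps anyone else: classid 0x10002 maps to tc handle 1:2 (major:minor), so the matching host-side shaping would be along these lines. This is an untested sketch; eth0 and the 1mbit rate are placeholders, and it assumes the net_cls cgroup controller is mounted:

```shell
# Shape traffic from the container's net_cls cgroup
# (classid 0x10002 corresponds to tc class 1:2).
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:2 htb rate 1mbit
tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup
```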
On Mon, 2011-02-07 at 10:27 -0200, Andre Nathan wrote:
> So far, for a container running apache and cron, plus the usual stuff
> (init, getty, login), I managed to drop these:
>
> audit_control, audit_write, fowner, fsetid, ipc_lock, ipc_owner,
> lease, linux_immut
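In lxc.conf syntax, the complete names from that list would look roughly like this (multiple lxc.cap.drop lines accumulate; I've left out the truncated last entry):

```
lxc.cap.drop = audit_control audit_write fowner fsetid
lxc.cap.drop = ipc_lock ipc_owner lease
```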
On Mon, 2011-02-07 at 03:58 -0800, Dean Mao wrote:
> Yeah, would be nice to have this list -- I remember looking all over,
> but I didn't see lxc.console. Is there a comprehensive list of these
> "abilities"?
So far, for a container running apache and cron, plus the usual stuff
(init, getty, logi
On Mon, 2011-02-07 at 11:40 +1100, Trent W. Buck wrote:
> lxc.cap.drop=sys_admin should prevent all mount(2) calls within the
> container. It seems to work for me. In fact... I thought LXC *always*
> removed that capability, even if you never mentioned it?
Nice! Is there a list of capabilities
Hello
Let's say I have a file bind-mounted in read-only mode from the host to
the container. For example, /etc/resolv.conf.
In the container, I can use the mount command with the -oremount,rw
options and then edit the file from the container.
Is there a way to disable that behavior and forbid th
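For context, the mount in question is set up on the host along these lines (paths are examples; the remount pass is what makes the bind mount actually read-only):

```shell
# Host-side read-only bind mount (paths are examples).
mount --bind /etc/resolv.conf /var/lib/lxc/test/rootfs/etc/resolv.conf
mount -o remount,ro,bind /var/lib/lxc/test/rootfs/etc/resolv.conf
```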
Hello
Is it possible to have everything inside a container (including init,
getty and whatever daemons are installed) being run as a normal user?
That is, can I have a container with no root user in /etc/passwd?
Thanks
Andre
--
Hello
I have the following container network configuration:
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.0.2/24
lxc.network.name = eth0
When the container starts up, this is how its eth0 interface is
configured:
eth0 Link encap:Ethernet
On Thu, 2011-02-03 at 09:09 -0800, Dean Mao wrote:
> You can just add a new bridge with "brctl addbr br7" if you wanted to
> add a bridge 7... then configure it with "ifconfig br7 172.16.0.1
> netmask 255.255.255.0 up" and you'll have a new network on the same
> computer.
Didn't know that... I th
On Thu, 2011-02-03 at 09:13 -0200, Andre Nathan wrote:
> eth0 -> external network
> eth1 -> 10.0.0.0/16 network
> containers -> 192.168.0.0/16 network
Hmm, I managed to do this by creating a dummy interface and setting up
a bridge on it, so now I have
eth0 -> external networ
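The dummy-interface trick, spelled out as a sketch (interface names and the address are illustrative, not my exact setup):

```shell
# Create a bridge backed by a dummy interface so the containers get
# their own subnet without a physical NIC attached.
modprobe dummy
brctl addbr br1
brctl addif br1 dummy0
ip addr add 192.168.0.1/16 dev br1
ip link set dummy0 up
ip link set br1 up
```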
On Wed, 2011-02-02 at 12:07 -0800, Dean Mao wrote:
> Yeah, it's quite easy to do this. Here's my lxc network config from
> one of my machines:
>
>
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = br1
> lxc.network.ipv4 = 192.168.0.4/24
>
>
> My outside network is eth0/br
Hello
My host is configured with two networks as below:
eth0: external network a.b.c.d/24
eth1: internal network 10.1.0.0/16
I would like to configure my containers to belong to a third network
(say, 10.2.0.0/16), and then set up two NAT rules (one for eth0 and one
for eth1) to allow them to acc
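The two NAT rules I have in mind would look something like this; an untested sketch using the networks from the message above:

```shell
# NAT the container network (10.2.0.0/16) out of both host interfaces.
iptables -t nat -A POSTROUTING -s 10.2.0.0/16 -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.2.0.0/16 -o eth1 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
```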
r/proc proc defaults 0 0
> none /container/dev/pts devpts newinstance 0 0
>
>
> Sergio D. Troiano
> Development Team.
>
>
> Av. de los Incas 3085
> (C1426ELA) Capital Federal
>
>
> On Thu, 2011-01-20 at 13:51 -0200, Andre Nathan wrote:
> > O
On Thu, 2011-01-20 at 11:44 -0200, Sergio Daniel Troiano wrote:
> Sure, but there are a lot of things I have found about LXC. How far
> are you? Where are you stuck?
I'm just beginning with LXC... I have tried to use the lxc-sshd script
as a starting point, but I haven't got it to work yet.
On Thu, 2011-01-20 at 11:25 -0200, Sergio Daniel Troiano wrote:
> It is possible, I'm using LXC with its own inittab and lxc-start.
> Within it I'm running an Apache server and it works perfectly.
> I've tested LXC for a month, more or less, and I haven't had any problem.
>
> If you have got any doubt
Hello
I have the following scenario in mind: in a machine shared by a few
users, let each one control its own apache configuration by having an
application container for each user, with its own network interface and
apache configuration directory. Ideally, the apache instances would run
as the app