On 02/25/2011 03:39 PM, Andre Nathan wrote:
> On Fri, 2011-02-25 at 08:06 +0100, Daniel Lezcano wrote:
>> I did exactly the same configuration and ran 1024 containers.
> By the way, how did you handle the start-up of that many containers? The
> load average goes up very quickly unless I add a "sleep 1" between
> lxc-start calls...

Mmh, I don't remember exactly what I did (that was last year). But you 
are right, the containers were spawned one after the other. I think I 
was doing lxc-wait -n <name> -s RUNNING before starting the next 
container. As I have 8 cores, booting them did not take very long 
(about 2 minutes).
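
Roughly something like this, I suppose (the "node$i" names are only an 
example, adjust to your own naming scheme):

    # start the containers one at a time, waiting for each to reach RUNNING
    for i in $(seq 1 1024); do
        lxc-start -n node$i -d
        lxc-wait  -n node$i -s RUNNING
    done

That keeps only one container booting at a time, so the load average 
stays reasonable.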

I was using dnsmasq as a DNS and DHCP server and sending the hostname 
as an identifier in the DHCP requests, so I was able to reach each 
container without having to track its IP address / MAC address. But I 
noticed dnsmasq was collapsing and taking a very long time to hand out 
an IP address after ~ 600 containers.
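
The client side of that is just a host-name option in the container's 
dhclient.conf (assuming the containers run dhclient; the name below is 
only an example):

    send host-name "node1";

dnsmasq registers the name from the DHCP request in its DNS, so you can 
reach the container by hostname without knowing which address it got.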

The first 500 containers were taking 2 minutes to start.
The next 500 containers were taking 6 minutes to start.

> Did you have "Neighbour table overflow" errors? I tried increasing
> net.ipv4.neigh.default.gc_thresh{1,2,3} but that didn't fix the problem.

I didn't notice the message, but maybe I missed it.

Google says you can set these thresholds to the following values if you 
encounter this problem:

echo 256 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
echo 512 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
echo 1024 > /proc/sys/net/ipv4/neigh/default/gc_thresh3
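
To make them persistent across reboots, the equivalent entries in 
/etc/sysctl.conf would be (same values, just written as sysctl keys):

    net.ipv4.neigh.default.gc_thresh1 = 256
    net.ipv4.neigh.default.gc_thresh2 = 512
    net.ipv4.neigh.default.gc_thresh3 = 1024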

Do these settings fix your problem?


