Hello

My container setup uses read-only bind-mounts of the host's key
directories (/bin, /sbin, /lib, /usr, parts of /etc and so on). It
works fine in single-container tests.
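For reference, each container's config points at an fstab file of
bind-mount entries roughly like the following (paths and the container
name are simplified examples, the real list is longer):

    # in the container config
    lxc.mount = /var/lib/lxc/NAME/fstab

    # in that fstab
    /bin  /var/lib/lxc/NAME/rootfs/bin  none  ro,bind  0 0
    /usr  /var/lib/lxc/NAME/rootfs/usr  none  ro,bind  0 0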

Today I wrote a script that creates and starts a thousand containers,
all using the bind-mount scheme above. After roughly 40 containers
have started, I begin getting "too many open files" errors and
subsequent container start-up attempts fail.
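The script is essentially just a loop over lxc-create/lxc-start
(container names and the config path below are simplified):

    for i in $(seq 1 1000); do
        lxc-create -n c$i -f /etc/lxc/shared.conf
        lxc-start  -n c$i -d
    done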

On the host, /proc/sys/fs/file-nr shows a count of open files well
below the configured system-wide maximum, so it must be something
related to the containers.
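This is what I'm checking on the host (the third field of file-nr is
the system-wide limit, i.e. fs.file-max):

    cat /proc/sys/fs/file-nr    # allocated, unused, maximum
    cat /proc/sys/fs/file-max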

My question is: how does ulimit work in the context of lxc? I noticed
that even after raising the limit in the shell before running
lxc-start, the containers still showed a lower limit. Editing
/etc/security/limits.conf didn't help either, so maybe I'm missing
something at container start-up?
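Concretely, what I tried was along these lines (container name is just
an example), and the limit seen inside the container stayed at the old
value:

    ulimit -n 65536
    lxc-start -n c1 -d

    # then, from a shell inside the container:
    ulimit -n    # still shows the lower value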

Also, is the limit really independently set per-container? If so, and if
a single container works fine, why would this error occur with multiple
containers?

I'm using Linux 2.6.35 with lxc 0.7.2.

Thanks in advance
Andre

