On 2011-11-14 16:57, Tobias Oberstein wrote:
> I am trying to convince Python to open more than 32k files .. this is on
> FreeBSD.
>
> Now I know I have to set appropriate limits .. I did:
>
> $ sysctl kern.maxfiles
> kern.maxfiles: 204800
> $ sysctl kern.maxfilesperproc
> kern.maxfilesperproc: 200000
> $ sysctl kern.maxvnodes
> kern.maxvnodes: 200000
> $ ulimit
> unlimited
>
> Here is what happens with a Python freshly built from sources .. it'll tell
> me I can open 200k files .. but will bail out at 32k:
I'm not familiar with BSD, but Linux has similar kernel options. Those kernel options might be *global* flags that set the total upper limit of open file descriptors for the entire system, not for a single process. Also, on Linux a plain "ulimit" doesn't display the fd limit; you have to use "ulimit -n".

Why do you need more than 32k file descriptors anyway? That's an insanely high number of FDs. Most programs need fewer than 100, and the default of 1024 on my Linux servers is usually high enough. I've never raised the fd limit above 8192, and our biggest installation serves more than 80 TB of data in about 20 to 25 million files.

Christian

--
http://mail.python.org/mailman/listinfo/python-list
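For what it's worth, you can check (and try to raise) the per-process fd limit from inside Python itself via the standard-library resource module, which mirrors what "ulimit -n" reports. A minimal sketch, assuming a POSIX system (Linux or FreeBSD):

```python
# Query the per-process open-file limit (RLIMIT_NOFILE) from Python.
# The soft limit is what actually constrains open(); it can be raised
# up to the hard limit without privileges.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft fd limit:", soft)
print("hard fd limit:", hard)

# Raising the soft limit to the hard limit (uncomment to try it;
# raising the *hard* limit itself normally requires root):
# resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

If this reports 200k but opens still fail around 32k, the ceiling is coming from somewhere other than the rlimit, e.g. a compile-time constant in Python or the C library rather than the kernel settings shown above.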