On Thu, Jan 22, 2015 at 11:54:49AM +0100, Michael Haggerty wrote:

> > +run_with_limited_open_files () {
> > +   (ulimit -n 32 && "$@")
> > +}
> 
> Regarding the choice of "32", I wonder what is the worst-case number of
> open file descriptors that are needed *before* counting the ones that
> are currently wasted on open loose-reference locks. On Linux it seems to
> be only 4 with my setup:
> 
>     $ (ulimit -n 3 && git update-ref --stdin </dev/null)
>     bash: /dev/null: Too many open files
>     $ (ulimit -n 4 && git update-ref --stdin </dev/null)
>     $
> 
> This number might depend a little bit on details of the repository, like
> whether config files import other config files. But as long as the
> "background" number of fds required is at least a few less than 32, then
> your number should be safe.
> 
> Does anybody know of a platform where file descriptors are eaten up
> gluttonously by, for example, each shared library that is in use or
> something? That's the only thing I can think of that could potentially
> make your choice of 32 problematic.

It's not just the choice of platform. There could be inherited descriptors
in the environment (e.g., the test suite is being run by a shell that
keeps a pipe to a CI server open, or something). And the test suite itself
uses several extra descriptors for hiding and showing output.
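
For what it's worth, on Linux you can get a rough feel for that
background count by listing what a freshly spawned child already has
open (just an illustration; /proc is Linux-specific, and the listing
also includes the descriptor that ls itself opens to read the
directory):

    $ sh -c 'ls /proc/self/fd'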

I think this is the sort of thing that we have to determine with a mix
of guessing and empiricism.  4 is almost certainly too low. 32 looks
"pretty big" in practice but not so big that it will make the test slow.
I think our best bet is probably to ship it and see if anybody reports
problems while the patch cooks.  Then we can bump the number (or find a
new approach) as necessary.
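
Just to sketch the shape I have in mind (names like
ULIMIT_FILE_DESCRIPTORS and the exact number of refs are illustrative,
not necessarily what the patch ends up with):

    # skip the test where ulimit -n cannot be lowered at all
    test_lazy_prereq ULIMIT_FILE_DESCRIPTORS 'run_with_limited_open_files true'

    test_expect_success ULIMIT_FILE_DESCRIPTORS 'many ref updates under a low fd limit' '
            for i in $(test_seq 33)
            do
                    echo "create refs/heads/branch-$i HEAD"
            done >input &&
            run_with_limited_open_files git update-ref --stdin <input
    '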

-Peff