Anthony Liguori wrote:
> Kevin Wolf wrote:
> > We're leaking file descriptors to child processes. Set FD_CLOEXEC on file
> > descriptors that don't need to be passed to children to stop this
> > misbehaviour.
> >
> > Signed-off-by: Kevin Wolf <kw...@redhat.com>
>
>     pid = fork();
>     if (pid == 0) {
>         int open_max = sysconf(_SC_OPEN_MAX), i;
>
>         for (i = 0; i < open_max; i++) {
>             if (i != STDIN_FILENO &&
>                 i != STDOUT_FILENO &&
>                 i != STDERR_FILENO &&
>                 i != fd) {
>                 close(i);
>             }
>
> Handles this in a less invasive way.  I think the only problem we have
> today is that we use popen() for exec: migration.  The solution to that
> though should be to convert popen to a proper fork/exec() with a pipe.
>
> I'd prefer to introduce a single fork/exec helper that behaved properly
> instead of having to deal with cloexec everywhere.
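For reference, a rough and untested sketch of what such a fork/exec-with-a-pipe
helper could look like.  The name fork_exec_pipe() is made up for illustration,
it's not existing qemu code:

#include <sys/types.h>
#include <unistd.h>

/* Run "/bin/sh -c cmd" with its stdin connected to the returned fd,
 * so the caller can write the migration stream to it.  Returns -1 on
 * error; on success *child_pid is set for a later waitpid(). */
static int fork_exec_pipe(const char *cmd, pid_t *child_pid)
{
    int fds[2];
    pid_t pid;

    if (pipe(fds) < 0) {
        return -1;
    }

    pid = fork();
    if (pid < 0) {
        close(fds[0]);
        close(fds[1]);
        return -1;
    }

    if (pid == 0) {
        int open_max = sysconf(_SC_OPEN_MAX), i;

        /* Child: the read end of the pipe becomes stdin of the command. */
        dup2(fds[0], STDIN_FILENO);

        /* Close everything else so no qemu fds leak into the command. */
        for (i = 0; i < open_max; i++) {
            if (i != STDIN_FILENO &&
                i != STDOUT_FILENO &&
                i != STDERR_FILENO) {
                close(i);
            }
        }

        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(1);
    }

    /* Parent: keep only the write end. */
    close(fds[0]);
    *child_pid = pid;
    return fds[1];
}

The child only ever sees stdin/stdout/stderr plus the pipe end it needs, and
the parent gets back a single fd to hand to the migration code.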
The above can be a bit slow when sysconf(_SC_OPEN_MAX) == 131072, which
you get if running qemu from some web servers, or from user environments
set up to run web servers...

But it's not _that_ slow on a modern machine on Linux - 10^7 closes per
second has been measured.  Still a bit slow if it's INT_MAX :-)

A scalable method on Linux is readdir(/proc/self/fd).  (I'm not sure
readdir returns everything reliably if you close fds while reading, so
what I do is read only to find the largest open fd value, then close
all fds up to that value.)

Or just copy the closefrom() implementation from openssh/sudo.
Interestingly, that says "We avoid checking resource limits since it is
possible to open a file descriptor and then drop the rlimit such that
it is below the open fd..." but then uses _SC_OPEN_MAX, which I think
on Glibc checks the resource limits...

-- 
Jamie
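For illustration, an untested sketch of the /proc/self/fd scan described
above - just the idea, not code taken from qemu, openssh or sudo:

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Close every fd except stdin/stdout/stderr.  Scan /proc/self/fd once
 * to find the highest open fd, then close everything up to it, instead
 * of looping all the way to sysconf(_SC_OPEN_MAX). */
static void close_all_fds(void)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *de;
    int max_fd = -1, i;

    if (dir == NULL) {
        /* No /proc - fall back to the _SC_OPEN_MAX loop. */
        max_fd = sysconf(_SC_OPEN_MAX) - 1;
    } else {
        while ((de = readdir(dir)) != NULL) {
            int fd = atoi(de->d_name);   /* "." and ".." parse as 0, harmless */
            if (fd > max_fd) {
                max_fd = fd;
            }
        }
        closedir(dir);   /* also releases the fd opendir() itself used */
    }

    for (i = 0; i <= max_fd; i++) {
        if (i != STDIN_FILENO && i != STDOUT_FILENO && i != STDERR_FILENO) {
            close(i);
        }
    }
}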