[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-02-22 Thread s7v7nislands
s7v7nislands added the comment: thanks, neologix. I think we should put this hint in the Python docs.

[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-02-22 Thread Charles-Francois Natali
Charles-Francois Natali added the comment: To elaborate on this, to my knowledge there's no portable and reliable way to close all open file descriptors. Even with the current code, it's still possible that some FDs aren't properly closed, since getconf(SC_OPEN_MAX) often returns RLIMIT_NOFILE
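
To make the limitation concrete, here is a minimal sketch (my own illustration, not the actual subprocess code) of the close-everything loop under discussion; it trusts sysconf("SC_OPEN_MAX"), which on most systems just reflects the current RLIMIT_NOFILE soft limit:

    import os

    def close_fds_naive():
        # subprocess-style loop: close every descriptor above stderr, up to
        # whatever sysconf reports as the maximum number of open files.
        max_fd = os.sysconf("SC_OPEN_MAX")  # usually the RLIMIT_NOFILE soft limit
        for fd in range(3, max_fd):
            try:
                os.close(fd)
            except OSError:
                pass  # fd was not open

    # If RLIMIT_NOFILE is lowered after higher-numbered descriptors were
    # opened, those descriptors sit above max_fd and the loop never reaches
    # them, so they leak into the child.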

[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-02-22 Thread Charles-Francois Natali
Charles-Francois Natali added the comment: dup(2) returns the lowest-numbered available file descriptor: if there's a discontinuity in the FD allocation, this code is going to close only the FDs up to the first available FD. Imagine for example the following:
open("/tmp/foo") = 3
open("/tmp/b
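
A minimal sketch of that failure mode (my own illustration, using /dev/null instead of the files in the example):

    import os

    # dup() hands back the lowest *available* descriptor, so using it to
    # estimate the highest open FD breaks as soon as the table has a hole.
    a = os.open(os.devnull, os.O_RDONLY)   # typically fd 3
    b = os.open(os.devnull, os.O_RDONLY)   # typically fd 4
    os.close(a)                            # leave a gap at 3

    guess = os.dup(0)                      # returns 3, the gap, not one past 4
    os.close(guess)
    print("dup-based guess:", guess, "- but fd", b, "is still open")
    os.close(b)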

[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-02-22 Thread s7v7nislands
Changes by s7v7nislands: Added file: http://bugs.python.org/file20835/python27.patch

[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-02-22 Thread s7v7nislands
New submission from s7v7nislands: when using popen*() with close_fds=True, Python closes the unused FDs, but MAXFD is not the real maximum. On FreeBSD especially, subprocess.MAXFD is 655000, so Python tries to close far too many FDs and it's too slow; in my test on FreeBSD it took about 3 seconds. poo
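
For context, the upper bound in question is obtained roughly as sketched below. The /dev/fd alternative is my own illustration of the kind of hint discussed later in the thread (it assumes /proc/self/fd on Linux or a mounted fdescfs on FreeBSD), not necessarily the fix that was adopted:

    import os

    # Roughly how the upper bound is obtained; on FreeBSD it can be huge,
    # so the close-all loop issues hundreds of thousands of close() calls
    # even when only a handful of descriptors are open.
    try:
        MAXFD = os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError):
        MAXFD = 256

    # One commonly suggested alternative: close only the descriptors that
    # are actually open, by listing /dev/fd instead of counting to MAXFD.
    def close_open_fds(keep={0, 1, 2}):
        for name in os.listdir("/dev/fd"):
            fd = int(name)
            if fd not in keep:
                try:
                    os.close(fd)
                except OSError:
                    pass  # already closed (e.g. listdir's own directory fd)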