Bugs item #1663329, was opened at 2007-02-19 11:17
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1663329&group_id=5470

Please note that this message contains a full copy of the comment thread
for this request, including the initial issue submission,
not just the latest update.
Category: Performance
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: H. von Bargen (hvbargen)
Assigned to: Nobody/Anonymous (nobody)
Summary: subprocess/popen close_fds performs poorly if SC_OPEN_MAX is high

Initial Comment:
If the value of sysconf("SC_OPEN_MAX") is high
and you try to start a subprocess with subprocess.py or os.popen2 with 
close_fds=True, then starting the other process is very slow.
This boils down to the following code in subprocess.py:
        def _close_fds(self, but):
            for i in xrange(3, MAXFD):
                if i == but:
                    continue
                try:
                    os.close(i)
                except:
                    pass

and the similar code in popen2.py:
    def _run_child(self, cmd):
        if isinstance(cmd, basestring):
            cmd = ['/bin/sh', '-c', cmd]
        for i in xrange(3, MAXFD):
            try:
                os.close(i)
            except OSError:
                pass
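
A quick way to see the cost of this pattern is to time the loop in
isolation. A minimal benchmark sketch (run it in a throwaway process,
since it really does close descriptors 3 and up; the MAXFD value is the
one from the Solaris report below):

    import os
    import time

    MAXFD = 260000  # the SC_OPEN_MAX value reported below

    start = time.time()
    for i in xrange(3, MAXFD):
        try:
            os.close(i)   # almost every call fails with EBADF...
        except OSError:
            pass          # ...and each failure raises a Python exception
    print "closed range in %.2f seconds" % (time.time() - start)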

There has already been one optimization (range was replaced by xrange to 
reduce the memory impact), but I think the real problem is that for high 
values of MAXFD, most of the os.close() calls fail, and each failure raises 
an exception, which is an "expensive" operation.
It has already been suggested to add a C implementation called "rclose" or 
"close_range" that tries to close all FDs in a given range (min, max) without 
the overhead of Python exception handling.
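
For illustration, a helper of exactly that shape exists as
os.closerange(fd_low, fd_high) in Python 2.6 and later: it closes every
descriptor in [fd_low, fd_high) at C level, swallowing EBADF without
raising into Python. A sketch of how _close_fds could be rewritten on top
of it, keeping the "but" argument:

    import os

    try:
        MAXFD = os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError):
        MAXFD = 256

    def _close_fds(but):
        # close 3..MAXFD-1 except 'but', with no per-fd exception handling
        os.closerange(3, but)
        os.closerange(but + 1, MAXFD)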

I'd like to emphasize that this is not a theoretical problem but a real-world 
one:
We have a Python application in a production environment on Sun Solaris. Some 
other software running on the same server needed a high value of 260000 for 
SC_OPEN_MAX (set with ulimit -n XXX or in some /etc/ file; I don't know which 
one).
Suddenly, calling any other process with subprocess.Popen(..., close_fds=True) 
took 14 seconds (!) instead of a few microseconds.
This caused a huge performance degradation, since the subprocess itself 
needs only a few seconds.

See also:
Patches item #1607087, "popen() slow on AIX due to large FOPEN_MAX value".
It contains a fix, but only for AIX, and I think the patch does not support 
the "but" argument used in subprocess.py.
The correct solution should be coded in C and should do the same as the 
_close_fds routine in subprocess.py.
It could be optimized to use (operating-system-specific) system calls that 
close all handles from (but+1) to MAXFD, such as "closefrom" or "fcntl", as 
proposed in the patch.
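
Where no closefrom()-style call is available, another way to avoid probing
every descriptor up to MAXFD is to close only the descriptors that are
actually open, as listed under /proc/self/fd (available on Linux and
Solaris). A sketch of that approach, using a hypothetical helper name
_close_open_fds and the same "but" argument:

    import os

    def _close_open_fds(but):
        # only touch fds that are really open, instead of all of MAXFD
        for name in os.listdir("/proc/self/fd"):
            fd = int(name)
            if fd > 2 and fd != but:
                try:
                    os.close(fd)
                except OSError:
                    # the fd used by listdir itself is already gone
                    pass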


----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-02-21 00:45

Message:
Logged In: YES 
user_id=21627
Originator: NO

Wouldn't it be simpler for you to just not pass close_fds=True to popen?

----------------------------------------------------------------------
