[issue21595] Creating many subprocess generates lots of internal BlockingIOError

2014-05-28 Thread Sebastian Kreft

New submission from Sebastian Kreft:

Using asyncio.create_subprocess_exec generates lots of internal error messages. These messages are:

Exception ignored when trying to write to the signal wakeup fd:
BlockingIOError: [Errno 11] Resource temporarily unavailable

Whether the messages appear depends on how many subprocesses are active at the same time. On my system (Debian 7, kernel 3.2.0-4-amd64, Python 3.4.1), with 3 or fewer processes running at a time I don't see any problem, but with 4 or more I get lots of messages.

On the other hand, these error messages seem to be innocuous, as no exception 
seems to be raised.

Attached is a test script that shows the problem.

It is run as:
bin/python3.4 test_subprocess_error.py MAX_PROCESSES ITERATIONS

It requires the du command to be available.


Let me know if there are any (conceptual) mistakes in the attached code.
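
(The attached file is not reproduced here; the following is only a rough sketch, written for Python 3.4, of what such a test script could look like. The helper name run_du and the du arguments are assumptions.)

    import asyncio
    import sys

    @asyncio.coroutine
    def run_du(sem):
        # A semaphore bounds the number of subprocesses running at once.
        with (yield from sem):
            proc = yield from asyncio.create_subprocess_exec(
                'du', '-s', '/usr/lib',
                stdout=asyncio.subprocess.PIPE)
            yield from proc.communicate()

    @asyncio.coroutine
    def main(max_processes, iterations):
        sem = asyncio.Semaphore(max_processes)
        # asyncio.wait() wraps the coroutines in tasks and runs them all.
        yield from asyncio.wait([run_du(sem) for _ in range(iterations)])

    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main(int(sys.argv[1]), int(sys.argv[2])))
        loop.close()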

--
files: test_subprocess_error.py
messages: 219288
nosy: Sebastian.Kreft.Deezer
priority: normal
severity: normal
status: open
title: Creating many subprocess generates lots of internal BlockingIOError
versions: Python 3.4
Added file: http://bugs.python.org/file35385/test_subprocess_error.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21595
___

[issue21595] Creating many subprocess generates lots of internal BlockingIOError

2014-05-28 Thread STINNER Victor

STINNER Victor added the comment:

The "Exception ignored when trying to write to the signal wakeup fd" message comes from the signal handler in Modules/signalmodule.c. The problem is that Python gets a lot of SIGCHLD signals (the test script creates 300+ processes per second on my computer). The producer (the signal handler writing the signal number into the self pipe) is faster than the consumer (the BaseSelectorEventLoop._read_from_self callback).
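
For context, the wakeup machinery is roughly wired like this (a simplified sketch of what the selector event loop sets up, not the actual asyncio code):

    import signal
    import socket

    # The event loop owns a private, non-blocking socket pair (the "self pipe").
    ssock, csock = socket.socketpair()
    ssock.setblocking(False)
    csock.setblocking(False)

    # The C-level signal handler writes each received signal number to the
    # write end...
    signal.set_wakeup_fd(csock.fileno())

    # ...and the loop registers ssock with its selector, so _read_from_self()
    # runs to drain it when the loop wakes up.  If signals arrive faster than
    # the loop drains ssock, the socket buffer fills up and the handler's
    # write fails with EAGAIN, producing the "Exception ignored" message.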

The attached patch should reduce the risk of seeing the "Exception ignored when trying to write to the signal wakeup fd" message. The patch reads all pending bytes from the self pipe, instead of just trying to read a single byte.
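
Concretely, the idea is to turn the single recv() into a drain loop, along these lines (a sketch of the patched behaviour, not the patch text itself):

    def _read_from_self(self):
        # Drain everything the signal handler wrote since the last wakeup.
        while True:
            try:
                data = self._ssock.recv(4096)
                if not data:
                    break
            except InterruptedError:
                continue
            except BlockingIOError:
                break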

The test script doesn't write the error message anymore when the patch is 
applied (the script creates more than 300 processes per second).

The patch doesn't completely solve the issue. Other possible enhancements:

* Add a flag in the signal handler to record that a signal was received, and write only a single byte until the flag is reset to False. That would avoid filling the pipe. It requires a custom signal handler implemented in C, different from the signal handlers of the Python signal module.

* Give a higher priority to the callbacks of signal handlers. Asyncio doesn't support priorities on callbacks right now.

* Increase the size of the pipe. On Linux, it looks like fcntl(fd, F_SETPIPE_SZ, size) can be used. The maximum size is given by /proc/sys/fs/pipe-max-size (e.g. 1 MB on my Fedora 20). See the sketch after this list.
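
For that last point, the call would look roughly like this from Python (a sketch only: F_SETPIPE_SZ is not exposed by the fcntl module in 3.4, so the Linux value 1031 is hard-coded here, and it only applies if the wakeup fd really is a pipe):

    import fcntl

    F_SETPIPE_SZ = 1031  # Linux-specific fcntl command, from <fcntl.h>

    def grow_pipe(fd, size=1024 * 1024):
        # Ask the kernel for a bigger pipe buffer; the effective maximum is
        # /proc/sys/fs/pipe-max-size.  Returns the size actually set.
        return fcntl.fcntl(fd, F_SETPIPE_SZ, size)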

--
keywords: +patch
nosy: +giampaolo.rodola, gvanrossum, haypo, pitrou, yselivanov
Added file: http://bugs.python.org/file35388/asyncio_read_from_self.patch


[issue21595] Creating many subprocess generates lots of internal BlockingIOError

2014-05-28 Thread STINNER Victor

STINNER Victor added the comment:

BaseProactorEventLoop._loop_self_reading() uses an overlapped read of 4096 
bytes. I don't understand how it wakes up the event loop. When the operation is 
done, _loop_self_reading() is scheduled with call_soon() by the Future object. 
Is it enough to wake up the event loop?

Is BaseProactorEventLoop correct?

--

Oh, I forgot to explain this part of asyncio_read_from_self.patch:

+            data = self._ssock.recv(4096)
+            if not data:
+                break

This break should never occur: it could only happen if _ssock were no longer non-blocking, and that would be a bug, because this pipe is private and set to non-blocking at its creation.

I chose to add the check anyway because it should not hurt, just in case (and it avoids an unbounded busy loop).

--
