[issue21594] asyncio.create_subprocess_exec raises OSError

2014-06-02 Thread Sebastian Kreft

Sebastian Kreft added the comment:

I agree that blocking is not ideal; however, there are already other methods 
that can potentially block forever, and for those cases a timeout is provided. 
A similar approach could be used here.

I think this method should retry until it can actually acquire the resources, 
because knowing when and how many file descriptors are going to be used is 
very implementation-dependent. Handling the retry logic on the application 
side would therefore probably be very inefficient, as a lot of information is 
missing and the subprocess mechanism is a black box.
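
To illustrate, something along these lines is what I have in mind -- just a 
rough sketch written as a wrapper around create_subprocess_exec, using 
Python 3.4-style coroutines; the wrapper name, the timeout and poll values, 
and the assumption that the error surfaces as EMFILE/ENFILE are mine:

    import asyncio
    import errno

    @asyncio.coroutine
    def create_subprocess_exec_retry(*args, timeout=30.0, poll=0.5, **kwargs):
        # Keep trying to spawn the subprocess until it succeeds or `timeout`
        # seconds have elapsed, mirroring the timeouts of other asyncio waits.
        loop = asyncio.get_event_loop()
        deadline = loop.time() + timeout
        while True:
            try:
                proc = yield from asyncio.create_subprocess_exec(*args, **kwargs)
                return proc
            except OSError as exc:
                # Only retry when the error means "out of file descriptors".
                if exc.errno not in (errno.EMFILE, errno.ENFILE):
                    raise
                if loop.time() >= deadline:
                    raise
                yield from asyncio.sleep(poll)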

[issue21594] asyncio.create_subprocess_exec raises OSError

2014-06-02 Thread STINNER Victor

Changes by STINNER Victor victor.stin...@gmail.com:


--
nosy: +giampaolo.rodola, gvanrossum, pitrou, yselivanov

[issue21594] asyncio.create_subprocess_exec raises OSError

2014-06-02 Thread Guido van Rossum

Guido van Rossum added the comment:

I'm not sure. Running out of file descriptors is really not something a
library can handle on its own -- this needs to be kicked back to the app to
handle, e.g. by pacing itself, closing some connections, or changing the
system limit... The library really can't know what to do, and just waiting
until the condition magically clears seems like asking for mysterious hangs.
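
Pacing could be as simple as bounding the number of live children with an
asyncio.Semaphore. A rough sketch (the limit of 100 is arbitrary; pick it
from the fd limit and how many descriptors each child needs):

    import asyncio

    MAX_CHILDREN = 100  # arbitrary cap on concurrently running subprocesses
    child_slots = asyncio.Semaphore(MAX_CHILDREN)

    @asyncio.coroutine
    def run_paced(*cmd):
        yield from child_slots.acquire()
        try:
            proc = yield from asyncio.create_subprocess_exec(*cmd)
            returncode = yield from proc.wait()
            return returncode
        finally:
            # Release the slot only after the child has exited.
            child_slots.release()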


[issue21594] asyncio.create_subprocess_exec raises OSError

2014-06-02 Thread STINNER Victor

STINNER Victor added the comment:

> I agree that blocking is not ideal; however, there are already other
> methods that can potentially block forever, and for those cases a timeout
> is provided.

Functions like read() can block for several minutes, but that is expected
from network functions. Blocking until the application releases a file
descriptor is more surprising.


> I think this method should retry until it can actually acquire the
> resources,

You can easily implement this in your application.

> knowing when and how many file descriptors are going to be used is very
> implementation-dependent

I don't think that asyncio is the right place to handle file descriptors.

Usually, the file descriptor limit is around 1024. How did you reach such a
high limit? How many processes are running at the same time? asyncio should
not leak file descriptors. Maybe it's a bug in your application?
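
For reference, the limit can be checked from Python, and the soft limit can
be raised up to the hard limit without extra privileges (the 4096 below is an
arbitrary example):

    import resource

    # Per-process file descriptor limit: (soft, hard).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("RLIMIT_NOFILE: soft=%d hard=%d" % (soft, hard))

    # Raising the soft limit can help while checking whether descriptors are
    # really leaking or the workload simply needs more of them.
    if hard == resource.RLIM_INFINITY or hard >= 4096:
        resource.setrlimit(resource.RLIMIT_NOFILE, (4096, hard))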

I'm now closing the bug.

--
resolution:  -> wont fix
status: open -> closed

[issue21594] asyncio.create_subprocess_exec raises OSError

2014-05-28 Thread Sebastian Kreft

New submission from Sebastian Kreft:

In some cases asyncio.create_subprocess_exec raises an OSError because there 
are no file descriptors available.

I don't know if that is expected, but IMO it would be better to just block 
until the required number of fds is available. Otherwise one would need to do 
this handling in the application, which is not a trivial task.

This issue is happening on Debian 7, with a 3.2.0-4-amd64 kernel and Python 
3.4.1 compiled from source.
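
To illustrate the failure mode, a heavily simplified sketch (the command and
the concurrency level are made up): with enough children alive at once, each
holding a few descriptors for its pipes, create_subprocess_exec starts
failing with OSError ("Too many open files"):

    import asyncio

    @asyncio.coroutine
    def run_one():
        proc = yield from asyncio.create_subprocess_exec(
            "sleep", "10",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE)
        yield from proc.wait()

    loop = asyncio.get_event_loop()
    # 2000 concurrent children; return_exceptions=True collects the OSErrors
    # instead of aborting the whole gather on the first failure.
    results = loop.run_until_complete(
        asyncio.gather(*(run_one() for _ in range(2000)),
                       return_exceptions=True))
    print(sum(1 for r in results if isinstance(r, OSError)), "spawns failed")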

--
messages: 219285
nosy: Sebastian.Kreft.Deezer
priority: normal
severity: normal
status: open
title: asyncio.create_subprocess_exec raises OSError
versions: Python 3.4

[issue21594] asyncio.create_subprocess_exec raises OSError

2014-05-28 Thread STINNER Victor

STINNER Victor added the comment:

> I don't know if that is expected, but IMO it would be better to just block
> until the required number of fds is available.

Does that mean it could block forever? It sounds strange to try to make such 
an error silent.

Why not retry on such an error in your application? asyncio has no idea how 
to release file descriptors.
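
The application, unlike asyncio, knows which of its own children are holding
descriptors, so it can react, for example by letting one of them finish and
then trying again. A rough sketch (running_tasks is a hypothetical set of
subprocess tasks the application maintains, and treating the error as EMFILE
is an assumption):

    import asyncio
    import errno

    @asyncio.coroutine
    def spawn_when_possible(cmd, running_tasks, attempts=10):
        for _ in range(attempts):
            try:
                proc = yield from asyncio.create_subprocess_exec(*cmd)
                return proc
            except OSError as exc:
                if exc.errno != errno.EMFILE or not running_tasks:
                    raise
                # Let at least one existing child finish, freeing descriptors.
                yield from asyncio.wait(running_tasks,
                                        return_when=asyncio.FIRST_COMPLETED)
        raise OSError(errno.EMFILE, "still out of file descriptors after retries")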

--
nosy: +haypo
