STINNER Victor added the comment:

"I agree that blocking is not ideal, however there are already some other 
methods that can eventually block forever, and for such cases a timeout is 
provided."

Functions like read() can "block" for several minutes, but that is expected from network functions. Blocking until the application releases a file descriptor is more surprising.


"I think this method should retry until it can actually access the resources,"

You can easily implement this in your application.
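For example, a minimal sketch of such an application-level retry, assuming the failing call raises OSError with errno.EMFILE when the process is out of file descriptors (the helper name retry_on_emfile and the retry policy are hypothetical, not part of asyncio):

```python
import errno
import time

def retry_on_emfile(func, *args, retries=5, delay=0.5, **kwargs):
    """Call func(), retrying when the process is out of file descriptors.

    Hypothetical helper: retries only on EMFILE ("too many open files"),
    sleeping between attempts to give the application time to release
    descriptors. Any other error, or exhausting the retries, re-raises.
    """
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except OSError as exc:
            if exc.errno != errno.EMFILE or attempt == retries - 1:
                raise
            time.sleep(delay)
```

A coroutine-based variant using asyncio.sleep() would follow the same shape.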

"knowing when and how many files descriptors are going to be used is very 
implementation dependent"

I don't think that asyncio is the right place to handle file descriptors.

Usually, the file descriptor limit is around 1024. How did you reach such a high limit? How many processes are running at the same time? asyncio should not "leak" file descriptors. Maybe it's a bug in your application?

I'm now closing the bug.

----------
resolution:  -> wont fix
status: open -> closed

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21594>
_______________________________________