The asyncio module already has subprocess support: Subprocesses — Python
3.9.1 documentation
<https://docs.python.org/3/library/asyncio-subprocess.html>

Was that not sufficient to solve your problem?
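[For reference, the documented API linked above lets you spawn and await a child process without blocking the event loop. A minimal sketch, assuming a Unix-like system with `echo` on the PATH:]

```python
import asyncio

async def run_echo():
    # Spawn a child process and capture its stdout asynchronously.
    proc = await asyncio.create_subprocess_exec(
        "echo", "hello",
        stdout=asyncio.subprocess.PIPE,
    )
    # communicate() awaits the child's exit without blocking the loop.
    out, _ = await proc.communicate()
    return out.decode().strip()

print(asyncio.run(run_echo()))
```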


On Mon, Dec 28, 2020 at 5:23 AM Roger Iyengar <raiye...@cs.cmu.edu> wrote:

> I believe that asyncio should have a way to wait for input from a
> different process without blocking the event loop.
>
> The asyncio module currently contains a Queue class that allows
> communication between multiple coroutines running on the same event loop.
> However, this class is neither thread-safe nor process-safe.
>
> The multiprocessing module contains Queue and Pipe classes that allow
> inter-process communication, but there's no way to directly read from these
> objects without blocking the event loop.
>
> I propose adding a Pipe class to asyncio that is process-safe and can be
> read from without blocking the event loop. This was discussed a bit here:
> https://github.com/python/cpython/pull/20882#issuecomment-683463367
>
> This could be implemented using the multiprocessing.Pipe
> class. multiprocessing.connection.Connection.fileno() returns the file
> descriptor used by a pipe. We could then use loop.add_reader() to set
> an asyncio.Event when something has been written to the pipe by the other
> process. I did this all manually in a project I was working on. However,
> this required me to learn a considerable amount about asyncio. It would
> have saved me a lot of time if there was an easy documented way to wait for
> input from another process in a non-blocking way.
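[The manual approach described above might be sketched as follows. This is a minimal illustration, not the author's actual code, and it assumes a Unix selector event loop, since `loop.add_reader()` is not supported by the Windows proactor loop:]

```python
import asyncio
import multiprocessing

def worker(conn):
    # Runs in the child process; in the neural-network use case this
    # would load the model once and then answer requests in a loop.
    conn.send("result from worker")
    conn.close()

async def wait_for_result():
    parent_conn, child_conn = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=worker, args=(child_conn,))
    proc.start()

    loop = asyncio.get_running_loop()
    event = asyncio.Event()
    # Wake the event loop when the pipe's file descriptor becomes readable.
    loop.add_reader(parent_conn.fileno(), event.set)

    await event.wait()  # other coroutines keep running while we wait
    loop.remove_reader(parent_conn.fileno())
    msg = parent_conn.recv()  # data is already available, returns immediately
    proc.join()
    return msg

print(asyncio.run(wait_for_result()))
```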
>
> One compelling use case for this is a server that uses asyncio, which
> receives inputs from clients, then sends these to another process that runs
> a neural network. The server then sends the client a result after the
> neural network finishes. ProcessPoolExecutor does not seem like a good fit
> for this use case, because the process needs to stay alive and be re-used
> for subsequent requests. Starting a new process for each request is
> impractical, because loading the neural network into GPU memory is an
> expensive operation. See here for an example of such a server (though that
> one is mostly written in C++ and does not use asyncio):
> https://www.tensorflow.org/tfx/guide/serving


-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/2HH6R3T5ME5QW3LHELC5F4YSUSPWSBVF/
Code of Conduct: http://python.org/psf/codeofconduct/