Thanks for the continued speedy research, Tom!

Weighing in on the design of an ASGI-direct protocol, the main issue I've
had at this point is not HTTP (as there's a single request message and the
body stuff could be massaged in somehow), but WebSocket, where you have
separate "connect", "receive" and "disconnect" events.

Because of those separate events, handling them all in one
consumer/function, even in an async style, isn't possible under anything
that works the same overall way as ASGI does at the base level. Instead,
it would have to be modified substantially so that the server bundled all
of those events together into a single asyncio function, and we'd have to
stick a key on each message to make it clear what it was.
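
To make that concrete, the single-function shape would have to look
something like this (purely illustrative - the "type" key and message
shapes here are made up, not part of the current spec):

    # One async callable receives every WebSocket event for a connection,
    # dispatching on a hypothetical "type" key attached by the server.
    async def websocket_application(message, send):
        if message["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif message["type"] == "websocket.receive":
            # echo back whatever the client sent
            await send({"type": "websocket.send",
                        "text": message.get("text", "")})
        elif message["type"] == "websocket.disconnect":
            pass  # per-connection cleanup would go here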

The thing I wanted to investigate, and which I started making progress
towards at PyCon, was keeping the same basic architecture as ASGI (that is,
server -> channel layer <- consumer), but stripping it way back to the
in-memory layer with a very small capacity, so it basically just acts as an
async-to-async channel, in the same way channels work in, for example, Go.

This resulted in me adding receive_async() to the memory layer, so that
part works, but it's a relatively slow implementation as it has to coexist
with the rest of the layer, which works synchronously. I suspect there is
potential for a very fast async-only layer that can trigger the await
that's hanging in a receive_async() directly from a send() to a related
channel, rather than parking the message in memory and waiting for it to be
picked up. This could be done either with generic channel names as in the
spec ("websocket.connect"), ending up with a server that has fewer worker
contexts/threads than waiting sockets, or with specific channel names per
connection ("websocket.connect?j13DkW2"), spinning up one handling function
per connection per message type, which seems sub-optimal.
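
Roughly the kind of fast path I have in mind - a sketch only, with the
method signatures simplified relative to the real layer API:

    import asyncio
    from collections import defaultdict

    class AsyncOnlyLayer:
        """Sketch: send() resolves the Future that a pending
        receive_async() is awaiting, instead of parking the message
        in memory for later pickup."""

        def __init__(self):
            self._waiters = defaultdict(list)   # channel -> pending Futures
            self._backlog = defaultdict(list)   # tiny per-channel buffer

        async def send(self, channel, message):
            waiters = self._waiters[channel]
            while waiters:
                future = waiters.pop(0)
                if not future.done():
                    future.set_result(message)  # wakes receive_async() directly
                    return
            self._backlog[channel].append(message)

        async def receive_async(self, channel):
            if self._backlog[channel]:
                return self._backlog[channel].pop(0)
            future = asyncio.get_event_loop().create_future()
            self._waiters[channel].append(future)
            return await future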

Otherwise, it seems the only ways to re-engineer it are either to shove all
the message types for a protocol into a single async handling function,
which seems less than ideal to me, or to start down the path of having
these things represented as classes with callables on them - so you'd call
consumer.connect(), consumer.receive(), etc. - which is probably my
preferred design for keeping the event separation nice and clean.
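
Something roughly along these lines, as a sketch only (the constructor
argument and method names here are made up, not a concrete proposal):

    class EchoWebsocketConsumer:
        """One instance per connection; each event type gets its own
        method rather than a shared dispatch function."""

        def __init__(self, send):
            self.send = send          # coroutine used to reply to the client

        async def connect(self, message):
            await self.send({"accept": True})

        async def receive(self, message):
            # echo back whatever the client sent
            await self.send({"text": message.get("text", "")})

        async def disconnect(self, message):
            pass                      # per-connection cleanup would go here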

Andrew

On Thu, Jun 1, 2017 at 3:18 AM, Tom Christie <christie....@gmail.com> wrote:

> I've been doing some initial work on a Gunicorn worker process that
> interfaces with an ASGI consumer callable, rather than a WSGI callable.
>
>     https://github.com/tomchristie/asgiworker
>
> In the standard channels setup, the application is run behind a message
> bus...
>
>     Protocol Server -> Channels <- Worker Process -> ASGI Consumer
>
> In the Gunicorn worker implementation above, we're instead calling the
> consumer interface directly...
>
>     Protocol Server -> ASGI Consumer
>
> There are a few things that look promising here...
>
> 1. The ASGI consumer interface is suitable for async application
> frameworks, in a way that WSGI necessarily can't be.
>
> In WSGI the response gets returned when the callable returns; you can't
> queue an asynchronous task to perform some work later.
> With an ASGI consumer, the messaging interface style means that you can
> push tasks onto the event loop and return immediately.
> In short, you can use async...await under ASGI.
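>
> As a rough illustration of the pattern this enables (assuming the
> consumer is invoked from inside the event loop, and using the message /
> reply_channel interface from the current spec):
>
>     import asyncio
>
>     def consumer(message):
>         # Schedule the real work and return immediately; the response
>         # goes back via the reply channel when the coroutine finishes.
>         asyncio.ensure_future(respond(message))
>
>     async def respond(message):
>         await asyncio.sleep(0)  # stand-in for real asynchronous work
>         message.reply_channel.send({
>             "status": 200,
>             "headers": [[b"content-type", b"text/plain"]],
>             "content": b"Hello, world!",
>         })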
>
> 2. The uvloop and httptools implementations are seriously speedy.
>
> For comparative purposes, here are plaintext "hello world" benchmarks
> against a few different implementations on my MacBook Air:
>
> wrk -d20s -t10 -c200 http://127.0.0.1:8080/
>
>                           Throughput Latency (stddev)
> Go                      44,000 req/s     6ms      92%
> uvloop+httptools, ASGI  33,000 req/s     6ms      67%
> meinheld, WSGI          16,000 req/s    12ms      91%
> Node                     9,000 req/s    22ms      91%
>
> As application developers those baselines aren't typically a priority, but
> if we want Python web frameworks to be able to nail the same kinds of
> services that Node and Go currently excel at, then having both async
> support and a low framework overhead *is* important.
>
> It's not immediately clear to me whether any of this is directly
> interesting to Django land or not. The synchronous nature of the framework
> means that having the separation of async application servers and
> synchronous workers behind a channel layer makes a lot of sense. Though
> you could perfectly well run a regular HTTP Django application on top of
> this implementation (replacing wsgi.py with an asgi.py that uses
> ASGIHandler) and be no worse off for it. (Sure, you're running blocking
> operations in the context of an event loop, but that's no worse than
> running blocking operations in a standard WSGI configuration.)
>
> However, it is valuable if you want to be able to write HTTP frameworks
> that support async...await, or if you want to support websockets and don't
> require the kind of broadcast functionality that a channel layer provides.
>
> ---
>
> At the moment I'm working against the ASGI consumer interface as it's
> currently specified. There are a few things that I'm interested in next:
>
> 1. If there'd be any sense in mandating that the ASGI callable *may* be a
> coroutine (which would require an asyncio worker or server implementation).
> 2. If there'd be any sense in including `.loop` as either a mandatory or
> an optional attribute on a channel layer that supports the asyncio
> extension.
> 3. Andrew's mentioned that he's been considering an alternative that maps
> more simply onto WSGI; I'd really like to see what he's thinking there.
> 4. Response streaming isn't a problem - you can send multiple messages
> back to the channel, and run that off the event loop. However, I've not
> quite got my head around how you handle streaming request bodies, or how
> you'd invert the interface so that from the application perspective
> there's something like `chunk = await body.read()` available (a rough
> sketch of one possible shape follows after this list).
> 5. One other avenue of interest here might be whether it's worth
> considering bringing ASGIHandler out of channels and into Django core, so
> that we can expose either an ASGI consumer callable or a WSGI callable
> interface to Django, with `runworker` being only one of a number of
> possible ASGI deployments.
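>
> One possible shape for that body-read inversion, purely as a sketch (the
> queue-fed Body object below is hypothetical, not anything in the spec):
>
>     import asyncio
>
>     class Body:
>         """The server feeds request chunks in as they arrive; the
>         application awaits them back out with `await body.read()`."""
>
>         def __init__(self):
>             self._chunks = asyncio.Queue()
>
>         def feed(self, data):
>             # Called by the server for each incoming body chunk; an
>             # empty bytes object signals the end of the request body.
>             self._chunks.put_nowait(data)
>
>         async def read(self):
>             # Returns b"" once the final (empty) chunk is consumed.
>             return await self._chunks.get()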
>
> Plenty to unpack here, feedback on any aspects most welcome!
>
> Cheers,
>
>   T :)
>
