#33699: Read ASGI request body from asyncio queue on-demand
-------------------------------+------------------------------------
     Reporter:  Noxx           |                    Owner:  none
         Type:  New feature    |                   Status:  closed
    Component:  HTTP handling  |                  Version:  dev
     Severity:  Normal         |               Resolution:  wontfix
     Keywords:  ASGI, async    |             Triage Stage:  Accepted
    Has patch:  1              |      Needs documentation:  0
  Needs tests:  0              |  Patch needs improvement:  1
Easy pickings:  0              |                    UI/UX:  0
-------------------------------+------------------------------------
Comment (by Klaas van Schelven):

 I have the feeling I'm missing something here, but I really don't see it.

 What does "disconnect handling" mean in this context? Disconnect handling
 by the ASGI server or by the ASGI application?

 From my perspective as an application developer, there's not much to
 _handle_, I think? I just want to _trigger_ a disconnect. I fail to see
 why this could not simply be done by letting go of the constraint that
 the request body must be consumed before sending a response. I.e., for
 this particular issue I don't think changing the type of `body` to a file
 descriptor is actually required.

 Responding to a quote in the mentioned thread:

 > The concerns with consuming the body (potentially unnecessarily, but
 then why send it?)

 Because it's not always up to the client to determine what should be
 consumed by the server. Enforcement of quota / max message sizes is my
 personal use-case.
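
 As a sketch of that use-case (the app name, the quota value, and the fake
 receive/send driver below are mine for illustration, not from the ticket):
 a bare ASGI app that counts request-body bytes as chunks arrive and
 answers 413 the moment the quota is exceeded, without reading the rest of
 the body.

```python
import asyncio

MAX_BODY = 1024  # hypothetical per-request quota


async def quota_app(scope, receive, send):
    # Count request-body bytes as chunks arrive and reject the request
    # with 413 as soon as the quota is exceeded, leaving the remainder
    # of the body unread.
    received = 0
    while True:
        message = await receive()
        if message["type"] == "http.disconnect":
            return
        received += len(message.get("body", b""))
        if received > MAX_BODY:
            await send({"type": "http.response.start", "status": 413,
                        "headers": []})
            await send({"type": "http.response.body",
                        "body": b"Payload Too Large"})
            return
        if not message.get("more_body", False):
            break
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"OK"})


async def demo():
    # Fake server side: a 1200-byte upload in two chunks; the second
    # chunk trips the quota, so the app responds before the body ends.
    incoming = [
        {"type": "http.request", "body": b"x" * 600, "more_body": True},
        {"type": "http.request", "body": b"x" * 600, "more_body": False},
    ]
    sent = []

    async def receive():
        return incoming.pop(0)

    async def send(message):
        sent.append(message)

    await quota_app({"type": "http"}, receive, send)
    return sent


sent = asyncio.run(demo())
```

 Whether a real server can then deliver that early response while the
 client is still sending, and close the connection cleanly afterwards, is
 exactly the question the quoted thread is about.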

 Another use-case (not concerning disconnects) is to reduce latency for big
 uploads, because processing (and possibly even sending back the result)
 can start earlier.

 But my main point would be: given ASGI's stated goals, it's _very
 surprising_ that it's not possible to implement (for HTTP) a streaming
 echo server.
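
 For concreteness, here is what such a streaming echo server looks like in
 bare ASGI terms (the app name and the fake receive/send driver are mine
 for illustration): it interleaves reading request-body chunks with
 sending response-body chunks instead of buffering the full body first,
 which is the pattern the ticket argues Django's handler currently rules
 out.

```python
import asyncio


async def streaming_echo(scope, receive, send):
    # Echo each request-body chunk back as a response-body chunk,
    # interleaving receive and send rather than consuming the whole
    # body before the response starts.
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/octet-stream")],
    })
    more_body = True
    while more_body:
        message = await receive()
        if message["type"] == "http.disconnect":
            return
        more_body = message.get("more_body", False)
        await send({
            "type": "http.response.body",
            "body": message.get("body", b""),
            "more_body": more_body,
        })


async def demo():
    # Fake server side: two request chunks, and a list capturing what
    # the application sends back.
    incoming = [
        {"type": "http.request", "body": b"hello ", "more_body": True},
        {"type": "http.request", "body": b"world", "more_body": False},
    ]
    sent = []

    async def receive():
        return incoming.pop(0)

    async def send(message):
        sent.append(message)

    await streaming_echo({"type": "http"}, receive, send)
    return sent


sent = asyncio.run(demo())
```

 Note that the first response chunk goes out while request chunks are
 still pending, so nothing here requires the body to be held in memory.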

 Anyway, this is moving into "someone is wrong on the internet" territory
 for me; I ran into this while evaluating the streaming capabilities of
 various server setups and have not tied myself to an ASGI setup yet, so
 for me the take-away is simply that ASGI is not a good fit for my use-
 case.

 Background regarding the HTTP/1.1 spec, and the idea of send-before-full-
 receive (outside of the ASGI context):
 https://stackoverflow.com/questions/14250991/is-it-acceptable-for-a-server-to-send-a-http-response-before-the-entire-request
-- 
Ticket URL: <https://code.djangoproject.com/ticket/33699#comment:10>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.
