On Thu, Jun 11, 2015 at 11:59 AM, Glyph <gl...@twistedmatrix.com> wrote:

> On Jun 11, 2015, at 2:05 AM, Martin Teichmann <martin.teichm...@gmail.com>
> wrote:
>
> StreamWriter.drain cannot be called from different tasks (asyncio tasks
> that is)
> at the same time. It raises an assertion error. I added a script that
> shows this problem.
>
>
> In my opinion, this is fine.  It should probably be some exception other
> than AssertionError, but it should be an exception.  Do not try to
> manipulate a StreamWriter or a StreamReader from multiple tasks at once.
> If you encourage people to do this, it is a recipe for corrupted output
> streams and interleaved writes.
>

Agreed. These streams should not be accessed "concurrently" by different
coroutines. A single coroutine that repeatedly calls write() and eventually
drain() has full control over how many bytes it writes before calling
drain(), so it can easily ensure that the memory needed for buffering is
strictly bounded. But if multiple independent coroutines engage in this
pattern on the same stream, the amount of buffer space is not under the
control of any single coroutine. Even calling drain() after each write does
not prevent this (as Martin's original test script shows -- both coroutines
end up calling write() before anything else happens). For the same reason I
don't really think it matters whether you call drain() before or after
write() -- either way you're alternating between write() and drain().
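A minimal sketch of the single-controller pattern described above: if several
producers must share one stream, serializing each write()+drain() pair with an
asyncio.Lock keeps drain() from being entered by two tasks at once and keeps
records from interleaving. (The local sink server, the producer names, and the
message format are illustrative assumptions, not from this thread.)

```python
import asyncio

async def main():
    received = bytearray()
    done = asyncio.Event()

    # Tiny local sink server so the sketch is self-contained.
    async def handle(r, w):
        while True:
            chunk = await r.read(1024)
            if not chunk:
                break
            received.extend(chunk)
        done.set()

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)

    lock = asyncio.Lock()  # one lock per stream, shared by all producers

    async def producer(name, n):
        for i in range(n):
            async with lock:
                # Holding the lock across the write()+drain() pair means
                # drain() is never entered by two tasks at once, and no
                # other producer can interleave bytes into this record.
                writer.write(f"{name}:{i}\n".encode())
                await writer.drain()

    await asyncio.gather(producer("a", 3), producer("b", 3))
    writer.close()
    await writer.wait_closed()
    await done.wait()  # server saw EOF, so every byte was received
    server.close()
    await server.wait_closed()
    return received.decode().splitlines()

lines = asyncio.run(main())
print(lines)
```

Equivalently, funneling all messages through one queue consumed by a single
writer coroutine achieves the same effect without a lock.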

Finally, take all of this with a grain of salt, because the flow control
protocol is complicated -- drain()'s behavior is affected by
set_write_buffer_limits(), and there is additional buffering in the OS
socket layer (which can in turn be tuned with setsockopt(), but you rarely
need that -- even if you can control the buffering in your own OS, you
can't control the buffering in the rest of the network or on the receiving
host -- so we don't provide a direct interface to it).

-- 
--Guido van Rossum (python.org/~guido)
