On 11 October 2017 at 02:52, Guido van Rossum <gu...@python.org> wrote:

> I think we really need to do more soul-searching before we decide that a
> much more complex semantics and implementation is worth it to maintain
> backwards compatibility for leaking in via next().
>

As a less-contrived example, consider context managers implemented as
generators.

We want those to run with the execution context that's active when they're
used in a with statement, not the one that's active when they're created
(the fact that generator-based context managers can only be used once
mitigates the risk of creation time context capture causing problems, but
the implications would still be weird enough to be worth avoiding).
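A quick sketch of that behaviour, using the `contextvars` API as it later shipped in Python 3.7 via PEP 567 (the variable names here are purely illustrative): the body of a generator-based context manager doesn't run until the with statement drives it, so it sees context changes made after the manager was created:

```python
import contextlib
import contextvars

var = contextvars.ContextVar("var", default="outer")

@contextlib.contextmanager
def show_context():
    # This body runs when the with statement calls __enter__,
    # so it observes the context active at use time.
    yield var.get()

cm = show_context()   # created while var is still "outer"
var.set("inner")      # context change between creation and use
with cm as seen:
    pass              # seen == "inner": use-time context, not creation-time
```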

For native coroutines, we want them to run with the execution context
that's active when they're awaited or when they're prepared for submission
to an event loop, not the one that's active when they're created.
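The same pattern for a native coroutine, again sketched with the eventual Python 3.7 `contextvars` API and made-up names; driving it by hand with `send()` stands in for an event loop here:

```python
import contextvars

request_id = contextvars.ContextVar("request_id", default="none")

async def handler():
    # The body only executes once the coroutine is driven, so it
    # observes the context active at await time, not creation time.
    return request_id.get()

coro = handler()            # created while request_id is still "none"
request_id.set("req-42")    # context change before the coroutine runs
try:
    coro.send(None)         # drive it by hand, no event loop needed
except StopIteration as exc:
    result = exc.value      # "req-42"
```

(For the "prepared for submission to an event loop" case, asyncio ended up copying the context when the Task wrapping the coroutine is created.)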

For generators-as-coroutines, we want them to be like their native
coroutine counterparts: run with the execution context that's active when
they're passed to "yield from" or prepared for submission to an event loop.
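Illustrating the yield from case with a plain generator used as a coroutine (hypothetical names, and the shipped `contextvars` API rather than the draft one): the delegated-to generator only starts executing when the delegating frame is resumed, so it sees the context active at that point:

```python
import contextvars

label = contextvars.ContextVar("label", default="unset")

def legacy_coro():
    # A generator-based coroutine: it doesn't execute until
    # something iterates or delegates into it.
    yield
    return label.get()

def driver():
    # yield from delegates into legacy_coro in the current context
    result = yield from legacy_coro()
    return result

g = driver()
g.send(None)               # start: enters legacy_coro, pauses at its yield
label.set("at-delegation") # context change while suspended
try:
    g.send(None)           # resume: legacy_coro sees the updated context
except StopIteration as exc:
    result = exc.value     # "at-delegation"
```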

It's only for generators-as-iterators that the question of what behaviour
we want even really arises, as it's less clear cut whether we'd be better
off overall if they behaved more like an eagerly populated container (and
hence always ran with the execution context that's active when they're
created), or more like the way they do now (where retrieval of the next
value from a generator is treated like any other method call).
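The "like any other method call" option is the easy one to demonstrate (again with the `contextvars` API that eventually shipped, and illustrative names): each next() call resumes the generator frame in whatever context the caller currently has:

```python
import contextvars

mode = contextvars.ContextVar("mode", default="initial")

def gen():
    # Each next() resumes this frame in the caller's current
    # context, just like an ordinary method call would run in it.
    while True:
        yield mode.get()

g = gen()
first = next(g)      # runs up to the first yield: "initial"
mode.set("changed")  # context change between retrievals
second = next(g)     # resumes in the updated context: "changed"
```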

That combination of use cases across context managers, native coroutines,
top-level event loop tasks, and generator-based coroutines means we already
need to support both execution models regardless, so the choice of default
behaviour for generator-iterators won't make much difference to the overall
complexity of the PEP.

However, having generator-iterators default to *not* capturing their
creation context makes them more consistent with the other lazy evaluation
constructs, and also makes the default ContextVar semantics more consistent
with thread local storage semantics.

The flipside of that argument would be:

* the choice doesn't matter if there aren't any context changes between
creation & use
* having generators capture their context by default may ease future
migrations from eager container creation to generators in cases that
involve context-dependent calculations
* decorators clearing the implicitly captured context from the
generator-iterator when appropriate is simpler than writing a custom
iterator wrapper to handle the capturing
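For comparison, the "custom iterator wrapper" mentioned above is still only a few lines; this sketch uses `copy_context()` and `Context.run()` (the shipped PEP 567 equivalents of the draft `run_in_execution_context()` API), with a hypothetical wrapper class:

```python
import contextvars

var = contextvars.ContextVar("var", default="outer")

class CapturedIterator:
    """Resume a wrapped iterator in the context captured at wrap time."""

    def __init__(self, inner):
        self._inner = inner
        self._ctx = contextvars.copy_context()  # snapshot the creation context

    def __iter__(self):
        return self

    def __next__(self):
        # Run the next() call inside the captured snapshot, so the
        # wrapped generator resumes in the creation-time context.
        return self._ctx.run(next, self._inner)

def values():
    while True:
        yield var.get()

it = CapturedIterator(values())  # captured while var == "outer"
var.set("inner")                 # later change in the caller's context
first_seen = next(it)            # still "outer": creation-time context wins
```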

I just don't find that counterargument compelling when we have specific use
cases that definitely benefit from the proposed default behaviour
(contextlib.contextmanager, asyncio.coroutine), but no concrete use cases
for the proposed alternative that couldn't be addressed by a combination of
map(), functools.partial(), and contextvars.run_in_execution_context().

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
