Yury Selivanov added the comment:
New changeset 57b698886b47bb81c782c2ba80a8a72fe66c7aad by Elvis Pranskevichus
in branch '3.8':
[3.8] bpo-32751: Wait for task cancel in asyncio.wait_for() when timeout <= 0
(GH-21895) (#21967)
https://github.com/python/cpython/com
Yury Selivanov added the comment:
New changeset 6e1954cd8286e083e7f8d09516d91b6b15769a4e by Miss Islington (bot)
in branch '3.8':
bpo-37658: Fix asyncio.wait_for() to respect waited task status (GH-21894)
(#21965)
https://github.com/python/cpython/commit
Yury Selivanov added the comment:
New changeset a2118a14627256197bddcf4fcecad4c264c1e39d by Elvis Pranskevichus
in branch 'master':
bpo-37658: Fix asyncio.wait_for() to respect waited task status (#21894)
https://github.com/python/cpython/commit/a2118a14627256197bddcf4fcecad4c264c1e39d
Yury Selivanov added the comment:
New changeset c517fc712105c8e5930cb42baaebdbe37fc3e15f by Elvis Pranskevichus
in branch 'master':
bpo-32751: Wait for task cancel in asyncio.wait_for() when timeout <= 0 (#21895)
https://github.com/python/cpython/com
Yury Selivanov added the comment:
I typically don't like objects that support both `with` and `async with`, but
in this case I think that's appropriate. Nathaniel, Andrew, what do you think?
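A minimal sketch of such a dual-mode object (the class and method names here are hypothetical, not the API under discussion):

```python
import asyncio

class Pool:
    """Sketch: one object usable under both `with` and `async with`."""

    def __init__(self):
        self.closed = False

    def close(self):
        # synchronous cleanup
        self.closed = True

    async def aclose(self):
        # asynchronous cleanup; a real implementation may await pending work
        self.close()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await self.aclose()

# both forms work on the same class
with Pool() as p:
    pass
print(p.closed)  # True

async def main():
    async with Pool() as p:
        pass
    return p.closed

print(asyncio.run(main()))  # True
```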
--
nosy: +njs
___
Python tracker
<ht
Change by Yury Selivanov :
--
nosy: +asvetlov, njs
___
Python tracker
<https://bugs.python.org/issue41197>
___
___
Python-bugs-list mailing list
Unsubscribe:
Yury Selivanov added the comment:
I'm OK with adding this, but the close method should be `aclose()`
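For illustration, the `aclose()` naming follows the convention set by async generators, where the asynchronous close method is a coroutine named `aclose()` (the `Resource` class below is invented):

```python
import asyncio

class Resource:
    """Hypothetical resource following the asyncio naming convention:
    the asynchronous close method is a coroutine called aclose()."""

    def __init__(self):
        self.closed = False

    async def aclose(self):
        self.closed = True

async def main():
    r = Resource()
    await r.aclose()
    return r.closed

print(asyncio.run(main()))  # True
```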
--
Python tracker
<https://bugs.python.org/issue41
Yury Selivanov added the comment:
> In python 3.8.4 running this script will print:
> asyncio.TimeoutError happened
This is the expected behavior in 3.8.
> This is a breaking change which I didn't see announced in the changelog.
Yeah, looks like this wasn't mentioned in w
Yury Selivanov added the comment:
New changeset 0dd98c2d00a75efbec19c2ed942923981bc06683 by Alex Grönholm in
branch 'master':
bpo-41317: Remove reader on cancellation in asyncio.loop.sock_accept() (#21595)
https://github.com/python/cpython/commit/0dd98c2d00a75efbec19c2ed942923981bc06683
Yury Selivanov added the comment:
Alex, do you want to submit a PR?
--
Python tracker
<https://bugs.python.org/issue41317>
Yury Selivanov added the comment:
> By the way if we will eventually combine StreamReader and StreamWriter won't
> this function (readinto) be useful then?
Yes. But StreamReader and StreamWriter will stay for the foreseeable future
for backwards compatibility, pretty much frozen i
Yury Selivanov added the comment:
> Is it simply combining stream reader and stream writer into a single object
> and changing the write() function to always wait the write (thus deprecating
> drain) and that's it?
Pretty much. We might also rename a few APIs here and there, li
Yury Selivanov added the comment:
> Im interested in learning about the new api.
There are two problems with the current API:
1. Reader and writer are separate objects, while they should be one.
2. Writer.write is not a coroutine, although it should be one.
There are other minor n
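A rough sketch of what addressing those two problems could look like, wrapping today's pair; the `Stream` class here is hypothetical, not the actual planned API:

```python
import asyncio

class Stream:
    """Hypothetical combined stream: one object, and write() is a
    coroutine that applies backpressure (folding drain() into write())."""

    def __init__(self, reader, writer):
        self._reader = reader
        self._writer = writer

    async def read(self, n=-1):
        return await self._reader.read(n)

    async def write(self, data):
        self._writer.write(data)
        await self._writer.drain()  # backpressure handled inside write()

    async def aclose(self):
        self._writer.close()
        await self._writer.wait_closed()

async def main():
    async def echo_upper(reader, writer):
        data = await reader.read(5)
        writer.write(data.upper())
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(echo_upper, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    s = Stream(*await asyncio.open_connection("127.0.0.1", port))
    await s.write(b"hello")
    reply = await s.read(5)
    await s.aclose()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'HELLO'
```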
Yury Selivanov added the comment:
We don't want to extend StreamReader with new APIs as ultimately we plan to
deprecate it. A new streams API is needed, perhaps inspired by Trio. Sadly, I'm
-1 on this one.
--
Python tracker
<ht
Yury Selivanov added the comment:
New changeset 568fb0ff4aa641329261cdde20795b0aa9278175 by Tony Solomonik in
branch 'master':
bpo-41273: asyncio's proactor read transport's better performance by using
recv_into instead of recv (#21442)
https://github.com/python/cpython/commit
Yury Selivanov added the comment:
> The aiohttp issue says they won't fix this until asyncio supports it. Kinda
> understand that.
I saw you opened an issue with aiohttp to allow this and they're open to it. I
hope that will get some movement. It also would be a big test for uv
Yury Selivanov added the comment:
New changeset 0b6169e391ce6468aad711f08ffb829362293ad5 by Tony Solomonik in
branch '3.8':
bpo-41247: asyncio.set_running_loop() cache running loop holder (#21406)
https://github.com/python/cpython/commit/0b6169e391ce6468aad711f08ffb829362293ad5
Yury Selivanov added the comment:
> So how about maybe:
That wouldn't work. You still haven't explained what's wrong with calling `loop
= asyncio.get_running_loop()` inside `async def main()`. That literally solves
all problems without the need of us modifying any A
Yury Selivanov added the comment:
Looks like https://github.com/python/cpython/pull/17975 was forgotten and was
never committed to 3.9. So it's 3.10 now.
Best bet for you is to use uvloop which should support the feature.
--
Yury Selivanov added the comment:
> n python to know if there could be a context switch to get_running_loop while
> set_running_loop is running.
No, it's protected by the GIL.
Good catch, and merged.
--
nosy: +yselivanov
resolution: -> fixed
stage: patch review -> res
Yury Selivanov added the comment:
> Changes in bpo-41242 were rejected not because the new code is worse, but
> because it is not obviously and significantly better that the existing code.
> Here status quo wins for the same reasons. This rule saves us from endless
> rewrit
Yury Selivanov added the comment:
> The community is hurting *A LOT* right now because asyncio is intentionally
> non-compatible with the traditional blocking approach that is not only still
> prevalent it's one that a lot of us think is *easier* to work with.
Mike, I'm su
Yury Selivanov added the comment:
> Yeah, writing a trivial "event loop" to drive actually-synchronous code is
> easy. Try it out:
This is exactly the approach I used in edgedb-python.
> I guess there's technically some overhead, but it's tiny.
Correct, the overhead is
Yury Selivanov added the comment:
> I think this is a really critical technique to have so that libraries that
> mediate between a user-facing facade and TCP based backends no longer have to
> make a hard choice about if they are to support sync vs. async (or async with
> an o
Yury Selivanov added the comment:
The idiomatic way:

    async def main():
        loop = asyncio.get_running_loop()
        loop.set_exception_handler(...)
        # other code

    asyncio.run(main())

We don't want to add new arguments to asyncio.run as there would be too many
Yury Selivanov added the comment:
Thanks for posting this, Mike.
> Vague claims of "framework X is faster because it's async" appear, impossible
> to confirm as it is unknown how much of their performance gains come from the
> "async" aspect and how muc
Yury Selivanov added the comment:
> Optimally, we want to do removals before the beta so that users can prepare
> accordingly rather than deal with breakage in the final release.
+1 to remove it now. Up to Lukasz to give us green or red light here,
Yury Selivanov added the comment:
> Since asyncio is no longer provisional, should it break backwards
> compatibility with just a What's New entry?
Adding new APIs to asyncio can't be classified as a backward compatibility
issue. Otherwise the development of it would stall and nobody
Yury Selivanov added the comment:
This has long been implemented:
https://docs.python.org/3/library/contextlib.html#contextlib.asynccontextmanager
--
nosy: +yselivanov
Python tracker
<https://bugs.python.org/issue40
Change by Yury Selivanov :
--
resolution: -> rejected
stage: -> resolved
status: open -> closed
Python tracker
<https://bugs.python.org/issue40844>
Yury Selivanov added the comment:
> I'm suggesting a method on coroutines that runs them without blocking, and
> will run a callback when it's complete.
And how would that method be implemented? Presumably the event loop would
execute the coroutine, but that API is already there, an
Yury Selivanov added the comment:
New changeset dc4eee9e266267498a6b783a0abccc23c06f2b87 by Fantix King in branch
'master':
bpo-30064: Properly skip unstable loop.sock_connect() racing test (GH-20494)
https://github.com/python/cpython/commit/dc4eee9e266267498a6b783a0abccc23c06f2b87
Yury Selivanov added the comment:
New changeset 210a137396979d747c2602eeef46c34fc4955448 by Fantix King in branch
'master':
bpo-30064: Fix asyncio loop.sock_* race condition issue (#20369)
https://github.com/python/cpython/commit/210a137396979d747c2602eeef46c34fc4955448
Yury Selivanov added the comment:
> If someone else agrees, I can create a new issue.
I'd keep this one issue, but really up to you. I don't think I have time in the
next few days to work on what I proposed but would be happy to brainstorm /
review
Yury Selivanov added the comment:
Just a note, __context__ cycles can theoretically be longer than 2 nodes. I've
encountered cycles like `exc.__context__.__context__.__context__ is exc` a few
times in my life, typically resulting from some weird third-party libraries.
The only solution
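For illustration, a small helper that walks the `__context__` chain and detects a cycle of any length (the helper name is made up):

```python
def find_context_cycle(exc):
    """Walk the __context__ chain and return the first revisited
    exception, or None if the chain terminates normally."""
    seen = set()
    node = exc
    while node is not None:
        if id(node) in seen:
            return node
        seen.add(id(node))
        node = node.__context__
    return None

# Build a 3-node cycle by hand, like the ones described above:
# a.__context__.__context__.__context__ is a
a, b, c = ValueError("a"), KeyError("b"), TypeError("c")
a.__context__ = b
b.__context__ = c
c.__context__ = a
print(find_context_cycle(a) is a)  # True
```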
Yury Selivanov added the comment:
> 2) Add some warning about the value is thrown away (in debug mode) and
> document it somewhere.
The documentation update is definitely something that needs to be done in 3.9.
Want to submit a PR?
We can also issue a warning in asyncio debug mode
Yury Selivanov added the comment:
Elevating to release blocker to make sure it's included. The PR is good.
--
Python tracker
<https://bugs.python.org/issue31
Change by Yury Selivanov :
--
priority: normal -> release blocker
Python tracker
<https://bugs.python.org/issue31033>
Yury Selivanov added the comment:
New changeset 382a5635bd10c237c3e23e346b21cde27e48d7fa by romasku in branch
'master':
bpo-40607: Reraise exception during task cancelation in asyncio.wait_for()
(GH-20054)
https://github.com/python/cpython/commit/382a5635bd10c237c3e23e346b21cde27e48d7fa
Yury Selivanov added the comment:
New changeset de92769d473d1c0955d36da2fc71462621326f00 by jack1142 in branch
'master':
bpo-34790: add version of removal of explicit passing of coros to
`asyncio.wait`'s documentation (#20008)
https://github.com/python/cpython/commit
Yury Selivanov added the comment:
> I think that in case inner task cancelation fails with some error,
> asyncio.wait_for should reraise it instead of silently losing it.
+1.
--
Python tracker
<https://bugs.python.org/i
Change by Yury Selivanov :
--
nosy: -yselivanov
Python tracker
<https://bugs.python.org/issue40257>
Yury Selivanov added the comment:
> If so, the main purpose of that example is just to demonstrate basic
> async/await syntax, and show asyncio.run() for a trivial case to clearly show
> how it's used at a fundamental level; it's intentional that the more involved
Yury Selivanov added the comment:
Good catch. The function should be fixed to:

    _marker = object()

    def run(coro, *, debug=_marker):
        if debug is not _marker:
            loop.set_debug(debug)
--
Python tracker
<https://bugs.python.
Change by Yury Selivanov :
--
nosy: -yselivanov
Python tracker
<https://bugs.python.org/issue39562>
Yury Selivanov added the comment:
IMO this is a 3.9 fix.
--
Python tracker
<https://bugs.python.org/issue29587>
Yury Selivanov added the comment:
> @Yury what do you think?
Yeah, the documentation needs to be fixed.
> Maybe "Returns an iterator of awaitables"?
I'd suggest changing it to: "Return an iterator of coroutines. Each coroutine
allows to wait for the earliest nex
Yury Selivanov added the comment:
Good catch & PR ;) Thanks
--
Python tracker
<https://bugs.python.org/issue39965>
Yury Selivanov added the comment:
I'd be fine with `Signature.from_text()`, but not with `Signature` constructor
/ `signature()` function accepting both callable and string arguments. Overall,
I think that we ought to have a real need to add this new API, so unless
there's a good (or any
Yury Selivanov added the comment:
Thank you so much, Stefan, for looking into this. Really appreciate the help.
--
Python tracker
<https://bugs.python.org/issue39
Yury Selivanov added the comment:
What's the actual use case for exposing this functionality?
--
Python tracker
<https://bugs.python.org/issue37497>
Yury Selivanov added the comment:
> For asyncio.Lock (plus other synchronization primitives) and asyncio.Queue,
> this would be added in https://github.com/python/cpython/pull/18195.
> Currently waiting on emanu (author of the PR) to finish up some changes, but
> it's mostly com
Yury Selivanov added the comment:
> Source?
I could not find a good source, sorry. I remember I had a complaint in uvloop
to support negative timeouts, but I can't trace it.
That said, I also distinctly remember seeing code (and writing such code
myself) that performs computat
Yury Selivanov added the comment:
> I very doubt if any sane code is organizing like this test: start delayed
> reading, cancel it and read again.
Hm, cancellation should work correctly no matter how "sane" or "insane" the
user code is.
> The worse, neither p
Yury Selivanov added the comment:
> It does seem like a good solution.
Great. I'll close this issue then as the proposed solution is actually not as
straightforward as it seems. Task names exist specifically to solve this case.
--
resolution: -> rejected
stage: -> resolv
Yury Selivanov added the comment:
> Would there be too much overhead if allowing specification of a python
> function that contextvars calls on context changes?
Potentially yes, especially if we allow more than one context change callback.
Allowing just one makes the API inflexible
Yury Selivanov added the comment:
> I agree, but wouldn't you agree that some information is better than no
> information?
We do agree with that. Making it work in a way that does not disturb people
when a 10 MB bytes string is passed is challenging. We could just cut everything
aft
Yury Selivanov added the comment:
> Is there any existing API that can be used to call `lib.set_state` on context
> changes?
No, but there's C API that you can use to get/set contextvars. If a C library
is hard coded to use threadlocals I'm afraid there's nothing we can do about it
Yury Selivanov added the comment:
> The ship has sailed, this change breaks a lot of existing code without a
> strong reason.
Yes.
It's a common thing to compute asyncio.sleep delay and sometimes it goes
negative. The current behavior is part of our API now.
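For instance, a minimal deadline-based sketch where the computed delay can legitimately come out negative (the `poll` function is invented for illustration):

```python
import asyncio
import time

async def poll(deadline):
    # Code often computes the remaining delay from a deadline; near (or
    # past) the deadline the value goes negative, and asyncio.sleep()
    # simply yields to the event loop once instead of raising.
    delay = deadline - time.monotonic()
    await asyncio.sleep(delay)
    return "done"

print(asyncio.run(poll(time.monotonic() - 1.0)))  # negative delay is fine
```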
--
reso
Yury Selivanov added the comment:
> Unfortunately, if Python is used as a frontend for a native libray (eg
> accessed via ctypes), and in case that the state of interest is managed in
> the native library, contextvar API is insufficient.
Please elaborate with a cleare
Yury Selivanov added the comment:
Андрей,

Here's how you can fix your example:

    import asyncio

    class Singleton:
        _LOCK = None
        _LOOP = None

        @classmethod
        async def create(cls):
            if cls._LOOP is None:
                cls._LOOP = asyncio.get_running_loop()
                cls
Yury Selivanov added the comment:
> I'm not strongly against the feature. I first proposed to expose it, but make
> it private. Almost one year later, the PR was not updated. So I just closed
> the PR and the issue.
All clear, Victor. Let's keep this closed. The reason I did
Yury Selivanov added the comment:
> There is not clear rationale to justify the addition of the function
Yeah, with the new threaded watcher being the default we don't need this
anymore.
> so I reject the feature
NP, here, but, hm, can you unilaterally reject featur
Yury Selivanov added the comment:
I think we still use get_event_loop() in asyncio code, no? If we do, we should
start by raising deprecation warnings in those call sites, e.g. if a Future or
Lock is created outside of a coroutine and no explicit event loop is passed. We
should do
Yury Selivanov added the comment:
> This throws 'cannot pickle Context'
Yes, this is expected. contextvars are not compatible with multiprocessing.
> This hangs forever *
This hanging part is weird, and most likely hints at a bug in multiproc
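A minimal reproduction of the first point, showing that a `Context` cannot cross a pickle (and hence multiprocessing) boundary:

```python
import contextvars
import pickle

# contextvars.Context objects are not picklable, so they cannot be
# transferred to a worker process via multiprocessing.
ctx = contextvars.copy_context()
try:
    pickle.dumps(ctx)
except TypeError as exc:
    print("as expected:", exc)
```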
Change by Yury Selivanov :
--
nosy: -yselivanov
Python tracker
<https://bugs.python.org/issue38973>
Yury Selivanov added the comment:
> The issue is minor, I suspect nobody wants to derive from ContextVar class.
I don't think that's allowed, actually.
> The generic implementation for __class_getitem__ is returning unmodified self
> argument. Yuri, is there a reason to behave di
Yury Selivanov added the comment:
> 1. A CancelledError (or maybe`QueueCancelled`?) exception is raised in all
> producers and consumers ) - this gives a producer a chance to handle the
> error and do something with the waiting item that could not be `put()`
> 2. Ite
Yury Selivanov added the comment:
This seems like a useful idea. I recommend writing a test implementation and
playing with it.
Andrew:
> I think the proposal makes the queues API more error-prone: concurrent put()
> and close() produces unpredictable result on get() side.
How? C
Yury Selivanov added the comment:
> Oh in that case, would you like me to close or modify GH-17311? I didn't
> think you'd approve of making the more extensive changes all the way back to
> 3.5.
After reading the comments here I think Antoine's solution makes sense. But...
let's w
Yury Selivanov added the comment:
> My preference for create_datagram_endpoint() would be:
> - make the "reuse_address" parameter a no-op, and raise an error when
> "reuse_address=True" is passed
> - do that in 3.8 as well
Yeah, I like this proposal; we can
New submission from Yury Selivanov :
The exception should probably be just ignored. Andrew, thoughts?
Here's an example error traceback:

    Traceback (most recent call last):
      File "c:\projects\asyncpg\asyncpg\connection.py", line 1227, in _cancel
        await w.w
Yury Selivanov added the comment:
I'd be -1 on changing the default of an existing method, at least without
consulting with a wider audience. We can add a new method to the loop and
deprecate create_datagram_endpoint.
I suggest posting to python-dev and discussing this before making any
Yury Selivanov added the comment:
We can merge this PR as is (Benjamin, thanks for working on this!), but I think
that as soon as we merge it we should do some refactoring and deprecations.
The child watchers API has to go. It's confusing, painful to use, it's not
compatible with third
Yury Selivanov added the comment:
> I think that I'm still not understanding something important though. Even if
> we initialize our ThreadPoolExecutor outside of __init__ (in a start()
> coroutine method, as your proposing), it seems like the threads will be
> spawne
Yury Selivanov added the comment:
> I'm going to have to rescind the above statements. I was able to implement a
> new prototype of asyncio.ThreadPool (using ThreadPoolExecutor) that spawns
> it's threads asynchronously on startup. Since this one a bit more involved
> than the p
Yury Selivanov added the comment:
> We should either remove the API (not realistic dream at least for many years)
> or fix it. There is no choice actually.
I don't understand. What happens if we don't await the future that
run_in_executor returns? Does it get GCed eventually? Why is
Yury Selivanov added the comment:
> From my understanding, the executor classes are designed around spawning the
> threads (or processes in the case of ProcessPoolExecutor) as needed up to
> max_workers, rather than spawning them upon startup. The asynchronous
> spawnin
Yury Selivanov added the comment:
> IMO, I think it would be a bit more clear to just explicitly call it
> "threads" or "max_threads", as that explains what it's effectively doing.
> While "concurrency" is still a perfectly correct wa
Yury Selivanov added the comment:
>> async with asyncio.ThreadPool(concurrency=10) as pool:
> I'm definitely on board with the usage of an async context manager and the
functionality shown in the example, but I'm not sure that I entirely
understand what the "
Yury Selivanov added the comment:
Few thoughts:
1. I like the idea of having a context manager to create a thread pool. It
should be initialized in a top-level coroutine and then passed down to other
code, as in:

    async def main():
        async with asyncio.ThreadPool(concurrency=10) as pool
Yury Selivanov added the comment:
> It is a more complex solution but definitely 100% backward compatible; plus
> the solution we can prepare people for removing the deprecated code
> eventually.
Yeah. Do you think it's worth bothering with this old low-level API instead
> The change is slightly not backward compatible but
Yeah, that's my main problem with converting `loop.run_in_executor()` to a
coroutine. When I attempted doing that I discovered that there's code that
expects the method to return a Future, and so expe
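To illustrate the kind of code that would break (names here are illustrative): callers can attach a callback to the returned Future without awaiting it, which is impossible if the method becomes a coroutine.

```python
import asyncio

def blocking():
    return 42

async def main():
    loop = asyncio.get_running_loop()
    seen = []
    # run_in_executor returns a Future (not a coroutine), so callers can
    # attach done-callbacks without awaiting it -- the usage pattern that
    # converting it to a coroutine would break.
    fut = loop.run_in_executor(None, blocking)
    fut.add_done_callback(lambda f: seen.append(f.result()))
    result = await fut
    return result, seen

print(asyncio.run(main()))
```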
Yury Selivanov added the comment:
Yeah, please do the change! Thanks!
--
Python tracker
<https://bugs.python.org/issue38652>
Yury Selivanov added the comment:
> Well, I'm basically using a run method defined as a shorthand for
> self.loop.run_until_complete (without closing loop, reusing it throughout).
> It would be nice if asyncio.run could simply be used instead, but I
> understand you're saying th
Yury Selivanov added the comment:
> It would be nice if asyncio.run() uses the default loop if it's available
> (and not running), and that the default loop remains intact after the call
> returns.
Unfortunately it's not possible to implement that reliably and withou
Yury Selivanov added the comment:
Yes, I've experienced this bug. We need to fix this in 3.8.1.
--
nosy: +lukasz.langa
priority: normal -> release blocker
Python tracker
<https://bugs.python.org/issu
Yury Selivanov added the comment:
Yes.
As a remedy for this particular problem we can add checks here and there
comparing `asyncio.get_running_loop()` and `self._loop`. Performance will
suffer a bit but the usability will greatly improve
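A sketch of what such a check could look like (the class and helper method are invented for illustration):

```python
import asyncio

class LoopBound:
    """Toy object bound to the event loop it was created in."""

    def __init__(self, loop):
        self._loop = loop

    def _check_loop(self):
        # The proposed sanity check: compare the currently running loop
        # against the loop captured at creation time.
        running = asyncio.get_running_loop()
        if running is not self._loop:
            raise RuntimeError("object is bound to a different event loop")

async def main():
    obj = LoopBound(asyncio.get_running_loop())
    obj._check_loop()  # same loop: no error
    return "ok"

print(asyncio.run(main()))  # ok
```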
Yury Selivanov added the comment:
I'd add `and will be removed in 3.11.` now.
--
Python tracker
<https://bugs.python.org/issue34790>
Yury Selivanov added the comment:
Kyle, why are you resetting the status to "Pending"?
> I think it will be necessary to fix the issues in the near future if we want
> to keep MultiLoopChildWatcher in 3.8 (as they apply to 3.8 and 3.9). I would
> certainly not be ecst
Yury Selivanov added the comment:
> but perhaps if we go through with that we should remove MultiLoopChildWatcher
> from 3.8 (if it's not too late to do so) instead of deprecating it.
I'll leave that up to Andrew to decide, but I'd be +1 to drop it asap,
especially if we want to even
Yury Selivanov added the comment:
Didn't we just add MultiLoopChildWatcher in 3.8?
--
status: pending -> open
Python tracker
<https://bugs.python.org/issu
Change by Yury Selivanov :
--
nosy: -yselivanov
Python tracker
<https://bugs.python.org/issue20443>
Yury Selivanov added the comment:
The more I think about this the more I like new 3.8 behavior. I'm going to
close this issue; if anything comes up I'll reopen it later. Sorry for the
noise.
--
resolution: -> not a bug
stage: -> resolved
status: open ->
Yury Selivanov added the comment:
I'll elevate the status so that we don't forget before 3.8.1 is too close
--
priority: normal -> deferred blocker
Python tracker
<https://bugs.python.org/issu
Yury Selivanov added the comment:
I think this broke asyncio a bit. These two uvloop sock_* API tests no longer
pass on asyncio 3.8:
* https://github.com/MagicStack/uvloop/blob/master/tests/test_sockets.py#L192
* https://github.com/MagicStack/uvloop/blob/master/tests/test_sockets.py#L238
Yury Selivanov added the comment:
> Yury: I'm okay with merging. If I recall, you were going to add a comment or
> two about the approach (a closure, if I remember correctly).
Sure, I'll rebase and add a comment in a couple of days. I'll elevate the
status so that I don't forget
Yury Selivanov added the comment:
FWIW we discussed my patch with Eric at PyCon 2019 and I think he had no issues
with it. Eric, can I merge it? Should we backport to 3.8?
--
Python tracker
<https://bugs.python.org/issue34
Yury Selivanov added the comment:
> So I think asyncio.run is actually totally fine with the 3.8.0 behavior.
Yes. Looks like it.
> It's only explicit calls to shutdown_asyncgens that might run into this, and
> I think that's probably OK?
Yes. Calling shutdown_asyncgens after
Yury Selivanov added the comment:
Was fixed as part of #30773.
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
Python tracker
<https://bugs.python.or