Joongi Kim added the comment:
I have released the new version of aiotools with rewritten TaskGroup and
PersistentTaskGroup.
https://aiotools.readthedocs.io/en/latest/aiotools.taskgroup.html
aiotools.TaskGroup has small additions to asyncio.TaskGroup: a naming API and
`current_taskgroup
Joongi Kim added the comment:
Ah, I was confused by the aiotools.TaskGroup code (originating from EdgeDB's
TaskGroup) while browsing both the aiotools and stdlib asyncio.TaskGroup sources.
The naming facility seems to have been intentionally removed when ported to the stdlib.
So I am closin
New submission from Joongi Kim :
The __repr__() method in asyncio.TaskGroup does not include self._name.
I think this is a simple oversight, because asyncio.Task includes the task name
in __repr__(). :wink:
https://github.com/python/cpython/blob/345572a1a02/Lib/asyncio/taskgroups.py#L28-L42
Joongi Kim added the comment:
I have updated the PersistentTaskGroup implementation referring to
asyncio.TaskGroup and added more detailed test cases; it works with the
latest Python 3.11 GitHub checkout.
https://github.com/achimnol/aiotools/pull/36/files
Please have a look at the class
Joongi Kim added the comment:
Short summary:
PersistentTaskGroup shares the following with TaskGroup:
- It uses WeakSet to keep track of child tasks.
- After exiting the async context manager scope (or the shutdown procedure), it
ensures that all tasks are complete or cancelled
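The two shared properties above can be sketched in a few lines (a rough illustration with hypothetical names, not the actual aiotools implementation):

```python
import asyncio
import weakref

class PersistentTaskGroupSketch:
    # Rough illustration only; the real aiotools.PersistentTaskGroup
    # differs in API and error handling.
    def __init__(self):
        self._tasks = weakref.WeakSet()  # child tasks tracked weakly

    def create_task(self, coro):
        task = asyncio.get_running_loop().create_task(coro)
        self._tasks.add(task)
        return task

    async def shutdown(self):
        # Ensure all tasks are complete or cancelled before returning.
        tasks = list(self._tasks)
        for t in tasks:
            t.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    ptg = PersistentTaskGroupSketch()
    child = ptg.create_task(asyncio.sleep(10))
    await asyncio.sleep(0)  # let the child task start
    await ptg.shutdown()
    return child.cancelled()

print(asyncio.run(main()))  # → True
```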
Joongi Kim added the comment:
> And just a question: I'm just curious about what happens if belonging tasks
> see the cancellation raised from their inner tasks. Sibling tasks should not
> be cancelled, and the outer task group should not be cancelled, unless the
> task
Joongi Kim added the comment:
> As for errors in siblings aborting the TaskGroup, could you apply a wrapper
> to the scheduled coroutines to swallow and log any errors yourself?
Yes, this could be the simplest way to implement PersistentTaskGroup if TaskGroup
supports "persistent
Joongi Kim added the comment:
Good to hear that TaskGroup already uses WeakSet.
When all tasks finish, PersistentTaskGroup should not finish but should wait for
future tasks, unless explicitly cancelled or shut down. Could this also be
configured with asyncio.TaskGroup?
I'm also ok with add
Joongi Kim added the comment:
Another case:
https://github.com/lablup/backend.ai-manager/pull/533
https://github.com/lablup/backend.ai-agent/pull/341
When shutting down the application, I'd like to explicitly cancel the shielded
tasks, while keeping them shielded before shutdown.
So I ins
Joongi Kim added the comment:
Updated the title to reduce confusion.
--
title: Context-based TaskGroup for legacy libraries -> Implicit binding of
PersistentTaskGroup (or virtual event loops)
___
Python tracker
<https://bugs.python.org/issu
Joongi Kim added the comment:
I have added more of my stories in bpo-46843.
I think the suggestion of implicit taskgroup binding has little point with the
current asyncio.TaskGroup alone, but it would have more meaning with
PersistentTaskGroup.
So, if we treat PersistentTaskGroup as a "n
Joongi Kim added the comment:
Here is another story.
When handling message queues in distributed applications, I use the following
pattern frequently for graceful shutdown:
* Use a sentinel object to signal the end of queue.
* Enqueue the sentinel object when:
- The server is shutting
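The sentinel pattern described above can be sketched as follows (illustrative only; the `SENTINEL` name and the consumer shape are assumptions, not code from the linked projects):

```python
import asyncio

SENTINEL = object()  # unique object signaling the end of the queue

async def consumer(q: asyncio.Queue, results: list):
    while True:
        item = await q.get()
        if item is SENTINEL:
            break  # graceful shutdown: stop draining here
        results.append(item)

async def main():
    q = asyncio.Queue()
    results = []
    task = asyncio.create_task(consumer(q, results))
    for i in range(3):
        await q.put(i)
    await q.put(SENTINEL)  # enqueue the sentinel when shutting down
    await task  # consumer exits cleanly after seeing the sentinel
    return results

print(asyncio.run(main()))  # → [0, 1, 2]
```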
Joongi Kim added the comment:
I ended up with the following conclusion:
- The new abstraction should not cancel sibling tasks and itself upon an
unhandled exception but should loudly report such errors (and the fallback
error handler should be customizable).
- Nesting task groups will give additional
Joongi Kim added the comment:
This particular experience,
https://github.com/lablup/backend.ai-agent/pull/331, has actually motivated me
to suggest PersistentTaskGroup.
The program subscribes to the event stream of the Docker daemon using aiohttp
as an asyncio task, and this should be kept
Joongi Kim added the comment:
@gvanrossum As you mentioned, the event loop currently plays the role of the
top-level task group already, even without introducing yet another top-level
task. For instance, asyncio.run() includes necessary shutdown procedures to
cancel all belonging
Change by Joongi Kim :
--
nosy: +achimnol
Joongi Kim added the comment:
@yselivanov @asvetlov
I think this API suggestion would require more refinement and in-depth
discussion, and it may be better to go through the PEP writing and review
process. Or I might need to have a separate discussion thread somewhere else
(maybe
Joongi Kim added the comment:
Some search results from cs.github.com with the input "asyncio task weakset",
which may be replaced/simplified with PersistentTaskGroup:
-
https://github.com/Textualize/textual/blob/38efc821737e3158a8c4c7ef8ecfa953dc7c0ba8/src/textual/message_p
Joongi Kim added the comment:
Example use cases:
* Implement an event iteration loop to fetch events and dispatch the handlers
depending on the event type (e.g., WebSocket connections, message queues, etc.)
- https://github.com/aio-libs/aiohttp/pull/2885
- https://github.com/lablup
Joongi Kim added the comment:
I think people may ask "why in stdlib?".
My reasons are:
- We are adding new asyncio APIs in 3.11 such as TaskGroup, so I think it is a
good time to add another one, as long as it does not break existing stuff.
- I believe that long-running tas
Joongi Kim added the comment:
So I have more things in mind.
Basically PersistentTaskGroup resembles TaskGroup in that:
- It has the same "create_task()" method.
- It has an explicit "cancel()" or "shutdown()" method.
- Exiting the context manager means th
Joongi Kim added the comment:
Ok, let me be clear: patching asyncio.create_task() to support this opt-in
contextual task group binding is not the ultimate goal of this issue. If it
becomes possible to override/extend the task factory at runtime with any event
loop implementation, then it
Joongi Kim added the comment:
Ah, and this use case also requires that TaskGroup have an option like
`return_exceptions=True`, which makes it not cancel sibling tasks upon
unhandled exceptions, as I suggested in PersistentTaskGroup (bpo-46843
Joongi Kim added the comment:
An example would be like:
tg = asyncio.TaskGroup()
...
async with tg:
    with asyncio.TaskGroupBinder(tg):  # just a hypothetical API
        asyncio.create_task(...)  # equivalent to tg.create_task(...)
    await some_library.some_work()  # all tasks are
Joongi Kim added the comment:
My proposal is to opt in to the taskgroup binding for asyncio.create_task()
under a specific context, not to change the default behavior.
Joongi Kim added the comment:
It is also useful for writing debugging/monitoring code for asyncio
applications. For instance, we could "group" tasks from different libraries and
count them.
Joongi Kim added the comment:
Conceptually it is similar to replacing malloc using LD_PRELOAD or
LD_LIBRARY_PATH manipulation: when I cannot modify the executable/library
binaries, this allows replacing the functionality of specific functions.
If we could assign a specific (persistent) task
Joongi Kim added the comment:
The main benefit is that any legacy code that I cannot modify can be upgraded
to TaskGroup-based code, which offers better machinery for exception handling
and propagation.
There may be different ways to approach this issue: allow replacing the task
factory in
New submission from Joongi Kim :
Along with bpo-46843 and the new asyncio.TaskGroup API, I would like to suggest
addition of context-based TaskGroup feature.
Currently asyncio.create_task() just creates a new task directly attached to
the event loop, while asyncio.TaskGroup.create_task
New submission from Joongi Kim :
I'm now tracking the recent addition and discussion of TaskGroup and
cancellation scopes. It's interesting! :)
I would like to suggest having a different mode of operation in
asyncio.TaskGroup, which I named "PersistentTaskGroup".
AFAI
Change by Joongi Kim :
--
keywords: +patch
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +27160
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/28850
Joongi Kim added the comment:
As in the previous discussion, instead of tackling stdlib right away, it would
be nice to evaluate the approach using 3rd-party libs, such as trio and/or
async-tokio, or maybe a new library.
I have a strong feeling that we need to improve the async file I/O
Joongi Kim added the comment:
Ah, yes, but one year has passed, so it may be another chance to discuss its
adoption, as new advances like tokio-uring have become available.
New submission from Joongi Kim :
This is a rough early idea suggestion on adding io_uring as an alternative I/O
multiplexing mechanism in Python (maybe selectors and asyncio). io_uring is a
relatively new I/O mechanism introduced in Linux kernel 5.1.
https://lwn.net/Articles/776703
Joongi Kim added the comment:
After checking out PEP-567 (https://www.python.org/dev/peps/pep-0567/),
I'm adding njs to the nosy list.
--
nosy: +njs
New submission from Joongi Kim :
This is just an idea: ContextVar.set() and ContextVar.reset() look naturally
mappable to the "with" statement.
For example:
    a = ContextVar('a')
    token = a.set(1234)
    ...
    a.reset(token)
could be naturally rewritten as:
    a = ContextVar(
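A minimal sketch of the idea using a contextmanager wrapper (the `setting` helper name is my own invention for illustration, not a proposed stdlib API):

```python
from contextlib import contextmanager
from contextvars import ContextVar

@contextmanager
def setting(var, value):
    # Pairs ContextVar.set() with ContextVar.reset() automatically.
    token = var.set(value)
    try:
        yield token
    finally:
        var.reset(token)

a = ContextVar('a', default=0)
with setting(a, 1234):
    print(a.get())  # → 1234
print(a.get())  # → 0, reset on block exit
```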
john kim added the comment:
Okay. Thank you for the detailed explanation.
--
resolution: -> not a bug
stage: -> resolved
status: open -> closed
john kim added the comment:
Thank you for your explanation of venv.
I understand that venv is not portable.
But I was confused because "(venv)" was shown at the left side of the terminal
prompt and it looked like it was working.
Therefore, if venv does not work, such as if __VENV_DIR__ is
New submission from john kim :
I changed the path of a project using venv, so it didn't work properly.
I thought it had worked successfully because there was a "(venv)" on the
terminal prompt.
However, the __VENV_DIR__ in the
set "VIRTUAL_ENV=__VENV_DIR__"
in the activate scri
New submission from JiKwon Kim :
Currently the imghdr module only looks for "JFIF" or "Exif" at a specific
position. However, there are some JPEG images with a "JFXX" marker. I had an
image with this marker and imghdr.what() returned None.
Refer to:
https://www.ecma-i
New submission from JiKwon Kim :
Currently pathlib's with_suffix() only accepts a suffix starting with a
dot (".").
Consider this code:
some_pathlib_path.with_suffix("jpg")
This should change the suffix to ".jpg" instead of raising ValueError.
--
components:
Change by Joongi Kim :
--
pull_requests: +22115
pull_request: https://github.com/python/cpython/pull/23217
Yongkwan Kim added the comment:
My solution was creating a symlink to libffi.so.6 named libffi.so.5, for
anyone who has the same issue as me.
But thanks for your kind reply.
Jinseo Kim added the comment:
My environment is Ubuntu 18.04.4
Python version:
Python 3.8.0 (default, Oct 28 2019, 16:14:01)
[GCC 8.3.0] on linux
Jinseo Kim added the comment:
Yes, I restarted and cleared directory before each test.
Change by Joongi Kim :
--
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +20687
pull_request: https://github.com/python/cpython/pull/21545
Change by Joongi Kim :
--
nosy: +njs
Joongi Kim added the comment:
I've searched the Python documentation, and the docs must be updated to
explicitly state the necessity of aclose().
refs:
https://docs.python.org/3/reference/expressions.html#asynchronous-generator-functions
https://www.python.org/dev/peps/pep-0525/
I'
Joongi Kim added the comment:
From the given example, if I add "await q.aclose()" after "await
q.asend(123456)", it does not leak memory.
This is a good example showing that we should always wrap async generators with
an explicit "aclosing" context mana
Change by Joongi Kim :
--
nosy: +achimnol
Yongjik Kim added the comment:
Hi, sorry if I'm interrupting, but while we're at this, could we also not
escape regex for "message" part? (Or at least amend the documentation to
clarify that the message part is literal string match?)
Currently, the docs on -W just say
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20601
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21453
New submission from Wansoo Kim :
Many Python users use the following snippet to read a JSON file:
```
with open(filepath, 'r') as f:
    data = json.load(f)
```
I suggest providing this snippet as a function:
```
data = json.read(filepath)
```
Reading a JSON file is a very frequent task
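A sketch of the suggested helper (`json.read` is only the proposal; here it is written as a standalone `read_json` function for illustration):

```python
import json
import os
import tempfile

def read_json(filepath):
    # What the proposed json.read() would do internally.
    with open(filepath, 'r') as f:
        return json.load(f)

# Usage with a throwaway file:
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({"lang": "python"}, f)
    path = f.name
print(read_json(path))  # → {'lang': 'python'}
os.remove(path)
```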
Wansoo Kim added the comment:
Can I solve this issue?
--
nosy: +ys19991
Change by Wansoo Kim :
--
pull_requests: +20575
pull_request: https://github.com/python/cpython/pull/21428
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20574
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21427
New submission from Wansoo Kim :
Using the name of a built-in function as a variable can cause unexpected
problems.
```
# example
type = 'Hello'
...
type('Happy')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' obje
Wansoo Kim added the comment:
Can you reproduce this bug? I was able to find the hidden files by recursive
search by executing the code below.
```
from glob import glob
hidden = glob('**/.*', recursive=True)
print(hidden)
```
--
nosy: +ys19991
Wansoo Kim added the comment:
Can I solve this problem?
--
nosy: +ys19991
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20562
pull_request: https://github.com/python/cpython/pull/21413
Wansoo Kim added the comment:
May I solve this issue?
--
nosy: +ys19991
Wansoo Kim added the comment:
Well... to be honest, I'm a little confused. bpo-41244 and this issue are
complete opposites. I'm not used to the Python community yet because it hasn't
been long since I joined it.
You're saying that if a particular method is not dramatically
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20546
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21398
New submission from Wansoo Kim :
https://bugs.python.org/issue41242
According to BPO-41242, it is better to use join than += when concatenating
multiple strings.
https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/queues.py#L82
However, the link above
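The two styles can be compared side by side; these helpers only mirror the shape of a __repr__ built from optional parts (hypothetical names, not the actual queues.py code):

```python
def format_concat(maxsize, getters, putters):
    # The += style the issue points at.
    result = f'maxsize={maxsize!r}'
    if getters:
        result += f' _getters[{len(getters)}]'
    if putters:
        result += f' _putters[{len(putters)}]'
    return result

def format_join(maxsize, getters, putters):
    # The str.join() style recommended by bpo-41242: collect the
    # parts in a list, then join once at the end.
    parts = [f'maxsize={maxsize!r}']
    if getters:
        parts.append(f'_getters[{len(getters)}]')
    if putters:
        parts.append(f'_putters[{len(putters)}]')
    return ' '.join(parts)

print(format_join(0, ['g'], ['p']))  # → maxsize=0 _getters[1] _putters[1]
```

Both produce identical output; the join form avoids building intermediate strings on each +=.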
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20545
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21397
New submission from Wansoo Kim :
Hello
I think it's better to use += than str.join() when concatenating strings.
This is more intuitive than other methods.
Also, I personally think it is not good for one variable to change to another
type during runtime.
https://github.com/python/cpython
Change by Wansoo Kim :
--
keywords: +patch
pull_requests: +20544
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21396
New submission from Wansoo Kim :
Hello!
When using an 'if' statement, casting the condition to bool is unnecessary.
Rather, it only adds overhead.
https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/futures.py#L118
If you look at the link above
Change by Wansoo Kim :
--
keywords: +patch
nosy: +ys19991
nosy_count: 4.0 -> 5.0
pull_requests: +20529
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21384
Joongi Kim added the comment:
And I suspect that this issue is something similar to what I did in a recent
janus PR:
https://github.com/aio-libs/janus/blob/ec8592b91254971473b508313fb91b01623f13d7/janus/__init__.py#L84
to give a chance for specific callbacks to execute via an extra context
Joongi Kim added the comment:
I just encountered this issue when doing "sys.exit(1)" in a Click-based CLI
program that internally uses an asyncio event loop wrapped in a context
manager, on Python 3.8.2.
Using uvloop or adding "time.sleep(0.1)" before "sys.e
New submission from Kim-Adeline Miguel :
(See #33234)
Recently we added Python 3.8 to our CI test matrix, and we noticed a possible
backward incompatibility with the list() constructor.
We found that __len__ is getting called twice, while before 3.8 it was only
called once.
Here'
Joongi Kim added the comment:
It is also generating a deprecation warning:
> /opt/python/3.8.0/lib/python3.8/asyncio/queues.py:48: DeprecationWarning: The
> loop argument is deprecated since Python 3.8, and scheduled for removal in
> Python 3.10.
> self._finished = locks.Eve
Change by kim :
--
nosy: +kim
New submission from minjeong kim <98...@naver.com>:
normal result :
$ python test.py
{'p1': 1, 'p2': 1, 'p3': 1, 'p4': 1}
run with tracing :
$ python -mtrace --trackcalls test.py
{}
It seems that the foo and save functions that multiprocess shoul
Kim Oldfield added the comment:
Usually the search page is the quickest way to find documentation about a
module or function - quicker than navigating through a couple of levels of
pages (documentation home, index, index by letter, scroll or search in page to
find desired name, click on
New submission from Kim Oldfield :
The python 3 documentation search
https://docs.python.org/3/search.html
doesn't always find built-in functions.
For example, searching for "zip" takes me to
https://docs.python.org/3/search.html?q=zip
I would expect the first match to be
Change by Kim Gustyr :
--
nosy: +kgustyr
June Kim added the comment:
Here is my environment
---system
CPU: Intel i5 @2.67GHz
RAM: 8G
OS: Windows 10 Home (64bit)
OS version: 1803
OS build: 17134.472
---python
version1: 3.7.1 AMD64 on win32
version2: 3.7.2 AMD64 on win32
Python path: (venv)/Scripts/python.exe
IDE: VS Code(1.30.1
New submission from June Kim :
## Test code ##
## Modified a bit from the original written by Doug Hellmann
## https://pymotw.com/3/multiprocessing/communication.html
import multiprocessing
import time
class Consumer(multiprocessing.Process):
    def __init__(self, task_queue, result_queue
Change by June Kim :
--
components: Library (Lib)
nosy: June Kim
priority: normal
severity: normal
status: open
title: multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1
type: behavior
versions: Python 3.7
Change by Yongkwan Kim :
--
components: +Build
type: -> compile error
versions: +Python 3.7
New submission from Yongkwan Kim :
As Python 3.7 excludes libffi from its package, my build on CentOS 6 doesn't
work on CentOS 7. The error message is the following:
ImportError: libffi.so.5: cannot open shared object file: No such file or
directory
CentOS 7 has libffi.so.6 instead of libf
Change by Bumsik Kim :
--
pull_requests: +8753
Bumsik Kim added the comment:
#33649 does not solve the problem with SubprocessTransport.close() addressed
in #23347. I made PR #33649 directly to fix that.
Bumsik Kim added the comment:
Hi, I came from #33986. I noticed that the new doc still does not reflect a
design change on SubprocessTransport.close() done in #23347. I made a PR to fix
that.
BTW this is opposed to the original PEP 3156:
https://www.python.org/dev/peps/pep-3156/#subprocess
Change by Bumsik Kim :
--
pull_requests: +8651
Seonggi Kim added the comment:
I think so too; it's my fault.
Seonggi Kim added the comment:
Sorry, I'm waiting for my CLA signing to be approved.
I will open the PR after the CLA is signed.
--
nosy: +hacksg
Seonggi Kim added the comment:
Request PR again : https://bugs.python.org/issue34302
--
stage: patch review -> resolved
status: open -> closed
Seonggi Kim added the comment:
Base commit : Python 3.8.0a0 (heads/master:b75d7e2435, Aug 1 2018, 10:32:28)
$ test.py
import timeit
queue_setup = '''
from collections import deque
q = deque()
start = 10**5
stop = start + 500
for i in range(0, stop):
    q.append(i)
'
Change by Seonggi Kim :
--
keywords: +patch
pull_requests: +8098
stage: -> patch review
Change by Seonggi Kim :
--
components: Extension Modules
nosy: hacksg
priority: normal
severity: normal
status: open
title: Avoid inefficient way to find start point in deque.index
type: enhancement
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8
Change by Seonggi Kim :
--
components: +Extension Modules -ctypes
Change by Seonggi Kim :
--
keywords: +patch
pull_requests: +8097
stage: -> patch review
Change by Seonggi Kim :
--
components: ctypes
nosy: hacksg
priority: normal
severity: normal
status: open
title: Avoid inefficient way to find start point in deque.index
type: enhancement
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8
Bumsik Kim added the comment:
I also believe this can be backported to 2.7 as well.
Bumsik Kim added the comment:
No problem :) To add more, Opera's documentation
(https://www.opera.com/docs/switches) says:
"This document was last updated for Opera 11.61"
This is the reason I guessed Opera changed its CLI format to Chrome's and
f
Bumsik Kim added the comment:
$opera --version
53.0.2907.68
$opera -remote "openURL(https://google.com,new-window)"
would open a broken link: http://openurl%28https//google.com,new-window)
I found that problem while answering a StackOverflow question:
https://stackoverflow.com
Change by Bumsik Kim :
--
versions: +Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.8