[issue44697] Memory leak when asyncio.open_connection raise

2021-07-26 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thank you Arteem, that should help indicate where the memory leak is present.

--




[issue39529] Deprecate get_event_loop()

2021-05-31 Thread Kyle Stanley


Kyle Stanley  added the comment:

> But why does `asyncio.run` unconditionally create a new event loop instead of 
> running on `asyncio.get_event_loop`? 

AFAIK, it does so for compatibility with programs that need multiple separate 
event loops, and to keep asyncio.run() self-contained. asyncio.run() is 
entirely self-reliant in that it creates all of the resources it needs at the 
start and closes them during finalization, rather than depending on existing 
resources. I believe this makes it significantly safer and more likely to 
behave as intended, although perhaps at some cost to convenience in cases like 
yours where only one event loop is needed.
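
To illustrate (a simplified sketch, not the actual CPython source), asyncio.run() behaves roughly like this:

```
import asyncio

async def main():
    await asyncio.sleep(0.1)
    return "done"

# Simplified sketch of the self-reliant behavior described above: a fresh loop
# is always created, and the loop that was created is always finalized,
# regardless of any pre-existing loop.
def run_sketch(coro):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()

print(run_sketch(main()))  # "done"
```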

--




[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2021-05-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for closing up the issue, Victor :)

--




[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2021-05-22 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Thanks, Ned <3

(For following up and closing the issue)

--




[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2021-05-22 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Since 3.5 has now reached end-of-life, this issue will not be fixed there so 
> it looks like it can be closed.

Thanks, Ned <3

--




[issue42647] Unable to use concurrent.futures in atexit hook

2021-01-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for bringing attention to this, Julien. 

While the regression is definitely unfortunate, I'm uncertain about whether the 
behavior is *correct* to allow in the first place. Specifically, allowing the 
registration of an atexit hook which uses a ThreadPoolExecutor within it means 
that the finalization of the executor will be done *after* thread finalization 
occurs, leaving dangling threads which will have to be abruptly killed upon 
interpreter exit instead of being safely joined. From my perspective at least, 
this doesn't seem like something to be encouraged.
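
For reference, a minimal sketch of the pattern in question (assuming the report boils down to something like this):

```
import atexit
from concurrent.futures import ThreadPoolExecutor

def flush_state():
    # On 3.9+ this fails (RuntimeError: cannot schedule new futures after
    # interpreter shutdown) because the executor machinery has already been
    # finalized by the time atexit hooks run; on 3.8 it still worked.
    with ThreadPoolExecutor() as executor:
        executor.submit(print, "cleaning up at exit").result()

atexit.register(flush_state)
```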

Is there a real-world situation where it's specifically necessary, or even 
beneficial, to utilize a ThreadPoolExecutor at this point after thread 
finalization rather than earlier in the program? Not that one doesn't exist, but 
to me it intuitively seems very odd to utilize an executor within an atexit 
hook, which is intended to be just a resource finalization/cleanup function 
called at interpreter shutdown. Assuming there is a genuine use case I'm not 
seeing, it may be worth weighing it against the decision to convert the executors 
to not use daemon threads, as I presently don't think there's a way to (safely) 
allow that behavior without reverting back to using daemon threads.

That said, I'll admit that I'm more than a bit biased as the author of the 
commit which introduced the regression, so I'll CC Antoine Pitrou (active 
expert for threading and concurrent.futures) to help make the final decision.

--
nosy: +pitrou




[issue41962] Make threading._register_atexit public?

2021-01-24 Thread Kyle Stanley


Kyle Stanley  added the comment:

> I'm dealing with a subtle deadlock involving 
> concurrent.futures.ThreadPoolExecutor, and my solution that worked in Python 
> 3.8 broke with 3.9. I'm running some long-running (possibly infinite) tasks 
> in the thread pool, and I cancel them in an `atexit` callback so that 
> everything can shut down cleanly (before ThreadPoolExecutor joins all worker 
> threads in its own `atexit` hook).

IMO, a better and more reliable practice would be to provide those potentially 
infinite-running tasks a direct method of escape and to invoke it before calling 
executor.shutdown(). But perhaps there is some convenience in being able to 
provide custom atexit hooks; it also might help the user separate the shutdown 
logic from the rest of the program.
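
A minimal sketch of that escape-hatch approach (the names are just for illustration):

```
import threading
from concurrent.futures import ThreadPoolExecutor

stop_event = threading.Event()

def long_running_worker():
    # The "direct method of escape": poll a stop event instead of looping forever.
    while not stop_event.wait(timeout=0.1):
        pass  # pretend to do work
    return "stopped cleanly"

executor = ThreadPoolExecutor()
future = executor.submit(long_running_worker)

# ... rest of the program ...

stop_event.set()              # unblock the worker first
print(future.result())        # "stopped cleanly"
executor.shutdown(wait=True)  # joining the worker threads can no longer hang
```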

Since you worked with me on adding threading._register_atexit(), do you have 
any thoughts, Antoine? I would personally not be opposed to making it public, 
assuming there's real utility in doing so.

My only concern is that it might be a potential foot-gun. If the user submits 
an atexit hook that deadlocks, it might prevent threads from shutting down 
safely prior to interpreter finalization. I'm presently undecided whether 
explicitly mentioning that in the docs would be sufficient warning.

--
nosy: +pitrou




[issue42611] PEP 594

2020-12-09 Thread Kyle Stanley


New submission from Kyle Stanley :

This issue was created for the purpose of tracking any changes related to PEP 
594 (Removing dead batteries from stdlib), and any relevant discussions about 
the modules being removed.

--
components: Library (Lib)
messages: 382815
nosy: aeros, christian.heimes
priority: normal
pull_requests: 22588
severity: normal
status: open
title: PEP 594
type: enhancement
versions: Python 3.10




[issue42392] remove the 'loop' parameter from __init__ in all classes in asyncio.locks

2020-11-25 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset b9127dd6eedd693cfd716a648864e2e00186 by Yurii Karabas in 
branch 'master':
bpo-42392: Improve removal of *loop* parameter in asyncio primitives (GH-23499)
https://github.com/python/cpython/commit/b9127dd6eedd693cfd716a648864e2e00186


--




[issue42392] remove the 'loop' parameter from __init__ in all classes in asyncio.locks

2020-11-24 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Is there anyone who is assigned to removing the deprecated `loop` parameter 
> from `asyncio` codebase?

> If not I can take this task, I believe I have enough free time and curiosity 
> to do that :)

You can certainly feel free to work on that and it would definitely be 
appreciated! However, I would recommend splitting it into several PRs, 
basically as "Remove *loop* parameter from x" rather than doing one massive PR 
that removes it from everywhere it was deprecated. This makes the review 
process easier.

Also, keep in mind that there are some uses of *loop* that are still perfectly 
valid and will remain so, such as in `asyncio.run_coroutine_threadsafe()`. It 
should only be removed in locations where a deprecation warning has been 
present since 3.8 or earlier (as we give two major versions of deprecation 
notice before most breaking changes are made -- this has been made official 
policy as of PEP 387).
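
For example, here's a small sketch of a *loop* argument that remains valid, since the call is made from another thread where no loop is running:

```
import asyncio
import threading

async def work():
    await asyncio.sleep(0.1)
    return 42

def submit_from_other_thread(loop):
    # The explicit loop reference is required here; there is no running loop
    # in this thread to discover implicitly.
    fut = asyncio.run_coroutine_threadsafe(work(), loop)
    print(fut.result())

async def main():
    loop = asyncio.get_running_loop()
    t = threading.Thread(target=submit_from_other_thread, args=(loop,))
    t.start()
    await asyncio.sleep(0.5)  # keep the loop running while the other thread submits
    t.join()

asyncio.run(main())
```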

--




[issue42392] remove the 'loop' parameter from __init__ in all classes in asyncio.locks

2020-11-18 Thread Kyle Stanley


Kyle Stanley  added the comment:

Regarding the example _get_loop():

```
    def _get_loop(self):
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop
        if loop is not self._loop: raise
        if not loop.is_running(): raise
```

Would this be added to all asyncio primitives to be called anytime a reference 
to the loop is needed within a coroutine?

Also, regarding the last line "if not loop.is_running(): raise" I'm not 100% 
certain that I understand the purpose. Wouldn't it already raise a RuntimeError 
from `asyncio.get_running_loop()` if the event loop wasn't running?

The only thing I can think of where it would have an effect is if somehow the 
event loop was running at the start of `_get_loop()` and then the event loop 
was stopped (e.g. a loop in an alternative thread was stopped by the main 
thread while the alternative thread was in the middle of executing 
`_get_loop()`). But to me, that seems to be enough of an edge case to simplify 
it to the following:


```
    def _get_loop(self):
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop
        if loop is not self._loop: raise
```

(Unless you intended for the first line `loop = asyncio.get_running_loop()` to 
instead be using the private `asyncio._get_running_loop()`, which returns None 
and doesn't raise. In that case, the original would be good to me.)

Other than that, I think the approach seems solid.

--




[issue42392] remove the 'loop' parameter from __init__ in all classes in asyncio.locks

2020-11-17 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Oh my.  FWIW I think that we need to implement this differently. I don't 
> think it matters where, say, an asyncio.Lock was instantiated. It can be 
> created anywhere. So IMO its __init__ shouldn't try to capture the current 
> loop -- there's no need for that. The loop can be and should be captured when 
> the first `await lock.acquire()` is called.

That's good to know and I think more convenient to work with, so +1 from me. I 
guess my remaining question though is whether it's okay to `await 
lock.acquire()` on a single lock instance from multiple different running event 
loops (presumably each in their own separate threads) or if there's a need to 
restrict it to only one event loop.

> I'm writing a piece of code right now that would need to jump through the 
> hoops to simply create a new `asyncio.Lock()` in a place where there's no 
> asyncio loop yet.

From what I've seen of asyncio user code, it seems reasonably common to create 
async primitives (lock, semaphore, queue, etc.) in the __init__ for some class 
prior to using the event loop, which would fail with usage of 
`get_running_loop()` in the __init__ for the primitives. So, if it's not an 
issue to wait until the event loop is actually needed to access it (e.g. in 
lock.acquire() or queue.get/put()), I think we should definitely try to be 
conscious about when we call `get_running_loop()` going forward, to ensure 
we're not imposing arbitrary inconveniences on users.
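
For example, a minimal sketch of that user-code pattern, which works cleanly once the loop is captured lazily on first acquire as proposed:

```
import asyncio

class Service:
    def __init__(self):
        # Created before any event loop is running; the loop is only needed
        # once the lock is actually acquired.
        self._lock = asyncio.Lock()

    async def do_work(self):
        async with self._lock:
            await asyncio.sleep(0.1)

async def main():
    svc = Service()
    await asyncio.gather(svc.do_work(), svc.do_work())

svc = Service()  # instantiating with no loop running is the common pattern
asyncio.run(main())
```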

--




[issue42392] remove the 'loop' parameter from __init__ in all classes in asyncio.locks

2020-11-17 Thread Kyle Stanley


Kyle Stanley  added the comment:

Sure, I would be interested in helping with this. Although if a newer 
contributor takes it up before I'm able to, I wouldn't be opposed to that 
either (my main priority at the moment is helping with PEP 594 since it's a 
concrete goal of my internship w/ the PSF, but I should have some time to work 
on this as well).

As far as I can tell though, there's currently a similar PR open: 
https://github.com/python/cpython/pull/18195 . This attempts to deprecate both 
the loop argument and the creation of primitives such as asyncio.Lock outside 
of a running event loop, with the following approach:

```
def __init__(self, *, loop=None):
    self._waiters = None
    self._locked = False
    if loop is None:
        self._loop = events._get_running_loop()
        if self._loop is None:
            warnings.warn("The creation of asyncio objects without a running "
                          "event loop is deprecated as of Python 3.9.",
                          DeprecationWarning, stacklevel=2)
            self._loop = events.get_event_loop()
    else:
        warnings.warn("The loop argument is deprecated since Python 3.8, "
                      "and scheduled for removal in Python 3.10.",
                      DeprecationWarning, stacklevel=2)
```

So, do we want to add more strictness to that by always using 
`get_running_loop()` to access the event loop each time, instead of accessing 
self._loop, and effectively ignoring the user-supplied one? Presumably, this 
would start with a warning for passing a *loop* arg, with the parameter then 
being removed entirely ~two versions later.

> (or alternatively they can cache the running `loop`, but the first loop 
> lookup should be performed with `asyncio.get_running_loop()`)

AFAIK, at the C-API extension level, get_running_loop() already caches the 
running loop in `cached_running_holder`. 
(https://github.com/python/cpython/blob/9c98e8cc3ebf56d01183c67adbc000ed19b8e0f4/Modules/_asynciomodule.c#L232).
 So from a performance perspective, wouldn't it effectively be the same if we 
repeatedly use `get_running_loop()` to access the same event loop? I think it 
also adds a nice integrity check to be certain that the primitive wasn't 
initially created within a running event loop, and then later accessed outside 
of one.

The only concern that I can see with this approach is that users could 
potentially create a primitive in one running event loop and then access it in 
a separate loop running in a different thread (without using something like 
self._loop, the primitive would no longer be associated with a specific event 
loop and could potentially be used within *any* running event loop). I'm not 
entirely certain if that is a real problem though, and if anything, it seems 
like it could prove to be useful in some multi-loop environment. I just want to 
make sure that it's intended.

--




[issue41699] Potential memory leak with asyncio and run_in_executor

2020-10-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Regularly calling executor.shutdown() and then instantiating a new 
> ThreadPoolExecutor in order to run an asyncio program does not seem like a 
> good API to me.

Clarification: you're typically only supposed to instantiate a single 
ThreadPoolExecutor or ProcessPoolExecutor per program (sometimes one of each 
depending on use case), and continuously submit jobs to it rather than creating 
multiple executor instances. Otherwise, it's generally unneeded overhead.
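
A quick sketch of the intended usage pattern:

```
from concurrent.futures import ThreadPoolExecutor

def handle_job(n):
    return n * n

# One long-lived executor that jobs are repeatedly submitted to, rather than
# a new executor per batch of work.
executor = ThreadPoolExecutor(max_workers=4)
futures = [executor.submit(handle_job, n) for n in range(100)]
print(sum(f.result() for f in futures))
executor.shutdown(wait=True)  # once, when the program is done with it
```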

(I'll take a look at the other parts once I find the time, just wanted to 
briefly mention the above in the meantime.)

--




[issue41699] Potential memory leak with asyncio and run_in_executor

2020-10-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

Also note that the difference in memory is much higher when an exception occurs 
(presumably because the exception is stored on future._exception and not 
cleaned up?):
```
[aeros:~/repos/cpython]$ ./python 
~/programming/python/asyncio_run_in_exec_leak.py -n=1 --asyncio
start RSS memory:
VmRSS: 16976 kB
after 10_000 iterations of 1 element lists:
VmRSS: 64132 kB
```

--




[issue41699] Potential memory leak with asyncio and run_in_executor

2020-10-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

In the snippet provided, at least part of the resources are not finalized 
because executor.shutdown() was not called in the program (which should be done 
when creating a local instance of the executors, either explicitly or via the 
context manager). For the event loop's default threadpool (used w/ 
loop.run_in_executor(None, ...)), the coroutine function 
loop.shutdown_default_executor() that I added in 3.9+ handles this (it's called 
in asyncio.run()).

Without ever calling executor.shutdown(), the worker threads/processes and 
their associated resources are not finalized until interpreter shutdown. 
There's also some additional finalization that occurs in `_python_exit()` for 
both TPE and PPE (see 
https://github.com/python/cpython/blob/3317466061509c83dce257caab3661d52571cab1/Lib/concurrent/futures/thread.py#L23
 or 
https://github.com/python/cpython/blob/3317466061509c83dce257caab3661d52571cab1/Lib/concurrent/futures/process.py#L87),
 which is called just before all non-daemon threads are joined at interpreter 
shutdown.

However, even considering the above, there still seems to be a significant 
additional difference in RSS between using ThreadPoolExecutor directly and 
using loop.run_in_executor() that I can't account for (measured before and 
after asyncio.run()):

```
import asyncio
import concurrent.futures as cf
import os
import gc
import argparse

from concurrent.futures.thread import _python_exit

def leaker(n):
    list(range(n))

def func_TPE(n):
    with cf.ThreadPoolExecutor() as executor:
        for i in range(10_000):
            executor.submit(leaker, n)

async def func_run_in_executor(n):
    loop = asyncio.get_running_loop()
    for i in range(10_000):
        loop.run_in_executor(None, leaker, n)

def display_rss():
    os.system(f"grep ^VmRSS /proc/{os.getpid()}/status")

def main(n=100, asyncio_=False):
    try:
        if asyncio_:
            asyncio.run(func_run_in_executor(n))
        else:
            func_TPE(n)
    finally:
        _python_exit()
        gc.collect()
        print(f"after 10_000 iterations of {n} element lists:")
        display_rss()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-n", type=int, default=100)
    parser.add_argument("--asyncio", action=argparse.BooleanOptionalAction)

    print("start RSS memory:")
    display_rss()

    args = parser.parse_args()
    main(args.n, args.asyncio)
```
Results (on latest commit to master, 3.10):
asyncio -
```
[aeros:~/repos/cpython]$ ./python 
~/programming/python/asyncio_run_in_exec_leak.py -n=1 --asyncio
start RSS memory:
VmRSS: 16948 kB
after 10_000 iterations of 1 element lists:
VmRSS: 27080 kB
```
concurrent.futures -
```
[aeros:~/repos/cpython]$ ./python 
~/programming/python/asyncio_run_in_exec_leak.py -n=1 --no-asyncio
start RSS memory:
VmRSS: 17024 kB
after 10_000 iterations of 1 element lists:
VmRSS: 19572 kB
```
When using await before loop.run_in_executor(), the results are more similar to 
using ThreadPoolExecutor directly:
```
[aeros:~/repos/cpython]$ ./python 
~/programming/python/asyncio_run_in_exec_leak.py -n=1 --asyncio  
start RSS memory:
VmRSS: 16940 kB
after 10_000 iterations of 1 element lists:
VmRSS: 17756 kB
```
However, as mentioned by the OP, if stored in a container and awaited later 
(such as w/ asyncio.gather()), a substantial memory difference is present 
(increases with list size):
```
[aeros:~/repos/cpython]$ ./python 
~/programming/python/asyncio_run_in_exec_leak.py -n=1 --asyncio
start RSS memory:
VmRSS: 16980 kB
after 10_000 iterations of 1 element lists:
VmRSS: 29744 kB
```

Based on the above results, I think there may be a smaller leak occurring in 
concurrent.futures (potentially related to the linked bpo-41588) and a bit of a 
larger leak occurring in loop.run_in_executor(). So they can remain as separate 
issues IMO.

At the moment, my best guess is that there's some memory leak that occurs from 
the future not being fully cleaned up, but I'm not certain about that. This 
will likely require some further investigation.

Input from Yury and/or Andrew would definitely be appreciated. Is there 
something that I'm potentially missing here?

--




[issue41588] Potential Memory leak with concurrent.futures.ThreadPoolExecutor's map

2020-10-29 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +aeros, bquinlan, pitrou




[issue28533] Replace asyncore

2020-10-22 Thread Kyle Stanley


Kyle Stanley  added the comment:

Since this issue is now a significant blocker for PEP 594 (remove stdlib dead 
batteries, which includes asyncore and asynchat), I'm going to prioritize 
working on this and assign it to myself.

--
assignee:  -> aeros




[issue38912] test_asyncio altered the execution environment

2020-10-19 Thread Kyle Stanley


Kyle Stanley  added the comment:

I've applied Justin's patch, and opened PRs for backporting to 3.9 and 3.8 
(currently waiting on CI). 

Based on my local testing and buildbot fleet results, it definitely reduces the 
rate at which the failure occurs, but does not 100% resolve the ENV CHANGED 
failure (from the transport being closed). As a result, I'll leave the issue 
open for further investigation.

--




[issue38912] test_asyncio altered the execution environment

2020-10-19 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset de73d432bb29f6439f2db16cb991e15e09c70c26 by Justin Turner Arthur 
in branch 'master':
bpo-38912: fix close before connect callback in test_asyncio SSL tests 
(GH-22691)
https://github.com/python/cpython/commit/de73d432bb29f6439f2db16cb991e15e09c70c26


--




[issue29127] Incorrect reference names in asyncio.subprocess documentation

2020-10-02 Thread Kyle Stanley


Kyle Stanley  added the comment:

I can confirm that both on the latest version of the docs (for 3.8) and for the 
version mentioned in the issue (3.6), the issue mentioned with 
asyncio.subprocess.PIPE is no longer present. (It was likely fixed in the 
asyncio documentation overhaul that happened within the last couple of years).

As a result, I'll proceed with closing this issue. Thanks for checking, Carl.

--
nosy: +aeros
resolution:  -> fixed
stage:  -> resolved
status: open -> closed




[issue37294] concurrent.futures.ProcessPoolExecutor state=finished raised error

2020-09-30 Thread Kyle Stanley


Kyle Stanley  added the comment:

DanilZ, could you take a look at the superseding issue 
(https://bugs.python.org/issue37297) and see if your exception raised within 
the job is the same?  

If it's not, I would suggest opening a separate issue (and linking to it in a 
comment here), as I don't think it's necessarily related to this one. 
"state=finished raised error" doesn't indicate the specific exception that 
occurred. A good format for the name would be something along the lines of:

"ProcessPoolExecutor.submit()  while reading 
large object (4GB)"

It'd also be helpful in the separate issue to paste the full exception stack 
trace, specify OS, and multiprocessing start method used (spawn, fork, or 
forkserver). This is necessary to know for replicating the issue on our end.

In the meantime, a workaround I would suggest trying would be to use the 
*chunksize* parameter (or *iterator*) in pandas.read_csv(), and split the work 
across several jobs (at least 4+, more if you have additional cores) instead of 
doing it all within a single one. It'd also be generally helpful to see if that 
alleviates the problem, as it could possibly indicate an issue with running out 
of memory when the dataframe is converted to pickle format (which often 
increases the total size) within the process associated with the job.
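
A rough sketch of that workaround (the file name, chunk size, and per-chunk work are placeholders):

```
import pandas as pd
from concurrent.futures import ProcessPoolExecutor

def process_chunk(df):
    return len(df)  # stand-in for the real per-chunk work

def main():
    with ProcessPoolExecutor() as executor:
        futures = []
        # Read the CSV in chunks and give each chunk its own job, instead of
        # pickling one ~4GB dataframe into a single worker process.
        for chunk in pd.read_csv("large_file.csv", chunksize=250_000):
            futures.append(executor.submit(process_chunk, chunk))
        print(sum(f.result() for f in futures))

if __name__ == "__main__":
    main()
```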

--
nosy: +aeros




[issue41815] SQLite: segfault if backup called on closed database

2020-09-21 Thread Kyle Stanley


Kyle Stanley  added the comment:

I've received approval from Lukasz to backport to 3.9 and 3.8, and have now 
merged the PRs.

It could use a post-commit review from a core dev that has some additional 
experience with SQLite3, but it can be de-escalated from release blocker.

--
priority: release blocker -> normal
stage: patch review -> commit review
status: open -> closed




[issue41815] SQLite: segfault if backup called on closed database

2020-09-19 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +ghaering




[issue41815] SQLite: segfault if backup called on closed database

2020-09-19 Thread Kyle Stanley


Kyle Stanley  added the comment:

With this being a segfault that's simple to replicate under normal 
circumstances, I've elevated the priority to release blocker. I'll wait on the 
release manager (Lukasz) for explicit approval for backporting to 3.9 and 3.8, 
with both branches being in the release candidate phase.

(The attached PR looks to be ready to be merged as well, after a Misc/NEWS 
entry is added by the author. See 
https://github.com/python/cpython/pull/22322#pullrequestreview-492167467 for 
details.)

--
nosy: +aeros, lukasz.langa
priority: normal -> release blocker
versions:  -Python 3.7




[issue41818] Lib/pty.py major revision

2020-09-19 Thread Kyle Stanley


Kyle Stanley  added the comment:

In addition to the above, if a major revision is made to pty, I'd suggest also 
addressing the issue of "master/slave" terminology, and replace it with 
something comparable like "parent/child". There's an open devguide issue 
(https://github.com/python/devguide/issues/605) to more explicitly state terms 
to avoid, and support for avoiding usage of "slave/master" seems 
uncontroversial (especially in any new code).

--
nosy: +aeros




[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-09-02 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for bringing attention to cancel_futures being missing in the base 
Executor class and for the PR, Shantanu!

--




[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-09-02 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset a763ee3c583e6a2dfe1b1ac0600a48e8a978ed50 by Shantanu in branch 
'3.9':
[3.9] bpo-39349: Add cancel_futures to Executor.shutdown base class (GH-22023) 
(GH-22048)
https://github.com/python/cpython/commit/a763ee3c583e6a2dfe1b1ac0600a48e8a978ed50


--




[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-09-01 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset 17dc1b789ecc33b4a254eb3b799085f4b3624ca5 by Shantanu in branch 
'master':
bpo-39349: Add cancel_futures to Executor.shutdown base class (GH-22023)
https://github.com/python/cpython/commit/17dc1b789ecc33b4a254eb3b799085f4b3624ca5


--




[issue41682] test_asyncio: Proactor test_sendfile_close_peer_in_the_middle_of_receiving failure

2020-08-31 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy:  -paul.moore, steve.dower, tim.golden, zach.ware




[issue41682] test_asyncio: Proactor test_sendfile_close_peer_in_the_middle_of_receiving failure

2020-08-31 Thread Kyle Stanley


Change by Kyle Stanley :


--
components: +Windows, asyncio
nosy: +paul.moore, steve.dower, tim.golden, zach.ware




[issue41682] test_asyncio: Proactor test_sendfile_close_peer_in_the_middle_of_receiving failure

2020-08-31 Thread Kyle Stanley

Kyle Stanley  added the comment:

In case the link to the logs are lost from closing and reopening the PR, it was 
a Windows CI test (GitHub actions) ran with the following configuration:

D:\a\cpython\cpython>"D:\a\cpython\cpython\PCbuild\amd64\python.exe"  -u -Wd -E 
-bb -m test  -uall -u-cpu -rwW --slowest --timeout 1200 -j0 
== CPython 3.10.0a0 (remotes/pull/22023/merge:992cc27, Aug 31 2020, 06:11:21) 
[MSC v.1927 64 bit (AMD64)]
== Windows-10-10.0.17763-SP0 little-endian
== cwd: D:\a\cpython\cpython\build\test_python_6372
== CPU count: 2
== encodings: locale=cp1252, FS=utf-8
Using random seed 491155
0:00:00 Run tests in parallel using 4 child processes (timeout: 20 min, worker 
timeout: 25 min)

--




[issue41682] test_asyncio: Proactor test_sendfile_close_peer_in_the_middle_of_receiving failure

2020-08-31 Thread Kyle Stanley


Change by Kyle Stanley :


--
components: +Tests
nosy: +asvetlov, yselivanov
type:  -> behavior
versions: +Python 3.10




[issue41682] test_asyncio: Proactor test_sendfile_close_peer_in_the_middle_of_receiving failure

2020-08-31 Thread Kyle Stanley


New submission from Kyle Stanley :

In an unrelated PR to add a new argument to the base Executor class for 
concurrent.futures, there was a test_asyncio failure in test_sendfile for the 
ProactorEventLoop tests:

==
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving 
(test.test_asyncio.test_sendfile.ProactorEventLoopTests)
--
Traceback (most recent call last):
  File "D:\a\cpython\cpython\lib\test\test_asyncio\test_sendfile.py", line 452, 
in test_sendfile_close_peer_in_the_middle_of_receiving
self.run_loop(
AssertionError: ConnectionError not raised

--

https://github.com/python/cpython/pull/22023/checks?check_run_id=1049760805#step:5:2373

--
title: test_asyncio: Proactor -> test_asyncio: Proactor 
test_sendfile_close_peer_in_the_middle_of_receiving failure




[issue41682] test_asyncio: Proactor

2020-08-31 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: aeros
priority: normal
severity: normal
status: open
title: test_asyncio: Proactor




[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-08-30 Thread Kyle Stanley


Kyle Stanley  added the comment:

Good catch, Shantanu. It was not intentional on my part, and it would make 
sense to include *cancel_futures* in the base Executor class as documented. If 
you'd like to submit a PR to add it (attaching it to this issue), I should be 
able to review it within a reasonable period.

--




[issue41577] Cannot use ProcessPoolExecutor if in a decorator?

2020-08-18 Thread Kyle Stanley


Kyle Stanley  added the comment:

Due to the way pickle works, it's not presently possible to serialize wrapped 
functions directly, at least not in a way that allows you to pass it to 
executor.submit() within the inner function (AFAICT). I'm also not certain what 
it would involve to provide that, or if it would be feasible to do so in a 
manner that would be backwards compatible.

In the meantime, this is a decent work-around:

```
from concurrent import futures
import random

class PPEWrapper:
    def __init__(self, func, num_proc=4):
        self.fn = func
        self.num_proc = num_proc

    def __call__(self, *args, **kwargs):
        with futures.ProcessPoolExecutor() as executor:
            fs = []
            for i in range(self.num_proc):
                fut = executor.submit(self.fn, *args, **kwargs)
                fs.append(fut)
            result = []
            for f in futures.as_completed(fs):
                result.append(f.result())
            return result


def _get_random_int():
    return random.randint(0, 100)

# it's often quite useful anyways to have a parallel and non-parallel version
# (for testing and devices that don't support MP)
get_random_int = PPEWrapper(_get_random_int, num_proc=4)

if __name__ == "__main__":
    result = get_random_int()
    print(result)
```

This doesn't allow you to use the decorator syntax, but it largely provides the 
same functionality. That being said, my familiarity with the pickle protocol 
isn't overly strong, so the above is mostly based on my own recent 
investigation. There could very well be a way to accomplish what you're looking 
for in a way that I was unable to determine.

--
nosy: +aeros, alexandre.vassalotti, pitrou




[issue41202] Allow to provide custom exception handler to asyncio.run()

2020-07-10 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Should I set status for this issue for closed with resolution rejected ?

I'll proceed with closing the issue.

> Should I delete branch on my forked git repo ?
> Can I delete my forked git repo ?

Might as well delete the branch, but the forked repo might be useful to keep 
around should you choose to open a PR to CPython in the future (although you 
can also just create a new one later). If you are interested in working on 
something else in the future, I'd recommend looking at the "newcomer 
friendly"/"easy" issues, or just looking for an existing open issue that you're 
interested in helping with. In most cases, working with a clearly defined issue 
is much easier than trying to propose a new one and getting it merged.

--
resolution:  -> wont fix
stage: patch review -> resolved
status: open -> closed




[issue41202] Allow to provide custom exception handler to asyncio.run()

2020-07-08 Thread Kyle Stanley


Kyle Stanley  added the comment:

Yep, having to set a custom exception handler definitely counts as needing 
"finer control over the event loop behavior". There's absolutely nothing wrong 
with using the low-level API when you need further customization, but we try to 
keep the high-level API as minimal as possible to make it simple for the 
average user. I suspect that the majority of asyncio users don't have a need to 
customize the exception handler beyond the default settings.

--
nosy: +aeros




[issue40967] asyncio.Task.all_tasks() and asyncio.Task.current_task() must be removed in 3.9

2020-07-01 Thread Kyle Stanley

Kyle Stanley  added the comment:

Thanks Rémi for working on this.

--
priority: release blocker -> normal
stage: patch review -> commit review




[issue40967] asyncio.Task.all_tasks() and asyncio.Task.current_task() must be removed in 3.9

2020-06-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

> With beta 4 coming in 2 days it would perhaps good to choose now whether this 
> must be done for 3.9 or 3.10

Agreed. I'm definitely leaning towards 3.10 with it being decently well into 
the beta, but I'll try to bring this to Yury's attention to make a final 
decision on the matter. Optimally, we want to do removals before the beta so 
that users can prepare accordingly rather than deal with breakage in the final 
release.

--




[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-06-14 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks again for working on this @Fantix, and for the continued vigilance on 
the issue after the test failures occurred in the buildbots. I think this 
recent PR-20868 will do the trick. At the very least, it will make the failure 
much more rare than before, which is definitely an improvement.

--
stage: patch review -> commit review
versions: +Python 3.10, Python 3.9 -Python 3.8




[issue40973] platform.platform() in Python 3.8 does not report detailed Linux platform information

2020-06-14 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +aeros




[issue40973] platform.platform() in Python 3.8 does not report detailed Linux platform information

2020-06-14 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +jaraco, lemburg




[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-06-14 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Last question before PR pls: given the fact that this test is to cover a 
> fixed case when loop.sock_*() was hanging forever, should I keep the 
> wait_for(..., timeout=10)?

For now, I think we can keep the asyncio.wait_for() w/ the 10 sec timeout. I'll 
have to run the tests locally though w/ the patch to double check (once the PR 
is opened), and it may still end up needing to be increased if the test fails 
again in the future. It should be okay though.

--




[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-06-14 Thread Kyle Stanley


Kyle Stanley  added the comment:

No problem at all, and thanks for clarifying.

> If enlarging the timeout works in some of your cases, that really inspires me 
> on a potential cause of the issue in the test. To simulate the race condition 
> specifically for loop.sock_sendall(), I needed a connected socket with a full 
> send buffer so that the next sock_sendall() call would block. To achieve 
> this, I wrote something like this:

> It might be that it cost too much time in these loops (especially the 2nd one 
> because the logs showed that it was blocked on calling sock_sendall()). But 
> that's just a guess yet, let me add some more debug logs and test locally.

That seems reasonable to me and would make the most sense at a glance. If this 
is confirmed to be the source of the issue, we should be able to just reduce 
the size of the socket's send buffer via SO_SNDBUF before sending to it so that 
it fills up much more quickly.
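
Something along these lines (just a sketch of the idea; the buffer size is arbitrary and the kernel may round it up):

```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Shrink the send buffer so a single sock_sendall() call blocks much sooner.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```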

--




[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-06-13 Thread Kyle Stanley


Kyle Stanley  added the comment:

I believe that I might have identified a simple fix for the issue, but first 
I'd like to clarify on something:

What exactly is the reason that we need to use asyncio.wait_for() with a 
timeout of 10 seconds in these tests? We typically avoid using short duration 
timeouts on individual tests, largely because we don't want them to fail based 
on the performance of the device they're run on. This can be a concern for 
slower devices running the test suite w/ parallel jobs.

If asyncio.wait_for() is necessary in this case, I'd advise increasing the 
timeout substantially. I didn't experiment with a wide variety of numbers, but 
increasing the duration from 10 to 60 seconds seems to have fixed the failure 
locally, so that might be a decent starting point. If it's not necessary, I'd 
prefer to simply remove the asyncio.wait_for()s in test_sock_client_racing.

--




[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-06-13 Thread Kyle Stanley


Kyle Stanley  added the comment:

I was able to replicate the failure locally by running a large number of jobs 
in parallel. We typically use this to test for tricky race conditions, as it 
simulates maximum load:

[aeros:~/repos/cpython]$ ./python -m test test_asyncio.test_sock_lowlevel 
--match test_sock_client_racing -j100 -F -v
# [snip]
==
ERROR: test_sock_client_racing 
(test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests)
--
Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", 
line 200, in _basetest_sock_send_racing
await self.loop.sock_sendall(sock, b'world')
  File "/home/aeros/repos/cpython/Lib/asyncio/selector_events.py", line 460, in 
sock_sendall
return await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/asyncio/tasks.py", line 507, in wait_for
fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", 
line 256, in test_sock_client_racing
self.loop.run_until_complete(asyncio.wait_for(
  File "/home/aeros/repos/cpython/Lib/asyncio/base_events.py", line 642, in 
run_until_complete
return future.result()
  File "/home/aeros/repos/cpython/Lib/asyncio/tasks.py", line 509, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

==
ERROR: test_sock_client_racing 
(test.test_asyncio.test_sock_lowlevel.PollEventLoopTests)
--
Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", 
line 200, in _basetest_sock_send_racing
await self.loop.sock_sendall(sock, b'world')
  File "/home/aeros/repos/cpython/Lib/asyncio/selector_events.py", line 460, in 
sock_sendall
return await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/asyncio/tasks.py", line 507, in wait_for
fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", 
line 256, in test_sock_client_racing
self.loop.run_until_complete(asyncio.wait_for(
  File "/home/aeros/repos/cpython/Lib/asyncio/base_events.py", line 642, in 
run_until_complete
return future.result()
  File "/home/aeros/repos/cpython/Lib/asyncio/tasks.py", line 509, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

==
ERROR: test_sock_client_racing 
(test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests)
--
Traceback (most recent call last):
  File "/home/aeros/repos/cpython/Lib/test/test_asyncio/test_sock_lowlevel.py", 
line 256, in test_sock_client_racing
self.loop.run_until_complete(asyncio.wait_for(
  File "/home/aeros/repos/cpython/Lib/asyncio/base_events.py", line 629, in 
run_until_complete
self.run_forever()
  File "/home/aeros/repos/cpython/Lib/asyncio/base_events.py", line 596, in 
run_forever
self._run_once()
  File "/home/aeros/repos/cpython/Lib/asyncio/base_events.py", line 1854, in 
_run_once
event_list = self._selector.select(timeout)
  File "/home/aeros/repos/cpython/Lib/selectors.py", line 323, in select
r, w, _ = self._select(self._readers, self._writers, [], timeout)
OSError: [Errno 9] Bad file descriptor

I'll try to add some debug logs to find the source of the failure(s). This one 
might be a bit tricky to assess though.

@Fantix I'd recommend trying to run the local test with the above command, and 
increasing the parallel job count until it occurs. If it takes more than a 
minute or two for the tests to run, slowly lower the number. At the moment, the 
failure has only occurred once in the buildbots, but we might need to 
temporarily disable it if it occurs repeatedly or find some form of workaround 
(as it will potentially mask other bugs).

--




[issue40967] asyncio.Task.all_tasks() and asyncio.Task.current_task() must be removed in 3.9

2020-06-13 Thread Kyle Stanley

Kyle Stanley  added the comment:

Thanks for bringing attention to this Rémi. 

I'm a bit concerned that it might be a bit late into the 3.9 beta to 
incorporate a removal and that we might need to consider delaying to 3.10, but 
I'll leave that up to Lukasz. Feel free to open a PR in the meantime since it 
will need to be removed either way, the only main difference is whether or not 
it gets backported to 3.9.

(I elevated the priority to "release blocker", as I think it should be a 
blocker for 3.10 if it's not for 3.9. The version should be updated 
accordingly.)

--
nosy: +aeros, lukasz.langa
priority: normal -> release blocker




[issue40894] asyncio.gather() cancelled() always False

2020-06-13 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for the additional feedback, Caleb.

> I think `gather()` should work the same. It would be confusing if 
> `future_gather.cancelled()` is false if a child is cancelled, while a plain 
> old outer future returns `future.cancelled() == true` if futures that it 
> waits on are cancelled.

Agreed, I think it would make the most sense for the behavior of .cancelled() 
to be similar for tasks, futures, and gather() itself. So, I'm still inclined 
towards checking the exception to see if CancelledError occurred for 
_GatheringFuture.cancelled(), as I think that would result in the most similar 
behavior.

Currently waiting on feedback from Yury and/or Andrew about the current 
long-term plans for asyncio.gather(), and their general thoughts on this issue. 
Unless it is realistically going to be deprecated and effectively replaced with 
task groups in 3.10, I think the change would make sense.

--




[issue40894] asyncio.gather() cancelled() always False

2020-06-07 Thread Kyle Stanley


Kyle Stanley  added the comment:

Upon further investigation, I've realized that the issue is just that the 
cancel() override for `_GatheringFuture` never sets its state to CANCELLED at 
any point (unlike its parent), and is instead going to always be set to 
FINISHED because of the `outer.set_exception()` 
(https://github.com/python/cpython/blob/b8f67c0923ac85468dfbfd43375e0bbfb6ca50ea/Lib/asyncio/tasks.py#L826
 or 
https://github.com/python/cpython/blob/b8f67c0923ac85468dfbfd43375e0bbfb6ca50ea/Lib/asyncio/tasks.py#L791,
 depending on the arg for *return_exceptions*). 

Since we are unable to set the state of `_GatheringFuture` (at least not 
without causing side effects), I'm starting to agree that it makes sense to 
override the behavior for `cancelled()`. However, it might make more sense to 
replace the `self._cancel_requested` check with `isinstance(self.exception(), 
exceptions.CancelledError)`, as this more accurately indicates if and when the 
gather() was cancelled. I don't think we should rely on 
`self._cancel_requested`, since it would be true before the future is actually 
cancelled, and it wouldn't always be correct given that the gather() can be 
cancelled without setting `self._cancel_requested`.

Here's one case where relying on `self._cancel_requested` for 
future_gather.cancelled() wouldn't work, based on a slight modification of the 
reproducer example:

```
async def main():
    """Cancel a gather() future and child and return it."""
    task_child = ensure_future(sleep(1.0))
    future_gather = gather(task_child)

    task_child.cancel()
    try:
        await future_gather
    except asyncio.CancelledError:
        pass

    return future_gather, task_child
```

(Despite `await future_gather` raising a CancelError and effectively being 
cancelled, `future_gather.cancelled()` would return false since 
`self._cancel_requested` was never set to true.)

--




[issue40894] asyncio.gather() cancelled() always False

2020-06-06 Thread Kyle Stanley


Kyle Stanley  added the comment:

> So, we can't rely on checking ``self.done() and self._cancel_requested`` for 
> future.cancelled() as this would mean that future.cancelled() would return 
> true for a future that fully completed if `future.cancel()` was called after 
> it finished (which is incorrect).

This should be "was called *as* it finished", not "was called *after* it was 
finished". If gather_future.cancel() is called after the future is `FINISHED`, 
it will immediately return false since it checks `self.done()` first 
(https://github.com/python/cpython/blob/b8f67c0923ac85468dfbfd43375e0bbfb6ca50ea/Lib/asyncio/tasks.py#L727).

--




[issue40894] asyncio.gather() cancelled() always False

2020-06-06 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Specifically a future can't be cancelled once it reaches the PENDING state, 
> this is indicated when future.cancel() returns false; see 
> https://github.com/python/cpython/blob/0e96c419d7287c3c7f155c9f2de3c61020386256/Lib/asyncio/futures.py#L152
>  for the source code

Actually, I was mistaken in my previous message; sorry about that. It can 
*only* be cancelled when the state is PENDING. So, I suspect there's a race 
condition where the state is `PENDING` when future.cancel() is called, but 
transitions to `FINISHED` before the future is actually cancelled. I'll insert 
some logging in `future.cancel()` to verify this locally.

The approach in PR-20686 has a fundamental problem though: simply because 
cancellation was requested does not mean the future was cancelled. So, we can't 
rely on checking ``self.done() and self._cancel_requested`` for 
future.cancelled() as this would mean that future.cancelled() would return true 
for a future that fully completed if `future.cancel()` was called after it 
finished (which is incorrect).

I'll have to do some further investigation to determine the cause and a 
potential fix of the race condition. Thanks again for reporting it, and my 
apologies for the earlier mixup.

--




[issue40894] asyncio.gather() cancelled() always False

2020-06-06 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thank you for the bug report Timm.

While I can certainly understand the source of the confusion here, I think you 
may be misunderstanding an important part of cancellation for futures. 
Specifically a future can't be cancelled once it reaches the PENDING state, 
this is indicated when future.cancel() returns false; see 
https://github.com/python/cpython/blob/0e96c419d7287c3c7f155c9f2de3c61020386256/Lib/asyncio/futures.py#L152
 for the source code. In your example code snippet, you can also see this if 
you replace the line:

```
future_gather.cancel()
```
with the following:
```
if not future_gather.cancel():
    logger.info("future_gather could not be cancelled")
```

Also, to clarify on something else:
> However, my understanding is that asyncio.gather() is scheduled for 
> deprecation, so maybe changes in this area are on halt anyways!?

asyncio.gather() is not deprecated or scheduled for deprecation; it is only the loop argument that is being deprecated.

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40894>
___



[issue40802] AbstractEventLoop.shutdown_default_executor breaks backwards compatibility

2020-06-04 Thread Kyle Stanley


Kyle Stanley  added the comment:

The main issue is that shutdown_default_executor() (or something that does the 
same thing) is necessary to ensure the resources of the event loop's 
ThreadPoolExecutor are finalized in a safe manner. Otherwise, it frequently 
results in dangling threads when run_in_executor() is used with the default 
executor, and was regularly doing so in the buildbots. So, I think this is a 
case where we ultimately have to break backwards compatibility in an upcoming 
version to fix a bug.
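
For illustration, here is a rough sketch (assuming CPython 3.9+ and a manually 
managed loop; asyncio.run() already does this for you) of the finalization step 
in question:

```
import asyncio

loop = asyncio.new_event_loop()
try:
    # Work that runs in the default ThreadPoolExecutor
    loop.run_until_complete(loop.run_in_executor(None, print, "hello from a worker"))
finally:
    loop.run_until_complete(loop.shutdown_asyncgens())
    # Without this step, the executor's worker threads may still be alive at exit
    loop.run_until_complete(loop.shutdown_default_executor())
    loop.close()
```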

Also, the implementation of shutdown_default_executor() is fairly 
straightforward, and as far as I'm aware it's not incompatible with any of the 
listed projects that would need to implement it for 3.9. While inconvenient, I 
don't think the burden to address the compatibility break is at all high in 
this case.

> Since asyncio is no longer provisional, should it break backwards 
> compatibility with just a What's New entry?

Do you have an alternative suggestion? Personally, I don't think a deprecation 
warning (or other) would make much sense here since it would be effectively 
leaving a resource leak bug as long as it isn't implemented.

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40802>
___



[issue40061] Possible refleak in _asynciomodule.c future_add_done_callback()

2020-05-30 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for the PR, Zackery.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue40061>
___



[issue35621] asyncio.create_subprocess_exec() only works with main event loop

2020-05-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

> is there a workaround for earlier Python versions that does not involve 
> patching the standard library?

You could potentially try using the new default watcher, 
`ThreadedChildWatcher`, by implementing it locally and setting it as the child 
watcher to use (instead of `SafeChildWatcher`) with `set_child_watcher()`. 
AFAICT, the current implementation should work well for earlier versions of 
Python that don't have it; we just can't include it earlier than 3.8 since it's 
a new feature.

See 
https://github.com/python/cpython/blob/4649202ea75d48e1496e99911709824ca2d3170e/Lib/asyncio/unix_events.py#L1326
 for reference.
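
A hedged sketch of that workaround, assuming the linked ThreadedChildWatcher 
implementation has been copied into a local module (the module name 
threaded_child_watcher below is hypothetical):

```
import asyncio
import threading
from threaded_child_watcher import ThreadedChildWatcher  # hypothetical local copy

# Install the watcher on the policy instead of the default SafeChildWatcher
asyncio.set_child_watcher(ThreadedChildWatcher())

async def run_echo():
    proc = await asyncio.create_subprocess_exec("echo", "hello")
    await proc.wait()

# With ThreadedChildWatcher set, subprocesses can also be spawned from an event
# loop running outside the main thread.
t = threading.Thread(target=lambda: asyncio.run(run_echo()))
t.start()
t.join()
```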

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue35621>
___



[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-05-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

> The new test_sock_client_racing() test seems to have... a race condition...

I'm also a bit skeptical about relying on `time.sleep(0.01)` at the end of the 
loop in `_basetest_sock_recv_racing()`, but I don't have an idea for a better 
approach at the moment.

--

___
Python tracker 
<https://bugs.python.org/issue30064>
___



[issue40782] AbstactEventLoop.run_in_executor is listed as an async method, but should actually return a Futrue

2020-05-26 Thread Kyle Stanley


Kyle Stanley  added the comment:

From looking at the commit history of AbstractEventLoop.run_in_executor(), it 
seems that it was previously a non-coroutine method prior to the conversion 
from the `@asyncio.coroutine` decorator to `async def` (PR-4753). See 
https://github.com/python/cpython/blame/ede157331b4f9e550334900b3b4de1c8590688de/Lib/asyncio/events.py#L305.

The only context for the change I can find is the following conversation 
between Andrew and Yury: 
https://github.com/python/cpython/pull/4753#issuecomment-350114336. However, 
the `connect_read_pipe()` example provided had already been a coroutine in the 
BaseEventLoop implementation at the time, which makes sense in that case. 
So, it's not clear to me why `run_in_executor()` was also converted to 
"async def" when its main implementation is not a coroutine. Furthermore, it's 
documented as an awaitable rather than a coroutine 
(https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor).
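
As a small illustration of why the distinction matters (a sketch using the 
default executor), the returned object behaves like a Future rather than a 
coroutine:

```
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, sum, range(10))  # usable before awaiting
    fut.add_done_callback(lambda f: print("result:", f.result()))
    await fut

asyncio.run(main())
```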

@Andrew do you have any additional context to provide that I'm potentially 
missing?

--
nosy: +aeros, asvetlov, yselivanov

___
Python tracker 
<https://bugs.python.org/issue40782>
___



[issue40160] documentation example of os.walk should be less destructive

2020-05-23 Thread Kyle Stanley


Kyle Stanley  added the comment:

> I made the suggested change to just print the os.remove() statements (instead 
> of executing them) and also removed the 'skip news'.

I think you may have misunderstood the suggestion. 

Specifically, the key part was "I would suggest adding succinct comments or a 
note that very briefly explains how one could see a visual demonstration...". 
This would mean the actual code in the example would be *unchanged*, but with a 
new code comment or separate note after the example that explains how one could 
replace ``os.remove(os.path.join(root, name))`` with 
``print(f"os.remove({os.path.join(root, name)})")`` for a purely visual 
demonstration that doesn't affect any local files.
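
Concretely, the docs example would stay destructive as written, with a note 
along these lines attached (a sketch; the exact wording is open):

```
import os

# Delete everything reachable from the directory "top", deepest-first.
# CAUTION: this is destructive.  For a purely visual demonstration, replace the
# os.remove()/os.rmdir() calls with print(), e.g.:
#     print(f"os.remove({os.path.join(root, name)})")
for root, dirs, files in os.walk('top', topdown=False):
    for name in files:
        os.remove(os.path.join(root, name))
    for name in dirs:
        os.rmdir(os.path.join(root, name))
```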

--

___
Python tracker 
<https://bugs.python.org/issue40160>
___



[issue40405] asyncio.as_completed documentation misleading

2020-05-23 Thread Kyle Stanley


Kyle Stanley  added the comment:

Thanks for bringing this to our attention and working on it, Bar Harel, and 
thanks to Yury for the suggestions.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue40405] asyncio.as_completed documentation misleading

2020-05-23 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset 13206b52d16c2489f4c7dd2dce2a7f48a554b5ed by Bar Harel in branch 
'master':
bpo-40405: Fix asyncio.as_completed docs (GH-19753)
https://github.com/python/cpython/commit/13206b52d16c2489f4c7dd2dce2a7f48a554b5ed


--

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue32309] Implement asyncio.run_in_executor shortcut

2020-05-20 Thread Kyle Stanley


Kyle Stanley  added the comment:

Now that the `versionadded` label has been added and the contextvar issue was 
addressed, this issue can be closed.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue32309>
___



[issue32309] Implement asyncio.run_in_executor shortcut

2020-05-20 Thread Kyle Stanley


Change by Kyle Stanley :


--
pull_requests: +19555
pull_request: https://github.com/python/cpython/pull/20278

___
Python tracker 
<https://bugs.python.org/issue32309>
___



[issue32309] Implement asyncio.run_in_executor shortcut

2020-05-16 Thread Kyle Stanley


Change by Kyle Stanley :


--
pull_requests: +19448
pull_request: https://github.com/python/cpython/pull/20143

___
Python tracker 
<https://bugs.python.org/issue32309>
___



[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)

2020-05-09 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Would it make sense to deprecate it and stick with ThreadedChildWatcher?

I had proposed deprecating it in another bpo issue, but it was brought up that 
MutliLoopWatcher had only been added recently in 3.8. So, it might be a bit too 
soon to deprecate it already.

Although I suppose that if there are several complex and difficult to resolve 
issues with the implementation, it may very well be less damaging to deprecate 
it now rather than after it's gained a decent amount of usage.

--

___
Python tracker 
<https://bugs.python.org/issue38323>
___



[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)

2020-05-08 Thread Kyle Stanley


Change by Kyle Stanley :


--
pull_requests: +19324
pull_request: https://github.com/python/cpython/pull/20013

___
Python tracker 
<https://bugs.python.org/issue38323>
___



[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)

2020-05-08 Thread Kyle Stanley


Kyle Stanley  added the comment:

> What other MultiLoopWatcher tests are currently having random hangs?

Oh, never mind. I see `test_stdin_stdout` is also hanging in 
https://bugs.python.org/issue38182. I would like to take a closer look at that 
one before skipping it as well.

--

___
Python tracker 
<https://bugs.python.org/issue38323>
___



[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)

2020-05-08 Thread Kyle Stanley


Kyle Stanley  added the comment:

> There are more MultiLoopWatcher tests which hang randomly, it's not only 
> test_close_kill_running().

What other MultiLoopWatcher tests are currently having random hangs? From 
searching "MultiLoopWatcher" on bpo, I'm only finding this issue. For now, I'll 
just work on a PR to temporarily skip `test_close_kill_running()` until we can 
find a fix.

--

___
Python tracker 
<https://bugs.python.org/issue38323>
___



[issue40526] documentation bad on asyncio

2020-05-05 Thread Kyle Stanley


Kyle Stanley  added the comment:

I presume this is referring to the following example on the first page of the 
docs:

```
import asyncio

async def main():
    print('Hello ...')
    await asyncio.sleep(1)
    print('... World!')

# Python 3.7+
asyncio.run(main())
```

If so, the main purpose of that example is just to demonstrate basic 
async/await syntax and to show asyncio.run() in a trivial case, making it clear 
how it's used at a fundamental level; it's intentional that the more involved 
examples that demonstrate asynchronous programming are contained in 
https://docs.python.org/3/library/asyncio-task.html#coroutine. Also, the 
example is simple and condensed enough that it requires zero additional 
explanation or context, as should be the case for a simple "hello world" 
example. Consider the perspective of someone who found the page without having 
previously seen async/await syntax used.

FYI, in the future, I would highly recommend focusing more on the constructive 
parts when opening issues. Particularly the title "documentation bad on 
asyncio", provides zero context or usefulness. It also comes across as rather 
rude and unappreciative of the significant voluntary efforts that went into 
writing the documentation in the first place. Instead, something like "Improve 
example on front page of asyncio docs" is much more helpful.

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40526>
___



[issue40357] asyncio: will shutdown_default_executor work in single step (stop, run_forever) mode?

2020-05-05 Thread Kyle Stanley


Kyle Stanley  added the comment:

> Is the new asyncio.loop.shutdown_default_executor() suitable for event loops 
> that are run in single-step mode?

I honestly can't say for certain; the primary intended use case for 
shutdown_default_executor() was to provide a means of properly finalizing the 
resources associated with the default executor without blocking the event loop, 
notably the threads in the ThreadPoolExecutor (which were intermittently left 
dangling).  So, when working on the implementation w/ Yury and Andrew, I did 
not strongly consider single-step event loops. I was more concerned with how it 
fit in with asyncio.run() and safe finalization of executor resources. See 
https://bugs.python.org/issue34037 for context.

If you have a recommendation for a change to the current version 
shutdown_default_executor() that would help provide compatibility with 
single-step event loops without hindering the primary goals, I'm sure it would 
be considered.

> Also, what happens to pending executor futures?

When using `loop.shutdown_default_executor()`, it calls 
executor.shutdown(wait=True), which waits for futures submitted to the executor 
to complete before joining the executor's workers (regardless of whether 
they're threads or processes). So, the executor should not be terminated prior 
to the pending futures being completed.

From a glance at the example code posted above, it seems like it would be 
incompatible with asyncio.run(), which is a requirement for 
shutdown_default_executor(). See 
https://github.com/python/cpython/blob/b9c46a2c2d7fc68457bff641f78932d66f5e5f59/Lib/asyncio/runners.py#L8.
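
For reference, a minimal sketch of the intended pattern (assuming Python 3.9+), 
where asyncio.run() performs the executor shutdown during finalization:

```
import asyncio

def blocking_work():
    return sum(range(1_000_000))

async def main():
    loop = asyncio.get_running_loop()
    # Submitted to the default ThreadPoolExecutor
    print(await loop.run_in_executor(None, blocking_work))

# asyncio.run() awaits loop.shutdown_default_executor() while finalizing, so the
# executor's worker threads are joined before the event loop is closed.
asyncio.run(main())
```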

--

___
Python tracker 
<https://bugs.python.org/issue40357>
___



[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)

2020-05-05 Thread Kyle Stanley


Kyle Stanley  added the comment:

I have also explored a few different solutions and was unable to find a fix for 
this issue. If it's still causing intermittent hangs after Andrew's patch, we 
may want to consider skipping `test_close_kill_running` for `MultiLoopWatcher` 
until we can find one. This clearly seems to be a complex issue without an easy 
solution, and I don't think we want it to potentially mask other issues in the 
meantime.

--

___
Python tracker 
<https://bugs.python.org/issue38323>
___



[issue40512] Meta issue: per-interpreter GIL

2020-05-05 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40512>
___



[issue40487] Unexpected exception handler behavior in Jupyter when returning task objects created with create_task

2020-05-03 Thread Kyle Stanley


Kyle Stanley  added the comment:

John Smith wrote:
> The code works normally again if I remove the return statement from the 
> `run_coro` function.

I am a bit surprised that the code works as intended simply by removing the 
return statement. Perhaps Yury or Andrew can clarify on that point.

From looking at the code above, the main issue seems to be that the tasks are 
not awaited or cancelled and cleaned up at any point. When creating a task 
without ever awaiting it, there's no guarantee that it will be completed 
within the duration of the event loop. This is handled for the user when using 
asyncio.run() by cancelling the tasks and propagating any exceptions during 
event loop finalization, but should be done manually or by the event loop when 
not using it. For an example, see _cancel_all_tasks() in 
https://github.com/python/cpython/blob/d699d5e6178adca785a8701c32daf5e18fad0bf1/Lib/asyncio/runners.py#L54.

So, my best guess as to what's happening is that the tasks are still in 
progress while the event loop is finalizing, and the exception handler isn't 
called. But, I can't say that for sure without knowing the implementation of 
the Jupyter event loop.

For now, my recommended solution would be to include something like the above 
_cancel_all_tasks() or simply await your tasks at the end to ensure they are 
completed before the event loop is finalized. Alternatively, Jupyter could 
include something like that in their event loop finalization process, so that 
users don't have to include it on their own when using asyncio.
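
A rough sketch of that kind of cleanup step, loosely modeled on 
_cancel_all_tasks() from runners.py (illustrative only, not the exact CPython 
code):

```
import asyncio

async def cancel_pending_tasks():
    """Cancel any still-pending tasks and surface their exceptions."""
    loop = asyncio.get_running_loop()
    pending = [t for t in asyncio.all_tasks(loop) if t is not asyncio.current_task()]
    for task in pending:
        task.cancel()
    # return_exceptions=True so one failure does not hide the others
    results = await asyncio.gather(*pending, return_exceptions=True)
    for task, result in zip(pending, results):
        if isinstance(result, Exception):
            loop.call_exception_handler({
                'message': 'unhandled exception during shutdown',
                'exception': result,
                'task': task,
            })
```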

I would also consider using asyncio.run(), but I'm not certain if it works 
correctly in a Jupyter notebook. I'm aware that it's not always a viable option 
when it is desired to use an existing event loop instead of creating a separate 
one; that's why I'm not explicitly recommending it for this situation.

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40487>
___



[issue40405] asyncio.as_completed documentation misleading

2020-04-29 Thread Kyle Stanley


Kyle Stanley  added the comment:

@Bar as requested by Guido, let's continue the discussion in the PR: 
https://github.com/python/cpython/pull/19753#pullrequestreview-403185142.

--

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue40405] asyncio.as_completed documentation misleading

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Alternatively, I think "can wait for" would also read better than "allows to 
wait for", if that is preferred over my above suggestion.

--

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue40405] asyncio.as_completed documentation misleading

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

> based upon the order of completion

I forgot to explain this in the above message. I think something that mentions 
"order of completion" should be included. Although it's implied in the name 
`as_completed`, to me it seems worthwhile to be fully explicit with what 
exactly is meant by "earliest".

--

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue40405] asyncio.as_completed documentation misleading

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Yury Selivanov wrote:
> I'd suggest to change to: "Return an iterator of coroutines.  Each coroutine 
> allows to wait for the earliest next result from the set of the remaining 
> awaitables."

The first sentence looks good to me, but I'm not certain about the second 
sentence, particularly "allows to wait". Instead, I'm thinking something like 
this might read better:

"Each coroutine represents the earliest next result from the set of the 
remaining awaitables, based upon the order of completion."
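
For context, a tiny usage sketch of the behavior that sentence is trying to 
capture:

```
import asyncio

async def main():
    aws = [asyncio.sleep(d, result=d) for d in (0.3, 0.1, 0.2)]
    for coro in asyncio.as_completed(aws):
        print(await coro)  # prints 0.1, 0.2, 0.3 -- order of completion

asyncio.run(main())
```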

Thoughts?

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40405>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Change by Kyle Stanley :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:


New changeset cc011b5190b63f0be561ddec38fc4cd9e60cbf6a by Kyle Stanley in 
branch '3.8':
[3.8] bpo-40431: Fix syntax typo in turtledemo (GH-19777) (#19784)
https://github.com/python/cpython/commit/cc011b5190b63f0be561ddec38fc4cd9e60cbf6a


--

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Also, thanks to Victor for coming up with an interim solution, and for the 
feedback from Antoine and Thomas. GH-19760 is a significant improvement from my 
initial proposal in GH-19751.

--

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

I am a bit concerned about the performance implication of accessing the thread 
wakeup through a lock in the call queue, but for now I think it's reasonable to 
address the buildbot failure with a locking solution while we try to find a 
better one.

I'm not certain if we'll be able to find one that correctly addresses the 
failures with zero additional locking, but I think we may be able to reduce the 
number of times we use the lock compared to the implementation in GH-19760. 
I'll spend some time exploring that as I find the time to, and report back with 
any significant findings.

--

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Change by Kyle Stanley :


--
pull_requests: +19107
pull_request: https://github.com/python/cpython/pull/19784

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Change by Kyle Stanley :


--
versions:  -Python 3.7

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Victor Stinner wrote:
> If you backport the change, please remove the NEWS entry, since it's not a 
> SyntaxError in 3.7 and 3.8.

Good point. In that case, I'll do a manual backport of the PR just to 3.8 and 
skip the news entry. As mentioned above, I think most of the value in fixing it 
when the SyntaxError isn't present is mainly just for providing a good example 
in the latest stable version of the stdlib, so I don't think there would be 
much additional value in also backporting it to 3.7.

--

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue40246] Different error messages for same error - invalid string prefixes

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

For addressing the backwards compatibility concern, I think we should just 
convert it to a SyntaxWarning for cases like the above to indicate that it's 
not really correct syntax, but not harmful enough to justify code breakage. I 
think it fits the documented description of SyntaxWarning well, which is to 
address "dubious syntax". 

Lysandros Nikolaou wrote:
> A possible solution would be to only emit a SyntaxError if the NAME directly 
> preceding a STRING token contains one of the valid string prefixes (either 
> one of 'f', 'r', 'u', 'b'). This would still output a nicer error message, 
> but would not break code like the one of the example. What do you think about 
> this?

That would certainly help to minimize the breakage, so I'd be in favor of that 
over a SyntaxError for all invalid prefixes. But, I'm not certain that it's 
additionally harmful if an invalid string prefix proceeds a valid one. Is there 
any additional harm, other than from a visual perspective?

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40246>
___



[issue40431] turtledemo/__main__.py: SyntaxError: invalid string prefix else"#fca"

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

> I don't think that it's worth it to backport the change to 3.7 and 3.8: in 
> Python 3.7 and 3.8, the code is accepted.

> Python 3.7 and 3.8 are not broken, so there is no need to fix them :-)

I think we could very reasonably change  `else"#fca"` -> `else "#fca"` on the 
current bugfix branches, or at least 3.8. Even when it doesn't cause a syntax 
error and the attempted prefix is ignored, it's still more syntactically 
correct compared to attempting to use an "e-string" (which of course doesn't 
exist) and has nearly zero cost on our end to fix. 

While it's a low priority issue since it doesn't raise an error, that doesn't 
necessarily mean it's correct. In the latest stable version of the stdlib, we 
should aim to provide a good example IMO, especially if it comes at practically 
zero cost to us.

--
nosy: +aeros
resolution: fixed -> 
stage: resolved -> patch review
status: closed -> open
versions: +Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue40431>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

I decided to close PR-19751. Both because it does not correctly address the 
race condition (due to an oversight on my part) and it would add substantial 
overhead to _ThreadWakeup. Instead, I agree that we should explore a 
non-locking solution.

--

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Oops, it seems that I opened PR-19751 a bit prematurely. When I get the 
chance, I'll see if Antoine's implementation can address the failures and do 
some comparisons.

--

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue40061] Possible refleak in _asynciomodule.c future_add_done_callback()

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

Antoine Pitrou wrote:
> You're right that a Py_DECREF is missing there.  However, it seems unlikely 
> that this is triggering the test failure, because `PyList_New(1)` will 
> practically never fail in normal conditions (and even making it deliberately 
> fail would be quite difficult).

Thanks for clarifying; spotting refleaks is something that I've only recently 
started to get the hang of, so I wanted to double check first. :-)

I should have updated this issue, but I resolved the main test_asyncio failure 
that I saw from GH-19149 in GH-19282. Either way though, I think this could 
still be fixed (even if it would rarely be an issue in most situations).

--

___
Python tracker 
<https://bugs.python.org/issue40061>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-28 Thread Kyle Stanley


Change by Kyle Stanley :


--
pull_requests: +19072
pull_request: https://github.com/python/cpython/pull/19751

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue40411] frozen collection.Counter

2020-04-28 Thread Kyle Stanley


Kyle Stanley  added the comment:

paul rubin wrote:
> Yeah I think the basic answer to this ticket is "Python doesn't really have 
> multisets and a proposal to add them should go somewhere else".  Fair 
> enough-- consider the request withdrawn from here.

Not necessarily; python-ideas is simply the designated location for 
cultivating Python change proposals in their early stages. If you're interested 
enough in this being added to the standard library, you can present your case 
there using some of the guidelines I suggested earlier. Also, if others are 
interested in the feature and have specific use cases for it, that can 
significantly increase the odds of it being added. The python-ideas mailing 
list tends to get much more traffic than an individual open issue on bpo.

In the context of significant new feature proposals, bpo is typically used once 
it's been approved of (with a formal PEP for especially large changes), to 
further discuss the implementation details and for tracking related PRs.

--

___
Python tracker 
<https://bugs.python.org/issue40411>
___



[issue40411] frozen collection.Counter

2020-04-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

paul rubin wrote:
> In that case, it works for what it is made for, so it really then is a 
> separate question of whether implementing multisets properly is worthwhile.

Indeed, that seems to be the primary question to address. Personally, I don't 
have any strong opposition to including a multiset implementation in 
collections, I'm just not yet convinced they have enough widespread practical 
utility (that can't already be provided with Counter or a slightly modified 
version of it) to justify including them in collections.
 
> I dislike the concept of shovelling off basic functionality to 3rd party 
> libraries.  The rejection of that philosophy (Python's old motto "Batteries 
> included") is one of the things that attracted me to Python in the first 
> place.  Though that value is mostly historical now, I'm sad about the loss.  
> It's impossible to write a large Ruby or Javascript (npm) application with 
> 100s of third party modules, every one of which is an attack vector ("supply 
> chain attack" is the current buzzword), and  whose implementations vary 
> widely in quality and usability.

I can certainly sympathize with that sentiment of wanting to avoid excessive 
dependencies, and I think in general that we do still try to include more in 
the Python standard library compared to many other ecosystems. AFAIK, there's 
zero desire for the Python ecosystem to slowly turn into the "dependency hell" 
that NPM has become, with almost every library relying on a chain of minor 3rd 
party utilities.

However, rather than including everything that *might* be useful, the goal has 
been to include features that have substantial practical utility for a decent 
volume of users (and library maintainers), as long as they fit with the 
existing theme of a module (or are widely needed enough to justify a separate 
module) for a couple of primary reasons:

1) Making the stdlib modules as digestible as reasonably possible
2) Maximizing the ratio of practical benefit to implementation and long-term 
maintenance cost, which is especially important since development is 
voluntarily driven

Also, I would consider "batteries included" to still be a general goal, but 
it's practically impossible for the stdlib to fully cover what everyone would 
consider to be "basic functionality" while still maintaining its present 
quality. It's even more complicated by the fact that "basic functionality" will 
differ significantly depending upon who you ask, based upon what they 
personally use Python for and their unique experiences.

So, we tend to lean towards minimalism, focusing on implementing and improving 
features that have the most or strongest concrete use cases for a large volume 
of users. There are of course exceptions to this, such as features that are 
highly useful for a relatively small volume of users, but for which it is 
difficult to implement a proper or optimized version. That has to be determined on a 
case-by-case basis though.

--
nosy:  -veky

___
Python tracker 
<https://bugs.python.org/issue40411>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

After writing the above out and a bit of further consideration, I think it 
might make more sense to wait for the event after setting `self._closed = True` 
so that it prevents future wakeup() and clear() calls from reading/writing to 
the pipe, while still allowing ones that are currently ongoing to finish.
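
In other words, a sketch of the reordered close(), applied to the _ThreadWakeup 
patch from the earlier message below (purely illustrative):

```
import threading
import multiprocessing as mp

class _ThreadWakeup:
    def __init__(self):
        self._closed = False
        self._reader, self._writer = mp.Pipe(duplex=False)
        self._not_running = threading.Event()
        self._not_running.set()

    def close(self):
        if not self._closed:
            self._closed = True       # block new wakeup()/clear() calls first
            self._not_running.wait()  # then let any in-flight send/recv finish
            self._writer.close()
            self._reader.close()
```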

Thoughts?

--

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

I can't be certain for the other failures, but I'm currently exploring a 
potential solution for addressing the `test_killed_child` failure. To me, it 
seems to be a race condition with attempting to call _ThreadWakeup.close() 
while there are still bytes being sent. IMO, we should wait until closing the 
pipe's reader and writer until it's finished sending or receiving bytes. Here's 
one way to implement that w/ threading.Event:

diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py
index 8e9b69a8f0..9bf073fc34 100644
--- a/Lib/concurrent/futures/process.py
+++ b/Lib/concurrent/futures/process.py
@@ -68,21 +68,30 @@ class _ThreadWakeup:
     def __init__(self):
         self._closed = False
         self._reader, self._writer = mp.Pipe(duplex=False)
+        # Used to ensure pipe is not closed while sending or receiving bytes
+        self._not_running = threading.Event()
+        # Initialize event as True
+        self._not_running.set()
 
     def close(self):
         if not self._closed:
+            self._not_running.wait()
             self._closed = True
             self._writer.close()
             self._reader.close()
 
     def wakeup(self):
         if not self._closed:
+            self._not_running.clear()
             self._writer.send_bytes(b"")
+            self._not_running.set()
 
     def clear(self):
         if not self._closed:
+            self._not_running.clear()
             while self._reader.poll():
                 self._reader.recv_bytes()
+            self._not_running.set()


Using Victor's method of replicating the failure (inserting a 
`time.sleep(0.050)` in multiprocessing.Connection._send()), the change appears to fix 
the failure in test_killed_child. I believe it would also fix the other 
failures since they appear to be caused by the same core issue, but I've been 
unable to replicate them locally so I'm not 100% certain.

--
assignee:  -> aeros

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-27 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +pitrou

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2020-04-27 Thread Kyle Stanley


Change by Kyle Stanley :


--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue39995>
___



[issue40411] frozen collection.Counter

2020-04-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

Also, adding Raymond to the nosy list in case he has any specific comments 
about a frozen collections.Counter.

--
nosy: +rhettinger
versions: +Python 3.9

___
Python tracker 
<https://bugs.python.org/issue40411>
___



[issue40411] frozen collection.Counter

2020-04-27 Thread Kyle Stanley


Kyle Stanley  added the comment:

Rather than starting this out as a bpo issue, I would highly recommend bringing 
forth a proposal for this in python-id...@python.org and explaining in as much 
detail as possible:

1) How a frozen variation of collection.Counter would benefit your specific use 
case
2) The current available workaround you mentioned, issues with it, and the 
comparative benefit your proposal would provide
3) Why it should be included in the standard library, compared to a 3rd party 
package on PyPI

The 3rd one can be a bit tricky, but it typically involves explaining how 
widespread the potential use case(s) would be and how tricky it would be for 
users to implement something like this on their own (from both a functionality 
and performance perspective).

From a brief glance at the archives, it doesn't look like this specific idea 
was proposed, other than as part of a more generalized one to add frozen 
equivalents for the entire collections module, which was rejected on the basis 
of the use cases being too theoretical and overly broad.

Also, while not specific to your particular proposal, I would recommend reading 
over the following post from Raymond Hettinger in the python-ideas archive 
(from the previously mentioned thread): 
https://mail.python.org/archives/list/python-id...@python.org/message/QVBVJU4RNJ5MDKBJ5CNGINYG24WZDZX7/.
 It explains why the collections module tries to be minimalist, and 
specifically the requirement of new features providing a tangible, real-world 
benefit.

It's of course up to you how you decide to format the proposal, but by 
answering the above points and demonstrating clear value in specific use cases, 
it has a much better chance of eventually making into the standard library (or 
at the least, a constructive discussion about it).

--
nosy: +aeros

___
Python tracker 
<https://bugs.python.org/issue40411>
___



[issue40340] Programming FAQ about "How do I convert a string to a number?" contains a typo

2020-04-25 Thread Kyle Stanley


Change by Kyle Stanley :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue40340>
___


