[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-04-19 Thread Tom Hale


Tom Hale  added the comment:

The most correct work-around I'm aware of is:

(updates at: https://stackoverflow.com/a/55742015/5353461)

import os
import tempfile
import time


def symlink_force(target, link_name):
    '''
    Create a symbolic link named link_name pointing to target.
    Overwrite link_name if it already exists.
    '''

    # os.replace() may fail if the two paths are on different filesystems,
    # so create the temporary symlink in the directory of link_name.
    link_dir = os.path.dirname(link_name)

    # os.symlink() requires that the link path does NOT exist.
    # Avoid a race between mktemp() choosing a name and symlink() using it:
    while True:
        temp_pathname = tempfile.mktemp(suffix='.tmp',
                                        prefix='symlink_force_tmp-',
                                        dir=link_dir)
        try:
            os.symlink(target, temp_pathname)
            break  # Success, exit loop
        except FileExistsError:
            time.sleep(0.001)  # Prevent high load in pathological conditions
    os.replace(temp_pathname, link_name)
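
For illustration, usage of the helper above would look like this (the paths
are made up):

    # Atomically (re)point 'current' at a new release directory.
    symlink_force('/srv/app/releases/1.2.3', '/srv/app/current')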

An unlikely race condition still remains: the symlink created at the
randomly named `temp_pathname` could be modified between its creation and
the rename/replace onto the specified link name.

Suggestions for improvement welcome.

--
type:  -> security




[issue36607] asyncio.all_tasks() crashes if asyncio is used in multiple threads

2019-04-19 Thread Andrew Svetlov


Andrew Svetlov  added the comment:

Sorry, I missed that the loop already has a hashable requirement.
Would you prepare a patch for number 3?
I am afraid we might introduce another hard-to-debug multi-threaded problem by
complicating the data structure.

I'm just curious why you call `all_tasks()` at all.
In my mind, the only non-debug usage is `asyncio.run()`.

--




[issue36661] Missing dataclass decorator import in dataclasses module docs

2019-04-19 Thread Stéphane Wirtel

Stéphane Wirtel  added the comment:

We could change the example to:

```
from dataclasses import dataclass

@dataclass
class InventoryItem:
...
```

Because the documentation header doesn't specify that we need to import
dataclass from dataclasses.

+1 for a small update.

You are free to propose a PR.

Have a nice day,

>I think the import is implied in the example since the docs page is for
>dataclasses module but adding an explicit import to InventoryItem at
>the top won't hurt too.
Yep, but explicit is better than implicit.

--
nosy: +matrixise




[issue36662] asdict/astuple Dataclass methods

2019-04-19 Thread Stéphane Wirtel

Change by Stéphane Wirtel :


--
nosy: +matrixise




[issue36661] Missing dataclass decorator import in dataclasses module docs

2019-04-19 Thread Eric V. Smith


Eric V. Smith  added the comment:

I think adding "from dataclasses import dataclass" in the first example is fine.

There's a similar import in the sqlite3 documentation, just to pick one at 
random.

--




[issue36295] Need to yield (sleep(0)) twice in asyncio

2019-04-19 Thread Andrew Svetlov


Andrew Svetlov  added the comment:

In asyncio, `await asyncio.sleep(0)` is for switching execution from the
current task to other code.
There is no guarantee that already-running tasks finish before the
`asyncio.sleep(0)` call returns.

Also, your code snippet has a logical error: it should not call the blocking
`time.sleep(123)` from a coroutine, but use `loop.run_in_executor()` for
blocking calls instead.
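
A minimal sketch of that pattern (the one-second sleep is just a stand-in for
any blocking call):

    import asyncio
    import time

    async def main():
        loop = asyncio.get_running_loop()
        # Off-load the blocking call to the default thread-pool executor
        # instead of blocking the event loop with time.sleep() directly.
        await loop.run_in_executor(None, time.sleep, 1)

    asyncio.run(main())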

--




[issue36661] Missing dataclass decorator import in dataclasses module docs

2019-04-19 Thread Stéphane Wirtel

Stéphane Wirtel  added the comment:

Ok, I suggest adding a "first issue" for the sprint days at PyCon US.

--




[issue36661] Missing dataclass decorator import in dataclasses module docs

2019-04-19 Thread Stéphane Wirtel

Change by Stéphane Wirtel :


--
keywords: +easy




[issue36662] asdict/astuple Dataclass methods

2019-04-19 Thread Eric V. Smith


Eric V. Smith  added the comment:

I think the best thing to do is write another decorator that adds this method. 
I've often thought that having a dataclasses_tools third-party module would be 
a good idea. It could include my add_slots decorator in 
https://github.com/ericvsmith/dataclasses/blob/master/dataclass_tools.py

Such a decorator could then deal with all the complications that I don't want 
to add to @dataclass. For example, choosing a method name. @dataclass doesn't 
inject any non-dunder names in the class, but the new decorator could, or it 
could provide a way to customize the member name.

Also, note that your example asdict method doesn't do the same thing as 
dataclasses.asdict. While you get some speedup by knowing the field names in 
advance, you also don't do the recursive generation that dataclasses.asdict 
does. In order to skip the recursive dict generation, you'd either have to test 
the type of each member (using some heuristic about what doesn't need 
recursion), or assume the member type matches the type defined in the class. I 
don't want dataclasses.asdict to make the assumption that the member type 
matches the declared type. There's nowhere else it does this.
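
For illustration, here is the difference in behaviour (a made-up nested
dataclass, not from the report):

    from dataclasses import dataclass, asdict, fields

    @dataclass
    class Inner:
        x: int

    @dataclass
    class Outer:
        inner: Inner

    o = Outer(Inner(1))
    print(asdict(o))   # {'inner': {'x': 1}}   (recursive)
    print({f.name: getattr(o, f.name) for f in fields(o)})
    # {'inner': Inner(x=1)}   (flat, trusting the declared types)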

I'm not sure how much of the speedup you're seeing is the result of hard-coding 
the member names, and how much is avoiding recursion. If all of the improvement 
is by eliminating recursion, then it's not worth doing.

I'm not saying the existing dataclasses.asdict can't be sped up: surely it can. 
But I don't want to remove features or add complexity to do so.

--
assignee:  -> eric.smith




[issue36345] Deprecate Tools/scripts/serve.py in favour of python -m http.server -d

2019-04-19 Thread Stéphane Wirtel

Change by Stéphane Wirtel :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue36345] Deprecate Tools/scripts/serve.py in favour of python -m http.server -d

2019-04-19 Thread Stéphane Wirtel

Change by Stéphane Wirtel :


--
resolution: fixed -> 
status: closed -> open




[issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent

2019-04-19 Thread Andrew Svetlov


Andrew Svetlov  added the comment:

I would rather change the implementation by converting it into an async
function.
It can break some code, sure -- but in a very explicit way (a "coroutine
'run_in_executor' was never awaited" warning).
Making existing third-party code forward-compatible is trivial: just put
`await` in front of the call.

--




[issue34155] email.utils.parseaddr mistakenly parse an email

2019-04-19 Thread Stéphane Bortzmeyer

Stéphane Bortzmeyer  added the comment:

Note that this bug was used in an actual security attack, so it is serious:

https://medium.com/@fs0c131y/tchap-the-super-not-secure-app-of-the-french-government-84b31517d144
https://twitter.com/fs0c131y/status/1119143946687434753

--
nosy: +bortzmeyer




[issue34155] email.utils.parseaddr mistakenly parse an email

2019-04-19 Thread Karthikeyan Singaravelan


Karthikeyan Singaravelan  added the comment:

Relevant attack from the Matrix blog post:

https://matrix.org/blog/2019/04/18/security-update-sydent-1-0-2/

> sydent uses python's email.utils.parseaddr function to parse the input email 
> address before sending validation mail to it, but it turns out that if you 
> hand parseaddr an malformed email address of form a...@b.com@c.com, it 
> silently discards the @c.com prefix without error. The result of this is that 
> if one requested a validation token for 'a...@malicious.org@important.com', 
> the token would be sent to 'a...@malicious.org', but the address 
> 'a...@malicious.org@important.com' would be marked as validated. This release 
> fixes this behaviour by asserting that the parsed email address is the same 
> as the input email address.

I am marking this as a security issue.
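
For illustration only, the mitigation described above amounts to a check like
this (the addresses are made-up examples, not the ones from the report):

    from email.utils import parseaddr

    def validated_address(raw):
        # Accept the address only if parseaddr() hands back exactly the
        # string that was passed in.
        _, parsed = parseaddr(raw)
        if parsed != raw:
            raise ValueError('malformed email address: %r' % raw)
        return parsed

    # On affected versions parseaddr('a@example.org@victim.example')
    # silently drops the trailing '@victim.example' part, so the check
    # above rejects such input.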

--
keywords: +security_issue
nosy: +vstinner




[issue14788] Pdb debugs itself after ^C and a breakpoint is set anywhere

2019-04-19 Thread daniel hahler


daniel hahler  added the comment:

I think this issue itself might be fixed already / changed since 3.5.

I've come up with something similar in this area though, which is only
triggered when using Ctrl-C while pdb is waiting for the next statement: 
https://github.com/python/cpython/pull/12880

--
nosy: +blueyed




[issue36662] asdict/astuple Dataclass methods

2019-04-19 Thread George Sakkis


George Sakkis  added the comment:

> I think the best thing to do is write another decorator that adds this 
> method. I've often thought that having a dataclasses_tools third-party module 
> would be a good idea.

I'd be happy with a separate decorator in the standard library for adding these 
methods. Not so sure about a third-party module, the added value is probably 
not high enough to justify an extra dependency (assuming one is aware it exists 
in the first place).

> or assume the member type matches the type defined in the class. 

This doesn't seem an unreasonable assumption to me. If I'm using a dataclass, I 
probably care enough about its member types to bother declaring them and I 
wouldn't mind if a particular method expects that the members actually match 
the types. This behaviour would be clearly documented. 

Alternatively, if we go with a separate decorator, whether this assumption 
holds could be a parameter, something like:

def add_asdict(cls, name='asdict', strict=True)
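
A rough sketch of what such a decorator could look like (names and behaviour
are illustrative only, not a proposed API):

    from dataclasses import asdict as deep_asdict, dataclass, fields

    def add_asdict(cls, name='asdict', strict=True):
        field_names = tuple(f.name for f in fields(cls))
        if strict:
            # Trust the declared field types: flat dict, no recursion.
            def method(self):
                return {n: getattr(self, n) for n in field_names}
        else:
            # Fall back to the recursive dataclasses.asdict() behaviour.
            def method(self):
                return deep_asdict(self)
        setattr(cls, name, method)
        return cls

    @dataclass
    class Point:
        x: int
        y: int

    Point = add_asdict(Point)
    assert Point(1, 2).asdict() == {'x': 1, 'y': 2}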

--




[issue22135] allow to break into pdb with Ctrl-C for all the commands that resume execution

2019-04-19 Thread daniel hahler


daniel hahler  added the comment:

It would be nice to have this indeed.
Please consider creating a PR with an updated patch for easier review.

--
nosy: +blueyed




[issue36667] pdb: restore SIGINT handler in sigint_handler already

2019-04-19 Thread daniel hahler

New submission from daniel hahler :

Without this, an additional SIGINT while waiting for the next statement
(e.g. during `time.sleep`) will stop at `sigint_handler`.

With this patch:

> …/t-pdb-sigint-in-sleep.py(10)()
-> sleep()
(Pdb) c
^C
Program interrupted. (Use 'cont' to resume).
^CKeyboardInterrupt
> …/t-pdb-sigint-in-sleep.py(6)sleep()
-> time.sleep(10)
(Pdb)

Without this patch:

> …/t-pdb-sigint-in-sleep.py(10)()
-> sleep()
(Pdb) c
^C
Program interrupted. (Use 'cont' to resume).
^C--Call--
> …/cpython/Lib/pdb.py(188)sigint_handler()
-> def sigint_handler(self, signum, frame):
(Pdb)

This was changed / regressed in 
https://github.com/python/cpython/commit/10e54aeaa234f2806b367c66e3fb4ac6568b39f6
 (3.5.3rc1?), when it was moved while fixing issue 20766.

--
components: Library (Lib)
messages: 340539
nosy: blueyed
priority: normal
severity: normal
status: open
title: pdb: restore SIGINT handler in sigint_handler already
type: behavior
versions: Python 3.9




[issue36667] pdb: restore SIGINT handler in sigint_handler already

2019-04-19 Thread daniel hahler


Change by daniel hahler :


--
keywords: +patch
pull_requests: +12804
stage:  -> patch review




[issue20766] reference leaks in pdb

2019-04-19 Thread daniel hahler


daniel hahler  added the comment:

Please see https://bugs.python.org/issue36667 for a followup.

It does not look like moving it to `interaction` is relevant for fixing the 
leak, is it?

I think it should be restored in both places.

--
nosy: +blueyed




[issue36461] timeit: Additional changes for autorange

2019-04-19 Thread Alessandro Cucci


Alessandro Cucci  added the comment:

Hello @Mariatta,
if this is simple I would like to work on that, can I?
Thanks!

--
nosy: +Alessandro Cucci




[issue34155] email.utils.parseaddr mistakenly parse an email

2019-04-19 Thread Nicolas Évrard

Change by Nicolas Évrard :


--
nosy: +nicoe




[issue36666] threading.Thread should have way to catch an exception thrown within

2019-04-19 Thread Antoine Pitrou


Change by Antoine Pitrou :


--
nosy: +tim.peters
type:  -> enhancement
versions: +Python 3.8 -Python 3.7




[issue36666] threading.Thread should have way to catch an exception thrown within

2019-04-19 Thread Antoine Pitrou


Antoine Pitrou  added the comment:

The current behavior can't be changed for compatibility reasons (imagine user 
programs starting to raise on Thread.join()), but we could add an option to the 
threading.Thread() constructor in order to store and propagate exceptions.
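
For reference, a minimal sketch of what such opt-in behaviour could look like
as a subclass (an illustration, not the proposed API):

    import threading

    class PropagatingThread(threading.Thread):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._exc = None

        def run(self):
            try:
                super().run()
            except BaseException as exc:
                # Remember the unhandled exception instead of just reporting it.
                self._exc = exc

        def join(self, timeout=None):
            super().join(timeout)
            if self._exc is not None:
                raise self._exc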

--
nosy: +giampaolo.rodola, pablogsal




[issue36668] semaphore_tracker is not reused by child processes

2019-04-19 Thread Thomas Moreau


New submission from Thomas Moreau :

The current implementation of the semaphore_tracker creates a new process for
each child.

The easy fix would be to pass the _pid to the children, but the current
mechanism to check whether the semaphore_tracker is alive relies on waitpid,
which cannot be used in child processes (the semaphore_tracker is only a
sibling of these processes). The main issue is to have a reliable check that
either:

- the pipe is open (this is what is done here by sending a message; I don't
  know if there is a more efficient way to check it), or
- a given pid is alive (as we cannot rely on waitpid, I don't see an
  efficient mechanism).

I propose to add a PROBE command to the semaphore tracker. When the pipe is
closed, the send will fail, meaning that the semaphore tracker is down.
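
A rough sketch of the probe idea (hypothetical code, not the actual
multiprocessing implementation; `tracker_fd` stands for the write end of the
tracker pipe):

    import errno
    import os

    def tracker_alive(tracker_fd):
        # If the tracker process has exited, its end of the pipe is closed
        # and the write fails with EPIPE (CPython ignores SIGPIPE, so we get
        # an exception rather than being killed).
        try:
            os.write(tracker_fd, b'PROBE\n')
        except OSError as exc:
            if exc.errno in (errno.EPIPE, errno.EBADF):
                return False
            raise
        return True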

--
components: Library (Lib)
messages: 340543
nosy: tomMoral
priority: normal
severity: normal
status: open
title: semaphore_tracker is not reused by child processes
type: behavior
versions: Python 3.8




[issue36668] semaphore_tracker is not reused by child processes

2019-04-19 Thread Thomas Moreau


Change by Thomas Moreau :


--
keywords: +patch
pull_requests: +12805
stage:  -> patch review




[issue36607] asyncio.all_tasks() crashes if asyncio is used in multiple threads

2019-04-19 Thread Nick Davies


Nick Davies  added the comment:

> Would you prepare a patch for number 3?

I will give it a try and see what I come up with.

> I am afraid we can add another hard-to-debug multi-threaded problem by 
> complicating the data structure.

Yeah, this was my concern too: the adding and removing from the
WeakDict[AbstractEventLoop, WeakSet[Task]] for `_all_tasks` could still cause
issues. Specifically, the whole WeakSet class is not threadsafe, so I would
assume WeakDict is the same; there may not be a nice way of ensuring that a
combination of GC and the IterationGuard doesn't come along and mess up the
dict, even if I wrap it in a threading lock.

Another option would be to attach the WeakSet[Task] to the loop itself; since
using the same loop in multiple threads is already not thread safe at all,
that would contain the problem. You mentioned "third-party loops", which may
make this option impossible.

> I'm just curious why do you call `all_tasks()` at all?
> In my mind, the only non-debug usage is `asyncio.run()`

In reality we aren't using `all_tasks()` directly. We are calling
`asyncio.run()` from multiple threads, which triggers the issue. The repro I
provided was just a more reliable way of triggering it. I will paste a
slightly more real-world example of how this happened below. This version is
a little messier and it is harder to see exactly what the problem is, which
is why I started with the other one.

```
import asyncio
from threading import Thread


async def do_nothing(n=0):
    await asyncio.sleep(n)


async def loop_tasks():
    loop = asyncio.get_event_loop()
    while True:
        loop.create_task(do_nothing())
        await asyncio.sleep(0.01)


async def make_tasks(n):
    loop = asyncio.get_event_loop()
    for i in range(n):
        loop.create_task(do_nothing(1))
    await asyncio.sleep(1)


def make_lots_of_tasks():
    while True:
        asyncio.run(make_tasks(1))


for i in range(10):
    t = Thread(target=make_lots_of_tasks)
    t.start()

asyncio.run(loop_tasks())
```

--




[issue36668] semaphore_tracker is not reused by child processes

2019-04-19 Thread SilentGhost


Change by SilentGhost :


--
nosy: +davin, pitrou




[issue36669] weakref proxy doesn't support the matrix multiplication operator

2019-04-19 Thread Dan Snider


Change by Dan Snider :


--
nosy: bup
priority: normal
severity: normal
status: open
title: weakref proxy doesn't support the matrix multiplication operator




[issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent

2019-04-19 Thread Christopher Hunt


Christopher Hunt  added the comment:

My use case is scheduling work against an executor but waiting on the results 
later (on demand).

If converting `BaseEventLoop.run_in_executor(executor, func, *args)` to a 
coroutine function, I believe there are two possible approaches (the discussion 
that started this 
[here](https://stackoverflow.com/questions/54263558/is-asyncio-run-in-executor-specified-ambiguously)
 only considers [impl.1]):

impl.1) `BaseEventLoop.run_in_executor` still returns a future, but we must 
await the coroutine object in order to get it (very breaking change), or
impl.2) `BaseEventLoop.run_in_executor` awaits on the result of `func` itself 
and returns the result directly

In both cases the provided `func` will only be dispatched to `executor` when 
the coroutine object is scheduled with the event loop.

For [impl.1], from the linked discussion, there is an example of user code 
required to get the behavior of schedule immediately and return future while 
still using `BaseEventLoop.run_in_executor`:

async def run_now(f, *args):
    loop = asyncio.get_event_loop()
    started = asyncio.Event()
    def wrapped_f():
        loop.call_soon_threadsafe(started.set)
        return f(*args)
    fut = loop.run_in_executor(None, wrapped_f)
    await started.wait()
    return fut

however this wrapper would only be possible to use in an async function and 
assumes the executor is running in the same process - synchronous functions 
(e.g. an implementation of Protocol.data_received) would need to use an 
alternative `my_run_in_executor`:

def my_run_in_executor(executor, f, *args, loop=asyncio.get_running_loop()):
    return asyncio.wrap_future(executor.submit(f, *args), loop=loop)

either of these would need to be discovered by users and live in their code 
base.

Having to use `my_run_in_executor` would be most unfortunate, given the purpose 
of `run_in_executor` per the PEP is to be a shorthand for this exact function.

For [impl.2], we are fine if the use case allows submitting and awaiting the 
completion of `func` in the same location, and no methods of asyncio.Future 
(e.g. `add_done_callback`, `cancel`) are used. If not then we still need to 
either:

soln.1) use `my_run_in_executor`, or
soln.2) wrap the `BaseEventLoop.run_in_executor` coroutine 
object/asyncio.Future with `asyncio.ensure_future`

[soln.1] is bad for the reason stated above: this is the function we are trying 
to avoid users having to write.

[soln.2] uses the low-level function `asyncio.ensure_future` because both of 
the suggested alternatives (per the docs) `asyncio.create_task` and 
`BaseEventLoop.create_task` throw a `TypeError` when provided an 
`asyncio.Future` as returned by the current implementation of 
`BaseEventLoop.run_in_executor`. This will have to be discovered by users and 
exist in their code base.

--




[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-04-19 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

If the symlink can be recreated, it can also be changed after creation.

--
nosy: +serhiy.storchaka




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Lorenz Mende

New submission from Lorenz Mende :

The test suite fails at the first tests (I assume on the first call of
getloadavg in WindowsLoadTracker).
Traceback (most recent call last):
  File "P:\Repos\CPython\cpython\lib\runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "P:\Repos\CPython\cpython\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "P:\Repos\CPython\cpython\lib\test\__main__.py", line 2, in <module>
    main()
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 653, in main
    Regrtest().main(tests=tests, **kwargs)
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 586, in main
    self._main(tests, kwargs)
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 632, in _main
    self.run_tests()
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 515, in run_tests
    self.run_tests_sequential()
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 396, in run_tests_sequential
    self.display_progress(test_index, text)
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\main.py", line 150, in display_progress
    load_avg_1min = self.getloadavg()
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\win_utils.py", line 81, in getloadavg
    typeperf_output = self.read_output()
  File "P:\Repos\CPython\cpython\lib\test\libregrtest\win_utils.py", line 78, in read_output
    return response.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 67: invalid start byte
##
The Windows 'typeperf "\System\Processor Queue Length" -si 1' command
unluckily returns a string with an umlaut, which leads to the decode error.
This comes up because the counter name used by typeperf is locale dependent
(in German the counter reads \System\Prozessor-Warteschlangenlänge).

I see two possible solutions to this issue:
1. Raise an exception earlier, on creation of WindowsLoadTracker, resulting in
   the same behaviour as if no typeperf were available (German pythoneers
   would have a drawback with this).
2. Get the typeperf counter name correctly from the registry
   (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
   NT\CurrentVersion\Perflib\CurrentLanguage, described here
   https://social.technet.microsoft.com/Forums/de-DE/25bc6907-cf2c-4dc8-8687-974b799ba754/powershell-ausgabesprache-umstellen?forum=powershell_de)

environment:
Windows 10 x64, 1809, german
cpython @e16467af0bfcc9f399df251495ff2d2ad20a1669
commit of assumed root cause of https://bugs.python.org/issue34060

--
components: Tests
messages: 340547
nosy: LorenzMende
priority: normal
severity: normal
status: open
title: test suite broken due to cpu usage feature on win 10/ german
type: crash
versions: Python 3.8




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Lorenz Mende


Change by Lorenz Mende :


--
nosy: +ammar2




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Lorenz Mende


Change by Lorenz Mende :


--
nosy:  -ammar2




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Ammar Askar


Ammar Askar  added the comment:

What does `typeperf "\System\Processor Queue Length" -si 1` actually return
on your non-English system?

Does it just return an error with the counter's name, or is the umlaut just
in the first header line, i.e. this one for me:

  "(PDH-CSV 4.0)","\\MSI\System\Processor Queue Length"

If it's the latter then I think the correct fix would be to figure out what 
encoding typeperf is outputting in and then just decode with that.
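
A possible sketch, assuming typeperf writes its console output in the OEM
code page rather than UTF-8 (Windows-only; uses the English counter name):

    import subprocess

    proc = subprocess.Popen(
        ['typeperf', r'\System\Processor Queue Length', '-si', '1'],
        stdout=subprocess.PIPE)
    line = proc.stdout.readline()
    # Decode with the console ('oem') code page instead of UTF-8.
    print(line.decode('oem', errors='replace'))
    proc.kill()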

--
nosy: +ammar2




[issue36617] The rich comparison operators are second class citizens

2019-04-19 Thread Eric V. Smith


Eric V. Smith  added the comment:

I assume the OP is using the stdlib parser module just to show what is a syntax 
error and what isn't. But most of the characters in the example strings aren't 
required, so it can be simplified.

Here is a simpler case demonstrating what I think the OP is trying to say.

This is not a syntax error:
>>> [*0<<1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable

(Ignore the type error, this shows that it's syntactically valid.)

But this is a syntax error:
>>> [*0<=1]
  File "<stdin>", line 1
    [*0<=1]
        ^
SyntaxError: invalid syntax

Both of these are treated the same way, as not syntax errors:
>>> f(*0==1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'f' is not defined
>>> f(*0<=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'f' is not defined

Here's the above, using parser:

>>> import parser
>>> parser.expr("[*0<<1]")


>>> parser.expr("[*0<=1]")
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 1
[*0<=1]
^
SyntaxError: invalid syntax

>>> parser.expr("f(*0<<1)")


>>> parser.expr("f(*0<=1)")


I'm not sure this is worth fixing. Maybe if someone can find where in the 
grammar this is caused, and understands the side effects of fixing it, it could 
be addressed. But I expect it to be non-trivial.

--
components: +Interpreter Core
nosy: +eric.smith
versions: +Python 3.7




[issue36617] The rich comparison operators are second class citizens

2019-04-19 Thread Mark Dickinson


Change by Mark Dickinson :


--
nosy: +mark.dickinson




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Lorenz Mende

Lorenz Mende  added the comment:

Hi Ammar, you are correct.
typeperf returns:
P:\Repos\CPython\cpython>typeperf "\System\Prozessor-Warteschlangenlänge" -si 1

"(PDH-CSV 4.0)","\\ZERO\System\Prozessor-Warteschlangenlänge"
"04/19/2019 19:09:14.510","0.00"
"04/19/2019 19:09:15.514","0.00"

So even with the correct counter name, the output needs to be decoded
correctly. I already have a solution to get the locale-specific counter name
from the registry - if it helps I'll commit it.
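
For reference, a hypothetical, untested sketch of that registry lookup (it
assumes the standard Perflib 'Counter' tables of alternating index/name
strings):

    import winreg

    PERFLIB = r'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib'

    def localized_counter_name(english_name):
        # Map the English name to its index via the language-neutral 009
        # table, then map that index to the current-language name.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PERFLIB + r'\009') as key:
            english, _ = winreg.QueryValueEx(key, 'Counter')
        index = english[english.index(english_name) - 1]
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            PERFLIB + r'\CurrentLanguage') as key:
            localized, _ = winreg.QueryValueEx(key, 'Counter')
        return localized[localized.index(index) + 1]

    # e.g. localized_counter_name('Processor Queue Length') should give
    # 'Prozessor-Warteschlangenlänge' on a German system.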

--




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Ammar Askar


Ammar Askar  added the comment:

Thank you, could you also share the output if you just give it the English name 
of the counter?

--




[issue35959] math.prod(range(10)) caues segfault

2019-04-19 Thread Mark Dickinson


Mark Dickinson  added the comment:

I think this can be closed; I did look at the PR post-merge (sorry that I 
didn't get to it before it was merged), and I agree that it should fix the 
segfault.

There's scope for refactoring / improving the implementation, but that would 
belong in a different issue.

I'll close, but @pablogsal: please feel free to re-open if I've misunderstood.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue36669] weakref proxy doesn't support the matrix multiplication operator

2019-04-19 Thread SilentGhost


New submission from SilentGhost :

It's not obvious why it should. Do you care to show a use case you had in mind?

--
components: +Library (Lib)
nosy: +SilentGhost, fdrake
type:  -> enhancement
versions: +Python 3.8




[issue36650] Cached method implementation no longer works on Python 3.7.3

2019-04-19 Thread Raymond Hettinger


Raymond Hettinger  added the comment:

Thanks for the reproducer code.  I've bisected this back to
b2b023c657ba8c3f4a24d0c847d10fe8e2a73d44, which fixes other known bugs in the
lru_cache in issue 35780.  The problem is due to the lines that use a scalar
instead of an args tuple for exact ints and strs.  I'll work up a PR to fix it
soon (I'm on vacation and have limited connectivity, so it may take a few
days).

--




[issue36650] Cached method implementation no longer works on Python 3.7.3

2019-04-19 Thread Jason R. Coombs


Jason R. Coombs  added the comment:

Nice work. Thanks Raymond.

--




[issue33135] Define field prefixes for the various config structs

2019-04-19 Thread Joannah Nanjekye


Change by Joannah Nanjekye :


--
nosy: +nanjekyejoannah




[issue36666] threading.Thread should have way to catch an exception thrown within

2019-04-19 Thread Joel Croteau


Joel Croteau  added the comment:

I agree that we should not change the default behavior of Thread.join(), as 
that would break existing code, but there are plenty of other ways to do this. 
I see a couple of possibilities:

1. Add an option to the Thread constructor, something like raise_exc, that 
defaults to False, but when set to True, causes join() to raise any exceptions.

2. (Better, IMO) Add this option to the join() method instead.

3. Create a new method, join_with_exc(), that acts like join() but raises 
exceptions from the target.

4. (Should probably do this anyway, regardless of what else we do) Add a new 
method, check_exc(), that checks if any unhandled exceptions have occurred in 
the thread and returns and/or raises any that have.

--




[issue36666] threading.Thread should have way to catch an exception thrown within

2019-04-19 Thread Eric Snow


Eric Snow  added the comment:

Here's a basic decorator along those lines, similar to one that I've used on 
occasion:

import threading

def as_thread(target):
    def _target():
        try:
            t.result = target()
        except Exception as exc:
            t.failure = exc
    t = threading.Thread(target=_target)
    return t

Sure, it's border-line non-trivial, but I'd hardly call it "exceptionally 
complicated".

Variations for more flexibility:

import functools

def as_thread(target=None, **tkwds):
    # A decorator to create a one-off thread from a function.
    if target is None:
        # Used as a decorator factory
        return lambda target: as_thread(target, **tkwds)

    def _target(*args, **kwargs):
        try:
            t.result = target(*args, **kwargs)
        except Exception as exc:
            t.failure = exc
    t = threading.Thread(target=_target, **tkwds)
    return t


def threaded(target=None, **tkwds):
    # A decorator to produce a started thread when the "function" is called.
    if target is None:
        # Used as a decorator factory
        return lambda target: threaded(target, **tkwds)

    @functools.wraps(target)
    def wrapper(*targs, **tkwargs):
        def _target(*args, **kwargs):
            try:
                t.result = target(*args, **kwargs)
            except Exception as exc:
                t.failure = exc
        t = threading.Thread(target=_target, args=targs, kwargs=tkwargs,
                             **tkwds)
        t.start()
        return t
    return wrapper

--
nosy: +eric.snow




[issue36666] threading.Thread should have way to catch an exception thrown within

2019-04-19 Thread Joel Croteau

Joel Croteau  added the comment:

Yes, I know there are workarounds for it, I have seen many, and everyone
seems to have their own version. I'm saying we shouldn't need workarounds,
though; this should be built-in functionality. Ideally, dropping an exception
should never be default behavior. I understand not wanting to break existing
code; that's why I'm suggesting additional functionality to make these checks
easier and not require hacky, un-Pythonic wrappers and other methods to find
out whether your code actually worked.

--




[issue36643] Forward reference is not resolved by dataclasses.fields()

2019-04-19 Thread Ivan Levkivskyi


Change by Ivan Levkivskyi :


--
nosy: +levkivskyi




[issue36661] Missing dataclass decorator import in dataclasses module docs

2019-04-19 Thread Windson Yang


Windson Yang  added the comment:

I can find some examples in the docs that don't import the correct module,
even in the first example. Should we add the `import` statement to all of
them?

--
nosy: +Windson Yang




[issue36650] Cached method implementation no longer works on Python 3.7.3

2019-04-19 Thread Raymond Hettinger


Raymond Hettinger  added the comment:

For the record, here's what is going on.  The method_cache() code uses a
slightly different invocation for the first call than for subsequent calls. In
particular, the wrapper() uses **kwargs with an empty dict, whereas the first
call didn't use keyword arguments at all.  The C version of the lru_cache()
treats that first call as distinct from the second call, resulting in a cache
miss for both the first and second invocations but not for subsequent ones.

The pure Python lru_cache() has a memory-saving fast path taken when kwds is
an empty dict.  The C version is out of sync because it takes that path only
when kwds == NULL and doesn't check for the case where kwds is an empty
dict.  Here's a minimal reproducer:

from functools import lru_cache

@lru_cache()
def f(x):
    pass

f(0)
f(0, **{})
assert f.cache_info().hits == 1

Here's a possible fix:

diff --git a/Modules/_functoolsmodule.c b/Modules/_functoolsmodule.c
index 3f1c01651d..f118119479 100644
--- a/Modules/_functoolsmodule.c
+++ b/Modules/_functoolsmodule.c
@@ -751,7 +751,7 @@ lru_cache_make_key(PyObject *args, PyObject *kwds, int typed)
     Py_ssize_t key_size, pos, key_pos, kwds_size;
 
     /* short path, key will match args anyway, which is a tuple */
-    if (!typed && !kwds) {
+    if (!typed && (!kwds || PyDict_GET_SIZE(kwds) == 0)) {
         if (PyTuple_GET_SIZE(args) == 1) {
             key = PyTuple_GET_ITEM(args, 0);
             if (PyUnicode_CheckExact(key) || PyLong_CheckExact(key)) {

--




[issue36650] Cached method implementation no longer works on Python 3.7.3

2019-04-19 Thread Raymond Hettinger


Change by Raymond Hettinger :


--
keywords: +patch
pull_requests: +12806
stage:  -> patch review




[issue36665] REPL doesn't ensure builtins are available when implicitly recreating __main__

2019-04-19 Thread Terry J. Reedy


Terry J. Reedy  added the comment:

To me, the failure of dir() in message 1 is surprising and possibly a bug.  I
always thought of a module's globals = locals = dict() instance as continuous
across statements, whether in batch or interactive mode.  In batch mode

import sys
mod = sys.modules[__name__]
del sys.modules[__name__]
print(dir())

works.  Adding '-i' to the command line is supposed to allow one to enter 
interactive statements to be executed in the same namespace.

In IDLE's Shell, dir() in msg 1 executes normally.  This is because
idlelib.run.Executive() initializes the instance by caching globals():
    self.locals = __main__.__dict__
Then self.runcode(code) executes user statements with
    exec(code, self.locals)
With exec in the old statement form of 'exec code in self.locals', this pair
predates the first patch git has access to, on 5/26/2002 (GvR, committed by
Chui Tey).

Could and should Python do similarly, and keep a reference to the module
namespace?  What did Python do in 2002?  What do other implementations and
simulated shells do now?

--
nosy: +terry.reedy




[issue36670] test suite broken due to cpu usage feature on win 10/ german

2019-04-19 Thread Terry J. Reedy


Terry J. Reedy  added the comment:

'crash' means a *nix coredump or the Windows equivalent, rather than a traceback.

--
components: +Windows
nosy: +paul.moore, steve.dower, terry.reedy, tim.golden, zach.ware
type: crash -> behavior
