[issue46935] import of submodule pollutes global namespace

2022-03-06 Thread Max Bachmann


Max Bachmann  added the comment:

Thanks Dennis. This helped me track down the issue in rapidfuzz.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue46935>
___



[issue46935] import of submodule pollutes global namespace

2022-03-05 Thread Max Bachmann


Max Bachmann  added the comment:

It appears this only occurs when a C extension is involved. When the .so is 
imported first, it is preferred over the .py file that the user would like to 
import. I could not find any documentation on this behavior, so I assume that 
this is not intended.

My current workaround is to use a unique name for the C extension and then 
import everything from it in a Python file with the corresponding name.

--

___
Python tracker 
<https://bugs.python.org/issue46935>
___



[issue46935] import of submodule pollutes global namespace

2022-03-05 Thread Max Bachmann


New submission from Max Bachmann :

In my environment I installed the following two libraries:
```
pip install rapidfuzz
pip install python-Levenshtein
```
Those two libraries have the following structures:
rapidfuzz
|-distance
  |- __init__.py (from . import Levenshtein)
  |- Levenshtein.*.so
|-__init__.py (from rapidfuzz import distance)


Levenshtein
|-__init__.py

When importing Levenshtein first, everything behaves as expected:
```
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.apply_edit(    Levenshtein.jaro_winkler(     Levenshtein.ratio(
Levenshtein.distance(      Levenshtein.matching_blocks(  Levenshtein.seqratio(
Levenshtein.editops(       Levenshtein.median(           Levenshtein.setmedian(
Levenshtein.hamming(       Levenshtein.median_improve(   Levenshtein.setratio(
Levenshtein.inverse(       Levenshtein.opcodes(          Levenshtein.subtract_edit(
Levenshtein.jaro(          Levenshtein.quickmedian(
>>> import rapidfuzz
>>> Levenshtein.
Levenshtein.apply_edit(    Levenshtein.jaro_winkler(     Levenshtein.ratio(
Levenshtein.distance(      Levenshtein.matching_blocks(  Levenshtein.seqratio(
Levenshtein.editops(       Levenshtein.median(           Levenshtein.setmedian(
Levenshtein.hamming(       Levenshtein.median_improve(   Levenshtein.setratio(
Levenshtein.inverse(       Levenshtein.opcodes(          Levenshtein.subtract_edit(
Levenshtein.jaro(          Levenshtein.quickmedian(
```

However, when importing rapidfuzz first, running `import Levenshtein` imports 
`rapidfuzz.distance.Levenshtein` instead:
```
>>> import rapidfuzz
>>> Levenshtein
Traceback (most recent call last):
  File "", line 1, in 
NameError: name 'Levenshtein' is not defined
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.array(      Levenshtein.normalized_distance(    Levenshtein.similarity(
Levenshtein.distance(   Levenshtein.normalized_similarity(  Levenshtein.editops(
Levenshtein.opcodes(
```

My expectation was that in both cases `import Levenshtein` should import the 
`Levenshtein` module. I could reproduce this behavior on all Python versions I 
had available (Python3.8 - Python3.10) on Ubuntu and Fedora.
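A quick way to check which module actually won (a small diagnostic sketch, 
assuming both packages are installed) is to inspect `sys.modules` around the 
imports:
```
import sys

import rapidfuzz

# If the C extension registered itself under a top-level name, it is
# already visible here and will shadow the Levenshtein .py package:
print(sys.modules.get("Levenshtein"))
print(sys.modules.get("rapidfuzz.distance.Levenshtein"))

import Levenshtein
print(Levenshtein.__file__)  # which file actually got imported
```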

--
components: Interpreter Core
messages: 414599
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: import of submodule pollutes global namespace
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46935>
___



[issue15373] copy.copy() does not properly copy os.environment

2022-03-01 Thread Max Katsev


Max Katsev  added the comment:

Note that deepcopy doesn't work either, even though it looks like it does at 
first glance (which is arguably worse, since it's harder to notice):

Python 3.8.6 (default, Jun  4 2021, 05:16:01)
>>> import copy, os, subprocess
>>> env_copy = copy.deepcopy(os.environ)
>>> env_copy["TEST"] = "oh no"
>>> os.environ["TEST"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/fbcode/platform009/lib/python3.8/os.py", line 675, in __getitem__
    raise KeyError(key) from None
KeyError: 'TEST'
>>> subprocess.run("echo $TEST", shell=True, capture_output=True).stdout.decode()
'oh no\n'
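For anyone else hitting this: copying into a plain dict gives a snapshot that 
really is detached from the process environment (a minimal sketch; 
`os.environ.copy()` also returns a plain dict without putenv side effects):
```
import os
import subprocess

env_copy = dict(os.environ)
env_copy["TEST"] = "oh no"

# The real environment is untouched (assuming TEST was not already set):
print(os.environ.get("TEST"))  # None
print(subprocess.run("echo $TEST", shell=True,
                     capture_output=True).stdout.decode())  # '\n'
```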

--
nosy: +mkatsev

___
Python tracker 
<https://bugs.python.org/issue15373>
___



[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"

2021-10-07 Thread Max


Max  added the comment:

Option 1 looks most attractive to me (and will also look most attractive in the 
rendering, IMHO -- certainly better than "await" "x", in any case).

P.S.: OK, thanks for the explanations concerning 3.6 - 3.8. I understand that it 
won't be fixed for these versions (though I'm not certain why not, if it is 
possible at no cost), but I do not understand why these labels must be removed. 
The bug does exist, but could simply be considered "wontfix" for these versions 
(or not), given that it's not in the "security" category. The fact that it won't 
be fixed, for whatever reason, should not mean that it should not be listed as 
existing there.

--

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"

2021-10-06 Thread Max


Max  added the comment:

Thanks for fixing the typo; I didn't know how to do that when I spotted it (I'm 
new to this).
You also removed Python versions 3.6, 3.7 and 3.8. However, I just tested on 
pythonanywhere,
>>> sys.version
'3.7.0 (default, Aug 22 2018, 20:50:05) \n[GCC 5.4.0 20160609]'
so I can confirm that the bug *is* there on 3.7 (so I put this back in the list 
- unless it was removed in a later 3.7.x (did you mean that?) and put back in 
later versions...?).
It is also in the Python 3.9.7 I'm running on my laptop, so I'd be greatly 
surprised if it were not present in the other two versions you also removed.

--
versions: +Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue45393] help() on operator precedence has confusing entries "avait" "x" and "not" "x"

2021-10-06 Thread Max


New submission from Max :

Nobody seems to have noticed this AFAICS: 
If you type, e.g., help('+') to get help on operator precedence, the first 
column gives a list of operators for each row corresponding to a given 
precedence. However, the row for "not" (and similarly for "await") has the entry

"not" "x"

That looks as if there were two operators, "not" and "x". But the letter x is 
just an argument to the operator, so it should be:

 "not x"

exactly as for "+x" and "-x" and "~x" and "x[index]" and "x.attribute", where 
x is likewise not part of the operator but an argument.

On the corresponding web page 
https://docs.python.org/3/reference/expressions.html#operator-summary
it is displayed correctly; there are no quotes.

--
assignee: docs@python
components: Documentation
messages: 403321
nosy: MFH, docs@python
priority: normal
severity: normal
status: open
title: help() on operator precedence has confusing entries "avait" "x" and 
"not" "x"
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue45393>
___



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

As far as I understood, this is caused by the same reason:

```
>>> s = '123\U00010900456'
>>> s
'123𐤀456'
>>> list(s)
['1', '2', '3', '𐤀', '4', '5', '6']
# note that everything including the commas is mirrored until ] is reached
>>> s[3]
'𐤀'
>>> list(s)[3]
'𐤀'
>>> ls = list(s)
>>> ls[3] += 'a'
>>> ls
['1', '2', '3', '𐤀a', '4', '5', '6']
```

Which, as far as I understood, is the expected behavior when a right-to-left 
character is encountered.
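The directionality of the character can be checked directly with the stdlib (a 
small sketch; \U00010900 is PHOENICIAN LETTER ALF, bidirectional class 'R'):
```
import unicodedata

ch = "\U00010900"
print(unicodedata.name(ch))           # PHOENICIAN LETTER ALF
print(unicodedata.bidirectional(ch))  # 'R' -> right-to-left
```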

--

___
Python tracker 
<https://bugs.python.org/issue45105>
___



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

> That is using Python 3.9 in the xfce4-terminal. Which xterm are you using?

This was in the default GNOME terminal that is pre-installed on Fedora 34; on 
Windows I directly opened the Python terminal. I just installed xfce4-terminal 
on my Fedora 34 machine, and it shows exactly the same behavior for me that I 
had in the GNOME terminal.

> But regardless, I cannot replicate the behavior you show where list(s) is 
> different from indexing the characters one by one.

That is what surprised me the most. I only ran into this because the input was 
generated while fuzz testing my code using hypothesis (which uncovered an 
unrelated bug in my application). However, I was quite confused by the character 
order when debugging it.

My original case was:
```
s1='00'
s2='9010𐤀000\x8dÀĀĀĀ222Ā'
parts = [s2[max(0, i) : min(len(s2), i+len(s1))] for i in range(-len(s1), len(s2))]
for part in parts:
    print(list(part))
```
which produced
```
[]
['9']
['9', '0']
['9', '0', '1']
['9', '0', '1', '0']
['9', '0', '1', '0', '𐤀']
['9', '0', '1', '0', '𐤀', '0']
['0', '1', '0', '𐤀', '0', '0']
['1', '0', '𐤀', '0', '0', '0']
['0', '𐤀', '0', '0', '0', '\x8d']
['𐤀', '0', '0', '0', '\x8d', 'À']
['0', '0', '0', '\x8d', 'À', 'Ā']
['0', '0', '\x8d', 'À', 'Ā', 'Ā']
['0', '\x8d', 'À', 'Ā', 'Ā', 'Ā']
['\x8d', 'À', 'Ā', 'Ā', 'Ā', '2']
['À', 'Ā', 'Ā', 'Ā', '2', '2']
['Ā', 'Ā', 'Ā', '2', '2', '2']
['Ā', 'Ā', '2', '2', '2', 'Ā']
['Ā', '2', '2', '2', 'Ā']
['2', '2', '2', 'Ā']
['2', '2', 'Ā']
['2', 'Ā']
['ĀÀ]
```
which has a missing single quote:
  - ['ĀÀ]
changing direction of characters including commas:
  - ['1', '0', '𐤀', '0', '0', '0']
and changing direction back:
  - ['𐤀', '0', '0', '0', '\x8d', 'À']

> AFAICT, there is no bug here. It's just confusing how Unicode right-to-left 
> characters in the repr() can modify how it's displayed in the 
> console/terminal.

Yes, it appears the same confusion occurs in other applications like Firefox and 
VS Code.
Thanks to @eryksun and @steven.daprano for testing and telling me about 
bidirectional writing in Unicode (the more I know about Unicode, the more it 
scares me).
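When in doubt, `ascii()` sidesteps the terminal's bidi rendering entirely by 
escaping every non-ASCII character (a minimal sketch):
```
s = '123\U00010900456'
print(ascii(s))        # '123\U00010900456' - unambiguous, left-to-right
print(ascii(list(s)))  # escaped item by item, no mirrored commas
```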

--
status: pending -> open

___
Python tracker 
<https://bugs.python.org/issue45105>
___



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

Max Bachmann  added the comment:

This is the result of copy-pasting the example posted above on Windows using 
```
Python 3.7.8 (tags/v3.7.8:4b47a5b6ba, Jun 28 2020, 08:53:46) [MSC v.1916 64 bit 
(AMD64)] on win32
```
which appears to run into similar problems:
```
>>> s = '0𐤀00'
>>> s
'0𐤀00'
>>> ls = list(s)
>>> ls
['0', '𐤀', '0', '0']
>>> s[0]
'0'
>>> s[1]
'𐤀'
```

--

___
Python tracker 
<https://bugs.python.org/issue45105>
___



[issue45105] Incorrect handling of unicode character \U00010900

2021-09-05 Thread Max Bachmann

New submission from Max Bachmann :

I noticed incorrect handling of the Unicode character \U00010900 when inserting 
the character literally.
Here is the result on the Python console, both for 3.6 and 3.9:
```
>>> s = '0𐤀00'
>>> s
'0𐤀00'
>>> ls = list(s)
>>> ls
['0', '𐤀', '0', '0']
>>> s[0]
'0'
>>> s[1]
'𐤀'
>>> s[2]
'0'
>>> s[3]
'0'
>>> ls[0]
'0'
>>> ls[1]
'𐤀'
>>> ls[2]
'0'
>>> ls[3]
'0'
```

It appears that for some reason, in this specific case, the character is actually 
stored in a different position than shown when printing the complete string. 
Note that the string already behaves strangely when marking it in the console: 
when marking the special character, it directly highlights the last 3 characters 
(probably because it already thinks this character is in the second position).

The same behavior does not occur when directly using the Unicode code point:
```
>>> s='000\U00010900'
>>> s
'000𐤀'
>>> s[0]
'0'
>>> s[1]
'0'
>>> s[2]
'0'
>>> s[3]
'𐤀'
```

This was tested using the following Python versions:
```
Python 3.6.0 (default, Dec 29 2020, 02:18:14) 
[GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux

Python 3.9.6 (default, Jul 16 2021, 00:00:00) 
[GCC 11.1.1 20210531 (Red Hat 11.1.1-3)] on linux
```
on Fedora 34

--
components: Unicode
messages: 401078
nosy: ezio.melotti, maxbachmann, vstinner
priority: normal
severity: normal
status: open
title: Incorrect handling of unicode character \U00010900
type: behavior
versions: Python 3.6, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue45105>
___



[issue44153] Signaling an asyncio subprocess might raise ProcessLookupError, even if you haven't called .wait() yet

2021-05-16 Thread Max Marrone


Change by Max Marrone :


--
title: Signaling an asyncio subprocess raises ProcessLookupError, depending on 
timing -> Signaling an asyncio subprocess might raise ProcessLookupError, even 
if you haven't called .wait() yet

___
Python tracker 
<https://bugs.python.org/issue44153>
___



[issue44153] Signaling an asyncio subprocess raises ProcessLookupError, depending on timing

2021-05-16 Thread Max Marrone


New submission from Max Marrone :

# Summary

Basic use of `asyncio.subprocess.Process.terminate()` can raise a 
`ProcessLookupError`, depending on the timing of the subprocess's exit.

I assume (but haven't checked) that this problem extends to `.kill()` and 
`.send_signal()`.

This breaks the expected POSIX semantics of signaling and waiting on a process. 
See the "Expected behavior" section.


# Test case

I've tested this on macOS 11.2.3 with Python 3.7.9 and Python 3.10.0a7, both 
installed via pyenv.

```
import asyncio
import sys

# Tested with:
# asyncio.ThreadedChildWatcher (3.10.0a7  only)
# asyncio.MultiLoopChildWatcher (3.10.0a7 only)
# asyncio.SafeChildWatcher (3.7.9 and 3.10.0a7)
# asyncio.FastChildWatcher (3.7.9 and 3.10.0a7)
# Not tested with asyncio.PidfdChildWatcher because I'm not on Linux.
WATCHER_CLASS = asyncio.FastChildWatcher

async def main():
# Dummy command that should be executable cross-platform.
process = await asyncio.subprocess.create_subprocess_exec(
sys.executable, "--version"
)

for i in range(20):
# I think the problem is that the event loop opportunistically wait()s
# all outstanding subprocesses on its own. Do a bunch of separate
# sleep() calls to give it a bunch of chances to do this, for reliable
# reproduction.
#
# I'm not sure if this is strictly necessary for the problem to happen.
# On my machine, the problem also happens with a single sleep(2.0).
await asyncio.sleep(0.1)

process.terminate() # This unexpectedly errors with ProcessLookupError.

print(await process.wait())

asyncio.set_child_watcher(WATCHER_CLASS())
asyncio.run(main())
```

The `process.terminate()` call raises a `ProcessLookupError`:

```
Traceback (most recent call last):
  File "kill_is_broken.py", line 29, in <module>
    asyncio.run(main())
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "kill_is_broken.py", line 24, in main
    process.terminate()  # This errors with ProcessLookupError.
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/subprocess.py", line 131, in terminate
    self._transport.terminate()
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", line 150, in terminate
    self._check_proc()
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", line 143, in _check_proc
    raise ProcessLookupError()
ProcessLookupError
```


# Expected behavior and discussion

Normally, with POSIX semantics, the `wait()` syscall tells the operating system 
that we won't send any more signals to that process, and that it's safe for the 
operating system to recycle that process's PID. This comment from Jack O'Connor 
on another issue explains it well: https://bugs.python.org/issue40550#msg382427

So, I expect that on any given `asyncio.subprocess.Process`, if I call 
`.terminate()`, `.kill()`, or `.send_signal()` before I call `.wait()`, then:

* It should not raise a `ProcessLookupError`.
* The asyncio internals shouldn't do anything with a stale PID. (A stale PID is 
one that used to belong to our subprocess, but that we've since consumed 
through a `wait()` syscall, allowing the operating system to recycle it).

asyncio internals are mostly over my head. But I *think* the problem is that 
the event loop opportunistically calls the `wait()` syscall on our child 
processes. So, as implemented, there's a race condition. If the event loop's 
`wait()` syscall happens to come before my `.terminate()` call, my 
`.terminate()` call will raise a `ProcessLookupError`.

So, as a corollary to the expectations listed above, I think the implementation 
details should be either:

* Ideally, the asyncio internals should not call syscall `wait()` on a process 
until *I* call `wait()` on that process. 
* Failing that, `.terminate()`, `.kill()` and `.send_signal()` should 
no-op if the asyncio internals have already called `.wait()` on that process. 
(A user-side workaround along these lines is sketched below.)
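Until then, something like the following might serve as a workaround (a hedged 
sketch; `terminate_and_wait` is a hypothetical helper, not an asyncio API):
```
import asyncio
import contextlib

async def terminate_and_wait(process: asyncio.subprocess.Process) -> int:
    # If the event loop has already reaped the child, terminate()
    # raises ProcessLookupError and there is nothing left to signal.
    with contextlib.suppress(ProcessLookupError):
        process.terminate()
    return await process.wait()
```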

--
components: asyncio
messages: 393764
nosy: asvetlov, syntaxcoloring, yselivanov
priority: normal
severity: normal
status: open
title: Signaling an asyncio subprocess raises ProcessLookupError, depending on 
timing
type: behavior
versions: Python 3.10, Python 3.7

___
Python tracker 
<https://bugs.python.org/issue44153>
___



[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-04-08 Thread Max Bélanger

Change by Max Bélanger :


--
nosy: +maxbelanger
nosy_count: 4.0 -> 5.0
pull_requests: +24011
pull_request: https://github.com/python/cpython/pull/25274

___
Python tracker 
<https://bugs.python.org/issue42688>
___



[issue41100] Support macOS 11 and Apple Silicon Macs

2021-04-08 Thread Max Bélanger

Change by Max Bélanger :


--
nosy: +maxbelanger
nosy_count: 18.0 -> 19.0
pull_requests: +24010
pull_request: https://github.com/python/cpython/pull/25274

___
Python tracker 
<https://bugs.python.org/issue41100>
___



[issue26680] Incorporating float.is_integer into Decimal

2021-03-21 Thread Max Prokop


Change by Max Prokop :


--
components: +2to3 (2.x to 3.x conversion tool), Argument Clinic, Build, C API, 
Cross-Build, Demos and Tools, Distutils, Documentation, asyncio, ctypes
nosy: +Alex.Willmer, asvetlov, dstufft, eric.araujo, larry, yselivanov
type: enhancement -> compile error
Added file: https://bugs.python.org/file49898/Mobile_Signup.vcf

___
Python tracker 
<https://bugs.python.org/issue26680>
___



[issue43565] PyUnicode_KIND macro does not have the specified return type

2021-03-19 Thread Max Bachmann


New submission from Max Bachmann :

The documentation states that the PyUnicode_KIND macro has the following 
interface:
- int PyUnicode_KIND(PyObject *o)
However, it actually returns a value of the underlying type of the 
PyUnicode_Kind enum, which could just as well be e.g. an unsigned int.

--
components: C API
messages: 389133
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: PyUnicode_KIND macro does not have the specified return type
type: behavior

___
Python tracker 
<https://bugs.python.org/issue43565>
___



[issue7856] cannot decode from or encode to big5 \xf9\xd8

2021-03-09 Thread Max Bolingbroke

Max Bolingbroke  added the comment:

As of Python 3.7.9 this also affects \xf9\xd6, which should be \u7881 in 
Unicode. This character is the second character of 宏碁, the name of the 
Taiwanese electronics manufacturer Acer.

You can work around the issue using big5hkscs just like with the original 
\xf9\xd8 problem.

It looks like the F9D6–F9FE characters all come from the Big5-ETen extension 
(https://en.wikipedia.org/wiki/Big5#ETEN_extensions, 
https://moztw.org/docs/big5/table/eten.txt), which is so popular that it is a 
de facto standard. Big5-2003 (mentioned in a comment below) seems to be an 
extension of Big5-ETen. For what it's worth, whatwg includes these mappings in 
their own big5 reference tables: https://encoding.spec.whatwg.org/big5.html. 

Unfortunately Big5 is still in common use in Taiwan. It's pretty funny that 
Python fails to decode Big5 documents containing the name of one of Taiwan's 
largest multinationals :-)
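A quick demonstration of both the failure and the workaround (a sketch based on 
the mapping above, where \xf9\xd6 should decode to U+7881 碁):
```
data = b"\xf9\xd6"

try:
    data.decode("big5")
except UnicodeDecodeError as exc:
    print(exc)  # 'big5' codec can't decode these bytes

print(data.decode("big5hkscs"))  # '碁' (U+7881)
```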

--
nosy: +batterseapower

___
Python tracker 
<https://bugs.python.org/issue7856>
___



[issue43377] _PyErr_Display should be available in the CPython-specific API

2021-03-03 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
nosy: +maxbelanger
nosy_count: 1.0 -> 2.0
pull_requests: +23495
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24719

___
Python tracker 
<https://bugs.python.org/issue43377>
___



[issue43221] German Text Conversion Using Upper() and Lower()

2021-02-13 Thread Max Parry

New submission from Max Parry :

The German alphabet has four extra characters (ä, ö, ü and ß) when compared to 
the UK/USA alphabet.  Until 2017 the character ß was normally only lower case.  
Upper case ß was represented by SS.  In 2017 upper case ß was introduced, 
although SS is still often/usually used instead.  It is important to note that, 
as far as I can see, upper case ß and lower case ß are identical.

The upper() method converts upper or lower case ß to SS.  N.B. ä, ö and ü are 
handled correctly.  Lower() seems to work correctly.

Please note that German is my second language and everything I say about the 
language, its history and its use might not be reliable.  Happy to be corrected.
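For reference, the behavior described above can be reproduced directly (a 
minimal sketch; U+1E9E is the capital ẞ introduced in 2017):
```
print("ß".upper())       # 'SS' - str.upper() maps ß to SS
print("straße".upper())  # 'STRASSE'
print("\u1e9e")          # 'ẞ' - LATIN CAPITAL LETTER SHARP S
print("\u1e9e".lower())  # 'ß'
```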

--
components: Windows
messages: 386938
nosy: Strongbow, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: German Text Conversion Using Upper() and Lower()
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43221>
___



[issue42629] PyObject_Call not behaving as documented

2020-12-12 Thread Max Bachmann


New submission from Max Bachmann :

The documentation of PyObject_Call here: 
https://docs.python.org/3/c-api/call.html#c.PyObject_Call
states that it is the equivalent of the Python expression: callable(*args, 
**kwargs).

so I would expect:
PyObject* args = PyTuple_New(0);
PyObject* kwargs = PyDict_New();
PyObject_Call(funcObj, args, kwargs)

to behave similarly to
args = []
kwargs = {}
func(*args, **kwargs)

however this is not the case: when I edit kwargs inside
PyObject* func(PyObject* /*self*/, PyObject* /*args*/, PyObject* keywds)
{
  PyObject* str = PyUnicode_FromString("test_str");
  PyDict_SetItemString(keywds, "test", str);  /* mutates the caller's dict */
  Py_DECREF(str);  /* PyDict_SetItemString does not steal the reference */
  Py_RETURN_NONE;
}

it changes the original dictionary passed into PyObject_Call. I was wondering 
whether this means that:
a) it is not allowed to modify the keywds argument passed to a 
PyCFunctionWithKeywords, or
b) when calling PyObject_Call it is required to copy the kwargs for the call 
using PyDict_Copy.

Neither the documentation of PyObject_Call nor the documentation of 
PyCFunctionWithKeywords 
(https://docs.python.org/3/c-api/structures.html#c.PyCFunctionWithKeywords) 
made this clear to me.
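For comparison, in pure Python the callee never shares the caller's dict, which 
is the behavior the documented equivalence suggests (a minimal sketch):
```
def func(**kwargs):
    # Mutating the received mapping is invisible to the caller,
    # because ** unpacking builds a fresh dict for the callee.
    kwargs["test"] = "test_str"

kwargs = {}
func(**kwargs)
print(kwargs)  # {}
```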

--
components: C API
messages: 382927
nosy: maxbachmann
priority: normal
severity: normal
status: open
title: PyObject_Call not behaving as documented
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42629>
___



[issue41100] Support macOS 11 and Apple Silicon Macs

2020-11-16 Thread Max Desiatov


Change by Max Desiatov :


--
nosy:  -MaxDesiatov

___
Python tracker 
<https://bugs.python.org/issue41100>
___



[issue39603] [security] http.client: HTTP Header Injection in the HTTP method

2020-07-22 Thread Max


Max  added the comment:

I've just noticed an issue with the current version of the patch. It should 
also include 0x20 (space) since that can also be used to manipulate the request.

--

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue39603] [security] http.client: HTTP Header Injection in the HTTP method

2020-02-11 Thread Max


Max  added the comment:

I agree that the solution is quite restrictive.
Restricting to ASCII characters alone would certainly work.

--

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue39603] Injection in http.client

2020-02-10 Thread Max


New submission from Max :

I recently came across a bug during a pentest that allowed me to perform some 
really interesting attacks on a target. While originally discovered in 
requests, I was forwarded to one of the urllib3 developers after agreeing 
that fixing it at its lowest level would be preferable. I was informed that 
the vulnerability is also present in http.client and that I should report it 
here as well.

The 'method' parameter is not filtered to prevent the injection from altering 
the entire request.

For example:
>>> conn = http.client.HTTPConnection("localhost", 80)
>>> conn.request(method="GET / HTTP/1.1\r\nHost: abc\r\nRemainder:", url="/index.html")

This will result in the following request being generated:
GET / HTTP/1.1
Host: abc
Remainder: /index.html HTTP/1.1
Host: localhost
Accept-Encoding: identity

This was originally found in an HTTP proxy that was utilising Requests. It 
allowed me to manipulate the original path to access different files from an 
internal server since the developers had assumed that the method would filter 
out non-standard HTTP methods.

The recommended solution is to only allow the standard HTTP methods of GET, 
HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH.

An alternate solution that would allow programmers to use non-standard methods 
would be to only support characters [a-z] and stop reading at any special 
characters (especially newlines and spaces).

--
components: Library (Lib)
messages: 361710
nosy: maxpl0it
priority: normal
severity: normal
status: open
title: Injection in http.client
type: security
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39603>
___



[issue30825] csv.Sniffer does not detect lineterminator

2020-02-03 Thread Max Vorobev


Change by Max Vorobev :


--
keywords: +patch
pull_requests: +17708
stage: test needed -> patch review
pull_request: https://github.com/python/cpython/pull/18336

___
Python tracker 
<https://bugs.python.org/issue30825>
___



[issue38952] asyncio cannot handle Python3 IPv4Address

2019-12-02 Thread Max Coplan

Max Coplan  added the comment:

Well, I’ve submitted a fix for it. While it doesn’t look perfect, it actually 
worked with everything I’ve thrown at it, and it seems to be a robust and 
sufficient fix.

--

___
Python tracker 
<https://bugs.python.org/issue38952>
___



[issue38952] asyncio cannot handle Python3 IPv4Address

2019-12-01 Thread Max Coplan


Change by Max Coplan :


--
title: asyncio cannot handle Python3 IPv4Address or IPv6 Address -> asyncio 
cannot handle Python3 IPv4Address

___
Python tracker 
<https://bugs.python.org/issue38952>
___



[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address

2019-12-01 Thread Max Coplan


Change by Max Coplan :


--
keywords: +patch
pull_requests: +16913
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/17434

___
Python tracker 
<https://bugs.python.org/issue38952>
___



[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address

2019-12-01 Thread Max Coplan


New submission from Max Coplan :

Trying to use the new Python 3 `IPv4Address` objects fails with the following error:
```
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 1270, in _ensure_resolved
    info = _ipaddr_info(host, port, family, type, proto, *address[2:])
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 134, in _ipaddr_info
    if '%' in host:
TypeError: argument of type 'IPv4Address' is not iterable
```
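Until this is fixed, converting the address to its textual form sidesteps the 
failing check (a minimal sketch):
```
import ipaddress

host = ipaddress.IPv4Address("127.0.0.1")
print(str(host))         # '127.0.0.1' - plain str, safe to hand to asyncio
print('%' in str(host))  # False - the check in _ipaddr_info now works
```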

--
components: asyncio
messages: 357697
nosy: Max Coplan, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: asyncio cannot handle Python3 IPv4Address or IPv6 Address
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue38952>
___



[issue38279] multiprocessing example enhancement

2019-09-25 Thread Max


Change by Max :


--
keywords: +patch
pull_requests: +15979
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/16398

___
Python tracker 
<https://bugs.python.org/issue38279>
___



[issue38279] multiprocessing example enhancement

2019-09-25 Thread Max Voss


New submission from Max Voss :

Hello all,

I've been trying to understand multiprocessing for a while; I tried multiple 
times. The PR is a suggested enhancement to the example that made it "click" 
for me. Or should I say, produced a working result that made sense to me.

Details for each change are in the PR. It's short too.

The concept of multiprocessing is easy enough, but the syntax is so unlike 
regular Python, and so much happens "behind the curtain" so to speak, that it 
took me a while. When I looked for multiprocessing advice online, many answers 
seemed unsure of whether or how their solution worked.

Generally I'd like to help write documentation. So this is also a test to see 
how good your issue handling process is. :)

--
assignee: docs@python
components: Documentation
messages: 353222
nosy: BMV, docs@python
priority: normal
severity: normal
status: open
title: multiprocessing example enhancement
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38279>
___



[issue35787] shlex.split inserts extra item on backslash space space

2019-01-20 Thread Max


New submission from Max :

I believe in both cases below, the output should be ['a', 'b']; the extra ' ' 
inserted into the list is incorrect:

python3.6
Python 3.6.2 (default, Aug  4 2017, 14:35:04)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shlex
>>> shlex.split('a \ b')
['a', ' b']
>>> shlex.split('a \  b')
['a', ' ', 'b']
>>>

Doc reference: https://docs.python.org/3/library/shlex.html#parsing-rules
> Non-quoted escape characters (e.g. '\') preserve the literal value of the 
> next character that follows;

I believe this implies that backslash space should be just space; and then two 
adjacent spaces should be used (just like a single space) as a separator 
between arguments.

--
components: Library (Lib)
messages: 334081
nosy: max
priority: normal
severity: normal
status: open
title: shlex.split inserts extra item on backslash space space
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue35787>
___



[issue35203] Windows Installer Ignores Launcher Installer Options Where The Python Launcher Is Already Present

2018-11-09 Thread Max Bowsher


Change by Max Bowsher :


--
nosy: +Max Bowsher

___
Python tracker 
<https://bugs.python.org/issue35203>
___



[issue35139] Statically linking pyexpat in Modules/Setup fails to compile on macOS

2018-11-01 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9599
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35139>
___



[issue35080] The tests for the `dis` module can be too rigid when changing opcodes

2018-10-26 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9469
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35080>
___



[issue35025] Compiling `timemodule.c` can fail on macOS due to availability warnings

2018-10-19 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9308
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35025>
___



[issue35022] MagicMock should support `__fspath__`

2018-10-18 Thread Max Bélanger

Change by Max Bélanger :


--
keywords: +patch
pull_requests: +9307
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue35022>
___



[issue28627] [alpine] shutil.copytree fail to copy a direcotry with broken symlinks

2018-04-18 Thread Max Rees

Max Rees <maxcr...@me.com> added the comment:

Actually, the symlinks don't need to be broken. It fails for any kind of symlink
on musl.

$ ls -l /tmp/symtest
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees  0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf

$ python3
>>> import shutil; shutil.copytree('/tmp/symtest', '/tmp/symtest2', symlinks=True)
shutil.Error: [('/tmp/symtest/resolv.conf', '/tmp/symtest2/resolv.conf', "[Errno 95] Not supported: '/tmp/symtest2/resolv.conf'"), ('/tmp/symtest/empty', '/tmp/symtest2/empty', "[Errno 95] Not supported: '/tmp/symtest2/empty'")]

$ ls -l /tmp/symtest2
total 0
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees  0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf

The implication of these bugs is that things like pip may fail if they call
shutil.copytree(..., symlinks=True) on a directory that contains symlinks(!)
Attached is a patch that works around the issue but does not address why chmod
is returning OSError instead of NotImplementedError.

--
keywords: +patch
nosy: +sroracle
Added file: https://bugs.python.org/file47540/musl-eopnotsupp.patch

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue28627>
___



[issue32285] In `unicodedata`, it should be possible to check a unistr's normal form without necessarily copying it

2017-12-11 Thread Max Bélanger

Change by Max Bélanger <aero...@gmail.com>:


--
keywords: +patch
pull_requests: +4703
stage:  -> patch review

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32285>
___



[issue32282] When using a Windows XP compatible toolset, `socketmodule.c` fails to build

2017-12-11 Thread Max Bélanger

Change by Max Bélanger <aero...@gmail.com>:


--
keywords: +patch
pull_requests: +4702
stage:  -> patch review

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32282>
___



[issue32280] Expose `_PyRuntime` through a section name

2017-12-11 Thread Max Bélanger

Change by Max Bélanger <aero...@gmail.com>:


--
keywords: +patch
pull_requests: +4700
stage:  -> patch review

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32280>
___



[issue31903] `_scproxy` calls SystemConfiguration functions in a way that can cause deadlocks

2017-10-30 Thread Max Bélanger

Change by Max Bélanger <aero...@gmail.com>:


--
keywords: +patch
pull_requests: +4148
stage:  -> patch review

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31903>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-10-11 Thread Max Rothman

Max Rothman <max.r.roth...@gmail.com> added the comment:

Hi, I'd like to wrap this ticket up and get some kind of resolution, whether 
it's accepted or not. I'm new to the Python community; what's the right way to 
prompt a discussion about this sort of thing? Should I have taken it to one of 
the mailing lists?

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue30821>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-07-18 Thread Max Rothman

Max Rothman added the comment:

Hi, just wanted to ping this again and see if there was any movement.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30821>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-07-12 Thread Max Rothman

Max Rothman added the comment:

> Generally the called with asserts can only be used to match the *actual 
> call*, and they don't determine "equivalence".

That's fair, but as unittest.mock stands now, it *does* check equivalence, but 
only partially, which is more confusing to users than either checking 
equivalence or not.

> I'm not convinced there's a massive use case - generally you want to make 
> asserts about what your code actually does - not just check if it does 
> something equivalent to your assert.

To me, making asserts about what your code actually does means not having tests 
fail because a function call switches to a set of equivalent but different 
arguments. As a developer, I care about the state in the parent and the state 
in the child, and I trust Python to work out the details in between. If Python 
treats two forms as equivalent, why shouldn't our tests?

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30821>
___



[issue30825] csv.Sniffer does not detect lineterminator

2017-07-01 Thread Max Vorobev

Changes by Max Vorobev <vmax0...@gmail.com>:


--
pull_requests: +2595

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30825>
___



[issue30825] csv.Sniffer does not detect lineterminator

2017-07-01 Thread Max Vorobev

New submission from Max Vorobev:

Line terminator defaults to '\r\n' while detecting dialect in csv.Sniffer

--
components: Library (Lib)
messages: 297497
nosy: Max Vorobev
priority: normal
severity: normal
status: open
title: csv.Sniffer does not detect lineterminator
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30825>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-06-30 Thread Max Rothman

Max Rothman added the comment:

I'd be happy to look at submitting a patch for this, but it'd be helpful to be 
able to ask questions of someone more familiar with unittest.mock's code.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30821>
___



[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments

2017-06-30 Thread Max Rothman

New submission from Max Rothman:

For a function f with the signature f(foo=None), the following three calls are 
equivalent:

f(None)
f(foo=None)
f()

However, only the first two are equivalent in the eyes of 
unittest.mock.Mock.assert_called_with:

>>> with patch('__main__.f', autospec=True) as f_mock:
...     f_mock(foo=None)
...     f_mock.assert_called_with(None)

>>> with patch('__main__.f', autospec=True) as f_mock:
...     f_mock(None)
...     f_mock.assert_called_with()
AssertionError: Expected call: f()  Actual call: f(None)

This is definitely surprising to new users (it was surprising to me!) and 
unnecessarily couples tests to how a particular piece of code happens to call a 
function.

--
components: Library (Lib)
messages: 297433
nosy: Max Rothman
priority: normal
severity: normal
status: open
title: unittest.mock.Mocks with specs aren't aware of default arguments
versions: Python 2.7, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30821>
___



[issue30685] Multiprocessing Send to Manager Fails for Large Payload

2017-06-16 Thread Max Ehrlich

New submission from Max Ehrlich:

On line 393 of multiprocessing/connection.py, the size of the payload to be 
sent is serialized as an integer. This fails for large payloads. It should 
probably be serialized as a long, or better yet a long long.
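The 4-byte limit is easy to demonstrate with struct directly (a sketch, 
assuming the length header is packed with the common network-order '!i' 
format):
```
import struct

print(struct.calcsize("!i"))         # 4 bytes -> lengths capped at 2**31 - 1
print(struct.pack("!i", 2**31 - 1))  # largest length that still fits

try:
    struct.pack("!i", 2**31)         # one past the limit
except struct.error as exc:
    print(exc)
```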

--
components: Library (Lib)
messages: 296210
nosy: maxehr
priority: normal
severity: normal
status: open
title: Multiprocessing Send to Manager Fails for Large Payload
versions: Python 3.5

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30685>
___



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

Max Staff added the comment:

...at least those are the only two ways that I can think of.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30641>
___



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

Max Staff added the comment:

Yes, I know about the errno. There would be two ways to resolve this:

One way would be by introducing a new exception class, which would be nice 
because it's almost impossible to reliably check the allowed filename length 
(except by trial and error), and I have quite a few functions where I would 
want the error to propagate further as long as it's not an ENAMETOOLONG.

The other way would be by introducing a new syntax feature ("except OSError as 
e if e.errno == errno.ENAMETOOLONG:"), but I don't think that approach is 
reasonable.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30641>
___



[issue30641] No way to specify "File name too long" error in except statement.

2017-06-12 Thread Max Staff

New submission from Max Staff:

There are different ways to catch exceptions of the type "OSError": by using 
"except OSError as e:" and then checking the errno, or by using "except 
FileNotFoundError as e:" or "except FileExistsError as e:" or whatever error 
one wants to catch. There is no such way for the above mentioned error that 
occurs when a filename is too long for the filesystem/OS.

--
components: IO
messages: 295810
nosy: Max Staff
priority: normal
severity: normal
status: open
title: No way to specify "File name too long" error in except statement.
type: behavior
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30641>
___



[issue30517] Enum does not recognize enum.auto as unique values

2017-05-31 Thread Max

Max added the comment:

Ah sorry about that ... Yes, everything works fine when used properly.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30517>
___



[issue30517] Enum does not recognize enum.auto as unique values

2017-05-30 Thread Max

New submission from Max:

This probably shouldn't happen:

import enum

class E(enum.Enum):
  A = enum.auto
  B = enum.auto

x = E.B.value
print(x) # <class 'enum.auto'>
print(E(x))  # E.A

The first print() is kinda OK; I don't really care which value was used 
by the implementation. But the second print() seems surprising.
By the same token, this probably shouldn't raise an exception (it does now):

import enum

@enum.unique
class E(enum.Enum):
  A = enum.auto
  B = enum.auto
  C = object()

and `dir(E)` shouldn't skip `B` in its output (it does now).
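For contrast, calling auto() (note the parentheses) gives each member its own 
value (a minimal sketch of the intended usage):
```
import enum

@enum.unique
class E(enum.Enum):
    A = enum.auto()
    B = enum.auto()

x = E.B.value
print(x)     # 2
print(E(x))  # E.B
```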

--
components: Library (Lib)
messages: 294804
nosy: max
priority: normal
severity: normal
status: open
title: Enum does not recognize enum.auto as unique values
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30517>
___



[issue30488] Documentation for subprocess.STDOUT needs clarification

2017-05-26 Thread Max

New submission from Max:

The documentation states that subprocess.STDOUT is:

Special value that can be used as the stderr argument to Popen and indicates 
that standard error should go into the same handle as standard output.

However, when Popen is called with stdout=None, stderr=subprocess.STDOUT, 
stderr is not redirected to stdout and continues to be sent to stderr.

To reproduce the problem:

$ python >/dev/null -c 'import subprocess;\
subprocess.call(["ls", "/404"], stderr=subprocess.STDOUT)'

and observe the error message appearing on the console (assuming /404 directory 
does not exist).

This was reported on SO 5 years ago: 
https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout.

The SO answer attributed this to a documentation issue, but arguably it should 
be considered a bug, because there seems to be no reason to make 
subprocess.STDOUT unusable in this very common use case.
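For reference, the merge does work once stdout is itself redirected to a real 
handle (a Python 3 sketch):
```
import subprocess

# With stdout captured, stderr=STDOUT really does share the handle:
result = subprocess.run(["ls", "/404"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
print(result.stdout)  # the 'ls: cannot access ...' message, via stdout
```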

--
components: Interpreter Core
messages: 294560
nosy: max
priority: normal
severity: normal
status: open
title: Documentation for subprocess.STDOUT needs clarification
type: behavior
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30488>
___



[issue29842] Make Executor.map work with infinite/large inputs correctly

2017-05-15 Thread Max

Max added the comment:

Correction: this PR is useful for `ProcessPoolExecutor` as well. I thought the 
`chunksize` parameter handled infinite generators already, but I was wrong. 
And, as long as the number of items prefetched is a multiple of `chunksize`, 
there are no issues with the chunksize optimization either.

And a minor correction: when listing the advantages of this PR, I should have 
said: "In addition, if the pool is not busy when `map` is called, your 
implementation will also be more responsive, since it will yield the first 
result earlier."

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29842>
___



[issue29842] Make Executor.map work with infinite/large inputs correctly

2017-05-14 Thread Max

Max added the comment:

I'm also concerned about this (undocumented) inconsistency between map and 
Executor.map.

I think you would want to make your PR limited to `ThreadPoolExecutor`. The 
`ProcessPoolExecutor` already does everything you want with its `chunksize` 
parameter, and adding `prefetch` to it will jeopardize the optimization for 
which `chunksize` is intended.

Actually, I was even thinking whether it might be worth merging `chunksize` and 
`prefetch` arguments. The semantics of the two arguments is similar but not 
identical. Specifically, for `ProcessPoolExecutor`, there is pretty clear 
pressure to increase the value of `chunksize` to reduce amortized IPC costs; 
there is no IPC with threads, so the pressure to increase `prefetch` is much 
more situational (e.g., in the busy pool example I give below).

For `ThreadPoolExecutor`, I prefer your implementation over the current one, 
but I want to point out that it is not strictly better, in the sense that *with 
default arguments*, there are situations where the current implementation 
behaves better.

In many cases your implementation behaves much better. If the input is too 
large, it prevents out of memory condition. In addition, if the pool is not 
busy when `map` is called, your implementation will also be faster, since it 
will submit the first input for processing earlier.

But consider the case where input is produced slower than it can be processed 
(`iterables` may fetch data from a database, but the callable `fn` may be a 
fast in-memory transformation). Now suppose the `Executor.map` is called when 
the pool is busy, so there'll be a delay before processing begins. In this 
case, the most efficient approach is to get as much input as possible while the 
pool is busy, since eventually (when the pool is freed up) it will become the 
bottleneck. This is exactly what the current implementation does.

The implementation you propose will (by default) only prefetch a small number 
of input items. Then when the pool becomes available, it will quickly run out 
of prefetched input, and so it will be less efficient than the current 
implementation. This is especially unfortunate since the entire time the pool 
was busy, `Executor.map` is just blocking the main thread so it's literally 
doing nothing useful.

Of course, the client can tweak `prefetch` argument to achieve better 
performance. Still, I wanted to make sure this issue is considered before the 
new implementation is adopted.

From the performance perspective, an even more efficient implementation would 
be one that uses three background threads:

- one to prefetch items from the input
- one to sends items to the workers for processing
- one to yield results as they become available

It has a disadvantage of being slightly more complex, so I don't know if it 
really belongs in the standard library.

Its advantage is that it will waste less time: it fetches inputs without pause, 
it submits them for processing without pause, and it makes results available to 
the client as soon as they are processed. (I have implemented and tried this 
approach, but not in production.)

But even this implementation requires tuning. In the case with the busy pool 
that I described above, one would want to prefetch as much input as possible, 
but that may cause too much memory consumption and also possibly waste 
computation resources (if the most of input produced proves to be unneeded in 
the end).

--
nosy: +max

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29842>
___



[issue30026] Hashable doesn't check for __eq__

2017-04-09 Thread Max

Max added the comment:

Sorry, this should be just a documentation issue.

I just realized that __eq__ = None isn't correct anyway, so instead we should 
just document that Hashable cannot check for __eq__ and that explicitly 
deriving from Hashable suppresses hashability.

--
components:  -Interpreter Core

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30026>
___



[issue30026] Hashable doesn't check for __eq__

2017-04-09 Thread Max

New submission from Max:

I think collections.abc.Hashable.__subclasshook__ should check __eq__ method in 
addition to __hash__ method. This helps detect classes that are unhashable due 
to:

__eq__ = None

Of course, it still cannot detect:

def __eq__(self, other): return NotImplemented

but it's better than nothing.

In addition, it's probably worth documenting that explicitly inheriting from 
Hashable has (correct but unexpected) effect of *suppressing* hashability that 
was already present:

from collections.abc import Hashable
class X: pass
assert issubclass(X, Hashable)
x = X()

class X(Hashable): pass
assert issubclass(X, Hashable)
x = X() # Can't instantiate abstract class X with abstract methods

--
assignee: docs@python
components: Documentation, Interpreter Core
messages: 291382
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Hashable doesn't check for __eq__

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30026>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29982] tempfile.TemporaryDirectory fails to delete itself

2017-04-04 Thread Max

New submission from Max:

There's a known issue with `shutil.rmtree` on Windows, in that it fails 
intermittently. 

The issue is well known 
(https://mail.python.org/pipermail/python-dev/2013-September/128353.html), and 
the agreement is that it cannot be cleanly solved inside `shutil` and should 
instead be solved by the calling app. Specifically, python devs themselves 
faced it in their test suite and solved it by retrying delete.

However, what to do about `tempfile.TemporaryDirectory`? Is it considered the 
calling app, and therefore should retry delete when it calls `shutil.rmtree` in 
its `cleanup` method?

I don't think `tempfile` is protected by the same argument that `shutil.rmtree` 
is protected, in that it's too messy to solve it in the standard library. My 
rationale is that while it's very easy for the end user to retry 
`shutil.rmtree`, it's far more difficult to fix the problem with 
`tempfile.TempDirectory` not deleting itself - how would the end user retry the 
`cleanup` method (which is called from `weakref.finalizer`)?

So perhaps the retry loop should be added to `cleanup`.
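
For concreteness, a minimal sketch of such a retry loop (attempt counts and 
delays are illustrative, not a tested fix):

```
import shutil
import time

def rmtree_with_retry(path, attempts=5, delay=0.1):
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)   # give the other process time to release handles
            delay *= 2
```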

--
components: Library (Lib)
messages: 291130
nosy: max
priority: normal
severity: normal
status: open
title: tempfile.TemporaryDirectory fails to delete itself
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29982>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29795] Clarify how to share multiprocessing primitives

2017-03-12 Thread Max

Max added the comment:

Actually, never mind, I think one of the paragraphs in the Programming 
Guidelines ("Explicitly pass resources to child processes") basically explains 
everything already. I just didn't notice it until @noxdafox pointed it out to 
me on SO.

Close please.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29795>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Arparse improperly handles "-_"

2017-03-12 Thread Max Rothman

Max Rothman added the comment:

I think that makes sense, but there's still an open question: what should the 
correct way be to allow dashes to be present at the beginning of positional 
arguments?

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29795] Clarify how to share multiprocessing primitives

2017-03-12 Thread Max

Max added the comment:

Somewhat related is this statement from Programming Guidelines:

> When using the spawn or forkserver start methods many types from 
> multiprocessing need to be picklable so that child processes can use them. 
> However, one should generally avoid sending shared objects to other processes 
> using pipes or queues. Instead you should arrange the program so that a 
> process which needs access to a shared resource created elsewhere can inherit 
> it from an ancestor process.

Since on Windows, even "inheritance" is really the same pickle + pipe executed 
inside CPython, I assume the entire paragraph is intended for UNIX platform 
only (might be worth clarifying, btw).

On Linux, "inheritance" works faster, and can deal with more complex objects 
compared to pickle with pipe/queue -- but it's equally true whether it's 
inheritance through global variables or through arguments to the target 
function. There's no reason, then, to prefer global variables over arguments.

So the text I proposed earlier wouldn't conflict with this one. It would just 
encourage programmers to use function arguments instead of global variables: 
because it's doesn't matter on Linux but makes the code portable to Windows.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29795>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29797] Deadlock with multiprocessing.Queue()

2017-03-12 Thread Max

Max added the comment:

Yes, this makes sense. My bad, I didn't realize processes might need to wait 
until the queue is consumed.

I don't think there's any need to update the docs either, nobody should have 
production code that never reads the queue (mine was a test of some other 
issue).
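
For the record, the workaround that follows from this is to drain the queue 
before joining; a sketch against the variables of the example in the original 
report below:

```
results = [q.get() for _ in range(n_workers * n_results)]
for p in proc_list:
    p.join()   # now safe: the feeder threads have flushed their buffers
```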

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29797>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29797] Deadlock with multiprocessing.Queue()

2017-03-11 Thread Max

New submission from Max:

Using multiprocessing.Queue() with several processes writing very fast results 
in a deadlock both on Windows and UNIX.

For example, this code:

from multiprocessing import Process, Queue, Manager
import time, sys

def simulate(q, n_results):
    for i in range(n_results):
        time.sleep(0.01)
        q.put(i)

def main():
    n_workers = int(sys.argv[1])
    n_results = int(sys.argv[2])

    q = Queue()
    proc_list = [Process(target=simulate,
                         args=(q, n_results),
                         daemon=True) for i in range(n_workers)]

    for proc in proc_list:
        proc.start()

    for i in range(5):
        time.sleep(1)
        print('current approximate queue size:', q.qsize())
        alive = [p.pid for p in proc_list if p.is_alive()]
        if alive:
            print(len(alive), 'processes alive; among them:', alive[:5])
        else:
            break

    for p in proc_list:
        p.join()

    print('final appr queue size', q.qsize())


if __name__ == '__main__':
    main()


hangs on Windows 10 (python 3.6) with 2 workers and 1000 results each, and on 
Ubuntu 16.04 (python 3.5) with 100 workers and 100 results each. The print out 
shows that the queue has reached the full size, but a bunch of processes are 
still alive. Presumably, they somehow manage to lock themselves out even though 
they don't depend on each other (must be in the implementation of Queue()):

current approximate queue size: 9984
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]
current approximate queue size: 1
47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]

The deadlock disappears once multiprocessing.Queue() is replaced with 
multiprocessing.Manager().Queue() - or at least I wasn't able to replicate it 
with a reasonable number of processes and results.

--
components: Library (Lib)
messages: 289479
nosy: max
priority: normal
severity: normal
status: open
title: Deadlock with multiprocessing.Queue()
type: behavior
versions: Python 3.5, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29797>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29795] Clarify how to share multiprocessing primitives

2017-03-11 Thread Max

Max added the comment:

How about inserting this text somewhere:

Note that sharing and synchronization objects (such as `Queue()`, `Pipe()`, 
`Manager()`, `Lock()`, `Semaphore()`) should be made available to a new process 
by passing them as arguments to the `target` function invoked by the `run()` 
method. Making these objects visible through global variables will only work 
when the process was started using `fork` (and as such sacrifices portability 
for no special benefit).

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29795>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29795] Clarify how to share multiprocessing primitives

2017-03-11 Thread Max

New submission from Max:

It seems that both I and many other people (judging from SO questions) are 
confused about whether it's ok to write this:

from multiprocessing import Process, Queue
q = Queue()

def f():
    q.put([42, None, 'hello'])

def main():
    p = Process(target=f)
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()

if __name__ == '__main__':
    main()

It's not ok (doesn't work on Windows presumably because somehow when it's 
pickled, the connection between global queues in the two processes is lost; 
works on Linux, because I guess fork keeps more information than pickle, so the 
connection is maintained).

I thought it would be good to clarify in the docs that all the Queue() and 
Manager().* and other similar objects should be passed as parameters not just 
defined as globals.
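
A portable rewrite of the snippet above, passing the queue explicitly instead 
of relying on a global, might look like this (a sketch of the pattern the docs 
could show):

```
from multiprocessing import Process, Queue

def f(q):                         # the queue arrives as an argument
    q.put([42, None, 'hello'])

def main():
    q = Queue()                   # created in the parent process
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())                # prints "[42, None, 'hello']" on all platforms
    p.join()

if __name__ == '__main__':
    main()
```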

--
assignee: docs@python
components: Documentation
messages: 289454
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Clarify how to share multiprocessing primitives
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29795>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Arparse improperly handles "-_"

2017-03-04 Thread Max Rothman

Max Rothman added the comment:

Martin: huh, I didn't notice that documentation. The error message definitely 
could be improved.

It still seems like an odd choice given that argparse knows about the expected 
spec, so it knows whether there are any options or not. Perhaps one could 
enable/disable this cautious behavior with a flag passed to ArgumentParser? It 
was rather surprising in my case, since I was parsing morse code and the 
arguments were random combinations of "-", "_", and "*", so it wasn't 
immediately obvious what the issue was.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29715] Arparse improperly handles "-_"

2017-03-03 Thread Max Rothman

New submission from Max Rothman:

In the case detailed below, argparse.ArgumentParser improperly parses the 
argument string "-_":
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('first')
print(parser.parse_args(['-_']))
```

Expected behavior: prints Namespace(first='-_')
Actual behavior: prints usage message

The issue seems to be specific to the string "-_". Either character alone or 
both in the opposite order does not trigger the issue.

--
components: Library (Lib)
messages: 288929
nosy: Max Rothman
priority: normal
severity: normal
status: open
title: Arparse improperly handles "-_"
type: behavior
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29715>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29597] __new__ / __init__ calls during unpickling not documented correctly

2017-02-18 Thread Max

New submission from Max:

According to the 
[docs](https://docs.python.org/3/library/pickle.html#pickling-class-instances):

> Note: At unpickling time, some methods like `__getattr__()`, 
> `__getattribute__()`, or `__setattr__()` may be called upon the instance. In 
> case those methods rely on some internal invariant being true, the type 
> should implement `__getnewargs__()` or `__getnewargs_ex__()` to establish 
> such an invariant; otherwise, neither `__new__()` nor `__init__()` will be 
> called.

It seems, however, that this note is incorrect. First, `__new__` is called even 
if `__getnewargs__` isn't implemented. Second, `__init__` is not called even if 
it is (while the note didn't say that `__init__` would be called when 
`__getnewargs__` is defined, the wording does seem to imply it).


import pickle

class A:
    def __new__(cls, *args):
        print('__new__ called with', args)
        return object.__new__(cls)

    def __init__(self, *args):
        print('__init__ called with', args)
        self.args = args

    def __getnewargs__(self):
        print('called')
        return ()

a = A(1)
s = pickle.dumps(a)
a = pickle.loads(s)  # __new__ called, not __init__
delattr(A, '__getnewargs__')
a = A(1)
s = pickle.dumps(a)
a = pickle.loads(s)  # __new__ called, not __init__
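
For completeness, `__init__` is bypassed by design; instance state is restored 
afterwards from `__dict__`, or via `__setstate__` if one is defined. A sketch 
(class name illustrative):

```
import pickle

class B:
    def __init__(self, x):
        print('__init__ called')
        self.x = x

    def __setstate__(self, state):
        print('__setstate__ called with', state)
        self.__dict__.update(state)

b = B(1)                            # __init__ called
b2 = pickle.loads(pickle.dumps(b))  # __setstate__ called with {'x': 1}; no __init__
```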

--
assignee: docs@python
components: Documentation
messages: 288088
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: __new__ / __init__ calls during unpickling not documented correctly
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29597>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29415] Exposing handle._callback and handle._args in asyncio

2017-02-01 Thread Max

Max added the comment:

@yselivanov I just wanted to use the handler to avoid storing the callback and 
args in my own data structure (I would just store the handlers whenever I may 
need to reschedule). Not a big deal, I don't have to use handler as a storage 
space, if it's not supported across implementations.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29415] Exposing handle._callback and handle._args in asyncio

2017-02-01 Thread Max

New submission from Max:

Is it safe to use the _callback and _args attributes of asyncio.Handle? Is it 
possible to officially expose them as public API?

My use case: 

handle = event_loop.call_later(delay, callback)

# this function can be triggered by some events
def reschedule(handle):
  event_loop.call_later(new_delay, handle._callback, *handle._args)
  handle.cancel()
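
In the meantime, a small wrapper that keeps the callback and args itself works 
on any implementation without touching private attributes (a sketch; the class 
name is illustrative):

```
class ReschedulableTimer:
    def __init__(self, loop, delay, callback, *args):
        self.loop, self.callback, self.args = loop, callback, args
        self.handle = loop.call_later(delay, callback, *args)

    def reschedule(self, new_delay):
        self.handle.cancel()
        self.handle = self.loop.call_later(new_delay, self.callback, *self.args)
```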

--
components: asyncio
messages: 286709
nosy: gvanrossum, max, yselivanov
priority: normal
severity: normal
status: open
title: Exposing handle._callback and handle._args in asyncio
type: enhancement
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28785] Clarify the behavior of NotImplemented

2016-11-24 Thread Max

Max added the comment:

Martin - what you suggest is precisely what I had in mind (but didn't phrase it 
as well):

> to document the above sort of behaviour as being directly associated with 
> operations like as == and !=, and only indirectly associated with the 
> NotImplemented object and the __eq__() method

Also a minor typo: you meant "If that call returns NotImplemented, the first 
fallback is to try the *reverse* call."

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28785>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28785] Clarify the behavior of NotImplemented

2016-11-24 Thread Max

New submission from Max:

Currently, there's no clear statement as to what exactly the fallback is in 
case `__eq__` returns `NotImplemented`.  It would be good to clarify the 
behavior of `NotImplemented`; at least for `__eq__`, but perhaps also other 
rich comparison methods. For example: "When `NotImplemented` is returned from a 
rich comparison method, the interpreter behaves as if the rich comparison 
method was not defined in the first place." See 
http://stackoverflow.com/questions/40780004/returning-notimplemented-from-eq 
for more discussion.
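
To illustrate the behavior in question (observed in current CPython, which is 
exactly what the docs don't spell out):

```
class C:
    def __eq__(self, other):
        return NotImplemented

c = C()
print(c == c)    # True: both calls return NotImplemented, == falls back to identity
print(c == C())  # False: same fallback, different objects
```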

--
assignee: docs@python
components: Documentation
messages: 281616
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: Clarify the behavior of NotImplemented
type: enhancement
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28785>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue27972] Confusing error during cyclic yield

2016-10-10 Thread Max von Tettenborn

Max von Tettenborn added the comment:

You are very welcome, glad I could help.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27972>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue27972] Confusing error during cyclic yield

2016-09-06 Thread Max von Tettenborn

New submission from Max von Tettenborn:

Below code reproduces the problem. The resulting error is a RecursionError and 
it is very hard to trace that to the cause of the problem, which is the runner 
task and the stop task yielding from each other, forming a deadlock.

I think, an easy to make mistake like that should raise a clearer exception. 
And maybe I am mistaken, but it should in principle be possible for the event 
loop to detect a cyclic yield, right?


import asyncio


class A:
    @asyncio.coroutine
    def start(self):
        self.runner_task = asyncio.ensure_future(self.runner())

    @asyncio.coroutine
    def stop(self):
        self.runner_task.cancel()
        yield from self.runner_task

    @asyncio.coroutine
    def runner(self):
        try:
            while True:
                yield from asyncio.sleep(5)
        except asyncio.CancelledError:
            yield from self.stop()
            return


def do_test():
    @asyncio.coroutine
    def f():
        a = A()
        yield from a.start()
        yield from asyncio.sleep(1)
        yield from a.stop()

    asyncio.get_event_loop().run_until_complete(f())
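
For reference, one way to break the cycle in this particular example is to let 
the cancellation propagate instead of yielding back into stop() (a sketch, not 
a general fix for detecting such cycles):

```
@asyncio.coroutine
def runner(self):
    try:
        while True:
            yield from asyncio.sleep(5)
    except asyncio.CancelledError:
        raise  # stop() is already waiting on runner_task; don't call it again
```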

--
components: asyncio
messages: 274547
nosy: Max von Tettenborn, gvanrossum, haypo, yselivanov
priority: normal
severity: normal
status: open
title: Confusing error during cyclic yield
type: behavior
versions: Python 3.5

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27972>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14903] dictobject infinite loop in module set-up

2016-07-20 Thread Max Khon

Max Khon added the comment:

I reproduced the problem with Python 2.7.5 as shipped with CentOS 7:

root@192.168.0.86 /home/padmin # python -V
Python 2.7.5
root@192.168.0.86 /home/padmin # rpm -q python
python-2.7.5-34.el7.x86_64
root@192.168.0.86 /home/padmin # 

(gdb) bt
#0  lookdict_string (mp=, key='RPMTAG_OPTFLAGS', 
hash=411442822543039667)
at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:461
#1  0x7f92d6d9f2c9 in insertdict (mp=0x2502600, key='RPMTAG_OPTFLAGS', 
hash=411442822543039667, 
value=1122) at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:559
#2  0x7f92d6d9f3b0 in dict_set_item_by_hash_or_entry (
op={'RPMTAG_HEADERREGIONS': 64, 'RPMTAG_EXCLUSIVEOS': 1062, 'fi': , 'RPMTAG_CHANGELOGNAME': 1081, 'RPMTAG_CONFLICTNEVRS': 
5044, 'RPMTAG_FILECAPS': 5010, 'RPMTAG_FILERDEVS': 1033, 'RPMTAG_COLLECTIONS': 
5029, 'RPMTAG_BUGURL': 5012, 'setStats': , 
'RPMTAG_FILEDIGESTALGO': 5011, 'RPMTAG_DEPENDSDICT': 1145, 'RPMTAG_CLASSDICT': 
1142, 'RPMTAG_FILEMODES': 1030, 'RPMTAG_FILEDEPENDSN': 1144, 
'RPMTAG_BUILDTIME': 1006, 'ii': , 
'RPMTAG_INSTALLCOLOR': 1127, 'RPMTAG_CHANGELOGTEXT': 1082, 
'RPMTAG_HEADERCOLOR': 5017, 'RPMTAG_CONFLICTNAME': 1054, 'RPMTAG_CONFLICTS': 
1054, 'setLogFile': , 'versionCompare': , 'RPMTAG_CONFLICTVERSION': 1055, 'RPMTAG_NVRA': 1196, 
'RPMTAG_NOPATCH': 1052, 'RPMTAG_HEADERI18NTABLE': 100, 
'RPMTAG_LONGARCHIVESIZE': 271, 'RPMTAG_FILEREQUIRE': 5002, 
'RPMTAG_FILEDEPENDSX': 1143, 'RPMTAG_EVR': 5013, 'RPMTAG_INSTALLTIME': 1008, 
 'RPMTAG_NAME': 1000, 'RPMTAG_LONG...(truncated), key=, 
hash=, 
ep=, value=) at 
/usr/src/debug/Python-2.7.5/Objects/dictobject.c:774
#3  0x7f92d6da18a8 in PyDict_SetItemString (
v={'RPMTAG_HEADERREGIONS': 64, 'RPMTAG_EXCLUSIVEOS': 1062, 'fi': , 'RPMTAG_CHANGELOGNAME': 1081, 'RPMTAG_CONFLICTNEVRS': 
5044, 'RPMTAG_FILECAPS': 5010, 'RPMTAG_FILERDEVS': 1033, 'RPMTAG_COLLECTIONS': 
5029, 'RPMTAG_BUGURL': 5012, 'setStats': , 
'RPMTAG_FILEDIGESTALGO': 5011, 'RPMTAG_DEPENDSDICT': 1145, 'RPMTAG_CLASSDICT': 
1142, 'RPMTAG_FILEMODES': 1030, 'RPMTAG_FILEDEPENDSN': 1144, 
'RPMTAG_BUILDTIME': 1006, 'ii': , 
'RPMTAG_INSTALLCOLOR': 1127, 'RPMTAG_CHANGELOGTEXT': 1082, 
'RPMTAG_HEADERCOLOR': 5017, 'RPMTAG_CONFLICTNAME': 1054, 'RPMTAG_CONFLICTS': 
1054, 'setLogFile': , 'versionCompare': , 'RPMTAG_CONFLICTVERSION': 1055, 'RPMTAG_NVRA': 1196, 
'RPMTAG_NOPATCH': 1052, 'RPMTAG_HEADERI18NTABLE': 100, 
'RPMTAG_LONGARCHIVESIZE': 271, 'RPMTAG_FILEREQUIRE': 5002, 
'RPMTAG_FILEDEPENDSX': 1143, 'RPMTAG_EVR': 5013, 'RPMTAG_INSTALLTIME': 1008, '
 RPMTAG_NAME': 1000, 'RPMTAG_LONG...(truncated), key=key@entry=0x7f92c83bf537 
"RPMTAG_OPTFLAGS", 
item=item@entry=1122) at 
/usr/src/debug/Python-2.7.5/Objects/dictobject.c:2448
#4  0x7f92d6e181f2 in PyModule_AddObject (m=m@entry=, 
name=name@entry=0x7f92c83bf537 "RPMTAG_OPTFLAGS", o=o@entry=1122)
at /usr/src/debug/Python-2.7.5/Python/modsupport.c:616
#5  0x7f92d6e182d8 in PyModule_AddIntConstant (m=m@entry=, 
name=name@entry=0x7f92c83bf537 "RPMTAG_OPTFLAGS", value=value@entry=1122)
at /usr/src/debug/Python-2.7.5/Python/modsupport.c:628
#6  0x7f92c85e2b20 in addRpmTags (module=) at 
rpmmodule.c:200
#7  initModule (m=) at rpmmodule.c:343
#8  init_rpm () at rpmmodule.c:281
#9  0x7f92d6e13ed9 in _PyImport_LoadDynamicModule 
(name=name@entry=0x24f69d0 "rpm._rpm", 
pathname=pathname@entry=0x24f79e0 
"/usr/lib64/python2.7/site-packages/rpm/_rpmmodule.so", 
fp=) at /usr/src/debug/Python-2.7.5/Python/importdl.c:53
...

The infinite loop happens when "import rpm" is called in low-memory conditions 
(e.g. when handling ENOMEM or exceptions.MemoryError - RedHat/CentOS 
abrt-addon-python package which installs sys.excepthook handler).

--
nosy: +Max Khon

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue14903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25327] Windows 10 Installation Fails With Corrupt Directory Error

2015-10-06 Thread Max Farrell

New submission from Max Farrell:

Cannot install Python 3.5 64-bit on Windows 10 64-bit Educational Edition.

I have Python 3.4 Installed.

Log included.

--
components: Installation
files: Python 3.5.0 (64-bit)_20151006150920.log
messages: 252423
nosy: Max Farrell
priority: normal
severity: normal
status: open
title: Windows 10 Installation Fails With Corrupt Directory Error
type: compile error
versions: Python 3.5
Added file: http://bugs.python.org/file40705/Python 3.5.0 
(64-bit)_20151006150920.log

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25327>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25327] Python 3.5 Windows 10 Installation Fails With Corrupt Directory Error

2015-10-06 Thread Max Farrell

Max Farrell added the comment:

Windows 10 64-Bit Educational Edition.
Python 3.5 64-bit Installation failed. Directory is corrupt.
Log included.

--
title: Windows 10 Installation Fails With Corrupt Directory Error -> Python 3.5 
Windows 10 Installation Fails With Corrupt Directory Error
Added file: http://bugs.python.org/file40706/Python 3.5.0 
(64-bit)_20151006150920.log

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25327>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21214] PEP8 doesn't verifies last line.

2014-04-14 Thread Max

New submission from Max:

PEP8 doesn't verify the last line at all. Also W292 will never be checked.
Reproducible on PEP8 >= 1.5.0

--
messages: 216072
nosy: f1ashhimself
priority: normal
severity: normal
status: open
title: PEP8 doesn't verifies last line.
type: behavior

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21214
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16296] Patch to fix building on Win32/64 under VS 2010

2014-03-17 Thread Max Naumov

Max Naumov added the comment:

Wouldn't this be more correct?
--- Lib/distutils/msvc9compiler.py  2013-08-03T16:17:08+04:00
+++ Lib/distutils/msvc9compiler.py  2014-03-17T18:36:50.078672+04:00
@@ -411,7 +411,11 @@
                                        '/Z7', '/D_DEBUG']
 
         self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO']
-        if self.__version >= 7:
+        if self.__version >= 10:
+            self.ldflags_shared = [
+                '/DLL', '/nologo', '/INCREMENTAL:NO', '/DEBUG', '/pdb:None', '/Manifest'
+            ]
+        elif self.__version >= 7:
             self.ldflags_shared_debug = [
                 '/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG', '/pdb:None'
             ]

--
nosy: +Max.Naumov
Added file: http://bugs.python.org/file34465/msvc9compilerpatch.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16296
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16296] Patch to fix building on Win32/64 under VS 2010

2014-03-17 Thread Max Naumov

Max Naumov added the comment:

It makes it possible to install numpy on Windows Python 3.4, just like the 
patch in the original post. Actually my patch is merely the original patch 
refactored.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16296
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19864] multiprocessing Proxy docs need locking semantics explained

2013-12-02 Thread Max Polk

New submission from Max Polk:

In the docs (both Python 2 and 3) for the multiprocessing module, the Proxy 
section needs to be explicit about the fact that is does NOT create locks 
around access to shared objects.

Why?  Because on the same page, we read about 
multiprocessing.managers.SyncManager's Queue method to create a shared 
queue.Queue object and return a proxy for it, next to other methods like 
SyncManager.list to create a process safe list.

But later, a willful violation of these semantics exists in the Using a remote 
manager section where a Proxy is implicitly created (via the register method 
of multiprocessing.managers.BaseManager) surrounding a Queue.Queue object.

Note that it did not create a proxy around a SyncManager.Queue, it created it 
around a Queue.Queue.  A user who copies this code and replaces Queue.Queue 
with a plain Python list gets the sense that all the needed locks will be 
created to protect the shared list.

However, due to lack of documentation in the Proxy section, the user will not 
know it's an unsafe list, and Proxy isn't helping them.  I'm guessing that 
Queue.Queue has its own locks to protect it in a process-shared setting, and we 
lucked out in this example, but an unwary reader's luck runs out if they 
replace it with a plain Python list.

Therefore, may I suggest two changes: (1) replace Queue.Queue with 
SyncManager.Queue in the Using a remote manager section to avoid misleading 
readers; and (2) be explicit in Proxy class docs that no locks are created to 
protect against concurrent access, and maybe add that the user must go to the 
multiprocessing.managers.SyncManager methods (Queue, list, etc) to get process 
safe objects to place behind the Proxy.
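
For illustration, the process-safe pattern looks like this (a sketch; the 
manager serializes each proxied method call, so the appends cannot be lost):

```
from multiprocessing import Manager, Process

def add_items(shared):
    for i in range(1000):
        shared.append(i)      # each append is a single call to the manager

if __name__ == '__main__':
    manager = Manager()
    shared = manager.list()   # manager-owned list behind a proxy
    workers = [Process(target=add_items, args=(shared,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(len(shared))        # 4000: no lost updates
```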

--
assignee: docs@python
components: Documentation
messages: 205039
nosy: docs@python, maxpolk
priority: normal
severity: normal
status: open
title: multiprocessing Proxy docs need locking semantics explained

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19864
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18289] Segmentation Fault using round()

2013-06-23 Thread Max Kaye

New submission from Max Kaye:

Python 2.7.3 (v2.7.3:70274d53c1dd, Apr  9 2012, 20:52:43) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print round(1.123456, 3)
1.123
>>> print round(1.123756, 3)
Segmentation fault: 11

This doesn't happen if you just do the second round, only if you do both.
`python -c "print round(1.123456, 3); print round(1.123756, 3)"` doesn't 
segfault.

Doesn't segfault on brew's python:
Python 2.7.3 (default, May 19 2013, 04:22:38) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.51)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print round(1.123456, 3)
1.123
>>> print round(1.123756, 3)
1.124
>>>

Doesn't segfault on another box:
Python 2.7.2+ (default, Jul 20 2012, 22:12:53) 
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print round(1.123456, 3)
1.123
>>> print round(1.123756, 3)
1.124
>>>

OSX Log File: (goes to EOF)

Process: Python [5423]
Path:
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier:  Python
Version: 2.7.3 (2.7.3)
Code Type:   X86-64 (Native)
Parent Process:  bash [1219]
Responsible: iTerm [442]
User ID: 501

Date/Time:   2013-06-24 13:55:56.871 +1000
OS Version:  Mac OS X 10.9 (13A476u)
Report Version:  11
Anonymous UUID:  D8CE5653-35DD-5963-C8C9-E5012E41FDEE

Sleep/Wake UUID: 9C40804C-F025-4E16-A61A-D1E9D9F68DD3

Crashed Thread:  0  Dispatch queue: com.apple.main-thread

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x

VM Regions Near 0:
-- 
__TEXT 0001-00011000 [4K] r-x/rwx 
SM=COW  
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   readline.so 0x0001002eff97 call_readline + 647
1   org.python.python   0x00018852 PyOS_Readline + 274
2   org.python.python   0x0001a0a8 tok_nextc + 104
3   org.python.python   0x0001a853 PyTokenizer_Get + 147
4   org.python.python   0x0001544a parsetok + 218
5   org.python.python   0x0001000e7722 PyParser_ASTFromFile 
+ 146
6   org.python.python   0x0001000e8983 
PyRun_InteractiveOneFlags + 243
7   org.python.python   0x0001000e8c6e 
PyRun_InteractiveLoopFlags + 78
8   org.python.python   0x0001000e9451 PyRun_AnyFileExFlags 
+ 161
9   org.python.python   0x00010010006d Py_Main + 3085
10  org.python.python   0x00010f14 0x1 + 3860

Thread 0 crashed with X86 Thread State (64-bit):
  rax: 0x  rbx: 0x00010060  rcx: 0x00010060  
rdx: 0x2800
  rdi: 0x  rsi: 0x0001002f0254  rbp: 0x7fff5fbff620  
rsp: 0x7fff5fbff550
   r8: 0x00010060   r9: 0x  r10: 0x0007  
r11: 0x0001
  r12: 0x0001  r13: 0x0018  r14: 0x7fff5fbff5e0  
r15: 0x7fff5fbff560
  rip: 0x0001002eff97  rfl: 0x00010206  cr2: 0x
  
Logical CPU: 4
Error Code:  0x0004
Trap Number: 14


Binary Images:
   0x1 -0x10fff +org.python.python (2.7.3 - 2.7.3) 
BE41DDF4-595E-0D6D-89DB-413749B339C3 
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
   0x13000 -0x10016dff7 +org.python.python (2.7.3, [c] 
2004-2012 Python Software Foundation. - 2.7.3) 
4F9EF48A-7D0C-0C1A-670B-3BF4E72C8696 
/Library/Frameworks/Python.framework/Versions/2.7/Python
   0x1002ee000 -0x1002f0fff +readline.so (???) 
A33567B3-2793-9387-FD19-41FFD86C18E5 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/readline.so
   0x1004f -0x10050effb  libedit.2.dylib (39) 
24343EDB-E64F-34DC-8632-68C9E44E0FF7 /usr/lib/libedit.2.dylib
0x7fff66a19000 - 0x7fff66a4c987  dyld (236) 
B508800F-8D03-3886-A1DA-2F0C44D9B794 /usr/lib/dyld
0x7fff8201f000 - 0x7fff82026ff3  libcopyfile.dylib (103) 
7925E83E-6C96-38AD-9E53-AA9AE7C9E406 /usr/lib/system/libcopyfile.dylib
0x7fff82062000 - 0x7fff82063ff7  libsystem_blocks.dylib (63) 
FAC54B3A-C76F-33E4-8671-D4A81B39AE56 /usr/lib/system/libsystem_blocks.dylib
0x7fff823cd000 - 0x7fff823f4ff3  libsystem_info.dylib (449) 
6080681C-A561-3458-AA51-4DBE6D4E36B2 /usr/lib/system/libsystem_info.dylib
0x7fff8246 - 0x7fff82466fef  libsystem_platform.dylib (20.0.0.0.1) 
E84062C5-5735-3708-A00F-45B6CBBDCE84 /usr/lib/system/libsystem_platform.dylib
0x7fff82514000 - 0x7fff8251bff7  liblaunch.dylib (841) 
0554A59B-30EE-3E01-A5F6-3676F4B060B2 /usr/lib/system

[issue18197] insufficient error checking causes crash on windows

2013-06-15 Thread Max DeLiso

Max DeLiso added the comment:

ok, I checked into this more deeply and I was wrong about a few things. First, 
my patch is worthless - there are several more instances where the retval of 
fileno is passed directly to fstat, and that is totally valid (provided the 
FILE* points to a valid file).

Looking deeper, this call stack originates from a PyCFunction_Call in 
mercurial's native extension, osutil.

 # Call Site
00 MSVCR90!fstat64i32+0xe8
01 python27!dircheck+0x29
02 python27!fill_file_fields+0x18e
03 python27!PyFile_FromFile+0x89
04 osutil+0x176f
05 python27!PyCFunction_Call+0x76

Here's the code in osutil.c (which is part of mercurial)

(osutil.c:554)
#ifndef IS_PY3K
    fp = _fdopen(fd, fpmode);
    if (fp == NULL) {
        _close(fd);
        PyErr_SetFromErrnoWithFilename(PyExc_IOError, name);
        goto bail;
    }

    file_obj = PyFile_FromFile(fp, name, mode, fclose); // this is the call
                                                        // that is the parent
    if (file_obj == NULL) {
        fclose(fp);
        goto bail;
    }

    PyFile_SetBufSize(file_obj, bufsize);
#else

fileno() is actually 'succeeding' and returning a value of 3.
fstat is then throwing the invalid parameter exception, presumably because 3 is 
not a valid file descriptor.
the way fileno() is implemented in M$CRT is really simple: it just copies a 
value at a fixed offset from the pointer passed to it without checking to see 
if the FILE* is valid.

this is why the docs for _fileno say "The result is undefined if stream 
does not specify an open file."

anyways, I don't think this is a bug in python, but rather in the mercurial 
extension.
it's a little tricky to debug on windows because the osutil module gets 
delay-loaded.
I'm taking another pass at it now.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18197
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18197] insufficient error checking causes crash on windows

2013-06-12 Thread Max DeLiso

New submission from Max DeLiso:

hi.

if you cross compile the mercurial native extensions against python 2.7.5 (x64) 
on 64 bit windows 7 and then try to clone something, it will crash. 

I believe the reason for this is that the c runtime functions in the microsoft 
crt will throw a win32 exception if they are given invalid parameters, and 
since the return value of fileno() is not checked in Objects/fileobject.c, if a 
file handle is passed to fileno and the result is not a valid file descriptor, 
that invalid decriptor will get passed to _fstat64i32, an invalid parameter 
exception will be raised, and the program will crash.

here's the function with the alleged bug:

static PyFileObject*
dircheck(PyFileObject* f)
{
#if defined(HAVE_FSTAT) && defined(S_IFDIR) && defined(EISDIR)
    struct stat buf;
    if (f->f_fp == NULL)
        return f;
    if (fstat(fileno(f->f_fp), &buf) == 0 &&   // this line is the problem: fileno's
        S_ISDIR(buf.st_mode)) {                // return value never gets checked
        char *msg = strerror(EISDIR);
        PyObject *exc = PyObject_CallFunction(PyExc_IOError, "(isO)",
                                              EISDIR, msg, f->f_name);
        PyErr_SetObject(PyExc_IOError, exc);
        Py_XDECREF(exc);
        return NULL;
    }
#endif
    return f;
}

here's the stack trace:

   msvcr90.dll!_invalid_parameter()   Unknown
msvcr90.dll!_fstat64i32()  Unknown
python27.dll!dircheck(PyFileObject * f) Line 127C
python27.dll!fill_file_fields(PyFileObject * f, _iobuf * fp, _object * 
name, char * mode, int (_iobuf *) * close) Line 183  C
python27.dll!PyFile_FromFile(_iobuf * fp, char * name, char * mode, int 
(_iobuf *) * close) Line 484C

here's a dump summary:

Dump Summary

Process Name:   python.exe : c:\Python27\python.exe
Process Architecture:   x64
Exception Code: 0xC417
Exception Information:  
Heap Information:   Present

about the patch:

the attached patch fixes that behavior and doesn't break any test cases on 
windows or linux. it applies against the current trunk of cpython. the return 
value of fileno should get checked for correctness anyways, even on *nix. the 
extra overhead is tiny, (one comparison and a conditional jump and a few extra 
bytes of stack space), but you do catch some weird edge cases.  

here are the steps to reproduce:

download the python 2.7.5 installer for windows
download the mercurial 2.6.2 source release
build the native extensions with 64 bit microsoft compilers
try to hg clone any remote repo 
(it should crash)

here are some version strings:

Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on 
win32
Microsoft (R) C/C++ Optimizing Compiler Version 17.00.60315.1 for x64
mercurial 2.6.2

here are some links:

in particular, read the bits about the invalid parameter exception:

_fstat64i32: 
http://msdn.microsoft.com/en-US/library/221w8e43%28v=vs.80%29.aspx 

_fileno:
http://msdn.microsoft.com/en-US/library/zs6wbdhx%28v=vs.80%29.aspx

Please let me know if my patch needs work or if I missed something.
Thanks!

--
components: IO
files: fileobject_fix.patch
hgrepos: 199
keywords: patch
messages: 191012
nosy: maxdeliso
priority: normal
severity: normal
status: open
title: insufficient error checking causes crash on windows
type: crash
versions: Python 2.7
Added file: http://bugs.python.org/file30552/fileobject_fix.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18197
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18197] insufficient error checking causes crash on windows

2013-06-12 Thread Max DeLiso

Changes by Max DeLiso maxdel...@gmail.com:


--
hgrepos:  -199

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18197
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18038] Unhelpful error message on invalid encoding specification

2013-05-22 Thread Max Cantor

New submission from Max Cantor:

When you specify a nonexistent encoding at the top of a file, like so for 
example:

# -*- coding: fakefakefoobar -*-

The following exception occurs:

SyntaxError: encoding problem: with BOM

This is very unhelpful, especially in cases where you might have made a typo in 
the encoding.

--
components: Library (Lib)
messages: 189840
nosy: Max.Cantor
priority: normal
severity: normal
status: open
title: Unhelpful error message on invalid encoding specification
type: behavior
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18038
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17724] urllib -- add_handler method refactoring for clarity

2013-04-17 Thread Max Mautner

Changes by Max Mautner max.maut...@gmail.com:


--
resolution:  - invalid
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17724
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17769] python-config --ldflags gives broken output when statically linking Python with --as-needed

2013-04-16 Thread Max Cantor

New submission from Max Cantor:

On certain Linux distributions such as Ubuntu, the linker is invoked by default 
with --as-needed, which has an undesirable side effect when linking static 
libraries: it is bad at detecting required symbols, and the order of libraries 
on the command line become significant.

Right now, on my Ubuntu 12.10 system with a custom 32-bit version of Python, I 
get the following command output:

mcantor@hpmongo:~$ /opt/pym32/bin/python-config --ldflags
-L/opt/pym32/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 
-Xlinker -export-dynamic

When linking a project with those flags, I get the following error:

/usr/bin/ld: /opt/pym32/lib/python2.7/config/libpython2.7.a(dynload_shlib.o): 
undefined reference to symbol 'dlopen@@GLIBC_2.1'
/usr/bin/ld: note: 'dlopen@@GLIBC_2.1' is defined in DSO 
/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../i386-linux-gnu/libdl.so so try 
adding it to the linker command line
/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../i386-linux-gnu/libdl.so: could not 
read symbols: Invalid operation
collect2: error: ld returned 1 exit status

To resolve the error, I moved -ldl and -lutil *AFTER* -lpython2.7, so the 
relevant chunk of my gcc command line looked like this:

-L/opt/pym32/lib/python2.7/config -lpthread -lm -lpython2.7 -ldl -lutil 
-Xlinker -export-dynamic

I have no idea why --as-needed has such an unpleasant side effect when static 
libraries are being used, and it's arguable from my perspective that this 
behavior is the real bug. However it's equally likely that there's a good 
reason for that behavior, like it causes a slowdown during leap-years on Apple 
IIs or something. So here I am. python-config ought to respect the quirks of 
--as-needed when outputting its ldflags.

--
components: Build, Cross-Build
messages: 187121
nosy: Max.Cantor
priority: normal
severity: normal
status: open
title: python-config --ldflags gives broken output when statically linking 
Python with --as-needed
type: behavior
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17769
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17724] urllib -- add_handler method refactoring for clarity

2013-04-13 Thread Max Mautner

New submission from Max Mautner:

Response handlers are registered with the OpenerDirector class in the 
urllib.request module using the add_handler method--it's a convoluted method 
that I refactored for legibility's sake.

--
files: urllib_add_handler.patch
keywords: patch
messages: 186853
nosy: Max.Mautner
priority: normal
severity: normal
status: open
title: urllib -- add_handler method refactoring for clarity
type: enhancement
versions: Python 3.4
Added file: http://bugs.python.org/file29831/urllib_add_handler.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17724
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16128] hashable documentation error

2012-10-04 Thread Max

New submission from Max:

http://docs.python.org/dev/glossary.html?highlight=hashable says:

"Objects which are instances of user-defined classes are hashable by default; 
they all compare unequal, and their hash value is their id()."

Since x == x returns True by default, "they all compare unequal" isn't quite 
right.

In addition, both the above paragraph and 
http://docs.python.org/dev/reference/datamodel.html?highlight=__eq__#object.__hash__
 say:

"User-defined classes have __eq__() and __hash__() methods by default; with 
them, all objects compare unequal (except with themselves) and x.__hash__() 
returns an appropriate value such that x == y implies both that x is y and 
hash(x) == hash(y)."

This is correct, but may leave some confusion with the reader about what 
happens to a subclass of a built-in class (which doesn't use the default 
behavior, but instead simply inherits the parent's __hash__ and __eq__).

--
assignee: docs@python
components: Documentation
messages: 171935
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: hashable documentation error
type: enhancement
versions: Python 3.2

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16128
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15997] NotImplemented needs to be documented

2012-09-21 Thread Max

Max added the comment:

I agree about "reflected operation", although the wording could be clearer 
("will try the reflected operation" is better worded as "will return the result 
of the reflected operation called on the swapped arguments").

But what does "or some other fallback" mean? And what if the reflected 
operation or the fallback again returns NotImplemented or is actually not 
implemented? Is that covered somewhere else in the docs?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15997
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15981] improve documentation of __hash__

2012-09-20 Thread Max

New submission from Max:

In dev/reference/datamodel#object.__hash__, there are two paragraphs that seem 
inconsistent. The first paragraph seems to say that a class that overrides 
__eq__() *should* explicitly flag itself as unhashable. The next paragraph says 
that a class that overrides __eq__() *will be* flagged unhashable by default. 
Which one is it?

Here are the two paragraphs:

Classes which inherit a __hash__() method from a parent class but change the 
meaning of __eq__() such that the hash value returned is no longer appropriate 
(e.g. by switching to a value-based concept of equality instead of the default 
identity based equality) can explicitly flag themselves as being unhashable by 
setting __hash__ = None in the class definition. Doing so means that not only 
will instances of the class raise an appropriate TypeError when a program 
attempts to retrieve their hash value, but they will also be correctly 
identified as unhashable when checking isinstance(obj, collections.Hashable) 
(unlike classes which define their own __hash__() to explicitly raise 
TypeError).

If a class that overrides __eq__() needs to retain the implementation of 
__hash__() from a parent class, the interpreter must be told this explicitly by 
setting __hash__ = ParentClass.__hash__. Otherwise the inheritance of 
__hash__() will be blocked, just as if __hash__ had been explicitly set to None.
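
For reference, a short demonstration of what the second paragraph describes 
(Python 3; class names illustrative):

```
class Base:
    def __hash__(self):
        return 42

class Child(Base):
    def __eq__(self, other):
        return isinstance(other, Child)
    __hash__ = Base.__hash__   # without this line, Child.__hash__ is None

assert hash(Child()) == 42

class Blocked(Base):
    def __eq__(self, other):
        return isinstance(other, Blocked)

hash(Blocked())   # TypeError: unhashable type: 'Blocked'
```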

--
assignee: docs@python
components: Documentation
messages: 170798
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: improve documentation of __hash__
type: enhancement
versions: Python 3.3

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15981
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue15997] NotImplemented needs to be documented

2012-09-20 Thread Max

New submission from Max:

Quoting from 
http://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy:

NotImplemented
This type has a single value. There is a single object with this value. This 
object is accessed through the built-in name NotImplemented. Numeric methods 
and rich comparison methods may return this value if they do not implement the 
operation for the operands provided. (The interpreter will then try the 
reflected operation, or some other fallback, depending on the operator.) Its 
truth value is true.

This is not a sufficient description of NotImplemented behavior. What does 
"reflected operation" mean (I assume it is other.__eq__(self), but it needs to 
be clarified), and what does "or some other fallback" mean (wouldn't 
developers need to know?)? It also doesn't state what happens if the reflected 
operation or the fallback again returns NotImplemented.

The rest of the documentation doesn't seem to talk about this either, despite 
several mentions of NotImplemented, with references to other sections.

This is particularly serious problem because Python's behavior changed in this 
respect not that long ago.

--
assignee: docs@python
components: Documentation
messages: 170860
nosy: docs@python, max
priority: normal
severity: normal
status: open
title: NotImplemented needs to be documented
type: enhancement
versions: Python 3.2

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15997
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9592] Limitations in objects returned by multiprocessing Pool

2012-09-11 Thread Max

Max added the comment:

I propose to close this issue as fixed.

The first two problems in the OP are now resolved through patches to pickle.

The third problem is addressed by issue5370: it is a documented feature of 
pickle that anyone who defines __setattr__ / __getattr__ that depend on an 
internal state must also take care to restore that state during unpickling. 
Otherwise, the code is not pickle-safe, and by extension, not 
multiprocessing-safe.

--
nosy: +max

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9592
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue14445] Providing more fine-grained control over assert statements

2012-03-29 Thread Max

New submission from Max maxmo...@gmail.com:

Currently the -O optimizer flag disables assert statements.

I want to ask that more fine-grained control is offered to users over the 
assert statements. In many cases, it would be nice to have the option of 
keeping asserts in release code, while still performing optimizations (if any 
are offered in the future). It can be achieved by removing the "disable 
assertions" feature of the -O flag, and instead adding a new flag that does 
nothing but disable asserts.

--
messages: 157070
nosy: max
priority: normal
severity: normal
status: open
title: Providing more fine-grained control over assert statements
type: enhancement
versions: Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14445
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9592] Limitations in objects returned by multiprocessing Pool

2012-03-07 Thread Max Franks

Max Franks eliqui...@gmail.com added the comment:

Issue 3 is not related to the other 2. See this post 
http://bugs.python.org/issue5370. As haypo said, it has to do with unpickling 
objects. The post above gives a solution by using the __setstate__ function.

--
nosy: +eliquious

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9592
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


