ANN: EmPy 4.0.1
r` instead of "fully buffered files"; cleaned up environment variables; "repr" markup replaced with emoji markup; removed literal markups `@)`, `@]`, `@}`; context line markup `@!...` no longer pre-adjusts line; custom markup `@<...>` now parsed more sensibly; filter shortcuts removed; context now tracks column and character count; auxiliary classes moved to `emlib` module; use `argv` instead of `argc` for interpreter arguments. See [Full list of changes between EmPy 3._x_ and 4.0](http://www.alcyone.com/software/empy/ANNOUNCE.html#full-list-of-changes-between-empy-3-x-and-4-0) for a more comprehensive list. -- Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/ San Jose, CA, USA && 37 18 N 121 57 W && Skype erikmaxfrancis To be refutable is not the least charm of a theory. -- Friedrich Nietzsche -- https://mail.python.org/mailman/listinfo/python-list
[issue46935] import of submodule pollutes global namespace
Max Bachmann added the comment: Thanks Dennis. This helped me track down the issue in rapidfuzz. -- resolution: -> not a bug stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue46935> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46935] import of submodule pollutes global namespace
Max Bachmann added the comment: It appears this only occurs when a C extension is involved. When the .so is imported first, it is preferred over the .py file that the user would like to import. I could not find any documentation on this behavior, so I assume that it is not intended. My current workaround is to use a unique name for the C extension and then import everything from a Python file with the corresponding name. -- ___ Python tracker <https://bugs.python.org/issue46935> ___
[issue46935] import of submodule pollutes global namespace
New submission from Max Bachmann : In my environment I installed the following two libraries:
```
pip install rapidfuzz
pip install python-Levenshtein
```
Those two libraries have the following structures:
```
rapidfuzz
|- distance
|  |- __init__.py (from . import Levenshtein)
|  |- Levenshtein.*.so
|- __init__.py (from rapidfuzz import distance)

Levenshtein
|- __init__.py
```
When importing Levenshtein first, everything behaves as expected:
```
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.apply_edit(     Levenshtein.jaro_winkler(    Levenshtein.ratio(
Levenshtein.distance(       Levenshtein.matching_blocks( Levenshtein.seqratio(
Levenshtein.editops(        Levenshtein.median(          Levenshtein.setmedian(
Levenshtein.hamming(        Levenshtein.median_improve(  Levenshtein.setratio(
Levenshtein.inverse(        Levenshtein.opcodes(         Levenshtein.subtract_edit(
Levenshtein.jaro(           Levenshtein.quickmedian(
>>> import rapidfuzz
>>> Levenshtein.
Levenshtein.apply_edit(     Levenshtein.jaro_winkler(    Levenshtein.ratio(
Levenshtein.distance(       Levenshtein.matching_blocks( Levenshtein.seqratio(
Levenshtein.editops(        Levenshtein.median(          Levenshtein.setmedian(
Levenshtein.hamming(        Levenshtein.median_improve(  Levenshtein.setratio(
Levenshtein.inverse(        Levenshtein.opcodes(         Levenshtein.subtract_edit(
Levenshtein.jaro(           Levenshtein.quickmedian(
```
However, when importing rapidfuzz first, `import Levenshtein` imports `rapidfuzz.distance.Levenshtein` instead:
```
>>> import rapidfuzz
>>> Levenshtein
Traceback (most recent call last):
  File "", line 1, in
NameError: name 'Levenshtein' is not defined
>>> import Levenshtein
>>> Levenshtein.
Levenshtein.array(      Levenshtein.normalized_distance(   Levenshtein.similarity(
Levenshtein.distance(   Levenshtein.normalized_similarity(
Levenshtein.editops(    Levenshtein.opcodes(
```
My expectation was that in both cases `import Levenshtein` should import the `Levenshtein` module. I could reproduce this behavior on all Python versions I had available (Python 3.8 - Python 3.10) on Ubuntu and Fedora.
-- components: Interpreter Core messages: 414599 nosy: maxbachmann priority: normal severity: normal status: open title: import of submodule pollutes global namespace type: behavior versions: Python 3.10, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue46935> ___
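The shadowing described above follows from how Python caches imports. A minimal sketch (hypothetical module injection, not the rapidfuzz code itself): `import` consults `sys.modules` before searching the filesystem, so whichever module first claims the name `Levenshtein` wins.

```python
import sys
import types

# Inject a stand-in module under the top-level name "Levenshtein".
# Any later `import Levenshtein` returns this cached entry instead of
# searching sys.path for the real package.
fake = types.ModuleType("Levenshtein")
fake.marker = "injected"
sys.modules["Levenshtein"] = fake

import Levenshtein  # no filesystem search happens here

print(Levenshtein.marker)
```

This is why registering a C extension under a generic top-level name can hijack an unrelated `import` later in the same process.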
[issue15373] copy.copy() does not properly copy os.environ
Max Katsev added the comment: Note that deepcopy doesn't work either, even though it looks like it does at first glance (which is arguably worse, since it's harder to notice):
```
Python 3.8.6 (default, Jun 4 2021, 05:16:01)
>>> import copy, os, subprocess
>>> env_copy = copy.deepcopy(os.environ)
>>> env_copy["TEST"] = "oh no"
>>> os.environ["TEST"]
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/fbcode/platform009/lib/python3.8/os.py", line 675, in __getitem__
    raise KeyError(key) from None
KeyError: 'TEST'
>>> subprocess.run("echo $TEST", shell=True, capture_output=True).stdout.decode()
'oh no\n'
```
-- nosy: +mkatsev ___ Python tracker <https://bugs.python.org/issue15373> ___
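A workaround sketch (my suggestion, not from the thread): `os.environ.copy()` returns a plain dict that is detached from the process environment, so mutating the copy cannot leak into `os.environ` or into child processes the way the deep-copied `_Environ` above does.

```python
import os

# os.environ.copy() produces an ordinary dict snapshot of the environment.
env_copy = os.environ.copy()
env_copy["TEST_COPY_DEMO"] = "oh no"   # hypothetical variable name

# The real environment is untouched by the mutation above.
print("TEST_COPY_DEMO" in os.environ)
```

Passing such a dict as the `env=` argument of `subprocess.run` gives the child a modified environment without side effects on the parent.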
[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"
Max added the comment: Option 1 looks most attractive to me (and will also look most attractive in the rendering, IMHO -- certainly better than "await" "x", in any case). P.S.: OK, thanks for the explanations concerning 3.6 - 3.8. I understand that it won't be fixed for these versions (though I am not certain why not, if it is possible at no cost), but I do not understand why these version labels must be removed. The bug does exist and should simply be treated as "won't fix" for these versions, given that it's not in the "security" category. The fact that it won't be fixed, for whatever reason, should not mean that it is not listed as existing there. -- ___ Python tracker <https://bugs.python.org/issue45393> ___
[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"
Max added the comment: Thanks for fixing the typo; I didn't know how to do that when I spotted it (I'm new to this). You also removed Python versions 3.6, 3.7 and 3.8; however, I just tested on pythonanywhere:
```
>>> sys.version
'3.7.0 (default, Aug 22 2018, 20:50:05) \n[GCC 5.4.0 20160609]'
```
So I can confirm that the bug *is* there on 3.7, so I put this back in the list - unless it was removed in a later 3.7.x (is that what you mean?) and put back in later versions...? It is also in the Python 3.9.7 I'm running on my laptop, so I'd be greatly surprised if it were not present in the other two versions you removed. -- versions: +Python 3.7, Python 3.8 ___ Python tracker <https://bugs.python.org/issue45393> ___
[issue45393] help() on operator precedence has confusing entries "await" "x" and "not" "x"
New submission from Max : Nobody seems to have noticed this AFAICS: If you type, e.g., help('+') to get help on operator precedence, the first column gives a list of operators for each row corresponding to a given precedence. However, the row for "not" (and similarly for "await") has the entry: "not" "x". That looks as if there were two operators, "not" and "x". But the letter x is just an argument to the operator, so it should be: "not x", exactly as for "+x", "-x", "~x", "x[index]" and "x.attribute", where x is likewise not part of the operator but an argument. On the corresponding web page https://docs.python.org/3/reference/expressions.html#operator-summary it is displayed correctly; there are no quotes. -- assignee: docs@python components: Documentation messages: 403321 nosy: MFH, docs@python priority: normal severity: normal status: open title: help() on operator precedence has confusing entries "await" "x" and "not" "x" type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45393> ___
[issue45105] Incorrect handling of unicode character \U00010900
Max Bachmann added the comment: As far as I understood, this is caused by the same underlying reason:
```
>>> s = '123\U00010900456'
>>> s
'123ऀ456'
>>> list(s)
['1', '2', '3', 'ऀ', '4', '5', '6']
# note that everything including the commas is mirrored until ] is reached
>>> s[3]
'ऀ'
>>> list(s)[3]
'ऀ'
>>> ls = list(s)
>>> ls[3] += 'a'
>>> ls
['1', '2', '3', 'ऀa', '4', '5', '6']
```
which, as far as I understood, is the expected behavior when a right-to-left character is encountered. -- ___ Python tracker <https://bugs.python.org/issue45105> ___
[issue45105] Incorrect handling of unicode character \U00010900
Max Bachmann added the comment:

> That is using Python 3.9 in the xfce4-terminal. Which xterm are you using?

This was in the default gnome terminal that is pre-installed on Fedora 34, and on windows I directly opened the Python Terminal. I just installed xfce4-terminal on my Fedora 34 machine, which has exactly the same behavior for me that I had in the gnome terminal.

> But regardless, I cannot replicate the behavior you show where list(s) is different from indexing the characters one by one.

That is what surprised me the most. I just ran into this because this was somehow generated when fuzz testing my code using hypothesis (which uncovered an unrelated bug in my application). However, I was quite confused by the character order when debugging it. My original case was:
```
s1='00'
s2='9010ऀ000\x8dÀĀĀĀ222Ā'
parts = [s2[max(0, i) : min(len(s2), i+len(s1))] for i in range(-len(s1), len(s2))]
for part in parts:
    print(list(part))
```
which produced
```
[]
['9']
['9', '0']
['9', '0', '1']
['9', '0', '1', '0']
['9', '0', '1', '0', 'ऀ']
['9', '0', '1', '0', 'ऀ', '0']
['0', '1', '0', 'ऀ', '0', '0']
['1', '0', 'ऀ', '0', '0', '0']
['0', 'ऀ', '0', '0', '0', '\x8d']
['ऀ', '0', '0', '0', '\x8d', 'À']
['0', '0', '0', '\x8d', 'À', 'Ā']
['0', '0', '\x8d', 'À', 'Ā', 'Ā']
['0', '\x8d', 'À', 'Ā', 'Ā', 'Ā']
['\x8d', 'À', 'Ā', 'Ā', 'Ā', '2']
['À', 'Ā', 'Ā', 'Ā', '2', '2']
['Ā', 'Ā', 'Ā', '2', '2', '2']
['Ā', 'Ā', '2', '2', '2', 'Ā']
['Ā', '2', '2', '2', 'Ā']
['2', '2', '2', 'Ā']
['2', '2', 'Ā']
['2', 'Ā']
['ĀÀ]
```
which has a missing single quote:
- ['ĀÀ]
a changing direction of characters, including the commas:
- ['1', '0', 'ऀ', '0', '0', '0']
and a change of direction back:
- ['ऀ', '0', '0', '0', '\x8d', 'À']

> AFAICT, there is no bug here. It's just confusing how Unicode right-to-left characters in the repr() can modify how it's displayed in the console/terminal.

Yes, it appears the same confusion occurs in other applications like Firefox and VS Code.
Thanks to @eryksun and @steven.daprano for testing and telling me about bidirectional writing in Unicode (the more I know about Unicode, the more it scares me). -- status: pending -> open ___ Python tracker <https://bugs.python.org/issue45105> ___
[issue45105] Incorrect handling of unicode character \U00010900
Max Bachmann added the comment: This is the result of copy-pasting the example posted above on windows using
```
Python 3.7.8 (tags/v3.7.8:4b47a5b6ba, Jun 28 2020, 08:53:46) [MSC v.1916 64 bit (AMD64)] on win32
```
which appears to run into similar problems:
```
>>> s = '0��00'
>>> >>> >>> >>> >>> s
>>> >>> >>> >>> >>> '0ऀ00'
>>> >>> >>> >>> >>> ls = list(s)
>>> >>> >>> >>> >>> >>> ls
>>> >>> >>> >>> ['0', 'ऀ', '0', '0']
>>> >>> >>> >>> >>> >>> s[0]
>>> >>> >>> >>> '0'
>>> >>> >>> >>> >>> >>> s[1]
>>> >>> >>> >>> 'ऀ'
```
-- ___ Python tracker <https://bugs.python.org/issue45105> ___
[issue45105] Incorrect handling of unicode character \U00010900
New submission from Max Bachmann : I noticed strange behavior when using the Unicode character \U00010900 and inserting the character literally. Here is the result on the Python console, both for 3.6 and 3.9:
```
>>> s = '0ऀ00'
>>> s
'0ऀ00'
>>> ls = list(s)
>>> ls
['0', 'ऀ', '0', '0']
>>> s[0]
'0'
>>> s[1]
'ऀ'
>>> s[2]
'0'
>>> s[3]
'0'
>>> ls[0]
'0'
>>> ls[1]
'ऀ'
>>> ls[2]
'0'
>>> ls[3]
'0'
```
It appears that for some reason, in this specific case, the character is actually stored in a different position than shown when printing the complete string. Note that the string already behaves strangely when marking it in the console: when marking the special character, it directly highlights the last 3 characters (probably because it already thinks this character is in the second position). The same behavior does not occur when directly using the Unicode code point:
```
>>> s='000\U00010900'
>>> s
'000ऀ'
>>> s[0]
'0'
>>> s[1]
'0'
>>> s[2]
'0'
>>> s[3]
'ऀ'
```
This was tested using the following Python versions:
```
Python 3.6.0 (default, Dec 29 2020, 02:18:14) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux
Python 3.9.6 (default, Jul 16 2021, 00:00:00) [GCC 11.1.1 20210531 (Red Hat 11.1.1-3)] on linux
```
on Fedora 34 -- components: Unicode messages: 401078 nosy: ezio.melotti, maxbachmann, vstinner priority: normal severity: normal status: open title: Incorrect handling of unicode character \U00010900 type: behavior versions: Python 3.6, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45105> ___
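The resolution reached later in this thread can be checked in a few lines: the code points are stored in logical order, and only the terminal's bidirectional rendering makes U+10900 (a right-to-left character) *appear* to move around.

```python
# U+10900 is stored exactly where it was written; indexing and list()
# agree, regardless of how a bidi-aware terminal displays the repr.
s = '000\U00010900'
print(len(s), s.index('\U00010900'))

assert s[3] == '\U00010900'
assert list(s) == ['0', '0', '0', '\U00010900']
```

On a narrow (UCS-2) build of old Python 2 this character would have occupied two surrogate code units, but on Python 3.3+ `len(s)` is always counted in code points.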
Re: Use Chrome's / Firefox's dev-tools in python
Found this: https://pastebin.com/fvLkSJRp with use-select tags. I'll try to use selenium and select the page. But using the JSON packet that's sent will still be more practical.
Re: Use Chrome's / Firefox's dev-tools in python
Ok, So here's a screenshot: https://ibb.co/2dtGr3c 1 is the website's scrollbar and 2 is Firefox's scrollbar. Seems like it uses a strange embed thing. The packet follows: https://pastebin.com/2qEkhZMN @Martin Di Paola: I sent you the pastebin password per email so that you're the only one who can access it, I just don't want anyone who passes by to be able to see my quotes... What is that CSS tag? I could try to disable it in the inspector.
Re: Use Chrome's / Firefox's dev-tools in python
Already tried this, only works for messages and not for homework etc.
Re: Use Chrome's / Firefox's dev-tools in python
@Curt: That is notifications for the ENT app, I want the notifications for the app named ProNote. ENT is for e-mails and Pronote for homework, quotes, etc.
Re: Use Chrome's / Firefox's dev-tools in python
Hi, Seems like that could be a workable method. Just one clarification: the website has unselectable text; it looks like a strangely generated image, so if I can get the packet containing it, that would be perfect. As I said (I think), logging in with Selenium was already possible, and I could get a screenshot of the page after logging in. If you got this working like an in-browser packet listener capable of seeing packet data, I'd gladly accept the code. I've tried to do this for 3 years now (since I came to that school, basically); looks like it's coming to an end! Thanks!
Re: Use Chrome's / Firefox's dev-tools in python
Hello, Thanks for your answer! Actually my goal is not to automatically get the file once I open the page, but rather to periodically check the site and get a notification when there's new homework or, in the morning, to know when a class is cancelled, so I don't want to have to open the browser every time. I have pretty good javascript knowledge, so if you could explain that idea in more detail, it would be a great help.
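The "check periodically, notify on change" idea can be sketched independently of how the page is fetched. Everything here is hypothetical scaffolding: `fetch` stands in for whatever returns the page bytes (a requests call, or Selenium page source), and a real script would call this on a timer.

```python
import hashlib

_state = {}

def changed(fetch):
    """Return True if the fetched content differs from the previous fetch."""
    digest = hashlib.sha256(fetch()).hexdigest()
    prev = _state.get("digest")
    _state["digest"] = digest
    # The very first fetch has nothing to compare against.
    return prev is not None and prev != digest

# Simulated fetches; in real use, fetch would hit the homework page.
print(changed(lambda: b"homework v1"))  # False: nothing to compare yet
print(changed(lambda: b"homework v1"))  # False: unchanged
print(changed(lambda: b"homework v2"))  # True: new content detected
```

Hashing the body sidesteps parsing the page at all; a refinement would be to hash only the relevant DOM fragment so that timestamps or ads don't trigger false notifications.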
Use Chrome's / Firefox's dev-tools in python
My school has a website for homework called pronote (no problem if you don't know it). After logging in on parisclassenumerique.fr (works with selenium, but I can't get requests to work), I want to read one of the packets that is sent: all the info about my day, my homework, etc. is in there, and it is the perfect file (header, request, response, stack trace). The file's download address looks random, and the login works only for a limited period of time in the same browser. Any ideas for using that dev-tools feature of Firefox, or the same in Chrome? Thanks!
[issue44153] Signaling an asyncio subprocess might raise ProcessLookupError, even if you haven't called .wait() yet
Change by Max Marrone : -- title: Signaling an asyncio subprocess raises ProcessLookupError, depending on timing -> Signaling an asyncio subprocess might raise ProcessLookupError, even if you haven't called .wait() yet ___ Python tracker <https://bugs.python.org/issue44153> ___
[issue44153] Signaling an asyncio subprocess raises ProcessLookupError, depending on timing
New submission from Max Marrone :

# Summary

Basic use of `asyncio.subprocess.Process.terminate()` can raise a `ProcessLookupError`, depending on the timing of the subprocess's exit. I assume (but haven't checked) that this problem extends to `.kill()` and `.send_signal()`. This breaks the expected POSIX semantics of signaling and waiting on a process. See the "Expected behavior" section.

# Test case

I've tested this on macOS 11.2.3 with Python 3.7.9 and Python 3.10.0a7, both installed via pyenv.

```
import asyncio
import sys

# Tested with:
#   asyncio.ThreadedChildWatcher (3.10.0a7 only)
#   asyncio.MultiLoopChildWatcher (3.10.0a7 only)
#   asyncio.SafeChildWatcher (3.7.9 and 3.10.0a7)
#   asyncio.FastChildWatcher (3.7.9 and 3.10.0a7)
# Not tested with asyncio.PidfdChildWatcher because I'm not on Linux.
WATCHER_CLASS = asyncio.FastChildWatcher

async def main():
    # Dummy command that should be executable cross-platform.
    process = await asyncio.subprocess.create_subprocess_exec(
        sys.executable, "--version"
    )

    for i in range(20):
        # I think the problem is that the event loop opportunistically wait()s
        # all outstanding subprocesses on its own. Do a bunch of separate
        # sleep() calls to give it a bunch of chances to do this, for reliable
        # reproduction.
        #
        # I'm not sure if this is strictly necessary for the problem to happen.
        # On my machine, the problem also happens with a single sleep(2.0).
        await asyncio.sleep(0.1)

    process.terminate()  # This unexpectedly errors with ProcessLookupError.
    print(await process.wait())

asyncio.set_child_watcher(WATCHER_CLASS())
asyncio.run(main())
```

The `process.terminate()` call raises a `ProcessLookupError`:

```
Traceback (most recent call last):
  File "kill_is_broken.py", line 29, in
    asyncio.run(main())
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "kill_is_broken.py", line 24, in main
    process.terminate()  # This errors with ProcessLookupError.
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/subprocess.py", line 131, in terminate
    self._transport.terminate()
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", line 150, in terminate
    self._check_proc()
  File "/Users/maxpm/.pyenv/versions/3.7.9/lib/python3.7/asyncio/base_subprocess.py", line 143, in _check_proc
    raise ProcessLookupError()
ProcessLookupError
```

# Expected behavior and discussion

Normally, with POSIX semantics, the `wait()` syscall tells the operating system that we won't send any more signals to that process, and that it's safe for the operating system to recycle that process's PID. This comment from Jack O'Connor on another issue explains it well: https://bugs.python.org/issue40550#msg382427

So, I expect that on any given `asyncio.subprocess.Process`, if I call `.terminate()`, `.kill()`, or `.send_signal()` before I call `.wait()`, then:

* It should not raise a `ProcessLookupError`.
* The asyncio internals shouldn't do anything with a stale PID. (A stale PID is one that used to belong to our subprocess, but that we've since consumed through a `wait()` syscall, allowing the operating system to recycle it.)

asyncio internals are mostly over my head. But I *think* the problem is that the event loop opportunistically calls the `wait()` syscall on our child processes.
So, as implemented, there's a race condition: if the event loop's `wait()` syscall happens to come before my `.terminate()` call, my `.terminate()` call will raise a `ProcessLookupError`. So, as a corollary to the expectations listed above, I think the implementation should be one of:

* Ideally, the asyncio internals should not call the `wait()` syscall on a process until *I* call `wait()` on that process.
* Failing that, `.terminate()`, `.kill()` and `.send_signal()` should no-op if the asyncio internals have already called `wait()` on that process.

-- components: asyncio messages: 393764 nosy: asvetlov, syntaxcoloring, yselivanov priority: normal severity: normal status: open title: Signaling an asyncio subprocess raises ProcessLookupError, depending on timing type: behavior versions: Python 3.10, Python 3.7 ___ Python tracker <https://bugs.python.org/issue44153> ___
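Until the semantics change, callers can get the no-op behavior the report asks for with a small wrapper. This is a workaround sketch (my suggestion, not the tracker's eventual fix): treat `ProcessLookupError` from `terminate()` as "the child already exited".

```python
import asyncio
import sys

async def safe_terminate(process):
    """terminate() that tolerates the child having already been reaped."""
    try:
        process.terminate()
    except ProcessLookupError:
        pass  # the event loop already wait()ed on the child

async def main():
    # Same cross-platform dummy command as the reproducer above.
    process = await asyncio.create_subprocess_exec(sys.executable, "--version")
    # Give the short-lived child ample time to exit and be reaped,
    # which is exactly the window where terminate() used to raise.
    await asyncio.sleep(0.5)
    await safe_terminate(process)
    return await process.wait()

returncode = asyncio.run(main())
```

`returncode` is 0 when the child exited on its own before the signal, or -SIGTERM (-15 on POSIX) in the unlikely case the signal won the race.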
[issue42688] ctypes memory error on Apple Silicon with external libffi
Change by Max Bélanger : -- nosy: +maxbelanger nosy_count: 4.0 -> 5.0 pull_requests: +24011 pull_request: https://github.com/python/cpython/pull/25274 ___ Python tracker <https://bugs.python.org/issue42688> ___
[issue41100] Support macOS 11 and Apple Silicon Macs
Change by Max Bélanger : -- nosy: +maxbelanger nosy_count: 18.0 -> 19.0 pull_requests: +24010 pull_request: https://github.com/python/cpython/pull/25274 ___ Python tracker <https://bugs.python.org/issue41100> ___
[issue26680] Incorporating float.is_integer into Decimal
Change by Max Prokop : -- components: +2to3 (2.x to 3.x conversion tool), Argument Clinic, Build, C API, Cross-Build, Demos and Tools, Distutils, Documentation, asyncio, ctypes nosy: +Alex.Willmer, asvetlov, dstufft, eric.araujo, larry, yselivanov type: enhancement -> compile error Added file: https://bugs.python.org/file49898/Mobile_Signup.vcf ___ Python tracker <https://bugs.python.org/issue26680> ___
[issue43565] PyUnicode_KIND macro does not have the specified return type
New submission from Max Bachmann : The documentation states that the PyUnicode_KIND macro has the following interface:
```
int PyUnicode_KIND(PyObject *o)
```
However, it actually returns a value of the underlying type of the PyUnicode_Kind enum, which could just as well be, e.g., an unsigned int. -- components: C API messages: 389133 nosy: maxbachmann priority: normal severity: normal status: open title: PyUnicode_KIND macro does not have the specified return type type: behavior ___ Python tracker <https://bugs.python.org/issue43565> ___
[issue7856] cannot decode from or encode to big5 \xf9\xd8
Max Bolingbroke added the comment: As of Python 3.7.9 this also affects \xf9\xd6, which should be \u7881 in Unicode. This character is the second character of 宏碁, which is the name of the Taiwanese electronics manufacturer Acer. You can work around the issue using big5hkscs, just like with the original \xf9\xd8 problem. It looks like the F9D6–F9FE characters all come from the Big5-ETen extension (https://en.wikipedia.org/wiki/Big5#ETEN_extensions, https://moztw.org/docs/big5/table/eten.txt), which is so popular that it is a de facto standard. Big5-2003 (mentioned in a comment below) seems to be an extension of Big5-ETen. For what it's worth, whatwg includes these mappings in their own big5 reference tables: https://encoding.spec.whatwg.org/big5.html. Unfortunately Big5 is still in common use in Taiwan. It's pretty funny that Python fails to decode Big5 documents containing the name of one of Taiwan's largest multinationals :-) -- nosy: +batterseapower ___ Python tracker <https://bugs.python.org/issue7856> ___
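The report and its workaround can be checked directly (assuming the issue is still unfixed on the interpreter at hand): the plain big5 codec rejects the ETen-extension byte pair, while big5hkscs, which is a superset of Big5-ETen, decodes it.

```python
# \xf9\xd6 is 碁 (U+7881) in the Big5-ETen extension.
data = b'\xf9\xd6'

try:
    data.decode('big5')
    decoded_by_big5 = True
except UnicodeDecodeError:
    decoded_by_big5 = False

# The workaround from the report: fall back to the big5hkscs superset.
text = data.decode('big5hkscs')
```

In code that must consume real-world Taiwanese Big5 documents, decoding with `big5hkscs` (or the whatwg big5 tables) is the pragmatic choice until the stdlib codec gains the ETen rows.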
[issue43377] _PyErr_Display should be available in the CPython-specific API
Change by Max Bélanger : -- keywords: +patch nosy: +maxbelanger nosy_count: 1.0 -> 2.0 pull_requests: +23495 stage: -> patch review pull_request: https://github.com/python/cpython/pull/24719 ___ Python tracker <https://bugs.python.org/issue43377> ___
[issue43221] German Text Conversion Using Upper() and Lower()
New submission from Max Parry : The German alphabet has four extra characters (ä, ö, ü and ß) when compared to the UK/USA alphabet. Until 2017 the character ß was normally only lower case. Upper case ß was represented by SS. In 2017 upper case ß was introduced, although SS is still often/usually used instead. It is important to note that, as far as I can see, upper case ß and lower case ß are identical. The upper() method converts upper or lower case ß to SS. N.B. ä, ö and ü are handled correctly. Lower() seems to work correctly. Please note that German is my second language and everything I say about the language, its history and its use might not be reliable. Happy to be corrected. -- components: Windows messages: 386938 nosy: Strongbow, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: German Text Conversion Using Upper() and Lower() type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue43221> ___
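The behavior described can be seen in a few lines. One clarification to the report: upper and lower case ß are not identical characters; capital sharp s is a distinct code point, U+1E9E ẞ, and Python follows the Unicode case mappings, where lowercase ß still uppercases to "SS".

```python
# Unicode default case mappings, as implemented by str.upper()/str.lower():
assert 'ß'.upper() == 'SS'        # the long-standing ß -> SS mapping
assert 'ẞ'.lower() == 'ß'         # U+1E9E round-trips down to ß
assert 'straße'.upper() == 'STRASSE'
assert 'ä'.upper() == 'Ä'         # the umlauts behave as expected

print('ß'.upper())
```

Mapping ß to ẞ instead would require locale- or tailoring-aware casing, which the built-in string methods deliberately do not do.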
[issue42629] PyObject_Call not behaving as documented
New submission from Max Bachmann : The documentation of PyObject_Call here: https://docs.python.org/3/c-api/call.html#c.PyObject_Call states that it is the equivalent of the Python expression: callable(*args, **kwargs). So I would expect:
```
PyObject* args = PyTuple_New(0);
PyObject* kwargs = PyDict_New();
PyObject_Call(funcObj, args, kwargs)
```
to behave similarly to
```
args = []
kwargs = {}
func(*args, **kwargs)
```
However, this is not the case, since when I edit kwargs inside
```
PyObject* func(PyObject* /*self*/, PyObject* /*args*/, PyObject* keywds)
{
    PyObject* str = PyUnicode_FromString("test_str");
    PyDict_SetItemString(keywds, "test", str);
}
```
it changes the original dictionary passed into PyObject_Call. I was wondering whether this means that: a) it is not allowed to modify the keywds argument passed to a PyCFunctionWithKeywords, or b) when calling PyObject_Call it is required to copy the kwargs for the call using PyDict_Copy. Neither the documentation of PyObject_Call nor the documentation of PyCFunctionWithKeywords (https://docs.python.org/3/c-api/structures.html#c.PyCFunctionWithKeywords) made this clear to me. -- components: C API messages: 382927 nosy: maxbachmann priority: normal severity: normal status: open title: PyObject_Call not behaving as documented type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue42629> ___
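The Python-level contrast makes the surprise concrete: `f(**kwargs)` hands the callee a *fresh* dict, so mutations inside the callee never reach the caller, whereas the report shows PyObject_Call passing the caller's dict through directly.

```python
# At the Python level, ** unpacking always builds a new dict for the callee.
def f(**kw):
    kw["test"] = "test_str"   # mutate the dict the callee received

kwargs = {}
f(**kwargs)

print(kwargs)  # the caller's dict is unchanged
```

So a C caller that wants the documented `callable(*args, **kwargs)` equivalence with an untrusted callee would, conservatively, pass a copy (e.g. via PyDict_Copy), as the report's option (b) suggests.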
[issue41100] Support macOS 11 and Apple Silicon Macs
Change by Max Desiatov : -- nosy: -MaxDesiatov ___ Python tracker <https://bugs.python.org/issue41100> ___
[issue39603] [security] http.client: HTTP Header Injection in the HTTP method
Max added the comment: I've just noticed an issue with the current version of the patch. It should also include 0x20 (space), since that can also be used to manipulate the request. -- ___ Python tracker <https://bugs.python.org/issue39603> ___
[issue39603] [security] http.client: HTTP Header Injection in the HTTP method
Max added the comment: I agree that the solution is quite restrictive. Restricting to ASCII characters alone would certainly work. -- ___ Python tracker <https://bugs.python.org/issue39603> ___
[issue39603] Injection in http.client
New submission from Max : I recently came across a bug during a pentest that allowed me to perform some really interesting attacks on a target. While originally discovered in requests, I was forwarded to one of the urllib3 developers after agreeing that fixing it at its lowest level would be preferable. I was informed that the vulnerability is also present in http.client and that I should report it here as well. The 'method' parameter is not filtered to prevent the injection from altering the entire request. For example:
```
>>> conn = http.client.HTTPConnection("localhost", 80)
>>> conn.request(method="GET / HTTP/1.1\r\nHost: abc\r\nRemainder:", url="/index.html")
```
This will result in the following request being generated:
```
GET / HTTP/1.1
Host: abc
Remainder: /index.html HTTP/1.1
Host: localhost
Accept-Encoding: identity
```
This was originally found in an HTTP proxy that was utilising Requests. It allowed me to manipulate the original path to access different files from an internal server, since the developers had assumed that the method would filter out non-standard HTTP methods. The recommended solution is to only allow the standard HTTP methods GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH. An alternate solution that would allow programmers to use non-standard methods would be to only support characters [a-z] and stop reading at any special characters (especially newlines and spaces). -- components: Library (Lib) messages: 361710 nosy: maxpl0it priority: normal severity: normal status: open title: Injection in http.client type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue39603> ___
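The reporter's second suggestion can be sketched as a small allow-list check (a hypothetical helper, not the patch that eventually landed in http.client): reject any method containing characters outside the allow-list before it can reach the request line.

```python
import re

# Conservative allow-list: letters only, matching the report's suggestion.
# CR, LF, and space can never appear, so the request line cannot be split.
_METHOD_RE = re.compile(r'[A-Za-z]+\Z')

def validate_method(method):
    if not _METHOD_RE.match(method):
        raise ValueError(f"invalid HTTP method: {method!r}")
    return method

validate_method("GET")  # accepted

try:
    validate_method("GET / HTTP/1.1\r\nHost: abc\r\nRemainder:")
    injected = True
except ValueError:
    injected = False
```

Note the use of `\Z` rather than `$`: in Python regexes `$` also matches just before a trailing newline, which would let `"GET\n"` slip through.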
[issue30825] csv.Sniffer does not detect lineterminator
Change by Max Vorobev : -- keywords: +patch pull_requests: +17708 stage: test needed -> patch review pull_request: https://github.com/python/cpython/pull/18336 ___ Python tracker <https://bugs.python.org/issue30825> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38952] asyncio cannot handle Python3 IPv4Address
Max Coplan added the comment: I’ve submitted a fix for it. While it doesn’t look perfect, it has worked with everything I’ve thrown at it and seems to be a robust and sufficient fix. -- ___ Python tracker <https://bugs.python.org/issue38952> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38952] asyncio cannot handle Python3 IPv4Address
Change by Max Coplan : -- title: asyncio cannot handle Python3 IPv4Address or IPv6 Address -> asyncio cannot handle Python3 IPv4Address ___ Python tracker <https://bugs.python.org/issue38952> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address
Change by Max Coplan : -- keywords: +patch pull_requests: +16913 stage: -> patch review pull_request: https://github.com/python/cpython/pull/17434 ___ Python tracker <https://bugs.python.org/issue38952> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address
New submission from Max Coplan : Trying to use the new Python 3 `IPv4Address`s fails with the following error:

```
File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 1270, in _ensure_resolved
    info = _ipaddr_info(host, port, family, type, proto, *address[2:])
File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 134, in _ipaddr_info
    if '%' in host:
TypeError: argument of type 'IPv4Address' is not iterable
```

-- components: asyncio messages: 357697 nosy: Max Coplan, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio cannot handle Python3 IPv4Address or IPv6 Address versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 ___ Python tracker <https://bugs.python.org/issue38952> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
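[Editor's note: until asyncio accepts ipaddress objects directly, a simple caller-side workaround is to convert the address to str before handing it to the event loop. A minimal sketch:]

```python
import ipaddress

addr = ipaddress.IPv4Address("127.0.0.1")
host = str(addr)  # '127.0.0.1', a plain str that asyncio's helpers can handle

# e.g. loop.create_connection(protocol_factory, host=str(addr), port=8080)
print(host)
```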
[issue38279] multiprocessing example enhancement
Change by Max : -- keywords: +patch pull_requests: +15979 stage: -> patch review pull_request: https://github.com/python/cpython/pull/16398 ___ Python tracker <https://bugs.python.org/issue38279> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38279] multiprocessing example enhancement
New submission from Max Voss : Hello all, I've been trying to understand multiprocessing for a while; I have tried multiple times. The PR is a suggested enhancement to the example that made it "click" for me. Or should I say, produced a working result that made sense to me. Details for each change are in the PR. It's short too. The concept of multiprocessing is easy enough, but the syntax is so unlike regular Python, and so much happens "behind the curtain" so to speak, that it took me a while. When I looked for multiprocessing advice online, many answers seemed unsure if or how their solution worked. Generally I'd like to help write documentation. So this is a test to see how good your issue handling process is too. :) -- assignee: docs@python components: Documentation messages: 353222 nosy: BMV, docs@python priority: normal severity: normal status: open title: multiprocessing example enhancement versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue38279> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: OT: Using a fake Gmail address is probably not a good idea
On Mon, Sep 16, 2019 at 1:56 PM Skip Montanaro wrote: > Mails for someone here who goes by the handle "ast" with a fake > address of n...@gmail.com keep landing in my Gmail spam folder. I > suspect the same is true for all people subscribed to python-list who > use Gmail. Gmail (correctly, I think) can't verify that the mail I > received actually originated there. It is true for any server that is applying DMARC policies. And having to deal with his messages is also very annoying to me. Ast should use a properly invalid email address (e.g. anything ending in .invalid; there are also other reserved domains for such purposes) if he does not want to reveal his real address, instead of making up a possibly valid address. -- https://mail.python.org/mailman/listinfo/python-list
Re: open, close
On Sat, Aug 31, 2019 at 3:43 PM Piet van Oostrum wrote: > > There is a difference here with the construct that the OP mentioned: > > lines = open("foo.txt").readlines() > > In that case the file COULD be closed, but there is no guarantee. It depends > on garbage collection. > In your case the file will not be closed as long as there is still a > reference to it (as in f). When f disappears and all copies of it as well, > the file COULD be closed similarly. > Yes, that is correct. I thought about mentioning the garbage collection and the extra binding for f, but ultimately it does not change the conclusion. The garbage collection is just too unpredictable to rely upon in any scenario where you would deal with many open descriptors in a short period of time, e.g. when opening and processing files in a loop. It is not easy to generalise from such simple examples. After all, if all the program does is process one file and shut down afterwards, this would not be an aspect to worry about. -- https://mail.python.org/mailman/listinfo/python-list
Re: open, close
On Sat, Aug 31, 2019 at 2:22 PM Manfred Lotz wrote: > > Could I use the latter as a substitute for the with-construct? > You can't use the second statement as a proper substitute for the first one. With the context manager, it is ensured that the file is closed. It's more or less equal to a "finally" clause which closes the file descriptor. So as long as the Python runtime environment functions properly, it will be closed. Your second statement on the other hand, is more or less equivalent to: f = open("foo.txt") lines = f.readlines() Close won't be called. -- https://mail.python.org/mailman/listinfo/python-list
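[Editor's note: the difference discussed in this thread can be observed directly; a small self-contained sketch using a temporary file:]

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "foo.txt")
with open(path, "w") as f:
    f.write("one\ntwo\n")

# Context manager: the file is guaranteed closed when the block exits.
with open(path) as f:
    lines = f.readlines()
assert f.closed

# Keeping a reference: nothing closes the file until close() is called
# (or the object happens to be garbage collected, whenever that is).
g = open(path)
lines = g.readlines()
assert not g.closed
g.close()
```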
Socket.py SSM support
Hi, as of right now there appears to be a lack of the setsockopt options required to enable SSM, MCAST_JOIN_SOURCE_GROUP or something akin to that in particular. Is there currently any effort to add those options, or any other workaround to make SSM work in Python natively? Best regards Max -- https://mail.python.org/mailman/listinfo/python-list
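[Editor's note: one common workaround is to pack the struct for setsockopt by hand. This is a hedged sketch: the option value 39 is the Linux IP_ADD_SOURCE_MEMBERSHIP constant and the field order below is the Linux struct ip_mreq_source layout; both are assumptions to be checked against <netinet/in.h> on your platform.]

```python
import socket
import struct

# Not exposed by the socket module (the gap described above);
# 39 is IP_ADD_SOURCE_MEMBERSHIP on Linux -- an assumption, verify locally.
IP_ADD_SOURCE_MEMBERSHIP = 39

def ssm_join_request(group, source, iface="0.0.0.0"):
    """Pack struct ip_mreq_source: imr_multiaddr, imr_interface, imr_sourceaddr."""
    return struct.pack(
        "4s4s4s",
        socket.inet_aton(group),
        socket.inet_aton(iface),
        socket.inet_aton(source),
    )

# Usage sketch (not executed here):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
#                 ssm_join_request("232.1.1.1", "10.0.0.1"))
```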
[issue35787] shlex.split inserts extra item on backslash space space
New submission from Max : I believe in both cases below, the output should be ['a', 'b']; the extra ' ' inserted in the list is incorrect:

python3.6
Python 3.6.2 (default, Aug 4 2017, 14:35:04) [GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shlex
>>> shlex.split('a \ b')
['a', ' b']
>>> shlex.split('a \  b')
['a', ' ', 'b']
>>>

Doc reference: https://docs.python.org/3/library/shlex.html#parsing-rules > Non-quoted escape characters (e.g. '\') preserve the literal value of the > next character that follows; I believe this implies that backslash space should be just a space; and then two adjacent spaces should be used (just like a single space) as a separator between arguments. -- components: Library (Lib) messages: 334081 nosy: max priority: normal severity: normal status: open title: shlex.split inserts extra item on backslash space space versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue35787> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
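[Editor's note: the reported behaviour reproduces as below; in Python source form the backslash is doubled, and the second call has two spaces after the escape.]

```python
import shlex

# Backslash escapes the following space, gluing it to the next token:
print(shlex.split('a \\ b'))    # ['a', ' b']

# With a second space, the escaped space becomes a token of its own:
print(shlex.split('a \\  b'))   # ['a', ' ', 'b']
```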
[issue35203] Windows Installer Ignores Launcher Installer Options Where The Python Launcher Is Already Present
Change by Max Bowsher : -- nosy: +Max Bowsher ___ Python tracker <https://bugs.python.org/issue35203> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35139] Statically linking pyexpat in Modules/Setup fails to compile on macOS
Change by Max Bélanger : -- keywords: +patch pull_requests: +9599 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35139> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35080] The tests for the `dis` module can be too rigid when changing opcodes
Change by Max Bélanger : -- keywords: +patch pull_requests: +9469 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35080> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35025] Compiling `timemodule.c` can fail on macOS due to availability warnings
Change by Max Bélanger : -- keywords: +patch pull_requests: +9308 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35025> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35022] MagicMock should support `__fspath__`
Change by Max Bélanger : -- keywords: +patch pull_requests: +9307 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35022> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Program to find Primes of the form prime(n+2) * prime(n+1) - prime(n) +- 1.
On Tue, Oct 2, 2018 at 10:23 PM, Musatov wrote: > Primes of the form prime(n+2) * prime(n+1) - prime(n) +- 1. > DATA > > 31, 71, 73, 137, 211, 311, 419, 421, 647, 877, 1117, 1487, 1979, 2447, 3079, 3547, 4027, 7307, 7309, 12211, 14243, 18911, 18913, 23557, 25439, 28729, 36683, 37831, 46853, 50411, 53129, 55457, 57367, 60251, 67339, 70489, 74797, 89669, 98909, 98911 > > EXAMPLE > > 7*5 - 3 - 1 = 31 > > 11*7 - 5 - 1 = 71 > > 11*7 - 5 + 1 = 73 > > 13*11 - 7 + 1 = 137 > > Can someone put this in a Python program and post? > Here you go, my friend:

#!/usr/bin/env python3

primes = """Primes of the form prime(n+2) * prime(n+1) - prime(n) +- 1.

DATA

31, 71, 73, 137, 211, 311, 419, 421, 647, 877, 1117, 1487, 1979, 2447, 3079, 3547, 4027, 7307, 7309, 12211, 14243, 18911, 18913, 23557, 25439, 28729, 36683, 37831, 46853, 50411, 53129, 55457, 57367, 60251, 67339, 70489, 74797, 89669, 98909, 98911

EXAMPLE

7*5 - 3 - 1 = 31
11*7 - 5 - 1 = 71
11*7 - 5 + 1 = 73
13*11 - 7 + 1 = 137
"""

if __name__ == "__main__":
    print(primes)

As soon as you start showing more effort yourself in the form of your honest attempts to create a program, or at least in the form of some serious ideas, you might get replies which better fit what you attempted to receive. -- https://mail.python.org/mailman/listinfo/python-list
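[Editor's note: for anyone who wants the actual computation rather than the joke above, here is one possible sketch (all function names are mine) that generates the sequence directly from its definition; the naive primality test is fine for small terms.]

```python
from itertools import count, islice

def is_prime(n):
    """Naive trial division; adequate for the small terms shown here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def primes():
    """Yield the primes 2, 3, 5, 7, ..."""
    return (n for n in count(2) if is_prime(n))

def special_primes():
    """Yield primes of the form prime(n+2)*prime(n+1) - prime(n) +- 1."""
    p = primes()
    a, b, c = next(p), next(p), next(p)
    while True:
        base = c * b - a  # prime(n+2) * prime(n+1) - prime(n)
        for candidate in (base - 1, base + 1):
            if is_prime(candidate):
                yield candidate
        a, b, c = b, c, next(p)

print(list(islice(special_primes(), 7)))  # [31, 71, 73, 137, 211, 311, 419]
```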
Re: So apparently I've been banned from this list
On Sun, Sep 30, 2018 at 6:30 PM, Steven D'Aprano wrote: > Notwithstanding Ethan's comment about having posted the suspension notice > on the list, I see no sign that he actually did so. At the risk of > further retaliation from the moderators, I am ignoring the ban in this > instance for the purposes of transparency and openness. (I don't know if > this will show up on the mailing list or the newsgroup.) [...] > > Forwarded Message > Subject: Temporary Suspension > Date: Mon, 10 Sep 2018 07:09:04 -0700 > From: Ethan Furman > To: Python List Moderators > > As a list moderator, my goal for this list is to keep the list a useful > resource -- but what does "useful" mean? To me it means a place that > python users can go to ask questions, get answers, offer advice, and all > without sarcasm, name-calling, and deliberate mis-understandings. > Conversations should stay mostly on-topic. > > Due to hostile and inappropriate posts*, Steven D'Aprano is temporarily > suspended from Python List for a period of two months. > > This suspension, along with past suspensions, is being taken only after > careful consideration and consultation with other Python moderators. > > -- > ~Ethan~ > Python List Moderator > > > * posts in question: > > [1] https://mail.python.org/pipermail/python-list/2018-July/735735.html > [2] https://mail.python.org/pipermail/python-list/2018-September/737020.html > I can assure you that I did not receive Ethan's message via the mailing list. First of all I would have noticed it, if it would have been a separate thread. But I also just went over all the messages and tried to find it. It never arrived in my mailbox. -- https://mail.python.org/mailman/listinfo/python-list
Re: help me in python plssss!!!!
On Fri, Sep 14, 2018 at 4:33 PM, Noel P. CUA wrote: > Calculate the true, relative and approximate errors, and Relate the > absolute relative approximate error to the number of significant digits. > > epsilon = 1 > > while epsilon + 1 > 1: > epsilon = epsilon / 2.0 > > epsilon = 2 * epsilon > > help me! > This list is not here to solve every single step of what is (presumably) your homework for you. Everything you present right here is what I helped you with in "how to convert this psuedo code to python". You will have to at least try it yourself and to present your approaches or at least ideas. Additionally, tu...@python.org seems to be more fitting for the rather basic level of the problems which you present. -- https://mail.python.org/mailman/listinfo/python-list
Re: how to convert this psuedo code to python
On Fri, Sep 14, 2018 at 2:37 PM, Noel P. CUA wrote: > compose your own octave script to calculate the machine > epsilon. Analyze the code. > > epsilon = 1 > DO > IF (epsilon+1<=1) EXIT > epsilon = epsilon/2 > END DO > epsilon = 2 x epsilon > epsilon = 1 while epsilon + 1 > 1: epsilon = epsilon / 2.0 epsilon = 2 * epsilon This will not work in Octave. But maybe it will help you in improving your understanding of the solution. -- https://mail.python.org/mailman/listinfo/python-list
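[Editor's note: running the translation above confirms that it converges to the IEEE-754 double-precision epsilon which the interpreter itself reports:]

```python
import sys

epsilon = 1.0
while epsilon + 1 > 1:
    epsilon = epsilon / 2.0
epsilon = 2 * epsilon

print(epsilon)                             # 2.220446049250313e-16
print(epsilon == sys.float_info.epsilon)   # True
```

The loop halves epsilon until 1 + epsilon rounds back to exactly 1.0, then doubles once, landing on 2**-52 for IEEE doubles.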
Re: Pretty printing dicts with compact=True
On Tue, Sep 11, 2018 at 1:58 PM, Nicolas Hug wrote: > pprint({x: x for x in range(15)}, compact=True) > > would be printed in 15 lines while it could fit on 2. > > > Is this a bug or was this decided on purpose? It is on purpose as can be seen in the code for pprint [1], which calls _format [2], which in the case of a dictionary calls _pprint_dict [3], which ultimately calls _format_dict_items [4]. (which does not use compact or rather _compact) To me it also seems to be the most sensible behaviour, since dictionaries with their keys and values are different from most other sequences. In a dictionary the relation between keys and values is the most important one and reading a dictionary certainly is easier if each key-value pair has a line of its own. (Especially if the keys and values vary a lot in their lengths.) [1] https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Lib/pprint.py#L138 [2] https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Lib/pprint.py#L154 [3] https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Lib/pprint.py#L180 [4] https://github.com/python/cpython/blob/e42b705188271da108de42b55d9344642170aa2b/Lib/pprint.py#L333 -- https://mail.python.org/mailman/listinfo/python-list
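[Editor's note: the 15-line behaviour is easy to confirm by capturing the output; a small sketch:]

```python
import io
import pprint

buf = io.StringIO()
pprint.pprint({x: x for x in range(15)}, stream=buf, compact=True)

# One key/value pair per line, despite compact=True:
print(len(buf.getvalue().splitlines()))  # 15
```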
Re: "glob.glob('weirdness')" Any thoughts?
On Mon, Sep 10, 2018 at 3:05 PM, Thomas Jollans wrote: from glob import glob glob('test *') > ['test comment', 'test [co]mment', 'test [fallacy]', 'test [comments]', > 'test [comment] a'] glob('test [[]*') > ['test [co]mment', 'test [fallacy]', 'test [comments]', 'test [comment] a'] glob('test [[]c*') > ['test [co]mment', 'test [comments]', 'test [comment] a'] glob('test [[]comment]*') > ['test [comment] a'] > > I'm escaping the '[' as '[[]'. You can escape the ']' as well if you want, > but there's no need as a ']' is not special unless it's preceded by an > unescaped '['. > > To match the character class I think you thought my glob was matching, you'd > have to use '[][comment]' rather than '[[]comment]'. > That is of course correct. I'm sorry. Your suggested "[][comment]" is exactly what I took your "[[]comment]" to be; now that I have looked at it again, I can no longer explain how I came to that wrong conclusion. There is actually glob.escape [1], which escapes all the glob meta-characters in a path and does so exactly as in your example. (Since it is obviously the shortest and therefore, in this case, the best way.) [1] https://docs.python.org/3/library/glob.html#glob.escape -- https://mail.python.org/mailman/listinfo/python-list
Re: "glob.glob('weirdness')" Any thoughts?
On Sun, Sep 9, 2018 at 6:03 PM, Thomas Jollans wrote: > On 09/09/2018 02:20 PM, Gilmeh Serda wrote: >> >> >> # Python 3.6.1/Linux >> (acts the same in Python 2.7.3 also, by the way) >> > from glob import glob >> >> > glob('./Testfile *') >> >> ['./Testfile [comment] some text.txt'] >> > glob('./Testfile [comment]*') >> >> [] >> [...] > > https://docs.python.org/3/library/glob.html#glob.escape demonstrates a way > of escaping that works: > > glob('./Testfile [[]comment]*') > That is about the least correct working solution one could conceive. Of course your suggested "glob('./Testfile [[]comment]*')" works in the positive case, but pretty much comes down to a glob('./Testfile [[]*'). And in the negative case it would provide many false positives. (e.g. "Testfile [falacy]", "Testfile monty", "Testfile ]not quite" and so on) Even if you wanted to use that strange character class, which is not a good idea (as explained above), using "[[]coment]" would be better, since there is no reason to repeat a character. -- https://mail.python.org/mailman/listinfo/python-list
Re: "glob.glob('weirdness')" Any thoughts?
On Sun, Sep 9, 2018 at 2:20 PM, Gilmeh Serda wrote: > > # Python 3.6.1/Linux > (acts the same in Python 2.7.3 also, by the way) > from glob import glob > glob('./Testfile *') > ['./Testfile [comment] some text.txt'] > glob('./Testfile [comment]*') > [] > glob('./Testfile [comment? some text.*') > ['./Testfile [comment] some text.txt'] > The behaviour is stated rather clearly in the documentation: For glob: "No tilde expansion is done, but *, ?, and character ranges expressed with [] will be correctly matched. This is done by using the os.scandir() and fnmatch.fnmatch() functions in concert, and not by actually invoking a subshell." [1] And then for fnmatch, since that is used by glob: "For a literal match, wrap the meta-characters in brackets. For example, '[?]' matches the character '?'." [2] Therefore glob('./Testfile [[]comment[]]*') is what you are looking for. It should be straightforward to wrap all the meta-characters which you want to use in their literal form in square brackets. The results of your analysis are also stated in the documentation for the glob patterns [1], so there is no guessing required. Your analysis about escaping special characters is wrong though. While backslashes are often used as escape characters, they are not used in such a fashion everywhere. In this case they are not used as escape characters, which makes a lot of sense when considering that the directory separator in Windows is a backslash and additionally using backslashes as escape characters would lead to quite some confusion in this case. [1] https://docs.python.org/3/library/glob.html [2] https://docs.python.org/3/library/fnmatch.html -- https://mail.python.org/mailman/listinfo/python-list
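[Editor's note: the escaping rules discussed in this thread can be demonstrated with fnmatch (which glob uses internally) and glob.escape; the file name below is illustrative.]

```python
import fnmatch
import glob

name = 'Testfile [comment] some text.txt'

# Wrapping each meta-character in brackets makes it literal:
pattern = 'Testfile [[]comment[]]*'
print(fnmatch.fnmatch(name, pattern))     # True

# glob.escape builds such a pattern automatically; it only needs to
# escape '*', '?' and '[', since a lone ']' is not special:
print(glob.escape('Testfile [comment]'))  # 'Testfile [[]comment]'
```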
[issue28627] [alpine] shutil.copytree fails to copy a directory with broken symlinks
Max Rees <maxcr...@me.com> added the comment: Actually the symlinks don't need to be broken. It fails for any kind of symlink on musl.

$ ls -l /tmp/symtest
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees 0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf
$ python3
>>> import shutil; shutil.copytree('/tmp/symtest', '/tmp/symtest2', symlinks=True)
shutil.Error: [('/tmp/symtest/resolv.conf', '/tmp/symtest2/resolv.conf', "[Errno 95] Not supported: '/tmp/symtest2/resolv.conf'"), ('/tmp/symtest/empty', '/tmp/symtest2/empty', "[Errno 95] Not supported: '/tmp/symtest2/empty'")]
$ ls -l /tmp/symtest2
total 0
lrwxrwxrwx 1 mcrees mcrees 10 Apr 18 21:16 empty -> /var/empty
-rw-r--r-- 1 mcrees mcrees 0 Apr 18 21:16 regular
lrwxrwxrwx 1 mcrees mcrees 16 Apr 18 21:16 resolv.conf -> /etc/resolv.conf

The implication of these bugs is that things like pip may fail if it calls shutil.copytree(..., symlinks=True) on a directory that contains symlinks(!) Attached is a patch that works around the issue but does not address why chmod is returning OSError instead of NotImplementedError. -- keywords: +patch nosy: +sroracle Added file: https://bugs.python.org/file47540/musl-eopnotsupp.patch ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue28627> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32285] In `unicodedata`, it should be possible to check a unistr's normal form without necessarily copying it
Change by Max Bélanger <aero...@gmail.com>: -- keywords: +patch pull_requests: +4703 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32285> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32282] When using a Windows XP compatible toolset, `socketmodule.c` fails to build
Change by Max Bélanger <aero...@gmail.com>: -- keywords: +patch pull_requests: +4702 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32282> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32280] Expose `_PyRuntime` through a section name
Change by Max Bélanger <aero...@gmail.com>: -- keywords: +patch pull_requests: +4700 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32280> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31903] `_scproxy` calls SystemConfiguration functions in a way that can cause deadlocks
Change by Max Bélanger <aero...@gmail.com>: -- keywords: +patch pull_requests: +4148 stage: -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31903> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments
Max Rothman <max.r.roth...@gmail.com> added the comment: Hi, I'd like to wrap this ticket up and get some kind of resolution, whether it's accepted or not. I'm new to the Python community, what's the right way to prompt a discussion about this sort of thing? Should I have taken it to one of the mailing lists? -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue30821> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments
Max Rothman added the comment: Hi, just wanted to ping this again and see if there was any movement. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30821> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments
Max Rothman added the comment: > Generally the called with asserts can only be used to match the *actual > call*, and they don't determine "equivalence". That's fair, but as unittest.mock stands now, it *does* check equivalence, but only partially, which is more confusing to users than either checking equivalence or not. > I'm not convinced there's a massive use case - generally you want to make > asserts about what your code actually does - not just check if it does > something equivalent to your assert. To me, making asserts about what your code actually does means not having tests fail because a function call switches to a set of equivalent but different arguments. As a developer, I care about the state in the parent and the state in the child, and I trust Python to work out the details in between. If Python treats two forms as equivalent, why shouldn't our tests? -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30821> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30825] csv.Sniffer does not detect lineterminator
Changes by Max Vorobev <vmax0...@gmail.com>: -- pull_requests: +2595 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30825> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30825] csv.Sniffer does not detect lineterminator
New submission from Max Vorobev: Line terminator defaults to '\r\n' while detecting dialect in csv.Sniffer -- components: Library (Lib) messages: 297497 nosy: Max Vorobev priority: normal severity: normal status: open title: csv.Sniffer does not detect lineterminator type: behavior versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30825> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments
Max Rothman added the comment: I'd be happy to look at submitting a patch for this, but it'd be helpful to be able to ask questions of someone more familiar with unittest.mock's code. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30821> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30821] unittest.mock.Mocks with specs aren't aware of default arguments
New submission from Max Rothman: For a function f with the signature f(foo=None), the following three calls are equivalent:

f(None)
f(foo=None)
f()

However, only the first two are equivalent in the eyes of unittest.mock.Mock.assert_called_with:

>>> with patch('__main__.f', autospec=True) as f_mock:
...     f_mock(foo=None)
...     f_mock.assert_called_with(None)

>>> with patch('__main__.f', autospec=True) as f_mock:
...     f_mock(None)
...     f_mock.assert_called_with()
AssertionError: Expected call: f()
Actual call: f(None)

This is definitely surprising to new users (it was surprising to me!) and unnecessarily couples tests to how a particular piece of code happens to call a function. -- components: Library (Lib) messages: 297433 nosy: Max Rothman priority: normal severity: normal status: open title: unittest.mock.Mocks with specs aren't aware of default arguments versions: Python 2.7, Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30821> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
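[Editor's note: the asymmetry can be reproduced without patch() by using create_autospec directly; a minimal self-contained sketch:]

```python
from unittest.mock import create_autospec

def f(foo=None):
    pass

f_mock = create_autospec(f)
f_mock(None)

# Autospec binds both calls through f's signature, so these two pass:
f_mock.assert_called_with(foo=None)
f_mock.assert_called_with(None)

# ...but the no-argument form, although equivalent for f itself, fails:
try:
    f_mock.assert_called_with()
except AssertionError as err:
    print("AssertionError:", err)
```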
[issue30685] Multiprocessing Send to Manager Fails for Large Payload
New submission from Max Ehrlich: On line 393 of multiprocessing/connection.py, the size of the payload to be sent is serialized as an integer. This fails for sending large payloads. It should probably be serialized as a long or better yet a long long. -- components: Library (Lib) messages: 296210 nosy: maxehr priority: normal severity: normal status: open title: Multiprocessing Send to Manager Fails for Large Payload versions: Python 3.5 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30685> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
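[Editor's note: the overflow described here is visible with struct directly; this is an illustration of the failure mode, not the actual multiprocessing code.]

```python
import struct

big = 2 ** 31  # a payload length of 2 GiB no longer fits a signed 32-bit int

try:
    struct.pack("!i", big)
except struct.error as err:
    print("32-bit length prefix failed:", err)

# An unsigned 64-bit prefix ('!Q') has room for any realistic payload size:
print(len(struct.pack("!Q", big)))  # 8
```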
[issue30641] No way to specify "File name too long" error in except statement.
Max Staff added the comment: ...at least those are the only two ways that I can think of. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30641> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30641] No way to specify "File name too long" error in except statement.
Max Staff added the comment: Yes I know about the errno. There would be two ways to resolve this: One way would be by introducing a new exception class which would be nice because it's almost impossible to reliably check the allowed filename length (except for trial and error) and I have quite a few functions where I would want the error to propagate further as long as it's not an ENAMETOOLONG. The other way would be by introducing a new syntax feature ("except OSError as e if e.errno == errno.ENAMETOOLONG:") but I don't think that that approach is reasonable. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30641> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30641] No way to specify "File name too long" error in except statement.
New submission from Max Staff: There are different ways to catch exceptions of the type "OSError": By using "except OSError as e:" and then checking the errno or by using "except FileNotFoundError e:" or "except FileExistsError e:" or whatever error one wants to catch. There's no such way for above mentioned error that occurs when a filename is too long for the filesystem/OS. -- components: IO messages: 295810 nosy: Max Staff priority: normal severity: normal status: open title: No way to specify "File name too long" error in except statement. type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30641> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
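[Editor's note: with the current exception hierarchy, checking errno is the only option; a helper of the kind the report describes might look like this (the function name is hypothetical, and NAME_MAX limits are platform-specific).]

```python
import errno

def open_if_name_fits(path, mode="r"):
    """Open path, returning None on 'File name too long';
    any other OSError propagates unchanged."""
    try:
        return open(path, mode)
    except OSError as err:
        if err.errno == errno.ENAMETOOLONG:
            return None
        raise
```

On Linux, a single path component longer than NAME_MAX (usually 255 bytes) triggers ENAMETOOLONG, while other failures such as a missing file still raise normally.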
[issue30517] Enum does not recognize enum.auto as unique values
Max added the comment: Ah sorry about that ... Yes, everything works fine when used properly. -- resolution: -> not a bug stage: -> resolved status: open -> closed ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30517> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30517] Enum does not recognize enum.auto as unique values
New submission from Max: This probably shouldn't happen:

import enum

class E(enum.Enum):
    A = enum.auto
    B = enum.auto

x = E.B.value
print(x)
print(E(x))  # E.A

The first print() is kinda ok, I don't really care about which value was used by the implementation. But the second print() seems surprising. By the same token, this probably shouldn't raise an exception (it does now):

import enum

@enum.unique
class E(enum.Enum):
    A = enum.auto
    B = enum.auto
    C = object()

and `dir(E)` shouldn't skip `B` in its output (it does now). -- components: Library (Lib) messages: 294804 nosy: max priority: normal severity: normal status: open title: Enum does not recognize enum.auto as unique values type: behavior versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30517> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
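[Editor's note: as the follow-up concedes, the report stems from using enum.auto without calling it; with the call, each member gets a distinct value and uniqueness works as expected. A corrected sketch:]

```python
import enum

@enum.unique
class E(enum.Enum):
    A = enum.auto()  # note the call: auto() yields a fresh value each time
    B = enum.auto()
    C = object()     # a distinct sentinel value coexists fine

print(E.A.value, E.B.value)  # 1 2
print(E(2) is E.B)           # True
print([m.name for m in E])   # ['A', 'B', 'C']
```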
[issue30488] Documentation for subprocess.STDOUT needs clarification
New submission from Max: The documentation states that subprocess.STDOUT is: Special value that can be used as the stderr argument to Popen and indicates that standard error should go into the same handle as standard output. However, when Popen is called with stdout=None, stderr=subprocess.STDOUT, stderr is not redirected to stdout and continues to be sent to stderr. To reproduce the problem:

    $ python >/dev/null -c 'import subprocess;\
    subprocess.call(["ls", "/404"], stderr=subprocess.STDOUT)'

and observe the error message appearing on the console (assuming the /404 directory does not exist). This was reported on SO 5 years ago: https://stackoverflow.com/questions/11495783/redirect-subprocess-stderr-to-stdout. The SO answer attributed this to a documentation issue, but arguably it should be considered a bug, because there seems to be no reason to make subprocess.STDOUT unusable in this very common use case. -- components: Interpreter Core messages: 294560 nosy: max priority: normal severity: normal status: open title: Documentation for subprocess.STDOUT needs clarification type: behavior versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30488> ___
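For contrast, the merge does take effect when stdout is itself redirected by the subprocess call, e.g. to a pipe. A quick sketch of the working case (`subprocess.run` is used for brevity; the same applies to Popen/call):

```python
import subprocess
import sys

# When stdout is a pipe, stderr=subprocess.STDOUT really does merge
# the child's stderr into that same pipe:
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('oops')"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
assert proc.stdout == b"oops"
```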
[issue29842] Make Executor.map work with infinite/large inputs correctly
Max added the comment: Correction: this PR is useful for `ProcessPoolExecutor` as well. I thought `chunksize` parameter handles infinite generators already, but I was wrong. And, as long as the number of items prefetched is a multiple of `chunksize`, there are no issues with the chunksize optimization either. And a minor correction: when listing the advantages of this PR, I should have said: "In addition, if the pool is not busy when `map` is called, your implementation will also be more responsive, since it will yield the first result earlier." -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29842> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29842] Make Executor.map work with infinite/large inputs correctly
Max added the comment: I'm also concerned about this (undocumented) inconsistency between map and Executor.map. I think you would want to make your PR limited to `ThreadPoolExecutor`. The `ProcessPoolExecutor` already does everything you want with its `chunksize` parameter, and adding `prefetch` to it will jeopardize the optimization for which `chunksize` is intended. Actually, I was even thinking whether it might be worth merging the `chunksize` and `prefetch` arguments. The semantics of the two arguments are similar but not identical. Specifically, for `ProcessPoolExecutor`, there is pretty clear pressure to increase the value of `chunksize` to reduce amortized IPC costs; there is no IPC with threads, so the pressure to increase `prefetch` is much more situational (e.g., in the busy pool example I give below). For `ThreadPoolExecutor`, I prefer your implementation over the current one, but I want to point out that it is not strictly better, in the sense that *with default arguments*, there are situations where the current implementation behaves better. In many cases your implementation behaves much better. If the input is too large, it prevents an out-of-memory condition. In addition, if the pool is not busy when `map` is called, your implementation will also be faster, since it will submit the first input for processing earlier. But consider the case where input is produced slower than it can be processed (`iterables` may fetch data from a database, but the callable `fn` may be a fast in-memory transformation). Now suppose `Executor.map` is called when the pool is busy, so there'll be a delay before processing begins. In this case, the most efficient approach is to get as much input as possible while the pool is busy, since eventually (when the pool is freed up) it will become the bottleneck. This is exactly what the current implementation does. The implementation you propose will (by default) only prefetch a small number of input items. 
Then when the pool becomes available, it will quickly run out of prefetched input, and so it will be less efficient than the current implementation. This is especially unfortunate since the entire time the pool was busy, `Executor.map` is just blocking the main thread, so it's literally doing nothing useful. Of course, the client can tweak the `prefetch` argument to achieve better performance. Still, I wanted to make sure this issue is considered before the new implementation is adopted. From the performance perspective, an even more efficient implementation would be one that uses three background threads:

- one to prefetch items from the input
- one to send items to the workers for processing
- one to yield results as they become available

It has a disadvantage of being slightly more complex, so I don't know if it really belongs in the standard library. Its advantage is that it will waste less time: it fetches inputs without pause, it submits them for processing without pause, and it makes results available to the client as soon as they are processed. (I have implemented and tried this approach, but not in production.) But even this implementation requires tuning. In the case with the busy pool that I described above, one would want to prefetch as much input as possible, but that may cause too much memory consumption and also possibly waste computation resources (if most of the input produced proves to be unneeded in the end). -- nosy: +max ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29842> ___
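The bounded-prefetch idea being discussed can be sketched roughly like this — a simplification, not the actual PR; `lazy_map` and its `prefetch` default are invented here:

```python
import collections
import itertools
from concurrent.futures import ThreadPoolExecutor

def lazy_map(executor, fn, iterable, prefetch=4):
    """Yield fn(x) in input order while keeping at most `prefetch`
    futures pending, so huge or infinite inputs don't exhaust memory."""
    it = iter(iterable)
    pending = collections.deque(
        executor.submit(fn, x) for x in itertools.islice(it, prefetch))
    while pending:
        oldest = pending.popleft()
        for x in itertools.islice(it, 1):  # top up the window by one
            pending.append(executor.submit(fn, x))
        yield oldest.result()

with ThreadPoolExecutor(max_workers=2) as ex:
    gen = lazy_map(ex, lambda x: x * x, itertools.count())
    first_five = list(itertools.islice(gen, 5))
    gen.close()  # stop submitting before the executor shuts down

assert first_five == [0, 1, 4, 9, 16]  # infinite input, no hang
```

The stock `Executor.map` would try to submit the entire `itertools.count()` up front and never return.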
[issue30026] Hashable doesn't check for __eq__
Max added the comment: Sorry, this should be just a documentation issue. I just realized that __eq__ = None isn't correct anyway, so instead we should just document that Hashable cannot check for __eq__ and that explicitly deriving from Hashable suppresses hashability. -- components: -Interpreter Core ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30026> ___
[issue30026] Hashable doesn't check for __eq__
New submission from Max: I think collections.abc.Hashable.__subclasshook__ should check the __eq__ method in addition to the __hash__ method. This helps detect classes that are unhashable due to:

    __eq__ = None

Of course, it still cannot detect:

    def __eq__(self, other): return NotImplemented

but it's better than nothing. In addition, it's probably worth documenting that explicitly inheriting from Hashable has the (correct but unexpected) effect of *suppressing* hashability that was already present:

    from collections.abc import Hashable

    class X: pass
    assert issubclass(X, Hashable)
    x = X()

    class X(Hashable): pass
    assert issubclass(X, Hashable)
    x = X()  # Can't instantiate abstract class X with abstract methods

-- assignee: docs@python components: Documentation, Interpreter Core messages: 291382 nosy: docs@python, max priority: normal severity: normal status: open title: Hashable doesn't check for __eq__ ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue30026> ___
[issue29982] tempfile.TemporaryDirectory fails to delete itself
New submission from Max: There's a known issue with `shutil.rmtree` on Windows, in that it fails intermittently. The issue is well known (https://mail.python.org/pipermail/python-dev/2013-September/128353.html), and the agreement is that it cannot be cleanly solved inside `shutil` and should instead be solved by the calling app. Specifically, python devs themselves faced it in their test suite and solved it by retrying the delete. However, what to do about `tempfile.TemporaryDirectory`? Is it considered the calling app, and therefore should it retry the delete when it calls `shutil.rmtree` in its `cleanup` method? I don't think `tempfile` is protected by the same argument that shields `shutil.rmtree` -- namely, that it's too messy to solve this in the standard library. My rationale is that while it's very easy for the end user to retry `shutil.rmtree`, it's far more difficult to fix the problem of `tempfile.TemporaryDirectory` not deleting itself - how would the end user retry the `cleanup` method (which is called from `weakref.finalizer`)? So perhaps the retry loop should be added to `cleanup`. -- components: Library (Lib) messages: 291130 nosy: max priority: normal severity: normal status: open title: tempfile.TemporaryDirectory fails to delete itself type: behavior versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29982> ___
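The retry-on-cleanup idea can be sketched as follows; the helper name, attempt count, and delay are arbitrary choices for illustration:

```python
import os
import shutil
import tempfile
import time

def rmtree_retry(path, attempts=5, delay=0.1):
    # Retry transient failures (e.g. Windows file-lock races) a few
    # times before giving up, as CPython's own test suite does.
    for i in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

d = tempfile.mkdtemp()
open(os.path.join(d, "f.txt"), "w").close()
rmtree_retry(d)
assert not os.path.exists(d)
```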
[issue29795] Clarify how to share multiprocessing primitives
Max added the comment: Actually, never mind, I think one of the paragraphs in the Programming Guidelines ("Explicitly pass resources to child processes") basically explains everything already. I just didn't notice it until @noxdafox pointed it out to me on SO. Close please. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29795> ___
[issue29715] Arparse improperly handles "-_"
Max Rothman added the comment: I think that makes sense, but there's still an open question: what should the correct way be to allow dashes to be present at the beginning of positional arguments? -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29715> ___
[issue29795] Clarify how to share multiprocessing primitives
Max added the comment: Somewhat related is this statement from Programming Guidelines: > When using the spawn or forkserver start methods many types from > multiprocessing need to be picklable so that child processes can use them. > However, one should generally avoid sending shared objects to other processes > using pipes or queues. Instead you should arrange the program so that a > process which needs access to a shared resource created elsewhere can inherit > it from an ancestor process. Since on Windows, even "inheritance" is really the same pickle + pipe executed inside CPython, I assume the entire paragraph is intended for UNIX platforms only (might be worth clarifying, btw). On Linux, "inheritance" works faster, and can deal with more complex objects compared to pickle with pipe/queue -- but it's equally true whether it's inheritance through global variables or through arguments to the target function. So the text I proposed earlier wouldn't conflict with this one. It would just encourage programmers to use function arguments instead of global variables: it doesn't matter on Linux but makes the code portable to Windows. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29795> ___
[issue29797] Deadlock with multiprocessing.Queue()
Max added the comment: Yes, this makes sense. My bad, I didn't realize processes might need to wait until the queue is consumed. I don't think there's any need to update the docs either, nobody should have production code that never reads the queue (mine was a test of some other issue). -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29797> ___
[issue29797] Deadlock with multiprocessing.Queue()
New submission from Max: Using multiprocessing.Queue() with several processes writing very fast results in a deadlock both on Windows and UNIX. For example, this code:

    from multiprocessing import Process, Queue, Manager
    import time, sys

    def simulate(q, n_results):
        for i in range(n_results):
            time.sleep(0.01)
            q.put(i)

    def main():
        n_workers = int(sys.argv[1])
        n_results = int(sys.argv[2])
        q = Queue()
        proc_list = [Process(target=simulate, args=(q, n_results), daemon=True)
                     for i in range(n_workers)]
        for proc in proc_list:
            proc.start()
        for i in range(5):
            time.sleep(1)
            print('current approximate queue size:', q.qsize())
            alive = [p.pid for p in proc_list if p.is_alive()]
            if alive:
                print(len(alive), 'processes alive; among them:', alive[:5])
            else:
                break
        for p in proc_list:
            p.join()
        print('final appr queue size', q.qsize())

    if __name__ == '__main__':
        main()

hangs on Windows 10 (python 3.6) with 2 workers and 1000 results each, and on Ubuntu 16.04 (python 3.5) with 100 workers and 100 results each. The print out shows that the queue has reached the full size, but a bunch of processes are still alive. Presumably, they somehow manage to lock themselves out even though they don't depend on each other (must be in the implementation of Queue()):

    current approximate queue size: 9984
    47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]
    current approximate queue size: 1
    47 processes alive; among them: [2238, 2241, 2242, 2244, 2247]

The deadlock disappears once multiprocessing.Queue() is replaced with multiprocessing.Manager().Queue() - or at least I wasn't able to replicate it with a reasonable number of processes and results. 
-- components: Library (Lib) messages: 289479 nosy: max priority: normal severity: normal status: open title: Deadlock with multiprocessing.Queue() type: behavior versions: Python 3.5, Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29797> ___
[issue29795] Clarify how to share multiprocessing primitives
Max added the comment: How about inserting this text somewhere: Note that sharing and synchronization objects (such as `Queue()`, `Pipe()`, `Manager()`, `Lock()`, `Semaphore()`) should be made available to a new process by passing them as arguments to the `target` function invoked by the `run()` method. Making these objects visible through global variables will only work when the process was started using `fork` (and as such sacrifices portability for no special benefit). -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29795> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29795] Clarify how to share multiprocessing primitives
New submission from Max: It seems both me and many other people (judging from SO questions) are confused about whether it's ok to write this:

    from multiprocessing import Process, Queue

    q = Queue()

    def f():
        q.put([42, None, 'hello'])

    def main():
        p = Process(target=f)
        p.start()
        print(q.get())  # prints "[42, None, 'hello']"
        p.join()

    if __name__ == '__main__':
        main()

It's not ok (doesn't work on Windows, presumably because somehow when it's pickled, the connection between the global queues in the two processes is lost; it works on Linux, because I guess fork keeps more information than pickle, so the connection is maintained). I thought it would be good to clarify in the docs that all the Queue() and Manager().* and other similar objects should be passed as parameters, not just defined as globals. -- assignee: docs@python components: Documentation messages: 289454 nosy: docs@python, max priority: normal severity: normal status: open title: Clarify how to share multiprocessing primitives type: behavior versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29795> ___
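For contrast, the portable version passes the queue explicitly, which works under both the fork and spawn start methods:

```python
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

def main():
    q = Queue()
    p = Process(target=f, args=(q,))  # queue passed as an argument, not a global
    p.start()
    item = q.get()
    p.join()
    return item

if __name__ == '__main__':
    assert main() == [42, None, 'hello']
```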
[issue29715] Arparse improperly handles "-_"
Max Rothman added the comment: Martin: huh, I didn't notice that documentation. The error message definitely could be improved. It still seems like an odd choice given that argparse knows about the expected spec, so it knows whether there are any options or not. Perhaps one could enable/disable this cautious behavior with a flag passed to ArgumentParser? It was rather surprising in my case, since I was parsing morse code and the arguments were random combinations of "-", "_", and "*", so it wasn't immediately obvious what the issue was. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29715> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
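One documented escape hatch already exists for inputs like these: a bare `--` makes argparse treat everything after it as positional, so leading-dash values pass through cleanly:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('first')

# Everything after '--' is taken as a positional argument:
args = parser.parse_args(['--', '-_'])
assert args.first == '-_'
```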
[issue29715] Arparse improperly handles "-_"
New submission from Max Rothman: In the case detailed below, argparse.ArgumentParser improperly parses the argument string "-_": ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('first') print(parser.parse_args(['-_'])) ``` Expected behavior: prints Namespace(first='-_') Actual behavior: prints usage message The issue seems to be specific to the string "-_". Either character alone or both in the opposite order does not trigger the issue. -- components: Library (Lib) messages: 288929 nosy: Max Rothman priority: normal severity: normal status: open title: Arparse improperly handles "-_" type: behavior versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29715> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29597] __new__ / __init__ calls during unpickling not documented correctly
New submission from Max: According to the [docs](https://docs.python.org/3/library/pickle.html#pickling-class-instances): > Note: At unpickling time, some methods like `__getattr__()`, `__getattribute__()`, or `__setattr__()` may be called upon the instance. In case those methods rely on some internal invariant being true, the type should implement `__getnewargs__()` or `__getnewargs_ex__()` to establish such an invariant; otherwise, neither `__new__()` nor `__init__()` will be called. It seems, however, that this note is incorrect. First, `__new__` is called even if `__getnewargs__` isn't implemented. Second, `__init__` is not called even if it is (while the note didn't say that `__init__` would be called when `__getnewargs__` is defined, the wording does seem to imply it).

    import pickle

    class A:
        def __new__(cls, *args):
            print('__new__ called with', args)
            return object.__new__(cls)

        def __init__(self, *args):
            print('__init__ called with', args)
            self.args = args

        def __getnewargs__(self):
            print('called')
            return ()

    a = A(1)
    s = pickle.dumps(a)
    a = pickle.loads(s)  # __new__ called, not __init__

    delattr(A, '__getnewargs__')

    a = A(1)
    s = pickle.dumps(a)
    a = pickle.loads(s)  # __new__ called, not __init__

-- assignee: docs@python components: Documentation messages: 288088 nosy: docs@python, max priority: normal severity: normal status: open title: __new__ / __init__ calls during unpickling not documented correctly versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29597> ___
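Since `__init__` is not rerun, the reliable hook for re-establishing invariants at unpickling time is `__setstate__`. A sketch (the `_rebuild` helper and `total` invariant are invented for illustration):

```python
import pickle

class A:
    def __init__(self, *args):
        self.args = args
        self._rebuild()

    def _rebuild(self):
        # an internal invariant derived from self.args
        self.total = sum(self.args)

    def __setstate__(self, state):
        # called during unpickling, where __init__ is not
        self.__dict__.update(state)
        self._rebuild()

b = pickle.loads(pickle.dumps(A(1, 2, 3)))
assert b.total == 6 and b.args == (1, 2, 3)
```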
[issue29415] Exposing handle._callback and handle._args in asyncio
Max added the comment: @yselivanov I just wanted to use the handler to avoid storing the callback and args in my own data structure (I would just store the handlers whenever I may need to reschedule). Not a big deal, I don't have to use the handler as a storage space, if it's not supported across implementations. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29415> ___
[issue29415] Exposing handle._callback and handle._args in asyncio
New submission from Max: Is it safe to use the _callback and _args attributes of asyncio.Handle? Is it possible to officially expose them as public API? My use case:

    handle = event_loop.call_later(delay, callback)

    # this function can be triggered by some events
    def reschedule(handle):
        event_loop.call_later(new_delay, handle._callback, *handle._args)
        handle.cancel()

-- components: asyncio messages: 286709 nosy: gvanrossum, max, yselivanov priority: normal severity: normal status: open title: Exposing handle._callback and handle._args in asyncio type: enhancement versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29415> ___
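A variant that avoids the private attributes simply keeps the callback and args on the caller's side, which is the approach the follow-up comment settles on. The `Reschedulable` wrapper below is invented for illustration:

```python
import asyncio

class Reschedulable:
    """Wrap call_later, storing the callback and args ourselves so we
    never need the private handle._callback / handle._args."""
    def __init__(self, loop, delay, callback, *args):
        self.loop, self.callback, self.args = loop, callback, args
        self.handle = loop.call_later(delay, callback, *args)

    def reschedule(self, new_delay):
        self.handle.cancel()
        self.handle = self.loop.call_later(new_delay, self.callback, *self.args)

results = []
loop = asyncio.new_event_loop()
r = Reschedulable(loop, 10.0, results.append, 'fired')
r.reschedule(0.01)  # cancels the 10-second timer, fires almost at once
loop.run_until_complete(asyncio.sleep(0.05))
loop.close()
assert results == ['fired']
```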
[issue28785] Clarify the behavior of NotImplemented
Max added the comment: Martin - what you suggest is precisely what I had in mind (but didn't phrase it as well): > to document the above sort of behaviour as being directly associated with > operations like as == and !=, and only indirectly associated with the > NotImplemented object and the __eq__() method Also a minor typo: you meant "If that call returns NotImplemented, the first fallback is to try the *reverse* call." -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28785> ___
[issue28785] Clarify the behavior of NotImplemented
New submission from Max: Currently, there's no clear statement as to what exactly the fallback is in case `__eq__` returns `NotImplemented`. It would be good to clarify the behavior of `NotImplemented`; at least for `__eq__`, but perhaps also other rich comparison methods. For example: "When `NotImplemented` is returned from a rich comparison method, the interpreter behaves as if the rich comparison method was not defined in the first place." See http://stackoverflow.com/questions/40780004/returning-notimplemented-from-eq for more discussion. -- assignee: docs@python components: Documentation messages: 281616 nosy: docs@python, max priority: normal severity: normal status: open title: Clarify the behavior of NotImplemented type: enhancement versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28785> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
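The fallback chain the proposed wording describes can be demonstrated directly; the `Meters` class here is invented for illustration:

```python
class Meters:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if not isinstance(other, Meters):
            return NotImplemented  # behave as if __eq__ were undefined
        return self.value == other.value

m = Meters(3)
# The reflected operand is tried next; int.__eq__ also returns
# NotImplemented, so == finally falls back to identity comparison
# and != to its negation.
assert (m == 3) is False
assert (m != 3) is True
assert m == Meters(3)
```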
[issue27972] Confusing error during cyclic yield
Max von Tettenborn added the comment: You are very welcome, glad I could help. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27972> ___
[issue27972] Confusing error during cyclic yield
New submission from Max von Tettenborn: The code below reproduces the problem. The resulting error is a RecursionError, and it is very hard to trace that to the cause of the problem, which is the runner task and the stop task yielding from each other, forming a deadlock. I think an easy-to-make mistake like that should raise a clearer exception. And maybe I am mistaken, but it should in principle be possible for the event loop to detect a cyclic yield, right?

    import asyncio

    class A:
        @asyncio.coroutine
        def start(self):
            self.runner_task = asyncio.ensure_future(self.runner())

        @asyncio.coroutine
        def stop(self):
            self.runner_task.cancel()
            yield from self.runner_task

        @asyncio.coroutine
        def runner(self):
            try:
                while True:
                    yield from asyncio.sleep(5)
            except asyncio.CancelledError:
                yield from self.stop()
                return

    def do_test():
        @asyncio.coroutine
        def f():
            a = A()
            yield from a.start()
            yield from asyncio.sleep(1)
            yield from a.stop()
        asyncio.get_event_loop().run_until_complete(f())

-- components: asyncio messages: 274547 nosy: Max von Tettenborn, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: Confusing error during cyclic yield type: behavior versions: Python 3.5 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27972> ___
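The cycle disappears if the cancelled coroutine cleans up in place instead of awaiting `stop()`, which is (transitively) awaiting it. A sketch of that fix in modern `async def` syntax, not the reporter's original 3.5 code:

```python
import asyncio

class A:
    async def start(self):
        self.runner_task = asyncio.ensure_future(self.runner())

    async def stop(self):
        self.runner_task.cancel()
        try:
            await self.runner_task
        except asyncio.CancelledError:
            pass  # expected: we just cancelled it

    async def runner(self):
        try:
            while True:
                await asyncio.sleep(5)
        except asyncio.CancelledError:
            # clean up here instead of `await self.stop()`, which
            # would leave runner and stop awaiting each other
            raise

async def f():
    a = A()
    await a.start()
    await asyncio.sleep(0.01)
    await a.stop()
    return 'stopped'

result = asyncio.run(f())
```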
[issue14903] dictobject infinite loop in module set-up
Max Khon added the comment: I reproduced the problem with Python 2.7.5 as shipped with CentOS 7: root@192.168.0.86 /home/padmin # python -V Python 2.7.5 root@192.168.0.86 /home/padmin # rpm -q python python-2.7.5-34.el7.x86_64 root@192.168.0.86 /home/padmin # (gdb) bt #0 lookdict_string (mp=, key='RPMTAG_OPTFLAGS', hash=411442822543039667) at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:461 #1 0x7f92d6d9f2c9 in insertdict (mp=0x2502600, key='RPMTAG_OPTFLAGS', hash=411442822543039667, value=1122) at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:559 #2 0x7f92d6d9f3b0 in dict_set_item_by_hash_or_entry ( op={'RPMTAG_HEADERREGIONS': 64, 'RPMTAG_EXCLUSIVEOS': 1062, 'fi': , 'RPMTAG_CHANGELOGNAME': 1081, 'RPMTAG_CONFLICTNEVRS': 5044, 'RPMTAG_FILECAPS': 5010, 'RPMTAG_FILERDEVS': 1033, 'RPMTAG_COLLECTIONS': 5029, 'RPMTAG_BUGURL': 5012, 'setStats': , 'RPMTAG_FILEDIGESTALGO': 5011, 'RPMTAG_DEPENDSDICT': 1145, 'RPMTAG_CLASSDICT': 1142, 'RPMTAG_FILEMODES': 1030, 'RPMTAG_FILEDEPENDSN': 1144, 'RPMTAG_BUILDTIME': 1006, 'ii': , 'RPMTAG_INSTALLCOLOR': 1127, 'RPMTAG_CHANGELOGTEXT': 1082, 'RPMTAG_HEADERCOLOR': 5017, 'RPMTAG_CONFLICTNAME': 1054, 'RPMTAG_CONFLICTS': 1054, 'setLogFile': , 'versionCompare': , 'RPMTAG_CONFLICTVERSION': 1055, 'RPMTAG_NVRA': 1196, 'RPMTAG_NOPATCH': 1052, 'RPMTAG_HEADERI18NTABLE': 100, 'RPMTAG_LONGARCHIVESIZE': 271, 'RPMTAG_FILEREQUIRE': 5002, 'RPMTAG_FILEDEPENDSX': 1143, 'RPMTAG_EVR': 5013, 'RPMTAG_INSTALLTIME': 1008, 'RPMTAG_NAME': 1000, 'RPMTAG_LONG...(truncated), key=, hash=, ep=, value=) at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:774 #3 0x7f92d6da18a8 in PyDict_SetItemString ( v={'RPMTAG_HEADERREGIONS': 64, 'RPMTAG_EXCLUSIVEOS': 1062, 'fi': , 'RPMTAG_CHANGELOGNAME': 1081, 'RPMTAG_CONFLICTNEVRS': 5044, 'RPMTAG_FILECAPS': 5010, 'RPMTAG_FILERDEVS': 1033, 'RPMTAG_COLLECTIONS': 5029, 'RPMTAG_BUGURL': 5012, 'setStats': , 'RPMTAG_FILEDIGESTALGO': 5011, 'RPMTAG_DEPENDSDICT': 1145, 'RPMTAG_CLASSDICT': 1142, 'RPMTAG_FILEMODES': 1030, 
'RPMTAG_FILEDEPENDSN': 1144, 'RPMTAG_BUILDTIME': 1006, 'ii': , 'RPMTAG_INSTALLCOLOR': 1127, 'RPMTAG_CHANGELOGTEXT': 1082, 'RPMTAG_HEADERCOLOR': 5017, 'RPMTAG_CONFLICTNAME': 1054, 'RPMTAG_CONFLICTS': 1054, 'setLogFile': , 'versionCompare': , 'RPMTAG_CONFLICTVERSION': 1055, 'RPMTAG_NVRA': 1196, 'RPMTAG_NOPATCH': 1052, 'RPMTAG_HEADERI18NTABLE': 100, 'RPMTAG_LONGARCHIVESIZE': 271, 'RPMTAG_FILEREQUIRE': 5002, 'RPMTAG_FILEDEPENDSX': 1143, 'RPMTAG_EVR': 5013, 'RPMTAG_INSTALLTIME': 1008, ' RPMTAG_NAME': 1000, 'RPMTAG_LONG...(truncated), key=key@entry=0x7f92c83bf537 "RPMTAG_OPTFLAGS", item=item@entry=1122) at /usr/src/debug/Python-2.7.5/Objects/dictobject.c:2448 #4 0x7f92d6e181f2 in PyModule_AddObject (m=m@entry=, name=name@entry=0x7f92c83bf537 "RPMTAG_OPTFLAGS", o=o@entry=1122) at /usr/src/debug/Python-2.7.5/Python/modsupport.c:616 #5 0x7f92d6e182d8 in PyModule_AddIntConstant (m=m@entry=, name=name@entry=0x7f92c83bf537 "RPMTAG_OPTFLAGS", value=value@entry=1122) at /usr/src/debug/Python-2.7.5/Python/modsupport.c:628 #6 0x7f92c85e2b20 in addRpmTags (module=) at rpmmodule.c:200 #7 initModule (m=) at rpmmodule.c:343 #8 init_rpm () at rpmmodule.c:281 #9 0x7f92d6e13ed9 in _PyImport_LoadDynamicModule (name=name@entry=0x24f69d0 "rpm._rpm", pathname=pathname@entry=0x24f79e0 "/usr/lib64/python2.7/site-packages/rpm/_rpmmodule.so", fp=) at /usr/src/debug/Python-2.7.5/Python/importdl.c:53 ... The infinite loop happens when "import rpm" is called in low-memory conditions (e.g. when handling ENOMEM or exceptions.MemoryError - RedHat/CentOS abrt-addon-python package which installs sys.excepthook handler). -- nosy: +Max Khon ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue14903> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25327] Windows 10 Installation Fails With Corrupt Directory Error
New submission from Max Farrell: Cannot install Python 3.5 64-bit on Windows 10 64-bit Educational Edition. I have Python 3.4 installed. Log included. -- components: Installation files: Python 3.5.0 (64-bit)_20151006150920.log messages: 252423 nosy: Max Farrell priority: normal severity: normal status: open title: Windows 10 Installation Fails With Corrupt Directory Error type: compile error versions: Python 3.5 Added file: http://bugs.python.org/file40705/Python 3.5.0 (64-bit)_20151006150920.log ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25327> ___