[issue46576] test_peg_generator is extremely slow
Change by Jeremy Kloth : -- pull_requests: +30420 pull_request: https://github.com/python/cpython/pull/32382 ___ Python tracker <https://bugs.python.org/issue46576> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue47230] New compiler warnings with latest zlib
Change by Jeremy Kloth : -- resolution: -> fixed stage: patch review -> resolved status: open -> closed
[issue47230] New compiler warnings with latest zlib
Change by Jeremy Kloth : -- pull_requests: +30400 pull_request: https://github.com/python/cpython/pull/32347
[issue47230] New compiler warnings with latest zlib
Change by Jeremy Kloth : -- pull_requests: +30399 pull_request: https://github.com/python/cpython/pull/32346
[issue47230] New compiler warnings with latest zlib
Jeremy Kloth added the comment: It seems so, as the zlib update was also backported to 3.9 and 3.10.
[issue46576] test_peg_generator is extremely slow
Jeremy Kloth added the comment: My PR 32338 further reduces the test's runtime by another ~25%: on my machine, from 85s to 65s.
[issue46576] test_peg_generator is extremely slow
Change by Jeremy Kloth : -- nosy: +jkloth nosy_count: 3.0 -> 4.0 pull_requests: +30394 pull_request: https://github.com/python/cpython/pull/32338
[issue47230] New compiler warnings with latest zlib
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +30393 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32337
[issue47230] New compiler warnings with latest zlib
New submission from Jeremy Kloth :

The latest zlib (1.2.12) introduces 3 new compiler warnings. As zlib is an external library, we do not generally patch it, so I propose to simply silence the warnings for the offending file. For reference, the problem comes from:

--- deflate.h.old	2022-04-05 11:27:26.869042900 -0600
+++ deflate.h.new	2022-04-05 11:26:11.512039600 -0600
@@ -329,8 +329,8 @@
 # define _tr_tally_dist(s, distance, length, flush) \
   { uch len = (uch)(length); \
     ush dist = (ush)(distance); \
-    s->sym_buf[s->sym_next++] = dist; \
-    s->sym_buf[s->sym_next++] = dist >> 8; \
+    s->sym_buf[s->sym_next++] = (uch)dist; \
+    s->sym_buf[s->sym_next++] = (uch)(dist >> 8); \
     s->sym_buf[s->sym_next++] = len; \
     dist--; \
     s->dyn_ltree[_length_code[len]+LITERALS+1].Freq++; \

-- components: Build, Windows messages: 416792 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: New compiler warnings with latest zlib type: compile error versions: Python 3.11
[issue45354] test_winconsoleio fails on Windows 11
Change by Jeremy Kloth : -- resolution: -> fixed status: open -> closed
[issue47131] Speedup test_unparse
Change by Jeremy Kloth : -- stage: patch review -> resolved status: open -> closed
[issue47131] Speedup test_unparse
Jeremy Kloth added the comment: Resolved with merged PR. -- resolution: -> fixed
[issue47203] ImportError: DLL load failed while importing binascii: %1 is not a valid Win32 application.
Jeremy Kloth added the comment:

Well, to really see where things are going wrong, there is always the verbose option when launching Python:

py -v -c "import binascii"

This will output a lot of information but should help pin down exactly what is being imported when the error occurs. If the above doesn't give enough of a clue as to the offending file, try '-vv' instead of '-v'. This increases the amount of debugging output, but does show each filename attempted for each particular import. You will need to scroll back quite a while to get to the error location (past the cleanup messages).
[issue47203] ImportError: DLL load failed while importing binascii: %1 is not a valid Win32 application.
Jeremy Kloth added the comment:

This error will occur when there is a 64-bit/32-bit conflict. Normally, Python extension modules are installed in architecture-dependent locations; however, user-installed modules (pip install) can share a path referred to as the "user site". A quick check from the command line can give you its location:

py -m site

A scan of the paths listed as USER_BASE and USER_SITE might reveal a binascii.pyd which would be shadowing the normally built-in module. Another source of conflict would be a PYTHONPATH environment variable, if set.

-- nosy: +jkloth
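The manual scan described above can also be scripted. This is a sketch only; `find_shadowing` is a hypothetical helper written for illustration, not part of Python or the tracker discussion:

```python
import os
import site

# Scan the user-site and site-packages directories for a stray extension
# module (e.g. binascii.pyd) that would shadow the built-in module.
def find_shadowing(name="binascii"):
    candidates = [site.getusersitepackages(), *site.getsitepackages()]
    hits = []
    for path in candidates:
        if not os.path.isdir(path):
            continue
        for entry in os.listdir(path):
            if entry.startswith(name) and entry.endswith((".pyd", ".dll")):
                hits.append(os.path.join(path, entry))
    return hits

print(find_shadowing() or "no shadowing copies found")
```

On a healthy installation this prints "no shadowing copies found"; any path it does print is a candidate for the 32-bit/64-bit conflict.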
[issue37387] test_compileall fails randomly on Windows when tests are run in parallel
Jeremy Kloth added the comment: bpo-47089 is a duplicate of this issue and is fixed. This issue should be closed as well.
[issue47089] Avoid sporadic failure of test_compileall on Windows
Change by Jeremy Kloth : -- pull_requests: +30311 pull_request: https://github.com/python/cpython/pull/32240
[issue47089] Avoid sporadic failure of test_compileall on Windows
Change by Jeremy Kloth : -- nosy: +vstinner
[issue47131] Speedup test_unparse
Change by Jeremy Kloth : -- nosy: +vstinner type: -> performance
[issue47131] Speedup test_unparse
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +30212 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32132
[issue47131] Speedup test_unparse
New submission from Jeremy Kloth :

The string building and comparison done by ast.dump() causes significant slowdowns on the large ASTs produced when round-tripping the stdlib. This PR avoids that cost by doing a direct node traversal and comparison. It results in a 33% runtime improvement on the machines I have access to.

-- components: Tests messages: 416082 nosy: jkloth priority: normal severity: normal status: open title: Speedup test_unparse versions: Python 3.10, Python 3.11, Python 3.9
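The idea can be sketched roughly as follows. This is an illustration of the technique (direct node traversal instead of comparing ast.dump() strings), not the actual code from the PR:

```python
import ast

# Compare two ASTs by walking nodes field-by-field, avoiding the large
# intermediate strings that ast.dump() would build for a whole module.
def nodes_equal(a, b):
    if type(a) is not type(b):
        return False
    if isinstance(a, ast.AST):
        return all(nodes_equal(getattr(a, f, None), getattr(b, f, None))
                   for f in a._fields)
    if isinstance(a, list):
        return len(a) == len(b) and all(map(nodes_equal, a, b))
    return a == b

# Two parses of the same source compare equal; a differing constant does not.
same = nodes_equal(ast.parse("x = 1 + 2"), ast.parse("x = 1 + 2"))
diff = nodes_equal(ast.parse("x = 1 + 2"), ast.parse("x = 1 + 3"))
```

For large trees this avoids materializing the full dump of both ASTs before the first differing node is found.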
[issue46716] regrtest didn't respect the timeout when running test_subprocess on AMD64 Windows11 3.x
Change by Jeremy Kloth : -- pull_requests: +30168 pull_request: https://github.com/python/cpython/pull/32081
[issue46716] regrtest didn't respect the timeout when running test_subprocess on AMD64 Windows11 3.x
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +30167 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32079
[issue44336] Windows buildbots hang after fatal exit
Change by Jeremy Kloth : -- pull_requests: +30146 pull_request: https://github.com/python/cpython/pull/32048
[issue46788] regrtest fails to start on missing performance counter names
Jeremy Kloth added the comment: Backports state that they are ready... I'm just a little uneasy, as I've never used cherry_picker before. 3.10 went smoothly, but 3.9 required manual merging.
[issue44336] Windows buildbots hang after fatal exit
Change by Jeremy Kloth : -- pull_requests: +30139 pull_request: https://github.com/python/cpython/pull/32050
[issue46788] regrtest fails to start on missing performance counter names
Jeremy Kloth added the comment: With 3.8 so close to security-only, I doubt it is worth it anymore. I've just run the PR against HEAD and it still works as-is, so it should be good to go. -- versions: -Python 3.8
[issue47089] Avoid sporadic failure of test_compileall on Windows
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +30127 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32037
[issue47089] Avoid sporadic failure of test_compileall on Windows
New submission from Jeremy Kloth :

Testing on Windows occasionally has issues in test_compileall when running with multiple processes. This is because other test files import stdlib modules at the same time that compileall is doing its own testing. While not fatal (test_compileall succeeds on re-run), the transient warnings obscure the test results for other "real" warnings (e.g., compiler warnings) without digging into each run separately. This can be avoided by using the PYTHONPYCACHEPREFIX functionality to compile the stdlib modules locally.

-- components: Tests messages: 415711 nosy: jkloth priority: normal severity: normal status: open title: Avoid sporadic failure of test_compileall on Windows versions: Python 3.10, Python 3.11, Python 3.9
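The PYTHONPYCACHEPREFIX mechanism mentioned above can be demonstrated in isolation. This is an illustration of the general approach, assumed from the description; it is not the actual change made in the PR:

```python
import os
import shutil
import subprocess
import sys
import tempfile

# Point PYTHONPYCACHEPREFIX at a private directory so byte-compilation
# writes its .pyc files there instead of into the stdlib's own
# __pycache__ directories (avoiding races with concurrent test processes).
cache = tempfile.mkdtemp()
try:
    env = dict(os.environ, PYTHONPYCACHEPREFIX=cache)
    subprocess.run([sys.executable, "-c", "import json"], env=env, check=True)
    # Collect the bytecode files written under the private prefix.
    cached = [f for _, _, files in os.walk(cache)
              for f in files if f.endswith(".pyc")]
finally:
    shutil.rmtree(cache)

print(len(cached), ".pyc files written under the private prefix")
```

Each test process given its own prefix writes bytecode into its own tree, so no two processes touch the same cache file.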
[issue46788] regrtest fails to start on missing performance counter names
Jeremy Kloth added the comment: OK, I know it has been a busy month for Python, but this issue is really hampering my bug-fixing efforts. It makes the complete regrtest run useless for me; I am required to run each affected test directly, so it is possible to miss side effects of the changes made. The original PR is now 9 months old.
[issue46084] Python 3.9.6 scan_dir returns filenotfound on long paths, but os_walk does not
Change by Jeremy Kloth : -- pull_requests: -30123
[issue46084] Python 3.9.6 scan_dir returns filenotfound on long paths, but os_walk does not
Change by Jeremy Kloth : -- pull_requests: +30123 pull_request: https://github.com/python/cpython/pull/32032
[issue47084] Statically allocated Unicode objects leak cached representations
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +30122 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32032
[issue46084] Python 3.9.6 scan_dir returns filenotfound on long paths, but os_walk does not
Change by Jeremy Kloth : -- pull_requests: -30121
[issue46084] Python 3.9.6 scan_dir returns filenotfound on long paths, but os_walk does not
Change by Jeremy Kloth : -- keywords: +patch nosy: +jkloth nosy_count: 7.0 -> 8.0 pull_requests: +30121 stage: -> patch review pull_request: https://github.com/python/cpython/pull/32032
[issue47084] Statically allocated Unicode objects leak cached representations
New submission from Jeremy Kloth :

The newly implemented statically allocated Unicode objects do not clear their cached representations (wstr and utf-8) at exit, causing leaked blocks at exit (see also issue46857). At issue are the Unicode objects created by deepfreeze and the 1-character strings (ordinals < 256).

-- components: Interpreter Core, Unicode messages: 415695 nosy: ezio.melotti, jkloth, vstinner priority: normal severity: normal status: open title: Statically allocated Unicode objects leak cached representations versions: Python 3.11
[issue46857] Python leaks one reference at exit on Windows
Jeremy Kloth added the comment: Did you also modify initconfig.c? That part is required, as the usual processing of the PYTHONDUMPREFS environment variable needed to enable tracing output is ignored with -I.
[issue46857] Python leaks one reference at exit on Windows
Jeremy Kloth added the comment:

> ./configure --enabled-shared --with-py-debug --with-trace-refs

(that's what I get for typing from memory):

./configure --enable-shared --with-pydebug --with-trace-refs

> > I proposed GH-31594 to fix this macro.
> >
> > Even using that change, I still have negative refs (but I still have
> > Py_TRACE_REFS defined)

I initially missed the _PySet_Dummy change; with that, total refs (w/o dump_refs) is now 0.
[issue46857] Python leaks one reference at exit on Windows
Jeremy Kloth added the comment:

> Oh wow. How did you find this leak? Did you read all C files and check for
> code specific to Windows? How did you proceed? Well spotted!

Initially, I modified Py_INCREF to dump the object (addr & tp_name) on the initial inc (ob_refcnt == 1) and Py_DECREF to dump on the final dec (ob_refcnt == 0). I then filtered that list (~65K) to find objects not dealloc'ed. Given those names (~200), I cross-checked with source files containing 'ifdef MS_WINDOWS' (and related spellings).

> Which command do you type? Do you pass -I option to Python?

For both, as -I disables environment lookup:

--- a/Python/initconfig.c
+++ b/Python/initconfig.c
@@ -757,6 +757,7 @@ config_init_defaults(PyConfig *config)
     config->user_site_directory = 1;
     config->buffered_stdio = 1;
     config->pathconfig_warnings = 1;
+    config->dump_refs = 1;
 #ifdef MS_WINDOWS
     config->legacy_windows_stdio = 0;
 #endif

For Linux:

./configure --enabled-shared --with-py-debug --with-trace-refs
make build_all
LD_LIBRARY_PATH=$PWD ./python -X showrefcount -I -c pass

For Windows:

Add "#define Py_TRACE_REFS 1" to PC\pyconfig.h
build.bat -d -e
amd64\python_d.exe -X showrefcount -I -c pass

I proposed GH-31594 to fix this macro. Even using that change, I still have negative refs (but I still have Py_TRACE_REFS defined)

-- nosy: +jeremy.kloth
[issue46857] Python leaks one reference at exit on Windows
Jeremy Kloth added the comment: Note that an allocated block is still leaked. Strangely, when using dump_refs, the total refs are much more negative (-12 on Linux, -13 on Windows).
[issue46857] Python leaks one reference at exit on Windows
Jeremy Kloth added the comment:

Good news: the difference on Windows was easy enough to find. Bad news: total refs are now negative!

--- a/Objects/exceptions.c
+++ b/Objects/exceptions.c
@@ -3647,8 +3647,7 @@ _PyBuiltins_AddExceptions(PyObject *bltinmod)
 #define INIT_ALIAS(NAME, TYPE) \
     do { \
-        Py_INCREF(PyExc_ ## TYPE); \
-        Py_XDECREF(PyExc_ ## NAME); \
+        Py_XSETREF(PyExc_ ## NAME, PyExc_ ## TYPE); \
         PyExc_ ## NAME = PyExc_ ## TYPE; \
         if (PyDict_SetItemString(mod_dict, # NAME, PyExc_ ## NAME)) { \
             return -1; \

As the PyExc_* aliases are just deprecated names for PyExc_OSError, there is no need to increment their refcounts. Or they could be decremented in Fini(). Or they could finally be removed entirely.

-- nosy: +jkloth
[issue46789] Restore caching of externals on Windows buildbots
Jeremy Kloth added the comment:

> Would it be possible to create a download cache somewhere outside the Python
> source tree, so "git clean -fdx" would not remove this cache?

I was thinking of locating it next to the checkout directory. The current structure is:

[worker root]
    [builder root]
        [checkout]

I propose to add the externals directory within the builder root, so each branch would still have a unique copy.

-- nosy: +jeremy.kloth
[issue46789] Restore caching of externals on Windows buildbots
Jeremy Kloth added the comment: I personally would like to see caching restored so as to keep the duration of buildbot runs as low as possible. The repeated fetching effectively doubles compilation time for my Win11 builder.
[issue46790] Normalize handling of negative timeouts in subprocess.py
Jeremy Kloth added the comment: Oh, I forgot to add that I'm in favor of following the threading.py behavior of allowing <=0 to mean "non-blocking" (i.e., just check). This would probably also benefit from a documentation update for clarity.
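The threading.py behavior referred to above can be observed directly. A small sketch, assuming a normal CPython threading module:

```python
import threading

# In threading, a timeout <= 0 means "check once, do not block":
# waiting on an unset Event with a negative timeout returns immediately
# with False rather than blocking (or wrapping into a huge wait).
ev = threading.Event()
result = ev.wait(timeout=-5)
print("wait(-5) returned:", result)
```

Adopting the same convention in subprocess would make a negative timeout an immediate "is it done yet?" check on both platforms.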
[issue46790] Normalize handling of negative timeouts in subprocess.py
Change by Jeremy Kloth : -- nosy: +eryksun, vstinner
[issue46790] Normalize handling of negative timeouts in subprocess.py
New submission from Jeremy Kloth :

As a follow-on to bpo-46716, the various timeout parameters currently deal with negative values differently on POSIX and Windows. On POSIX, a negative value is treated the same as 0: check completion and raise TimeoutExpired if the process is still running. On Windows, the negative value is treated as unsigned and ultimately waits for ~49 days. While the Windows behavior is obviously wrong and will be fixed internally as part of bpo-46716, that still leaves the question of what to do with timeouts coming from user space. The current documentation just states that TimeoutExpired is raised after `timeout` seconds. A liberal reading of the documentation could lead one to believe any value <=0 would suffice for an "active" check (the POSIX behavior). Or, the documentation could be amended so that negative values are invalid, with range checking applied in the user-facing functions.

-- components: Library (Lib) messages: 413496 nosy: jkloth priority: normal severity: normal status: open title: Normalize handling of negative timeouts in subprocess.py versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9
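The POSIX-style normalization described above amounts to clamping the computed remaining time at zero. A minimal sketch (`remaining_time` is a hypothetical helper for illustration, not the actual subprocess.py code):

```python
import time

# Clamp the remaining time so an already-expired deadline means
# "check now" (0) rather than a negative value that an unsigned
# DWORD cast would turn into a ~49-day wait.
def remaining_time(endtime):
    return max(endtime - time.monotonic(), 0)

# A deadline 5 seconds in the past yields 0, never a negative number.
expired = remaining_time(time.monotonic() - 5)
```

With this clamp in place, both platforms would treat an expired timeout as a single non-blocking completion check.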
[issue46789] Restore caching of externals on Windows buildbots
Change by Jeremy Kloth : -- nosy: +pablogsal, vstinner
[issue46789] Restore caching of externals on Windows buildbots
New submission from Jeremy Kloth : A recent change to the buildmaster config effectively disabled the caching of the externals for Windows buildbots: https://github.com/python/buildmaster-config/pull/255 If the caching is desired, a simple change to the buildmaster config is needed (define EXTERNALS_DIR in the build environment). Or, to continue with fetching them each run, the buildbot scripts in Tools\buildbot can be simplified. Once a course of action is determined I can develop the requisite PR(s) in the appropriate tracker. -- components: Build, Tests, Windows messages: 413494 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Restore caching of externals on Windows buildbots
[issue46788] regrtest fails to start on missing performance counter names
Change by Jeremy Kloth : -- nosy: +vstinner
[issue46788] regrtest fails to start on missing performance counter names
New submission from Jeremy Kloth :

When attempting to run the test harness, I receive the following:

Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "C:\Public\Devel\cpython\main\Lib\test\__main__.py", line 2, in 
    main()
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 736, in main
    Regrtest().main(tests=tests, **kwargs)
  File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 396, in temp_dir
    yield path
  File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 427, in change_cwd
    yield os.getcwd()
  File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 449, in temp_cwd
    yield cwd_dir
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 658, in main
    self._main(tests, kwargs)
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 704, in _main
    self.win_load_tracker = WindowsLoadTracker()
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 41, in __init__
    self.start()
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 70, in start
    counter_name = self._get_counter_name()
  File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 90, in _get_counter_name
    system = counters_dict['2']
KeyError: '2'

This is due to my machine missing the localized names for the performance counters. Other performance monitoring tools operate just fine. While I have been working around this issue for some time, it has become difficult to separate the workarounds from actual changes in the test harness.

The PR (https://github.com/python/cpython/pull/26578) from https://bugs.python.org/issue44336 also solves this issue by accessing the counters directly instead of relying on their localized names.

-- components: Tests, Windows messages: 413493 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: regrtest fails to start on missing performance counter names versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9
[issue46778] Enable parallel compilation on Windows builds
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +29535 stage: -> patch review pull_request: https://github.com/python/cpython/pull/31390
[issue46778] Enable parallel compilation on Windows builds
New submission from Jeremy Kloth :

While the current build does enable building projects in parallel (msbuild -m), the compilation of each project's source files is done sequentially. For large projects like pythoncore or _freeze_module, this can take quite some time. This simple PR speeds things up significantly, ~2x on the machines I have access to.

-- components: Build, Windows messages: 413412 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable parallel compilation on Windows builds versions: Python 3.11
[issue46716] regrtest didn't respect the timeout when running test_subprocess on AMD64 Windows11 3.x
Jeremy Kloth added the comment:

> > the fix should be as simple as coercing the timeout values to >= 0.
>
> Popen._remaining_time() should return max(endtime - _time(), 0).

That was my first instinct as well; however, that change would also affect more of the Popen behavior and need a much more thorough investigation of the POSIX side of Popen.

> Popen._wait() should raise OverflowError if the timeout is too large for the
> implementation. In Windows, the upper limit in milliseconds is
> `_winapi.INFINITE - 1` (about 49.7 days). It's important to only allow the
> timeout in milliseconds to be _winapi.INFINITE when `timeout is None`.

I agree.

> The DWORD converter in _winapi needs to subclass unsigned_long_converter. The
> current implementation based on the legacy format unit "k" is too lenient.
> Negative values and values that are too large should fail.

While I agree this is a correct solution, I fear the potential 3rd-party breakage alone should bump this to its own issue.

I believe that this then leads to the following action items for this issue:

1) modify Windows Popen._wait() to raise on out-of-bounds values [< 0 or >= INFINITE]
2) cap the Popen._remaining_time() return value to >= 0
3) change the _winapi DWORD converter to unsigned long
4) use Job objects to group Windows processes for termination

Have I missed anything? I should be able to knock out PRs for these today.

-- Jeremy Kloth

-- nosy: +jeremy.kloth
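Action item 1 can be sketched as a validation step before the millisecond conversion. This is a hypothetical helper for illustration, not the actual Popen._wait() code:

```python
# Value of _winapi.INFINITE, the sentinel for an unbounded wait.
INFINITE = 0xFFFFFFFF

# Validate a timeout (in seconds) and convert it to the milliseconds
# value handed to WaitForSingleObject; only None may wait forever.
def timeout_to_ms(timeout):
    if timeout is None:
        return INFINITE
    ms = int(timeout * 1000)
    if ms < 0 or ms >= INFINITE:
        raise OverflowError("timeout out of range for a DWORD wait")
    return ms
```

With this check, a negative timeout fails loudly instead of silently wrapping into a multi-week wait.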
[issue46716] regrtest didn't respect the timeout when running test_subprocess on AMD64 Windows11 3.x
Jeremy Kloth added the comment:

I've been able to reproduce the test_subprocess hang locally. The responsible function is subprocess.run(). The test case, test_timeout(), uses a small timeout value (0.0001) which, under enough load, can cause the run() call to hang. A judicious use of prints in subprocess.py reveals that the timeout passed to wait() ends up being negative. That value, once cast to a DWORD, ultimately causes a very long wait (0xfff2, in my testing). It does seem that only the Windows Popen._wait() cannot handle negative timeout values, so the fix should be as simple as coercing the timeout values to >= 0.

-- Added file: https://bugs.python.org/file50623/process.py
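The DWORD cast described above can be illustrated in pure Python. A toy model of the 32-bit unsigned reinterpretation, not the stdlib code:

```python
# When a negative millisecond count reaches the C level, the DWORD
# conversion reinterprets it as a 32-bit unsigned value.
def as_dword(ms):
    return ms & 0xFFFFFFFF  # 32-bit unsigned wraparound

# A slightly negative remaining time becomes an enormous wait:
# -14 ms -> 0xFFFFFFF2 ms, i.e. roughly 49.7 days.
wrapped = as_dword(-14)
```

This is why a timeout that merely drifts a few milliseconds negative under load manifests as an apparent hang rather than an immediate TimeoutExpired.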
[issue46716] regrtest didn't respect the timeout when running test_subprocess on AMD64 Windows11 3.x
Jeremy Kloth added the comment:

The test only completed once I purposefully terminated the offending Python process. The only identifying information I noticed was the command line of `-c "while True: pass"`, indicating it was stuck in either test_call_timeout() or test_timeout() in test_subprocess.py.

Something to note is that Windows does not, by default, have a concept of process trees whereby terminating a parent automatically kills its children. Eryk Sun may have additional ideas on how this desired behavior could be accomplished.

-- nosy: +eryksun, jkloth ___ Python tracker <https://bugs.python.org/issue46716> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46274] backslash creating statement out of nothing
New submission from Jeremy :

A source of one or more backslash-escaped newlines, and one final newline, is not tokenized the same as a source where those lines are "manually joined". The source

```
\
\
\
```

produces the tokens NEWLINE, ENDMARKER when piped to the tokenize module, whereas the manually joined source (a single blank line)

```

```

produces the tokens NL, ENDMARKER. What I expect is to receive only one NL token from both sources. As per the documentation, "Two or more physical lines may be joined into logical lines using backslash characters" ... "A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored (i.e., no NEWLINE token is generated)".

And, because these logical lines are not being ignored, if there are spaces/tabs, INDENT and DEDENT tokens are also being unexpectedly produced. The source (spaces before the backslash)

```
    \
```

produces the tokens INDENT, NEWLINE, DEDENT, ENDMARKER, whereas the source with only the spaces

```
    
```

produces the tokens NL, ENDMARKER.

-- components: Parser messages: 409811 nosy: lys.nikolaou, pablogsal, ucodery priority: normal severity: normal status: open title: backslash creating statement out of nothing versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue46274> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
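The comparison can be reproduced programmatically with the tokenize module (a quick sketch; the exact token stream may vary across interpreter versions):

```python
import io
import tokenize

def token_names(source):
    """Return the names of the tokens produced for `source`."""
    return [tokenize.tok_name[tok.type]
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

# Three backslash-escaped newlines plus a final newline...
print(token_names("\\\n\\\n\\\n\n"))
# ...versus the manually joined equivalent, a single blank line.
print(token_names("\n"))
```

The second call yields NL, ENDMARKER; per this report, the first yields NEWLINE, ENDMARKER instead.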
[issue46091] IndendationError from multi-line indented statements
Jeremy added the comment:

Wow, this was a fast turnaround! I was going to spin some cycles on this, but would not have seen the solution in 50m.

-- ___ Python tracker <https://bugs.python.org/issue46091> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46091] IndendationError from multi-line indented statements
New submission from Jeremy :

At some point in 3.9, Python appears to have stopped accepting source that starts with an indent, then a '\', then the indented statement. From the lexical analysis [1]: "Indentation cannot be split over multiple physical lines using backslashes; the whitespace up to the first backslash determines the indentation."

Running the attached program under 3.8.12 I get:

```
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
```

But running under 3.10.0 I get:

```
  File "/Users/jeremyp/tmp/nodent.py", line 3
    """Print a Fibonacci series up to n."""
    ^
IndentationError: expected an indented block after function definition on line 1
```

Running under 3.9.9 also gives an IndentationError, both with and without -X oldparser. So this doesn't seem directly related to the new parser, but it seems likely to be fallout from the general grammar restructuring.

IMHO it isn't a particularly nice feature for the language to have, especially since not all lines like '\' behave the same. But it was there and documented for many years, so it should probably be put back. Does a core developer agree that the implementation is not following the spec?

[1]: https://docs.python.org/3/reference/lexical_analysis.html#indentation

-- components: Parser files: nodent.py messages: 408651 nosy: lys.nikolaou, pablogsal, ucodery priority: normal severity: normal status: open title: IndendationError from multi-line indented statements versions: Python 3.10, Python 3.11, Python 3.9 Added file: https://bugs.python.org/file50496/nodent.py ___ Python tracker <https://bugs.python.org/issue46091> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
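A minimal, self-contained probe for the construct (a sketch; whether the split-indent form is accepted depends on the interpreter version, per this report):

```python
def accepts(src):
    """Report whether this interpreter compiles the given source."""
    try:
        compile(src, "<repro>", "exec")
        return True
    except SyntaxError:  # IndentationError is a subclass of SyntaxError
        return False

# Indentation supplied on a backslash-continuation line, statement on the next:
split_indent = "def f():\n    \\\nreturn 1\n"
# The same function with ordinary indentation, which always compiles:
plain = "def f():\n    return 1\n"

print("split-indent form accepted:", accepts(split_indent))
print("plain form accepted:", accepts(plain))
```

Under the documented rule, the whitespace before the backslash gives the continued logical line an indentation of four, so both forms should be equivalent.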
[issue45806] Cannot Recover From StackOverflow in 3.9 Tests
Jeremy Kloth added the comment: I'll note that it also fails on first run on the Windows 11 builder: https://buildbot.python.org/all/#/builders/737/builds/65 -- components: +Windows nosy: +paul.moore, steve.dower, tim.golden, zach.ware ___ Python tracker <https://bugs.python.org/issue45806> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45806] Cannot Recover From StackOverflow in 3.9 Tests
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue45806> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45545] chdir __exit__ is not safe
Jeremy added the comment:

> How common do you expect such errors to be though? Do you expect them to be
> more or less common than with os.chdir()? Do you expect the mitigations to
> be any different than with a failing os.chdir()?

It has come up for me with some frequency. But I'm sure my use case is an outlier: stress-testing filesystems and working on backup/restore. The thing about needing to access long paths is that you have to do it with these leaps of <= PATH_MAX until you get close enough to the end. Whether you use relative paths or open fds, you have to get there slowly and then walk back along the same path. This would be greatly simplified by contextlib.chdir if it isn't restricted to absolute paths; otherwise it will remain as much a manual effort as ever.

It also has to do with the scope of any try block. If we leave any exceptions to bubble up to the caller, then any code in the with block is also being guarded. Presumably the caller used chdir because they want to do more os operations in the with block, but they won't be able to sort out whether the ENOENT or similar error came from the context manager or their own, perhaps more critical, os operations.

> If the context manager isn't going to address the long-path case reliably
> using either a file-descriptor approach or repeated relative chdir() calls,
> then I think failing early like this is the next best choice.

I thought about going down the fd road, but as not every platform can chdir to a fd, the relative way back would have to be implemented anyway. It didn't seem worth it to have different platforms behave differently on exiting the context manager.

-- ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
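The "leaps of <= PATH_MAX" descent described above could be sketched like this (hypothetical helper, not part of any proposed patch; POSIX-style separators assumed):

```python
import os

def chdir_deep(path, max_hop=64):
    """Descend `path` in relative hops of at most `max_hop` components,
    so no single os.chdir() argument risks exceeding PATH_MAX."""
    if os.path.isabs(path):
        os.chdir(os.sep)  # start the walk from the root
    parts = [p for p in path.split(os.sep) if p]
    for i in range(0, len(parts), max_hop):
        os.chdir(os.path.join(*parts[i:i + max_hop]))
```

Walking back out is the mirror image, which is exactly why a context manager that only remembers one absolute string cannot help here.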
[issue45545] chdir __exit__ is not safe
Jeremy added the comment:

An LBYL check won't always raise errors early, as you point out. It will give earlier warnings in a lot of cases, but makes contextlib.chdir usable in fewer places than os.chdir. Some return paths will always be errors, and some will be technically recoverable but too difficult to detect and/or fragile. That's why I think any solution should incorporate the `ignore_errors` flag. It's pretty ugly to wrap a context manager in a try/except just because you were trying to clean up after whatever you were doing but the cwd changed in unexpected ways, maybe out of your control.

-- ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45545] chdir __exit__ is not safe
Change by Jeremy : -- keywords: +patch pull_requests: +27481 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29218 ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45545] chdir __exit__ is not safe
Jeremy added the comment:

> If os.chdir is in os.supports_fd, the context manager can use dirfd =
> os.open(os.getcwd(), os.O_RDONLY). Using an fd should also work around the
> deleted directory case, though POSIX doesn't specify whether fchdir()
> succeeds in this case. It does in Linux, and the resulting state is the
> same as deleting the current working directory.

Yes, I was considering an open fd to guarantee a return to the old pwd as long as it existed. I said as much on the mailing list, but was uncertain whether it was possible to deadlock holding on to arbitrary directory handles. If it's possible at all to deadlock, and I think it is, I don't think we can use this; not in a stdlib implementation. The reason for the deadlock is too hidden from the user, and debugging it would be difficult. It would be fine for a user implementation where they understood the implications and made other guarantees about their traversals, but we can't be sure everyone using this implementation would read and understand this limitation.

I hadn't considered systems that don't support fd vops. I also hadn't considered crossing mount points and whether that could cause any additional error cases. I don't think it can, not that we could correct it in user-space with just os.chdir().

> In Windows, SetCurrentDirectoryW() resolves the full path. So the result
> from os.getcwd() should always work with os.chdir(). The context manager
> could prevent the original directory from getting deleted by opening it
> without delete sharing (e.g. via _winapi.CreateFile). Though it may be more
> reasonable to just let it fail to restore the original working directory.

Thanks, I am much less familiar with these APIs. So I believe you are saying the implementation as-is will work in all reasonable cases on Windows. I think attempting to move back to a directory that has been removed should be an error, especially if we give the same behavior on Linux. Producing a FileNotFoundError gives the user the power to decide if they do in fact want to handle that differently.

-- ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
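The fd-based variant discussed above could look roughly like this (a sketch only, POSIX platforms; `os.chdir in os.supports_fd` is the documented way to check for fchdir support):

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir_via_fd(path):
    """Restore the old cwd through an open directory fd instead of a path
    string, sidestepping PATH_MAX on the way back (POSIX only)."""
    if os.chdir not in os.supports_fd:
        raise NotImplementedError("this platform cannot chdir to an fd")
    old_fd = os.open(".", os.O_RDONLY)  # hold the old cwd open
    try:
        os.chdir(path)
        yield
    finally:
        os.chdir(old_fd)  # fchdir back through the held descriptor
        os.close(old_fd)
```

Holding that descriptor for the lifetime of the with block is precisely the hidden-resource concern raised in the comment above.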
[issue45545] chdir __exit__ is not safe
Jeremy added the comment: Yes, precisely. Besides being an unreachable long abs path, it might have been deleted since last visited. I’m working on a few more odd test cases. -- ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45545] chdir __exit__ is not safe
New submission from Jeremy :

The way that contextlib.chdir currently restores the old working directory, an exception is raised if the program was already close to or beyond a system's PATH_MAX. The context manager has no issue crafting the path in __enter__, because os.getcwd() can return a path that is longer than PATH_MAX, but when that path is used in __exit__, os.chdir() cannot accept a path that long.

I think an __exit__ should be as cautious as possible about raising, as the exception can occur far away from where the context manager was created. It also doesn't reflect the programmer actually using the context manager incorrectly: they might not have any control over, or care, where the process was started, yet if it happened to already be at a deep path when launched, any use of chdir anywhere would cause exceptions.

I have tested this on macOS 11.13 using APFS, but I am sure it would also fail on other Macs and Linuxes. I don't know about Windows.

Note I originally created this test as a patch to Lib/test/test_contextlib.py, but libregrtest uses os.getcwd() in its runner, and that disrupts the traceback and misidentifies the cause of failure.

Test file:

```python
import os
import shutil
from contextlib import chdir


def test_long_path():
    # NAME_MAX of 255
    long_filename = "_".join(["testdir"] * 32)
    long_path_end = startingwd = os.getcwd()
    try:
        # I thought this would have to be 16, i.e. a path length over 4096, PATH_MAX
        # but seemingly just crossing 1050 is enough to fail
        for _ in range(4):
            os.mkdir(long_filename)
            os.chdir(long_filename)
            long_path_end = os.path.join(long_path_end, long_filename)
        os.mkdir(long_filename)
        long_path_end = os.path.join(long_path_end, long_filename)

        with chdir(long_filename):
            #self.assertEqual(os.getcwd(), long_path_end)
            assert os.getcwd() == long_path_end
            print("passed")
    finally:
        shutil.rmtree(os.path.join(startingwd, long_filename), ignore_errors=True)


test_long_path()
```

And output:

```
$ ./python.exe ./test_chdir.py
passed
Traceback (most recent call last):
  File "/Users/ucodery/git/cpython/./test_chdir.py", line 27, in <module>
    test_long_path()
  File "/Users/ucodery/git/cpython/./test_chdir.py", line 19, in test_long_path
    with chdir(long_filename):
    ^^
  File "/Users/ucodery/git/cpython/Lib/contextlib.py", line 781, in __exit__
    os.chdir(self._old_cwd.pop())
    ^
OSError: [Errno 63] File name too long: '/Users/ucodery/git/cpython/testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir/testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir/testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir/testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir_testdir'
```

-- files: test_chdir.py messages: 404534 nosy: ucodery priority: normal severity: normal status: open title: chdir __exit__ is not safe type: behavior versions: Python 3.11 Added file: https://bugs.python.org/file50377/test_chdir.py ___ Python tracker <https://bugs.python.org/issue45545> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45354] test_winconsoleio fails on Windows 11
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +27062 stage: -> patch review pull_request: https://github.com/python/cpython/pull/28712 ___ Python tracker <https://bugs.python.org/issue45354> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45354] test_winconsoleio fails on Windows 11
Jeremy Kloth added the comment: Note that I have a pending PR for adding a Windows 11 build worker that will, once merged, help in testing a solution. -- ___ Python tracker <https://bugs.python.org/issue45354> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45354] test_winconsoleio fails on Windows 11
New submission from Jeremy Kloth :

It appears there have been some console-related changes in Windows 11

======================================================================
ERROR: test_open_name (test.test_winconsoleio.WindowsConsoleIOTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\Jeremy\source\repos\cpython\lib\test\test_winconsoleio.py", line 95, in test_open_name
    f = open('C:/con', 'rb', buffering=0)
    ^
FileNotFoundError: [Errno 2] No such file or directory: 'C:/con'

======================================================================
FAIL: test_conout_path (test.test_winconsoleio.WindowsConsoleIOTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\Jeremy\source\repos\cpython\lib\test\test_winconsoleio.py", line 118, in test_conout_path
    self.assertIsInstance(f, ConIO)
    ^^^
AssertionError: <_io.FileIO name='C:\\Users\\Jeremy\\AppData\\Local\\Temp\\tmpoqx235b0\\CONOUT$' mode='wb' closefd=True> is not an instance of

-- components: Tests, Windows messages: 403090 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_winconsoleio fails on Windows 11 type: behavior ___ Python tracker <https://bugs.python.org/issue45354> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment: Yes, I would agree that the new APIs are a useful addition regardless of the PyThread_exit_thread change. As far as the proposed `Py_SetThreadExitCallback` that seems like a fine thing for applications to use, as long as it doesn't impact how extensions need to be written to be safe from crashes/memory corruption. So for example if the default is to hang, then changing it to log and then hang, or optionally log and terminate the program, would be fine, since extensions aren't affected either way. Conversely, if one of the possible behaviors may be `_endthreadex` or `pthread_exit`, then libraries must be written to be safe under that behavior anyway, which is unfortunate. Furthermore, say for a library that only supports POSIX, maybe it is written to be safe under `pthread_exit` because it uses destructors to do cleanup, but then it will cause deadlock if the callback chooses to hang the thread instead. Thus, I think allowing the callback to change the behavior in a way that could impact extensions is not a great idea. The callback also doesn't seem like a very good mechanism for an extension that is incompatible with `pthread_exit` or `_endthreadex`, such as one using pybind11, to try to mitigate that problem, since an individual library shouldn't be changing application-wide behavior unless the library is specifically being used by the application for that purpose. -- ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment: To be clear, the problem I'm trying to address here is not specific to embedding Python in a C++ application. In fact the issue came to my attention while using Python directly, but loading an extension module that was written in C++ using the popular pybind11 library. If we continue having Python call `pthread_exit` and `_endthreadex`, we are imposing strong constraints on call stacks that call the Python API. Granted, hanging a thread is also not something a well-behaved library should do, but it is at least slightly better than killing the thread. In a sense hanging is also logical, since the thread has requested to block until the GIL can be acquired, and the GIL cannot be acquired. I have described a number of problems caused by `pthread_exit`/`_endthreadex` that are fixed by hanging. Can you help me understand what problems caused by hanging are fixed by `pthread_exit`/`_endthreadex`, that leads you to think it is a better default? -- ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment:

In general, I view hanging threads as the least bad thing to do when faced with either acquiring the GIL or not returning at all. There is a lot of existing usage of Python that currently poses a risk of random crashes and memory corruption while Python is exiting, and I would like to fix that. However, I would certainly recommend that code using the Python C API attempt to avoid threads getting to that state in the first place. I added a "finalize block" mechanism to that PR which is intended to provide a way to attempt to acquire the GIL in a way that ensures the GIL won't get hung. I would welcome feedback on that. A common use case for that API might be a non-Python created thread that wants to invoke some sort of asynchronous callback handler via Python APIs. For Python daemon threads that you control, you can avoid them hanging by registering an atexit function that signals them to exit and then waits until they do.

vstinner: Regarding processing the Windows messages, I updated the PR to include a link to the MSDN documentation that led me to think it was a good idea.

vstinner: As for random code outside of Python itself that is using `PyThread_exit_thread`: although I suppose there are legitimate use cases for `pthread_exit` and `_endthreadex`, these functions are only safe with the cooperation of the entire call stack of the thread. Additionally, while `pthread_exit` and `_endthreadex` have similar behavior for C code, they don't have the same behavior for C++ code, and that difference seems like a likely source of problems. Finally, I would say Python itself does not guarantee that its call stacks safely cooperate with `pthread_exit` (at the very least, there are sure to be memory leaks). Therefore, I think Python should not encourage its use by leaving it as a non-deprecated public API.
If a user wishes to terminate a thread, they can invoke `pthread_exit` or `_endthreadex` directly, ideally without any Python functions in the call stack, and can refer to the documentation of those functions to understand the implications.

gps: The reasons I believe hanging the thread is better than `pthread_exit`:

- `pthread_exit` essentially throws an exception. In theory that means you could do proper cleanup via C++ destructors and/or re-throwing catch blocks. But in practice existing extension code is not designed to do that, it would be quite a complex task to modify it to do proper cleanup, and on Windows the cleanup wouldn't run anyway.

- Additionally, throwing an exception means that if there is a `noexcept` function in the call stack, the program terminates. We would need to document that you aren't allowed to call Python APIs if there is a `noexcept` function in the call stack. If you have a `catch(...)` in the call stack, you may inadvertently catch the exception and return control back to Python at a point that assumes it owns the GIL, which will cause all sorts of havoc. We would likewise need to document that you can't have a non-rethrowing `catch(...)` block in the call stack (I believe pybind11 has some of those).

- Throwing an exception also means C++ destructors run. pybind11 has a smart pointer type that holds a `PyObject` and whose destructor calls `Py_DECREF`. That causes a crash when `pthread_exit` unwinds the stack, since the thread doesn't own the GIL.

Those are the additional problems specific to `pthread_exit`. As gps noted, there is the additional problem of memory corruption common to both `pthread_exit` and `_endthreadex`:

- Data structures that are accessible from other threads may contain pointers to data on the thread's stack. For example, with certain types of locks/signalling mechanisms it is common to store a linked list node on the stack that is then added to a list of waiting threads. If we destroy the thread stack without proper cleanup (and that proper cleanup definitely won't happen with `_endthreadex`, and probably in most cases still won't happen with `pthread_exit`), the data structure has now become corrupted.

I don't think hanging the thread really increases the risk of deadlock over the status quo. In theory someone could have a C++ destructor that does some cleanup that safely prevents deadlock, but that is not portable to Windows, and I expect that properly implemented `pthread_exit`-safe code is extremely rare. I think we would want to ensure that Python itself is implemented in such a way as to not deadlock if Python-created threads with only Python functions in the call stack hang. Mostly that would amount to not holding mutexes while calling functions that may transitively attempt to acquire the GIL (or release and then re-acquire the GIL). That is probably a good practice for avoiding deadlock even when not finalizing. We would also want to document that external
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment: I suppose calling `Py_Initialize`, `Py_FinalizeEx`, then `Py_Initialize` again, then `Py_FinalizeEx` again in an embedding application, was already not particularly well supported, since it would leak memory. However, with this change it also leaks threads. That is a bit unfortunate, but I suppose it is just another form of memory leak, and the user can avoid it by ensuring there are no daemon threads (of course even previously, the presence of any daemon threads meant additional memory leaking). -- ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Change by Jeremy Maitin-Shepard : -- keywords: +patch pull_requests: +26916 stage: -> patch review pull_request: https://github.com/python/cpython/pull/28525 ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment:

It looks like the `_thread` module does not actually expose `PyThread_exit_thread` --- the similarly named `thread_PyThread_exit_thread` just raises SystemExit. From a search in the codebase, it appears `PyThread_exit_thread` is currently used only to kill threads when they attempt to acquire the GIL during finalization.

Also, if it is changed to no longer kill the thread, it would probably make sense to rename it, e.g. to `PyThread_stop_thread_during_finalization`.

-- ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment: Regarding your suggestion of banning daemon threads: I happened to come across this bug not because of daemon threads but because of threads started by C++ code directly that call into Python APIs. The solution I am planning to implement is to add an `atexit` handler to prevent this problem. I do think it is reasonable to suggest that users should ensure daemon threads are exited cleanly via an atexit handler. However, in some cases that may be challenging to implement, and there is also the issue of backward compatibility. -- ___ Python tracker <https://bugs.python.org/issue42969> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment:

Regarding your suggestion of adding a hook like `Py_SetThreadExitCallback`, it seems like there are five plausible behaviors that such a callback may implement:

1. Abort the process immediately with an error.
2. Exit immediately with the original exit code specified by the user.
3. Hang the thread.
4. Attempt to unwind the thread, like `pthread_exit`, calling pthread thread cleanup functions and C++ destructors.
5. Terminate the thread immediately without any cleanup or C++ destructor calls.

The current behavior is (4) on POSIX platforms (`pthread_exit`) and (5) on Windows (`_endthreadex`).

In general, achieving a clean shutdown will require the cooperation of all relevant code in the program, particularly code using the Python C API. Commonly the Python C API is used more by library code rather than application code, while it would presumably be the application that is responsible for setting this callback. Writing a library that supports multiple different thread shutdown behaviors would be particularly challenging. I think the callback is useful, but we would still need to discuss what the default behavior should be (hopefully different from the current behavior) and what guidance would be provided as far as what the callback is allowed to do.

Option (1) is highly likely to result in a user-visible error --- a lot of Python programs that previously exited successfully will now, possibly only some of the time, exit with an error. The advantage is that the user is alerted to the fact that some threads were not cleanly exited, but a lot of previously working code is now broken. This seems like a reasonable policy for a given application to impose (effectively requiring the use of an atexit handler to terminate all daemon threads), but does not seem like a reasonable default given the existing use of daemon threads.

Option (2) would likely do the right thing in many cases, but main-thread cleanup that was previously run would now be silently skipped. This again seems like a reasonable policy for a given application to impose, but does not seem like a reasonable default.

Option (3) avoids the possibility of crashes and memory corruption. Since the thread stack remains allocated, any pointers to the thread stack held in global data structures or by other threads remain valid. There is a risk that the thread may be holding a lock, or otherwise block progress of the main thread, resulting in silent deadlock. That can be mitigated by registering an atexit handler.

Option (4) in theory would allow cleanup handlers to be registered in order to avoid deadlock due to locks held. In practice, though, it causes a lot of problems:

- The CPython codebase itself contains no such cleanup handlers, and I expect the vast majority of existing C extensions are also not designed to properly handle the stack unwind triggered by `pthread_exit`. Without proper cleanup handlers, this option reverts to option (5), where there is a risk of memory corruption due to other threads accessing pointers to the freed thread stack. There is also the same risk of deadlock as in option (3).

- Stack unwinding interacts particularly badly with common C++ usage, because the very first thing most people want to do when using the Python C API from C++ is create a "smart pointer" type for holding a `PyObject` pointer that handles the reference counting automatically (calls `Py_INCREF` when copied, `Py_DECREF` in the destructor). When the stack unwinds due to `pthread_exit`, the current thread will NOT hold the GIL, and these `Py_DECREF` calls result in a crash / memory corruption. We would need to either create a new finalizing-safe version of Py_DECREF that is a no-op when called from a non-main thread if `_Py_IsFinalizing()` is true (and then existing C++ libraries like pybind11 would need to be changed to use it), or modify the existing `Py_DECREF` to always have that additional check. Other calls to Python C APIs in destructors are also common.

- When writing code that attempts to be safe in the presence of stack unwinding due to `pthread_exit`, it is not merely explicitly GIL-related calls that are a concern. Virtually any Python C API function can transitively release and acquire the GIL, and therefore you must defend against unwind from virtually all Python C API functions.

- Some C++ functions in the call stack may unintentionally catch the exception thrown by `pthread_exit` and then return normally. If they return back to a CPython stack frame, memory corruption/crashing is likely.

- Alternatively, some C++ functions in the call stack may be marked `noexcept`. If the unwinding reaches such a function, then we end up with option (1).

- In general this option seems to require auditing and fixing a very large amount of existing code, and introduces a lot of complexity. For those reasons, I think this option
[issue42969] pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful
Jeremy Maitin-Shepard added the comment: Another possible resolution would be to simply make threads that attempt to acquire the GIL after Python starts to finalize hang (i.e. sleep until the process exits). Since the GIL can never be acquired again, this is in some sense the simplest way to fulfill the contract. This also ensures that any data stored on the thread's call stack and referenced from another thread remains valid. As long as nothing on the main thread blocks waiting for one of these hung threads, there won't be deadlock.

I have a case right now where a background thread (created from C++, which is similar to a daemon Python thread) acquires the GIL and calls "call_soon_threadsafe" on an asyncio event loop. I think that causes some Python code internally to release the GIL at some point, after triggering some code to run on the main thread which happens to cause the program to exit. While `Py_FinalizeEx` is running, the call to "call_soon_threadsafe" completes on the background thread and attempts to re-acquire the GIL, which triggers a call to pthread_exit. That unwinds the C++ stack, which results in a call to Py_DECREF without the GIL held, leading to a crash. -- nosy: +jbms ___ Python tracker <https://bugs.python.org/issue42969> ___
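The "hang instead of exiting" proposal can be modeled with a toy lock. `FinalizingLock` and every name below are invented for illustration and are not CPython API; the sketch only shows the proposed semantics: once finalization has been flagged, any further acquire attempt blocks forever instead of unwinding the caller's stack.

```python
import threading

class FinalizingLock:
    """Toy model of the proposed GIL semantics: after finalize() is
    called, acquire() never returns, so the calling thread just sleeps
    until the process exits (no stack unwind, no freed thread stack)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._finalizing = threading.Event()

    def acquire(self):
        if self._finalizing.is_set():
            # Block forever: a fresh Event is never set by anyone.
            threading.Event().wait()
        self._lock.acquire()

    def release(self):
        self._lock.release()

    def finalize(self):
        self._finalizing.set()
```

A daemon thread that calls `acquire()` after `finalize()` simply never makes progress, which is exactly the "sleep until the process exits" behavior described above, and any pointers into its stack stay valid.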
[issue45110] argparse repeats itself when formatting help metavars
Jeremy Kloth added the comment: Except that the output in question is not for manpages but for the command line. The analogue would be `grep --help` (again an excerpt):

Context control:
  -B, --before-context=NUM  print NUM lines of leading context
  -A, --after-context=NUM   print NUM lines of trailing context
  -C, --context=NUM         print NUM lines of output context
  -NUM                      same as --context=NUM
      --color[=WHEN],
      --colour[=WHEN]       use markers to highlight the matching strings;
                            WHEN is 'always', 'never', or 'auto'

[using grep (GNU grep) 3.1] -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue45110> ___
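The argparse behavior under discussion is easy to reproduce; in this sketch the program and option names are invented. The help output repeats the metavar after every option string, unlike the grep-style excerpt above (note that later Python versions changed this formatting, so both forms are checked for):

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("-C", "--context", metavar="NUM",
                    help="print NUM lines of output context")

help_text = parser.format_help()
print(help_text)
# Historically argparse renders this as "-C NUM, --context NUM",
# repeating NUM once per option string; grep shows the metavar only once.
```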
[issue44779] Checkouts stale following changes to .gitattributes
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue44779> ___
[issue44777] Create mechanism to contact buildbot worker owners
Jeremy Kloth added the comment: There is a list, `python-buildb...@python.org`, to which all buildbot owners have been subscribed. -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue44777> ___
[issue44675] Cross-platform issues with private methods and multiprocessing
New submission from Jeremy : While writing a program using the multiprocessing library I stumbled upon what appears to be a bug in how different platforms deal with private methods. When a class has a private method which is the target of a multiprocessing process, the name is correctly resolved on Linux (20.04.1-Ubuntu running Python 3.8.10) but fails to be resolved correctly on macOS (Python 3.8.2 and 3.8.8) or Windows 10 (Python 3.9.6).

    import multiprocessing

    class Test(object):
        def __init__(self):
            self.a = 1
            self._b = 2
            self.__c = 3
            self.run1()
            self.run2()

        def _test1(self, conn):
            conn.send(self._b)

        def __test2(self, conn):
            conn.send(self.__c)

        def run1(self):
            print("Running self._test1()")
            parent, child = multiprocessing.Pipe()
            process = multiprocessing.Process(target=self._test1, args=(child, ))
            process.start()
            print(parent.recv())
            process.join()

        def run2(self):
            print("Running self.__test2()")
            parent, child = multiprocessing.Pipe()
            process = multiprocessing.Process(target=self.__test2, args=(child, ))
            process.start()
            print(parent.recv())
            process.join()

    if __name__ == "__main__":
        t = Test()

On Linux, this has the intended behavior of printing:

    Running self._test1()
    2
    Running self.__test2()
    3

However, on Windows 10, this results in an exception being raised:

    Running self._test1()
    2
    Running self.__test2()
    Traceback (most recent call last):
      File "", line 1, in
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
        exitcode = _main(fd, parent_sentinel)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 126, in _main
        self = reduction.pickle.load(from_parent)
    AttributeError: 'Test' object has no attribute '__test2'

A similar exception is also raised on macOS for this code. It would therefore appear that there is different behavior for resolving class attributes starting with `__` on different platforms (at least within multiprocessing).
It is my understanding that because multiprocessing.Process is called within the class, the private method should be in scope and so should resolve correctly. I'm aware that Python doesn't have strict private methods and instead renames them (Test.__test2 becomes Test._Test__test2), which explains why Windows cannot find the attribute under that name. My question really is: which platform is correct here, and is the inconsistency intentional? I'd suggest Linux is most correct, since the process is spawned from within the object and the method should therefore be in scope; either way, the inconsistency between platforms may cause some unintended issues. -- components: Library (Lib), Windows, macOS messages: 397810 nosy: ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, ymerej, zach.ware priority: normal severity: normal status: open title: Cross-platform issues with private methods and multiprocessing type: behavior versions: Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue44675> ___
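The mangling/pickling interaction can be shown without multiprocessing at all; `Test` and `__priv` below are invented names for the sketch. Pickling a bound method records the function's unmangled `__name__`, so spawn-based platforms (which must pickle the process target) fail at `getattr` time during unpickling, while fork-based Linux, which never pickles the target, works:

```python
import pickle

class Test:
    def __priv(self):
        return 3

t = Test()

# At runtime the attribute only exists under its mangled name:
bound = t._Test__priv
print(bound())  # 3

# The function object still reports the unmangled name, and a bound
# method is pickled by name, not by value.
print(bound.__func__.__name__)

data = pickle.dumps(bound)  # pickling itself succeeds...
try:
    pickle.loads(data)      # ...but unpickling looks up the unmangled name
except AttributeError as exc:
    print(exc)              # no attribute under that name on a Test instance
```

This mirrors the traceback above: the failure surfaces in the child process at `reduction.pickle.load`, not where the Process is created.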
[issue44336] Windows buildbots hang after fatal exit
Jeremy Kloth added the comment: While not as immediately beneficial now, I believe the linked PR would be good in the long run. The ramifications of bpo-11105 meant that the Windows buildbots were basically unusable for 5 days. Realistically, any commit that triggers aborts in the Windows test runs will exhibit this problematic behavior. I'm confident that the PR is in ready-to-go form. -- ___ Python tracker <https://bugs.python.org/issue44336> ___
[issue11105] Compiling recursive Python ASTs crash the interpreter
Change by Jeremy Kloth : -- nosy: +jkloth nosy_count: 11.0 -> 12.0 pull_requests: +25186 pull_request: https://github.com/python/cpython/pull/26578 ___ Python tracker <https://bugs.python.org/issue11105> ___
[issue44336] Windows buildbots hang after fatal exit
Jeremy Kloth added the comment: The PR has been successfully run on the buildbots. Before: https://buildbot.python.org/all/#/builders/593/builds/58 After: https://buildbot.python.org/all/#/builders/593/builds/59 With these changes, aborted runs can at least now be seen as direct failures of Python's tests instead of just builder exceptions. -- ___ Python tracker <https://bugs.python.org/issue44336> ___
[issue44336] Windows buildbots hang after fatal exit
Jeremy Kloth added the comment: To verify the PR, can someone please add the test-with-buildbots label on GH? -- ___ Python tracker <https://bugs.python.org/issue44336> ___
[issue44336] Windows buildbots hang after fatal exit
Change by Jeremy Kloth : -- keywords: +patch pull_requests: +25166 stage: -> patch review pull_request: https://github.com/python/cpython/pull/26578 ___ Python tracker <https://bugs.python.org/issue44336> ___
[issue44336] Windows buildbots hang after fatal exit
New submission from Jeremy Kloth : Currently, a stack overflow is causing the debug build Windows buildbots to abort (bpo-11105). Once the regrtest process is terminated, the buildbot test process hangs indefinitely waiting for handles to be closed (see msg350191 from bpo-37531 for some details). -- components: Tests, Windows messages: 395268 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows buildbots hang after fatal exit versions: Python 3.10, Python 3.11, Python 3.9 ___ Python tracker <https://bugs.python.org/issue44336> ___
[issue44214] PyArg_Parse* for vectorcall?
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue44214> ___
[issue40432] Pegen regenerate project for Windows not working
Jeremy Kloth added the comment: Adding the Windows team to the nosy list. -- components: +Windows nosy: +jkloth, paul.moore, steve.dower, tim.golden, zach.ware ___ Python tracker <https://bugs.python.org/issue40432> ___
[issue2889] curses for windows (alternative patch)
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue2889> ___
[issue37945] [Windows] test_locale.TestMiscellaneous.test_getsetlocale_issue1813() fails
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue37945> ___
[issue43538] [Windows] support args and cwd in os.startfile()
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue43538> ___
[issue43219] shutil.copy raises IsADirectoryError when the directory does not actually exist
Jeremy Pinto added the comment: In fact, the issue seems to be coming from open() itself when opening a non-existent directory in write mode:

    [nav] In [1]: import os
             ...: nonexixstent_dir = 'not_a_dir/'
             ...: assert not os.path.exists(nonexixstent_dir)
             ...: with open(nonexixstent_dir, 'wb') as fdst:
             ...:     pass
    ---
    IsADirectoryError                         Traceback (most recent call last)
     in
          2 dir_path = 'not_a_dir/'
          3 assert not os.path.exists(nonexixstent_dir)
    ----> 4 with open(nonexixstent_dir, 'wb') as fdst:
          5     pass

    IsADirectoryError: [Errno 21] Is a directory: 'not_a_dir/'

-- ___ Python tracker <https://bugs.python.org/issue43219> ___
[issue43219] shutil.copy raises IsADirectoryError when the directory does not actually exist
New submission from Jeremy Pinto : Issue: If you try to copy a file to a directory that doesn't exist using shutil.copy, an IsADirectoryError is raised saying the directory exists. This issue is actually caused when `open(not_a_dir, 'wb')` is called on a non-existing dir.

Expected behaviour: Should instead raise NotADirectoryError

Steps to reproduce:

    [nav] In [1]: import os
             ...: from pathlib import Path
             ...: from shutil import copy
             ...:
             ...: tmp_file = '/tmp/some_file.txt'
             ...: Path(tmp_file).touch()
             ...: nonexistent_dir = 'not_a_dir/'
             ...: assert not os.path.exists(nonexistent_dir)
             ...: copy(tmp_file, nonexistent_dir)
    ---
    IsADirectoryError                         Traceback (most recent call last)
     in
          7 nonexistent_dir = 'not_a_dir/'
          8 assert not os.path.exists(nonexistent_dir)
    ----> 9 copy(tmp_file, nonexistent_dir)

    ~/miniconda3/lib/python3.7/shutil.py in copy(src, dst, follow_symlinks)
        243     if os.path.isdir(dst):
        244         dst = os.path.join(dst, os.path.basename(src))
    --> 245     copyfile(src, dst, follow_symlinks=follow_symlinks)
        246     copymode(src, dst, follow_symlinks=follow_symlinks)
        247     return dst

    ~/miniconda3/lib/python3.7/shutil.py in copyfile(src, dst, follow_symlinks)
        119     else:
        120         with open(src, 'rb') as fsrc:
    --> 121             with open(dst, 'wb') as fdst:
        122                 copyfileobj(fsrc, fdst)
        123     return dst

    IsADirectoryError: [Errno 21] Is a directory: 'not_a_dir/'

-- messages: 386932 nosy: jerpint priority: normal severity: normal status: open title: shutil.copy raises IsADirectoryError when the directory does not actually exist versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue43219> ___
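A hedged sketch of a caller-side workaround: `copy_checked` is a hypothetical wrapper (not part of shutil) that checks the destination up front, so the caller gets a sensible FileNotFoundError instead of the misleading IsADirectoryError:

```python
import os
import tempfile
from pathlib import Path
from shutil import copy

def copy_checked(src, dst):
    # Hypothetical helper: if dst is spelled like a directory (trailing
    # separator) but no such directory exists, fail with a clear error
    # before open(dst, 'wb') can raise the confusing IsADirectoryError.
    if dst.endswith((os.sep, "/")) and not os.path.isdir(dst):
        raise FileNotFoundError(
            f"destination directory does not exist: {dst!r}")
    return copy(src, dst)

# Demo: copying into a missing directory now fails with a clear message.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "some_file.txt")
Path(src).touch()
missing = os.path.join(tmp, "not_a_dir") + os.sep
try:
    copy_checked(src, missing)
except FileNotFoundError as exc:
    print(exc)
```

Copies to an existing directory pass straight through to shutil.copy, so behavior only changes in the error case the report describes.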
[issue43022] Unable to dynamically load functions from python3.dll
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue43022> ___
[issue42705] Intercepting thread lock objects not working under context managers
Change by Jeremy Kloth : -- components: -Distutils ___ Python tracker <https://bugs.python.org/issue42705> ___
[issue42802] distutils: Remove bdist_wininst command
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue42802> ___
[issue42611] PEP 594
Change by Jeremy Kloth : -- nosy: +jkloth ___ Python tracker <https://bugs.python.org/issue42611> ___
[issue42226] imghdr.what is missing docstring, has a logic error, and is overly complex
Change by Jeremy Howard : -- resolution: -> rejected stage: patch review -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue42226> ___
[issue42226] imghdr.what is missing docstring, has a logic error, and is overly complex
Change by Jeremy Howard : -- keywords: +patch pull_requests: +21988 stage: -> patch review pull_request: https://github.com/python/cpython/pull/23069 ___ Python tracker <https://bugs.python.org/issue42226> ___