[Python-announce] post to list request
Hi there, can I please post this message to the email list?

Hi everyone, I'm Adam, the Community Manager at Plotly Dash - data visualizations and data apps in Python. To stay on top of the changing AI landscape, we recently challenged the Plotly community to build Dash apps that use the ChatGPT API. After receiving many impressive Python apps, we are thrilled to announce that several authors will be showcasing their top submissions on August 30. If you're interested in seeing what Python community members were able to build with open source, feel free to register for the live community showcase! https://go.plotly.com/dash-chatgpt

Thank you,
Adam Schroeder
Community Manager, Plotly
A Gaspe Ave #118, Montreal, QC, H2T 2A3
E a...@plot.ly
W https://www.plotly.com/

___ Python-announce-list mailing list -- python-announce-list@python.org To unsubscribe send an email to python-announce-list-le...@python.org https://mail.python.org/mailman3/lists/python-announce-list.python.org/ Member address: arch...@mail-archive.com
[issue41395] pickle and pickletools cli interface doesn't close input and output file.
Adam added the comment: Hi, first-time contributor here. I've made a patch following the discussion on Amir's patch regarding this. I'd appreciate it if someone could take a look and review it! https://github.com/python/cpython/pull/32257 -- ___ Python tracker <https://bugs.python.org/issue41395> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41395] pickle and pickletools cli interface doesn't close input and output file.
Change by Adam : -- nosy: +achhina nosy_count: 7.0 -> 8.0 pull_requests: +30326 pull_request: https://github.com/python/cpython/pull/32257 ___ Python tracker <https://bugs.python.org/issue41395> ___
PEPs will move to peps.python.org
With the acceptance of PEP 676, the canonical home of the Python Enhancement Proposal series will shortly move to peps.python.org. All existing links will redirect when the change is made; this announcement is to promote awareness of the new domain as canonical. Thanks, Adam Turner PEP Editor and author of PEP 676 -- https://mail.python.org/mailman/listinfo/python-list
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: Many thanks Christian, that resolved the issue! I really appreciate your efforts here. -- ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: Many thanks Christian; see the attached for the output of the commands on Python 3.9.10 and 3.10.2, along with a diff that removes version numbers and memory addresses. I've run the commands on the Ubuntu distribution; we can also run the same on the CentOS VM if helpful. There are a few differences in the outputs, but nothing that is obviously the cause. -- Added file: https://bugs.python.org/file50654/python_details.tar.gz ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: Update: the Pyenv team confirmed that they do not install OpenSSL on Linux; it is only installed for macOS, and Python should be built against the system OpenSSL on Linux. We're investigating further to attempt to debug the issue. Interestingly, the OpenSSL build flags for both Python versions appear to be the same: `Trying link with OPENSSL_LDFLAGS=; OPENSSL_LIBS=-lssl -lcrypto; OPENSSL_INCLUDES=` I've attached the build logs for both the Python 3.9.10 and 3.10.2 builds, in case you're able to review. Many thanks. https://github.com/pyenv/pyenv/issues/2257 -- Added file: https://bugs.python.org/file50653/python_builds.tar.gz ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue38854] Decorator with paren tokens in arguments breaks inspect.getsource
Change by Adam Hopkins : -- nosy: +ahopkins nosy_count: 3.0 -> 4.0 pull_requests: +29736 pull_request: https://github.com/python/cpython/pull/31605 ___ Python tracker <https://bugs.python.org/issue38854> ___
[issue46873] inspect.getsource with some lambdas in decorators does not get the full source
Adam Hopkins added the comment: Duplicate of https://bugs.python.org/issue38854 Sorry I didn't come across yours before submitting. -- resolution: -> duplicate stage: patch review -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue46873> ___
[issue46873] inspect.getsource with some lambdas in decorators does not get the full source
Change by Adam Hopkins : -- keywords: +patch pull_requests: +29728 stage: -> patch review pull_request: https://github.com/python/cpython/pull/31605 ___ Python tracker <https://bugs.python.org/issue46873> ___
[issue46873] inspect.getsource with some lambdas in decorators does not get the full source
Adam Hopkins added the comment: Sorry about that. I am doing some more digging to see if I can find the root of it and put together a proposal for a non-breaking patch. The problem seems to be in BlockFinder.tokeneater. -- type: behavior -> versions: +Python 3.7, Python 3.8 ___ Python tracker <https://bugs.python.org/issue46873> ___
[issue46873] inspect.getsource with some lambdas in decorators does not get the full source
New submission from Adam Hopkins :

I believe the following produces unexpected behavior:

from inspect import getsource

def bar(*funcs):
    def decorator(func):
        return func
    return decorator

@bar(lambda x: bool(True), lambda x: False)
async def foo():
    ...

print(getsource(foo))

The output shows only the decorator declaration and none of the function:

@bar(lambda x: bool(True), lambda x: False)

From my investigation, it seems like this requires the following conditions to be true:
- lambdas are passed in decorator arguments
- there is more than one lambda
- at least one of the lambdas has a function call

Passing the lambdas as default function arguments seems okay:

async def foo(bar=[lambda x: bool(True), lambda x: False]):
    ...

A single lambda seems okay:

@bar(lambda x: bool(True))
async def foo():
    ...

Lambdas with no function calls also seem okay:

@bar(lambda x: not x, lambda: True)
async def foo():
    ...

Tested this on:
- Python 3.10.2
- Python 3.9.9
- Python 3.8.11
- Python 3.7.12

-- messages: 414149 nosy: ahopkins2 priority: normal severity: normal status: open title: inspect.getsource with some lambdas in decorators does not get the full source versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue46873> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: Yes, agreed, it may well be a Pyenv issue. Interestingly, we can demonstrate that the global OpenSSL crypto policy is respected with the 3.9.10 version by adjusting the policy: the SSL error occurs with the default policy setting and is resolved with the legacy policy setting. With 3.10.2 this is no longer the case. I can't see any obvious changes to the build recipe that would cause this. -- ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: I found the Python build recipes, and Pyenv does appear to install OpenSSL from source. The only difference I can see, aside from the Python version, is a change in the OpenSSL version: openssl-1.1.1l (3.9.10) to openssl-1.1.1k (3.10.2). The OpenSSL release notes do not appear to suggest anything relevant. https://github.com/pyenv/pyenv/blob/master/plugins/python-build/share/python-build/3.10.2 https://github.com/pyenv/pyenv/blob/master/plugins/python-build/share/python-build/3.9.10 https://github.com/pyenv/pyenv/blob/master/plugins/python-build/bin/python-build https://www.openssl.org/news/openssl-1.1.1-notes.html -- ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
Adam added the comment: Thanks for the quick reply. On both Ubuntu and CentOS, I'm installing Python using Pyenv, testing with 3.9.10 and 3.10.2. Pyenv provides a verbose install flag; I can rebuild the Python versions and review the build commands, if helpful. I'm testing with clean Linux distributions and I believe there is only one OpenSSL installed and available. I don't know if it's possible to get more details from the Python ssl module to confirm? I did confirm the OpenSSL versions align using ssl.OPENSSL_VERSION. Command: pyenv install 3.10.2 --verbose https://github.com/pyenv/pyenv -- ___ Python tracker <https://bugs.python.org/issue46863> ___
[issue46863] Python 3.10 OpenSSL Configuration Issues
New submission from Adam Pinckard :

Python 3.10 does not appear to respect the OpenSSL configuration within Linux. Testing was completed using Pyenv on both Ubuntu 20.04.4 and CentOS-8. Note that PEP 644, which requires OpenSSL >= 1.1.1, landed in Python 3.10.

We operate behind a corporate proxy / firewall which causes an SSL error because the Diffie-Hellman key size is too small. In previous Python versions this is resolved by updating the OpenSSL configuration, e.g. downgrading the Linux crypto policies (`sudo update-crypto-policies --set LEGACY`). The issue is reproducible in both Ubuntu 20.04.4 and CentOS-8. In both distributions the SSL error is resolvable in earlier Python versions using the OpenSSL configuration, but the configuration is not respected with Python 3.10.2. See the details below on the kernel versions, Linux distributions, and OpenSSL versions. Many thanks in advance.

1. Python 3.10.2 error:

(py_3_10_2) ➜ py_3_10_2 pip install --upgrade pip
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:997)'))': /simple/pip/

2. Ubuntu details:

uname -a
Linux Horatio 5.13.0-30-generic #33~20.04.1-Ubuntu SMP Mon Feb 7 14:25:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.4 LTS
Release: 20.04
Codename: focal

openssl version -a
OpenSSL 1.1.1f 31 Mar 2020
built on: Wed Nov 24 13:20:48 2021 UTC
platform: debian-amd64
options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-dnfdFp/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"
Seeding source: os-specific

3. CentOS-8 details:

uname -a
Linux localhost.localdomain 5.4.181-1.el8.elrepo.x86_64 #1 SMP Tue Feb 22 10:00:15 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

cat /etc/centos-release
CentOS Stream release 8

openssl version -a
OpenSSL 1.1.1k FIPS 25 Mar 2021
built on: Thu Dec 2 16:40:48 2021 UTC
platform: linux-x86_64
options: bn(64,64) md2(char) rc4(16x,int) des(int) idea(int) blowfish(ptr)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -Wa,--noexecstack -Wa,--generate-missing-build-notes=yes -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB -DNDEBUG -DPURIFY -DDEVRANDOM="\"/dev/urandom\"" -DSYSTEM_CIPHERS_FILE="/etc/crypto-policies/back-ends/openssl.config"
OPENSSLDIR: "/etc/pki/tls"
ENGINESDIR: "/usr/lib64/engines-1.1"
Seeding source: os-specific
engines: rdrand dynamic

-- assignee: christian.heimes components: SSL messages: 414072 nosy: adam, christian.heimes priority: normal severity: normal status: open title: Python 3.10 OpenSSL Configuration Issues type: behavior versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue46863> ___
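[Editor's note, a hedged sketch rather than a recommended fix: the DH_KEY_TOO_SMALL error above comes from OpenSSL's security-level enforcement. With OpenSSL 1.1.1 or later (the floor PEP 644 requires), the security level can be relaxed per context from Python, which is roughly what the LEGACY system crypto policy does globally.]

```python
import ssl

# Report which OpenSSL this interpreter was linked against, useful when
# comparing the 3.9 and 3.10 builds discussed in this issue:
print(ssl.OPENSSL_VERSION)

ctx = ssl.create_default_context()
# Lowering the security level accepts smaller Diffie-Hellman keys.
# This weakens security; it only mirrors the LEGACY policy workaround.
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
```

A context configured this way can be passed to, e.g., urllib.request.urlopen(url, context=ctx) to test whether the security level is the variable at play.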
[issue46467] Rounding 5, 50, 500 behaves differently depending on preceding value
New submission from Adam Ulrich :

round(250, -2) returns 200
round(350, -2) returns 400
round(450, -2) returns 400
round(550, -2) returns 600
round(5, -1) returns 0
round(15, -1) returns 20
round(500, -3) returns 0
round(1500, -3) returns 2000

Expected: values ending in 5 to consistently round up.

-- components: Interpreter Core messages: 411222 nosy: adam.ulrich priority: normal severity: normal status: open title: Rounding 5,50,500 behaves differently depending on preceding value type: behavior versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue46467> ___
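[Editor's note: the results reported above match Python's documented round-half-to-even ("banker's") rounding, in which ties go to the nearest even multiple rather than always up. A minimal sketch of that behavior, plus a half-up alternative via the decimal module; the helper name round_half_up is hypothetical, not a stdlib function.]

```python
from decimal import Decimal, ROUND_HALF_UP

# Built-in round() breaks ties toward the nearest even multiple:
assert round(250, -2) == 200   # 2.5 hundreds -> 2 (even)
assert round(350, -2) == 400   # 3.5 hundreds -> 4 (even)
assert round(450, -2) == 400   # 4.5 hundreds -> 4 (even)

# A half-up helper (hypothetical name) mirroring round()'s signature:
def round_half_up(n, ndigits):
    q = Decimal(1).scaleb(-ndigits)  # e.g. ndigits=-2 -> Decimal('1E+2')
    return int((Decimal(n) / q).to_integral_value(rounding=ROUND_HALF_UP) * q)

assert round_half_up(250, -2) == 300
assert round_half_up(450, -2) == 500
assert round_half_up(5, -1) == 10
```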
[issue45865] Old syntax in unittest
Adam Johnson added the comment: Okay, I updated the PR to only remove inheritance from object. Should I reopen the ticket? (Not sure of the etiquette.) Perhaps I could later submit a second patch for use of `super()`, and so on? -- ___ Python tracker <https://bugs.python.org/issue45865> ___
[issue25834] getpass falls back when sys.stdin is changed
Adam Bartoš added the comment: Sorry, I don't. But my use case is not relevant anymore, since my package was a workaround for problems with entering Unicode interactively on Windows, and those problems have since been resolved in Python. -- ___ Python tracker <https://bugs.python.org/issue25834> ___
[issue23882] unittest discovery doesn't detect namespace packages when given no parameters
Adam Johnson added the comment: I just reported https://bugs.python.org/issue45864 and closed it as a duplicate of this. -- nosy: +adamchainz ___ Python tracker <https://bugs.python.org/issue23882> ___
[issue45864] unittest does not discover tests in PEP420 packages
Change by Adam Johnson : -- stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue45864> ___
[issue45864] unittest does not discover tests in PEP420 packages
Adam Johnson added the comment: It's exactly that ticket. I missed that when searching for duplicates - I only searched for "pep420" and not "namespace packages". Mea culpa. -- resolution: -> duplicate ___ Python tracker <https://bugs.python.org/issue45864> ___
[issue45865] Old syntax in unittest
Change by Adam Johnson : -- keywords: +patch pull_requests: +27934 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29698 ___ Python tracker <https://bugs.python.org/issue45865> ___
[issue45865] Old syntax in unittest
New submission from Adam Johnson : I often browse the unittest code in order to write extensions. It still uses some Python 2-isms, like classes inheriting from object; it would be nice to clean that up. -- components: Tests messages: 406757 nosy: adamchainz priority: normal severity: normal status: open title: Old syntax in unittest type: enhancement ___ Python tracker <https://bugs.python.org/issue45865> ___
[issue45864] unittest does not discover tests in PEP420 packages
New submission from Adam Johnson :

unittest's test discovery does not descend into directories without `__init__.py`. This avoids discovering test modules that are otherwise valid and importable, after PEP 420. I've seen this more than once where there were valid-looking test files not being discovered, and they bit rot. The tests had been run individually when created but never again. (I created [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420/) to avoid this problem on my projects.)

For example, take this directory structure:

```
$ tree .
└── tests
    └── test_thing.py

1 directory, 1 file

$ cat tests/test_thing.py
1/0
```

It's valid to import the naughty file, which crashes:

```
$ python -c 'import tests.test_thing'
Traceback (most recent call last):
  File "", line 1, in
  File "/.../tests/test_thing.py", line 1, in
    1/0
ZeroDivisionError: division by zero
```

But unittest does not discover it:

```
$ python -m unittest
--
Ran 0 tests in 0.000s

OK
```

But, after creating an empty `__init__.py`, the tests doth fail:

```
$ touch tests/__init__.py
$ python -m unittest
E
==
ERROR: tests.test_thing (unittest.loader._FailedTest)
--
ImportError: Failed to import test module: tests.test_thing
Traceback (most recent call last):
  File "/.../unittest/loader.py", line 436, in _find_test_path
    module = self._get_module_from_name(name)
  File "/.../unittest/loader.py", line 377, in _get_module_from_name
    __import__(name)
  File "/.../tests/test_thing.py", line 1, in
    1/0
ZeroDivisionError: division by zero

--
Ran 1 test in 0.000s

FAILED (errors=1)
```

-- components: Tests messages: 406756 nosy: adamchainz priority: normal severity: normal status: open title: unittest does not discover tests in PEP420 packages type: behavior ___ Python tracker <https://bugs.python.org/issue45864> ___
[issue45639] Support modern image formats in MIME types
Change by Adam Konrad : -- keywords: +patch pull_requests: +27523 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29259 ___ Python tracker <https://bugs.python.org/issue45639> ___
[issue45639] Support modern image formats in MIME types
New submission from Adam Konrad : Modern image types webp and avif are not recognized by the mimetypes module. Problem: many tools are written in Python and run on macOS; a good example is the AWS CLI. Running commands like "s3 sync" will save files with .webp and .avif extensions to S3 with an incorrect "binary/octet-stream" Content-Type. This creates additional problems with serving these resources over HTTP. The webp and avif image types are supported by most browsers: https://caniuse.com/#feat=webp https://caniuse.com/#feat=avif While webp is fully supported and widely used, it is not officially registered with IANA. Avif is currently less popular, but it is fully registered with IANA. https://www.iana.org/assignments/media-types/media-types.xhtml Please consider the attached GitHub PR as a fix for these MIME Content-Type issues. -- components: Library (Lib) messages: 405145 nosy: adamkonrad priority: normal severity: normal status: open title: Support modern image formats in MIME types type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45639> ___
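[Editor's note: a minimal sketch of the symptom and the usual application-level workaround; this is not the approach of the attached PR, which adds the types to the stdlib tables.]

```python
import mimetypes

# On interpreters whose mimetypes tables predate webp/avif support,
# guess_type() yields (None, None) for these extensions.
print(mimetypes.guess_type("photo.webp"))

# Applications can register the types themselves at startup:
mimetypes.add_type("image/webp", ".webp")
mimetypes.add_type("image/avif", ".avif")

print(mimetypes.guess_type("photo.webp"))  # ('image/webp', None)
print(mimetypes.guess_type("photo.avif"))  # ('image/avif', None)
```

Since add_type mutates the process-wide default map, tools like the AWS CLI would pick up the registration only if it runs in their own process before guess_type is called.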
[issue45337] Create venv with pip fails when target dir is under userappdata using Microsoft Store python
Change by Adam Yoblick : -- type: -> behavior ___ Python tracker <https://bugs.python.org/issue45337> ___
[issue45337] Create venv with pip fails when target dir is under userappdata using Microsoft Store python
New submission from Adam Yoblick :

Repro steps:

1. Install Python 3.9 from the Microsoft Store
2. Try to create a virtual environment under the userappdata folder, using a command line similar to the following:
"C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\python3.9.exe" -m venv "C:\Users\advolker\AppData\Local\Microsoft\CookiecutterTools\env"
3. Observe the following error:
Error: Command '['C:\\Users\\advolker\\AppData\\Local\\Microsoft\\CookiecutterTools\\env\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 106.

Note that creating a venv without pip DOES work:

"C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\python3.9.exe" -m venv "C:\Users\advolker\AppData\Local\Microsoft\CookiecutterTools\env" --without-pip

BUT the venv is NOT at the specified location. This is because the Windows Store app creates a redirect when creating the venv, and that redirect is only visible from within the python executable. This means that python doesn't respect the redirect when trying to install pip into the newly created venv.

-- components: Windows messages: 402983 nosy: AdamYoblick, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Create venv with pip fails when target dir is under userappdata using Microsoft Store python versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue45337> ___
[issue45235] argparse does not preserve namespace with subparser defaults
Change by Adam Schwalm : -- keywords: +patch pull_requests: +26832 stage: -> patch review pull_request: https://github.com/python/cpython/pull/28420 ___ Python tracker <https://bugs.python.org/issue45235> ___
[issue45235] argparse does not preserve namespace with subparser defaults
New submission from Adam Schwalm :

The following snippet demonstrates the problem. If a subparser flag has a default set, argparse will override the existing value in the provided 'namespace' if the flag does not appear (e.g., if the default is used):

import argparse

parser = argparse.ArgumentParser()
sub = parser.add_subparsers()
example_subparser = sub.add_parser("example")
example_subparser.add_argument("--flag", default=10)
print(parser.parse_args(["example"], argparse.Namespace(flag=20)))

This should return 'Namespace(flag=20)' because 'flag' already exists in the namespace, but instead it returns 'Namespace(flag=10)'. The intended behavior is described and demonstrated in the second example here: https://docs.python.org/3/library/argparse.html#default The library's behavior is correct for the non-subparser case.

-- components: Library (Lib) messages: 402060 nosy: ALSchwalm priority: normal severity: normal status: open title: argparse does not preserve namespace with subparser defaults type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45235> ___
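[Editor's note: a self-contained version of the snippet from the report, runnable as-is. Since the 10-versus-20 outcome is exactly what is in dispute, and the behavior may differ across Python versions, no particular result is asserted in the comments.]

```python
import argparse

parser = argparse.ArgumentParser()
sub = parser.add_subparsers()
example_subparser = sub.add_parser("example")
example_subparser.add_argument("--flag", default=10)

# Documented behavior: an attribute already present in the supplied
# namespace wins over the parser default, so flag should stay 20.
# Reported behavior: the subparser's default clobbers it, yielding 10.
ns = parser.parse_args(["example"], argparse.Namespace(flag=20))
print(ns.flag)
```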
[issue28474] WinError(): Python int too large to convert to C long
Change by Adam Meily : -- keywords: +patch pull_requests: +26407 stage: -> patch review pull_request: https://github.com/python/cpython/pull/27959 ___ Python tracker <https://bugs.python.org/issue28474> ___
[issue27484] Some Examples in Format String Syntax are incorrect or poorly worded
Change by Adam Meily : -- nosy: +meilyadam nosy_count: 5.0 -> 6.0 pull_requests: +26405 pull_request: https://github.com/python/cpython/pull/27959 ___ Python tracker <https://bugs.python.org/issue27484> ___
[issue28474] WinError(): Python int too large to convert to C long
Adam Meily added the comment: I can potentially take a stab at writing up a PR for this. I've also seen this affecting other locations that eventually call FormatMessage, including:

- ctypes.format_error() (this original issue)
- os.strerror()
- OSError(winerror=X)

I will most likely look into fixing all three. -- nosy: +meilyadam ___ Python tracker <https://bugs.python.org/issue28474> ___
[issue44253] tkinter searches for tk.tcl in wrong directory
Adam Stewart added the comment: Thanks, that does help. Spack uses both `--with-tcltk-includes` and `--with-tcltk-libs`, and actually RPATHs the libraries in place. According to otool, that is all working fine:

$ otool -L /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/python3.8/lib-dynload/_tkinter.cpython-38-darwin.so
/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/python3.8/lib-dynload/_tkinter.cpython-38-darwin.so:
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/libtcl8.6.dylib (compatibility version 8.6.0, current version 8.6.11)
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib/libtk8.6.dylib (compatibility version 8.6.0, current version 8.6.11)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1281.100.1)

So like you initially thought, the problem isn't that tkinter/_tkinter can't find tcl; it's that tcl can't find tk. I'll talk more with the tcl developers and see how tcl is trying to find tk. Thanks for all of your help! -- ___ Python tracker <https://bugs.python.org/issue44253> ___
[issue44253] tkinter searches for tk.tcl in wrong directory
Adam Stewart added the comment: And... now it's not working again. Can you clarify exactly how tkinter finds tk/tcl? Does it rely on TCL_LIBRARY or TK_LIBRARY env vars? TCLLIBPATH? If I use all of these env vars, tkinter finds tcl/tk, but commands like: $ python -m tkinter $ python -c 'import tkinter; tkinter._test()' open a window and immediately minimize it. If I try to maximize the window it immediately closes, so something is definitely wrong with my installation. -- ___ Python tracker <https://bugs.python.org/issue44253> ___
[issue44253] tkinter searches for tk.tcl in wrong directory
Adam Stewart added the comment: I think I FINALLY figured out the problem. We were setting `TCLLIBPATH` to `/lib/tk8.6` when it should be `/lib`. With this change, tkinter seems to work for me. Thanks for all of your help! -- ___ Python tracker <https://bugs.python.org/issue44253> ___
[issue44253] tkinter searches for tk.tcl in wrong directory
Adam Stewart added the comment: Thanks, in that case it sounds like the problem is that Spack installs tcl and tk to separate directories, but since tk depends on tcl and not the other way around, tcl has no way of knowing where tk is installed. I'll see if I can convince the other Spack devs to combine tcl and tk into a single package. -- ___ Python tracker <https://bugs.python.org/issue44253> ___
[issue44253] tkinter searches for tk.tcl in wrong directory
New submission from Adam Stewart :

I'm trying to install Python with tkinter support using the Spack package manager. Spack adds the following flags to configure during install:

```
'--with-tcltk-libs=-L/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib -ltcl8.6 -L/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib -ltk8.6'
```

It also sets the following environment variables:

```
TCLLIBPATH='/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/tcl8.6 /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib/tcl8.6 /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib64/tcl8.6'; export TCLLIBPATH
TCL_LIBRARY=/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib; export TCL_LIBRARY
```

The install seems to correctly pick up tcl/tk and builds correctly. However, when I try to use tkinter, I see the following run-time error:

```
$ python
Python 3.8.10 (default, May 27 2021, 13:28:01) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> tkinter._test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/python3.8/tkinter/__init__.py", line 4557, in _test
    root = Tk()
  File "/Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/python3.8/tkinter/__init__.py", line 2270, in __init__
    self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: Can't find a usable tk.tcl in the following directories:
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/tcl8.6/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/tcl8.6/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib/tcl8.6/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib/tcl8.6/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib64/tcl8.6/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tk-8.6.11-ydmhrbboheucxsuhrnyoxqaihgna5dfe/lib64/tcl8.6/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/lib/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/tcl-8.6.11-n7nea33urrk25rkoqpsc2tdcgai5u4z2/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/tk8.6/Resources/Scripts
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/lib/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/lib/tk8.6
    /Users/Adam/spack/opt/spack/darwin-catalina-x86_64/apple-clang-12.0.0/python-3.8.10-fkj5vkn3tpottyv6yqoj5ucz2emstpvo/library

This probably means that tk wasn't installed properly.
```

It seems that tkinter searches for tk.tcl in `/lib`, but tk.tcl is actually installed in `/lib/tk8.6`. I asked the tk developers, but it looks like `/lib/tk8.6` is indeed the correct installation location: https://core.tcl-lang.org/tk/tktview/447bd3e4abe17452d19a80e6840dcc8a2603fcbc

Is there a way to tell tkinter where to find tk.tcl? If not, can we modify the default search path to search in `/lib/tk8.6`?

Related to https://github.com/spack/spack/issues/23780

-- components: Tkinter messages: 394584 nosy: ajstewart priority: normal severity: normal status: open title: tkinter searches for tk.tcl in wrong directory type: cr
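For anyone landing on this report with the same question: Tk consults a TK_LIBRARY environment variable (the counterpart of TCL_LIBRARY) when looking for tk.tcl, so pointing it at the directory that actually contains tk.tcl is one possible workaround. The paths below are placeholders, not the real Spack prefixes:

```python
import os

# Placeholder prefixes -- substitute the real hash-suffixed Spack paths.
# TK_LIBRARY should name the directory that actually contains tk.tcl
# (the lib/tk8.6 subdirectory), just as TCL_LIBRARY names the directory
# containing init.tcl.
os.environ["TCL_LIBRARY"] = "/path/to/tcl-8.6.11/lib/tcl8.6"
os.environ["TK_LIBRARY"] = "/path/to/tk-8.6.11/lib/tk8.6"

# Set these before the first `import tkinter` / Tk() call: _tkinter reads
# them when the Tcl interpreter is created.
```

In a Spack package this would be exported by the package recipe rather than set in Python.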
[issue37658] In some cases asyncio.wait_for can lead to socket leak.
Adam Liddell added the comment:

Wrapping every resource allocating call like that is what we were trying to avoid, since it makes wait_for go from a simple one-line helper to something you have to be very careful with. Conceptually, a user should expect that wait_for should behave the exact same as awaiting the underlying awaitable, just with auto-cancellation.

The problem with the current wait_for is that there is a gap where the underlying task may have completed but a cancellation arrives. In this case, we need to raise the cancellation to be a good asyncio citizen, but the underlying task has no opportunity to act on the cancellation (to free the resource) since it is already complete and cannot be re-entered. So the resource returned by the completed task gets stuck in limbo, since we can't return it and we can't assume a generic 'close' behaviour.

See my comment in the PR for a suggestion about an alternative structure for wait_for, which may avoid this gap and hence prevent the leak (but I have not tested it!)

--
Python tracker <https://bugs.python.org/issue37658>
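For readers of the archive: the "wrapping every resource allocating call" boilerplate being discussed looks roughly like the sketch below. `Resource` and `acquire` are stand-ins (not asyncpg or asyncio APIs), and `close()` is assumed idempotent:

```python
import asyncio

class Resource:
    """Stand-in for a socket/connection; close() is idempotent."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

async def acquire(resource):
    await asyncio.sleep(0)  # point where a cancellation can land
    return resource

async def acquire_safely(resource):
    # The boilerplate the comment objects to: every allocating await has
    # to catch CancelledError, free the resource, and re-raise to remain
    # a good asyncio citizen.
    try:
        return await acquire(resource)
    except asyncio.CancelledError:
        resource.close()
        raise
```

Called under asyncio.wait_for, a cancellation that races with completion still releases the resource, at the cost of wrapping every call site.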
[issue37658] In some cases asyncio.wait_for can lead to socket leak.
Adam Liddell added the comment:

Some discussion leading up to that change is here https://github.com/MagicStack/asyncpg/pull/548 and in the issues it links.

--
Python tracker <https://bugs.python.org/issue37658>
[issue42130] AsyncIO's wait_for can hide cancellation in a rare race condition
Change by Adam Liddell :

-- nosy: +aaliddell

Python tracker <https://bugs.python.org/issue42130>
[issue43486] Python 3.9 installer not updating ARP table
Adam added the comment:

The 64-bit installer doesn't even show up in the ARP table, only Python Launcher.

--
Python tracker <https://bugs.python.org/issue43486>
[issue43486] Python 3.9 installer not updating ARP table
New submission from Adam :

1. Install 3.9.0 using the following command line options: python-3.9.0.exe /quiet InstallAllUsers=1
2. Install 3.9.2 using the following command line options: python-3.9.2.exe /quiet InstallAllUsers=1
3. Observe that 3.9.2 successfully installed; however, the ARP (Add/Remove Programs) table does not reflect the latest version (see first screenshot in the attachment) - it still shows 3.9.0 as installed.
4. Uninstall 3.9.2 using the following command line options: python-3.9.2.exe /uninstall /silent
5. Observe that Python 3.9.0 is still listed as installed in the ARP table. Looking in the registry, all Python installed products are removed except for Python Launcher.

Maybe it is by design to leave Python Launcher on the system, maybe not, but I think keeping the ARP table tidy would reduce confusion for users. See second screenshot in the attachment.

-- components: Installation files: 1.jpg messages: 388615 nosy: codaamok priority: normal severity: normal status: open title: Python 3.9 installer not updating ARP table type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49873/1.jpg

Python tracker <https://bugs.python.org/issue43486>
Pip standard error warning about dependency resolver
I started seeing this sometimes from pip: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. Yeah, sure, that's something to consider. We seem fine with the new resolver. Is there a way to suppress it? We have some back end operations that fail when we get output on standard error, and they're dying from that notice. -- https://mail.python.org/mailman/listinfo/python-list
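One workaround for the back-end problem (assuming the notice goes to stderr, as described) is to run pip as a child process and capture its stderr rather than letting it reach the job's stderr. In this sketch the pip call is replaced by a stub child process so it runs anywhere:

```python
import subprocess
import sys

# Stub standing in for the real pip invocation: it prints a result on
# stdout and an advisory on stderr, like `pip install` with the resolver
# notice.
child = [
    sys.executable, "-c",
    "import sys; print('Successfully installed somepackage'); "
    "print('WARNING: pip will change its dependency resolver', file=sys.stderr)",
]

# capture_output=True keeps the child's stderr off the parent's stderr,
# so machinery that fails on any stderr output stays quiet; the warning
# is still available in result.stderr for logging or inspection.
result = subprocess.run(child, capture_output=True, text=True)
print(result.stdout, end="")
```

For a real run, replace `child` with something like `[sys.executable, "-m", "pip", "install", "somepackage"]` and decide what to do with `result.stderr`.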
[issue42967] [security] urllib.parse.parse_qsl(): Web cache poisoning - `; ` as a query args separator
Adam Goldschmidt added the comment:

> The difference is that semicolon is defined in a previous specification.

I understand, but this will limit us in the future if the spec changes - though I don't have strong feelings regarding this one.

> Dear all, now that Adam has signed the CLA, I have closed my PR in favor of
> Adam's because I think 2 open PRs might split everyone's attention. Instead,
> I'll focus on reviewing Adam's PR. Sorry for any inconvenience caused.

❤

--
Python tracker <https://bugs.python.org/issue42967>
[issue42967] [security] urllib.parse.parse_qsl(): Web cache poisoning - `; ` as a query args separator
Adam Goldschmidt added the comment:

> That doesn’t feel necessary to me. I suspect most links use &, some use ;,
> nothing else is valid at the moment and I don’t expect a new separator to
> suddenly appear. IMO the boolean parameter to also recognize ; was better.

That's reasonable. However, I think that we are making this change in order to treat the semicolon as a "custom" separator. In that case, why not let the developer decide on a different custom separator for their own use cases? What's the difference between a semicolon and something else?

--
Python tracker <https://bugs.python.org/issue42967>
[issue42967] [security] urllib.parse.parse_qsl(): Web cache poisoning - `; ` as a query args separator
Adam Goldschmidt added the comment:

> I _didn't_ change the default - it will allow both '&' and ';' still. Eric
> showed a link above that still uses semicolon. So I feel that it's strange to
> break backwards compatibility in a patch update. Maybe we can make just '&'
> the default in Python 3.10, while backporting the ability to specify
> separators to older versions so it's up to users?

I like this implementation. I definitely think we should not break backwards compatibility and only change the default in Python 3.10.

--
Python tracker <https://bugs.python.org/issue42967>
[issue42967] [security] urllib.parse.parse_qsl(): Web cache poisoning - `; ` as a query args separator
Adam Goldschmidt added the comment:

I haven't noticed, I'm sorry. I don't mind closing mine, just thought it could be a nice first contribution. Our PRs are different though - I feel like if we are to implement this, we should let the developer choose the separator and not limit to just `&` and `;` - but that discussion probably belongs in the PR.

--
Python tracker <https://bugs.python.org/issue42967>
[issue42967] [security] urllib.parse.parse_qsl(): Web cache poisoning - `; ` as a query args separator
Change by Adam Goldschmidt :

-- pull_requests: +23120 pull_request: https://github.com/python/cpython/pull/24297

Python tracker <https://bugs.python.org/issue42967>
[issue42967] Web cache poisoning - `;` as a query args separator
New submission from Adam Goldschmidt :

The urlparse module treats semicolon as a separator (https://github.com/python/cpython/blob/master/Lib/urllib/parse.py#L739) - whereas most proxies today only take ampersands as separators. Link to a blog post explaining this vulnerability: https://snyk.io/blog/cache-poisoning-in-popular-open-source-packages/

When the attacker can separate query parameters using a semicolon (;), they can cause a difference in the interpretation of the request between the proxy (running with default configuration) and the server. This can result in malicious requests being cached as completely safe ones, as the proxy would usually not see the semicolon as a separator, and therefore would not include it in a cache key of an unkeyed parameter - such as `utm_*` parameters, which are usually unkeyed.

Let’s take the following example of a malicious request:

```
GET /?link=http://google.com&utm_content=1;link='><script>alert(1)</script> HTTP/1.1
Host: somesite.com
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close
```

urlparse sees 3 parameters here: `link`, `utm_content` and then `link` again. On the other hand, the proxy considers this full string: `1;link='><script>alert(1)</script>` as the value of `utm_content`, which is why the cache key would only contain `somesite.com/?link=http://google.com`.
A possible solution could be to allow developers to specify a separator, like werkzeug does: https://github.com/pallets/werkzeug/blob/6784c44673d25c91613c6bf2e614c84465ad135b/src/werkzeug/urls.py#L833

-- components: C API messages: 385266 nosy: AdamGold priority: normal severity: normal status: open title: Web cache poisoning - `;` as a query args separator type: security versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue42967>
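For archive readers: the fix that eventually shipped took this shape. On patched interpreters, `parse_qsl` (and `parse_qs`) accept a `separator` argument and split only on `&` by default, so the proxy's and the server's views of the query string agree. A quick check, showing the behaviour of patched Pythons:

```python
from urllib.parse import parse_qsl

# Only '&' splits pairs by default, so the ';' stays inside the
# utm_content value instead of smuggling in a second `link` parameter.
print(parse_qsl("link=http://google.com&utm_content=1;link=evil"))

# Code that genuinely wants semicolon-separated data must now opt in:
print(parse_qsl("a=1;b=2", separator=";"))
```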
[issue18838] The order of interactive prompt and traceback on Windows
Adam Bartoš added the comment:

The order is fine on Python 3.8, Windows 10.

--
Python tracker <https://bugs.python.org/issue18838>
[issue18838] The order of interactive prompt and traceback on Windows
Adam Bartoš added the comment:

So far I could reproduce the issue on Python 3.7, Windows Vista 64bit. I'll try with newer versions. The output I got:

>>> from subprocess import *
>>> Popen("py -i foo.py", stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate()
(b'', b'>>> Traceback (most recent call last):\r\n  File "foo.py", line 2, in <module>\r\n    1/0\r\nZeroDivisionError: division by zero\r\n\r\n')

-- status: pending -> open

Python tracker <https://bugs.python.org/issue18838>
[issue42184] pdb exits unexpectedly when calling args
New submission from Adam Merchant :

When an object's __repr__ or __str__ method returns None, a TypeError is raised. However, if this object is passed to a function and `args` is called from within pdb, pdb will immediately exit. Attached is bug_example.py, which contains a simple example of how to reproduce this. Depending on circumstances this can make debugging difficult.

Exact Python version that this happened with: Python 3.6.11

-- files: bug_example.py messages: 379838 nosy: xgenadam priority: normal severity: normal status: open title: pdb exits unexpectedly when calling args type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file49546/bug_example.py

Python tracker <https://bugs.python.org/issue42184>
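A minimal illustration of the trigger (a sketch, not the attached bug_example.py): an object whose `__repr__` returns None raises TypeError whenever it is rendered, which is what pdb's `args` command runs into when printing the function's arguments.

```python
class BadRepr:
    """__repr__ must return a str; returning None is a TypeError."""
    def __repr__(self):
        return None

def takes_arg(obj):
    # Setting a breakpoint here and running `args` in pdb hits the error,
    # since pdb must repr() each argument to display it.
    return obj

try:
    repr(BadRepr())
except TypeError as exc:
    print(exc)
```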
Python 3 Feature Request: `pathlib` Use Trailing Slash Flag
`pathlib` trims trailing slashes by default, but certain packages require trailing slashes. In particular, the `cx_Freeze.bdist_msi` option "directories" is used to build the package directory structure of a program and requires trailing slashes.

Does anyone think it would be a good idea to add a flag or argument to `pathlib.Path` to keep trailing slashes? For instance, I envision something like:

```
from pathlib import Path

my_path = Path(r"foo/bar/", keep_trailing_slash=True)
```

The argument could be made `False` by default to maintain backwards compatibility. The only way I know to keep the trailing slash and maintain cross-platform compatibility is as follows:

```
import os
from pathlib import Path

my_path = f"{Path('foo/bar').resolve()}{os.sep}"
```

although this returns a string and the `Path` object is lost. Any thoughts?
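In the meantime, the string-building step above can at least be centralized in a small helper. This is a hypothetical utility, not part of pathlib, and it still returns a string rather than a `Path`:

```python
import os
from pathlib import PurePath

def with_trailing_sep(path):
    # Render a path as a string with exactly one trailing separator,
    # e.g. for cx_Freeze's bdist_msi "directories" option. PurePath
    # normalizes the input (and drops any trailing slash), then we
    # append os.sep only if it isn't already there.
    s = str(PurePath(path))
    return s if s.endswith(os.sep) else s + os.sep
```

Calling it twice is harmless, since a path that already ends with the separator is returned unchanged.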
[issue41283] The parameter name for imghdr.what in the documentation is wrong
New submission from Adam Eltawla :

I noticed that the parameter name for imghdr.what in the documentation is wrong. Link: https://docs.python.org/3.8/library/imghdr.html?highlight=imghdr

The documentation shows: imghdr.what(filename, h=None)
In reality: def what(file, h=None):

It is 'file', not 'filename'.

-- assignee: docs@python components: Documentation messages: 373551 nosy: aeltawela, docs@python priority: normal severity: normal status: open title: The parameter name for imghdr.what in the documentation is wrong type: enhancement versions: Python 3.8

Python tracker <https://bugs.python.org/issue41283>
Re: Bulletproof json.dump?
On 2020-07-07, Stephen Rosen wrote: > On Mon, Jul 6, 2020 at 6:37 AM Adam Funk wrote: > >> Is there a "bulletproof" version of json.dump somewhere that will >> convert bytes to str, any other iterables to list, etc., so you can >> just get your data into a file & keep working? >> > > Is the data only being read by python programs? If so, consider using > pickle: https://docs.python.org/3/library/pickle.html > Unlike json dumping, the goal of pickle is to represent objects as exactly > as possible and *not* to be interoperable with other languages. > > > If you're using json to pass data between python and some other language, > you don't want to silently convert bytes to strings. > If you have a bytestring of utf-8 data, you want to utf-8 decode it before > passing it to json.dumps. > Likewise, if you have latin-1 data, you want to latin-1 decode it. > There is no universal and correct bytes-to-string conversion. > > On Mon, Jul 6, 2020 at 9:45 AM Chris Angelico wrote: > >> Maybe what we need is to fork out the default JSON encoder into two, >> or have a "strict=True" or "strict=False" flag. In non-strict mode, >> round-tripping is not guaranteed, and various types will be folded to >> each other - mainly, many built-in and stdlib types will be >> represented in strings. In strict mode, compliance with the RFC is >> ensured (so ValueError will be raised on inf/nan), and everything >> should round-trip safely. >> > > Wouldn't it be reasonable to represent this as an encoder which is provided > by `json`? i.e. > > from json import dumps, UnsafeJSONEncoder > ... > json.dumps(foo, cls=UnsafeJSONEncoder) > > Emphasizing the "Unsafe" part of this and introducing people to the idea of > setting an encoder also seems nice. 
> > > On Mon, Jul 6, 2020 at 9:12 AM Chris Angelico wrote: > >> On Mon, Jul 6, 2020 at 11:06 PM Jon Ribbens via Python-list >> wrote: >> > > >> The 'json' module already fails to provide round-trip functionality: >> > >> > >>> for data in ({True: 1}, {1: 2}, (1, 2)): >> > ... if json.loads(json.dumps(data)) != data: >> > ... print('oops', data, json.loads(json.dumps(data))) >> > ... >> > oops {True: 1} {'true': 1} >> > oops {1: 2} {'1': 2} >> > oops (1, 2) [1, 2] >> >> There's a fundamental limitation of JSON in that it requires string >> keys, so this is an obvious transformation. I suppose you could call >> that one a bug too, but it's very useful and not too dangerous. (And >> then there's the tuple-to-list transformation, which I think probably >> shouldn't happen, although I don't think that's likely to cause issues >> either.) > > > Ideally, all of these bits of support for non-JSON types should be opt-in, > not opt-out. > But it's not worth making a breaking change to the stdlib over this. > > Especially for new programmers, the notion that > deserialize(serialize(x)) != x > just seems like a recipe for subtle bugs. > > You're never guaranteed that the deserialized object will match the > original, but shouldn't one of the goals of a de/serialization library be > to get it as close as is reasonable? > > > I've seen people do things which boil down to > > json.loads(x)["some_id"] == UUID(...) > > plenty of times. It's obviously wrong and the fix is easy, but isn't making > the default json encoder less strict just encouraging this type of bug? > > Comparing JSON data against non-JSON types is part of the same category of > errors: conflating JSON with dictionaries. > It's very easy for people to make this mistake, especially since JSON > syntax is a subset of python dict syntax, so I don't think `json.dumps` > should be encouraging it. 
> > On Tue, Jul 7, 2020 at 6:52 AM Adam Funk wrote: > >> Here's another "I'd expect to have to deal with this sort of thing in >> Java" example I just ran into: >> >> >>> r = requests.head(url, allow_redirects=True) >> >>> print(json.dumps(r.headers, indent=2)) >> ... >> TypeError: Object of type CaseInsensitiveDict is not JSON serializable >> >>> print(json.dumps(dict(r.headers), indent=2)) >> { >> "Content-Type": "text/html; charset=utf-8", >> "Server": "openresty", >> ... >> } >> > > Why should the JSON encoder know about an arbitrary dict-like type? > It might implement Mapping, but there's no way for json.dumps to know that > in the general case (because no
Re: Bulletproof json.dump?
On 2020-07-06, Adam Funk wrote:
> On 2020-07-06, Chris Angelico wrote:
>> On Mon, Jul 6, 2020 at 10:11 PM Jon Ribbens via Python-list wrote:
>>> While I agree entirely with your point, there is however perhaps room
>>> for a bit more helpfulness from the json module. There is no sensible
>>> reason I can think of that it refuses to serialize sets, for example.
>>
>> Sets don't exist in JSON. I think that's a sensible reason.
>
> I don't agree. Tuples & lists don't exist separately in JSON, but
> both are serializable (to the same thing). Non-string keys aren't
> allowed in JSON, but it silently converts numbers to strings instead
> of barfing. Typically, I've been using sets to deduplicate values as
> I go along, & having to walk through the whole object changing them to
> lists before serialization strikes me as the kind of pointless labor
> that I expect when I'm using Java. ;-)

Here's another "I'd expect to have to deal with this sort of thing in Java" example I just ran into:

>>> r = requests.head(url, allow_redirects=True)
>>> print(json.dumps(r.headers, indent=2))
...
TypeError: Object of type CaseInsensitiveDict is not JSON serializable
>>> print(json.dumps(dict(r.headers), indent=2))
{
  "Content-Type": "text/html; charset=utf-8",
  "Server": "openresty",
  ...
}

-- 
I'm after rebellion --- I'll settle for lies.
Re: Bulletproof json.dump?
On 2020-07-06, Chris Angelico wrote: > On Mon, Jul 6, 2020 at 10:11 PM Jon Ribbens via Python-list > wrote: >> >> On 2020-07-06, Chris Angelico wrote: >> > On Mon, Jul 6, 2020 at 8:36 PM Adam Funk wrote: >> >> Is there a "bulletproof" version of json.dump somewhere that will >> >> convert bytes to str, any other iterables to list, etc., so you can >> >> just get your data into a file & keep working? >> > >> > That's the PHP definition of "bulletproof" - whatever happens, no >> > matter how bad, just keep right on going. >> >> While I agree entirely with your point, there is however perhaps room >> for a bit more helpfulness from the json module. There is no sensible >> reason I can think of that it refuses to serialize sets, for example. > > Sets don't exist in JSON. I think that's a sensible reason. I don't agree. Tuples & lists don't exist separately in JSON, but both are serializable (to the same thing). Non-string keys aren't allowed in JSON, but it silently converts numbers to strings instead of barfing. Typically, I've been using sets to deduplicate values as I go along, & having to walk through the whole object changing them to lists before serialization strikes me as the kind of pointless labor that I expect when I'm using Java. ;-) >> Going a bit further and, for example, automatically calling isoformat() >> on date/time/datetime objects would perhaps be a bit more controversial, >> but would frequently be useful, and there's no obvious downside that >> occurs to me. > > They wouldn't round-trip without some way of knowing which strings > represent date/times. If you just want a one-way output format, it's > not too hard to subclass the encoder - there's an example right there > in the docs (showing how to create a representation for complex > numbers). The vanilla JSON encoder shouldn't do any of this. In fact, > just supporting infinities and nans is fairly controversial - see > other threads happening right now. 
> Maybe what people want is a pretty printer instead?
>
> https://docs.python.org/3/library/pprint.html
>
> Resilient against recursive data structures, able to emit Python-like
> code for many formats, is as readable as JSON, and is often
> round-trippable. It lacks JSON's interoperability, but if you're
> trying to serialize sets and datetimes, you're forfeiting that anyway.
>
> ChrisA

-- 
"It is the role of librarians to keep government running in difficult times," replied Dramoren. "Librarians are the last line of defence against chaos." (McMullen 2001)
Re: Bulletproof json.dump?
On 2020-07-06, Frank Millman wrote: > On 2020-07-06 2:06 PM, Jon Ribbens via Python-list wrote: >> On 2020-07-06, Chris Angelico wrote: >>> On Mon, Jul 6, 2020 at 8:36 PM Adam Funk wrote: >>>> Is there a "bulletproof" version of json.dump somewhere that will >>>> convert bytes to str, any other iterables to list, etc., so you can >>>> just get your data into a file & keep working? >>> >>> That's the PHP definition of "bulletproof" - whatever happens, no >>> matter how bad, just keep right on going. >> >> While I agree entirely with your point, there is however perhaps room >> for a bit more helpfulness from the json module. There is no sensible >> reason I can think of that it refuses to serialize sets, for example. >> Going a bit further and, for example, automatically calling isoformat() >> on date/time/datetime objects would perhaps be a bit more controversial, >> but would frequently be useful, and there's no obvious downside that >> occurs to me. >> > > I may be missing something, but that would cause a downside for me. > > I store Python lists and dicts in a database by calling dumps() when > saving them to the database and loads() when retrieving them. > > If a date was 'dumped' using isoformat(), then on retrieval I would not > know whether it was originally a string, which must remain as is, or was > originally a date object, which must be converted back to a date object. > > There is no perfect answer, but my solution works fairly well. When > dumping, I use 'default=repr'. This means that dates get dumped as > 'datetime.date(2020, 7, 6)'. I look for that pattern on retrieval to > detect that it is actually a date object. > > I use the same trick for Decimal objects. > > Maybe the OP could do something similar. 
Aha, I think the default=repr option is probably just what I need; maybe (at least in the testing stages) something like this:

try:
    with open(output_file, 'w') as f:
        json.dump(data, f)
except TypeError:
    print('unexpected item in the bagging area!')
    with open(output_file, 'w') as f:
        json.dump(data, f, default=repr)

and then I'd know when I need to go digging through the output for bytes, sets, etc., but at least I'd have the output to examine.

-- 
Well, we had a lot of luck on Venus
We always had a ball on Mars
Re: Bulletproof json.dump?
On 2020-07-06, Chris Angelico wrote:
> On Mon, Jul 6, 2020 at 8:36 PM Adam Funk wrote:
>> Hi,
>>
>> I have a program that does a lot of work with URLs and requests,
>> collecting data over about an hour, & then writing the collated data
>> to a JSON file. The first time I ran it, the json.dump failed because
>> there was a bytes value instead of a str, so I had to figure out where
>> that was coming from before I could get any data out. I've previously
>> run into the problem of collecting values in sets (for deduplication)
>> & forgetting to walk through the big data object changing them to
>> lists before serializing.
>>
>> Is there a "bulletproof" version of json.dump somewhere that will
>> convert bytes to str, any other iterables to list, etc., so you can
>> just get your data into a file & keep working?
>
> That's the PHP definition of "bulletproof" - whatever happens, no
> matter how bad, just keep right on going.

Well played!

> If you really want some way
> to write "just anything" to your file, I recommend not using JSON -
> instead, write out the repr of your data structure. That'll give a
> decent result for bytes, str, all forms of numbers, and pretty much
> any collection, and it won't break if given something that can't
> safely be represented.

Interesting point. At least the TypeError message does say what the unacceptable type is ("Object of type set is not JSON serializable").

-- 
"It is the role of librarians to keep government running in difficult times," replied Dramoren. "Librarians are the last line of defence against chaos." (McMullen 2001)
Bulletproof json.dump?
Hi,

I have a program that does a lot of work with URLs and requests, collecting data over about an hour, & then writing the collated data to a JSON file. The first time I ran it, the json.dump failed because there was a bytes value instead of a str, so I had to figure out where that was coming from before I could get any data out. I've previously run into the problem of collecting values in sets (for deduplication) & forgetting to walk through the big data object changing them to lists before serializing.

Is there a "bulletproof" version of json.dump somewhere that will convert bytes to str, any other iterables to list, etc., so you can just get your data into a file & keep working? (I'm using Python 3.7.)

Thanks!

-- 
Slade was the coolest band in England. They were the kind of guys that would push your car out of a ditch. ---Alice Cooper
[issue41008] multiprocessing.Connection.poll raises BrokenPipeError on Windows
New submission from David Adam :

On Windows 10 (1909, build 18363.900) in 3.7.7 and 3.9.0b3, poll() on a multiprocessing.Connection object can produce an exception:

--
import multiprocessing

def run(output_socket):
    for i in range(10):
        output_socket.send(i)
    output_socket.close()

def main():
    recv, send = multiprocessing.Pipe(duplex=False)
    process = multiprocessing.Process(target=run, args=(send,))
    process.start()
    send.close()
    while True:
        if not process._closed:
            if recv.poll():
                try:
                    print(recv.recv())
                except EOFError:
                    process.join()
                    break

if __name__ == "__main__":
    main()
--

On Linux/macOS this prints 0-9 and exits successfully, but on Windows produces a backtrace as follows:

  File "mptest.py", line 17, in main
    if recv.poll():
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.179.0_x64__qbz5n2kfra8p0\lib\multiprocessing\connection.py", line 262, in poll
    return self._poll(timeout)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.179.0_x64__qbz5n2kfra8p0\lib\multiprocessing\connection.py", line 333, in _poll
    _winapi.PeekNamedPipe(self._handle)[0] != 0):
BrokenPipeError: [WinError 109] The pipe has been ended

-- messages: 371748 nosy: zanchey priority: normal severity: normal status: open title: multiprocessing.Connection.poll raises BrokenPipeError on Windows type: behavior versions: Python 3.9

Python tracker <https://bugs.python.org/issue41008>
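A defensive pattern callers can use in the meantime (a workaround sketch, not a fix for poll() itself): treat BrokenPipeError from the connection the same as EOFError, since on Windows a vanished writer can surface as either.

```python
from multiprocessing import Pipe

def drain(conn):
    """Read everything from conn until the writer is gone.

    EOFError (Unix) and BrokenPipeError (as reported on Windows) both
    mean the sending end has closed, so both terminate the loop.
    """
    items = []
    while True:
        try:
            if not conn.poll(0.1):
                continue  # nothing yet; keep waiting
            items.append(conn.recv())
        except (EOFError, BrokenPipeError):
            break
    return items
```

The process-liveness check from the report's example then becomes unnecessary, since the writer closing its end terminates the loop either way.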
Is CONTINUE_LOOP still a thing?
I got to the point of trying to implement continue in my own interpreter project and was surprised when my for-loop just used some jumps to manage its control flow. Actually, I hoped for something else; I don't have logic in my code generation to track jump positions. I kind of hoped there was some CONTINUE opcode with some extra logic I could add at run time to just kind of do it. (that is my own problem and I know there is no such thing as a free lunch, but it's 2AM and I want to hope!) Well, I found CONTINUE_LOOP, which applies for for-loops, but 3.6.8 sure doesn't emit it for pretty basic stuff:

>>> def for_continue():
...     a = 0
...     for i in range(0, 3, 1):
...         if i == 2:
...             continue
...         a += i
...     else:
...         a += 10
...     return a
...
>>> for_continue()
11
>>> dis(for_continue)
  2           0 LOAD_CONST               1 (0)
              2 STORE_FAST               0 (a)

  3           4 SETUP_LOOP              46 (to 52)
              6 LOAD_GLOBAL              0 (range)
              8 LOAD_CONST               1 (0)
             10 LOAD_CONST               2 (3)
             12 LOAD_CONST               3 (1)
             14 CALL_FUNCTION            3
             16 GET_ITER
        >>   18 FOR_ITER                22 (to 42)
             20 STORE_FAST               1 (i)

  4          22 LOAD_FAST                1 (i)
             24 LOAD_CONST               4 (2)
             26 COMPARE_OP               2 (==)
             28 POP_JUMP_IF_FALSE       32

  5          30 JUMP_ABSOLUTE           18

  6     >>   32 LOAD_FAST                0 (a)
             34 LOAD_FAST                1 (i)
             36 INPLACE_ADD
             38 STORE_FAST               0 (a)
             40 JUMP_ABSOLUTE           18
        >>   42 POP_BLOCK

  8          44 LOAD_FAST                0 (a)
             46 LOAD_CONST               5 (10)
             48 INPLACE_ADD
             50 STORE_FAST               0 (a)

  9     >>   52 LOAD_FAST                0 (a)
             54 RETURN_VALUE

The place where a CONTINUE_LOOP could have made sense would be at address 30 for that JUMP_ABSOLUTE. That'll go back to a FOR_ITER, as CONTINUE_LOOP implies it *must* do. I'm just guessing that at some point, somebody concluded there wasn't anything special about having that opcode over absolute jumps and it got abandoned. I wanted to check if my notions were correct or if there's some gotcha where having that over other things makes sense. -- https://mail.python.org/mailman/listinfo/python-list
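The guess above matches what happened: for a plain loop the compiler always preferred absolute jumps, and CONTINUE_LOOP was only emitted when `continue` had to unwind blocks (e.g. escaping a `try` inside the loop); in CPython 3.8 it was removed outright along with SETUP_LOOP and BREAK_LOOP. A quick check, as a sketch:

```python
import dis

def for_continue():
    a = 0
    for i in range(0, 3, 1):
        if i == 2:
            continue
        a += i
    else:
        a += 10
    return a

# Collect the opcode names the compiler actually used for this function.
ops = {ins.opname for ins in dis.get_instructions(for_continue)}
print("CONTINUE_LOOP" in ops)  # False: plain jumps handle this continue
```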
[issue40847] New parser considers empty line following a backslash to be a syntax error, old parser didn't
Adam Williamson added the comment: I'm not the best person to ask what I'd "consider" to be a bug or not, to be honest. I'm just a Fedora packaging guy trying to make our packages build with Python 3.9 :) If this is still an important question, I'd suggest asking the folks from the Black issue and PR I linked to, that's the "real world" case if any. -- ___ Python tracker <https://bugs.python.org/issue40847> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40911] Unexpected behaviour for += assignment to list inside tuple
Adam Cmiel added the comment: Got it, I didn't realize that the last step of augmented assignment is (in this case) assigning the result of __iadd__ back to the tuple. Thanks for the explanations! -- ___ Python tracker <https://bugs.python.org/issue40911> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40911] Unexpected behaviour for += assignment to list inside tuple
New submission from Adam Cmiel : Python version: Python 3.8.3 (default, May 15 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux Description: When assigning to a tuple index using +=, if the element at that index is a list, the list is extended and a TypeError is raised.

a = ([],)
try:
    a[0] += [1]
except TypeError:
    assert a != ([1],)  # assertion fails
else:
    assert a == ([1],)

The expected behaviour is that only one of those things would happen (probably the list being extended with no error, given that a[0].extend([1]) works fine). -- components: Interpreter Core messages: 370990 nosy: Adam Cmiel priority: normal severity: normal status: open title: Unexpected behaviour for += assignment to list inside tuple type: behavior versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue40911> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
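For anyone hitting this later, the both-things-happen behavior falls out of how the augmented assignment desugars. A sketch of the equivalent steps (hedged: this is the conceptual expansion, not the literal bytecode):

```python
a = ([],)
item = a[0]                  # fetch the list out of the tuple
result = item.__iadd__([1])  # in-place add mutates the list, returns it
try:
    a[0] = result            # storing back into the tuple raises TypeError
except TypeError:
    pass
print(a)  # ([1],) -- the list was already extended before the error
```

The mutation happens in `__iadd__` before the (failing) store back into the tuple, which is why the list ends up extended even though a TypeError is raised.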
[issue40848] compile() can compile a bare starred expression with `PyCF_ONLY_AST` flag with the old parser, but not the new one
Adam Williamson added the comment: Realized I forgot to give it, so in case it's important, the context here is the black test suite: https://github.com/psf/black/issues/1441 that test suite has a file full of expressions that it expects to be able to parse this way (it uses `ast.parse()`, which in turn calls `compile()` with this flag). A bare (*starred) line is part of that file: https://github.com/psf/black/blob/master/tests/data/expression.py#L149 and has been for as long as black has existed. Presumably if this isn't going to be fixed we'll need to adapt this black test file to test a starred expression in a 'valid' way, somehow. -- ___ Python tracker <https://bugs.python.org/issue40848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40848] compile() can compile a bare starred expression with `PyCF_ONLY_AST` flag with the old parser, but not the new one
New submission from Adam Williamson : Not 100% sure this would be considered a bug, but it seems at least worth filing to check. This is a behaviour difference between the new parser and the old one. It's very easy to reproduce:

sh-5.0# PYTHONOLDPARSER=1 python3
Python 3.9.0b1 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from _ast import *
>>> compile("(*starred)", "", "exec", flags=PyCF_ONLY_AST)
<_ast.Module object at 0x...>
>>>

sh-5.0# python3
Python 3.9.0b1 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from _ast import *
>>> compile("(*starred)", "", "exec", flags=PyCF_ONLY_AST)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "", line 1
    (*starred)
    ^
SyntaxError: invalid syntax

That is, you can compile() the expression "(*starred)" with the PyCF_ONLY_AST flag set with the old parser, but not with the new one. Without PyCF_ONLY_AST you get a SyntaxError with both parsers, though with the old parser the error message is "can't use starred expression here", not "invalid syntax". -- components: Interpreter Core messages: 370620 nosy: adamwill priority: normal severity: normal status: open title: compile() can compile a bare starred expression with `PyCF_ONLY_AST` flag with the old parser, but not the new one versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue40848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40847] New parser considers empty line following a backslash to be a syntax error, old parser didn't
New submission from Adam Williamson : While debugging issues with the black test suite in Python 3.9, I found one which black upstream says is a CPython issue, so I'm filing it here. Reproduction is very easy. Just use this four-line tester:

print("hello, world")
\

print("hello, world 2")

with that saved as `test.py`, check the results:

sh-5.0# PYTHONOLDPARSER=1 python3 test.py
hello, world
hello, world 2
sh-5.0# python3 test.py
  File "/builddir/build/BUILD/black-19.10b0/test.py", line 3

    ^
SyntaxError: invalid syntax

The reason black has this test (well, a similar test - in black's test, the file *starts* with the backslash then the empty line, but the result is the same) is covered in https://github.com/psf/black/issues/922 and https://github.com/psf/black/pull/948 . -- components: Interpreter Core messages: 370618 nosy: adamwill priority: normal severity: normal status: open title: New parser considers empty line following a backslash to be a syntax error, old parser didn't type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue40847> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Is there some reason that recent Windows 3.6 releases don't included executable nor msi installers?
On Friday, May 29, 2020 at 7:30:32 AM UTC-5, Eryk Sun wrote: > On 5/28/20, Adam Preble wrote: > Sometimes a user will open a script via "open with" and browse to > python.exe or py.exe. This associates .py files with a new progid that > doesn't pass the %* command-line arguments. > > The installed Python.File progid should be listed in the open-with > list, and, if the launcher is installed, the icon should have the > Python logo with a rocket on it. Select that, lock it in by selecting > to always use it, and open the script. This will only be wrong if a > user or misbehaving program modified the Python.File progid and broke > its "open" action. Thank you for responding! The computers showing the problem are remote to me and I won't be able to access one for a few days, but I will be making it a point in particular to check their associations before continuing with anything else. -- https://mail.python.org/mailman/listinfo/python-list
Re: Is there some reason that recent Windows 3.6 releases don't included executable nor msi installers?
On Thursday, May 28, 2020 at 7:57:04 PM UTC-5, Terry Reedy wrote: > The OP is so far choosing to not use an installer with those fixes. By > not doing so, he is missing out on the maybe 2000 non-security fixes and > some enhancements that likely would benefit him more than maybe 50 > mostly obscure fixes added between 3.6.8 and 3.6.10*. If a rare user > such as Adam also chooses to not compile the latter, that is his choice. I was going to just stay mute about why I was even looking at 3.6.10, but I felt I should weigh in after some of the other responses. I think somebody would find the issues interesting. We had found what looked like a bug in the Python Launcher where it would eat command line arguments meant for the script. I would find some stuff missing from sys.argv in a script that just imports sys and prints out sys.argv if I ran it directly in cmd.exe as "script.py." If I ran it as "python script.py" then everything was good as usual. So I figured while sorting out what was wrong that I should try the latest 3.6 interpreter since it would be a safe bet. Our organization finally lifted Sisyphus' rock over the 2.7 hump earlier in the year by moving to 3.6. So imagine my surprise when I found the latest 3.6 releases were just source tarballs. This left me with a dilemma and I'm still working through it. I haven't filed an issue about this because I haven't completed my own due diligence on the problem by trying it on a "latest." For the sake of this particular problem, I think I can just use 3.8.3 for exploration, but I'm worrying about my wider organization. I can't count on 3.8 because of some module dependencies in our organization's software. 3.7 has a similar issue. So I figured I'd actually just build the thing and see what I can do. I did manage to build it, but there were surprisingly a few quirks. I caused some of it. For example, I didn't care about most of the externals before, but I made sure to include them if I was creating a release for others. 
A few thousand people would be using this and I'm the one that would be accountable if it went bust. So I made sure all the major externals were incorporated, and a lot of those were messing up. Generally, the externals would download, but some would not get moved/renamed to their final name, and then the build would fail when trying to find them. So I wound up with an installation that seemed to run my own code just fine in trials, but I would be terrified to put our organization's software stack on it. I'm now concerned about how long we have with 3.6 because people clearly want us to move on even beyond that. I look online and the official support window for it ends at the end of next year, but it looks like the real support window for that on Windows has already ended. So our organization may have miscalculated this. What does that mean if we manage to make it to 3.8 in a few months? We can't do it right now due to a few missing modules, but now we have to question if we'll only get a year out of 3.8 before we're doing this all over again. -- https://mail.python.org/mailman/listinfo/python-list
Is there some reason that recent Windows 3.6 releases don't included executable nor msi installers?
I wanted to update from 3.6.8 on Windows without necessarily moving on to 3.7+ (yet), so I thought I'd try 3.6.9 or 3.6.10. All I see for both are source archives: https://www.python.org/downloads/release/python-369/ https://www.python.org/downloads/release/python-3610/ So, uh, I theoretically did build a 3.6.10 .exe installer, but I don't really trust I did everything right. Is there an officially sourced installation? -- https://mail.python.org/mailman/listinfo/python-list
Import machinery for extracting non-modules from modules (not using import-from)
The (rightful) obsession with modules in PEP-451 and the import machinery hit me with a gotcha when I was trying to implement importing .NET stuff that mimicked IronPython and Python.NET in my interpreter project. The meat of the question: Is it important that the spec loader actually return a module? Can it just return... stuff? I know a from X import Y is the normal means for this, but if the loader knows better, can it just do it? A normal process is something like:

import X

A bunch of finders line up to see if they know anything about X. If they don't, they return None. Assume it's found. That finder will return a module spec for how to load it. A little later, that spec is instructed to load the module. If X wasn't a module, you can expect to see something like:

ModuleNotFoundError: No module named 'X'; 'X' is not a package

...you were supposed to do 'from something import X'. I'm actually trying to figure out if there's a way with normal Python modules where I can even be in a situation to just blandly try to import X without a package in front of it. With IronPython--and I'm pretty sure Python.NET--there are situations where you CAN do this. The paths for .NET 'packages' are the .NET namespaces (a slightly different usage of the term). Say I want the machine name. It would be typical to get that with System.Environment.MachineName. MachineName is a static field in Environment. Environment is a class in the System namespace, which lives in mscorlib (in classic .NET Framework). The .NET namespace can be null. In that case it's just in the root namespace or something. Let's say I have a .dll I've made known to IronPython or Python.NET using its clr.AddReference, and I want to toy with some class defined without a namespace called "Crazy." This is totally fine:

import Crazy

I really can't follow what either one is doing here, and I don't know how well they're even latching on to PEP-451. So there's the main question: is it important that the spec loader actually return a module? 
Can it just return... stuff? I know a from X import Y is the normal means for this, but if the loader knows better, can it just do it? -- https://mail.python.org/mailman/listinfo/python-list
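To the main question: the import system really does expect a module object, and `exec_module` doesn't even return a value. The loophole loaders use (and this is a sketch under the assumption that you just want `import Crazy` to bind an arbitrary object; the finder/loader names here are made up) is replacing the `sys.modules` entry during `exec_module`: the import machinery re-reads `sys.modules[name]` after loading and binds whatever it finds there.

```python
import sys
import importlib.abc
import importlib.machinery

class CrazyLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # let the machinery create a default module object

    def exec_module(self, module):
        # Swap the freshly created module for an arbitrary object; the
        # import machinery re-fetches sys.modules[name] after this returns.
        sys.modules[module.__name__] = {"MachineName": "not-a-module"}

class CrazyFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        if fullname == "Crazy":
            return importlib.machinery.ModuleSpec(fullname, CrazyLoader())
        return None

sys.meta_path.insert(0, CrazyFinder())
import Crazy
print(type(Crazy))  # the bound name is the dict, not a module
```

So a loader can't *return* stuff, but it can arrange for the importing code to receive stuff, which may be how the .NET bridges get away with their namespace tricks.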
How does the import machinery handle relative imports?
I'm fussing over some details of relative imports while trying to mimic Python module loading in my personal project. This is getting more into corner cases, but I can spare time to talk about it while working on more normal stuff. I first found this place: https://manikos.github.io/how-pythons-import-machinery-works And eventually just started looking at PEP 451. Neither is really explaining relative imports. I decided to try this garbage:

from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader

spec = spec_from_loader("..import_star", SourceFileLoader("package_test.import_star", r"C:\coding\random_python_projects\package_test\import_star.py"))
print(spec)
mod = module_from_spec(spec)
print(mod)
spec.loader.exec_module(mod)

...exec_module ultimately fails to do the job. Note the syntax so that I can actually perform a relative import hahaha:

C:\Python36\python.exe -m package_test.second_level.import_upwards
ModuleSpec(name='..import_star', loader=<_frozen_importlib_external.SourceFileLoader object at 0x0226E914B080>, origin='')
'>
Traceback (most recent call last):
  File "C:\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\coding\random_python_projects\package_test\second_level\import_upwards.py", line 15, in <module>
    spec.loader.exec_module(mod)
  File "<frozen importlib._bootstrap_external>", line 674, in exec_module
  File "<frozen importlib._bootstrap_external>", line 750, in get_code
  File "<frozen importlib._bootstrap_external>", line 398, in _check_name_wrapper
ImportError: loader for package_test.import_star cannot handle ..import_star

Yeah I don't think I'm doing this right! At this point I'm just trying to figure out where I feed in the relative path. Is that all deduced in advance of finding the spec? Can I even give the finders a relative path like that? -- https://mail.python.org/mailman/listinfo/python-list
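On the "where do I feed in the relative path" question: nowhere in the finder/loader layer. Relative names are resolved to absolute names *before* the finders ever run, using the importing module's __package__; that string math is exposed as importlib.util.resolve_name, so a sketch of what the machinery does up front:

```python
from importlib.util import resolve_name

# Two leading dots mean "go up one package" from the importing package,
# so this resolves relative to package_test.second_level.
absolute = resolve_name("..import_star", package="package_test.second_level")
print(absolute)  # package_test.import_star
```

The finders then only ever see the absolute name, which is why a spec named "..import_star" confuses the loader.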
Re: Why generate POP_TOP after an "import from?"
On Saturday, April 18, 2020 at 1:15:35 PM UTC-5, Alexandre Brault wrote: > >>> def f(): > ... â â from sys import path, argv ... So I figured it out and all but I wanted to ask about the special characters in that output. I've seen that a few times and never figured out what's going on and if I need to change how I'm reading these. Or say: â â â â â â â â â â â â 12 STORE_FASTâ â â â â â â â â â â â â â 1 (argv I don't know if you're seeing all these letter a's. I'm guessing something goofy with Unicode spaces or something? -- https://mail.python.org/mailman/listinfo/python-list
Re: Why generate POP_TOP after an "import from?"
On Friday, April 17, 2020 at 1:37:18 PM UTC-5, Chris Angelico wrote: > The level is used for package-relative imports, and will basically be > the number of leading dots (eg "from ...spam import x" will have a > level of 3). You're absolutely right with your analysis, with one > small clarification: Thanks for taking that on too. I haven't set up module hierarchy yet so I'm not in a position to handle levels, but I have started parsing them and generating the opcodes. Is it sufficient to just use the number of dots as an indication of level? As a side note, I suppose it's sufficient to just *peek* at the stack rather than pop the module and push it again. I'm guessing that's what the Python interpreter is doing. > In theory, I suppose, you could replace the POP_TOP with a STORE_FAST > into "sys", and thus get a two-way import that both grabs the module > and also grabs something out of it. Not very often wanted, but could > be done if you fiddle with the bytecode. I'm trying to follow along for academic purposes. I'm guessing you mean that would basically optimize:

from sys import path
import sys

It would definitely be a fringe thing to do... -- https://mail.python.org/mailman/listinfo/python-list
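On the level question: yes, the compiler literally counts the leading dots and pushes that count as the constant that IMPORT_NAME later pops. A quick hedged check that doesn't need the package to exist (the import would fail at run time, but compile() is enough to see the operands):

```python
# The level (3) and the fromlist (('x',)) both land in co_consts,
# ready to be pushed by the two LOAD_CONSTs before IMPORT_NAME.
code = compile("from ...spam import x", "<test>", "exec")
print(code.co_consts)
```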
Re: Why generate POP_TOP after an "import from?"
On Friday, April 17, 2020 at 1:22:18 PM UTC-5, Adam Preble wrote: > At this point, my conceptual stack is empty. If I POP_TOP then I have nothing > to pop and the world would end. Yet, it doesn't. What am I missing? Check out this guy replying to himself 10 minutes later. I guess IMPORT_FROM pushes the module back on to the stack afterwards so that multiple import-from's can be executed off of it. This is then terminated with a POP_TOP:

>>> def import_from_multi():
...     from sys import path, bar
...
>>> dis(import_from_multi)
  2           0 LOAD_CONST               1 (0)
              2 LOAD_CONST               2 (('path', 'bar'))
              4 IMPORT_NAME              0 (sys)
              6 IMPORT_FROM              1 (path)
              8 STORE_FAST               0 (path)
             10 IMPORT_FROM              2 (bar)
             12 STORE_FAST               1 (bar)
             14 POP_TOP
             16 LOAD_CONST               0 (None)
             18 RETURN_VALUE

-- https://mail.python.org/mailman/listinfo/python-list
Why generate POP_TOP after an "import from?"
Given this in Python 3.6.8:

from dis import dis

def import_from_test():
    from sys import path

>>> dis(import_from_test)
  2           0 LOAD_CONST               1 (0)
              2 LOAD_CONST               2 (('path',))
              4 IMPORT_NAME              0 (sys)
              6 IMPORT_FROM              1 (path)
              8 STORE_FAST               0 (path)
             10 POP_TOP
             12 LOAD_CONST               0 (None)
             14 RETURN_VALUE

I don't understand why there's a POP_TOP there that I don't get for an import_name grammatical statement. IMPORT_NAME needs to eat the top two entries of the stack for level and the from-list. BTW I don't know what level is for either since my science projects have always had it be zero, but that's another question. IMPORT_NAME will then push the module on to the stack. IMPORT_FROM will import path from the module on the stack, and push that result on the stack. STORE_FAST will store path for use, finally "modifying the namespace." At this point, my conceptual stack is empty. If I POP_TOP then I have nothing to pop and the world would end. Yet, it doesn't. What am I missing? -- https://mail.python.org/mailman/listinfo/python-list
[issue11395] print(s) fails on Windows with long strings
Adam Bartoš added the comment: I've been hit by this issue recently. On my configuration, print("a" * 10215) fails with an infinite loop of OSErrors (WinError 8). This cannot even be interrupted with Ctrl-C, nor can the exception be caught.

- print("a" * 10214) is fine
- print("a" * 10215) is fine when preceded by print("b" * 2701), but not when preceded by print("b" * 2700)
- the problem (or at least with these numbers) occurs only when the code is saved in a script, and this is run by double-clicking the file (i.e. run by Windows ShellExecute I guess), not by "py test.py" or interactively.

My configuration is Python 3.7.3 64 bit on Windows Vista 64 bit. I wonder if anyone is able to reproduce this on their configuration. -- nosy: +Drekin ___ Python tracker <https://bugs.python.org/issue11395> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: How does the super type present itself and do lookups?
On Thursday, March 19, 2020 at 5:02:46 PM UTC-5, Greg Ewing wrote: > On 11/03/20 7:02 am, Adam Preble wrote: > > Is this foo attribute being looked up in an override of __getattr__, > > __getattribute__, or is it a reserved slot that's internally doing this? > > That's what I'm trying to figure out. > > Looking at the source in Objects/typeobject.c, it uses the > tp_getattro type slot, which corresponds to __getattribute__. Thanks for taking the time to look this up for me. I saw the message soon after you originally posted it, but it took me this long to sit down and poke at everything some more. I don't doubt what you got from the source, but I am trying to figure out how I could have inferred that from the code I was trying. It looks like child_instance.__getattribute__ == child_instance.super().__getattribute__. They print out with the same address and pass an equality comparison. That implies that they are the same, and that the super type is NOT doing something special with that slot. Given that super().__getattribute__ internally ultimately should be something else, I am guessing there is something else at play causing an indirection. I have two reasons to be interested in this: 1. There may be obscure behavior I should worry about in general if I'm trying to default to mimicking Python and the data model for my own stuff. 2. I need to improve my kung fu when I'm inspecting these objects so I don't get hung up on stuff like this in the future. The bright side is that having a custom get-attribute implementation is pretty much correct, although mine would have c.__getattribute__ != c.super().__getattribute__. -- https://mail.python.org/mailman/listinfo/python-list
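A short demonstration of the indirection discussed above: the slot lives on the *type* (`super` defines tp_getattro), while the `.__getattribute__` you fetch *off the super instance* is the result of running that slot's lookup, which finds object.__getattribute__ and binds it to the underlying self. That's why the two fetched attributes compare equal even though super is doing something special:

```python
class Parent:
    def stuff(self):
        return "parent"

class Child(Parent):
    def stuff(self):
        return "child"

c = Child()
s = super(Child, c)

# The special behavior is on type(s), not on the fetched attribute.
print(type(s).__getattribute__ is super.__getattribute__)  # True
# Both fetches produce object.__getattribute__ bound to c, so they
# compare equal -- the super magic already ran during the fetch itself.
print(s.__getattribute__ == c.__getattribute__)            # True
print(s.stuff())  # "parent": lookup starts after Child in the MRO
```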
Re: How does the super type present itself and do lookups?
On Tuesday, March 10, 2020 at 9:28:11 AM UTC-5, Peter Otten wrote: > self.foo looks up the attribute in the instance, falls back to the class and > then works its way up to the parent class, whereas > > super().foo bypasses both instance and class, and starts its lookup in the > parent class. Is this foo attribute being looked up in an override of __getattr__, __getattribute__, or is it a reserved slot that's internally doing this? That's what I'm trying to figure out. -- https://mail.python.org/mailman/listinfo/python-list
Re: How does the super type present itself and do lookups?
On Monday, March 9, 2020 at 9:31:45 PM UTC-5, Souvik Dutta wrote: > This should be what you are looking for. > https://python-reference.readthedocs.io/en/latest/docs/functions/super.html I'm not trying to figure out how the super() function works, but rather the anatomy of the object it returns. What I think is happening in my investigation is that some of the missing attributes in __dict__ are getting filled in from reserved slots, but it's just a theory. I'm trying to mimic the object in my own interpreter project. -- https://mail.python.org/mailman/listinfo/python-list
Re: How does the super type present itself and do lookups?
On Wednesday, March 4, 2020 at 11:13:20 AM UTC-6, Adam Preble wrote: > Stuff

I'm speculating that the stuff I don't see when poking around are reserved slots. I figured out how much of a thing that is when I was digging around for how classes know how to construct themselves. I managed to figure out __call__ is like that too. So I guess it's something that doesn't readily reveal itself when asked but is there if you try to use it. (or something) -- https://mail.python.org/mailman/listinfo/python-list
How does the super type present itself and do lookups?
Months ago, I asked a bunch of stuff about super() and managed to fake it well enough to move on to other things for awhile. The day of reckoning came this week and I was forced to implement it better for my personal Python project. I have a hack in place that makes it work well enough, but I found myself frustrated with how shifty the super type is. It's both the self and the parent class, but not. If you don't know, you can trap what super() returns some time and poke it with a stick. If you print it you'll be able to tell it's definitely unique:

<super: <class 'Child'>, <Child object>>

If you try to invoke methods on it, it'll invoke the superclass' methods. That's what is supposed to happen and basically what already happens when you do super().invoke_this_thing() anyways. Okay, so how is it doing the lookup for that? The child instance and the super types' __dict__ are the same. The contents pass an equality comparison and are the same if you print them. They have the same __getattribute__ method wrapper. However, if you dir() them you definitely get different stuff. For one, the super type has its special variables __self__, __self_class__, and __thisclass__. It's missing __dict__ from the dir output. But wait, I just looked at that! So I'm thinking that __getattr__ is involved, but it's not listed in anything. If I use getattr on the super, I'll get the parent methods. If I use __getattribute__, I get the child's methods. I get errors every way I've conceived of trying to pull out a __getattr__ dunder. No love. I guess the fundamental question is: what different stuff happens when LOAD_ATTR is performed on a super object versus a regular object? If you are curious about what I'm doing right now, I overrode __getattribute__ since that's primarily what I use for attribute lookups right now. It defers to the superclass' __getattribute__. If a method pops out, it replaces the self with the super's __self__ before kicking it out. 
I feel kind of dirty doing it: https://github.com/rockobonaparte/cloaca/blob/312758b2abb80320fb3bf344ba540a034875bc4b/LanguageImplementation/DataTypes/PySuperType.cs#L36 If you want to see how I was experimenting with super, here's the code and output:

class Parent:
    def __init__(self):
        self.a = 1

    def stuff(self):
        print("Parent stuff!")

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.b = 2
        self.super_instance = super()

    def stuff(self):
        print("Child stuff!")

    def only_in_child(self):
        print("Only in child!")

c = Child()
c.super_instance.__init__()
c.stuff()
c.super_instance.stuff()
print(c)
print(c.super_instance)
print(c.__init__)
print(c.super_instance.__init__)
print(c.stuff)
print(c.super_instance.stuff)
print(c.__getattribute__)
print(c.super_instance.__getattribute__)
print(dir(c))
print(dir(c.super_instance))
print(c.__dict__ == c.super_instance.__dict__)
print(getattr(c, "__init__"))
print(getattr(c.super_instance, "__init__"))
print(c.__getattribute__("__init__"))
print(c.super_instance.__getattribute__("__init__"))

Child stuff!
Parent stuff!
<__main__.Child object at 0x026854D99828>
<super: <class 'Child'>, <Child object>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Parent.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.stuff of <__main__.Child object at 0x026854D99828>>
<bound method Parent.stuff of <__main__.Child object at 0x026854D99828>>
<method-wrapper '__getattribute__' of Child object at 0x026854D99828>
<method-wrapper '__getattribute__' of Child object at 0x026854D99828>
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'a', 'b', 'only_in_child', 'stuff', 'super_instance']
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__self__', '__self_class__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__thisclass__', 'a', 'b', 'super_instance']
True
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Parent.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>
<bound method Child.__init__ of <__main__.Child object at 0x026854D99828>>

-- https://mail.python.org/mailman/listinfo/python-list
Re: Data model and attribute resolution in subclasses
On Monday, March 2, 2020 at 3:12:33 PM UTC-6, Marco Sulla wrote: > Is your project published somewhere? What changes have you done to the > interpreter? I'm writing my own mess: https://github.com/rockobonaparte/cloaca It's a .NET Pythonish interpreter with the distinction of using a whole lot of async-await so I can do expressive game scripting with it in one thread. If IronPython had a handle on async-await then I'd probably not be doing this at all. Well, it was also a personal education project to learn me some Python internals for an internal company job change, but they aren't interested in me at all. :( I still hack with it because I got far enough to have a REPL I could dump into Unity and it immediately looked very useful. -- https://mail.python.org/mailman/listinfo/python-list
Re: Data model and attribute resolution in subclasses
On Monday, March 2, 2020 at 7:09:24 AM UTC-6, Lele Gaifax wrote: > Yes, you just used it, although you may have confused its meaning: > Yeah I absolutely got it backwards. That's a fun one I have to fix in my project now! -- https://mail.python.org/mailman/listinfo/python-list
Re: Data model and attribute resolution in subclasses
On Sunday, March 1, 2020 at 3:08:29 PM UTC-6, Terry Reedy wrote: > Because BaseClass is the superclass of SubClass. So there's a mechanism for parent classes to know all their children? -- https://mail.python.org/mailman/listinfo/python-list
Re: Data model and attribute resolution in subclasses
Based on what I was seeing here, I did some experiments to try to understand better what is going on:

class BaseClass:
    def __init__(self):
        self.a = 1

    def base_method(self):
        return self.a

    def another_base_method(self):
        return self.a + 1

class SubClass(BaseClass):
    def __init__(self):
        super().__init__()
        self.b = 2

c = SubClass()
print(c.__dict__)
print(c.__class__.__dict__)
print(c.__class__.__subclasses__())
print(c.__class__.mro())
print(c.__class__.mro()[1].__dict__)
print(getattr(c, "base_method"))
print(c.b)
print(c.a)

With some notes:

print(c.__dict__)
{'a': 1, 'b': 2}

So the instance directly has a. I am guessing that the object's own dictionary is directly getting these as both __init__'s are run.

print(c.__class__.__dict__)
{'__module__': '__main__', '__init__': <function SubClass.__init__ at 0x...>, '__doc__': None}

I am guessing this is what is found and stuffed into the class' namespace when the class is built; that's specifically the BUILD_CLASS opcode doing its thing.

print(c.__class__.__subclasses__())
[]

What?! Why isn't this [<class '__main__.BaseClass'>]?

print(c.__class__.mro())
[<class '__main__.SubClass'>, <class '__main__.BaseClass'>, <class 'object'>]

This is more like what I expected to find with subclasses. Okay, no, method resolution order is showing the entire order.

print(c.__class__.mro()[1].__dict__)
{'__module__': '__main__', '__init__': <function BaseClass.__init__ at 0x...>, 'base_method': <function BaseClass.base_method at 0x...>, 'another_base_method': <function BaseClass.another_base_method at 0x...>, '__dict__': <attribute '__dict__' of 'BaseClass' objects>, '__weakref__': <attribute '__weakref__' of 'BaseClass' objects>, '__doc__': None}

No instance-level stuff. Looks like it's the base class namespace when the BUILD_CLASS opcode saw it. Okay, looking good.

print(getattr(c, "base_method"))
<bound method BaseClass.base_method of <__main__.SubClass object at 0x...>>

I'm guessing here it didn't find it in the object's __dict__ nor the class' __dict__ so it went in mro and found it in BaseClass. So I need a __dict__ for the class based on the code defined for it when the class is defined. That's associated with the class. I need another dictionary for each instance. That will get stuffed with whatever started getting dumped into it in __init__ (and possibly elsewhere afterwards). What __dict__ actually is can vary. 
The mappingproxy helps make sure that strings are given as keys (among other things?). -- https://mail.python.org/mailman/listinfo/python-list
Data model and attribute resolution in subclasses
I have been making some progress on my custom interpreter project, but I found I have totally blown implementing proper subclassing in the data model. What I have right now is PyClass defining what a PyObject is. When I make a PyObject from a PyClass, the PyObject sets up a __dict__ that is used for attribute lookup. When I realized I needed to worry about looking up parent namespace stuff, this fell apart because my PyClass had no real notion of a namespace.

I'm looking at the Python data model for inspiration. While I don't have to implement the full specification, it helps me where I don't have an alternative. However, the data model is definitely a programmer document; it's one of those things where the prose is being very precise in what it's saying, and that can foil a casual reading.

Here's what I think is supposed to exist:

1. PyObject is the base.
2. It has an "internal dictionary." This isn't exposed as __dict__.
3. PyClass subclasses PyObject.
4. PyClass has a __dict__.

Is there a term for PyObject's internal dictionary? It wasn't called __dict__, and I think that's for good reasons. I guess the idea is a PyObject doesn't have a namespace, but a PyClass does (?).

Now to look something up. I assume that __getattribute__ is supposed to do something like:

1. The PyClass __dict__ for the given PyObject is consulted.
2. The implementation of __getattribute__ for the PyObject will default to looking into the "internal dictionary."
3. Assuming the attribute is not found, the parent classes are then consulted using their __getattribute__ calls. We might recurse on this. There's probably some trivia here regarding multiple inheritance; I'm not entirely concerned (yet).
4. Assuming it's never found, the user sees an AttributeError.

Would each of these failed lookups result in an AttributeError? I don't know how much it matters to me right now that I implement exactly to that, but I was curious if that's really how it goes under the hood.
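A minimal sketch of the lookup order I have in mind (hedged: this deliberately ignores descriptors and __getattr__ hooks, and real __getattribute__ checks data descriptors on the type before the instance dict and returns bound methods rather than raw functions):

```python
def lookup(obj, name):
    # Instance's own dictionary first.
    if name in obj.__dict__:
        return obj.__dict__[name]
    # Then walk the class and its parent classes in MRO order.
    for klass in type(obj).mro():
        if name in klass.__dict__:
            return klass.__dict__[name]
    # Not found anywhere: surface an AttributeError.
    raise AttributeError(name)

class Base:
    def greet(self):
        return "hi"

class Child(Base):
    def __init__(self):
        self.x = 1

c = Child()
print(lookup(c, "x"))      # 1, straight from the instance dict
print(lookup(c, "greet"))  # the raw function found in Base.__dict__
```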
[issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect
Change by Adam Meily : -- pull_requests: +17546 pull_request: https://github.com/python/cpython/pull/18159
[issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect
Change by Adam Meily : -- pull_requests: +17545 pull_request: https://github.com/python/cpython/pull/18158
[issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect
Change by Adam Meily : -- pull_requests: +17544 pull_request: https://github.com/python/cpython/pull/18157
[issue38092] environment variables not passed correctly using new virtualenv launching in windows and python3.7+
Change by Adam Meily : -- pull_requests: +17543 stage: needs patch -> patch review pull_request: https://github.com/python/cpython/pull/18157
[issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect
Adam Meily added the comment: OK, that makes sense. For 3.7, I can create a PR that corrects the order of arguments passed into _winapi.CreateProcess. For 3.8 / master, the problem appears to be that the check in popen_spawn_win32.py to set the subprocess env is failing because sys.executable != spawn.get_executable() -- spawn.get_executable() is returning sys._base_executable. So, can you confirm that the fix is to just change spawn.get_executable() to return sys.executable, like it was prior to the PR mentioned in the other ticket?
[issue39439] Windows Multiprocessing in Virtualenv: sys.prefix is incorrect
New submission from Adam Meily : I upgraded from Python 3.7.1 to 3.7.6 and began noticing a behavior that was breaking my code. My code detects if it's running in a virtualenv. This check worked in 3.7.1 but is broken in 3.7.6. From the documentation, sys.prefix and sys.exec_prefix should point to the virtualenv when one is active. However, I'm seeing that both of these constants point to the system installation directory and not my virtualenv when I am in a multiprocessing child. Here is example output of a test application running in 3.7.6 (I've attached the test script to this ticket):

    = Parent process =
    sys.prefix: C:\Users\user\project\venv
    sys.exec_prefix: C:\Users\user\project\venv
    sys.base_prefix: C:\Program Files\Python37
    sys.base_exec_prefix: C:\Program Files\Python37

    = Subprocess =
    sys.prefix: C:\Program Files\Python37
    sys.exec_prefix: C:\Program Files\Python37
    sys.base_prefix: C:\Program Files\Python37
    sys.base_exec_prefix: C:\Program Files\Python37

I would expect sys.prefix and sys.exec_prefix to be identical in the parent and child process. I verified that this behavior is present in 3.7.5, 3.7.6, and 3.8.1. I am on a Windows 10 x64 system. Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32

-- components: Windows files: multiproc_venv_prefix.py messages: 360581 nosy: meilyadam, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Multiprocessing in Virtualenv: sys.prefix is incorrect versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48862/multiproc_venv_prefix.py
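For context, the kind of virtualenv check described above usually reduces to comparing sys.prefix with sys.base_prefix; the following is a sketch, not the attached multiproc_venv_prefix.py:

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix points at the base installation; outside a venv
    # the two are equal.  (sys.base_prefix exists on Python 3.3+.)
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

The reported bug is precisely that, in a multiprocessing child on the affected versions, this comparison flips to False even though the parent is in a venv.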
Understanding bytecode arguments: 1 byte versus 2 bytes
I'm trying to understand the difference in disassemblies with 3.6+ versus older versions of CPython. It looks like the basic opcodes like LOAD_FAST are 3 bytes in pre-3.6 versions, but 2 bytes in 3.6+. I read online somewhere that there was a change to the argument size in 3.6: it became one byte when it used to be two. I wanted to verify that. For 3.6, if an opcode takes an argument, can I always assume that argument is just one byte? I can think of some situations where that doesn't sound right. For example, JUMP_ABSOLUTE would be a problem, although I have yet to see that opcode in the wild. Actually, I'd be worried about more involved jumps, because it sounds like with just a single-byte offset I'd sometimes have to make trampolines to jump to where I ultimately need to be. Again, I haven't really hit that, but I'm also using 2-byte arguments. What I have works, but it looks ... fairly simple for me to reduce the opcode size, so I wanted to understand some of the decisions that were made to go to a single-byte argument size in 3.6.
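For reference, the 3.6+ format ("wordcode") can be inspected with the standard dis module: every instruction occupies exactly two bytes — one opcode byte, one argument byte — and arguments wider than a byte are assembled from EXTENDED_ARG prefix instructions rather than wider argument fields, which is how large jump targets are encoded without trampolines. A quick check:

```python
import dis

def f(x):
    # A tiny function, just to get some bytecode to look at.
    return x + 1

code = f.__code__.co_code
print(len(code) % 2)  # 0 on 3.6+: instructions are uniformly 2 bytes

for instr in dis.get_instructions(f):
    # Offsets advance in steps of 2; arg is None for argument-less
    # opcodes, but the byte slot is still present in wordcode.
    print(instr.offset, instr.opname, instr.arg)
```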
[issue39201] Threading.timer leaks memory in 3.8.0/3.8.1
Adam added the comment: I filed a bug for this a few weeks ago, and then found an earlier ticket about the same issue: https://bugs.python.org/issue37788 My ticket: https://bugs.python.org/issue39074 The memory leak was from a change introduced about 6 months ago: https://github.com/python/cpython/commit/468e5fec8a2f534f1685d59da3ca4fad425c38dd -- nosy: +krypticus
[issue37788] fix for bpo-36402 (threading._shutdown() race condition) causes reference leak
Adam added the comment: I ran into this bug as well, and opened an issue for it (before I saw this issue): https://bugs.python.org/issue39074 Was there a conclusion on the best way to fix this? It seems like the previous __del__ implementation would correct the resource leakage by removing the _tstate_lock from _shutdown_locks. -- nosy: +krypticus
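For anyone poking at this, a rough way to inspect the structure in question (hedged: threading._shutdown_locks is a private CPython detail and may be absent on other versions; joining a thread removes its _tstate_lock from the set, so the leak discussed here involved threads that were never joined):

```python
import threading

def leftover_shutdown_locks(n=20):
    # Start and join n short-lived non-daemon threads, then report how
    # many entries remain in threading._shutdown_locks.  Since join()
    # removes each thread's _tstate_lock, this should normally come
    # back 0; unjoined threads were what accumulated on 3.8.0/3.8.1.
    threads = [threading.Thread(target=lambda: None) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(getattr(threading, "_shutdown_locks", ()))

print(leftover_shutdown_locks())
```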