[Python-Dev] Re: Security releases of CPython
The default Make flags differ from platform to platform (and compiler to compiler, IIRC) as well. Thanks for this overview of RHEL/Fedora Python build security flags.

(Containers are the easiest way to get per-Python-interpreter SELinux contexts, in order to limit the impact of exploiting a vulnerability in CPython that an application exposes to users. FWIW, FWIU, OpenShift is the only k8s platform that does per-container contexts; and only RHEL/Fedora have a container-selinux? https://src.fedoraproject.org/rpms/container-selinux/blob/rawhide/f/container-selinux.spec )

https://github.com/python/miss-islington is the backport bot.

https://gitweb.gentoo.org/repo/gentoo.git/tree/dev-lang/python/python-3.9.2.ebuild

These are the patches necessary for conda-forge: https://github.com/conda-forge/python-feedstock/blob/master/recipe/meta.yaml

These are the only patches necessary on Fedora: https://src.fedoraproject.org/rpms/python3.9/tree/rawhide

On 2/20/21, Victor Stinner wrote:
> On Thu, Feb 11, 2021 at 9:44 PM Michał Górny wrote:
>> I feel that vulnerability fixes do not make it to end users fast enough.
>
> I think that it's time to put that into perspective with past vulnerabilities.
>
> Ok, let me look at the timeline of the discussed vulnerability, ctypes CVE-2021-3177:
> https://python-security.readthedocs.io/vuln/ctypes-buffer-overflow-pycarg_repr.html
>
> 2021-01-16: Python issue bpo-42938 reported by Jordy Zomer
> ...
> 2021-01-18 (+2 days): commit c347cbe (branch 3.9)
> 2021-01-18 (+2 days): commit ece5dfd (branch 3.8)
> 2021-01-19 (+3 days): CVE-2021-3177 published
> ...
> 2021-02-19 (+34 days): Python 3.8.8 released
> 2021-02-19 (+34 days): Python 3.9.2 released
>
> Ok. What about vulnerability fixes released in past years?
> "HTTP header injection in urllib, urllib2, httplib and http.client modules"
> https://python-security.readthedocs.io/vuln/http-header-injection.html
> 2017-09-19 (+1030 days): Python 3.3.7 released
>
> "CGI directory traversal"
> https://python-security.readthedocs.io/vuln/cgi-directory-traversal-is_cgi.html
> 2011-05-09 (+1158 days): CVE-2011-1015 published
> 2013-04-07 (+1857 days): Python 3.2.4 released
> 2013-04-07 (+1857 days): Python 3.3.1 released
>
> "httplib unlimited read"
> https://python-security.readthedocs.io/vuln/httplib-unlimited-read.html
> 2011-06-11 (+652 days): Python 2.7.2 released
> 2011-06-11 (+652 days): Python 3.1.4 released
>
> "rgbimg and imageop overflows"
> https://python-security.readthedocs.io/vuln/rgbimg-imageop-overflows.html
> 2008-12-19 (+460 days): Python 2.5.3 released
>
> So the CVE-2021-3177 fix was delivered between 14x and 55x faster than the other listed fixes (I picked a few worst cases to put the numbers in perspective).
>
> Congrats to the core developers for fixing the vulnerability in only *3* days, and to the release managers for releasing *4* (!) Python versions (3.6.13, 3.7.10, 3.8.8, 3.9.2) in only 34 days!
>
> I would like to highlight that exploiting a directory traversal or HTTP header injection is really trivial. Once you find a pattern to traverse the filesystem / inject an HTTP header, the exploit is 100% reliable.
>
> On the other hand, there is no known exploit for ctypes CVE-2021-3177, and ctypes is rarely used. I read that Django's GIS uses ctypes and floats, but so far nobody has shown that PyCArg_repr() is called, and nobody has published an exploit.
>
> To write a CVE-2021-3177 exploit, you must create a 64-bit floating point number (only 8 bytes!) which, once it's formatted as decimal, becomes valid machine code, and this code should allow taking control of the machine. For example, PyCArg_repr(123.5) writes the "123.5\0" string into the stack memory,
> but I don't think that it's valid x86-64 machine code. It is also hard to write a reliable exploit by injecting machine code into the stack memory.
>
> ---
>
> Nowadays it's way more difficult than 10 years ago to write an exploit using a stack overflow; C compilers provide multiple hardening layers:
> - FORTIFY_SOURCE,
> - Control-flow Enforcement Technology (Intel CET),
> - Address Space Layout Randomization (ASLR),
> - stack protector,
> - Position Independent Executable (PIE),
> - etc.
>
> See https://wiki.debian.org/Hardening for examples of C flags and linker flags for hardening.
>
> Did anyone notice that Red Hat Enterprise Linux 8 (RHEL) is *not* affected by the ctypes CVE-2021-3177 vulnerability thanks to hardening?
>
> "Red Hat Enterprise Linux 8: python36:3.6/python36: Not affected"
> and
> "This flaw could have had a higher Impact, however our packages are compiled with FORTIFY_SOURCE, which provides runtime protection to some memory and string functions and prevents this flaw from actually overwriting the buffer and potentially executing code."
> => https://access.redhat.com/security/cve/cve-2021-3177
>
> I suggest you check how your operating system built your Python executable,
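Victor's suggestion can be followed from Python itself. A minimal sketch (not part of the original mail, and assuming a CPython built via the usual configure/make path, where these config vars are populated — on Windows they may be None):

```python
# Hedged sketch: inspect the compiler/linker flags your CPython binary
# was built with, to check for hardening options such as FORTIFY_SOURCE
# or the stack protector mentioned above.
import sysconfig

for var in ("CFLAGS", "LDFLAGS"):
    print(var, "=", sysconfig.get_config_var(var))

cflags = sysconfig.get_config_var("CFLAGS") or ""  # None on Windows builds
# On hardened builds (e.g. RHEL/Fedora) these should print True:
print("FORTIFY_SOURCE:", "_FORTIFY_SOURCE" in cflags)
print("stack protector:", "-fstack-protector" in cflags)
```

The exact flags vary by distribution; the point is only that the build configuration is recorded and queryable at runtime.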
[Python-Dev] PEP 654 -- Exception Groups and except* : request for feedback for SC submission
Hi all,

We would like to request feedback on PEP 654 -- Exception Groups and except*.

https://www.python.org/dev/peps/pep-0654/

It proposes language extensions that allow programs to raise and handle multiple unrelated exceptions simultaneously, motivated by the needs of asyncio and other concurrency libraries, but with other use cases as well.

* A new standard exception type, ExceptionGroup, to represent multiple exceptions with a shared traceback.
* Updates to the traceback printing code to display (possibly nested) ExceptionGroups.
* A new syntax, except*, for handling ExceptionGroups.

A reference implementation (unreviewed) can be found at: https://github.com/iritkatriel/cpython/pull/10

Thank you for your help.

Kind regards,
Irit, Yury & Guido

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/L5Q27DVKOKZCDNCAWRIQVOZ5DZCZHLRM/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Maintenance burden from unresolved process issues (was: Re: Re: Move support of legacy platforms/architectures outside Python)
This, and an earlier message about the burden on a release manager, confirms my suspicion that a large share of the "maintenance burden" comes from unresolved process issues. E.g. it looks like Victor had been unconvinced that unreliable tests are a serious issue and cost the team so much (including himself, since they cause people to ignore his buildbot issue reports right back) and had blocked the effort to disable them or whatnot (this is just my guess, but Terry's message suggests that he was against resolving it). I hope this convinces him and will let the team clear this issue at last!

Likewise, half of the bullet points in https://mail.python.org/archives/list/python-dev@python.org/message/LJ3E2UQBPDSFEH7ULNF3QVOYEQYRSAGI/ come from either people trying to bypass the process, or the release manager doing others' jobs that should've already been done had the process been followed.

On 22.02.2021 19:48, Terry Reedy wrote:
> On 2/22/2021 6:20 AM, Victor Stinner wrote:
>> To have an idea of the existing maintenance burden, look at emails sent to:
>> https://mail.python.org/archives/list/buildbot-sta...@python.org/
>> Every single email is basically a problem. There are around 110 emails over the last 30 years:
>
> 30 days, not years. The 20 visible messages all have the title f"Buildbot worker {worker} missing". All the messages on a day are sent at the same time. Replace with a daily "Buildbot workers currently missing"? The individual messages seem useless except possibly if sent to individual buildbot owners.
> ...
>> Multiple buildbots are "unstable": tests are failing randomly. Again, each failure means a new email. For example, test_asyncio likes to fail once every 10 runs (coarse average, I didn't check exactly).
>
> I strongly feel that the individual repeatedly failing tests (only 3 or 4, I think) should be disabled and the asyncio people notified. As I remember, you once insisted on keeping routinely failing tests.
>> By the way, these random failures are not only affecting buildbots, but also the CIs run on pull requests. It's *common* that these failures prevent a pull request from being merged and require manual action to merge the PR (usually, re-running all CIs). For backports, it means closing the backport and reopening it by re-adding a backport label to the master PR.
>
> This is why I really really want the repeatedly failing tests disabled.

-- Regards, Ivan

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/UYTKOQ67254DDV6DMFZMFZFWH2C3KXEG/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
Michał Górny wrote:
> On Mon, 2021-02-22 at 19:27 +0000, Jessica Clarke wrote:
>>>> Example: 16-bit m68k
>>>
>>> no, it's a 32-bit platform with extra alignment requirements.
>>
>> Actually, fewer. Most architectures have alignof(x) == sizeof(x) for all the primitive types, but m68k is more relaxed and caps alignof(x) at 2. This means that assert((p & (sizeof(long) - 1)) == 0) is too strict, and should instead just be assert((p & (alignof(long) - 1)) == 0), which is always correct, rather than relying on implementation-defined alignment requirements. In autoconf there's AC_CHECK_ALIGNOF just as there is AC_CHECK_SIZEOF, the result of which should be used for the alignment check instead. That's the portable way to do it (and would allow the removal of the #ifdef).
>
> I agree, except that -- as I mentioned elsewhere -- the #ifdef was added because the x86 optimization hack is actually slower on m68k. I suspect that if more benchmarking were done, it might turn out that the #ifdef should actually disable it on more platforms.

I think it's more complicated than that. The code in question didn't just have a bogus assert; it actually relied on ALIGNOF_SIZE_T == SIZEOF_SIZE_T in order to work, without much good reason other than being written in a poor style. I suspect that the slowdown was seen because the strictness of the `if` on m68k meant the optimised version wasn't used that often but was still sitting there using up space, and time to evaluate the branch, plus potentially the various consequences of additional register pressure, and that the performance hit goes away once the algorithm is fixed to be more general (in such a way that other architectures shouldn't see any performance hit, and possibly even a slight improvement). I've done this in https://github.com/python/cpython/pull/24624.

Plus, even if m68k is slightly slower, who cares? It still works; better to have working clean code than hacky slightly-faster code.
___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PGR4QL3NXQXMSFG6KRTIKBCDM4UN4EU5/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
Barry Scott wrote:
>> On 22 Feb 2021, at 12:40, Michał Górny mgo...@gentoo.org wrote:
>>>> I'm talking about 16-bit memory alignment which causes SIGBUS if it's not respected on m68k.
>>>
>>> I don't understand why you consider this to be a problem. After all, x86 has stronger (32-bit) alignment requirements, so m68k is actually less likely to break.
>>
>> On x86 you can make unaligned access to memory. Alignment is a nice-to-have for performance. But on m68k you MUST align or you get a SIGBUS.
>>
>> Barry

That is not the problem. Many architectures (SPARC, PA-RISC, IA-64, ...) do not natively support unaligned accesses. The problem is in fact that m68k's alignment requirements are *weaker* than x86's *and they are exposed to C*. On x86, the compiler still aligns ints to 4-byte boundaries, but on m68k they are only aligned to 2-byte boundaries. This means that assert(p % sizeof(int) == 0) does not work.

The code in question is #ifdef'ed out for three reasons:

1. The assert is overly strict. This is trivially fixed by changing SIZEOF_SIZE_T to ALIGNOF_SIZE_T.

2. The `if` uses (and code within the `if` relies on it using) SIZEOF_SIZE_T to check the alignment.

3. The code is potentially slower on m68k than the simple byte-by-byte loop, though I don't think anyone's going to complain about slight performance regressions on m68k if they come from cleaning up the code, and I imagine the supposed performance hit came from not fixing 2 properly (i.e. there was a bunch of extra code that wasn't able to be used that often due to overly strict requirements in the algorithm).
I have filed https://github.com/python/cpython/pull/24624 to fix all of these. The first, smaller commit is the only one strictly required for correctness on m68k (and any other architecture that chooses/has chosen to make the same ill-advised choices in its ABI), whilst the second, larger one makes minor changes to the algorithm (that should not affect performance on any supported architecture other than m68k), and to the many copies of it, in order to also cope with ALIGNOF_SIZE_T < SIZEOF_SIZE_T. This thus improves m68k support whilst removing m68k-specific hacks and making the code less reliant on implementation-defined behaviour, i.e. it is how portability patches are _meant_ to be.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/CDZZLFTOVKWTZDA4YS45IU2NRLNUJ64Q/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Sun, Feb 21, 2021 at 12:28 PM Gregory P. Smith wrote:
> On Sun, Feb 21, 2021 at 10:15 AM Christian Heimes wrote:
>> On 21/02/2021 13.47, glaub...@debian.org wrote:
>>> Rust doesn't keep any user from building Rust for Tier 2 or Tier 3 platforms. There is no separate configure guard. All platforms that Rust can build for are always enabled by default. No one in Rust keeps anyone from cross-compiling code for sparc64 or powerpcspe, for example.
>>>
>>> So if you want to copy Rust's mechanism, you should just leave it as is and not claim that users are being confused because "m68k" shows up in configure.ac.
>>
>> A --enable-unstable-platforms configure flag is my peace offering to meet you half way. You get a simple way to enable builds on untested platforms and we can clearly communicate that some OS and hardware platforms are not supported.
>
> I personally wouldn't want to maintain such a check in autoconf, but it'll be an isolated thing on its own that, if you or someone else creates it, will do its job and not bother the rest of us.

If we add a compile flag to explicitly make people realize they are running on an unsupported platform, then I think it should be a negation against an allowlist rather than a blocklist, to be more explicit about what we would take PRs for.

> I think just publishing our list of (1) supported, (2) best-effort non-release-blocker quasi-supported, and (3) explicitly unsupported in a policy doc is sufficient. But it's not like any of us are going to stop someone from codifying that in configure.ac to require a flag.

I like the idea of making PEP 11 list what platforms *are* supported in some way, with being off that list meaning you're not. I also like the idea of having a tier 1 that will block a release and a tier 2 where we will accept PRs but which will in no way block releases. I also think who ends up on either of those tiers should be an SC problem.
Based on how heated this thread has gotten, there's obviously some emotional connection for some folks when it comes to whether a platform is supported or not and how that is handled. In that sense, letting the SC take on the burden of saying "no" makes sense. That doesn't mean PEP 11 wouldn't still list out the *minimum* requirements to add a platform, but I don't think it should be an automatic thing simply because a machine was donated and a hand was raised, as e.g. #ifdefs have a cognitive cost.

So, my suggestion is to update PEP 11 to:

- List platforms that can block releases as "tier 1" supported platforms
- List platforms that are best-effort as "tier 2" and which will never hold up a release
- All other platforms will need to manage their patchset externally, and we will not accept PRs that are specifically for them
- Specify what the *minimum* requirements are to add support for a platform to either tier
- Have the SC manage what platforms end up in what tier (and we can publish guidelines like conditional tier 2 to prove support exists, what is required to graduate to tier 1, removal from tiers, etc.)

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/TKEG3OF4BIIVO56BQT37PJEVBLJZCUJL/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, 2021-02-22 at 19:27 +0000, Jessica Clarke wrote:
>>> Example: 16-bit m68k
>>
>> no, it's a 32-bit platform with extra alignment requirements.
>
> Actually, fewer. Most architectures have alignof(x) == sizeof(x) for all the primitive types, but m68k is more relaxed and caps alignof(x) at 2. This means that assert((p & (sizeof(long) - 1)) == 0) is too strict, and should instead just be assert((p & (alignof(long) - 1)) == 0), which is always correct, rather than relying on implementation-defined alignment requirements. In autoconf there's AC_CHECK_ALIGNOF just as there is AC_CHECK_SIZEOF, the result of which should be used for the alignment check instead. That's the portable way to do it (and would allow the removal of the #ifdef).

I agree, except that -- as I mentioned elsewhere -- the #ifdef was added because the x86 optimization hack is actually slower on m68k. I suspect that if more benchmarking were done, it might turn out that the #ifdef should actually disable it on more platforms.

-- Best regards, Michał Górny

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/AAM5PRW5KMEDWZPNCO4VHIJM5ZYV5QIH/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, 2021-02-22 at 19:54 +0000, Barry Scott wrote:
>> On 22 Feb 2021, at 12:40, Michał Górny wrote:
>>>> I'm talking about 16-bit memory alignment which causes SIGBUS if it's not respected on m68k.
>>>
>>> I don't understand why you consider this to be a problem. After all, x86 has stronger (32-bit) alignment requirements, so m68k is actually less likely to break.
>>
>> On x86 you can make unaligned access to memory. Alignment is a nice-to-have for performance.

Except that modern compilers can emit optimized instructions that rely on aligned memory (e.g. SSE2), so a badly written program can crash on x86 as well.

-- Best regards, Michał Górny

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/GCRBLCAK3OBP466QPNLD5KFH5ATBOBWP/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, 22 Feb 2021 19:50:43 +0000, Rob Boehne wrote:
> The other thing that crept into this thread was the mention of tests that intermittently fail. That's a huge problem because it suggests that applications will sometimes fail.
> I have usually seen these sorts of issues because of
> 1) Uninitialized memory being used (read)
> 2) Threading problems
> 3) Resources used (files, networking, daemons) but unavailable
> 4) Memory mis-management (buffer overrun that doesn't cause a crash)
>
> #3 is probably best fixed by testing for resources and skipping when unavailable

5) Poor quality POSIX support in the target platform.

The Python test suite is actually quite demanding in this regard (except on Windows).

Regards, Antoine.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/CPL5PR3MDBLT65OXFEWHPFJWOWF3BO3Y/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
>> Example: 16-bit m68k
>
> no, it's a 32-bit platform with extra alignment requirements.

Actually, fewer. Most architectures have alignof(x) == sizeof(x) for all the primitive types, but m68k is more relaxed and caps alignof(x) at 2. This means that assert((p & (sizeof(long) - 1)) == 0) is too strict, and should instead just be assert((p & (alignof(long) - 1)) == 0), which is always correct, rather than relying on implementation-defined alignment requirements. In autoconf there's AC_CHECK_ALIGNOF just as there is AC_CHECK_SIZEOF, the result of which should be used for the alignment check instead. That's the portable way to do it (and would allow the removal of the #ifdef).

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/VZU4F6OQZ2ADHHH7ZWL7DGEIRRBBFQYR/
Code of Conduct: http://python.org/psf/codeofconduct/
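The sizeof/alignof distinction above is easy to observe from Python, since ctypes exposes both; a hedged illustration (not part of the original mail):

```python
# Illustrates why sizeof is the wrong mask for alignment checks: the ABI
# may require less alignment than the type's size (m68k caps alignment
# at 2), so a portable check must use alignment(), not sizeof().
import ctypes

print(ctypes.sizeof(ctypes.c_long))     # e.g. 8 on 64-bit Linux
print(ctypes.alignment(ctypes.c_long))  # often equal, but may be smaller

x = ctypes.c_long(0)
addr = ctypes.addressof(x)
# Portable alignment check (what the corrected C assert expresses):
assert addr % ctypes.alignment(ctypes.c_long) == 0
```

On most platforms the two values coincide, which is exactly why code that conflates them appears to work until it meets an ABI like m68k's.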
[Python-Dev] Re: [Python-ideas] Inadequate error reporting during function call setup stage
Hello,

On Mon, 22 Feb 2021 19:47:04 +0000, Barry Scott wrote:
> > On 22 Feb 2021, at 10:15, Paul Sokolovsky wrote:
> >
> > It looks like:
> >
> > Traceback (most recent call last):
> >   File "pseudoc_tool.py", line 91, in <module>
> >   File ".../xforms.py", line 25, in print
> > TypeError: unexpected keyword argument 'noann'
> >
> > - that makes clear that it's the "print" function of the "xforms.py" module, line 25, which got an unexpected keyword argument.
>
> You are proposing to fake a stack frame that I have to know is not a stack frame but is in fact the location of the function in the exception?

No, I'm proposing to stop faking the lack of the last stack frame due to CPython's implementation details. See the original message for more info.

> I'm -1 on that as it's confusing.
>
> Having checked that it's Python code and not a C extension function, you could use the info in fn.__code__ to get the filename and line of where the function is defined and put that info into the exception.

Could use a crystal ball, even.

> Example of the info:
>
> >>> os.path.join.__code__
> <code object join at ..., file "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/posixpath.py", line 71>
>
> I use repr(fn.__code__) a lot when debugging complex code.
>
> Barry

-- Best regards, Paul mailto:pmis...@gmail.com

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/FB65TFWGZEBENGT3W2X7RBPY63RINU25/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
> On 22 Feb 2021, at 12:40, Michał Górny wrote:
>> I'm talking about 16-bit memory alignment which causes SIGBUS if it's not respected on m68k.
>
> I don't understand why you consider this to be a problem. After all, x86 has stronger (32-bit) alignment requirements, so m68k is actually less likely to break.

On x86 you can make unaligned access to memory. Alignment is a nice-to-have for performance. But on m68k you MUST align or you get a SIGBUS.

Barry

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/PNZUNGJFDVEICWYUCBEYHVTOFE4CW2LP/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: [SPAM] Re: Move support of legacy platforms/architectures outside Python
On 2/22/21, 1:39 PM, "Steve Dower" wrote:

> On 2/22/2021 5:18 PM, Matthias Klose wrote:
>> On 2/21/21 1:13 PM, Victor Stinner wrote:
>> Document what is supported, be inclusive about anything else. Don't make a distinction yet between legacy and upcoming new architectures.
>
> I agree with this, and I don't see any reason why we shouldn't just use the list of stable buildbot platforms as the "supported" list. That makes it really clear what the path is to push support onto upstream (join up and bring a buildbot with you), and also means that we've got a physically restricted set of machines to prove work before doing a release.
>
> Actively blocking anything at all seems unnecessary at the source/build level. That's for pre-built binaries and other conveniences.
>
> Cheers, Steve

___

+1 to this. I use a few unsupported platforms, not as a hobby, but in my work. I generally don't require ALL the things in Python to work on these platforms, so even if I were to contribute a buildbot for an obscure (but definitely not "hobby") platform, it's implied that I would also need to fix test failures in modules I don't use. I generally need to provide some reason to use my employer's time, so I can't really justify fixing test failures in code the company doesn't use.

I think these users are just asking that what currently works not be broken intentionally, and that follows the spirit of Autoconf: test whether something works and enable it when it does. There are also other ways to reduce the burden of maintaining a large number of platforms, such as abstracting away the OS services.

The other thing that crept into this thread was the mention of tests that intermittently fail. That's a huge problem because it suggests that applications will sometimes fail.
I have usually seen these sorts of issues because of
1) Uninitialized memory being used (read)
2) Threading problems
3) Resources used (files, networking, daemons) but unavailable
4) Memory mis-management (buffer overrun that doesn't cause a crash)

#3 is probably best fixed by testing for resources and skipping when unavailable. The others are problems in the code, and can be fixed with clang sanitizers, but without some routine running of them, those sorts of problems will reappear.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/6VCZYPSIB4QE3FL4L3D2WER3VU3LZGCO/
Code of Conduct: http://python.org/psf/codeofconduct/
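Point 3 is conventionally handled by probing for the resource and skipping rather than failing. A hedged unittest sketch (the helper and class names are illustrative, not from CPython's test suite):

```python
# Sketch of "test for resources and skip when unavailable" (point 3):
# probe for outbound networking before running network-dependent tests.
import socket
import unittest

def network_available(host="www.example.com", port=80, timeout=0.5):
    # Treat any OS-level failure (DNS, refused, unreachable, timeout)
    # as "unavailable" rather than letting dependent tests fail randomly.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

class DownloadTests(unittest.TestCase):
    def test_fetch(self):
        if not network_available():
            self.skipTest("network unavailable")
        # ... the real network-dependent assertions would go here
```

The skip shows up in the test report instead of an intermittent red failure, which keeps the signal from genuinely broken code clean.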
[Python-Dev] Re: [Python-ideas] Inadequate error reporting during function call setup stage
> On 22 Feb 2021, at 10:15, Paul Sokolovsky wrote:
>
> It looks like:
>
> Traceback (most recent call last):
>   File "pseudoc_tool.py", line 91, in <module>
>   File ".../xforms.py", line 25, in print
> TypeError: unexpected keyword argument 'noann'
>
> - that makes clear that it's the "print" function of the "xforms.py" module, line 25, which got an unexpected keyword argument.

You are proposing to fake a stack frame that I have to know is not a stack frame but is in fact the location of the function in the exception?

I'm -1 on that as it's confusing.

Having checked that it's Python code and not a C extension function, you could use the info in fn.__code__ to get the filename and line of where the function is defined and put that info into the exception.

Example of the info:

>>> os.path.join.__code__
<code object join at ..., file "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/posixpath.py", line 71>

I use repr(fn.__code__) a lot when debugging complex code.

Barry

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YJ3RNTHGZWJ2DG2T5HFGXKTKFYN3CGAN/
Code of Conduct: http://python.org/psf/codeofconduct/
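Barry's suggestion can be sketched as follows (hypothetical helper name; it works only for pure-Python callables, since C-implemented functions have no __code__):

```python
# Locate where a callable is defined via its code object, as suggested
# above; returns None for C-implemented functions such as len().
import posixpath

def describe(fn):
    code = getattr(fn, "__code__", None)
    if code is None:
        return None  # builtins / C extension functions
    return code.co_filename, code.co_firstlineno

print(describe(posixpath.join))  # e.g. ('.../posixpath.py', 71)
print(describe(len))             # None: len is implemented in C
```

An exception constructor could attach exactly this (filename, firstlineno) pair, which is the information Barry proposes putting into the exception rather than into a fake traceback frame.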
[Python-Dev] Re: Deprecate support for mingw - add to PEP 11
In case anyone is wondering what mingw-w64-python is referring to: https://packages.msys2.org/base/mingw-w64-python

The MSYS2 project [0] maintains a CPython variant that builds with gcc/clang+mingw-w64 on Windows. We lack the manpower to reduce the diff to upstream, though. The yearly rebuilds, the move to isolated builds and the move away from distutils keep us busy. But if anyone wants to help, you now know where to find us.

[0] https://www.msys2.org/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/6N3XVYCMKQGQF43ZIIZDA56P6QF436OI/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/22/2021 5:18 PM, Matthias Klose wrote:
> On 2/21/21 1:13 PM, Victor Stinner wrote:
> Document what is supported, be inclusive about anything else. Don't make a distinction yet between legacy and upcoming new architectures.

I agree with this, and I don't see any reason why we shouldn't just use the list of stable buildbot platforms as the "supported" list. That makes it really clear what the path is to push support onto upstream (join up and bring a buildbot with you), and also means that we've got a physically restricted set of machines to prove work before doing a release.

Actively blocking anything at all seems unnecessary at the source/build level. That's for pre-built binaries and other conveniences.

Cheers, Steve

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/BKZTBXDYFIEBMVELBOVQ5KGM2ZEXVT2Z/
Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/22/2021 6:58 AM, Victor Stinner wrote:
> On Mon, Feb 22, 2021 at 12:51 PM Ivan Pozdeev via Python-Dev wrote:
>> IIRC I suggested earlier that buildbots should be integrated into the PR workflow in order to make it the contributor's rather than a core dev's burden to fix any breakages that result from their changes.

The problem is that many 'breakages' do NOT result from the PR changes. For IDLE patches, EVERY CI failure other than in test_idle is a bogus failure. My productivity is already degraded by the current blind automated rules. My point here is that machines literally have no judgment, and that automation is not the panacea that people think. Writing good rules is often hard to impossible, and blind application of inadequate rules leads to bad results.

> Some buildbot workers take 2 to 3 hours per build. Also, it would not scale. Buildbots are not fast enough to handle the high number of PRs and PR updates.
>
> When there is a clear relationship between a buildbot failure and a merged PR, a comment is added automatically showing the failed test and an explanation of how to investigate the issue.

This is false and an example of the above comment. The notice is sent whenever any buildbot test fails after a merge, regardless of whether there is any relationship or not. There often (usually?) is not.

> It's been there for 2 years thanks to Pablo, and so far, I have rarely seen developers paying attention to these failures. They just ignore it.

Every one of those big, bold, ugly notices I have received has been a false positive. I believe they are usually due to one of the flaky tests (asyncio or otherwise) that should have been disabled but have not been. Of course people ignore routine false positives. If there is an expert for the module whose test file failed, that person should be the target of the notice. If a PR or merge breaks test_idle, I would like to know.
Actually, because I am conscientious, and because there was one instance years ago where test_idle passed CI and failed on one buildbot, and because the notice lists which test file failed (in among the noise), I do check that line before deleting the notice. I'm not trying to blame anyone. Contributing to Python requires a lot of free time. I'm only trying to explain at length what the "maintenance burden" means in practice, since some people are pretending that supporting a platform is free. Blind automated rules are also not free. -- Terry Jan Reedy ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/ICFHS2IUYAYWK6WLFU6XLVXFNDYQCPWC/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/21/21 1:13 PM, Victor Stinner wrote: > In short, I propose to move maintenance of the legacy platforms/archs > outside Python: people are free to continue supporting them as patches. > Concrete example: Christian Heimes proposed to drop support for 31-bit > s390 Linux: > https://bugs.python.org/issue43179 I'm not attached to s390. But I doubt that the proposed patch lowers any "maintenance burden". https://github.com/python/cpython/pull/24534.diff touches a few files - configure.ac: PLATFORM_TRIPLET is undefined with your patch, changing the extension suffix. I don't think that maintaining such a delta outside the Python tree would be good practice. I am committed to maintaining these definitions, and would like to ask not to remove *any* of these. These definitions come from https://wiki.debian.org/Multiarch/Tuples. - Modules/_ctypes/libffi_osx/ffi.c: Removing a macro usage in a file specific to MacOS X. Removal doesn't help; it might instead create a patch conflict if the MacOS X maintainers should ever decide to pull in a new libffi upstream version. - Lib/test/test_sysconfig.py: Doesn't simplify the code, just makes it a bit more unfriendly. I became curious about an s390 build, and checked it with a GCC multilib build as available on at least Debian, OpenSuse, RHEL and Ubuntu. Builds fine with CC="gcc -m31" CXX="g++ -m31" ./configure && make && make test. No test failures, although I didn't build all extensions using external libs. Funny thing: your proposed patch doesn't make any difference in the test results, so I'm not sure why you spent any volunteer time on that patch. With the threat^Wannouncement in issue 43179 to do similar removals for alpha, hppa and m68k, you'll likely step on more toes, and again not remove any burden. > The lack of a clear definition of how a platform is supported or not > confuses users who consider that their favorite platform/arch is > supported, whereas core developers don't want to support it since it > would be too much work. 
If a platform stops working, fine. Actively breaking a platform in some form: not OK from my point of view. Python is still ubiquitous on almost all platforms (Linux and non-Linux ones), and can be built with mostly generic dependencies. Maybe libffi is an exception; however, even its maintainers didn't remove s390 support (https://sourceware.org/libffi/). And I was very happy to build Python on a proprietary *nix platform five years ago, and to end up with a working Python. > In fact, PEP 11 has clear and explicit rules: > https://www.python.org/dev/peps/pep-0011/#supporting-platforms PEP 11 is not fit for that. You're not trying to remove support for the Linux platform, you're addressing a specific architecture. Maybe PEP 11 should be updated. As seen with s390, there's not much architecture-specific code. If I see issues on the Linux platform for different architectures, these usually are: (1) 32-bit vs. 64-bit (2) big endian vs. little endian (3) architecture-specific alignment requirements, sometimes resulting in degraded performance on unaligned accesses, sometimes in bus errors when running 32-bit code on 64-bit kernels. Looking at https://pythondev.readthedocs.io/platforms.html (yes, that gets the platform/architecture distinction right), - (1) and (2) are covered as "well supported" platforms, although having just x86* might lead to some x86-isms. - (1), (2) and (3) are covered as "best effort support" with aarch64 (64/LE), powerpc64le (64/LE), s390x (64/BE), and arm-linux-gnueabi* as 32/LE having bus errors on unaligned accesses with 64-bit kernels. Unless CPython stops supporting the platforms above, I see little value in removing that tiny amount of architecture-specific code. Document what is supported, be inclusive about anything else. Don't make a distinction yet between legacy and upcoming new architectures. Apparently the cryptography project made the decision to rely on a build tool which is not ubiquitously available anymore. 
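The first two architecture axes listed above, word size and endianness, can be checked from any running interpreter. A minimal sketch using only the standard library (alignment quirks, axis 3, are only visible at the C level):

```python
import platform
import struct
import sys

# (1) 32-bit vs. 64-bit: the size of a C pointer ("P") in this build
word_bits = struct.calcsize("P") * 8

# (2) big endian vs. little endian: the host byte order
endianness = sys.byteorder  # 'little' or 'big'

print(f"{platform.machine()}: {word_bits}-bit, {endianness}-endian")
```

On x86-64 this reports a 64-bit little-endian build; on s390x it would report 64-bit big-endian, matching the table above.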
If CPython sees the need for such restrictions in the future, I'd like to propose using the same deprecation process as for modules/functions, e.g. if a new non-ubiquitous tool X is needed to build/run, announce it for 3.10, and only make use of it in 3.11. Matthias > Example: 16-bit m68k No, it's a 32-bit platform with extra alignment requirements. ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/YWRUGIA7XGRP6NABB3JEV552OSL6O52G/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/22/2021 6:20 AM, Victor Stinner wrote: To have an idea of the existing maintenance burden, look at emails sent to: https://mail.python.org/archives/list/buildbot-sta...@python.org/ Every single email is basically a problem. There are around 110 emails over the last 30 years: 30 days, not years. The 20 visible messages all have the title f"Buildbot worker {worker} missing". All the messages on a day are sent at the same time. Replace with a daily "Buildbot workers currently missing"? The individual messages seem useless except possibly if sent to individual buildbot owners. ... Multiple buildbots are "unstable": tests are failing randomly. Again, each failure means a new email. For example, test_asyncio likes to fail once every 10 runs (coarse average, I didn't check exactly). I strongly feel that the individual repeatedly failing tests (only 3 or 4 I think) should be disabled and the asyncio people notified. As I remember, you once insisted on keeping routinely failing tests. By the way, these random failures are not only affecting buildbots, but also CIs run on pull requests. It's *common* that these failures prevent merging a pull request and require manual actions to be able to merge the PR (usually, re-run all CIs). For backports, it means closing the backport and reopening by re-adding a backport label to the master PR. This is why I really really want the repeatedly failing tests disabled. -- Terry Jan Reedy ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/QGKV7DOVE7XXMGJRAZZNWK7NDPZS4NKJ/ Code of Conduct: http://python.org/psf/codeofconduct/
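Disabling a repeatedly failing test, as requested above, has a standard mechanism in the test framework itself. A hedged sketch (the test name and skip reason are invented for illustration, not taken from the actual test suite):

```python
import unittest

class FlakyExample(unittest.TestCase):
    # Hypothetical flaky test: skipping it stops the stream of false
    # positives while a tracking issue holds the real fix.
    @unittest.skip("flaky on slow buildbot workers; see tracking issue")
    def test_sometimes_times_out(self):
        self.fail("fails intermittently")

# Running the case shows the test is counted but reported as skipped,
# not failed, so no failure notice would be triggered.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FlakyExample)
result = unittest.TestResult()
suite.run(result)
print(f"run={result.testsRun}, skipped={len(result.skipped)}, "
      f"failures={len(result.failures)}")
```

The skip reason survives in the test report, so the disabled test stays visible instead of silently disappearing.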
[Python-Dev] Re: Inadequate error reporting during function call setup stage
On 21/02/2021 23:06, Terry Reedy wrote: On 2/21/2021 12:04 PM, Paul Sokolovsky wrote: Traceback (most recent call last): File "pseudoc_tool.py", line 91, in first_class_function_value(func, **pass_params) TypeError: print() got an unexpected keyword argument 'noann' This is not typical behavior in current Python (3.8+). The way I understand it, it's not about print(), it's about disambiguating multiple functions with the same name. Example: PS > type .\ambiguous_names.py import random def do_stuff(): pass f = do_stuff def do_stuff(a, b): pass g = do_stuff random.choice([f, g])(42) PS > py .\ambiguous_names.py Traceback (most recent call last): File "...\ambiguous_names.py", line 13, in random.choice([f, g])(42) TypeError: do_stuff() missing 1 required positional argument: 'b' The traceback gives no clue which of the two do_stuff() functions caused the error; you have to check both implementations. If that is a common problem one might consider including module name and co_firstlineno in the message, or at least adding the relevant do_stuff() function to the exception's args. ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/E7OUHLWXP3NHEM3UZOBWP5AQRSFL5RDO/ Code of Conduct: http://python.org/psf/codeofconduct/
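The disambiguation suggested above is already recoverable from each function's code object; a minimal sketch of the information a richer error message could carry:

```python
def do_stuff():
    pass
f = do_stuff

def do_stuff(a, b):  # rebinding: the name now shadows the first function
    pass
g = do_stuff

# Both functions share co_name, but co_filename and co_firstlineno
# pinpoint which definition a traceback actually refers to.
for func in (f, g):
    code = func.__code__
    print(f"{code.co_name}() defined at "
          f"{code.co_filename}:{code.co_firstlineno}")
```

An error formatter with access to the callable could append exactly this "defined at file:line" suffix to the `TypeError` message.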
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/22/21 12:30 PM, Victor Stinner wrote: > On Mon, Feb 22, 2021 at 8:19 AM wrote: >> There are zero technical reasons for what you are planning here. > > Multiple core developers explained how it's a maintenance burden. It > has been explained in multiple different ways. Well, that doesn't mean these statements are correct. Please don't assume that the people you are talking to are inexperienced developers; we aren't. Downstream distribution maintainers certainly have enough experience with project maintenance to be able to assess whether your claims are valid or not. >> You are inflating a few lines of autoconf into a "platform support", so you >> have a reason to justify adding multiple lines of extra autoconf codes to >> make >> life for downstream distributions harder. > > "Making life harder" sounds to me like oh, maybe supporting one > additional platform is not free and comes with a cost. This cost is > something called the "maintenance burden". Please explain to me how guarding some platforms with *additional* lines of autoconf is helping to reduce maintenance burden for the upstream project. > My question is if Python wants to pay this cost, or if we want > to transfer the maintenance burden to people who actually care about > these legacy platforms and architectures. > > Your position is: Python must pay this price. My position is: Python should > not. No, my position is that such changes should have valid technical reasons, which is simply not the case here. You're not helping your point if you are basing your arguments on incorrect technical assumptions. > Honestly, if it's just a few lines, it will be trivial for you to > maintain a downstream patch and I'm not sure why we even need this > conversation. If it's more than a few lines, well, again, we come back > to the problem of the real maintenance burden. This argument goes both ways. The code we are talking about here is just a few lines of autoconf which are hardly touched during normal development work. 
And the architecture-mapping you have in [1] is probably not even needed (CC @jrtc27). >> The thing is you made assumptions about how downstream distributions use >> Python without doing some research first ("16-bit m68k-linux"). > > I'm talking about 16-bit memory alignment which causes SIGBUS if it's > not respected on m68k. For example, unicodeobject.c requires special > code just for this arch: > > /* > * Issue #17237: m68k is a bit different from most architectures in > * that objects do not use "natural alignment" - for example, int and > * long are only aligned at 2-byte boundaries. Therefore the assert() > * won't work; also, tests have shown that skipping the "optimised > * version" will even speed up m68k. > */ > #if !defined(__m68k__) > (...) > > Such issue is hard to guess when you write code and usually only spot > it while actually running the code on such architecture. This is the only such place in the code where there is an extra section for m68k that I could find. And the bug was fixed by Andreas Schwab [2], so another downstream maintainer which was my point earlier in the discussion. We downstreams care about the platform support, hence we keep it working. Thanks, Adrian > [1] > https://github.com/python/cpython/blob/63298930fb531ba2bb4f23bc3b915dbf1e17e9e1/configure.ac#L724 > [2] > https://github.com/python/cpython/commit/8b0e98426dd0e1fde93715256413bc707759db6f -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/4OALXE4CUIFA4JGH3Y2BCLQ7WI4LR6U6/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/21/21 7:12 PM, Christian Heimes wrote: > On 21/02/2021 13.47, glaub...@debian.org wrote: >> Rust doesn't keep any user from building Rust for Tier 2 or Tier 3 >> platforms. There is no separate configure guard. All platforms that Rust can >> build for, are always enabled by default. No one in Rust keeps anyone from >> cross-compiling code for sparc64 or powerpcspe, for example. >> >> So if you want to copy Rust's mechanism, you should just leave it as is and >> not claim that users are being confused because "m68k" shows up in >> configure.ac. > > A --enable-unstable-platforms configure flag is my peace offer to meet > you half way. Making a "peace offer" is the confession of having started a war. I don't know why you are on this crusade, but I doubt it is the lonely "confused user" cited in https://bugs.python.org/issue43179. Others pointed out in this thread that it's not the first time of such unfriendly behavior. Even others might also see it as some kind of harassment. Matthias ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/6NIXUMZVGKCTBQCS4QFELSIVXYPL5JL2/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/21/21 9:24 PM, Gregory P. Smith wrote: > On Sun, Feb 21, 2021 at 10:15 AM Christian Heimes > wrote: > >> On 21/02/2021 13.47, glaub...@debian.org wrote: >>> Rust doesn't keep any user from building Rust for Tier 2 or Tier 3 >> platforms. There is no separate configure guard. All platforms that Rust >> can build for, are always enabled by default. No one in Rust keeps anyone >> from cross-compiling code for sparc64 or powerpcspe, for example. >>> >>> So if you want to copy Rust's mechanism, you should just leave it as is >> and not claim that users are being confused because "m68k" shows up in >> configure.ac. >> >> A --enable-unstable-platforms configure flag is my peace offer to meet >> you half way. You get a simple way to enable builds on untested >> platforms and we can clearly communicate that some OS and hardware >> platforms are not supported. >> > > I personally wouldn't want to maintain such a check in autoconf, but it'll > be an isolated thing on its own, that if you or someone else creates, will > do its job and not bother the rest of us. > > I think just publishing our list of (1) supported, (2) best-effort > non-release-blocker quasi-supported, and (3) explicitly unsupported in a > policy doc is sufficient. But it's not like any of us are going to stop > someone from codifying that in configure.ac to require a flag. Agreed with (1) and (2). I don't like a negative list, as it will be incomplete at any time for both *-linux and *-non{linux,win,mac}. Looking at another project: GCC used https://gcc.gnu.org/buildstat.html to collect information about successful builds on various platforms and architectures. But as you can see, that page hasn't been updated recently. 
GCC also doesn't keep an explicit list of the lesser supported platforms: https://gcc.gnu.org/gcc-11/criteria.html Matthias ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/3BMMNW4NERXTORS4SZA7XWHAYEKUQFFU/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, 2021-02-22 at 12:30 +0100, Victor Stinner wrote: > On Mon, Feb 22, 2021 at 8:19 AM wrote: > > The thing is you made assumptions about how downstream distributions use > > Python without doing some research first ("16-bit m68k-linux"). > > I'm talking about 16-bit memory alignment which causes SIGBUS if it's > not respected on m68k. > I don't understand why you consider this to be a problem. After all, x86 has stronger (32-bit) alignment requirements, so m68k is actually less likely to break. > For example, unicodeobject.c requires special > code just for this arch: > > /* > * Issue #17237: m68k is a bit different from most architectures in > * that objects do not use "natural alignment" - for example, int and > * long are only aligned at 2-byte boundaries. Therefore the assert() > * won't work; also, tests have shown that skipping the "optimised > * version" will even speed up m68k. > */ > #if !defined(__m68k__) > (...) Unless I'm reading something wrong, the code is disabled not because m68k is broken (as your comment seems to imply) but because the assert is wrong and because the code turned out to be slower. That said, I wonder if this 'optimized' path has actually been benchmarked on other supported platforms. -- Best regards, Michał Górny ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/I7IXAQPMWDV3ZZE7OAFA5KQ35YW4IMSR/ Code of Conduct: http://python.org/psf/codeofconduct/
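The "natural alignment" assumption being debated here can be inspected without writing C: ctypes exposes each type's ABI alignment, which is exactly what the quoted unicodeobject.c comment says differs on m68k. A small sketch:

```python
import ctypes

# "Natural alignment" means alignment == size. The portable approach
# is to query the C ABI instead of assuming it:
for name, ctype in [("short", ctypes.c_short),
                    ("int", ctypes.c_int),
                    ("long", ctypes.c_long),
                    ("double", ctypes.c_double)]:
    natural = ctypes.alignment(ctype) == ctypes.sizeof(ctype)
    print(f"{name}: size={ctypes.sizeof(ctype)} "
          f"align={ctypes.alignment(ctype)} natural={natural}")
```

On x86-64 every line reports natural alignment; on m68k-linux, per the quoted comment, int and long are only 2-byte aligned, which is why an assert of natural alignment in the optimised C path cannot hold there.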
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, Feb 22, 2021 at 12:51 PM Ivan Pozdeev via Python-Dev wrote: > IIRC I suggested earlier that buildbots should be integrated into the PR > workflow in order to make it the contributor's rather than a core > dev's burden to fix any breakages that result from their changes. Some buildbot workers take 2 to 3 hours per build. Also, it would not scale. Buildbots are not fast enough to handle the high number of PRs and PR updates. When there is a clear relationship between a buildbot failure and a merged PR, a comment is added automatically showing the failed test and an explanation of how to investigate the issue. It's been there for 2 years thanks to Pablo, and so far, I have rarely seen developers paying attention to these failures. They just ignore it. I'm not trying to blame anyone. Contributing to Python requires a lot of free time. I'm only trying to explain at length what the "maintenance burden" means in practice, since some people are pretending that supporting a platform is free. Victor -- Night gathers, and now my watch begins. It shall not end until my death. ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/KIVBXMBFUF73PKNDOBGRMWJORS2DNXSM/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
IIRC I suggested earlier that buildbots should be integrated into the PR workflow in order to make it the contributor's rather than a core dev's burden to fix any breakages that result from their changes. On 22.02.2021 14:20, Victor Stinner wrote: On Sun, Feb 21, 2021 at 8:57 PM Michał Górny wrote: The checker serves two purposes: 1) It gives users an opportunity to provide full PEP 11 support (buildbot, engineering time) for a platform. Does that mean that if someone offers to run the buildbot for a minor platform and do the necessary maintenance to keep it working, they will be able to stay? How much maintenance is actually expected, i.e. is it sufficient to maintain CPython in a 'good enough' working state to resolve major bugs blocking real usage on these platforms? Maintaining a buildbot doesn't mean looking at it every 6 months. It means getting emails multiple times per month about real bugs which must be fixed. My main annoyance is that every single buildbot failure sends me an email, and I'm overwhelmed by emails (I'm not only getting emails from buildbots ;-)). Python already has a long list of buildbot workers (between 75 and 100, I'm not sure of the exact number) and they require a lot of attention. Over the last 5 years, basically only Pablo Galindo and I have paid attention to them. I fear that if more buildbots are added, Pablo and I will be the only ones to look at them. FYI if nobody looks at buildbots, they are basically useless. They only waste resources. To have an idea of the existing maintenance burden, look at emails sent to: https://mail.python.org/archives/list/buildbot-sta...@python.org/ Every single email is basically a problem. There are around 110 emails over the last 30 years: 3.6 emails/day on average. When a bug is identified, it requires an investigation which takes between 5 minutes and 6 months depending on the bug. I would say 2 hours on average. Sometimes, if the investigation is too long, we simply revert the change. 
The buildbot configuration also requires maintenance. For example, 14 commits have been pushed since January 1st: https://github.com/python/buildmaster-config/commits/master Multiple buildbots are "unstable": tests are failing randomly. Again, each failure means a new email. For example, test_asyncio likes to fail once every 10 runs (coarse average, I didn't check exactly). multiprocessing tests, tests using the network like imaplib or nntplib, and some other tests fail randomly. Some tests fail just because the buildbot was slower on one particular run. People have many ideas to automate bug triage from emails, but so far, nobody has come up with a concrete working solution, and so emails are still read manually one by one. Also, almost nobody is trying to fix tests which are failing randomly. For example, I called multiple times for help to fix test_asyncio; so far it still fails randomly every day: * 2020: https://mail.python.org/archives/list/python-dev@python.org/message/Y7I5ADXAQEGK6DOFAPVDTKMBT6NUFNQ4/ * 2019: https://mail.python.org/archives/list/python-dev@python.org/message/R7X6NKGEOKWD3PBWIL2LPZWZ6MMRANN5/ By the way, these random failures are not only affecting buildbots, but also CIs run on pull requests. It's *common* that these failures prevent merging a pull request and require manual actions to be able to merge the PR (usually, re-run all CIs). I'm not talking about exotic platforms with very slow hardware, but platforms like Linux/x86-64 with "fast" hardware. I expect more random errors on exotic platforms. For example, I reported a crash on AIX one year ago, and nobody has fixed it so far. I pushed a few fixes for that crash, but it's not enough to fully fix it: https://bugs.python.org/issue40068 I pushed AIX fixes only because I was annoyed by getting buildbot emails about AIX failures. Sometimes, I just turn off emails from AIX. Since there is no proactive work on fixing AIX issues, I would even prefer to *remove* the AIX buildbots. 
Victor -- Regards, Ivan ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IJOVAFKZZJP4XRR5C2BC36DSM347RFRF/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Deprecate support for mingw - add to PEP 11
pybind11 is a famous C++ library for writing Python extension modules. Yes, the Python C API is usable in C++ thanks to extern "C" { ... } being used in headers. Victor On Sun, Feb 21, 2021 at 6:59 PM Dan Stromberg wrote: > > > It looks like CPython remains 100% C, so clang becomes more attractive: > https://stackoverflow.com/questions/6329688/llvm-and-visual-studio-obj-binary-incompatibility > > Then again, do we allow C++ extension modules? That might make C++ more > relevant, even if CPython itself is purely C. > > > On Sat, Feb 20, 2021 at 9:08 PM Dan Stromberg wrote: >> >> mingw-w64 might be a small change. >> >> But while one is at it, it might make sense to evaluate: >> https://clang.llvm.org/docs/MSVCCompatibility.html >> Apparently clang on Windows is working on calling convention compatibility >> with Visual Studio. >> >> >> On Sat, Feb 20, 2021 at 8:37 PM wrote: >>> >>> I think perhaps we should admit that this build system is no longer >>> supported. From everything I can tell, the mingw project is no longer >>> maintained. The project's site, mingw.org, is no longer live; the project >>> on sourceforge, although still downloaded daily, had its last commit almost >>> 3 years ago - a commit which changed the official project URI to a new link >>> that now is also dead. >>> Looking over BPO there are a little over 50 bugs open against mingw, but >>> only 7 that have any meaningful activity within the last three years. Three >>> of those issues explicitly mention mingw-w64, which is an active fork of the >>> original mingw project (active homepage, commits almost daily, new release >>> within the last 6 months), and I wonder if this is the project the other 4 >>> issues meant by "mingw"? >>> Ideally any features and flags in the code base for mingw would be checked >>> to already be working with mingw-w64 or else modified to work, but this >>> would require a sponsor for this platform, which appears to be missing. 
>>> Further, there is no buildbot for mingw, which should be a requirement to >>> be a fully supported platform, as per this PEP. This potential work appears >>> non-trivial with a cursory look at the mingw-w64-python pacman project, >>> which contains close to 100 patch files. I am proposing instead that mingw >>> be deprecated and, if a sponsor comes along, mingw-w64 can become >>> re-supported, or newly supported depending on your point of view, as allowed >>> by the PEP. >>> ___ >>> Python-Dev mailing list -- python-dev@python.org >>> To unsubscribe send an email to python-dev-le...@python.org >>> https://mail.python.org/mailman3/lists/python-dev.python.org/ >>> Message archived at >>> https://mail.python.org/archives/list/python-dev@python.org/message/XIWF3OYL7OQRBVRBBQCBKPPJH5OKVVRC/ >>> Code of Conduct: http://python.org/psf/codeofconduct/ > > ___ > Python-Dev mailing list -- python-dev@python.org > To unsubscribe send an email to python-dev-le...@python.org > https://mail.python.org/mailman3/lists/python-dev.python.org/ > Message archived at > https://mail.python.org/archives/list/python-dev@python.org/message/HEK67QOUQ4RD42HLBDTR3CJJNEMB3HJF/ > Code of Conduct: http://python.org/psf/codeofconduct/ -- Night gathers, and now my watch begins. It shall not end until my death. ___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/Q7D6ENQNDANZGZQ5AORKGFH6EBEW5AI6/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Mon, Feb 22, 2021 at 8:19 AM wrote: > There are zero technical reasons for what you are planning here. Multiple core developers explained how it's a maintenance burden. It has been explained in multiple different ways. > You are inflating a few lines of autoconf into a "platform support", so you > have a reason to justify adding multiple lines of extra autoconf code to > make life for downstream distributions harder. "Making life harder" sounds to me like: oh, maybe supporting one additional platform is not free and comes with a cost. This cost is something called the "maintenance burden". My question is if Python wants to pay this cost, or if we want to transfer the maintenance burden to people who actually care about these legacy platforms and architectures. Your position is: Python must pay this price. My position is: Python should not. Honestly, if it's just a few lines, it will be trivial for you to maintain a downstream patch and I'm not sure why we even need this conversation. If it's more than a few lines, well, again, we come back to the problem of the real maintenance burden. > The thing is you made assumptions about how downstream distributions use > Python without doing some research first ("16-bit m68k-linux"). I'm talking about 16-bit memory alignment which causes SIGBUS if it's not respected on m68k. For example, unicodeobject.c requires special code just for this arch: /* * Issue #17237: m68k is a bit different from most architectures in * that objects do not use "natural alignment" - for example, int and * long are only aligned at 2-byte boundaries. Therefore the assert() * won't work; also, tests have shown that skipping the "optimised * version" will even speed up m68k. */ #if !defined(__m68k__) (...) Such an issue is hard to guess when you write code; you usually only spot it while actually running the code on such an architecture. Victor -- Night gathers, and now my watch begins. It shall not end until my death. 
___ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-le...@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/UVJX44CB456RI4SWKHU74LTDD72MAAP3/ Code of Conduct: http://python.org/psf/codeofconduct/
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Sun, Feb 21, 2021 at 8:57 PM Michał Górny wrote: > > The checker serves two purposes: > > > > 1) It gives users an opportunity to provide full PEP 11 support > > (buildbot, engineering time) for a platform. > > Does that mean that if someone offers to run the buildbot for a minor > platform and do the necessary maintenance to keep it working, they will > be able to stay? How much maintenance is actually expected, i.e. is it > sufficient to maintain CPython in a 'good enough' working state to > resolve major bugs blocking real usage on these platforms? Maintaining a buildbot doesn't mean looking at it every 6 months. It means getting emails multiple times per month about real bugs which must be fixed. My main annoyance is that every single buildbot failure sends me an email, and I'm overwhelmed by emails (I'm not only getting emails from buildbots ;-)). Python already has a long list of buildbot workers (between 75 and 100, I'm not sure of the exact number) and they require a lot of attention. Over the last 5 years, basically only Pablo Galindo and I have paid attention to them. I fear that if more buildbots are added, Pablo and I will be the only ones to look at them. FYI if nobody looks at buildbots, they are basically useless. They only waste resources. To have an idea of the existing maintenance burden, look at emails sent to: https://mail.python.org/archives/list/buildbot-sta...@python.org/ Every single email is basically a problem. There are around 110 emails over the last 30 years: 3.6 emails/day on average. When a bug is identified, it requires an investigation which takes between 5 minutes and 6 months depending on the bug. I would say 2 hours on average. Sometimes, if the investigation is too long, we simply revert the change. The buildbot configuration also requires maintenance. 
For example, 14 commits have been pushed since January 1st: https://github.com/python/buildmaster-config/commits/master

Multiple buildbots are "unstable": tests are failing randomly. Again, each failure means a new email. For example, test_asyncio likes to fail once every 10 runs (a coarse average, I didn't check exactly). multiprocessing tests, tests using the network like imaplib or nntplib, and some other tests fail randomly. Some tests fail just because, that one time, the buildbot happened to be slower.

People have many ideas for automating bug triage from these emails, but so far nobody has come up with a concrete working solution, so the emails are still read manually one by one. Also, almost nobody is trying to fix the tests which fail randomly. For example, I have called for help multiple times to fix test_asyncio; so far it still fails randomly every day:

* 2020: https://mail.python.org/archives/list/python-dev@python.org/message/Y7I5ADXAQEGK6DOFAPVDTKMBT6NUFNQ4/
* 2019: https://mail.python.org/archives/list/python-dev@python.org/message/R7X6NKGEOKWD3PBWIL2LPZWZ6MMRANN5/

By the way, these random failures are not only affecting buildbots, but also the CIs run on pull requests. It's *common* that these failures prevent merging a pull request and require manual actions to be able to merge the PR (usually, re-running all CIs). I'm not talking about exotic platforms with very slow hardware, but platforms like Linux/x86-64 with "fast" hardware. I expect more random errors on exotic platforms.

For example, I reported a crash on AIX one year ago, and nobody has fixed it so far. I pushed a few fixes for that crash, but it's not enough to fully fix it: https://bugs.python.org/issue40068

I pushed AIX fixes only because I was annoyed by getting buildbot emails about AIX failures. Sometimes, I just turn off emails from AIX. Since there is no proactive work on fixing AIX issues, I would even prefer to *remove* the AIX buildbots.

Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
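The usual stopgap for these randomly failing tests is simply to re-run them until they pass. A minimal sketch of that pattern (the helper and the flaky test below are hypothetical illustrations, not CPython or buildbot code):

```python
import random

def run_with_retries(test, attempts=3):
    """Re-run a flaky test up to `attempts` times; pass if any run passes.

    Mirrors the manual "re-run all CIs" workaround described above.
    """
    failures = []
    for _ in range(attempts):
        try:
            test()
            return True, failures
        except AssertionError as exc:
            failures.append(str(exc))
    return False, failures

# A deterministic stand-in for a test that fails roughly half the time.
_rng = random.Random(0)

def flaky_test():
    assert _rng.random() < 0.5, "timed out waiting for event"

ok, failures = run_with_retries(flaky_test)
```

Re-running of course only hides the flakiness; the failure emails keep coming until someone actually fixes the underlying race.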
[Python-Dev] Re: [Python-ideas] Re: Inadequate error reporting during function call setup stage
Hello,

On Mon, 22 Feb 2021 10:44:19 +0100 Peter Otten <__pete...@web.de> wrote:

> On 21/02/2021 23:06, Terry Reedy wrote:
> > On 2/21/2021 12:04 PM, Paul Sokolovsky wrote:
> >
> >> Traceback (most recent call last):
> >> File "pseudoc_tool.py", line 91, in 
> >> first_class_function_value(func, **pass_params)
> >> TypeError: print() got an unexpected keyword argument 'noann'
> >
> > This is not typical behavior in current Python (3.8+).
>
> The way I understand it, it's not about print(), it's about
> disambiguating multiple functions with the same name.
> Example:
>
> PS > type .\ambiguous_names.py
> import random
>
> def do_stuff():
>     pass
>
> f = do_stuff
>
> def do_stuff(a, b):
>     pass
>
> g = do_stuff
>
> random.choice([f, g])(42)

Thanks, that's exactly what I meant, and a repro with a random "roulette" is also what I had in mind, I just didn't get to it yet ;-).

> PS > py .\ambiguous_names.py
> Traceback (most recent call last):
>   File "...\ambiguous_names.py", line 13, in 
>     random.choice([f, g])(42)
> TypeError: do_stuff() missing 1 required positional argument: 'b'
>
> The traceback gives no clue which of the two do_stuff() functions
> caused the error, you have to check both implementations.
>
> If that is a common problem one might consider including the module
> name and co_firstlineno in the message, or at least adding the
> relevant do_stuff() function to the exception's args.

As my original message argues, that's a workaround. Python tracebacks already have places where they show source file and line number, namely the individual traceback entries. So, instead of cramming that info into the exception message, there should be an additional last (latest in the order of execution) traceback entry, pointing to the exact function which had the parameter mismatch. As I mentioned, I implemented that in my Python dialect, which happened to have exactly the same problem (the code is not based on CPython).
It looks like:

Traceback (most recent call last):
  File "pseudoc_tool.py", line 91, in 
  File ".../xforms.py", line 25, in print
TypeError: unexpected keyword argument 'noann'

- which makes it clear that it's the "print" function of the "xforms.py" module, line 25, which got an unexpected keyword argument.

--
Best regards,
 Paul  mailto:pmis...@gmail.com
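For CPython as it is today, the quoted example can at least be disambiguated after the fact, because each function object carries its own code object recording the file and line it was compiled from, even when two functions share a name. A small sketch of that idea (the `describe` helper is a hypothetical illustration, not an existing API):

```python
def do_stuff():        # first binding of the name
    pass

f = do_stuff

def do_stuff(a, b):    # rebinds the same name
    pass

g = do_stuff

def describe(func):
    """Identify a function by its code object, not just its name."""
    code = func.__code__
    return f"{func.__qualname__} at {code.co_filename}:{code.co_firstlineno}"

# f and g share a name, but their code objects differ, so
# co_firstlineno tells them apart -- exactly the information Peter
# suggests surfacing in the error message.
```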
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
Hello,

On Mon, 22 Feb 2021 09:20:46 +0100 Michał Górny wrote:

> On Sun, 2021-02-21 at 13:04 -0800, Gregory P. Smith wrote:
> > The main thing from a project maintenance perspective is for
> > platforms to not become a burden to other code maintainers. PRs need
> > to be reviewed. Every #if/#endif in code is a cognitive burden. So
> > being a minor platform can come with unexpected breakages that need
> > fixing due to other changes made in the codebase that did not pay
> > attention to the platform. As we cannot expect everyone working on
> > code to care about anything beyond the tier-1 fully supported
> > platforms, buildbot or not.
>
> I have to disagree -- the support code (even if any is actually
> necessary) does not have to be a burden. Generally 'hobbyists' don't
> have a problem that the support for their platform becomes broken
> accidentally, or even deliberately because it blocks something else.
> They understand that others don't have the hardware, time or
> motivation to maintain support for their platform properly.

That's the problem CPython developers have: they like to remind us they're volunteers (it's not too far-fetched to say they do that work as a hobby), but they also want to play big corporate types who, with a flick of the wrist, shut down systems and throw users out in the cold.

It is all reminiscent of another recent drama with the "cryptography" package: https://github.com/pyca/cryptography/issues/5771 . And it's not surprising that some people on that thread are also here. Likewise, it's not surprising that some Debian people are on this thread, with a clear message, just as they have in https://gist.github.com/tiran/2dec9e03c6f901814f6d1e8dad09528e (which happens to be linked from the ticket above).

There's a conflict between 2 forms of volunteering. Old-school, still practiced by Debian, where people volunteer to *maintain* things for the benefit of other people.
And new-style, along the lines of "Hereby I volunteer to smash some Rust in the face of unsuspecting users" or "I volunteer to pull the rug out from under the feet of users of some systems".

--
Best regards,
 Paul  mailto:pmis...@gmail.com
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 22.02.2021 11:20, Michał Górny wrote:
> On Sun, 2021-02-21 at 13:04 -0800, Gregory P. Smith wrote:
> > The main thing from a project maintenance perspective is for
> > platforms to not become a burden to other code maintainers. PRs need
> > to be reviewed. Every #if/#endif in code is a cognitive burden. So
> > being a minor platform can come with unexpected breakages that need
> > fixing due to other changes made in the codebase that did not pay
> > attention to the platform. As we cannot expect everyone working on
> > code to care about anything beyond the tier-1 fully supported
> > platforms, buildbot or not.
>
> I have to disagree -- the support code (even if any is actually
> necessary) does not have to be a burden. Generally 'hobbyists' don't
> have a problem that the support for their platform becomes broken
> accidentally, or even deliberately because it blocks something else.
> They understand that others don't have the hardware, time or
> motivation to maintain support for their platform properly. They
> themselves have limited time to work on it. So it's entirely fine for
> things to break occasionally, and they provide fixes as their time
> permits. They don't ask others to maintain their code. There's no real
> maintenance burden involved.

But there is. As was pointed out above, the extra legacy code makes making _any_ changes in the corresponding part of the file more difficult, because one has to think about how to combine the changes with that code. E.g.: on which side of that code to place the changes; if it's part of a block statement, where in the block statement to place the changes; if I want to change the program structure, how to incorporate that code into the new structure. All of this is made much more difficult because I cannot be sure which changes would and wouldn't break the "legacy" code. So, by your logic, it would be "blocking" any change to that file and would have to be removed anyway the first time we change that file.
Now, if that's the case, why should we spend time and effort tracking which files were "cleansed" and which weren't (which we'll have to do, because such cleansing is an unrelated change in a PR, so a PR's author needs to know whether they have to do any "cleansing" on top of the change that does what they actually want), if we can cleanse them all at once and be done with it?

> In fact, this whole thread feels like removing 80%-complete
> translations from a program because they 'burden developers' and
> confuse users. Even if the translations are not actively updated and
> degenerate as strings change, some users find them helpful.

--
Regards,
Ivan
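To make the disagreement concrete: the burden Gregory and Ivan describe is not the platform branch itself but the way it constrains every later edit. A hypothetical Python-level analogue of the C `#if`/`#endif` situation (the function and its values are invented for illustration, not real CPython code):

```python
import sys

def default_buffer_size():
    """Hypothetical platform-dependent constant.

    Anyone restructuring this function must preserve the AIX branch
    without being able to test it; that is the cognitive burden the
    thread is arguing about.
    """
    if sys.platform.startswith("aix"):
        return 8192       # legacy branch few maintainers can verify
    elif sys.platform == "win32":
        return 65536
    else:
        return 32768

size = default_buffer_size()
```

Every refactoring of such a function has to thread the untestable branch through the new structure, which is exactly the "where do my changes go relative to the legacy code" problem described above.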
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On Sun, 2021-02-21 at 13:04 -0800, Gregory P. Smith wrote:
> The main thing from a project maintenance perspective is for platforms
> to not become a burden to other code maintainers. PRs need to be
> reviewed. Every #if/#endif in code is a cognitive burden. So being a
> minor platform can come with unexpected breakages that need fixing due
> to other changes made in the codebase that did not pay attention to
> the platform. As we cannot expect everyone working on code to care
> about anything beyond the tier-1 fully supported platforms, buildbot
> or not.

I have to disagree -- the support code (even if any is actually necessary) does not have to be a burden. Generally 'hobbyists' don't have a problem that the support for their platform becomes broken accidentally, or even deliberately because it blocks something else. They understand that others don't have the hardware, time or motivation to maintain support for their platform properly. They themselves have limited time to work on it. So it's entirely fine for things to break occasionally, and they provide fixes as their time permits. They don't ask others to maintain their code. There's no real maintenance burden involved.

In fact, this whole thread feels like removing 80%-complete translations from a program because they 'burden developers' and confuse users. Even if the translations are not actively updated and degenerate as strings change, some users find them helpful.

--
Best regards,
Michał Górny