[issue46187] Optionally support rounding for math.isqrt()
Mark Dickinson added the comment:

FWIW, when this need has turned up for me (which it has, a couple of times), I've used this:

    def risqrt(n):
        return (isqrt(n << 2) + 1) >> 1

But I'll admit that that's a bit non-obvious.

--

Python tracker <https://bugs.python.org/issue46187>

Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
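A quick way to sanity-check the trick: `risqrt(n)` should agree with rounding sqrt(n) to the nearest integer, and since the square root of an integer can never land exactly halfway between two integers, "nearest" is always unambiguous. A small verification sketch (the comparison logic below is mine, not from the thread):

```python
from math import isqrt

def risqrt(n):
    """Integer square root rounded to nearest (the trick from the comment above)."""
    return (isqrt(n << 2) + 1) >> 1

# Reference: sqrt(n) >= r + 0.5 iff n >= r*r + r + 1, where r = isqrt(n),
# so the nearest integer to sqrt(n) is r + 1 exactly when n - r*r > r.
for n in range(100_000):
    r = isqrt(n)
    nearest = r + 1 if n - r * r > r else r
    assert risqrt(n) == nearest
```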
[issue46055] Speed up binary shifting operators
Mark Dickinson added the comment:

Two separate significant improvements have been pushed: thanks, Xinhang Xu!

The original PR also contained a reworking of the general case for right-shifting a negative integer. The current code (in main) for that case does involve some extra allocations, and it ought to be possible to write something that doesn't need to allocate temporary PyLongs.

--

Python tracker <https://bugs.python.org/issue46055>
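The arithmetic identity that makes an allocation-light version plausible: an arithmetic (floor) right shift of a negative value can be expressed as a shift of its complement, which is non-negative. A Python-level sketch of the identity only, not the actual C implementation:

```python
def rshift_negative(n, s):
    # For n < 0 and s >= 0: n >> s == ~((~n) >> s).
    # ~n is non-negative, so only non-negative shifting is needed underneath.
    assert n < 0 and s >= 0
    return ~((~n) >> s)

# Spot-check against Python's built-in floor semantics for >>.
for n in range(-1000, 0):
    for s in range(12):
        assert rshift_negative(n, s) == n >> s
```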
[issue46055] Speed up binary shifting operators
Mark Dickinson added the comment:

New changeset 3581c7abbe15bad6ae08fc38887e5948f8f39e08 by Xinhang Xu in branch 'main':
bpo-46055: Speed up binary shifting operators (GH-30044)
https://github.com/python/cpython/commit/3581c7abbe15bad6ae08fc38887e5948f8f39e08

--

Python tracker <https://bugs.python.org/issue46055>
[issue46055] Speed up binary shifting operators
Mark Dickinson added the comment:

New changeset 360fedc2d2ce6ccb0dab554ef45fe83f7aea1862 by Mark Dickinson in branch 'main':
bpo-46055: Streamline inner loop for right shifts (#30243)
https://github.com/python/cpython/commit/360fedc2d2ce6ccb0dab554ef45fe83f7aea1862

--

Python tracker <https://bugs.python.org/issue46055>
[issue37295] Possible optimizations for math.comb()
Change by Mark Dickinson:

--
pull_requests: +28490
pull_request: https://github.com/python/cpython/pull/30275

Python tracker <https://bugs.python.org/issue37295>
[issue46173] Clarify conditions under which float(x) with large x raises OverflowError
Mark Dickinson added the comment:

Changing to a documentation issue.

--
assignee:  -> docs@python
components: +Documentation -Interpreter Core
nosy: +docs@python
resolution: not a bug ->
title: float(x) with large x not raise OverflowError -> Clarify conditions under which float(x) with large x raises OverflowError
versions: +Python 3.11

Python tracker <https://bugs.python.org/issue46173>
[issue46173] float(x) with large x not raise OverflowError
Change by Mark Dickinson:

--
resolution:  -> not a bug

Python tracker <https://bugs.python.org/issue46173>
[issue46173] float(x) with large x not raise OverflowError
Mark Dickinson added the comment:

If we wanted to make a change, I think the part of the docs that I'd target would be this sentence:

> a floating point number with the same value (within Python’s floating point
> precision) is returned

It's that "same value (within Python's floating point precision)" bit that I'd consider changing. We could consider replacing it with something along the lines that "an integer argument is rounded to the nearest float", possibly with an additional note that under the assumption of IEEE 754 binary64 format, we follow the usual IEEE 754 rules.

--

Python tracker <https://bugs.python.org/issue46173>
[issue46173] float(x) with large x not raise OverflowError
Mark Dickinson added the comment:

Yes, exactly: Python's intentionally following the normal IEEE 754 rules for rounding a value to the binary64 format using the round-ties-to-even rounding rule, as formalised in section 7.4 of IEEE 754-2019 (and quoted by @cykerway). These are the exact same rules that are followed for conversion from str to float (where we return `inf` rather than raise `OverflowError` for large values, but the overflow boundary is the same), or conversion from Fraction to float, or conversion from Decimal to float, etc.

> the python float doc might better say "If the *rounded* argument is
> outside..."

Docs are hard. I think there's a danger that that word "rounded" would cause more confusion than it alleviates - to me, it suggests that there's some kind of rounding going on *before* conversion to float, rather than *as part of* the conversion to float. This isn't a language specification document, so it's not reasonable to give a perfectly accurate description of what happens - the actual meaning would be lost in the mass of details. (In this case, it would also be rather hard to be precise, given that we have to allow for platforms that aren't using IEEE 754.)

I'm not seeing an obvious way to improve the docs here.

--

Python tracker <https://bugs.python.org/issue46173>
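The shared overflow boundary can be checked directly. The halfway point between the largest finite double and 2**1024 rounds *up* under ties-to-even (the largest finite double has an odd significand), so it overflows for both int and str inputs; a sketch assuming IEEE 754 binary64:

```python
import math
import sys

# Halfway between sys.float_info.max and 2**1024; ties-to-even rounds it up.
halfway = 2**1024 - 2**970

assert float(halfway - 1) == sys.float_info.max   # just below: rounds down
try:
    float(halfway)                                # exactly halfway: overflows
    assert False, "expected OverflowError"
except OverflowError:
    pass

# str -> float uses the same boundary, but returns inf instead of raising.
assert float(str(halfway - 1)) == sys.float_info.max
assert math.isinf(float(str(halfway)))
```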
[issue37295] Possible optimizations for math.comb()
Mark Dickinson added the comment:

Raymond: how do you want to proceed on this? Should I code up my suggestion in a PR, or are you already working on it?

--

Python tracker <https://bugs.python.org/issue37295>
[issue46055] Speed up binary shifting operators
Change by Mark Dickinson:

--
keywords: +patch
pull_requests: +28464
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30243

Python tracker <https://bugs.python.org/issue46055>
[issue46055] Speed up binary shifting operators
Change by Mark Dickinson:

--
nosy: +mark.dickinson

Python tracker <https://bugs.python.org/issue46055>
[issue20369] concurrent.futures.wait() blocks forever when given duplicate Futures
Change by Mark Dickinson:

--
nosy: -mark.dickinson

Python tracker <https://bugs.python.org/issue20369>
[issue35037] PYLONG_BITS_IN_DIGIT differs between MinGW and MSVC
Mark Dickinson added the comment:

> This should probably be a separate issue,

Specifically, issue 45569.

--

Python tracker <https://bugs.python.org/issue35037>
[issue37295] Possible optimizations for math.comb()
Mark Dickinson added the comment:

[Tim]
> The justification for the shift count isn't self-evident, and
> appears to me to be an instance of the generalization of Kummer's
> theorem to multinomial coefficients.

Not sure there's any generalisation here: I think it *is* just Kummer's theorem. Though I confess I wasn't aware that this was a named theorem - I was working directly from what I now discover is called [Legendre's formula](https://en.wikipedia.org/wiki/Legendre%27s_formula), which I originally learned from "Concrete Mathematics" by Knuth et al., where they also didn't mention any particular names. It's equation 4.24 in my edition; it may have a different number in the 2nd edition.

Kummer's theorem says that the 2-valuation of n-choose-k is the number of carries when k is added to n-k in binary.

Notation: write `bit(x, i)` for the bit at position `i` of `x` - i.e., `(x >> i) & 1`.

In the absence of carries when adding `k` to `n-k`, `bit(n, i) = bit(k, i) ^ bit(n-k, i)`. We have an incoming carry whenever `bit(n, i) != bit(k, i) ^ bit(n-k, i)`; i.e., whenever `bit(n ^ k ^ (n-k), i)` is `1`. So the number of carries is the population count of `n ^ k ^ (n-k)`.

> I think it would be clearer at first sight to rely instead on that
> 2**i/(2**j * 2**k) = 2**(i-j-k), which is shallow.

Sounds fine to me, especially if it makes little performance difference.

--

Python tracker <https://bugs.python.org/issue37295>
[issue23522] Misleading note in Statistics module documentation
Change by Mark Dickinson:

--
stage: patch review -> resolved
status: open -> closed

Python tracker <https://bugs.python.org/issue23522>
[issue37295] Possible optimizations for math.comb()
Mark Dickinson added the comment:

That computation of the shift can be simplified to require only one popcount operation. With F and Finv as before:

    def comb_small(n, k):
        assert 0 <= k <= n <= Nmax
        return (F[n] * Finv[k] * Finv[n-k] % 2**64) << (k ^ n ^ (n-k)).bit_count()

--

Python tracker <https://bugs.python.org/issue37295>
[issue46144] math.log() returns improper output
Mark Dickinson added the comment:

Yes, confirmed that this is not a bug, but just one of the many consequences of approximating real numbers by floating-point numbers.

You may be interested in math.log2 and/or int.bit_length. math.log2(x) *may* give you more accurate results than math.log(x, 2) when x is a power of two, but there are no guarantees - we're at the mercy of the C math library here.

--
nosy: +mark.dickinson
resolution:  -> not a bug

Python tracker <https://bugs.python.org/issue46144>
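When an exact integer answer is what's wanted, int.bit_length sidesteps floating point (and the C math library) entirely:

```python
import math

n = 2**1000

# Exact: for n > 0, n.bit_length() - 1 is floor(log2(n)), with no rounding involved.
assert n.bit_length() - 1 == 1000

# Approximate: both of these go through libm, so the last bit of the result
# may vary from platform to platform.
print(math.log2(n), math.log(n, 2))
```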
[issue37295] Possible optimizations for math.comb()
Mark Dickinson added the comment:

One approach that avoids the use of floating-point arithmetic is to precompute the odd part of the factorial of n modulo 2**64, for all small n. If we also precompute the inverses, then three lookups and two 64x64->64 unsigned integer multiplications gets us the odd part of the combinations modulo 2**64, hence for small enough n and k gets us the actual odd part of the combinations. Then a shift by a suitable amount gives comb(n, k).

Here's what that looks like in Python. The "% 2**64" operation obviously wouldn't be needed in C: we'd just do the computation with uint64_t and rely on the normal wrapping semantics. We could also precompute the bit_count values if that's faster.

    import math

    # Max n to compute comb(n, k) for.
    Nmax = 67

    # Precomputation
    def factorial_odd_part(n):
        f = math.factorial(n)
        return f // (f & -f)

    F = [factorial_odd_part(n) % 2**64 for n in range(Nmax+1)]
    Finv = [pow(f, -1, 2**64) for f in F]
    PC = [n.bit_count() for n in range(Nmax+1)]

    # Fast comb for small values.
    def comb_small(n, k):
        if not 0 <= k <= n <= Nmax:
            raise ValueError("k or n out of range")
        return (F[n] * Finv[k] * Finv[n-k] % 2**64) << k.bit_count() + (n-k).bit_count() - n.bit_count()

    # Exhaustive test
    for n in range(Nmax+1):
        for k in range(0, n+1):
            assert comb_small(n, k) == math.comb(n, k)

--

Python tracker <https://bugs.python.org/issue37295>
[issue37295] Possible optimizations for math.comb()
Mark Dickinson added the comment:

> we can get faster code by using a small (3Kb) table of factorial logarithms

The problem here is that C gives no guarantees about accuracy of either log2 or exp2, so we'd be playing a guessing game about how far we can go before the calculation becomes unsafe (in the sense of the `round` operation potentially giving the wrong answer). I think it would be better to stick to integer-only arithmetic.

--

Python tracker <https://bugs.python.org/issue37295>
[issue45995] string formatting: normalize negative zero
Mark Dickinson added the comment:

Thanks, John. I should have time to review within the next week or so.

--

Python tracker <https://bugs.python.org/issue45995>
[issue45995] string formatting: normalize negative zero
Change by Mark Dickinson:

--
assignee:  -> mark.dickinson

Python tracker <https://bugs.python.org/issue45995>
[issue23522] Misleading note in Statistics module documentation
Mark Dickinson added the comment:

Steven: I've made a PR at https://github.com/python/cpython/pull/30174. Does this match what you had in mind?

--

Python tracker <https://bugs.python.org/issue23522>
[issue23522] Misleading note in Statistics module documentation
Change by Mark Dickinson:

--
pull_requests: +28390
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30174

Python tracker <https://bugs.python.org/issue23522>
[issue23522] Misleading note in Statistics module documentation
Mark Dickinson added the comment:

> "The mean is strongly affected by outliers and is not necessarily a typical
> example of the data points. For a more robust, although less efficient,
> measure of central tendency, see median()"

That wording sounds fine to me.

I don't think we can reasonably expect to hear from Jake again, but from my understanding of his post, this addresses his concerns. FWIW, I share those concerns. My brain can't parse "robust estimator for central location", because the term "estimator" has a precise and well-defined meaning in (frequentist) statistics, and what I expect to see after "estimator for" is a description of a parameter of a statistical model - as in for example "estimator for the population mean", or "estimator for the Weibull shape parameter". "central location" doesn't really fit in that slot.

> How do we feel about linking to Wikipedia?

I can't think of any good reason not to. We have plenty of other external links in the docs, and the Wikipedia links are probably at lower risk of becoming stale than most of the others.

--

Python tracker <https://bugs.python.org/issue23522>
[issue23522] Misleading note in Statistics module documentation
Change by Mark Dickinson:

--
nosy: +mark.dickinson

Python tracker <https://bugs.python.org/issue23522>
[issue46018] expm1 may incorrectly raise OverflowError on underflow
Mark Dickinson added the comment:

> Lines 500-504 are the ones that trigger it.

Ah, right. Thanks.

> Apparently there are no tests in that file for straight exp()

Yes - that file was mostly written to give good coverage for places where we'd written our own implementations rather than simply wrapping an existing libm function, though I think we've now reverted to using the libm expm1 in all cases.

--

Python tracker <https://bugs.python.org/issue46018>
[issue46018] expm1 may incorrectly raise OverflowError on underflow
Mark Dickinson added the comment:

> I've also got no idea how to write a test for this

Yep, that's fine. All I want is that at least one particular value that caused the spurious OverflowError is in the test suite somewhere, but it sounds as though that's already the case. I'd imagine that one of these two testcases should be enough to trigger it:

https://github.com/python/cpython/blob/44b0e76f2a80c9a78242b7542b8b1218d244af07/Lib/test/math_testcases.txt#L495-L496

--

Python tracker <https://bugs.python.org/issue46018>
[issue36048] Deprecate implicit truncating when convert Python numbers to C integers: use __index__, not __int__
Mark Dickinson added the comment:

For the record, #37999 is the issue that turned the deprecation warnings into errors for Python 3.10. (But as Serhiy says, please open a new issue, or start a discussion on one of the mailing lists.)

--

Python tracker <https://bugs.python.org/issue36048>
[issue36048] Deprecate implicit truncating when convert Python numbers to C integers: use __index__, not __int__
Mark Dickinson added the comment:

@arhadthedev: Thanks for highlighting the issue.

> we need to check if the problem really has place and the PR needs to be
> retracted until PyQt5 is ported to newer Python C API

I'm not particularly impressed by the arguments from cculianu. This was the right change to make: this is very general machinery, and we've seen many real issues over the years resulting from implicit acceptance of floats or Decimal objects where an integer is expected.

It may well be that for some *specific* libraries like PyQt5 it makes sense to make a different choice. And indeed, PySide6 has done exactly that:

    Python 3.10.0 (default, Nov 12 2021, 12:32:57) [Clang 12.0.5 (clang-1205.0.22.11)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from PySide6.QtCore import QPoint
    >>> QPoint(2, 3)
    PySide6.QtCore.QPoint(2, 3)
    >>> QPoint(2.1, 3.3)
    PySide6.QtCore.QPoint(2, 3)

So no, I don't believe this change should be reverted. At best, we could re-introduce the deprecation warnings and delay the full implementation of the change. But the deprecation warnings were present since Python 3.8, and so either the PyQt5 developers noticed them but didn't want to make the change, or didn't notice them. Either way, it's difficult to see what difference extending the deprecation warning period would make. Moreover, the new behaviour is already released, in Python 3.10.0 and 3.10.1, and the code churn would likely be more annoying than helpful.

I would suggest to cculianu that they take this up with the PyQt5 developers.

--

Python tracker <https://bugs.python.org/issue36048>
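The Python-level counterpart of the C-API machinery being discussed is the distinction between `__int__` (used by `int()`, which happily truncates) and `__index__` (used where a true integer is required); `operator.index` exposes the latter directly:

```python
import operator
from decimal import Decimal

assert int(2.9) == 2               # int() truncates, via __int__
assert operator.index(7) == 7      # a real integer passes through __index__

# Floats and Decimals define __int__ but not __index__, so implicit
# truncation is rejected wherever a true integer is required.
for bad in (2.9, Decimal("3")):
    try:
        operator.index(bad)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError")
```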
[issue46020] Optimize long_pow for the common case
Change by Mark Dickinson:

--
nosy: +mark.dickinson

Python tracker <https://bugs.python.org/issue46020>
[issue46018] expm1 may incorrectly raise OverflowError on underflow
Mark Dickinson added the comment:

I presume this is also worth an upstream report? Setting ERANGE on a result that's close to -1.0 is rather questionable.

--

Python tracker <https://bugs.python.org/issue46018>
[issue46018] expm1 may incorrectly raise OverflowError on underflow
Mark Dickinson added the comment:

It's a bit cheap and nasty, but I think we could simply replace the line:

    if (fabs(x) < 1.0)

in is_error with

    if (fabs(x) < 2.0)

perhaps with an explanatory comment. All we need to do is distinguish underflow from overflow, and 2.0 is still clearly a _long_ way away from any overflow boundary.

It would be good to have a test that would trigger the behaviour, too.

--

Python tracker <https://bugs.python.org/issue46018>
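The failure mode is visible from Python: for large negative arguments expm1 underflows towards -1.0, and a result of exactly -1.0 fell on the overflow side of the old `fabs(x) < 1.0` test whenever libm set ERANGE. On a build with the fix these simply return -1.0 (the argument values here are illustrative, not the ones from the test suite):

```python
import math

# exp(-710.0) is subnormal, so expm1(-710.0) rounds to exactly -1.0; a libm
# setting ERANGE here used to make CPython raise a spurious OverflowError.
assert math.expm1(-710.0) == -1.0
assert math.expm1(-1420.0) == -1.0
```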
[issue46018] expm1 may incorrectly raise OverflowError on underflow
Change by Mark Dickinson:

--
nosy: +mark.dickinson

Python tracker <https://bugs.python.org/issue46018>
[issue45995] string formatting: normalize negative zero
Mark Dickinson added the comment:

I'd support having this functionality available for `format` and for f-strings. (As Steven says, changing %-formatting doesn't seem viable.) It really _is_ awkward to do this in any other way, and I'm reliably informed that normal people don't expect to see negative zeros in formatted numeric output.

It did take me a few minutes to get my head around the idea that `f"{-0.01:+.1f}"` would return `"+0.0"` rather than `"-0.0"` or `" 0.0"` or just plain `"0.0"` under this proposal, but I agree that it seems like the only thing that can be consistent and make sense.

I'm not 100% convinced by the particular spelling proposed, but I don't have anything better to suggest. If C++ might be going with a "z", would it make sense to do the same for Python?

I don't foresee any implementation difficulties for float and complex types. For Decimal, we'd need to "own" the string formatting, taking that responsibility away from mpdecimal, but there are already other reasons to do that. Once we've done that, again the implementation doesn't seem onerous.

--

Python tracker <https://bugs.python.org/issue45995>
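The "awkward" workaround being discussed looks something like the following (a sketch for simple format specs only; the helper name is mine). For the record, the proposal did eventually land as the "z" format-spec option in Python 3.11 (PEP 682).

```python
def format_no_negative_zero(x, spec):
    """Format x with spec, suppressing a negative zero in the result.

    Works for simple float specs like '.1f' or '+.2f'; specs that insert
    separators (e.g. ',') would defeat the float() re-parse below.
    """
    s = format(x, spec)
    if s.startswith("-") and float(s) == 0.0:
        # The rounded value is a negative zero: re-format the magnitude
        # so the sign handling in the spec still applies.
        s = format(abs(x), spec)
    return s

assert format_no_negative_zero(-0.01, "+.1f") == "+0.0"
assert format_no_negative_zero(-0.01, ".1f") == "0.0"
assert format_no_negative_zero(-1.0, ".2f") == "-1.00"
```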
[issue45995] string formatting: normalize negative zero
Change by Mark Dickinson:

--
nosy: +eric.smith

Python tracker <https://bugs.python.org/issue45995>
[issue7946] Convoy effect with I/O bound threads and New GIL
Change by Mark Dickinson:

--
nosy: +mark.dickinson

Python tracker <https://bugs.python.org/issue7946>
[issue45476] [C API] PEP 674: Disallow using macros as l-value
Change by Mark Dickinson:

--
nosy: -mark.dickinson

Python tracker <https://bugs.python.org/issue45476>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

[Tim]
> on Windows, exp2(x) is way worse then pow(2, x)

Darn.

> I expect we should just live with it.

Agreed.

--

Python tracker <https://bugs.python.org/issue45917>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

All done. Many thanks, Gideon!

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

Python tracker <https://bugs.python.org/issue45917>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

New changeset 6266e4af873a27c9d352115f2f7a1ad0885fc031 by Gideon in branch 'main':
bpo-45917: Add math.exp2() method - return 2 raised to the power of x (GH-29829)
https://github.com/python/cpython/commit/6266e4af873a27c9d352115f2f7a1ad0885fc031

--

Python tracker <https://bugs.python.org/issue45917>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

On the subject of accuracy, there doesn't seem to be much in it on my mac laptop, and it looks as though pow(2.0, x) is giving correctly rounded results as often as (if not more often than) exp2(x).

Here's the log of a terminal session, after recompiling Python to add exp2. It shows the ulps error (tested against a high-precision Decimal computation, which we're treating as representing the "exact" result) for both exp2(x) and pow(2.0, x) when the two results differ, for a selection of randomly chosen x in the range (-1000.0, 1000.0).

Columns in the output are: x (in hex), x (in decimal), ulps error in exp2(x), ulps error in pow(2.0, x)

    >>> from decimal import getcontext, Decimal
    >>> from math import exp2, pow, ulp
    >>> import random
    >>> getcontext().prec = 200
    >>> def exp2_error_ulps(x):
    ...     libm = exp2(x)
    ...     exactish = 2**Decimal(x)
    ...     return float(Decimal(libm) - exactish) / ulp(libm)
    ...
    >>> def pow2_error_ulps(x):
    ...     libm = pow(2.0, x)
    ...     exactish = 2**Decimal(x)
    ...     return float(Decimal(libm) - exactish) / ulp(libm)
    ...
    >>> for n in range(1):
    ...     x = random.uniform(-1000.0, 999.0) + random.random()
    ...     if exp2(x) != pow(2.0, x):
    ...         print(f"{x.hex():21} {x:22.17f} {exp2_error_ulps(x): .5f}, {pow2_error_ulps(x): .5f}")
    ...
     0x1.e28f2ad3da122p+5    60.31990590581177969   0.50669, -0.49331
    -0x1.929e790e1d293p+9  -805.23806930946227567   0.50082, -0.49918
    -0x1.49803564f5b8ap+8  -329.50081473349621319   0.49736, -0.50264
    -0x1.534cf08081f4bp+8  -339.30054476902722627  -0.50180,  0.49820
    -0x1.b430821fb4ad2p+8  -436.18948553238908517  -0.49883,  0.50117
     0x1.2c87a8431bd8fp+8   300.52991122655743084  -0.50376,  0.49624
     0x1.3e476f9a09c8cp+7   159.13952332848964488   0.50062, -0.49938
     0x1.cb8b9c61e7e89p+9   919.09070991347937252   0.49743, -0.50257
     0x1.ab86ed0e6c7f6p+9   855.05410938546879152   0.49742, -0.50258
     0x1.97bc9af3cbf85p+9   815.47347876986952997  -0.50076,  0.49924
    -0x1.b5434441ba11bp+8  -437.26276026528074681  -0.50062,  0.49938
    -0x1.0ead35218910ep+9  -541.35318392937347198   0.50192, -0.49808
    -0x1.dbae0b861b89cp+9  -951.35972668022759535   0.50601, -0.49399
     0x1.522f005d2dcc4p+6    84.54589982597377684  -0.50704,  0.49296
     0x1.398ff48d53ee1p+9   627.12465063665524667  -0.50102,  0.49898
    -0x1.381307fbd89f5p+5   -39.00929257159069863  -0.50526,  0.49474
     0x1.9dc4c85f7c53ap+9   827.53736489840161994  -0.50444,  0.49556
     0x1.b357f6012d3c2p+9   870.68719496449216422  -0.50403,  0.49597
    -0x1.a6446703677bbp+9  -844.53439371636284250   0.50072, -0.49928
     0x1.e3dd54b28998bp+7   241.93228681497234334   0.49897, -0.50103
     0x1.b4f77f18a233ep+8   436.96678308448815642   0.49593, -0.50407
    -0x1.578c4ce7a7c1bp+3   -10.73587651486564276  -0.50505,  0.49495
     0x1.25a9540e1ee65p+5    36.70767985374258302   0.49867, -0.50133
    -0x1.6e220f7db7668p+8  -366.13304887511776542  -0.49904,  0.50096
    -0x1.94214ed3e5264p+9  -808.26021813095985635   0.50420, -0.49580
     0x1.9dcc3d281da18p+5    51.72472602215219695  -0.50423,  0.49577
    -0x1.3ba66909e6a40p+7  -157.82502013149678532  -0.50077,  0.49923
    -0x1.9eac2c52a1b47p+9  -829.34510262389892432  -0.50540,  0.49460

--

Python tracker <https://bugs.python.org/issue45917>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

See also previous discussion towards the end of https://bugs.python.org/issue3366.

FWIW, I don't think there's value in adding exp2 to the cmath module too: we'd have to write our own implementation, and it's just not a function that appears often in the complex world.

--

Python tracker <https://bugs.python.org/issue45917>
[issue45917] Add math.exp2() function: 2^x
Mark Dickinson added the comment:

Sounds good to me, provided that all the common platforms that we care about have a reasonable quality implementation. This should be a straightforward wrapping of the C99 function, and with sufficient tests the buildbots should tell us if there are any issues on common platforms.

@Gideon: are you interested in working on a pull request? I'd be happy to review.

(Ideally I'd like to have exp10 too, but that's not in C99 so platform support is likely to be spotty. If anyone's interested in pursuing that, we should make it a separate issue.)

> a libm exp2 is supposedly more accurate than pow(2.0, x), though I don’t
> really see how this would be the case

pow is a difficult function to implement at high accuracy, and there are a good number of low quality pow implementations around in system math libraries. It's much easier to come up with a high accuracy implementation of a single-argument function - there are well known techniques for generating approximating polynomials that simply don't extend well to functions of two arguments.

sqrt is similar: pow(x, 0.5) is very often not correctly rounded even on systems where sqrt(x) _is_. (Though that one's a bit of a cheat, since common processors have dedicated instructions for a correctly-rounded sqrt.)

--

Python tracker <https://bugs.python.org/issue45917>
[issue45739] The Python implementation of Decimal does not support the "N" format
Mark Dickinson added the comment:

I could be persuaded for any of options -1, 1 and 2. I don't much like option 0.

--

Python tracker <https://bugs.python.org/issue45739>
[issue45739] The Python implementation of Decimal does not support the "N" format
Mark Dickinson added the comment:

Eric, Serhiy: do you have opinions on the right way forward? Here are 6 options, on a spectrum of increasing level of acceptance of "N".

-2. Remove "N" support for cdecimal right now (i.e., for Python 3.11), on the basis that there's no need for deprecation warnings, because it was never officially a feature.

-1. Deprecate "N" support for cdecimal, remove it in Python 3.13.

 0. Do nothing (the default), leaving _pydecimal and cdecimal inconsistent.

 1. Add "N" support to the Python implementation for parity with cdecimal, but don't document it - leave it as an undocumented feature.

 2. Officially add "N" support to decimal formatting - add documentation, tests, and fix the Python implementation.

 3. Officially add "N" support to all numeric formatting ...

--

Python tracker <https://bugs.python.org/issue45739>
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

Concrete example of int/int not being correctly rounded on systems using x87 instructions: on those systems, I'd expect to see 1/2731 return a result of 0.00036616623947272064 (0x1.7ff4005ffd002p-12), as a result of first rounding to 64-bit precision and then to 53-bit. The correctly-rounded result is 0.0003661662394727206 (0x1.7ff4005ffd001p-12).

--

Python tracker <https://bugs.python.org/issue45876>
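The two candidate results are adjacent doubles, one ulp apart; on hardware that performs the division directly in binary64 (SSE2 and later, rather than x87 extended precision), the correctly-rounded value is what comes back:

```python
import math

correctly_rounded = float.fromhex("0x1.7ff4005ffd001p-12")
double_rounded = float.fromhex("0x1.7ff4005ffd002p-12")

# The double-rounded x87 result is exactly one ulp above the correct one.
assert double_rounded - correctly_rounded == math.ulp(correctly_rounded)

# On IEEE 754 binary64 hardware, float division is correctly rounded:
print(1 / 2731 == correctly_rounded)
```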
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

> All the rounding has already happened at the point where ldexp is called, and the result of the ldexp call is exact.

Sketch of proof: [Here](https://github.com/python/cpython/blob/4ebde73b8e416eeb1fd5d2ca3283f7ddb534c5b1/Objects/longobject.c#L3929) we have:

    shift = Py_MAX(diff, DBL_MIN_EXP) - DBL_MANT_DIG - 2;

from which (assuming IEEE 754 as usual) shift >= -1076. (DBL_MIN_EXP = -1021, DBL_MANT_DIG = 53)

[Here](https://github.com/python/cpython/blob/4ebde73b8e416eeb1fd5d2ca3283f7ddb534c5b1/Objects/longobject.c#L4008) we round away the last two or three bits of x, after which x is guaranteed to be a multiple of 4:

    x->ob_digit[0] = low & ~(2U*mask-1U);

Then after converting the PyLong x to a double dx with exactly the same value, [here](https://github.com/python/cpython/blob/4ebde73b8e416eeb1fd5d2ca3283f7ddb534c5b1/Objects/longobject.c#L4020) we make the ldexp call:

    result = ldexp(dx, (int)shift);

At this point dx is a multiple of 4 and shift >= -1076, so the result of the ldexp scaling is a multiple of 2**-1074, and in the case of a subnormal result, it's already exactly representable.

For the int/int division possibly not being correctly rounded on x87, see [here](https://github.com/python/cpython/blob/4ebde73b8e416eeb1fd5d2ca3283f7ddb534c5b1/Objects/longobject.c#L3889-L3892). It won't affect _this_ application, but possibly we should fix this anyway. Though the progression of time is already effectively fixing it for us, as x87 becomes less and less relevant.

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

> Will it suffer the same issues with subnormals on Windows?

No, it should be fine. All the rounding has already happened at the point where ldexp is called, and the result of the ldexp call is exact.

> Is CPython int/int true division guaranteed to be correctly rounded?

Funny you should ask. :-) There's certainly no documented guarantee, and there _is_ a case (documented in comments) where the current code may not return correctly rounded results on machines that use x87: there's a fast path where both numerator and denominator fit into an IEEE 754 double without rounding, and we then do a floating-point division. But we can't hit that case with the proposed code, since the numerator will always have at least 55 bits, so the fast path is never taken.

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: Since I've failed to find a coherent statement and proof of the general principle I articulated anywhere online, I've included one below. (To be absolutely clear, the principle is not new - it's very well known, but oddly hard to find written down anywhere.)

Setup: introducing jots
=======================

*Notation.* R is the set of real numbers, Z is the set of integers, operators like *, / and ^ have their mathematical interpretations, not Python ones.

Fix a precision p > 0 and an IEEE 754 binary floating-point format with precision p; write F for the set of representable values in that format, including zeros and infinities. (We don't need NaNs, but feel free to include them if you want to.) Let rnd : R -> F be the rounding function corresponding to any of the standard IEEE 754 rounding modes. We're not ignoring overflow and underflow here: rnd is assumed to round tiny values to +/-0 and large values to +/-infinity as normal. (We only really care about round-ties-to-even, but all of the below is perfectly general.)

*Definition.* For the given fixed precision p, a *jot* is a subinterval of the positive reals of the form (m 2^e, (m+1) 2^e) for some integers m and e, with m satisfying 2^p <= m < 2^(p+1). This is a definition-of-convenience, invented purely for the purposes of this proof. (And yes, the name is silly. Suggestions for a better name to replace "jot" are welcome. Naming things is hard.)

We've chosen the size of a jot so that between each consecutive pair a and b of positive normal floats in F, there are exactly two jots: one spanning from a to the midpoint (a+b)/2, and another spanning from (a+b)/2 to b. (Since jots are open, the midpoint itself and the floats a and b don't belong to any jot.)

Now here's the key point: for values that aren't exactly representable and aren't perfect midpoints, the standard rounding modes, whether directed or round-to-nearest, only ever care about which side of the midpoint the value to be rounded lies. In other words:

*Observation.* If x and y belong to the same jot, then rnd(x) = rnd(y).

This is the point of jots: they represent the wiggle-room that we have to perturb a real number without affecting the way that it rounds under any of the standard rounding modes.

*Note.* Between any two consecutive *subnormal* values, we have 4 or more jots, and above the maximum representable float we have infinitely many, but the observation that rnd is constant on jots remains true at both ends of the spectrum. Also note that jots, as defined above, don't cover the negative reals, but we don't need them to for what follows.

Here's a lemma that we'll need shortly.

*Lemma.* Suppose that I is an open interval of the form (m 2^e, (m+1) 2^e) for some integers m and e satisfying 2^p <= m. Then I is either a jot, or a subinterval of a jot.

*Proof.* If m < 2^(p+1) then this is immediate from the definition. In the general case, m satisfies 2^q <= m < 2^(q+1) for some integer q with p <= q. Write n = floor(m / 2^(q-p)). Then:

    n <= m / 2^(q-p) < n + 1,
so
    n * 2^(q-p) <= m < (n + 1) * 2^(q-p),
so
    n * 2^(q-p) <= m and m + 1 <= (n + 1) * 2^(q-p),
so
    n * 2^(e+q-p) <= m * 2^e and (m + 1) * 2^e <= (n + 1) * 2^(e+q-p).

So I is a subinterval of (n * 2^(e+q-p), (n+1) * 2^(e+q-p)), which is a jot.

The magic of round-to-odd
=========================

*Definition.* The function to-odd : R -> Z is defined by:

- to-odd(x) = x if x is an integer
- to-odd(x) = floor(x) if x is not an integer and floor(x) is odd
- to-odd(x) = ceil(x) if x is not an integer and floor(x) is even

*Properties.* Some easy monotonicity properties of to-odd, with proofs left to the reader:

- If x < 2n for real x and integer n, then to-odd(x) < to-odd(2n)
- If 2n < x for real x and integer n, then to-odd(2n) < to-odd(x)

Here's a restatement of the main principle.

*Proposition.* With p and rnd as in the previous section, suppose that x is a positive real number and that e is any integer satisfying 2^e <= x. Define a real number y by:

    y = 2^(e-p-1) to-odd(x / 2^(e-p-1))

Then rnd(y) = rnd(x).

Proof of the principle
======================

In a nutshell, we show that either

- y = x, or
- x and y belong to the same jot.

Either way, since rnd is constant on jots, we get rnd(y) = rnd(x).

Case 1: x = m * 2^(e-p) for some integer m. Then x / 2^(e-p-1) = 2m is an (even) integer, so to-odd(x / 2^(e-p-1)) = (x / 2^(e-p-1)) and y = x. Hence rnd(y) = rnd(x).

Case 2: m * 2^(e-p) < x < (m + 1) * 2^(e-p) for some integer m. Then rearranging, 2m < x / 2^(e-p-1) < 2(m+1). So from the monotonicity properties of to-odd we have:

    2m < to-odd(x / 2^(e-p-1)) < 2(m+1)

And multiplying through by 2^(e-p-1) we get m * 2^(e-p) < y < (m+1) * 2^(e-p). So both x and y belong to the interval I = (m*2^(e-p), (m+1)*2^(e-p)). Furthermore, since 2^e <= x < (m+1) * 2^(e-p) we have 2^p <= m, so by the Lemma I is a jot or a subinterval of a jot, and rnd(y) = rnd(x).
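The proposition can be spot-checked numerically, using float(Fraction) as rnd for p = 53 (CPython converts Fraction to float with correct rounding, ties-to-even). to_odd and apply_principle are illustrative helpers, not code from the messages above.

```python
import random
from fractions import Fraction

P = 53  # binary64 precision

def to_odd(x):
    """to-odd: the identity on integers; otherwise the nearest odd integer."""
    n = x.numerator // x.denominator  # floor, x assumed positive
    if n == x:
        return n
    return n if n % 2 == 1 else n + 1

def apply_principle(x):
    """Return y = 2^(e-p-1) * to-odd(x / 2^(e-p-1)) for a valid choice of e."""
    # e = num.bit_length() - den.bit_length() - 1 guarantees 2**e <= x.
    e = x.numerator.bit_length() - x.denominator.bit_length() - 1
    q = Fraction(2) ** (e - P - 1)
    return q * to_odd(x / q)

random.seed(2021)
for _ in range(1000):
    x = Fraction(random.randrange(1, 10**30), random.randrange(1, 10**30))
    # rnd(y) should equal rnd(x) under round-ties-to-even.
    assert float(apply_principle(x)) == float(x)
```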
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: [Raymond] > [...] perhaps do an int/int division to match the other code path [...] Sure, works for me. -- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45902] Bytes and bytesarrays can be sorted with a much faster count sort.
Mark Dickinson added the comment:

> If there are enough use cases for it.

Well, that's the question. :-) *I* can't think of any good use cases, but maybe others can. But if we can't come up with some use cases, then this feels like a solution looking for a problem, and that makes it hard to justify both the short-term effort and the longer-term maintenance costs of adding the complexity.

FWIW, given a need to efficiently compute frequency tables for reasonably long byte data, I'd probably reach first for NumPy (and numpy.bincount in particular):

    Python 3.10.0 (default, Nov 12 2021, 12:32:57) [Clang 12.0.5 (clang-1205.0.22.11)]
    Type 'copyright', 'credits' or 'license' for more information
    IPython 7.28.0 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: import collections, numpy as np

    In [2]: t = b'MDIAIHHPWIRRPFFPFHSPSRLFDQFFGEHLLESDLFSTATSLSPFYLRPPSFLRAPSWIDTGLSEMRLEKDRFSVNLDVKHFSPEELKVKVLGDVIEVHGKHEERQDEHGFISREFHRKYRIPADVDPLAITSSLSSDGVLTVNGPRKQVSGPERTIPITREEKPAVAAAPKK'; t *= 100

    In [3]: %timeit np.bincount(np.frombuffer(t, np.uint8))
    32.7 µs ± 3.15 µs per loop (mean ± std. dev. of 7 runs, 1 loops each)

    In [4]: %timeit collections.Counter(t)
    702 µs ± 25.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

    In [5]: %timeit sorted(t)
    896 µs ± 64.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

-- ___ Python tracker <https://bugs.python.org/issue45902> ___
[issue45902] Bytes and bytesarrays can be sorted with a much faster count sort.
Mark Dickinson added the comment: (Changing the issue type: as I understand it, this is a proposal for a new feature, namely new methods bytes.sort and bytearray.sort, rather than a performance improvement to an existing feature.) -- type: performance -> enhancement ___ Python tracker <https://bugs.python.org/issue45902> ___
[issue45902] Bytes and bytesarrays can be sorted with a much faster count sort.
Mark Dickinson added the comment: Can you give example use-cases for sorting a bytes or bytearray object? I see value in the intermediate object - the frequency table, but the reconstructed sorted bytes object just seems like an inefficient representation of the frequency table, and I'm not sure how it would be useful. As the wikipedia page for counting sort says, the real value is in sorting items by keys that are small integers, and the special case where the item is identical to the key isn't all that useful:

> In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort.

-- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45902> ___
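For concreteness, the counting sort under discussion, specialised to bytes, can be sketched in a few lines; the counts list is exactly the frequency table mentioned above (counting_sort_bytes is an illustrative name, not a proposed API).

```python
def counting_sort_bytes(data: bytes) -> bytes:
    """Counting sort specialised to byte values (keys are 0..255)."""
    counts = [0] * 256          # the frequency table - arguably the useful part
    for b in data:              # iterating over bytes yields ints
        counts[b] += 1
    # Reconstruct the sorted sequence from the frequency table.
    return b"".join(bytes([value]) * count for value, count in enumerate(counts))

assert counting_sort_bytes(b"banana") == bytes(sorted(b"banana"))
```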
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: Related: https://stackoverflow.com/questions/32150888/should-ldexp-round-correctly -- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: There's also a potential double-rounding issue with ldexp, when the first argument is an int: ldexp(n, e) will first round n to a float, and then (again for results in the subnormal range) potentially also need to round the result.

    >>> n = 2**53 + 1
    >>> e = -1128
    >>> math.ldexp(n, e)
    0.0
    >>> n / (1 << -e)
    5e-324

I'm a bit (but only a bit) surprised and disappointed by the Windows issue; thanks, Tim. It seems to be okay on Mac (Intel, macOS 11.6.1):

    >>> import math
    >>> d = math.nextafter(0.0, 1.0)
    >>> d
    5e-324
    >>> d3 = 7 * d
    >>> d3
    3.5e-323
    >>> d3 / 4.0
    1e-323
    >>> math.ldexp(d3, -2)
    1e-323

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
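A sketch of a workaround for the int-argument double rounding just described: ldexp_int_exact is an illustrative helper name (not a stdlib function), relying on exact integer shifting and on int/int true division performing a single, correct rounding.

```python
import math

def ldexp_int_exact(n: int, e: int) -> float:
    """Correctly rounded n * 2**e for an int n.

    A sketch that sidesteps math.ldexp's double rounding (round n to float
    first, then scale). May raise OverflowError above the float range.
    """
    if e >= 0:
        return float(n << e)      # exact shift, then a single rounding
    return n / (1 << -e)          # int/int true division rounds once, correctly

n, e = 2**53 + 1, -1128
assert math.ldexp(n, e) == 0.0            # double-rounded: float(n) drops the low bit
assert ldexp_int_exact(n, e) == 5e-324    # single rounding gives the subnormal
```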
[issue45899] NameError on if clause of class-level list comprehension
Mark Dickinson added the comment: This is expected behaviour. See the docs here: https://docs.python.org/3.9/reference/executionmodel.html#resolution-of-names

> The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods – this includes comprehensions and generator expressions since they are implemented using a function scope.

-- nosy: +mark.dickinson resolution: -> not a bug stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue45899> ___
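The scoping rule quoted above can be seen directly in a class block (Demo is an illustrative example). Note that only the first iterable of the comprehension is evaluated in the class block; a lookup elsewhere in the comprehension runs in a function scope that skips class scope, which is what produced the NameError under the 3.9 semantics quoted above.

```python
class Demo:
    values = [1, 2, 3]
    # Works: "values" is the first iterable, evaluated in the class block.
    doubled = [v * 2 for v in values]
    # By contrast, a lookup *inside* the comprehension body runs in the
    # comprehension's function scope, which skips the class block; under the
    # 3.9 semantics quoted above this raised NameError:
    #     filtered = [v for v in range(5) if v in values]

assert Demo.doubled == [2, 4, 6]
```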
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: Here's a reference for this use of round-to-odd: https://www.lri.fr/~melquion/doc/05-imacs17_1-expose.pdf I'm happy to provide any proofs that anyone feels are needed. -- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue43475] Worst-case behaviour of hash collision with float NaN
Mark Dickinson added the comment: Just for fun: I gave a somewhat ranty 10-minute talk on this topic at a (virtual) conference a few months ago: https://www.youtube.com/watch?v=01oeosRVwgY -- ___ Python tracker <https://bugs.python.org/issue43475> ___
[issue43475] Worst-case behaviour of hash collision with float NaN
Mark Dickinson added the comment: @cwg: Yep, we're aware of this. There are no good solutions here - only a mass of constraints, compromises and trade-offs. I think we're already somewhere on the Pareto boundary of the "best we can do" given the constraints. Moving to another point on the boundary doesn't seem worth the code churn. What concrete action would you propose that the Python core devs take at this point?

> it was possible to convert a tuple of floats into a numpy array and back into a tuple, and the hash values of both tuples would be equal. This is no longer the case.

Sure, but the problem isn't really with hash; that's just a detail. It lies deeper than that - it's with containment itself:

    >>> import numpy as np
    >>> import math
    >>> x = math.nan
    >>> some_list = [1.5, 2.3, x]
    >>> x in some_list
    True
    >>> x in list(np.array(some_list))  # expect True, get False
    False

The result of the change linked to this PR is that the hash now also reflects that containment depends on object identity, not just object value. Reverting the change would solve the superficial hash problem, but not the underlying containment problem, and would re-introduce the performance issue that was fixed here.

-- ___ Python tracker <https://bugs.python.org/issue43475> ___
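The identity-before-equality rule for containment can also be shown without NumPy; a minimal sketch:

```python
import math

nan = float("nan")
xs = [1.5, 2.3, nan]

# NaN is not equal to itself...
assert math.nan != math.nan
# ...but list containment checks identity before equality, so the *same*
# NaN object is found:
assert nan in xs
# A distinct NaN object fails both the identity check and the == check:
assert float("nan") not in xs
```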
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

> am still studying the new one

Sorry - I should have added at least _some_ comments to it. Here's the underlying principle, which I think of as "The magic of round-to-odd":

Suppose x is a positive real number and e is an integer satisfying 2^e <= x. Then assuming IEEE 754 binary64 floating-point format, the quantity:

    2^(e-54) * to-odd(x / 2^(e-54))

rounds to the same value as x does under _any_ of the standard IEEE 754 rounding modes, including the round-ties-to-even rounding mode that we care about here.

Here, to-odd is the function R -> Z that maps each integer to itself, but maps each non-integral real number x to the *odd* integer nearest x. (So for example all of 2.1, 2.3, 2.9, 3.0, 3.1, 3.9 map to 3.)

This works because to-odd(x / 2^(e-54)) gives us an integer with at least 55 bits: the 53 bits we'll need in the eventual significand, a rounding bit, and then the to-odd supplies a "sticky" bit that records inexactness.

Note the principle works in the subnormal range too - no additional tricks are needed for that. In that case we just end up wastefully computing a few more bits than we actually _need_ to determine the correctly-rounded value.

The code applies this principle to the case x = sqrt(n/m) and e = (n.bit_length() - m.bit_length() - 1)//2. The condition 2^e <= x holds because:

    2^(n.bit_length() - 1) <= n
    m < 2^m.bit_length()

so

    2^(n.bit_length() - 1 - m.bit_length()) < n/m

and taking square roots gives

    2^((n.bit_length() - 1 - m.bit_length())/2) < √(n/m)

so taking e = (n.bit_length() - 1 - m.bit_length())//2, we have

    2^e <= 2^((n.bit_length() - 1 - m.bit_length())/2) < √(n/m)

Now putting q = e - 54, we need to compute 2^q * round-to-odd(√(n/m) / 2^q) rounded to a float. The two branches both do this computation, by moving 2^q into either the numerator or denominator of the fraction as appropriate depending on the sign of q.
-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: Hmm. isqrt_frac_rto is unnecessarily complicated. Here's a more streamlined version of the code.

    import math

    def isqrt_frac_rto(n, m):
        """
        Square root of n/m, rounded to the nearest integer using round-to-odd.
        """
        a = math.isqrt(n*m) // m
        return a | (a*a*m != n)

    def sqrt_frac(n, m):
        """
        Square root of n/m as a float, correctly rounded.
        """
        q = (n.bit_length() - m.bit_length() - 109) // 2
        if q >= 0:
            return float(isqrt_frac_rto(n, m << 2 * q) << q)
        else:
            return isqrt_frac_rto(n << -2 * q, m) / (1 << -q)

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
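One way to sanity-check the streamlined code is an exact midpoint test with Fraction arithmetic: a result is correctly rounded exactly when n/m lies between the squares of the midpoints surrounding it. This is a sketch; is_correctly_rounded is an illustrative helper, and the two functions from the message are restated so the snippet is self-contained (ties, where n/m equals a midpoint squared, are vanishingly rare for random inputs and ignored here).

```python
import math
import random
from fractions import Fraction

# Restated from the message above, so the check is self-contained.
def isqrt_frac_rto(n, m):
    a = math.isqrt(n * m) // m
    return a | (a * a * m != n)

def sqrt_frac(n, m):
    q = (n.bit_length() - m.bit_length() - 109) // 2
    if q >= 0:
        return float(isqrt_frac_rto(n, m << 2 * q) << q)
    else:
        return isqrt_frac_rto(n << -2 * q, m) / (1 << -q)

def is_correctly_rounded(result, n, m):
    """Exact check: n/m must lie between the squares of the midpoints
    on either side of result (ignoring the measure-zero tie cases)."""
    lo = (Fraction(math.nextafter(result, 0.0)) + Fraction(result)) / 2
    hi = (Fraction(result) + Fraction(math.nextafter(result, math.inf))) / 2
    return lo ** 2 <= Fraction(n, m) <= hi ** 2

random.seed(45876)
for _ in range(500):
    n = random.randrange(1, 10**40)
    m = random.randrange(1, 10**40)
    assert is_correctly_rounded(sqrt_frac(n, m), n, m)
```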
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: > Should the last line of sqrt_frac() be wrapped with float()? It's already a float - it's the result of an int / int division. -- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: Here's the float-and-Fraction-based code that I'm using to compare the integer-based code against:

    def sqrt_frac2(n, m):
        """
        Square root of n/m as a float, correctly rounded.
        """
        f = fractions.Fraction(n, m)
        # First approximation.
        x = math.sqrt(n / m)
        # Use the approximation to find a pair of floats bracketing the actual sqrt
        if fractions.Fraction(x)**2 >= f:
            x_lo, x_hi = math.nextafter(x, 0.0), x
        else:
            x_lo, x_hi = x, math.nextafter(x, math.inf)
        # Check the bracketing. If math.sqrt is correctly rounded (as it will be
        # on a typical machine), then the assert can't fail. But we can't rely on
        # math.sqrt being correctly rounded in general, so would need some fallback.
        fx_lo, fx_hi = fractions.Fraction(x_lo), fractions.Fraction(x_hi)
        assert fx_lo**2 <= f <= fx_hi**2
        # Compare true square root with the value halfway between the two floats.
        mid = (fx_lo + fx_hi) / 2
        if mid**2 < f:
            return x_hi
        elif mid**2 > f:
            return x_lo
        else:
            # Tricky case: mid**2 == f, so we need to choose the "even" endpoint.
            # Cheap trick: the addition in 0.5 * (x_lo + x_hi) will round to even.
            return 0.5 * (x_lo + x_hi)

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

> Does the technique you had in mind involve testing 1 ulp up or down to see whether its square is closer to the input?

Kinda sorta. Below is some code: it's essentially just pure integer operations, with a last minute conversion to float (implicit in the division in the case of the second branch). And it would need to be better tested, documented, and double-checked to be viable.

    def isqrt_rto(n):
        """
        Square root of n, rounded to the nearest integer using round-to-odd.
        """
        a = math.isqrt(n)
        return a | (a*a != n)

    def isqrt_frac_rto(n, m):
        """
        Square root of n/m, rounded to the nearest integer using round-to-odd.
        """
        quotient, remainder = divmod(isqrt_rto(4*n*m), 2*m)
        return quotient | bool(remainder)

    def sqrt_frac(n, m):
        """
        Square root of n/m as a float, correctly rounded.
        """
        quantum = (n.bit_length() - m.bit_length() - 1) // 2 - 54
        if quantum >= 0:
            return float(isqrt_frac_rto(n, m << 2 * quantum) << quantum)
        else:
            return isqrt_frac_rto(n << -2 * quantum, m) / (1 << -quantum)

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment:

> we've run the ball almost the full length of the field and then failed to put the ball over the goal line

But if we only go from "faithfully rounded" to "almost always correctly rounded", it seems to me that we're still a couple of millimetres away from that goal line. It wouldn't be hard to go for _always_ correctly rounded and actually get it over.

> Yes, the Emin and Emax for the default context is already almost big enough

I'm confused: big enough for what? I was thinking of the use-case where the inputs are all floats, in which case an Emax of 999 and an Emin of -999 would already be more than big enough.

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: One thought: would the deci_sqrt approach help with value ranges where the values are well within float limits, but the squares of the values are not? E.g., on my machine, I currently get errors for both of the following:

    >>> xs = [random.normalvariate(0.0, 1e200) for _ in range(10**6)]
    >>> statistics.stdev(xs)

    >>> xs = [random.normalvariate(0.0, 1e-200) for _ in range(10**6)]
    >>> statistics.stdev(xs)

It's hard to imagine that there are too many use-cases for values of this size, but it still feels a bit odd to be constrained to only half of the dynamic range of float.

-- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Mark Dickinson added the comment: I'm not sure this is worth worrying about. We already have a very tight error bound on the result: if `x` is a (positive) fraction and `y` is the closest float to x, (and assuming IEEE 754 binary64, round-ties-to-even, no overflow or underflow, etc.) then `math.sqrt(y)` will be in error by strictly less than 1 ulp from the true value √x, so we're already faithfully rounded. (And in particular, if the std. dev. is exactly representable as a float, this guarantees that we'll get that standard deviation exactly.) -- ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue45876] Improve accuracy of stdev functions in statistics
Change by Mark Dickinson : -- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45876> ___
[issue32582] chr raises OverflowError
Change by Mark Dickinson : -- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue32582> ___
[issue45862] Anomaly of eval() of list comprehension
Mark Dickinson added the comment: True: there's another detail here that's needed to explain the behaviour. The first "for" clause in a list comprehension is special: it's evaluated in the enclosing scope, rather than in the local function scope that the list comprehension creates. See the docs here: https://docs.python.org/3.9/reference/expressions.html?highlight=list%20comprehension#displays-for-lists-sets-and-dictionaries -- ___ Python tracker <https://bugs.python.org/issue45862> ___
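A minimal illustration of the special first "for" clause, using eval() with separate globals and locals mappings (the snippet is illustrative only):

```python
data = [1, 2, 3]

# The first iterable ("data") is evaluated in eval()'s own scope, so the
# locals mapping supplies it and the comprehension works:
assert eval("[x * 2 for x in data]", {}, {"data": data}) == [2, 4, 6]

# A name used elsewhere in the comprehension is resolved in the comprehension's
# function scope, which falls back to the globals mapping only -- so on the
# interpreter versions discussed here this raised NameError:
#     eval("[x for x in data if x in data]", {}, {"data": data})
```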
[issue45862] Anomaly of eval() of list comprehension
Mark Dickinson added the comment: See also #41216 -- ___ Python tracker <https://bugs.python.org/issue45862> ___
[issue45862] Anomaly of eval() of list comprehension
Mark Dickinson added the comment: Thanks for the report. The behaviour is by design: see #5242 (especially msg81898) for an explanation. Closing this issue as a duplicate of #5242. -- nosy: +mark.dickinson resolution: -> duplicate stage: -> resolved status: open -> closed superseder: -> eval() function in List Comprehension doesn't work ___ Python tracker <https://bugs.python.org/issue45862> ___
[issue45784] spam
Change by Mark Dickinson : -- title: SAP HANA Training in Chennai -> spam ___ Python tracker <https://bugs.python.org/issue45784> ___
[issue45784] SAP HANA Training in Chennai
Change by Mark Dickinson : -- Removed message: https://bugs.python.org/msg406152 ___ Python tracker <https://bugs.python.org/issue45784> ___
[issue45776] abc submodule not an attribute of collections on Python 3.10.0 on Windows
Mark Dickinson added the comment: On Mac, collections.abc is imported at startup time via site.py (which imports rlcompleter, which imports inspect, which imports collections.abc). I'd guess it's the same on Linux.

    mdickinson@mirzakhani cpython % ./python.exe
    Python 3.11.0a2+ (heads/main:76d14fac72, Nov 10 2021, 15:43:54) [Clang 13.0.0 (clang-1300.0.29.3)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys; "collections.abc" in sys.modules
    True
    >>> ^D
    mdickinson@mirzakhani cpython % ./python.exe -S
    Python 3.11.0a2+ (heads/main:76d14fac72, Nov 10 2021, 15:43:54) [Clang 13.0.0 (clang-1300.0.29.3)] on darwin
    >>> import sys; "collections.abc" in sys.modules
    False
    >>> ^D

-- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45776> ___
[issue45689] Add the ability to give custom names to threads created by ThreadPoolExecutor
Mark Dickinson added the comment: Sorry Rahul: I'm not the right person to help you push this forward.

I'm sympathetic to the problem: I've encountered similar issues in "Real Code", where we needed to associate log outputs generated by worker pool threads with the actual tasks that generated those logs. But I'm not convinced either that setting the thread name is the right mechanism to get that association (it doesn't extend nicely to other executor types, for example), or that the API you propose is the right one (I find the duplication between `submit` and `submit_with_name` to be a bit much).

I'm wondering whether there's some way that executors could use contextvars to provide a per-task context. Then a task "name" could potentially be part of that context, and possibly you could write a custom log handler that read the name from the context when emitting log messages.

If you want to find someone to help push this forward, it may be worth posting on the python-ideas mailing list to get discussion going.

-- ___ Python tracker <https://bugs.python.org/issue45689> ___
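As a rough sketch of the contextvars idea (task_name and submit_named are hypothetical names, not an existing executor API):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-task context variable; a log handler could read it.
task_name = contextvars.ContextVar("task_name", default="<unnamed>")

def submit_named(executor, name, fn, *args, **kwargs):
    """Sketch: attach a per-task name via a ContextVar instead of renaming threads."""
    def wrapper():
        token = task_name.set(name)
        try:
            # A custom log handler could call task_name.get() while fn runs.
            return fn(*args, **kwargs)
        finally:
            task_name.reset(token)  # don't leak the name into the next task
    return executor.submit(wrapper)

def work():
    return task_name.get()

with ThreadPoolExecutor(max_workers=2) as ex:
    assert submit_named(ex, "ingest-batch-7", work).result() == "ingest-batch-7"
    assert ex.submit(work).result() == "<unnamed>"  # plain submit is unaffected
```

Unlike renaming threads, this extends to any executor whose tasks run where a ContextVar is visible.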
[issue45689] Add the ability to give custom names to threads created by ThreadPoolExecutor
Mark Dickinson added the comment:

> previously one could write .submit(function_name, *args, **kwargs)
> but now one should write .submit(function_name, name_of_thread, *args, **kwargs)
> name_of_thread can be None

This approach can't work, I'm afraid: it would be a backwards-incompatible change to the `submit` method signature, and would break many existing uses of submit.

-- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45689> ___
[issue45739] The Python implementation of Decimal does not support the "N" format
Mark Dickinson added the comment: Interesting. I think the behaviour of the Python implementation is actually more correct here: neither `int` nor `float` supports 'N', and I'm not seeing any indication in tests or documentation that 'N' should be supported. So is this a bug in libmpdec, or a missing feature in the Python implementation? (Either way, it's definitely a bug that the two aren't aligned.)

    >>> format(123, 'n')
    '123'
    >>> format(123, 'N')
    Traceback (most recent call last):
      File "", line 1, in
    ValueError: Unknown format code 'N' for object of type 'int'

-- nosy: +eric.smith ___ Python tracker <https://bugs.python.org/issue45739> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Mark Dickinson added the comment: Christian Heimes pointed out in the PR discussion that we can't simply modify libmpdec, since some vendors unbundle the mpdecimal library. So some options are:

0. Do nothing.
1. Request that this feature be added upstream, so that it eventually makes its way into core Python.
2. Bypass mpd_parse_fmt_str and do our own format string parsing in _decimal.c (e.g., by copying and adapting the code in mpdecimal).
3. Wrap mpd_parse_fmt_str and do our own pre- and post-processing in _decimal.c (pre-process to convert "_" to "," in the format string, then post-process the formatted string to convert "," back to "_").

Option 2 makes sense to me from the point of view of separation of concerns: libmpdec aims to implement Cowlishaw's specification, and formatting lies outside of that specification. The decimal specification is pretty much set in stone, but the formatting mini-language could change again in the future, and when that happens we should be able to update the CPython code accordingly. (This brings to mind Robert Martin's Single Responsibility Principle: "Gather together those things that change for the same reason, and separate those things that change for different reasons.")

I've updated the PR (and turned it into a draft) to show what option 2 looks like. The duplication is a little ugly.

-- nosy: +christian.heimes ___ Python tracker <https://bugs.python.org/issue45708> ___
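For illustration, option 3's pre- and post-processing can be sketched in pure Python (format_with_underscores is a hypothetical helper; it assumes the only '_' or ',' characters in play are the thousands separator - no ',' fill character, for example):

```python
from decimal import Decimal

def format_with_underscores(d: Decimal, spec: str) -> str:
    """Sketch of option 3: translate '_' to ',' for the underlying formatter,
    then translate the separators in the output back.

    Assumes the only '_'/',' involved is the thousands separator.
    """
    if "_" in spec:
        # Decimal already supports the ","-form separator, so lean on that.
        return format(d, spec.replace("_", ",")).replace(",", "_")
    return format(d, spec)

assert format_with_underscores(Decimal("1234567.89"), "_.2f") == "1_234_567.89"
```

The fragility of the stated assumptions is one reason option 2 (doing the format-spec parsing properly in _decimal.c) is more attractive.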
[issue45392] docstring of "type" could use an update
Change by Mark Dickinson : -- keywords: +patch pull_requests: +27693 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29439 ___ Python tracker <https://bugs.python.org/issue45392> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Mark Dickinson added the comment:

> It looks like quite similar changes have already been made:

Yes, I think this isn't something that needs to be resolved for this issue, but it is something we need to think about. (Though perhaps the resolution is just "Don't worry about it until we need to.")

> I will send a PR, so we can see what exactly it touches / changes.

Ah, sorry; I already made one before reading your message. I'd be happy to get your input on that PR, though. (Or to review a PR from you.)

-- ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Change by Mark Dickinson : -- keywords: +patch pull_requests: +27692 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29438 ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Mark Dickinson added the comment:

Serhiy: this is not a duplicate of #43624. That issue is about underscores in the *fractional* part of a (float / complex / Decimal) number, and the changes to the formatting mini-language syntax that would be necessary to support that. This issue is simply about bringing Decimal into line with int and float and allowing inclusion of underscores in the *integral* part of the formatted result.

Raymond: the "General Decimal Arithmetic" specification that the decimal module is based on isn't relevant here. It has nothing to say on the subject of formatting. We moved beyond the specification the moment we allowed `format(some_decimal, 'f')`, let alone `format(some_decimal, '.3f')` or `format(some_decimal, ',')`. As Sander Bollen noted, we've already added ","-form thousands separators to Decimal formatting. I can't see any good reason for supporting "," but not supporting "_" as a thousands separator for Decimal.

-- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Change by Mark Dickinson : -- nosy: -mark.dickinson ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Mark Dickinson added the comment:

> whether Decimal should extend beyond the specification in this case

We already go way beyond the original specification for string formatting. The spec doesn't go further than specifying to-scientific-string and to-engineering-string, neither of which even gets into limiting precision.

-- ___ Python tracker <https://bugs.python.org/issue45708> ___
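[Editor's note: to make this concrete (examples added here, not from the thread): the spec's two conversions surface as `str()` and `Decimal.to_eng_string()`, while precision-limiting and separators are CPython extensions.]

```python
from decimal import Decimal

d = Decimal("123E+2")  # the value 12300, held in coefficient/exponent form

# The only two conversions the specification defines:
print(str(d))                  # to-scientific-string: 1.23E+4
print(d.to_eng_string())       # to-engineering-string: 12.3E+3

# CPython's format() support already goes beyond the spec:
print(format(Decimal("1234.5678"), ".2f"))  # precision limiting: 1234.57
print(format(Decimal("1234.5678"), ","))    # thousands separators: 1,234.5678
```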
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Mark Dickinson added the comment: I think the two main reasons for not implementing the parsing part of PEP 515 for the Decimal type (speed, and compliance with the IBM specification) don't apply to the formatting side. We do need to think about the implications of making local changes to our copy of the externally-maintained libmpdec library, though. Changing Python versions: this is a new feature, so could only go into Python 3.11. -- type: -> enhancement versions: +Python 3.11 -Python 3.10, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45708] PEP 515-style formatting with underscores does not seem to work for Decimal
Change by Mark Dickinson : -- nosy: +eric.smith, mark.dickinson ___ Python tracker <https://bugs.python.org/issue45708> ___
[issue45702] Python/dtoa.c requires 53 bit hardware rounding unavalable on x64
Mark Dickinson added the comment: I'm not sure I understand the problem that you're reporting - what issues are you seeing in practice? x64 should be fine here. In normal circumstances, the compiled version of dtoa.c will be using SSE2 instructions and will already be doing floating-point arithmetic at 53-bit precision (not 56-bit), following IEEE 754. It's mainly the ancient x86/x87 hardware that's problematic. This code has been working well on many different x64 platforms for around a decade now. Can you describe the issue that you're seeing in more detail? -- ___ Python tracker <https://bugs.python.org/issue45702> ___
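[Editor's note: a quick illustration (added here) of what 53-bit correct rounding buys: dtoa.c is what gives CPython correctly-rounded string-to-float conversions and shortest round-tripping reprs.]

```python
# On a correctly-rounded build (which includes normal x64/SSE2 builds),
# repr() yields the shortest decimal string that round-trips exactly:
x = 1 / 3
assert float(repr(x)) == x          # exact round trip

print(repr(0.1))                    # shortest repr: 0.1

# Doubles carry a 53-bit significand, so 2**53 + 1 is not representable
# and rounds back down under round-half-to-even:
print(2 ** 53 + 1.0 == 2.0 ** 53)   # True
```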
[issue45555] Object stays alive for weak reference if an exception happens in constructor
Mark Dickinson added the comment: I don't think this is a bug: there's still a reference to the `A` instance in `sys.exc_info()` (specifically, in the exception traceback) in this case, so that instance is still alive. If you add an `except: pass` clause to your `try / finally`, you should see that dereferencing the weakref gives `None` in the `finally` clause. -- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45555> ___
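[Editor's note: a minimal sketch (added here, with a hypothetical class modelled on the report) of the mechanism: the in-flight exception's traceback keeps the half-constructed instance reachable until the exception is cleared.]

```python
import gc
import weakref

refs = []

class A:
    def __init__(self):
        refs.append(weakref.ref(self))   # capture a weak reference to self
        raise ValueError("constructor failed")

try:
    A()
except ValueError:
    # While the exception is being handled, its traceback references the
    # __init__ frame, whose locals include the half-built instance.
    assert refs[0]() is not None

# Once the handler exits, the exception (and its traceback) is cleared,
# so nothing keeps the instance alive any more.
gc.collect()
assert refs[0]() is None
```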
[issue45569] Drop support for 15-bit PyLong digits?
New submission from Mark Dickinson :

Looking at issue #35037 (which is a compatibility issue having to do with PYLONG_BITS_IN_DIGIT), I'm wondering whether it would make sense to drop support for 15-bit PyLong digits altogether. This would simplify some of the code, eliminate a configuration option, and eliminate the scope for ABI mismatches like that occurring in #35037.

There were a couple of reasons that we kept support for 15-bit digits when 30-bit digits were introduced, back in #4258.

- At the time, we wanted to use `long long` for the `twodigits` type with 30-bit digits, and we couldn't guarantee that all platforms we cared about would have `long long` or another 64-bit integer type available.
- It wasn't clear whether there were systems where using 30-bit digits in place of 15-bit digits might cause a performance regression.

Now that we can safely rely on C99 support on all platforms we care about, the existence of a 64-bit integer type isn't an issue (and indeed, we already rely on the existence of such a type in dtoa.c and elsewhere in the codebase). As to performance, I doubt that there are still platforms where using 15-bit digits gives a performance advantage, but I admit I haven't checked.

-- messages: 404746 nosy: mark.dickinson priority: normal severity: normal status: open title: Drop support for 15-bit PyLong digits? ___ Python tracker <https://bugs.python.org/issue45569> ___
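[Editor's note: for reference, the documented `sys.int_info` attribute reports which digit configuration a given CPython build was compiled with.]

```python
import sys

# bits_per_digit is 15 or 30 depending on PYLONG_BITS_IN_DIGIT;
# sizeof_digit is the size in bytes of the C digit type (2 or 4).
print(sys.int_info)
print(sys.int_info.bits_per_digit)   # 30 on typical modern builds
```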
[issue35037] PYLONG_BITS_IN_DIGIT differs between MinGW and MSVC
Mark Dickinson added the comment: This should probably be a separate issue, but I wonder whether the 15-bit digit option has value any more. Should we just drop that option and always use 30-bit digits? 30-bit digits were introduced at a time when we couldn't rely on a 64-bit integer type always being available, so we still needed to keep the 15-bit fallback option. But I think that's no longer the case. -- ___ Python tracker <https://bugs.python.org/issue35037> ___
[issue15996] pow() for complex numbers is rough around the edges
Mark Dickinson added the comment: See also discussion in #44970, which is closed as a duplicate of this issue. -- ___ Python tracker <https://bugs.python.org/issue15996> ___
[issue44970] Re-examine complex pow special case handling
Mark Dickinson added the comment:

> Is not it a duplicate of issue15996?

Yes, I think it's close enough. Thanks.

-- resolution: -> duplicate stage: -> resolved status: open -> closed superseder: -> pow() for complex numbers is rough around the edges ___ Python tracker <https://bugs.python.org/issue44970> ___
[issue25934] ICC compiler: ICC treats denormal floating point numbers as 0.0
Mark Dickinson added the comment:

> Closing this as out of date.

SGTM. Thanks.

-- ___ Python tracker <https://bugs.python.org/issue25934> ___
[issue45476] [C API] Convert "AS" functions, like PyFloat_AS_DOUBLE(), to static inline functions
Change by Mark Dickinson : -- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45476> ___
[issue45412] [C API] Remove Py_OVERFLOWED(), Py_SET_ERRNO_ON_MATH_ERROR(), Py_ADJUST_ERANGE1()
Mark Dickinson added the comment: +1 for the removals. (We should fix #44970 too, but as you say that's a separate issue. And I suspect that the Py_ADJUST_ERANGE1() use for float pow should be replaced, too.) -- ___ Python tracker <https://bugs.python.org/issue45412> ___
[issue45392] docstring of "type" could use an update
Mark Dickinson added the comment: Larry: the first line was introduced in #20189. Does it still make sense to keep it at this point? -- nosy: +larry ___ Python tracker <https://bugs.python.org/issue45392> ___
[issue45392] docstring of "type" could use an update
New submission from Mark Dickinson :

The docstring of the "type" builtin is mildly confusing. Here's what the first few lines of the output for `help(type)` look like for me (on Python 3.10.0rc2):

class type(object)
 |  type(object_or_name, bases, dict)
 |  type(object) -> the object's type
 |  type(name, bases, dict) -> a new type

The first line there seems redundant, and potentially misleading, since it suggests that `type(object, bases, dict)` might be legal. The third line is missing mention of possible keyword arguments.

-- messages: 403302 nosy: mark.dickinson priority: normal severity: normal status: open title: docstring of "type" could use an update ___ Python tracker <https://bugs.python.org/issue45392> ___
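[Editor's note: examples added here to spell out the two call patterns the docstring is trying to summarize.]

```python
# One argument: report the object's type.
assert type(42) is int

# Three arguments: create a new class dynamically. Extra keyword
# arguments (which the docstring's third line doesn't mention) are
# forwarded to the metaclass machinery.
Point = type("Point", (object,), {"x": 1, "y": 2})
assert Point.x == 1 and isinstance(Point, type)

# The hybrid the first summary line seems to permit is not legal:
# type(42, (object,), {}) raises TypeError, since the name must be a str.
```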
[issue45333] += operator and accessors bug?
Mark Dickinson added the comment: Did you by any chance get an error message resembling the following?

> "Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'"

(If you can give us a complete piece of code that we can run ourselves, that would save us from having to guess what the issue is.)

-- nosy: +mark.dickinson ___ Python tracker <https://bugs.python.org/issue45333> ___
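[Editor's note: the quoted message is NumPy's in-place casting error; here is a guess (added, assuming the reporter was using NumPy arrays) at a minimal reproduction.]

```python
import numpy as np

a = np.array([1, 2, 3])   # integer dtype
b = a + 0.5               # fine: builds a *new* float64 array

try:
    a += 0.5              # must write float64 results back into the integer
                          # buffer, which 'same_kind' casting forbids
except TypeError as exc:  # NumPy's UFuncTypeError subclasses TypeError
    print(exc)
```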