I should note that numeric code "that works" is often much subtler than it appears at first glance. So, for educational purposes, I'll point out some of what _wasn't_ said about this crucial function:
[Tim]
> import mpmath
> from math import fmod
>
> # Return (n, x) such that:
> # 1. d degrees is equivalent to x + n*(pi/2) radians.
> # 2. x is an mpmath float in [-pi/4, pi/4].
> # 3. n is an integer in range(4).
> # There is one potential rounding error, when mpmath.radians() is
> # used to convert a number of degrees between -45 and 45.  This is
> # done using the current mpmath precision.
> def treduce(d):
>     d = fmod(d, 360.0)
>     n = round(d / 90.0)
>     assert -4 <= n <= 4
>     d -= n * 90.0
>     assert -45.0 <= d <= 45.0
>     return n & 3, mpmath.radians(d)

How do we know there is at most one rounding error in that?  No, it's
not obvious.

That `fmod` is exact is guaranteed by the relevant standards, but most
people who write a libm don't get it right at first.  There is no
"cheap" way to implement it correctly.  It requires getting the effect
of doing exact integer division on, potentially, multi-thousand bit
integers.  Assuming x > y > 0, a correct implementation of fmod(x, y)
ends up in a loop that goes around a number of times roughly equal to
log2(x/y), simulating one-bit-at-a-time long division.  For example,
here's glibc's implementation:

https://github.com/bminor/glibc/blob/master/sysdeps/ieee754/dbl-64/e_fmod.c

Don't expect that to be easy to follow either ;-)  (A toy Python
rendition of the shift-and-subtract idea is sketched at the end of
this message.)

Then how do we know that `d -= n * 90.0` is exact?  That's not obvious
either.  It follows from the "Sterbenz lemma", one version of which:
if x and y are non-zero floats of the same sign within a factor of 2
of each other,

    1/2 <= x/y <= 2 (mathematically)

then x-y is exactly representable as a float too.  This is true
regardless of the floating-point base, or of the rounding mode in use.
In IEEE-754, it doesn't even need weasel words to exempt underflowing
cases.  (A quick randomized check of the lemma also appears at the end
of this message.)

That lemma needs to be applied by cases, for each of the possible
values of (the integer) `n`.  It gets closest to failing for |n| = 1.
For example, if d is a tiny bit larger than 45, n is 1, and then d/90
is (mathematically) very close to 1/2.

Which is another thing that needs to be shown:  "if d is a tiny bit
larger than 45, n is 1".  Why?  It's certainly true if we were using
infinite precision, but we're not.  The smallest representable double
> 45 is 45 + 2**-47:

>>> d = 45 + 2**-47
>>> d
45.00000000000001
>>> _.hex()
'0x1.6800000000001p+5'

Then d/90.0 (the argument to round()) is, with infinite precision,

    (45 + 2**-47)/90 = 0.5 + 2**-47/90

1 ULP with respect to 0.5 is 2**-53, so that in turn is equal to

    0.5 + 2**-53/(90/64) =
    0.5 + (64/90)*2**-53 =
    0.5 + 0.71111111111... * 2**-53

Because the tail (0.711...) is greater than 0.5 ULP, it rounds up
under nearest-even rounding, to 0.5 + 2**-53:

>>> d / 90
0.5000000000000001
>>> 0.5 + 2**-53
0.5000000000000001

and so Python's round() rounds it up to 1:

>>> round(_)
1

Note that it would _not_ be true if truncating "rounding" were done,
so round-to-nearest is a hidden assumption in the code.

Similar analysis needs to be done at values near the boundaries around
all possible values of `n`; a brute-force scan of those boundaries is
sketched at the end of this message.

That `assert -45.0 <= d <= 45.0` can't fail then follows from all of
that.

In all, a proof that the code is correct is much longer than the code
itself.  That's typical.  Alas, it's also typical that math library
sources rarely point out the subtleties.
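To make the long-division point concrete, here is a toy Python
rendition of the shift-and-subtract idea.  To be clear, this is my own
illustrative sketch, assuming positive finite double inputs; it is not
glibc's algorithm, which works on the raw bit patterns and must also
handle zeros, signs, infinities, and NaNs:

    import math

    def fmod_sketch(x, y):
        # Illustrative only: assumes x > y > 0 and both are finite.
        assert math.isfinite(x) and x > y > 0
        # Decompose each double into integer significand * power of 2,
        # so that v == m * 2**e exactly.
        mx, ex = math.frexp(x)              # x == mx * 2**ex, 0.5 <= mx < 1
        my, ey = math.frexp(y)
        mx, ex = int(mx * 2**53), ex - 53   # still x == mx * 2**ex, exactly
        my, ey = int(my * 2**53), ey - 53
        # x > y > 0 forces ex >= ey.  Simulate one-bit-at-a-time long
        # division: reduce the partial remainder mod my, then bring
        # down the next bit - about log2(x/y) iterations in all.
        while ex > ey:
            mx %= my
            mx <<= 1
            ex -= 1
        mx %= my
        # The exact remainder of two doubles is always representable,
        # so this ldexp does not round.
        return math.ldexp(mx, ex)

A spot check, which should agree bit-for-bit with the libm version
since both compute the exact remainder:

    >>> fmod_sketch(1e308, 360.0) == math.fmod(1e308, 360.0)
    True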
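The Sterbenz lemma is also easy to probe empirically.  Here's a small
randomized check of my own (evidence, not a proof): it generates pairs
satisfying the lemma's hypothesis across a wide exponent range, then
uses exact rational arithmetic to confirm the subtraction never
rounds:

    import random
    from fractions import Fraction
    from math import ldexp

    for _ in range(100_000):
        # A positive float with significand in [1, 2) and a wide
        # exponent range.
        y = ldexp(1.0 + random.random(), random.randrange(-600, 600))
        x = y * random.uniform(0.5, 2.0)   # this multiply may round, so ...
        # ... re-verify the hypothesis on the rounded value, exactly.
        if not Fraction(1, 2) <= Fraction(x) / Fraction(y) <= 2:
            continue
        # The lemma's conclusion: the float subtraction is exact.
        assert Fraction(x) - Fraction(y) == Fraction(x - y)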
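Finally, the promised boundary scan: a brute-force check of my own
over the nearest doubles on each side of every multiple of 45 that
`fmod(d, 360.0)` can produce (math.nextafter needs Python 3.9+).  It
exercises exactly the near-failure cases analyzed above:

    import math

    for k in range(-8, 9):        # multiples of 45 in [-360, 360]
        center = 45.0 * k         # exact: a small integer times 45.0
        for d in (math.nextafter(center, -math.inf),
                  center,
                  math.nextafter(center, math.inf)):
            n = round(d / 90.0)
            r = d - n * 90.0
            assert -4 <= n <= 4 and -45.0 <= r <= 45.0, (d, n, r)

Three neighbors per boundary is nowhere near a proof, but the scan
would catch a botched rounding assumption: replace round() with int()
(truncation) and the cases just above 45 fail at once.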