This is way above my head. :-) The only requirement *I* would like to see is that for floats that exactly represent ints (or longs, for that matter), the result of x % y ought to have the same value as the same operation on the corresponding ints (except if the result can't be represented exactly as a float -- I don't know what's best then).
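Concretely, the property asked for here can be sketched as a quick check (an illustrative sketch, not from the thread; it holds in current Python for small, exactly representable values, where `%` on ints and floats happens to agree):

```python
# For floats that exactly represent ints, a % b should give the same
# value as the corresponding int operation.  Both int % and float %
# use the sign-of-divisor convention, so these agree.
for a in (7, -7, 123456):
    for b in (3, -3, 10):
        assert float(a) % float(b) == float(a % b)
```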
We're fixing this for / in Py3k, so passing an int into an algorithm written for floats won't be harmful and won't require defensive float() casting everywhere. It would be a shame if we *introduced* a new difference between ints and floats for %.

--Guido

On 5/2/06, Tim Peters <[EMAIL PROTECTED]> wrote:
> [Andrew Koenig, on the counterintuitive -1e-050 % 2.0 == 2.0 example]
> >> I disagree. For any two floating-point numbers a and b, with b != 0, it
> >> is always possible to represent the exact value of a mod b as a
> >> floating-point number--at least on every floating-point system I have
> >> ever encountered. The implementation is not even that difficult.
>
> [also Andrew]
> > Oops... This statement is true for the Fortran definition of modulus
> > (result has the sign of the dividend) but not the Python definition
> > (result has the sign of the divisor). In the Python world, it's true
> > only when the dividend and divisor have the same sign.
>
> Note that you can have it in Python too, by using math.fmod(a, b)
> instead of "a % b".
>
> IMO, it was a mistake (and partly my fault cuz I didn't whine early)
> for Python to try to define % the same way for ints and floats. The
> hardware realities are too different, and so are the pragmatics. For
> floats, it's actually most useful most often to have both that a % b
> is exact and that 0.0 <= abs(a % b) <= abs(b/2). Then the sign of a % b
> bears no relationship to the signs of a and b, but for purposes of
> modular reduction it yields the result with the smallest possible
> absolute value. That's often valuable for floats (e.g., think of
> argument reduction feeding into a series expansion, where time to
> convergence typically depends on the magnitude of the input and
> couldn't care less about the input's sign), but rarely useful for
> ints.
>
> I'd like to see this change in Python 3000.
> Note that IBM's proposed standard for decimal arithmetic (which Python's
> "decimal" module implements) requires two operations here, one that works
> like math.fmod(a, b) (exact and sign of a), and the other as described
> above (exact and |a%b| <= |b/2|). Those are really the only sane
> definitions for a floating-point modulo/remainder.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)