Tim Peters <t...@python.org> added the comment:

For the first, your hardware's binary floating-point has no concept of 
significant trailing zeroes. If you need such a thing, use Python's `decimal` 
module instead, which does support a "significant trailing zero" concept. You 
would need an entirely new data type to graft such a notion onto Python's (or 
numpy's!) binary floats.
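
For example, a minimal sketch of what that looks like with `decimal` (nothing 
here is specific to your code - it's just the module's standard behavior):

>>> from decimal import Decimal
>>> Decimal("0.2500")                     # trailing zeroes are remembered
Decimal('0.2500')
>>> Decimal("0.2500") == Decimal("0.25")  # but don't change the value
True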

For the second, we'd have to dig into exactly what numpy's `arange()` does. 
Very few of the numbers you're working with are exactly representable in binary 
floating point (0.0 is one of the rare exceptions). For example, "0.001" is 
approximated by a binary float whose exact decimal value is

0.001000000000000000020816681711721685132943093776702880859375
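
You can check that yourself - converting a float to `decimal.Decimal` is exact:

>>> import decimal
>>> decimal.Decimal(0.001)
Decimal('0.001000000000000000020816681711721685132943093776702880859375')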

Sometimes the multiples of that, as rounded by machine float arithmetic, are 
exactly representable, but usually not. For example,

>>> 0.001 * 250
0.25

rounds to the exactly representable 1/4, and

>>> 0.001 * 750
0.75

to the exactly representable 3/4. However, `round()` uses 
round-to-nearest/even, and then

>>> round(0.25, 1)
0.2
>>> round(0.75, 1)
0.8

both resolve the tie to the closest even value. (Neither of those _results_ is 
exactly representable in binary floating-point, although if you go on to 
multiply them by 10.0, they do round (in hardware) to exactly 2.0 and 8.0.)
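
If you want to see the exact values at play, `decimal` again makes them visible 
(the outputs below assume IEEE-754 doubles, which is what essentially all 
current hardware uses):

>>> import decimal
>>> decimal.Decimal(round(0.25, 1))  # the "0.2" result isn't exactly 0.2
Decimal('0.200000000000000011102230246251565404236316680908203125')
>>> round(0.25, 1) * 10.0            # but multiplying by 10.0 rounds to exactly 2.0
2.0
>>> round(0.75, 1) * 10.0            # and this one to exactly 8.0
8.0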

Note that numpy's arange() docs do warn you against using it ;-)

"""
When using a non-integer step, such as 0.1, the results will often not be 
consistent. It is better to use numpy.linspace for these cases.
"""

----------
nosy: +tim.peters

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41198>
_______________________________________