Hi everyone

>> Of course, for a lot of numbers, the decimal representation is simpler, and 
>> just as accurate as the radix-2 hexadecimal representation.
>> But, due to the radix-10 and radix-2 used in the two representations, the 
>> radix-2 may be much easier to use.
> 
> Hex is radix 16, not radix 2 (binary).
Of course, hex is radix-16!
I was talking about radix-2 because all the exactness problems arise when 
converting between binary and decimal, and hex can be seen as an (exact) 
compact way to express binary, which is what we want (to build literal floats 
in exactly the way they are stored internally, and to export them exactly in a 
compact way).
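A quick illustration of that point (a sketch in CPython; float.hex() and float.fromhex() are the existing string-based interfaces):

```python
# float.hex() renders the stored binary value exactly, while repr() only
# gives the shortest decimal string that round-trips.
x = 0.1
print(repr(x))   # shortest round-tripping decimal: 0.1
print(x.hex())   # exact radix-2 hexadecimal form: 0x1.999999999999ap-4

# The hex form is a lossless, compact rendering of the internal binary value.
assert float.fromhex(x.hex()) == x
```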


> 
>> In the "Handbook of Floating-Point Arithmetic" (J.-M. Muller et al., 
>> Birkhauser, page 40), the authors claim that the largest exact decimal 
>> representation of a double-precision floating-point number requires 767 digits!!
>> So it does not always take just a few characters to be just as accurate!!
>> For example (this is the largest exact decimal representation of a 
>> single-precision 32-bit float):
>>> 1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38
>> and
>>> 0x1.fffffc0000000p-127
>> are exactly the same number (one in decimal representation, the other in 
>> radix-2 hexadecimal)!
> 
> That may be so, but that doesn't mean you have to type all 100+ digits 
> in order to reproduce the float exactly. Just 1.1754942106924411e-38 is 
> sufficient:
> 
> py> 1.1754942106924411e-38 == float.fromhex('0x1.fffffc0000000p-127')
> True
> 
> You may be mistaking two different questions:
> 
> (1) How many decimal digits are needed to exactly convert the float to 
> decimal? That can be over 100 for a C single, and over 700 for a double.
> 
> (2) How many decimal digits are needed to uniquely represent the float? 
> Nine digits (plus an exponent) is enough to represent all possible C 
> singles; 17 digits is enough to represent all doubles (Python floats).

You're absolutely right: 1.1754942106924411e-38 is enough to *reproduce* the 
float exactly, BUT it is still different from 0x1.fffffc0000000p-127 (or its 
112-digit decimal representation), because 1.1754942106924411e-38 is rounded 
at compile time to 0x1.fffffc0000000p-127 (so exactly to 
1.17549421069244107548702944484928734882705242874589333385717453057158887047561890426550235133618116378784179687e-38
 in decimal).

So 17 digits are enough to reach each double, after the compile-time 
quantization. But "explicit is better than implicit", as someone once said ;-), 
so on some particular occasions I prefer to explicitly express the 
floating-point number I want (like 0x1.fffffc0000000p-127), rather than hoping 
that the quantization of my decimal number (1.1754942106924411e-38) will 
produce the right floating-point value (0x1.fffffc0000000p-127).
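Both claims are easy to check interactively (using Decimal, which converts a float to its exact decimal value):

```python
from decimal import Decimal

x = float.fromhex('0x1.fffffc0000000p-127')

# 17 significant decimal digits are enough to *reproduce* the double...
assert 1.1754942106924411e-38 == x

# ...but the value actually stored expands to the full 112-digit decimal
# quoted above (Decimal(float) is an exact conversion, no rounding).
print(Decimal(x))
```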

And that's one of the reasons why the hexadecimal floating-point representation 
exists: 
- as a way to *exactly* export floating-point numbers without any doubt (it 
gives a compact form of the internal binary representation)
- as a way to *explicitly* and *exactly* specify some floating-point values in 
your code, directly in the way they are stored internally (in a compact way, 
because binary is too long)
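To see how directly the hex form maps onto the stored bits, here is a small sketch (assuming the usual little-endian IEEE-754 binary64 layout that struct exposes):

```python
import struct

x = float.fromhex('0x1.fffffc0000000p-127')

# The hex form is just a compact rendering of the stored bit pattern:
# sign 0, biased exponent -127 + 1023 = 0x380, fraction 0xfffffc0000000.
bits, = struct.unpack('<Q', struct.pack('<d', x))
print(x.hex())        # 0x1.fffffc0000000p-127
print(f'{bits:016x}') # 380fffffc0000000

# Round-tripping through the hex form is lossless.
assert float.fromhex(x.hex()) == x
```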


> I'm not actually opposed to hex float literals. I think they're cool. 
> But we ought to have a reason more than just "they're cool" for 
> supporting them, and I'm having trouble thinking of any apart from "C 
> supports them, so should we". But maybe that's enough.



To sum up:
- In some specific contexts, hexadecimal floating-point constants make it easy 
for programmers to reproduce the exact value. Typically, a software engineer 
who is concerned about floating-point accuracy would prepare hexadecimal 
floating-point constants for use in a program by generating them with 
specialized software (e.g., Maple, Mathematica, Sage or some multi-precision 
library). Such hexadecimal literals have been added to C (since C99), Java, 
Lua, Ruby, Perl (since v5.22), etc. for the same reasons.
- The exact grammar is fully documented in the IEEE 754-2008 standard (section 
5.12.3), and also in C99 (and C++17, among others).
- Of course, hexadecimal floating-point values can already be manipulated with 
float.hex() and float.fromhex(), *but* these work on strings, and the 
conversion is done at execution time...
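To make that last point concrete, here is what we can do today versus what a literal would allow (the literal syntax shown in the comment is hypothetical, not valid Python):

```python
# Today: the exact hex form must go through a string at run time.
x = float.fromhex('0x1.fffffc0000000p-127')

# With a hex float literal (hypothetical syntax, NOT valid Python today),
# the same value could be written directly in source, parsed at compile time:
#   x = 0x1.fffffc0000000p-127

print(x.hex())   # 0x1.fffffc0000000p-127
assert x.hex() == '0x1.fffffc0000000p-127'
```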


I hope this can be seen as a sufficient reason to support hexadecimal 
floating-point literals.

Thibault



_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
