On 9/22/22 10:49, Aldy Hernandez via Gcc-patches wrote:
It has been suggested that if we start bumping numbers by an ULP when
calculating open ranges (for example, the numbers less than 3.0),
dumping these will become increasingly hard to read, and that we
should opt for the hex representation instead.  I still find the
floating point representation easier to read for most numbers, but
perhaps we could have both?

With this patch this is the representation for [15.0, 20.0]:

      [frange] float [1.5e+1 (0x0.fp+4), 2.0e+1 (0x0.ap+5)]

Would you find this useful, or should we stick to the hex
representation only (or something altogether different)?

Tested on x86-64 Linux.

gcc/ChangeLog:

        * value-range-pretty-print.cc (vrange_printer::print_real_value): New.
        (vrange_printer::visit): Call print_real_value.
        * value-range-pretty-print.h: New print_real_value.

The big advantage of the hex representation is that you can feed it back into the compiler trivially and be confident the bit pattern hasn't changed.  I've found it invaluable when doing deep FP analysis.


jeff

