https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82318

--- Comment #4 from joseph at codesourcery dot com <joseph at codesourcery dot com> ---
I think the glibc reasoning is: libm functions do not need to behave as if 
written in standard C, so in particular F.6 does not apply to them and 
they may return values with excess precision.  Thus libm functions use 
math_narrow_eval (or equivalent assembler macros) in cases where overflow 
or underflow is possible (where excess *range* may be a problem, possibly 
resulting in a missing errno setting), but not where there may be excess 
precision without excess range.  (Functions with fully defined results, 
such as sqrt, still take care to avoid excess precision in their results.)
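
For reference, a minimal sketch of the narrowing idiom that a macro like 
math_narrow_eval is built around (names and details here are illustrative, 
not glibc's actual sources): when FLT_EVAL_METHOD says float / double are 
evaluated in a wider format, force the value through memory so excess 
range and precision are discarded.

#include <float.h>
#include <stdio.h>

#if FLT_EVAL_METHOD > 0
/* The "+m" constraint makes the compiler store and reload the
   temporary, rounding it to its declared type.  */
# define narrow_eval(x)                          \
  ({                                             \
    __typeof (x) narrow_eval_tmp = (x);          \
    __asm__ ("" : "+m" (narrow_eval_tmp));       \
    narrow_eval_tmp;                             \
  })
#else
/* No known wider evaluation format: leave the value alone.  */
# define narrow_eval(x) (x)
#endif

int
main (void)
{
  double big = DBL_MAX;
  /* On an x87 target big * 2.0 can stay finite in an 80-bit register;
     after narrowing it must be +inf in double.  */
  double product = narrow_eval (big * 2.0);
  printf ("%g\n", product);
  return 0;
}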

However, that then implies GCC ought to know which standard library 
functions (and only those) might return with excess precision, so that 
casts / assignments of the results of those functions can remove that 
excess precision.  That is tricky, both because it should apply only to 
standard library functions, not to user functions where it's the user's 
responsibility to compile consistently with -fexcess-precision=standard, 
and because the IR available in the front end does not represent the ABI 
information that values of particular return types may be returned with 
excess precision (you'd need to distinguish a function's ABI return type 
from its semantic return type and produce calls with the ABI return 
type).
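
To make the ABI-versus-semantic distinction concrete, a hedged 
illustration (libm_like is a locally defined stand-in for a separately 
compiled library function, and the x87 register detail is just the usual 
FLT_EVAL_METHOD == 2 example):

#include <stdio.h>

/* Stand-in for a separately compiled libm entry point; on an x87
   target its double result comes back in st(0) and so may carry
   excess precision.  */
double
libm_like (double x)
{
  return x * (1.0 / 3.0);
}

double
use_result (double x)
{
  /* The cast/assignment is where -fexcess-precision=standard is meant
     to discard excess precision -- but the front end only inserts a
     narrowing conversion here if it knows the call can return a wider
     value, i.e. if it distinguishes the ABI return type (an x87
     register) from the semantic return type (double).  */
  return (double) libm_like (x);
}

int
main (void)
{
  printf ("%.17g\n", use_result (1.0));
  return 0;
}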

Or you could systematically use math_narrow_eval on all float / double 
function returns that might have excess range / precision (or perhaps 
only in the wrappers, where wrappers are used, both to avoid affecting 
glibc-internal uses that don't need this and to reduce the number of 
places needing changes; but if you do it in the wrappers you don't fix 
things for the __*_finite functions).  There would be some performance 
cost to doing so.
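
A hedged sketch of the "narrow in the wrapper" option (the names 
internal_exp and wrapped_exp are illustrative, not glibc's actual 
sources, and a volatile store stands in here for math_narrow_eval's 
store-to-memory effect; as noted, narrowing only in the wrapper does not 
help callers of the __*_finite entry points):

#include <errno.h>
#include <math.h>
#include <stdio.h>

/* Stand-in for the internal implementation (an __ieee754_*-style
   function); here it simply defers to libm.  */
static double
internal_exp (double x)
{
  return exp (x);
}

double
wrapped_exp (double x)
{
  /* The volatile store forces the result out to memory at double
     width, discarding any excess range or precision, much as
     math_narrow_eval would.  */
  volatile double result = internal_exp (x);
  /* With the result narrowed, overflow and underflow are visible
     here and errno can be set reliably.  */
  if (isfinite (x) && !isfinite (result))
    errno = ERANGE;   /* overflow */
  else if (isfinite (x) && result == 0.0)
    errno = ERANGE;   /* underflow: exp only returns 0 on underflow */
  return result;
}

int
main (void)
{
  errno = 0;
  double y = wrapped_exp (800.0);   /* overflows double */
  printf ("%g errno=%d\n", y, errno);
  return 0;
}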
