https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109008

--- Comment #8 from Richard Biener <rguenth at gcc dot gnu.org> ---
We basically have to consider an input range [a, b] as [a - x, b + y],
with the largest positive x and y such that correctly rounding a - x
still yields a and correctly rounding b + y still yields b.
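
As a concrete illustration (a minimal sketch, not the GCC implementation;
widen_for_rounding is a hypothetical helper), the exact pre-image bounds
are the midpoints between a, b and their neighbouring representable
values; since those midpoints are not representable in double, stepping
one ulp outwards with nextafter gives a representable, conservative
over-approximation:

  #include <math.h>
  #include <stdio.h>

  /* Sketch only: widen a correctly rounded result range [a, b] so it
     covers every real value whose rounded result lies in [a, b].  The
     true boundaries are the midpoints towards the neighbouring doubles;
     one ulp outwards is a safe superset of that.  */
  static void
  widen_for_rounding (double a, double b, double *lo, double *hi)
  {
    *lo = nextafter (a, -HUGE_VAL);  /* covers a - x for the largest valid x */
    *hi = nextafter (b, HUGE_VAL);   /* covers b + y for the largest valid y */
  }

  int
  main (void)
  {
    double lo, hi;
    widen_for_rounding (1.0, 2.0, &lo, &hi);
    printf ("[%a, %a]\n", lo, hi);
    return 0;
  }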
