On 3/15/21 5:38 AM, Eric Blake wrote:
On 3/14/21 6:48 PM, Richard Henderson wrote:
Use fma to simultaneously scale and round up the fraction.

The libm function always returns a properly rounded double-precision
value, which eliminates any extra precision the x87 co-processor may
give us and keeps the output predictable across hosts.

Adding DBL_EPSILON while scaling should help with fractions like
12.345, where the closest representable number is actually 12.3449…

Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
---
  util/cutils.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/cutils.c b/util/cutils.c
index d89a40a8c3..f7f8e48a68 100644
--- a/util/cutils.c
+++ b/util/cutils.c
@@ -342,7 +342,7 @@ static int do_strtosz(const char *nptr, const char **end,
          retval = -ERANGE;
          goto out;
      }
-    *result = val * mul + (uint64_t) (fraction * mul);
+    *result = val * mul + (uint64_t)fma(fraction, mul, DBL_EPSILON);
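
To illustrate the representability problem the commit message mentions,
here is a small standalone program (not part of the patch; the 0.345
fraction and the 1000000 'M' multiplier are just assumed example values
mirroring do_strtosz's fraction and mul) showing how truncating the
scaled fraction can lose the last unit:

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  int main(void)
  {
      double fraction = 0.345;   /* fractional part of e.g. "12.345M" */
      double mul = 1000000.0;    /* assumed SI 'M' multiplier */

      /* The nearest double to 0.345 is slightly below 0.345 ... */
      printf("fraction  = %.17g\n", fraction);

      /* ... so scaling and truncating loses the last unit: on an
       * IEEE-754 host this prints 344999 rather than 345000. */
      printf("truncated = %" PRIu64 "\n", (uint64_t)(fraction * mul));
      return 0;
  }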

Don't you need to include <float.h> to get DBL_EPSILON?

Apparently we get it via some other route.

More importantly, this patch seems wrong.  fma(a, b, c) performs (a * b)
+ c without intermediate rounding errors, but given our values for a and
b, where mul > 1 in any situation we care about, adding 2^-52 is so much
smaller than a*b that it is not going to round the result up to the next
integer.  Don't you want to do fma(fraction, mul, 0.5) instead?
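
A quick sketch of that magnitude argument, reusing the same assumed
0.345 / 1000000 example values as above (link with -lm):

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>
  #include <math.h>
  #include <float.h>

  int main(void)
  {
      double fraction = 0.345;   /* stored as a value just below 0.345 */
      double mul = 1000000.0;

      /* DBL_EPSILON is 2^-52 (~2.2e-16), far smaller than the ~5.8e-11
       * spacing of doubles near 345000, so it cannot push the product
       * across the integer boundary: this still truncates to 344999. */
      printf("+DBL_EPSILON = %" PRIu64 "\n",
             (uint64_t)fma(fraction, mul, DBL_EPSILON));

      /* Adding 0.5 before truncation rounds to nearest: prints 345000. */
      printf("+0.5         = %" PRIu64 "\n",
             (uint64_t)fma(fraction, mul, 0.5));
      return 0;
  }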

I tried that first, but it requires changes to the testsuite, where we have a *.7 result and expect it to be truncated.

I assumed adding 0.00…1 to 0.99…9 would still have an effect.

I think I should have tried fma(fraction, mul, 0) as well, just to see whether it is the elimination of extended-precision results that was the real problem.
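
A hypothetical probe along those lines, again with assumed example
values: FLT_EVAL_METHOD reports whether the host keeps intermediates in
x87 extended precision, and a difference between the two truncated
results would point at that rather than at the missing rounding addend.

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>
  #include <math.h>
  #include <float.h>

  int main(void)
  {
      double fraction = 0.345;
      double mul = 1000000.0;

      /* 2 means intermediates are evaluated in x87 80-bit precision,
       * 0 means plain double throughout. */
      printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);

      /* fma rounds once to double precision regardless of the host's
       * evaluation method; the plain multiply may not. */
      printf("plain mul = %" PRIu64 "\n", (uint64_t)(fraction * mul));
      printf("fma, +0   = %" PRIu64 "\n", (uint64_t)fma(fraction, mul, 0.0));
      return 0;
  }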


r~
