On Thu, Apr 20, 2023 at 11:22:24AM -0400, Siddhesh Poyarekar wrote:
> On 2023-04-20 10:02, Jakub Jelinek wrote:
> > On x86_64-linux with glibc 2.35, I see
> > for i in FLOAT DOUBLE LDOUBLE FLOAT128; do for j in TONEAREST UPWARD 
> > DOWNWARD TOWARDZERO; do gcc -D$i -DROUND=FE_$j -g -O1 -o /tmp/sincos{,.c} 
> > -lm; /tmp/sincos || echo $i $j; done; done
> > Aborted (core dumped)
> > FLOAT UPWARD
> > Aborted (core dumped)
> > FLOAT DOWNWARD
> > On sparc-sun-solaris2.11 I see
> > for i in FLOAT DOUBLE LDOUBLE; do for j in TONEAREST UPWARD DOWNWARD 
> > TOWARDZERO; do gcc -D$i -DROUND=FE_$j -g -O1 -o sincos{,.c} -lm; ./sincos 
> > || echo $i $j; done; done
> > Abort (core dumped)
> > DOUBLE UPWARD
> > Abort (core dumped)
> > DOUBLE DOWNWARD
> > Haven't tried anything else.  So that suggests (but doesn't prove) that
> > maybe the [-1., 1.] interval is fine for -fno-rounding-math on those, but
> > not for -frounding-math.
> 
> Would there be a reason to not consider these as bugs?  I feel like these
> should be fixed in glibc, or any math implementation that ends up doing
> this.

Why?  Unless an implementation guarantees errors of <= 0.5 ulp, it can be one
or more ulps off; why is an error at or near 1.0 or -1.0 any worse
than similar errors at other values?
Similarly for other functions which have other ranges, perhaps not with such
nice round numbers.  Say asin has the range [-pi/2, pi/2]; those endpoints
aren't exactly representable, but is it any worse to round them toward -inf
or +inf, or to give something 1-5 ulps outside that interval, compared to
other 1-5 ulp errors?

        Jakub
