On Tue, Apr 18, 2023 at 03:12:50PM +0200, Aldy Hernandez wrote:
> [I don't know why I keep poking at floats.  I must really like the pain.
> Jakub, are you OK with this patch for trunk?]
Thanks for working on this.

Though expectedly here we are running again into the discussions we had in
November about mathematical properties of the functions vs. numeric
properties of their implementations: how big a maximum error we should
expect for the functions (and whether we should hardcode it for all
implementations, or have some more fine-grained list of expected ulp
errors for each implementation); whether the implementations at least
guarantee the basic mathematical properties of the functions even if they
have some errors (say for sin/cos, whether they really never return > 1.0
or < -1.0); and the same questions repeated for -frounding-math: what kind
of extra errors to expect when using non-default rounding, and whether say
sin could ever return nextafter (1.0, 2.0) or an even larger value when
using a non-default rounding mode.

https://gcc.gnu.org/pipermail/gcc-patches/2022-November/606466.html
was my attempt to get at least some numbers on some targets; I'm afraid
for most implementations we aren't going to get any numerical proofs of
maximum errors and the like.

For sin/cos, to check whether the implementation really never returns
> 1.0 or < -1.0, perhaps instead of using randomized testing we could
exhaustively check some range around 0, M_PI, 3*M_PI, -M_PI, -3*M_PI, and
some much larger multiples of M_PI, say 50 ulps in each direction around
those points, and similarly for sin around M_PI/2 etc., in all rounding
modes.

> This is the range-op entry for sin/cos.  It is meant to serve as an
> example of what we can do for glibc math functions.  It is by no means
> exhaustive, just a stub to restrict the return range from sin/cos to
> [-1.0, 1.0] with appropriate smarts of NANs.
>
> As can be seen in the testcase, we see sin() as well as
> __builtin_sin() in the IL, and can resolve the resulting range
> accordingly.
>
> gcc/ChangeLog:
>
> 	* gimple-range-op.cc (class cfn_sincos): New.
> 	(gimple_range_op_handler::maybe_builtin_call): Add case for sin/cos.
> gcc/testsuite/ChangeLog:
>
> 	* gcc.dg/tree-ssa/range-sincos.c: New test.
> ---
>  gcc/gimple-range-op.cc                       | 63 ++++++++++++++++++++
>  gcc/testsuite/gcc.dg/tree-ssa/range-sincos.c | 40 +++++++++++++
>  2 files changed, 103 insertions(+)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/range-sincos.c
>
> diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
> index 4ca32a7b5d5..36390f2645e 100644
> --- a/gcc/gimple-range-op.cc
> +++ b/gcc/gimple-range-op.cc
> @@ -402,6 +402,60 @@ public:
>      }
>  } op_cfn_copysign;
>
> +class cfn_sincos : public range_operator_float
> +{
> +public:
> +  using range_operator_float::fold_range;
> +  using range_operator_float::op1_range;
> +  virtual bool fold_range (frange &r, tree type,
> +			   const frange &lh, const frange &,
> +			   relation_trio) const final override
> +  {
> +    if (lh.known_isnan () || lh.known_isinf ())
> +      {
> +	r.set_nan (type);
> +	return true;
> +      }
> +    r.set (type, dconstm1, dconst1);

See above: are we sure we can use the [-1., 1.] range safely, or should
that be [-1.-Nulps, 1.+Nulps] for some kind of expected worst-case error
margin of the implementation?  And ditto for -frounding-math: shall we
increase that interval in that case, or is [-1., 1.] going to be ok?

> +    if (!lh.maybe_isnan ())

This condition isn't sufficient; even if lh can't be NAN but may be +Inf
or -Inf, the result needs to include maybe NAN.

> +      r.clear_nan ();
> +    return true;

Incrementally, once we decide what to do about the maximum allowed errors
in ulps: if lh's range is smaller than 2*M_PI (upper_bound () -
lower_bound ()), we could narrow the result down further by computing the
exact values for the bounds and any local maxima or minima in between, if
any, and creating a range out of that.
> +  }
> +  virtual bool op1_range (frange &r, tree type,
> +			  const frange &lhs, const frange &,
> +			  relation_trio) const final override
> +  {
> +    if (!lhs.maybe_isnan ())
> +      {
> +	// If NAN is not valid result, the input cannot include either
> +	// a NAN nor a +-INF.
> +	REAL_VALUE_TYPE lb = real_min_representable (type);
> +	REAL_VALUE_TYPE ub = real_max_representable (type);
> +	r.set (type, lb, ub, nan_state (false, false));
> +	return true;
> +      }
> +    // A known NAN means the input is [-INF,-INF][+INF,+INF] U +-NAN,
> +    // which we can't currently represent.
> +    if (lhs.known_isnan ())
> +      {
> +	r.set_varying (type);
> +	return true;
> +      }
> +    // Results outside of [-1.0, +1.0] are impossible.
> +    REAL_VALUE_TYPE lb = lhs.lower_bound ();
> +    REAL_VALUE_TYPE ub = lhs.upper_bound ();
> +    if (real_less (&lb, &dconstm1)
> +	|| real_less (&dconst1, &ub))

This condition could fit on one line.

> +      {
> +	r.set_undefined ();
> +	return true;
> +      }
> +
> +    r.set_varying (type);

I'm afraid we can't do much better for the reverse case; even for a
singleton range between -1.0 and +1.0 we'd have an infinite number of
results.  In theory we could perhaps narrow it down from the
minimum/maximum representable bounds, but even 1 ulp there is much larger
than 2*M_PI, so I'm not sure it is worth bothering with.

	Jakub