https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106831

--- Comment #6 from Aldy Hernandez <aldyh at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #5)
> BTW, I admit I don't know much about decimal{32,64,128}, but
> https://en.wikipedia.org/wiki/Decimal32_floating-point_format
> says:
> Because the significand is not normalized (there is no implicit leading
> "1"), most values with less than 7 significant digits have multiple possible
> representations; 1 × 10^2=0.1 × 10^3=0.01 × 10^4, etc. Zero has 192 possible
> representations (384 when both signed zeros are included).
> So, I think singleton_p should just always return false for
> DECIMAL_FLOAT_TYPE_P (at least for now).
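To see why a single value can have many encodings (which is what breaks singleton_p), here is a small illustration, not part of the patch: Python's decimal module implements the same General Decimal Arithmetic model as IEEE 754-2008 decimal32/64/128, where significands are not normalized, so numerically equal values can carry distinct (coefficient, exponent) pairs:

```python
# Decimal "cohorts": equal values, different internal representations.
from decimal import Decimal

a = Decimal("1E+2")   # coefficient 1,   exponent 2
b = Decimal("100")    # coefficient 100, exponent 0

print(a == b)          # True -- same numeric value
print(a.as_tuple())    # coefficient/exponent pair differs from b's
print(b.as_tuple())
```

Since bitwise identity does not follow from numeric equality, treating such a range as a singleton representation would be unsound.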

Interestingly, frange_drop_*inf() bails on DECIMAL_FLOAT_MODE_P because
build_real() will ultimately ICE when trying to make a tree out of the max/min
representable number for a type:

  /* dconst{0,1,2,m1,half} are used in various places in
     the middle-end and optimizers, allow them here
     even for decimal floating point types as an exception
     by converting them to decimal.  */
  if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (type))
      && (d.cl == rvc_normal || d.cl == rvc_zero)
      && !d.decimal)
...
...

I know even less about decimal floats.  Jakub, should we disable them
altogether in the frange infrastructure, or is that too big a hammer?  I'm
just afraid we'll keep running into limitations when we start implementing
floating-point operations in range-op-float.cc.  Or worse, have to
special-case them all over.

bool
frange::supports_p (const_tree type)
{
  /* DECIMAL_FLOAT_MODE_P takes a machine mode, not a tree; use the
     type predicate here.  */
  return SCALAR_FLOAT_TYPE_P (type) && !DECIMAL_FLOAT_TYPE_P (type);
}

??
