On 10/27/2017 02:08 AM, Richard Sandiford wrote:
Martin Sebor <mse...@gmail.com> writes:
On 10/26/2017 11:52 AM, Richard Sandiford wrote:
Martin Sebor <mse...@gmail.com> writes:
For offset_int the default precision is 128 bits.  Making that
the default also for wide_int should be unsurprising.

I think it'd be surprising.  offset_int should always be used in
preference to wide_int if the precision is known to be 128 bits
in advance, and there doesn't seem any reason to prefer the
precision of offset_int over widest_int, HOST_WIDE_INT or int.

We would end up with:

  wide_int
  f (const wide_int &y)
  {
    wide_int x;
    x += y;
    return x;
  }

being valid if y happens to have 128 bits as well, and triggering
a runtime error otherwise.

Surely that would be far better than the undefined behavior we
have today.

I disagree.  People shouldn't rely on the above behaviour because
it's never useful.

Well, yes, but the main point of my feedback on the poly_int default
ctor (and the ctor of the extended_tree class, and the existing wide
int classes) is that it makes them easy to misuse.  That they're not
meant to be [mis]used like that isn't an answer.

You explained earlier that the no-op initialization is necessary
for efficiency and I suggested a safer alternative: an API that
makes the lack of initialization explicit, while providing a safe
default.  I still believe this is the right approach for the new
poly_int classes.  I also think it's the right solution for
offset_int.
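
To make this concrete, here is a minimal sketch of what such an
API might look like.  The wi_uninit tag and the member layout are
hypothetical, invented purely for illustration; they are not part
of the GCC sources:

  /* Hypothetical sketch only.  Skipping initialization becomes an
     explicit, greppable request rather than the silent default.  */
  enum wi_uninitialized_t { wi_uninit };

  class wide_int
  {
  public:
    /* Safe default: a well-defined zero (assuming the 128-bit
       default precision suggested above).  */
    wide_int () : len (1), precision (128) { val[0] = 0; }

    /* Explicitly uninitialized: compiles to a no-op, for hot paths
       that are about to overwrite the value anyway.  */
    wide_int (wi_uninitialized_t) {}

  private:
    long val[4];           /* illustrative storage only */
    unsigned int len;
    unsigned int precision;
  };

The cost of the safe default is a single store; code that really
cannot afford it opts out visibly at the declaration instead of
silently.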

   wide_int f ()
   {
     wide_int x;
     x += 0;
     return x;
   }

Well, it compiles, but with sufficiently good static analysis
it should trigger a warning.  (GCC might not be there yet,
but these things improve.)  As mentioned above:

Forgive me, but knowingly designing classes to be unsafe in the
hope that their accidental misuses may someday be detected by
sufficiently advanced static analyzers is not helpful.  It's
also unnecessary when less error-prone and equally efficient
alternatives exist.
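
To illustrate, reusing the hypothetical wi_uninit tag from the
sketch above (and assuming prec is known at this point):

  wide_int a;               /* well-defined zero by default */
  wide_int b (wi_uninit);   /* deliberate: assign before use */
  b = wi::shwi (42, prec);  /* b now has a defined value */

The uninitialized case can no longer arise by accident and is
trivial to grep for.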

  wide_int f ()
  {
    wide_int x = ...;
    x += 0;
    return x;
  }

(or some value other than 0) is well-defined because the int
promotes to whatever precision x has.

The problem with the examples I gave was that wide_int always needs
to have a precision and nothing in that code says what the precision
should be.  The "right" way of writing them would be:

   wide_int x = wi::shwi (0, prec);

   wide_int x;
   x = wi::shwi (0, prec);

   wide_int x;
   x = wi::shwi (1, prec);

where prec specifies the precision of the integer.

Yes, I realize that.  But we got here by exploring the effects
of default zero-initialization.  You have given examples showing
where relying on the zero-initialization could lead to bugs.  Sure,
no one is disputing that there are such instances.  Those exist
with any type and are, in general, unavoidable.

My argument is that default initialization that leaves the object
in an indeterminate state suffers from all the same problems your
examples do plus infinitely many others (i.e., undefined behavior),
and so is an obviously inferior choice.  It's a design error that
should be avoided.

Martin
