On 10/26/2017 11:52 AM, Richard Sandiford wrote:
Martin Sebor <mse...@gmail.com> writes:
 /* The tree and const_tree overload templates.   */
 namespace wi
 {
+  class unextended_tree
+  {
+  private:
+    const_tree m_t;
+
+  public:
+    unextended_tree () {}

Defining no-op ctors is quite dangerous and error-prone.  I suggest
default-initializing the member(s) instead:

   unextended_tree (): m_t () {}

Ditto everywhere else, such as in:

This is really performance-sensitive code though, so I don't think
we want to add any unnecessary initialisation.  Primitive types are
uninitialised by default too, and the point of this class is to
provide an integer-like interface.

I understand the performance concern (more on that below), but
to clarify the usability issues, I don't think the analogy with
primitive types is quite fitting here: int() evaluates to zero,
as do the values of i and a[0] and a[1] after an object of type
S is constructed using its default ctor, i.e., S ():

   struct S {
     int i;
     int a[2];

     S (): i (), a () { }
   };

Sure, I realise that.  I meant that:

  int x;

doesn't initialise x to zero.  So it's a question of which case is the
most motivating one: using "x ()" to initialise x to 0 in a constructor
or "int x;" to declare a variable of type x, uninitialised.  I think the
latter use case is much more common (at least in GCC).  Rearranging
things, I said later:

I agree that the latter use case is more common in GCC, but I don't
see it as a good thing.  GCC was written in C and most code still
uses now outdated C practices such as declaring variables at the top
of a (often long) function, and usually without initializing them.
It's been established that it's far better to declare variables with
the smallest scope, and to initialize them on declaration.  Compilers
are smart enough these days to eliminate redundant initialization or
assignments.
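
As a small sketch of the contrast, with function and variable names
made up purely for the example:

  /* Old C style: everything declared at the top of the function,
     uninitialised until some later assignment.  */
  int
  count_positive_old (const int *vals, int n)
  {
    int i, count;
    count = 0;
    for (i = 0; i < n; i++)
      if (vals[i] > 0)
        count++;
    return count;
  }

  /* Smallest scope, initialised on declaration: the generated code
     should be the same, but there is no window in which count holds
     an indeterminate value.  */
  int
  count_positive_new (const int *vals, int n)
  {
    int count = 0;
    for (int i = 0; i < n; i++)
      if (vals[i] > 0)
        count++;
    return count;
  }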

In your other message you used the example of explicit default
initialisation, such as:

class foo
{
  foo () : x () {}
  unextended_tree x;
};

But I think we should strongly discourage that kind of thing.
If someone wants to initialise x to a particular value, like
integer_zero_node, then it would be better to do it explicitly.
If they don't care what the initial value is, then for these
integer-mimicking classes, uninitialised is as good as anything
else. :-)

What I meant was: if you want to initialise "i" to 1 in your example,
you'd have to write "i (1)".  Being able to write "i ()" instead of
"i (0)" saves one character but I don't think it adds much clarity.
Explicitly initialising something only seems worthwhile if you say
what you're initialising it to.

My comment is not motivated by convenience.  What I'm concerned
about is that defining a default ctor to be a no-op defeats the
zero-initialization semantics most users expect of T().

This is particularly concerning for a class designed to behave
like an [improved] basic integer type.  Such a class should act
as closely as possible to the type it emulates and in the least
surprising ways.  Any sort of deviation that replaces well-defined
behavior with undefined behavior is a gotcha and a bug waiting
to happen.
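
To make the gotcha concrete, here's a minimal sketch (the struct
names are invented for the example):

  struct no_op
  {
    int i;
    no_op () {}            /* user-provided ctor that does nothing */
  };

  struct plain
  {
    int i;                 /* no user-provided ctor */
  };

  no_op a = no_op ();      /* a.i is left indeterminate */
  plain b = plain ();      /* b.i is zero-initialised, as T () suggests */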

It's also a concern in generic (template) contexts where T() is
expected to zero-initialize.  A template designed to work with
a fundamental integer type should also work with a user-defined
type designed to behave like an integer.
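
Something like this hypothetical accumulation template (not from
the tree, just an illustration):

  /* Generic accumulation that assumes T () yields a zero of type T.  */
  template <typename T>
  T
  sum (const T *p, unsigned int n)
  {
    T total = T ();
    for (unsigned int i = 0; i < n; i++)
      total += p[i];
    return total;
  }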

But that kind of situation is one where using "T (0)" over "T ()"
is useful.  It means that template substitution will succeed for
T that are sufficiently integer-like to have a single well-defined
zero but not for T that aren't (such as wide_int).
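
As a purely illustrative sketch of that distinction, reusing the
hypothetical sum () template above:

  /* The same loop written with an explicit zero.  */
  template <typename T>
  T
  sum (const T *p, unsigned int n)
  {
    T total = T (0);
    for (unsigned int i = 0; i < n; i++)
      total += p[i];
    return total;
  }

This should instantiate for int, offset_int or widest_int, but be
rejected at compile time for wide_int, which cannot be constructed
from a bare 0 without a precision.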

That strikes me as a little too subtle.  But it also doesn't
sound like wide_int is as close to an integer as its name
suggests.  After all, it doesn't support relational operators
either, or even assignment from other integer types.  It's
really a different beast.  But that still doesn't in my mind
justify the no-op initialization semantics.

For offset_int the default precision is 128 bits.  Making that
the default also for wide_int should be unsurprising.

I think it'd be surprising.  offset_int should always be used in
preference to wide_int if the precision is known to be 128 bits
in advance, and there doesn't seem any reason to prefer the
precision of offset_int over widest_int, HOST_WIDE_INT or int.

We would end up with:

  wide_int
  f (const wide_int &y)
  {
    wide_int x;
    x += y;
    return x;
  }

being valid if y happens to have 128 bits as well, and a runtime error
otherwise.

Surely that would be far better than the undefined behavior we
have today.


Also, I think it'd be inconsistent to allow the specific case of 0
to be assigned by default construction, but not also allow:

  wide_int x (0);

  wide_int x;
  x = 0;

  wide_int x;
  x = 1;

etc.  And wide_int wasn't intended for that use case.

Then perhaps I don't fully understand wide_int.  I would expect
the above assignments to also "just work" and I can't imagine
why we would not want them to.  In what way is rejecting
the above helpful when the following is accepted but undefined?

  wide_int f ()
  {
    wide_int x;
    x += 0;
    return x;
  }

Martin
