On Tue, 07 Jul 2009 14:16:14 -0400, Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:

Robert Jacques wrote:
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:
Robert Jacques wrote:
Andrei, I have a short vector template (think vec!(byte,3), etc.) where I've had to wrap the majority of its lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain.

Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
 Suggestion 1:
Loft the right-hand side of the expression (when lofting is valid) to the size of the left-hand side, i.e.

What does loft mean in this context?

Sorry. loft <=> up-casting. i.e.
byte => short => int => long => cent? => bigInt?

byte a,b,c;
c = a + b;  => c = a + b;

Unsafe.

So is int + int or long + long. Or float + float, for that matter. My point is that if a programmer is assigning a value to a byte (or short or int or long), then they are willing to accept the associated overflow/underflow errors of that type.

short d;
d = a + b;  => d = cast(short) a + cast(short) b;

Should work today modulo bugs.

int e, f;
e = a + b;  => e = cast(short) a + cast(short) b;

Why cast to short? e has type int.

Oops. You're right. (I was thinking of the new rules, not my suggestion.)
Should be:
e = a + b;  => e = cast(int) a + cast(int) b;

e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int) d);

I don't understand this.

Same "Oops. You're right." as above.
e = a + b + d; => e = cast(int) a + cast(int) b + cast(int) d;

long g;
g = e + f;  => g = cast(long) e + cast(long) f;

Works today.

Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.

When choosing operator overloads or auto, prefer the ideal lofted interpretation (as per the new rules, but without the exception for int/long), over truncated variants. i.e.
auto h = a + b; => short h = cast(short) a + cast(short) b;

This would yield semantics incompatible with C expressions.

How so?
The auto rule is identical to the "new rules".
The overload rule is identical to the "new rules", except when no match can be found, in which case it tries to "relax" the expression to a smaller number of bits.

This would also properly handle some of the corner/inconsistent cases with the current rules:
ubyte  i;
ushort j;
j = -i; => j = -cast(short)i; (This currently evaluates to j = cast(short)(-i);)

That should not compile, sigh. Walter wouldn't listen...

And
a += a;
is equivalent to
a = a + a;

Well not quite equivalent. In D2 they aren't. The former clarifies that you want to reassign the expression to a, and no cast is necessary. The latter would not compile if a is shorter than int.

I understand, but that dichotomy increases the cognitive load on the programmer. Also, there's the issue of
byte x;
++x;
which is defined in the spec as being equivalent to
x = x + 1;

and is logically consistent with
byte[] k,l,m;
m[] = k[] + l[];
Essentially, instead of trying to prevent overflows, except for those from int and long, this scheme attempts to minimize the risk of overflows, including those from int (and long, once cent exists. Maybe long+long=>bigInt?)

But if you close operations for types smaller than int, you end up with a scheme even more error-prone than C!

Since C (IIRC) always evaluates "x+x" in the manner most prone to causing overflows, no matter the type, a scheme can't be more error-prone than C (at the instruction level). However, it can be less consistent, which I grant can lead to higher level logic errors. (BTW, operations for types smaller than int are closed (by my non-mathy definition) in C)

The new rules are definitely an improvement over C, but they make byte/ubyte/short/ushort second-class citizens, because practically every assignment requires a cast:
byte a,b,c;
c = cast(byte)(a + b);
And if it weren't for compatibility issues, it would almost be worth it to remove them completely.


