On 20.10.2011 19:28, Jonathan M Davis wrote:
> On Thursday, October 20, 2011 05:13 Don wrote:
>> Personally, I'd rather completely eliminate implicit conversions between
>> integers and floating point types. But that's just me.

> If it's a narrowing conversion, it should require a cast. If it's not, and
> there's no ambiguity in the conversion, then I don't see any problem with
> allowing the conversion to be implicit. But then again, I deal with floating
> point values relatively rarely, so maybe there's something that I'm missing.

>> My proposal was effectively: if it's ambiguous, choose double. That's all.

> Are there _any_ cases in D right now where the compiler doesn't error out on
> ambiguity? In all of the cases that I can think of, D chooses to give an error
> on ambiguity rather than making a choice for you. I'm all for an int literal
> being implicitly converted to a double if the function call is unambiguous and
> there's no loss of precision.

The problem is, the existing approach will break a lot of existing code. For example, std.math.log(2) currently compiles. But, once the overload log(double) is added, which *must* happen, that code will break. Note that there is no realistic deprecation option, either. When the overload is added, code will break immediately. If we continue with this approach, we have to accept that EVERY TIME we add a floating point overload, existing code will break.
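
To make that concrete, here's a stripped-down sketch of the breakage (dummy bodies standing in for the real std.math code; the marked call is the one that stops compiling):

real log(real x) { return 0; }      // the existing overload
double log(double x) { return 0; }  // the overload that must be added

void main()
{
   log(2);   // compiled when only log(real) existed; now matches both
             // overloads equally well -> ambiguity error
   log(2.0); // unaffected: exact match for log(double)
}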

So we either accept that; or we make everything that will ever break, break now (accepting that some code _will_ break which would never have broken otherwise); or we introduce a tie-breaker rule.

The question we face is really, which is the lesser evil?
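
And this is what the tie-breaker would mean in practice (a sketch of the proposed rule, using a hypothetical overload pair f; the resolutions shown are what the rule would give, not what the compiler does today):

double f(double x) { return x; }  // stand-in for any double/real
real f(real x) { return x; }      // overload pair, e.g. log

void main()
{
   f(2);    // integer argument, ambiguous today:
            // the tie-breaker would resolve it to f(double)
   f(2.0L); // real literal, exact match for f(real):
            // unambiguous, so the tie-breaker never fires
}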

> But if there's any ambiguity, then it's
> definitely against the D way to have the compiler pick for you.

Explain why this compiles:

// an overload for every integral type except byte itself:
void foo(ubyte x) {}
void foo(short x) {}
void foo(ushort x) {}
void foo(int x) {}
void foo(uint x) {}
void foo(long x) {}
void foo(ulong x) {}

void main()
{
   byte b = -1;
   foo(b); // How ambiguous can you get?????
}
