Re: Implicit conversion rules

2015-10-22 Thread Sigg via Digitalmars-d-learn

On Wednesday, 21 October 2015 at 22:49:16 UTC, Marco Leise wrote:


God forbid anyone implement such nonsense in D!
That would be the last thing we need



Slight nitpick, but what I suggested for our hypothetical 
situation would only apply to auto: once a variable was assigned 
to auto and got its correct type, it would act like a normal 
variable. The behavior you mention would only happen if it were 
part of the rvalue expression.


Re: Implicit conversion rules

2015-10-21 Thread Maxim Fomin via Digitalmars-d-learn

On Wednesday, 21 October 2015 at 19:49:35 UTC, Ali Çehreli wrote:

On 10/21/2015 12:37 PM, Sigg wrote:

> cause at least few more "fun" side effects.

One of those side effects would be function calls binding 
silently to another overload:


void foo(bool) { /* ... */ }
void foo(int)  { /* ... */ }

auto a = 0;  // If the type were deduced by the value,
foo(a);      // then this would be a call to foo(bool)...
             // until someone changed the value to 2. :)

Ali


Actually 'a' is deduced to be int, so the int version is called 
(as expected?). See my example above for the VRP overload issue.


Implicit conversion rules

2015-10-21 Thread Sigg via Digitalmars-d-learn
I started reading "The D Programming Language" earlier and came 
to section "2.3.3 Typing of Numeric Operators", which claims 
that "if at least one participant has type ulong, the other is 
implicitly converted to ulong prior to the application and the 
result has type ulong".


Now I understand the reasoning behind it, and I know that adding 
any sufficiently negative value to a ulong/uint/ushort will cause 
an underflow, as in the following example:


import std.stdio;

void func() {
    int a = -10;
    ulong b = 0;
    ulong c = a + b;
    writefln("%s", c);
}

out: 18446744073709551606

But shouldn't declaring c as auto force the compiler to go the 
extra step and "properly" deduce the result of the "a + b" 
expression, since, as far as I understand, it's already doing 
magic in the background? Basically, try to cast rvalues to the 
narrowest type, without losing precision, before evaluating the 
expression.


Or is there a proper way to do math with unsigned and signed 
primitives that I'm not aware of?


Re: Implicit conversion rules

2015-10-21 Thread Ali Çehreli via Digitalmars-d-learn

On 10/21/2015 12:37 PM, Sigg wrote:

> cause at least few more "fun" side effects.

One of those side effects would be function calls binding silently to 
another overload:


void foo(bool) { /* ... */ }
void foo(int)  { /* ... */ }

auto a = 0;  // If the type were deduced by the value,
foo(a);      // then this would be a call to foo(bool)...
             // until someone changed the value to 2. :)

Ali



Re: Implicit conversion rules

2015-10-21 Thread anonymous via Digitalmars-d-learn
On Wednesday, October 21, 2015 07:53 PM, Sigg wrote:

>  void func() {
>  int a = -10;
>  ulong b = 0;
>  ulong c = a + b;
>  writefln("%s", c);
>  }
> 
>  out: 18446744073709551606
> 
> But shouldn't declaring c as auto force compiler to go extra step
> and "properly" deduce result of the "a + b" expression, since its
> already as far as I understand doing magic in the background?
> Basically try to cast rvalues to narrowest type without losing
> precision before evaluating expression.

The problem is of course that int and ulong have no common super type, at 
least not in the primitive integer types. int supports negative values, 
ulong supports values greater than long.max.

As far as I understand, you'd like the compiler to see the values of `a` and 
`b` (-10, 0), figure out that the result is negative, and then make `c` 
signed based on that. That's not how D rolls. The same code must compile 
when the values in `a` and `b` come from run time input. So the type of the 
addition cannot depend on the values of the operands, only on their types.
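
To make that concrete, here is a minimal sketch (only illustrating the 
deduction; the names are mine):

```d
import std.stdio;

void main()
{
    int a = -10;
    ulong b = 0;
    auto c = a + b;  // deduced from the operand types alone: int + ulong -> ulong
    static assert(is(typeof(c) == ulong));
    writeln(c);      // the wrapped-around value, not -10
}
```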

Or maybe you'd expect an `auto` variable to be able to hold both negative 
and very large values? But `auto` is not a special type, it's just a 
shorthand for typeof(right-hand side). That means, `auto` variables still 
get one specific static type, like int or ulong.

std.bigint and core.checkedint may be of interest to you, if you prefer 
safer operations over faster ones.

http://dlang.org/phobos/std_bigint.html
http://dlang.org/phobos/core_checkedint.html
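
Rough sketches of both, assuming the module layouts documented at the 
links above:

```d
import std.bigint : BigInt;
import core.checkedint : subu;
import std.stdio;

void main()
{
    // std.bigint: arbitrary precision, so mixed-sign math just works
    BigInt big = BigInt(0) + (-10);
    writeln(big);  // -10

    // core.checkedint: fixed-size arithmetic that reports overflow
    bool overflowed;
    ulong r = subu(0UL, 10UL, overflowed);
    writeln(r, " ", overflowed);  // wrapped result, and overflowed is true
}
```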



Re: Implicit conversion rules

2015-10-21 Thread Sigg via Digitalmars-d-learn

On Wednesday, 21 October 2015 at 19:07:24 UTC, anonymous wrote:

The problem is of course that int and ulong have no common 
super type, at least not in the primitive integer types. int 
supports negative values, ulong supports values greater than 
long.max.


Yes, I'm well aware of that. I was under the (wrongful) impression 
that auto was doing much more under the hood and that it was more 
safety oriented; I probably mixed it up with something else while 
reading some article.


As far as I understand, you'd like the compiler to see the 
values of `a` and `b` (-10, 0), figure out that the result is 
negative, and then make `c` signed based on that. That's not 
how D rolls. The same code must compile when the values in `a` 
and `b` come from run time input. So the type of the addition 
cannot depend on the values of the operands, only on their 
types.


Or maybe you'd expect an `auto` variable to be able to hold 
both negative and very large values? But `auto` is not a 
special type, it's just a shorthand for typeof(right-hand 
side). That means, `auto` variables still get one specific 
static type, like int or ulong.


I'll clarify what I expected, using my previous example:

ulong a = 0;
int b = -10;
auto c = a + b;

a gets cast to the narrowest primitive type that can hold its 
value, in this case bool, since bool can hold the value 0, 
resulting in c having the value -10. If a were bigger than 
long.max I'd expect an error/exception. On the other hand, I can 
see why something like this would not be implemented, since it 
would ignore the implicit conversion table and probably cause at 
least a few more "fun" side effects.


std.bigint and core.checkedint may be of interest to you, if 
you prefer safer operations over faster ones.


http://dlang.org/phobos/std_bigint.html 
http://dlang.org/phobos/core_checkedint.html


This is exactly what I was looking for. Thanks!



Re: Implicit conversion rules

2015-10-21 Thread Marco Leise via Digitalmars-d-learn
Am Wed, 21 Oct 2015 12:49:35 -0700
schrieb Ali Çehreli :

> On 10/21/2015 12:37 PM, Sigg wrote:
> 
>  > cause at least few more "fun" side effects.
> 
> One of those side effects would be function calls binding silently to 
> another overload:
> 
> void foo(bool){/* ... */}
> void foo(int) {/* ... */}
> 
>auto a = 0;  // If the type were deduced by the value,
>foo(a);  // then this would be a call to foo(bool)...
> // until someone changed the value to 2. :)
> 
> Ali

God forbid anyone implement such nonsense in D!
The last thing we need is not being able to rely on
overload resolution any more. It would be as if making 'a'
const changed the overload resolution when none of the
overloads deal with constness...

import std.format;
import std.stdio;

string foo(bool b) { return format("That's a boolean %s!", b); }
string foo(uint u) { return format("That's an integral %s!", u); }

void main()
{
    int a = 2497420, b = 2497419;
    const int c = 2497420, d = 2497419;
    writeln(foo(a - b));
    writeln(foo(c - d));
    writeln("WAT?!");
}

-- 
Marco



Re: Implicit conversion rules

2015-10-21 Thread Maxim Fomin via Digitalmars-d-learn

On Wednesday, 21 October 2015 at 22:49:16 UTC, Marco Leise wrote:

Am Wed, 21 Oct 2015 12:49:35 -0700
schrieb Ali Çehreli :


On 10/21/2015 12:37 PM, Sigg wrote:

 > cause at least few more "fun" side effects.

One of those side effects would be function calls binding 
silently to another overload:


void foo(bool) { /* ... */ }
void foo(int)  { /* ... */ }

auto a = 0;  // If the type were deduced by the value,
foo(a);      // then this would be a call to foo(bool)...
             // until someone changed the value to 2. :)

Ali


God forbid anyone implement such nonsense in D!
The last thing we need is not being able to rely on
overload resolution any more. It would be as if making 'a'
const changed the overload resolution when none of the
overloads deal with constness...



AFAIK it was implemented a long time ago, and it was last 
discussed a couple of years ago with an example similar to Ali's.


void foo(bool) {}
void foo(int)  {}

foo(0); // bool
foo(1); // bool
foo(2); // int
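
For reference, the same VRP (value range propagation) also shows up in 
ordinary assignments; a minimal sketch:

```d
void main()
{
    int i = 1000;
    ubyte ok = i & 0xFF;  // accepted: the compiler proves the result fits 0 .. 255
    // ubyte bad = i;     // rejected: int does not implicitly convert to ubyte
}
```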