foobar Wrote:

> how would the to! usage look like with these additions? I suspect at that 
> stage the benefits of to! genericity will be lost.
> 
> to!(int)("1010101", 2); // base 2 ?

to!(int)(base!(2)("1010101"));

base!() returns a struct that can be constructed from strings/ints/..., and 
to! can convert it to the different representations?

> to!(int)(3.14, ..); // how do you specify a floor strategy here, enum? 
> this quickly gets messy, ugly and redundant. 

to!(int)(floor(3.14));

> int a = floor(3.14); // KISS - much better

floor is not a type conversion! I could agree that the return type of floor 
should be an int, but it is not converting a real to an int.

double a = cast(double) floor(3.14);

Do you really want that?

> > > a down cast should _only_ perform dynamic down casts in the OO sense. so:
> > > Base foo = new Derived; 
> > > Derived bar = downCast(foo); // compiles. performed at run-time
> > > 
> > > [const] double -> int is not a down cast, it is a conversion.
> > 
> > I was referring to the need for const_cast
> 
> Sorry, I lost you here. what are you talking about here? 

Sorry, I was trying to take a shortcut in my explanation.

What I mean is, if you have a dynamic_cast operator you would not have the 
issue of casting away const when you meant to downcast.

const Base obj1 = new Derived();
auto obj2 = cast(Derived)(obj1); // oops: meant to only down cast

If you had a dynamic cast, you wouldn't use a plain cast to downcast. The 
implied question was: why do you need a const_cast then?

But you are asking to have only const_cast and dynamic_cast (oh, and 
static_cast), so it makes sense.

I don't see a need to distinguish between dynamic and static casts in D. 
These are not concepts you are going to confuse, and in general they don't 
even compile in the same place:

auto a = cast(double) b;

If b is a class instance, it won't compile (with the exception of opCast, 
which implies it is safe to cast to double anyway).

But I think D's implicit casting is important. It provides a way to do 
"safe" conversions, allowing code to look cleaner. When using D, you 
shouldn't need to question whether an assignment is performing a conversion 
that is causing an issue in your code.

The type you are assigning to might be an issue, but that is not a bug 
introduced by the conversion. For example, you can assign a real to a float 
implicitly. This could result in a loss of precision. The bug did not come 
from an implicit conversion to float; it came about because the precision of 
real was needed but float was used.

My initial thought was it was a bad idea, but I came up with that reasoning to 
make myself feel good about D :D

> > Then be explicit in all of _your_ code. That won't stop others from using 
> > implicit conversion, but you can just assume they are of the same type and 
> > be fine.
> > 
> 
> since the entire point is to prevent bugs by having compiler checks, I don't 
> see how the above conclusion helps me at all. I want the compiler to prevent 
> me from getting the kinds of bugs previously shown. 

The compiler _is_ checking the code. There is nothing inherently wrong with 
converting an int to a double, except that some larger integers cannot be 
represented exactly (but making the conversion explicit will not resolve 
this, and won't result in people considering it). The example of:

double d = 5 / 2;

Is one place I agree could use an explicit conversion to double. I think this 
is the only major logic bug solved by forcing type conversions. But I also 
believe when the compiler complains the solution to fix it will be:

double d = cast(double) 5 / 2;

Even though, to be the conversion the compiler complained about, the cast 
needs to wrap the whole (5 / 2). That is because a complaining compiler does 
not make a thinking programmer, and the more it complains, the less thinking 
happens. If you want to say that the programmer sucks if they don't think 
about the changes they are making, then they also suck when they don't think 
about 5 / 2 returning an int.
