Jesse Phillips Wrote:

> foobar Wrote:
>
> > how would the to! usage look like with these additions? I suspect at that
> > stage the benefits of to! genericity will be lost.
> >
> > to!(int)("1010101", 2); // base 2 ?
>
> to!(int)(base!(2)("1010101"));
Semantically this is what I wanted, but I still feel that a plain function
would be easier on the eyes. For instance:

int r = parseInt("101010", 2); // second parameter is optional and defaults to decimal base

> base!() returns a struct which can be created with strings/ints/... and
> to! can change it to the different representations?
>
> > to!(int)(3.14, ..); // how do you specify a floor strategy here, enum?
> > this quickly gets messy, ugly and redundant.
>
> to!(int)(floor(3.14));
>
> > int a = floor(3.14); // KISS - much better
>
> floor is not a type conversion! I could agree the type of floor should be
> an int, but it is not converting a real to an int.
>
> double a = cast(double) floor(3.14);

You're right, point taken. To fix my previous example:

int a = integer(floor(3.14)); // still simpler than the unified template syntax

> > > Do you really want that?
> >
> > a down cast should _only_ perform dynamic down casts in the OO sense. So:
> >
> > Base foo = new Derived;
> > Derived bar = downCast(foo); // compiles. performed at run-time
> >
> > [const] double -> int is not a down cast, it is a conversion.
>
> > > I was referring to the need for const_cast
> >
> > Sorry, I lost you here. What are you talking about?
>
> Sorry, I am trying to shortcut on explanation.
>
> What I mean is, if you had a dynamic_cast operator you would not have the
> issue of casting away const when you meant to downcast:
>
> const Base obj1 = new Derived();
> auto obj2 = cast(Derived)(obj1); // oops: meant to only down cast
>
> If you had dynamic cast, you wouldn't use cast to downcast. The implied
> question was, why do you need a const_cast then?
>
> But you are asking for only having const_cast and dynamic_cast, so it
> makes sense. Oh, and static_cast.

Thanks for explaining. As you said, I want only const_cast and down_cast
operators to be provided by the language.
> To me, I don't see a need to distinguish between dynamic and static cast
> in D. These are not concepts you are going to confuse and in general they
> don't even compile together:
>
> auto a = cast(double) b;
>
> If b is a class it won't compile (with the exception of opCast, which
> implies it is safe to cast to double anyway).

IMO, there is a need to distinguish the two because I may want to provide a
conversion of my class type to a different class. Contrived example:

class Person {}
class Kid : Person {}
class Adult : Person {}

I want to be able to convert a Kid instance into an Adult instance when
said kid turns 18.

> But I think the implicit casting of D is important. It provides a way to
> do "safe" conversions allowing code to look cleaner. You shouldn't need
> to question if an assignment is performing a conversion that is causing
> an issue in your code, when using D.
>
> The type you are assigning to might be an issue, but this is not a bug
> introduced by the conversion. For example you can assign a real to a
> float implicitly. This could result in a loss of precision. The bug did
> not come from an implicit conversion to float, it came because the
> precision of real was needed but float was used.
>
> My initial thought was it was a bad idea, but I came up with that
> reasoning to make myself feel good about D :D

My opinion: this is a misfeature inherited from C. Assignment of an
incorrect type (e.g. a double value to an int) should be a compile-time
error.

> > > Then be explicit in all of _your_ code. That won't stop others from
> > > using implicit conversion, but you can just assume they are of the
> > > same type and be fine.
> >
> > Since the entire point is to prevent bugs by having compiler checks, I
> > don't see how the above conclusion helps me at all. I want the compiler
> > to prevent me from getting the kinds of bugs previously shown.
>
> The compiler _is_ checking the code.
> There is nothing inherently wrong with converting an int to a double,
> except larger numbers can be held in an int (but making the conversion
> explicit will not resolve this, and won't result in people considering
> it).

Because of differences in representation this is also unsafe. Not all
integral values can be represented accurately in a floating-point type,
and this has nothing to do with the size of int. I myself had such a bug
where I expected a value of 2.0 as the result of some calculation but got
1.99998.. instead.

> The example of:
>
> double d = 5 / 2;
>
> is one place I agree could use an explicit conversion to double. I think
> this is the only major logic bug solved by forcing type conversions. But
> I also believe when the compiler complains the solution to fix it will
> be:
>
> double d = cast(double) 5 / 2;
>
> Even though it needs to wrap the five/two. That is because a complaining
> compiler does not make a thinking programmer, and the more it complains
> the less thinking. If you want to say that the programmer sucks if they
> don't think about the changes they are making, then they also suck when
> they don't think 5 / 2 returns an int.

I disagree with this logic. Take a look at the ML family of languages to
see that more checks do not make worse programmers.