On Wed, 07 May 2014 20:58:21 -0700 Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> So there's this recent discussion about making T[] be refcounted if
> and only if T has a destructor.
>
> That's an interesting idea. More generally, there's the notion that
> making user-defined types as powerful as built-in types is a Good
> Thing(tm).
>
> Which brings us to something that T[] has that user-defined types
> cannot have. Consider:
>
> import std.stdio;
>
> void fun(T)(T x)
> {
>     writeln(typeof(x).stringof);
> }
>
> void main()
> {
>     immutable(int[]) a = [ 1, 2 ];
>     writeln(typeof(a).stringof);
>     fun(a);
> }
>
> This program outputs:
>
> immutable(int[])
> immutable(int)[]
>
> which means that the type of that value has subtly and silently
> changed in the process of passing it to a function.
>
> This change was introduced a while ago (by Kenji I recall) and it
> enabled a lot of code that was gratuitously rejected.
>
> This magic of T[] is something that custom ranges can't avail
> themselves of. In order to bring about parity, we'd need to introduce
> opByValue which (if present) would be automatically called whenever
> the object is passed by value into a function.
>
> This change would allow library designers to provide good solutions
> to making immutable and const ranges work properly - the way T[]
> works.
>
> There are of course a bunch of details to think about and figure out,
> and this is a large change. Please chime in with thoughts. Thanks!

As far as I can see, opByValue does the same thing as opSlice, except that
it's used specifically when passing to functions, whereas this code

immutable int[] a = [1, 2, 3];
immutable(int)[] b = a[];

or even

immutable int[] a = [1, 2, 3];
immutable(int)[] b = a;

compiles just fine. So, I don't see how adding opByValue helps us any.
Simply calling opSlice implicitly for user-defined types in the same places
that it's called implicitly on arrays would solve that problem. We may even
do some of that already, though I'm not sure.
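To make the comparison concrete, here is a minimal sketch (the name MyRange is hypothetical, not an actual library type) showing that the built-in tail-const conversion happens for arrays but that nothing analogous exists for a user-defined range:

```d
import std.stdio;

// A hypothetical minimal range wrapper, purely for illustration.
struct MyRange(T)
{
    T[] data;
}

void main()
{
    // Built-in arrays: a fully immutable array implicitly becomes a
    // tail-const slice when copied or sliced.
    immutable int[] a = [1, 2, 3];
    immutable(int)[] b = a;    // compiles: implicit slice
    immutable(int)[] c = a[];  // compiles: explicit slice, same result

    // A user-defined type gets no such conversion:
    const MyRange!int r;
    // MyRange!(const int) s = r;   // Error: unrelated template instances
    // MyRange!(const int) t = r[]; // Error: no opSlice defined

    writeln(typeof(b).stringof); // prints "immutable(int)[]"
}
```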
The core problem in either case is that const(MyStruct!T) has no relation to
MyStruct!(const T) or even const(MyStruct!(const T)). They're different
template instantiations and can therefore have completely different members.
So, attempts to define opSlice such that it returns a tail-const version of
the range tend to result in recursive template instantiations which then blow
the stack (or maybe error out due to too many levels - I don't recall which
at the moment - but regardless, it fails).

I think that careful and clever use of static ifs could resolve that, but
that's not terribly pleasant. At best, it would result in an idiom that
everyone would have to look up exactly how to do correctly every time they
needed to define opSlice. Right now, you'd have to declare something like

struct MyRange(T)
{
    ...
    static if(isMutable!T)
        MyRange!(const T) opSlice() const {...}
    else
        MyRange opSlice() const {...}
    ...
}

and I'm not even sure that that quite works, since I haven't attempted to
define a tail-const opSlice recently. Whereas ideally, you'd just do
something more like

struct MyRange(T)
{
    ...
    MyRange!(const T) opSlice() const {...}
    ...
}

but that doesn't currently work due to recursive template instantiations.
I don't know quite how we can make it work (maybe by making the compiler
detect when MyRange!T and MyRange!(const T) are effectively identical), but
I think that that's really the problem that we need to solve, not coming up
with a new function, because opSlice is already there to do what we need
(though it may need to have some additional implicit calls added to it to
make it match when arrays are implicitly sliced).

Regardless, I concur that this is a problem that sorely needs solving.
Without it, const and ranges really don't mix at all.

- Jonathan M Davis
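For what it's worth, here is one way the static-if idiom sketched above can be made to terminate - a sketch only, with a hypothetical MyRange, not a claim that this is the idiom's final form. The key is that MyRange!(const T) falls into the else branch, so the instantiation chain stops after one level instead of recursing:

```d
import std.traits : isMutable;

// Hypothetical range wrapper demonstrating a tail-const opSlice.
struct MyRange(T)
{
    T[] data;

    @property bool empty() const { return data.length == 0; }
    @property ref const(T) front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }

    static if (isMutable!T)
    {
        // Instantiates MyRange!(const T) exactly once; that instance's
        // own opSlice takes the else branch, ending the recursion.
        MyRange!(const T) opSlice() const
        {
            return MyRange!(const T)(data);
        }
    }
    else
    {
        // T is already const/immutable: slicing yields the same type.
        MyRange opSlice() const
        {
            return MyRange(data);
        }
    }
}

void main()
{
    const MyRange!int r = MyRange!int([1, 2, 3]);
    auto s = r[]; // a tail-const view of a head-const range
    static assert(is(typeof(s) == MyRange!(const int)));
    assert(s.front == 1);
}
```

Note how the const method's view of data (const(T[])) still converts to the field type const(T)[] in both branches, because arrays themselves enjoy exactly the implicit tail-const slicing being discussed.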