On Friday, 25 May 2018 at 01:02:00 UTC, Basile B. wrote:
On Friday, 25 May 2018 at 00:15:39 UTC, IntegratedDimensions wrote:
On Thursday, 24 May 2018 at 23:31:50 UTC, Alex wrote:
On Thursday, 24 May 2018 at 20:24:32 UTC, IntegratedDimensions wrote:
class T {}
class TT : T {}

interface I
{
   @property T t();
}

abstract class A
{
   T _t;
   @property T t() { return _t; }

}

class C : A
{

// Stuff below uses t as TT but compiler, of course, treats t as T
   ...
}


The issue is that I programmed the class C with a variable that was directly based on TT; I later factored the base class T out of TT and exposed it in I. (TT was refactored into T plus what is not T.)


As a side note:
I can hardly follow this, as you don't show where you use the interface I. However, especially if TT was refactored in such a way, i.e. into T and what is not T, why did you choose to derive from T instead of containing a T?


It really should be obvious that A was meant to derive from I; this is just standard OOP. Simply leaving off `: I` should not be a deal breaker, because it does not change the whole problem from black to white or vice versa.

T is a member to be included. You can only derive from one class: C can't derive from both A and T, and even if it could, that would mean something else.



https://en.wikipedia.org/wiki/Composition_over_inheritance
http://wiki.c2.com/?CompositionInsteadOfInheritance

Well, I can imagine useful cases, though...


This is not a composition pattern.

This is a parallel inheritance pattern.

TT : T = T
|    |   |
v    v   v
C  : A : I

TT is used with C and T with I.

When C changes to C', TT : T changes to TT' : T

All functions that use TT in C are forced to treat it as if it were of type T rather than TT, which requires a bunch of casts.

This is generally a violation of type logic: there is nothing that prevents t from being something like TTT, which has no direct relation to TT.

But the programming logic of the code enforces that t is of type TT in C *always*. So I don't know why I would have to use casting all the time. It would be nice if there were a simple, logical way to enforce a design pattern in the type system, knowing that it is enforced at runtime. This makes for cleaner code, nothing else.
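Incidentally, D can state that runtime guarantee directly: a class `invariant` is checked around every public member call, so C can assert that `_t` really is a TT. A minimal sketch (class names mirror the example above; the constructor is an assumption added only to make it self-contained):

```d
class T {}
class TT : T {}

abstract class A
{
    T _t;
    @property T t() { return _t; }
}

class C : A
{
    // Runtime statement of the design rule: inside C, _t is always a TT.
    // The invariant runs before and after every public member call (and
    // after construction), and compiles out with -release.
    invariant
    {
        assert(_t is null || cast(TT) _t !is null);
    }

    this() { _t = new TT(); }
}
```

This doesn't remove the casts, but it does make the "t is always a TT in C" rule a checked fact rather than a comment.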




But all the code in C assumes t is of type TT, yet due to the interface it looks like a T, even though internally it is actually a TT.

What I'd like to do is

class C : A
{
   private override @property TT t() { return cast(TT) _t; } // null check if necessary
   // Stuff below uses t which is now a TT
   ...
}

or whatever.

This is simply so I don't have to rename or cast all my uses of t in C to type TT.

I'm pretty much guaranteed that in C, t will be type TT due to the design(C goes with TT like bread with butter).

So it would be nice if I could somehow inform the type system that, in C, t is always of type TT, and have it treated as such rather than being forced to cast explicitly at every use. Again, I could rename things to avoid the same-name usage, but in this case that is not necessary, given the design.

Is there any semantics that can get me around having to rename?
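For what it's worth, D's covariant return types already permit something very close to the override sketched above: an overriding method may narrow its return type from T to TT, so any code that sees the static type C gets a TT with no cast at the use site. A minimal sketch (the constructor is an assumption added so the example runs):

```d
class T {}
class TT : T {}

interface I
{
    @property T t();
}

abstract class A : I
{
    T _t;
    @property T t() { return _t; }
}

class C : A
{
    this() { _t = new TT(); }

    // Covariant override: narrowing the return type from T to TT is
    // allowed because every TT is a T. Through a C reference, t is
    // statically a TT; through an A or I reference it is still a T.
    override @property TT t() { return cast(TT) _t; }
}
```

The single cast lives inside the override; `is(typeof((new C()).t) == TT)` holds, while callers holding an A or I still see a T.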

Maybe you are looking for the Curiously Recurring Template Pattern?

```
interface I(P)
{
    @property P t();
}

abstract class T(P) : I!P
{
    P _p;
    @property P t() { return _p; }
}

class TT : T!TT
{
}

void main()
{
    auto tt = new TT();
    static assert(is(typeof(tt.t) == TT));
}
```

No, I am trying to keep parallel derived types consistently connected: if A is derived from B, C from D, and B uses D, then A uses C. This consistency cannot be guaranteed by the type system at compile time, because A is typed to use C; I want to restrict it further to D.

You must put a template parameter in the interface and specialize the class that implements the interface.

```
module runnable;

class T{}
class TT : T{}

interface I(N)
{
   @property N t();
}

abstract class A(N) : I!N
{
   N _t;
   @property N t() { return _t; }
}

class C1 : A!T{}
class C2 : A!TT{}

void main(string[] args)
{
    import std.traits;
    static assert(is(ReturnType!(C1.t) == T));
    static assert(is(ReturnType!(C2.t) == TT));
}

```

But obviously this won't work if you want to derive from C1 or C2...


Or if there are 100 fields.

This isn't a proper solution.

The whole issue is not outside of C but inside it.

Hypothetically

class C : A
{
   @property TT : T t() { return _t; }

   // t can be used directly as TT rather than having to do (cast(TT) t) everywhere t is used.
}

would solve the problem and it would scale.

The way it would work is that, inside C, t is treated as type TT, which is derived from T. The derivation means it satisfies the interface constraint, since we can always put a TT where a T is expected.

Since the type system doesn't allow such behavior, I'm trying to find a convenient way to simulate it that isn't more complicated than just casting. The problem is that casting is very verbose and scales with the code size.
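Short of language support, one way to pay for the cast exactly once is a private final accessor inside C: every use site then writes `tt` instead of `cast(TT) t`. A sketch under stated assumptions, not a definitive implementation; `tt` is a hypothetical name, and the constructor is added only to make the example self-contained:

```d
class T {}
class TT : T {}

abstract class A
{
    T _t;
    @property T t() { return _t; }
}

class C : A
{
    this() { _t = new TT(); }

    // The one and only cast: code inside C (and this module) writes `tt`
    // and gets the derived type. `final` avoids a virtual call; `private`
    // keeps it an implementation detail of C.
    private final @property TT tt()
    {
        auto r = cast(TT) _t;
        assert(_t is null || r !is null, "design rule: _t must be a TT in C");
        return r;
    }
}
```

This scales with the number of fields rather than with the number of uses, which seems to be the complaint about casting everywhere.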

Also, such notation would still work with deriving from C

class CC : C
{
   @property TTT : TT t() { return _t; }
}







