On Sunday, 27 May 2018 at 21:16:46 UTC, Neia Neutuladh wrote:
On Sunday, 27 May 2018 at 20:50:14 UTC, IntegratedDimensions wrote:
The only problem where it can leak is when we treat a cat as an animal, then put dog food into the animal, which is valid when the cat is treated as an animal, then cast back to cat. Now the cat has dog food, which is invalid.

It sounds like you don't want to have a `food` setter in the `Animal` base class. Instead, you want setters in each child class that take the specific type required, and possibly an abstract getter in the base class. You could use a mixin to ease the process of defining appropriate food types, and you could have a method that takes the base `Food` class and does runtime validation.

You might also change `Animal` to `Animal!(TFood : Food)` and go from there. You'd likely need to extract another base class or interface so you can have a collection of arbitrary animals.

The problem with all this is that it is not the correct way: it does not scale. For N parallel types you end up with Animal!(Food, Skin, Eyes, ...).
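
For reference, a minimal sketch (with made-up Food/Skin classes of my own, not anyone's actual code) of what that templated approach looks like once a second parallel member is added:

class Food { }      class DogFood : Food { }     class CatFood : Food { }
class Skin { }      class Fur : Skin { }

interface IAnimal { }   // non-templated base so arbitrary animals can share a collection

class Animal(TFood : Food, TSkin : Skin) : IAnimal
{
    TFood food;
    TSkin skin;
}

class Cat : Animal!(CatFood, Fur) { }
class Dog : Animal!(DogFood, Fur) { }

void main()
{
    IAnimal[] zoo;
    zoo ~= new Cat();
    zoo ~= new Dog();

    auto cat = new Cat();
    cat.food = new CatFood();    // fine
    // cat.food = new DogFood(); // rejected at compile time
}

Every additional parallel member means another template parameter and another constraint, and note that Animal!(CatFood, Fur) and Animal!(DogFood, Fur) are completely unrelated types as far as D is concerned; only the shared IAnimal interface ties them together.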

While not having a specific setter in the Animal class does prevent the bad assignment, and sort of solves the problem by requiring the assignment to occur on the proper object, it does not solve the general problem. If we create an animal we should be able to assign it animal food (rather than tools). In a sense this goes a bit too far. Remember, a type hierarchy could be much more complex, and we can have types of types, which would then all have to follow these "workarounds", resulting in a very complex mess.

life
   domain
      kingdom
         phylum
            class
               order
                  family
                     genus
                        species
                           ...
Within each of these types there are specializations.

The human species is a type of species in the Homo genus, etc.

Now, how can we model such a thing while providing maximum compile-time typing structure?

class life; // Sorta like object, something that can be anything and contains only the common structure to all things that can be considered living

class domain : life;
...

class species : genus;
class human : species;

Now, this is all standard. But all types will *use* other types. Humans will use, for example, tools. So we will have another complex hierarchy where tools fall somewhere

class tool : Object;
class hammer : tool;


so

class human : species
{
    tool[] tools;
}


Now, suppose we have bob the house builder, a human,

class houseBuilder : human;

and then we can add a tool to bob:

auto bob = new houseBuilder();
bob.tools ~= new hammer();

All fine and dandy!


This works because there is no natural transformation between human/houseBuilder on one side and tool/hammer on the other. A hammer being derived from a tool in no way corresponds to a houseBuilder being derived from a human. The "uses" relation in this case runs from types to types and from objects to objects (humans use tools and bob uses a hammer).

For the structure I am talking about, "parallel inheritance", there is a natural relationship between the types in one hierarchy and the types in the other, and they naturally transform in "parallel".

To understand this we need to think about something that parallels our taxonomy, so that as we inherit through one there is a *natural* inheritance through the other and a sort of "correspondence" (the natural transformation) that keeps everything aligned. Sorta like a ladder, where we can only move up and down each side, but every rung keeps the two sides aligned.

Life         ->    A
Domain       ->    B
...          ->    ...
Genus        ->    X
Species      ->    Y
Human        ->    Z
houseBuilder ->    _

Now, if you're paying attention, Human is actually not part of the taxonomy above; it is a specialization of Species.

Unfortunately in D we only have one level of typing rather than types of types. We only have a type. The taxonomy above would be a type of a type of an object, while humans would be a type of object. So what happens is that the different conceptualizations are conflated. Since types can be treated as sets, this is like saying that sets of sets are the same as sets. Well, sets of sets are sets, but not all sets are sets of sets, hence they are not exactly the same (it is inclusion rather than equality).

{1,2,3} is not a set of sets. (Although it is true we can treat 1, 2, 3 as sets and so we could say it is, but I don't want to get into the "sets of things that are not sets" problem.)

So, really what we have is that the taxonomy is precisely this paralleling that goes on:

Life         ->    Life
Domain       ->    Eukaryote
...          ->    ...
Genus        ->    Homo
Species      ->    Human -> houseBuilder -> bob -> hammer



So, for example, Eukaryotes have cells that build things and different ways to classify them. Hence "Builders" would have a hierarchy of things that build stuff (a heart cell, like bob, builds a heart using a "hammer").

The point with all this is that some things have natural transformations (by design) and some things don't, but unfortunately in D we must represent it all with classes since we have no way to specify any higher-order type. A human is contained within the type Species, but it is not a subtype of Species. There is a huge difference. This is why we say "the *Human* Species" and not "the Genus Species" (if Human were a subtype of Species then Species would be a subtype of Genus and we could say it and it would make sense). Similarly, bob is of type human, houseBuilder is not an object of type human, yet bob is a houseBuilder.

There are all kinds of *types* of relationships and they are generally confused and conflated. Depending on the problem one can get away with it or finagle code to make things work.

The only way D can handle this problem is by keeping class hierarchies distinct, which is what most people are able to do naturally anyway. Even though classes are classes, if they are unrelated it is sort of like grouping them into different type sets.

The problem comes when they are not completely unrelated. In D we only have the ability to include as an "element" an object (a field), its type (the field type), and template parameters. This type of dependency, inclusion, is very open. It really allows just about anything to be included as long as the type matches. We can make it more or less free by using template parameters:

class A(T)
{
    T a;
}

Here A depends on any type T, which makes A more general than A!int. Of course, we must always specify T at some point, so ultimately A!T is just as restrictive as A!int; we only gain syntactic sugar using template parameters (everything that can be done with template parameters can be done without).

We do have some tools to restrict T, though, using constraints:

class A(T)
if (is(T == int))
{
    T a;
}

and this helps quite a bit since we could do

class A(T)
if (is(T : Q))
{
   T a;
}
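
For concreteness, with a stand-in base class of my own playing the role of Q, only types convertible to Q are accepted:

class Q { }
class Q2 : Q { }

class A(T)
if (is(T : Q))
{
    T a;
}

void test()
{
    auto ok = new A!Q2;      // compiles: Q2 implicitly converts to Q
    // auto bad = new A!int; // rejected: int does not satisfy is(T : Q)
}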


but even with all this we can't do something very simple:

class A
{
   T a;
}

class C : A
{
   TT : T a;
}


We can't specify parallel restrictions.

If C inherits from A then TT must inherit from T.
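
D cannot state that restriction on a field directly. The closest sketch I can offer (placeholder classes T and TT of my own, and only a partial workaround) is a getter with a covariant return type plus a setter that checks at runtime:

class T { }
class TT : T { }

class A
{
    private T _a;
    T a() { return _a; }                 // the A-level view sees a T
    void a(T value) { _a = value; }      // unchecked at this level
}

class C : A
{
    override TT a()                      // covariant return: through C you get a TT
    {
        return cast(TT) super.a();
    }

    override void a(T value)
    {
        assert((cast(TT) value) !is null, "C only accepts TT"); // runtime, not compile time
        super.a(value);
    }
}

The getter side is statically typed (code holding a C gets a TT), but the setter can only be validated at runtime, which is exactly the gap being complained about here.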

Sure, we can use template parameters for a small number of inclusions, but the code bloat potentially grows exponentially, and does this really solve the problem?

class bit;

class ebit : bit
{
    bit[8] data;
}

class number
{
   bit[] a;
}

class byte : number
{
   (ebit : bit)[] a;
}

Every byte is a number and every ebit is a bit (using the composite pattern). They are separate "classes" of type relationships:


number      bit

byte        ebit


We can always go up the ladder. A double is a number and an ebit is a bit.

We cannot always go down the ladder. Not all numbers are bytes and not all bits are ebits.

We can, though, mix and match in one way. A number can use an ebit (since it is just a bit). The only case that fails is when a byte, treated as a number, is given a plain bit. This failure case is no different than trying to treat a bit as an ebit or a number as a double when it is not.

What the above does is essentially expand every bit to 8 bits. This would scale every number by a factor of 8, and 7 of those bits would go unused when the byte is treated as a plain number.
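
Concretely, that failing case looks something like this (class names capitalized here only because byte is a built-in D type):

class Bit { }

class EBit : Bit
{
    Bit[8] data;        // composite: an EBit is built from 8 Bits
}

class Number
{
    Bit[] a;
}

class Byte : Number
{
    // intended invariant: a holds only EBits, but that cannot be stated here
}

void main()
{
    Number n = new Byte();   // a Byte treated as a Number
    n.a ~= new Bit();        // compiles, but silently breaks Byte's intended invariant
}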


Now, if we can't specify such a nice parallel relation, we are stuck using bit rather than ebit, even though ebit is the more natural type to use as it relates to byte.

What you should realize is that the failing condition is the same condition that fails in general: trying to downcast an object to the wrong type. This is always the problem and it is no different here. We just have two parallel type hierarchies that are naturally related, so the downcast is more complex, as it must deal with both sides (or many sides).

For multiple inclusions the process is just as simple. If we are downcasting across N type hierarchies then each type must be able to downcast. Hence for N inclusions there are N+1 potential ways to fail (the object itself plus each of the N included members):

class A
{
   T1 a;
   T2 b;
   T3 c;
}

class C : A
{
   TT1 : T1 a;
   TT2 : T2 b;
   TT3 : T3 c;

}


and when casting from an A to a C, it may be that any combination of a, b, c contains an invalid type that can't be cast. Although, with proper design, everything should fail together or not fail at all.

All this is really no different than the single-object case. You can always upcast but can only downcast if consistency is preserved. Here we just have to check several cases and deal with the slight complexity that having multiple cases gives. In the singular case we either get the object or null. In the multiple case we get an N-tuple where each element is either the object or null. For C above we get the possibilities (C | null, TT1 | null, TT2 | null, TT3 | null).
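
A rough sketch of what that multi-way check could look like, assuming hypothetical T1/TT1-style hierarchies mirroring A and C above (D gives no help here, so the nulls have to be produced by hand):

import std.typecons : Tuple, tuple;

class T1 { }   class TT1 : T1 { }
class T2 { }   class TT2 : T2 { }
class T3 { }   class TT3 : T3 { }

class A
{
    T1 a;
    T2 b;
    T3 c;
}

class C : A
{
    // intended: a, b, c are really TT1, TT2, TT3; checked below at runtime
}

// Each element of the result is either the successfully cast object or null,
// i.e. (C | null, TT1 | null, TT2 | null, TT3 | null).
Tuple!(C, TT1, TT2, TT3) downcastAll(A x)
{
    auto c = cast(C) x;
    return tuple(c,
                 c is null ? null : cast(TT1) c.a,
                 c is null ? null : cast(TT2) c.b,
                 c is null ? null : cast(TT3) c.c);
}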












