On Thursday, 7 September 2017 at 22:53:31 UTC, Biotronic wrote:
On Thursday, 7 September 2017 at 16:55:02 UTC, EntangledQuanta wrote:
Sorry, I think you missed the point completely... or I didn't explain things very well.

I don't think I did - your new explanation didn't change my understanding at least. This indicates I'm the one who's bad at explaining. Ah well.

The point of my post was mostly to rewrite the code you'd posted in a form that I (and, I hope, others) found easier to understand.

I see nowhere in your code where you have a variant-like type.

True. I've now rewritten it to use std.variant.Algebraic with these semantics:

auto foo(T1, T2)(T1 a, T2 b, int n) {
    import std.conv;
    return T1.stringof ~ ": " ~ to!string(a) ~ " - " ~ T2.stringof ~ ": " ~ to!string(b);
}

unittest {
    import std.variant;
    Algebraic!(float, int) a = 4f;
    Algebraic!(double, byte) b = 1.23;

    auto res = varCall!foo(a, b, 3);
    assert(res == "float: 4 - double: 1.23");
}

template varCall(alias fn) {
    import std.variant;
    auto varCall(int n = 0, Args...)(Args args) {
        static if (n == Args.length) {
            return fn(args);
        } else {
            auto arg = args[n];
            static if (is(typeof(arg) == VariantN!U, U...)) {
                foreach (T; arg.AllowedTypes) {
                    if (arg.type == typeid(T))
                        return varCall!(n+1)(args[0..n], arg.get!T, args[n+1..$]);
                }
                assert(false);
            } else {
                return varCall!(n+1)(args);
            }
        }
    }
}

Sadly, by using std.variant, I've given up on the elegant switch/case in exchange for a linear search by typeid. This can be fixed, but requires changes in std.variant.

Of course, it would be possible to hide all this behind compiler magic. Is that desirable? I frankly do not think so. We should be wary of adding too much magic to the compiler - it complicates the language and its implementation. This is little more than an optimization, and while a compiler solution would be less intrusive and perhaps more elegant, I do not feel it provides enough added value to warrant its inclusion.

Next, I'm curious about this code:

void bar(var t)
{
    writeln("\tbar: Type = ", t.type, ", Value = ", t);
}

void main()
{
   bar(3); // calls bar as if bar was `void bar(int)`
   bar(3.4f); // calls bar as if bar was `void bar(float)`
   bar("sad"); // calls bar as if bar was `void bar(string)`
}

What does 'var' add here, that regular templates do not? (serious question, I'm not trying to shoot down your idea, only to better understand it) One possible problem with var here (if I understand it correctly) would be separate compilation - a generated switch would need to know about types in other source files that may not be available at the time it is compiled.

Next:

var foo(var x)
{
   if (x == 3)
       return x;
   return "error!";
}

This looks like a sort of reverse alias this, which I've argued for on many occasions. Currently, it is impossible to implement a type var as in that function - the conversion from string to var would fail. A means of implementing this has been discussed since at least 2007, and I wrote a DIP[1] about it way back in 2013. It would make working with variants and many other types much more pleasant.

[1]: https://wiki.dlang.org/DIP52

I use something similar, where I use structs behaving like enums. Each field in the struct is an "enum value" with an attribute. This is because I have not had luck using attributes on enum values directly, and structs allow enums with a bit more power.

When a runtime value depends on these structs, one can build a mapping between the values and functional aspects of the program. Since D has a nice type system, one can provide a single templated function that represents the code for all the enum values.


E.g.,

enum TypeID // actually done with a struct
{
   @("int") i, @("float") f
}


struct someType
{
   TypeID id;
}

someType.id is runtime-dependent, but we want to map behavior for each type.

if (s.id == TypeID.i) fooInt();
if (s.id == TypeID.f) fooFloat();

For lots of values this is tedious and requires N functions. Turning foo into a template and auto-generating the mapping using mixins, we can get something like

mixin(MapEnum!(TypeID, "foo")(s.id));

which generates the following code:

switch(s.id)
{
   case TypeID.i: foo!int(); break;
   case TypeID.f: foo!float(); break;
   default: assert(0);
}
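The MapEnum generator itself isn't shown above; purely as a sketch, one way to write it is as a CTFE function that walks the struct's members and reads the @("...") attribute off each one. Note one assumption that differs from the call above: here the runtime expression is passed in as a string (rather than as s.id itself) so the whole call can be evaluated at compile time and handed to mixin(). TypeID's exact layout and everything not named in the text are assumptions:

```d
// Sketch only: struct-as-enum whose members carry the mapped type
// name in a UDA, plus a CTFE generator for the dispatch switch.
struct TypeID
{
    @("int")   enum i = 0;
    @("float") enum f = 1;
}

string MapEnum(E, string fn)(string idExpr)
{
    string code = "switch (" ~ idExpr ~ ")\n{\n";
    foreach (m; __traits(allMembers, E))
    {
        // The first UDA on the member names the template argument.
        enum attr = __traits(getAttributes, __traits(getMember, E, m))[0];
        code ~= "   case " ~ E.stringof ~ "." ~ m ~ ": "
              ~ fn ~ "!" ~ attr ~ "(); break;\n";
    }
    return code ~ "   default: assert(0);\n}";
}

// Usage: mixin(MapEnum!(TypeID, "foo")("s.id"));
```

This only handles single-token type names in the attribute; anything fancier would need parenthesized template arguments.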


and of course we must create foo:

void foo(T)()
{
   // T-specific behavior goes here
}


but rather than one for each enum member, we just have to write one. For certain types of code, this works wonders. We can treat runtime-dependent values as if they were compile-time values without too much work. MapEnum maps runtime values to compile-time behavior, allowing us to use templates to handle runtime variables. T in foo effectively acts in a runtime fashion, depending on the value of id.

My code is not elegant, as it suits my specific needs, but if a general-purpose framework were created, variant types could easily be treated as compile-time template parameters. It has been a pretty powerful concept in the code I write, which has to handle many of the primitive types and combinations of them. I can create functions like

auto Add(A, B, C)(A a, B b, C c)

and, depending on what the runtime values of some object are, have the mapping code call the appropriate Add instantiation, while only having to write one, since there is a canonical form such as

auto Add(A, B, C)(A a, B b, C c)
{
    return a + b + c;
}

The type system even verifies the code is correct! Template instantiations that are illegal will create errors.
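This also makes the cost of the mapping concrete: with two tagged operands and two allowed types apiece, the generated dispatch already has four cases. A hedged sketch, using a plain enum for brevity (addDyn and the double carrier type are illustrative assumptions, not the author's code):

```d
enum TypeID { i, f }

auto Add(A, B)(A a, B b) { return a + b; }   // one canonical template

// Nested switches enumerate every (ta, tb) combination; each case
// instantiates Add with the concrete pair of types.
double addDyn(TypeID ta, double a, TypeID tb, double b)
{
    final switch (ta)
    {
    case TypeID.i:
        final switch (tb)
        {
        case TypeID.i: return Add(cast(int) a, cast(int) b);
        case TypeID.f: return Add(cast(int) a, cast(float) b);
        }
    case TypeID.f:
        final switch (tb)
        {
        case TypeID.i: return Add(cast(float) a, cast(int) b);
        case TypeID.f: return Add(cast(float) a, cast(float) b);
        }
    }
    assert(0); // unreachable: the switches cover every combination
}
```

With N tagged arguments over k types this grows as k^N cases, which is the code-size trade-off discussed further down.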

Of course, if one has to specialize for each combination, then this method is not much more convenient than doing it all by hand. It still lets one think about runtime types that enumerate behavior as compile-time templates, though, and leverage all the functionality they have rather than relying on runtime code.

This, in fact, is why I use the technique. Instead of having runtime checks, I can move them into compile time, increasing performance (the cost is the switch statement). What generally makes it performant is that the code can be organized in a more streamlined manner, rather than turning into a rat's nest of handling all the possible combinations at runtime.


For example:

Suppose we must process a void[]. We do not know the underlying type until runtime, and our processing does not depend on which primitive type it is.

We just need to specify

void Process(T)(T[] buf)
{
   import std.algorithm : max;
   T t = T.init;
   foreach (b; buf)
      t = max(b, t);
   if (t > 0) assert(0, "error!");
}

and now we have a T that corresponds to the buffer's element type. MapEnum hooks the runtime type variable's value up to the compile-time template. Because Process does not depend on the specifics of T beyond it being primitive, we only have to write one general function that handles them all.
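Concretely, the generated hookup for the void[] case might look like this (a sketch with assumed names; the plain enum TypeID and the dispatch function are illustrative, not the author's code):

```d
import std.algorithm : max;

enum TypeID { i, f }   // plain enum standing in for the struct version

void Process(T)(T[] buf)
{
    T t = T.init;
    foreach (b; buf)
        t = max(b, t);
}

// The generated switch reinterprets the raw buffer and instantiates
// Process with the element type matching the runtime tag.
void dispatch(void[] data, TypeID id)
{
    final switch (id)
    {
    case TypeID.i: Process(cast(int[]) data); break;
    case TypeID.f: Process(cast(float[]) data); break;
    }
}
```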

Of course, return types are more difficult. I do not deal with return types in my code, but I suppose one could use the same kind of technique: store the return value in a variant along with its type, and then use the same machinery to deal with it.
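Sketching that idea with std.variant (already used earlier in the thread); two and callTwo are hypothetical names, standing in for a template that actually returns a value of its type parameter:

```d
import std.variant : Algebraic;

enum TypeID { i, f }

T two(T)() { return cast(T) 2; }   // stand-in for a value-returning template

alias Ret = Algebraic!(int, float);

// Each branch wraps its concretely typed result in the variant, so the
// caller can dispatch on the stored type again with the same machinery.
Ret callTwo(TypeID id)
{
    final switch (id)
    {
    case TypeID.i: return Ret(two!int());
    case TypeID.f: return Ret(two!float());
    }
    assert(0); // unreachable
}
```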

Not at all difficult, and it would make runtime coding very nearly like compile-time coding. Unfortunately, what is really going on here is that every combination a runtime variable can take gets mapped out in compile-time code, which can lead to exponential increases in code size. OTOH, it would probably be about as performant as possible, and easier to write with the proper tools than by hand.

A compiler feature set for this type of coding scheme would definitely be nice, if it could manage keeping everything consistent.




