Re: New programming paradigm

2018-06-03 Thread DigitalDesigns via Digitalmars-d-learn

On Sunday, 3 June 2018 at 16:36:52 UTC, Simen Kjærås wrote:

On Sunday, 3 June 2018 at 14:57:37 UTC, DigitalDesigns wrote:

On Sunday, 3 June 2018 at 09:52:01 UTC, Malte wrote:
You might want to have a look at 
https://wiki.dlang.org/Dynamic_typing
This sounds very similar to what you are doing. I never 
really looked into it, because I prefer to know which type is 
used and give me errors if I try to do stupid things, but I 
think it's a cool idea.


No, this is not what I'm talking about, although maybe it 
could be related in some way.


Actually, it sort of is. Your mapEnum is essentially the same 
as std.variant.visit 
(https://dlang.org/phobos/std_variant#.visit), and 
std.variant.Algebraic is the type that encapsulates both the 
runtime tag and the void[] containing the data of unknown type.


Now, there may be many important differences - Algebraic 
encapsulates the data and tag, which may or may not be what you 
want, visit only takes one algebraic argument, mapEnum may be 
faster, more or less generic, etc. The idea of converting a 
run-time value to a compile-time value is the same, though.


--
  Simen


I didn't know that variants had those functions! Pretty nice. 
Yes, it is similar to what I'm doing. Same principles, just a 
slightly different perspective. I use enums, UDAs, and templates 
rather than Algebraic and delegates.


The difference is that the enum stores only the type information, 
whereas Algebraic stores both the value and the type info.


If I had known about this before, I might have used it instead 
and everything would probably have been fine.


The only thing is that the enum version lets me store the type 
info separately from the data. When several variables depend on 
the same type id, I think that will be a little easier than 
having to manage and sync the type info of several Algebraics 
across several variables.


For example

dataType type;
void[] in, out;

rather than

Algebraic!(type1,..., typen) in, out;

and then having to make sure the types are synced between in and 
out. At least in my case it might be a little easier. Also, my 
way uses a templated function directly rather than an array of 
lambdas, although they are equivalent:


Algebraic!(string, int) variant;

variant.visit!((string s) => cast(int) s.length, (int i) => i)();


which could be written as

variant.visit!((string s) => foo(s), (int i)=> foo(i))();

auto foo(T)(T t) { }


would become

enum variant
{
@("int") _int,
@("string") _string,
}

mixin(variant.MapEnum!("foo")());

auto foo(T)(T t) { }


So they definitely are very similar, and might actually be 
identical. I haven't used Algebraic and visit enough to know.


What I do know is that for several Algebraics you would have to 
do something like


variant.visit!((string s) => variant2.visit!((double d) => foo(s, d))(),
               (int i) => foo(i))();


etc., which creates the nested switch structure and can get 
complicated, while my method remains one line; foo just takes 
more than one template parameter. My feeling is that mine is a 
little less robust, since it's aimed at specific kinds of code 
while visit is a little more general, mainly because of the 
hard-coded mixin structure.
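
For concreteness, here is a rough, hedged sketch of that contrast 
(the type lists and names are made up for illustration): nested 
visits are needed to reach both concrete types of two Algebraics, 
whereas a single generated switch can hand both compile-time types 
to one templated foo.

import std.variant : Algebraic, visit;

alias A = Algebraic!(string, int);
alias B = Algebraic!(double, long);

void foo(T1, T2)(T1 a, T2 b) { /* both types known at compile time */ }

void nested(A a, B b)
{
    // one visit per Algebraic, nested, to recover both concrete types
    a.visit!(
        (string s) => b.visit!((double d) => foo(s, d), (long l) => foo(s, l)),
        (int i)    => b.visit!((double d) => foo(i, d), (long l) => foo(i, l))
    );
}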







Re: New programming paradigm

2018-06-03 Thread Simen Kjærås via Digitalmars-d-learn

On Sunday, 3 June 2018 at 14:57:37 UTC, DigitalDesigns wrote:

On Sunday, 3 June 2018 at 09:52:01 UTC, Malte wrote:
You might want to have a look at 
https://wiki.dlang.org/Dynamic_typing
This sounds very similar to what you are doing. I never really 
looked into it, because I prefer to know which type is used 
and give me errors if I try to do stupid things, but I think 
it's a cool idea.


No, this is not what I'm talking about, although maybe it could 
be related in some way.


Actually, it sort of is. Your mapEnum is essentially the same as 
std.variant.visit (https://dlang.org/phobos/std_variant#.visit), 
and std.variant.Algebraic is the type that encapsulates both the 
runtime tag and the void[] containing the data of unknown type.


Now, there may be many important differences - Algebraic 
encapsulates the data and tag, which may or may not be what you 
want, visit only takes one algebraic argument, mapEnum may be 
faster, more or less generic, etc. The idea of converting a 
run-time value to a compile-time value is the same, though.


--
  Simen


Re: New programming paradigm

2018-06-03 Thread Paul Backus via Digitalmars-d-learn
On Monday, 4 September 2017 at 03:26:23 UTC, EntangledQuanta 
wrote:
Take a variant type. It contains the "type" and the data. To 
simplify, we will treat look at it like


(pseudo-code, use your brain)

enum Type { int, float }

foo(void* Data, Type type);

The normal way to deal with this is a switch:

switch(type)
{
case int: auto val = *(cast(int*)Data);
case float: auto val = *(cast(float*)Data);
}


But what if the switch could be generated for us?

[...]

But, in fact, since we can specialize on the type we don't have 
to use switch and in some cases do not even need to specialize:


for example:

foo(T)(T* Data) { writeln(*Data); }

is a compile time template that is called with the correct type 
value at run-time due to the "magic" which I have yet to 
introduce.


Note that if we just use a standard runtime variant, writeln 
would see a variant, not the correct type that Data really is. 
This is the key difference and what makes this "technique" 
valuable. We can treat our dynamic variables as compile time 
types(use the compile time system) without much hassle. They 
fit naturally in it and we do not clutter our code switches. We 
can have a true auto/var like C# without the overhead of the 
IR. The cost, of course, is that switches are still used, they 
are generated behind the scenes though and the runtime cost is 
a few instructions that all switches have and that we cannot 
avoid.


To get a feel for what this new way of dealing with dynamic 
types might look like:


void foo(var y) { writeln(y); }

var x = "3"; // or possibly var!(string, int) for the explicit 
types used

foo(x);
x = 3;
foo(x);


It sounds like what you are describing is a sum type. There is an 
implementation of one in the standard library, 
std.variant.Algebraic, as well as several alternative 
implementations on code.dlang.org, including my own, "sumtype" 
[1].


Using sumtype, your example would look like this:

alias Var = SumType!(string, int);

void foo(Var y) {
    y.match!(
        (value) { writeln(value); } // template lambda
    );
}

Var x = "3";
foo(x);
x = 3;
foo(x);

The match method takes a list of functions as template arguments, 
and generates a switch statement that maps each possible type of 
Var to one of those functions. All type checking is done at 
compile time.
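
For example, match can also take one handler per type; a small 
sketch that assumes the sumtype package's documented match API 
(the handler bodies are just illustrative):

import sumtype;
import std.stdio : writeln;

alias Var = SumType!(string, int);

void describe(Var y)
{
    // the generated switch picks the handler matching the stored type
    y.match!(
        (string s) => writeln("string of length ", s.length),
        (int i)    => writeln("int with value ", i)
    );
}

void main()
{
    describe(Var("3"));  // prints: string of length 1
    describe(Var(3));    // prints: int with value 3
}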


[1] https://code.dlang.org/packages/sumtype


Re: New programming paradigm

2018-06-03 Thread DigitalDesigns via Digitalmars-d-learn

On Sunday, 3 June 2018 at 09:52:01 UTC, Malte wrote:

On Saturday, 2 June 2018 at 23:12:46 UTC, DigitalDesigns wrote:

On Thursday, 7 September 2017 at 22:53:31 UTC, Biotronic wrote:

[...]


I use something similar, where I use structs behaving like 
enums. Each field in the struct is an "enum value" with an 
attribute. This is because I have not had luck using attributes 
on enum values directly, and because structs allow enums with a 
bit more power.


[...]


You might want to have a look at 
https://wiki.dlang.org/Dynamic_typing
This sounds very similar to what you are doing. I never really 
looked into it, because I prefer to know which type is used and 
give me errors if I try to do stupid things, but I think it's a 
cool idea.


No, this is not what I'm talking about, although maybe it could 
be related in some way.


What I am talking about is hooking up a runtime variable that can 
take a few values, such as the members of an enum, and having 
those values mapped to a compile-time template parameter.


This way you get full compile-time checking of runtime code. 
Seems impossible? It's not!


What it does is leverage D's metaprogramming engine to deal with 
all the routine possibilities.


A large switch statement is what makes the runtime-to-compile-time 
magic happen:


int x;

switch(x)
{
    case 0: foo!int(); break;
    case 1: foo!double(); break;
    // etc...
    default: foo!void(); break;
}

See how the switch maps a runtime value x to a templated function 
foo?


We can then handle the x values with foo:

void foo(T)()
{
   // if x = 0 then T = int
   // if x = 1 then T = double


}

But inside foo we have T, the template parameter that is the 
compile-time representation of the dynamic variable x. Remember, 
x's value is unknown at compile time... the switch is what maps 
the runtime to the compile time.
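
Put together, a runnable version of that sketch could look like 
this (a minimal sketch; the dispatch helper and the printed text 
are just illustrative):

import std.stdio : writeln;

void foo(T)()
{
    // if x == 0 then T == int, if x == 1 then T == double
    writeln("T is ", T.stringof);
}

void dispatch(int x)
{
    switch (x)
    {
        case 0: foo!int();    break;
        case 1: foo!double(); break;
        default: foo!void();  break;
    }
}

void main()
{
    dispatch(0); // prints "T is int"
    dispatch(1); // prints "T is double"
}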


But in foo, because we have T, the type system all works fine.

What makes this very useful is that we can call other templated 
functions using T, and the meta engine will pick the right 
template specialization.


e.g.,

void bar(S)()
{

}


can be used inside foo by calling bar!T(). It doesn't seem like 
much here, but if you had to use x directly it would be a pain: 
either you would have to manually create switches or build a 
rat's nest of if statements. We never have to worry about that 
when using the above method, because it is exactly like 
programming at compile time, as if x were a compile-time value 
(like, say, just an int).


It works great when you have several template parameters and just 
want everything to work together without having to go to too much 
trouble:



void foo(A, B)()
{
   B b = 4;
   bar!(A)(b);
}


Suppose A can be int or double, and B can be float or long.

That is 4 different combinations one would normally have to 
represent. Not a big deal until you have to handle every 
combination.



Suppose you are working with void arrays. They contain typed 
data, but you don't know the type until runtime.


Without this technique you have to use casts and tricks, and you 
only find out at runtime if you screwed up the typing. Using this 
technique you will not have a void array but a T[], with T being 
any of the possible types that you specify using UDAs.


You could write

if (x == 0)
{
    foo(cast(int[])a);
}
else if (x == 1)
{
    foo(cast(double[])a);
}
else ...


but I can do that with one line which simply generates the switch 
for me. Really, all I'm doing is hiding the switch so that it 
looks like some magic is happening in one line. But the fact that 
it becomes really simple to do seems to open up its use, and 
conceptually one can then think of "x" as a compile-time variable 
that can take on several possibilities.
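
As a hedged sketch (the enum members, helper names, and the 
hand-written switch standing in for the generated one are 
illustrative, not the exact code from this thread), the 
generated-switch version of that void[] example could look like:

enum ElemType { @("int") _int, @("double") _double }

void foo(T)(void[] a)
{
    auto arr = cast(T[]) a;   // inside foo, T is an ordinary compile-time type
    // ... work with arr as a normal T[] ...
}

void dispatch(ElemType x, void[] a)
{
    // this hand-written switch is what the one-line mixin would expand to
    final switch (x)
    {
        case ElemType._int:    foo!int(a);    break;
        case ElemType._double: foo!double(a); break;
    }
}

void main()
{
    int[] data = [1, 2, 3];
    dispatch(ElemType._int, cast(void[]) data);
}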






Re: New programming paradigm

2018-06-03 Thread Malte via Digitalmars-d-learn

On Saturday, 2 June 2018 at 23:12:46 UTC, DigitalDesigns wrote:

On Thursday, 7 September 2017 at 22:53:31 UTC, Biotronic wrote:

[...]


I use something similar, where I use structs behaving like 
enums. Each field in the struct is an "enum value" with an 
attribute. This is because I have not had luck using attributes 
on enum values directly, and because structs allow enums with a 
bit more power.


[...]


You might want to have a look at 
https://wiki.dlang.org/Dynamic_typing
This sounds very similar to what you are doing. I never really 
looked into it, because I prefer to know which type is used and 
give me errors if I try to do stupid things, but I think it's a 
cool idea.


Re: New programming paradigm

2018-06-02 Thread DigitalDesigns via Digitalmars-d-learn

On Thursday, 7 September 2017 at 22:53:31 UTC, Biotronic wrote:
On Thursday, 7 September 2017 at 16:55:02 UTC, EntangledQuanta 
wrote:
Sorry, I think you missed the point completely... or I didn't 
explain things very well.


I don't think I did - your new explanation didn't change my 
understanding at least. This indicates I'm the one who's bad at 
explaining. Ah well.


The point of my post was mostly to rewrite the code you'd 
posted in a form that I (and, I hope, others) found easier to 
understand.



I see no where in your code where you have a variant like type.


True. I've now rewritten it to use std.variant.Algebraic with 
these semantics:


auto foo(T1, T2)(T1 a, T2 b, int n) {
import std.conv;
    return T1.stringof~": "~to!string(a)~" - "~T2.stringof~": "~to!string(b);

}

unittest {
import std.variant;
Algebraic!(float, int) a = 4f;
Algebraic!(double, byte) b = 1.23;

auto res = varCall!foo(a, b, 3);
assert(res == "float: 4 - double: 1.23");
}

template varCall(alias fn) {
import std.variant;
auto varCall(int n = 0, Args...)(Args args) {
static if (n == Args.length) {
return fn(args);
} else {
auto arg = args[n];
static if (is(typeof(arg) == VariantN!U, U...)) {
foreach (T; arg.AllowedTypes) {
if (arg.type == typeid(T))
                    return varCall!(n+1)(args[0..n], arg.get!T, args[n+1..$]);

}
assert(false);
} else {
return varCall!(n+1)(args);
}
}
}
}

Sadly, by using std.variant, I've given up on the elegant 
switch/case in exchange for a linear search by typeid. This can 
be fixed, but requires changes in std.variant.


Of course, it would be possible to hide all this behind 
compiler magic. Is that desirable? I frankly do not think so. 
We should be wary of adding too much magic to the compiler - it 
complicates the language and its implementation. This is little 
more than an optimization, and while a compiler solution would 
be less intrusive and perhaps more elegant, I do not feel it 
provides enough added value to warrant its inclusion.


Next, I'm curious about this code:


void bar(var t)
{
writeln("\tbar: Type = ", t.type, ", Value = ", t);
}

void main()
{
   bar(3); // calls bar as if bar was `void bar(int)`
   bar(3.4f); // calls bar as if bar was `void bar(float)`
   bar("sad"); // calls bar as if bar was `void bar(string)`
}


What does 'var' add here, that regular templates do not? 
(serious question, I'm not trying to shoot down your idea, only 
to better understand it) One possible problem with var here (if 
I understand it correctly) would be separate compilation - a 
generated switch would need to know about types in other source 
files that may not be available at the time it is compiled.


Next:


var foo(var x)
{
   if (x == 3)
   return x;
   return "error!";
}


This looks like a sort of reverse alias this, which I've argued 
for on many occasions. Currently, it is impossible to implement 
a type var as in that function - the conversion from string to 
var would fail. A means of implementing this has been discussed 
since at least 2007, and I wrote a DIP[1] about it way back in 
2013. It would make working with variants and many other types 
much more pleasant.


[1]: https://wiki.dlang.org/DIP52


I use something similar, where I use structs behaving like enums. 
Each field in the struct is an "enum value" with an attribute. 
This is because I have not had luck using attributes on enum 
values directly, and because structs allow enums with a bit more 
power.


When a runtime value depends on these structs, one can build a 
mapping between the values and functional aspects of the program. 
Since D has a nice type system, one can provide a single templated 
function that represents the code for all the enum values.



E.g.,

enum TypeID // actually done with a struct
{
   @("int") i, @("float") f
}


struct someType
{
   TypeID id;
}

someType.id is runtime dependent. But we want to map behavior for 
each type.


if (s.id == TypeID.i) fooInt();
if (s.id == TypeID.f) fooFloat();

For lots of values this is tedious and requires N functions. By 
turning foo into a template and autogenerating the mapping using 
mixins, we can get something like


mixin(MapEnum!(TypeID, "foo")(s.id));

which generates the following code:

switch(s.id)
{
   case TypeID.i: foo!int(); break;
   case TypeID.f: foo!float(); break;
}


and of course we must create foo:

void foo(T)()
{

}


but rather than one for each enum member, we only have to write 
one. For certain types of code, this works wonders. We can treat 
runtime-dependent values as if they were compile-time values 
without too much work. MapEnum maps runtime values to compile-time 
behavior, allowing us to use templates to handle runtime 
variables. T in foo effectively acts in a runtime fashion, 
depending on the value of id.
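
For reference, here is a minimal sketch of what a MapEnum-style 
generator could look like (an assumption on my part; the thread 
doesn't show the real implementation, and the runtime variable is 
named by a string here because mixin needs a compile-time string):

// hypothetical generator: builds the switch source from the enum's string UDAs
string MapEnum(E, string fn)(string idExpr)
{
    string code = "final switch (" ~ idExpr ~ ")\n{\n";
    static foreach (name; __traits(allMembers, E))
    {
        code ~= "    case " ~ E.stringof ~ "." ~ name ~ ": "
              ~ fn ~ "!("
              ~ __traits(getAttributes, __traits(getMember, E, name))[0]
              ~ ")(); break;\n";
    }
    return code ~ "}\n";
}

// usage, with TypeID and foo as above (note the variable is passed by name):
// mixin(MapEnum!(TypeID, "foo")("s.id"));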



Re: New programming paradigm

2017-09-07 Thread Biotronic via Digitalmars-d-learn
On Thursday, 7 September 2017 at 16:55:02 UTC, EntangledQuanta 
wrote:
Sorry, I think you missed the point completely... or I didn't 
explain things very well.


I don't think I did - your new explanation didn't change my 
understanding at least. This indicates I'm the one who's bad at 
explaining. Ah well.


The point of my post was mostly to rewrite the code you'd posted 
in a form that I (and, I hope, others) found easier to understand.



I see no where in your code where you have a variant like type.


True. I've now rewritten it to use std.variant.Algebraic with 
these semantics:


auto foo(T1, T2)(T1 a, T2 b, int n) {
import std.conv;
    return T1.stringof~": "~to!string(a)~" - "~T2.stringof~": "~to!string(b);

}

unittest {
import std.variant;
Algebraic!(float, int) a = 4f;
Algebraic!(double, byte) b = 1.23;

auto res = varCall!foo(a, b, 3);
assert(res == "float: 4 - double: 1.23");
}

template varCall(alias fn) {
import std.variant;
auto varCall(int n = 0, Args...)(Args args) {
static if (n == Args.length) {
return fn(args);
} else {
auto arg = args[n];
static if (is(typeof(arg) == VariantN!U, U...)) {
foreach (T; arg.AllowedTypes) {
if (arg.type == typeid(T))
                    return varCall!(n+1)(args[0..n], arg.get!T, args[n+1..$]);

}
assert(false);
} else {
return varCall!(n+1)(args);
}
}
}
}

Sadly, by using std.variant, I've given up on the elegant 
switch/case in exchange for a linear search by typeid. This can 
be fixed, but requires changes in std.variant.


Of course, it would be possible to hide all this behind compiler 
magic. Is that desirable? I frankly do not think so. We should be 
wary of adding too much magic to the compiler - it complicates 
the language and its implementation. This is little more than an 
optimization, and while a compiler solution would be less 
intrusive and perhaps more elegant, I do not feel it provides 
enough added value to warrant its inclusion.


Next, I'm curious about this code:


void bar(var t)
{
writeln("\tbar: Type = ", t.type, ", Value = ", t);
}

void main()
{
   bar(3); // calls bar as if bar was `void bar(int)`
   bar(3.4f); // calls bar as if bar was `void bar(float)`
   bar("sad"); // calls bar as if bar was `void bar(string)`
}


What does 'var' add here, that regular templates do not? (serious 
question, I'm not trying to shoot down your idea, only to better 
understand it) One possible problem with var here (if I 
understand it correctly) would be separate compilation - a 
generated switch would need to know about types in other source 
files that may not be available at the time it is compiled.


Next:


var foo(var x)
{
   if (x == 3)
   return x;
   return "error!";
}


This looks like a sort of reverse alias this, which I've argued 
for on many occasions. Currently, it is impossible to implement a 
type var as in that function - the conversion from string to var 
would fail. A means of implementing this has been discussed since 
at least 2007, and I wrote a DIP[1] about it way back in 2013. It 
would make working with variants and many other types much more 
pleasant.


[1]: https://wiki.dlang.org/DIP52


Re: New programming paradigm

2017-09-07 Thread EntangledQuanta via Digitalmars-d-learn

On Thursday, 7 September 2017 at 19:33:01 UTC, apz28 wrote:
On Thursday, 7 September 2017 at 17:13:43 UTC, EntangledQuanta 
wrote:
On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips 
wrote:

[...]


All types have a type ;) You specified in the above case that 
m is an int by setting it to 4 (I assume that is what var(4) 
means). But the downside is that, at least on some level, all 
the usable types must be known or the switch cannot be 
generated (there is the default case, which might be able to 
solve the unknown-type problem in some way).


[...]


Nice for simple types, but it fails for structs, arrays & objects.
The current variant implementation lacks a type id to check for 
those. Given that, is there a runtime check (not a compile-time 
trait) to tell whether a type is a struct, array, or object?


Cheer




No, it is not a big deal. One simply has to have a mapping; it 
doesn't matter what kind of type, only that it exists at compile 
time. It can be extended to any specific type. One will need to 
be able to include some type information in the types that do not 
have it, though, but that only costs a little memory.


The point is not the exact method I used, which is just fodder, 
but that if the compiler implemented such a feature, it would be 
very clean. I left out, obviously, a lot of details that the 
compiler would have to do. In the prototypes, you see that I 
included an enum... the enum is what does the work... it contains 
the type information.


enum types
{
   Class,
   Float,
   Int,
   MySpecificClass,
}

The switch can then be used, and as long as the actual value's 
'typeid' matches, it will link up with the template.


You can't use types directly, that would be pointless; they have 
to be wrapped in a variant-like type which contains the type 
value. e.g.,


struct Variant(T)
{
   types type;
   T val;
   alias val this;
}

which is a lightweight wrapper around anything. This is basically 
like std.variant.Variant, except the type indicator comes from an 
enum. Again, this simplifies the discussion, but it is not a 
problem for classes, structs, enums, or any other type, as long 
as they exist at compile time.
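
To make that concrete, a hedged sketch (the dispatch helper and 
the handler body are illustrative, not from the post) of how the 
enum tag drives the switch that links up with the template, 
including for a user-defined class:

enum types { Class, Float, Int, MySpecificClass } // as above

class MySpecificClass {}

void foo(T)()
{
    // inside foo, T is an ordinary compile-time type
}

void dispatch(types t)
{
    final switch (t)
    {
        case types.Class:           foo!Object();          break; // Object chosen arbitrarily for the generic Class tag
        case types.Float:           foo!float();           break;
        case types.Int:             foo!int();             break;
        case types.MySpecificClass: foo!MySpecificClass(); break;
    }
}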



I only used std.variant.Variant to simplify things, but the 
compiler would have to construct the typeid list internally. (I 
did it explicitly in my code for the types I was going to use.)



As far as runtime checking goes, no, because bits are bits. You 
can cast any pointer to any type you want and there is no way to 
know whether it is supposed to be valid or not. This is why you 
have to include the type info somewhere for the object. Classes 
have classinfo, but there would be no way to validate it 100%.








Re: New programming paradigm

2017-09-07 Thread apz28 via Digitalmars-d-learn
On Thursday, 7 September 2017 at 17:13:43 UTC, EntangledQuanta 
wrote:
On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips 
wrote:

[...]


All types have a type ;) You specified in the above case that m 
is an int by setting it to 4 (I assume that is what var(4) 
means). But the downside is that, at least on some level, all 
the usable types must be known or the switch cannot be 
generated (there is the default case, which might be able to 
solve the unknown-type problem in some way).


[...]


Nice for simple types, but it fails for structs, arrays & objects.
The current variant implementation lacks a type id to check for 
those. Given that, is there a runtime check (not a compile-time 
trait) to tell whether a type is a struct, array, or object?


Cheer


Re: New programming paradigm

2017-09-07 Thread EntangledQuanta via Digitalmars-d-learn
On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips 
wrote:
On Monday, 4 September 2017 at 03:26:23 UTC, EntangledQuanta 
wrote:
To get a feel for what this new way of dealing with dynamic 
types might look like:


void foo(var y) { writeln(y); }

var x = "3"; // or possibly var!(string, int) for the explicit 
types used

foo(x);
x = 3;
foo(x);

(just pseudo code, don't take the syntax literally, that is 
not what is important)


While this example is trivial, the thing to note is that there 
is one foo declared, but two created at runtime: one for a 
string and one for an int. It is like a variant, yet we don't 
have to do any testing. It is very similar to `dynamic` in C#, 
but better since we actually can "know" the type at compile 
time, so to speak. It's not that we actually know, but that we 
write code as if we knew... it's treated as if it's statically 
typed.


It is an interesting thought but I'm not sure of its utility. 
First let me describe how I had to go about thinking of what 
this means. Today I think it would be possible for a given 
function 'call()' to write this:


alias var = Algebraic!(double, string);

void foo(var y) {
mixin(call!writeln(y));
}

Again the implementation of call() is yet to exist but likely 
uses many of the techniques you describe and use.


Where I'm questioning the utility, and I haven't used C#'s 
dynamic much, is with the frequency I'm manipulating arbitrary 
data the same, that is to say:


auto m = var(4);
mixin(call!find(m, "hello"));

This would have to throw a runtime exception, that is to say, 
in order to use the type value I need to know its type.


All types have a type ;) You specified in the above case that m 
is an int by setting it to 4 (I assume that is what var(4) means). 
But the downside is that, at least on some level, all the usable 
types must be known or the switch cannot be generated (there is 
the default case, which might be able to solve the unknown-type 
problem in some way).



A couple of additional thoughts:

The call() function could do something similar to pattern 
matching but args could be confusing:


mixin(call!(find, round)(m, "hello"));

But I feel that would just get confusing. The call() function 
could still be useful even when needing to check the type to 
know what operations to do.


if(m.type == string)
mixin(call!find(m, "hello"));

instead of:
if(m.type == string)
m.get!string.find("hello");


The whole point is to avoid those checks as much as possible. 
With the typical library solution using variant, the checks are 
100% necessary. With the solution I'm proposing, the compiler 
generates the checks behind the scenes and calls the template 
that corresponds to the check. This is the main difference. We 
can use a single template that the switch directs all checks to. 
And since the template is compile time, we only need one, and we 
can treat it like any other compile-time template (that is the 
main key here: we are leveraging D's templates to deal with the 
runtime complexity).


See my reply to Biotronic with the examples I gave as they should 
be more clear.


How useful such things are is hard to tell without the actual 
ability to use them. The code I created in the other thread was 
useful to me, as it allowed me to handle a variant type that was 
beyond my control (given to me by an external library) in a nice 
and simple way using a template. Since all the types were 
confluent (integral values), I could use a single template 
without any type dispatching... so it worked out well.


e.g., take COM's VARIANT. If you are doing COM programming, 
you'll have to deal with it. The only way is a large switch 
statement; you can't get around that. Even with this method it 
will still require approximately the same checking, because most 
of the types are not confluent. So in these cases all the method 
does is push the "switch" into the template. BUT it still turns 
it into a compile-time test (since the runtime test was done in 
the switch). Instead of one large switch, one can do it in 
templates (and specialize where necessary), which, IMO, looks 
nicer, with the added benefit of more control, and is more in 
line with how D works.


Also, most of the work is simply at the "end" point. If, say, all 
of Phobos were rewritten to use these variants instead of runtime 
types, then a normal program would have to deal very little with 
any type checking. The downside would be an explosion in size and 
a decrease in performance (possibly mitigated to some degree, but 
still large).


So, it's not a panacea, but nothing is. I see it as more of a 
bridge between runtime and compile time that helps quite well in 
certain cases, e.g., having to write a switch statement for all 
possible types a variable could have. With the mixin, or a 
compiler solution, this is reduced to virtually nothing in many 
cases and ends up just looking like normal D template code. 

Re: New programming paradigm

2017-09-07 Thread EntangledQuanta via Digitalmars-d-learn

On Thursday, 7 September 2017 at 14:28:14 UTC, Biotronic wrote:
On Wednesday, 6 September 2017 at 23:20:41 UTC, EntangledQuanta 
wrote:
So, nobody thinks this is a useful idea, or is it that no one 
understands what I'm talking about?


Frankly, you'd written a lot of fairly dense code, so 
understanding exactly what it was doing took a while. So I sat 
down and rewrote it in what I'd consider more idiomatic D, 
partly to better understand what it was doing, partly to 
facilitate discussion of your ideas.


The usage section of your code boils down to this:




Sorry, I think you missed the point completely... or I didn't 
explain things very well.


I see no where in your code where you have a variant like type.


What I am talking about is quite simple: one chooses the correct 
template to use, not at compile time based on the type (like 
normal), but at runtime based on a runtime variable that 
specifies the type. This is how variants are normally used, 
except that one must manually call the correct function or code 
block based on the variable's value.



Here is a demonstration of the problem:


import std.stdio, std.variant, std.conv;



void foo(T)(T t)
{
writeln("\tfoo: Type = ", T.stringof, ", Value = ", t);
}

void bar(Variant val)
{
writeln("Variant's Type = ", to!string(val.type));

// foo called with val as a variant
foo(val);

writeln("Dynamic type conversion:");
switch(to!string(val.type))
{
		case "int": foo(val.get!int); break;	// foo called with val's value as int
		case "float": foo(val.get!float); break;	// foo called with val's value as float
		case "immutable(char)[]": foo(val.get!string); break;	// foo called with val's value as string
		case "short": foo(val.get!short); break;	// foo called with val's value as short

default: writeln("Unknown Conversion!");
}


}

void main()
{
Variant val;
writeln("\nVariant with int value:");
val = 3;
bar(val);
writeln("\n\nVariant with float value:");
val = 3.243f;
bar(val);
writeln("\n\nVariant with string value:");
val = "XXX";
bar(val);
writeln("\n\nVariant with short value:");
val = cast(short)2;
bar(val);

getchar();
}

Output:

Variant with int value:
Variant's Type = int
foo: Type = VariantN!20u, Value = 3
Dynamic type conversion:
foo: Type = int, Value = 3


Variant with float value:
Variant's Type = float
foo: Type = VariantN!20u, Value = 3.243
Dynamic type conversion:
foo: Type = float, Value = 3.243


Variant with string value:
Variant's Type = immutable(char)[]
foo: Type = VariantN!20u, Value = XXX
Dynamic type conversion:
foo: Type = string, Value = XXX


Variant with short value:
Variant's Type = short
foo: Type = VariantN!20u, Value = 2
Dynamic type conversion:
foo: Type = short, Value = 2

The concept to glean from this is that the switch calls foo with 
the correct type at compile time. The switch creates the mapping 
from the runtime type that the variant can have to the 
compile-time foo.


So the first call to foo gives: `foo: Type = VariantN!20u, Value 
= 3`. The writeln call receives val as a variant! It knows how to 
print a variant in this case, lucky for us, but we have called 
foo!(VariantN!20u)(val)!


But the switch actually sets it up so it calls 
foo!(int)(val.get!int). This is a different foo!


The switch statement can be seen as a dynamic dispatch that calls 
the appropriate compile time template BUT it actually depends on 
the runtime type of the variant!


This magic links up a Variant, whose type is dynamic, with 
compile-time templates.


But you must realize the nature of the problem. Most code that 
uses a variant wouldn't use a single template to handle all the 
different cases:


switch(to!string(val.type))
{
case "int": fooInt(val.get!int); break;   
case "float": fooFloat(val.get!float); break; 
case "immutable(char)[]": fooString(val.get!string); break;   
case "short": fooShort(val.get!short); break;
default: writeln("Unknown Conversion!");
}



These functions might actually just be code blocks to handle the 
different cases.


Now, if you understand that, the paradigm I am talking about is 
to have D basically generate all the switching code for us 
instead of us ever having to deal with the variant internals.


We have something like

void bar(var t)
{
writeln("\tbar: Type = ", t.type, ", Value = ", t);
}


AND it would effectively print the same results. var is akin to 
Variant, but the compiler understands this and generates N 
different bars internally, plus a switch statement to dynamically 
call the desired one at runtime; yet we can simply call bar with 
any value we want.


e.g.,

void main()
{
   bar(3); // calls bar as if bar was 

Re: New programming paradigm

2017-09-07 Thread Jesse Phillips via Digitalmars-d-learn
On Monday, 4 September 2017 at 03:26:23 UTC, EntangledQuanta 
wrote:
To get a feel for what this new way of dealing with dynamic 
types might look like:


void foo(var y) { writeln(y); }

var x = "3"; // or possibly var!(string, int) for the explicit 
types used

foo(x);
x = 3;
foo(x);

(just pseudo code, don't take the syntax literally, that is not 
what is important)


While this example is trivial, the thing to note is that there 
is one foo declared, but two created at runtime: one for a 
string and one for an int. It is like a variant, yet we don't 
have to do any testing. It is very similar to `dynamic` in C#, 
but better since we actually can "know" the type at compile 
time, so to speak. It's not that we actually know, but that we 
write code as if we knew... it's treated as if it's statically 
typed.


It is an interesting thought but I'm not sure of its utility. 
First let me describe how I had to go about thinking of what this 
means. Today I think it would be possible for a given function 
'call()' to write this:


alias var = Algebraic!(double, string);

void foo(var y) {
mixin(call!writeln(y));
}

Again the implementation of call() is yet to exist but likely 
uses many of the techniques you describe and use.


Where I'm questioning the utility, and I haven't used C#'s 
dynamic much, is with the frequency I'm manipulating arbitrary 
data the same, that is to say:


auto m = var(4);
mixin(call!find(m, "hello"));

This would have to throw a runtime exception, that is to say, in 
order to use the type value I need to know its type.


A couple of additional thoughts:

The call() function could do something similar to pattern 
matching but args could be confusing:


mixin(call!(find, round)(m, "hello"));

But I feel that would just get confusing. The call() function 
could still be useful even when needing to check the type to know 
what operations to do.


if(m.type == string)
mixin(call!find(m, "hello"));

instead of:
if(m.type == string)
m.get!string.find("hello");


Re: New programming paradigm

2017-09-07 Thread Biotronic via Digitalmars-d-learn
On Wednesday, 6 September 2017 at 23:20:41 UTC, EntangledQuanta 
wrote:
So, nobody thinks this is a useful idea, or is it that no one 
understands what I'm talking about?


Frankly, you'd written a lot of fairly dense code, so 
understanding exactly what it was doing took a while. So I sat 
down and rewrote it in what I'd consider more idiomatic D, partly 
to better understand what it was doing, partly to facilitate 
discussion of your ideas.


The usage section of your code boils down to this:

alias EnumA = TypeMap!(float, int);
alias EnumB = TypeMap!(double, byte);

auto foo(T1, T2)(T1 a, T2 b) {
import std.conv;
	return T1.stringof~": "~to!string(a)~" - "~T2.stringof~": "~to!string(b);

}

unittest {
int a = 4;
double b = 1.23;
EnumA enumAVal = EnumA.get!float;
EnumB enumBVal = EnumB.get!byte;

auto res = enumMapper!(foo, enumAVal, enumBVal)(a, b);
assert(res == "float: 4 - byte: 1");
}

With this implementation behind the scenes:

struct TypeMap(T...) {
import std.meta : staticIndexOf;

private int value;
alias value this;

alias Types = T;

static TypeMap get(T2)() if (staticIndexOf!(T2, T) > -1) {
return TypeMap(staticIndexOf!(T2, T));
}
}

template enumMapper(alias fn, Maps...) {
auto enumMapper(Args...)(Args args) {
return enumMapperImpl!(OpaqueAliasSeq!(), Args)(args);
}
auto enumMapperImpl(alias ArgTypes, Args...)(Args args) {
alias Assigned = ArgTypes.Aliases;
alias Remaining = Maps[Assigned.length..$];

static if (Remaining.length == 0) {
import std.traits : Parameters;
alias fun = fn!Assigned;
alias params = Parameters!fun;
return fun(castTuple!params(args).expand);
} else {
alias typemap = Remaining[0];
switch (typemap) {
foreach (T; typemap.Types) {
case typemap.get!T:
                    alias Types = OpaqueAliasSeq!(Assigned, T);

return enumMapperImpl!Types(args);
}
default: assert(false);
}
}
}
}

template castTuple(T...) {
import std.typecons : tuple;
    auto castTuple(Args...)(Args args) if (Args.length == T.length) {

static if (T.length == 0) {
return tuple();
} else {
auto result = .castTuple!(T[1..$])(args[1..$]);
return tuple(cast(T[0])args[0], result.expand);
}
}
}

template OpaqueAliasSeq(T...) {
alias Aliases = T;
}


Re: New programming paradigm

2017-09-07 Thread XavierAP via Digitalmars-d-learn
On Wednesday, 6 September 2017 at 23:20:41 UTC, EntangledQuanta 
wrote:
So, no body thinks this is a useful idea or is it that no one 
understands what I'm talking about?


I think it may be a good use, although I haven't invested much 
time looking into your particular application.


It looks like a normal, sane use of templates. This is what they 
are primarily intended for. And yes, combining them with mixins 
provide some great possibilities that are not available in many 
other languages.


Have you seen how D recommends avoiding duplicate code when 
overloading operators, also by means of mixins:

https://dlang.org/spec/operatoroverloading.html#binary

I thought you might come from C, since you mention void pointers 
as an alternative. But that is not considered the normal way in 
D; your new way is far better, and more "normal".


It looks like you may be mistaken about what happens at 
"run-time", or it may just be a way of speaking. In D, templates 
called with different types generate different code already at 
compile time -- even if, in the source code you write, it all 
looks and works so polymorphically. This is a similar approach to 
C++'s, and it's why D generics are called "templates"; as opposed, 
for example, to C#, where generics are not compiled into static 
types and keep existing at run-time. Andrei discusses both 
approaches in his book, and why the first one was chosen for D.
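
A tiny illustration of that point (my own example, not from the 
post): each call below makes the compiler generate a separate, 
statically typed instantiation.

import std.stdio : writeln;

void foo(T)(T x)
{
    writeln(T.stringof, ": ", x);
}

void main()
{
    foo(3);        // the compiler generates foo!int at compile time
    foo("three");  // ...and a separate foo!string; both are concrete,
                   // statically typed functions in the binary
}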


Re: New programming paradigm

2017-09-06 Thread EntangledQuanta via Digitalmars-d-learn
So, nobody thinks this is a useful idea, or is it that no one 
understands what I'm talking about?