On Friday, October 11, 2019 8:43:49 AM MDT Just Dave via Digitalmars-d-learn wrote:
> I come from both a C++ and C# background. Those have been the
> primary languages I have used. In C# you can do something like
> this:
>
> public interface ISomeInterface<T>
> {
>     T Value { get; }
> }
>
> public class SomeClass<T> : ISomeInterface<T>
> {
>     public T Value { get; set; }
> }
>
> public class SomeOtherClass<T> : ISomeInterface<T>
> {
>     public T Value { get; set; }
> }
>
> public static class Example
> {
>     public static void Foo()
>     {
>         var instance1 = new SomeClass<int> { Value = 4 };
>         var instance2 = new SomeClass<int> { Value = 2 };
>
>         if (instance1 is ISomeInterface<int>)
>         {
>             Console.WriteLine("Instance1 is interface!");
>         }
>
>         if (instance2 is ISomeInterface<int>)
>         {
>             Console.WriteLine("Instance2 is interface!");
>         }
>     }
> }
>
> Expected output is that both WriteLines get hit:
>
> Instance1 is interface!
> Instance2 is interface!
>
> So now the D version:
>
> interface ISomeInterface(T)
> {
>     T getValue();
> }
>
> class SomeClass(T) : ISomeInterface!T
> {
> private:
>     T t;
>
> public:
>     this(T t)
>     {
>         this.t = t;
>     }
>
>     T getValue()
>     {
>         return t;
>     }
> }
>
> class SomeOtherClass(T) : ISomeInterface!T
> {
> private:
>     T t;
>
> public:
>     this(T t)
>     {
>         this.t = t;
>     }
>
>     T getValue()
>     {
>         return t;
>     }
> }
>
> ...which seems to work the same way with preliminary testing. I
> guess my question is... templates are different than generics, but
> can I feel confident continuing forward with such a design in D
> and expect this more or less to behave as I would expect in C#?
> Or are there lots of caveats I should be aware of?
Generics and templates are syntactically similar but are really doing very different things. Generic functions and types operate on Object under the hood. If you have Container<Foo> and Container<Bar>, you really just have Container<Object> with some syntactic niceties to avoid explicit casts. You get type checks to ensure that Container<Foo> isn't given a Bar unless Bar is derived from Foo, and the casts to and from Object when giving Container<Foo> a Foo are taken care of for you, but it's still always Container<Object> under the hood.

In the case of Java, the type of T in Container<T> or foo<T>() is truly only a compile-time thing, so the bytecode only has Container<Object> and no clue what type is actually supposed to be used (the casts are inserted where the container or function is used, but the container or function itself has no clue what the type is; it just sees Object). That makes it possible to cheat with reflection and put something not derived from Foo into Container<Foo>, but doing so will then usually result in runtime failures when the casts that the compiler inserted are run.

C# doesn't have that kind of type erasure, in that the information that Container<Foo> contains Foo rather than Object is maintained at runtime, but you still have a Container<Object>. It's just a Container<Object> with some metadata which keeps track of the fact that for this particular object of Container<Object>, Object is always supposed to be a Foo. As I'm a lot less familiar with C# than Java, I'm not all that familiar with what the practical benefits of that are, though I'd expect that it would mean that reflection code would catch when you're trying to put a Bar into Container<Foo> and wouldn't let you.

Note that for generics to work, the types used with them have to have a common base type, and you only ever get one version of a generic class or function, even if it gets used with many different types derived from Object.
For a primitive type like int or float (as well as for structs in the case of C#), the value has to be put into a type derived from Object in order to be used with generics (as I expect you're aware, C# calls this boxing and unboxing).

Templates don't act like this at all. Templates are literally templates for generating code. A template is nothing by itself. Something like

struct Container(T)
{
    T[] data;
}

or

T foo(T)(T t)
{
    return t;
}

doesn't result in any code being in the binary unless the template is instantiated with a specific type, and when that template is instantiated, code is generated based on the type that it's instantiated with. So, Container!int and Container!Foo result in two different versions of Container being generated and put in the binary - one which operates on int, and one which operates on Foo. There is no conversion to Object going on here. The code literally uses int and Foo directly and is generated specifically for those types.

Not only does that mean that the generated code can be optimized for the specific type rather than having to work for any Object, but it also means that the code itself can do something completely different for each type. e.g. with the template

T foo(T)(T t)
{
    static if(is(T == int))
        return t + 42;
    else static if(is(T == float))
        return t * 7;
    else
        return t;
}

foo!int would be equivalent to

int foo(int t)
{
    return t + 42;
}

foo!float would be equivalent to

float foo(float t)
{
    return t * 7;
}

and foo!(int[]) would be equivalent to

int[] foo(int[] t)
{
    return t;
}

and you would literally get functions like that generated in the binary. Every separate instantiation of foo results in a separate function in the binary, and which branch of the static if gets compiled in depends on which condition in the static if is true (just like with a normal if, except that the decision is made at compile time).
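To see that each instantiation really is a distinct type or function, here is a minimal, compilable sketch using the Container and foo examples above:

```d
import std.stdio;

struct Container(T)
{
    T[] data;
}

T foo(T)(T t)
{
    static if(is(T == int))
        return t + 42;
    else static if(is(T == float))
        return t * 7;
    else
        return t;
}

void main()
{
    // Container!int and Container!float are entirely separate types.
    static assert(!is(Container!int == Container!float));

    writeln(foo!int(1));          // int branch: prints 43
    writeln(foo!float(2.0f));     // float branch: prints 14
    writeln(foo!(int[])([1, 2])); // fallback branch: prints [1, 2]
}
```

The static assert passes at compile time precisely because the two instantiations are unrelated types, unlike Container<Object> in the generics model.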
In the case of D (unlike C++, which doesn't have function attributes the way that D does), because templated functions have attribute inference, the generated functions can actually have completely different attributes as well. e.g. with

T addOne(T)(T t)
{
    return t + 1;
}

addOne!int would result in something like

int addOne(int t) @safe pure nothrow
{
    return t + 1;
}

whereas because pointer arithmetic is @system, addOne!(int*) would result in something like

int* addOne(int* t) @system pure nothrow
{
    return t + 1;
}

And since not all types have +, something like addOne!Object or addOne!(int[]) wouldn't even compile. D uses template constraints to make it so that that can be caught before the internals of the template are even instantiated. e.g.

T addOne(T)(T t)
    if(is(typeof(t + 1)))
{
    return t + 1;
}

or

T addOne(T)(T t)
    if(__traits(compiles, t + 1))
{
    return t + 1;
}

would give an error for addOne!Object telling you that the template constraint failed, rather than telling you that the line return t + 1; failed to compile.

Template constraints can also be used to overload templates, similar to how static if can be used inside them to generate different code based on the template argument. e.g.

T foo(T)(T t)
    if(is(T == int))
{
    return t + 42;
}

T foo(T)(T t)
    if(is(T == float))
{
    return t * 7;
}

T foo(T)(T t)
    if(!is(T == int) && !is(T == float))
{
    return t;
}

though it's considered better practice to only overload templates when their APIs are different and to use static if to change the internals when the API is the same. So, in that example, the static if version would be better, whereas something like

auto find(alias pred, T, U)(T[] haystack, U needle)
    if(is(typeof(pred(T.init, U.init)) : bool))
{ ... }

and

auto find(alias pred, T, U, V)(T[] haystack, U needle1, V needle2)
    if(is(typeof(pred(T.init, U.init)) : bool) &&
       is(typeof(pred(T.init, V.init)) : bool))
{ ... }

would use overloads, because the number of parameters is different.
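Both the attribute inference and the constraint behavior can be checked directly with std.traits and __traits; a small sketch using the addOne example above:

```d
import std.traits : functionAttributes, FunctionAttribute;

T addOne(T)(T t)
    if(is(typeof(t + 1)))
{
    return t + 1;
}

void main()
{
    // addOne!int is inferred @safe; pointer arithmetic makes
    // addOne!(int*) @system instead.
    static assert(functionAttributes!(addOne!int) & FunctionAttribute.safe);
    static assert(!(functionAttributes!(addOne!(int*)) & FunctionAttribute.safe));

    // The constraint rejects types without +, so this call
    // fails to compile before the template body is ever looked at.
    static assert(!__traits(compiles, addOne(new Object)));

    assert(addOne(41) == 42);
}
```

All of the checks here run at compile time, which is exactly the point: the caller finds out about a bad instantiation from the constraint, not from an error deep inside the template.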
I'm sure that there are other issues to discuss here, but the core difference between generics and templates is that generics generate a single piece of code using Object that gets reused every time that the generic is used, no matter the types that are used with the generic, whereas templates generate a different piece of code for every set of template arguments. In fact, in D, something like

auto foo(string file = __FILE__, size_t line = __LINE__)(int blah) { ... }

would generate a different function for every single line that it's called on (which is why file and line number are usually passed as function arguments rather than template arguments). C++ fills in __FILE__ and __LINE__ based on the site of the declaration rather than the call site, so it wouldn't have quite the same problem, but for both languages, foo!int or foo<int> generates a different piece of code than foo!MyClass or foo<MyClass> generates.

So, you can get what gets called "template bloat" with templates when you instantiate them with a bunch of different template arguments, because you're getting a different piece of code generated for each instantiation, whereas with generics, you only get the one version of the generic, which means that you don't get the bloat, but you also don't have as much flexibility.

If all you're doing in D is creating templates that operate on types derived from Object, then you probably won't notice much difference between templates and generics, but you could notice some subtle differences when using types not derived from Object (e.g. at least with Java, because it doesn't have automatic boxing and unboxing the way that C# does, trying to use primitive types with generics can fail in surprising ways if you're used to using a language with templates), and because templates outright generate code, you can do a lot more with them than you could ever do with generics (e.g.
making code differ based on the template arguments by using template constraints and/or static if). D's compile-time capabilities actually make it extremely powerful for generating code, and templates are a key part of that.

- Jonathan M Davis
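The one-instantiation-per-call-site effect of a __LINE__ template argument can be demonstrated in a few lines (calledFrom is a hypothetical name, not from the discussion above):

```d
// Each distinct call line stamps out a brand-new instantiation
// with that line number baked in as a template argument.
size_t calledFrom(size_t line = __LINE__)()
{
    return line;
}

void main()
{
    auto a = calledFrom();
    auto b = calledFrom();
    assert(a != b); // two call sites, two separate instantiations

    // By contrast, a regular function parameter defaulted to __LINE__
    // would still give different values but reuse one function body,
    // which is why runtime parameters are the usual choice here.
}
```

In D, a default template argument of __FILE__ or __LINE__ is evaluated at the instantiation site, which is what makes this (and the resulting template bloat) possible.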