Re: What the hell is wrong with D?
On Wednesday, 20 September 2017 at 02:57:21 UTC, jmh530 wrote: On Wednesday, 20 September 2017 at 02:36:50 UTC, Jonathan M Davis wrote: Please try to be civil. It's fine if you're unhappy about some aspect of how D works and want to discuss it, but we do not condone personal attacks here. - Jonathan M Davis He seemed to be threatening the guy's life over operator precedence. Ridiculous... Are you an idiot? Seriously, you must be. You just want to create drama instead of supplying an actual logical argument (I read your argument and it is pathetic). Show me where I threatened the guy's life! Fucking moron. You must be some TSA goon or DHS wannabe.
Re: What the hell is wrong with D?
On Wednesday, 20 September 2017 at 02:36:50 UTC, Jonathan M Davis wrote: On Wednesday, September 20, 2017 02:16:16 EntangledQuanta via Digitalmars-d-learn wrote: On Tuesday, 19 September 2017 at 21:17:53 UTC, nkm1 wrote: > On Tuesday, 19 September 2017 at 17:40:20 UTC, EntangledQuanta wrote: >> [...] > There are two issues there; operator precedence and booleans (_win[0] == '@') being valid operands to +. > If someone is too stupid to learn how precedence works, they should consider a different career instead of blaming others. > OTOH, booleans converting to numbers is a very questionable feature. I certainly have never seen any good use for it. This is just an unfortunate legacy of C, which didn't even have booleans for a long time. You're an idiot, I know how operator precedence works far better than you do. Wanna bet? How much? Your house? Your wife? Your life? It's about doing things correctly; you seem to fail to understand, not your fault, can't expect a turd to understand logic. Please try to be civil. It's fine if you're unhappy about some aspect of how D works and want to discuss it, but we do not condone personal attacks here. - Jonathan M Davis But, of course, it's OK for him to call me an idiot. Let me quote, not that it matters, since you are biased and a hypocrite: "If someone is too stupid to learn how precedence works, they should consider a different career instead of blaming others." But when I call him an idiot, I'm put in the corner. I see how it works around here. What a cult!
Re: What the hell is wrong with D?
On Tuesday, 19 September 2017 at 22:11:44 UTC, Jesse Phillips wrote: On Tuesday, 19 September 2017 at 19:16:05 UTC, EntangledQuanta wrote: The D community preaches all this safety shit but when it comes down to it they don't seem to really care (look at the other responses, like "Hey, C does it" or "Hey, look up the operator precedence"... as if those responses are meaningful). jmh530 points out why you're met with such non-agreement on the issue. You're not open to discussion of why it is implemented in the fashion it is. Instead it is an attack on the community and Walter, as though there is no logical reason it is implemented the way that it is. I'm not open to discussion because it is not a discussion. There is no point. What good would it do to explain the shortcomings? You see the responses, the mentality. People think doing something wrong is valid because it was done. Two wrongs don't make a right no matter how you justify it. When someone takes on the task of doing a job and presents the results to a community, then refuses to accept responsibility for the failure to do the job properly and perpetuates ignorance (invalid logic that creates confusion, wastes people's time, etc.), then they deserve to be criticized; it's a two-way street. When they then make up excuses to try to justify the wrong and turn it into a right, they deserve to be attacked. It's not just a harmless mistake. People's lives could be in jeopardy, but do they care? Do they REALLY care? Of course not. They don't see it as a significant issue. Simply learn how D works exactly and you'll be fine! Of course, for someone that programs in about 20 different languages regularly, having logical consistency is important. It's one thing to say "Well, I made a mistake, let's try to remedy it the best we can" rather than "Well, too bad, we can't break backwards compatibility!". People want to perpetuate insanity (which is what being illogical is).
Sure you can express that it is illogical to have made that choice, but that requires first knowing what was used to make that decision. No, it doesn't. Logic is not based on circumstances; it's based on something that is completely independent of us... which is why it is called logic... because it is something we can all agree on regardless of our circumstances or environment... it is what math, and hence all science, is based on, and is the only real thing that has made steady progress in the world. Illogic is what all the insanity is based on... what wars are from, and just about everything else, when you actually spend the time to think about it, which most people don't. For example, one of the original principles for D was: if it looks like C it should have the same semantics or be a compiler error (note this was not completely achieved). Now if we look at other languages we see they implement it the same as C or they don't implement it at all. Just based on this, it would make sense to choose to implement it like C if it is desired to have. The suggestion I made fulfills this, but it also slightly defeats one purpose of the operator, being terse. We also now need to keep backwards compatibility; this fails. Again, two wrongs don't make a right. What is the point of reimplementing C exactly as C is done? There is already a C, why have two? Was the whole point of D not to improve upon C? Doesn't D claim to be a "better C"? So, if you are claiming that the choice for the ternary operator's issue of ambiguity was to be consistent with C, then that directly contradicts the statements that D is supposed to be safer and better. I'm fine with this AS long as it is clearly stated as such and people don't try to justify or pretend that it is a good thing, which is exactly the opposite of what they do. Most are followers of the cult and cannot make any rational decision on their own but simply parrot the elders.
So, when they do that, I have no desire or reason to be logical with them (again, it takes two to tango). For example, you have been rational, so I will be rational with you. To be rational, you must argue logically, which you have done. Even though you haven't really argued the issue (of course, I didn't state it clearly on purpose because this isn't really a discussion thread... I knew that the trolls/cult members would spew their stupid shit so I was just trolling them. Of course, I always hope that there would be some light at the end of the tunnel, which you provided a glimmer of... still all meaningless, nothing will change, at least not with the cult members, but someone that is not so brainwashed might be semi-enlightened if they implement their own language and not make the same mistakes). e.g., my attack is on the claims that D attempts to be *safe* and a *better C* and yet this (the ternary if) is just another instance of them contradicting themselves. Presenting something as safer when it is not gives the perception of safety
Re: What the hell is wrong with D?
On Tuesday, 19 September 2017 at 21:17:53 UTC, nkm1 wrote: On Tuesday, 19 September 2017 at 17:40:20 UTC, EntangledQuanta wrote: Yeah, that is really logical! No wonder D sucks and has so many bugs! Always wants me to be explicit about the stuff it won't figure out but it implicitly does stuff that makes no sense. The whole point of the parentheses is to inform the compiler about the expression to use. Not use everything to the left of ?. There are two issues there; operator precedence and booleans (_win[0] == '@') being valid operands to +. If someone is too stupid to learn how precedence works, they should consider a different career instead of blaming others. OTOH, booleans converting to numbers is a very questionable feature. I certainly have never seen any good use for it. This is just an unfortunate legacy of C, which didn't even have booleans for a long time. You're an idiot, I know how operator precedence works far better than you do. Wanna bet? How much? Your house? Your wife? Your life? It's about doing things correctly; you seem to fail to understand, not your fault, can't expect a turd to understand logic.
Re: What the hell is wrong with D?
On Tuesday, 19 September 2017 at 18:51:51 UTC, Jesse Phillips wrote: On Tuesday, 19 September 2017 at 17:40:20 UTC, EntangledQuanta wrote: I assume someone is going to tell me that the compiler treats it as writeln((x + (_win[0] == '@')) ? w/2 : 0); Yeah, that is really logical! Yeah, I've been bitten by that in languages like C#. I wish D hadn't followed in C#'s footsteps and had chosen a different syntax: `()? :` That way, if there aren't any parentheses, the compiler could throw an error until you specify what the operator is working with. It would make for a little overhead, but these complex ternary expressions can be confusing. Yes, it's not that they are confusing but illogical. a + b ? c : d in a complex expression can be hard to interpret if a and b are complex. The whole point of parentheses is to disambiguate and group things. To not use them is pretty ignorant. 1 + 2 ? 3 : 4 is ambiguous. Is it (1 + 2) ? 3 : 4 or 1 + (2 ? 3 : 4)? Well, ()?: is not ambiguous! The D community preaches all this safety shit but when it comes down to it they don't seem to really care (look at the other responses, like "Hey, C does it" or "Hey, look up the operator precedence"... as if those responses are meaningful). I'm just glad there is at least one sane person that decided to chime in... was quite surprised actually. I find it quite pathetic when someone tries to justify a wrong by pointing to other wrongs. It takes away all credibility that they have.
What the hell is wrong with D?
writeln(x + ((_win[0] == '@') ? w/2 : 0));
writeln(x + (_win[0] == '@') ? w/2 : 0);

The first returns x + w/2 and the second returns w/2! WTF!!! This stupid bug has caused me considerable waste of time. Thanks Walter! I know you care so much about my time! I assume someone is going to tell me that the compiler treats it as

writeln((x + (_win[0] == '@')) ? w/2 : 0);

Yeah, that is really logical! No wonder D sucks and has so many bugs! Always wants me to be explicit about the stuff it won't figure out but it implicitly does stuff that makes no sense. The whole point of the parentheses is to inform the compiler about the expression to use. Not use everything to the left of ?. Thanks for wasting some of my life... Just curious about who will justify the behavior and what excuses they will give.
Re: New programming paradigm
On Thursday, 7 September 2017 at 19:33:01 UTC, apz28 wrote: On Thursday, 7 September 2017 at 17:13:43 UTC, EntangledQuanta wrote: On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips wrote: [...] All types have a type ;) You specified in the above case that m is an int by setting it to 4 (I assume that is what var(4) means). But the downside, at least on some level, is that all the usable types must be known or the switch cannot be generated (there is the default case, which might be able to solve the unknown type problem in some way). [...] Nice for simple types but fails for struct, array & object. The current variant implementation lacks a type-id to check for the above ones. For this lacking, is there a runtime (not compile time - trait) way to check if a type is a struct or array or object? Cheer No, it is not a big deal. One simply has to have a mapping; it doesn't matter what kind of type, only that it exists at compile time. It can be extended to be used with any specific type. One will need to be able to include some type information in the types that do not have them, though, but that only costs a little memory.
The point is not the exact method I used, which is just fodder, but that if the compiler implemented such a feature, it would be very clean. I left out, obviously, a lot of details that the compiler would have to do. In the prototypes, you see that I included an enum... the enum is what does the work... it contains type information.

enum types { Class, Float, Int, MySpecificClass, }

The switch then can be used and, as long as the actual value's 'typeid' matches, it will link up with the template. You can't use types directly, that would be pointless; they have to be wrapped in a variant-like type which contains the type value. e.g.,

struct Variant(T) { types type; T val; alias val this; }

which is a lightweight wrapper around anything. This is basically like std.variant.Variant except the type indicator comes from an enum. Again, this simplifies the discussion, but it is not a problem for classes, structs, enums, or any other type, as long as they exist at compile time. I only used std.variant.Variant to simplify things, but the compiler would have to construct the typeid list internally. (I did it in my add explicitly for the types I was going to use) As far as runtime checking, no, because bits are bits. You can cast any pointer to any type you want and there is no way to know if it is supposed to be valid or not. This is why you have to include the type info somewhere for the object. Classes have classinfo, but there would be no way to validate it 100%.
Re: New programming paradigm
On Thursday, 7 September 2017 at 15:36:47 UTC, Jesse Phillips wrote: On Monday, 4 September 2017 at 03:26:23 UTC, EntangledQuanta wrote: To get a feel for what this new way of dealing with dynamic types might look like:

void foo(var y) { writeln(y); }

var x = "3"; // or possibly var!(string, int) for the explicit types used
foo(x);
x = 3;
foo(x);

(just pseudo code, don't take the syntax literally, that is not what is important) While this example is trivial, the thing to note is that there is one foo declared, but two created at runtime: one for a string and one for an int. It is like a variant, yet we don't have to do any testing. It is very similar to `dynamic` in C#, but better since we can actually "know" the type at compile time, so to speak. It's not that we actually know, but that we write code as if we knew... it's treated as if it's statically typed. It is an interesting thought but I'm not sure of its utility. First let me describe how I had to go about thinking of what this means. Today I think it would be possible for a given function 'call()' to write this:

alias var = Algebraic!(double, string);
void foo(var y) { mixin(call!writeln(y)); }

Again, the implementation of call() is yet to exist but likely uses many of the techniques you describe and use. Where I'm questioning the utility, and I haven't used C#'s dynamic much, is with the frequency I'm manipulating arbitrary data the same, that is to say:

auto m = var(4);
mixin(call!find(m, "hello"));

This would have to throw a runtime exception; that is to say, in order to use the typed value I need to know its type. All types have a type ;) You specified in the above case that m is an int by setting it to 4 (I assume that is what var(4) means). But the downside, at least on some level, is that all the usable types must be known or the switch cannot be generated (there is the default case, which might be able to solve the unknown type problem in some way).
A couple of additional thoughts: The call() function could do something similar to pattern matching, but the args could be confusing:

mixin(call!(find, round)(m, "hello"));

But I feel that would just get confusing. The call() function could still be useful even when needing to check the type to know what operations to do:

if(m.type == string) mixin(call!find(m, "hello"));

instead of:

if(m.type == string) m.get!string.find("hello");

The whole point is to avoid those checks as much as possible. With the typical library solution using variant, the checks are 100% necessary. With the solution I'm proposing, the compiler generates the checks behind the scenes and calls the template that corresponds to the check. This is the main difference. We can use a single template that the switch directs all checks to. But since the template is compile time, we only need one, and we can treat it like any other compile time template (that is the main key here: we are leveraging D's templates to deal with the runtime complexity). See my reply to Biotronic with the examples I gave, as they should be more clear. How useful such things are is hard to tell without the actual ability to use them. The code I created in the other thread was useful to me as it allowed me to handle a variant type that was beyond my control (given to me by an external library) in a nice and simple way using a template. Since all the types were confluent (integral values), I could use a single template without any type dispatching... so it worked out well. e.g., take COM's VARIANT. If you are doing COM programming, you'll have to deal with it. The only way is a large switch statement. You can't get around that. Even with this method it will still require approximately the same checking, because most of the types are not confluent. So, in these cases all the method does is push the "switch" into the template.
BUT it still turns it into a compile-time test (since the runtime test was done in the switch). Instead of one large switch, one can do it in templates (and specialize where necessary) which, IMO, looks nicer, with the added benefit of more control, and is more in line with how D works. Also, most of the work is simply at the "end" point. If, say, all of Phobos were rewritten to use these variants instead of runtime types, then a normal program would have to deal very little with any type checking. The downside would be an explosion in size and a decrease in performance (possibly mitigated to some degree, but still large). So, it's not a panacea, but nothing is. I see it as more of a bridge between runtime and compile time that helps in certain cases quite well. e.g., having to write a switch statement for all possible types a variable could have. With the mixin, or a compiler solution, this is reduced to virtually nothing in many cases and ends up just looking like normal D template code. Reme
Re: New programming paradigm
On Thursday, 7 September 2017 at 14:28:14 UTC, Biotronic wrote: On Wednesday, 6 September 2017 at 23:20:41 UTC, EntangledQuanta wrote: So, nobody thinks this is a useful idea, or is it that no one understands what I'm talking about? Frankly, you'd written a lot of fairly dense code, so understanding exactly what it was doing took a while. So I sat down and rewrote it in what I'd consider more idiomatic D, partly to better understand what it was doing, partly to facilitate discussion of your ideas. The usage section of your code boils down to this: Sorry, I think you missed the point completely... or I didn't explain things very well. I see nowhere in your code where you have a variant-like type. What I am talking about is quite simple: one chooses the correct template to use, not at compile time based on the type (like normal), but at runtime based on a runtime variable that specifies the type. This is how variants are normally used, except one must manually call the correct function or code block based on the variable's value.
Here is a demonstration of the problem:

import std.stdio, std.variant, std.conv;

void foo(T)(T t)
{
    writeln("\tfoo: Type = ", T.stringof, ", Value = ", t);
}

void bar(Variant val)
{
    writeln("Variant's Type = ", to!string(val.type));

    // foo called with val as a variant
    foo(val);

    writeln("Dynamic type conversion:");
    switch(to!string(val.type))
    {
        case "int": foo(val.get!int); break;                  // foo called with val's value as int
        case "float": foo(val.get!float); break;              // foo called with val's value as float
        case "immutable(char)[]": foo(val.get!string); break; // foo called with val's value as string
        case "short": foo(val.get!short); break;              // foo called with val's value as short
        default: writeln("Unknown Conversion!");
    }
}

void main()
{
    Variant val;

    writeln("\nVariant with int value:");
    val = 3;
    bar(val);

    writeln("\n\nVariant with float value:");
    val = 3.243f;
    bar(val);

    writeln("\n\nVariant with string value:");
    val = "XXX";
    bar(val);

    writeln("\n\nVariant with short value:");
    val = cast(short)2;
    bar(val);

    getchar();
}

Output:

Variant with int value:
Variant's Type = int
    foo: Type = VariantN!20u, Value = 3
Dynamic type conversion:
    foo: Type = int, Value = 3

Variant with float value:
Variant's Type = float
    foo: Type = VariantN!20u, Value = 3.243
Dynamic type conversion:
    foo: Type = float, Value = 3.243

Variant with string value:
Variant's Type = immutable(char)[]
    foo: Type = VariantN!20u, Value = XXX
Dynamic type conversion:
    foo: Type = string, Value = XXX

Variant with short value:
Variant's Type = short
    foo: Type = VariantN!20u, Value = 2
Dynamic type conversion:
    foo: Type = short, Value = 2

The concept to glean from this is that the switch calls foo with the correct type at compile time. The switch creates the mapping from the runtime type that the variant can have to the compile-time foo. So the first call to foo gives: `foo: Type = VariantN!20u, Value = 3`. The writeln call receives val as a variant!
It knows how to print a variant in this case, lucky for us, but we have called foo!(VariantN!20u)(val)! But the switch actually sets it up so it calls foo!(int)(val.get!int). This is a different foo! The switch statement can be seen as a dynamic dispatch that calls the appropriate compile time template, BUT it actually depends on the runtime type of the variant! This magic links up a Variant, whose type is dynamic, with compile time templates. But you must realize the nature of the problem. Most code that uses a variant wouldn't use a single template to handle all the different cases:

switch(to!string(val.type))
{
    case "int": fooInt(val.get!int); break;
    case "float": fooFloat(val.get!float); break;
    case "immutable(char)[]": fooString(val.get!string); break;
    case "short": fooShort(val.get!short); break;
    default: writeln("Unknown Conversion!");
}

These functions might actually just be code blocks to handle the different cases. Now, if you understand that, the paradigm I am talking about is to have D basically generate all the switching code for us instead of us ever having to deal with the variant internals. We have something like

void bar(var t) { writeln("\tbar: Type = ", t.type, ", Value = ", t); }

AND it would effectively print the same results. var is akin to Variant, but the compiler understands this and generates N different bar's internally and a switch statement to dynamically call the desired one at runtime; yet we can simply call bar with any value we want. e.g.,

void main() { bar(3); // calls bar as if bar was `vo
Re: New programming paradigm
So, nobody thinks this is a useful idea, or is it that no one understands what I'm talking about?
Re: D scripting
On Tuesday, 5 September 2017 at 19:59:27 UTC, Andre Pany wrote: On Tuesday, 5 September 2017 at 19:44:40 UTC, EntangledQuanta wrote: Just an idea for you: in Delphi you can set the properties of a component (a class with runtime reflection enabled) at runtime. You can even call the methods and events of a component. I built a Delphi Bridge for D (see recent post on announce). It is almost the same scenario, as there are also dll calls involved. What I want to say is, you could build something like the Delphi RTTI for your D classes and make generic methods available via the dll interface. But that would be quite a bit of work? Modifying the compiler? I'm just looking for something relatively straightforward and simple ;) It is possible without modifying the compiler. In every class you want to enable for runtime reflection, you need to add a generic method which generates, for all public properties/methods, code to fill/call them. It is a mix of templates and mixins. In the end, the compile time reflection capabilities of D are so powerful that you can write runtime reflection with it. Thanks for the tip! Kind regards André Thanks, yeah, that is essentially what I was going to do with attributes, but rather than having a member do it, have a free function that tries to do the same thing... But then the question remains how to output that information so it can then be used to link into the "script" that will be compiled?
Re: D scripting
On Tuesday, 5 September 2017 at 19:19:19 UTC, Andre Pany wrote: On Tuesday, 5 September 2017 at 18:37:17 UTC, EntangledQuanta wrote: On Tuesday, 5 September 2017 at 08:13:02 UTC, Andre Pany wrote: On Tuesday, 5 September 2017 at 07:32:24 UTC, EntangledQuanta wrote: I would like to use D as a "scripting" language for my D app. There seems to be no such thing. Since we can include the D compiler in our distribution, it is easy to enable "plugin" capabilities, but directly interfacing with the source code seems like it would require a bit of work (duplicating the code that one wants to include so it can be linked in and "hot swapping"). Which OS do you use? I had a similar idea but failed on Windows due to some strange effects. I think they were caused by the known Windows dll unload bug, discussed here: http://forum.dlang.org/thread/rreyasqnvyagrkvqr...@forum.dlang.org In the end I decided to use the script engine from Adam Ruppe (arsd) until this bug is fixed. Kind regards André Yes, Windows ;/ Seems that thread has some answers! Maybe bug him enough to fix the bug? How far did you get with it? "The problem seems to only manifest when a proper DllMain() method is exported from the library. If none is provided, or if the given implementation can be optimized away, the error does not occur." Was that the case for you too? That could be overcome by just using a normal function that is called right after loading? I'm curious how the exporting of code works, as that seems to be the biggest challenge (so that we don't have to hand-write the exports ourselves). Thanks. My issue was that after unloading the shared library, the dll file was still locked for deletion on the file system. Therefore I was not able to change something in my "script" and restart it. Somehow, even after terminating in task manager, the dll file was still locked. I assume this reproducible effect is caused by the known issue.
I already gave up at this point ): Hmm, I used to have that problem with Windows and Visual Studio. It was a Visual Studio issue. Not sure if that is what you were using. Sometimes it's just programs that lock on to it for no good reason. There are ways around that:

- Use "unlocker" to unlock the file before deletion.
- Possibly rename the file to a random name, opening up the space; remove the generated files later when they build up.
- Load the DLL from memory (there are some online memory DLL loaders).

Just an idea for you: in Delphi you can set the properties of a component (a class with runtime reflection enabled) at runtime. You can even call the methods and events of a component. I built a Delphi Bridge for D (see recent post on announce). It is almost the same scenario, as there are also dll calls involved. What I want to say is, you could build something like the Delphi RTTI for your D classes and make generic methods available via the dll interface. But that would be quite a bit of work? Modifying the compiler? I'm just looking for something relatively straightforward and simple ;)
Re: D scripting
On Tuesday, 5 September 2017 at 08:13:02 UTC, Andre Pany wrote: On Tuesday, 5 September 2017 at 07:32:24 UTC, EntangledQuanta wrote: I would like to use D as a "scripting" language for my D app. There seems to be no such thing. Since we can include the D compiler in our distribution, it is easy to enable "plugin" capabilities, but directly interfacing with the source code seems like it would require a bit of work (duplicating the code that one wants to include so it can be linked in and "hot swapping"). Which OS do you use? I had a similar idea but failed on Windows due to some strange effects. I think they were caused by the known Windows dll unload bug, discussed here: http://forum.dlang.org/thread/rreyasqnvyagrkvqr...@forum.dlang.org In the end I decided to use the script engine from Adam Ruppe (arsd) until this bug is fixed. Kind regards André Yes, Windows ;/ Seems that thread has some answers! Maybe bug him enough to fix the bug? How far did you get with it? "The problem seems to only manifest when a proper DllMain() method is exported from the library. If none is provided, or if the given implementation can be optimized away, the error does not occur." Was that the case for you too? That could be overcome by just using a normal function that is called right after loading? I'm curious how the exporting of code works, as that seems to be the biggest challenge (so that we don't have to hand-write the exports ourselves). Thanks.
D scripting
I would like to use D as a "scripting" language for my D app. There seems to be no such thing. Since we can include the D compiler in our distribution, it is easy to enable "plugin" capabilities, but directly interfacing with the source code seems like it would require a bit of work (duplicating the code that one wants to include so it can be linked in and "hot swapping"). e.g., suppose I have a program like

struct X { ... }

void main()
{
    runScript("X x;");
}

For the script to have access to X, it must be included in the script being compiled. That requires it having access to the "program code". I'd have to remove all the stuff I do not want it to have access to, and that could be a real pain. But surely we can use attributes, something like

@scriptable struct X { ... }

that makes X exportable to an obj file that can then be included in the script. Everything, then, in my program that is marked as such can be accessed, and the script becomes a simple DLL. Dealing with hot swapping is then the only trouble. For simple scripts, this shouldn't be a problem though. Anyone see a way that this could be achieved rather easily? Say, at compilation, a template gathers all the scriptable elements, gets their source code (which would usually be classes, structs, enums, functions, and some global variables), and emits the code in a way that ends up in its own object file (since we can't write to files at compile time ;/). Then in the scripting section of the app, it's just a matter of compiling with the obj file to give the script access to some of the program internals. D needs an export("filename")... I'm ok with the security hole. No need to bust my balls for it. A switch could be required to enable it, or a mail-in rebate. No need to force me into a box that doesn't exist, is there? (Could only export to the -J path and maybe require a few other hoops to jump through if one is so worried about security... maybe an optometric scanner?)
Re: replace switch for mapping
On Monday, 4 September 2017 at 09:23:24 UTC, Andrea Fontana wrote: On Thursday, 31 August 2017 at 23:17:52 UTC, EntangledQuanta wrote: Generally one has to use a switch to map dynamic components. Given a set X and Y, one can form a switch to map X to Y: [...] Does this work for you? https://dpaste.dzfl.pl/e2669b595539 Andrea No, do you realize you are passing those enums at compile time? It won't work if they are "runtime" variables, which is the whole point of doing all this. You've essentially made a simple problem complicated. Why not just overload foo properly?
New programming paradigm
In coming up with a solution that maps enums to templates, I think it might provide a means to allow template-like behavior at runtime. That is, type information is contained within the enum, which then can, with the use of compile-time templates, be treated as dynamic behavior. Let me explain: Take a variant type. It contains the "type" and the data. To simplify, we will look at it like (pseudo-code, use your brain) enum Type { int, float } foo(void* Data, Type type); The normal way to deal with this is a switch: switch(type) { case int: auto val = *(cast(int*)Data); case float: auto val = *(cast(float*)Data); } But what if the switch could be generated for us? Instead of foo(void* Data, Type type) { switch(type) { case int: auto val = *(cast(int*)Data); case float: auto val = *(cast(float*)Data); } } we have foo(T)(T* Data) { } which, if we need to specialize on a type, we can do foo(int* Data) { } foo(float* Data) { } One may claim that this isn't very useful because it's not much different than the switch, because we might still have to do things like: foo(T)(T* Data) { static switch(T) { case int: break; case float: break; } } but note that it is a CT switch. But, in fact, since we can specialize on the type, we don't have to use a switch and in some cases do not even need to specialize: for example: foo(T)(T* Data) { writeln(*Data); } is a compile-time template that is called with the correct type value at run-time due to the "magic" which I have yet to introduce. Note that if we just use a standard runtime variant, writeln would see a variant, not the correct type that Data really is. This is the key difference and what makes this "technique" valuable. We can treat our dynamic variables as compile-time types (use the compile-time system) without much hassle. They fit naturally in it and we do not clutter our code with switches. We can have a true auto/var like C# without the overhead of the IR.
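The hand-written dispatch described above, made concrete and runnable. Note that `int` and `float` cannot be enum member identifiers in D, so placeholder names are used; `describe` returns a string rather than printing, purely so the result is easy to inspect.

```d
import std.conv : text;
import std.stdio : writeln;

// Placeholder member names, since `int`/`float` aren't legal identifiers.
enum Type { int_, float_ }

// The hand-written dispatch: one switch, one cast per case.
string describe(void* data, Type type)
{
    final switch (type)
    {
        case Type.int_:
            return text("int: ", *cast(int*) data);
        case Type.float_:
            return text("float: ", *cast(float*) data);
    }
}

void main()
{
    int i = 42;
    float f = 3.5;
    writeln(describe(&i, Type.int_));   // prints "int: 42"
    writeln(describe(&f, Type.float_)); // prints "float: 3.5"
}
```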
The cost, of course, is that switches are still used; they are generated behind the scenes though, and the runtime cost is a few instructions that all switches have and that we cannot avoid. To get a feel for what this new way of dealing with dynamic types might look like: void foo(var y) { writeln(y); } var x = "3"; // or possibly var!(string, int) for the explicit types used foo(x); x = 3; foo(x); (just pseudo code, don't take the syntax literally, that is not what is important) While this example is trivial, the thing to note is that there is one foo declared, but two created at runtime: one for a string and one for an int. It is like a variant, yet we don't have to do any testing. It is very similar to `dynamic` in C#, but better since we actually can "know" the type at compile time, so to speak. It's not that we actually know, but that we write code as if we knew... it's treated as if it's statically typed. In fact, we still have to specify the possible types a value can take on (just like variant), but once that is done the switch statement can be generated and we just have to write our templated function to handle this new "type". You can see some of the code here, which I won't repeat for sake of brevity: https://forum.dlang.org/thread/qtnawzubqocllhacu...@forum.dlang.org The thing to note is that by defining foo in a specific way, the mixin generates a proxy that creates the switch statement for us. This deals with casting the type by using the type specifier and calling the template with it. If the compiler were to treat such a type as a first-class citizen, we would have a very easy and natural way of dealing with dynamic types that can only have a finite number of type specializations. This should be the general case, although I haven't looked at how it would work with OOP. The cost is the same as any dynamic type... a switch statement, which is just a few extra cycles.
(a lookup table could be used, of course, but I'm not sure of the benefit) As far as I know, no other language actually does this. Those with dynamic types have a lot more overhead since they don't couple them with templates (since they are not statically typed languages). Anyways, it's not a thoroughly thought out idea, but it actually works well (I'm using the code I linked to and it works quite well for dealing with buffers that can take several different types: one function, no explicit switching in it), and if it could be implemented in the compiler, it would probably be a very nice feature for D. One of the downsides is code bloat. Having multiple vars increases the size O(n^m) since one has to deal with every combination. These result in very large nested switch structures... only O(m) to traverse at runtime though, but still takes up a lot of bits to represent.
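For a taste of how the generated switch might look in today's D: a sketch where `static foreach` over the enum members stamps out the cases, so `fooImpl` stays a single, switch-free template. All names here are illustrative, not the code from the linked thread.

```d
import std.conv : text;
import std.stdio : writeln;
import std.traits : EnumMembers;

enum Type { int_, float_ }

// Compile-time map from enum member to the runtime type it tags.
template TypeOf(Type t)
{
    static if (t == Type.int_)        alias TypeOf = int;
    else static if (t == Type.float_) alias TypeOf = float;
}

// The template "called with the correct type at runtime": no switch here.
string fooImpl(T)(T* data)
{
    return text(T.stringof, ": ", *data);
}

// The switch is generated from the enum members instead of hand-written.
string foo(void* data, Type type)
{
    final switch (type)
    {
        static foreach (t; EnumMembers!Type)
        {
            case t:
                return fooImpl(cast(TypeOf!t*) data);
        }
    }
    assert(0);
}

void main()
{
    int i = 7;
    float f = 1.5;
    writeln(foo(&i, Type.int_));   // prints "int: 7"
    writeln(foo(&f, Type.float_)); // prints "float: 1.5"
}
```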
Re: Bug in D!!!
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: [...] The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the context matters and [...] Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ... Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization". I don't agree, because once something is in the language syntax, removing it is a long deprecation process (years), so these things have to be considered well beforehand. That's true. But I don't see how it matters too much in the current argument. Remember, I'm not advocating using 'in' ;) I'm only saying it doesn't matter in a theoretical sense. If humans were as logical as they should be, it would matter less. For example, a computer has no issue with using `in`, and it doesn't really take any more processing (maybe a cycle, but the context makes it clear). But, of course, we are not computers. So, in a practical sense, yes, the line has to be drawn somewhere, even if, IMO, it is not the best place. You agree with this because you say it's ok for parentheses but not in. You didn't seem to answer anything about my statements and question about images though.
But, I'm ok with people drawing lines in the sand; that really isn't what I'm arguing. We have to draw lines. My point is, we should know we are drawing lines. You seem to know this on some significant level, but I don't think most people do. So what would happen if we argued for the next 10 years? We would just come to some refinement of our current opinions and experiences about the idea. That's a good thing in a sense, but I don't have 10 years to waste on such a trivial concept that really doesn't matter much ;) (again, remember, I'm not advocating `in`; I'm not advocating anything in particular, except against doing nothing.) If we are worried about saving time then what about the tooling? Compiler speed? IDE startup time? Etc.? All these take time too and optimizing one single aspect, as you know, won't necessarily save much time. Their speed generally does not affect the time one has to spend to understand a piece of code. Yes, but you are picking and choosing. To understand code, you have to write code; to write code you need a compiler, IDE, etc. You need a book, the internet, or other resources to learn things too. It's a much, much bigger can of worms than you realize or want to get into. Everything is interdependent. It's nice to make believe that we can separate everything into nice little quanta, but we can't, and when we ultimately try, we get results that make no sense. But, of course, it's about the best we can do with where humans are at in their evolution currently. The ramifications of one minor change can change everything... See the butterfly effect. Life is fractal-like, IMO (I can't prove it but the evidence is staggering). I mean, when you say "read code faster" I assume you mean the moment you start to read a piece of code with your eyes to the end of the code... But do you realize that, in some sense, that is meaningless? What about the time it takes to turn on your computer? Why are you not including that? Or the time to scroll your mouse?
These things matter because surely you are trying to save time in the "absolute" sense? e.g., so you have more time to spend with your family at the end of the day? Or spend more time hitting a little white ball in a hole? or whatever? If all you did was read code and had no other factors involved in the absolute time, then you would be 100% correct. But all those other factors do add up too. Of course, the more code you read the more important it becomes and the less the other factors become, but then why are you reading so much code if you think it's a waste of time? So you can save some more time to read more code? If your goal is to truly read as much code as you can in your life span, then I think your analysis is 99.999...% correct. If you only code as a means to an end for other things, then I think your answer is about 10-40% correct(with a high degree of error and dependent on context). For me, and the way I "value"/"judge
Re: 24-bit int
On Sunday, 3 September 2017 at 04:01:34 UTC, Ilya Yaroshenko wrote: On Saturday, 2 September 2017 at 03:29:20 UTC, EntangledQuanta wrote: On Saturday, 2 September 2017 at 02:49:41 UTC, Ilya Yaroshenko wrote: On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc. Hi, Probably you are looking for bitpack ndslice topology: http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.bitpack sizediff_t[] data; // creates a packed signed integer slice with max allowed value equal to `2^^24 - 1`. auto packs = data[].sliced.bitpack!24; packs has the same API as D arrays Package is Mir Algorithm http://code.dlang.org/packages/mir-algorithm Best, Ilya Thanks. Seems useful. Just added `bytegroup` topology. Released in v0.6.12 (will be available in DUB after a few minutes.) http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#bytegroup It is faster for your task than `bitpack`. Best regards, Ilya Thanks! I might end up using this. Is this basically just a logical mapping (cast(int)bytes[i*3] & 0xFF) type of stuff, or is there more of a performance hit? I could do the mapping myself if that is the case, as I do not need much of a general solution. I'll probably be using it in just a few lines of code. It just needs to be nearly as fast as direct access.
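For reference, the manual mapping alluded to is only a few lines. A sketch of reading little-endian signed 24-bit samples straight out of a byte buffer; the `sample24` name is made up, and the little-endian layout is an assumption.

```d
import std.stdio : writeln;

// Read the i-th little-endian signed 24-bit sample from a raw byte buffer.
int sample24(const(ubyte)[] buf, size_t i)
{
    const o = i * 3;
    // Casting the top byte to `byte` sign-extends it during promotion to int.
    return buf[o] | (buf[o + 1] << 8) | (cast(byte) buf[o + 2] << 16);
}

void main()
{
    // Two samples, little-endian: 1 and -1.
    const(ubyte)[] data = [0x01, 0x00, 0x00, 0xFF, 0xFF, 0xFF];
    writeln(sample24(data, 0)); // prints "1"
    writeln(sample24(data, 1)); // prints "-1"
}
```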
Re: Bug in D!!!
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner wrote: On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta wrote: On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: [...] The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the context matters and [...] Because instead of seeing the keyword and knowing its one meaning you also have to consider the context it appears in. That is intrinsically more work (though the difference may be very small) and thus harder. ... Yes, in an absolute sense, it will take more time to have to parse the context. But that sounds like a case of "pre-optimization". If we are worried about saving time then what about the tooling? Compiler speed? IDE startup time? Etc.? All these take time too and optimizing one single aspect, as you know, won't necessarily save much time. Maybe the language itself should be designed so there are no ambiguities at all? A single simple syntax for each function? A new keyboard design should be implemented (ultimately a direct brain-to-editor interface for the fastest time, excluding the time for development and learning)? So, in this case I have to go with the practical side and say that it may be theoretically slower, but it is such an insignificant cost that it is an over-optimization. I think you would agree, at least in this case. Again, the exact syntax is not important to me. If you really think it matters that much to you and it does (i.e., you are not tricking yourself), then use a different keyword. When I see something I try to see it at once rather than reading it left to right. It is how music is read properly, for example.
One can't read left to right and process the notes in real time fast enough. You must "see at once" a large chunk. When I see foo(A in B)() I see it at once, not in parts or sub-symbols (subconsciously that may be what happens, but it either is so quick, or my brain has learned to see differently, that I do not feel it to be any slower). That is, I do not read it like f, o, o (, A, , i,... but just like how one sees an image. Sure, there are clusters such as foo and (...), and I do sub-parse those at some point, but the context is derived very quickly. Now, of course, I do make assumptions to be able to do that. Obviously I have to sorta assume I'm reading D code and that the expression is a templated function, etc. But that is required regardless. It's like seeing a picture of an ocean. You can see the global characteristics immediately without getting bogged down in the details until you need them. You can determine the approximate time of day (morning, noon, evening, night) relatively instantaneously without even knowing much else. To really counter your argument: What about parentheses? They too have the same problem as in. They have perceived ambiguity... but they are not ambiguous. So your argument should be applied to them too, and you should be against them also, but are you? [To be clear here: foo()() and (3+4) have 3 different use cases of ()'s... The first is templated arguments, the second is function arguments, and the third is expression grouping] If you are, then you are being logical and consistent. If you are not, then you are not being logical nor consistent. If you fall in the latter case, I suggest you re-evaluate the way you think about such things because you are picking and choosing. Now, if you are just stating a mathematical fact that it takes longer, then I can't really deny that, although I can't technically prove it either, just as you can't, because we would require knowing exactly how the brain processes the information. [...]
Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used. Yes, but you have only given the reason that it shouldn't be used because you believe that one shouldn't overload keywords because it makes it harder to parse the meaning. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard and we would have to find out who is more right. As I countered that in the above, I don't think your rebuttal is valid. Well, hopefully I countered that in my rebuttal of your rebuttal of my rebuttal ;) Again, you don't actually know how the brain processes information(no one does, it is all educated guesses). You use the concept that the more information one has to process the more time it takes... which seems logical, but it is not necessarily applicable directly to the interpretation of written symbols. Think of an image
Re: Bug in D!!!
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote: On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote: On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote: I've love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates and I hit so many other issues where Generics just suck. I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T, after all, the compiler can't know you have a finite known set of T unless you tell it. But lets assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking. First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy. Next up each instantiation of the function needs a new v-table entry in all derived classes. Current compiler implementation will compile each module independently of each other; so this feature could be specified to work within the same module or new semantics can be written up of how the compiler modifies already compiled modules and those which reference the compiled modules (the object sizes would be changing due to the v-table modifications) With those three simple changes to the language I think that this feature will work for every T. Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. 
By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in )() would work or foo(T in )() the `in` keyword makes sense here and is not used nor ambiguous, I believe. While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read). Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)() in, as used here, is not an input contract and is completely independent. I suppose for arrays it could be ambiguous. The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea with what to replace the "in" for AA's I'd propose removing that meaning). Why? Don't you realize that the context matters and it's what separates the meaning? In truly unambiguous contexts, it shouldn't matter. It may require one to decipher the context, which takes time, but there is nothing inherently wrong with it, and we are limited to how many symbols we use (unfortunately we are generally stuck with the QWERTY keyboard design, else we could use symbols out the ying yang and make things much clearer, but even mathematics, which is a near-perfect language, "overloads" symbols' meanings). You have to do this sort of thing when you limit the number of keywords you use. Again, ultimately it doesn't matter. A symbol is just a symbol. For me, as long as the context is clear, I don't see what kind of harm it can cause. You say it is bad, but you don't give the reasons why it is bad.
If you like to think of `in` as having only one definition, then the question is why? You are limiting yourself. Natural languages abound with such multi-definitions. Usually in an ambiguous way, and it can cause a lot of problems, but for computer languages it can't (else we couldn't actually compile the programs). Context-sensitive grammars are provably more expressive than context-free ones. https://en.wikipedia.org/wiki/Context-sensitive_grammar Again, I'm not necessarily arguing for them, just saying that one shouldn't avoid them just to avoid them. For me, and this is just me, I do not find it ambiguous. I don't find different meanings ambiguous unless the context overlaps. Perceived ambiguity is not ambiguity, it's just ignorance... which can be overcome through learning. Hell, D has many cases where there are perceived ambiguities... as do most things. It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read. I don't think that is true. Everything is hard to read
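For concreteness, here are the two existing meanings of `in` under discussion: the failable AA lookup and the input contract (the expression-contract form shown requires a reasonably recent compiler, D 2.081+). The third meaning, foo(T in List)(), is the proposal and does not compile today.

```d
import std.stdio : writeln;

// `in` as an input contract: a precondition checked at function entry.
int twice(int x)
in (x >= 0)
{
    return x * 2;
}

void main()
{
    int[string] ages = ["alice": 30];

    // `in` as failable AA lookup: pointer to the value, or null if absent.
    if (auto p = "alice" in ages)
        writeln(*p); // prints "30"
    assert(("bob" in ages) is null);

    writeln(twice(21)); // prints "42"

    // The proposed third meaning, foo(T in [A,B,C])(), is not valid D.
}
```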
Re: Bug in D!!!
On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner wrote: On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote: On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote: I've love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates and I hit so many other issues where Generics just suck. I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T, after all, the compiler can't know you have a finite known set of T unless you tell it. But lets assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking. First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy. Next up each instantiation of the function needs a new v-table entry in all derived classes. Current compiler implementation will compile each module independently of each other; so this feature could be specified to work within the same module or new semantics can be written up of how the compiler modifies already compiled modules and those which reference the compiled modules (the object sizes would be changing due to the v-table modifications) With those three simple changes to the language I think that this feature will work for every T. Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that is effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in )() would work or foo(T in )() the `in` keyword makes sense here and is not used nor ambiguous, I believe. 
While I agree that `in` does make sense for the semantics involved, it is already used to do a failable key lookup (return pointer to value or null if not present) into an associative array [1] and input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read). Yes, but they are independent, are they not? Maybe not. foo(T in Typelist)() in, as used here, is not an input contract and is completely independent. I suppose for arrays it could be ambiguous. For me, and this is just me, I do not find it ambiguous. I don't find different meanings ambiguous unless the context overlaps. Perceived ambiguity is not ambiguity, it's just ignorance... which can be overcome through learning. Hell, D has many cases where there are perceived ambiguities... as do most things. But in any case, I could care less about the exact syntax. It's just a suggestion that makes the most logical sense with regard to the standard usage of in. If it is truly unambiguous then it can be used. Another alternative is foo(T of Typelist) which, AFAIK, of is not used in D or even most programming languages. Another could be foo(T -> Typelist) or even foo(T from Typelist) or whatever. Doesn't really matter. They all mean the same to me once the definition has been written in stone. Could use `foo(T eifjasldj Typelist)` for all I care. The important thing for me is that such a simple syntax exists rather than the "complex syntaxes" that have already been given (which are ultimately syntaxes, as everything is at the end of the day). W.r.t. the idea in general: I think something like that could be valuable to have in the language, but since this essentially amounts to syntactic sugar (AFAICT), I'm not (yet) convinced that with `static foreach` being included it's worth the cost. Everything is syntactic sugar. So it isn't about if but how much.
We are all coding in 0's and 1's whether we realize it or not. The point of syntax (or syntactic sugar) is to reduce the amount of 0's and 1's that we have to *effectively* code by grouping common patterns into symbolic equivalents (by definition). This is all programming is. We define certain symbols to mean certain bit patterns, or generic bit patterns (an if keyword/symbol is a generic bit pattern: a set of machine instructions (0's and 1's) and substitution placeholders that are eventually filled with 0's and 1's). No one can judge the usefulness of syntax until it has been created, because what determines how useful something is is its use. But you can't use something if it doesn't exist. I think many fail to get that. The initial questions should be: Is there a gap in the language? (Yes in this case). Can the gap be filled? (this is a theoretical/mathematical question that has to be answered. Most people jump the gun here and make assumptions) Does the gap need t
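As a sketch of the `static foreach` route being weighed against the proposed sugar: for a finite type list it can stamp out one virtual overload per type, which is roughly what a hypothetical foo(T in [A,B,C])() would lower to. All names here are illustrative assumptions.

```d
import std.conv : text;
import std.meta : AliasSeq;
import std.stdio : writeln;

// The finite set of allowed types, stated explicitly.
alias Types = AliasSeq!(int, double, string);

interface I
{
    // One virtual declaration per type in the list.
    static foreach (T; Types)
        string foo(T x);
}

class C : I
{
    // Matching overload set, also generated.
    static foreach (T; Types)
        string foo(T x) { return text(T.stringof, ": ", x); }
}

void main()
{
    I i = new C;
    writeln(i.foo(1));       // prints "int: 1"
    writeln(i.foo(2.5));     // prints "double: 2.5"
    writeln(i.foo("three")); // prints "string: three"
}
```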
Re: templated type reduction
I should point out that I know it isn't safe in some cases(I already mentioned about the order mattering in some cases) but in that case a compiler error could be thrown. It's safe in some cases and I have the ability to create a safe case since I'm the designer of the code(e.g., put things in correct order).
templated type reduction
Suppose one had the need to template something like struct X(T) { string type = T.stringof; T t; } But one needs to get the type to know how to interpret X!T, yet one only has a void* to a type X!T. That is, we know it is an "X" but we don't know the specific T. Now, this is easy as X!void or X!int or adding any specific but arbitrary type T, if the value we want is not dependent on T... but in this case it is: void* x = new X!int; (passed around the program) switch(x.type) { case "int" : break; } which is invalid yet perfectly valid! Is there any way to make this work legitimately in D? I could get the offset of the string then parse it, but that's a hack I'd rather not use and isn't really safe (change the order and it will break). Note that it is really no different from struct X(T) { string type = "asdf"; T t; } in which we can do string type = (cast(X!int*)x).type; // = asdf or string type = (cast(X!float*)x).type; // = asdf but even this is a bit fishy. Here's some code that does the offset hack: import std.stdio; struct X(T) { string type = T.stringof; T x; } int main(string[] args) { void* x = new X!int; int o = (X!float).type.offsetof; auto y = *cast(string*)(x + o); writeln(y); return 0; }
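One way to remove the fragility, sketched below: keep the tag in a fixed, non-templated header as the first field, so its offset never depends on T or on field order elsewhere in the struct. The `Header`/`typeOf` names are made up for illustration.

```d
import std.stdio : writeln;

// Non-templated header shared by every instantiation.
struct Header { string type; }

struct X(T)
{
    Header header = Header(T.stringof);
    T t;
}

string typeOf(void* p)
{
    // Valid for any X!T: D does not reorder struct fields,
    // so Header always sits at offset 0.
    return (cast(Header*) p).type;
}

void main()
{
    void* x = new X!int;
    writeln(typeOf(x)); // prints "int"
}
```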
Re: Bug in D!!!
On Saturday, 2 September 2017 at 16:20:10 UTC, Jesse Phillips wrote: On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote: Regardless of the implementation, the idea that we should throw the baby out with the bathwater is simply wrong. At least there are a few who get that. By looking in to it in a serious manner an even better solution might be found. Not looking at all results in no solutions and no progress. Problem is that you didn't define the problem. You showed some code the compiler rejected and expressed that the compiler needed to figure it out. You did change it to having the compiler instantiate specified types, but that isn't defining the problem. I think the problem is clearly defined; it's not my job to be a D compiler researcher and spell everything out for everyone else. Do I get paid for solving D's problems? You didn't like the code needed which would generate the functions and you hit a Visual D with the new static foreach. This sentence makes no sense. "hit a Visual D" what? Do you mean bug? If that is the case, how is that my fault? Am I supposed to know off the bat that an access violation is caused by Visual D and not dmd when there is no info about the violation? Is it my fault that someone didn't code one of those tools well enough to provide enough information for one to figure it out immediately? All of these are problems you could define, and you could have evaluated static foreach as a solution but instead stopped at problems with the tooling. Huh? I think you fail to understand the real problem. The problem has nothing to do with tooling and I never said it did. The static foreach "solution" came after the fact, when SEVERAL people (ok, 2) said it was an impossible task to do. That is where all this mess started. I then came up with a solution which proved that it is possible to do on some level; that is a solution to a problem that was defined, else the solution wouldn't exist.
You also don't appear to care about the complexity of the language. I expressed three required changes some of which may not play nicely with least surprise. You went straight to, we just need to define a syntax for that, instead of expressing concern that the compiler will also need to surface errors to the user, such that the user understands that a feature they use is limited to very specific situations. Do you not understand that if a library solution exists then there is no real complexity added? It is called "lowering" by some. The compiler simply "rewrites" whatever new syntax is added into a form that the library solution realizes. You are pretending, for some reason, that what I am proposing will somehow potentially affect every square micron of the D language and compiler, when it won't. Not all additions to a compiler add *real* complexity. That is a failing of you and many on the D forums who resist change. Consider if you have a module defined interface, is that interface only available for use in that module? If not, how does a different module inherit the interface? Does it need a different syntax? What does that have to do with this problem? We are not talking about interfaces. We are talking about something inside interfaces, so the problem about interfaces is irrelevant to this discussion because it applies to interfaces in general... interfaces that already exist, and the problem exists regardless of what I There is a lot more to a feature than having a way to express your desires. If you're going to stick to a stance that it must exist and aren't going to accept there are problems with the request, why expect others to work through the request. No, your problem is your ego and your inability to interpret things outside of your own mental box. You should always keep in mind that you are interpreting someone else's mental wordage in your own way and it is not a perfect translation; in fact, we are lucky if 50% is interpreted properly.
Now, if I do not have a right to express my desires, then at least state that, but I do have a right not to express any more than that. As far as motivating other people, that isn't my job. I could care less actually. D is a hobby for me and I do it because I like the power D has, but D is the most frustrating language I have ever used. It's the most (hyperbole) buggy, most incomplete (docs, tooling, etc., regardless of what the biased want to claim), most uninformative (errors that just toss the whole kitchen sink at you), etc. But I do have hope... which is the only reason I use it. Maybe I'm just an idiot and should go with the crowd; it would at least save me some frustration. C#, since you are familiar with it, you should know there is a huge difference. If D was like C# as far as the organizational structure (I do not mean MS; I mean the docs, library, etc.), you would surely agree that D would most likely be the #1 language on this planet? C# has
Re: 24-bit int
On Saturday, 2 September 2017 at 02:37:08 UTC, Mike Parker wrote: On Saturday, 2 September 2017 at 01:19:52 UTC, EntangledQuanta wrote: The whole point is so that there is no wasted space, so if it requires that then it's not a waste of space but a bug. Audio that is int24 is 3 bytes per sample, not 4. Every 3 bytes are a sample, not every 3 out of 4. Basically a byte[] cast to an int24 array should be 1/3 the size and every 3 bytes are the same as an int24. Thanks for pointing this out if it is necessary. It's not a bug, but a feature. Data structure alignment is important for efficient reads, so several languages (D, C, C++, Ada, and more) will automatically pad structs so that they can maintain specific byte alignments. On a 32-bit system, 4-byte boundaries are the default. So a struct with 3 ubytes is going to be padded with an extra byte at the end. Telling the compiler to align on a 1-byte boundary (essentially disabling alignment) will save you space, but will generally cost you cycles in accessing the data. You fail to read correctly. A bug in his code. If he is treating int24's as int32's and simply ignoring the upper byte then it is not a true int24 and all the work he did would be pointless. I can do that by simply reading an int32 and masking the high byte.
Re: 24-bit int
On Saturday, 2 September 2017 at 02:49:41 UTC, Ilya Yaroshenko wrote: On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc. Hi, Probably you are looking for bitpack ndslice topology: http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.bitpack sizediff_t[] data; // creates a packed signed integer slice with max allowed value equal to `2^^24 - 1`. auto packs = data[].sliced.bitpack!24; packs has the same API as D arrays Package is Mir Algorithm http://code.dlang.org/packages/mir-algorithm Best, Ilya Thanks. Seems useful.
Re: replace switch for mapping
I came up with a library solution that isn't pretty ;/ I offer it up to the gods, but being gods, they probably don't care. template EnumMapper(alias func, string[] args, eT...) { import std.meta, std.typecons, std.traits, std.string, std.algorithm, std.array, std.conv; private auto recSwitch(string[] args, int depth, alias N, T...)(string[] attrs = null) { string str; auto tab = replicate("\t", depth); static if (T.length == 0) { string at; foreach(k, a; args) { at ~= "cast(Parameters!("~func~"!("~attrs.join(", ")~"))["~to!string(k)~"])"~a; if (k < args.length-1) at ~= ", "; } return tab~"\treturn "~func~"!("~attrs.join(", ")~")("~at~");\n"; } else { str ~= tab~"switch("~N[0]~")\n"~tab~"{\n"~tab~"\tdefault: break;\n"; foreach(v; __traits(allMembers, T[0])) { mixin(`enum attr = __traits(getAttributes, T[0].`~v~`).stringof[6..$-1].strip();`); static if (attr != "") { str ~= tab~"\t"~"case "~v~":\n"; attrs ~= attr[1..$-1]; str ~= recSwitch!(args, depth + 2 , N[1..$], T[1..$])(attrs); attrs = attrs[0..$-1]; str ~= tab~"\t\tbreak;\n"; } } str ~= tab~"}\n"; return str; } } private auto genMapper(string[] args, alias N, T...)() { string str; foreach(e; AliasSeq!(eT[0..eT.length/2])) str ~= "with("~e.stringof~") "; auto code = recSwitch!(args, 0, N, T)(); return str~"\n"~code; } auto EnumMapper() { return "import std.traits;\n"~genMapper!(args, [eT[eT.length/2..$]], eT[0..eT.length/2])(); } } Because D only half-assley implements __traits for templates, a lot of it is hacks and kludges. 
It is used like struct enumA { int value; alias value this; @("float") enum Float = cast(enumA)0; @("int") enum Int = cast(enumA)1; } struct enumB { int value; alias value this; @("double") enum Double = cast(enumB)0; @("byte") enum Byte = cast(enumB)1; } auto foo(T1, T2)(T1 a, T2 b) { import std.conv; return to!string(a)~" - "~to!string(b); } void main() { auto res = () { int a = 4; double b = 1.23; enumA enumAVal = enumA.Float; enumB enumBVal = enumB.Byte; mixin(EnumMapper!("foo", ["a", "b"], enumA, enumB, "enumAVal", "enumBVal")()); return ""; }(); writeln(res); getchar(); } and basically generates the nested switch structure: - with(enumA) with(enumB) switch(enumAVal) { default: break; case Float: switch(enumBVal) { default: break; case Double: return foo!(float, double)(cast(Parameters!(foo!(float, double))[0])a, cast(Parameters!(foo!(float, double))[1])b); break; case Byte: return foo!(float, byte)(cast(Parameters!(foo!(float, byte))[0])a, cast(Parameters!(foo!(float, byte))[1])b); break; } break; case Int: switch(enumBVal) { default: break; case Double: return foo!(int, double)(cast(Parameters!(foo!(int, double))[0])a, cast(Parameters!(foo!(int, double))[1])b); break; case Byte: return foo!(int, byte)(cast(Parameters!(foo!(int, byte))[0])a, cast(Parameters!(foo!(int, byte))[1])b); break; } break; } - and so it maps the arbitrary (a,b) to the correct foo. The idea is simple: Given a templated function, we want map the arbitrary values, assuming they can be properly cast to the templated function depending on the enum values. the enum values control which foo is called. But this works at runtime! This is useful when one has many different representations of data that all can be overloaded, but one doesn't kno
Re: 24-bit int
On Saturday, 2 September 2017 at 00:43:00 UTC, Nicholas Wilson wrote: On Friday, 1 September 2017 at 22:10:43 UTC, Biotronic wrote: On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc. I haven't looked at endianness beyond it working on my computer. If you have special needs in that regard, consider this a starting point:

struct int24
{
    ubyte[3] _payload;
    this(int x) { value = x; }
    ...
}

-- Biotronic You may also want to put an align(1) on it so that you don't waste 25% of the allocated memory in an array of int24's The whole point is so that there is no wasted space, so if it requires that then it's not a waste of space but a bug. Audio that is int24 is 3 bytes per sample, not 4. Every 3 bytes are a sample, not every 3 out of 4. Basically a byte[] cast to an int24 array should be 1/3 the size and every 3 bytes are the same as an int24. Thanks for pointing this out if it is necessary.
Re: Bug in D!!!
On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote: I'd love being able to inherit and override generic functions in C#. Unfortunately C# doesn't use templates and I hit so many other issues where Generics just suck. I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T; after all, the compiler can't know you have a finite known set of T unless you tell it. But let's assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking. First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy. Next up, each instantiation of the function needs a new v-table entry in all derived classes. Current compiler implementations will compile each module independently of each other; so this feature could be specified to work within the same module, or new semantics can be written up for how the compiler modifies already compiled modules and those which reference the compiled modules (the object sizes would be changing due to the v-table modifications). With those three simple changes to the language I think that this feature will work for every T. Specifying that there will be no further linkage is the same as making T finite. T must be finite. C# uses generics/IR/CLR so it can do things at run time that are effectively compile time for D. By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy: foo(T in [A,B,C])() and possibly for your case foo(T in )() would work or foo(T in )() — the `in` keyword makes sense here and is not used nor ambiguous, I believe. Regardless of the implementation, the idea that we should throw the baby out with the bathwater is simply wrong. At least there are a few who get that. 
By looking into it in a serious manner an even better solution might be found. Not looking at all results in no solutions and no progress.
Re: get parameter names
On Friday, 1 September 2017 at 22:21:18 UTC, Biotronic wrote: On Friday, 1 September 2017 at 20:58:20 UTC, EntangledQuanta wrote:

template(A, B...)
{
    auto foo(C...)(C c)
    {
        ... get c's parameter names, should be alpha, beta
    }
}

foo!(., .)(alpha, beta)

I need the actual identifiers passed to foo. I can get the types (obviously C), but when I try to get the identifier names (__traits(identifier, ...) or other methods) I just get _param_k or errors. I need both C's types and the parameter identifier names passed, else I'd just pass them as strings. Like Jonathan M Davis points out, this is impossible for regular parameters. For template alias parameters, on the other hand, this works:

void bar(alias fn)()
{
    assert(fn.stringof == "alpha");
}

unittest
{
    int alpha;
    bar!(alpha);
}

-- Biotronic The problem I have with this is that when I try to pass variables in, the template complains that there is no "this". So, what I have resorted to doing is passing the type and the name, which seems redundant: bar!(int, "alpha") rather than bar!(alpha) or bar(alpha). alpha is a variable in an object in my case. I've tried basically something like the following:

void bar(alias fn)()
{
    // typeof(fn) should return int and fn.stringof should return "alpha"
}

although my code is more complex since I have multiple template parameters (using a variadic).
Re: 24-bit int
On Friday, 1 September 2017 at 22:10:43 UTC, Biotronic wrote: On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: [...] I haven't looked at endianness beyond it working on my computer. If you have special needs in that regard, consider this a starting point: [...] Thanks, I'll check it out and see.
get parameter names
template(A, B...)
{
    auto foo(C...)(C c)
    {
        ... get c's parameter names, should be alpha, beta
    }
}

foo!(., .)(alpha, beta)

I need the actual identifiers passed to foo. I can get the types (obviously C), but when I try to get the identifier names (__traits(identifier, ...) or other methods) I just get _param_k or errors. I need both C's types and the parameter identifier names passed, else I'd just pass them as strings.
Re: Bug in D!!!
This happens when building, not running. This might be a Visual D issue as when I use dmd from the command line, it works fine ;/
24-bit int
Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc.
Re: Bug in D!!!
On Friday, 1 September 2017 at 19:25:53 UTC, Adam D Ruppe wrote: On Friday, 1 September 2017 at 18:17:22 UTC, EntangledQuanta wrote: I get an access violation, changed the code to What is the rest of your code? access violation usually means you didn't new the class... No, that is the code! I added nothing. Try it out and you'll see. I just upgraded to the released dmd too.

alias I(A...) = A;

interface Foo
{
    static foreach(T; I!(int, float))
        void set(T t); // define virt funcs for a list of types
}

class Ass : Foo
{
    static foreach(T; I!(int, float))
        void set(T t)
        {
            // simplement
        }
}

void main()
{
}

try it.
Re: Bug in D!!!
On Friday, 1 September 2017 at 15:24:39 UTC, Adam D. Ruppe wrote: static foreach is now in the new release! You can now do stuff like:

---
alias I(A...) = A;

interface Foo
{
    static foreach(T; I!(int, float))
        void set(T t); // define virt funcs for a list of types
}

class Ass : Foo
{
    static foreach(T; I!(int, float))
        void set(T t)
        {
            // simplement
        }
}
---

really easily. I get an access violation, changed the code to

import std.meta;
static foreach(T; AliasSeq!("int", "float"))
    mixin("void set("~T~" t);");

and also get an access violation ;/
replace switch for mapping
Generally one has to use a switch to map dynamic components. Given sets X and Y one can form a switch to map X to Y:

switch(x)
{
    case x1: y1; break;
    case x2: y2; break;
}

Is there any easier way to do this where one simply specifies the sets rather than having to create a switch directly? In my specific case, I have to map two sets of types A = {Ta1,...,Tan} and B = {Tb1,...,Tbm} to a template function foo(Tak, Tbj) that takes two types, so, given an arbitrary (a,b) in AxB, it maps to foo(F(a),G(b)). Using switches would require n*m cases. What I actually have is something like enum X { Float, Int, `Etc` } and X x, y; and I need to call foo!(x,y) but with x and y replaced by their correct D equivalent types. E.g., if x = X.Float; y = X.Int; then I need to call foo!(float,int) rather than foo!(X.Float,X.Int). This allows me to create a dynamic dispatcher at runtime and use a templated function rather than having to handle each type independently: one templated function rather than n*m regular functions for each type, or an n*m switch. Unfortunately, a complicating factor is that the enum's names do not directly correspond to the D types through some simple transformation(e.g., lowerCase). D doesn't seem to support attributes on enum members for some inane reason and using strings will complicate things[https://forum.dlang.org/post/nmgloo$bd1$1...@digitalmars.com]. I think I could use a struct though to solve that. So, given something like

struct A
{
    @("float") enum Float = 0;
    @("int") enum Int = 1;
}

struct B
{
    @("double") enum Double = 0;
    @("short") enum Short = 1;
}

foo(T1,T2)();

create a mapping that takes an A and a B and maps AxB to foo, doing something like the following internally:

fooDispatch(A a, B b)
{
    switch(a) // Actually needs to be over attributes
    {
        case "float":
            switch(b) // Actually needs to be over attributes
            {
                case "double": return foo!(float, double)();
            }
            ...
    }
}

or whatever. 
I could write a string mixin that generates the above code, but I'm hoping I don't have to and some genius will find a simple way to do it quickly, efficiently, and performantly.
Re: Bug in D!!!
On Thursday, 31 August 2017 at 10:34:14 UTC, Kagamin wrote: On Thursday, 31 August 2017 at 00:49:22 UTC, EntangledQuanta wrote: I've already implemented a half-assed library solution. It can be improved a lot. Then, by all means, genius!
Re: Bug in D!!!
On Wednesday, 30 August 2017 at 22:52:41 UTC, Adam D. Ruppe wrote: On Wednesday, 30 August 2017 at 20:47:12 UTC, EntangledQuanta wrote: This is quite surprising! In the new version pending release (scheduled for later this week), we get a new feature `static foreach` that will let you loop through the types you want and declare all the functions that way. When it is released, we'll have to take a second look at this problem. I've already implemented a half ass library solution. It works, but is not robust. The compiler can and should do this! string OverLoadTemplateDefinition(string name, alias func, T...)() { import std.string; string str; foreach(t; T) str ~= ((func!t).stringof).replace("function(", name~"(")~";\n"; return str; } string OverLoadTemplateMethod(string name, alias func, T...)() { import std.traits, std.algorithm, std.meta, std.string; alias RT(S) = ReturnType!(func!S); alias PN(S) = ParameterIdentifierTuple!(func!S); alias P(S) = Parameters!(func!S); alias PD(S) = ParameterDefaults!(func!S); string str; foreach(t; T) { str ~= (RT!t).stringof~" "~name~"("; foreach(k,p; P!t) { auto d = ""; static if (PD!t[k].stringof != "void") d = " = "~(PD!t)[k].stringof; str ~= p.stringof~" "~(PN!t)[k]~d; if (k < (P!t).length - 1) str ~= ", "; } str ~= ") { _"~name~"("; foreach(k, n; PN!t) { str ~= n; if (k < (P!t).length - 1) str ~= ", "; } str ~= "); }\n"; } return str; } They are basically the generic version of what Jonathan implemented by hand. In the interface: private alias _Go(T) = void function(); mixin(OverLoadTemplateDefinition!("Go", _Go, int, short, float, double)()); In class: mixin(OverLoadTemplateMethod!("Go", _Go, int, short, float, double)()); protected final void _Go(T)() { } The alias simply defines the function that we are creating. The mixin OverLoadTemplateDefinition creates the N templates. in the class, we have to do something similar but dispatch them to the protected _Go... very similar to what Jonathan did by hand. 
But the code to do so is not robust and will break in many cases because I left a lot of details out(linkage, attributes, etc). It is a proof of concept, and as you can see, it is not difficult. The compiler, and anyone that has a decent understanding of the internals of it, should be able to implement something quite easily. Maybe it is also possible to use opCall to do something similar? I'd like to reiterate that this is not an unsolvable problem or an NP-hard problem. It is quite easy. If we require restricting the types to a computable set, it is just simple overloading and templatizing to reduce the complexity. Having the compiler do this can reduce the noise and increase the robustness, and also provide a nice feature that it currently does not have, but should. Using templates with inheritance is a good thing. It should be allowed instead of blindly preventing all cases when only one case is uncomputable. The logic that some are using is akin to "We can't divide by 0 so let's not divide at all", but of course, division is very useful and one pathological case doesn't prevent all other cases from being useful.
Re: Bug in D!!!
On Wednesday, 30 August 2017 at 22:08:03 UTC, Jonathan M Davis wrote: On Wednesday, August 30, 2017 21:51:57 EntangledQuanta via Digitalmars-d-learn wrote: [...] Templates have no idea what arguments you intend to use with them. You can pass them any arguments you want, and as long as they pass the template constraint, the compiler will attempt to instantiate the template with those arguments - which may or may not compile, but the compiler doesn't care about that until you attempt to instantiate the template. [...] I'm going to try to implement it as a library solution, something that basically does what you have done. This will at least simplify each instance to a few lines of code, but would be required in all derived classes.
Re: Bug in D!!!
On Wednesday, 30 August 2017 at 22:08:03 UTC, Jonathan M Davis wrote: On Wednesday, August 30, 2017 21:51:57 EntangledQuanta via Digitalmars-d-learn wrote: The point you are trying to make, and not doing a great job of, is that the compiler cannot create an unknown set of virtual functions from a single templated virtual function. BUT, when you realize that is what the problem is, the unknown set is the issue, NOT templated virtual functions. Make the set known and finite somehow and you have a solution, and it's not that difficult. Just requires some elbow grease. Templates have no idea what arguments you intend to use with them. You can pass them any arguments you want, and as long as they pass the template constraint, the compiler will attempt to instantiate the template with those arguments - which may or may not compile, but the compiler doesn't care about that until you attempt to instantiate the template. The language does not support a mechanism for creating a templated function where you define ahead of time what all of the legal arguments are such that the compiler will just instantiate them all for you. The compiler only instantiates templates when the code instantiates them. Feel free to open up an enhancement request for some sort of template which has a specified list of arguments to be instantiated with, which the compiler will then instantiate up front and allow no others, but that is not currently a language feature. And my point is that it is not always the case that T can be anything. What if T is meant to only be algebraic? auto foo(T : Algebraic!(int, float, double))(T t) { } Will the compiler be smart enough to deduce that there are only 3 possibilities? No, but it should. 
(but of course, we don't want to use algebraic because that makes things messy, and the whole point of all this is to reduce the mess) As far as a feature request, my guess is no one will care. I'd hope that wouldn't be the case, but seeing how much excitement solving this problem has generated leads me to believe no one really cares about solving it. The normal solution for something like that right now would be to explicitly declare each function that you want and then have them call a templated function in order to share the implementation. e.g.

class C
{
public:
    auto foo(int i) { return _foo(i); }
    auto foo(float f) { return _foo(f); }
    auto foo(string s) { return _foo(s); }
private:
    auto _foo(T)(T t) { ... }
}

- Jonathan M Davis Yes, but this is really just explicit overloading. It doesn't solve the problem that templates are supposed to solve. When one starts overloading things, it becomes a bigger mess as each class needs to deal with the overloading and dispatching. It all could be solved with a bit of compiler "magic"(which should be quite simple). I mean, the compiler optimizes all kinds of things, this case shouldn't be any different. If it can determine a template parameter is reasonably finite then it should convert the templated method into a series of overloaded methods for us... which is what you essentially did.
Re: Bug in D!!!
On Wednesday, 30 August 2017 at 21:33:30 UTC, Jonathan M Davis wrote: On Wednesday, August 30, 2017 20:47:12 EntangledQuanta via Digitalmars-d- learn wrote: This is quite surprising! public struct S(T) { T s; } interface I { void Go(T)(S!T s); static final I New() { return new C(); } } abstract class A : I { } class C : A { void Go(T)(S!T s) { } } void main() { S!int s; auto c = I.New(); c.Go(s);// fails! //(cast(C)c).Go(s); // Works, only difference is we have made c an explicit C. } https://dpaste.dzfl.pl/dbc5a0663802 Everything works when Go is not templatized(we explicitly make T an int) This is a blocker for me! Can someone open a ticket? It is not possible to have a function be both virtual and templated. A function template generates a new function definition every time that it's a called with a new set of template arguments. So, the actual functions are not known up front, and that fundamentally does not work with virtual functions, where the functions need to be known up front, and you get a different function by a look-up for occurring in the virtual function call table for the class. Templates and virtual functions simply don't mix. You're going to have to come up with a solution that does not try and mix templates and virtual functions. - Jonathan M Davis I have a finite number of possible values of T, lets say 3. They are known at compile time, just because you are or D thinks they are not simply means you or D is not trying hard enough. So, saying that virtual methods and templates are not compatible is wrong. Just because you think they are or D thinks they are means you haven't thought about it hard enough. If I can overload a virtual function to get all my use cases and that is all I need then I **should** be able to do it with templates. Simple as that, if D can't do that then D needs to be enhanced to do so. 
e.g.,

class C
{
    Go(Primitive!T)(T t);
}

The compiler can realize that T can only be a primitive, and generate all possible combinations of primitives, which is finite. This is doable; it is not impossible, regardless of what you think. It is equivalent to

class C
{
    Go(Primitive1 t);
    Go(Primitive2 t);
    ...
    Go(PrimitiveN t);
}

In fact, we can use string mixins to generate such code, but it doesn't save us the trouble, which is what templates are supposed to do in the first place. Just because someone hasn't implemented a special case does not mean it is theoretically impossible to do. A different syntax would be better:

interface I
{
    Go(T in [float, double, int])(T t);
}

class C : I
{
    Go(T in [float, double, int])(T t) { }
}

which the compiler "unrolls" to

interface I
{
    Go(float t);
    Go(double t);
    Go(int t);
}

class C
{
    Go(float t) { }
    Go(double t) { }
    Go(int t) { }
}

which is standard D code. There is nothing wrong with specializing the most common cases. The point you are trying to make, and not doing a great job of, is that the compiler cannot create an unknown set of virtual functions from a single templated virtual function. BUT, when you realize that is what the problem is, the unknown set is the issue, NOT templated virtual functions. Make the set known and finite somehow and you have a solution, and it's not that difficult. Just requires some elbow grease. Primitives are obviously known at compile time so that is a doable special case. Although there will probably be quite a bit of wasted space, since each primitive will have a function generated for it for each templated function, that really isn't an issue. By adding a new syntax to D, we could allow for any arbitrary (but known and finite) set to be used:

Go(T in [A,B,C])(T t)

where A, B, C are known types at compile time. This generates 3 functions and is doable. (should be simple for any D compiler genius to add for testing)
Re: Bug in D!!!
On Wednesday, 30 August 2017 at 21:13:19 UTC, Kagamin wrote: It can't work this way. You can try std.variant. Sure it can! What are you talking about! std.variant has nothing to do with it! It works if T is hard coded, so it should work generically. What's the point of templated methods if they can't be used across inheritance? I could overload Go for each type and hence it should work. There is absolutely no reason why it can't work. Replace T with short, it works; replace T with anything and it works; hence it should work with T. If you are claiming that the compiler has to make a virtual function for each T, that is nonsense; I only need it for primitives, and there are a finite number of them. I could create overloads for short, int, double, float, etc, but why? The whole point of templates is to solve that problem. Variants do not help. openmethods can solve this problem too, but D should be more intelligent than simply writing off all normal use cases because someone thinks something can't be done. How many people thought it was impossible to go to the moon, yet it happened. Anyone can deny anything, it's such a simple thing to do...
Bug in D!!!
This is quite surprising! public struct S(T) { T s; } interface I { void Go(T)(S!T s); static final I New() { return new C(); } } abstract class A : I { } class C : A { void Go(T)(S!T s) { } } void main() { S!int s; auto c = I.New(); c.Go(s);// fails! //(cast(C)c).Go(s); // Works, only difference is we have made c an explicit C. } https://dpaste.dzfl.pl/dbc5a0663802 Everything works when Go is not templatized(we explicitly make T an int) This is a blocker for me! Can someone open a ticket?