Re: [Semi-OT] to!string(enumType)
On Friday, 19 May 2017 at 21:01:09 UTC, Jonathan M Davis wrote: Wait, what? Doesn't D specifically _not_ have SFINAE? You can use static if to test what compiles, and the branch whose condition compiles is then the one that gets compiled in, which kind of emulates what you'd get with SFINAE, but that's not really the same as SFINAE, which just outright picks the template specialization that happens to compile while letting the others that don't compile not generate errors. D complains when you have multiple matching templates. So, what do you mean that D has SFINAE? - Jonathan M Davis If a template does trigger a static assert, that static assert is ignored if there is another template in the overload set that could match.
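The overload behavior under discussion can be sketched with template constraints; this is a minimal illustration (the helper name `kind` is hypothetical, not from the thread):

```d
import std.stdio;

enum Color { red }

// Two constrained overloads: a candidate whose constraint does not
// match (or does not compile) is silently dropped from the overload
// set, which is the SFINAE-like behavior being debated here.
string kind(T)(T v) if (is(T == enum))  { return "enum"; }
string kind(T)(T v) if (!is(T == enum)) { return "not an enum"; }

void main() {
    writeln(kind(Color.red)); // enum
    writeln(kind(42));        // not an enum
}
```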
Re: [Semi-OT] to!string(enumType)
On Friday, 19 May 2017 at 20:23:16 UTC, Dominikus Dittes Scherkl wrote: On Friday, 19 May 2017 at 17:47:42 UTC, Stefan Koch wrote: On Friday, 19 May 2017 at 17:34:28 UTC, Dominikus Dittes Scherkl wrote: [...] the static assert tells what's going on; a constraint just results in a simple "overload not found". Hm. Maybe in this case it's ok, because enum is pretty much all that can be expected as argument to "enumToString". But normally I would call not using a constraint "stealing overload possibilities", because it would not be possible to overload the same function for a different type if you use this kind of assert. And the error message is not really better. You can still overload :) D has SFINAE
Re: [Semi-OT] to!string(enumType)
On Friday, 19 May 2017 at 17:34:28 UTC, Dominikus Dittes Scherkl wrote: On Friday, 19 May 2017 at 00:14:05 UTC, Stefan Koch wrote: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); Why that assert? We can check it at compile time. Doesn't this cry for a constraint? I would use asserts only ever for stuff that's only known at runtime. string enumToString(E)(E v) if(is(E == enum)) { ... } the static assert tells what's going on; a constraint just results in a simple "overload not found".
Re: Fantastic exchange from DConf
On Friday, 19 May 2017 at 16:29:59 UTC, Timon Gehr wrote: On 19.05.2017 17:12, Steven Schveighoffer wrote: I mean libraries which only contain @safe and @system calls. i.e.: $ grep -R '@trusted' libsafe | wc -l 0 mixin("@"~"trusted void nasty(){ corruptAllTheMemory(); }"); dmd -vcg-ast *.d
Re: Code improvement for DNA reverse complement?
On Friday, 19 May 2017 at 07:29:44 UTC, biocyberman wrote: I am solving this problem http://rosalind.info/problems/revc/ as an exercise to learn D. This is my solution: https://dpaste.dzfl.pl/8aa667f962b7 Is there some D tricks I can use to make the `reverseComplement` function more concise and speedy? Any other comments for improvement of the whole solution are also much appreciated. I think doing a switch or even an if-else chain would be faster than using an AA.
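A sketch of the switch-based suggestion (this is not the poster's actual solution from the linked paste; the function body here is hypothetical):

```d
import std.stdio;

// Complement each base via a switch instead of an associative-array
// lookup, then write it into the mirrored position.
string reverseComplement(string dna) pure {
    auto result = new char[](dna.length);
    foreach (i, c; dna) {
        char comp;
        switch (c) {
            case 'A': comp = 'T'; break;
            case 'T': comp = 'A'; break;
            case 'C': comp = 'G'; break;
            case 'G': comp = 'C'; break;
            default: assert(0, "invalid base");
        }
        result[$ - 1 - i] = comp; // reverse while complementing
    }
    return cast(string) result;
}

void main() {
    // Rosalind's sample dataset:
    writeln(reverseComplement("AAAACCCGGT")); // ACCGGGTTTT
}
```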
Re: [Semi-OT] to!string(enumType)
On Thursday, 18 May 2017 at 23:15:46 UTC, ag0aep6g wrote: On 05/19/2017 12:31 AM, Stefan Koch wrote: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); mixin ({ string result = "final switch(v) {\n"; foreach(m;[__traits(allMembers, E)]) { result ~= "\tcase E." ~ m ~ " :\n" ~ "\t\treturn \"" ~ m ~ "\";\n" ~ "\tbreak;\n"; } return result ~ "}"; } ()); } I'm sure that can be de-uglified a fair bit without hurting performance. 1) "final switch(v) {" and the closing brace can be moved out of the string. This should be completely free. 2) No need for `break` after `return`. Also free. 3) With a static foreach over `__traits(allMembers, E)` you can get rid of the function literal. Doesn't seem to affect performance much if at all. So far: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); final switch (v) { foreach(m; __traits(allMembers, E)) { mixin("case E." ~ m ~ ": return \"" ~ m ~ "\";"); } } } 4) If EnumMembers is an option, you can get rid of the string mixin altogether: string enumToString(E)(E v) { import std.meta: AliasSeq; import std.traits: EnumMembers; static assert(is(E == enum), "enumToString is only meant for enums"); alias memberNames = AliasSeq!(__traits(allMembers, E)); final switch(v) { foreach(i, m; EnumMembers!E) { case m: return memberNames[i]; } } } That takes a bit longer. May just be the time it takes to parse the std.* modules. Object size stays the same. Nice work beautifying the implementation. Although AliasSeq and EnumMembers are unnecessary. I incorporated your idea into the following version: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); switch(v) { foreach(m; __traits(allMembers, E)) { case mixin("E." ~ m) : return m; } default : { string result = "cast(" ~ E.stringof ~ ")"; uint val = v; enum headLength = E.stringof.length + "cast()".length; uint log10Val = (val < 10) ? 0 : (val < 100) ? 1 : (val < 1_000) ? 2 : (val < 10_000) ? 3 : (val < 100_000) ? 4 : (val < 1_000_000) ? 5 : (val < 10_000_000) ? 6 : (val < 100_000_000) ? 7 : (val < 1_000_000_000) ? 8 : 9; result.length += log10Val + 1; for(uint i;i != log10Val + 1;i++) { (cast(char[])result)[headLength + log10Val - i] = cast(char) ('0' + (val % 10)); val /= 10; } return cast(string) result; } } }
Re: [Semi-OT] to!string(enumType)
On Thursday, 18 May 2017 at 22:31:47 UTC, Stefan Koch wrote: Granted this version will result in undefined behavior if you pass something like (cast(ET) 3) to it. But the 55x increase in compilation speed is well worth it :) This code will replicate to!string behavior perfectly but will only take 30 milliseconds to compile: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); mixin({ string result = "switch(v) {\n"; foreach(m;[__traits(allMembers, E)]) { result ~= "\tcase E." ~ m ~ " :\n" ~ "\t\treturn \"" ~ m ~ "\";\n"; } result ~= "\tdefault: break;\n"; result ~= "}\n"; enum headLength = E.stringof.length + "cast()".length; result ~= ` enum headLength = ` ~ headLength.stringof ~ `; uint val = v; char[` ~ (headLength + 10).stringof ~ `] res = "cast(` ~ E.stringof ~ `)"; uint log10Val = (val < 10) ? 0 : (val < 100) ? 1 : (val < 1_000) ? 2 : (val < 10_000) ? 3 : (val < 100_000) ? 4 : (val < 1_000_000) ? 5 : (val < 10_000_000) ? 6 : (val < 100_000_000) ? 7 : (val < 1_000_000_000) ? 8 : 9; foreach(i;0 .. log10Val + 1) { res[headLength + log10Val - i] = cast(char) ('0' + (val % 10)); val /= 10; } return res[0 .. headLength + log10Val + 1].idup; `; return result; } ()); }
Re: [Semi-OT] to!string(enumType)
On Thursday, 18 May 2017 at 22:31:47 UTC, Stefan Koch wrote: Hi, I just took a look into commonly used functionality of Phobos. Such as getting the string representation of a enum. [...] Using -vcg-ast we see that it expands to ~50 lines.
[Semi-OT] to!string(enumType)
Hi, I just took a look into commonly used functionality of Phobos. Such as getting the string representation of an enum. the following code: import std.conv; enum ET { One, Two } static assert(to!string(ET.One) == "One"); takes about 220 milliseconds to compile, creating a 7.5k object file. Using my -vcg-ast switch it becomes visible that it expands to ~17000 lines of template instantiations, explaining both the compilation time and the size. Compiling the following code: string enumToString(E)(E v) { static assert(is(E == enum), "enumToString is only meant for enums"); mixin ({ string result = "final switch(v) {\n"; foreach(m;[__traits(allMembers, E)]) { result ~= "\tcase E." ~ m ~ " :\n" ~ "\t\treturn \"" ~ m ~ "\";\n" ~ "\tbreak;\n"; } return result ~ "}"; } ()); } private enum ET { One, Two } static assert (enumToString(ET.One) == "One"); takes about 4 milliseconds to compile, creating a 4.8k object file. Granted this version will result in undefined behavior if you pass something like (cast(ET) 3) to it. But the 55x increase in compilation speed is well worth it :)
Re: CTFE Status 2
On Tuesday, 16 May 2017 at 13:44:27 UTC, Stefan Koch wrote: [ ... ] The reason: ABI issues. Where exactly? No idea. Not just ABI issues ... There are more fundamental problems where we sometimes forget to allocate space for locals (of composite types).
Re: llvm-d 2.2 Dynamic loading (yet again)
On Wednesday, 17 May 2017 at 14:55:12 UTC, Moritz Maxeiner wrote: In response to a DConf 2017 request regarding this, llvm-d again supports dynamic loading. The API is essentially the same as it was for llvm 1.x, though you have to enable it with D versions. [...] Many thanks.
Re: Replacing std.math raw pointer arithmetic with a union type
On Wednesday, 17 May 2017 at 19:26:32 UTC, tsbockman wrote: On Wednesday, 17 May 2017 at 15:30:29 UTC, Stefan Koch wrote: the special case it supports is cast(uint*) and cast(ulong*) What about casting from real* when real.sizeof > double.sizeof? Unsupported. The code in ctfeExpr (paintFloatInt) explicitly checks for size == 4 || size == 8.
Re: Replacing std.math raw pointer arithmetic with a union type
On Wednesday, 17 May 2017 at 15:15:04 UTC, Walter Bright wrote: On 5/16/2017 8:02 PM, tsbockman wrote: Such code is verbose, hard to read, and, judging by the bugs and missing specializations I've found, hard to write correctly. It is also not compatible with CTFE. CTFE does support things like: double d; int i = *cast(int*)&d; as a special case, specifically to support bit manipulation of floating point types. the special case it supports is cast(uint*) and cast(ulong*)
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] So ... I just encountered more ABI issues, related to slices which are part of structures. Given a struct like this: struct R { uint[] s1; uint[] s2; } R returnSlices(int[] s1, int[] s2) { return R(s1[], s2[]); } static assert(returnSlices([1,2,3], [4,5,6,7]) == R([1,2,3], [4,5,6,7])); // works R returnSlicedSlices(int[] s1, int[] s2) { return R(s1[], s2[1 .. $-1]); } static assert(returnSlicedSlices([1,2,3], [4,5,6,7]) == R([1,2,3], [5,6])); // fails // returns R([1,2,3], null) at the moment The reason: ABI issues. Where exactly? No idea.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] So I have fixed a few cases of outer function evaluation. Unfortunately this exposed some hard to track down bugs in how expressions are handled. The JIT and Debugger features are on ice, until those bugs are eliminated. Sorry about the delay.
Re: CTFE Status 2
On Friday, 12 May 2017 at 11:21:56 UTC, Stefan Koch wrote: ... anyway. I am happy this is fixed now. Now I am less happy. The fallout of this fix causes code in std.ascii to miscompile. Apparently we don't make sure our function list is cleared before finalization.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, Outer function arguments are now supported. meaning this code will now work: int[] filterBy(int[] arr , bool function(uint) fn) { int[] result = []; uint resultLength; result.length = arr.length; foreach(i;0 .. arr.length) { auto e = arr[i]; bool r = true; r = fn(e); if(r) { result[resultLength++] = e; } } int[] filterResult; filterResult.length = resultLength; foreach(i; 0 .. resultLength) { filterResult[i] = result[i]; } return filterResult; } bool isDiv2(uint e) { bool result; result = (e % 2 == 0); return result; } static assert(filterBy([3,4,5], &isDiv2) == [4]); before this it would have behaved very strangely ;) because isDiv2 would have been executed instead of filterBy. And since after bytecode compilation there is no type checking anymore, the arrayPtr would have been interpreted as an integer. (which is always 4-byte aligned) that would have caused isDiv2 to return 1; which would have been interpreted as an address. and whatever was there would have been treated as an array descriptor. resulting in mostly the [] return value. ... anyway. I am happy this is fixed now.
Re: The cost of doing compile time introspection
On Thursday, 11 May 2017 at 21:57:06 UTC, Timon Gehr wrote: On 10.05.2017 16:28, Stefan Koch wrote: On Wednesday, 10 May 2017 at 14:03:58 UTC, Biotronic wrote: On Wednesday, 10 May 2017 at 11:45:05 UTC, Moritz Maxeiner wrote: [CTFE slow] First, as you may know, Stefan Koch is working on an improved CTFE engine that will hopefully make things a lot better. It will not; this issue is caused by templates, and not by CTFE. I think my measurements show that the main bottleneck is actually Appender in CTFE, while templates contribute a smaller, yet significant, amount. You are correct. I should not have made this statement without actually measuring. Still, templates produce the enormous amounts of code that CTFE has to wade through. So while they are not the bottleneck in this case, they are still the cause.
Re: Concerns about using struct initializer in UDA?
On Thursday, 11 May 2017 at 11:36:17 UTC, Andre Pany wrote: On Thursday, 11 May 2017 at 10:51:09 UTC, Stefan Koch wrote: On Thursday, 11 May 2017 at 10:49:58 UTC, Andre Pany wrote: [...] We have that syntax already. I do not understand. Should the syntax I have written already work as I expect, or do you mean my proposal is not possible as the syntax is ambiguous? Kind regards André I thought it should have worked already. My apologies: the struct-literal initialization syntax is unsupported because of the parser implementation. I don't know if you would introduce new ambiguities; I suspect that you wouldn't.
Re: Concerns about using struct initializer in UDA?
On Thursday, 11 May 2017 at 10:49:58 UTC, Andre Pany wrote: Hi, I know there are concerns about struct initialization in method calls but what is about struct initializer in UDA? Scenario: I want to set several UDA values. At the moment I have to create for each value a structure with exactly 1 field. But it would be quite nice if I could use struct initialization to group these values: struct Field { string location; string locationName; } struct Foo { @A = {locationName: "B"} int c; // <-- } void main() {} Of course the syntax is questionable, it is just a proposal. What do you think? Kind regards André We have that syntax already.
Re: struct File. property size.
On Thursday, 11 May 2017 at 07:24:00 UTC, AntonSotov wrote: import std.stdio; int main() { auto big = File("bigfile", "r+"); //bigfile size 20 GB writeln(big.size); // ERROR! return 0; } // std.exception.ErrnoException@std\stdio.d(1029): Could not seek in file `bigfile` (Invalid argument) I can not work with a large file? 32 bit executable. it seems you cannot :) files bigger than 4 GB are still problematic on many platforms.
Re: Static foreach pull request
On Wednesday, 10 May 2017 at 18:41:30 UTC, Timon Gehr wrote: On 10.05.2017 16:21, Stefan Koch wrote: On Wednesday, 10 May 2017 at 14:13:09 UTC, Timon Gehr wrote: On 10.05.2017 15:18, Stefan Koch wrote: if you try assert([] is null), it should fail. It doesn't. I have tried to make that point before, unsuccessfully. Empty arrays may or may not be null, but the empty array literal is always null. cat t3.d static assert([] is null); --- dmd t3.d -c --- t3.d(1): Error: static assert ([] is null) is false void main(){ import std.stdio; enum x = [] is null; auto y = [] is null; writeln(x," ",y); // "false true" } Oh fudge. Another case where the ctfe-engine goes the right way and the runtime version does not ... we should fix this one of these days.
Re: Lookahead in unittest
On Wednesday, 10 May 2017 at 16:09:06 UTC, Raiderium wrote: Heyo, On 2.074.0, the following test fails with "Error: undefined identifier 'B'" unittest { class A { B b; } class B { } } I can't figure out if this is intended behaviour. It's making a template-heavy module difficult to test. Would appreciate any help. First post here, be gentle :) It looks like this unittest block is treated like a function. What is the surrounding code? If this is at module level then it is a bug.
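A minimal sketch of the distinction (the class names are made up): at module scope, declarations may forward-reference each other, while inside a function-like scope — which the reported error suggests this unittest body is being treated as — they may not:

```d
// At module scope, declaration order does not matter, so A may
// reference B even though B is declared later:
class A { B b; }
class B {}

void main() {
    // Inside a function body, local declarations must appear
    // before use, so the dependent class has to come second:
    class D2 {}
    class C2 { D2 d; }
    auto c = new C2;
    assert(c.d is null); // class members default to null
}
```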
Re: The cost of doing compile time introspection
On Wednesday, 10 May 2017 at 14:03:58 UTC, Biotronic wrote: On Wednesday, 10 May 2017 at 11:45:05 UTC, Moritz Maxeiner wrote: [CTFE slow] First, as you may know, Stefan Koch is working on an improved CTFE engine that will hopefully make things a lot better. It will not; this issue is caused by templates, and not by CTFE.
Re: Static foreach pull request
On Wednesday, 10 May 2017 at 14:13:09 UTC, Timon Gehr wrote: On 10.05.2017 15:18, Stefan Koch wrote: if you try assert([] is null), it should fail. It doesn't. I have tried to make that point before, unsuccessfully. Empty arrays may or may not be null, but the empty array literal is always null. cat t3.d static assert([] is null); --- dmd t3.d -c --- t3.d(1): Error: static assert ([] is null) is false
Re: Problems with Array Assignment?
On Wednesday, 10 May 2017 at 13:34:30 UTC, Samwise wrote: I'm really sure this is just a stupid mistake I made, but I can't for the life of me figure out what is going on. Basically I'm trying to assign a reference to an object to an array, and the objects exist (an explicit destructor is writing lines at the end of the program, when the objects are GC'd), but there are no references to them in the array. You can see the complete code, and the line that I think is giving me trouble here: https://github.com/MggMuggins/TrafficLights/blob/master/source/renderer.d#L146 Any help is greatly appreciated. Like I said, I'm sure it's just a silly mistake because I don't understand something, but I appreciate any time you waste on me all the same. Thanks, ~Sam tiles can be null if you do not initialize it explicitly. assert(tiles, "tiles is null"); should clue you in.
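A minimal sketch of the null-array pitfall, with a hypothetical `Tile` class standing in for the project's type:

```d
class Tile {}

void main() {
    Tile[][] tiles;             // arrays default to null/empty
    assert(tiles.length == 0);  // so tiles[y][x] = ... would be a range error

    tiles = new Tile[][](4, 4); // allocate the grid first
    tiles[1][2] = new Tile;     // now the assignment sticks
    assert(tiles[1][2] !is null);
    assert(tiles[0][0] is null); // class-array elements still default to null
}
```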
Re: Static foreach pull request
On Wednesday, 10 May 2017 at 13:11:46 UTC, Atila Neves wrote: On Wednesday, 10 May 2017 at 11:12:06 UTC, Stefan Koch wrote: null : () { Slice* s; s = null; return s; } [] : () { Slice* s; s = alloca(sizeof(*s)); s.base = null; s.length = 0; return s; } Therefore null.length => (cast(Slice*)null).length; which results in a segfault. and [].length => (cast(Slice*)someValidSliceDescriptor).length; That's not how "regular" D works though. Atila What do you mean? Hmm, this should be how it works. The reason why assert([] == null) holds is that base is implicitly alias this'ed. If you try assert([] is null), it should fail.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Thanks to Daniel Murphy's input; '&&' works now in my experimental version. I hope to get it to pass the auto-tester soon! This is a big step :) Thanks Daniel.
Re: Static foreach pull request
On Wednesday, 10 May 2017 at 09:42:53 UTC, Timon Gehr wrote: On 09.05.2017 23:56, Timon Gehr wrote: core.exception.AssertError@ddmd/blockexit.d(90): Assertion failure ... Thanks! (It's a known issue though: https://github.com/tgehr/dmd/blob/static-foreach/test_staticforeach.d#L330.) Actually, yours is a different case with the same outcome (f and idonotexist do not matter at all; the issue exists even for static foreach(j;0..0){}). All static foreach loops over empty (non-AliasSeq) aggregates failed that assertion. The reason was that CTFE can return a null literal from a function that returns T[], but the constant folder cannot actually evaluate null.length for some reason. So here is the difference between null and []: null : () { Slice* s; s = null; return s; } [] : () { Slice* s; s = alloca(sizeof(*s)); s.base = null; s.length = 0; return s; } Therefore null.length => (cast(Slice*)null).length; which results in a segfault. and [].length => (cast(Slice*)someValidSliceDescriptor).length;
Re: DConf 2017 Hackathon report
On Wednesday, 10 May 2017 at 10:55:09 UTC, Atila Neves wrote: I felt like a wizard afterwards for modifying the compiler, which is a nice bonus. Nice, I usually feel confused after modifying the compiler. Now if only I could get the autotester to be green... Just think about how much more wizardly you will feel after it is green.
Re: Turn .opApply into ranges
On Tuesday, 9 May 2017 at 17:23:36 UTC, Yuxuan Shui wrote: I wondered if I can turn a struct that defines opApply into a range. And it turns out to be surprisingly easy: https://gist.github.com/yshui/716cfe987c89997760cabc2c951ca430 Maybe we can phase out opApply support in foreach? ;) BTW, is there a way to get the "element type" from .opApply? This "conversion" relies on fibers, which need heavy runtime support. So no, opApply still has its place.
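A sketch of the conversion idea using std.concurrency's fiber-backed Generator (this is not the code from the linked gist, just an illustration of why fibers come into it):

```d
import std.concurrency : Generator, yield;

// An aggregate that only provides internal iteration via opApply:
struct Numbers {
    int opApply(scope int delegate(int) dg) {
        foreach (i; 0 .. 3)
            if (auto r = dg(i)) return r;
        return 0;
    }
}

// Wrap it into an input range: the Generator runs the opApply loop
// on a fiber and suspends at each yield — the "heavy runtime
// support" the reply refers to.
auto toRange(Numbers n) {
    return new Generator!int({
        foreach (i; n) yield(i);
    });
}

void main() {
    import std.algorithm : equal;
    assert(toRange(Numbers()).equal([0, 1, 2]));
}
```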
Re: Structure of platform specific vs non platform specific code
On Tuesday, 9 May 2017 at 15:28:20 UTC, WhatMeWorry wrote: On Monday, 8 May 2017 at 21:16:53 UTC, Igor wrote: Hi, I am following Casey Muratori's Handmade Hero and writing it in DLang. This sounds very interesting. Maybe make it a public github project? It is only accessible to those who bought the game.
Re: -vcg-ast dmd command line switch
On Sunday, 7 May 2017 at 15:16:48 UTC, Ali Çehreli wrote: I've just commented on the following thread on the 'internals' newsgroup: http://forum.dlang.org/thread/tiiuucwivajgsnoos...@forum.dlang.org I think this should be improved to display code that is being mixed in. Ali I just submitted a PR. This will now give you all templates and mixin templates. Of course it'll also print string mixins.
Re: reasoning of evaluating code after return in current block (static if return)
On Sunday, 7 May 2017 at 23:41:00 UTC, bastien penavayre wrote: On Sunday, 7 May 2017 at 23:20:26 UTC, Adam D. Ruppe wrote: [...] I just realized that I accidentally posted this while editing. I agree with you on that this is barely different from just adding "else". [...] compile your code with the -vcg-ast switch and look at the .cg output.
Re: CTFE Status 2
On Wednesday, 3 May 2017 at 08:23:54 UTC, Nordlöw wrote: On Wednesday, 3 May 2017 at 07:35:56 UTC, Stefan Koch wrote: On Wednesday, 3 May 2017 at 06:10:22 UTC, Adrian Matoga wrote: So you're going to reinvent TCP in your debugging protocol? No. There is no need for a full-blown recovery mechanism. For the typical use case a lossless ordered connection can be assumed. And most things are not order-dependent. What about packet boundaries? We send source line by line. Packets should be under 1k in most cases.
Re: CTFE Status 2
On Wednesday, 3 May 2017 at 06:10:22 UTC, Adrian Matoga wrote: So you're going to reinvent TCP in your debugging protocol? No. There is no need for a full-blown recovery mechanism. For the typical use case a lossless ordered connection can be assumed. And most things are not order-dependent.
Re: See you soon at dconf
On Wednesday, 3 May 2017 at 06:45:05 UTC, Bastiaan Veelo wrote: On Tuesday, 2 May 2017 at 20:19:02 UTC, Stefan Koch wrote: Hi, I am very happy to see you soon at dconf. Likewise! I am at the airport as I type. Bastiaan. are you in Berlin already ? I am going to arrive near 19:00. Anyone up for having a pre-conf drink ?
Re: CTFE Status 2
On Tuesday, 2 May 2017 at 22:08:31 UTC, Moritz Maxeiner wrote: On Tuesday, 2 May 2017 at 09:55:56 UTC, Stefan Koch wrote: [...] I intended for the debugging functionality to be exposed via a udp socket listening on localhost. Such that a debug-ui does not have to deal with ipc difficulties. Hm, rationale for UDP over TCP here? I would assume one wouldn't want debugging info to be delivered out of order (or not at all); I mean, I guess it would be ok for localhost only (though one is then depending on implementation specifics vs. protocol semantics), but *if* you use sockets, you will eventually get people who use that over the network (and then the fun begins). Using TCP would just remove any potential future headache from the equation. I think any ordering should be done explicitly at the debugging-protocol level. For example, when sending source-line messages the order is given by the line number, and ordering can be done by the application. It's the same for breakpoint setting or for breakpoint-trigger notification. As for lost packets, we want to respond intelligently as well. And maybe reduce the amount of data in each packet, to make arrival of future packets more likely.
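The "ordering in the message, not the transport" idea can be sketched as a datagram payload; the struct name and layout here are hypothetical, not from any actual protocol in the thread:

```d
// A hypothetical source-line datagram: the ordering key (the line
// number) travels inside the payload, so the receiver can sort
// messages itself instead of relying on TCP's in-order delivery.
struct SourceLineMsg {
    uint line;       // orders the message on the receiving side
    char[128] text;  // fixed size, so one message fits in one datagram
}

// Matches the "packets should be under 1k in most cases" estimate:
static assert(SourceLineMsg.sizeof < 1024);
```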
See you soon at dconf
Hi, I am very happy to see you soon at dconf. And I apologize in advance for my nearly slideless talk. Hope this time there is dmd on the machine! Cheers Stefan
Re: CTFE Status 2
On Monday, 1 May 2017 at 19:06:24 UTC, H. S. Teoh wrote: On Mon, May 01, 2017 at 06:23:08PM +, Stefan Koch via Digitalmars-d wrote: [...] I'm not sure about providing a debugger UI inside the compiler itself... it's certainly possible, and could lead to interesting new ways of using a compiler, but I was thinking more along the lines of providing the necessary hooks so that you could attach an external debugger to the CTFE engine. But if the debugger UI is simple enough, perhaps having it built into the compiler may not be a bad thing. It would also avoid potential trouble caused by some platforms not having debuggers capable of plugging into the compiler in that way. But still, I can see people demanding IDE integration for this eventually... :-O T I intended for the debugging functionality to be exposed via a udp socket listening on localhost. Such that a debug-ui does not have to deal with ipc difficulties. I am strictly against building a debugger into dmd.
Re: DConf Hackathon Ideas
On Monday, 1 May 2017 at 17:04:42 UTC, Iain Buclaw wrote: On 1 May 2017 at 16:51, Mike Parker via Digitalmars-d wrote: On Monday, 1 May 2017 at 14:38:11 UTC, Joseph Rushton Wakeling wrote: On Thursday, 27 April 2017 at 16:33:02 UTC, singingbush wrote: SDL should be dropped. Deprecated, sure. But dropping it seems a bad idea given that various projects do still use it for their DUB package config. NOBODY USES IT! Probably not true. Perhaps a hackathon project could be to create a little app to find which projects on GitHub (or at least code.dlang.org) still use a `dub.sdl`, and auto-submit a PR to fix that? :-) I love SDL and much prefer it over JSON for DUB configs. Use it for all of my D projects. It looks cleaner and supports comments. I really would hate to see support dropped. We should make XML the default config format for DUB. <dub xmlns="http://code.dlang.org/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" description="The vibe.d server application running gdcproject.org." copyright="Copyright © 2014, Iain Buclaw"> /Runaway! You forgot a few / there
Re: CTFE Status 2
On Sunday, 30 April 2017 at 19:52:27 UTC, H. S. Teoh wrote: On Sun, Apr 30, 2017 at 01:26:09PM +, Stefan Koch via Digitalmars-d wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: > [ ... ] Big news! The first step to include debug info has been done. Yes this means you will be able to step through ctfe code while the compiler executes it. Wow! Will that be accessible to users in the end? That could be a totally awesome way of debugging CTFE code! T Yes the plan is to make it accessible for the advanced user. probably with a really bad ui, though (since I am awful at UI code).
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Big news! The first step to include debug info has been done. Yes this means you will be able to step through ctfe code while the compiler executes it.
Re: Transitive bit-packing of fields
On Sunday, 30 April 2017 at 11:02:52 UTC, Nordlöw wrote: Has anybody found a way to do transitive packing of bitfields? For instance, in import std.bitmanip : bitfields; struct X { // one bit too many to fit in one byte mixin(bitfields!(bool, `a`, 1, bool, `b`, 1, ubyte, `c`, 7, ubyte, `_pad`, 7)); } struct Y { // one unused bit mixin(bitfields!(ubyte, `d`, 7, ubyte, `_pad`, 1)); } struct XY { X x; Y y; } `XY` will currently occupy 4 bytes, when only 1+1+7+7=16 bits are actually used in `a`, `b`, `c` and `d`. Rust just got support for this. You'd have to write your own template to do it; it's easy though :)
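For this particular pair, the packing such a template would produce can be written by hand with a single bitfields mixin — a sketch of the target layout, dropping the pads since 1+1+7+7 already fills 16 bits exactly:

```d
import std.bitmanip : bitfields;

// The merged layout a "transitive" packing template would generate
// for X + Y: all four used fields share one 16-bit storage unit.
struct XY {
    mixin(bitfields!(
        bool,  "a", 1,
        bool,  "b", 1,
        ubyte, "c", 7,
        ubyte, "d", 7));
}
static assert(XY.sizeof == 2); // down from the 4 bytes of struct {X; Y;}

void main() {
    XY xy;
    xy.c = 100;
    xy.d = 127; // max for 7 bits
    assert(xy.c == 100 && xy.d == 127);
}
```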
Re: CTFE Status 2
On Friday, 28 April 2017 at 17:53:04 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, I just implemented sliceAssignment. meaning the following code will now compile: uint[] assignSlice(uint from, uint to, uint[] stuff) { uint[] slice; slice.length = to + 4; foreach (uint i; 0 .. to + 4) { slice[i] = i + 1; } slice[from .. to] = stuff; return slice; } static assert(assignSlice(1, 4, [9, 8, 7]) == [1, 9, 8, 7, 5, 6, 7, 8]); FIXED!
Re: CTFE Status 2
On Friday, 28 April 2017 at 08:47:43 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] After a little exploration of the JIT, I have now determined that a simple risc architecture is still the best. (codegen for scaled loads is hard :p) I am now back to fixing non-compiling code, such as: struct S { uint[] slice; } uint fn() { S s; s.slice.length = 12; return cast(uint)s.slice.length; } static assert(fn() == 12); This simple test does not compile because; ahm well ... Somewhere along the road we lose the type of s.slice and we cannot tell where to get .length from. I fixed this just now! ABIs are hard :)
Re: CTFE Status 2
On Friday, 28 April 2017 at 17:53:04 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, I just implemented sliceAssignment. meaning the following code will now compile: uint[] assignSlice(uint from, uint to, uint[] stuff) { uint[] slice; slice.length = to + 4; foreach (uint i; 0 .. to + 4) { slice[i] = i + 1; } slice[from .. to] = stuff; return slice; } static assert(assignSlice(1, 4, [9, 8, 7]) == [1, 9, 8, 7, 5, 6, 7, 8]); as always ... I spoke too soon :( While running tests I forgot to specify -bc-ctfe ... Although I use the same code for slicing ... it seems it misbehaves in that use case.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, I just implemented sliceAssignment. meaning the following code will now compile: uint[] assignSlice(uint from, uint to, uint[] stuff) { uint[] slice; slice.length = to + 4; foreach (uint i; 0 .. to + 4) { slice[i] = i + 1; } slice[from .. to] = stuff; return slice; } static assert(assignSlice(1, 4, [9, 8, 7]) == [1, 9, 8, 7, 5, 6, 7, 8]);
Re: CTFE Status 2
On Friday, 28 April 2017 at 13:03:42 UTC, Nordlöw wrote: On Friday, 28 April 2017 at 08:47:43 UTC, Stefan Koch wrote: After a little exploration of the JIT, I have now determined that a simple risc architecture is still the best. (codegen for scaled loads is hard :p) Do you mean no JIT? Of course there will be a JIT. But currently I am busy fixing bugs in the generated IR. So the implementation of the JIT will have to wait a little.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] After a little exploration of the JIT, I have now determined that a simple risc architecture is still the best. (codegen for scaled loads is hard :p) I am now back to fixing non-compiling code, such as: struct S { uint[] slice; } uint fn() { S s; s.slice.length = 12; return cast(uint)s.slice.length; } static assert(fn() == 12); This simple test does not compile because; ahm well ... Somewhere along the road we lose the type of s.slice and we cannot tell where to get .length from.
Re: CTFE Status 2
On Thursday, 27 April 2017 at 08:51:17 UTC, Dmitry Olshansky wrote: On 4/27/17 4:15 AM, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, As you already probably know some work has been done in the past week to get an x86 jit rolling. It is designed to produce very simple code with _any_ optimization at all. Since optimization introduces heavy complexity down the road, even if at first it looks very affordable. My opinion is : "_any_ optimization too much." There is also a trade-off of spending too much time doing an optimization. That being said, simple peep-hole optimizations may be well worth the effort. This stance should make it possible to get some _really_ shiny performance numbers for dconf. Cheers, Stefan I should probably clarify; I made a typo. I was meaning to write "without _any_ optimization at all." Peep-holing would be worth it for getting the last drop of performance; however in the specific case of newCTFE, the crappiest JIT will already be much faster than an optimized interpreter would be. Small peephole optimization quickly turns into an endless source of bugs.
Re: CTFE Status 2
On Thursday, 27 April 2017 at 03:33:03 UTC, H. S. Teoh wrote: Is it possible at all to use any of the backend (in particular what parts of the optimizer that are pertinent), or is the API not conducive for this? T It is of course possible to use dmd's backend, but not very desirable: dmd's backend works on an expression tree, which would be expensive to build from the linear IR newCTFE uses. Dmd's backend is also very hard to debug for anyone who is not Walter. CTFE in the common case will be fastest if executed without any optimizer interfering. Modern x86 chips do a very fine job indeed of executing crappy code fast, thereby making it possible to get away with very simple and fast codegen. (Where fast means code-generation speed rather than code-execution speed.)
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, As you already probably know some work has been done in the past week to get an x86 jit rolling. It is designed to produce very simple code with _any_ optimization at all. Since optimization introduces heavy complexity down the road, even if at first it looks very affordable. My opinion is : "_any_ optimization too much." This stance should make it possible to get some _really_ shiny performance numbers for dconf. Cheers, Stefan
Re: Shortest quine in D
On Wednesday, 26 April 2017 at 23:19:32 UTC, H. S. Teoh wrote: --hello.d:-- import std.stdio;void main(){write(import("hello.d"));} Thanks to string imports, quines in D are actually trivial. :-D T use __FILE__ to make it a little more portable
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Monday, 24 April 2017 at 11:29:01 UTC, Ola Fosheim Grøstad wrote: What are scaled loads? x86 has addressing modes which allow you to multiply an index by a certain set of scalars and add it as an offset to the pointer you want to load from, thereby making memory access patterns more transparent to the caching and prefetch systems, as well as reducing the overall code size.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Sunday, 23 April 2017 at 02:45:09 UTC, evilrat wrote: On Saturday, 22 April 2017 at 10:38:45 UTC, Stefan Koch wrote: On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote: [...] If you could share the code it would be appreciated. If you cannot share it publicly, come to IRC sometime. I am Uplink|DMD there. Sorry, I failed; that was actually caused by the build system and added dependencies (which are compiled every time no matter what, hence the slowdown). Testing overloaded functions vs templates shows no significant difference in build times. Ah, I see. A 4x slowdown for 10 instances seemed rather unusual, though doubtlessly possible.
Re: DIP 1005 - Preliminary Review Round 1
On Saturday, 22 April 2017 at 16:13:20 UTC, Timon Gehr wrote: This is how it works for static if, and it is also how it will work for static foreach, so it is even consistent with other language features. So you will touch up your static foreach DIP? If so, I am okay with building the implementation.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Saturday, 22 April 2017 at 14:22:18 UTC, John Colvin wrote: On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote: Hi Guys, I just began work on the x86 JIT backend, because right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured. Since I do believe that this is an interesting topic, I will give you the over-the-shoulder perspective on this. At the time of posting the video is still uploading, but you should be able to see it soon. https://www.youtube.com/watch?v=pKorjPAvhQY Cheers, Stefan Is there not some way that you could get the current interpreter-based implementation into dmd sooner and then modify the design later if necessary when you do the x86 JIT? The benefits of having just *fast* CTFE sooner are perhaps larger than the benefits of having *even faster* CTFE later. Faster templates are also something that might be higher priority - assuming it will be you who does the work there. Obviously it's your time and you're free to do whatever you like whenever you like, but I was just wondering what your reasoning for the order of your plan is? newCTFE is currently at a phase where high-level features have to be implemented, and for that reason I am looking to extend the interface to support, for example, scaled loads and the like. Otherwise you end up with 1000 temporaries that add offsets to pointers. Also, and perhaps more importantly, I am sick and tired of hearing "why don't you use ldc/llvm?" all the time...
Re: typeof(this) return wrong type
On Saturday, 22 April 2017 at 11:33:22 UTC, Andrey wrote: Hello, I am trying to add a custom attribute OnClickListener; the problem is that typeof always returns the BaseView type instead of MyView. struct OnClickListener { string id; } class BaseView { void onCreate() { writeln(getSymbolsByUDA!(typeof(this), OnClickListener).stringof); } } class MyView : BaseView { @OnClickListener("okButton") void onOkButtonClick() { writeln("Hello world!"); } } typeof returns the static type, not the dynamic type. If there is a branch of the function that does not return MyView, the closest base type is used.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote: On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote: On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote: Could you explain where it can be helpful? It's helpful for newCTFE's development. :) I estimate the JIT will easily be 10 times faster than my bytecode interpreter, which will make it about 100-1000x faster than the current CTFE. Does this apply to templates too? I recently tried some code, and the templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else for type special cases. If you could share the code it would be appreciated. If you cannot share it publicly, come to IRC sometime. I am Uplink|DMD there.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote: On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote: On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote: Could you explain where it can be helpful? It's helpful for newCTFE's development. :) I estimate the JIT will easily be 10 times faster than my bytecode interpreter, which will make it about 100-1000x faster than the current CTFE. Does this apply to templates too? I recently tried some code, and the templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else for type special cases. No, it most likely will not. However, I am planning to work on speeding templates up after newCTFE is done.
Re: multiple `alias this` suggestion
On Friday, 21 April 2017 at 16:41:45 UTC, Meta wrote: On Friday, 21 April 2017 at 16:21:57 UTC, H. S. Teoh wrote: On Fri, Apr 21, 2017 at 08:17:28AM -0400, Andrei Alexandrescu via Digitalmars-d wrote: [...] This is interesting, and would be timely to discuss before an implementation of multiple alias this gets started. -- Andrei Whatever happened to the almost-complete implementation of alias this that was sitting in the PR queue a while back? Have we just let it bitrot into oblivion? :-( T https://github.com/dlang/dmd/pull/3998/files It's written against the C++ DMD as it was in 2014, so it may not be feasible anymore to easily port it to DDMD. This one looks easy to port. However, I am not sure if those are the desired semantics, and that was one of the points raised against the PR, iirc.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote: Could you explain where it can be helpful? It's helpful for newCTFE's development. :) I estimate the JIT will easily be 10 times faster than my bytecode interpreter, which will make it about 100-1000x faster than the current CTFE.
Re: {OT} Youtube Video: newCTFE: Starting to write the x86 JIT
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote: Hi Guys, I just began work on the x86 JIT backend, because right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured. Since I do believe that this is an interesting topic, I will give you the over-the-shoulder perspective on this. At the time of posting the video is still uploading, but you should be able to see it soon. https://www.youtube.com/watch?v=pKorjPAvhQY Cheers, Stefan Actual code-gen starts at 34:00 or so.
{OT} Youtube Video: newCTFE: Starting to write the x86 JIT
Hi Guys, I just began work on the x86 JIT backend, because right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured. Since I do believe that this is an interesting topic, I will give you the over-the-shoulder perspective on this. At the time of posting the video is still uploading, but you should be able to see it soon. https://www.youtube.com/watch?v=pKorjPAvhQY Cheers, Stefan
Re: Thoughts from newcommer
On Tuesday, 18 April 2017 at 16:42:38 UTC, Andrei Alexandrescu wrote: On 04/18/2017 03:00 AM, Shachar Shemesh wrote: D would have the ability to have a nice container that would do RAII (for classes since for structs, __dtors are called automatically) That's just it, though. They are not. Not reliably. Yah, clearly there's a problem with the language implementation (and the definition that is incomplete, leaving too much leeway to the implementation). Clearly the way to go is fix the bug, which has been preapproved and of raised gravity. That would obviate the entire "implementation has a bug therefore language does not support RAII" line of reasoning. Thanks Stefan for looking into this! -- Andrei This is going to be tricky without breaking code which worked around the bug.
Re: Interpolated strings
On Wednesday, 19 April 2017 at 12:10:33 UTC, Jonas Drewsen wrote: On Wednesday, 19 April 2017 at 12:03:47 UTC, Stefan Koch wrote: On Wednesday, 19 April 2017 at 11:59:51 UTC, Jonas Drewsen wrote: What about supporting an optional prefix inside the {}, like: int year = 2017; format($"The date is {%04d year}"); so if there is a % immediately following the { then the chars until the next whitespace are the format specifier. You can of course leave out the format specifier and it will default to %s. I really don't see how string interpolation is better than ` "The date is " ~ format("%04d", year) ` As mentioned before, it is only because it is such a common pattern that it justifies the change. Seems like many other languages reached that conclusion as well. Also take a look at a more realistic example with some more formatting and it will be more obvious (at least to me it is :) ) "The date is " ~ format("%04d", year) ~ " and " ~ user ~ " just logged into " ~ here; $"The date is {%04d year} and {user} just logged into {here}" I see. So you want to build format strings as well. This is going to be nasty, and likely too complex for a robust implementation. Here is what I would support: string interpolation literals can only be used with strings, and they need to start with some prefix which is not an operator. I"The date is %dateString and the time is %timeString"
Re: Interpolated strings
On Wednesday, 19 April 2017 at 11:59:51 UTC, Jonas Drewsen wrote: What about supporting an optional prefix inside the {}, like: int year = 2017; format($"The date is {%04d year}"); so if there is a % immediately following the { then the chars until the next whitespace are the format specifier. You can of course leave out the format specifier and it will default to %s. I really don't see how string interpolation is better than ` "The date is " ~ format("%04d", year) `
Re: Optilink bugs(or DMD)
On Wednesday, 19 April 2017 at 03:52:54 UTC, Nierjerson wrote: Major optilink bugs, blocker. Code is long but demonstrates the issue. Compiles with ldc. [...] There are two instances of void ForegroundColor(cSolidColor rhs)
Re: Interpolated strings
On Tuesday, 18 April 2017 at 06:54:11 UTC, Jacob Carlborg wrote: On 2017-04-17 21:28, Jonas Drewsen wrote: The page could also list pre-approved language changes such as async functions (which Walter wants afaik). Another feature that can be implemented with AST macros. This is starting to get ridiculous. So many features have been added and are talked about that can be implemented with AST macros instead. Same as with the scope/safe related DIPs. Many smallish specialized features are added instead of a generic feature that can handle all of them, increasing the complexity of the language. The corresponding AST macros would be extremely complex, slow, and worst of all not checkable.
Re: Generating switch at Compile Time
On Thursday, 13 April 2017 at 21:06:52 UTC, Jesse Phillips wrote: I realize that this is likely really pushing the compile time generation but a recent change to the switch statement[1] is surfacing because of this usage. uninitswitch2.d(13): Deprecation: 'switch' skips declaration of variable uninitswitch2.main.li at uninitswitch2.d(14) - import std.traits; import std.typecons; import std.meta; private static immutable list = AliasSeq!( tuple("a", "q"), tuple("b", "r"), ); void main() { import std.stdio; string search; switch(search) { --->foreach(li; list) { // li initialization is skipped mixin("case li[0]:"); mixin("writeln(li[1]);"); return; } default: break; } // Works mixin(genSwitch("search")); } - I realize I can build out the entire switch and mix it in: - string genSwitch(string search) { auto ans = "switch(" ~ search ~ ") {\n"; foreach(li; list) { ans ~= "case \"" ~ li[0] ~ "\":\n"; ans ~= "writeln(\"" ~ li[1] ~ "\");\n"; ans ~= "return;\n"; } ans ~= "default:\n"; ans ~= "break;\n"; ans ~= "}"; return ans; } - But I'm just wondering if the new initialization check should not be triggered from this utilization. - // Unrolled based on // https://wiki.dlang.org/User:Quickfur/Compile-time_vs._compile-time description version(none) void func2243(Tuple param0, Tuple param1) { { { case param0[0]: writeln(param0[1]); return; } { case param1[0]: writeln(param1[1]); return; } } } - Thoughts? 1. https://issues.dlang.org/show_bug.cgi?id=14532 This is what is generated by your example: switch (search) { unrolled { { // does not actually open a new scope immutable immutable(Tuple!(string, string)) li = __list_field_0; case "a": { } writeln(li.__expand_field_1); return 0; } { // same here, we do not actually open a new scope. immutable immutable(Tuple!(string, string)) li = __list_field_1; case "b": { } writeln(li.__expand_field_1); return 0; } } default: { break; } } return 0; It should be clear to see why li's initialisation is referred to as skipped.
Re: Duplicated functions not reported?
On Sunday, 16 April 2017 at 17:10:14 UTC, Temtaime wrote: On Sunday, 16 April 2017 at 15:54:16 UTC, Stefan Koch wrote: On Sunday, 16 April 2017 at 10:56:37 UTC, Era Scarecrow wrote: On Saturday, 15 April 2017 at 11:10:01 UTC, Stefan Koch wrote: It would requires an O(n^2) check per declaration. Even it is never used. which would make imports that much more expensive. Seems wrong to me... If you made a list/array of all the functions (based purely on signatures) then sorted them, then any duplicates would be adjacent. Scanning that list would be O(n-1). This assumes it's done after all functions are scanned and identified, doing it earlier is a waste of time and energy. sorting has O(n^2) worst case complexity. Therefore totaling to O(n^2) worst case again. Why this difficulty ? Function[args][name] funcs; AA lookup is O(1). AA lookup is _NOT_ O(1). Worst case is O(n).
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys, I just fixed default initialization of structs, so now a larger portion of code will be compiled and executed by newCTFE. MyStruct my_struct; will now work; before, it would trigger a bailout. NOTE: this will create bogus results if the struct contains complex initializers, i.e. anything other than integers. Complex type support will come after dconf.
Re: Thoughts from newcommer
On Sunday, 16 April 2017 at 14:25:22 UTC, Andrei Alexandrescu wrote: On 4/16/17 3:50 AM, Shachar Shemesh wrote: https://issues.dlang.org/show_bug.cgi?id=14246 I'd raised the importance and urgency of this issue in the past. Walter is really overloaded for the time being. Any volunteer wants to look into this now? -- Andrei I am going to take a look. This bug has been bugging me for a while ... it's time to take a shot at it ;)
Re: Duplicated functions not reported?
On Sunday, 16 April 2017 at 10:56:37 UTC, Era Scarecrow wrote: On Saturday, 15 April 2017 at 11:10:01 UTC, Stefan Koch wrote: It would require an O(n^2) check per declaration, even if it is never used, which would make imports that much more expensive. Seems wrong to me... If you made a list/array of all the functions (based purely on signatures) then sorted them, then any duplicates would be adjacent. Scanning that list would be O(n-1). This assumes it's done after all functions are scanned and identified; doing it earlier is a waste of time and energy. Sorting (with quicksort, as commonly implemented) has O(n^2) worst-case complexity, therefore totaling O(n^2) worst case again.
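Era's sort-then-scan idea, sketched over plain signature strings (the string representation is a stand-in for whatever the compiler would actually use internally):

```d
import std.algorithm.sorting : sort;

// After sorting, any duplicate signatures sit next to each other,
// so one linear pass over adjacent pairs finds them.
bool hasDuplicates(string[] signatures)
{
    sort(signatures);
    foreach (i; 1 .. signatures.length)
        if (signatures[i] == signatures[i - 1])
            return true;
    return false;
}

unittest
{
    assert(hasDuplicates(["void foo()", "int bar(int)", "void foo()"]));
    assert(!hasDuplicates(["void foo()", "int bar(int)"]));
}
```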
Re: Suboptimal array copy in druntime?
On Sunday, 16 April 2017 at 10:08:22 UTC, Guillaume Chatelet wrote: I was looking at the _d_arrayassign family functions in druntime: https://github.com/dlang/druntime/blob/master/src/rt/arrayassign.d#L47 https://github.com/dlang/druntime/blob/master/src/rt/arrayassign.d#L139 [...] Nope. Those are valid points. Templatizing the code is the way to go.
Re: Strange stack variable corruption error after calling extern(C) function
On Sunday, 16 April 2017 at 08:34:12 UTC, cc wrote: All this with extern(Windows) rather than extern(C) by the way. Why not use LoadLibraryA? Then all the problems go away :) This is how Derelict does it as well.
Re: Duplicated functions not reported?
On Saturday, 15 April 2017 at 09:17:08 UTC, Jacob Carlborg wrote: I'm not sure if I'm missing something obvious here, but the following code compiles and runs: void foo() {} void foo() {} void main() {} Although if I do call "foo", the compiler will complain that it matches both versions of "foo". Is this expected behavior of how function overloading works? Is it possible for the compiler to report this error? At least this example is pretty obvious for a human to see. It would require an O(n^2) check per declaration, even if it is never used, which would make imports that much more expensive.
Re: CTFE Status 2
On Saturday, 15 April 2017 at 10:30:57 UTC, Moritz Maxeiner wrote: On Saturday, 15 April 2017 at 10:10:54 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: Hi Guys, due to the old CTFE status thread getting to page 30, I am now starting a new one. [...] The LLVM backend is back in a fully working state. It's about 2 times slower than my interpreter ;) Huh. In all cases, or only in trivial ones? Because I would have expected the overhead of jitting to become less relevant the more complex stuff you interpret vs jit. It's an average number, on tests that represent the usual usage of CTFE.
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: Hi Guys, due to the old CTFE status thread getting to page 30, I am now starting a new one. [...] The LLVM backend is back in a fully working state. It's about 2 times slower than my interpreter ;)
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Wonderful news! Most of the bytecode macros are gone! Meaning fewer templates and faster bytecode generation!
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi, I want to share another story. I was pretty happy to have recursive function calls working. So happy, in fact, that I overlooked that they were actually generated twice. Let me illustrate what happened. Suppose we have the following module: uint fn(uint rcount) { uint ctr = rcount; while (rcount--) ctr += fn(rcount); return ctr; } pragma(msg, fn(26)); The compiler hits the pragma(msg) and goes on to evaluate fn(26). newCTFE receives the function body as an AST node. It starts processing (function 1) and hits the fn(rcount) call. It checks if it has the code for fn already; this check returns false, since fn has not been completely generated yet. When this check returns, it writes the function body into its todo list and wires up the call to the entry in the todo list (function 2). It then hits the end of fn (function 1) and saves it in its code cache. Now it processes the todo list and finds that it has to process fn. It starts processing fn (function 2) and hits the call. This time it does find an entry in the code cache for fn (function 1). It wires up the call and returns. The generated pseudo-code looks like: ... fn_0 (...) { ... call (fn_1, ...) ... } ... fn_1 (...) { ... call (fn_0, ...) ... } I was very surprised when I saw this :) Cheers, Stefan
Re: Deduplicating template reflection code
On Friday, 14 April 2017 at 08:24:00 UTC, Johannes Pfau wrote: I've got this code duplicated in quite some functions: - foreach (member; __traits(derivedMembers, API)) { // Guards against private members static if (__traits(compiles, __traits(getMember, API, member))) { static if (isSomeFunction!(__traits(getMember, API, member)) && !hasUDA!(__traits(getMember, API, member), IgnoreUDA) && !isSpecialFunction!member) { alias overloads = MemberFunctionsTuple!(API, member); foreach (MethodType; overloads) { // function dependent code here } } } } What's the idiomatic way to refactor / reuse this code fragment? -- Johannes The idiomatic way would be to wrap it inside another template. In a year or so you won't need to worry about template overhead anymore (if I succeed, that is :) )
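As a sketch of such a wrapping template, the member filtering could be hoisted into one helper so each call site only loops over the result (IgnoreUDA and isSpecialFunction are stand-ins mirroring the original snippet):

```d
import std.meta : Filter;
import std.traits : MemberFunctionsTuple, hasUDA, isSomeFunction;

struct IgnoreUDA {}

// Stand-in for the helper assumed by the original code.
enum isSpecialFunction(string name) = name == "__ctor" || name == "__dtor";

// Yields the names of all reflectable, non-ignored, non-special methods.
template apiMethods(API)
{
    template isWanted(string member)
    {
        static if (__traits(compiles, __traits(getMember, API, member)))
            enum isWanted = isSomeFunction!(__traits(getMember, API, member))
                && !hasUDA!(__traits(getMember, API, member), IgnoreUDA)
                && !isSpecialFunction!member;
        else
            enum isWanted = false; // private / inaccessible members
    }
    alias apiMethods = Filter!(isWanted, __traits(derivedMembers, API));
}

// Usage at each call site shrinks to:
//   foreach (member; apiMethods!API)
//       foreach (MethodType; MemberFunctionsTuple!(API, member)) { ... }
```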
{OT} Youtube video: finding an elusive bug with CTFE
Hi Guys, while building newCTFE I ran into a really nasty bug, which took me hours to find; but with CTFE and __traits it is preventable and will never haunt me again. Because I was so happy that I could prevent this bug, I want to share it with the whole world: https://www.youtube.com/watch?v=9seMTaNmQDI Cheers, Stefan
Re: Dlang Features You Would Like To Share
On Wednesday, 12 April 2017 at 21:40:48 UTC, bluecat wrote: What are some features that you have discovered that you would like to share with the community? For me, one thing I found interesting was the ability to define structures dynamically using mixins: import std.stdio; import std.format: format; template MakePoint(string name, string x, string y) { const char[] MakePoint = "struct %s {int %s; int %s;}".format(name, x, y); } mixin(MakePoint!("Point", "x", "y")); void main() { auto pt = new Point; pt.x = 1; pt.y = 2; writefln("point at (%s, %s)", pt.x, pt.y); } This one does not need to be a template. You can build the string using CTFE, which is indeed much faster.
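The non-template version might look like this: a plain function, evaluated by CTFE when its result is used inside a mixin, so no template instance is created per generated struct.

```d
import std.format : format;

// Plain function instead of a template — CTFE runs it when mixed in.
string makePoint(string name, string x, string y)
{
    return "struct %s { int %s; int %s; }".format(name, x, y);
}

mixin(makePoint("Point", "x", "y"));

void main()
{
    auto pt = Point(1, 2);
    assert(pt.x == 1 && pt.y == 2);
}
```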
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Comma expressions should now work.
Re: CTFE Status 2
On Wednesday, 12 April 2017 at 09:19:39 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] I just found more states we get into that should be impossible to ever get into. I am stumped. Baffled. And seriously befuddled! So .. this is partially because we assume the stack to be zeroed if we have not written to it yet. It is zero-initialized, after all. However, if we are returning from a function that wrote to the stack and then we are calling another function, that function will see the state the previous function left there... which just means ... we have to zero our temporaries and locals on function entry. Implementing this, however, breaks incremental code generation. Aw
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] I just fixed the static assert((null ~ null) is null); Hence I can now enable string-concat!
Re: ctfe append too slow, but can't speed up
On Tuesday, 11 April 2017 at 02:20:37 UTC, Jethro wrote: CTFE string appending is way too slow; I have tried the suggested methods and nothing works, and it slows down by at least an order of magnitude. I need a drop-in replacement (no other changes) that can take over the duties of string and allow for memory reuse. reserve, capacity, and assumeSafeAppend cannot be used by CTFE, so they are useless. I've tried to shadow a pre-allocated buffer but that doesn't work either. You can give me your code, and I'll see what has to be done to run it with newCTFE.
Re: CTFE using of replaceAll from std.regex posible?
On Wednesday, 12 April 2017 at 12:00:27 UTC, Martin Tschierschke wrote: Is there a way to use "replaceAll" at compile time? Regards mt. Not yet :) I assume it would bring the current system to its knees. If you want to experiment, you could replace malloc with new.
Re: The New CTFE Engine on the Blog
On Wednesday, 12 April 2017 at 05:51:20 UTC, Ali Çehreli wrote: On 04/10/2017 06:07 AM, Mike Parker wrote: Stefan has been diligently keeping us all updated on NewCTFE here in the forums. Now, he's gone to the blog to say something to tell the world about it. The blog: https://dlang.org/blog/2017/04/10/the-new-ctfe-engine/ Reddit: https://www.reddit.com/r/programming/comments/64jfes/an_introduction_to_ds_new_engine_for_compiletime/ The first code sample is private immutable ubyte[128] uri_flags = // indexed by character ({ // ... })(); and the text says "The ({ starts a function-literal, the }) closes it.". Why the extra parentheses? Just the curly brackets works, no? private immutable ubyte[128] uri_flags = // indexed by character { // ... }(); Ali Yes it would work. But I like to distinguish function-literals from blocks :)
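A minimal illustration of the two equivalent spellings, both defining and immediately calling a function literal:

```d
// The outer parentheses in the first form are purely visual;
// both initializers run through CTFE here.
immutable a = ({ return 42; })();
immutable b = { return 42; }();
static assert(a == 42 && b == 42);
```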
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] I just found more states we get into, that should be impossible to ever get into. I am stumped. Baffled. And seriously befuddled!
Re: Thoughts from newcommer
On Tuesday, 11 April 2017 at 19:57:19 UTC, Piotr Kowalski wrote: Hello D community, I am a language polyglot that lately got interested in D. I love it; it's a very elegant language, so simple and so powerful at the same time. I will write some thoughts as an outsider. The reason I am looking at D in 2017 is that D is almost nonexistent on popular sites for programmers (reddit/HN etc). The only discussion I remember about D was that it had two standard libraries and there was no consensus on which to use, and that it uses a GC so it's slow. In my opinion D can get traction but it needs a lot more marketing, and people need to be more vocal about blog posts written about D usage. If you don't have an account on Reddit or HackerNews, create one, upvote articles about D, answer questions about D in comments, promote D (without zealotry). Two other important things to change people's minds about D performance: http://benchmarksgame.alioth.debian.org/ Why is D not there? https://www.techempower.com/benchmarks/previews/round14/#section=data-r14=ph=plaintext This is another very popular benchmark that people discuss and can attract new people to look at D. vibe.d here is not working; it would be good to fix that. What is the plan for D for the next 5 years? What about RAII? Did you consider implementing solutions from which we could get memory safety without GC in D? Static analysis combined with some additional annotations/rules (but not intrusive). For example using some of the solutions from Cyclone[1] or Rust or proposed solutions for lifetimes in C++[2] [1] http://www.cs.umd.edu/projects/cyclone/papers/cyclone-safety.pdf [2] https://github.com/isocpp/CppCoreGuidelines/blob/master/docs/Lifetimes%20I%20and%20II%20-%20v0.9.1.pdf Memory safety is currently in the works. We just have one std-lib now. The GC is slow, yes; the short-term solution is to avoid it. (All GC-ed languages recommend static preallocation :P)
Re: CTFE Status 2
On Monday, 10 April 2017 at 20:49:58 UTC, Stefan Koch wrote: On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys :) I am currently fixing a bug involving complex members of structs (where complex means slice, struct, array or pointer). I did expect them to be broken ... but not to be _that_ broken :) struct S { uint[] slice; } uint fn() { S s; s.slice.length = 12; return cast(uint)s.slice.length; } static assert(fn() == 12); This code will not work because s.slice has no elementType :) (which does not mean that s.slice[0] has type void) newCTFE literally loses the type information somewhere. And people wonder why I don't like Mondays :) I found out that slice was never allocated :) This is an orthogonal problem, but it's fixed now. The problem from the above post still remains. And I still don't know why it happens.
{OT} Youtube video small tutorial to work with newCTFE's IR
Hi Guys, I have uploaded a video showing howto implement pow in newCTFE's IR. I hope that this is of interest to some of you :) Cheers, Stefan
Re: CTFE Status 2
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote: [ ... ] Hi Guys :) I am currently fixing a bug involving complex members of structs (where complex means slice, struct, array or pointer). I did expect them to be broken ... but not to be _that_ broken :) struct S { uint[] slice; } uint fn() { S s; s.slice.length = 12; return cast(uint)s.slice.length; } static assert(fn() == 12); This code will not work because s.slice has no elementType :) (which does not mean that s.slice[0] has type void) newCTFE literally loses the type information somewhere. And people wonder why I don't like Mondays :)
Re: The D ecosystem in Debian with free-as-in-freedom DMD
On Monday, 10 April 2017 at 15:11:01 UTC, Jack Stouffer wrote: On Monday, 10 April 2017 at 11:40:12 UTC, Matthias Klumpp wrote: 3) Will DMD support more architectures in the near future? How should the architecture issue be handled? This can be definitively answered as "no", https://issues.dlang.org/show_bug.cgi?id=15108 When templates are done I might give it a shot :)
Re: Strange CTFE issue, string replacement
On Sunday, 9 April 2017 at 20:20:55 UTC, Jethro wrote: On Sunday, 9 April 2017 at 19:55:57 UTC, Stefan Koch wrote: On Sunday, 9 April 2017 at 19:38:33 UTC, Jethro wrote: [...] The constructor is nuts. You do not need to zero the string! Also avoid templates if you can. Please don't criticize banal stuff by choosing one meaningless line and calling the whole thing "nuts". It's a waste of time. I zero'ed it because it helps in debugging and for other errors.(if used in C terminated like string usage it helps to have the string actually terminate at some point) My questions still stand though. Why is it 10x the speed and memory?!?! The short answer is because the current (soon to be replaced) CTFE implementation has a number of issues.
Re: Strange CTFE issue, string replacement
On Sunday, 9 April 2017 at 19:38:33 UTC, Jethro wrote: I tried to make a string like replacement called fstring which uses buffering to avoid lots of little allocations. The problem is, that it actually causes the compiler to use 10x the memory and about 10 times slower. It doesn't do anything special but tries to emulate string(so it can be a drop in replacement, more or less) Why would it be so ineffective in CTFE? I know strings are built in and the compiler probably has some optimizations, but an order of magnitude more memory usage is was not expected(should be lower)? I could understand a small speed slowdown, but not 10x. Any ideas? class fstring(T = char) { import std.traits; T[] buf; int length = 0; int Inc = 65535; this(int inc = 65535) { Inc = inc; buf.length = Inc; for(int i = 0; i < buf.length; i++) buf[i] = 0; } // foreach ReturnType!(D) opApply(D)(scope D dg) { ReturnType!(D) result; for (int i = 0; i < length; i++) { result = dg(buf[i]); if (result) break; } return result; } // Append string to end auto Append(S)(S s) { while (length + s.length >= buf.length) buf.length += Inc; for (int i = 0; i < s.length; i++) buf[length + i] = cast(T)(s[i]); length += s.length; } // Prepends string to start auto Prepend(S)(S s) { while (length + s.length >= buf.length) buf.length += Inc; for (int i = 0; i < length; i++) buf[s.length + length - i - 1] = buf[length - i - 1]; for (int i = 0; i < s.length; i++) buf[i] = cast(T)(s[i]); length += s.length; } auto opOpAssign(string op, S)(S s) if (op == "~") { Append(s); } void opAssign(S)(S s) if (!is(S == typeof(this))) { length = 0; Append(s); } auto opBinary(string op, S)(S s) if (op == "~") { Append(s); return this; } auto opBinaryRight(string op, S)(S s) if (op == "~") { Prepend(s); return this; } bool opEquals(S)(S s) { if (buf.length != s.length) return false; for(int i = 0; i < buf.length; i++) if (buf[i] != s[i]) return false; return true; } // Forward replaces string a with string b in place(no memory 
allocations) and greedily(as they are found from left to right) auto replace(A, B)(A a, B b) { if (length < a.length || a.length == 0 || b.length == 0) return this; auto small = (a.length >= b.length); int idx = 0; int count = 0; // number of matches, used when b.length > a.length to determine how much to overallocate(which depends on the number of matches) for(int i = 0; i <= length - a.length; i++) { if (buf[i] == a[0]) { auto found = true; for(int j = 1; j < a.length; j++) // Work backwards as more likely to be a mismatch(faster) if (buf[i + a.length - j] != a[a.length - j]) { found = false; break; } if (found) { count++; i += a.length - 1; // dec by 1 because for loop will inc if (small) for(int j = 0; j < b.length; j++) buf[idx++] = b[j]; continue; } } if (small) buf[idx++] = buf[i]; } int extra = -count*(a.length - b.length); if (small) { length += extra; if (buf.length > length) buf[length] = 0; return this; } // We now know the count so the (a.length - b.length)*count is the amount to expand/contract buf auto end = length + extra; if (end >= buf.length) buf.length += Inc; // shift the string to make space at the head, this allows us to use essentially the same algorithm as above, the difference is that we must first know the number of matches to overallocate the buffer if necessary(could do dynamically) and to shift the string ahead for(int i = 0; i < length; i++) buf[end - 1 - i] = buf[length - i - 1]; idx = 0; for(int i = extra; i <= length - a.length + extra; i++) { if (buf[i] == a[0]) {
Re: -fPIC and 32-bit dmd.conf settings
On Sunday, 9 April 2017 at 09:23:07 UTC, Joseph Rushton Wakeling wrote: Thanks for the explanation. TBH I find myself wondering whether `-fPIC` should be in the flags defined in dmd.conf _at all_ (even for 64-bit environments); surely that should be on request of individual project builds ... ? It's needed in order to compile working executables on Debian & Ubuntu, because of PIE.
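For reference, a sketch of what the relevant dmd.conf entry might look like on such a distro (the paths and flags shown are illustrative, not copied from any actual package):

```ini
[Environment32]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -L-L/usr/lib/i386-linux-gnu -fPIC
```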