Re: New developments on topic of memset, memcpy and similar
On Saturday, 27 November 2021 at 11:48:02 UTC, Imperatorn wrote: On Saturday, 27 November 2021 at 11:15:45 UTC, Igor wrote: Two years ago there was a [Google Summer of Code project](https://forum.dlang.org/thread/izaufklyvmktnwsrm...@forum.dlang.org) to implement these primitives in pure D for various reasons. It was concluded the project wasn't viable and it was abandoned, but there were some interesting learnings. I have now stumbled on some new work in C land on these primitives that might interest people who were following the original project, so I am sharing it here: Custom ASM implementation that outperforms libc: https://github.com/nadavrot/memset_benchmark Paper on automatic implementation of these primitives: https://dl.acm.org/doi/pdf/10.1145/3459898.3463904 Haven't read it yet, but how did they make it portable? Regarding the custom ASM implementation, I think it is only for the x64 platform.
New developments on topic of memset, memcpy and similar
Two years ago there was a [Google Summer of Code project](https://forum.dlang.org/thread/izaufklyvmktnwsrm...@forum.dlang.org) to implement these primitives in pure D for various reasons. It was concluded the project wasn't viable and it was abandoned, but there were some interesting learnings. I have now stumbled on some new work in C land on these primitives that might interest people who were following the original project, so I am sharing it here: Custom ASM implementation that outperforms libc: https://github.com/nadavrot/memset_benchmark Paper on automatic implementation of these primitives: https://dl.acm.org/doi/pdf/10.1145/3459898.3463904
Dlang Setup Tutorials
Hi, I made my first few video tutorials, and they are about how to set up a DLang development environment on Windows and Linux. Hopefully they can help new people quickly set up everything for playing around with our beautiful language :). [YouTube Intro](https://www.youtube.com/watch?v=OzASFrPzil4&list=PLNiswfy6ptAnw_QmqAuy-Bz02oeu4pnLL) [YouTube Windows install](https://www.youtube.com/watch?v=fuJBj_tgsR8&list=PLNiswfy6ptAnw_QmqAuy-Bz02oeu4pnLL&index=2) [YouTube Ubuntu install](https://www.youtube.com/watch?v=fJ-u29rDVXk&list=PLNiswfy6ptAnw_QmqAuy-Bz02oeu4pnLL&index=3) [YouTube Manjaro install](https://www.youtube.com/watch?v=rM6_S6Fy7aQ&list=PLNiswfy6ptAnw_QmqAuy-Bz02oeu4pnLL&index=4) And if you prefer a platform that doesn't spy on you or bother you with ads: [Odysee Intro](https://open.lbry.com/@DrIggy:5/DLangIntro:4?r=fDUmrZFcxBqRdhnyvH55CTGDV8k8p7XC) [Odysee Windows install](https://open.lbry.com/@DrIggy:5/DLangWindowsInstall:3?r=fDUmrZFcxBqRdhnyvH55CTGDV8k8p7XC) [Odysee Ubuntu install](https://open.lbry.com/@DrIggy:5/DLangUbuntuInstall:9?r=fDUmrZFcxBqRdhnyvH55CTGDV8k8p7XC) [Odysee Manjaro install](https://open.lbry.com/@DrIggy:5/DLangManjaroInstall:1?r=fDUmrZFcxBqRdhnyvH55CTGDV8k8p7XC)
Re: Small or big dub packages
On Thursday, 1 November 2018 at 14:07:37 UTC, Guillaume Piolat wrote: On Monday, 29 October 2018 at 11:31:55 UTC, Igor wrote: The way I see it, the advantage of smaller packages is that users can pick and choose and only have the code they really need in their project, but the con could be having to manage a lot of dependencies. Also, I am not sure how compile times on a clean project and a previously compiled project would be affected.

Pros:
- Users can pick exactly what they need.
- Encourages decoupling instead of too much cohesion.
- Less code to build and maintain.
- Less chance of breakage on upgrade, since you depend on less.
- Improved build times, since only modified sub-packages get rebuilt.
- Good for the ecosystem.

Cons:
- More link-time operations when not using --combined, since each sub-package is compiled on its own.
- Too many sub-packages can slow down builds.
- Possibly hitting more DUB edge cases (less of an issue now that DUB has tests).
- The directory layout may need to change for proper VisualD support.
- On the DUB registry, sub-packages are less popular than "big" packages because they are less discoverable, and for some reason some people won't pick a sub-package when there is a top-level package.

Thanks for the list, Guillaume. I don't think sub-packages are a good choice for what I have in mind. For example, there are a number of packages in the registry that offer image loading and saving for multiple formats, but what I am looking for is the simplest PNG reader that supports just the most common variants, no indexed or interlaced images, but is extensible so that if I ever do need those I can just add a dependency on PngIndexed and PngInterlaced packages and get support for them too. After that I might also need a PNG writer, which would be a separate package. A sub-package in this scenario (and in my mind) would only make sense for some common code these packages would share. Of course, it could be that I don't fully understand sub-packages... What do you think about the above scenario?
Are there any points you would add or change regarding it specifically?
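For reference, the sub-package mechanics discussed above are declared in dub.json roughly like this (a minimal sketch; the package and path names are made up for illustration):

```json
{
    "name": "png",
    "subPackages": [
        {
            "name": "core",
            "sourcePaths": ["core/source"]
        },
        {
            "name": "indexed",
            "sourcePaths": ["indexed/source"],
            "dependencies": { "png:core": "*" }
        }
    ]
}
```

A consumer could then depend on just `"png:indexed": "*"` instead of pulling in a whole top-level package.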
Small or big dub packages
Can someone tell me the pros and cons of having multiple, extra-small dub packages that depend on each other versus one dub package with a bunch of functionality? A good example of this is dlib (https://github.com/gecko0307/dlib). It has many features that could be split into separate packages. The way I see it, the advantage of smaller packages is that users can pick and choose and only have the code they really need in their project, but the con could be having to manage a lot of dependencies. Also, I am not sure how compile times on a clean project and a previously compiled project would be affected.
Re: Load D shared library on windows x64
On Saturday, 18 August 2018 at 00:31:49 UTC, Tofu Ninja wrote: On Friday, 17 August 2018 at 20:27:05 UTC, Tofu Ninja wrote: It's this part that fails... it always returns null: HMODULE h = cast(HMODULE) Runtime.loadLibrary(dllName); if (h is null) { writeln("error loading"); return; } Is there any way to see why Runtime.loadLibrary is failing? It just returns null on error, which is not very helpful. Maybe you can find something useful in how Derelict does it here: https://github.com/DerelictOrg/DerelictUtil/blob/master/source/derelict/util/sharedlib.d
Re: Any book recommendation for writing a compiler?
On Thursday, 2 November 2017 at 03:55:27 UTC, Michael V. Franklin wrote: On Wednesday, 1 November 2017 at 20:53:44 UTC, Dr. Assembly wrote: Hey guys, if I were to get into dmd's source code to play a little bit (just for fun, no commercial use at all), which books/resources do you recommend to start out? I found this to be quite helpful: http://llvm.org/docs/tutorial/ Specifically the Kaleidoscope tutorial. Mike If you are interested in using LLVM my little project might be helpful: https://github.com/igor84/summus
Re: My two cents
On Monday, 23 October 2017 at 11:02:41 UTC, Martin Nowak wrote: In C++ incremental rebuilds are simple as you compile each file individually anyhow, but that's the crux of why C++ compilations are so slow in the first place. Compiling multiple modules at once provides lots of speedups as you do not have to reparse and analyze common/mutual imports, but on the downside it cannot be parallelized that well. I wish I knew how Delphi compiled things, because it is by far the fastest compiler I have ever tried. It also compiled individual files, but into .dcu files rather than .obj files, and when compiling sources that depended on a module it reused that module's .dcu if its source hadn't changed.
Re: Dynamically import() files
On Monday, 23 October 2017 at 12:15:17 UTC, Andre Pany wrote: Hi, I have a folder "i18n" which contains message bundle files. For now it contains only the message bundle file written by the developer: "messagebundle.properties". [...] Can't you just create all the files you expect to have and leave them empty, then import them all and process them differently if they have no content?
Re: is(this : myClass)
On Friday, 20 October 2017 at 23:24:17 UTC, Patrick wrote: On Friday, 20 October 2017 at 23:01:25 UTC, Steven Schveighoffer wrote: On 10/20/17 6:23 PM, Patrick wrote: On Friday, 20 October 2017 at 22:15:36 UTC, Steven Schveighoffer wrote: On 10/20/17 5:55 PM, Patrick wrote: Due to the very specific nature of the 'is' operator, why wouldn't the compiler know to implicitly query the class types? Why must it be explicitly written, typeof(this)? The compiler generally doesn't "fix" errors for you; it tells you there is a problem, and then you have to fix it. You have to be clear and unambiguous to the compiler. Otherwise debugging would be hell. I'm not asking the compiler to fix my errors. When would is(this, myClass) not mean: is(typeof(this) : typeof(myClass))? class C { } int c; C myC; is(myC : c); oops, forgot to capitalize. But the compiler says "I know, you really meant is(typeof(myC) : typeof(c))" -> false. -Steve If I explicitly wrote: is(typeof(myC) : typeof(c)) the outcome would still be false and it would still require debugging. So your example demonstrates nothing other than that a typo was made. Try again... In this unique case, the compiler should identify that the class and primitive types are incompatible and should issue an error instead (and not return false). Patrick But with the current compiler you would never write is(typeof(myC) : typeof(c)) if in your mind "c" is actually a class "C", because if that were in your mind you would just write is(typeof(myC) : c), which would get you the error. You only need typeof(variable) to get to the type; there is no point in doing typeof(type), you just write the type, and C is a type. Right?
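The typeof point above can be shown with a tiny self-contained example (the names mirror the ones in the thread):

```d
// typeof(variable) yields the variable's type; a type name like C is used
// directly in an is() expression.
class C { }

void main()
{
    C myC;
    int c;

    static assert(is(typeof(myC) : C));          // a class converts to itself
    static assert(!is(typeof(myC) : typeof(c))); // a class never converts to int
    // Writing is(typeof(myC) : c) would not compile at all, because `c` is a
    // variable, not a type - which is exactly the error that catches the
    // capitalization mistake.
}
```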
Re: How do I convert a LPVOID (void*) to string?
On Monday, 16 October 2017 at 22:54:32 UTC, Adam D. Ruppe wrote: On Monday, 16 October 2017 at 21:48:35 UTC, Nieto wrote: How do I convert/create a D string from LPVOID (void*)? There is no one answer to this, but for the specific function you are looking at, the ALLOCATE_BUFFER argument means it puts the pointer in the pointer. So the way I'd do it is: char* lpMsgBuf; instead of LPVOID. You might as well keep some type info there; no need to call it VOID yet (it will implicitly cast to that when necessary). You still need to cast at the function call point, so the rest remains the same, but you should keep the return value of FormatMessageA. Then, you can do something like this: string s = lpMsgBuf[0 .. returned_value].idup; and it will copy it into the D string. You could also skip that ALLOCATE_BUFFER argument and pass it a buffer yourself, something like: char[400] buffer; auto ret = FormatMessageA( FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, errorMessageID, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), buffer.ptr, buffer.length, NULL); return buffer[0 .. ret].idup; would also work. If you will not use the buffer in any other way but as an immutable string, a slightly better way is: import std.exception : assumeUnique; return assumeUnique(buffer[0..ret]); This will not allocate another buffer just to copy the data into it as immutable. (Note this is only safe if the buffer outlives the returned string; returning a slice of a stack-local array like the char[400] above would escape stack memory, so there you must copy with idup.)
Re: the best language I have ever met(?)
On Friday, 25 November 2016 at 19:16:43 UTC, ketmar wrote: On Friday, 25 November 2016 at 14:27:39 UTC, Igor Shirkalin wrote: On Wednesday, 23 November 2016 at 18:58:55 UTC, ketmar wrote: We can define a static array without counting the elements as follows: enum array_ = [1u,2,3,4]; uint[array_.length] static_array = array_; there are workarounds, of course. yet i'll take mine `uint[$] a = [1u,2,3,4];` over that quoted mess at any time, without second thought. ;-) I think you may write it (I mean actual D) using some template like this: yeah. but i'm not Andrei, i don't believe that the only compiler task is to resolve templated code. ;-) i.e. Andrei believes that everything (and more) should be moved out of the compiler core and done with library templates. Andrei is a genius, for sure, but he is living somewhere in the future, where our PCs are not bound by memory, CPU, and other silly restrictions. ;-) tl;dr: using a template for this sux. I just don't understand how it is worth adding to the language that instead of typing someArray.length you can just type $, but it is not OK to add to the language the same thing for static array sizes...
Re: Two way struct wrapper
On Wednesday, 11 October 2017 at 12:35:51 UTC, drug wrote: Using `alias this` it's easy to make a wrapper for a structure that calls the wrapped structure's methods like its own. This is one way, a wrapper-to-wrapped transformation. Is it possible to create the opposite way, from wrapped to wrapper? https://run.dlang.io/is/Avyu3I All calls to Bar are redirected to Foo, but the output of Foo is not redirected to Bar. I don't see how the compiler could just deduce that... In either case you can just wrap the expression in another Bar(): auto v5 = Bar(Bar(Foo(2)) + Bar(Foo(3)));
Re: Alias on an array element
On Friday, 13 October 2017 at 02:04:03 UTC, Meta wrote: On Friday, 13 October 2017 at 01:12:38 UTC, solidstate1991 wrote: On Friday, 13 October 2017 at 01:09:56 UTC, solidstate1991 wrote: I'm making a struct for easy color handling. Here's a code sample: public struct Color{ union{ uint raw; ///Raw representation in integer form, also forces the system to align on INT32. ubyte[4] colors; ///Normal representation, aliases are used for color naming. ubyte alpha, red, green, blue; } version(LittleEndian){ alias alpha = colors[0]; alias red = colors[1]; alias green = colors[2]; alias blue = colors[3]; }else{ alias alpha = colors[3]; alias red = colors[2]; alias green = colors[1]; alias blue = colors[0]; } } All the aliases fail to compile, and I have not found anything about this in any of the documentation I checked. Edit: ubyte alpha, red, green, blue; was added so I can continue debugging after the refactoring until I find a solution. You can only create aliases for symbols, not expressions. You could create accessor functions that return the appropriate indices. Why not just do this: version(LittleEndian) { ubyte alpha, red, green, blue; } else { ubyte blue, green, red, alpha; } BTW, what platforms do you have in mind when thinking about BigEndian? I am curious because I usually consider BigEndian dead for my purposes.
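Here is a compilable sketch of that suggestion. One extra detail matters: the named channel bytes need to sit in an anonymous struct, otherwise all four ubytes declared directly inside the union would overlay offset 0 instead of mapping onto colors[0 .. 4].

```d
public struct Color
{
    union
    {
        uint raw;        /// Raw representation, forces INT32 alignment.
        ubyte[4] colors; /// Indexed access to the channels.
        struct           // anonymous struct: fields get successive offsets
        {
            version (LittleEndian)
            {
                ubyte alpha, red, green, blue;
            }
            else
            {
                ubyte blue, green, red, alpha;
            }
        }
    }
}

unittest
{
    Color c;
    c.raw = 0x04030201;
    version (LittleEndian)
        assert(c.alpha == 1 && c.red == 2 && c.green == 3 && c.blue == 4);
}
```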
Re: Multiline string literal improvements
On Friday, 13 October 2017 at 07:59:36 UTC, Biotronic wrote: D version that works in CTFE: Thanks Biotronic! This is just what I had in mind.
Re: Multiline string literal improvements
On Wednesday, 11 October 2017 at 14:28:32 UTC, Meta wrote: On Wednesday, 11 October 2017 at 09:56:52 UTC, Igor wrote: On Wednesday, 11 October 2017 at 08:35:51 UTC, Walter Bright wrote: On 10/10/2017 3:16 PM, sarn wrote: Works even better in D because it can run at compile time. Yes, I see no need for a language feature that can be easily and far more flexibly done with a regular function - especially since what |q{ and -q{ do gives no clue from the syntax. You are right. My mind is just still not used to the power of D templates so I didn't think of this. On the other hand, that is why D still makes me say "WOW!" on a regular basis :). Just to confirm I understand: for example, the following would give me compile-time stripping of whitespace: template stripws(string l) { enum stripws = l.replaceAll(regex("\s+", "g"), " "); } string variable = stripws(q{ whatever and ever; }); And I would get variable equal to " whatever and ever; ". Right? Even better, you could write the same code that you would for doing this at runtime and it'll Just Work: string variable = q{ whatever and ever; }.replaceAll(regex(`\s+`, "g"), " "); I tried this, but the Disassembly view shows: call std.regex.regex!string.regex and call std.regex.replaceAll!(string, char, std.regex.internal.ir.Regex!char).replaceAll which means that replaceAll with regex is done at runtime, not at compile time. Also, when I just added enum in front of string variable I got this: Error: malloc cannot be interpreted at compile time, because it has no available source code
Re: Multiline string literal improvements
On Wednesday, 11 October 2017 at 08:35:51 UTC, Walter Bright wrote: On 10/10/2017 3:16 PM, sarn wrote: Works even better in D because it can run at compile time. Yes, I see no need for a language feature that can be easily and far more flexibly done with a regular function - especially since what |q{ and -q{ do gives no clue from the syntax. You are right. My mind is just still not used to the power of D templates so I didn't think of this. On the other hand, that is why D still makes me say "WOW!" on a regular basis :). Just to confirm I understand: for example, the following would give me compile-time stripping of whitespace: template stripws(string l) { enum stripws = l.replaceAll(regex("\s+", "g"), " "); } string variable = stripws(q{ whatever and ever; }); And I would get variable equal to " whatever and ever; ". Right?
Multiline string literal improvements
D has a very nice feature, token strings: string a = q{ looksLikeCode(); }; It is useful for writing mixins and for getting syntax highlighting in editors. Although I like it, it is not something I ever missed in other languages. What I do always miss are these two options: 1. Have something like this: string a = |q{ firstLine(); if (cond) { secondLine() } }; mean: count the number of whitespace characters at the start of the first new line of the string literal and then strip up to that many whitespace characters from the start of each line. 2. If we put, for example, "-" instead of "|" in the above example, have that mean: replace all whitespace with a single space in the following string literal. I think it is clear why these would be useful, but if you want I can add a few examples. This would not make any breaking changes to the language and it should be possible to implement it wholly in the lexer. So what do you think?
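Feature 1 can be approximated today with a regular function; here is a rough sketch (the function name and exact trimming rules are mine, not part of any proposal): measure the indentation of the first line after the opening brace and strip up to that much leading whitespace from every following line.

```d
string stripIndent(string s)
{
    import std.string : splitLines, KeepTerminator;

    auto lines = s.splitLines(KeepTerminator.yes);
    if (lines.length < 2)
        return s;

    // Count leading whitespace on the first real line.
    size_t indent = 0;
    while (indent < lines[1].length
           && (lines[1][indent] == ' ' || lines[1][indent] == '\t'))
        ++indent;

    // Strip up to that much leading whitespace from every line.
    string result = lines[0];
    foreach (line; lines[1 .. $])
    {
        size_t drop = 0;
        while (drop < indent && drop < line.length
               && (line[drop] == ' ' || line[drop] == '\t'))
            ++drop;
        result ~= line[drop .. $];
    }
    return result;
}

// Being a regular function, it also runs at compile time:
enum code = stripIndent(q{
    firstLine();
    if (cond) {
        secondLine();
    }
});
```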
Re: @nogc formattedWrite
On Saturday, 7 October 2017 at 18:27:36 UTC, jmh530 wrote: On Saturday, 7 October 2017 at 18:14:00 UTC, Nordlöw wrote: It would be nice to be able to do formatted output in -betterC... Agreed. If you know the size of the buffer, you can use sformat, which might be @nogc, but I don't know if it's compatible with betterC. Also, you might check out Ocean, which might have @nogc formatting. https://github.com/sociomantic-tsunami/ocean As far as I know sformat is still not @nogc, because it may throw an exception if the buffer is not large enough, and throwing exceptions requires allocation.
Re: Region-based memory management and GC?
On Friday, 29 September 2017 at 22:13:01 UTC, Jon Degenhardt wrote: Have there been any investigations into using region-based memory management (aka memory arenas) in D, possibly in conjunction with GC allocated memory? This would be a very speculative idea, but it'd be interesting to know if there have been looks at this area. My own interest is request-response applications, where memory allocated as part of a specific request can be discarded as a single block when the processing of that request completes, without running destructors. I've also seen some papers describing GC systems targeting big data platforms that incorporate this idea. eg. http://www.ics.uci.edu/~khanhtn1/papers/osdi16.pdf --Jon Sounds like you just want to use https://dlang.org/phobos/std_experimental_allocator_building_blocks_region.html.
Re: Allocating byte aligned array
On Wednesday, 27 September 2017 at 21:48:35 UTC, timvol wrote: On Wednesday, 27 September 2017 at 21:44:48 UTC, Ali Çehreli wrote: On 09/27/2017 02:39 PM, timvol wrote: [...] void main() { auto mem = new ubyte[1024+15]; auto ptr = cast(ubyte*)(cast(ulong)(mem.ptr + 15) & ~0x0FUL); auto arr = ptr[0..1024]; } Ali Works perfectly. Thank you! I know you can also do this with static arrays: align(16) ubyte[1024] mem; But I guess the align directive doesn't work with dynamic arrays...
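The same trick generalizes to any power-of-two boundary; a sketch (alignedSlice is my own helper name, not a library function): over-allocate by alignment-1 bytes and round the pointer up. Since the returned slice points into the GC allocation, it also keeps that block alive.

```d
T[] alignedSlice(T)(size_t count, size_t alignment)
{
    assert(alignment != 0 && (alignment & (alignment - 1)) == 0,
           "alignment must be a power of two");
    // Over-allocate so some address in the block is suitably aligned.
    auto mem = new ubyte[count * T.sizeof + alignment - 1];
    // Round the start address up to the next multiple of the alignment.
    auto addr = (cast(size_t) mem.ptr + alignment - 1) & ~(alignment - 1);
    return (cast(T*) addr)[0 .. count];
}

void main()
{
    auto arr = alignedSlice!ubyte(1024, 16);
    assert((cast(size_t) arr.ptr & 15) == 0); // 16-byte aligned
    assert(arr.length == 1024);
}
```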
Re: Temporarily adding -vgc to a DUB build
On Sunday, 17 September 2017 at 01:50:08 UTC, Nicholas Wilson wrote: On Saturday, 16 September 2017 at 21:45:34 UTC, Nordlöw wrote: How do I temporarily enable -vgc when building my app with DUB? I've tried DFLAGS=-vgc /usr/bin/dub build --build=unittest but it doesn't seem to have any effect, as it doesn't rebuild directly after the call /usr/bin/dub build --build=unittest I'm using DUB version 1.5.0. Or is adding a new build configuration, say unittest-vgc, the only way to accomplish this? Setting the dflags in the dub.json should work. This is what I use for dcompute: { ... "dflags" : ["-mdcompute-targets=cuda-210" ,"-oq", "-betterC"], ... } so just changing those flags to "-vgc" should do the trick. You can also just execute: export DFLAGS=-vgc before running dub. That should work since, as far as I know, "export" is needed to make an environment variable visible to other processes started from the same shell session.
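The difference export makes can be demonstrated in plain POSIX shell (a subshell stands in for dub here):

```shell
# Start from a clean slate in case DFLAGS is already exported.
unset DFLAGS

# A plain shell variable is not inherited by child processes:
DFLAGS=-vgc
sh -c 'echo "child sees: [$DFLAGS]"'   # child sees: []

# After export it becomes part of the environment, so dub would see it:
export DFLAGS
sh -c 'echo "child sees: [$DFLAGS]"'   # child sees: [-vgc]
```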
Re: failed loading freetype 2.6 via derelict-ft
On Wednesday, 13 September 2017 at 19:01:52 UTC, Spacen wrote: Hello, I am trying to resurrect an old project, but can't get the freetype library loaded. I can't figure out what to do; it's been a while. I have just built the freetype 2.6 library with Visual Studio 2015. Any tips would be appreciated. dub build Performing "debug" build using dmd for x86. derelict-util 2.0.6: target for configuration "library" is up to date. derelict-ft 1.1.3: building configuration "library"... derelict-gl3 1.0.23: target for configuration "library" is up to date. derelict-sdl2 1.9.7: target for configuration "library" is up to date. gltest ~master: building configuration "application"... Linking... To force a rebuild of up-to-date targets, run again with --force. C:\>gltest.exe Exception: Failed to load symbol FT_Stream_OpenBzip2 from shared library freetype.dll Make sure the DLL is also 32-bit if you are building a 32-bit app, and your Visual Studio should have the dumpbin utility, which you can use to check that the required symbols are properly exported: dumpbin /EXPORTS your.dll
Re: DerelictGL3 slow compilation speed with contexts
On Wednesday, 13 September 2017 at 13:25:01 UTC, Mike Parker wrote: On Wednesday, 13 September 2017 at 10:28:26 UTC, Igor wrote: Well, since a minimal example is a window app that opens a window, sets everything up, and calls OpenGL stuff, I will just push it to my github project this evening and you can try with that. I will let you know when it's done. In the meantime I can tell you that it seems the main culprit for the long compile time is the "-inline" compiler option. In that case, I should be able to reproduce it. I've not compiled the context stuff with -inline before. Once I do reproduce it, I'll open an issue over github for future discussion. I'll let you know if I need more info. I tested it again with my entire project and it seems it is not the -inline thing but -O (optimized build). You can check out the project here: https://github.com/igor84/dngin If you try to build it with "dub build -ax86_64 -b release" you will experience very long compilation.
Re: DerelictGL3 slow compilation speed with contexts
On Wednesday, 13 September 2017 at 01:30:10 UTC, Mike Parker wrote: On Tuesday, 12 September 2017 at 21:55:23 UTC, Igor wrote: Hi All, I switched from using free functions in DerelictGL3 to DerelictGL3_Contexts, and compilation speed in an optimized build went from 2 seconds to 7 minutes using DMD and from 2 seconds to 10 seconds using LDC. Is this a known problem? Are there any workarounds? The support for contexts is still relatively new and I've received no feedback specific to it since I first released it in the 2.0 alpha. Given the heavy use of string & template mixins, it's possible I'm hitting a corner case somewhere. In my simple tests, I haven't seen it. So the only way I'm going to be able to narrow it down is with a minimal example that I can compile and test with. Is there anything you can give me? Well, since a minimal example is a window app that opens a window, sets everything up, and calls OpenGL stuff, I will just push it to my github project this evening and you can try with that. I will let you know when it's done. In the meantime I can tell you that it seems the main culprit for the long compile time is the "-inline" compiler option.
DerelictGL3 slow compilation speed with contexts
Hi All, I switched from using free functions in DerelictGL3 to DerelictGL3_Contexts, and compilation speed in an optimized build went from 2 seconds to 7 minutes using DMD and from 2 seconds to 10 seconds using LDC. Is this a known problem? Are there any workarounds?
Re: SIMD under LDC
On Monday, 11 September 2017 at 11:55:45 UTC, Igor wrote: In the meantime can anyone tell me how to add an attribute to a function only if something is defined, since this doesn't work: version(USE_SIMD_WITH_LDC) { import ldc.attributes; @target("ssse3") } void funcThatUsesSIMD() { ... version(LDC) { import ldc.gccbuiltins_x86; c = __builtin_ia32_pshufb128(c, *simdMasks); } else { c = __simd(XMM.PSHUFB, c, *simdMasks); } ... } Regarding the crash in debug mode, the problem was that my masks variable wasn't properly aligned, and I guess the best I can do with the attribute is this: version(LDC) import ldc.attributes; else private struct target { string specifier; } @target("ssse3") void funcThatUsesSIMD() {...}
Re: SIMD under LDC
On Thursday, 7 September 2017 at 16:45:40 UTC, Igor wrote: On Thursday, 7 September 2017 at 15:24:13 UTC, Johan Engelen wrote: On Wednesday, 6 September 2017 at 20:43:01 UTC, Igor wrote: I opened a feature request on github. I also tried using the gccbuiltins but I got this error: LLVM ERROR: Cannot select: 0x2199c96fd70: v16i8 = X86ISD::PSHUFB 0x2199c74e9a8, 0x2199c74d6c0 That's because SSSE3 instructions are not enabled by default, so the compiler isn't allowed to generate the PSHUFB instruction. Some options you have: 1. Set a cpu that has ssse3, e.g. compile with `-mcpu=native` 2. Enable SSSE3: compile with `-mattr=+ssse3` 3. Perhaps best for your case, enable SSSE3 for that function, importing the ldc.attributes module and using the @target("ssse3") UDA on that function. -Johan Thanks Johan. I tried this and now it does compile, but it crashes with an Access Violation in a debug build. In an optimized build it seems to be working, though. I will try to reproduce this in a minimal project and open an LDC bug if successful. In the meantime can anyone tell me how to add an attribute to a function only if something is defined, since this doesn't work: version(USE_SIMD_WITH_LDC) { import ldc.attributes; @target("ssse3") } void funcThatUsesSIMD() { ... version(LDC) { import ldc.gccbuiltins_x86; c = __builtin_ia32_pshufb128(c, *simdMasks); } else { c = __simd(XMM.PSHUFB, c, *simdMasks); } ... }
VisualD setup problems
I seem to have corrupted something within my installation and I can't find how to fix it. Earlier I was able to setup a breakpoint within some phobos module that I used and step through phobos code but that doesn't work any more. Does anyone know how I can make that work again?
Re: SIMD under LDC
On Thursday, 7 September 2017 at 15:24:13 UTC, Johan Engelen wrote: On Wednesday, 6 September 2017 at 20:43:01 UTC, Igor wrote: I opened a feature request on github. I also tried using the gccbuiltins but I got this error: LLVM ERROR: Cannot select: 0x2199c96fd70: v16i8 = X86ISD::PSHUFB 0x2199c74e9a8, 0x2199c74d6c0 That's because SSSE3 instructions are not enabled by default, so the compiler isn't allowed to generate the PSHUFB instruction. Some options you have: 1. Set a cpu that has ssse3, e.g. compile with `-mcpu=native` 2. Enable SSSE3: compile with `-mattr=+ssse3` 3. Perhaps best for your case, enable SSSE3 for that function, importing the ldc.attributes module and using the @target("ssse3") UDA on that function. -Johan Thanks Johan. I tried this and now it does compile, but it crashes with an Access Violation in a debug build. In an optimized build it seems to be working, though.
Re: SIMD under LDC
On Wednesday, 6 September 2017 at 09:01:18 UTC, Igor wrote: On Tuesday, 5 September 2017 at 18:50:34 UTC, Johan Engelen wrote: On Monday, 4 September 2017 at 20:39:11 UTC, Igor wrote: I found that I can't use the __simd function from core.simd under LDC, and that it has ldc.simd, but I couldn't find how to implement the equivalent of this with it: ubyte16* masks = ...; foreach (ref c; pixels) { c = __simd(XMM.PSHUFB, c, *masks); } I see it has a shufflevector function, but it only accepts constant masks and I am using a variable one. Is this possible under LDC? You can use the module ldc.gccbuiltins_x86.di, __builtin_ia32_pshufb128 and __builtin_ia32_pshufb256. (also see https://gcc.gnu.org/onlinedocs/gcc-4.4.5/gcc/X86-Built_002din-Functions.html) Please file a feature request about shufflevector with variable mask in our (LDC) issue tracker on Github; with some code that you'd expect to work. Thanks. - Johan I'll try that this evening. Thanks! I'll also open an issue, but are you sure such a feature request is valid, since the LLVM shufflevector instruction, as far as I can see, only supports constant masks as well. I opened a feature request on Github. I also tried using the gccbuiltins but I got this error: LLVM ERROR: Cannot select: 0x2199c96fd70: v16i8 = X86ISD::PSHUFB 0x2199c74e9a8, 0x2199c74d6c0 0x2199c74e9a8: v16i8,ch = CopyFromReg 0x21994bcfd90, Register:v16i8 %vreg384 0x2199c96fb00: v16i8 = Register %vreg384 0x2199c74d6c0: v16i8,ch = CopyFromReg 0x21994bcfd90, Register:v16i8 %vreg385 0x2199c74ed50: v16i8 = Register %vreg385 In function: _D7assetdb12loadBmpImageFAxaZf Building x64\LDCDebug\DNgin.exe failed! You can see the code I used here: https://github.com/igor84/dngin/blob/3c171330843af71170a6ee4ae164a76bf58c35f6/source/assetdb.d#L123 Note that if you want to try it you will need a test.bmp in a specific format where header.compression == 3, like this one: https://drive.google.com/file/d/0B9l8IgnRaPwCU0hIWEtHUElhTTg/view?usp=sharing
Re: SIMD under LDC
On Tuesday, 5 September 2017 at 18:50:34 UTC, Johan Engelen wrote: On Monday, 4 September 2017 at 20:39:11 UTC, Igor wrote: I found that I can't use the __simd function from core.simd under LDC, and that it has ldc.simd, but I couldn't find how to implement the equivalent of this with it: ubyte16* masks = ...; foreach (ref c; pixels) { c = __simd(XMM.PSHUFB, c, *masks); } I see it has a shufflevector function, but it only accepts constant masks and I am using a variable one. Is this possible under LDC? You can use the module ldc.gccbuiltins_x86.di, __builtin_ia32_pshufb128 and __builtin_ia32_pshufb256. (also see https://gcc.gnu.org/onlinedocs/gcc-4.4.5/gcc/X86-Built_002din-Functions.html) Please file a feature request about shufflevector with variable mask in our (LDC) issue tracker on Github; with some code that you'd expect to work. Thanks. - Johan I'll try that this evening. Thanks! I'll also open an issue, but are you sure such a feature request is valid, since the LLVM shufflevector instruction, as far as I can see, only supports constant masks as well.
Re: SIMD under LDC
On Tuesday, 5 September 2017 at 01:11:29 UTC, 12345swordy wrote: On Monday, 4 September 2017 at 23:06:27 UTC, Nicholas Wilson wrote: Don't underestimate ldc's optimiser ;) I've seen cases where the compiler fails to optimize for SIMD. I tried it, and an LDC optimized build did generate SIMD instructions from regular code, but it used multiple instructions to do the job, so it is about 1.4 times slower than the manual SIMD version with DMD. That is probably good enough for me.
SIMD under LDC
I found that I can't use the __simd function from core.simd under LDC, and that it has ldc.simd instead, but I couldn't find how to implement the equivalent of this with it: ubyte16* masks = ...; foreach (ref c; pixels) { c = __simd(XMM.PSHUFB, c, *masks); } I see it has a shufflevector function, but it only accepts constant masks and I am using a variable one. Is this possible under LDC? BTW, shuffling channels within pixels using DMD's __simd is about 5 times faster than normal code on my machine :)
Re: Making a repo of downloaded dub package
On Monday, 4 September 2017 at 14:35:47 UTC, Dukc wrote: Bump Search for the word "local" here: https://code.dlang.org/docs/commandline. Maybe some of those options can help you. If not, you could make a pull request for dub to support such a thing :)
Re: Problems with std.experimental.allocator
On Saturday, 2 September 2017 at 11:23:00 UTC, Igor wrote: I realize these are not yet stable but I would like to know if I am doing something wrong or if it is a lib bug. My first attempt was to do this: theAllocator = allocatorObject(Region!MmapAllocator(1024*MB)); If I got it right this doesn't work because it actually does this: 1. Create a Region struct and allocate 1024MB from MmapAllocator 2. Wrap the struct in IAllocator by copying it because it has state 3. Destroy the original struct, which frees the memory 4. Now the struct copy points to released memory Am I right here? Next attempt was this: theAllocator = allocatorObject(Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB))); Since I give actual memory instead of the allocator to the Region it cannot deallocate that memory, so even the copy will still point to valid memory. After looking at what allocatorObject does in this case, my conclusion is that it will take the "copyable" static if branch and create an instance of CAllocatorImpl, which will have a "Region!() impl" field within itself, but the given Region!() struct is never copied into that field. Am I right here? If I am right about both, are these then considered lib bugs? I finally got it working with: auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB)); theAllocator = allocatorObject(&newAlloc); Next I tried setting processAllocator instead of theAllocator by using: auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB)); processAllocator = sharedAllocatorObject(&newAlloc); but that complained that it "cannot implicitly convert expression `pa` of type `Region!()*` to `shared(Region!()*)`", and since Region doesn't define its methods as shared, does this mean one cannot use Region as processAllocator? If that is so, what is the reason behind it? After a lot of reading I learned that I need a separate implementation like SharedRegion and I tried implementing one.
If anyone is interested you can take a look here: https://github.com/igor84/dngin/blob/master/source/util/allocators.d. It doesn't have expand at the moment, but I tried making it work with this: processAllocator = sharedAllocatorObject(shared SharedRegion!MmapAllocator(1024*MB)); I did it by reserving the first two bytes of the allocated memory for counting the references to that memory, then increasing that count on postblit and decreasing it in the destructor and the identity assignment operator. I then only release the memory if this count gets below 0. The problem is I couldn't get it to compile, since I would get this error on the above line: Error: shared method util.allocators.SharedRegion!(MmapAllocator, 8u, cast(Flag)false).SharedRegion.~this is not callable using a non-shared object I couldn't figure out why it is looking for a non-shared destructor. If I remove shared from the destructor then I get that "non-shared method SharedRegion.~this is not callable using a shared object" in std/experimental/allocator/package.d(2067). I currently use it with pointer construction: https://github.com/igor84/dngin/blob/master/source/winmain.d#L175
Problems with std.experimental.allocator
I realize these are not yet stable but I would like to know if I am doing something wrong or if it is a lib bug. My first attempt was to do this: theAllocator = allocatorObject(Region!MmapAllocator(1024*MB)); If I got it right this doesn't work because it actually does this: 1. Create a Region struct and allocate 1024MB from MmapAllocator 2. Wrap the struct in IAllocator by copying it because it has state 3. Destroy the original struct, which frees the memory 4. Now the struct copy points to released memory Am I right here? Next attempt was this: theAllocator = allocatorObject(Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB))); Since I give actual memory instead of the allocator to the Region it cannot deallocate that memory, so even the copy will still point to valid memory. After looking at what allocatorObject does in this case, my conclusion is that it will take the "copyable" static if branch and create an instance of CAllocatorImpl, which will have a "Region!() impl" field within itself, but the given Region!() struct is never copied into that field. Am I right here? If I am right about both, are these then considered lib bugs? I finally got it working with: auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB)); theAllocator = allocatorObject(&newAlloc); Next I tried setting processAllocator instead of theAllocator by using: auto newAlloc = Region!()(cast(ubyte[])MmapAllocator.instance.allocate(1024*MB)); processAllocator = sharedAllocatorObject(&newAlloc); but that complained that it "cannot implicitly convert expression `pa` of type `Region!()*` to `shared(Region!()*)`", and since Region doesn't define its methods as shared, does this mean one cannot use Region as processAllocator? If that is so, what is the reason behind it?
Re: (SIMD) Optimized multi-byte chunk scanning
On Wednesday, 23 August 2017 at 22:07:30 UTC, Nordlöw wrote: I recall seeing some C/C++/D code that optimizes the comment- and whitespace-skipping parts (tokens) of lexers by operating on 2, 4 or 8-byte chunks instead of single-byte chunks. This is for the case when token terminators are expressed as sets of (alternative) ASCII characters. For instance, when searching for the end of a line comment, I would like to speed up the while-loop in size_t offset; string input = "// \n"; // a line-comment string import std.algorithm : among; // until end-of-line or file terminator while (!input[offset].among!('\0', '\n', '\r')) { ++offset; } by taking `offset`-steps larger than one. Note that my file reading function that creates the real `input` appends a '\0' at the end to enable sentinel-based search, as shown in the call to `among` above. I further recall that there are x86_64 intrinsics that can be used here for further speedups. Refs, anyone? For line comments it doesn't sound like it will pay off, since you would have to do extra work to make sure you operate on 16-byte-aligned memory. For multi-line comments, maybe. As for a nice reference of Intel intrinsics: https://software.intel.com/sites/landingpage/IntrinsicsGuide/
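The multi-byte-chunk idea from the question can also be done without any intrinsics, using the classic SWAR "has zero byte" trick on 8-byte words. A hedged sketch (names illustrative; it assumes the '\0' sentinel mentioned above and tolerates the unaligned loads that x86 permits):

```d
// Test whether any byte of an 8-byte word equals b, branch-free.
bool chunkHasByte(ulong word, ubyte b)
{
    enum ulong ones  = 0x0101_0101_0101_0101;
    enum ulong highs = 0x8080_8080_8080_8080;
    immutable ulong x = word ^ (ones * b);   // matching bytes become zero
    return ((x - ones) & ~x & highs) != 0;   // high bit set where a byte was zero
}

// Scan 8 bytes at a time for '\0', '\n' or '\r'; finish byte-wise.
// Assumes a '\0' sentinel terminates the input, as in the question.
size_t findTerminator(const(ubyte)[] input)
{
    size_t offset;
    while (offset + 8 <= input.length)
    {
        // unaligned 8-byte load; fine on x86, hedge on stricter targets
        immutable ulong word = *cast(const(ulong)*) (input.ptr + offset);
        if (chunkHasByte(word, 0) || chunkHasByte(word, '\n')
            || chunkHasByte(word, '\r'))
            break;   // terminator is somewhere in this chunk
        offset += 8;
    }
    import std.algorithm : among;
    while (!input[offset].among('\0', '\n', '\r'))
        ++offset;
    return offset;
}
```

The byte-wise tail loop re-scans at most the last flagged chunk, so correctness does not depend on which of the three terminators matched first.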
Re: DerelictGL3 reload crashes in 32 builds
On Wednesday, 23 August 2017 at 12:59:38 UTC, Mike Parker wrote: On Tuesday, 22 August 2017 at 12:03:18 UTC, Igor wrote: [...] I'm not sure what you're referring to. There are a few static if(Derelict_OS_Android) blocks in there as well. [...] Ok Mike. Thanks for the info. If I learn anything new about the issue I will post it to the github.
Re: DerelictGL3 reload crashes in 32 builds
On Tuesday, 22 August 2017 at 12:03:18 UTC, Igor wrote: On Monday, 21 August 2017 at 12:38:28 UTC, Mike Parker wrote: Have you tried to compile outside of VisualD? Hmmm... I thought I tried running it by just typing dub, which should use -m32 by default as far as I know, and got the error. I will check one more time this evening. But LDC 32-bit builds crash for sure. Note that I committed a version last night where I commented out the Derelict.reload call, so just make sure it is not commented before trying it out. I must have remembered it wrong. I just tried it again and the DMD 32-bit debug build works. It is the DMD 32-bit release build that is not working. When I run dub --build=release --force I get: ... Error object.Error@(0): Access Violation 0x59BED731 0x5A6202C9 in wglGetProcAddress 0x004103B6 0x0040DC80 0x0040C691 Program exited with code 1
Re: DerelictGL3 reload crashes in 32 builds
On Monday, 21 August 2017 at 12:38:28 UTC, Mike Parker wrote: On Monday, 21 August 2017 at 02:40:59 UTC, Mike Parker wrote: On Sunday, 20 August 2017 at 19:29:55 UTC, Igor wrote: In 64-bit builds it works with both LDC and DMD, but in 32-bit the LDC version crashes and the DMD release version crashes. Using an LDC debug build I managed to find that it crashes after executing the ret instruction from bindGLFunc in glloader. If someone wants to try it you can do it with this project: https://github.com/igor84/dngin. I was testing this from Visual Studio but a dub 32-bit LDC build also crashed. Am I doing something wrong or is this some known DerelictGL3 or compiler issue? This is a known issue [1] that I'm currently trying to resolve. I hadn't yet tested it using free functions (the bug report uses context types), so this new information helps. [1] https://github.com/DerelictOrg/DerelictGL3/issues/56 I'm unable to reproduce this locally using my little test app. It only crashes for me in 32-bit when using context objects. I also took your winmain.d module, modified it to compile with `dub --single`, then compiled and executed it with both the default architecture (-m32) and -m32mscoff (via dub's -ax86_mscoff command line argument). In both cases it compiled and executed just fine. Have you tried to compile outside of VisualD? Hmmm... I thought I tried running it by just typing dub, which should use -m32 by default as far as I know, and got the error. I will check one more time this evening. But LDC 32-bit builds crash for sure. Note that I committed a version last night where I commented out the Derelict.reload call, so just make sure it is not commented before trying it out. In the meantime can you tell me these two things: 1. How come DerelictGLES only has: static if( Derelict_OS_Windows ) ... else static if( Derelict_OS_Posix && !Derelict_OS_Mac )... when GLES is primarily intended for mobile platforms as far as I know. What should I use for Android then? 2.
I see that DerelictGL3 used to have wglext.d file where wglSwapIntervalEXT was loaded. How can I get access to this function now since I can't find it anywhere in the latest version?
DerelictGL3 reload crashes in 32 builds
In 64 bit builds it works with both LDC and DMD but in 32 bit LDC version crashes and DMD release version crashes. Using LDC debug build I managed to find that it crashes after executing ret instruction from bindGLFunc in glloader. If someone wants to try it you can do it with this project: https://github.com/igor84/dngin. I was testing this from Visual Studio but dub 32 bit LDC build also crashed. Am I doing something wrong or is this some known DerelictGL3 or compiler issue?
Re: Unresolved external symbol InterlockedIncrement
On Sunday, 13 August 2017 at 16:29:14 UTC, Igor wrote: I am building a 64-bit Windows app with the latest DMD and I keep getting this linker error: error LNK2019: unresolved external symbol InterlockedIncrement referenced in function ThreadProc This function should be a part of kernel32.lib, which I verified is found by using the /VERBOSE:LIB linker arg. The only other clue I have is a bigger comment block above the InterlockedIncrement declaration in winbase.d which starts with: // These functions are problematic Does anyone know what the problem is here? As far as I managed to find, this is actually a compiler intrinsic in C, so it probably shouldn't have been defined in winbase.d at all... I switched to using core.atomic.atomicOp, which should be the same thing. Actually atomicFetchAdd would be the same thing, but for some reason it is defined as private...
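The atomicOp replacement mentioned above looks roughly like this (a minimal sketch; the counter name is illustrative):

```d
import core.atomic : atomicOp;

shared int refCount;

int incrementRefCount()
{
    // Atomic read-modify-write; like InterlockedIncrement, atomicOp!"+="
    // returns the value after the addition.
    return atomicOp!"+="(refCount, 1);
}
```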
Unresolved external symbol InterlockedIncrement
I am building a 64 bit windows app with latest DMD and I keep getting this linker error: error LNK2019: unresolved external symbol InterlockedIncrement referenced in function ThreadProc This function should be a part of kernel32.lib which I verified is found by using /VERBOSE:LIB linker arg. The only other clue I have is a bigger comment block above the InterlockedIncrement declaration in winbase.d which starts with: // These functions are problematic Does anyone know what is the problem here?
Re: Read/Write memory barriers in D?
On Sunday, 13 August 2017 at 11:58:56 UTC, Daniel Kozak wrote: or maybe use core.atomic.atomicLoad and atomicStore with the right MemoryOrder: https://dlang.org/phobos/core_atomic.html#.MemoryOrder On Sun, Aug 13, 2017 at 1:51 PM, Daniel Kozak wrote: maybe something like https://dlang.org/phobos/core_bitop.html#.volatileLoad and https://dlang.org/phobos/core_bitop.html#.volatileStore Based on the documentation volatileLoad/volatileStore seems like the closest thing, so I'll go with that for now. Thanks.
Read/Write memory barriers in D?
I am converting C code that uses this macro: #define CompletePastWritesBeforeFutureWrites _WriteBarrier(); _mm_sfence() As far as I see, core.atomic.atomicFence() is the equivalent of _mm_sfence(), but I can't find what would be the equivalent of _WriteBarrier(). As far as I understand it is used just to tell the compiler it can't rearrange instructions during optimization, so memory writes issued before the barrier are not reordered after it. Same for _ReadBarrier().
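One possible D equivalent of the macro above is a single release fence, which covers both the compiler reordering barrier and the CPU store fence. This is a hedged sketch: newer druntime versions let atomicFence take a MemoryOrder, while older ones only offer the parameterless sequentially-consistent fence, which is stronger but still correct.

```d
import core.atomic : atomicFence, MemoryOrder;

void completePastWritesBeforeFutureWrites()
{
    // A release fence orders all earlier writes before later ones, at both
    // the compiler and the CPU level (the _WriteBarrier + _mm_sfence pair).
    atomicFence!(MemoryOrder.rel)();
}
```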
Profiling Windows App and DLL
Is there a known limitation in profiling these or am I doing something wrong? When I try to run my application from VisualD (x64 build) with -profile switch I just get Access Violation reported on WinMain function (actual declaration, it doesn't enter its body). If I build it with dub build --build=profile and then try to run it nothing happens, like it doesn't run at all. If I only add -profile switch on DLL part of the application I get the same Access Violation on DllMain. I also tried "Very Sleepy" profiler but it only shows symbols for main application and not for the DLL that it loads which is also built with debug info.
Re: Go 1.9
Maybe I am wrong, but I get a feeling from posts in this thread that some people are greatly underestimating the size of some segments, like mentioning niche C++ programmers and only 0.01% of developers needing memory management. The games industry is growing like crazy [1][2] and after all these years C++ is still the main language for it, except that today 99% of those developers have many bad things to say about it. Imagine how D adoption would jump if someone created something on par with the Unity or Unreal engine, or even the Cocos engine, in D. And I think D is already up to that task, with the biggest pain points being cross-platform support, especially for Android and iOS. Also, regarding the question whether D should be marketed as a general purpose or some special purpose language, I find this article [3] explains it nicely, except that it seems to me the language should be marketed as general but have strong libraries (or game engines) for specific purposes, through which it should market itself as something specialized. [1] http://kotaku.com/nearly-40-of-all-steam-games-were-released-in-2016-1789535450 [2] http://www.gamasutra.com/view/news/267645/Over_500_games_now_submitted_to_iOS_App_Store_every_day.php [3] https://simpleprogrammer.com/2017/06/19/generalists-specialists/
Re: Simple c header => Dlang constants using mixins in compile time
On Saturday, 17 June 2017 at 10:56:52 UTC, Igor Shirkalin wrote: Hello! I have a simple C header file that looks like: #define Name1 101 #define Name2 122 #define NameN 157 It comes from resource compiler and I need all these constants to be available in my Dlang program in compile time. It seems to me it is possible. I know I can simply write external program (in python, for example) that does it, but it means I should constantly run it after every change before D compilation. Please, can anyone help to direct me how to realize it? Thank you in advance! Igor Shirkalin Maybe I am not quite understanding what you are asking but can't you just use: enum Name1 = 101; enum Name2 = 122; ...
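To avoid the external-program step entirely, the #define lines can be parsed at compile time and mixed in. A hedged sketch of that approach, assuming the header is made available via the -J (string import) switch; the function name and file name are illustrative:

```d
// Turn "#define NAME VALUE" lines into enum declarations, at compile time.
string definesToEnums(string src)
{
    import std.algorithm : startsWith;
    import std.array : split;

    string result;
    foreach (line; src.split('\n'))
    {
        if (!line.startsWith("#define "))
            continue;
        // split() with no separator splits on whitespace, which also
        // swallows a trailing '\r' from Windows line endings
        auto parts = line["#define ".length .. $].split();
        if (parts.length >= 2)
            result ~= "enum " ~ parts[0] ~ " = " ~ parts[1] ~ ";\n";
    }
    return result;
}

// usage, assuming the file is resource.h and dmd is invoked with -J<dir>:
// mixin(definesToEnums(import("resource.h")));
```

Any edit to the header then only requires recompiling; no separate generation step.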
Re: Why is DUB not passing dll.def file to linker
On Sunday, 21 May 2017 at 11:47:15 UTC, Mike Parker wrote: So what I would try in your situation is to add three new configurations to the exeProject's dub.json. Use the "platforms" directive to limit one to "windows-x86", another to "windows-x86_64", and leave the other one empty. List the empty one last and it should become the default on non-Windows platforms. Move your preBuildCommands directive to the windows-x86 configuration, and copy it to the windows-x86_64 configuration with the addition of "-ax86_64" to the "dub build" command. Thanks for the suggestion Mike. I just added this for now and it works: "preBuildCommands-x86_64": ["cd game & dub build -ax86_64"], "preBuildCommands-x86": ["cd game & dub build"],
Re: Why is DUB not passing dll.def file to linker
On Sunday, 21 May 2017 at 10:15:40 UTC, Mike Parker wrote: Then you can add the following to exeProject/dub.json: "dependencies": { "dllProjectName": {"path" : "../dllProject" } } I would expect the import lib to be linked automatically. This should ensure the dll is compiled with the same architecture as the exe. DUB reports: Dynamic libraries are not yet supported as dependencies - building as static library.
Re: Why is DUB not passing dll.def file to linker
On Saturday, 20 May 2017 at 21:36:42 UTC, Mike Parker wrote: On Saturday, 20 May 2017 at 20:26:29 UTC, Igor wrote: So my question is, if the fix is so simple, what are the reasons it isn't implemented? Am I missing something? I don't know, but you could always submit a PR or an enhancement request. Actually, it turned out that since the 32-bit def file needs additional settings in it compared to the 64-bit version, it is handy to be able to have separate def files and use: "sourceFiles-windows-x86_64" : ["dll64.def"], "sourceFiles-windows-x86" : ["dll32.def"], to only pass the appropriate one. Now, since the dll project can't be built as a dependency, I added this to my main project's dub.json: "preBuildCommands": ["cd game & dub build"], If I now run dub build in the main project both projects compile and work together, but if I run dub build -ax86_64 only the main project is built as 64-bit while the dll project is still built as 32-bit. Does anyone have a suggestion how I can make this work for both architectures?
Re: Why is DUB not passing dll.def file to linker
On Saturday, 20 May 2017 at 20:04:27 UTC, Mike Parker wrote: On Saturday, 20 May 2017 at 19:53:16 UTC, Igor wrote: There is no mention of dll.def file. Add a "sourceFiles" directive: "sourceFiles-windows" : ["dll.def"] See the comments at the following: https://github.com/dlang/dub/issues/575 https://github.com/dlang/dub/pull/399 Thanks Mike. Google wasn't this helpful :). In the meantime I tried debugging dub to see what is happening, and with this simple change in packagerecipe.d it seems to work: // collect source files (instead of just "*.d" I put the following at line 206) dst.addSourceFiles(collectFiles(sourcePaths, "*.{d,def}")); This is the output I got: Performing "debug" build using dmd for x86_64. handmade ~master: building configuration "internal"... FILES IN BUILDSETTINGS: ["dll.def", "handmade.d", "handmade_h.d", "windll.d"] dmd -m64 -c -of.dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.obj -debug -g -w -version=HANDMADE_INTERNAL -version=Have_handmade handmade.d handmade_h.d windll.d -vcolumns Linking... dmd -of.dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.dll .dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.obj dll.def -m64 -shared -g As you can see, the def file just got added to the link command. So my question is, if the fix is so simple, what are the reasons it isn't implemented? Am I missing something?
Why is DUB not passing dll.def file to linker
I am using DUB 1.3.0. My folder structure is: project dub.json source windll.d handmade.d dll.def dub.json looks like this: { "name": "handmade", "targetType": "dynamicLibrary", "targetPath": "build", "configurations": [ { "name": "internal", "versions": ["HANDMADE_INTERNAL"] } ] } But when I run: dub -v build -ax86_64 --force I get: Generating using build Generate target handmade (dynamicLibrary D:\git\temp\build handmade) Performing "debug" build using dmd for x86_64. handmade ~master: building configuration "internal"... dmd -m64 -c -of.dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.obj -debug -g -w -version=HANDMADE_INTERNAL -version=Have_handmade -Isource source\handmade.d source\windll.d -vcolumns Linking... dmd -of.dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.dll .dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.obj -m64 -shared -g Copying target from D:\git\temp\.dub\build\internal-debug-windows-x86_64-dmd_2074-C56D6B49201C03F44B01E754688EACEE\handmade.dll to D:\git\temp\build There is no mention of dll.def file.
Re: How to check a struct exists at a particular memory address?
On Thursday, 18 May 2017 at 20:20:47 UTC, Gary Willoughby wrote: This might be a really silly question but: I've allocated some memory like this (Foo is a struct): this._data = cast(Foo*) calloc(n, Foo.sizeof); How can I then later check that there is a valid Foo at `this._data` or `this._data + n`? Well... I think the right answer is that everything you do with memory should be very deterministic, so you should just know what is where and not have a need to check :). The only thing that crosses my mind, if you really need a check, is to make sure you always write some specific big number just before each struct in memory as a flag that what follows is a Foo, and then you can check whether that is set properly. I think you could do this by wrapping Foo in another struct whose first field is an immutable long set to some specific value (that isn't zero) :) and then using that struct in place of Foo. Although I am not sure if the compiler would optimize away checks of whether an immutable is equal to its init value...
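The magic-number idea described above can be sketched like this; the wrapper and its names are illustrative, not a library facility. Since calloc zeroes the memory and a struct's .init is not run, a zeroed slot correctly fails the check as long as the magic value is non-zero:

```d
// Wrap a payload with a sentinel tag so a pointer can be sanity-checked.
struct Tagged(T)
{
    enum ulong magic = 0xF00D_F00D_F00D_F00D;
    ulong tag = magic;   // set whenever a Tagged!T is properly initialized
    T payload;
}

bool looksLikeValid(T)(const(Tagged!T)* p)
{
    // Only a heuristic: arbitrary memory could still happen to hold magic.
    return p !is null && p.tag == Tagged!T.magic;
}
```

Slots created with calloc would need explicit initialization (e.g. via core.lifetime's emplace, or assigning Tagged!T.init) before the check reports them valid.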
Re: How to setup DLL and EXE projects in one VS solution
On Thursday, 18 May 2017 at 18:00:02 UTC, Rainer Schuetze wrote: That's what I meant with "other cross module dependencies". In this case it might work if you export _D10handmade_h10game_input6__initZ from your DLL with the help of a def file, but that's not something that scales well. This talk will give you some details on the complications involved: https://www.youtube.com/watch?v=MQRHxI2SrYM Thanks for the link Rainer. I actually watched that a few days ago but it didn't help me understand what I have just figured out. It slipped my mind that structs in D are not just structs like the ones in C. In D they have a default initializer, which is actually a function, and as you said I am not exporting that function, and it doesn't scale well to hunt down all the hidden functions that you need to export for all common data types. Even if I did, since I am loading the DLL and its functions dynamically using GetProcAddress, how could I even tell D to use the loaded function pointer as the initialization function for a type... Anyway, I will stick with the solution of using a separate file where all common data types are defined in both projects and see how far I get with it.
Re: How to setup DLL and EXE projects in one VS solution
On Thursday, 18 May 2017 at 07:10:54 UTC, Rainer Schuetze wrote: You have to add an import path to the folder with dllproj inside to the project configuration of the exeproject. If you want to limit the imported code to the declarations, you can enable "generate interface headers" and add an import path to these instead. Sharing data or resources between executable and DLL is currently limited to separate ownership as each binary contains its own copy of the runtime. (There is currently work being done to have a shared runtime, though). You might also run into other cross module dependencies... I tried just adding import paths to project and to di files and although compilation passes I still get link errors like: error LNK2019: unresolved external symbol _D10handmade_h10game_input6__initZ (handmade_h.game_input.__init) referenced in function _D8platform5win324main9myWinMainFPvPvPaiZi (int platform.win32.main.myWinMain(void*, void*, char*, int)) where game_input is a struct in interface file. I also tried adding the di file to the exeproject but it still doesn't work. It seems VisualD doesn't add di files to compiler invocation arguments like it does with *.d and *.def files.
Re: How to setup DLL and EXE projects in one VS solution
On Wednesday, 17 May 2017 at 18:03:04 UTC, Igor wrote: What exactly do you mean by "binding"? If I understand the rest, you are saying that I could just use "Add existing item" to add the dllproj.d file to EXEProject as well, but that would cause all of the code from it to be linked into the EXE and I only want that code in the DLL. I should also mention that I don't want to statically bind to the DLL using a lib file because I want to be able to reload the DLL while the application is running. I managed to get it to work by extracting all common structs to a dllprojInterface.d module that sits at /source/dllprojInterface.d on the file system but is added to both projects in the solution. I am still wondering if there is a better solution? Also, I am wondering if using extern(C) as opposed to extern(D) only affects name mangling, or am I losing some DLang possibilities since I am only calling a DLang DLL from a DLang EXE?
Re: How to setup DLL and EXE projects in one VS solution
On Wednesday, 17 May 2017 at 17:48:50 UTC, solidstate1991 wrote: I think you should make a binding for your DLL file. On the other hand I successfully set up a static library and an application in the same solution (now it has 2 apps, one is my map editor and file converter, the other is a window layout editor, going to add a third one for testing other functions later on) by adding the engine's sources and library files to the app. What exactly do you mean by "binding"? If I understand the rest, you are saying that I could just use "Add existing item" to add the dllproj.d file to EXEProject as well, but that would cause all of the code from it to be linked into the EXE and I only want that code in the DLL. I should also mention that I don't want to statically bind to the DLL using a lib file because I want to be able to reload the DLL while the application is running.
How to setup DLL and EXE projects in one VS solution
At the moment I have: EXEProject: app.d - it does LoadLibrary of dllproj and uses data structures defined in dllproj.d (it imports dllproj). On the file system this file is under /platform/win32/ and is defined as module win32.app; DLLProject: dllproj.d - exports functions and contains the data structures those functions use. On the file system this file is under /source and is defined as module dllproj; EXEProject depends on DLLProject. The DLL project compiles and builds the DLL fine but, of course, the EXE project breaks with an error: module dllproj is in file dllproj.d which cannot be read. I could just copy all the structs from dllproj.d to app.d and remove the import, and I guess it would all work, but there has to be a better way to structure the code so structs are only written in one place?
Re: Current LDC Android status
On Tuesday, 16 May 2017 at 03:00:08 UTC, Mike B Johnson wrote: So what is currently the state of affairs with LDC and android? Last time I remember, it *could* compile to android but barely. About a month ago I tried to build an OpenGL sample app following the directions from here: https://wiki.dlang.org/Build_LDC_for_Android but I used these samples to see how to configure, in the latest Android Studio, a project that includes a .so file compiled from D and nothing else: https://github.com/googlesamples/android-ndk/tree/master/hello-libs and it worked. I used Ubuntu bash on Windows to do it. It is not too complicated, but it is certainly not straightforward, and I am also looking forward to having all this nicely integrated in one official compiler that would just work.
Re: Structure of platform specific vs non platform specific code
On Tuesday, 9 May 2017 at 15:37:44 UTC, Stefan Koch wrote: On Tuesday, 9 May 2017 at 15:28:20 UTC, WhatMeWorry wrote: On Monday, 8 May 2017 at 21:16:53 UTC, Igor wrote: Hi, I am following Casey Muratori's Handmade Hero and writing it in DLang. This sounds very interesting. Maybe make it a public github project? It is only accessible to those who bought the game. That is right. If I manage to keep it up at least a bit more I will put it at https://github.com/HandmadeHero but that is only accessible to those who buy the game. Also thanks for the suggestions. I will definitely use them for the platformServices part. In case you are interested in the reasoning for having platform code that imports game code: Casey explains that in the case where you structure all platform-specific code as functions that other code should call, you are making a needlessly big interface, polluting the API space. For example you would need a CreateWindow function in such a library, which games would only need to call once at startup; they won't need to create and close additional windows during their execution and they don't even need to know "Window" is a thing. Also, some of that code is so different on some platforms that no API can cover it cleanly. For example, what should one expect CreateWindow to do on the Android platform?
Structure of platform specific vs non platform specific code
Hi, I am following Casey Muratori's Handmade Hero and writing it in DLang. I got to Day 011: The Basics of Platform API Design, where Casey explains the best way to structure platform-specific vs non-platform-specific code, but his method cannot work in DLang since DLang uses modules, and I am wondering what would be the best way to achieve the same in DLang. His way is to have these files: - platform.cpp (includes game.cpp directly, not game.h) - game.h (declares non-platform-specific data types for communicating with the platform layer, both the game functions that the platform layer needs to call and the platform functions that the game needs to call) - game.cpp (includes game.h and defines the declared game functions) This scheme makes the preprocessor actually merge all files into one, but logically the game.* files see nothing that is platform specific. The best idea I have for DLang is to separate the platform layer into two modules: - platform.d (contains only code that needs to call into game code, so it imports game.d) - platformServices.d (contains only code that the game needs to call, wrapped in a common abstraction layer, so game.d imports it)
Re: DMD VS2017 Support
On Monday, 1 May 2017 at 18:30:53 UTC, Rainer Schuetze wrote: VS 2017 uses a "private" registry that the Visual D installer doesn't have access to. I'll change the registry location in the next release. Please note that the next dmd installer will also detect VS2017 and setup directories correctly in sc.ini: https://github.com/dlang/installer/pull/227 That is great news! Thanks for quick response.
Re: DMD VS2017 Support
On Monday, 1 May 2017 at 01:54:30 UTC, evilrat wrote: On Sunday, 30 April 2017 at 16:05:10 UTC, Igor wrote: I should also mention that compiling using DUB works. It only doesn't work from VS. Check your VisualD settings and make sure it has the DMD path set up. See under Tools>Options>Projects and solutions>Visual D Settings That was it. It didn't occur to me that this was the problem because I paid close attention during the VisualD installation and saw it properly recognized where DMD was installed, but for some reason the path wasn't set in Options. Once I did set it, compile and build worked. Thanks evilrat! So in conclusion, it seems the problem is in the VisualD installation, which doesn't set the path properly even though it recognizes where DMD is installed. Hope the author takes a look at this problem so beginners wanting to try D don't give up on a problem like this.
Re: DMD VS2017 Support
On Sunday, 30 April 2017 at 16:31:13 UTC, John Chapman wrote: Here are mine, if it helps: I tried but still the same problem. I also tried reinstalling VisualD after changing sc.ini in DMD but that didn't help either.
Re: DMD VS2017 Support
On Sunday, 30 April 2017 at 15:53:07 UTC, Mike Parker wrote: On Sunday, 30 April 2017 at 14:56:44 UTC, Igor wrote: I tried updating sc.ini to new paths but I still get this error. Can someone offer some advice? Which paths did you set? These are the ones I changed: VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\ UCRTVersion=10.0.15063.0 LINKCMD=%VCINSTALLDIR%\bin\HostX64\x64\link.exe PATH=%PATH%;%VCINSTALLDIR%\bin\HostX64\x64 LIB=%LIB%;"%VCINSTALLDIR%\lib\x64" Same for x86 environment, except, of course I replaced x64 with x86 in the values. I should also mention that compiling using DUB works. It only doesn't work from VS.
Re: DMD VS2017 Support
On Saturday, 22 April 2017 at 02:46:30 UTC, Mike Parker wrote: On Saturday, 22 April 2017 at 02:39:41 UTC, evilrat wrote: Also VS 2017 is much more modular now, so it's now lighter than ever before. But of course for C++ (and D) you still need the Windows SDK. The SDK stuff is installed with VS. IIRC D can also be used without VS or the WinSDK at all, just forget about m32mscoff and x64 builds Yes, that is correct. But that comes with its own headaches. I had a working VS 2015 with VisualD and DMD. Today I uninstalled VS 2015 and VisualD, then installed VS 2017 and the latest VisualD, but when I create a new D Windows app and try to run it I get this error:

Command Line:
set PATH=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\bin\HostX86\x86;C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE;C:\Program Files (x86)\Windows Kits\8.1\bin\x86;.\windows\bin;%PATH%
dmd -g -debug -X -Xf"Win32\Debug\testapp.json" -deps="Win32\Debug\testapp.dep" -c -of"Win32\Debug\testapp.obj" winmain.d
if errorlevel 1 goto reportError
set LIB=
echo. > D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo "Win32\Debug\testapp.obj","Win32\Debug\testapp.exe","Win32\Debug\testapp.map",ole32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo kernel32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo user32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo comctl32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo comdlg32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo user32.lib+ >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
echo kernel32.lib/NOMAP/CO/NOI/DELEXE /SUBSYSTEM:WINDOWS >> D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
"C:\Program Files (x86)\VisualD\pipedmd.exe" -deps Win32\Debug\testapp.lnkdep link.exe @D:\git\testapp\testapp\Win32\Debug\testapp.build.lnkarg
if errorlevel 1 goto reportError
if not exist "Win32\Debug\testapp.exe" (echo "Win32\Debug\testapp.exe" not created! && goto reportError)
goto noError
:reportError
echo Building Win32\Debug\testapp.exe failed!
:noError

Output:
Microsoft (R) Incremental Linker Version 14.10.25019.0
Copyright (C) Microsoft Corporation. All rights reserved.
"Win32\Debug\testapp.obj,Win32\Debug\testapp.exe,Win32\Debug\testapp.map,ole32.lib+"
kernel32.lib+
user32.lib+
comctl32.lib+
comdlg32.lib+
user32.lib+
kernel32.lib/NOMAP/CO/NOI/DELEXE /SUBSYSTEM:WINDOWS
LINK : fatal error LNK1181: cannot open input file 'Win32\Debug\testapp.obj,Win32\Debug\testapp.exe,Win32\Debug\testapp.map,ole32.lib+'
Building Win32\Debug\testapp.exe failed!

I tried updating sc.ini to the new paths but I still get this error. Can someone offer some advice?
Re: How compiler detects forward reference errors
Thank you all for your replies. I am trying to learn a bit about compiler and language design, and since I really like D among the many languages I have read about, I am trying to learn from it as well.
Re: How compiler detects forward reference errors
On Saturday, 3 September 2016 at 14:13:27 UTC, Lodovico Giaretta wrote: On Saturday, 3 September 2016 at 14:06:06 UTC, Igor wrote: Can anyone explain in plain English how the compiler processes and detects a "test.d(6) Error: forward reference of variable a" in the following code:

import std.stdio;
enum a = 1 + b;
enum d = 5 + a; // No error here
enum b = 12 + c;
enum c = 10 + a; // error here
void main() { writeln("Hello World!", b); }

a needs b to be initialized. So b must be initialized before a. Let's write this b->a. Now b needs c. So c->b. c needs a, so a->c. If we sum everything, we have that a->c->b->a. This means that to initialize a we need b, to initialize b we need c, but to initialize c we need a. So to initialize a we need a, which is not possible. We need a before having initialized it. On the other hand, a->d is not a problem, as d can be initialized after a. So, you are saying the compiler keeps a kind of linked list of dependencies and then checks if any of those lists are circular? But how exactly would that list be structured, since one expression can have multiple dependencies, like:

enum a = b + c + d + e;
enum b = 10 + c;
enum c = d + e + a;
...
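The multi-dependency case is usually modelled as a directed graph rather than a list: each symbol has an edge to every symbol it depends on, and a cycle anywhere in the graph means a forward-reference error. Here is a toy depth-first-search sketch of that idea in D (this is an illustration only, not how DMD actually implements it):

```d
import std.stdio;

// Toy model: each symbol names the symbols its initializer depends on.
string[][string] deps;

// 0 = unvisited, 1 = currently being resolved, 2 = fully resolved
int[string] state;

bool hasCycle(string sym)
{
    if (state.get(sym, 0) == 1) return true;   // back edge: circular dependency
    if (state.get(sym, 0) == 2) return false;  // already resolved, no cycle here
    state[sym] = 1;
    foreach (d; deps.get(sym, null))
        if (hasCycle(d))
            return true;
    state[sym] = 2;
    return false;
}

void main()
{
    // models: enum a = b + c; enum b = 10 + c; enum c = a;
    deps = ["a": ["b", "c"], "b": ["c"], "c": ["a"]];
    writeln(hasCycle("a")); // true: a -> c -> a is circular
}
```

With multiple dependencies per symbol, the search simply branches at each node; the "in progress" marker is what turns a revisit into an error.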
View model separation X and D.
Suppose I create a model in D and would like it to support a gui written in another language such as Qt C++ or WPF .NET. Can anyone think of an efficient and very easy way to hook up the "bindings"? The gui doesn't need to be updated more than about 30 times a second and it should run in its own thread. I would think a simple message queue would work that uses commands to tell the gui that value X has changed. The gui then checks the queue and updates the values visually. This should be quick, except one needs to deal with potential multiple changes of the same value so the queue is not flooded with useless changes (changes that cannot be displayed due to the fps limitation). Does this seem relatively viable? It would need a rather minimal api and should be relatively fast. The other problem, of the gui updating or changing the model, is a bit harder, but commands could also be used. It seems like most of the work would be in creating the bindings to the model so the view can act on them. I imagine a simple syntax in the gui design could be used to bind view to model actions. Any ideas on the subject?
Re: View model separation X and D.
On Friday, 29 January 2016 at 20:04:59 UTC, Igor wrote: Suppose I create a model in D and would like it to support a gui written in another language such as Qt C++ or WPF .NET. Can anyone think of an efficient and very easy way to hook up the "bindings"? The gui doesn't need to be updated more than about 30 times a second and it should run in its own thread. I would think a simple message queue would work that uses commands to tell the gui that value X has changed. The gui then checks the queue and updates the values visually. This should be quick, except one needs to deal with potential multiple changes of the same value so the queue is not flooded with useless changes (changes that cannot be displayed due to the fps limitation). Does this seem relatively viable? It would need a rather minimal api and should be relatively fast. The other problem, of the gui updating or changing the model, is a bit harder, but commands could also be used. It seems like most of the work would be in creating the bindings to the model so the view can act on them. I imagine a simple syntax in the gui design could be used to bind view to model actions. Any ideas on the subject? If the above is a good way, then the approach I would take is: I. Use attributes on properties to attach them to the message queue. All marked properties are automatically modified so they "add a message" to the queue. This should somehow be really fast (messages would be statically created). II. Use attributes on methods and objects to export them so they can be called by the view. The goal is to use attributes and let the metaprogramming do all the heavy lifting behind the scenes. Sound good? Anyone see any performance issues with this? My main goal is to keep the model from being bogged down in step I.
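The attribute idea from step I can be roughly sketched with a user-defined attribute plus a mixin-generated setter. Everything here is hypothetical (the `Bindable` attribute, the `Message` type, and the module-level `queue` stand-in for a real cross-language queue), and the `@Bindable` filtering via `__traits(getAttributes)` is omitted for brevity:

```d
import std.stdio;

// Hypothetical attribute marking a field for change notification.
struct Bindable {}

// Stand-in for the real cross-thread/cross-language message type and queue.
struct Message { string name; }
Message[] queue;

// Generates a setter that writes the field and enqueues a change message.
mixin template NotifyOnChange(T)
{
    void set(string field)(typeof(__traits(getMember, T, field)) value)
    {
        __traits(getMember, this, field) = value;
        queue ~= Message(field); // the gui thread would drain this queue
    }
}

class Model
{
    @Bindable int x;
    mixin NotifyOnChange!Model;
}

void main()
{
    auto m = new Model;
    m.set!"x"(42);
    writeln(m.x, " ", queue.length);
}
```

A real version would also coalesce repeated messages for the same field, to address the flooding concern from the original post.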
Re: C++17
On Wednesday, 27 January 2016 at 05:28:56 UTC, Jack Stouffer wrote: On Tuesday, 26 January 2016 at 21:50:58 UTC, Igor wrote: What D lacks is organizational structure! It's akin to a bunch of kids programming in their bedrooms cobbling together stuff and being ecstatic that it works (for them at least). I'm going to chalk this up to lack of experience in volunteer-based software projects. D is basically stagnant (bug fixes and piddly stuff don't count), which is pretty sad considering it looks to be one of the best languages on the planet. This is absolutely ridiculous, I'm sorry, there's no other way to describe this. The sheer number of new language features, removals of bad ideas, and new library features makes C++ growth look glacial in comparison. I literally know of no other language than Python that has as quick a turnaround on new ideas as D has. Um, 1. I'm not comparing it to C++. 2. Compared to the explosion that D1 had several years ago and all the libraries that were created and all the work, D is relatively stagnant. Just because stuff is still happening doesn't mean anything. D lost a lot of momentum in the Phobos/Tango mess, and many of the eager programmers in D seem to have moved on to greener pastures. It's been over 15 years since D's incarnation and one would expect it to be much, much further along. "C was originally developed by Dennis Ritchie between 1969 and 1973". Within 10 years C was pretty much the de facto standard. Maybe D needs to create its own OS built on an OOP foundation without all the pitfalls of modern Windows, OSX and Linux? Maybe that will put it on the map.
Re: how to allocate class without gc?
On Wednesday, 27 January 2016 at 06:40:00 UTC, Basile B. wrote: On Tuesday, 26 January 2016 at 01:09:50 UTC, Igor wrote: Is there any example that shows how to properly allocate an object of a class type with the new allocators and then release it when desired? This is more or less the same answer as you got previously, except that I don't use emplace but rather a copy of what's done in _d_new_class() from the D runtime:

CT construct(CT, A...)(A a) @trusted @nogc
if (is(CT == class))
{
    import std.experimental.allocator.mallocator;
    auto size = typeid(CT).init.length;
    auto memory = Mallocator.instance.allocate(size); // the D runtime uses the GC here
    memory[0 .. size] = typeid(CT).init[];
    static if (__traits(hasMember, CT, "__ctor"))
        (cast(CT) (memory.ptr)).__ctor(a);
    import core.memory: GC;
    GC.addRange(memory.ptr, size, typeid(CT));
    return cast(CT) memory.ptr;
}

The GC stuff might look superfluous, but without it, if there are GC-allocated members in your class (even a simple dynamic array), you'll encounter random errors at run time. Thanks. But doesn't this ultimately defeat the purpose of having manual memory management if one has to add it to the GC to be scanned? It seems like just an extra step. The whole point is to prevent the GC from having to mess with the object in the first place. I understand that if the class uses GC-based objects then the GC needs to be informed, but this really feels like it defeats the purpose. Ultimately I want no GC dependency. Is there an article that shows how this can be done?
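For completeness, a matching teardown that reverses each step of that construct() (destructor, GC range, Mallocator block) might look like the sketch below. This is not from the original post; verify the TypeInfo and destroy behavior against your runtime version before relying on it:

```d
// Sketch of the inverse of construct() above: run the destructor chain,
// unregister the range, then return the block to the Mallocator.
void destruct(CT)(ref CT obj) @trusted
    if (is(CT == class))
{
    import std.experimental.allocator.mallocator;
    import core.memory : GC;

    if (obj is null) return;
    auto size = typeid(CT).init.length;
    auto p = cast(void*) obj;
    destroy(obj);                 // runs the destructor chain, does not free
    GC.removeRange(p);            // undo the GC.addRange from construct()
    Mallocator.instance.deallocate(p[0 .. size]);
    obj = null;
}
```

Note this uses the static type's typeid for the size, like the construct() it mirrors, so it assumes obj's static and dynamic types match.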
Re: free causes exception
On Wednesday, 27 January 2016 at 14:31:20 UTC, Steven Schveighoffer wrote: On 1/26/16 4:23 PM, Igor wrote: On Tuesday, 26 January 2016 at 20:17:20 UTC, Steven Schveighoffer wrote: [...] um? Memory manager? I am doing it manually C++ style so I don't have to worry about the god forsaken memory manager. Why is it so difficult? I create the object and release it when I need to. As Mike said, I mean whatever you are using for memory management. The class is not responsible for allocating or deallocating itself, just initializing and deinitializing itself. So if you use malloc and free, that is your memory manager. I can replace the destroy(f) with free (inline the code) but I don't see why that should matter. The whole point of destructors is to do this sort of stuff. That's why they were invented in the first place!?! It isn't even this way in C++. No destructors deallocate 'this'. All D destructors should destroy all the members. And generally speaking, if you ever plan to use a class with the GC, you should only destroy non-GC members. The GC members may already be destroyed. -Steve There need to be better docs on this! Or at least someone needs to provide a link ;) Why can't there be a "deplace" equivalent to emplace? Everyone says it's so easy not to use the GC in D, yet I can't seem to find any real-world examples ;/
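There is no standard "deplace", but the inverse of a malloc-plus-emplace pair can be sketched in a few lines, following Steve's advice that the class itself never frees its own memory. The helper names here (`make`, `deplace`) are invented for illustration:

```d
import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

// Allocate + construct: the forward half of the pair.
T make(T, Args...)(Args args) if (is(T == class))
{
    enum size = __traits(classInstanceSize, T);
    void[] mem = malloc(size)[0 .. size];
    return emplace!T(mem, args);
}

// Hypothetical "deplace": run the destructor, then free the malloc'd block.
// The destructor itself never calls free, so destroy() stays safe.
void deplace(T)(ref T obj) if (is(T == class))
{
    if (obj is null) return;
    destroy(obj);             // runs the destructor chain, does not free
    free(cast(void*) obj);    // the memory manager (malloc/free) releases it
    obj = null;
}
```

This separation is exactly why the crash in the original post happens: destroy() touches the object after the destructor runs, so freeing inside the destructor leaves destroy() working on dead memory.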
Re: Fun with extern(C++)
On Tuesday, 26 January 2016 at 16:25:35 UTC, Benjamin Thaut wrote: On Tuesday, 26 January 2016 at 16:13:55 UTC, Manu wrote: Probably, but the layout of the vtable is defined by the interface, and the interface type is always known, so I don't see why there should be any problem. Whether it's extern(C++) or extern(D), the class populating the vtable with functions knows the layout. I think it all comes down to this conversion to Object thing. If an interface must do that, then that's probably an issue without jamming an Object instance in the class somewhere. For a C++ class the first entry in the vtable is actually the first virtual function. (usually the destructor). For a D class the first entry in the vtable is the classinfo. Thus the problem if you derive a D class from a extern(C++) base class. I don't see any way to actually fix this, adjusting the this pointer won't help. Once you derive a D class from a extern(C++) base class it is no longer a fully functional D class. For example monitor (e.g. synchronized methods) won't work. Why couldn't D have been designed to extend the C++ class layout with the vtable at the start and the new stuff at the bottom? Would that have worked? If so, why not allow for a new class layout like "extern(C++) class X { }"
Re: C++17
On Tuesday, 26 January 2016 at 21:21:53 UTC, Ola Fosheim Grøstad wrote: On Tuesday, 26 January 2016 at 21:15:07 UTC, rsw0x wrote: In any case where you attempt to write code in D that is equal in performance to C++, you must avoid the GC. Yes, I don't see why anyone should have to link in the GC if they don't want to use it. I think the leaders are in over their heads. They have no real vision or plan to make D the language it can be. They are satisfied with their niche in the programming world. They need a Steve Jobs type of person to make this stuff happen. Someone who understands what has to be done, in what order, and how to make it happen. What D lacks is organizational structure! It's akin to a bunch of kids programming in their bedrooms cobbling together stuff and being ecstatic that it works (for them at least). D is basically stagnant (bug fixes and piddly stuff don't count), which is pretty sad considering it looks to be one of the best languages on the planet. If I said this stuff in North Korea I'd be hanged!
Re: how to allocate class without gc?
On Tuesday, 26 January 2016 at 09:32:06 UTC, Daniel Kozak wrote: V Tue, 26 Jan 2016 05:47:42 + Igor via Digitalmars-d-learn <digitalmars-d-learn@puremagic.com> napsáno: [...] Can you try it with GC.disable()? Didn't change anything.
Re: how to allocate class without gc?
On Tuesday, 26 January 2016 at 09:32:06 UTC, Daniel Kozak wrote: V Tue, 26 Jan 2016 05:47:42 + Igor via Digitalmars-d-learn <digitalmars-d-learn@puremagic.com> napsáno: On Tuesday, 26 January 2016 at 05:11:54 UTC, Mike Parker wrote: > [...] Can you try it with GC.disable()? //ubyte[__traits(classInstanceSize, App)] buffer; auto buffer = core.stdc.stdlib.malloc(__traits(classInstanceSize, App))[0..__traits(classInstanceSize, App)]; works, so it is the ubyte line.
free causes exception
I have successfully malloc'ed an object, but when I go to free it in the destructor I get an exception. The destructor simply has:

~this() // destructor for Foo
{
    core.stdc.stdlib.free();
}

auto buffer = core.stdc.stdlib.malloc(__traits(classInstanceSize, App))[0..__traits(classInstanceSize, App)];
auto app = cast(App)emplace!App(buffer[]);

I tried to retain a ptr to buffer and free that, but still no good. I also get a deprecation warning that it is not an lvalue. Hopefully I don't have to keep a ptr around to this simply to free it and avoid future issues? So how am I supposed to free an object?
Re: free causes exception
On Tuesday, 26 January 2016 at 14:48:48 UTC, Daniel Kozak wrote: V Tue, 26 Jan 2016 14:20:29 + Igor via Digitalmars-d-learn <digitalmars-d-learn@puremagic.com> napsáno: [...] core.stdc.stdlib.free(cast(void *)this); I still get an exception: Exception thrown at 0x7FF6C7CA3700 in test.exe: 0xC005: Access violation reading location 0x.
Re: free causes exception
On Tuesday, 26 January 2016 at 20:17:20 UTC, Steven Schveighoffer wrote: On 1/26/16 9:20 AM, Igor wrote: I have successfully malloc'ed an object, but when I go to free it in the destructor I get an exception. The destructor simply has:

~this() // destructor for Foo
{
    core.stdc.stdlib.free();
}

auto buffer = core.stdc.stdlib.malloc(__traits(classInstanceSize, App))[0..__traits(classInstanceSize, App)];
auto app = cast(App)emplace!App(buffer[]);

I tried to retain a ptr to buffer and free that, but still no good. I also get a deprecation warning that it is not an lvalue. Hopefully I don't have to keep a ptr around to this simply to free it and avoid future issues? So how am I supposed to free an object? Don't do it in the destructor. I can only imagine that you are triggering the destructor with destroy? In this case, destroy is calling the destructor, but then tries to zero the memory (which has already been freed). There is a mechanism D supports (but I believe is deprecated) of overriding new and delete. You may want to try that. It's deprecated, but has been for years and years, and I doubt it's going away any time soon. A class shouldn't care how it's allocated or destroyed. That is for the memory manager to worry about. um? Memory manager? I am doing it manually C++ style so I don't have to worry about the god forsaken memory manager. Why is it so difficult? I create the object and release it when I need to. I can replace the destroy(f) with free (inline the code) but I don't see why that should matter. The whole point of destructors is to do this sort of stuff. That's why they were invented in the first place!?!
Re: free causes exception
On Tuesday, 26 January 2016 at 19:34:22 UTC, Ali Çehreli wrote: On 01/26/2016 06:20 AM, Igor wrote: > I have successfully malloc'ed an object but when I go to free it in the > destructor I get an exception. The destructor simply has > > ~this() // destructor for Foo > { > core.stdc.stdlib.free(); > } That design suggests a complexity regarding object responsibilities: Assuming that the object was constructed on a piece of memory that it did *not* allocate, the memory was owned by somebody else. In that case and in general, freeing the memory should be the responsibility of that other somebody as well. Even if it is acceptable, you must also make sure that opAssign() and post-blit do the right thing: no two objects should own the same piece of memory. Ali That shouldn't be the case. I allocate in a static method called New once. I then deallocate in the destructor. Basically just as one would do in C++. I'm not sure about opAssign and post-blit.

class Foo
{
    ~this() // destructor for Foo
    {
        core.stdc.stdlib.free(cast(void *)this);
    }

    // Creates a Foo
    static public Foo New()
    {
        auto buffer = core.stdc.stdlib.malloc(__traits(classInstanceSize, Foo))[0..__traits(classInstanceSize, Foo)];
        auto app = cast(Foo)emplace!Foo(buffer[]);
        return app;
    }
}

hence auto f = Foo.New(); then .destroy(f); which is where the crash happens. If I don't destroy, it works fine, plus a memory leak.
Re: nogc Array
On Tuesday, 26 January 2016 at 03:06:40 UTC, maik klein wrote: On Tuesday, 26 January 2016 at 03:03:40 UTC, Igor wrote: Is there a GC-less array that we can use out of the box or do I have to create my own? https://dlang.org/phobos/std_container_array.html How do we use std.algorithm with it? I would like to use find but I have no luck. I have std.container.array!MyClass classes; then std.algorithm.find!("a.myInt == b")(classes, 3) I was hoping this would find the first object in classes whose myInt == 3, but I just get many errors about not being able to find the right definition. I guess std.container.array isn't a range? Or am I using it wrong?
Re: nogc Array
On Tuesday, 26 January 2016 at 04:38:13 UTC, Adam D. Ruppe wrote: On Tuesday, 26 January 2016 at 04:31:07 UTC, Igor wrote: then std.algorithm.find!("a.myInt == b")(classes, 3) Try std.algorithm.find!("a.myInt == b")(classes[], 3) notice the [] after classes I guess std.container.array isn't a range? Or am I using it wrong? Containers aren't really ranges, they instead *offer* ranges that iterate over them. Built-in arrays are a bit special in that they do this implicitly, so the line is more blurred there, but it is a general rule that you need to get a range out of a container. Otherwise, consider that iterating over it with popFront would result in the container being automatically emptied and not reusable! Ok, does the [] do any conversion or anything I don't want, or does it just make the template know we are working over an array? Are there any performance issues? I am already using a for loop to find the object; it's 6 lines of code. I was hoping to get that down to one or two and make it a bit easier to understand.

App app = null;
for (int i = 0; i < Apps.length(); i++)
    if ((Apps[i] !is null) && (Apps[i].hWnd == hWnd))
    {
        app = Apps[i];
        break;
    }

versus

find!("a.hWnd == b")(Apps[], hWnd);

Does [] take time to convert to a built-in array or range or whatever, or will it be just as fast as the above code?
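The [] on a std.container.Array is cheap: it returns a small range struct over the container's existing storage, with no copying and no allocation. A minimal self-contained example (using a hypothetical App struct in place of the original class):

```d
import std.algorithm : find;
import std.container.array : Array;

// Hypothetical stand-in for the App type from the post.
struct App { int hWnd; }

void main()
{
    Array!App apps;
    apps.insertBack(App(1));
    apps.insertBack(App(7));

    // apps[] yields a lightweight range over the container; no copy is made,
    // so this is comparable in cost to the hand-written loop.
    auto r = find!("a.hWnd == b")(apps[], 7);
    assert(!r.empty && r.front.hWnd == 7);
}
```

Since the slice is just a view, find over it walks the same elements the manual for loop would, one comparison per element.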
Re: how to allocate class without gc?
On Tuesday, 26 January 2016 at 05:11:54 UTC, Mike Parker wrote: On Tuesday, 26 January 2016 at 01:09:50 UTC, Igor wrote: Is there any example that shows how to properly allocate an object of a class type with the new allocators and then release it when desired? Allocate a block of memory big enough to hold an instance of your class using whichever allocator you need, then instantiate a class instance with std.conv.emplace. http://p0nce.github.io/d-idioms/#Placement-new-with-emplace I created a class using this example. But my code is now failing. It seems one can't just replace new with this and expect it to work? What is happening is that some fields (strings) are not retaining their value.

ubyte[__traits(classInstanceSize, App)] buffer;
auto app = cast(App)emplace!App(buffer[]);
//auto app = new App();

Basically the comment is the original. When I finally call createWindow, it fails because the string representing the name inside App is null... which doesn't happen when I use new. Should it work as expected (which it isn't), or do I have to also emplace all the fields and such so they are not released for some reason?
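One likely culprit (a guess from the symptoms, not a confirmed diagnosis): the fixed-size ubyte buffer is a stack variable, so the emplaced instance dies with the enclosing scope even though the App reference escapes, and its GC-allocated fields (like strings) can then be collected or overwritten. Heap-allocating the storage avoids the lifetime problem:

```d
import std.conv : emplace;

class App
{
    string name = "myapp"; // hypothetical field standing in for the real one
}

App makeApp()
{
    enum size = __traits(classInstanceSize, App);
    // A GC-allocated buffer stays alive past this scope and is scanned by
    // the GC, unlike ubyte[size] on the stack.
    auto buffer = new ubyte[size];
    return emplace!App(buffer);
}
```

With a stack buffer the same code is only safe if the returned reference never outlives the function that declared the buffer.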
D Dll's usefulness
Can D DLLs be linked and used as if they were compiled directly into the program? I was thinking of writing some library routines and putting them in a DLL, but now I'm not thinking that would be very useful, because DLLs won't export class-like behavior. In my DLL:

class MyClass
{
    void foo() { }
}

In my app:

auto c = new MyClass();

I'd like to use MyClass as if it were defined directly here, but I think I can only load the dll and attach to foo? Is this right? Since I'm the "owner" of the library I can always just "drag and drop" the source code into the project to get the desired behavior. I'd like the DLL to provide that to future projects that may not have the source code. Either DLLs don't support this, which I think is the case, or I have to include a "header". Hopefully there is a tool that could take a "library" that will be used as a dll and strip it down into modules that can be included into the main app, so it can be used as if the source code were directly compiled in. Am I off target here?
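A common workaround for sharing classes across a DLL boundary is the interface-plus-factory pattern: only the interface declaration is shared with consumers, and the DLL exports a plain C-linkage factory function. This is a generic sketch of the pattern (names are invented, and it glosses over the real D-DLL complications on Windows, notably sharing the runtime and GC between the app and the DLL):

```d
// myclass_iface.d -- the "header": shipped to consumers, no implementation
interface IMyClass
{
    void foo();
}

// --- inside the DLL ---
class MyClass : IMyClass
{
    void foo() { /* implementation stays private to the DLL */ }
}

export extern (C) IMyClass createMyClass()
{
    return new MyClass;
}

// --- in the app, after loading the DLL ---
// auto create = cast(IMyClass function()) GetProcAddress(h, "createMyClass");
// IMyClass c = create();
// c.foo(); // virtual dispatch through the shared interface
```

The app gets full method-call syntax on the object without ever seeing MyClass's source, which is roughly the "header" arrangement the post asks about.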
Re: When is the Win API supported?
On Monday, 25 January 2016 at 18:40:14 UTC, Vladimir Panteleev wrote: On Monday, 25 January 2016 at 18:34:30 UTC, Igor wrote: [...] There currently aren't any. [...] The core.sys package mirrors various C system headers. As such, you can consult the C documentation instead. Though I agree, it would be nice to have at least an overview for each module. Feel free to file a bug or contribute. [...] core.sys.windows is a package (without a package.d), so you can't import it directly. You probably want to import core.sys.windows.windows, that will import most common headers (as described above, it is the equivalent of "#include "). Thanks. I see how it works now.
Re: When is the Win API supported?
On Monday, 25 January 2016 at 02:21:49 UTC, Rikki Cattermole wrote: On 25/01/16 2:46 PM, Igor wrote: When will the proper Win API be included in D? About how long (months, years?)? Does it support seamless narrow and wide characters? I am not referring to the defunct win32 support already included. You mean the MingW based bindings that are in 2.070? https://github.com/D-Programming-Language/druntime/tree/v2.070.0-b2/src/core/sys/windows The only issue is for -m32 with import libs. How do I use it? I can't find any docs on it (the D page, at least, doesn't seem to have anything for Windows in Phobos except charset).
Re: Dmd sc.ini and VS do not work well together!
On Sunday, 24 January 2016 at 08:27:43 UTC, Igor wrote: On Sunday, 24 January 2016 at 05:34:18 UTC, Brad Anderson wrote: [...] Ok, I hope that it will be fixed because it seems like a rather simple issue(location issue). I can't know if there are any other problems until it is fixed. [...] When I added 'LIB=%LIB%;"C:\PF\Windows\Kits\10\Lib\10.0.10586.0\ucrt\x64"' to sc.ini, I was able to compile. The installer is not adding this path. The installer is not functioning properly for latest VS. By scanning for these folders, it should work. The installer needs a more generic and intelligent path finding system than the somewhat hard coded one it uses now. This will make it easier to maintain in the years to come.
Re: When is the Win API supported?
On Monday, 25 January 2016 at 18:18:48 UTC, Vladimir Panteleev wrote: On Monday, 25 January 2016 at 18:09:47 UTC, Igor wrote: On Monday, 25 January 2016 at 02:21:49 UTC, Rikki Cattermole wrote: On 25/01/16 2:46 PM, Igor wrote: When will the proper Win API be included in D? About how long(months, years?)? Does it support seamless narrow and wide characters? I am not referring to the defunct win32 support already included. You mean the MingW based bindings that is in 2.070? https://github.com/D-Programming-Language/druntime/tree/v2.070.0-b2/src/core/sys/windows The only issue is for -m32 with import libs. I've included the .def files that came with the Win32 bindings, and added them to the build, so many import libs will be supported already. How do I use it? I can't find any docs on it(The D page, at least doesn't seem to have anything for windows in phobos except charset. When in C you would `#include `, in D you can `import core.sys.windows.foo;`, then just use it exactly like from C. Where are the docs for this? http://dlang.org/phobos-prerelease/index.html doesn't show anything about sys in core. I remember seeing somewhere that win32 api was included in dmd but that it wasn't working and was suppose to be updated. I would like to try it out but I can't find any working information about it. If I add `import core.sys.windows;` Dmd says `Error: module windows is in file 'core\sys\windows.d' which cannot be read`. I am using DMD Beta 2.070.0-b2 Thanks.
Re: D Dll's usefulness
On Monday, 25 January 2016 at 21:42:07 UTC, Kagamin wrote: Um... A closed-source library is one thing, DLL is another thing, DLL class library is a third thing, seamless linking of DLL class library is a fourth thing. Well... see what you can get working. Thanks for the help! I really appreciate your wisdom and hospitality! Give me your address and I'll send you a thank you card!
New Win32 core API broken?
error LNK2019: unresolved external symbol GetStockObject referenced in function _D2Application10createWindowMFPFZvZi (int Application.createWindow(void function()*)) and the line of code is wc.hbrBackground = GetStockObject(WHITE_BRUSH); I've tried to import core.sys.windows.wingdi; but that just got me to this error. When I don't use it, the code compiles. I was able to solve this problem by passing gdi32.lib on the command line. Shouldn't the module pull in the lib like it does for the other ones? Or does it?
static assignment
How can I have a static assignment?

static bool isStarted = false;

public static Application Start(string Name, void function(Application) entryPoint)
{
    if (isStarted)
        assert(false, "Can only call Start once");
    isStarted = true;
    ...

but I don't want this as a runtime check. I want to know at compile time. Start should never be used more than once in the entire program. It's only used to get control from windows startup. It's only called in the static this() of the main module. Changing stuff to static if and static assert doesn't help. I tried enum, const, etc. but all of them prevent setting isStarted to true.
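Whether Start is called more than once is a property of the program's dynamic control flow, so the compiler generally cannot prove it: static if and static assert only evaluate compile-time constants, and a call count isn't one. A common compromise (a sketch, using a plain assert that the compiler drops entirely under -release) is a debug-build-only guard:

```d
class Application {}

private bool isStarted = false;

Application Start(string name, void function(Application) entryPoint)
{
    // Checked in debug builds; compiled out with -release, so the
    // "runtime check" costs nothing in shipping code.
    assert(!isStarted, "Start may only be called once");
    isStarted = true;

    auto app = new Application;
    entryPoint(app);
    return app;
}
```

This doesn't give a compile-time error, but it catches a double call during development at zero release-build cost.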
how to allocate class without gc?
Is there any examples that shows how to properly allocate an object of a class type with the new allocators and then release it when desired?
nogc Array
Is there a GC-less array that we can use out of the box or do I have to create my own?
Re: Dmd sc.ini and VS do not work well together!
On Sunday, 24 January 2016 at 05:34:18 UTC, Brad Anderson wrote: On Saturday, 23 January 2016 at 21:38:19 UTC, Igor wrote: I feel like I am in the cave man times. I installed Dmd2 from scratch. A VisualD x64 project would not compile due to libucrt.lib not being found. Sorry you are having trouble. The Universal CRT and Visual Studio 2015 are very new and I'm sure there are still some bugs to work out. Ok, I hope that it will be fixed because it seems like a rather simple issue (location issue). I can't know if there are any other problems until it is fixed. [snip] HKLM "Software\Microsoft\Windows Kits\Installed Roots" "KitsRoot10" Then searching for the latest UCRT version in a subdirectory. That's probably where the bug is. This is brand new detection however so it might be buggy. You can see how it works here: https://github.com/D-Programming-Language/installer/blob/master/windows/d2-installer.nsi#L379 Would you happen to know a better way to do this? [snip] The installer should be modernized and provide path information and resolve dependencies properly before installing. It is clear that dmd was not designed for windows use. We make updates to it pretty much every release cycle. We'd love some help making it bulletproof. Rock-solid VS/Platform SDK detection is what we want, but it has proved a bit trickier than expected (Microsoft has changed a few things up with different VS releases, so there is not just one way to do it). Ok, I think you need to use all the reg keys in HKLM "Software\Microsoft\Windows Kits\Installed Roots" and search all valid paths for the proper files. In my case, if that were done, it would find the proper library files. Basically any subdirectory that has a lib file is a possible candidate for a lib path. That is the easy part; the hard part is to figure out which are the "correct" libs. One can check the libs for architecture: 32-bit lib folders are candidates for the 32-bit build, and ditto for 64-bit. Out of those, one should determine debug versions and separate those out, even though sc.ini doesn't seem to have this capability. Maybe it could be added, and then dmd could choose the correct path for debug builds. After that, one will have duplicates due to versioning. One could try to partition down further, take the latest version, or present the user with options at this point. At the very least, the installer could add all possible path candidates to sc.ini as comments, so when one goes in and edits the file, they just have to try one at a time and not go bonkers looking for the paths. Also, allow the option to copy the lib files (and possibly the VC linker files) into a subfolder to use instead. This way one can avoid having to reinstall the kits, or could uninstall them and still retain the libraries outside of the VS mess. This would avoid having to have multiple VS versions installed. A simple sc.ini reconfiguration tool or re-install/modify could then be used to select the different build varieties.
When is the Win API supported?
When will the proper Win API be included in D? About how long(months, years?)? Does it support seamless narrow and wide characters? I am not referring to the defunct win32 support already included.
Re: Dmd sc.ini and VS do not work well together!
The installer should be modernized to provide path information and resolve dependencies properly before installing. It is clear that dmd was not designed for windows use. Also, sc.ini global variables should be in the topmost section:

[Environment]
DFLAGS="-I%@P%\..\..\src\phobos" "-I%@P%\..\..\src\druntime\import"
LIB="%@P%\..\lib"

These should be placed here instead of in the individual sections, since duplicating them creates redundancy and is bug-prone:

VCINSTALLDIR=C:\PF\VS2015\VC\
WindowsSdkDir=C:\PF\Windows Kits\8.1\
UniversalCRTSdkDir=C:\PF\Windows\Kits\10\

or even in [Version]
Re: Dmd sc.ini and VS do not work well together!
On Saturday, 23 January 2016 at 22:47:35 UTC, Walter Bright wrote: On 1/23/2016 1:38 PM, Igor wrote: As of now I personally cannot use dmd to build windows apps. You know, sc.ini is editable by you! Yes, but why do you expect me to be so smart, or to have a desire to waste my time looking for paths and such, when YOU can write about 100 lines of code in about the same time it would take me to get sc.ini to work properly? There is a multiplicative factor here. If you do the work, it saves N people N hours of their lives. If I do it, it wastes 1 hour of my life and helps no one! Please don't be a life thief! I know it takes your own life-hours to implement the code, but you are the head D honcho! Maybe hire someone or ask someone? You seem to have a following! If I actually knew what sc.ini really needed to work properly then I might do it myself, but it looks like kinda crappy old Win 3.1-style stuff that never made much sense in the first place.