Re: Building a wasm library, need to override .object._d_newitemT!T
On Sunday, 24 December 2023 at 10:50:41 UTC, Johan wrote: _d_newitemT!T is fairly new, what compiler version are you using? -Johan Never mind, I managed to get it working, but I had to compile without including druntime and Phobos and move everything into the library. I'm using LDC 1.36.0-beta1.
Building a wasm library, need to override .object._d_newitemT!T
Hello, I've been developing a library[1] based on spasm, for which I've implemented the druntime. It currently compiles web apps properly with TypeInfo, no GC, and it even uses diet templates. I'm having a problem implementing the `new` keyword so that I can start importing more libraries with minimal changes. However, LDC calls .object._d_newitemT!T from the original druntime (which I need for compile-time function execution), but my implementation in `module object` doesn't override it in the compiler, and the original implementation tries to import core.stdc.time, which errors out in wasm (with good reason). Is there a compiler flag that I can use to override module templates? Thanks in advance. [1] https://github.com/etcimon/libwasm
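For context, a GC-free stand-in for the item allocator along the lines of what a custom wasm `object` module might define. This is only a sketch, not the real druntime template (which also handles GC bookkeeping and lifetime hooks), and it assumes a C-style `malloc` is linked into the module:

```d
// Hypothetical sketch of a GC-free _d_newitemT for a minimal runtime.
// Assumes some allocator (here: C malloc) is available in the wasm module.
extern (C) void* malloc(size_t size);

T* _d_newitemT(T)()
{
    auto p = cast(T*) malloc(T.sizeof);
    // default-initialize the new item, as `new T` would
    *p = T.init;
    return p;
}
```

The open question in the post is precisely how to make the compiler pick such an override instead of the druntime one.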
Re: Lazy and GC Allocations
On Monday, 20 February 2023 at 19:58:32 UTC, Steven Schveighoffer wrote: On 2/20/23 1:50 PM, Etienne wrote: On Monday, 20 February 2023 at 02:50:20 UTC, Steven Schveighoffer wrote: See Adam's bug report: https://issues.dlang.org/show_bug.cgi?id=23627 So, according to this bug report, the implementation is allocating a closure on the GC even though the spec says it shouldn't? The opposite: the delegate doesn't force a closure, and so when the variable goes out of scope, memory corruption ensues. I've been writing some betterC and the lazy parameter was prohibited because it allocates on the GC, so I'm wondering what the situation is currently. It shouldn't. Now, lazy can't be `@nogc` (because that's just what the compiler dictates), but it won't actually *use* the GC if you don't allocate in the function call. I just tested, and you can use lazy parameters with betterC. -Steve The @nogc issue might be why it didn't work for me. I use it because it's easier to work with betterC, but perhaps I should avoid writing @nogc code altogether. Thanks for the info! Etienne
Re: Lazy and GC Allocations
On Monday, 20 February 2023 at 02:50:20 UTC, Steven Schveighoffer wrote: See Adam's bug report: https://issues.dlang.org/show_bug.cgi?id=23627 -Steve So, according to this bug report, the implementation is allocating a closure on the GC even though the spec says it shouldn't? I've been writing some betterC and the lazy parameter was prohibited because it allocates on the GC, so I'm wondering what the situation is currently Etienne
Lazy and GC Allocations
Hello, I'm wondering at which moment the following would make an allocation of the scope variables on the GC. Should I assume that, since the second parameter of enforce is lazy, we get a delegate/literal that saves the current scope on the GC even if it's not needed? I'm asking purely from a performance perspective of avoiding GC allocations.

```
import std.exception : enforce;
import std.format : format;

void main()
{
    int a = 5;
    enforce(true, format("a: %d", a));
}
```

Thanks Etienne
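To see the laziness in action, a small check (a sketch; `callCount` and `expensiveMsg` are made-up names) showing that `enforce`'s message argument is never evaluated when the condition holds:

```d
import std.exception : enforce;

int callCount;

string expensiveMsg()
{
    ++callCount; // this is also where a GC allocation (e.g. format) would happen
    return "a: 5";
}

void main()
{
    enforce(true, expensiveMsg()); // lazy parameter: expensiveMsg() is not called
    assert(callCount == 0);
}
```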
Re: Invalid assembler comparison
On Friday, 23 October 2015 at 15:17:43 UTC, Etienne Cimon wrote: Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during the cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP instruction, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes? It turns out there was a signed comparison happening behind the scenes when using jnl; I had to use jnb instead: http://stackoverflow.com/questions/27284895/how-to-compare-a-signed-value-and-an-unsigned-value-in-x86-assembly
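The register values from the post illustrate the pitfall directly: both fit in a ulong, but the first has its top bit set, so a signed comparison (what jnl assumes) flips the result relative to an unsigned one (jnb/jae). A quick check in plain D:

```d
void main()
{
    ulong rdx = 9366584610601550696UL; // top bit set: "negative" when signed
    ulong r15 = 8407293697099479287UL;

    assert(rdx > r15);                        // unsigned view, what jnb/jae test
    assert(cast(long) rdx < cast(long) r15);  // signed view, what jnl/jge test
}
```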
Invalid assembler comparison
Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP command, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes?
Re: Dangular - D Rest server + Angular frontend
On Sunday, 19 July 2015 at 19:54:31 UTC, Jarl André Hübenthal wrote: Hi, I have created a personal project that aims to teach myself more about D/vibe.d and to create a simple and easy-to-grasp example of Mongo - Vibe - Angular. Nice! I'm also working on a project like this, using a paid angularjs admin template from themeforest, although I'm powering it with Vibe.d / Redis and PostgreSQL 9.4 with its new json type. controllers, but it works as a bootstrap example. I am thinking of creating another view that uses ReactJS because it's much better than Angular. ReactJS has been very promising and I hear a lot of hype around it. However, I believe Angular lived through its hype and is now more mature in plenty of areas; for example, its Ionic framework for cross-mobile apps is reaching its golden age with seemingly fluid performance on every device! With vibe.d being a cross-platform framework, you'd even be able to build a web application that communicates with a client-side OS API, effectively closing the gap between web dev and software dev. So, there are two structs, but I really only want to have one. Should I use classes for this? Inheritance? Vibe.d is famous for its compile-time evaluation, understanding structures as with reflection but producing the most optimized machine code possible. You won't be dealing with interfaces in this case; you should look at the UDA API instead: http://vibed.org/api/vibe.data.serialization/. For example, if your field might not always be in the JSON, you can mark it @optional.
struct PersonDoc {
    @optional BsonObjectID _id;
    ulong id;
    string firstName;
    string lastName;
}

You can also override the default serialization/deserialization behaviour for a struct at compile time by defining the function signatures specified here: http://vibed.org/api/vibe.data.json/serializeToJson or the example here: https://github.com/rejectedsoftware/vibe.d/blob/master/examples/serialization/source/app.d This being said, your questions are most likely to be answered if you ask at http://forum.rejectedsoftware.com
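The compile-time reflection that powers this can be sketched with a toy serializer. This is not vibe.d's actual code: `optional` below is a stand-in for vibe's attribute, and `toToyJson` is a made-up name; it just shows the mechanism of walking fields at compile time and skipping @optional ones left at their default value:

```d
import std.conv : to;
import std.traits : hasUDA;

enum optional; // stand-in for vibe.data.serialization's @optional

struct PersonDoc
{
    @optional string _id;
    ulong id;
    string firstName;
}

string toToyJson(T)(T t)
{
    string s = "{";
    // foreach over allMembers is unrolled at compile time, one copy per field
    foreach (name; __traits(allMembers, T))
    {
        auto val = __traits(getMember, t, name);
        static if (hasUDA!(__traits(getMember, T, name), optional))
            if (val == typeof(val).init)
                continue; // omit optional fields that were never set
        if (s.length > 1) s ~= ",";
        s ~= `"` ~ name ~ `":"` ~ to!string(val) ~ `"`;
    }
    return s ~ "}";
}
```

For example, serializing a `PersonDoc` with `_id` unset yields `{"id":"7","firstName":"Ann"}`; vibe.d's real serializer works on the same principle with proper JSON typing.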
Re: goroutines vs vibe.d tasks
On Wednesday, 1 July 2015 at 18:09:19 UTC, Mathias Lang wrote: On Tuesday, 30 June 2015 at 15:18:36 UTC, Jack Applegame wrote: Just creating a bunch (10k) of sleeping (for 100 msecs) goroutines/tasks. Compilers go: go version go1.4.2 linux/amd64 vibe.d: DMD64 D Compiler v2.067.1 linux/amd64, vibe.d 0.7.23 Code go: http://pastebin.com/2zBnGBpt vibe.d: http://pastebin.com/JkpwSe47 go version built with go build test.go vibe.d version built with dub build --build=release test.d Results on my machine: go: 168.736462ms (overhead ~ 68ms) vibe.d: 1944ms (overhead ~ 1844ms) Why is creating vibe.d tasks so slow (more than 10 times)? In your dub.json, can you use the following: "subConfigurations": { "vibe-d": "libasync" }, "dependencies": { "vibe-d": "~>0.7.24-beta.3" } Turns out it makes it much faster on my machine (371ms vs 1474ms). I guess it could be a good thing to investigate whether we can make it the default in 0.7.25. I don't benchmark my code frequently, but that's definitely flattering :) I hope we see a release of LDC 2.067.0 soon so that I can optimize the code further. I gave up on 2.066 a while back.
Re: CPU cores threads fibers
On 2015-06-14 08:35, Robert M. Münch wrote: Hi, just to cross-check that I have the correct understanding: fibers = look parallel, are sequential = use 1 CPU core; threads = look parallel, are parallel = use several CPU cores. Is that right? Yes, however nothing really guarantees that multi-threading = multi-core. The kernel reserves the right, and will most likely do everything possible, to keep your process core-local to use caching efficiently. There are a few ways around that, though: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686247%28v=vs.85%29.aspx http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
Re: mscoff x86 invalid pointers
On 2015-05-10 03:54, Baz wrote: On Sunday, 10 May 2015 at 04:16:45 UTC, Etienne Cimon wrote: On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configuration. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure: a global variable already has a pointer value for some reason. I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! You can always use travis-ci to do such a job for you ;) Doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak... Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again. Could you tell me the way to compile a 32-bit COFF druntime/phobos? Would you have some custom win32.mak to share? Thx. I edited win64.mak: you need to change it to MODEL=32mscoff and remove all occurrences of amd64/ in the file (there are 3), for both druntime and phobos. Save this as win32mscoff.mak. You then need to place the resulting phobos32mscoff.lib into dmd2/windows/lib32mscoff/ (the folder doesn't exist by default).
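The hand edits described above can be scripted; a sketch under the post's assumptions (file names as stated there; the exact form of the MODEL line may differ between DMD releases):

```shell
# Derive win32mscoff.mak from win64.mak: switch the model and drop the
# amd64/ path components. Run this in both the druntime and phobos trees.
sed -e 's/MODEL=64/MODEL=32mscoff/' -e 's|amd64/||g' win64.mak > win32mscoff.mak
```

The resulting phobos32mscoff.lib then goes into dmd2/windows/lib32mscoff/ as the post says.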
Re: mscoff x86 invalid pointers
On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configuration. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure: a global variable already has a pointer value for some reason. I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! You can always use travis-ci to do such a job for you ;) Doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak... Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again.
mscoff x86 invalid pointers
I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configuration. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure: a global variable already has a pointer value for some reason. I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks!
Re: build vibe-d-0.7.23 error on win7 x86?
On Monday, 20 April 2015 at 07:58:40 UTC, mzf wrote: win 7 x86, 3 GB RAM: 1. dmd 2.066 + vibe-d 0.7.23: it's OK. 2. dmd 2.067 + vibe-d 0.7.23: it shows an out-of-memory error. Why? http://forum.dlang.org/thread/mghqlf$10l2$1...@digitalmars.com#post-ybrtcxrcmrrsoaaksdbj:40forum.dlang.org
Internal symbols?
Is there a way to prevent DMD from exporting a symbol? Basically, I would need an attribute like extern(none): my library makes heavy use of CTFE and the linker takes 13 seconds, OMF is also off the table for me, and this is creating tons of problems... Thanks in advance!
Re: DMD 64 bit on Windows
On Tuesday, 14 April 2015 at 09:57:55 UTC, wobbles wrote: On Tuesday, 14 April 2015 at 01:31:27 UTC, Etienne wrote: I'm currently experiencing Out Of Memory errors when compiling in DMD on Windows. Has anyone found a way to compile a DMD x86_64 compiler on Windows? I've been having this same issue. Over-use of CTFE is what's causing it on my part. To fix it, I've had to split my CTFE functions out into a separate runtime tool (to make use of GC goodness. See people, GC is good! :)) The tool will print the code that my CTFE functions would normally generate to a file, and then use import to get it back into my main application. Bit of a mess, but it works. Dunno if this will help with your situation, as I'm not sure if CTFE is causing it. It's actually a pretty big program: I'm compiling Botan (80k lines of code), libasync, libhttp2, memutils, and vibe.d together. I intend to build a pretty big library on top of it all, so I pretty much need the memory. It takes about 3 GB on x86, although I set up mscoff to link the high number of symbols. So, any idea if DMD can be moved to 64-bit anytime soon?
Re: DMD 64 bit on Windows
On 4/14/2015 3:47 PM, Kai Nacke wrote: Short recipe: Download Visual Studio 2013 Community Edition. Download the DMD source OR clone from the GitHub repository. Start VS 2013. Open solution dmd_msc_vs10.sln (in folder src). Right click solution dmd_msc_vs10 and select Properties. Change Configuration to Release and Platform to x64. Right click solution dmd_msc_vs10 and select Rebuild. The result is a 64-bit executable dmd_msc.exe in folder src. Regards, Kai Woa, you're awesome, thanks!
DMD 64 bit on Windows
I'm currently experiencing Out Of Memory errors when compiling in DMD on Windows. Has anyone found a way to compile a DMD x86_64 compiler on Windows?
Re: DMD 64 bit on Windows
On 4/13/2015 9:42 PM, Dennis Ritchie wrote: Main article here: http://wiki.dlang.org/Installing_DMD_on_64-bit_Windows_7_%28COFF-compatible%29 I think this might be about the -m64 option in the D compiler. I'm actually having the Out Of Memory error with the -m64 option, because DMD crashes at 4 GB of RAM. What I need is DMD compiled with a 64-bit pointer size.
Re: How to connect asynchronously (non block) with kqueue?
On 3/24/2015 5:50 AM, zhmt wrote: I am using kqueue on macosx, and I know how to write a simple server, but I don't know which event is triggered in kqueue when a connection succeeds, and which when it fails: EVFILT_READ or EVFILT_WRITE? I have googled this question and got no examples; any suggestions are welcome. Thanks. Look into libasync for an abstraction over kqueue: https://github.com/etcimon/libasync Also, this is what you're looking for: https://github.com/etcimon/libasync/blob/628850e8a6020298612e8a35229f5539d7385bae/source/libasync/posix.d#L1826
Re: OPTLINK Error 45 Too Much DEBUG Data for Old CodeView format
On 3/20/2015 1:35 PM, Koi wrote: Hello, after some coding I needed to update some external libraries like DerelictSDL2. As we all know, one update isn't enough, so I updated my whole D environment at the end of the day (current dmd version, VisualD). After getting rid of some linking errors (symbols undefined) I have only one error left: Optlink: Error 45: Too Much DEBUG Data for Old CodeView format I googled, but really can't figure out what this error is about. This is due to a high number of symbols in your code. I fixed this almost a year ago in the optlink repository: https://github.com/DigitalMars/optlink/pull/15 You should be able to download it from the digitalmars.com website under: Digital Mars C/C++ Compiler Version 8.57 (3662658 bytes) The link.exe file in the bin folder is up to date.
Re: State of Windows x64 COFF support?
On 2015-02-19 1:41 PM, ketmar wrote: i remember that DMD creates one section for each function (to allow the smartlink feature). with templates this can be a lot. maybe it needs a new cli flag --collapse-sections or something like it. I watched the section names and discovered over 20,000 sections named .debug$S. According to the code, a new .debug$S section is created every time it is searched with the flag IMAGE_SCN_LNK_COMDAT in: https://github.com/D-Programming-Language/dmd/blob/de6fccf8391b1dfdb959fa0f089920c2c8e6aff8/src/backend/mscoffobj.c#L1724 I deleted this flag and now the program links correctly with COFF with 50k sections. It would have been easier to debug with an error in mscoffobj.c:1591: assert(scnhdr_cnt < 65536, "Too many symbols for COFF format"); https://github.com/D-Programming-Language/dmd/blob/de6fccf8391b1dfdb959fa0f089920c2c8e6aff8/src/backend/mscoffobj.c#L1591 With this simple patch to the compiler, there are still some errors compiling Botan on win64, but they are no longer related to the COFF format.
Re: State of Windows x64 COFF support?
On 2015-02-19 11:39 AM, Etienne wrote: I'm having corrupt symbol table errors on a Win64 build of a big application, I can't find a way around it. I'm wondering if the COFF support is still experimental in DMD? Thanks! I just counted 67k sections using a printf in DMD... The limit is 65k so that explains that.
State of Windows x64 COFF support?
I'm having corrupt symbol table errors on a Win64 build of a big application, I can't find a way around it. I'm wondering if the COFF support is still experimental in DMD? Thanks!
Re: jsnode crypto createHmac createHash
Keep an eye on this one: Botan in D, https://github.com/etcimon/botan Should be finished in a couple of weeks. E.g. from the TLS module:

auto hmac = get_mac("HMAC(SHA-256)");
hmac.set_key(secret_key);
hmac.update_be(client_hello_bits.length);
hmac.update(client_hello_bits);
hmac.update_be(client_identity.length);
hmac.update(client_identity);
m_cookie = unlock(hmac.flush());
Re: Is someone still using or maintaining std.xml2 aka xmlp?
On 2014-11-28 15:15, Tobias Pankrath wrote: Old project link is http://www.dsource.org/projects/xmlp The launchpad and dsource repositories are dead for two years now. Anyone using it? Nope. I found kXML while searching for the same, it has everything I've needed up to spec. I'm maintaining a fork that works with DMD 2.066+ here: https://github.com/etcimon/kxml
Re: Reducing Pegged ASTs
On 2014-11-25 10:12, Nordlöw wrote: Is there a way to (on the fly) reduce Pegged parse results such as I've made an ASN.1 parser using a Pegged tree map; it's not so complex and does the reducing as well. https://github.com/globecsys/asn1.d Most of the meat is in asn1/generator/. In short, it's much easier when you put all the info in the same object, in this case an AEntity: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L239 When the whole tree is done that way, you can easily traverse it and move nodes around like a linked list. I've made a helper function here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L10 You can see it being used here: https://github.com/globecsys/asn1.d/blob/38bd1907498cf69a08604a96394892416f7aa3bd/asn1/generator/asntree.d#L109 and then here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/generator.d#L500 Also, the garbage collector really makes it easy to optimize memory usage, i.e. when you use a node in multiple places and need to re-order the tree elements. I still have a bunch of work to do, and I intend to replace Botan's ASN.1 functionality with this and a DER serialization module. Beware: the Pegged structure isn't insanely fast to parse because of the recursion limits, which I implemented very inefficiently because I was too lazy to translate the standard ASN.1 BNF into PEG. Also, the bigger bottleneck would be error strings. For 1-2 months of work (incl. learning ASN.1), I'm very satisfied with the technology involved and would recommend intermediate structures with traversal helpers.
Re: vibe.d problem
On 2014-11-18 8:37 AM, Lázaro Armando via Digitalmars-d-learn wrote: How could I do that using dub? You must download the most recent version of dub from http://code.dlang.org Run the commands: dub remove vibe-d --force-remove dub fetch vibe-d --version=0.7.21-rc.4 The list of versions you can use is here: https://github.com/rejectedsoftware/vibe.d/releases
Re: Precise TLS GC
I realize this shouldn't belong in D.learn :) http://forum.dlang.org/thread/m4aahr$25qd$2...@digitalmars.com#post-m4aahr:2425qd:242:40digitalmars.com
Precise TLS GC
I always wondered why we would use the shared keyword on GC allocations if only the stack can be optimized for TLS storage. After thinking about how shared objects should work with the GC, it's become obvious that the GC should be optimized for local data. Anything shared would have to be manually managed, because the biggest slowdown of all is stopping the world to facilitate concurrency. With a precise GC on the way, it's become easy to filter out allocations from shared objects: simply proxy them through malloc and get rid of the locks. Make the GC thread-local, and you can expect it to scale with the number of processors. Any thread-local data should already have to be duplicated into a shared object to be used from another thread, and the lifetime is easy to manage manually.

SomeTLS variable = new SomeTLS(Data);
shared SomeTLS variable2 = cast(shared) variable.dupShared();
Tid tid = spawn(doSomething, variable2);
variable = receive!variable2(tid).dupLocal();
delete variable2;

Programming with a syntax that makes use of shared objects, and forces manual management on those, seems to make stop-the-world a thing of the past. Any thoughts?
Strictness of language styling
I'm translating the library Botan and I'm at a point where I have to ask myself if I'm going to change functions and object names respectively from snake_case and Camel_Case to camelCase and CamelCase. Same goes for file names. Is this a strict rule for D libraries?
Re: Strictness of language styling
On 2014-11-10 11:16 AM, Adam D. Ruppe wrote: Personally, I don't really care about naming conventions. I prefer the camelCase and it seems most D people do, but if you're translating another library, there's value it keeping it the same for ease of documentation lookups from the original etc. I was thinking the same but sure am glad to hear you say it :D
Re: Memory usage of dmd
On 2014-11-10 11:32 AM, Xavier Bigand wrote: Are there some options that can help me reduce the memory consumption? As it's for production purposes, I don't think it's a good idea to remove compiler optimizations. The memory issues are probably related to diet templates; LDC and GDC won't help. You should definitely work and build on a machine with 4 GB of RAM. The server application could use as little as 8 MB of RAM, but compiling requires a workstation. Perhaps renting an Amazon instance for a few minutes of compilation would be a better idea?
Re: Memory usage of dmd
On 2014-11-10 12:02 PM, Xavier Bigand wrote: As far as I know, to have no downtime with vibe we need to be able to build directly on the server where the program runs. Maybe I just need to wait until I have some users to pay for a better server with more memory. With a low number of users, there's no reason to worry about a 1-second downtime from closing the process and replacing the application file. You should use a bash script to keep the process running, though:

# monitor.sh
nohup ./autostart.sh > stdout.log 2> crash.log < /dev/null &

# autostart.sh
while true ; do
    if ! pgrep -f '{processname}' > /dev/null ; then
        sh /home/{mysitefolder}/start.sh
    fi
    sleep 1
done

# start.sh
nohup ./{yourapplication} --uid={user} --gid={group} >> stdout.log 2>> crash.log &

# install.sh
pkill -f '{processname}'
/bin/cp -rf {yourapplication} /home/{mysitefolder}/

Using a console, run monitor.sh; the autostart.sh script will re-launch your server through start.sh as a daemon. Checks are made every second to ensure your server is never down because of a badly placed assert. If you need to replace your server application with an update, run the install.sh script from the folder containing the update.
Re: Memory usage of dmd
On 2014-11-10 2:52 PM, Marc Schütz wrote: If your server runs systemd, I would strongly recommend using that instead of a shell script. You can use Restart=always or Restart=on-failure in the unit file. It also provides socket activation, which will allow you to restart the program without downtime. I totally agree; I couldn't find a good resource about this, though. The shell-script method is quite literally rudimentary, but it took me a few minutes to put together and it's been working great for 6 months so far. I'll go and read a systemd tutorial eventually to get this done the right way; if anyone has anything ready and compatible with vibe.d apps, I'd be happy about it. For the install script, I guess the best way would be to put together an RPM script and upgrade through that. I'd have to explore that solution; it's quite a lot more than 2 lines of code, though :)
Re: Access Violation Tracking
On 2014-11-05 6:09 AM, Bauss wrote: Is there any way to track down access violations, instead of me having to look through my source code manually. I have a pretty big source code and an access violation happens at runtime, but it's going to be a nightmare looking through it all to find the access violation. Not to mention all the tests I have to run. So if there is a way to catch an access violation and find out where it occured it would be appreciated! I've seen a lot more invalid memory operation errors since the GC calls destructors. Letting the GC destroy objects out of order can be the issue. We might have to make an associative array of static global flags (debug bool[void*]) for each object to see if it was destroyed, and use asserts in the destructors / update the associative array, as a new idiom.
Re: Pragma mangle and D shared objects
On 2014-10-26 14:25, Etienne Cimon wrote: On 2014-10-25 23:31, H. S. Teoh via Digitalmars-d-learn wrote: Hmm. You can probably use __traits(getAllMembers...) to introspect a library module at compile-time and build a hash based on that, so that it's completely automated. If you have this available as a mixin, you could just mixin(exportLibrarySymbols()) in your module to produce the hash. Exactly, or I could also make it export specific functions into the hashmap, a little like a router. It seems like a very decent option. I found an elegant solution for dealing with dynamic libraries: https://github.com/bitwise-github/D-Reflection
Equivalent in D for .p2align 4,,15 ?
I'm looking for the D inline assembler equivalent of the .p2align 4,,15 directive to optimize a loop. Here's more information: http://stackoverflow.com/questions/21546946/what-p2align-does-in-asm-code I tried searching through a turbo assembler tutorial (because D's is based on it) and found nothing except a few hints here: http://www.csn.ul.ie/~darkstar/assembler/manual/a10.txt There might be a way through segments, directives, but I'm not sure at all if D supports it. Does anyone have any idea if/how I can align my code this way or if the compiler handles it?
Re: Equivalent in D for .p2align 4,,15 ?
On 2014-10-29 1:44 PM, anonymous wrote: D's inline assembler has an 'align' directive [1]. Aligning to a 16-byte boundary in D: `align 16;`. [1] http://dlang.org/iasm.html (align IntegerExpression, near the top) Of course, the align directive works on instructions in asm blocks. Thanks anonymous, that was a very simple explanation.
Re: Dart bindings for D?
On 2014-10-29 18:12, Laeeth Isharc wrote: Rationale for using Dart in combination with D is that I am not thrilled about learning or writing in Javascript, yet one has to do processing on the client in some language, and there seem very few viable alternatives for that. It would be nice to run D from front to back, but at least Dart has C-like syntax and is reasonably well thought out. I actually thought this over in the past and posted my research here: http://forum.dlang.org/thread/ll38cn$ojv$1...@digitalmars.com It would be awesome to write front-end tools in D. However, there won't be much browser support unless you're backed by Google or Microsoft. What's going to replace javascript? Will it be typescript? asm.js? dart? PNaCl? The solution is obviously to compile from D to the target language. But what's the real advantage? Re-using some back-end MVC libraries? All the communication is actually done through sockets, there's never any real interaction between the back-end/front-end. Also, you realize the front-end stuff is so full of community contributions that you're actually shooting yourself in the foot if you divert away from the more popular language and methodologies. So, I settle with javascript, and I shop for libraries instead of writing anything at all. There's so much diversity in the front-end world, a few hundred lines of code at most are going to be necessary for an original piece of work. Heh.
Using imm8 through inline assembler
I'm trying to write (in DMD) the assembler that handles the function: __m256i _mm256_permute4x64_epi64(__m256i a, in int M); This translates to vpermq. The closest thing I could find in DMD assembly is VPERMILPS, which is called with: asm { vpermilps YMM0, YMM1, IMM8; } However, I cannot figure out how to build IMM8 from `in int M`. I can convert int -> short[4] -> ubyte, but I can't move it into imm8: asm { mov imm8, ub; } fails. Does anyone have an idea of what this is and how to define it? Thanks in advance.
Re: Using imm8 through inline assembler
imm8 is not a register. imm stands for immediate, i.e. a constant, hard-coded value. E.g.: asm { vpermilps YMM0, YMM1, 0 /* no idea what would be a meaningful value */; } Oh, well, that makes sense. This means I should use string mixins to insert the actual value. I couldn't run AVX2 either; I have to work on this blindly until I get my hands on a new Xeon E5-2620 3rd generation processor. :/ I think my CPU doesn't have AVX, so I can't test anything beyond compiling. When running, all I get is Illegal instruction (core dumped). My error is with vmovdqu; I get: Error: AVX vector types not supported. Anyone know what this is? I can't find a reference to it in the DMD compiler code. Also, 'vpunpcklqdq _ymm, _ymm, _ymm' is undefined in DMD, so I get: avx2.d(23): Error: bad type/size of operands 'vpunpcklqdq' Looks like a bunch of work is needed to get AVX2 working...
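Since the immediate must appear as a literal in the asm text, one way to parameterize it is to generate the asm with a string mixin from a template value parameter. A sketch of the technique (using a harmless mov so it runs without AVX; `withImm` is a made-up name):

```d
import std.conv : to;

int withImm(int imm)() // imm is known at compile time
{
    int r;
    // the immediate is spliced into the asm source as a literal before compilation
    mixin("asm { mov EAX, " ~ to!string(imm) ~ "; mov r, EAX; }");
    return r;
}
```

Calling `withImm!42()` bakes 42 into the instruction; the same trick would apply to a vpermilps/vpermq immediate on a CPU and compiler that support the instruction.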
Re: Pragma mangle and D shared objects
On 2014-10-25 23:31, H. S. Teoh via Digitalmars-d-learn wrote: Hmm. You can probably use __traits(getAllMembers...) to introspect a library module at compile-time and build a hash based on that, so that it's completely automated. If you have this available as a mixin, you could just mixin(exportLibrarySymbols()) in your module to produce the hash. Exactly, or I could also make it export specific functions into the hashmap, a little like a router. It seems like a very decent option.
Pragma mangle and D shared objects
I haven't been able to find much about pragma(mangle). I'd like to do the following: http://forum.dlang.org/thread/hznsrmviciaeirqkj...@forum.dlang.org#post-zhxnqqubyudteycwudzz:40forum.dlang.org The part I find ugly is this: void* vp = dlsym(lib, "_D6plugin11getInstanceFZC2bc2Bc\0".ptr); I want to write a framework that stores a dynamic library name and a symbol to execute, and downloads the dynamic library if it's not available. This would be in a long-running server/networking application, and it needs to be simple to use. The mangling makes it less obvious for the programmer writing a plugin. Does pragma(mangle) make it possible to change this to dlsym(lib, "myOwnMangledName"), or would it still have strange symbols? Also, I've never seen the thunkEBX change merged from here: http://forum.dlang.org/thread/hznsrmviciaeirqkj...@forum.dlang.org?page=2#post-lg2lqi:241ga3:241:40digitalmars.com
Re: Pragma mangle and D shared objects
That looks like exactly the solution I need, very clever. It'll take some time to wrap my head around it :-P
Re: Pragma mangle and D shared objects
On 2014-10-25 11:56, Etienne Cimon wrote: That looks like exactly the solution I need, very clever. It'll take some time to wrap my head around it :-P Just brainstorming here, but I think every dynamic library should hold a utility container (hash map?) that searches for and returns the mangled names in itself using regex match. This container would always be in the same module/function name in every dynamic library. The mangling list would load itself using introspection through a `shared static this()`. For each module, for each class, insert mangling in the hashmap... This seems ideal because then you basically have libraries that document themselves at runtime. Of course, the return type and arguments would have to be decided in advance, but otherwise it allows loading and use of (possibly remote and unknown) DLLs, in a very simple way.
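A rough sketch of the self-documenting-library idea: `.mangleof` gives the mangled name without any binary-format tooling, so a `shared static this()` can record every function in the module. The module and function names below are made up for illustration:

```d
// Hypothetical plugin module; assume the exported functions share one signature.
module plugin;

__gshared string[string] mangledNames;  // pretty name -> mangled name

void hello() {}
void goodbye() {}

shared static this()
{
    // Iterating a __traits tuple is unrolled at compile time,
    // so `name` is a compile-time string in each iteration.
    foreach (name; __traits(allMembers, mixin(__MODULE__)))
        static if (is(typeof(__traits(getMember, plugin, name)) == function))
            mangledNames[name] = __traits(getMember, plugin, name).mangleof;
}
```

The loading side could then dlsym() the value stored under a human-readable key, without the plugin author ever seeing a mangled name.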
Re: Pragma mangle and D shared objects
On 2014-10-25 21:26, H. S. Teoh via Digitalmars-d-learn wrote: Not sure what nm uses, but a lot of posix tools for manipulating object files are based on binutils, which understands the local system's object file format and deal directly with the binary representation. The problem is, I don't know of any *standard* system functions that can do this, so you'd have to rely on OS-specific stuff to make it work, which is less than ideal. Which makes it better to export the mangling into a container at compile-time! That way, you can build a standard interface into DLLs so that other D application know what they can call =)
classInstanceSize and vtable
I'm trying to figure out the size difference between a final class and a regular class (which carries a vtable pointer). import std.stdio; class A { void print(){} } final class B { void print(){} } void main(){ writeln(__traits(classInstanceSize, A)); writeln(__traits(classInstanceSize, B)); } Returns: 8 8 I'm not sure why: does a final class still carry a vtable pointer?
Re: classInstanceSize and vtable
On 2014-10-23 20:12, bearophile wrote: In D all class instances contain a pointer to the class and a monitor pointer. The table is used for run-time reflection, and for standard virtual methods like toString, etc. Bye, bearophile So what's the point of making a class or methods final? Does it only free some space and allow inline to take place?
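To make bearophile's point concrete: final doesn't shrink the instance, because every D class instance still begins with a vptr and a monitor pointer (hence 8 bytes on 32-bit, 16 on 64-bit, even for an empty class). What final buys you is that calls can be devirtualized and inlined, since the compiler knows no override can exist:

```d
class A
{
    void print() {}            // virtual: dispatched through the vtable
}

class B
{
    final void print() {}      // final method: direct call, inlinable
}

final class C                  // final class: every method effectively final,
{                              // and no one can subclass C at all
    void print() {}
}

class D1 : A { override void print() {} }     // fine
// class D2 : B { override void print() {} }  // error: cannot override final
```

So yes: the space stays the same, but the optimizer gets direct calls and inlining opportunities.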
Translating inline ASM from C++
I'm translating a library from C++ and I need some help with the assembler translations. I've read the guides on D inline assembler but they're fairly thin. I don't understand what =r, =a, %0, %1 should be in D. Are these some sort of registers? https://github.com/etcimon/botan/blob/master/source/botan/entropy/hres_timer.d#L107 I currently only need to translate these commented statements. If anyone could donate some code for it I'd be grateful, and it would help us all move towards a completely native crypto/ssl library ;)
Re: Translating inline ASM from C++
On 2014-10-15 09:48, Etienne wrote: I currently only need to translate these commented statements. If anyone I found the most useful information here in case anyone is wondering: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html The D syntax for inline assembly is Intel style, whereas the GCC syntax is AT&T style. This guide seems to show exactly how to translate from C++ to D.
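For the timer code in question, the translation mostly means replacing GCC's operand constraints with explicit register names. A hedged sketch of the correspondence for an rdtsc read (x86, DMD-style asm):

```d
// GCC AT&T style (C/C++):
//   asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
// "=a" and "=d" are output constraints binding lo/hi to EAX/EDX;
// %0, %1 would refer back to those operands inside the template.

ulong readTsc()
{
    uint lo, hi;
    asm
    {
        rdtsc;          // result lands in EDX:EAX by definition
        mov lo, EAX;    // D inline asm names registers directly...
        mov hi, EDX;    // ...instead of using =a / =d constraints
    }
    return (cast(ulong) hi << 32) | lo;
}
```

In other words, =r/=a/%0/%1 have no D equivalent because D's inline assembler has no constraint system; you choose the registers yourself.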
Re: Translating inline ASM from C++
On 2014-10-15 19:47, Etienne Cimon wrote: The D syntax for inline assembly is Intel style, whereas the GCC syntax is AT&T style. This guide seems to show exactly how to translate from C++ to D. I'm posting this research for anyone searching the forums for a solution. I found a better guide to D assembly on the digitalmars website: http://www.digitalmars.com/ctg/ctgInlineAsm.html It says it emulates Borland Turbo Assembler. Here's a manual I found: http://www.csn.ul.ie/~darkstar/assembler/manual/ Chapter 6 is the most interesting: http://www.csn.ul.ie/~darkstar/assembler/manual/a06a.txt Also, this works in DMD and is compatible with LDC and GDC as well; they all support the D inline assembler syntax for x86 and x86_64: https://github.com/ldc-developers/ldc/blob/master/gen/asm-x86.h
Re: How do I write __simd(void16*, void16) ?
On 2014-10-10 4:12 AM, ketmar via Digitalmars-d-learn wrote: actually, importing it works like a trigger, and then programmer has access to GCC builtins defined in GCC source. one can read GCC documentation to find more information about 'em. Hi ketmar, Which type would have to be sent to the corresponding functions? I have a hard time figuring out how to use the __m128i with the proper mangling. Does it use core.simd's Vector!x types there?
Re: How do I write __simd(void16*, void16) ?
On 2014-10-10 9:01 AM, ketmar via Digitalmars-d-learn wrote: import core.simd; import gcc.builtins; void main () { float4 a, b; auto tmp = __builtin_ia32_mulps(a, b); // a*b } i don't know what the hell this means, but at least it accepts types from core.simd. ;-) so i assume that other such builtins will accept other types too. Nice! Nobody knows simd but they all know how to make it work. Go figure =)
Using inline assembler
I'm a bit new to the inline assembler, I'm trying to use the `movdqu` operation to move a 128 bit double quadword from a pointer location into another location like this: align(16) union __m128i { ubyte[16] data }; void store(__m128i* src, __m128i* dst) { asm { movdqu [dst], src; } } The compiler complains about a bad type/size of operands 'movdqu', but these two data segments are 16 byte align so they should be in an XMM# register? Is there something I'm missing here?
Re: Using inline assembler
On 2014-10-09 8:54 AM, anonymous wrote: This compiles: align(16) union __m128i { ubyte[16] data; } /* note the position of the semicolon */ void store(__m128i* src, __m128i* dst) { asm { movdqu XMM0, [src]; /* note: [src] */ movdqu [dst], XMM0; } } Yes, this does compile, but the value from src never ends up stored in dst. void main() { __m128i src; src.data[0] = 255; __m128i dst; writeln(src.data); // shows 255 at offset 0 store(&src, &dst); writeln(dst.data); // remains set as the initial array } http://x86.renejeschke.de/html/file_module_x86_id_184.html Is this how it's meant to be used?
Re: Using inline assembler
Maybe someone can help with the more specific problem. I'm translating a crypto engine here: https://github.com/etcimon/botan/blob/master/source/botan/block/aes_ni/aes_ni.d But I need this to work on DMD, LDC and GDC. I decided to write the assembler code directly for the functions in this module: https://github.com/etcimon/botan/blob/master/source/botan/utils/simd/xmmintrin.d If there's anything someone can tell me about this, I'd be thankful. I'm very experienced in every aspect of programming, but still at my first baby steps in assembler.
Re: Using inline assembler
On 2014-10-09 9:46 AM, anonymous wrote: I'm out of my knowledge zone here, but it seems to work when you move the pointers to registers first: void store(__m128i* src, __m128i* dst) { asm { mov RAX, src; mov RBX, dst; movdqu XMM0, [RAX]; movdqu [RBX], XMM0; } } Absolutely incredible! My first useful working assembler code. You save the day. Now I can probably write a whole SIMD library ;)
Re: How do I write __simd(void16*, void16) ?
On 2014-10-09 2:32 PM, Benjamin Thaut wrote: I know that GDC stopped supporting D style inline asm a while ago. If you need inline asm with GDC you have to use the gcc style inline assembly. I don't know about ldc though. But generally you want to use the official intrinsics with gdc and ldc because they won't perform any optimizations on inline assembly. Kind Regards Benjamin Thaut Any idea where I can find the headers in D for it?
Re: How do I write __simd(void16*, void16) ?
On 2014-10-09 4:29 PM, Benjamin Thaut wrote: I think a good starting point would be Manu's std.simd module. I don't know if he is still working on it, but an old version can be found here: https://github.com/TurkeyMan/simd/blob/master/std/simd.d That's a great reference! I can do a lot from that. I wish it wasn't an EDSL, though; it makes it really hard to translate the simd code to D. You can also find the druntime versions of ldc and gdc on github. For example: https://github.com/ldc-developers/druntime/blob/ldc/src/ldc/simd.di https://github.com/D-Programming-GDC/GDC/blob/master/libphobos/libdruntime/gcc/builtins.d Unfortunately the gcc.builtins module seems to be generated during compilation of gdc, so you might want to get a binary version or compile it yourself to see the module. OK, thanks!
Re: How do I write __simd(void16*, void16) ?
On 2014-10-09 5:05 PM, David Nadlinger wrote: On Thursday, 9 October 2014 at 20:29:44 UTC, Benjamin Thaut wrote: Unfortunately the gcc.builtins module seems to be generated during compilation of gdc, so you might want to get a binary version or compile it yourself to see the module. By the way, LDC has ldc.gccbuiltins_x86 too. LLVM doesn't export all the GCC-style intrinsics, though, if they are easily representable in normal LLVM IR (thus ldc.simd). David That's very helpful; the problem remains that the API is unfamiliar. I think most of the time, simd code will only need to be translated from basic function calls; it would've been nice to have equivalents :-p
Re: How do I write __simd(void16*, void16) ?
On 2014-10-09 17:32, Etienne wrote: That's very helpful, the problem remains that the API is unfamiliar. I think most of the time, simd code will only need to be translated from basic function calls, it would've been nice to have equivalents :-p Sorry, I think I had a bad understanding. I found out through a github issue that you need to use pragma(LDC_intrinsic, "llvm.*") [function declaration] https://github.com/ldc-developers/ldc/issues/627 And the possible gcc-style intrinsics are defined here: https://www.opensource.apple.com/source/clamav/clamav-158/clamav.Bin/clamav-0.98/libclamav/c++/llvm/include/llvm/Intrinsics.gen This really begs for a binding that works with all compilers.
How do I write __simd(void16*, void16) ?
I can't seem to find this function anywhere: __simd(void16*, void16) The mangling seems to go through to dmd's glue.lib This is for SSE2 operations: MOVDQU = void _mm_storeu_si128 ( __m128i *p, __m128i a) MOVDQU = __m128i _mm_loadu_si128 ( __m128i *p) Would I have to write this with ASM?
Re: How do I write __simd(void16*, void16) ?
On 2014-10-08 3:04 PM, Benjamin Thaut wrote: I strongly advise to not use core.simd at this point. It is in a horribly broken state and generates code that is far from efficient. I think I'll have to re-write the xmmintrin.h functions I need as string mixins to inline the assembly. Is that supposed to be compatible with LDC/GDC anyway?
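For completeness, the portable subset of core.simd that did work at the time is plain vector arithmetic on the built-in vector types, which all three compilers lower to SSE where the target supports it. A minimal sketch (hedged: this avoids the broken __simd intrinsics entirely):

```d
import core.simd;

version (D_SIMD)
{
    float4 addps(float4 a, float4 b)
    {
        return a + b;   // lowered to an SSE addps on capable targets
    }

    void demo()
    {
        float4 a = [1, 2, 3, 4];
        float4 b = 2;                    // scalar broadcast into all lanes
        float[4] expected = [3, 4, 5, 6];
        assert(addps(a, b).array == expected);
    }
}
```

Anything beyond element-wise arithmetic (shuffles, unaligned load/store) is where the per-compiler intrinsics or inline asm become unavoidable.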
Re: vibe.d https_server example fails
Yes, the ssl_stream should be defined outside the if clause. The FreeListRef refcount goes to 0 when it goes out of scope in http/server.d On Monday, 29 September 2014 at 21:39:03 UTC, Nordlöw wrote: On Monday, 29 September 2014 at 18:37:28 UTC, Martin Nowak wrote: Please report it https://github.com/rejectedsoftware/vibe.d/issues, there seems to be some issue with interface/class casting and manual class allocation. This time I got: Handling of connection failed: Failed to accept SSL tunnel: (336027804) Handling of connection failed: Failed to accept SSL tunnel: fPu: (336027804) Handling of connection failed: Failed to accept SSL tunnel: fPu: (336027804) Error executing command run: Program exited with code -11 I know nothing about https. Do I have to tell my browser about certificates?
Segfault in DMD OSX
I'm having issues with DMD returning exit code -11 rather than compiling my project. I have no idea how to debug this, I'm using Mac OS X 10.9.4 with latest git DMD tagged 2.066, and this project: https://github.com/etcimon/event.d When I hit `dub run`, I get the exit code. Not sure why or where the problem comes from, I can't get GDB to run on the mac (something about Mach task port error in gdb), and dmd DEBUG gives no additional info. Any ideas?
Re: Segfault in DMD OSX
I managed to get gdb running, here's what I get: Starting program: /bin/dmd -lib -m64 -g source/event/internals/epoll.d source/event/internals/kqueue.d source/event/internals/path.d source/event/internals/validator.d source/event/internals/hashmap.d source/event/internals/memory.d source/event/internals/socket_compat.d source/event/internals/win32.d source/event/file.d source/event/tcp.d source/event/timer.d source/event/watcher.d source/event/dns.d source/event/types.d source/event/windows.d source/event/events.d source/event/notifier.d source/event/signal.d source/event/threads.d source/event/udp.d source/event/d.d source/event/posix2.d source/event/posix.d source/event/test.d Using FreeBSD KQueue for events Program received signal SIGSEGV, Segmentation fault. 0x0001000d818b in TemplateInstance::findBestMatch(Scope*, ArrayExpression**) () (gdb) show registers Undefined show command: registers. Try help show. (gdb) registers Undefined command: registers. Try help. (gdb) info registers rax 0x1093fb750 4450137936 rbx 0x0 0 rcx 0x0 0 rdx 0x7fff5fbfebf0 140734799801328 rsi 0x12048d950 4836612432 rdi 0x0 0 rbp 0x7fff5fbfed60 0x7fff5fbfed60 rsp 0x7fff5fbfecb0 0x7fff5fbfecb0 r8 0x106117660 4396775008 r9 0x7fff5fbfed00 140734799801600 r10 0x7fff5fbfed00 140734799801600 r11 0x1a340b60 439618400 r12 0x0 0 r13 0x1001b14cf 4296742095 r14 0x0 0 r15 0x101a34050 4322443344 rip 0x1000d818b 0x1000d818b TemplateInstance::findBestMatch(Scope*, ArrayExpression**)+1339 eflags 0x10246 [ PF ZF IF RF ] cs 0x2b 43 ss unavailable ds unavailable es unavailable fs 0x0 0 gs 0x93f 155123712 (gdb) On 2014-09-22 12:35 PM, Etienne wrote: I'm having issues with DMD returning exit code -11 rather than compiling my project. I have no idea how to debug this, I'm using Mac OS X 10.9.4 with latest git DMD tagged 2.066, and this project: https://github.com/etcimon/event.d When I hit `dub run`, I get the exit code. 
Not sure why or where the problem comes from, I can't get GDB to run on the mac (something about Mach task port error in gdb), and dmd DEBUG gives no additional info. Any ideas?
Re: Segfault in DMD OSX
Here's with debug symbols in DMD: Program received signal SIGSEGV, Segmentation fault. 0x00010013c4d0 in TemplateInstance::findBestMatch (this=0x101c34050, sc=0x12058d740, fargs=0x0) at template.c:7329 7329 tempdecl->kind(), tempdecl->parent->toPrettyChars(), tempdecl->ident->toChars()); (gdb) print havetempdecl $1 = false (gdb) print tempdecl->kind() $2 = 0x100290149 "overload alias" (gdb) print tempdecl->parent->toPrettyChars() Cannot access memory at address 0x0 (gdb) print tempdecl->ident->toChars() $3 = 0x100515bd8 "Array" (gdb) print tempdel->parent No symbol "tempdel" in current context. (gdb) print tempdecl->parent $4 = (Dsymbol *) 0x0 (gdb) print tempdecl $5 = (Dsymbol *) 0x1095fbc50 (gdb) The tempdecl->parent is being accessed even though it may be null in template.c:7329: ::error(loc, "%s %s.%s does not match any template declaration", tempdecl->kind(), tempdecl->parent->toPrettyChars(), tempdecl->ident->toChars());
std.container.array linker error on OS X
Hello, Can anyone help me? Maybe someone has seen this before. I'm getting a (strange) error when testing on OSX with dmd 2.066, the build works with the same compiler/library versions on linux x64 and windows x86 but for some reason it's not working here. The repos are: https://github.com/etcimon/event.d https://github.com/etcimon/vibe.d Both repos must be cloned in the same directory, and I use `dub run` in `vibe.d/examples/http_server` Thanks in advance Here's the error: Linking... Undefined symbols for architecture x86_64: _D3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range7opSliceMFZS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range, referenced from: _D4vibe4core7drivers6native17NativeManualEvent6__dtorMFZv in libvibe-d.a(native_31a0_6c7.o) _D4vibe4core7drivers6native17NativeManualEvent4emitMFZv in libvibe-d.a(native_31a0_6c7.o) _D3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range7opSliceMFmmZS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range, referenced from: _D3std9algorithm157__T4copyTS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeTS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeZ4copyFS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeZ11genericImplFS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5RangeZS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range in libvibe-d.a(algorithm_f43_653.o) _D3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array7opSliceMFZS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range, referenced from: _D4vibe4core7drivers6native17NativeManualEvent6__dtorMFZv in libvibe-d.a(native_31a0_6c7.o) _D4vibe4core7drivers6native17NativeManualEvent4emitMFZv in 
libvibe-d.a(native_31a0_6c7.o) _D4vibe4core7drivers6native17NativeManualEvent14removeMySignalMFZv in libvibe-d.a(native_31a0_6c7.o) _D3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array7opSliceMFmmZS3std9container5array38__T5ArrayTC5event6signal11AsyncSignalZ5Array5Range, referenced from: _D4vibe4core7drivers6native17NativeManualEvent14removeMySignalMFZv in libvibe-d.a(native_31a0_6c7.o) ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) --- errorlevel 1 FAIL .dub/build/application-debug-posix.osx-x86_64-dmd-8B9608AA5A28F1D92DA07A70A9BFD1B2/ http-server-example executable
Re: std.container.array linker error on OS X
I have to specify that AsyncSignal is defined as a final shared class, and the array is defined as an Array!AsyncSignal. I'm not sure if the compiler on OSX could act differently and add the shared symbols there?
Re: std.container.array linker error on OS X
On 2014-09-09 4:11 PM, Etienne wrote: On 2014-09-09 3:58 PM, Etienne wrote: I have to specify that AsyncSignal is defined as a final shared class, and the array is defined as a Array!AsyncSignal I'm not sure if the compiler on OSX could act different and add the shared symbols there? Meh, I answered my own question. Substituting it from Array!AsyncSignal to Array!(void*) and casting away worked. Looks like dmd-osx has templating problems with shared declarations. I really had to Think Different on that one.
Re: std.container.array linker error on OS X
On 2014-09-09 3:58 PM, Etienne wrote: I have to specify that AsyncSignal is defined as a final shared class, and the array is defined as a Array!AsyncSignal I'm not sure if the compiler on OSX could act different and add the shared symbols there? Meh, I answered my own question. Substituting it from Array!AsyncSignal to Array!(void*) and casting away worked. Looks like dmd-osx has templating problems with shared declarations.
Both shared local classes, method selection
Hey, I'm trying to build a driver for my native event implementation in vibe (https://github.com/etcimon/event.d/ and https://github.com/etcimon/vibe.d/blob/native-events/source/vibe/core/drivers/native.d) I currently have 2 ways of signaling an event loop to wake up: the AsyncNotifier (event loop is local, this one is lock-less) and the AsyncSignal (event loop in another thread, this one locks). I must modify LibevManualEvent and rename it to NativeManualEvent in the native.d file; this class inherits ManualEvent. What I'd like to do is be able to have: shared NativeManualEvent m_signal; // uses AsyncSignal as implementation NativeManualEvent m_notifier; // uses AsyncNotifier as implementation I'd like to be able to do `static if (is(typeof(this) == shared))` to implement these two versions differently, but I'm not sure if this even works or if it is the best way. Ideally, I'd like to have the same class defined both with and without `shared`, e.g. final shared class NativeManualEvent; final class NativeManualEvent; so the compiler can choose the right one. Is there a defined way of dealing with this problem?
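One way to get "the same class, chosen by qualifier" without static if at all: D lets you overload member functions on the shared qualifier, the same way it does for const, so one class can carry both implementations and the compiler picks based on the qualifier of the reference. A hedged sketch of the shape (the method bodies are placeholders):

```d
final class NativeManualEvent
{
    void emit()          // non-shared overload: thread-local, lock-less path
    {
        // e.g. drive an AsyncNotifier here
    }

    void emit() shared   // shared overload: cross-thread, locking path
    {
        // e.g. drive an AsyncSignal here
    }
}

void demo()
{
    auto local = new NativeManualEvent;
    auto crossThread = new shared NativeManualEvent;
    local.emit();        // resolves to the non-shared overload
    crossThread.emit();  // resolves to the shared overload
}
```

This keeps the selection explicit in the type system rather than hidden behind introspection, which also addresses Dicebot's objection below.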
Re: Both shared local classes, method selection
On 2014-08-29 9:39 AM, Dicebot wrote: based on the shared qualifier? I'd call it a smart-ass one and would never accept it through code review :) Such things really need to be explicit; magic is the worst enemy of multi-threading The other option is to keep the `__gshared ThreadSlot[Thread] gs_signals;` member and add a `NotifierSlot[] m_notifiers;` member, based on a boolean in the constructor `this(bool makeShared = true)`. I wouldn't really want to make another class in the vibe.d interface and make this one forcefully shared, or should I?
Using the delete Keyword /w GC
People have been saying for quite a long time not to use the `delete` keyword on GC-allocated pointers. I've looked extensively through the code inside the engine and even made a few modifications on it/benchmarked it for weeks and I still can't see why it would be wrong. Wouldn't it help avoid collections and make a good hybrid of manual management/collected code? The free lists in the GC engine look quite convenient to use. Any ideas?
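For what it's worth, the sanctioned way to hand a block back to the GC early is core.memory.GC.free, which returns it to the GC's free lists without triggering a collection, with the same dangling-pointer caveat that makes `delete` dangerous. A minimal sketch:

```d
import core.memory : GC;

void demo()
{
    auto buf = new ubyte[4096];
    // ... use buf ...
    GC.free(buf.ptr);   // returns the block to the GC's free lists right away
    // buf is dangling from here on; any other reference to this block
    // is now invalid, which is the whole hazard of manual deallocation
}
```

So the hybrid manual/collected style is possible; the objection to `delete` is less about the free lists and more about the aliasing bugs it invites.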
COFF32 limitations?
I'm wondering about COFF limitations on Windows x86: if I compile a C++ library with MinGW (GCC 4.9) using the new extern(C++, a.b.c), will I be able to statically link it through DMD?
Re: COFF32 limitations?
On 2014-08-22 13:55, Kagamin wrote: Which linker do you plan to use? ld on Linux or Visual Studio's link on Win32
delegates GC allocations
I've been hearing that delegates get a context pointer which will be allocated on the GC. Is this also true for delegates which stay in scope? e.g. int addThree() { int val; void addOne() { val++; } addOne(); addOne(); addOne(); return val; } Will the above function allocate on the GC?
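A closure is only allocated when the delegate can outlive the frame it closes over; a nested function that never escapes just gets a context pointer into the stack frame. You can have the compiler prove the no-allocation case by marking the function @nogc:

```d
int addThree() @nogc
{
    int val;
    void addOne() { val++; }  // context pointer refers to this stack frame
    addOne();
    addOne();
    addOne();
    return val;               // 3; compiles under @nogc, so no closure
                              // was allocated for addOne
}
```

If addOne were returned or stored somewhere that outlives addThree, the compiler would heap-allocate the frame, and the @nogc annotation would then fail to compile, which makes it a handy tripwire.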
Re: delegates GC allocations
On 2014-08-20 5:25 PM, Ola Fosheim Gr wrote: Well, I guess simple recursion could be solved easily too by having a wrapper function that puts the frame pointer in a free callee save register... So, my question inspired a new optimization? :-p
Re: extern (c++) std::function?
On 2014-08-15 6:12 AM, Rémy Mouëza wrote: assignments of anonymous/inline ones. You may want to add a layer of abstraction to the API you expose in D so that user D delegates are used from a second extern (C) delegate itself used by the C++ wrapper: Thanks for the detailed answer, this is the direction I'm going to be taking!
extern (c++) std::function?
I'm looking into making a binding for the C++ API called Botan, and the constructors in it take a std::function. I'm wondering if there's a D equivalent for this binding to work out, or if I have to make a C++ wrapper as well?
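There's no ABI-compatible D equivalent of std::function, so the usual pattern is a small C++ shim that takes a plain function pointer plus a context pointer, with an extern(C) trampoline on the D side re-assembling the delegate. Everything below is a hypothetical sketch: botan_set_callback and its signature are made up, and the real shim has to be written in C++.

```d
// Hypothetical shim entry point, assumed exported by a C++ wrapper:
alias Callback = extern (C) void function(void* ctx, int code);
extern (C) void botan_set_callback(Callback cb, void* ctx);

struct DgBox { void delegate(int) dg; }  // boxes the fat delegate pointer

extern (C) void trampoline(void* ctx, int code)
{
    (cast(DgBox*) ctx).dg(code);  // re-assemble and invoke the D delegate
}

void register(void delegate(int) dg)
{
    import core.memory : GC;
    auto box = new DgBox(dg);     // GC-allocated so the context stays alive
    GC.addRoot(box);              // pin it if the C++ side stores ctx long-term
    botan_set_callback(&trampoline, cast(void*) box);
}
```

On the C++ side the shim would wrap the (cb, ctx) pair in a lambda and hand that to the std::function parameter.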
Re: Deploying Vibe.d applications to OpenShift
On 2014-07-24 3:45 AM, Nikolay wrote: Please let me know how you did it, because I know it's possible with the DIY cartridge they provide you (at least it should be). I tried it some time ago. It is possible, but: - vibe.d requires a lot of memory for project compilation - it is hard to install additional libraries (it is not a usual Linux distribution) You should compile and test on a CentOS 6.3 machine first and then write the cartridge using the wget command to fetch the same libevent package and the compiled vibe.d binary. You won't be able to compile on a cartridge. Your cartridge would look like this: https://github.com/Filirom1/openshift-cartridge-nodejs/blob/master/bin/setup
Windows DLL / Windows service
Hello, I'm looking to compile a server into a Windows service, and there doesn't seem to be any info out there except this: http://forum.dlang.org/thread/c95ngs$1t0n$1...@digitaldaemon.com It doesn't call rt_init; would that be the only thing missing from there? Also, druntime seems to have a Windows DLL module https://github.com/D-Programming-Language/druntime/blob/master/src/core/sys/windows/dll.d with no documentation though. Any idea how to attach/detach with a known example? I'd also like to create a Windows DLL that compiles through DMD/GDC/LDC with extern(C) so that folks from C++ can link with it.
Re: Windows DLL / Windows service
On Saturday, 7 June 2014 at 16:07:47 UTC, Adam D. Ruppe wrote: On Saturday, 7 June 2014 at 14:41:15 UTC, Etienne Cimon wrote: no documentation though. Any idea how to attach/detach with a known example? I'd also like to create a Windows DLL that compiles through DMD/GDC/LDC with extern(C) so that folks from C++ can link with it. Check this out: http://wiki.dlang.org/Win32_DLLs_in_D should help you get started. Looks good! I couldn't find a more recent example for a service, but it should be similar I guess.
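The attach/detach pattern from that wiki page boils down to a DllMain that initializes druntime per process and registers each thread with the GC via the helpers in core.sys.windows.dll. A minimal sketch along those lines:

```d
import core.sys.windows.windows;
import core.sys.windows.dll;

__gshared HINSTANCE g_hInst;

extern (Windows) BOOL DllMain(HINSTANCE hInstance, ULONG ulReason, LPVOID)
{
    switch (ulReason)
    {
        case DLL_PROCESS_ATTACH:
            g_hInst = hInstance;
            dll_process_attach(hInstance, true);  // init runtime, module ctors
            break;
        case DLL_PROCESS_DETACH:
            dll_process_detach(hInstance, true);  // module dtors, shut down GC
            break;
        case DLL_THREAD_ATTACH:
            dll_thread_attach(true, true);        // register thread with druntime
            break;
        case DLL_THREAD_DETACH:
            dll_thread_detach(true, true);
            break;
        default:
            break;
    }
    return true;
}
```

extern(C) exports for C++ consumers then sit alongside this; the service wrapper is a separate concern (RegisterServiceCtrlHandler etc.) layered on top of an ordinary main.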
asserts and release
Are asserts supposed to be evaluated with DMD's -release? I was getting a privileged instruction error (0xC096) caused by an assert while doing some GC programming in druntime
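That symptom matches -release semantics: ordinary asserts are compiled out, but assert(0) / assert(false) is defined as a reachability marker and is lowered to a halt instruction in release builds, and HLT is a privileged instruction, which surfaces on Windows as exactly a privileged-instruction fault. A sketch of the distinction:

```d
void main()
{
    // Ordinary asserts are removed entirely by -release; this never fires there:
    assert(1 + 1 == 2, "checked only in non-release builds");

    // assert(0) is special: with -release it is NOT removed but lowered to
    // HLT (x86), so reaching it raises a privileged-instruction exception
    // instead of throwing an AssertError.
    // assert(0);
}
```

So if a code path that was "unreachable" via assert(0) is actually reached in a release build, you get the fault described above rather than a clean assertion failure.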
Re: foreach over string
On 2014-05-24 12:46, Kagamin wrote: foreach over string apparently iterates over chars by default instead of dchars. Didn't it prefer dchars? string s="weiß"; int i; foreach(c;s)i++; assert(i==5); A string is defined by: alias string = immutable(char)[]; It doesn't add anything to that type (unless you import a library like std.algorithm, which adds many methods thanks to UFCS and generic functions) I believe you are looking for dstring, which is defined by: alias dstring = immutable(dchar)[]; dstring s="weiß"; int i; foreach(c;s)i++; assert(i==4);
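You don't strictly need a dstring to iterate by code point: foreach lets you name the element type, and when you ask for dchar the compiler inserts UTF-8 decoding on the fly. A small sketch of both iteration modes over the same string:

```d
void main()
{
    string s = "weiß";          // 5 bytes of UTF-8: 'ß' encodes as 2 bytes

    int bytes, codePoints;
    foreach (char c; s)  ++bytes;       // iterates UTF-8 code units
    foreach (dchar c; s) ++codePoints;  // decodes to Unicode code points

    assert(bytes == 5);
    assert(codePoints == 4);
}
```

The default element type for foreach over a string is its code unit (char), which is why the undecorated loop counts 5.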
Re: On Concurrency
On 2014-04-18 13:20, Nordlöw wrote: Could someone please give some references to thorough explanations of these latest concurrency mechanisms - Go: Goroutines - Coroutines (Boost): - https://en.wikipedia.org/wiki/Coroutine - http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html - D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html - D: vibe.d and how they relate to the following questions: 1. Is D's Fiber the same as a coroutine? If not, how do they differ? 2. Typical use cases when Fibers are superior to threads/coroutines? 3. What mechanism does/should D's builtin Threadpool ideally use to package and manage computations? 4. I've read that vibe.d has a more lightweight mechanism than what core.thread.Fiber provides. Could someone explain to me the difference? When will this be introduced and will this be a breaking change? 5. And finally how does data sharing/immutability relate to the above questions? I'll admit that I'm not the expert you may be expecting for this, but I can answer 1, 2, and 5 somewhat. Coroutines, fibers, threads, multi-threading and all of this task-management stuff is a very complex science, and most kernels actually rely on it to do their magic; keeping stack frames around with contexts is the core idea. Working with it made me feel like it's much more complex than meta-programming, but I've been reading and getting the hang of it over the last 7 months. Coroutines give you control over what exactly you'd like to keep around once the yield has returned. You make a callback with boost::asio::yield_context or something of the like, and it'll contain exactly what you're expecting, but you're receiving it in another function that expects it as a parameter, making it asynchronous; it can't just resume within the same function because it relies on a callback function, like javascript.
D's fibers are much simplified (we can argue whether that's more or less powerful): you launch them like a thread ( Fiber fib = new Fiber( delegate ) ) and just move around from fiber to fiber with Fiber.call(fiber) and Fiber.yield(). The yield function called within a fiber will stop in the middle of that function's procedures if you want, and it'll just return as if the function ended, but you can rest assured that once another fiber calls that fiber instance again it'll resume with all the stack info restored. They're made possible through some very low-level assembly magic; you can look through the library, it's really impressive. The guy who wrote that must be some kind of wizard. Vibe.d's fibers are built right on top of this core.thread.Fiber (explained above), with the slight difference that they're packed with more power by putting them on top of a kernel-powered event loop rotating infinitely in epoll or windows message queues to resume them (the libevent driver for vibe.d is the best-developed event loop for this). So basically when a new Task is called (which has the Fiber class as a private member) you can yield it with yield() until the kernel wakes it up again with a timer, socket event, signal, etc., and it'll resume right after the yield() call. This is what lets vibe.d have async I/O while remaining procedural, without having to shuffle with mutexes: the fiber is yielded every time it needs to wait on the network sockets and woken again when packets are received, until the expected buffer length is met! I believe this answer is very mediocre and you could go on reading about all I said for months; it's a very wide subject. You can have Task message queues and Task concurrency with Task semaphores. It's like multi-threading in a single thread!
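The call/yield dance described above, in miniature:

```d
import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    auto fib = new Fiber({
        writeln("step 1");
        Fiber.yield();          // suspend; control returns to main
        writeln("step 2");      // resumes here on the next call()
    });

    fib.call();                 // runs until the yield, prints "step 1"
    writeln("back in main");    // fiber is paused, its stack preserved
    fib.call();                 // resumes after the yield, prints "step 2"
}
```

vibe.d's Task is this same mechanism, with the second call() issued by the event loop when the kernel reports the awaited I/O is ready.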
Re: async socket programming in D?
On 2014-04-20 18:44, Bauss wrote: I know the socket has the nonblocking settings, but how would I actually go around using it in D? Is there a specific procedure for it to work correctly etc. I've taken a look at splat.d but it seems to be very outdated, so that's why I went ahead and asked here as I'd probably have to end up writing my own wrapper. I was actually working on this in a new event loop for vibe.d here: https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events I've left it without activity for a week b/c I'm currently busy making a (closed source) SSL library to replace openSSL in my projects, but I'll return to this one project here within a couple weeks at most. It doesn't build yet, but you can probably use some of it at least as a reference, it took me a while to harvest the info on windows and linux kernels for async I/O. Some interesting parts like that which you wanted are found here: https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403 I think I was at handling a new connection or incoming data though, so you won't find accept() or read callbacks, but with it I think it was pretty much ready for async TCP.
Re: async socket programming in D?
On 2014-04-21 00:32, Etienne Cimon wrote: On 2014-04-20 18:44, Bauss wrote: I know the socket has the nonblocking settings, but how would I actually go around using it in D? Is there a specific procedure for it to work correctly etc. I've taken a look at splat.d but it seems to be very outdated, so that's why I went ahead and asked here as I'd probably have to end up writing my own wrapper. I was actually working on this in a new event loop for vibe.d here: https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events I've left it without activity for a week b/c I'm currently busy making a (closed source) SSL library to replace openSSL in my projects, but I'll return to this one project here within a couple weeks at most. It doesn't build yet, but you can probably use some of it at least as a reference, it took me a while to harvest the info on windows and linux kernels for async I/O. Some interesting parts like that which you wanted are found here: https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403 I think I was at handling a new connection or incoming data though, so you won't find accept() or read callbacks, but with it I think it was pretty much ready for async TCP. But of course, nothing stops you from using vibe.d with libevent ;)
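Without an event-loop library, the bare std.socket version of what was asked is a non-blocking listener polled with Socket.select. A minimal sketch (port and timeout are arbitrary; a real server would also track accepted clients in the set):

```d
import std.socket;
import core.time : msecs;

void main()
{
    auto listener = new TcpSocket;
    listener.blocking = false;  // accept() will no longer block
    listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, true);
    listener.bind(new InternetAddress("127.0.0.1", 8080));
    listener.listen(10);

    auto readSet = new SocketSet;
    for (;;)
    {
        readSet.reset();
        readSet.add(listener);
        // Wait at most 100 ms for readiness instead of blocking forever:
        if (Socket.select(readSet, null, null, 100.msecs) > 0
            && readSet.isSet(listener))
        {
            auto client = listener.accept();
            client.blocking = false;
            // ... add client to readSet on later iterations and read from it
        }
        // ... do other work between polls
    }
}
```

This is essentially what the epoll-based loop linked above does, except the kernel wakes it up instead of it polling on a timeout.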
sending a delegate through extern (C)
Hi, I'm trying to send a delegate to a modified version of druntime's GC as follows: struct GC { static void onCollect(void* dg) { gc_onCollect(cast(void*)dg); } ... my code: extern (C) extern __gshared void collecting(void* p, size_t sz){ import std.stdio; writeln(p, sz); } GC.onCollect(cast(void*)collecting); and the caller in gc.d casts this to a function(void*, size_t) before sending the parameters. I get the following: object.Error: Access Violation 0x00536A9D in gc_onCollect 0x0051FA18 in D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv 0x0051F9EB in void rt.dmain2._d_run_main(int, char**, extern (C) int function(ch ar[][])*).runAll() 0x0051F904 in _d_run_main 0x00446258 in main 0x0055C991 in mainCRTStartup 0x7513495D in BaseThreadInitThunk 0x770B98EE in RtlInitializeExceptionChain 0x770B98C4 in RtlInitializeExceptionChain There are still tasks running at exit. Error executing command run: Program exited with code 1 I'm wondering, why doesn't this execute the callback? I've tried many combinations of types but it looks like the pointer isn't accessible... Thanks!
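The access violation is most likely a type/linkage mismatch: the callback should be a plain extern(C) function (not __gshared), taken by address, and carried across the boundary with its real function-pointer type instead of being laundered through void*. A hedged sketch of the shape — gc_onCollect and its signature belong to the modified runtime and are assumed here:

```d
// Callback type shared by both sides (assumed to match the runtime hook):
alias CollectCallback = extern (C) void function(void* p, size_t sz);

// The hook in the modified druntime (signature assumed):
extern (C) void gc_onCollect(CollectCallback cb);

extern (C) void collecting(void* p, size_t sz)
{
    import core.stdc.stdio : printf;   // writeln could re-enter the GC here
    printf("collecting %p (%u bytes)\n", p, cast(uint) sz);
}

void install()
{
    gc_onCollect(&collecting);  // take the address; no cast(void*) round-trip
}
```

Casting the function through void* hides any mismatch in calling convention or parameter types from the compiler, which is exactly the kind of error that only shows up as a crash at call time.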
Re: GC allocation issue
On 2014-03-21 2:53 AM, monarch_dodra wrote:

On Friday, 21 March 2014 at 00:56:22 UTC, Etienne wrote: I'm trying to store a copy of strings for long-running processes with malloc. I tried using emplace but the copy gets deleted by the GC. Any idea why?

Could you show the snippet where you used emplace? I'd like to know how you are using it. In particular, where you are emplacing, and *what*: the slice, or the slice contents?

This line does the copying: https://github.com/globecsys/cache.d/blob/master/chd/table.d#L1089

I don't think it's the memory-copying algorithm anymore, however. The GC crashes altogether during fullcollect(); the logs give me this:

```
cache-d_d.exe!gc@gc@Gcx@mark(void * this, void * nRecurse, int ptop) Line 2266 C++
cache-d_d.exe!gc@gc@Gcx@mark(void * this, void * ptop) Line 2249 C++
cache-d_d.exe!gc@gc@Gcx@fullcollect() Line 2454 C++
cache-d_d.exe!gc@gc@GC@mallocNoSync(unsigned int this, unsigned int alloc_size, unsigned int * alloc_size) Line 458 C++
cache-d_d.exe!gc@gc@GC@malloc(unsigned int this, unsigned int alloc_size, unsigned int * bits) Line 413 C++
...
```

With ptop = 03D8F030, pbot = 03E4F030. They both point to invalid memory. It looks like a really wide range, too; the usual would be 037CCB80 - 037CCBA0 or such. I don't know how to find out where they come from... Maybe I could add an assert on that specific value in druntime.
Re: GC allocation issue
On 2014-03-21 9:36 AM, Etienne wrote: With ptop = 03D8F030, pbot = 03E4F030. They both point to invalid memory. It looks like a really wide range, too; the usual would be 037CCB80 - 037CCBA0 or such. I don't know how to find out where they come from... Maybe I could add an assert on that specific value in druntime.

Looks like the range of the string[] keys array; it gets pretty big after adding 1000s of strings.

```
+GC.addRange(p = 03EA0AB0, sz = 0x38), p + sz = 03EA0AE8 set: 209499732595 = ¨98303126
+GC.addRange(p = 03EA0B40, sz = 0x38), p + sz = 03EA0B78 set: 6491851329 = ¨50107378
+GC.addRange(p = 03EA0BD0, sz = 0x38), p + sz = 03EA0C08 set: 262797465895 = ¨14438090
+GC.addRange(p = 03EA0C60, sz = 0x38), p + sz = 03EA0C98 set: 95992076217 = ¨65000864
+GC.addRange(p = 03EA0CF0, sz = 0x38), p + sz = 03EA0D28
+GC.addRange(p = 03EA0D50, sz = 0x3), p + sz = 03ED0D50
```

It crashes when sz approaches 0x18; it looks like (my best guess) the resized array doesn't get allocated, but the GC still tries to scan it.
Re: GC allocation issue
On 2014-03-21 10:34 AM, Etienne wrote: It crashes when sz approaches 0x18; it looks like (my best guess) the resized array doesn't get allocated, but the GC still tries to scan it.

OK, I found it in the manual implementation of a malloc-based HashMap. The right way to debug this was, sadly, to add a lot of printf calls and a few asserts in druntime, and to redirect stdout to a file from the shell (`./exe > logoutput.txt`). The druntime win32.mak doesn't have a debug build, so I had to add `-debug -g` in there to get symbols and make the sources show up instead of the disassembly in VisualD.

In this case, the logs showed the GC's mark() was failing on wide ranges, so I added an assert in addRange to make it throw when that range was added, and that finally gave me the call stack of the culprit. The issue was that a malloc'd range was (maybe) not being properly initialized before being added to the GC.

https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/utils/hashmap.d#L221
https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/utils/memory.d#L153

In this case, ptr isn't null and the range existed, but there's still an access violation from the GC for some reason. I'll keep searching for the root cause, but it doesn't seem to be a GC issue anymore; though the debugging procedure could use some documentation. Thanks
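As an aside, the contract being debugged in this thread is that every malloc'd block that can hold pointers into GC memory must be registered before pointers are stored into it, and unregistered before it is freed or realloc'd. A minimal sketch of that pairing (my own illustration, not the vibe.d allocator):

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    // Allocate a block outside the GC that will store GC pointers.
    enum n = 4;
    auto block = cast(string*) malloc(n * string.sizeof);

    // Register it BEFORE storing GC pointers into it, so a collection
    // that runs in between can't miss the references and free the strings.
    GC.addRange(block, n * string.sizeof);

    foreach (i; 0 .. n)
        block[i] = null;          // initialize: the GC will scan these slots
    block[0] = "a GC string".idup;

    // On realloc, removeRange the old address and addRange the new one;
    // a stale or mis-sized range is exactly the kind of wide, invalid
    // span that showed up in the addRange logs earlier in the thread.

    GC.removeRange(block); // unregister BEFORE freeing
    free(block);
}
```

Initializing the block matters too: `addRange` on uninitialized memory means the next collection scans garbage bytes as if they were pointers.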
bool Associative Array Synchronized
I'd like to cowboy it on an AA that looks like this: `__gshared bool[string] m_mutex;` I think it'll be much faster for my code, because this AA could need to be checked and switched possibly millions of times per second, and I wouldn't want to slow it down with a mutex protecting it. I'm thinking the bool would be synchronized at the hardware level anyway, since it's just a bit being flipped or returned. Am I right to assume this, and (maybe an assembly guru can answer) is it safe to use? Thanks
Re: bool Associative Array Synchronized
On 2014-03-20 1:52 PM, Steven Schveighoffer wrote:

On Thu, 20 Mar 2014 13:43:58 -0400, Etienne etci...@gmail.com wrote: I'd like to cowboy it on an AA that looks like this: `__gshared bool[string] m_mutex;` I think it'll be much faster for my code because this AA could need to be checked and switched possibly millions of times per second and I wouldn't want to slow it down with a mutex protecting it. I'm thinking the bool would be synchronized at the hardware level anyway, since it's just a bit being flipped or returned. Am I right to assume this and (maybe an assembly guru can answer) is it safe to use?

No, it's not safe. With memory reordering, there is no "safe" unless you use mutexes or low-level atomics. Long story short: the compiler, the processor, or the memory cache can effectively reorder operations, making one thread see things happen in a different order than they are written/executed on another thread. There are no guarantees. -Steve

Right, I was assuming it was always ordered, but modern processor pipelines are different, I guess.
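A minimal sketch of the low-level-atomics route Steve mentions, using `core.atomic` on a plain shared flag. Note this only makes a single flag safe; it does not protect the AA itself, which still needs a lock around insertions and removals:

```d
import core.atomic : atomicLoad, atomicStore, cas;

shared bool busy; // defaults to false

// Try to take the flag: atomically flip false -> true.
// cas() succeeds for exactly one thread in a race, and supplies the
// ordering guarantees a bare __gshared bool lacks.
bool tryAcquire()
{
    return cas(&busy, false, true);
}

void release()
{
    atomicStore(busy, false);
}

void main()
{
    assert(tryAcquire());    // first caller wins
    assert(!tryAcquire());   // second attempt fails while the flag is held
    release();
    assert(!atomicLoad(busy));
}
```

This is essentially a spinlock building block; for anything contended, `core.sync.mutex.Mutex` or `synchronized` is the simpler and usually fast-enough answer.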
Re: bool Associative Array Synchronized
On 2014-03-20 1:58 PM, John Colvin wrote: Also, atomicity is not a strong enough guarantee for implementing a mutex, which is what I assume you are trying to do. You need ordering guarantees as well. Heh. Will do!
Re: bool Associative Array Synchronized
On 2014-03-20 2:47 PM, H. S. Teoh wrote:

On Thu, Mar 20, 2014 at 06:39:10PM +0000, Chris Williams wrote: On Thursday, 20 March 2014 at 18:06:18 UTC, Etienne wrote: Right, I was assuming it was always ordered, but modern processor pipelines are different I guess.

Even without rearranging the order of your code, your bit exists in RAM, but all the logic takes place in a CPU register, meaning that any time you operate on the bit, there are at least two copies. When the CPU switches threads (which could be at any point), it empties the CPU into RAM. If the thread it's switching to then tries to interact with the bit, it's going to create a third copy. Now, whichever of the two threads is slower to write back to the original location is going to smash the other's.

Furthermore, the CPU does not access bits directly; it processes them as (at least) bytes. To set/clear a bit in memory location X, the CPU has to first read X into a register (at least 8 bits long), update the bit, and write it back into memory. If two threads are simultaneously operating on different bits that reside in the same byte, whichever CPU runs last will smash whatever the first CPU wrote.

Let's say you start with 0000b in location X, and CPU1 wants to set the first bit and CPU2 wants to set the second bit. Both CPUs initially read 0000b into their registers; then the first CPU sets the first bit, so it becomes 0001b (in register), and the second CPU sets the second bit, so it becomes 0010b (in register). Now CPU1 writes 0001b back to memory, followed by CPU2 writing 0010b back to memory. Now what CPU1 did has been smashed by CPU2's write.

Now, the current AA implementation doesn't actually pack bits like this, so this particular problem doesn't actually happen, but other similar problems will occur if you add/remove keys from the AA -- two CPUs will try to update internal AA pointers simultaneously and end up trashing it.
Hmm, my misunderstanding comes from not taking into account everything I knew about CPU caches. As a side note, I think I heard somewhere that abstractions don't flatten out the learning curve at all; it's like a bigger gun, but you still need to know all the basics to avoid shooting yourself with it. So true. Well, thanks for the explanation there :)
GC allocation issue
I'm running some tests on a cache store where I planned to use only malloc for the values being stored; I'm hoping to eliminate the GC in 95% of the program and keep it only for actively used items.

My problem is: when the program reaches 40MB, it suddenly goes down to 0.9MB and blocks. From every resource I've read, my understanding is that the GC will stop all threads and search for pointers only in data that was allocated by the GC (using addRange or addRoot to extend its reach). This means that in the worst-case scenario there could be leaks, but I'm seeing the data being deleted by the GC, so I'm a little stumped here. What am I missing?

I'm using FreeListAlloc from here: https://github.com/rejectedsoftware/vibe.d/blob/master/source/vibe/utils/memory.d#L152
in here: https://github.com/globecsys/cache.d/blob/master/chd/table.d#L1087
and this is how I made it crash: https://github.com/globecsys/cache.d/blob/master/chd/connection.d#L550

I know it's the GC, because using GC.disable() fixes it, so I'm really only asking whether the GC has a habit of deleting mallocated data like this.
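For reference, the usual failure mode in this kind of setup isn't the GC freeing malloc'd blocks (it never touches memory it didn't allocate) but the GC collecting GC-owned data whose only reference lives inside an unregistered malloc'd block. A minimal sketch of the safe half of that distinction (hypothetical helper, not the cache.d code):

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;
import core.stdc.string : memcpy;

// Copy a string's bytes into malloc'd memory so the data itself can
// never be collected -- the GC does not scan or free malloc'd blocks.
char[] mallocCopy(const(char)[] s)
{
    auto p = cast(char*) malloc(s.length);
    memcpy(p, s.ptr, s.length);
    return p[0 .. s.length];
}

void main()
{
    auto copy = mallocCopy("persistent value");
    GC.collect();                       // cannot touch the malloc'd bytes
    assert(copy == "persistent value");
    free(copy.ptr);

    // By contrast: a GC-allocated slice whose ONLY reference is stored
    // inside a malloc'd struct needs GC.addRange on that struct, or the
    // next collection is free to reclaim the slice out from under you.
}
```

So the question to ask of the FreeListAlloc path is whether any GC pointers end up stored in its blocks without a matching `GC.addRange`.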
Re: GC allocation issue
On 2014-03-20 8:39 PM, bearophile wrote:

Etienne: I'm running some tests on a cache store where I planned to use only malloc for the values being stored, I'm hoping to eliminate the GC in 95% of the program, but to keep it only for actively used items..

Usually 95%-100% of a D program uses the GC and the other 0%-5% uses malloc :-) Bye, bearophile

I'm trying to store a copy of strings for long-running processes with malloc. I tried using emplace, but the copy gets deleted by the GC. Any idea why?
Re: GC allocation issue
On 2014-03-20 21:08, Adam D. Ruppe wrote:

On Friday, 21 March 2014 at 00:56:22 UTC, Etienne wrote: I tried using emplace but the copy gets deleted by the GC. Any idea why?

That's extremely unlikely; the GC doesn't know how to free manually allocated things. Are you sure that's where the crash happens? Taking a really quick look at your code, this line raises a red flag: https://github.com/globecsys/cache.d/blob/master/chd/table.d#L55 Class destructors in D aren't allowed to reference GC-allocated memory through their members. Accessing that string in the dtor could be a problem that goes away with GC.disable too.

Yes, you're right, I may have a lack of understanding about destructors; I'll review this. I managed to generate a VisualD project, and the debugger confirms the program crashes in the GC, because it has a random call stack for everything under fullcollect():

```
cache-d_d.exe!gc@gc@Gcx@mark() C++
cache-d_d.exe!gc@gc@Gcx@fullcollect() C++
cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2389 C++
[External Code]
cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2383 C++
```

I have no methodology for debugging under these circumstances; do you know of anything else I can do other than manually reviewing the pathways in the source code?
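To illustrate the destructor rule Adam points at (a sketch with made-up names, not the cache.d class): when the GC finalizes an object during a collection, GC-owned members may already have been reclaimed, so touching them in `~this()` is undefined behavior. The safe pattern is to release only non-GC resources there:

```d
import core.stdc.stdlib : malloc, free;

class Entry
{
    string key;     // GC-allocated member
    void*  buffer;  // malloc'd resource

    this(string k)
    {
        key = k;
        buffer = malloc(64);
    }

    ~this()
    {
        // UNSAFE: when the GC runs this finalizer, `key` may already
        // have been collected -- reading or freeing GC members here is
        // exactly the bug class described above.
        //     writeln(key);

        // SAFE: releasing memory the GC doesn't manage is fine.
        free(buffer);
        buffer = null;
    }
}

void main()
{
    auto e = new Entry("hello");
    assert(e.buffer !is null);
    destroy(e);               // deterministic destruction runs the dtor now
    assert(e.buffer is null); // malloc'd buffer released, object zeroed
}
```

Calling `destroy` (or scoping the object) makes destruction deterministic, at which point members are still valid; it's only the GC-triggered finalization order that is unspecified.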