Building a wasm library, need to override .object._d_newitemT!T
Hello, I've been developing a library[1] based on spasm, for which I've implemented the druntime; it currently compiles web apps properly with TypeInfo, no GC, and even diet templates. I'm having a problem implementing the `new` keyword so that I can start importing more libraries with minimal changes. However, LDC calls .object._d_newitemT!T from the original druntime - which I need for compile-time function execution - but my implementation in `module object` doesn't override it in the compiler, and the original implementation tries to import core.stdc.time, which errors out in wasm (with good reason). Is there a compiler flag that I can use to override module templates? Thanks in advance. [1] https://github.com/etcimon/libwasm
Re: Invalid assembler comparison
On Friday, 23 October 2015 at 15:17:43 UTC, Etienne Cimon wrote: Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP command, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes? I found out that there was indeed a signed comparison going on behind the scenes: jnl treats the operands as signed integers. I had to use jnb, the unsigned equivalent: http://stackoverflow.com/questions/27284895/how-to-compare-a-signed-value-and-an-unsigned-value-in-x86-assembly
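For anyone landing here: the same signed-versus-unsigned distinction can be reproduced in plain D, without assembly, by reinterpreting the unsigned value as signed (a minimal sketch using one of the register values from the post):

```d
void main()
{
    ulong a = 9366584610601550696UL; // top bit set: negative when viewed as a long
    ulong b = 1;

    // Unsigned comparison - what jnb/jae test after a cmp:
    assert(a >= b);

    // Signed comparison - what jnl/jge test after the very same cmp:
    assert(cast(long)a < cast(long)b);
}
```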
Invalid assembler comparison
Hello, I've been trying to understand this for a while now: https://github.com/etcimon/botan/blob/master/source/botan/math/mp/mp_core.d#L765 This comparison (looking at it with windbg during cmp operation) has these invalid values in the respective registers: rdx: 9366584610601550696 r15: 8407293697099479287 When moving them into a ulong variable with a mov [R11], RDX before the CMP command, I get: RDX: 7549031027420429441 R15: 17850297365717953652 Which are the valid values. Any idea how these values could have gotten corrupted this way? Is there a signed integer conversion going on behind the scenes?
Re: Dangular - D Rest server + Angular frontend
On Sunday, 19 July 2015 at 19:54:31 UTC, Jarl André Hübenthal wrote: Hi I have created a personal project that aims to learn myself more about D/vibe.d and to create a simple and easy to grasp example on Mongo -> Vibe -> Angular. Nice! I'm also working on a project like this, using some paid AngularJS admin template from ThemeForest, although I'm powering it with vibe.d / Redis and PostgreSQL 9.4 with its new json type. controllers, but it works as a bootstrap example. I am thinking to create another view that uses ReactJS because it's much better than Angular. ReactJS has been very promising and I hear a lot of hype around it. However, I believe Angular lived through its hype and is now more mature in plenty of areas; for example, its Ionic framework for cross-mobile apps is reaching its golden age, with seemingly fluid performance on every device! With vibe.d being a cross-platform framework, you'd even be able to build a web application that communicates with a client-side OS API, effectively closing the gap between web dev and software dev. So, there are two structs, but I really only want to have one. Should I use classes for this? Inheritance? Vibe.d is famous for its compile-time evaluation: it understands structures as if through reflection while producing the most optimized machine code possible. You won't be dealing with interfaces in this case; you should look at the UDA API instead: http://vibed.org/api/vibe.data.serialization/. For example, if your field might not always be in the JSON, you can mark it @optional.
struct PersonDoc { @optional BsonObjectID _id; ulong id; string firstName; string lastName; } You can also "compile-time override" the default serialization/deserialization instructions for a struct by defining the function signatures specified here: http://vibed.org/api/vibe.data.json/serializeToJson or the example here: https://github.com/rejectedsoftware/vibe.d/blob/master/examples/serialization/source/app.d This being said, your questions are most likely to be answered if you ask at http://forum.rejectedsoftware.com
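As a quick illustration of @optional (a minimal sketch; the nickname field is made up, but parseJsonString and deserializeJson are the vibe.data.json entry points):

```d
import vibe.data.json;
import vibe.data.serialization : optional;

struct PersonDoc
{
    @optional string nickname; // tolerated if absent from the input JSON
    ulong id;
    string firstName;
    string lastName;
}

void main()
{
    auto json = parseJsonString(
        `{"id": 1, "firstName": "Jane", "lastName": "Doe"}`);
    auto p = deserializeJson!PersonDoc(json);
    assert(p.id == 1 && p.nickname is null); // missing field stays at .init
}
```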
Re: goroutines vs vibe.d tasks
On Wednesday, 1 July 2015 at 18:09:19 UTC, Mathias Lang wrote: On Tuesday, 30 June 2015 at 15:18:36 UTC, Jack Applegame wrote: Just creating a bunch (10k) of sleeping (for 100 msecs) goroutines/tasks. Compilers go: go version go1.4.2 linux/amd64 vibe.d: DMD64 D Compiler v2.067.1 linux/amd64, vibe.d 0.7.23 Code go: http://pastebin.com/2zBnGBpt vibe.d: http://pastebin.com/JkpwSe47 go version built with "go build test.go" vibe.d version built with "dub build --build=release test.d" Results on my machine: go: 168.736462ms (overhead ~ 68ms) vibe.d: 1944ms (overhead ~ 1844ms) Why is creating vibe.d tasks so slow (more than 10 times)? In your dub.json, can you use the following: "subConfigurations": { "vibe-d": "libasync" }, "dependencies": { "vibe-d": "~>0.7.24-beta.3" }, Turns out it makes it much faster on my machine (371ms vs 1474ms). I guess it could be a good thing to investigate if we can make it the default in 0.7.25. I don't benchmark my code frequently, but that's definitely flattering :) I hope we can see a release of LDC 2.067.0 soon so that I can optimize the code further. I gave up on 2.066 a while back.
Re: CPU cores & threads & fibers
On 2015-06-14 08:35, Robert M. Münch wrote: Hi, just to x-check if I have the correct understanding: fibers = look parallel, are sequential => use 1 CPU core threads = look parallel, are parallel => use several CPU cores Is that right? Yes, however nothing really guarantees that multi-threading = multi-core. The kernel reserves the right, and will most likely do everything possible, to keep your process core-local to use caching efficiently. There are a few ways around that, though: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686247%28v=vs.85%29.aspx http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
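A rough Linux-only sketch of pinning the calling thread to one core (assuming your druntime version exposes sched_setaffinity and the cpu_set_t helpers in core.sys.linux.sched - check before relying on it):

```d
// Linux-only; these bindings are an assumption about your druntime version.
import core.sys.linux.sched;

void pinCurrentThreadToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);       // clear the CPU mask
    CPU_SET(core, &set);  // allow only the requested core
    // A pid of 0 means "the calling thread".
    sched_setaffinity(0, cpu_set_t.sizeof, &set);
}
```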
Re: mscoff x86 invalid pointers
On 2015-05-10 03:54, Baz wrote: On Sunday, 10 May 2015 at 04:16:45 UTC, Etienne Cimon wrote: On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configuration. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure, which is a global variable that already has a pointer value for some reason... I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! you can always use travis-ci to do such a job for you ;) doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak... Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again. Could you tell me the way to compile druntime & phobos as 32-bit COFF? Would you have a custom win32.mak to share? Thx. I edited win64.mak: you need to change it to MODEL=32mscoff and remove all occurrences of amd64/ in the file (there are 3), for both druntime and phobos. Save this as win32mscoff.mak. You then need to place the resulting phobos32mscoff.lib into dmd2/windows/lib32mscoff/ (the folder doesn't exist by default).
Re: mscoff x86 invalid pointers
On 2015-05-09 05:44, Baz wrote: On Saturday, 9 May 2015 at 06:21:11 UTC, extrawurst wrote: On Saturday, 9 May 2015 at 00:16:28 UTC, Etienne wrote: I'm trying to compile a library that I think used to work with the -m32mscoff flag before I reset my machine configuration. https://github.com/etcimon/memutils Whenever I run `dub test --config=32mscoff` it gives me an assertion failure, which is a global variable that already has a pointer value for some reason... I'm wondering if someone here could test this out on their machine with v2.067.1? There's no reason why this shouldn't work; it runs fine in DMD32/optlink and DMD64/mscoff, just not in DMD32/mscoff. Thanks! you can always use travis-ci to do such a job for you ;) doesn't -m32mscoff require phobos to be compiled as COFF too? I think that travis uses the official releases (win32 releases have phobos as OMF) so he can't run the unittests like that... The dark side of the story is that you have to recompile phobos by hand with -m32mscoff... I'm not even sure that there is an option for this in win32.mak... Meh, I ended up upgrading to 2.068 and everything went well. I clearly remember 2.067.1 working, but I spent a whole day recompiling druntime/phobos COFF versions in every configuration possible and never got it working again.
Re: Is someone still using or maintaining std.xml2 aka xmlp?
On 2014-11-28 15:15, Tobias Pankrath wrote: Old project link is http://www.dsource.org/projects/xmlp The launchpad and dsource repositories are dead for two years now. Anyone using it? Nope. I found kXML while searching for the same, it has everything I've needed up to spec. I'm maintaining a fork that works with DMD 2.066+ here: https://github.com/etcimon/kxml
Re: jsnode crypto createHmac & createHash
Keep an eye on this one: Botan in D, https://github.com/etcimon/botan Should be finished in a couple weeks. e.g. from the TLS module: auto hmac = get_mac("HMAC(SHA-256)"); hmac.set_key(secret_key); hmac.update_be(client_hello_bits.length); hmac.update(client_hello_bits); hmac.update_be(client_identity.length); hmac.update(client_identity); m_cookie = unlock(hmac.flush());
Re: Reducing Pegged ASTs
On 2014-11-25 10:12, "Nordlöw" wrote: Is there a way to (on the fly) reduce Pegged parse results such as I've made an ASN.1 parser using Pegged; the tree map isn't so complex and does the reducing as well. https://github.com/globecsys/asn1.d Most of the meat is in asn1/generator/ In short, it's much easier when you put all the info in the same object, in this case an AEntity: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L239 When the whole tree is done that way, you can easily traverse it and move nodes like a linked list. I've made a helper function here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/asntree.d#L10 You can see it being used here: https://github.com/globecsys/asn1.d/blob/38bd1907498cf69a08604a96394892416f7aa3bd/asn1/generator/asntree.d#L109 and then here: https://github.com/globecsys/asn1.d/blob/master/asn1/generator/generator.d#L500 Also, the garbage collector really makes it easy to optimize memory usage, i.e. when you use a node in multiple places and need to re-order the tree elements. I still have a bunch of work to do, and I intend on replacing Botan's ASN.1 functionality with this and a DER serialization module. Beware: the Pegged grammar isn't insanely fast to parse because of the recursion limits, which I implemented very inefficiently because I was too lazy to translate the standard ASN.1 BNF into PEG... Also, the biggest bottleneck would be error strings. For 1-2 months of work (incl. learning ASN.1), I'm very satisfied with the technology involved and would recommend intermediate structures with traversal helpers.
Re: Pragma mangle and D shared objects
On 2014-10-26 14:25, Etienne Cimon wrote: On 2014-10-25 23:31, H. S. Teoh via Digitalmars-d-learn wrote: Hmm. You can probably use __traits(getAllMembers...) to introspect a library module at compile-time and build a hash based on that, so that it's completely automated. If you have this available as a mixin, you could just mixin(exportLibrarySymbols()) in your module to produce the hash. Exactly, or I could also make it export specific functions into the hashmap, a little like a router. It seems like a very decent option. I found an elegant solution for dealing with dynamic libraries: https://github.com/bitwise-github/D-Reflection
Re: Dart bindings for D?
On 2014-10-29 18:12, Laeeth Isharc wrote: Rationale for using Dart in combination with D is that I am not thrilled about learning or writing in Javascript, yet one has to do processing on the client in some language, and there seem very few viable alternatives for that. It would be nice to run D from front to back, but at least Dart has C-like syntax and is reasonably well thought out. I actually thought this over in the past and posted my research here: http://forum.dlang.org/thread/ll38cn$ojv$1...@digitalmars.com It would be awesome to write front-end tools in D. However, there won't be much browser support unless you're backed by Google or Microsoft. What's going to replace JavaScript? Will it be TypeScript? asm.js? Dart? PNaCl? The solution is obviously to compile from D to the target language. But what's the real advantage? Re-using some back-end MVC libraries? All the communication is actually done through sockets; there's never any real interaction between the back-end and front-end. Also, you realize the front-end stuff is so full of community contributions that you're actually shooting yourself in the foot if you divert away from the more popular languages and methodologies. So, I settled on JavaScript, and I shop for libraries instead of writing anything at all. There's so much diversity in the front-end world that a few hundred lines of code at most are going to be necessary for an original piece of work. Heh.
Re: Using imm8 through inline assembler
"imm8" is not a register. "imm" stands for "immediate", i.e. a constant, hard-coded value. E.g.: asm { vpermilps YMM0, YMM1, 0 /* no idea what would be a meaningful value */; } Oh, well, that makes sense. This means I should use string mixins to insert the actual value. I couldn't run AVX2 either, I have to work on this blindly until I get my hands on a new Xeon E5-2620 3rd generation processor. :/ > I think my CPU doesn't have AVX, so I can't test anything beyond compiling. When running all I get is "Illegal instruction (core dumped)". My error is vmovdqu, I get: Error: AVX vector types not supported Anyone know what this is? I can't find a reference to it in the dmd compiler code. Also, 'vpunpcklqdq _ymm, _ymm, _ymm' is undefined in DMD, so I get : avx2.d(23): Error: bad type/size of operands 'vpunpcklqdq' Looks like abunch of work is needed to get avx2 working...
Re: Pragma mangle and D shared objects
On 2014-10-25 23:31, H. S. Teoh via Digitalmars-d-learn wrote: Hmm. You can probably use __traits(getAllMembers...) to introspect a library module at compile-time and build a hash based on that, so that it's completely automated. If you have this available as a mixin, you could just mixin(exportLibrarySymbols()) in your module to produce the hash. Exactly, or I could also make it export specific functions into the hashmap, a little like a router. It seems like a very decent option.
Re: Pragma mangle and D shared objects
On 2014-10-25 21:26, H. S. Teoh via Digitalmars-d-learn wrote: Not sure what nm uses, but a lot of posix tools for manipulating object files are based on binutils, which understands the local system's object file format and deal directly with the binary representation. The problem is, I don't know of any *standard* system functions that can do this, so you'd have to rely on OS-specific stuff to make it work, which is less than ideal. Which makes it better to export the mangling into a container at compile-time! That way, you can build a standard interface into DLLs so that other D application know what they can call =)
Re: Pragma mangle and D shared objects
On 2014-10-25 11:56, Etienne Cimon wrote: That looks like exactly the solution I need, very clever. It'll take some time to wrap my head around it :-P Just brainstorming here, but I think every dynamic library should hold a utility container (hash map?) that searches for and returns the mangled names in itself using regex match. This container would always be in the same module/function name in every dynamic library. The mangling list would load itself using introspection through a `shared static this()`. For each module, for each class, insert mangling in the hashmap... This seems ideal because then you basically have libraries that document themselves at runtime. Of course, the return type and arguments would have to be decided in advance, but otherwise it allows loading and use of (possibly remote and unknown) DLLs, in a very simple way.
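A minimal sketch of that idea (the entry-point signature is an arbitrary assumption; a real framework would fix one convention in advance):

```d
// Each dynamic library registers plainly-named entry points at load time,
// so a host can look them up without ever touching D-mangled names.
alias PluginFn = extern(C) void function();

__gshared PluginFn[string] registry;

extern(C) void getInstanceImpl()
{
    // ... construct and hand out the plugin object ...
}

shared static this()
{
    // Runs when the shared library is loaded into the process.
    registry["getInstance"] = &getInstanceImpl;
}
```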
Re: Pragma mangle and D shared objects
That looks like exactly the solution I need, very clever. It'll take some time to wrap my head around it :-P
Pragma mangle and D shared objects
I haven't been able to find much about pragma mangle. I'd like to do the following: http://forum.dlang.org/thread/hznsrmviciaeirqkj...@forum.dlang.org#post-zhxnqqubyudteycwudzz:40forum.dlang.org The part I find ugly is this: void* vp = dlsym(lib, "_D6plugin11getInstanceFZC2bc2Bc\0".ptr); I want to write a framework that stores a dynamic library name and symbol to execute, and downloads the dynamic library if it's not available. This would be in a long-running server/networking application, and needs to be simple to use. The mangling makes it less obvious for the programmer writing a plugin. Does mangle make it possible to change this to dlsym(lib, "myOwnMangledName"), or would it still have strange symbols? Also, I've never seen the thunkEBX change merged from here: http://forum.dlang.org/thread/hznsrmviciaeirqkj...@forum.dlang.org?page=2#post-lg2lqi:241ga3:241:40digitalmars.com
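If pragma(mangle) behaves as documented, the dlsym side does shrink to a plain name; a minimal sketch (names hypothetical):

```d
// The exported symbol becomes the literal "getInstance",
// so the host side is just: dlsym(lib, "getInstance")
pragma(mangle, "getInstance")
Object getInstance()
{
    return null; // construct and return the plugin instance here
}
```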
Re: classInstanceSize and vtable
On 2014-10-23 20:12, bearophile wrote: In D all class instances contain a pointer to the class and a monitor pointer. The table is used for run-time reflection, and for standard virtual methods like toString, etc. Bye, bearophile So what's the point of making a class or its methods final? Does it only free some space and allow inlining to take place?
classInstanceSize and vtable
I'm trying to figure out the size difference between a final class and a class (which carries a vtable pointer). import std.stdio; class A { void print(){} } final class B { void print(){} } void main(){ writeln(__traits(classInstanceSize, A)); writeln(__traits(classInstanceSize, B)); } Returns: 8 8 I'm not sure, why does a final class carry a vtable pointer?
Re: Translating inline ASM from C++
On 2014-10-15 19:47, Etienne Cimon wrote: The D syntax for inline assembly is Intel style, whereas the GCC syntax is AT&T style. This guide seems to show exactly how to translate from C++ to D. I'm posting this research for anyone searching the forums for a solution. I found a better guide to D assembly on the Digital Mars website: http://www.digitalmars.com/ctg/ctgInlineAsm.html It says it emulates Borland Turbo Assembler. Here's a manual I found: http://www.csn.ul.ie/~darkstar/assembler/manual/ Chapter 6 is the most interesting: http://www.csn.ul.ie/~darkstar/assembler/manual/a06a.txt Also, this works in DMD but is also compatible with LDC and GDC; they all support the D inline assembler syntax for x86 and x86_64: https://github.com/ldc-developers/ldc/blob/master/gen/asm-x86.h
Re: Translating inline ASM from C++
On 2014-10-15 09:48, Etienne wrote: I currently only need to translate these commented statements. If anyone I found the most useful information here in case anyone is wondering: http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html The D syntax for inline assembly is Intel style, whereas the GCC syntax is AT&T style. This guide seems to show exactly how to translate from C++ to D.
Re: How do I write __simd(void16*, void16) ?
On 2014-10-09 17:32, Etienne wrote: That's very helpful, the problem remains that the API is unfamiliar. I think most of the time, simd code will only need to be translated from basic function calls, it would've been nice to have equivalents :-p Sorry, I think I had a bad understanding. I found out through a github issue that you need to use pragma(LDC_intrinsic, "llvm.*") [function declaration] https://github.com/ldc-developers/ldc/issues/627 And the possible gcc-style intrinsics are defined here: https://www.opensource.apple.com/source/clamav/clamav-158/clamav.Bin/clamav-0.98/libclamav/c++/llvm/include/llvm/Intrinsics.gen This really begs for a binding that works with all compilers.
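For reference, the pattern from that issue looks like this (LDC-only; the declaration has no body, and LDC lowers calls to it straight to the named LLVM intrinsic):

```d
// LDC-only: bind an LLVM intrinsic by name.
pragma(LDC_intrinsic, "llvm.sqrt.f64")
double llvm_sqrt(double val);

// usage: double r = llvm_sqrt(2.0);
```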
Re: COFF32 limitations?
On 2014-08-22 13:55, Kagamin wrote: Which linker do you plan to use? ld on linux or visual studio's link on win32
extern (c++) std::function?
I'm looking into making a binding for the C++ API called Botan, and the constructors in it take a std::function. I'm wondering if there's a D equivalent for this binding to work out, or if I have to make a C++ wrapper as well?
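extern(C++) can't express std::function directly, so the usual route is a thin C++ shim that accepts a plain function pointer and wraps it; a hedged sketch (all names here are hypothetical):

```d
// The C++ shim (compiled separately) would look roughly like:
//   void botan_shim_set_handler(void (*cb)(const char*)) {
//       api.set_handler(std::function<void(const char*)>(cb));
//   }
// The D side then binds only the plain function-pointer interface:
alias Handler = extern(C) void function(const(char)* msg);
extern(C) void botan_shim_set_handler(Handler cb); // provided by the shim
```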
Windows DLL / Windows service
Hello, I'm looking to compile a server into a Windows service, and there doesn't seem to be any info out there except this: http://forum.dlang.org/thread/c95ngs$1t0n$1...@digitaldaemon.com It doesn't call rt_init; would that be the only thing missing from there? Also, the D runtime seems to have a Windows DLL module https://github.com/D-Programming-Language/druntime/blob/master/src/core/sys/windows/dll.d - no documentation though. Any idea how to attach/detach with a known example? I'd also like to create a Windows DLL that compiles through DMD/GDC/LDC with extern(C) so that folks from C++ can link with it.
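For the attach/detach part, core.sys.windows.dll provides helper functions; the canonical pattern is roughly the following (a sketch - double-check the helper signatures against your druntime version):

```d
import core.sys.windows.dll;
import core.sys.windows.windows;

extern (Windows) BOOL DllMain(HINSTANCE hInstance, DWORD ulReason, LPVOID)
{
    switch (ulReason)
    {
        case DLL_PROCESS_ATTACH:
            // Initializes druntime for this DLL and registers threads.
            return dll_process_attach(hInstance, true);
        case DLL_PROCESS_DETACH:
            dll_process_detach(hInstance, true);
            break;
        case DLL_THREAD_ATTACH:
            return dll_thread_attach(true, true);
        case DLL_THREAD_DETACH:
            return dll_thread_detach(true, true);
        default:
            break;
    }
    return true;
}
```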
Re: foreach over string
On 2014-05-24 12:46, Kagamin wrote: foreach over string apparently iterates over chars by default instead of dchars. Didn't it prefer dchars? string s="weiß"; int i; foreach(c;s)i++; assert(i==5); A string is defined by: alias string = immutable(char)[]; It doesn't add anything to that type (unless you import a library like std.algorithm, which adds many "methods" thanks to UFCS and generic functions) I believe you are looking for dstring which is defined by: alias dstring = immutable(dchar)[]; dstring s="weiß"; int i; foreach(c;s)i++; assert(i==4);
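Also worth noting: foreach will decode in place if you type the loop variable as dchar, with no dstring conversion needed (minimal sketch):

```d
void main()
{
    string s = "weiß"; // 'ß' takes two UTF-8 code units
    int units, points;

    foreach (char c; s) units++;    // iterates UTF-8 code units
    foreach (dchar c; s) points++;  // foreach auto-decodes to code points

    assert(units == 5);
    assert(points == 4);
}
```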
Re: async socket programming in D?
On 2014-04-21 00:32, Etienne Cimon wrote: On 2014-04-20 18:44, Bauss wrote: I know the socket has the nonblocking settings, but how would I actually go around using it in D? Is there a specific procedure for it to work correctly etc. I've taken a look at splat.d but it seems to be very outdated, so that's why I went ahead and asked here as I'd probably have to end up writing my own wrapper. I was actually working on this in a new event loop for vibe.d here: https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events I've left it without activity for a week b/c I'm currently busy making a (closed source) SSL library to replace openSSL in my projects, but I'll return to this one project here within a couple weeks at most. It doesn't build yet, but you can probably use some of it at least as a reference, it took me a while to harvest the info on windows and linux kernels for async I/O. Some interesting parts like that which you wanted are found here: https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403 I think I was at handling a new connection or incoming data though, so you won't find accept() or read callbacks, but with it I think it was pretty much ready for async TCP. But of course, nothing stops you from using vibe.d with libevent ;)
Re: async socket programming in D?
On 2014-04-20 18:44, Bauss wrote: I know the socket has the nonblocking settings, but how would I actually go around using it in D? Is there a specific procedure for it to work correctly etc. I've taken a look at splat.d but it seems to be very outdated, so that's why I went ahead and asked here as I'd probably have to end up writing my own wrapper. I was actually working on this in a new event loop for vibe.d here: https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events I've left it without activity for a week b/c I'm currently busy making a (closed source) SSL library to replace openSSL in my projects, but I'll return to this one project here within a couple weeks at most. It doesn't build yet, but you can probably use some of it at least as a reference, it took me a while to harvest the info on windows and linux kernels for async I/O. Some interesting parts like that which you wanted are found here: https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403 I think I was at handling a new connection or incoming data though, so you won't find accept() or read callbacks, but with it I think it was pretty much ready for async TCP.
Re: On Concurrency
On 2014-04-18 13:20, "Nordlöw" wrote: Could someone please give some references to thorough explainings on these latest concurrency mechanisms - Go: Goroutines - Coroutines (Boost): - https://en.wikipedia.org/wiki/Coroutine - http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html - D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html - D: vibe.d and how they relate to the following questions: 1. Is D's Fiber the same as a coroutine? If not, how do they differ? 2. Typical usecases when Fibers are superior to threads/coroutines? 3. What mechanism does/should D's builtin Threadpool ideally use to package and manage computations? 4. I've read that vibe.d's has a more lightweight mechanism than what core.thread.Fiber provides. Could someone explain to me the difference? When will this be introduced and will this be a breaking change? 5. And finally how does data sharing/immutability relate to the above questions? I'll admit that I'm not the expert you may be expecting for this, but I can somewhat answer 1, 2, and 5. Coroutines, fibers, threads, multi-threading and all of this task-management "stuff" form a very complex science, and most kernels actually rely on it to do their magic; keeping stack frames around with contexts is the idea, and working with it made me feel like it's much more complex than metaprogramming, but I've been reading and getting the hang of it over the last 7 months. Coroutines give you control over what exactly you'd like to keep around once the "yield" has returned. You make a callback with "boost::asio::yield_context" or something of the like, and it'll contain exactly what you're expecting, but you're receiving it in another function that expects it as a parameter, making it asynchronous - it can't just resume within the same function, because it relies on a callback function, like JavaScript.
D's fibers are very much simplified (we can argue whether that's more or less powerful): you launch them like a thread ( Fiber fib = new Fiber( &delegate ) ) and just move around from fiber to fiber with Fiber.call(fiber) and Fiber.yield(). The yield function, called within a fiber-run function, will stop in the middle of that function's procedures if you want, and it'll return just as if the function ended - but you can rest assured that once another fiber calls that fiber instance again, it'll resume with all the stack info restored. They're made possible through some very low-level assembly magic; you can look through the library, it's really impressive - the guy who wrote it must be some kind of wizard. Vibe.d's fibers are built right on top of this, core.thread.Fiber (explained above), with the slight difference that they're packed with more power by putting them on top of a kernel-powered event loop rotating infinitely in epoll or Windows message queues to resume them (the libevent driver for vibe.d is the best developed event loop for this). So basically, when a new "Task" is called (which has the Fiber class as a private member), you can yield it with yield() until the kernel wakes it up again with a timer, socket event, signal, etc., and it'll resume right after the yield() function. This is what helps vibe.d have async I/O while remaining procedural, without having to shuffle with mutexes: the fiber is yielded every time it needs to wait on the network sockets and woken again when packets are received, until the expected buffer length is met! I believe this answer is very mediocre and you could go on reading about all I said for months; it's a very wide subject. You can have "Task message queues" and "Task concurrency" with "Task semaphores" - it's like multi-threading in a single thread!
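The call/yield dance described above, as a minimal core.thread sketch:

```d
import core.thread;

void main()
{
    int step = 0;
    auto fib = new Fiber({
        step = 1;
        Fiber.yield();  // suspend: stack and locals stay intact
        step = 2;       // execution resumes here on the next call()
    });

    fib.call();         // runs the delegate up to the yield()
    assert(step == 1);
    fib.call();         // resumes after the yield(), runs to completion
    assert(step == 2 && fib.state == Fiber.State.TERM);
}
```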
Re: GC allocation issue
On 2014-03-20 21:46, Etienne Cimon wrote: On 2014-03-20 21:08, Adam D. Ruppe wrote: On Friday, 21 March 2014 at 00:56:22 UTC, Etienne wrote: I tried using emplace but the copy gets deleted by the GC. Any idea why? That's extremely unlikely, the GC doesn't know how to free manually allocated things. Are you sure that's where the crash happens? Taking a really quick look at your code, this line raises a red flag: https://github.com/globecsys/cache.d/blob/master/chd/table.d#L55 Class destructors in D aren't allowed to reference GC allocated memory through their members. Accessing that string in the dtor could be a problem that goes away with GC.disable too. Yes, you're right, I may have a lack of understanding about destructors; I'll review this. I managed to generate a VisualD project and the debugger confirms the program crashes in the GC b/c it has a random call stack for everything under fullcollect(). cache-d_d.exe!gc@gc@Gcx@mark() C++ cache-d_d.exe!gc@gc@Gcx@fullcollect() C++ > cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2389 C++ [External Code] cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2383 C++ I have no methodology for debugging under these circumstances; do you know of anything else I can do other than manually reviewing the pathways in the source code? It seems to be crashing somewhere here in druntime's gc.d: void mark(void *pbot, void *ptop, int nRecurse) { //import core.stdc.stdio;printf("nRecurse = %d\n", nRecurse); void **p1 = cast(void **)pbot; void **p2 = cast(void **)ptop; Considering it's an access violation on a root that was probably added by Phobos, could this be an issue with these libraries?
Re: GC allocation issue
On 2014-03-20 21:08, Adam D. Ruppe wrote: On Friday, 21 March 2014 at 00:56:22 UTC, Etienne wrote: I tried using emplace but the copy gets deleted by the GC. Any idea why? That's extremely unlikely, the GC doesn't know how to free manually allocated things. Are you sure that's where the crash happens? Taking a really quick look at your code, this line raises a red flag: https://github.com/globecsys/cache.d/blob/master/chd/table.d#L55 Class destructors in D aren't allowed to reference GC allocated memory through their members. Accessing that string in the dtor could be a problem that goes away with GC.disable too. Yes, you're right, I may have a lack of understanding about destructors; I'll review this. I managed to generate a VisualD project and the debugger confirms the program crashes in the GC b/c it has a random call stack for everything under fullcollect(). cache-d_d.exe!gc@gc@Gcx@mark() C++ cache-d_d.exe!gc@gc@Gcx@fullcollect() C++ > cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2389 C++ [External Code] cache-d_d.exe!std@array@Appender!string@Appender@ensureAddable(unsigned int this) Line 2383 C++ I have no methodology for debugging under these circumstances; do you know of anything else I can do other than manually reviewing the pathways in the source code?
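The destructor rule Adam mentions can be illustrated with a tiny sketch (a hypothetical class, not the actual cache.d code):

```d
class Row
{
    string name; // GC-allocated member

    ~this()
    {
        // Unsafe: when the GC finalizes this object during a collection,
        // the memory behind `name` may already have been reclaimed in the
        // same sweep, so touching it here is undefined behavior.
        // import std.stdio; writeln(name); // don't do this
    }
}
```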
Re: Colons and brackets
On 2014-03-01 16:42, anonymous wrote: I.e. "version(all):" doesn't cancel a former "version(foo):". There may or may not be a way to cancel a "version(foo):". I can't think of anything. Also, I don't see the problem with brackets. Braces used per "good practice" require changing the indentation of everything inside them :/ To keep it clean, I decided to put the different sources in separate files and put the version clause near the imports, or with version (DEFINES): at the top of the file...
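To make the two forms concrete (minimal sketch):

```d
// Block form: the condition ends at the closing brace.
version (Foo)
{
    void fooOnly() {}
} // version(Foo) ends here
void alwaysCompiled() {}

// Colon form: conditions everything that follows in the current scope
// and cannot be cancelled afterwards.
version (Bar):
void barOnly() {}
// ...still under version(Bar) until the end of the file/scope
```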
Colons and brackets
Hi all, I'm a little perplexed b/c I can't seem to find anything that could tell me where this ends: version(something): code code code \eof How do you stop statements from belonging to the specific version of code without using brackets? Thanks!
Re: DMD exit code -9
On 2014-02-19 17:15, Craig Dillabaugh wrote: However, I would still be interested in finding out where I could get a listing of what the various exit codes mean ... or do I need to delve into the DMD source code? That seems to be the SIGKILL signal from the Linux kernel (-9), most likely sent by the OOM killer; DMD didn't have a chance to react when the out-of-memory condition occurred.