Re: Throw an exception but hide the top frame?
The backtrace code has a parameter that lets you tell it how many leading frames to skip when generating the result. This exists to get out of the Throwable ctor code itself, but it wouldn't be hard to bump it by one or two if you need to.
Re: detaching a thread from druntime 2.067
On Tuesday, 16 December 2014 at 04:56:10 UTC, Ellery Newcomer wrote: If I have a thread that I need to detach from druntime, I can call thread_detachInstance, but for 2.066, this function does not exist. Is there any way to do this in 2.066? I notice there is a thread_detachByAddr, but I'm not sure how to get a ThreadAddr out of a Thread.. thread_detachThis?
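As a sketch, a thread created outside druntime (e.g. by a C library) might pair the attach and detach calls like this; the entry-point name is illustrative, and the exact set of detach functions varies between 2.066 and 2.067:

```d
import core.thread;

// Hypothetical entry point for a thread that druntime did not create.
void foreignThreadEntry()
{
    // Register this thread so the GC can scan its stack.
    thread_attachThis();

    // Detach again before the thread exits; druntime won't notice
    // the thread terminating on its own.
    scope (exit) thread_detachThis();

    // ... run D code that may touch GC memory ...
}
```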
Re: How to use Linux message queues?
Sounds like a module that should be in core.sys.linux. Care to submit a pull request?
Re: ODBC Library?
On Monday, 10 November 2014 at 16:01:21 UTC, Charles wrote: Hi guys, I've been looking and haven't found any libraries for ODBC or MSSQL. I saw some for D v1, but nothing for v2. Anyone know of any, or anyone know of a tutorial that I could use to create this myself? Assuming you're using ODBC on Windows, here's an old port of an even older C++ wrapper I used to use for ODBC work. It includes an ODBC header and library, so should serve as a good basis for whatever you're trying to do. If you're on Unix, you may have to update the ODBC header a bit. I got partway through that project back in the day but never finished: http://invisibleduck.org/sean/tmp/sql.zip
Re: ODBC Library?
Oh, here's a sample, since it doesn't look like that zip includes one:

import sql.Connection;
import sql.Exception;
import sql.ResultSet;
import sql.Statement;
import core.stdc.stdio;

pragma( lib, "odbc32.lib" );
pragma( lib, "sql.lib" );

void main()
{
    try
    {
        auto conn = new Connection( "driver={SQL Server}; server=(local); trusted_connection=no; database=test; uid=sa; pwd=hello;" ); // network=dbmssocn;
        auto stmt = conn.prepare( "SELECT Name FROM Person WHERE PersonID = ?" );
        stmt[0] = 1;
        //auto stmt = conn.prepare( "SELECT Name FROM Person" );
        auto rs = stmt.open();
        printf( "%.*s\n\n", rs[0].name );
        while( rs.next() )
            printf( "%.*s\n", rs[0].asUtf8 );
    }
    catch( SQLException e )
    {
        foreach( rec; e )
        {
            printf( "%.*s - %d: %.*s\n", rec.state, rec.code, rec.msg );
        }
    }
}
Re: druntime vararg implementation
On Wednesday, 5 November 2014 at 09:45:50 UTC, Mike wrote: Greetings, In core.vararg (https://github.com/D-Programming-Language/druntime/blob/master/src/core/vararg.d), why is the X86 implementation singled out and written in D rather than leveraging the standard C library implementation like the others? No idea. It seems like a pointless duplication of code. Maybe just to have some documented functions within the module?
Re: API hooking in Dlang?
On Monday, 3 November 2014 at 04:31:40 UTC, Dirk wrote: I should have mentioned that I have also seen the MadCodeHook Library bindings, which is great but the MCH library is very expensive. Weird, it used to be open source and free.
Re: Question about eponymous template trick
On Monday, 3 November 2014 at 14:58:03 UTC, Ali Çehreli wrote: I think it's the intended behavior. I think the documentation is outdated. Both forms should really work though. I had always thought that the short form was only possible if the names matched.
Re: spawnProcess() not child?
On Monday, 3 November 2014 at 14:09:21 UTC, Steven Schveighoffer wrote: From OP's code, he is on Windows. I believe on Windows you have to sort out some kind of permissions to terminate a process. No idea if std.process does this, but it sounds like probably not.
Re: How are threads, Tid and spawn related?
Note that thread_joinAll is called automatically when main exits, so if you just want to be sure that your spawned thread completes you don't have to do anything at all. The decision to obscure the Thread object in std.concurrency was deliberate, as it allows us to use more than just kernel threads for concurrency. The thread may even live in another process and the message sent via IPC. If you want to start an asynchronous task and wait for it to complete I suggest the method Ali outlines above. You can also create a Thread directly. What we should really have for this sort of thing is futures, but they don't exist yet. std.parallelism might be worth a look as well, since it has a task queue.
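For example, one way to wait on a spawned task without ever touching Thread is to have it send a completion message back (a sketch; the Done marker type is just illustrative):

```d
import std.concurrency;

struct Done {}

void worker(Tid owner)
{
    // ... do the asynchronous work ...
    owner.send(Done());
}

void main()
{
    auto tid = spawn(&worker, thisTid);
    receiveOnly!Done();  // block until the worker reports completion
    // thread_joinAll also runs implicitly when main returns
}
```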
Re: D int and C/C++ int etc not really compatible when interfacing to C/C++
On Sunday, 2 November 2014 at 11:59:27 UTC, Marc Schütz wrote: On Saturday, 1 November 2014 at 21:00:54 UTC, Kagamin wrote: D claims compatibility with system C compiler, which usually have 32-bit int. ... and for C/C++ long which can be 32 or 64 bit, DMD recently introduced the types c_long and c_ulong. (Not released yet.) c_long and c_ulong have existed as aliases in core.stdc since the beginning. The change was to make them an explicit type so name mangling could be different.
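For illustration, the aliases live in core.stdc.config and can be used in a binding like this (the C function itself is made up):

```d
import core.stdc.config : c_long, c_ulong;

// Hypothetical C prototype: long scale(long x, unsigned long f);
extern (C) c_long scale(c_long x, c_ulong f);

// c_long tracks the platform C compiler's long: 32-bit on Windows
// and 32-bit Posix, 64-bit on 64-bit Posix.
```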
Re: How are threads, Tid and spawn related?
On Sunday, 2 November 2014 at 06:23:38 UTC, Ali Çehreli wrote: On 11/01/2014 11:13 PM, Sean Kelly wrote: Note that thread_joinAll is called automatically when main exits Has that always been the case? I remember having to inject thread_joinAll() calls at the ends of the main()s of a couple of examples because of having trouble otherwise. Can I safely remove thread_joinAll()s if they are the last lines in main()? It has always been the case. In fact, I have a comment in the body of Thread.start() explaining this potential race and the need for certain operations in that function. So if there is a race, it isn't meant to be there and should be fixed. I also just filed: https://issues.dlang.org/show_bug.cgi?id=13672 so some attention needs to be paid to this function anyway.
Re: D int and C/C++ int etc not really compatible when interfacing to C/C++
On Sunday, 2 November 2014 at 16:53:06 UTC, ponce wrote: c_long and c_ulong get used, should c_int and c_uint too in bindings? Looks like fringe use case. On common 32 and 64-bit platforms, the only type whose size changed between 32 and 64 bits is long, so the other aliases were deemed unnecessary. It's possible that as D is ported to more platforms this will have to change, but I seriously hope not.
Re: How are threads, Tid and spawn related?
For those cases you could use spawnLinked and then receive LinkTerminated as well, if you're looking for a solution within the concurrency API.
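A sketch of that approach; the worker body is illustrative:

```d
import std.concurrency;

void worker()
{
    // ... task body ...
}

void main()
{
    auto tid = spawnLinked(&worker);
    // receive delivers LinkTerminated to a matching handler when the
    // linked thread ends (without one, it would be thrown instead).
    receive((LinkTerminated e) { /* worker is done */ });
}
```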
Re: Problems with Mutex
On Monday, 27 October 2014 at 19:13:13 UTC, Jonathan M Davis via Digitalmars-d-learn wrote: The reason that it's not shared is because Sean Kelly didn't want to make much of anything in druntime shared until shared was better ironed out, which keeps getting talked about but never done. Yep. There was a fairly productive (brief) discussion on shared in digitalmars.D recently, but no one who can really make such decisions weighed in. I honestly don't know when shared will get a serious look, despite it being probably the most significant unfinished language feature in D 2.0.
Re: new(malloc) locks everything in multithreading
On Friday, 24 October 2014 at 21:02:05 UTC, Kapps wrote: Yes, GDB is stopping on SIGUSR1 / SIGUSR2 since that's the default settings. D's GC uses these signals for suspending / resuming threads during a collection. You need to type what I said above, prior to typing 'run'. I took a look at the Boehm GC earlier today and it appears they've made the signal set configurable, both to try and not use SIGUSR1/2 by default and to let the user specify another signal set if they need SIGUSR1/2 for some reason. It's probably worth doing this in our own code as well. The Boehm GC also does some magic with clearing signals at various points to make sure the right signal handlers will be called. Probably another enhancement request.
Re: m_condition.mutex cannot be used in shared method ?
On Monday, 20 October 2014 at 09:53:23 UTC, Marco Leise wrote: Thank you for that honest response. The situation is really bizarre. I just tried to create a shared worker thread and there is no ctor in Thread that creates a shared instance. Is a shared constructor even meaningful? [1] If we want to try for having thread-local memory pools then yes. If yes, what do we need it for? See above. Though there are other problems that will probably prevent this anyway (immutable being implicitly shared, for one). Can't we otherwise just implicitly and safely cast to shared _after_ the constructor ran when we write `new shared(Foo)(...)`? Yep. Casting away shared is not @safe. Since this is normal to do in synchronized blocks, I figure the whole core.Thread and core.sync.xxx family are @system functionality ? Yes. Though I really don't like feeling that casts are necessary in general. If I have to cast in order to do normal work then there's probably something wrong with the type system. Though I'll note that I also use mutable in C++ for what I feel are completely justifiable reasons (like on a contained Mutex so I can lock/unlock some region of code in a const method), and D has been firmly established in opposition to logical const. Mutexes are actually a special case in D because they bypass normal type checking thanks to the way synchronized blocks work, and I'm sure we could do something similar for shared, but it feels wrong. I kind of hope that someone will show me that casting away shared isn't necessary, kind of like how Monads are a clever response to immutability in Haskell. [1] (Note that I created a PR for DMD that disables shared destruction: https://github.com/D-Programming-Language/dmd/pull/4072) With all the recent work on the GC, we really really need to start tracking which thread owns a given non-shared object so it can be finalized properly. This may mean having the process of casting away shared make the executing thread the new owner of the object.
Re: m_condition.mutex cannot be used in shared method ?
On Sunday, 19 October 2014 at 13:42:05 UTC, Marco Leise wrote: I have a thread that is shared by others, so I have a shared method, inside of which I wrote:

final void opOpAssign(string op : "~")(ref StreamingObject item) shared
{
    synchronized (m_condition.mutex)
    {
        m_list.unshared ~= item;
        m_condition.notify();
    }
}

Error: non-shared method core.sync.condition.Condition.mutex is not callable using a shared object

Where exactly should my stuff stop to be shared so I can call .mutex? What really needs to happen is for everything in core.sync to be made shared. I got partway through this at one point and stopped, because it was imposing a terrible design on the classes--I had shared methods that were casting away shared and then calling the non-shared methods to do the work. The reason for this was that the transitivity of shared was preventing me from calling pthread_mutex_lock or whatever, because those functions didn't take a shared pthread_mutex_t. And attempting to rewrite core.sys.posix to make the logically shared types explicitly shared had a cascading effect that made me uncomfortable. Because of this, I remain unconvinced that the semantics of the shared attribute are actually correct when applied to user-defined types. I want some kind of an "I know what I'm doing" label, perhaps equivalent to the mutable attribute in C++, but to exempt contained types from shared.
Re: how to get the \uxxxx unicode code from a char
On Tuesday, 14 October 2014 at 20:08:03 UTC, Brad Anderson wrote: On Tuesday, 14 October 2014 at 20:05:07 UTC, Brad Anderson wrote: https://github.com/D-Programming-Language/phobos/blob/master/std/json.d#L579 Oops. Linked to the parser section. I actually don't see any unicode escape encoder in here. Perhaps he meant the upcoming JSON module. Wow... the current std.json doesn't do string encoding? I knew it was bad, but... In any case, yes, I mentioned JSON because strings are supposed to be encoded exactly the way you're asking. It was easier to point at that than try to outline the process explicitly.
Re: how to get the \uxxxx unicode code from a char
On Tuesday, 14 October 2014 at 19:47:00 UTC, jicman wrote: Greetings. Imagine this code:

char[] s = "ABCabc";
foreach (char c; s)
{
    // how do I convert c to a Unicode code, ie. \u....?
}

I'd look at the JSON string encoder.
Re: A few questions regarding GC.malloc
On Thursday, 25 September 2014 at 21:43:53 UTC, monarch_dodra wrote: On Thursday, 25 September 2014 at 20:58:29 UTC, Gary Willoughby wrote: A few questions regarding GC.malloc. When requesting a chunk of memory from GC.malloc, am I right in assuming that this chunk is scanned for pointers to other GC resources in order to make decisions whether to collect them or not? By default, yes, but you can use BlkAttr.NO_SCAN if you do not want that (eg, if you want to store integers). As a rule of thumb, you can use hasIndirections!T to know whether or not to scan (that's what most of Phobos relies on). Yep. It's generally just easier to do a new T[] when you want a block, to ensure that the proper flags are set, as GC.malloc is conservative in terms of the flags it sets. What does BlkAttr.FINALIZE do when used in the GC.malloc call? I have no idea. I think it's for classes though, since we (currently) don't finalize structs anyway. Yes, it's for memory blocks containing class instances. It basically tells the GC to call Object.~this() when collecting the block.
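A minimal sketch of the NO_SCAN case (names and sizes illustrative):

```d
import core.memory;

// Allocate raw storage for n ints; the block will never contain
// pointers, so tell the GC not to scan it during collections.
int* makeIntBlock(size_t n)
{
    auto p = cast(int*) GC.malloc(n * int.sizeof, GC.BlkAttr.NO_SCAN);
    p[0 .. n] = 0;  // GC.malloc does not zero the memory
    return p;
}

// Compare: new int[n] sets NO_SCAN automatically, because
// hasIndirections!int is false.
```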
Re: GC can collect object allocated in function, despite a pointer to the object living on?
Interface and object variables are reference types--you don't need the '*' to make them so. By adding the extra layer of indirection you're losing the only reference the GC can decipher to the currentState instance.
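In other words (an illustrative sketch with made-up names):

```d
interface State {}
class Idle : State {}

void main()
{
    State currentState = new Idle();  // already a reference the GC can trace
    // State* p = &currentState;      // extra indirection: p refers to the
    //                                // variable, not the object, so the GC
    //                                // may lose track of the Idle instance
    assert(cast(Idle) currentState !is null);
}
```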
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On Friday, 15 August 2014 at 08:36:34 UTC, Kagamin wrote: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887%28v=vs.85%29.aspx Allocates memory charges (from the overall size of memory and the paging files on disk) for the specified reserved memory pages. The function also guarantees that when the caller later initially accesses the memory, the contents will be zero. Actual physical pages are not allocated unless/until the virtual addresses are actually accessed. Oh handy, so there's basically no work to be done on Windows. I'll have to check the behavior of mmap on Posix.
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On Friday, 15 August 2014 at 14:28:34 UTC, Dicebot wrote: Won't that kind of kill the purpose of Fiber as low-cost context abstraction? Stack size does add up for thousands of fibers. As long as allocation speed is fast for large allocs (which I have to test), I want to change the default size to be very large. The virtual address space in a 64-bit app is enormous, and if the memory is committed on demand then physical memory use should only match what the user actually requires. This should allow us to create millions of fibers and not overrun system memory, and also not worry about stack overruns, which has always been a concern with the default fiber stack size.
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On Friday, 15 August 2014 at 14:26:28 UTC, Sean Kelly wrote: On Friday, 15 August 2014 at 08:36:34 UTC, Kagamin wrote: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887%28v=vs.85%29.aspx Allocates memory charges (from the overall size of memory and the paging files on disk) for the specified reserved memory pages. The function also guarantees that when the caller later initially accesses the memory, the contents will be zero. Actual physical pages are not allocated unless/until the virtual addresses are actually accessed. Oh handy, so there's basically no work to be done on Windows. I'll have to check the behavior of mmap on Posix. It sounds like mmap (typically) works the same way on Linux, and that malloc generally does as well. I'll have to test this to be sure. If so, and if doing so is fast, I'm going to increase the default stack size on 64-bit systems to something reasonably large. And here I thought I would have to do this all manually.
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
At least on OSX, it appears that mapping memory is constant time regardless of size, but there is some max total memory I'm allowed to map, presumably based on the size of a vmm lookup table. The max block size I can allocate is 1 GB, and I can allocate roughly 131,000 of these blocks before getting an out of memory error. If I reduce the block size to 4 MB I can allocate more than 10M blocks without error. I think some default stack size around 4 MB seems about right. Increasing the size to 16 MB failed after about 800,000 allocations, which isn't enough (potential) fibers.
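The probing above amounts to something like this on Posix (a sketch; flags may need adjusting per platform):

```d
import core.sys.posix.sys.mman;

// Reserve a large address range; physical pages are only committed
// as they are first touched.
void* reserveStack(size_t size)
{
    auto p = mmap(null, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANON, -1, 0);
    return p is MAP_FAILED ? null : p;
}
```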
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On Friday, 15 August 2014 at 15:25:23 UTC, Dicebot wrote: No, I was referring to the proposal to supply bigger stack size to Fiber constructor - AFAIR it currently does allocate that memory eagerly (and does not use any OS CoW tools), doesn't it? I thought it did, but apparently the behavior of VirtualAlloc and mmap (which Fiber uses to allocate the stack) simply reserves the range and then commits it lazily, even though what you've told it to do is allocate the memory. This is really great news since it means that no code changes will be required to do the thing I wanted to do anyway.
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On Friday, 15 August 2014 at 20:17:51 UTC, Carl Sturtivant wrote: On Friday, 15 August 2014 at 15:40:35 UTC, Sean Kelly wrote: I thought it did, but apparently the behavior of VirtualAlloc and mmap (which Fiber uses to allocate the stack) simply reserves the range and then commits it lazily, even though what you've told it to do is allocate the memory. This is really great news since it means that no code changes will be required to do the thing I wanted to do anyway. Just read this after posting earlier replies! Very exciting. I'll be doing some experiments to see how this works out. What about at 32-bits? I'm sure it works the same, but reserving large chunks of memory there would eat up the address space. I think the default will have to remain some reasonably low number on 32-bit.
Re: core.thread.Fiber --- runtime stack overflow unlike goroutines
On 64 bit, reserve a huge chunk of memory, set a SEGV handler and commit more as needed. Basically how kernel thread stacks work. I've been meaning to do this but haven't gotten around to it yet.
Re: What hashing algorithm is used for the D implementation of associative arrays?
SuperFastHash (Paul Hsieh's algorithm). Though Murmur has gotten good enough that I'm tempted to switch. At the time, Murmur didn't even have a license, so it wasn't an option.
Re: Deprecation: Read-modify-write operations are not allowed for shared variables
On Tuesday, 12 August 2014 at 15:06:38 UTC, ketmar via Digitalmars-d-learn wrote: besides, using atomic operations will allow you to drop synchronized altogether, which makes your code slightly faster. ... and potentially quite broken. At the very least, if this value is read anywhere, you'll have to use an atomicLoad there in place of the synchronized block you would have used.
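For a lone counter, the atomic equivalents would look roughly like this:

```d
import core.atomic;

shared int counter;

void bump()
{
    atomicOp!"+="(counter, 1);   // atomic read-modify-write
}

int current()
{
    return atomicLoad(counter);  // atomic read, for use anywhere else
}
```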
Re: Threadpools, difference between DMD and LDC
On Monday, 4 August 2014 at 21:19:14 UTC, Philippe Sigaud via Digitalmars-d-learn wrote: Has anyone used (the fiber/taks of) vibe.d for something other than powering websites? https://github.com/D-Programming-Language/phobos/pull/1910
Re: Unexpected memory reuse
This looks like an optimizer bug. Do you see the same result with -release set vs. not, etc?
Re: Unexpected memory reuse
On Thursday, 31 July 2014 at 19:28:24 UTC, Marc Schütz wrote: On Thursday, 31 July 2014 at 18:30:41 UTC, Anonymous wrote: module test; import std.stdio; class buffer(T, size_t sz) { auto arr = new T[sz]; This allocates an array with `sz` elements once _at compile time_, places it somewhere into the executable, and uses its address as the default initializer for the member `arr`. All instances of `buffer` (with the same template parameters) that you create and don't change `arr` have it point to the same memory. Huh. For some reason I thought in-class initializations like this were effectively rewritten to occur as a part of the class ctor. But looking at the docs I guess this actually affects the format of the static initializer.
Re: Linux Dynamic Loading of shared libraries
On Monday, 10 March 2014 at 11:59:20 UTC, Steve Teale wrote: Note that there is no call to Runtime.unloadLibrary(). The assumption her is that once the plugin has been loaded it will be there for the duration of the program. If you want to unload it you'll probably have to make sure the plugin object is purged from memory first, and I have not discovered how to do that yet ;=( A long time ago, Andrei suggested creating a GC interface for mapped memory. I think something similar might be appropriate here. Perhaps the location of instantiated classes could be determined by their vtbl pointer? Then the entire vtbl range could be treated as a dynamically allocated struct of sorts, and its dtor would queue up a job to unmap the library after collection is complete (ie. not immediately, since the order of destruction during a collection is undefined). So... (just thinking out loud) when a library is loaded, you basically perform an in-place construction of this LoadedLibrary struct on top of the vtbl range. You'd need an interface similar to the GC tracks memory mapped files idea to tell the GC to track references to this range of memory that lives outside its own pool set, and then some kind of post-collection job queue that's externally appendable so the LoadedLibrary struct could add an unload call when it's collected. Heck, we could really handle all dtors this way, so normal dtors would be inserted at the front of the list and special cases like this would be inserted at the back. It doesn't sound tremendously difficult, though we'd need the memory-mapped file support API first. Is there something I'm missing that makes this unfeasible?
Re: spawn and wait
On Thursday, 3 July 2014 at 10:25:41 UTC, Bienlein wrote: There is also a Semaphore and Barrier class: http://dlang.org/phobos/core_sync_barrier.html http://dlang.org/phobos/core_sync_semaphore.html This is probably what I'd do, though both this and thread_joinAll will only work if you have one kernel thread per spawn. If you're using a fiber-based Scheduler, this won't work as expected. In that case you might want to use spawnLinked and trap the LinkTerminated messages or something like that.
Re: Thread-safety and lazy-initialization of libraries
On Monday, 30 June 2014 at 20:53:25 UTC, Sergey Protko wrote: Is there any proper way to do on-demand lazy-initialization of used library, which will be also thread-safe? How do i need to handle cases where some methods, which requires library to be initialized, called from different threads at the same time? pthread_once comes to mind.
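A sketch of that on Posix via the druntime bindings (the init function and counter are hypothetical):

```d
import core.sys.posix.pthread;

__gshared pthread_once_t onceControl = PTHREAD_ONCE_INIT;
__gshared int initCount;

extern (C) void initLibrary()
{
    // One-time library setup; pthread_once guarantees this runs
    // exactly once no matter how many threads race into ensureInit.
    ++initCount;
}

void ensureInit()
{
    pthread_once(&onceControl, &initLibrary);
}
```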
Re: What is best way to communicate between computer in local network ?
On Friday, 27 June 2014 at 13:03:20 UTC, John Colvin wrote: It's an application- and network-dependent decision, but I would suggest http://code.dlang.org/packages/zmqd as suitable for most situations. Yeah, this would be my first choice. Or HTTP if integration with other applications is an option. I really like JSON-RPC, though it seems to not get much attention. Longer term, I'd like to extend the messaging in std.concurrency to allow interprocess communication as well.
Re: What is best way to communicate between computer in local network ?
On Saturday, 28 June 2014 at 17:11:51 UTC, Russel Winder via Digitalmars-d-learn wrote: Sadly, I don't have time to contribute to any constructive work on this just now. And I ought to be doing a review and update to std.parallelism… That's fine. I have zero free time until August.
Re: GC.calloc(), then what?
On Friday, 27 June 2014 at 07:34:55 UTC, safety0ff wrote: On Friday, 27 June 2014 at 07:03:28 UTC, Ali Çehreli wrote: 1) After allocating memory by GC.calloc() to place objects on it, what else should one do? Use std.conv.emplace. And possibly set BlkInfo flags to indicate whether the block has pointers, and the finalize flag to indicate that it's an object. I'd look at _d_newclass in Druntime/src/rt/lifetime.d for the specifics. To be honest, I think the GC interface is horribly outdated, but my proposal for a redesign (first in 2010, then again in 2012 and once again in 2013) never gained traction. In short, what I'd really like to have is a way to tell the GC to allocate an object of type T. Perhaps Andrei's allocators will sort this out and the issue will be moot. For reference: http://lists.puremagic.com/pipermail/d-runtime/2010-August/75.html http://lists.puremagic.com/pipermail/d-runtime/2012-April/001095.html http://lists.puremagic.com/pipermail/d-runtime/2013-July/001840.html
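A rough sketch of the calloc-plus-emplace step (the Foo class is illustrative):

```d
import core.memory;
import std.conv : emplace;

class Foo
{
    int x;
    this(int x) { this.x = x; }
}

// Place a Foo into GC-allocated memory; FINALIZE asks the GC to run
// the class dtor when the block is eventually collected.
Foo makeFoo(int v)
{
    enum size = __traits(classInstanceSize, Foo);
    auto mem = GC.calloc(size, GC.BlkAttr.FINALIZE);
    return emplace!Foo(mem[0 .. size], v);
}
```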
Re: HeadUnshared in core.atomic
On Thursday, 12 June 2014 at 05:29:39 UTC, Mike Franklin wrote: Hello, I was recently exposed to this template in core.atomic:

private
{
    template HeadUnshared(T)
    {
        static if( is( T U : shared(U*) ) )
            alias shared(U)* HeadUnshared;
        else
            alias T HeadUnshared;
    }
}

Could someone please explain/elaborate on what this is doing, and why it's necessary and used so often in core.atomic? This is used to generate the return type when the return value is a copy of the input. This is of particular importance for operations like atomicLoad, whose purpose is to atomically obtain a local copy of a shared value.
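Concretely, it only strips shared from the head (the pointer itself), not the pointee--the same template again, with a couple of checks:

```d
template HeadUnshared(T)
{
    static if( is( T U : shared(U*) ) )
        alias shared(U)* HeadUnshared;
    else
        alias T HeadUnshared;
}

// Loading a shared(int*) atomically yields a private copy of the
// pointer that still points at shared data:
static assert(is(HeadUnshared!(shared(int*)) == shared(int)*));
// Non-pointer types pass through unchanged:
static assert(is(HeadUnshared!(int) == int));
```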
Re: Attach and detach C threads to D runtime
On Thursday, 22 May 2014 at 19:21:26 UTC, David Soria Parra via Digitalmars-d-learn wrote: I know that thread_detachByAddr exists, but the Thread object from Thread.getAll or Thread.opApply doesn't expose the thread address. Would thread_detachThis work for you? Alternately, you can use pthread_self to get the current thread's address on Posix and GetCurrentThreadId on Windows.
Re: std.concurrency bug?
On Wednesday, 21 May 2014 at 20:19:32 UTC, Ali Çehreli wrote: I think this is a known issue with immutable and Variant, which std.concurrency uses for unknown messages. This looks related: https://issues.dlang.org/show_bug.cgi?id=5538 std.concurrency actually uses Variant as the transport mechanism for all messages, and this is most likely the cause of your problem. If this is just to make a class pass type checking for transport, casting to shared is probably a better bet. The real solution is to make std.concurrency effectively allow uniquely referenced classes to be transferred, but that's a bit farther out.
Re: Mixing messages and socket operations
On Tuesday, 11 March 2014 at 14:44:51 UTC, Andre Kostur wrote: Hi, I'm trying a prototype project at work and have been trying to find a good example of network programming in D. What I'm trying to do is have a separate thread to deal with the socket (calls .accept() for example), but I'd also like the thread to be responsive to the OwnerTerminated message coming from the parent thread (the daemon is shutting down, time to orderly tear down the threads). However, .accept is blocking, as is receive(). I'd rather not have to resort to polling each of the two. What's the community's standard approach to this problem? To make accept() non-blocking, you can wait on the accept socket using select or poll. When the socket becomes readable, you're ready to accept. Then you could set a timeout for the select call and do a receiveTimeout(0). Things get trickier if you don't like the busy wait approach though. You'd need to have a file descriptor signaled when a message arrived, or a message sent on a socket event (I'd favor the former). I can imagine sorting this out using the new Scheduler stuff in std.concurrency, but an easier approach might be to just use vibe.d, which integrates message passing and socket events already.
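The polling variant might look something like this sketch using std.socket's portable select wrapper (error handling elided; names illustrative):

```d
import core.time : msecs, seconds;
import std.concurrency;
import std.socket;

void acceptLoop(Socket listener)
{
    auto readSet = new SocketSet;
    bool done = false;
    while (!done)
    {
        readSet.reset();
        readSet.add(listener);

        // Wait up to a second for an incoming connection.
        if (Socket.select(readSet, null, null, 1.seconds) > 0)
        {
            auto client = listener.accept();
            // ... hand the client off to a worker ...
        }

        // Drain any pending control messages without blocking.
        receiveTimeout(0.msecs,
                       (OwnerTerminated e) { done = true; });
    }
}
```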
Re: Allocating memory from library
The GC will only scan through and try to collect memory that it owns. So that's safe.
Re: Mutexes and locking
For what it's worth, you can also do:

auto m = new Mutex;
synchronized (m)
{
    // do stuff
}

The synchronized block will call lock on entry and unlock on exit, even as a result of a throw.
Re: Nobody understands templates?
On Friday, 28 February 2014 at 18:42:57 UTC, Steve Teale wrote: All the D aficionados seem to wet their pants over meta-programming, but I struggle to find a place to use it. IIRC, I used it in a couple of places when I was trying to write library stuff for MySQL, but in my current project, I use it only once. That's when I want to stuff something onto my undo stack. For that I have two template functions - push(T)(MybaseClass* p, T t, int ID), and pushC, which is just the same except that it checks the top of the stack to see if the ID there is the same as what it is wanting to push. This has served me very reliably, but I struggle to find other places in the whole application where I would benefit from templates. Is this typical - libraries use templates, applications don't, or am I just being unimaginative? This is certainly my experience with C++ and is why I wrote the chapter on templates in the Tango with D book. Personally though, I use templates constantly. For functions, the most common case is to eliminate code duplication where I might normally overload for different parameter types.
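A trivial instance of that pattern (a made-up function, but it shows the shape):

```d
// One definition covers int, long, double, or any comparable type,
// where overloading would need one copy per parameter type.
T clampTo(T)(T value, T lo, T hi)
{
    return value < lo ? lo : (value > hi ? hi : value);
}
```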
Re: Custom default exception handler?
On Wednesday, 12 February 2014 at 02:41:34 UTC, Nick Sabalausky wrote: On 2/11/2014 6:35 PM, Sean Kelly wrote: Throw a static exception (maybe even derived directly from Throwable), I assume then that throwing something directly derived from Throwable would still run cleanup code (like scope guards and finally) like throwing Exception would? Or is it like throwing an Error, skipping cleanup code? Everything runs cleanup code right now, unless someone changed things on me. But if a change were made, I'd say that only Error and its children would skip cleanup. The default would be to perform cleanup. That lets users create their own exception hierarchies.
Re: Custom default exception handler?
On Wednesday, 12 February 2014 at 03:31:38 UTC, Nick Sabalausky wrote: Hmm, my custom toString isn't being executed. Am I doing something wrong here? Same result if I inherit directly from Throwable instead of Exception.

class Fail : Exception
{
    private this() { super(null); }

    private static Fail opCall(string msg, string file=__FILE__, int line=__LINE__)
    {
        auto f = cast(Fail) cast(void*) Fail.classinfo.init;
        f.msg  = msg;
        f.file = file;
        f.line = line;
        return f;
    }

    override string toString()
    {
        writeln("In Fail.toString()");
        return "someapp: ERROR: " ~ msg;
    }
}

It looks like this has changed, and the method that's called now is:

void toString(scope void delegate(in char[]) sink) const;

I suspect this has broken a lot of custom exception messages, since everything in core.exception still uses toString() for its output.
Re: Custom default exception handler?
On Wednesday, 12 February 2014 at 22:42:45 UTC, Nick Sabalausky wrote: Hmm, that still isn't getting called for me either:

void toString(scope void delegate(in char[]) sink) const
{
    import std.stdio;
    writeln("In Fail.toString()");
    sink("someapp: ERROR: " ~ msg);
}

Tried on both 2.064.2 and 2.065-b3. Could it be that the custom toString just doesn't get run for static exceptions? It should. I'm not entirely sure why it isn't working. For reference, the relevant code is in rt/dmain2.d (printThrowable) and object_.d (toString(sink)) in Druntime.
Re: Custom default exception handler?
On Wednesday, 12 February 2014 at 01:07:31 UTC, Nick Sabalausky wrote: Oh, interesting. Is this something that can be relied on long-term? Ie, is a static non-Exception Throwable deliberately *supposed* to not include a stack trace, or is it potentially more of a currently-missing feature? It's intentional, and was done to serve two purposes. The first was to provide some way for throwing OutOfMemory to not accidentally try to allocate, and the second was because if you throw the same static instance in two threads simultaneously, the trace would end up invalid for one of them. The only safe thing to do is not trace at all.
Re: std.prelude vs core library
Another small reason is to enforce decoupling between required code and the rest of the library. Back when Phobos was all one library, half the library was compiled into every program. The runtime writes to stderr, the IO package relies on other modules... kind of like what happens now if you import std.stdio. This can be accomplished via deliberate effort towards decoupling, but that's really hard to pull off in an open source project. Functionally, think of core as being similar to java.lang.
Re: libphobos.so and loading libraries at runtime
On Sunday, 5 January 2014 at 20:47:44 UTC, FreeSlave wrote: import core.runtime; int main() { Runtime.loadLibrary("does not care"); Runtime.unloadLibrary(null); return 0; } When I try to compile this code with 'dmd main.d', I get errors main.o: In function `_D4core7runtime7Runtime17__T11loadLibraryZ11loadLibraryFxAaZPv': main.d:(.text._D4core7runtime7Runtime17__T11loadLibraryZ11loadLibraryFxAaZPv+0x4d): undefined reference to `rt_loadLibrary' main.o: In function `_D4core7runtime7Runtime19__T13unloadLibraryZ13unloadLibraryFPvZb': main.d:(.text._D4core7runtime7Runtime19__T13unloadLibraryZ13unloadLibraryFPvZb+0x8): undefined reference to `rt_unloadLibrary' But it builds without errors when I separate the compile and link steps: dmd -c main.d gcc main.o -lphobos2 -o main I checked the libraries with the nm utility and found no such symbols in the static version of libphobos. But the shared one has these symbols. Well, I'm aware that runtime loading requires the shared version of phobos2 to avoid duplicating the D runtime. But why does dmd use static linking by default? Is the shared version of phobos still experimental? Anyway, we need some remarks in the documentation about the lack of these functions in the static version of phobos. This seems like a weird change if intentional, since the function is publicly declared. I'd file a bug report.
Re: Current size of GC memory
On Monday, 4 November 2013 at 22:25:14 UTC, Rainer Schuetze wrote: On 04.11.2013 11:23, Namespace wrote: And how can I use it? import gc.proxy; doesn't work. You need to add dmd-install-path/src/druntime/src to the import search paths. Or simply declare the extern (C) function in your code.
Re: Current size of GC memory
On Tuesday, 5 November 2013 at 20:19:03 UTC, Namespace wrote: And what is with the return type? It's a struct. You must import it. You don't have to import it. The layout of the struct isn't going to change any time soon. Just copy/paste the definition into your code. Or import it if you want. It's just that the original definition of this stuff is in a portion of Druntime that was never intended to be published to a location where the module could be imported by users.
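As a side note for readers on newer compilers: later druntime releases added core.memory.GC.stats, which makes both the import trick and copying the struct definition unnecessary. A minimal sketch, assuming a druntime recent enough to have it:

```d
import core.memory : GC;

void main()
{
    auto before = GC.stats();        // usedSize / freeSize, in bytes
    auto buf = new ubyte[](1 << 20); // allocate 1 MiB from the GC
    auto after = GC.stats();
    assert(after.usedSize > 0);
    assert(buf.length == 1 << 20);
}
```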
Re: val.init
On Oct 1, 2013, at 7:10 PM, Nick Sabalausky seewebsitetocontac...@semitwist.com wrote: I thought variable.init was different from T.init and gave the value of the explicit initializer if one was used. Was I mistaken?: import std.stdio; void main() { int a = 5; writeln(a.init); // Outputs 0, not 5 } I think it used to work roughly this way but was changed… um… maybe 2 years ago?
Re: GC.collect bug ?
On Sep 17, 2013, at 4:14 AM, Temtaime temta...@gmail.com wrote: I cannot use delete/destroy. I want to call the dtor on all unreferenced objects. The manual on the dlang site says that GC.collect triggers a full collection. But it doesn't. It does. But the collector isn't guaranteed to collect everything that is no longer referenced during a given collection cycle. It's possible a register still holds the reference to that object. Try doing a bit more stuff before collecting and see if that changes the behavior. Or allocate a second dummy object.
Re: Exception chaining
On Sep 13, 2013, at 2:14 PM, monarch_dodra monarchdo...@gmail.com wrote: In one of my exception handling blocks, I call some code that could *also*, potentially throw (it's actually a loop, where each iteration can throw, but I have to do them *all*, meaning I need to handle *several* extra exceptions). I'm wondering what the correct way of handling both exceptions is. We can chain them, but which comes first? Ideally, it'd be first in, first out, but given that exceptions make a singly linked list, pushing back is expensive (well, as expensive as it can get in exception handling code I guess). Exception chaining is actually built-in. I did some digging for an official description, but couldn't find one. Here's a summary: If an Exception is thrown when an Exception or Error is already in flight, it is considered collateral damage and appended to the chain (built via Throwable.next). If an Error is thrown when an Exception is already in flight, it will replace the in flight Exception and reference it via Error.bypassedException. If an Error is thrown when an Error is already in flight, it will be considered collateral damage and appended to the Throwable.next chain. So in the case where an Exception is thrown, more Exceptions are generated as collateral damage, then an Error is thrown which bypasses the in flight exception, and then more Exceptions and Errors are generated as collateral damage, you can have one chain off the primary Error and a second chain of the bypassedException. It's possible that bit needs to be cleaned up so there's only one chain.
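The "collateral Exception is appended to the chain" case above can be demonstrated with a throw inside a finally block. This is a sketch of the described behavior, walking the Throwable.next chain after the catch:

```d
// Sketch of the chaining rule above: an Exception thrown while another
// Exception is in flight is appended via Throwable.next, and the original
// is the one that gets caught.
string[] chainMessages()
{
    string[] msgs;
    try
    {
        try
            throw new Exception("first");
        finally
            throw new Exception("collateral");  // appended to the chain
    }
    catch (Exception e)
    {
        for (Throwable t = e; t !is null; t = t.next)
            msgs ~= t.msg;
    }
    return msgs;
}

void main()
{
    assert(chainMessages() == ["first", "collateral"]);
}
```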
Re: wmemchar for unix
On Aug 26, 2013, at 11:57 PM, monarch_dodra monarchdo...@gmail.com wrote: For performance reasons, I need a w version of memchr. C defines wmemchr as: wchar_t * wmemchr ( const wchar_t *, wchar_t, size_t ); Unfortunately, on unix, wchar_t is defined as *4* bytes long, making wmemchr, effectively, dmemchr. Are there any 2-byte alternatives to wmemchr on unix? Why not cast the array to ushort[] and do a find()? Or is that too slow as well?
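A sketch of the suggested workaround (the function name wmemchr16 is invented): reinterpret the UTF-16 buffer as ushort[] and use a generic search instead of wmemchr:

```d
import std.algorithm.searching : countUntil;

// Sketch: reinterpret the 2-byte character buffer as ushort[] and search
// generically. Returns the index of the first match, or -1.
ptrdiff_t wmemchr16(const(wchar)[] haystack, wchar needle)
{
    return countUntil(cast(const(ushort)[]) haystack, cast(ushort) needle);
}

void main()
{
    assert(wmemchr16("hello"w, 'l') == 2);
    assert(wmemchr16("hello"w, 'z') == -1);
}
```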
Re: Associative array key order
On Jul 31, 2013, at 7:55 AM, Dicebot pub...@dicebot.lv wrote: On Wednesday, 31 July 2013 at 14:43:21 UTC, Daniel Kozak wrote: is there a way for AA to behave same as PHP? I doubt it. This snippet suggests that AA's in PHP are not simply AA's and do additionally track insertion order (or use some similar trick). Data in associative arrays is organized in a way that allows fast key lookup, and iterating in the original order requires tracking extra state. Contrary to PHP, D does care about performance, so I see no way this can happen automatically. This seems more likely to be a library type. I have something like this that I use very frequently at work where the behavior is pluggable, so it can be used as a cache that automatically does LIFO/FIFO eviction according to an age window, number of entries, etc. Or none of those if all you want is a plain old hashtable. It's not a tremendous amount of work to adapt an existing implementation to work this way. The only annoying bit I ran into (at least with C++) is that the symbol lookup rules pretty much prevent overrides of existing functions from being mixed in via the policies. Does alias this work around this problem in D? I've never tried.
Re: Getting number of messages in MessageBox
On Aug 5, 2013, at 4:18 PM, Marek Janukowicz ma...@janukowicz.net wrote: I'm using std.concurrency message passing and I'd like to check which thread might be a bottleneck. The easiest would be check number of messages piled up for each of them, but I could not find a way to do that. Is it possible? Every detail about MessageBox seems to be hidden... Not currently. It wouldn't be hard to add a function to get the count used by setMaxMailboxSize though. I'll make a note to add it.
Re: Getting number of messages in MessageBox
On Aug 6, 2013, at 1:27 PM, Dmitry Olshansky dmitry.o...@gmail.com wrote: 06-Aug-2013 03:18, Marek Janukowicz wrote: I'm using std.concurrency message passing and I'd like to check which thread might be a bottleneck. The easiest would be to check the number of messages piled up for each of them, but I could not find a way to do that. Is it possible? Every detail about MessageBox seems to be hidden... This is sadly intentional. The reason is presumably that some efficient concurrent queues can't produce a reliable item count, if one at all. However this seems at odds with setMaxMailboxSize ... The lack of a function to get the current mailbox size is mostly an oversight. It would be easy to return the same message count used by setMaxMailboxSize. This isn't an exact count, since that would be inefficient as you say, but it's a reasonable approximation. So I wouldn't use the returned message count as a pivotal part of a protocol design, for example, but it would be fine for determining whether a consumer is being overwhelmed by requests. That said, with the possible exception of setMaxMailboxSize allowing one thread to affect the behavior of another thread's mailbox, I think the current functionality should all extend to interprocess messaging without alteration. I'm not sure that this could be done for a getMailboxSize call. Certainly not with any semblance of efficiency anyway. But perhaps it's reasonable to have some subset of functionality that only works on local threads so long as the core messaging API doesn't have such limitations.
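For context, the existing knob discussed above looks like this in use. A hedged sketch where the limit and message strings are arbitrary:

```d
import std.concurrency;

// Hedged sketch: cap a consumer's mailbox so producers block instead of
// letting the queue grow without bound. The limit here is arbitrary.
void consumer()
{
    for (;;)
    {
        auto msg = receiveOnly!string();
        if (msg == "done")
            break;
    }
}

void main()
{
    auto tid = spawn(&consumer);
    setMaxMailboxSize(tid, 8, OnCrowding.block); // block senders when full
    foreach (i; 0 .. 32)
        send(tid, "work");
    send(tid, "done");
}
```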
Re: Async messages to a thread.
On Jul 29, 2013, at 8:28 AM, lindenk ztaticn...@gmail.com wrote: Ah, no I mean, what if do_some_blocking_function blocks for some indeterminate amount of time. I would like it to exit even when it is currently blocking (as it could be unpredictable when it will stop blocking). Execute the blocking functions in a separate thread? Really, this sounds like the kind of thing futures are for. Alternately, maybe the sequence of blocking functions could call receive periodically, or maybe be executed in a fiber and yield periodically, etc.
Re: Async messages to a thread.
On Jul 29, 2013, at 10:07 AM, lindenk ztaticn...@gmail.com wrote: After a bit more research it looks like everyone else uses - while(checkIfRunning()) { // block with timeout } which leads me to believe this might not be possible or standard. Although, should something along the lines of this be possible? It sounds like you're maybe doing socket IO? In those cases, you can open a pipe for signaling. select() on whatever other socket plus the read end of the pipe, and if you want to break out of the blocking select() call, write a byte to the pipe. Ultimately, std.concurrency should get some socket integration so data can arrive as messages, but that will never be as efficient as the approach I've described. Process p = new Process(); p.doTask(p.func()); //create new thread and start function // do stuff until this needs to exit p.stop(); // halt running function, p.cleanup(); // call a clean up function for func's resources p.kill(); // forcibly kill the thread Forcibly killing threads tends to be a pretty bad idea if you intend to keep running after the thread is killed. It can even screw up attempts at a clean shutdown.
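A POSIX-only sketch of the pipe-signaling idea described above (function names are invented, and error handling is omitted for brevity): select() on the real socket plus the read end of a pipe, and write a byte to the pipe when you want the blocked call to wake up.

```d
import core.sys.posix.unistd : pipe, read, write;
import core.sys.posix.sys.select;

int[2] wakeupPipe;

void initWakeupPipe()
{
    pipe(wakeupPipe);              // create the signaling pipe once at startup
}

void requestWakeup()
{
    ubyte b = 1;
    write(wakeupPipe[1], &b, 1);   // write() is async-signal-safe
}

// Block until sockfd is readable or a wakeup is requested.
// Returns true if sockfd has data, false on wakeup.
bool waitReadable(int sockfd)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(sockfd, &readSet);
    FD_SET(wakeupPipe[0], &readSet);
    int maxfd = (sockfd > wakeupPipe[0] ? sockfd : wakeupPipe[0]) + 1;
    select(maxfd, &readSet, null, null, null);
    if (FD_ISSET(wakeupPipe[0], &readSet))
    {
        ubyte b;
        read(wakeupPipe[0], &b, 1); // drain the pipe and report the wakeup
        return false;
    }
    return true;
}
```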
Re: Do threads 'end' themselves using core.thread?
On Jul 22, 2013, at 9:15 AM, Ali Çehreli acehr...@yahoo.com wrote: Apparently, it is possible to detach from a thread or even to start it in the detached state to begin with: By default, a new thread is created in a joinable state, unless attr was set to create the thread in a detached state (using pthread_attr_setdetachstate(3)). But core.thread doesn't seem to provide either of those. Nope. The thread can't be detached early because the GC might need to operate on the thread in some way.
Re: Do threads 'end' themselves using core.thread?
On Jul 20, 2013, at 12:34 PM, Alex Horvat al...@gmail.com wrote: If I use core.thread.Thread to create a new thread associated to a function like this: Thread testThread = new Thread(DoSomething); Will the testThread dispose of itself after DoSomething() completes, or do I need to join/destroy/somethingelse testThread? The thread will clean itself up automatically if you don't hold any references to it. In essence, the thread's dtor will conditionally release any handles if join() was never called.
Re: Do threads 'end' themselves using core.thread?
On Jul 22, 2013, at 9:45 AM, Alex Horvat al...@gmail.com wrote: When a detached thread terminates, its resources are automatically released back to the system: Sounds like I can call Thread.getThis().thread_detachThis() from within DelayedHideTitle() and that will make the thread detached and therefore it will destroy itself properly. Only call thread_detachThis() if you initialized the thread object via thread_attachThis(). Otherwise just discard the reference and let it clean itself up. The reason for the "do not delete" restriction is because the thread objects are tracked internally by the GC, and they must remain in a correct state until the GC has recognized that they have terminated.
Re: Do threads 'end' themselves using core.thread?
On Jul 22, 2013, at 10:25 AM, Alex Horvat al...@gmail.com wrote: On Monday, 22 July 2013 at 16:58:00 UTC, Sean Kelly wrote: On Jul 22, 2013, at 9:45 AM, Alex Horvat al...@gmail.com wrote: When a detached thread terminates, its resources are automatically released back to the system: Sounds like I can call Thread.getThis().thread_detachThis() from within DelayedHideTitle() and that will make the thread detached and therefore it will destroy itself properly. Only call thread_detachThis() if you initialized the thread object via thread_attachThis(). Otherwise just discard the reference and let it clean itself up. The reason for the "do not delete" restriction is because the thread objects are tracked internally by the GC, and they must remain in a correct state until the GC has recognized that they have terminated. Thanks, sounds like I can just do this and the thread will clean itself up: Thread TitleHider = new Thread(DelayedHideTitle); TitleHider.start(); TitleHider = null; Yep.
Re: concurrency problem with pointers
On Jul 18, 2013, at 8:29 AM, evilrat evilrat...@gmail.com wrote: shortly speaking, WINDOW is pointer to window in C library so it's shared, and i need it in another thread to make opengl context current in that thread, but compiler simply doesn't allow me to do anything with that, and i can't change definition since it's 3rd party bindings. any suggestions? In short, you're trying to send a pointer via std.concurrency. The easiest way is to cast it to/from shared. Otherwise, you'd need to make it an opaque type like a size_t rather than a type D can tell is a reference.
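A sketch of the suggested cast, using an opaque stand-in for the real WINDOW binding (the type and worker function here are invented for illustration):

```d
import std.concurrency;

// WINDOW is an opaque stand-in for the real C binding's type.
struct WINDOW;

void worker()
{
    // Cast shared away on receipt to get the plain C pointer back.
    auto win = cast(WINDOW*) receiveOnly!(shared(WINDOW)*)();
    // ... call the C library with `win` here (e.g. make a GL context current) ...
}

void main()
{
    WINDOW* win = null;  // would come from the C library in real code
    auto tid = spawn(&worker);
    // Cast to shared so std.concurrency accepts the pointer as a message.
    send(tid, cast(shared(WINDOW)*) win);
}
```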
Re: Passing a class instance to a thread via spawn()
On Jul 18, 2013, at 4:23 PM, Joseph Rushton Wakeling joseph.wakel...@webdrake.net wrote: Hello all, I have a data structure which is a final class. Once created, the contents of the class will not be mutated (only its const methods will be called). Is there any way to pass this to a thread via spawn() or via message-passing? I've seen instructions to use shared() but if I try and declare the class this way I get errors such as non-shared method is not callable using a shared object and non-shared const method is not callable using a shared object. The ideal would be to mark this object, once created, as immutable -- but trusty methods like assumeUnique() don't work for classes! I'd like to add move semantics of a sort which would work via something very like assumeUnique. For now, cast the class to shared before sending, and cast away shared on receipt. The only other option would be for std.concurrency to copy the class when sent, but this requires serialization support.
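A sketch of the cast-to-shared round trip described above (the Table class is invented for illustration). This is only safe because the instance is never mutated after construction:

```d
import std.concurrency;

// Invented example class: effectively immutable after construction.
final class Table
{
    private int[] data;
    this(int[] d) { data = d; }
    int at(size_t i) const { return data[i]; }
}

void worker()
{
    // Cast shared away on receipt; safe only for never-mutated instances.
    auto t = cast(Table) receiveOnly!(shared Table)();
    assert(t.at(0) == 42);
}

void main()
{
    auto t = new Table([42, 7]);
    auto tid = spawn(&worker);
    send(tid, cast(shared) t);  // cast to shared before sending
}
```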
Re: inverse of std.demangle?
On Jul 11, 2013, at 9:56 AM, Adam D. Ruppe destructiona...@gmail.com wrote: On Thursday, 11 July 2013 at 16:38:48 UTC, Jacob Carlborg wrote: enum string[23] _primitives = [ ... ]; static immutable primitives = _primitives; Cool, a variant of that did work. Thanks! Now it is 100% heap allocation free. Sweet! And to be fair, I'm fine with heap allocation as a failsafe. What's important is that if the user provides a sufficiently large input buffer, then the routine doesn't allocate. That way, the GC and other sensitive parts of the code can use these functions if needed.
Re: pointers, assignments, Garbage Collection Oh My?
On Jul 10, 2013, at 10:45 AM, Namespace rswhi...@googlemail.com wrote: A string in D, and all arrays, is a struct looking like this: struct Array (T) { T* ptr; size_t length; } I always thought it looks like this: struct Array(T) { T* ptr; size_t length, capacity; } Sadly, no. The only way to determine the capacity of an array is to query the GC.
Re: inverse of std.demangle?
On Jul 10, 2013, at 10:44 AM, Timothee Cour thelastmamm...@gmail.com wrote: Thanks much, that's a good start. Template support would definitely be needed as it's so common. This should go in std.demangle (or maybe a new std.mangle). Rather, core.mangle/demangle; it would have to be done in a way that avoided allocating though (similar to core.demangle), to be in core.
Re: Assert failures in threads
On Jul 1, 2013, at 4:04 AM, Joseph Rushton Wakeling joseph.wakel...@webdrake.net wrote: I've noticed that when an assert fails inside a thread, no error message is printed and the program/thread just hangs. Is there any way to ensure that an assertion failure inside a thread does output a message? For the purposes of my current needs, it's fine if it also brings down the whole program, just so long as I'm alerted to what generated the error. If you join the thread, any unhandled exception will be rethrown in the joining thread by default. Or you can have a catch(Throwable) in the top level of your thread function. I thought about adding an overridable unhandled exception filter (I think Java has something like this) but that seemed like it would interact strangely with the join behavior.
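A sketch of the catch(Throwable) approach suggested above, with an invented wrapper function so the pattern is reusable across thread entry points:

```d
import core.thread;
import std.stdio;

// Invented helper: run a thread body and report anything it throws,
// including AssertError, instead of letting it vanish.
void guarded(void delegate() fn)
{
    try
        fn();
    catch (Throwable t)
        stderr.writeln("thread died: ", t.msg);
}

void main()
{
    auto t = new Thread({ guarded({ assert(false, "boom"); }); });
    t.start();
    t.join();  // join() would rethrow an unhandled Throwable; here it was caught
}
```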
Re: Assert failures in threads
On Jul 9, 2013, at 3:33 PM, Jonathan M Davis jmdavisp...@gmx.com wrote: On Tuesday, July 09, 2013 10:39:59 Sean Kelly wrote: If you join the thread, any unhandled exception will be rethrown in the joining thread by default. What about threads which were spawned by std.concurrency? IIRC, those are never explicitly joined. Is catching Throwable in the spawned thread required in order to get stuff like AssertErrors printed? Unfortunately yes. I suppose std.concurrency could do something fancy to propagate uncaught exceptions to the thread's owner as a message (similar to the LinkTerminated message) but it doesn't do so now.
Re: Can someone give me a little program design advice please?
On Jun 16, 2013, at 8:27 AM, Gary Willoughby d...@kalekold.net wrote: I'm writing a little program in D to perform some database operations and have a small question about design. Part of my program watches a log file for changes and this involves code which is wrapped up in a class. So the usage is something like this: auto fileWatcher = new FileWatcher(fileName); fileWatcher.onChange(delegate); fileWatcher.start(); Once the start method is called a loop is entered within the class and the file is watched. Changes are handled through calling the registered delegate. The loop uses different watch methods for different platforms. What I need to be able to do is to stop the current watch and change the watched file. Because this is in an infinite loop, I can't check externally, i.e. outside of the class, if I need to break from the loop, simply because control never returns to the caller of the start() method. Am I missing something simple here? Any advice is welcome. I thought about threading and message passing but that's maybe overkill for something as simple as this? Some form of concurrency is probably what you want here. But be aware that your delegate may have to be made thread-safe, depending on what it does. Regan suggested using Thread, and you can use spawn as well: import std.concurrency; import std.conv; import std.datetime; import std.stdio; import core.thread; void main() { auto tid = spawn(&fileWatcher, "file0"); foreach (i; 1 .. 5) { Thread.sleep(dur!"msecs"(300)); // sleep for a bit to simulate work send(tid, "file" ~ to!string(i)); } } void fileWatcher(string fileName) { while (true) { receiveTimeout(dur!"msecs"(0), (string n) { fileName = n; }); writefln("checking %s", fileName); Thread.sleep(dur!"msecs"(100)); // sleep for a bit to simulate work } writeln("bye!"); }
Re: Can someone give me a little program design advice please?
On Jun 19, 2013, at 12:54 PM, Ali Çehreli acehr...@yahoo.com wrote: On 06/19/2013 11:46 AM, Sean Kelly wrote: Thread.sleep(dur!msecs(300)); Totally unrelated but there has been some positive changes. :) The following is much better: Thread.sleep(300.msecs); Hooray for UFCS!
Re: GC dead-locking ?
On Jun 18, 2013, at 7:01 AM, Marco Leise marco.le...@gmx.de wrote: On Mon, 17 Jun 2013 at 10:46:19 -0700, Sean Kelly s...@invisibleduck.org wrote: On Jun 13, 2013, at 2:22 AM, Marco Leise marco.le...@gmx.de wrote: Here is an excerpt from a stack trace I got while profiling with OProfile: #0 sem_wait () from /lib64/libpthread.so.0 #1 thread_suspendAll () at core/thread.d:2471 #2 gc.gcx.Gcx.fullcollect() (this=...) at gc/gcx.d:2427 #3 gc.gcx.Gcx.bigAlloc() (this=..., size=16401, poolPtr=0x7fc3d4bfe3c8, alloc_size=0x7fc3d4bfe418) at gc/gcx.d:2099 #4 gc.gcx.GC.mallocNoSync (alloc_size=0x7fc3d4bfe418, bits=10, size=16401, this=...) gc/gcx.d:503 #5 gc.gcx.GC.malloc() (this=..., size=16401, bits=10, alloc_size=0x7fc3d4bfe418) gc/gcx.d:421 #6 gc.gc.gc_qalloc (ba=10, sz=optimized out) gc/gc.d:203 #7 gc_qalloc (sz=optimized out, ba=10) gc/gc.d:198 #8 _d_newarrayT (ti=..., length=4096) rt/lifetime.d:807 #9 sequencer.algorithm.gzip.HuffmanTree.__T6__ctorTG32hZ.__ctor() (this=..., bitLengths=...) sequencer/algorithm/gzip.d:444 Two more threads are alive, but waiting on a condition variable (i.e. in pthread_cond_wait()), but from my own and not from druntime code. Is there some obvious way I could have dead-locked the GC? Or is there a bug? I assume you're running on Linux, which uses signals (SIGUSR1, specifically) to suspend threads for a collection. So I imagine what's happening is that your thread is trying to suspend all the other threads so it can collect, and those threads are ignoring the signal for some reason. I would expect pthread_cond_wait to be interrupted if a signal arrives though. Have you overridden the signal handler for SIGUSR1? No, I have not overridden the signal handler. I'm aware of the fact that signals make pthread_cond_wait() return early and put them in a while loop as one would expect, that is all. Hrm... Can you trap this in a debugger and post the stack traces of all threads?
That stack above is a thread waiting for others to say they're suspended so it can collect.
Re: Fibers vs Async/await
Fibers don't actually execute asynchronously. They represent an alternate execution context (code and stack) but are executed by the thread that calls them, and control is returned when they either yield or complete. This video is a good introduction to fibers: http://vimeo.com/1873969 On Jun 15, 2013, at 10:54 AM, Jonathan Dunlap jad...@gmail.com wrote: *bump* On Tuesday, 11 June 2013 at 19:57:27 UTC, Jonathan Dunlap wrote: I was listening to one of the DConf sessions and where was some talk about implementing async from C# into D someday in the far future. Recently I learned about D's fibers... and it looks like the same thing to me. What are the major differences in principle? -Jonathan @jonathanAdunlap
Re: GC dead-locking ?
On Jun 13, 2013, at 2:22 AM, Marco Leise marco.le...@gmx.de wrote: Here is an excerpt from a stack trace I got while profiling with OProfile: #0 sem_wait () from /lib64/libpthread.so.0 #1 thread_suspendAll () at core/thread.d:2471 #2 gc.gcx.Gcx.fullcollect() (this=...) at gc/gcx.d:2427 #3 gc.gcx.Gcx.bigAlloc() (this=..., size=16401, poolPtr=0x7fc3d4bfe3c8, alloc_size=0x7fc3d4bfe418) at gc/gcx.d:2099 #4 gc.gcx.GC.mallocNoSync (alloc_size=0x7fc3d4bfe418, bits=10, size=16401, this=...) gc/gcx.d:503 #5 gc.gcx.GC.malloc() (this=..., size=16401, bits=10, alloc_size=0x7fc3d4bfe418) gc/gcx.d:421 #6 gc.gc.gc_qalloc (ba=10, sz=optimized out) gc/gc.d:203 #7 gc_qalloc (sz=optimized out, ba=10) gc/gc.d:198 #8 _d_newarrayT (ti=..., length=4096) rt/lifetime.d:807 #9 sequencer.algorithm.gzip.HuffmanTree.__T6__ctorTG32hZ.__ctor() (this=..., bitLengths=...) sequencer/algorithm/gzip.d:444 Two more threads are alive, but waiting on a condition variable (i.e. in pthread_cond_wait()), but from my own and not from druntime code. Is there some obvious way I could have dead-locked the GC? Or is there a bug? I assume you're running on Linux, which uses signals (SIGUSR1, specifically) to suspend threads for a collection. So I imagine what's happening is that your thread is trying to suspend all the other threads so it can collect, and those threads are ignoring the signal for some reason. I would expect pthread_cond_wait to be interrupted if a signal arrives though. Have you overridden the signal handler for SIGUSR1?
Re: Mac OS crash, details inside...
On Jun 14, 2013, at 2:49 AM, Gary Willoughby d...@kalekold.net wrote: In fact i have the same problem reading files too. It only reads files up to a certain amount of bytes then crashes in the same manner explained above. Again this only happens when the program runs as a daemon. Run as a daemon how?
Re: What sync object should i use?
On Wednesday, 15 May 2013 at 15:35:05 UTC, Juan Manuel Cabo wrote: It sounds like you need to: 1) use a Message Queue. 2) Copy the message while you work on it with the consumer, so that you can exit the mutex. At which point I'll suggest considering std.concurrency instead of rolling everything yourself.
Re: What sync object should i use?
I'm doing this from my phone so please bear with me. You use a mutex in combination with a condition variable so you can check the state of something to determine if waiting is necessary. So the classic producer/consumer would be something like: T get() { synchronized (c.mutex) { while (q.isEmpty) c.wait(); return q.take(); } } void put(T val) { synchronized (c.mutex) { q.add(val); c.notify(); } } You can emulate a Win32 event that stays flipped by checking/setting a bool or int inside the mutex.
Re: What sync object should i use?
On May 14, 2013, at 9:09 AM, Heinz thor...@gmail.com wrote: On Monday, 13 May 2013 at 21:04:23 UTC, Juan Manuel Cabo wrote: There is one thing that should definitely added to the documentation, and that is what happens when one issues a notify while the thread hasn't yet called Condition.wait(). I can confirm that under Win32 calling notify() before wait() internally signals the condition and then calling wait() returns immediately and actually does not wait. This is the expected behavior and is actually how Win32 events work. A Win32 event can be simulated basically like so: class Event { Condition c; bool signaled = false; this() { c = new Condition(new Mutex); } void wait() { synchronized (c.mutex) { while (!signaled) c.wait(); signaled = false; } } void notify() { synchronized (c.mutex) { signaled = true; c.notify(); } } } auto e = new Event; T get() { while (true) { e.wait(); // A -- race here synchronized (m) { if (!m.isEmpty()) return m.take(); } } } void put(T val) { synchronized(m) { m.add(val); } e.notify(); } You'll notice the redundancy here though to combat the race at point A. Generally, you really want to avoid the Win32 model and use the mutex/condition pair directly with your container or whatever. On Tuesday, 14 May 2013 at 08:58:31 UTC, Dmitry Olshansky wrote: Have to lock it otherwise you have a race condition on a condition variable (wow!). Ok, i'll lock it just in case. It also makes me feel my code is more robust. This will do right? ... synchronized(cond.mutex) cond.notify(); … Yep. My internal bool variable that affects the condition (the one that decides if the consumer thread should wait) must be setable at any moment by any thread so i leave it outside the lock. Also, after setting this variable i immediately call notify() with mutex unlocked. That's why it is working i think. I don't understand what you mean here. If the variable is protected by a lock it can still be set by any thread at any time. Just only one thread at a time. 
Doing a lock-free write of the variable basically just means that the variable will probably be set eventually, which is rarely what you actually want.
Re: What sync object should i use?
On May 14, 2013, at 10:59 AM, Heinz thor...@gmail.com wrote: Guys, this is a precise example of what I'm trying to do. You'll notice that there are 2 ways of waking up the consumer: Condition cond; // Previously instantiated. bool loop = false; // This variable determines if the consumer should take the next iteration or wait until a thread calls SetLoop(true). Object my_variable; // A value used for communicating between producer and consumer. It's not a prerequisite to be set for an iteration to happen. Let's back up a minute. Can you be more specific about what you're trying to do? I think you shouldn't need to use the loop var at all, but I'm not entirely sure. Also, loop and my_variable will be thread-local given how they're declared.
Re: What sync object should i use?
On May 14, 2013, at 12:02 PM, Dmitry Olshansky dmitry.o...@gmail.com wrote: 14-May-2013 21:02, Steven Schveighoffer wrote: But since you have to lock anyway, signaling while holding the lock, or while being outside the lock isn't really a difference. On the level of gut feeling there must be something about it, as you don't see: synchronized(m){ // ... send message } notify(); any time of day. And hoisting work out of the mutex seems natural, doesn't it? If you move the notify out of the mutex then you can end up with multiple threads competing for the same value. Say the producer puts a value in the queue, leaves the mutex, and notifies a waiting thread. Then consumer A enters the mutex, sees that there's something in the container and takes it, then consumer B receives the notification and wakes up to discover that the container is empty. So long as your wait loops are done properly the only bad result will be pointless wakeups, but worst case you could have a crash or exception if you're removing data from the container without checking if it's empty.
Re: Tricky code with exceptions
For what it's worth, this runs fine on 64-bit OSX.
Re: WinAPI callbacks and GC
On May 2, 2013, at 6:17 AM, Regan Heath re...@netmail.co.nz wrote: On Wed, 01 May 2013 01:12:39 +0100, Sean Kelly s...@invisibleduck.org wrote: On Apr 23, 2013, at 2:21 PM, Jack Applegame jappleg...@gmail.com wrote: According WinAPI documentation, CtrlHandler will be called in new additional thread. Is it safe to allocate GC memory in NOT Phobos threads? If not, how to make it safe? I'm trying call thread_attachThis() at the beginning of CtrlHandler fucntion, but it doesn't compile because thread_attachThis() is not no throw. thread_attachThis should probably just be labeled nothrow. I don't think there's anything in that function that can throw an Exception. That makes it callable.. but did you see my post about the various timing issues with using this in a non-GC thread (potentially while the GC is already collecting - or similar). The GC holds a lock on the global thread list while collecting, so it shouldn't be possible for thread_attachThis to register a thread when this is happening. In fact, thread_attachThis even temporarily disables the GC, and since this operation is protected by the GC lock, it's blocked there as well.
Re: Dispatching values to handlers, as in std.concurrency
On May 6, 2013, at 10:03 AM, Luís Marques luismarq...@gmail.com wrote: How can I also accept subclasses? How are the messages stored? std.concurrency uses Variant.convertsTo.
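A quick illustration of how convertsTo behaves with class hierarchies (Base/Derived are made-up names, not anything from std.concurrency):

```d
import std.variant : Variant;

class Base {}
class Derived : Base {}

void main()
{
    // A Derived instance stored in a Variant still matches a Base check,
    // which is how a handler taking Base can receive subclass messages.
    Variant v = Variant(new Derived);
    assert(v.convertsTo!Derived);
    assert(v.convertsTo!Base);
    assert(!v.convertsTo!int);
}
```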
Re: WinAPI callbacks and GC
On Apr 23, 2013, at 2:21 PM, Jack Applegame jappleg...@gmail.com wrote: According to the WinAPI documentation, CtrlHandler will be called in a new additional thread. Is it safe to allocate GC memory in non-Phobos threads? If not, how do I make it safe? I'm trying to call thread_attachThis() at the beginning of the CtrlHandler function, but it doesn't compile because thread_attachThis() is not nothrow. thread_attachThis should probably just be labeled nothrow. I don't think there's anything in that function that can throw an Exception.
Re: When to call setAssertHandler?
On Mar 26, 2013, at 11:37 AM, Benjamin Thaut c...@benjamin-thaut.de wrote: On 25.03.2013 23:49, Sean Kelly wrote: On Mar 22, 2013, at 2:58 AM, Benjamin Thaut c...@benjamin-thaut.de wrote: So I want to install my own assertHandler. The problem is that even if I call setAssertHandler in a shared module constructor, and that module does not import any other modules, it is still not initialized first. Is there a way to set the assert handler before any module constructors run without hacking druntime? I'm afraid not. I suppose this is a case I should have handled, but it didn't occur to me at the time. Do you have some idea how to solve this in a generic way? Is this even possible without adding another feature to the language? I'd have to give it some thought. My first idea for solving the problem (looking for a specifically named function using dlsym) should work, but it seems like a hack. The easiest fix without any library changes would be to have all the modules in your app that you want to run after the hook is set import the module that initializes the hook.
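The import trick would look something like this (the module name "hooks" is arbitrary; the point is that a module's shared static this runs before those of any module that imports it):

```d
// hooks.d -- every module whose constructors should see the custom
// handler imports this module, forcing this ctor to run first.
module hooks;

import core.exception : setAssertHandler;

shared static this()
{
    setAssertHandler((file, line, msg)
    {
        // custom reporting / logging here instead of the default
        // AssertError behavior
    });
}
```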
Re: Exiting blocked threads (socket.accept)
Have each thread select() on the read end of a pipe that the main thread writes to when it wants to trigger a wakeup--write() is legal even in signal handlers.
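A sketch of that self-pipe arrangement (POSIX-only; `waitReadable`/`wakeAll` are made-up names for illustration):

```d
import core.sys.posix.unistd : pipe, write;
import core.sys.posix.sys.select;

__gshared int[2] wakePipe; // call pipe(wakePipe) once at startup

// Worker thread: block until the socket is readable *or* a wakeup
// arrives on the pipe. Returns false when a shutdown was requested.
bool waitReadable(int sock)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(sock, &fds);
    FD_SET(wakePipe[0], &fds);
    int maxfd = sock > wakePipe[0] ? sock : wakePipe[0];
    select(maxfd + 1, &fds, null, null, null);
    return !FD_ISSET(wakePipe[0], &fds);
}

// Main thread (or a signal handler -- write() is async-signal-safe):
void wakeAll()
{
    ubyte b = 0;
    write(wakePipe[1], &b, 1);
}
```

Each worker checks the return value before calling accept(), so a blocked thread can be released without resorting to thread cancellation.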
Re: Can std.json handle Unicode?
On Mar 23, 2013, at 5:22 AM, Jacob Carlborg d...@me.com wrote: I'm wondering because I see that std.json uses isControl, isDigit and isHexDigit from std.ascii and not std.uni. This also causes a problem with a pull request I recently made for std.net.isemail. In one of its unit tests the DEL character (127) is used. According to std.ascii.isControl this is a control character, but not according to std.uni.isControl. This will cause the test suite for the pull request not to be run, since std.json chokes on the DEL character. I don't know about control characters, but std.json doesn't handle UTF-16 surrogate pairs in strings, which are legal JSON. I *think* it does properly handle the 32-bit code points though.
Re: When to call setAssertHandler?
On Mar 22, 2013, at 2:58 AM, Benjamin Thaut c...@benjamin-thaut.de wrote: So I want to install my own assertHandler. The problem is that even if I call setAssertHandler in a shared module constructor, and that module does not import any other modules, it is still not initialized first. Is there a way to set the assert handler before any module constructors run without hacking druntime? I'm afraid not. I suppose this is a case I should have handled, but it didn't occur to me at the time.
Re: Concurrency and program speed
Does the laptop really have 4 cores or is it 2 cores with hyperthreading? My guess is the latter, and that will contribute to the timing you're seeing. Also, other things are going on in the system. Do larger jobs show a better or worse speedup? On Feb 28, 2013, at 6:15 AM, Joseph Rushton Wakeling joseph.wakel...@webdrake.net wrote: Hello all, I'm in need of some guidance regarding std.concurrency. Before writing further, I should add that I'm an almost complete novice where concurrency is concerned, in general and particularly with D: I've written a few programs that made use of std.parallelism but that's about it. In this case, there's a strong need to use std.concurrency because the functions that will be run in parallel involve generating substantial quantities of random numbers. AFAICS std.parallelism just isn't safe for that, in a statistical sense (no idea how it might slow things down in terms of shared access to a common rndGen). Now, I'm not naive enough to believe that using n threads will simply result in the program runtime being divided by n. However, the results I'm getting with some simple test code (attached) are curious and I'd like to understand better what's going on. The program is simple enough: foreach(i; iota(n)) spawn(randomFunc, m); ... where randomFunc is a function that generates and sums m different random numbers. For speed comparison one can do instead, foreach(i; iota(n)) randomFunc(m); With m = 100_000_000 being chosen for my case. Setting n = 2 on my 4-core laptop, the sequential case runs in about 4 s; the concurrent version using spawn() runs in about 2.2 s (the total amount of user time given for the sequential programs is about 4 s and about 4.3 s respectively). So, roughly half speed, as you might expect. Setting n = 3, the sequential case runs in about 6 s (surprise!), the concurrent version in about 3 (with about 8.1 s of user time recorded). 
In other words, the program speed is only half that of the sequential version, even though there's no shared data and the CPU can well accommodate the 3 threads at full speed. (In fact 270% CPU usage is recorded, but that should still see a faster program.) Setting n = 4, the sequential case runs in 8 s, the concurrent in about 3.8 (with 14.8 s of user time recorded), with 390% CPU usage. In other words, it doesn't seem possible to get more than about 2 * speedup on my system from using concurrency, even though there should not be any data races or other factors that might explain slower performance. I didn't expect speed / n, but I did expect something a little better than this -- so can anyone suggest what might be going on here? (Unfortunately, I don't have a system with a greater number of cores on which to test with greater numbers of threads.) The times reported here are for programs compiled with GDC, but using LDC or DMD produces similar behaviour. Can anyone advise? Thanks best wishes, -- Joe concur.d
Re: GC free Writer
On Feb 8, 2013, at 7:57 AM, David d...@dav1d.de wrote: I am currently implementing a logging module. I want to make logging to stderr/stdout/any file possible, also during runtime shutdown (logging from dtors). At the moment it looks like this: void log(LogLevel level, Args...)(Args args) { string message = format(args); ... pass string to writer } --- But format allocates! That makes it throw an InvalidMemoryOperationError when calling the logging function from a dtor. So I need a GC-free writer for std.format.formattedWrite, similar to std.stdio's LockingTextWriter but with a buffer instead of a file (a GC-free std.array.Appender, something like that). I couldn't come up with a working solution; I hope you have ideas. Does your IO layer require one call per log line, or can you do multiple writes followed by an end-of-log-line terminator? Assuming the former, the most obvious approach would be for the writer to have a static array equal to the max log line size, accumulate until done, and then issue one write to the output layer.
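Something along these lines (a sketch: `maxLogLine` and the truncation policy are assumptions, and formattedWrite itself may still allocate for some conversions, but the intermediate string is gone):

```d
import core.stdc.stdio : fwrite, stderr;
import std.format : formattedWrite;

enum maxLogLine = 2048; // assumed max log line size

struct FixedWriter
{
    char[maxLogLine] buf;
    size_t len;

    // Output-range interface used by formattedWrite; no GC activity.
    void put(scope const(char)[] s) nothrow
    {
        auto n = s.length;
        if (len + n > buf.length)
            n = buf.length - len; // silently truncate on overflow
        buf[len .. len + n] = s[0 .. n];
        len += n;
    }
}

void log(Args...)(in char[] fmt, Args args)
{
    FixedWriter w;
    formattedWrite(&w, fmt, args); // pass by pointer so writes stick
    fwrite(w.buf.ptr, 1, w.len, stderr); // one write per log line
}
```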
Re: Understanding the GC
On Jan 31, 2013, at 11:07 PM, monarch_dodra monarchdo...@gmail.com wrote: On Thursday, 31 January 2013 at 23:53:26 UTC, Steven Schveighoffer wrote: A destructor should ONLY be used to free up resources other than GC allocated memory. Because of that, it's generally not used. It should be used almost as a last resort. For example, a class that holds a file descriptor should have both a destructor (which closes the descriptor) and a manual close method. The former is to clean up the file descriptor in case nobody thought to close it manually before all references were gone, and the latter is because file descriptors are not really managed by the GC, and so should be cleaned up when they are no longer used. This kind of gives us a paradox: since the class is managed via the GC, how do you know it's no longer used (that is, how do you know this is the last reference to it)? That is really up to the application design. But I wouldn't recommend relying on the GC to clean up your descriptors. -Steve I've actually run into this very issue: I was iterating on files, opening them, and placing the descriptor in GC-allocated RAII data. I can't remember if it was a class or a struct, but that's not a big issue. Come to think of it, I think I was using File, but allocating them because I thought they were classes: `auto f = new File("my file", "r")`. After running for a second, my program halts, because an exception was thrown trying to open a new file: "Cannot open file: Too many open file handles." It was basically: sure, the GC will destroy and close files for you... if you forget... eventually... I ended up closing them in scope(exit) blocks. Problem immediately solved. Or I could have stopped allocating my Files on the heap. Either way, it shows you shouldn't rely on the GC for deterministic destruction. The GC currently doesn't finalize structs, only classes. So that's an issue as well.
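The scope(exit) fix mentioned above, with File kept as a value type rather than allocated with `new` (sketch; the function name and temp path are made up):

```d
import std.stdio : File;

// Count lines in a file, closing the descriptor deterministically
// instead of waiting on the GC to finalize it.
size_t countLines(string path)
{
    auto f = File(path, "r");   // value type -- no `new` needed
    scope (exit) f.close();     // runs even if an exception is thrown
    size_t n;
    foreach (line; f.byLine())
        ++n;
    return n;
}
```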
Re: Understanding the GC
GC.reserve can be handy for this. It tells the GC to pre-allocate a block of memory from the OS. On Jan 31, 2013, at 7:12 AM, Steven Schveighoffer schvei...@yahoo.com wrote: On Wed, 30 Jan 2013 03:15:14 -0500, Mike Parker aldac...@gmail.com wrote: My understanding is that the current implementation only runs collections when memory is allocated. Meaning, when you allocate a new object instance, or cause memory to be allocated via some built-in operations (on arrays, for example), the GC will check if anything needs to be collected and will do it at that time. I don't know if it's run on every allocation, or only when certain criteria are met, and I really don't care. That's an implementation detail. The D language itself does not specify any of that. This isn't quite accurate. The GC first checks to see if there is a free block that would satisfy the allocation, and if it can't find one, THEN it runs a collection cycle, and if it then cannot allocate the block from any memory regained, it asks for more memory from the OS. This can lead to the collection cycle running quite a bit when allocating lots of data. I don't know if there are any measures to mitigate that, but there probably should be. -Steve
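For example (the 64 MB figure is arbitrary):

```d
import core.memory : GC;

void main()
{
    // Ask the GC to pre-allocate ~64 MB from the OS up front, so a
    // burst of allocations is less likely to trigger collection cycles.
    auto got = GC.reserve(64 * 1024 * 1024);
    // `got` is the number of bytes actually reserved; it may be less
    // than requested (or zero) if the OS can't satisfy the request.
}
```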
Re: Looking for command for synchronization of threads
On Jan 30, 2013, at 2:58 PM, Sparsh Mittal sparsh0mit...@gmail.com wrote: Background: I am implementing an iterative algorithm in a parallel manner. The algorithm iteratively updates a matrix (2D grid) of data. So I will divide the grid among different threads, which will each work on it for a single iteration. After each iteration, all threads should wait, since the next iteration depends on the previous one. My issue: To achieve synchronization, I am looking for an equivalent of sync in Cilk or cudaEventSynchronize in CUDA. I saw synchronized, but was not sure if that is the answer. Please help me. I will put that command at the end of the for loop and it will be executed once per iteration. I suggest looking at std.parallelism since it's designed for this kind of thing. That aside, all traditional synchronization methods are in core.sync. The equivalent of sync in Cilk would be core.sync.barrier.
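A minimal sketch of the barrier approach for the grid iteration described above (the counters just stand in for the real per-thread grid work):

```d
import core.sync.barrier : Barrier;
import core.thread : Thread, thread_joinAll;
import core.atomic : atomicOp;

enum nThreads = 4;
enum nIters = 3;
shared int steps;

void main()
{
    auto barrier = new Barrier(nThreads);
    foreach (_; 0 .. nThreads)
    {
        new Thread({
            foreach (iter; 0 .. nIters)
            {
                // ... update this thread's slice of the grid ...
                atomicOp!"+="(steps, 1);
                // Block here until all nThreads threads have finished
                // the current iteration, then everyone proceeds.
                barrier.wait();
            }
        }).start();
    }
    thread_joinAll();
    assert(steps == nThreads * nIters);
}
```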