On Apr 12, 2013, at 6:26 PM, James Wirth <jwir...@suddenlink.net> wrote:

> The discussion:
> 
>    
> http://forum.dlang.org/thread/mailman.426.1286264462.858.digitalmar...@puremagic.com?page=9
> 
> implies that:
>   receiveTimeout(dur!"msecs"(0), some-callback-function)
> 
> is acceptable - meaning that no blocking occurs.  A simple experiment 
> verifies this - but I hesitate to use "undocumented" features.  Some APIs 
> would interpret the 0 as infinity.

Then consider it documented.  receiveTimeout(0, …) is intended to work as it 
does currently.  If you want it to block forever, use receive(…).
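To illustrate the semantics, here is a minimal C++ analogue (not the actual std.concurrency code; `try_receive` and its parameters are invented for this sketch): a condition wait with a zero timeout polls and returns immediately when the mailbox is empty, rather than treating 0 as "wait forever".

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical poll, analogous to receiveTimeout(0, ...): with a zero
// timeout, wait_for returns immediately when no message is available --
// it never blocks, and never interprets 0 as infinity.
bool try_receive(std::queue<int>& mailbox, std::mutex& m,
                 std::condition_variable& cv, int& out,
                 std::chrono::milliseconds timeout)
{
    std::unique_lock<std::mutex> lock(m);
    if (!cv.wait_for(lock, timeout, [&] { return !mailbox.empty(); }))
        return false;            // timed out (instantly, for timeout == 0)
    out = mailbox.front();
    mailbox.pop();
    return true;
}
```

Calling this with a zero timeout on an empty queue returns false without blocking; with a message already queued, it consumes it and returns true.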


> I also fear that placing such a nonblocking receive into the main event loop 
> of a GUI program would impact performance - it would also be non-generic.  Is 
> there a fast function which returns true just when the "mail box" is 
> non-empty?

The receive calls are as lightweight as I can make them.  In essence:

1. Walk a thread-local list of received messages looking for a match.  If 
found, pass to the callback, remove the message, and return.
2. Lock a shared list of recently received messages.  If the list is empty, 
block on a condition variable until a message is received, then go to step 3.
3. Move all messages from the shared list to a local list and release the lock.
4. Walk this new list of messages looking for a match.  If found, pass it to 
the callback, remove the message, append the remaining new messages to the 
thread-local list, and return.
5. If no match, append these new messages to the thread-local list and go to 
step 2.
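The steps above can be sketched in C++ as follows.  This is an illustrative analogue, not std.concurrency's actual implementation; the names (`Mailbox`, `match`, `fresh`) are invented, and messages are simplified to `int`.

```cpp
#include <condition_variable>
#include <functional>
#include <list>
#include <mutex>

// Illustrative two-list mailbox mirroring steps 1-5.
struct Mailbox {
    std::list<int> local;    // thread-local messages: no lock needed (step 1)
    std::list<int> shared;   // written by senders, guarded by m (steps 2-3)
    std::mutex m;
    std::condition_variable cv;

    // Blocks until a message satisfying `match` is found and consumed.
    bool receive(const std::function<bool(int)>& match) {
        // Step 1: scan the thread-local list first -- no mutex required.
        for (auto it = local.begin(); it != local.end(); ++it)
            if (match(*it)) { local.erase(it); return true; }

        for (;;) {
            std::list<int> fresh;
            {
                // Step 2: lock the shared list; block while it is empty.
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return !shared.empty(); });
                // Step 3: move everything to a private list, drop the lock.
                fresh.splice(fresh.end(), shared);
            }
            // Step 4: scan the fresh messages for a match.
            for (auto it = fresh.begin(); it != fresh.end(); ++it)
                if (match(*it)) {
                    fresh.erase(it);
                    local.splice(local.end(), fresh);
                    return true;
                }
            // Step 5: no match -- keep the messages and wait again.
            local.splice(local.end(), fresh);
        }
    }

    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m); shared.push_back(msg); }
        cv.notify_one();
    }
};
```

Note that a second call looking for a message already moved to `local` completes in step 1 without ever touching the mutex.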

So if the message you want is already present in your local message queue, 
receive doesn't even need to acquire a mutex.  If not, it acquires the mutex 
for just as long as it takes to move the new messages from the shared list to 
the local list (basically reassigning a few pointers).  If no match anywhere, 
then it will block for as long as indicated, either forever for receive() or 
until the timeout has elapsed with receiveTimeout().
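The "reassigning a few pointers" part corresponds to a linked-list splice. A small C++ sketch (the `drain` helper is invented for illustration):

```cpp
#include <list>

// Move every pending message from `shared` to the back of `local`.
// std::list::splice relinks the nodes in O(1), independent of how many
// messages moved, so a lock guarding `shared` need only be held for
// this single call.
void drain(std::list<int>& shared, std::list<int>& local) {
    local.splice(local.end(), shared);
}
```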

By the way, I just noticed that the receiveTimeout() version uses the same 
timeout for each condvar wait when it should be reducing it on each iteration 
to ensure that the maximum wait time is as indicated.  This is a bug and needs 
to be fixed.  And because of how Condition is implemented, this will mean a 
kernel call to determine time elapsed on each iteration where a message was 
received before the timeout.  So this case at least will be a bit less optimal 
than what could be done targeting Posix specifically.
