I think this topic has pretty much run its course, but maybe I'll add my 
summary in hopes that it reinforces the already correct portions of the 
other emails.

As in general relativity, you really need a frame of reference when 
talking about synchronous versus asynchronous.

It is possible to make nodes that return a status code of "operation 
started", and return a notification object or expect the caller to poll 
to detect the completion of the operation.  These are difficult to use, 
since they require several coordinated steps by the caller, so 
typically, this asynchronous mechanism is wrapped in a synchronous 
mechanism that yields.  TCP, DDE, and most forms of I/O do this.  You simply 
call read and it returns with the data, so it is synchronous from the 
caller's point of view.  In its implementation, it can use synchronous 
or asynchronous mechanisms.
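That wrapping pattern can be sketched in Python (all names here are hypothetical, and threading.Event is only a stand-in for whatever notification object the runtime actually uses): the operation returns "started" immediately, offers both polling and a notification object, and a synchronous wrapper hides the coordination from the caller.

```python
import threading

# Hypothetical async API sketch: starting the operation returns at once
# ("operation started"); callers either poll done() or wait on the
# notification Event to detect completion.
class AsyncOperation:
    def __init__(self, work, *args):
        self.done_event = threading.Event()   # notification object
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(work, args))
        self._thread.start()                  # kick off work; return at once

    def _run(self, work, args):
        self._result = work(*args)
        self.done_event.set()                 # signal completion

    def done(self):                           # polling interface
        return self.done_event.is_set()

    def result(self):
        return self._result

# Synchronous wrapper: to its caller this is a plain blocking read, even
# though it is built entirely on the asynchronous mechanism above.
def sync_read(work, *args):
    op = AsyncOperation(work, *args)
    op.done_event.wait()                      # cooperative wait, no busy loop
    return op.result()
```

The several coordinated steps (start, hold the token, wait, fetch the result) all live inside sync_read, which is why the synchronous view is what most callers want.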

One possibility is that it is built on top of asynchronous mechanisms 
where it starts something, then waits on an occurrence or polls using 
one of several wait mechanisms.  Again, to the caller, it is synchronous 
and correct.  The differences in implementation mostly affect the 
ability for other tasks to proceed and the CPU usage.

I think there are four choices here, but many of them are very similar.

Occurrences, LV Wait Node -- both of these allow the OS thread making the 
call to be used for other LV or OS tasks.  It is a cooperative yield of 
the thread.  The Wait will regain control at some time following the 
timeout, and the occurrence will regain control at the timeout or before 
if signaled.  There will be a latency between the completion of the task 
and the notification or detection.  This is the only system that works 
well on single threaded systems, and when the code must be called in the 
UI thread.
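The occurrence contract described here -- regain control at the timeout, or earlier if signaled -- can be sketched with a Python threading.Event.  This is only an analogy: unlike an LV occurrence, Event.wait() parks the whole OS thread rather than cooperatively reusing it.

```python
import threading
import time

# Stand-in for an LV occurrence.  Event.wait(timeout) returns at the
# timeout, or earlier if the event is signaled first.
occurrence = threading.Event()

threading.Timer(0.05, occurrence.set).start()  # signal after ~50 ms

start = time.monotonic()
signaled = occurrence.wait(timeout=1.0)        # regains control when signaled
elapsed = time.monotonic() - start             # well under the 1 s timeout
```

The gap between occurrence.set() firing and wait() returning is the notification latency the summary below talks about.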

OS sleep -- uSleep(), mSleep(), Sleep(), threadSleep(), are all done at 
the OS level and LV has no control over what happens to the CPU.  The 
thread is blocked at the sleep call and cannot proceed.  Any other task 
needing this thread is blocked.  The CPU is not blocked and can carry 
out work for other OS threads.  This is also true of messages, and other 
OS level notification objects.  As Rolf pointed out, LV can and now does 
have multiple threads per execution system, and this means that up to 
three I/O tasks can block a thread, and the diagram will continue to 
execute nodes in that execution system.  One more block and something 
has to complete first.  Like occurrences and LV Wait node, this adds 
latency between the task completion and the notification or detection of 
completion.
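A small sketch of the sleep behavior, again with hypothetical names: the polling thread is stuck at the sleep call and can do nothing else, but the CPU stays free to run a second thread in the meantime, and detection of completion lags by up to one sleep interval.

```python
import threading
import time

done = threading.Event()
progress = []   # written by a second thread while the poller sleeps

def other_task():
    # Stands in for other OS threads: the CPU is free to run this work
    # while the polling thread is parked inside sleep().
    while not done.is_set():
        progress.append(1)
        time.sleep(0.005)

def sleep_poller(interval, limit):
    # The polling thread is blocked at the sleep call and cannot proceed;
    # completion is detected up to one full interval after it happens.
    for _ in range(limit):
        time.sleep(interval)
        if done.is_set():
            return True
    return False

helper = threading.Thread(target=other_task)
helper.start()
threading.Timer(0.05, done.set).start()   # "I/O" completes after ~50 ms
detected = sleep_poller(0.01, 100)
helper.join()
```

Anything else scheduled onto the poller's thread would be blocked for the duration, which is the cost the paragraph above describes.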
 
Busy loop -- this is really just a special case of the above, sort of like 
calling Sleep(0).  The thread can't do anything else, and the OS will 
occasionally switch to other threads, but much less frequently than if 
you sleep or use a notification object.  This may minimize the latency 
between completion and detection, but by using most or all CPU cycles to 
do so.
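The busy-loop tradeoff looks like this in the same sketch style: the spin detects completion almost immediately, but only because the thread burns CPU continuously until the flag flips.

```python
import threading

done = threading.Event()

def busy_wait():
    # Spin with no sleep or wait: detection latency is minimal, but this
    # thread consumes a full core and yields only when the OS preempts it.
    spins = 0
    while not done.is_set():
        spins += 1
    return spins

threading.Timer(0.01, done.set).start()   # completion arrives after ~10 ms
spins = busy_wait()                       # large spin count = wasted cycles
```

The spin count is a rough proxy for the CPU cycles spent purely on detection, which is what the notification-based mechanisms above avoid.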

So really, we have a typical tradeoff.  We want synchronous wrappers 
around asynchronous mechanisms that have a reasonable latency.  
Occurrences are implemented on the OS notifications, so their latency is 
the OS latency plus a small constant.  The Wait is built on the OS sleep 
in certain cases in RT, but on an occurrence in the normal cases.  
Polling systems can have less latency than a notification, but usually 
at the expense of more CPU usage.

I believe DAQmx uses sleeps and notifications from the OS trying to 
balance CPU usage and latency.  The more useful feedback would be under 
what circumstances the balance is wrong.  When is the latency so big that 
it lowers throughput, or when is the CPU usage too high for a 
low-throughput I/O task?  DAQmx isn't built on LV, and doesn't really need 
to use LV waits or occurrences to release the calling LV thread as long 
as there are other threads to carry out the tasks you want to proceed in 
parallel.  This is different from older DAQ where the balance was often 
out of whack and one mechanism existed to yield the CPU and thread back 
to LV.

Greg McKaskle

