RE: DAQmx, etc.

2004-05-26 Thread Rolf Kalbermatter
Jason Dunham [EMAIL PROTECTED] wrote:

Other synchronous nodes may block the entire thread, preventing any
other LV tasks in that thread from executing.  Remember that we LV
programmers can choose execution systems but we don't have direct
control over threads. We are also limited in that parallel code on the
same diagram is always going to end up in the same execution system,
which probably puts it in the same thread. The term we LV programmers
use for these kinds of nodes is pure evil.

There are plenty of nodes in LV which appear synchronous to LV
programmers (nearly all the wait functions, queue functions, etc), but
don't block the LV execution system from  executing parallel tasks in
the same VI.  That's what we want from our DAQ and GPIB calls too, but
we're at the mercy of the NI engineers who create the LabVIEW node.  I'd
guess that these kinds of nodes have to be internally asynchronous, so
they don't block the thread, but must contain code to wait until the
spawned thread finishes its task so that they appear synchronous to us.

Well, there are several ways of achieving parallelism in LabVIEW. One is indeed
the LabVIEW task scheduler itself. It can only be used by LabVIEW nodes
themselves and has been in LabVIEW since at least 3.0, probably already in 2.0.
Most nodes in LabVIEW are synchronous with respect to this scheduler, with the
exception of the Wait and similar functions and some other low-level nodes such
as VISA Read/Write, TCP Read/Write, the obsolete Device nodes, etc. Those nodes
internally use callbacks and occurrences to allow asynchronous operation, and
they also interface with the scheduler to tell it that they are waiting for
something and that other work should get priority.
The Call Library Node and CIN cannot really make use of such things. (Well,
they could, but NI would have to document a very complicated API for
interfacing with that scheduler, which I'm sure has a very delicate mechanism
easily affected by new features in LabVIEW. Second, the C code would need to
take very specific precautions that would be very difficult to document
completely.) As such, I'm sure NI has not the slightest interest in using such
an interface even in its own external-code drivers, as it would make such a
driver far too dependent on the actual LabVIEW version.

So CLNs and CINs are synchronous in that they block the calling thread
in LabVIEW. This is almost unavoidable, as LabVIEW has no way of knowing
what external code is doing while it executes. For all LabVIEW knows, it
could be doing horrible things to the stack and only restoring it properly
just before returning. If LabVIEW tried to preempt such external code,
even a BSOD would be a possibility.
The big difference a CLN or CIN can make: if the code is safely written to
be reentrant, you can configure the corresponding LabVIEW node to call it
reentrantly. This makes a difference in LabVIEW because reentrant external
code is called in the current execution system, while non-reentrant
external code is called in the UI execution system.

Before LabVIEW 7.0 this still had a limitation, as LabVIEW by default
allocated only one thread per execution system. That was acceptable since you
blocked only one of the many execution systems, but you had to be careful when
assigning execution systems to VIs to account for any external code that
might block an execution system for some time. In LabVIEW 7 this has been
increased to four threads, I believe (except for the UI system, which still
uses only one thread so that non-reentrant code is safe to run there). Of
course, non-reentrant external code will use up the single thread available
to the UI system and will also compete with the actual UI drawing itself,
indirectly blocking many other things in LabVIEW, since LabVIEW sometimes has
to wait for the UI thread to finish its work before it can continue executing
diagram code.

So in LabVIEW 7, even though a reentrant CLN blocks the calling thread,
LabVIEW still has more threads left in that execution system to execute other
parts of the diagram that are not dataflow dependent. As such, I have to admit
I'm very impressed by the almost seamless way LabVIEW makes multithreading
simply work.

There is a VI in vi.lib/utilities/sysinfo.llb to change the number of
threads allocated to an execution system. This supposedly also works in
LabVIEW 6.x, but multiple threads in one execution system may not work as
smoothly there as in LabVIEW 7.0.

If the DAQmx nodes in LV don't have this capability, then I'm not sure
DAQmx is the Great Leap Forward I've been led to believe.  We can get
the behavior we want by polling the status of AI Read, so why change to
DAQmx if we still have to implement this workaround to get decent
multitasking from our computers?

The big leap forward in DAQmx is that the entire underlying DAQ framework
is made reentrant, whereas that was not the case for NI-DAQ. Even though I
think the CLNs calling NI-DAQ were mostly configured to be reentrant, the
underlying 

Re: DAQmx, etc.

2004-05-26 Thread Rolf Kalbermatter

While I'm certainly not an NI-488.2 expert, I believe there are both
synchronous and asynchronous functions in NI-488.2.

Not in the 488.2 spec but in the C library and/or the driver and at the
VISA call level. The problem is that there is NO NI-488.2 for OS X
(at the moment). So I am developing sth-488.2 based on the incomplete
488.2 DDK toolkit that NI provides.  The trick is how to move that
asynchronous call in the library (when I get it implemented) into the
synchronous nature of a LabVIEW call into a library.

There are two options for this.

One is using occurrences. You basically call into the external code, passing
it the occurrence refnum, invoke the asynchronous C API function there, and
return to LabVIEW. The C API signals completion either through an OS event or
a callback, and at that point you trigger the occurrence in your C code.
To do this triggering you simply call the Occur(LVRefNum occurrence)
function. This function is not prototyped in extcode.h but is exported
by the labview.lib link library in all versions of LabVIEW.

The LabVIEW diagram waits on the occurrence with a timeout, and on return
of the Wait on Occurrence you either call the external code to retrieve
the data (no timeout occurred) or cancel the operation (a timeout occurred).

This is basically the same thing the VISA Read node, for instance, does
internally, although it does so in the LabVIEW code and not on a diagram. I
have not been able to find the necessary LabVIEW API calls to actually move
all of this into the external code. The big problem is implementing the
Wait on Occurrence in a way that lets LabVIEW know it should do other
things until the occurrence is triggered. It may also be that this is
theoretically doable only in CINs, as CLNs do not have the additional calls,
such as CINAbort, that would allow such external code to be interrupted
from waiting when the user wants to shut down the application.

The other solution is to actually stay in the C code and pass control back
to the OS through OS API functions while you wait for the event to occur.
I believe this is what DAQmx does. This blocks the LabVIEW thread invoking
the CLN but allows LabVIEW and other applications to continue making use
of the CPU. A cosmetic issue with this, at least under Windows, is that in
the Task Manager LabVIEW appears to use 100% of the CPU while waiting this
way, even though everything on the system remains as responsive as usual.

Rolf Kalbermatter
CIT Engineering Nederland BV    tel: +31 (070) 415 9190
Treubstraat 7H                  fax: +31 (070) 415 9191
2288 EG Rijswijk                http://www.citengineering.com
Netherlands                     mailto:[EMAIL PROTECTED]
 





Re: DAQmx, etc.

2004-05-26 Thread Greg McKaskle
I think this topic has pretty much run its course, but maybe I'll add my 
summary in hopes that it reinforces the already correct portions of the 
other emails.

Like general relativity, you really need a frame of reference when 
talking about synchronous versus asynchronous.

It is possible to make nodes that return a status code of operation 
started, and return a notification object or expect the caller to poll 
to detect the completion of the operation.  These are difficult to use, 
since they require several coordinated steps by the caller, so 
typically, this asynchronous mechanism is wrapped in a synchronous 
mechanism that yields.  TCP, DDE, most forms of I/O do this.  You simply 
call read and it returns with the data, so it is synchronous from the 
caller's point of view.  In its implementation, it can use synchronous 
or asynchronous mechanisms.

One possibility is that it is built on top of asynchronous mechanisms 
where it starts something, then waits on an occurrence or polls using 
one of several wait mechanisms.  Again, to the caller, it is synchronous 
and correct.  The differences in implementation mostly affect the 
ability for other tasks to proceed and the CPU usage.

I think there are four choices here, but many of them are very similar.

Occurrences, LV Wait node -- both of these allow the OS thread making the 
call to be used for other LV or OS tasks.  It is a cooperative yield of 
the thread.  The Wait will regain control at some time following the 
timeout, and the occurrence will regain control at the timeout, or before 
if signaled.  There will be a latency between the completion of the task 
and the notification or detection.  This is the only system that works 
well on single-threaded systems, and when the code must be called in the 
UI thread.

OS sleep -- uSleep(), mSleep(), Sleep(), threadSleep(), are all done at 
the OS level and LV has no control over what happens to the CPU.  The 
thread is blocked at the sleep call and cannot proceed.  Any other task 
needing this thread is blocked.  The CPU is not blocked and can carry 
out work for other OS threads.  This is also true of messages, and other 
OS level notification objects.  As Rolf pointed out, LV can and now does 
have multiple threads per execution system, and this means that up to 
three I/O tasks can block a thread, and the diagram will continue to 
execute nodes in that execution system.  One more block and something 
has to complete first.  Like occurrences and LV Wait node, this adds 
latency between the task completion and the notification or detection of 
completion.
 
Busy loop -- is really just a special case of the above, sort of like 
sleeping(0).  The thread can't do anything else, and the OS will 
occasionally switch to other threads, but much less frequently than if 
you sleep or use a notification object.  This may minimize the latency 
between completion and detection, but by using most or all CPU cycles to 
do so.

So really, we have a typical tradeoff.  We want synchronous wrappers 
around asynchronous mechanisms that have a reasonable latency.  
Occurrences are implemented on the OS notifications, so their latency is 
the OS latency plus a small constant.  The Wait is built on the OS sleep 
in certain cases in RT, but on an occurrence in the normal cases.  
Polling systems can have less latency than a notification, but usually 
at the expense of more CPU usage.

I believe DAQmx uses sleeps and notifications from the OS, trying to 
balance CPU usage and latency.  The more useful feedback would be about 
the circumstances under which that balance is wrong.  When is the latency 
too big, lowering throughput, or when is the CPU usage too high for a 
low-throughput I/O task?  DAQmx isn't built on LV and doesn't really need 
to use LV waits or occurrences to release the calling LV thread, as long 
as there are other threads to carry out the tasks you want to proceed in 
parallel.  This is different from older DAQ, where the balance was often 
out of whack and one mechanism existed to yield the CPU and thread back 
to LV.

Greg McKaskle




Re: DAQmx, etc.

2004-05-25 Thread Geoffrey Schmit
On 19/05/2004 at 9:00 AM, Scott Hannahs [EMAIL PROTECTED] wrote:

 At 7:58 -0500 5/19/04, [EMAIL PROTECTED] wrote:
 If the task acquires a finite number of samples and you set this
 input to -1, the VI waits for the task to acquire all requested
 samples, then reads those samples.

 Is this a blocking wait like the non-DAQmx call into a CIN (or LLB) or
 is it a LV style wait where other LV nodes even in the same thread
 can execute? That would be nice! Is there a way to do this threading
 in our own created dlls (frameworks)?. I would like to make my PCI-
 GPIB library non-blocking but that is not trivial because of this
 limitation. Is there a way to set up a call back into LV for a non-
 blocking wait from a CIN or framework?

I'm not sure exactly what you are asking; so, I'll define some terms,
explain a couple of things, and hope I get lucky :)  For example, I'm
not sure what you mean by a LV style wait.  I'm not aware of a
construct that would allow a thread to wait on a synchronization
primitive and yet continue to execute.  That is, a thread is either
running or waiting.  If my definition of an asynchronous call matches
your definition of a LV style wait, great.  If not, please let me know
what you have in mind.

So, here are some definitions that I'll use.  Note that these aren't
necessarily standard definitions, but I'll use them in this mail:

Synchronous call:  the thread that invokes the call performs the
specified operation in its entirety.  When the calling thread has
returned to its originating context, the operation has been completed
either successfully or unsuccessfully.  Most VIs, including the DAQmx
Read VI, operate in this fashion.

Asynchronous call:  the thread that invokes the call doesn't perform the
specified operation in its entirety but, rather, schedules another
thread to perform the operation.  When the calling thread has returned
to its originating context, the operation may or may not have been
completed.  Usually, there is a mechanism where either the scheduled
thread calls back to inform the application of the status of the
operation or the calling thread can invoke another call to check on the
status of the operation.  Few VIs operate in this fashion.  From a
certain perspective, the DAQmx Write VI operates in this fashion when
performing a generation timed by a method other than on-demand timing.
That is, the DAQmx Write VI may return before all samples are generated.
The DAQmx Is Task Done VI is used to query the status of this
asynchronous operation.  (The validity of this example is dependent on
the perspective that the operation performed by the DAQmx Write VI is to
generate all samples from the device rather than to only copy the
samples to the device.)

These two types of calls present fundamentally different API models.  In
general, synchronous calls cannot simply be converted to asynchronous
calls without changing the semantics of the API and adding additional
concepts (e.g., call backs, query calls).

Regardless of the type of call, other threads in the application may
continue to execute.  There are at least a couple of factors that
determine how these other threads may be constrained by the thread
making the call.  One factor is how the synchronous call waits.  If the
thread is continuously trying to execute code to check the status of a
pending operation (i.e., polling), other threads will be competing with
this thread for the processor.  However, if the thread waits on a
synchronization primitive, other threads will not be competing with this
thread until this synchronization primitive is signaled and the thread
is awakened.  The polling approach is more responsive (i.e., the change
in status is detected more quickly) and less efficient (i.e., the
processor is used more) than the thread waiting on a synchronization
primitive.  The DAQmx Read VI falls somewhere between these two extremes
in an attempt to balance responsiveness and efficiency.

A second factor relates to what kind of locks are acquired by the call
(either a synchronous or asynchronous call).  A call that acquires a
very general lock that other threads in the system may need (e.g.,
Traditional NI-DAQ VIs that acquire a lock for all of Traditional
NI-DAQ) will prevent any other threads from acquiring that same lock;
these other threads will wait for the lock to be released.  However, a
call that acquires a more specific lock (e.g., DAQmx VIs that acquire a
lock for a specific task) will only prevent other threads that try to
acquire that specific lock from executing.

While I'm certainly not an NI-488.2 expert, I believe there are both
synchronous and asynchronous functions in NI-488.2.  Regardless, other
threads in your application should be able to execute while a NI-488.2
call is waiting subject to the restrictions I've mentioned.  If you're
concerned that your PCI-GPIB library is too inefficient when waiting and
it is polling, maybe you can change its implementation such that the
thread waits 

RE: DAQmx, etc.

2004-05-25 Thread Jason Dunham
Geoff:

Presumably you are more of a DAQ-centric person than a LabVIEW-centric
person at NI.

I think what Scott means by a LV style wait is that the thread
continues to execute, so other parallel nodes on a diagram will continue
to be executed, even though the node in question is synchronous, or
doesn't return until its task has completed.  LV's execution engine,
or whatever the task manager is called, uses a task scheduler to run
nodes so that they seem to run in parallel even though they all run in
one thread. I think that's overly simplified because
other threads are used for housekeeping, but that is all transparent to
LV users.  I remember all this from various NI week presentations.

Other synchronous nodes may block the entire thread, preventing any
other LV tasks in that thread from executing.  Remember that we LV
programmers can choose execution systems but we don't have direct
control over threads. We are also limited in that parallel code on the
same diagram is always going to end up in the same execution system,
which probably puts it in the same thread. The term we LV programmers
use for these kinds of nodes is pure evil.

There are plenty of nodes in LV which appear synchronous to LV
programmers (nearly all the wait functions, queue functions, etc), but
don't block the LV execution system from  executing parallel tasks in
the same VI.  That's what we want from our DAQ and GPIB calls too, but
we're at the mercy of the NI engineers who create the LabVIEW node.  I'd
guess that these kinds of nodes have to be internally asynchronous, so
they don't block the thread, but must contain code to wait until the
spawned thread finishes its task so that they appear synchronous to us.

If the DAQmx nodes in LV don't have this capability, then I'm not sure
DAQmx is the Great Leap Forward I've been led to believe.  We can get
the behavior we want by polling the status of AI Read, so why change to
DAQmx if we still have to implement this workaround to get decent
multitasking from our computers?

Please correct me if I'm wrong about the details of DAQmx or the LabVIEW
execution system.  Most of the above is just educated guesswork. 


Jason Dunham
SF Industrial Software, Inc.


-Original Message-
From: Geoffrey Schmit [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 25, 2004 5:26 AM
To: Info LabVIEW Mailing List
Cc: [EMAIL PROTECTED]
Subject: Re: DAQmx, etc.

On 19/05/2004 at 9:00 AM, Scott Hannahs [EMAIL PROTECTED] wrote:

 At 7:58 -0500 5/19/04, [EMAIL PROTECTED] wrote:
 If the task acquires a finite number of samples and you set this
 input to -1, the VI waits for the task to acquire all requested
 samples, then reads those samples.

 Is this a blocking wait like the non-DAQmx call into a CIN (or LLB) or
 is it a LV style wait where other LV nodes even in the same thread
 can execute? That would be nice! Is there a way to do this threading
 in our own created dlls (frameworks)?. I would like to make my PCI-
 GPIB library non-blocking but that is not trivial because of this
 limitation. Is there a way to set up a call back into LV for a non-
 blocking wait from a CIN or framework?

I'm not sure exactly what you are asking; so, I'll define some terms,
explain a couple of things, and hope I get lucky :)  For example, I'm
not sure what you mean by a LV style wait.  I'm not aware of a
construct that would allow a thread to wait on a synchronization
primitive and yet continue to execute.  That is, a thread is either
running or waiting.  If my definition of an asynchronous call matches
your definition of a LV style wait, great.  If not, please let me know
what you have in mind.






Re: DAQmx, etc.

2004-05-25 Thread Scott Hannahs
At 7:25 -0500 5/25/04, Geoffrey Schmit wrote:
For example, I'm
not sure what you mean by a LV style wait.  I'm not aware of a
construct that would allow a thread to wait on a synchronization
primitive and yet continue to execute.  That is, a thread is either
running or waiting.  If my definition of an asynchronous call matches
your definition of a LV style wait, great.  If not, please let me know
what you have in mind.

Yep.  Just like the Wait (ms) node within LV.  If that could be called (sort of 
cooperative multitasking) from a CIN, then you could make non-blocking CINs or shared 
libraries.

Most VIs, including the DAQmx
Read VI, operate in this fashion.

Good to know, I am not familiar with the new mx stuff and thought it may be 
different.

These two types of calls present fundamentally different API models.  In
general, synchronous calls cannot simply be converted to asynchronous
calls without changing the semantics of the API and adding additional
concepts (e.g., call backs, query calls).

Ok.  I was thinking along the lines of Asynchronous/Synchronous VISA read and write 
calls that can be switched from synchronous to asynchronous with a pop-up menu 
selection.  I realize that the underlying calls might be different but they present 
the same wrapper.

However, if the thread waits on a
synchronization primitive, other threads will not be competing with this
thread until this synchronization primitive is signaled and the thread
is awakened.
This is why I may make my LabVIEW calls do a Wait on Occurrence and pass the 
occurrence to the library in a reentrant call.  The library can eventually signal the 
occurrence, and thus there is a no-penalty wait for the I/O to complete.

However, a
call that acquires a more specific lock (e.g., DAQmx VIs that acquire a
lock for a specific task) will only prevent other threads that try to
acquire that specific lock from executing.

So DAQmx has threading and locks at a much finer (task) level rather than at the 
level of the whole DAQ system.

While I'm certainly not an NI-488.2 expert, I believe there are both
synchronous and asynchronous functions in NI-488.2.
Not in the 488.2 spec but in the C library and/or the driver and at the VISA call 
level.  The problem is that there is NO NI-488.2 for OS X (at the moment).  So I am 
developing sth-488.2 based on the incomplete 488.2 DDK toolkit that NI provides.  The 
trick is how to move that asynchronous call in the library (when I get it implemented) 
into the synchronous nature of a LabVIEW call into a library.


  Regardless, other
threads in your application should be able to execute while a NI-488.2
call is waiting subject to the restrictions I've mentioned.
If there were such a thing as an NI-488.2 call!  I need to implement that asynchronous 
call not merely use it.


  If you're
concerned that your PCI-GPIB library is too inefficient when waiting and
it is polling, maybe you can change its implementation such that the
thread waits on a synchronization object rather than polls (or at least
occasionally yields to give other threads more of a chance to use the
processor).
How do I yield from a C program back to the LabVIEW program?  It may allow the other 
threads to run automatically, but I think it blocks the UI thread, and that is really 
a significant block.


  If you're concerned that other threads in your application
cannot execute because the thread that is waiting has acquired a common
lock, perhaps the lock can be eliminated or a finer-grained lock can be
used instead.
But I can't change the basic locking mechanism of the LabVIEW execution system...  Or 
if I can, I think I would get a big warning message: Rusty Nails Ahead!  I think 
Jason also expressed my problem quite well and outlined the difference between what we 
can access as LabVIEW programmers vs. what I get if I dust off my old C programming 
hat.

  Presumably you are more of a DAQ-centric person than a LabVIEW-centric
  person at NI.
I'm more of an operating-system-centric person at NI :)

NI is developing a new OS?!  LabVIEW everywhere indeed!  :-)

Cheers,
Scott





Re: LV2 style globals, events. Was Re: DAQmx, etc.

2004-05-20 Thread David A. Moore
Scott,
It looks like I was probably wrong in my earlier message. I did a quick
test of the situation where I thought LV2 globals could improve on queues,
but the LV2 globals were actually worse than the queues. The test pushed
1000 arrays of 1000 random numbers each through the queue or global, and
used Array Max & Min to find the index of the maximum for each array. My
thought was that the global could index its buffer in place, perform the
analysis function, and return the result, whereas the queue would have to
make a copy as the data was read. But as you point out, the queue doesn't
need to retain a copy of the data after it's read, so it doesn't have to
perform any copying either.
The original project where I saw the advantage had a somewhat different
situation that really did show an advantage to a LV2 style global. I was
performing a data acquisition in a time critical thread, decimating the
data for display in the user interface, and also buffering the full set
of data to disk. Since the data was going two places, a single queue
wouldn't work. And by putting the decimation into the LV2 global, I was
able to have the user interface thread perform that work without needing
a full copy of the data.
--David Moore
At 02:01 AM 2004-05-20 -0400, you wrote:
Subject: Re: LV2 style globals, events. Was Re: DAQmx, etc.
From: Scott Hannahs [EMAIL PROTECTED]
Date: Wed, 19 May 2004 09:55:35 -0400
At 6:04 -0600 5/19/04, David A. Moore wrote:
If you're passing LARGE data, probably not, because LV2 style globals
are more efficient than queues i.e. you can avoid making extra copies
of the data.
Interesting.  Does anyone know why a queue would make extra copies of 
data?  I haven't tried pushing MBytes through queues, but there doesn't seem 
to be any inherent reason that a queue would make extra copies of the 
data.  It should make a copy upon entering the data into the queue, and the 
caller would release it.  Upon dequeue, the queue should release the memory 
and the receiving VI would copy it.  (Note: these may not be actual copies 
but passing of pointers.)

This should be equivalent to a LV2 style global with a USR?  The exact 
same memory allocation/deallocation should take place.

I guess the real test is to make a couple of benchmarks and profile it.
-Scott
-
-W-H-E-A-T-W-H-E-E-L-L-E-N-S-T-O-W-I-N-D-O-W-I-N-G-G-P-R-O-G-R-A-M-W-E-B-
-
-- David A. Moore - Moore Good Ideas, Inc. --
-- (801) 773-3067 - NI Alliance Member --
-- 1996 Allison Way --- www.MooreGoodIdeas.com --
-- Syracuse, Utah 84075 - [EMAIL PROTECTED] --
- 




Re: DAQmx, etc.

2004-05-20 Thread Rolf Kalbermatter
Scott Hannahs [EMAIL PROTECTED] wrote:

At 7:58 -0500 5/19/04, [EMAIL PROTECTED] wrote:
If the task acquires a finite number of samples and you set this input to 
 -1, the VI waits for the task to acquire all requested samples, then reads
 those samples.

Is this a blocking wait like the non-DAQmx call into a CIN (or LLB) or
is it a LV style wait where other LV nodes even in the same thread can
execute?  That would be nice!

It is a blocking wait for the thread in which the Call Library Node to DAQmx
is called, and in LabVIEW before 7.0 this would cause the execution subsystem
in which the corresponding VI runs to block entirely. LabVIEW 7.0 and higher
allocates by default several threads per execution subsystem, so other VIs
besides the CLN in the same execution subsystem can keep running.

Is there a way to do this threading in our own created dlls (frameworks)?.
I would like to make my PCI-GPIB library non-blocking but that is not
trivial because of this limitation. Is there a way to set up a call back
into LV for a non-blocking wait from a CIN or framework?

You don't need to do anything special other than making sure your DLL is
reentrant (no globals, or if there are any, they must be protected by
mutexes; but watch out not to create deadlocks if you end up using more
than one mutex to protect different resources) and then setting the Call
Library Node to execute reentrantly (the node changes from orange to light
yellow when it is reentrant).
Before LabVIEW 7 this would block the execution subsystem anyhow, unless
you reconfigured the thread allocation for that subsystem with an
undocumented VI in vi.lib/utility/sysinfo.llb. In LabVIEW 7 it will not
block the execution subsystem, only the single thread LabVIEW uses to
call that library function.

Then again, DAQmx could be ported to all the other platforms since it was
written in such a nice modular way that it only needs a trivial stub driver
for other platforms.  :-)

Everybody would hope it's trivial but I'm sure it is anything but that ;-)

Rolf Kalbermatter
CIT Engineering Nederland BV    tel: +31 (070) 415 9190
Treubstraat 7H                  fax: +31 (070) 415 9191
2288 EG Rijswijk                http://www.citengineering.com
Netherlands                     mailto:[EMAIL PROTECTED]
 





Re: DAQmx, etc.

2004-05-19 Thread Uwe Frenz
Tim,
you asked on Tue, 18 May 2004 10:24:03 -0700:
In LV6 the 'AI read.vi' hogs the CPU waiting for the number of samples 
requested to be available. someone (sorry for not remembering the name)
came up with a 'non-blocking AI read.vi' which i have been using happily.
i noticed the new DAQmx 'contgraph.vi' is hogging the CPU also.  is 
there a cure for it ( i think it makes DLL calls)?
The idea behind this 'cure' is to first call asking for 0 (zero) samples 
and get the number of available samples from that call. Then, in a second 
call, ask for that number only. If you need more, wait an appropriate time 
and repeat.
another question:  i've been using LV2 style globals to pass data 
between parallel loops
and am wondering if a queue is a better way to go?
Both ways will work, and speed will probably not vary much. But queues are in 
the end easier to handle and maintain and to separate into different VIs. 
For example, a listener VI can auto-stop when the queue gets destroyed by a 
(the) writer. You do not need any extra technique to stop that -- otherwise 
independent -- loop.
AND queues are variant as inserted from the libs; YOU define their data 
type when creating a new queue. A LV2-style global has to be made variant 
to function that way. (My older templates used strings as the data format for 
LV2 globals and cast any given data to/from strings.)

HTH and
Greetings from Germany!
--
Uwe Frenz
~
Dr. Uwe Frenz
Entwicklung
getemed Medizin- und Informationtechnik AG
Oderstr. 59
D-14513 Teltow
Tel.  +49 3328 39 42 0
Fax   +49 3328 39 42 99
[EMAIL PROTECTED]
WWW.Getemed.de



Re: LV2 style globals, events. Was Re: DAQmx, etc.

2004-05-19 Thread David A. Moore

At 10:24 -0700 5/18/04, tim wen wrote:
another question:  i've been using LV2 style globals to pass data 
between parallel loops
and am wondering if a queue is a better way to go?

Scott Hannahs [EMAIL PROTECTED] wrote:
Probably.  If you are just passing data in one direction it can work 
well.  With a LV2 style global you can build in internal processing and 
value manipulation to the global (ie intelligent global).
If you're passing LARGE data, probably not, because LV2-style globals
are more efficient than queues, i.e., you can avoid making extra copies
of the data. For small data, where that doesn't matter, queues or
notifiers are simpler to use.
If your source loop is periodic and your destination loop is event driven,
you might also want to consider generating a user-defined event that
contains your data. That's probably the slowest choice, but can be the
cleanest looking method.
--David Moore
-
-W-H-E-A-T-W-H-E-E-L-L-E-N-S-T-O-W-I-N-D-O-W-I-N-G-G-P-R-O-G-R-A-M-W-E-B-
-
-- David A. Moore - Moore Good Ideas, Inc. --
-- (801) 773-3067 - NI Alliance Member --
-- 1996 Allison Way --- www.MooreGoodIdeas.com --
-- Syracuse, Utah 84075 - [EMAIL PROTECTED] --
- 




Re: DAQmx, etc.

2004-05-19 Thread rajesh . vaidya




Hi,

I'd like to clarify a few things about the blocking behavior of the Read
vis (the same rules also apply to the rest of DAQmx).

- One of the main problems with the blocking behavior in Traditional DAQ
was that you could not do other operations while you were waiting in the
AI Read.vi for samples to become available. This was due to the
single-threaded model of the Traditional DAQ driver and the fact that
Traditional DAQ used CINs (though they were replaced by DLLs in NI-DAQ 6.x).
DAQmx is fully multithreaded and does not have this limitation. For
example, you can run two copies of your contgraph.vi example
simultaneously (on different devices) without problems. Alternatively, if
you have two parallel while loops in a VI with a Read.vi in one of them,
the other while loop will not be slowed down while the Read VI is waiting
for samples to become available.

- Scott's non-blocking AI Read VI in Traditional DAQ works by waiting
for the requested number of samples to become available before calling the
Read.vi. DAQmx supports this feature natively when -1 is wired into the
number of samples per channel parameter on the DAQmx Read VIs for
continuous acquisitions. Here is the online documentation for this
parameter:

 number of samples per channel specifies the number of samples to read. If
 you leave this input unwired or set it to -1, NI-DAQmx determines how many
 samples to read based on whether the task acquires samples continuously or
 acquires a finite number of samples.

 If the task acquires samples continuously and you set this input to -1,
 this VI reads all the samples currently available in the buffer.

 If the task acquires a finite number of samples and you set this input to
 -1, the VI waits for the task to acquire all requested samples, then reads
 those samples. If you set the Read All Available Data property to TRUE,
 the VI reads the samples currently available in the buffer and does not
 wait for the task to acquire all requested samples.
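The three cases in that documentation reduce to a small decision table. Here is a Python sketch of my own summary (not NI code, and the function name is hypothetical) of what a Read does depending on the task mode:

```python
def read_behavior(continuous, n_samples, read_all_available=False):
    """Summarize DAQmx Read blocking behavior per the documentation above.

    continuous:         True for a continuous task, False for finite
    n_samples:          samples-per-channel input (-1 means "unwired")
    read_all_available: the Read All Available Data property
    """
    if n_samples != -1:
        return "wait until n_samples are available, then read them"
    if continuous or read_all_available:
        return "read whatever is currently in the buffer (non-blocking)"
    return "wait for the full finite acquisition, then read it"
```

So for a continuous task, wiring -1 already gives the non-blocking behavior the replacement VIs were built for.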





Regards,
Rajesh Vaidya

Measurements Infrastructure Group
National Instruments










RE: DAQmx, etc.

2004-05-19 Thread Bruce Ammons
That's funny, because I recently experienced the CPU slowdown also when
reading fixed-size blocks from a continuous acquisition in DAQmx.
Perhaps the thread is not being totally blocked, but the read definitely
slows things down.  Perhaps there is a bug in DAQmx that NI is not aware
of.  I am pretty sure I set up multithreading properly, but I'm not positive.

I fixed the problem by using the old method.  I check the number of
samples available using a property node, calculate how many milliseconds
it will be until my data is ready, use a millisecond delay to wait until
the data is ready, then read the block of data.  My CPU time went from a
high percentage to a very low percentage when this was added to the
code.
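The timing calculation in that workaround is simple arithmetic. A Python sketch (hypothetical names; the real diagram uses a DAQmx property node for samples available and the Wait (ms) function):

```python
import math

def ms_until_ready(samples_wanted, samples_available, sample_rate_hz):
    """Milliseconds until the remaining samples should have arrived.

    Wait this long, then read the full block; the read itself then
    returns immediately instead of spinning.
    """
    remaining = max(0, samples_wanted - samples_available)
    return math.ceil(remaining * 1000.0 / sample_rate_hz)
```

For example, with 400 of 1000 samples already buffered at 10 kS/s, the delay works out to 60 ms.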

Bruce

--
Bruce Ammons
Ammons Engineering
www.ammonsengineering.com
(810) 687-4288 Phone
(810) 687-6202 Fax










Re: DAQmx, etc.

2004-05-19 Thread Scott Hannahs
At 7:58 -0500 5/19/04, [EMAIL PROTECTED] wrote:
If the task acquires a finite number of samples and you set this input to 
 -1, the VI waits for the task to acquire all requested samples, then reads
 those samples.

Is this a blocking wait like the non-DAQmx call into a CIN (or DLL), or is it a
LV-style wait where other LV nodes, even in the same thread, can execute?  That
would be nice!  Is there a way to do this threading in our own DLLs
(frameworks)?  I would like to make my PCI-GPIB library non-blocking, but that
is not trivial because of this limitation.  Is there a way to set up a callback
into LV for a non-blocking wait from a CIN or framework?

Then again, DAQmx could be ported to all the other platforms since it was written in 
such a nice modular way that it only needs a trivial stub driver for other 
platforms.  :-)

-Scott




Re: LV2 style globals, events. Was Re: DAQmx, etc.

2004-05-19 Thread Scott Hannahs
At 6:04 -0600 5/19/04, David A. Moore wrote:
At 10:24 -0700 5/18/04, tim wen wrote:
another question:  i've been using LV2 style globals to pass data between parallel 
loops
and am wondering if a queue is a better way to go?

Scott Hannahs [EMAIL PROTECTED] wrote:
Probably.  If you are just passing data in one direction it can work well.  With a 
LV2 style global you can build in internal processing and value manipulation to the 
global (ie intelligent global).

If you're passing LARGE data, probably not, because LV2 style globals
are more efficient than queues i.e. you can avoid making extra copies
of the data.

Interesting.  Does anyone know why a queue would make extra copies of data?  I
haven't tried pushing MBytes through queues, but there doesn't seem to be any
inherent reason that a queue would make extra copies of the data.  It should
make a copy upon entering the data into the queue, and the caller would release
it.  Upon dequeue, the queue should release the memory and the receiving VI
would copy it.  (Note: these may not be actual copies but passing of pointers.)

This should be equivalent to a LV2 style global with a USR?  The exact same memory 
allocation/deallocation should take place.

I guess the real test is to make a couple of benchmarks and profile it.
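A minimal harness along those lines can be sketched in Python. Python's memory semantics differ from LabVIEW's (Python queues pass references, not copies), so this only illustrates the shape of the benchmark, not the LabVIEW result; the list-based `holder` stands in for an LV2-style global's shift register.

```python
import queue
import time

def bench_queue(payload, n=1000):
    """Time n enqueue/dequeue round trips of payload."""
    q = queue.Queue()
    t0 = time.perf_counter()
    for _ in range(n):
        q.put(payload)   # enqueue
        q.get()          # dequeue
    return time.perf_counter() - t0

def bench_global(payload, n=1000):
    """Time n write/read round trips through a shared holder."""
    holder = [None]      # stands in for the LV2 global's shift register
    t0 = time.perf_counter()
    for _ in range(n):
        holder[0] = payload   # write
        _ = holder[0]         # read
    return time.perf_counter() - t0

big = bytes(1_000_000)   # ~1 MB payload for the large-data case
```

Profiling both with small and large payloads is the quickest way to settle the copy question on any given platform.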

-Scott




Re: DAQmx, etc.

2004-05-19 Thread Geoffrey Schmit
On 18/05/2004 at 11:24 AM, Info LabVIEW Mailing List
[EMAIL PROTECTED] wrote:

 In LV6 the 'AI read.vi' hogs the CPU waiting for the number of samples 
 requested to be available.
 someone (sorry for not remembering the name) came up with a
 'non-blocking AI read.vi' which i have been using happily.
 i noticed the new DAQmx 'contgraph.vi' is hogging the CPU also.  is
 there a cure for it ( i think it makes DLL calls)?

While read operations in DAQmx consume 100% of the CPU, like reads in
Traditional DAQ, there are a couple of critical differences:

*  In DAQmx, other threads in LabVIEW are not blocked and may
   continue to execute. In fact, even other threads performing DAQmx
   operations that involve other DAQmx tasks are not blocked and may
   continue to execute.

*  In DAQmx, read operations yield. That is, if there are other threads
   running on the system, they are not blocked and may execute.


Basically, DAQmx read operations say, "Hey, operating system, if you have
nothing better to do, let me run so I can be more responsive."  As a
result, the CPU monitor is not an effective tool for measuring the
efficiency of the application.  Another potential disadvantage is that
your [EMAIL PROTECTED] client won't run during read operations :)

As a result, replacement read VIs should not be necessary for most
applications that use DAQmx.
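The yielding wait described above can be illustrated in Python (a sketch of the scheduling idea only, not of DAQmx internals): the waiting loop keeps the CPU busy, so a CPU monitor would read near 100%, yet other threads are never starved because the loop yields on every pass.

```python
import threading
import time

done = threading.Event()
progress = []

def other_work():
    # Represents "other threads in LabVIEW": it still gets to run
    # even while the main thread is busy-waiting.
    for i in range(5):
        progress.append(i)
        time.sleep(0.001)
    done.set()

t = threading.Thread(target=other_work)
t.start()
while not done.is_set():
    # Busy-wait, but yield the processor each iteration: the CPU
    # appears fully loaded, yet other_work makes steady progress.
    time.sleep(0)
t.join()
```

This is why replacement read VIs are unnecessary: high CPU readings during a yielding wait do not mean other work is blocked.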

This knowledge base article discusses this topic:
http://digital.ni.com/public.nsf/websearch/09D80223FA84113D86256D6A004B97C3?OpenDocument

geoff
-- 
Geoffrey Schmit
Senior Software Engineer
National Instruments
[EMAIL PROTECTED]
www.ni.com




Re: DAQmx, etc.

2004-05-18 Thread Scott Hannahs
At 10:24 -0700 5/18/04, tim wen wrote:
In LV6 the 'AI read.vi' hogs the CPU waiting for the number of samples
requested to be available.
someone (sorry for not remembering the name) came up with a 'non-blocking
AI read.vi' which i have been using happily.
i noticed the new DAQmx 'contgraph.vi' is hogging the CPU also.  is there a
cure for it (i think it makes DLL calls)?

One version is at http://sthmac.magnet.fsu.edu/labview in the VI library.  I think 
there are a number of these around.  I have not updated it for DAQmx since it is not 
available for my development platform. :-(

I don't know if it would be a simple modification to make it work with DAQmx.  It is a 
fairly simple concept and not too complicated code.


another question:  i've been using LV2 style globals to pass data between parallel 
loops
and am wondering if a queue is a better way to go?
Probably.  If you are just passing data in one direction it can work well.  With a LV2 
style global you can build in internal processing and value manipulation to the global 
(ie intelligent global).

-Scott