Re: Passing data from subVIs several levels down to top-level VI

2004-06-14 Thread CoastalMaineBird
Pass REFERENCES to the front panel controls thru sub1 --> sub2 -->
sub3
In sub3, use an unattached property node - pass it the reference to a
specific control, and set its VALUE property to the new value.

Unattached property nodes are on the VI SERVER palette.



Re: Congratulations, Dennis Knutson!

2004-05-28 Thread CoastalMaineBird
Congratulations and a big THANK YOU from all of us out here in
user-land.



Re: Reading a File and by using TCP/IP transfer protocol?

2004-05-25 Thread CoastalMaineBird
If you have ONE client and ONE server machine then you don't want to
have FIVE TCP connections.

1... Design a data structure. It should hold your five values, plus a
timestamp, or whatever else you want.  Make it a TypeDef, for ease of
changes, if you want.

On the server (where the data comes from):
2... I would have a single loop to collect your data and put it into
this data structure, at whatever rate you need.  This loop runs
regardless of whether there is a TCP connection or not. It also sets a
flag called NEW DATA.
3... Conn ID = invalid Conn ID (initialize a shift reg) on a 2nd
WHILE loop.
4... Repeat:
5...    If the shift reg is not a valid Conn ID:
           Wait for connection - assign Conn ID.
        else:
           If the NEW DATA flag is NOT set:
              wait 20 mSec (or something)
           else:
              Flatten the latest data cluster into a string.
              Clear the NEW DATA flag.
              Write the string to the connection.
6... until done.

On the client, you:
1... Open a connection.
2... S = flatten your Data Cluster into a string.
3... N = size of S.  Do not hardwire this constant; let it be
computed for you.
4... Repeat
5... Read N bytes from connection.
6... Unflatten string into your data type.
7... Display, store, whatever.
8... until done.
9... Close connection.

You'll have to add error checking, what to do if the connection
breaks, etc.
But that's the idea.
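For reference, here is how the same scheme looks in a text language - a
minimal sketch in Python, with hypothetical names. struct.pack plays the
role of FLATTEN TO STRING, and packing the five values plus a timestamp
as big-endian doubles keeps the record a fixed size, so the client
always knows how many bytes to read:

    import socket, struct, time

    RECORD = struct.Struct(">6d")      # 5 values + timestamp; fixed size

    # Server side: send the latest record whenever new data is ready.
    def serve(data_source, port=6340):
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()                       # wait for connection
        for values in data_source:                   # each item: 5 floats
            record = RECORD.pack(*values, time.time())   # "flatten"
            conn.sendall(record)                     # write to connection

    # Client side: read exactly one record's worth of bytes, then unflatten.
    def read_record(conn):
        buf = b""
        while len(buf) < RECORD.size:      # N = size of the flattened data
            chunk = conn.recv(RECORD.size - len(buf))
            if not chunk:
                raise EOFError("connection closed")
            buf += chunk
        *values, timestamp = RECORD.unpack(buf)      # "unflatten"
        return values, timestamp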



Re: Reading a File and by using TCP/IP transfer protocol?

2004-05-25 Thread CoastalMaineBird
Then I don't understand why you asked about files.

The TCP/IP examples show how to transmit data blocks.



Re: write data to file periodically

2004-05-25 Thread CoastalMaineBird
The easiest way is to open (create) the file once at the beginning of
the acquisition, and leave it open. You should use DENY WRITE
permissions, at least, to prevent other apps from writing the file.

Start with an empty array of data in a shift reg.

Repeat
Get new value

Use BUILD ARRAY to prepend the new value to the existing array (IN
FRONT OF the old data).

Use ARRAY SUBSET to limit the array to 9000 samples.

Use SUM ARRAY ELEMENTS to figure the sum.

Use ARRAY LENGTH to find out how many you have.

Divide the sum by the number N, and that's your current average.

Write this directly to the file, or convert it to ASCII and write
that.

until done.

Close file.

This maintains the average, even when you have less than 9000 samples
in your buffer.

The DISADVANTAGE of this is that the file is open during the whole
acquisition. You'll have to take special care if you want some other
program to open the file while data is coming in.

That also means that the contents of the file ON DISK are not
necessarily up to date - there could be a block of data in memory
that's not been written to disk yet. (Because of the OS, not because
of your code).
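A minimal sketch of that loop in Python, assuming a hypothetical
read_sample() for the acquisition step; collections.deque(maxlen=...)
does the BUILD ARRAY / ARRAY SUBSET work of keeping only the latest
9000 samples:

    import collections
    import random

    def read_sample():
        # Hypothetical stand-in for "get new value".
        return random.gauss(0.0, 1.0)

    def log_running_average(path, iterations, window=9000):
        buf = collections.deque(maxlen=window)  # at most 9000 samples
        with open(path, "w") as f:              # open once, keep it open
            for _ in range(iterations):
                buf.append(read_sample())       # newest value in
                average = sum(buf) / len(buf)   # correct even when not full
                f.write("%.6f\n" % average)     # convert to ASCII and write
        # File is closed here.  While the loop runs, the OS may still be
        # buffering writes, so the on-disk contents can lag behind.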



Re: write data to file periodically

2004-05-24 Thread CoastalMaineBird
If I understand you correctly, the act of writing the file is
disturbing your timing on the data-collection part.

You need to have an average of the latest 9000 samples (15 * 60 * 10),
and write the new average every 100 mSec.

Are you opening/writing/closing the file each time?  While that's the
safest way (file is intact even if LabVIEW or your program crashes),
it's not the fastest.



Re: I had a problem with LabVIEW 7.0 and AI Config.

2004-05-24 Thread CoastalMaineBird
AI GROUP CONFIG is called by the higher-level AI CONFIG.

You'll have to tell us what the error message is.

Probably something to do with bad channel numbers.



Re: Reading a File and by using TCP/IP transfer protocol?

2004-05-24 Thread CoastalMaineBird
What specifically is your question?

It's commonly done, but there's no built-in function for it.

Here's one way:
On the server side, you
1... Wait for connection
2... Open the file
3... Remainder = determine file size (in bytes)
4... Repeat:
5...    N = Remainder
6...    If N > 1024, N = 1024 (limit the chunk size).
7...    Read N bytes from the file.
8...    Flatten N to a string of four bytes.
9...    Write that string to the connection.
10...   Write the N bytes to the connection.
11...   Remainder = Remainder - N
12... until Remainder = 0.
13... Flatten 0 to a string of four bytes.
14... Send the 4-byte string (end-of-file marker).
15... Close the file.
16... Close the connection.


On the client side, you:
1... Open connection to server.
2... Create (or overwrite) local file.
3... Repeat:
4...    Read 4 bytes from the connection.
5...    Unflatten the 4 bytes to an I32 (N = size of chunk).
6...    If N > 0:
7...       Read N bytes from the connection.
8...       Write the N bytes to the file.
9... until N = 0, or error.
10... Close connection.
11... Close file.


You will need to add error-checking to that, but that's the basic
idea.

The 1024-byte chunk size is just a convenient choice - an Ethernet
frame carries roughly 1500 bytes, so a 1 KB chunk fits in a single
packet, but TCP will segment larger writes for you automatically.

If you run both client and server on the same machine, you can use
127.0.0.1 as the IP address, and it will copy a file from one place to
another.  Use that to test it before bothering with two machines.
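Here is the same length-prefixed protocol as a minimal Python sketch
(the 4-byte count is packed big-endian, standing in for LabVIEW's
FLATTEN TO STRING; conn is an already-connected socket):

    import struct

    CHUNK = 1024

    def send_file(conn, path):
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)                  # at most 1024 bytes
                conn.sendall(struct.pack(">i", len(data)))  # 4-byte header
                if not data:                          # 0 = end-of-file marker
                    break
                conn.sendall(data)

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise EOFError("connection broke early")
            buf += chunk
        return buf

    def receive_file(conn, path):
        with open(path, "wb") as f:                   # create or overwrite
            while True:
                (n,) = struct.unpack(">i", recv_exact(conn, 4))
                if n == 0:                            # end-of-file marker
                    break
                f.write(recv_exact(conn, n))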



Re: Max Number of Labview objects

2004-05-23 Thread CoastalMaineBird
What sort of "objects" are you counting?



Re: Max Number of Labview objects

2004-05-22 Thread CoastalMaineBird
There is no specific limit.
It depends on what kind of objects, and how much RAM your computer
has.
In 15 years of LabVIEW programming, I have never encountered a problem
concerning too many objects.

I have run out of memory when slinging 50 MByte arrays around, but not
in the code itself.



Re: Prevent deletion from file dialog

2004-05-22 Thread CoastalMaineBird
If it's that critical, then test for the INI file yourself, and create
it if it's not there.



Re: smtp send message.vi has a wrong send time

2004-05-10 Thread CoastalMaineBird
I would suggest looking at your computer's DST settings - perhaps you
have set the clock ahead but not turned on DST.  If so, the conversion
to GMT and back will not be correct.



Re: Constructing an image in real time

2004-05-10 Thread CoastalMaineBird
You don't say what the data represents. Is it 3-channel RGB? Is it
intensity?
You can use the intensity graph to show an X vs. Y vs. Intensity
image. If you have a monochrome camera, that makes a fine image - just
set your min and max colors to be black and white.

If you mean RGB, one way is to scale all three components into U8s
(0..255), combine them into an array of I32s, then use FLATTEN PIXMAP
to convert them into an image to feed to a PICTURE indicator.

You can then get the complete image data from the picture and save as
PNG / JPG / BMP.
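The packing step, sketched in Python with numpy standing in for the
LabVIEW array operations (this assumes the common 0x00RRGGBB layout;
check what your picture indicator actually expects):

    import numpy as np

    def pack_rgb(r, g, b):
        """Scale three same-shaped float arrays (0.0-1.0) into I32 pixels."""
        to_u8 = lambda x: np.clip(x * 255.0, 0, 255).astype(np.uint32)
        r8, g8, b8 = to_u8(r), to_u8(g), to_u8(b)
        return ((r8 << 16) | (g8 << 8) | b8).astype(np.int32)  # 0x00RRGGBB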



Re: Constructing an image in real time

2004-05-07 Thread CoastalMaineBird
Use the AI CONFIG to set up your acquisition.
Use AI CLOCK config to set up your timing.
Use AI TRIGGER CONFIG to set up your trigger.
Use AI CONTROL to start the DAQ (use TOTAL SCANS TO ACQUIRE = 0 for
unlimited scanning).

repeat (While loop)
Use AI READ with NUMBER OF SCANS TO READ = 0. This tells you the
backlog (how many scans are in the buffer).

Use AI READ with NUMBER OF SCANS TO READ equal to the backlog
number from above (this empties the buffer).

Process the scans received into your image - append to previous
data and re-display.

If you are doing a fixed number of scans, add this backlog number
to the total done so far, and see if you're done. If you're doing
continuous, skip this step.

Wait 100 mSec if you can, 10 mSec if you can't.

Loop (until # scans done >= # scans wanted OR Quit button, whatever).

Use AI CONTROL to stop the DAQ.

The idea is that the DAQ operation happens in the background.  You
just ask how many are available, and then read that many and use them.

You wait 100 mSec to allow the display update to happen.
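The shape of that loop, sketched in Python; backlog() and read_scans()
are hypothetical stand-ins for the two AI READ calls, not a real
driver API:

    import time

    def acquire(daq, scans_wanted=None, quit_pressed=lambda: False):
        scans_done = 0
        image = []
        while not quit_pressed():
            n = daq.backlog()        # AI READ, 0 scans: just ask how many
            if n:
                image.extend(daq.read_scans(n))  # read them all (empties buffer)
                scans_done += n
                # ...process the new scans into the image and re-display...
            if scans_wanted is not None and scans_done >= scans_wanted:
                break                # fixed-length acquisition finished
            time.sleep(0.1 if n == 0 else 0.01)  # 100 mSec, 10 if behind
        return image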



Re: how to use notifiers?

2004-05-03 Thread CoastalMaineBird
So what happens when you put WAIT functions in the loops?



Re: how to use notifiers?

2004-05-02 Thread CoastalMaineBird
I would suggest using a WAIT function for 100 mSec or so in the main
loop.  If you don't have that, the main loop will poll your button at
a million times per second or more. There's no need to poll the thing
that fast - it will starve any other loops.

It means that you will respond to the QUIT button at most 100 mSec
later - not a big deal, usually.



Re: write jpeg.vi path issue

2004-04-24 Thread CoastalMaineBird
"I browsed for the unix folder using Safari."
If you tried what I suggested, you might have better luck.



Re: Reading a file in a CIN

2004-04-24 Thread CoastalMaineBird

I'm not sure if it will help or not, but I have attached the detailed
instructions I wrote up when wrestling with this last year.
I needed C++ in a DLL (not CIN), and detailed the steps to get there.

If CINRun is unresolved, you're not including the library in the
library list.
[Attachment: Compile_Notes.txt]

HOW THE DLL "XX.dll" WAS BUILT:

1... In Visual C++ 6.0, select File | New | Projects | Win32 DLL. Give it a name, 
choose a location, select CREATE NEW WORKSPACE, and then OK.  On the next page, select 
CREATE EMPTY DLL.

   --- this creates a folder with a skeleton project in it.

2... In LabVIEW, create a VI with the inputs and outputs corresponding to the DLL's 
inputs and outputs. Put a CALL LIBRARY function on the diagram.

3... Pop up on the CALL LIBRARY node and select CONFIGURE to get a dialog box. Leave 
the LIBRARY NAME field blank for now. Enter a function name and select C as the 
calling convention.

4... Change the words "return type" to "error", or some descriptive name.  Set the 
TYPE selector to NUMERIC, and the DATA TYPE selector to SIGNED 32-BIT INTEGER.  This 
establishes the return type of your C function (in the DLL).

5... Add parameters, set their TYPES and DATA TYPES and pass BY VALUE or BY REFERENCE 
as needed.  For XX, I chose:
Name           Type            Data Type
errorCode      Numeric         Signed 32-bit Integer
MachineName    String          String Handle
ProgramName    String          String Handle
ErrorMessage   String          String Handle
ChannelTags    Adapt To Type   Handles By Value

6... Check the FUNCTION PROTOTYPE field to verify it looks correct.

7... Click OK, and the CALL LIBRARY node changes to match your settings.

8... Wire the inputs and outputs to the CALL LIBRARY node.

9... Save the VI in the project directory.

10.. Pop up on the CALL LIBRARY node, and choose CREATE C FILE.  Change the extension 
to .CPP if needed, and save the file in the project directory.  Quit LabVIEW if you 
want.

11.. Open the DSW file in Visual C++.

12.. Select PROJECT - ADD FILES - and add the C (or CPP file) you just created.

13.. Select PROJECT - ADD FILES - and add LABVIEW.LIB (from the LabVIEW CINTOOLS 
folder), if you need manager functions.

14.. Choose PROJECT - SETTINGS - ALL CONFIGURATIONS - C++ - PREPROCESSOR and enter the 
full path to the CINTOOLS directory in the ADDITIONAL INCLUDE DIRECTORIES field.

15..  Choose PROJECT - SETTINGS - ALL CONFIGURATIONS - CODE GEN and select STRUCT 
ALIGNMENT = 1 BYTE

16..  Choose PROJECT - SETTINGS - DEBUG CONFIGURATION - choose DEBUG MULTITHREADED DLL 
in the USE RUNTIME LIBRARY selector. If you use DCOM (OLE between machines), enter the 
following after the last /D entry in PROJECT OPTIONS: |/D "_WIN32_DCOM" |   (not 
including the | marks)

17..  Choose PROJECT - SETTINGS - RELEASE CONFIGURATION - choose MULTITHREADED DLL in 
the USE RUNTIME LIBRARY selector. If you use DCOM (OLE between machines), enter the 
following after the last /D entry in PROJECT OPTIONS: |/D "_WIN32_DCOM" |   (not 
including the | marks)


18.. Open the C (or CPP) file in Visual C++. If you're using C++, put the text |extern 
"C" {| just before the declaration of your function, and the text |}| immediately 
after it.  (not including the | marks)

19.. Put the text |__declspec(dllexport)| immediately before both the declaration and 
the definition of the function.  (not including the | marks)

20.. Erase the line YOUR CODE GOES HERE, and enter your code.

21.. Build the DLL.  Set the BUILD - ACTIVE CONFIGURATION to RELEASE and use the DLL 
in the RELEASE folder once all bugs are fixed.

22.. Open the VI again in LabVIEW and pop up on the CALL LIBRARY node to select 
CONFIGURE. Enter the file name of the DLL in the LIBRARY NAME field, or BROWSE to the 
version in the DEBUG (or RELEASE) folder.

23.. Run the VI.

24.. The DLL is loaded when the VI is loaded, and unloaded when the VI is unloaded.  
If you go back to C to make revisions, close the VI before rebuilding, or you'll get a 
CAN'T WRITE TO XX.DLL FILE error.




Re: How to turn off new LV7 Ring Control behavior?

2004-04-24 Thread CoastalMaineBird

"you will need to change both the Strings and Value property node."

Drat - that's the part that breaks my code. Well, it doesn't "break",
but it's now uglier than it used to be.
Thanks for the info.



Re: Inherited reentrancy?

2004-04-24 Thread CoastalMaineBird
OK - it had occurred to me that there could be 4 values of V, but I
wasn't sure. (It's good in my case.)

I had always thought about it in terms of callers - there is one V for
each caller instance. But if the caller is reentrant itself, then S
has FOUR callers, EVEN THOUGH there are only two instances on the
diagram.

Thanks for your clarification.



Re: Inherited reentrancy?

2004-04-24 Thread CoastalMaineBird
OK, thanks.
The confusing thing for me is that S would have FOUR callers (and thus
FOUR values of V), even though there are only TWO instances of it in
the code.



Inherited reentrancy?

2004-04-24 Thread CoastalMaineBird
I understand some things about reentrancy:

Say my SubVI "S" has an uninitialized shift register that is supposed
to remember some value "V" from one call to the next.

If S is normal (not re-entrant), and S is called from two or more
places, then there is only one V - each caller would interact with
this same value of V.

If S is reentrant, then there are TWO separate Vs - each caller would
interact with its own value of V.

Suppose the caller VI "C" calls S in two places - there would be two
separate values of V.

Suppose C is itself reentrant. Suppose there is a master VI "M" which
has two calls to C. Each instance of C calls two instances of S.  Are
there then FOUR different values of V?

In other words, does the reentrancy get passed up the chain?



How to turn off new LV7 Ring Control behavior?

2004-04-20 Thread CoastalMaineBird
I have a ring control.

While the program is running:

1... I set its strings array to { "A", "B", "C" } via property node.

2... I set its value to 1 or 2 (by selecting "B" or "C") via mouse.

3... I set its strings array to empty.

4... The control now shows a "<1>" or "<2>" in LV7, where it showed
blank previously in LV6.1 and earlier.

This messes with my program, because I don't explicitly set its value
to zero (it doesn't show "<0>" - it shows blank).

 Is there an INI setting or other setting that will turn off this
unwanted behavior?  Or do I have to explicitly set the value to zero
each time I empty its strings?



Re: draw an angle

2004-04-20 Thread CoastalMaineBird
Not enough information - you don't say how long the angled line is to
be.

Suppose your baseline goes from {X0,Y0} to {X1,Y1}.
Your Angle point is {X2,Y2}

Your angle is A (in radians)

The cosine C = cos(A), the sine S = sin(A).

You have to pick a length L.

Then your end point {X3,Y3} would be {X2 + L * C, Y2 + L * S}.
So the 2nd line would be from {X2,Y2} to {X3,Y3}
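In code form (a couple of lines of Python; math.cos and math.sin take
radians, matching A above):

    import math

    def angle_endpoint(x2, y2, a, length):
        """End point {X3,Y3} of a line of the given length from {X2,Y2}."""
        return (x2 + length * math.cos(a), y2 + length * math.sin(a))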



Re: RS-232 Framing Error with HP 34401A Multimeter

2004-04-13 Thread CoastalMaineBird
"Framing Error" in an RS-232 context means a very specific thing -
when the receiver was expecting a stop bit, the line was not in SPACE
condition.  This can be the result of:
1... Baud rate mismatch (although other problems would likely crop up
first).
2... Data length mismatch - if I send 8 data bits and you expect 7,
the stop bit is in the wrong place.
3... Parity setting mismatch - If I send 7 data bits + parity and you
expect 7 data bits and no parity, the stop bit is in the wrong place.
4... Mismatch in # Stop bits - If I send you 7 Data bits + parity +
one stop bit, and you expect 7 data bits + parity + TWO stop bits, the
second one might not be correct, although most devices do not complain
about this.


--- I would double-check ALL the com settings, including the # STOP
BITS - that's not shown on your screenshot.



Re: measure phase difference between two pulse trains or an encoder A-B phases

2004-04-13 Thread CoastalMaineBird
Three ideas:

1... Use CTR GROUP CONFIG to set up two counters.
 Use CTR MODE CONFIG to set up MODE 3 (Count Continuously), using
 an internal clock timebase.
 Use CTR Control to start the counters.

 Periodically:
   Call CTR BUFFER READ to check the backlog of readings waiting.
   If backlog > 0, call CTR BUFFER READ to read that many readings
   The readings are basically timestamps of the transitions of A
   and B signals. Figure the phase difference by where the B transition
   is, in relation to two consecutive A transitions.

You need a really clean signal to get past NI's persnickety front end
circuit. 0 to 5 V CMOS is OK, TTL won't do it reliably.

2... Run the A and B signals into two channels of a DAQ card (or
reduce the voltage and wire into the audio input port if you have
stereo input).
 Record enough samples to get at least one cycle of each encoder
 line (A and B).
 Perform an FFT on each channel.
 Pick out the highest magnitude peak from the spectrum (will be
 fundamental frequency).
 Pick out the phase corresponding to that peak (for each channel)
 Subtract the two phase numbers. Remember that each phase is
 relative to the start of the recorded block, and so doesn't mean
 anything by itself (unless you synchronize the trigger start).

3... Run the A and B signals into two channels of a DAQ card (or
reduce the voltage and wire into the audio input port if you have
stereo input).
 Record enough samples to get at least one cycle of each encoder
 line (A and B).
 Search thru the data for two 0-->1 transitions on the A channel.
 Remember the sample number where they occurred.
 Search thru the data for a single 0-->1 transition on the B
 channel.  Use the sample number of the B transition as a proportion of
 the interval between the two A transitions.

  My experience with optical quadrature encoders is that they vary
quite a bit. One manufacturer made a cheap model that varied +/- 60
degrees from the 90 degree ideal, as it went thru one revolution.
Another mfgr made one that varied less than 5 degrees.
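Idea 2 is easy to prototype offline. A minimal sketch with numpy (a
and b are equal-length blocks sampled simultaneously from the two
channels, so the block-relative phases cancel when subtracted):

    import numpy as np

    def phase_difference_deg(a, b):
        """Phase of B relative to A at the dominant frequency, in degrees."""
        spec_a, spec_b = np.fft.rfft(a), np.fft.rfft(b)
        k = np.argmax(np.abs(spec_a[1:])) + 1   # highest peak, skipping DC
        dphi = np.angle(spec_b[k]) - np.angle(spec_a[k])
        return np.degrees((dphi + np.pi) % (2 * np.pi) - np.pi)  # +/-180 wrap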



Re: How to keep alignment in Windows LARGE and SMALL fonts

2004-04-13 Thread CoastalMaineBird
"It seems your solution would be to either move the controls to
allow enough space for the enlarged fonts?"
My solution, such as it is, is to space things so that labels with
either large or small fonts work without overlapping, and while
maintaining alignment with the numeric (or other) field they are
labeling.

"Auch, talk about self modifying programs!"
The solution I use does not self-modify anything, but it only works if
the font is "standard" size.
I mentioned a possible solution involving changing label positions on
the fly, but, yes, that's ugly, like I said.

"Or would you want NI to use the small font anyhow and ignore the
font scaling factor imposed by the Control Panel settings?"
No, I don't - as I said before, the point of someone switching to
large fonts is so they can see a larger font !
The point is, if I set label X to be 17 point, and then switch to
SMALL fonts, Label X becomes 13 point without any action from me.

BUT

If I set label X to be 16 point, and then switch to SMALL fonts, it
stays 16 point !

I was hoping for a trick (INI setting maybe ??) that would scale 16
down to 13 or 12 without explicitly doing so (via property nodes and
code).

I have yet to find one.

"Or what else would be your solution?"

If I had a good solution, I wouldn't be asking the question.

"Default font is not a fixed font. You can change that too and I do
not see how it is supposed to be different than any other font in what
LabVIEW should do about it."

Well, the question is: why -IS- it treated differently?



Re: How to keep alignment in Windows LARGE and SMALL fonts

2004-04-13 Thread CoastalMaineBird
All right, well, it's your statement "to change the monitor's settings
for dpi from something above 100 to something below" that strikes me
as wrong. I don't think you can tell the MONITOR to change DPI for
text, but not for graphics.
text, but not for graphics.

IN ANY CASE:

IT IS A FACT that on my system, if I set a label to 17 point (default
size), turn off the SIZE TO TEXT attribute, RIGHT JUSTIFY it, align
the bottom of it with the bottom of the 17-point numeric's text, it
looks the way I want it.

IT IS A FACT that if I save that VI, switch to SMALL fonts, and open
it again, it still looks the way I want it.

IT IS A FACT that if I ask LabVIEW what the text size is, it now says
that it is 13 point. This number has automagically changed.


IT IS A FACT that if I do the same experiment with 16 or 18  (or any
other, I suppose) size font, the font point size does not change, and
thus becomes misaligned (and ugly, IMO).

Apparently there is a rule somewhere (where, exactly?) that says: if
the text was stored with the default font size, and stored with a DPI
that is different from the current one, then change its font size by
the ratio of the two DPI numbers.  But if the size was ANYTHING OTHER
than the default size, leave it alone.

#1: Is that true?
#2: Can I change it?

I understand that the user can select 837 DPI if he wants to, and I
don't expect LV to accommodate that.  But the small (96 DPI) and large
(120 DPI) choices are more common; I was hoping that -I- could
accommodate them for -MY- customers.



Re: calibration of the voltage

2004-04-13 Thread CoastalMaineBird
Well, we know that the VOLTAGE resolution is 20 V divided by 2^11
steps = 0.009766 Volts per step.
Suppose your LVDT produces 0-5 V for a displacement of 0-10 inches.
That's a factor of 10 inches divided by 5 Volts = 2 inches per volt.

Multiply 2 inches per Volt times 0.009766 Volts per step, and you get
0.019532 inches per Step - that's your displacement resolution.

If you can possibly configure the DAQ card to use a 0-5 V range
instead of +/- 10V, you're better off doing that. That will get you
5-mil (.004883 inches) resolution instead of 20-mils (.019532).

In any case just remember:

Inches/Volt  x  Volts/Step  =  Inches/Step
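Or, as a few lines of Python (the 0-5 V / 10-inch LVDT numbers are the
assumption from the example above):

    volts_per_step = 20.0 / 2**11       # card range / number of steps
    inches_per_volt = 10.0 / 5.0        # assumed LVDT: 10 inches over 0-5 V
    inches_per_step = inches_per_volt * volts_per_step
    print(inches_per_step)              # 0.01953125 - about 20 mils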





Re: How to keep alignment in Windows LARGE and SMALL fonts

2004-04-13 Thread CoastalMaineBird
"ignoring the font choice and sticking to the small fonts would e.g.
be better."
Well, what I don't like about that is that it doesn't respond to the
user's settings.

Presumably, he switched from SMALL to LARGE fonts so he could maybe
see larger fonts. If the text on MY program doesn't get bigger when he
does that, guess whose fault it is?

Mine.



Re: How to keep alignment in Windows LARGE and SMALL fonts

2004-04-13 Thread CoastalMaineBird
"the way small and large fonts work is primarily to change the
monitor's settings for dpi from something above 100 to something
below. This affects all graphics using text measuring system, mostly
text, and decides how many pixels they use to get one mm or one inch
of screen."

Sorry, but that's way off the mark. Try it yourself: place a STOP
button, and use the new SET WIDTH and HEIGHT properties to make it
200 pixels square.  The result is a button 4.5 x 4.5 mm on my screen.
Save it.
Change from LARGE to SMALL fonts or vice versa.

Open the thing and measure it again.

It's still 4.5mm by 4.5 mm - of course.



Re: calibration of the voltage

2004-04-09 Thread CoastalMaineBird
"How am I wasting 2 bits?"
By using only one fourth of the input range, you are in effect
throwing away two bits (2^2 = 4).

"Since the full scale voltage reading is 0 to 5 volts, should it
not be 5/2048 resolution?"
But you said the RANGE of the CARD was +/- 10 Volts. The card
doesn't know that you're only connecting a 0-5V signal to it. If it's
configured to measure +/- 10V, then that's what it will do. Half of
your measurement range is wasted because your signal can't be
negative. Half of the remainder is wasted because your signal can't be
above 5 Volts.

"Although I may be wasting 1/4 of the scale..."
You are wasting THREE fourths.

"But is it having an effect on the calculation, as the calculations
look the same even when the signal goes from 0 to 5?"

No - that's my point.  The calculations will be the same, regardless
of whether your signal goes from 0 to 5V or 0 to 10 V or -10 to 10 V.
It's the range of the CARD divided by the number of different values
that the CARD produces.
In your case this is 20 V divided by 2^11 = 0.009766 V.

If that's not clear, keep asking.



How to keep alignment in Windows LARGE and SMALL fonts

2004-04-09 Thread CoastalMaineBird
For years, I fought with LabVIEW, trying to keep things looking decent
in both Windows LARGE and SMALL font sizes.

By this, I mean Settings - Control Panels - Display - Settings -
Advanced - General - Font Size

I love to use large fonts - I run my display at 1600x1200 and it just
looks better.

  But if you plop an indicator on a panel in LARGE fonts, save it,
then open it with SMALL fonts in effect, it looks horrible.

  N.I. doesn't care about this - just look at how ugly their own
examples get when viewed in LARGE fonts.

  So I developed a method to make the bottom of the label text line up
with the bottom of the numerical text, whether in LARGE fonts or
SMALL.
See http://www.culverson.com/LVStyleGuide/ObjectLabels.html for
examples.  Basically, I turn OFF the SIZE TO TEXT attribute and
right-justify the label (if on the left of the numeric).

This works OK, except I have just discovered that it depends on the
text being the default size.

If I have a 17-pt numeric with a 17-point label, then they both appear
as 13-pt when I switch to SMALL fonts - great -- they're still
aligned.

But if I have a 17-point numeric with a 16-pt label, when I switch to
SMALL, the numeric shrinks to 13, but the label stays at 16, and is
thus vertically misaligned with the numeric.

So how can I have a 17-pt numeric with a 13-pt label, and have them
BOTH shrink when I switch to SMALL fonts?

I suppose I could use property nodes, and detect the switch myself,
but I have 50+ of these on a single VI to do - that would be ugly to
watch the labels jumping around.

I suppose I could have two VIs - one for LARGE fonts and one for
SMALL, but I don't want to duplicate the code.

Any other ideas?



Re: calibration of the voltage

2004-04-08 Thread CoastalMaineBird
11 bits resolution (an unusual number, by the way) is equal to 2^11 =
2048 discrete values that a single reading can take.
If your card has a range of +/- 10 V, then that's a 20-volt spread,
divided into 2048 steps (I'll assume -1024 to +1023).
That means you have a resolution of 0.009766 Volts. (20 V / 2048
steps)

If you could set your card to be UNIPOLAR, it would be better, since
your signal is unipolar (not negative voltage).

By using a 0 to +5V signal into a -10 to +10 Volt range, you are using
only one fourth of the range (wasting 2 bits).

If the range is -1024 to +1023 counts, your signal is only going to
range from 0 to +511.

So if you had a reading of 256, for example, the voltage would be 256
* 0.009766 = 2.5000 Volts.

Hope that helps.



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
You're probably tired of hearing me say this, but the global function
idea is the best.

A global variable makes a COPY of the data each time you read it. That
takes time, and memory. If each of 3 window reads the global, and
picks out a channel, then you've made 3 copies of the original data.
Depending on your data size, that may or may not be a problem.

A queue means that once you read the data, it's lost from the queue.
If window A reads from the queue, that data chunk is not in the queue
any longer, so Window B can't read that same data.

If the main window generates the data and deposits it in a global
function, then each window can ask the global function for a specific
channel to display.

DATA STORAGE.vi:
Inputs:
  WRITE/read (boolean)
  DATA IN (2-D array of DBL, for example)
  Channel Index (I32)
Outputs:
  DATA OUT: 1-D array of DBL (or waveform, maybe).

Code:
  WHILE
     If WRITE/read
        Store DATA IN in a shift reg.
     else
        Pass Shift reg IN to Shift reg OUT.
        Pick out the channel via CHANNEL INDEX.
        Pass the selected data to DATA OUT.
  Loop NEVER.

The purpose of the loop is just to have a shift reg. It doesn't really
loop - it executes once per call.
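In a text language the same idea is a shared object with a lock around
its data - a minimal sketch in Python (the lock stands in for the
serialization you get for free from a non-reentrant VI):

    import threading

    class DataStorage:
        """Main writes a 2-D block; each window reads back one channel."""
        def __init__(self):
            self._lock = threading.Lock()
            self._data = []                  # the "shift register"

        def write(self, data_in):            # WRITE/read = TRUE
            with self._lock:
                self._data = data_in         # rows are channels

        def read_channel(self, index):       # WRITE/read = FALSE
            with self._lock:
                return self._data[index]     # one channel, not the whole block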



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
The "problem" I mentioned won't be from the code itself. It will work
just fine, insofar as it won't crash or puke or anything.
But I don't know your requirements.

The (potential) problem comes from this:

Suppose the DAQ posts new data every 1000 mSec.
Suppose each window replots every 1000 mSec.
Unless you synchronize things, you have the potential for one window
to show data from the latest DAQ operation, and another to show data
from the time before.

In other words, the execution order might go:
1... Window A replots from Data Source.
2... Main updates Data Source with new data.
3... Window B replots from Data Source.

So the data shown in window A was not taken at the same time as the
data shown in window B.

Whether that's a problem in your situation or not - I can't say.

I have a solution, if you need it.
Guess what? It involves occurrences!



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
No.  Do NOT make it re-entrant.
Your read and write operations should be quick.
Assuming you don't have any issues about exact synchronicity between
the windows, you're good to go.



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
"this global function, it will simply be a VI"
Correct.

"that is not reentrant"
Correct.

"that I will stick on every plot"
Correct.

"and the main form."
Correct.

"So I would have the main form continuously acquiring this data and
writing to the global function?"
Correct.



E-mail VIs - how to authenticate?

2004-04-08 Thread CoastalMaineBird
I am writing a proposal to add an e-mail notification feature to a DAQ
app. In looking at the examples, sending the message seems simple
enough
EXCEPT

I know my own e-mail provider requires authentication to send any
e-mail ([EMAIL PROTECTED] spammers!). There is a separate password that is
sent before the message itself is sent. If the password is wrong, the
message is not sent.

I see no place for this in the "SMTP Email Send Message (Small)" or
the "SMTP Email Send Message" VIs.

So what happens if my client uses this and his server requires a
password?  Does he get a dialog asking for it? Does it reject all
messages?

The help docs don't say.



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
OK, the WAIT ON OCCURRENCE will solve that for you.
They have a TIMEOUT input.
Set it to 500 mSec or something, and that thread will not occupy the
CPU until A) triggered, or B) timed out. You can tell the difference
from the output and respond accordingly.
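The same pattern in Python, sketched with threading.Event, whose
wait() also takes a timeout and returns a flag telling you which way
it came back:

    import threading

    update = threading.Event()       # plays the role of the occurrence

    def window_loop(done, replot):
        while not done.is_set():
            triggered = update.wait(timeout=0.5)  # WAIT ON OCCURRENCE
            if triggered:            # did NOT time out: new data is ready
                update.clear()
                replot()
            # Timed out: fall through and just re-check the DONE condition.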



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
My guess is you have missed something.

If the graph windows are waiting on occurrences, they should be
waiting most of the time, I would think.
If you have no signal generation, then I presume you have no updates,
which means the graph windows never do anything except whatever your
timeout function does.

"I have noticed that when I run some of my VIs my task manager says
100% of my processor is being used."
This is where you need to focus:  A user-interface VI, where you're
mostly waiting to see if he pushed this button or that button, should
use very little of the CPU. If you're polling the buttons, do so no
faster than every 100 mSec. If you're using EVENT structure, it's done
for you.
If you have two or more parallel WHILE loops, remember that if one of
them is blocked (waiting for button push), the other will still run.
If it doesn't have a WAIT function in it, it will eat the CPU.

The basic rule is:  A while loop in a CALCULATION of some sort doesn't
need a delay - LabVIEW will interrupt it when necessary.  A while loop
in a USER INTERFACE function (waiting on user interface actions) needs
to wait about 100 mSec or so to share the CPU.  The EVENT structure
will help you do this - it waits until one of the events happens. You
don't need to poll that button 10,000,000 times in a second.

If you want to post your latest code, I'll take a look at it.



Re: Thanks a lot

2004-04-08 Thread CoastalMaineBird
It was nothing.



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
FYI:
I just measured the SIMULATE SIGNAL express VI at 628 uSec to generate
the default signal (100 samples, 1000 Hz)

I just measured the SINE PATTERN vi (old standby for me) at 18 uSec
(100 samples)

That's a 35x speed improvement.

Maybe express IS your problem...



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
I have never used Waveform type.  It does nothing for me. You are not
forced to use that type - the EXPRESS VIs are for speed of CODING, not
speed of execution.

  In the manuals for the RT (real-time) module (a project I am
currently working on), it says something to the effect that you should
never use any EXPRESS vis on the real-time card - they have too much
overhead.

  Still - I find it hard to believe that 16 signal generators would
bog your computer down, unless you're running on an old Apple II.

  By what are you judging that your computer has slowed down? What do
you mean?

  You should be (and as a newcomer perhaps you are not) aware that it
is necessary to add timing delays to loops in some situations. I have
seen newbie code where someone will have a while loop waiting on a
button push. That's fine, except it might eat your CPU time. Putting
an indicator on the [i] variable showed that it was polling the button
over 10 million times per second!!  The problem with that is that the
CPU is busy polling the one button, and doing nothing else.  Putting a
wait of 100 mSec in the loop lets somebody else use the CPU (the task
that does the wait goes to sleep for 100 mSec - the CPU is available
for something else), while still being reasonably responsive (100 mSec
is not noticeable) to the user. Perhaps you have such a situation?

  In short, while I know that the waveform data type adds overhead, I
would look somewhere else FIRST , for ways to solve your "slow down"
problem.

  Tell me how you measure the speed.



Re: Best way to pass an array of waveforms inbetween VIs

2004-04-08 Thread CoastalMaineBird
Occurrences have been around since LabVIEW 3, I think. Notifiers have
not.

I don't know who recommended Notifiers over occurrences IN GENERAL.

All you're interested in is the fact that an update OCCURRED. When
that happens, you will call the Data Source VI and ask for some
channel's new data.

A notifier has a data type and carries data when it's triggered.  In
this case, what would you carry via the Notifier?
I suppose you could carry the whole new data array, but that's making
copies of the big array again. Yuck.

I suppose you could tell the Data Source that Notifier X is now
assigned to channel Y, so when updated, it could poke channel Y's data
into the Notifier.  That's not too bad.  You have to call the Data
Source with a Notifier ID, and a channel ID, and a function of "Set
Channel". Whenever the channel selector changed, you would do that
again.  Whenever the notifier fired, the data would be right there.  I
suppose that's workable.

My idea was that each window generated an occurrence upon opening. It
called the Data Source with a function of NOTIFY ME, passing its
occurrence.
The Data Source appends that occurrence to an internal array.
When updated, the Data Source goes thru the array and triggers every
occurrence (there might be 1 window or 9 windows open).

When a window closed, it called the Data Source with a function of
"STOP NOTIFYING ME", and the occurrence.  The Data Source would find
that occurrence and remove it from its list.

In the Window code, there would be a WAIT ON OCCURRENCE - this would
wait for 500 mSec or something.  If Timed out, it just checks for the
DONE button or something.
If NOT timed out, it was triggered. So call the Data Source with the
current channel number and replot the data you get back.
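That registration scheme, sketched in Python with one threading.Event
per window standing in for the per-window occurrence (all names here
are hypothetical):

    import threading

    class DataSource:
        def __init__(self):
            self._lock = threading.Lock()
            self._subscribers = []       # internal array of "occurrences"
            self._data = {}

        def notify_me(self, event):      # a window calls this when it opens
            with self._lock:
                self._subscribers.append(event)

        def stop_notifying_me(self, event):  # ...and this when it closes
            with self._lock:
                self._subscribers.remove(event)

        def update(self, new_data):      # main posts new data here
            with self._lock:
                self._data = new_data
                for event in self._subscribers:  # trigger every occurrence
                    event.set()

        def channel(self, name):         # a window replots from this
            with self._lock:
                return self._data.get(name)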



Re: E-mail VIs - how to authenticate?

2004-04-08 Thread CoastalMaineBird
Great, thanks.
I'm not an expert on the low-level stuff.  So I'll have to parse the
RFCs and emulate what a regular mail client can do.
As long as that's possible, I'm OK...



Re: E-mail VIs - how to authenticate?

2004-04-08 Thread CoastalMaineBird
Thanks for the pointer.  It didn't work out of the box, but I'm still
trying.



Re: Combobox pausing operation of other VIs

2004-04-08 Thread CoastalMaineBird
If LOOP A has the graphing VIs and Loop B has the UI stuff, then Loop
A ought to run and run and run, whatever happens in loop B.

You can prove (or disprove) this by putting an indicator on the [i]
variable of your graphing loop.  If it stops when you adjust the
combobox, then you have some sort of (unwanted) dependency between the
loops.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird

"I fixed the focus problem by changing the window property to
floating."
That's not a good fix - although the focus quits flashing, the windows
don't - you can't control the Z-order (who's in front). Overlap the
windows and see.

The occurrences simply controlled the launching of the windows, not
the distribution of data. Once a window was opened, the occurrence
played no part until that window was closed and opened again.

The key factor was that the code in each window was running at the
same time as all other windows.  Your scheme still has the window
display new data and stop.  Therefore you're tied to the fact that the
parent has execution most of the time.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
If you have a re-entrant VI in each plot window, then they are NOT
showing the same data. The re-entrant attribute means that each copy
of the VI has its own data space.  In your sine-wave example, the
results might be the same, but it's really two (or 3, or 4...) copies
of the same data - not multiple views of a single pile of data.


If all your windows are identical in function (mine weren't, but in
your example yours were), then you want to put as much code into a
common VI as possible.

I'm not sure what your end result is trying to accomplish, but if I
were trying to allow each window to look at a different channel, then
I would have the main store data (for all channels) into the global
function, and have each window call that function with a request for
channel 2's data or channel 3's data or channel 2 vs. channel 3 in X-Y
format.  Make the global function do the work of extracting the
requested data and assembling it into a plot.
  The plot window then just calls the global function with a request
to get channel X, and puts the result into a graph.

  Long story short:  Put as much code and data as you can into a
common place, leave as little as possible in the multiple copies that
are showing.

"I really wish I could mail you a thank you card or something - you
have been a life saver. Thank you so much! I will keep learning so
maybe one day I could help you."
Your payment will be to stay on the forum and answer somebody else's
questions now and then.



Re: creating enum from list

2004-04-08 Thread CoastalMaineBird
I don't think you can do that. A LabVIEW enum includes the string
names as part of the data type, and you cannot change the data type
of a control or indicator programmatically.

If you use a TEXT RING or a MENU RING, then you can get approximately
the same behavior - you can set the STRINGS property of either to an
array of string values.  The difference is that the VALUE of the
control is a numeric, not an enum.  If you wire the control to a CASE
statement for example, the cases would be 0,1,2, where a true enum
would have "A", "B", "C" cases.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
I don't have time to figure out exactly what you're doing, but here
are some ideas from a previous project I did - maybe they'll fit:

The project had one main window, and six sub-windows.

The main window had a button for each sub-window, to call it up.

Each sub window ran on its own, offering a different view on live data
being taken by the main.

Each window operated as expected - it stays in front if you click it -
you move them around like any other windows.

You couldn't click the CLOSE box, but each window had its own DONE
button which closed it.

Here's how it worked:

1... A "Window Manager" vi was initialized (from the main) to generate
six OCCURRENCEs and store them in a shift reg.

2... A "Window Launcher" vi is started.  It first calls the WINDOW
MANAGER to obtain the OCCURRENCE refnums, then launches six parallel
WHILE loops.

3... Each WHILE loop has a WAIT ON OCCURRENCE function. It waits a
specific time (500 mSec in my case).

4... If the WAIT ON OCCURRENCE timed out, we simply check the state of
a global PROGRAM RUNNING variable, and loop if it's true.

5... If the WAIT ON OCCURRENCE did NOT time out, then it was
triggered, and we call our particular sub-window (there's one WHILE
loop for each window).  The sub-windows are set to show front panel
when opened, and close when done.

6... Each sub-window monitors the PROGRAM RUNNING global and stops if
that is false.

7... Periodically, the six main buttons are fed to the "window
manager"'s CHECK function.  This checks each button, and if TRUE,
calls "OPEN or MOVE to FRONT" VI with the path to the associated
sub-window, and the occurrence for the associated window.

8... The OPEN or MOVE to FRONT vi opens a reference to the associated
VI, and checks the FP Open property to see if it's already open.  If
it's already open, the FP.IsFrontmost property is set TRUE to bring it
to the front (one might click on the main button while this window is
hidden - this will bring it forward).

9... If the window is NOT already open, the associated occurrence is
fired.  The Window Launcher vi will then start it up.

The windows do whatever work they need within themselves - it's not
like the main is doing the plotting and sending data to the windows.
It's more like the main is putting data into a common pot and
whichever sub-windows are open (running) go and get it.

Hope that helps.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
A global function is a VI whose principal job is to store some data
(typically in the shift registers of a WHILE loop with LOOP set to
false).

At one time (LabView 2 and earlier) there were no global variables.
The way you stored something for use by two or more independent VIs
was to have a VI with a READ/WRITE switch input, a DATA input and a
DATA output.

If you called it with WRITE = TRUE, it stored the data you gave it.

If you called it with WRITE = FALSE, you read out the data it had
stored.

Advantages:
1... You don't make copies of the data when you read it. That's
important for memory usage when the data is a 200 x 64 x 512 array of
CDB (I've used it like that).

2... It's atomic.  If you want to store 6 separate items into six
globals, you have the problem of knowing when ALL items have changed.
Whatever reads them might read them during your change, and read 3 old
values and 3 new ones. The global function is guaranteed to deliver
the data as a set.

3... You can add functionality. I like to encapsulate things - for
example: a config file handler.  I like to have a handler (global
function) for each type of config file.  It has a FUNCTION input,
which typically has functions like INIT, SET DEFAULT, READ FILE, WRITE
FILE, COMPARE, and so on. The actual config is stored by this VI (in a
shift reg, natch). Anybody who wants to deal with a config file calls
this VI.  That way, anytime I need to change something involving a
config file, I have ONE and ONLY ONE place to go.

In my multi-window project, I used a global function to create, store,
and manage the occurrences.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
Occurrence refnum controls are on the REFNUM palette - you can pass
them thru connectors, or make globals out of them. Or you can pop up
on the GENERATE OCCURRENCE terminal and choose CREATE INDICATOR or
CREATE CONTROL.

The Shift Reg. is just a place to store them. You could probably use
global variables in this case and be OK.  A global function can store
things in a shift reg and has advantages: it's inherently atomic, and
if the data is large, a shift reg uses less memory than global
variables (no unnecessary copies made).
In this case those advantages are probably not relevant.  I have a
tendency to encapsulate everything.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
"Why do you put them in a shift register?"
Also, at the time, I didn't realize that Occurrences were special, in
that a given instance of the GENERATE OCCURRENCE function will always
refer to the same occurrence, even if it's been called before.  In
other words, the function that generated them didn't NEED to store
them.

  For this reason, you can't make an array of refnums by using
GENERATE OCCURRENCE inside a loop - the array of refnums will all
refer to the same occurrence.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
"The way I have written it causes the windows to lose and gain focus
as expected. Does anyone know how to stop the focus from constantly
changing?"
The focus is changing because the execution is changing (in the same
thread). You have the VI with the graph execute, and then quit.  This
will lose focus.

My occurrence-based scheme had all child windows executing at the same
time (different threads). This meant that the window on top had focus
- no other window would have it until you clicked on that window (and
brought it forward).

Hope that helps.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
You have a rather hairy state machine going on (I looked at your code
again).

The difference I see is that you have ONE button which spawns window
#1 or window #2 or window #3, depending on who's up already. But the
logic to support that is part of the polling process of every loop.
My app had one button for each window (each window was different).

My reason for using occurrences was to eliminate all execution paths
between the main and child windows.  In other words the main didn't
CALL the child directly, so therefore, they became parallel
independent processes. This led to the correct behavior as far as
windows on top, who has focus, etc.

There are probably other ways to do it - VI server, for one.  I did
this project in 6.0 and didn't know about VI Server then.



Re: Multiple Plots In Separate Windows

2004-04-08 Thread CoastalMaineBird
"That's really similar to what I am doing; it's just that my main form
sends data, which is causing the flashing."
--- I would suggest putting the data into a separate global function
(better than global variable for non-trivial amounts of data), and
letting the "slaves" pull it out.  At least that's what I did.

"This design is a great idea."
I can't send the code, as it was contract labor.

"Thanks a MILLION for your help and time!"
Please send check for one MILLION to:

;->



Re: How can I define where my program opens a dialogue (for example in the middle of the monitor)?

2004-04-08 Thread CoastalMaineBird
Are you using the standard SAVE dialog?  Or are you using a custom VI
as a dialog?

If you're using the standard box, you can't change its location. You
could move the window that calls it (the one with the info you want to
see).  Make it bigger, move the buttons to the outside edge, whatever.
You could actually move it (record its window position, set the new
position to 0,0, run the dialog, restore the original position), though
that's a bit jarring.  You could suggest a file name using the info
("Run13 High Speed.txt" or something).  You could put the info into
the prompt ("Please save the data for the high-speed run13").

If you are using a custom VI as a dialog, then you're free to move it
(set its position) wherever. You could copy the info from the calling
VI and display it on the SAVE dialog.

Hope there's an idea you can use in there.



Re: write jpeg.vi path issue

2004-03-30 Thread CoastalMaineBird
"I made sure that I used backslashes in the path for the write jpeg
vi for the unix server."
Backslashes are a Windows separator - they're not universal. The Mac
uses ":" as the separator (for historical reasons), even though it's
"/" in the Mac command line. You need to compose your path using
colons, not backslashes.

Suggestion:
1... Manually move a file to the unix server, where you want to put
your JPEG.
2... Put a PATH control on a new VI.
3... Use the BROWSE button to browse to the unix folder and choose the
file you manually put there. (you have to mount the volume first).
4... Compose your path to look like what you see in that PATH control.



Re: musica

2004-03-30 Thread CoastalMaineBird
Noise.



Re: Conditionally capturing certain data while continuously acquiring data

2004-03-30 Thread CoastalMaineBird
Your XY Chart only shows the most recent capture.

If you want to have an array of them, you have to build an array.

Put a shift register on the WHILE loop.
Initialize it with an EMPTY array of X-Y pairs.
Inside the TRUE case, append the new X-Y pair you just captured to
the shift register's left-side value and pass the new array out to
the right side.
Inside the FALSE case, pass the shift reg thru unchanged.
You can plot the new array inside the TRUE case, if you're not
concerned about speed (it will plot every time you click the button),
or outside the while loop (it will plot only after you stop).

Outside the while loop, save your data to a file.



Re: Binary newbie in need of assistance

2004-03-30 Thread CoastalMaineBird
It's not clear to me - are you reading the data from the A/D, or are
you reading the data from files?

If you're reading from files, then you have to know the file format.
I would GUESS that the files would be an array of I16s - signed 16-bit
integers, with four bits wasted on each sample.  It's POSSIBLE they're
packed -  two 12-bit samples into 3 8-bit bytes.
  You can judge this from the file size: if you know the file is 1000
samples and the file size is 2000 bytes, you're ready to go - just
read the file as a bytestream file of I16s, and swap bytes to get them
into big-endian (standard) order.

If the 1000-sample file is 1500 bytes, then they're packed - you will
have to do some serious bit twiddling to unravel that - but it can be
done.

If the 1000-sample file is some other size, there's some sort of
header included - you'll have to figure that out from documentation,
or just trial and error.
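If the data does turn out to be packed, the bit twiddling looks
roughly like this (a Python sketch; it assumes the first sample sits
in the high bits of each 3-byte group, which you'd have to confirm
against the documentation):

    def unpack_12bit(raw):
        """Unpack pairs of signed 12-bit samples stored in 3-byte groups."""
        samples = []
        for i in range(0, len(raw) - 2, 3):
            b0, b1, b2 = raw[i], raw[i + 1], raw[i + 2]
            pair = ((b0 << 4) | (b1 >> 4),      # high 8 bits + high nibble
                    ((b1 & 0x0F) << 8) | b2)    # low nibble + low 8 bits
            for v in pair:
                if v & 0x800:                   # sign-extend 12-bit values
                    v -= 0x1000
                samples.append(v)
        return samples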



Re: Query Results into drop down menu

2004-03-30 Thread CoastalMaineBird
I don't know about the database end of it, but you have to get the
dates into an array of strings (one date per string).
Once you have that, use a TEXT RING or MENU RING (your choice). Create
a property node for it, and put your array of date strings into the
STRINGS [ ] property of the control.

The control's VALUE will be a number (0..N-1) indicating which date
the user wants.



Re: Conditionally capturing certain data while continuously acquiring data

2004-03-30 Thread CoastalMaineBird
Your use of the terms "Waveforms" and "amplitude" implies a
measurement of many samples into one number (for each channel).
For example, if your signal was pure 100 Hz, 1.000 Volts peak, and you
captured a 0.1 second block of that signal, then you would have 10
cycles in the capture buffer, varying from +1V to -1V.  You can use
the RMS function to turn that block of samples into a number of 0.707,
meaning the RMS value is 0.707V (= 1.0 V peak).

The basic philosophy would be:
1... Start a continuous acquisition.  By specifying NUMBER OF SAMPLES
= 0 in the AI-START vi, you will acquire data forever (until the
buffer overflows, or until stopped).
2... Periodically (based on a timer), you ask the AI READ function to
read zero samples. This does not remove data, but it does tell you how
many samples are in the buffer.
3... You immediately read that many samples out of the buffer, and
convert from volts to engineering units (if appropriate).
4... If you are measuring their frequency, you need to perform an FFT
now.  The highest peak in the magnitude of the FFT output is the
dominant frequency in the signal. (Freq. of peak = Index of peak *
Sample Rate / NPoints in block.)
5... If you are doing FFTs, you get a big speed advantage if your
block size is a power of two (128, 256, 512, 1024, etc.). If so, then
replace step 3 with *-IF-* backlog >= N (power of 2) then read N
points from buffer.
6... Apply your criteria - *-IF-* the button has been pressed, *-OR-*
if the frequency of channel 2 is different, then compute the RMS or
peak values and append to your final results array. Otherwise do
nothing.

If you want to detect 2-Hz changes, you need to know the rules of how
FFTs work - the frequency resolution is the BIN WIDTH, which equals
the sample rate divided by the number of points in the block.  If you
have a 1000 Hz sample rate and a 1024 point block, your resolution is
1000 / 1024 or just less than 1 Hz.  But if you have a 1000 Hz sample
rate and a 32 point block, your resolution is 1000 / 32 or about 30
Hz.
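Step 4 and the bin-width rule, sketched with numpy (block is one block
of samples, fs the sample rate in Hz):

    import numpy as np

    def dominant_frequency(block, fs):
        """Frequency of the highest FFT peak: index * sample rate / N."""
        spectrum = np.abs(np.fft.rfft(block))
        k = np.argmax(spectrum[1:]) + 1    # skip the DC bin
        return k * fs / len(block)

    # Bin width = sample rate / block size:
    # 1000 Hz / 1024 points -> just under 1 Hz; 1000 Hz / 32 -> about 30 Hz.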



Re: Inverse FFT's

2004-03-25 Thread CoastalMaineBird
"making my real set of data complex by adding an imaginary part
equal to zero."

Not sure why you're doing that instead of using REAL FFT and INVERSE
REAL FFT.

Are you sure you're generating the imaginary array the same length as
the real array?



Re: Inverse FFT's

2004-03-25 Thread CoastalMaineBird
I just ran your problem waveform through REAL FFT and INVERSE REAL
FFT, subtracted the result from the original, and got an error of
7e-11 on the DC term; all other errors are on the order of 1e-14.

LabVIEW 7 Mac OS X



Re: LabView Object Speed Specs

2004-03-24 Thread CoastalMaineBird

BE AWARE OF WHAT YOU'RE MEASURING.

I just took my own advice.
I re-measured the ADD operation by putting 10 ADDs into the frame-4
loop instead of 1.
I get a loop time of 39.4 nanoseconds, meaning an add time of 3.94
nSec.
I guess the original estimate (discarding the output) was better.

My guess is that for the primitive functions like Add, Multiply, etc.,
it's faster to just do it than to decide if you need to or not.

But I have seen certain functions perform WAY faster if the output is
not connected.



Re: How can I communicate betwen parallel asynchronous VIs?

2004-03-24 Thread CoastalMaineBird
Not sure if this will help, but here's a trick I just used.

The main loop handles UI stuff - changing which channel gets plotted,
such things as that. It's a WHILE loop with an EVENT structure in it.

A separate WHILE loop needs to run every 1000 mSec. It reads DAQ data,
calculates averages, and stores data away.

I set the DAQ loop to wait on an occurrence, with a 1000 mSec timeout.
Inside the DAQ loop I put a CASE structure, selected by the TIMED OUT
output from WAIT ON OCCURRENCE.

If the occurrence timed out, I perform the DAQ operations, and loop.

If it did NOT time out, it must have been triggered, so I terminate
the DAQ loop immediately.

Inside the UI loop, I trigger the occurrence based on a DONE button.

Outside BOTH loops, I generate an occurrence that feeds to both
TRIGGER and WAIT functions in both loops.
You can make THAT connection thru global variables, or pass thru
connectors, with no penalty.
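
For readers in a text language: threading.Event gives you roughly the
same pattern - wait() with a timeout plays WAIT ON OCCURRENCE, and
set() plays TRIGGER OCCURRENCE.  A rough Python analogue
(read_daq_and_store is a placeholder for the real periodic work):

import threading, time

done = threading.Event()             # the one shared "occurrence"

def read_daq_and_store():
    pass                             # stand-in for the 1000-mSec DAQ work

def daq_loop():
    while True:
        if done.wait(timeout=1.0):   # did NOT time out: UI fired it
            break                    # terminate immediately
        read_daq_and_store()         # timed out: do the periodic work

worker = threading.Thread(target=daq_loop)
worker.start()
time.sleep(3.5)                      # stand-in for the running UI loop
done.set()                           # the DONE button fires the trigger
worker.join()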



Re: Why timer anomalies in event structure?

2004-03-23 Thread CoastalMaineBird
< slaps head >
Well, duhhh.. I should have seen that.
If it's 999 mSec into the timeout when the "wait" event occurs, then
it will add 200 mSec to the 999, and start another timeout counter at
1199. If that one times out, then I'm sitting at 2199 mSec between
events.

Sometimes I can't see the forest for the trees.

Thanks,



Re: What Mac OSes will LAbview 3.1.1 work on?

2004-03-23 Thread CoastalMaineBird
< dials into the WAYBACK machine >

If I remember correctly, that's a 68k-only version. I believe it
required System 7.0 as a minimum.

So my answer would be System 7.0 - 9.1 (I think 9.1 is 68k also).



Re: Speed Contest...Linux Vs Windows

2004-03-23 Thread CoastalMaineBird
Before you make ANY decisions about alternate platforms, the first
question you should ask (and answer) is "Where is the time being
spent?"

The PROFILE tool can help with that.

Are you initializing a new DAQ operation every time? You shouldn't.

20 RPM = 0.333 Hz = 3 Sec per rev.  At 4000 clicks per rev, that's 1
click every 750 uSec.  That's not unreasonable.

Are you trying to react to every event?  It's possible you may not
need to - just read however many readings there are, process however
many readings there are, and repeat.

Without seeing your code, it's hard to guess whether you truly are
taxing the CPU, but maybe you could elaborate.  I'm in the business of
making things run faster.

The short answer is that you can't make the program run faster, you
have to make it do less.

Linux or other OS will only help if your problem is in the OS portion
of your code. (disk i/o ? graphics?)



Best way to programatically re-size graphs?

2004-03-21 Thread CoastalMaineBird
I have a window that pops up, containing X-Y graphs, that are say, 600
pixels wide.

Under some circumstances, I want to reduce this to say, 400 pixels,
and uncover a cluster of numeric displays occupying the other space.

I would like to set the BOUNDS.WIDTH of the graph so that it extends
right up to the POSITION.LEFT of the cluster.

But I can't adjust the BOUNDS.WIDTH of the graph itself (that property
is read-only).

I can adjust the PLOT AREA.SIZE.WIDTH of the graph, which will affect
the BOUNDS.WIDTH.

But I can't use that for calculations, because I don't know the margin
between the plot area and the graph bounds.

I want it to work when the user re-sizes the window (All objects are
scaled when window re-sizes).  He may resize the window, close it,
open it again and the size remains as he left it.  That means I can't
use absolute pixel values to move things around.

I have thought of using decorations: a hidden rectangle that's the
size of the small plot area, and another one that's the size of the
large plot area.  Then set the plot area's SIZE.WIDTH to the width of
the small or large decoration.  The decorations would scale along
with everything else, and all would be OK.


1... Does anyone have a better / different idea?

2... If not, how should I get the reference to the decorations? They
can't be named. Using the VI reference, I would have to pick out
decorations #3 and #4 or something (there are other decorations
already present).  That just seems a bit fragile.  How can I move
these new ones to be #0, and #1 ??



Re: Best way to programatically re-size graphs?

2004-03-21 Thread CoastalMaineBird
The logic behind this is that the difference between the total
bounds and the plot area is constant.

I don't think that's true, given the fact that the window may be
resized, and the graphs will scale with it.

I *-THINK-* I have it working using a dummy button, rather than a
decoration.  If the button is hidden, it still re-sizes with the
window.  So I made a button that was the width of the small plot area,
and another which is the width of the large plot area, and hid them
both.

When the window pops up, I decide which I need, and set the graph's
PLOT AREA to one width or the other.

When the window resizes, the button scales too, and the next time the
decision is made, the buttons are the right size as well.

I guess the decoration idea was a tangent.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-05 Thread CoastalMaineBird
OK, that's farther up the CompSci tree than -I- can follow, but as
long as it's a known behavior, I can deal with it.

By the way, this doesn't happen at compile time as the VI doesn't
need to be recompiled for this to happen, but it does happen on load.

That makes sense, because in my real object the dummy caller is BROKEN
and won't compile (required connections are missing), but the behavior
happens anyway.

I wonder what happens if the high-priority caller is loaded WHILE the
low-priority system is running. Do priorities change DURING execution?
No answer necessary- If I really want to know, I'll test it and find
out.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
When you have a shared nonreentrant subVI, and one caller is high
priority and one is lower, then the subVI's priority is boosted to
match the higher of the callers.

--- Notice that my later tests prove that you don't need TWO callers.
The same effect is apparent with only ONE caller.

To recap: here are the mysteries solved:

1... If A calls B, and B's priority is higher, a context switch
overhead is incurred.
2... If A calls B, and C calls B, then B's priority is boosted, if
necessary, to match the higher of A's or C's priority.
3... # 2 applies EVEN if one of the callers is not even running.

It's the combination of these three that confused me.

Thanks, Greg and Ben!



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
When you have a shared nonreentrant subVI, and one caller is high
priority and one is lower, then the subVI's priority is boosted to
match the higher of the callers.

--- You suggested this before and all tests seem to bear this out.
but the lower priority VI will need to go through thread context
switches to and from the high priority subVI calls.
--- This was our inferred explanation - good to have confirmation. I'm
still not sure why a priority-switch is equated to a thread switch,
though.
Priorities do not pipe higher octane electrons through your
computer.  Priorities can only speed one thing up by slowing another
down.
--- I understand that. What was a mystery (and is now explained) is
that the subVI is treated as a shared resource EVEN THOUGH there is no
competition for it (only one caller). On the face of it, it doesn't
make sense for priorities to be involved at all when a single call to
a single VI is used.  But you have to go through the same motions,
because you don't really know if another VI might call it or not.
Also, what was a mystery was the fact that if A calls B, then B's
priority is boosted, if necessary, to match A's. And this occurs AT
COMPILE TIME, whether the A--> B call is actually executed or not.
And threads add overhead, especially when not used
appropriately.
--- Threads only entered into this as experiments to explain what is
happening. I started with everything on the default (SAME AS CALLER)
thread.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
Thanks for the link, though I'm not sure it applies.
1... I'm not using RT things - there's not an RT board present.
2... These shared resources include global variables,
non-reentrant subVIs,... -- So a non-reentrant subVI is a
"shared resource".

3... Priority inheritance allows the lower priority thread to
temporarily "inherit" the time-critical priority setting, long enough
to finish using the shared resource and to remove the protection. Once
the protection is removed, the lower priority thread resumes its
original lower priority setting and is taken off of the processor.
--- I suppose the subVI could be a "shared resource", even though one
of the sharers is not running.  But if that's what is happening, then
you are suggesting that the process of changing priorities up and back
down adds 10 mSec to the execution time.

--- However, the tester calls the thing 1000 times, so I suppose the
extra 10 mSec is divided by 1000, so it's 10 uSec per call - that
seems believable.

--- And since subroutine-priority avoids the scheduler altogether,
it's not subject to this priority inversion.

--- And when I made the subVI re-entrant, it's no longer a "shared
resource" and is not subject to this priority inversion.


--- OK, I'm convinced you're on the right track, assuming this note
about RT applies to the plain LabVIEW as well.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
The problem with this idea is that in the case where I don't load the
dummy caller, the resource is not "shared" - there's only one call to
it!  Apparently the fact that the subVI is higher priority is enough
to get it to be declared "shared", even though there's no competition
for it.

Do you agree with that reasoning?



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
I am leaning toward this being OS related.
For what it's worth, I DID verify that setting the priority of the
tester to match the dummy caller restores the normal execution time.

Also for what it's worth, I ran the same tests on Mac OS X.  The same
behavior results, except that the difference is more extreme (10 mSec
--> 55 mSec). But matching priorities works there as well.

More information:
The dummy caller is not even necessary to prove the point - like Greg
said, it is boosting the priority of the VI it calls, even though it
doesn't execute.
Without the dummy even loaded, if you set the priority of the
time-waster VI to HIGHER than the tester, the penalty is imposed.

The rule as I understand it:
A time penalty is imposed when a VI calls a higher-priority VI.
(Subroutine priority excepted).



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-03 Thread CoastalMaineBird
Regular VIs are greatly affected by whether the panel is in memory
and whether the panel is open.  I suspect that you have eliminated
these effects, but it is always worth restating.  After any of these
changes to priorities, perform a Save All before taking any timings.
--- I understand that, and yes, I did save things and close the front
panels.
The subVI can only be run by one of them at a time, assuming it
isn't reentrant, and if its preferred priority is set lower than one
of the callers, it will be boosted to match the higher of the
callers.
--- The subVI was set to "normal" priority, excluding the experiments.
So, you're saying that the "object" subVI in question was boosted to
above-normal priority because one of its callers was above-normal.
Fine.
That doesn't explain why it then took LONGER to execute. Is there a
penalty for calling a higher-priority VI from a lower one?  Apparently
so.  Maybe some extra trips thru the scheduler?

I didn't test it, but I'd guess from what you're saying that if I
boosted the priority of the tester, the penalty would have
disappeared.

Regardless, I am working fine with subroutine priority, but I want to
understand things better.  I've read the app note (#114, I think), but
I can't see a connection between the words there and my situation
here.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-02 Thread CoastalMaineBird
If I set the "wasteTime.VI" as sub-routine, the undesirable effect
goes away!

Hmmm - that works for me too.  In the real case, the "Object" has a
subVI which has a subVI. I have to set those to subroutine as well,
because a subroutine cannot call a non-subroutine priority VI.  But I
think that's OK, in my case.
If I do that, I get times of 8-9 mSec on the real object!

I didn't try subroutine because I thought it would be the same as the
other above-normal priorities - guess I was wrong.

Still don't understand why, though.

Thanks for your eyes!



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-02 Thread CoastalMaineBird
If the OBJECT is set to:                the time is:

RE-ENTRANT - NORMAL - SAME AS CALLER    11.2 mSec
NORMAL - SAME AS CALLER                 28.9 mSec
RE-ENTRANT - NORMAL - OTHER1            33.8 mSec
NORMAL - OTHER1                         29.2 mSec

All of that with the phony caller (that's not running) set to TIME
CRITICAL (or anything above NORMAL).

Any clues there?  Looks like thread makes no difference...



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-02 Thread CoastalMaineBird
How about if everything is set for the same thread? This would rule
out the thread swapping theory.

--- That's where we came in.  I just double checked, and everything is
set to NORMAL - SAME ExeSys AS CALLER (Non re-entrant).
The times are about 11.4 mSec.
If I set the DUMMY priority anywhere above NORMAL, I get 28 - 29 mSec
times.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-02 Thread CoastalMaineBird
Is the "object" configured to run in the "Same thread as
caller"?
Yes.
But that doesn't explain (to me) the connection - given that one
execution path is not even running, why is the other penalized?

MORE INFORMATION:

If I make the object RE-ENTRANT, the execution time is back to
"normal", regardless of priorities.  That's not a real solution for
me, as I have shift registers within the object I need to keep, but
maybe it's a hint as to what's going on.



Re: Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-01 Thread CoastalMaineBird
MORE INFORMATION:

If I put the "Object" VI (normal priority) in a brand-new VI, the
tester shows the faster execution time.

If this new VI is set to ABOVE NORMAL priority, the execution time
balloons upward again.

If I raise the TESTER's priority to match (or exceed) the new VI's
priority, it's back down again.

In other words, it looks like calling a lower priority VI from a
higher priority VI puts a "fence" around that lower-priority VI, EVEN
IF THE CALLER IS NOT RUNNING.

Other VIs of the same lower priority suffer the penalty of that
"fence".



Re: Setting a property of a substring fails only during first execution.

2004-03-01 Thread CoastalMaineBird
It fails because there is no text in the indicator the first time.

You are setting the attributes BEFORE you set the text into the
indicator.

With no text present, you can't set SEL START or SEL END to anything
but 0.



Why does priority of NON RUNNING vi affects execution time of subVI?

2004-03-01 Thread CoastalMaineBird
I'm chasing a weird timing problem.  LabVIEW 7.0 + Windows 2000

I have a VI (called "the Object"), whose purpose is to store and
retrieve numerical values by name.

The name is hashed, converted to an array index and the values are
read / written to an array slot.

That all works fine.
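
In text form, the store/retrieve core looks something like this (a
Python sketch of the hash-to-slot idea; the collision handling shown
here - linear probing - is my assumption, and the real VI keeps its
arrays in shift registers):

class NameStore:
    def __init__(self, size=1024):
        self.names = [None] * size    # one slot per hashed name
        self.values = [0.0] * size

    def _slot(self, name):
        h = hash(name) % len(self.names)
        # probe past collisions; assumes the table never fills up
        while self.names[h] is not None and self.names[h] != name:
            h = (h + 1) % len(self.names)
        return h

    def write(self, name, value):
        i = self._slot(name)
        self.names[i], self.values[i] = name, value

    def read(self, name):
        return self.values[self._slot(name)]

store = NameStore()
store.write("motor_rpm", 1750.0)
print(store.read("motor_rpm"))        # 1750.0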


I have a tester for the Object.  It generates random names and random
values, starts a timer, stores the values (say 1000 of them), marks the
timer, reads the values, and marks the timer again.  The point is to
judge the execution time of the Object.

That all works fine.


EXCEPT


I measure a particular case at 11.44 mSec for 1000 writes.
Repetitions result in numbers similar to 11.44

I then load a particular VI from a program I'm having trouble with.
The trouble is it's taking more time than expected to process data.

This VI is normally set to execute at TIME CRITICAL priority.

This VI uses (indirectly) the Object.

If I load this VI WITHOUT RUNNING IT, and run the tester, the
execution time goes from 11.44 to 27.08 mSec.

The VI is NOT RUNNING, yet its presence affects the execution time of
a VI it uses.

If I change its priority to NORMAL or BACKGROUND, then the tester
reports the shorter times (11.44 or so).

If I change its priority to ANYTHING above NORMAL, I get a 27+ mSec
time.  I can change it back to normal, and get the "normal" execution
time.

Why does a given VI's priority affect its subVI's execution time IF
IT IS NOT EVEN RUNNING?

And how do I fix it?



Re: create a small data base stored in a file

2004-01-05 Thread CoastalMaineBird
No, I program for a living.
Everything you need is in my previous answer.



Re: create a small data base stored in a file

2004-01-05 Thread CoastalMaineBird
For something this small, use a single VI to manage it in memory:

It has a FUNCTION input, an enum with values of (INIT, READ FILE,
WRITE FILE, ADD ITEM, FIND ITEM).

It has a CLUSTER input, which is your record type {Item number, serial
number, time stamp, description}

It has a CLUSTER output, of the same type.

It has an ITEM NUMBER input, which is an integer (assuming your item
number is truly a number).

The code is a WHILE loop with the CONTINUE input wired to FALSE (it
never loops).

Inside the WHILE LOOP is a CASE statement, with the selector wired to
the FUNCTION control.

For case INIT, make an empty array of records (your cluster type) and
feed it to a shift register on the WHILE loop.

For case WRITE FILE, take the shift register input  and CREATE, WRITE,
and CLOSE a file. (pass it thru to the output as well). Wire the
cluster to the DATALOG TYPE of the CREATE FILE function to create a
datalog file.

For case READ FILE, use OPEN FILE, READ FILE, and CLOSE FILE
functions, with DATALOG TYPE wired to the cluster type.

For case ADD ITEM, just append the new item (input cluster control) to
the array from the shift reg and put the array back in the shift reg.


For case FIND ITEM, just search thru the array (from the shift reg)
until you find the matching item number, then return the whole record
in the output.


You'll have to pass the left shift reg thru the case to the right
shift reg in all cases except INIT, READ FILE, and ADD ITEM.

This means the actual storage is in the shift reg, for max efficiency.

If you get beyond a hundred items, I would suggest a different FIND
ITEM technique (keep a separate list for ITEM NUMBERS and search that,
rather than the whole thing).

This assumes you have control of shutdown - any changes you make are
lost unless you call WRITE FILE afterwards.
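
For comparison, the same single-VI pattern in a text language (a
Python sketch: the FUNCTION enum becomes a string dispatch, the shift
register becomes a module-level list, pickle stands in for LabVIEW's
datalog file format, and the record field names are my own):

import pickle

records = []                          # plays the role of the shift register

def manage(function, record=None, item_number=None, path="items.db"):
    global records
    if function == "INIT":
        records = []                  # start with an empty array of records
    elif function == "WRITE FILE":
        with open(path, "wb") as f:
            pickle.dump(records, f)   # whole-array write, datalog-style
    elif function == "READ FILE":
        with open(path, "rb") as f:
            records = pickle.load(f)
    elif function == "ADD ITEM":
        records.append(record)
    elif function == "FIND ITEM":
        for r in records:             # linear search - fine below ~100 items
            if r["item"] == item_number:
                return r
    return None

manage("INIT")
manage("ADD ITEM", {"item": 42, "serial": "A1", "stamp": 0, "descr": "widget"})
manage("WRITE FILE")
print(manage("FIND ITEM", item_number=42))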