Re: Time Sharing One GPIB Spectrum Analyzer (HP6562A) Between Two VIs.
An easier method than locking is to build a higher-level VI that handles complete commands (writing and reading) to the analyzer. That way LabVIEW will automatically schedule the first caller and make any later callers wait. This synchronisation is ideal for such an application. Locking at the VISA level is only needed when you are running multiple executables at the same time.
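The scheduling described above can be sketched in Python as a lock around the whole write+read transaction. This is only a text-based analogy (LabVIEW does the serialization automatically for non-reentrant subVIs); the class and method names here are illustrative, not any NI API, and the instrument I/O is simulated.

```python
import threading

class Analyzer:
    """Analogy of a non-reentrant 'command' VI: one lock serializes the
    write+read pair so two callers can never interleave transactions."""

    def __init__(self):
        self._lock = threading.Lock()
        self._log = []  # records the order of simulated bus operations

    def query(self, caller, command):
        # The whole write/read transaction happens under the lock, just as
        # LabVIEW queues callers of a non-reentrant subVI.
        with self._lock:
            self._log.append((caller, "write", command))
            self._log.append((caller, "read", command))
            return "reply to " + command

analyzer = Analyzer()
threads = [threading.Thread(target=analyzer.query, args=(i, "FETCH?"))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each caller's write is immediately followed by that same caller's read.
```

The point of the lock (or of the non-reentrant VI) is that a caller can never see the instrument in the middle of another caller's transaction.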
Windows-like interface
Do you guys have any tips on how to make my programs look like OS-native programs? I know that I can use the controls from the Dialog Controls palette, but there are many controls that are not available there (a file-selection control, for example)... Thanks
Re: I want my application to have a main menu(vi) and few sub.vi...
Dear Mr. Nayisoe, Thanks for your reply. The code you attached could not be opened; an error pops up saying the VI version (6.1) is newer than 6.0. I did try your suggestion myself, but the front panel just pops up and closes. It just flickers. What I actually want is for a subVI's front panel to open depending on the status of a switch in the main VI. The subVI should run, and a Boolean switch such as Exit on the subVI's front panel, when pressed, should close the subVI's front panel and return to the main VI. In short, the subVI should be started by the switch in the main VI and stopped by the switch on its own front panel. Please comment. Thanks
Re: I would like to create a pop-up window appear from Labview...
Hello Dennis, Since my requirement looks solvable with your method, I tried to use your VIs, but they could not be opened (error 9: LabVIEW version 6.1 is newer than version 6.0). Could you provide code compatible with LabVIEW 6i? Thanks
Re: dragdrop mouse over - special effects
Thank you for the answer, Joe. Working inside LabVIEW would be enough for me. Is it possible to post or send me an example? I'm not so familiar with the Event Structure, so I could learn from it. Thanks a lot and have a nice day. st
Some Problems with LabVIEW and Matlab DLLs
Like many other LabVIEW users, I wanted to call Matlab functions from LabVIEW by means of a DLL. Instead of using a wrapper DLL, as I have seen in these discussion forums, I created the DLL with the MathWorks COM Builder directly from the '.m' files; it builds and registers the component. I then used this DLL in LabVIEW as an ActiveX component, using an Automation Refnum on the front panel to select the DLL, then opening the reference and invoking the methods I wanted on the block diagram. After several difficulties (you have to use a 2D array type to obtain a 1D array output parameter, and you are not permitted to work with complex parameters, so you have to pass the real and imaginary components separately), I had the Matlab functions doing their job inside LabVIEW without having Matlab running on my PC. My problem arises when I rebuild the Matlab DLL. From that moment on, when I run my top VI again, a 'Class not registered' message appears when the Invoke Node executes. To solve this, I have to open my DLL VI and reassign the same DLL on the Automation Refnum. It seems that each time the DLL is recreated, something related to it changes and LabVIEW no longer recognizes it, despite the DLL's name being exactly the same. Does anyone know why this is happening? Any help? Thanks and have a good day, spj
Re: closing front panels
Hello f_d, yes, it's possible. You have to use the VI property nodes; with these you can control the front panels of subVIs (or main VIs). Pass in a VI reference (Open VI Reference), then select the property FrontPanel.Open, wire True (or False, as needed), and it works... Best regards, Gerd
Re: closing front panels
Most often, opening and closing of the panel is controlled from the VI Properties, Window Appearance setup: there you can set whether the VI's front panel should open when called or loaded, and whether it should close when finished. The same properties can be overridden when calling the VI, in the subVI node setup, which you can access by right-clicking on the subVI. However, you can also open and close front panels using property nodes or methods. If you get a reference to the VI and set its Front Panel Open property to False, you close the window, and vice versa. Using an Invoke Node, there is a method called Open FP... lots of options, in other words. You'll need to make sure the data flow is correct, though; closing a window does not help if what you really need is for the VI to stop and unload, for example. By the way, closing the last open front panel by letting all code end and then setting the Front Panel Open property to False is the cleanest way to exit a built application; unlike using the Quit function, the last window will disappear instantly, with no flashing of a halted VI window. (In my opinion NI should modify the Quit function so that it does not do this in built applications... but anyway.)
Re: automatically changing min and max values in a colour ramp while the VI is running
This is done using a property node. Right-click on the control and select Create»Property Node. A property node will pop up on the diagram. Click on the property node and you can select which property to read or write. You can expand the node to include multiple property terminals. You'll find the properties you need in the submenu of the Scale property.
closing front panels
Hi there, is there a way to close a VI's front panel (in LabVIEW 6.1) besides the Quit LabVIEW function? That function closes all VIs that are open. Say you have a main VI and several subVIs; in the main you select a task which opens a subVI where you can select several subtasks. But from the moment the subVI opens, the main VI should be gone, sort of like browsing through your files in Windows. I would also like to integrate a Back button for when the VI has finished, so that you can go back to the main. Is this possible? Thanks
Re: Why does a vi not release memory even when arrays have been emptied?
Thank you Mads - you've pointed me in a new direction.
save data from a frequency domain acquisition
I am an Italian student trying to monitor a marble-cutting process using accelerometers. I have written a program in LabVIEW 6 and I am able to save the data displayed in the time domain so I can read it with Excel, but I am not able to save the data displayed in the frequency domain. Can someone help me? Thank you.
Re: program pic 16f84 with labview from code c
OK, thanks! But is it possible to call an external compiler program? The VI would convert the C text to hex, and then a LabVIEW VI would program the PIC. Is that right? Thanks for any help.
Re: save data from a frequency domain acquisition
Hello. Frequency = 1/Time: frequency is the reciprocal of time, and the Reciprocal function is available in the Functions palette, Numeric palette. Wire your time value into it and it will return 1/X, which is the frequency corresponding to that time. Then save the data that gives the frequency output to file. Have a great day. Regards~ [EMAIL PROTECTED]
Re: automatically changing min and max values in a colour ramp while the VI is running
Many thanks - this has worked a treat!
Re: Problems with load multiple XYgraphs
Any help with the first error? I tried to remove the error but I am not able to solve it. I would like the plot legend to show the curve names: at start the program showed generic items; when I add curves it shows the names, and when I remove any curve it should remove that name, but I don't know how to do it... Regards, Jose.
Re: Optimizing network throughput over Ethernet with TCP/IP
Hi Adam, It has been a couple of days with no response on this question, so I will reply. First, you have done your homework well. I have not used the TCP functions from Sheldon Instruments, so I cannot comment on those. I think the Nagle option would help if you were sending a lot of small packets that need individual responses. I am a bit concerned about your target of 30 Mbps. It has been years since I dug this low, but I think 30 Mbps is going to be the practical upper limit of a 100 Mbps LAN. This is based on the old days when Ethernet ran over coax. The issue, as I see it, is how the hardware gets access to the LAN. Ethernet adapters have to do a number of things when transmitting a packet. First, the adapter has to check the network to see if anyone else is talking. If the network is free, it will then transmit the data while simultaneously reading from the network. If another node tries to do the same thing at the same time, they will corrupt each other's transmissions, resulting in a bad CRC being detected by each transmitting node. This is a collision. The interfaces are then supposed to execute a random wait and then retry. You probably knew this already. When network utilization is low, collisions are less likely and data can fly. Once network utilization starts to climb, collisions become more frequent and overall throughput declines. In the old days we used to recommend keeping utilization below 30% of the theoretical maximum. This is why I am concerned about you trying to do 30 Mbps over a 100 Mbps LAN: you are right on the edge where performance starts to drop off. Remember that TCP packets are ACKed by the receiver! This ACKing is going to introduce collisions, because it has to happen to support the integrity of the transmission. Enough about my fears; how about some ideas. First: keep LV out of the mix and just try out your hardware and OS using file transfers. Just try copying large files from one machine to the other on your dedicated LAN.
If you can transfer files at 30 Mbps, then you may be able to do the same with an app in LV. (Note: file sharing is generally implemented using TCP/IP.) Second: I think you will probably have to go the 1 Gigabit route. Your 30 Mbps is only 3% of that bandwidth, so provided your hardware can pump as fast as the net can suck, you should be able to pull this off. In closing, I would like to clearly state that I am not an expert in this area, and it has been years since I got into the low-level details, so please forgive me if I have got something wrong here. I will watch this question to see how things work out for you, and to see whether anyone else would like to jump in and straighten me out. Trying to help, Ben Rayner, Certified LabVIEW Developer, www.DSAutomation.com
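The collision-then-random-wait retry rule described above is classic Ethernet truncated binary exponential backoff. A minimal sketch of that algorithm in Python (this illustrates the network behavior being discussed, not any NI or LabVIEW API):

```python
import random

def backoff_slots(collision_count, max_exp=10):
    """Truncated binary exponential backoff: after the n-th collision
    on the same frame, the adapter waits a random number of slot times
    drawn uniformly from 0 .. 2**min(n, max_exp) - 1 before retrying."""
    return random.randrange(2 ** min(collision_count, max_exp))

# As utilization climbs, collisions repeat and the wait range widens:
# 1st collision -> 0..1 slots, 3rd -> 0..7 slots, 10th+ -> 0..1023 slots.
```

The widening wait range is exactly why throughput falls off as a shared-medium LAN approaches saturation.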
Re: Why does a vi not release memory even when arrays have been emptied?
You can check the Deallocate memory as soon as possible box under Tools»Options»Performance and Disk. This will release memory earlier than LV normally would. It will increase runtime in cases where memory is allocated and freed often while processing the data. In LV 7.0, under Advanced»Data Manipulation, there is a Request Deallocation function which will release allocated memory. This way you have more detailed control over memory usage. Waldemar
Re: priority in events
You cannot prioritize the events handled by the Event Structure; the events will always be handled in FIFO order. There seems to be no method other than Abort VI to terminate an event. If you need to reorder events, you have to use your own queue and a state machine to do it. Also, long-running events need to check a signal such as a notifier. Waldemar
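The roll-your-own-queue idea above can be sketched in Python: the event handler only enqueues, and a separate state machine pops the highest-priority event first instead of in FIFO order. The class and method names here are illustrative, not a LabVIEW API.

```python
import heapq
import itertools

class PriorityEventQueue:
    """User-managed event queue: lower priority number = handled sooner.
    A monotonically increasing tick breaks ties, preserving FIFO order
    among events of equal priority."""

    def __init__(self):
        self._heap = []
        self._tick = itertools.count()

    def post(self, priority, event):
        heapq.heappush(self._heap, (priority, next(self._tick), event))

    def next_event(self):
        return heapq.heappop(self._heap)[2]

q = PriorityEventQueue()
q.post(5, "log update")
q.post(1, "emergency stop")   # jumps ahead despite being posted second
q.post(5, "redraw")
```

The state-machine loop would call next_event() each iteration, so an urgent event posted late still gets handled before routine ones already waiting.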
Re: How do I call another event from the current event in an event
I don't believe, though, that the other event will actually execute until the first one completely finishes, even if you use a sequence to call the other event. The other event's timestamp will indicate when it was called, i.e. placed in the queue to be executed. Note this behavior is different from other languages (Delphi, for example) where a second event can be made to execute at any point in time from within another event. This is an important thing to remember, and hopefully the terminology will help: LV doesn't execute events, it delivers or fires or posts them. Some systems have both mechanisms; Windows uses the terms SendMessage and PostMessage. SendMessage is a function call; there is nothing event-like about it except that your parameters are packed into an event structure. These sent messages cut in line and block everything until they return. Posted messages, on the other hand, go into a queue and execute when their turn comes. Posted messages provide asynchronous notification. If you have more questions, ask away. Greg McKaskle
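Greg's send-versus-post distinction can be sketched in a few lines of Python. The Dispatcher class and its method names are hypothetical, for illustration only: send() runs the handler immediately (cutting in line, blocking the caller), while post() only enqueues it, and posted work runs later when the queue is drained.

```python
from collections import deque

class Dispatcher:
    """Toy message dispatcher contrasting synchronous 'send' with
    asynchronous 'post' (names are illustrative, not a real API)."""

    def __init__(self):
        self._queue = deque()
        self.trace = []  # order in which handlers actually ran

    def send(self, name):
        self.trace.append(name)      # handled right now, caller blocks

    def post(self, name):
        self._queue.append(name)     # handled later, in posted order

    def drain(self):
        while self._queue:
            self.trace.append(self._queue.popleft())

d = Dispatcher()
d.post("posted-1")
d.send("sent")        # runs before anything that was merely posted
d.post("posted-2")
d.drain()             # now the posted handlers run, FIFO
```

LabVIEW's Event Structure behaves like the post() side: an event fired from within a handler waits its turn rather than interrupting the current handler.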
Short Time Processing Methods
Has anyone ever come across short-time processing methods? I would like to look into the Short-Time Energy function, the Short-Time Average Zero-Crossing Rate, and median smoothing. I don't know a lot about them, but I don't want to get in too deep before finding out it's going to be a nightmare to work on! I'm not sure whether the STFT Spectrogram.vi will help me, or whether that is something different. Thanks.
Re: Open vi reference problems in labview 7 after building an application
Are you taking into account that the path to a VI is different in a built app?
Re: Open vi reference problems in labview 7 after building an application
If you have strict typedefs for the dynamically loaded VIs, you need to include the line BldApp.RemovePolyVIsandTypedefs=False in your LV ini file. See this discussion: http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101RPAGEID=135HOID=50650008005497UCATEGORY_0=_49_%24_6_UCATEGORY_S=0 Waldemar
Re: Any way to tell when an image display has been updated?
My kludge/workaround is to add a variable delay before the message is cleared. The delay is based on the image size and some empirically derived constants. The issue is that the terminals and locals of controls schedule an update and allow for batching; they are not synchronous. To synchronize things with them, you can try using the Value property on the control's property node. At the moment I believe this is synchronized. If not, you can use the panel property called Defer Panel Updates. You can use it to synchronize multiple property nodes and values by setting it to TRUE, making your modifications, then setting defer back to FALSE. So you can set your Boolean or string telling the user an update is in progress, then defer, then set the image and the string/Boolean again, then set defer to FALSE. The string/Boolean and image should update at close to the same time, telling the user that the update is finished. Greg McKaskle
Re: I would like to create a pop-up window appear from Labview...
I posted 6.1 and 5.0 versions. Open the 5.0 one.
Re: How can I detect if a global Boolean variable has changed
Hi, I don't understand what you mean. If I want to store the new time value in a global, I need to detect the change of state of this Boolean indicator. How can I manage it? Could you send me a small VI? Thanks, Angel
Re: why the tree control(lv 6.1)'s branch always shrink when run...
Dear Pawel, thank you, but what you describe applies to LV 7.0; my question concerns LV 6.1. Can you help me?
Re: dragdrop mouse over - special effects
Unfortunately I don't have the code anymore. My hard drive was damaged some time ago, and poof, everything was gone. Search for some examples in Help»Find Examples... Try searching for 'structures' or 'mouse'. The examples are straightforward. Joe
Re: Error 1 occurred at Open File
The error could be due to an invalid path or file name. How are you providing the path to the function? Is it a constant or are you using the File Dialog function first to allow for path selection?
Re: DAQ+storage must happen independent of other screens OR user interaction
The best way to implement this is using a background VI for DAQ + storage. Load this VI from your main VI with Open VI Reference and start it with the Invoke Node method Run VI. Use a named queue to send data from your UI VI to the background VI, and use a strict typedef for the data transferred over the queue. This way the background VI gets its configuration information. Use another named queue, or a named notifier, to send the acquired data from the background VI to the UI VI. Notifiers have the advantage that more than one reader can receive the data, but a reader that is too slow will lose data. Queues don't lose data, but you can have only one reader. If screen3 is a subVI of screen2, it can also create a named queue with the same name as in screen2 and read the data. If screen3 is closed, screen2 will read the data, but the data read by screen3 will be missing. If you don't want this, you have to load screen3 and run it with Run VI, and screen2 has to forward the data to screen3 over another named queue. Look for Ben's example of how to run UI VIs independently: http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101RPAGEID=135HOID=5065000800B2BDUSEARCHCONTEXT_CATEGORY_0=_49_%24_6_USEARCHCONTEXT_CATEGORY_S=0UCATEGORY_0=_49_%24_6_UCATEGORY_S=0 You only need to add the fourth VI for the DAQ + storage, the queues to communicate between the VIs, and of course the code and controls to do the things you want. Waldemar
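The background-acquisition pattern described above is a classic producer/consumer design. A minimal Python sketch, using Python's thread-safe queue as a stand-in for a LabVIEW named queue (the configuration fields and the simulated acquisition are assumptions, purely for illustration):

```python
import queue
import threading

config_q = queue.Queue()   # "UI VI" -> "background VI": configuration
data_q = queue.Queue()     # "background VI" -> "UI VI": acquired data

def background_daq():
    """Stand-in for the background DAQ+storage VI: wait for a
    configuration record, then stream simulated samples back."""
    cfg = config_q.get()
    for i in range(cfg["samples"]):
        data_q.put(i * cfg["gain"])   # simulated acquisition
    data_q.put(None)                  # sentinel: acquisition finished

worker = threading.Thread(target=background_daq)
worker.start()
config_q.put({"samples": 3, "gain": 2})   # the UI sends its configuration

results = []
while (sample := data_q.get()) is not None:
    results.append(sample)
worker.join()
# results now holds [0, 2, 4]
```

Like a LabVIEW queue, queue.Queue guarantees no data is lost but supports only one logical reader cleanly, which mirrors the queue-versus-notifier trade-off Waldemar describes.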
Re: how to hide mouse cursor for labview linux
I exhausted every resource I could find and came up with only one answer: unclutter. It is a small utility that hides the mouse cursor after a short period of inactivity. It will not work on the root window, however. Also, it is probably not ideal for a touchscreen, since the cursor will reappear with any movement or button clicks. In the case of my embedded app, the mouse is never used. Hope this helps.
Re: Does anyone have a program to emulate a terminal like a vt100 or xterm?
Thanks to Sean and Ben for their responses. Each response is useful, but I am actually looking for something a little different. My application needs to do five things: 1) communicate with the host via serial port or TCP/IP; 2) emulate the vt100 (xterm) display; 3) direct input to the host from the LV application; 4) parse output from the host into the LV application; 5) handle user input via the keyboard. The first can be done with the examples from the serial VIs and the Internet Toolkit. Tasks 3 and 4 are fairly straightforward LV code. Task 5 can be handled by the Global Text Box with Event Structure and keyboard.vi from the LabVIEW development library. If anyone knows which approach is better, please let me know. The hardest part is trying to make sense of all the escape sequences that the host spits out and trying to present something the user can look at. The commercial application that Sean found looks like it will be able to handle the display over serial or TCP/IP; I would still have to figure out how to manage the LV application interfaces. What I am looking for most is something written in LV that can handle at least some of the display functions, so that I can write the rest. Thanks, Roger
Re: saving to version 6 from version 7, (not version 6.1)
There is no direct way of saving back two versions. LabVIEW only supports saving a single version back, i.e. 7.0 to 6.1. I believe the conversion kit that was mentioned refers to the ability to open old versions, such as 3.1, in newer versions of LabVIEW. The only way you can save a VI from 7.0 to 6.0 is to save it first as a 6.1 VI. So you need to Save for Previous as a 6.1 VI, then open the VI in 6.1 and save it for previous again. Every time you save back, you increase the chances that you will lose a section of code. For example, if your VI uses the Event Structure, you will be able to save to 6.1 but not to 6.0, because 6.0 does not have the Event Structure. Going through this process lets you know which features you lost at which versions, so you can weigh how important each feature is to you versus the cost of upgrading. If you have LabVIEW 6.1, you can do the save yourself. If you do not, and your project is relatively small, you may be able to post it here and have someone attempt to save it back for you. However, keep in mind that we won't be able to test or verify that the save-back worked. Evan, National Instruments
Re: How can I set one loop's frequency as a slave of another loop ?
Instead of a Case Structure, what do you think about the Event Structure? Would it be a better idea?
Waveform to binary file to read with other application
I'm streaming waveform data to a binary file to be read by third-party waveform analysis software. When I open the file, however, the waveform data is unintelligible. I have read that LabVIEW uses big-endian byte order, while most PC programs use little-endian; how do I convert from big-endian to little-endian and save the file for use by the outside program?
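The byte-order conversion the question asks about amounts to re-packing each sample with the opposite endianness. A minimal Python sketch for 8-byte doubles (assuming, as the question does, that the file contains raw big-endian float64 samples; adjust the format character for other numeric types):

```python
import struct

def big_to_little_doubles(raw: bytes) -> bytes:
    """Re-pack a buffer of big-endian float64 samples as little-endian.
    '>' means big-endian, '<' little-endian, 'd' a 64-bit double."""
    n = len(raw) // 8
    values = struct.unpack(">%dd" % n, raw)
    return struct.pack("<%dd" % n, *values)

# Simulate what a big-endian writer (like LabVIEW's binary file VIs)
# would produce, then convert it for a little-endian PC reader.
big = struct.pack(">3d", 1.0, -2.5, 3.25)
little = big_to_little_doubles(big)
```

Reading the converted buffer back with the little-endian format ("<3d") recovers the original sample values, which is what the third-party software expects.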
Re: LabVIEW Runtime 7.0 Linux and Mozilla
Hello Ondra, After some research I have discovered that Mozilla is not supported for viewing remote panels. I suggest using Netscape 6 or later. I hope this helps, Sean C. Applications Engineer National Instruments
Re: Any way to tell when an image display has been updated?
How about a sequence like this: 1. Set cursor to busy (wait glass) 2. Update display 3. Reset cursor George Zou http://gtoolbox.yeah.net
Re: Error 1 occurred at Open File
By 'function' I mean a LabVIEW function or VI. If the paths are constants, I'd check to make sure that each path is still valid. You could also replace the constant with the File Dialog function on the File I/O»Advanced File Options palette, or use a front panel path control with the browse button visible; you can click the Browse button to select a file as well. If you select a file with a known-good path and don't get the error, then you'll know something is wrong with the constant.
Re: When I open an autorefnum I get an error 3005.
Doug 80910, I apologize for the false link; I tried to link you to an internal-only page. I will list the steps below in case you are still interested. (Again, replace steps 4-6 with the appropriate steps for your ActiveX object.) 1. On the front panel, open the Controls palette and select Refnum»Automation Refnum. 2. Right-click on the refnum and select ActiveX Class»Browse. 3. Select Show Creatable Items Only. 4. In the Type Library drop-down menu, select Microsoft Internet Controls. 5. Click on the plus sign next to InternetExplorer (InternetExplorer.Application.1). 6. Select IWebBrowser2 from that header. Additionally, the reason you can only open the refnum when you check Show Creatable Items Only is that the object is dependent on other objects; by selecting that option you are taking these dependencies into account. I hope this helps, Sean C. Applications Engineer National Instruments
Re: How do you change fonts in an HTML report?
Dear Dave, The setting of the size is working, but your parameters are invalid for an HTML document. The report VIs generate HTML that is deprecated: a FONT tag with a size attribute set to 30pt or 10pt, as you entered in the Set Report Font parameters. The HTML standard allows only values of 1 to 7 for this attribute. The standard works with relative sizes, so 'pt' does not apply: 1 is the smallest size, and the actual displayed size depends on the browser's settings (in MS IE you can set it under View»Size to one of five degrees). If you use 5 and 7 instead of 10pt and 30pt, you will see the text in different sizes. Your values are being coerced to 7; that's why you see something you didn't expect. Unfortunately I have no experience with the report VIs, so I'm not sure whether they can use style-sheet formatting, which would allow font sizes measured in points. Waldemar
Re: Getting access to logged data during acquisition
Thanks. I actually came up with that solution, but I was having a bit of a problem implementing/modifying the simple data logger, since I'm quite new to LabVIEW and am working with limited time. I'm considering other solutions in LabVIEW as well. Thanks for your help.
Re: Error 1 occurred at Open File
thanks I will try that
Re: Passing a string buffer pointer to a DLL in LabVIEW
Hello Greg, Not sure what happened in my last posting, but the LabVIEW error detail got missed off the posting! In answer to your questions regarding the blank string buffer: I've tried 24 bytes, 48 bytes and 100 bytes of reserved space for the output string buffer from the DLL, but LabVIEW always crashes with the usual 'LabVIEW: An exception occurred within the external code called by a Call Library Node...', VI was stopped at Call Library Function Node 0x6CC of subVI. The DLL should (I'm assuming) return the data to the string indicator (see the Visual Basic, LabWindows/CVI and VC++ screen snapshots from previous postings), but all I receive is the LV error and no data returned from the DLL. This DLL is supplied by the hardware manufacturer, and I cannot rebuild or breakpoint the DLL, as I do not have the source. I've also looked at the excellent Call DLL.vi example that ships with LV 7.0 and have tried to replicate the Call Library Node configuration in those VIs (WinCHAR.vi, ANSIunsignedchar.vi and LVCStr.vi). Why is it that Visual Basic, LabWindows/CVI and VC++ can all execute this simple DLL call, but LabVIEW seems to have a real problem? -Chroma
Re: lvanlys.dll Real IFT function change from version 6.1 to version 7
Well that certainly did it. Thank you very much for your time in looking at this so quickly. This is definitely a point that I missed. It is interesting that LabVIEW 6.1 handled this error okay, but I am happy to know the correct way to build the input waveform. Thanks again.
Re: How do you change fonts in an HTML report?
Waldemar, Thank you so much for your response! I knew there was something silly I was doing wrong, but I just couldn't figure it out. Thanks again, you've saved me a bunch of time! Dave Neumann
ActiveX container disappears after VI is saved and recalled
I can create a container and insert an ActiveX object. The VI runs, but if it is saved and recalled, the control disappears! There is no other indication of a problem.
Labview for a policy server?
Hi, Is there anyone out there who has used LabVIEW to build a policy server? What I mean by a policy server here is a set of rules to administer, manage, and control access to some resources, and its use in granting access. I would like to know whether there are any readily available LV tools/VIs that can be customized for this. Another possibility is if anyone has incorporated stand-alone tools from other environments within the LabVIEW environment. Thanks in advance. Sheela
Re: Agilent 5973 GC driver
Hello David, Thank you for contacting National Instruments. We currently do not have drivers specific to that instrument in our Instrument Driver Network. You may need to use the VISA functions in LabVIEW to communicate with the instrument. You can, however, submit a request that drivers be made for that instrument by using the following link: www.ni.com/idnet You can also follow up on your request by emailing [EMAIL PROTECTED] I hope this helps! If you have further questions, let me know. Matthew C Applications Engineer National Instruments
Re: Does anyone have a program to emulate a terminal like a vt100 or xterm?
Try enabling software flow control (not hardware handshaking). That should handle all of the XON and XOFFs. The only things you have to cover then are FF and LF. The other characters may not even come up if you are not using the device to do command editing. Your termination character will probably be CR. Ben
Re: I am looking for an indicator like a FLASH LIGHT, just to show my sequences moving on.
Why not use a progress bar? From your front panel, go into the Controls palette, select Numeric, and pick one of the 4 available progress bars. Pick something that you can measure progress with and wire it to the bar. If you wish to have a flashlight that moves sideways (like Windows), then try to find the icon and place it within an active loop(s). I prefer the progress bar; that way, you have an idea of when it is going to complete. You can even have granular steps that show that part of the code is done. Experiment... and have fun. JLV
Re: VISA Performance
I wouldn't expect a speed difference because of the serial port. What about the performance of the different PCs? I suspect the Win NT machine is more powerful; if so, it may be capable of doing the other (non-serial-communication) tasks faster. One bottleneck may be writing data to the hard drive. When you say they are the same, is all the hardware within the PCs the same, with the only difference being the OS?
How do I make input text boxes the same size on the front panel?
I have six string controls (text boxes) for, say, 12 chars max. I can place the text boxes on my front panel and align and label them, but I can't remember how to make them the same size???
Re: example labview VI with intensity graph for four inputs
Please learn to use Find Examples... from the Help menu. Search for 'colors'; there is a VI called Create IntGraph Color Table.vi which you can use to generate the color table you want. Then create a property node for the intensity graph; there is a property called ColorTable. Wire the value you get from the other VI to this property. The example VI basically divides the input of the intensity graph into 4 even quadrants (Q1 - Q4, see the coordinates in the VI). Each quadrant is a subset of the data you send to the graph, and each quadrant is a 2D array. As long as your data is a 2D array, it does not matter what you wire in. The basic idea is to combine 4 small intensity graphs into one big one, with each small one occupying one quadrant. On the diagram, everything inside the rectangle should be made into a subVI: select everything inside it and go to Edit»Create SubVI. Joe
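The quadrant idea above is just tiling four equally sized 2D arrays into one larger 2D array that feeds a single intensity graph. A small Python sketch of the array manipulation (the particular quadrant layout below is an assumption; match it to the coordinates used in the example VI):

```python
def combine_quadrants(q1, q2, q3, q4):
    """Tile four equally sized 2D lists into one 2D list:
         q2 | q1
         -------
         q3 | q4
    """
    top = [left + right for left, right in zip(q2, q1)]
    bottom = [left + right for left, right in zip(q3, q4)]
    return top + bottom

# Four 2x2 quadrants filled with distinct values, combined into a 4x4.
def quad(v):
    return [[v, v], [v, v]]

big = combine_quadrants(quad(1), quad(0), quad(2), quad(3))
```

Each quadrant keeps its own value range, so with a matching color table the single graph displays four independent data sets side by side.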
Re: how does the interface to ADO drivers work?
Hi Mike, I can't completely answer your question, but I will tell you what I can. If you dig into the guts of the Database Connectivity Toolkit VIs, you'll eventually get down to the ActiveX layer. There you will see exactly which methods and properties are used, and in what order. You can use the LabVIEW profiling tools to see which subVIs are the bottlenecks, and then dig into that code to see which node is taking the majority of the time. I'm not sure what you mean by LabVIEW filtering out some of the methods and properties; the Invoke and Property Nodes you see are exactly what is called, with the values that are wired into the various parameters. These are the same methods and properties that you'd call if you were writing a VC or VB program. LabVIEW itself doesn't directly talk to the Oracle drivers; as you can see, it just passes data to the ActiveX nodes, and ActiveX passes the data to and from the drivers. I, too, have noticed that the Oracle drivers are slow. However, the ones written by Microsoft for Oracle don't work at all for binary data, so I'll take slow over not functioning any day. Just out of curiosity, which of Oracle's drivers are you using: ODBC or OLE DB? What happens when you use the other one? Please let me know what you find out, and let me know if I can be of further assistance. Crystal
Re: When I open an autorefnum I get an error 3005.
Sean, Thank you for the follow up.
Failure: drawmgr.cpp, line 3443, LV 6.1
Hi! This is another failure that I got lately. I think it also has something to do with the 3D Surface Graph that I already posted a question about (in fact, just yesterday). My impression is that the CW 3D Surface Graph has some defects in it that cause the whole application to behave unstably. Thanks, Miro
Re: how do I make input text boxes the same size on the front pannel?
Hi! I am not aware of a more sophisticated way to do it, but this 'primitive' one will still do the job: - completely edit one (the first) box - clone the first box multiple times (by Ctrl + mouse drag, or by copy and then paste) and change their names to the ones you need (new labels) - align the boxes
Re: VISA Performance
Having run LabVIEW on a variety of systems, I've found that it's not that unusual for a program to run faster on an NT system than on other systems. I haven't compared Win2000 with NT yet, but Win2000 does run faster than XP. NT has *MUCH* better multitasking than Win98, so this explains quite a bit: those background tasks don't drag you down as badly. And yes, there are always background tasks (such as updating the clock). Rob
Re: I am looking for an indicator like a FLASH LIGHT, just to show my sequences moving on.
LabVIEW 6.1 and up allow you to put animated GIF files on the panel and on the diagram (we have a picture of a burning computer on one diagram :). You can put an animated GIF in a picture control and hide the control until you want people to see it. We are using an animated hourglass to show that the program is working. Just a quick and dirty thought. Rob
Re: Looking for the VI examples for the Learning with LabView 6i book by Bishop
LMichaud; The LabVIEW 6i Student Edition VIs are here: http://sine.ni.com/apps/we/niepd_web_display.DISPLAY_EPD4?p_guid=B4B83FB38BFC3469E034080020E74861&p_node=DZ52026&p_submitted=N&p_rank=&p_answer=&p_source=External I think those are the VIs included in the Learning with LabVIEW 6i book. Regards; Enrique Vargas www.visecurity.com
Re: Problem with LvVariant.cpp on line 696.
esvaq; You can check (i.e., search) internal errors at the following page: http://www.ni.com/support/failure/ Apparently, the specific internal error you are mentioning is not documented in the NI Knowledge Base. Check the NI document "LabVIEW Internal Errors" (http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/01b186b5ceb1d07786256747000f4319?OpenDocument), which is a general guide to troubleshooting internal errors. The other option is to contact Nacimiento Software Corporation (http://nacimiento.com), the creators of AppletVIEW (http://nacimiento.com/AppletVIEW/). On their website they have a FAQ (http://nacimiento.com/AppletVIEW/Support/faq.html) which includes error information related to LvVariant.cpp. Check if the solution applies to you. Otherwise, they provide support for their product (http://nacimiento.com/AppletVIEW/Support/). Regards; Enrique Vargas www.visecurity.com
Re: Panel Object xy coordinates and Multiple window list in task
The first question is really just a question about the initial placement of a control/indicator. I had my application just about finished when I thought about hiding and moving controls, at which point the 'origin' of the panel was lost. I just programmatically moved a control to 0,0 and took it from there; there is no other way to find the panel origin that I can see. If using LV7, if you show the panel grid, 0,0 is marked by darker lines and should be pretty obvious.

The next question, though, is why 0,0 is needed at all. The most common usage is to place a control relative to something else: read the position of a graph, for example, and put the other controls ten pixels to the left of its right edge. With this usage, you don't really need to know any particular screen position. Greg McKaskle
Re: A Pearl of Wisdom from Greg...
Accepting the past is smart. Predicting the future is really hard! Greg McKaskle. I don't know if that quote originated with you, Greg, but I like it! ...although I think my staff and clients will come to dislike it in the very near future :)
Re: 60MS/sec- is this possible?
I think NI needs to realize that with the latest round of high-speed DAQ cards, they've broken the back of the PCI/PXI bus. The PCI/PXI backplane, in reality, will only handle 70-80 MBytes/sec. Continuous 60 MS/s of 12-bit or better data is 120 MBytes/sec, since each sample occupies two bytes. What we have used for high data rate transfers is the ICS-554 from ICS Ltd. The cards are expensive, but they have LabVIEW support on both Windows and Linux, AND with a 66 MHz, 64-bit PCI 2.2 interface they can keep up with your data rate. http://www.ics-ltd.com/info/productInfo.cfm?ID=10&prod=1&info=Analog%20Input%20Boards Richard

On Tuesday, February 10, 2004, at 06:30 AM, E. Blasberg wrote: Hi All, This may sound naive, but I have a customer who wants to sample up to 6 channels across 3 boards (2 channels/board) at a sampling rate of 10 megasamples/sec/channel (i.e., 60 MS/s in total). Three different units are being measured simultaneously, which is why 3 boards are necessary (they don't all start or stop at the same time, but they could all be on at the same time). Currently the system measures 50 KS/s/channel with no problem (plus up to 2 channels of AO). The data is averaged each second and put into a FIFO buffer which is usually 2 minutes (120 samples) long. Every 2 seconds, the FIFO is averaged (to get a running average). I should add that a GPIB data logger is read every 2 seconds for slower data. My NI rep has suggested the PCI-6115, whose specs could certainly handle this. What I'm worried about is the computer keeping up with averaging 60 MS/s. Anyone have any experience using these data rates over an extended period of time (measurements can last WEEKS)? Much TIA, E. Blasberg iDAQ Solutions Ltd
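The back-of-the-envelope bandwidth math in Richard's reply is worth making explicit. A few lines of Python, assuming the usual DAQ convention that 12-bit samples are padded to 2-byte words:

```python
channels = 6
rate_per_channel = 10_000_000   # samples/s per channel, from the original post
bytes_per_sample = 2            # 12-bit samples stored as 16-bit words

total_rate = channels * rate_per_channel   # aggregate sample rate
bandwidth = total_rate * bytes_per_sample  # sustained bytes/s across the bus

print(total_rate)  # 60000000 samples/s, i.e. 60 MS/s
print(bandwidth)   # 120000000 bytes/s, well above the ~70-80 MB/s a
                   # conventional 32-bit/33 MHz PCI bus sustains in practice
```

This is why Richard points at a 64-bit/66 MHz PCI card: the quadrupled bus bandwidth is what makes the sustained rate feasible.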
RE: 60MS/sec- is this possible?
Instead of asking if this is possible, I would ask if this is necessary. If you are just averaging the data, there is no reason to acquire at 10 MS/s. I assume you are using a portion of this data for some sort of analysis or storing it for later use. Perhaps you could use a slower speed card and an averaging filter to monitor the data, then use the high speed card to grab the block of data you need. If the events being monitored are not predictable, you could use a circular buffer to acquire 3 seconds worth of data for each channel into onboard memory. Whenever you need the data, you could download it into the computer's memory. I don't know if this would work with your application, but it is probably worth looking at alternative strategies for acquiring the data you need with lower overhead. Bruce -- Bruce Ammons Ammons Engineering www.ammonsengineering.com (810) 687-4288 Phone (810) 687-6202 Fax -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of E. Blasberg Sent: Tuesday, February 10, 2004 9:30 AM To: [EMAIL PROTECTED] Subject: 60MS/sec- is this possible?
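The FIFO-of-averages scheme described in the original post (and the circular-buffer idea in Bruce's reply) can be sketched in a few lines of Python. The 120-slot length matches the poster's 2-minute buffer of one-per-second averages; the fake data fed in at the end is purely illustrative:

```python
from collections import deque

# A 120-slot FIFO of per-second averages. deque with maxlen discards the
# oldest entry automatically, i.e. it behaves as a circular buffer.
fifo = deque(maxlen=120)

def push_second_average(samples):
    """Average one second's worth of raw samples and append to the FIFO."""
    fifo.append(sum(samples) / len(samples))

def running_average():
    """The 2-minute running average, recomputed every couple of seconds."""
    return sum(fifo) / len(fifo)

# Feed in five seconds of fake data; each second's samples average to `second`.
for second in range(5):
    push_second_average([second] * 1000)

print(running_average())  # 2.0, the mean of 0, 1, 2, 3, 4
```

The point of the `maxlen` trick is that once the buffer is full, each new one-second average silently evicts the oldest one, so the running average always covers the most recent window with no bookkeeping.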
Re: Addressing a scanner from LV
Dave, I think that your best bet would be to access a TWAIN driver from LabVIEW. If you go any lower-level than this, you will need to know a lot more about your scanner than you probably care to learn. Alliance Vision sells a library that contains the LabVIEW / IMAQ Vision VIs for the TWAIN protocol. Take a look here: http://www.alliancevision.com/net/twain/fr_twain.htm Good luck, -Jim David Ferster [EMAIL PROTECTED] said: Has anyone ever tried to access a flatbed or sheet-feed scanner directly from LV? One could have a user make files from the scanner's software and then access the files from LV, but it would be nicer to have the LV app do everything... David -- David Ferster Actimetrics, Inc. 1024 Austin St., Evanston, IL 60202 http://www.actimetrics.com 847/922-2643 Phone 847/589-8103 FAX --