Re: VI Slow Down/performance

2004-04-08 Thread Mads
Normally introducing a sub-VI is trouble-free, and as you say it's
usually better to organize the code that way. But if the code is
called repeatedly (thousands of times) and it all needs to finish as
fast as possible, the accumulated call overhead can become
significant, and you may have to skip the sub-VI.

(I actually suggested to NI two days ago that they add a feature in
LV, or a new type of object, that lets you organize code as a sub-VI
purely for readability and then have the compiler flatten it into the
caller to optimize performance.)
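
As a rough analogy in a text language (Python here; the function names
are made up and LV's actual call overhead is its own), the fixed cost
of a call per iteration only shows up when the body is tiny and the
iteration count is huge:

    import timeit

    def body(x):
        # trivial work standing in for a small sub-VI diagram
        return x * 1.0001 + 1.0

    def inlined(n):
        x = 0.0
        for _ in range(n):
            x = x * 1.0001 + 1.0   # same work, no call overhead
        return x

    def via_call(n):
        x = 0.0
        for _ in range(n):
            x = body(x)            # one call per iteration
        return x

    n = 1_000_000
    print("inlined :", timeit.timeit(lambda: inlined(n), number=1))
    print("via call:", timeit.timeit(lambda: via_call(n), number=1))

The per-call cost is constant, so it only dominates when the body does
almost nothing - which is exactly the case where flattening pays off.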

When it comes to memory usage the story is a bit more complicated.
Keeping the sub-VI's front panel out of memory prevents some copies
from being made, but even with the front panel removed / not in memory
the sub-VI boundary may still introduce data copies. If the amount of
data is large (which is perhaps the case here?), this can be costly.
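
As a hypothetical sketch of why such copies hurt (Python, stdlib only;
the function names are invented and LV's buffer-reuse rules are more
subtle than this), compare touching a large buffer in place against a
helper that copies it first:

    import time

    N = 10_000_000
    data = [0.0] * N

    def process_with_copy(buf):
        out = list(buf)          # boundary forces a full copy
        out[0] = 1.0
        return out

    def process_in_place(buf):
        buf[0] = 1.0             # same result, no copy
        return buf

    for fn in (process_with_copy, process_in_place):
        t0 = time.perf_counter()
        fn(data)
        print(fn.__name__, time.perf_counter() - t0, "s")

The copying version is dramatically slower, purely from moving the
large buffer around.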

Normally that would not give much of a performance penalty. But if the
available memory is too small and the software has to fall back on
virtual memory (disk), you can quickly see a significant performance
drop.

Could lack of memory be an issue here? If not, could you post the
caller and sub-VI?



Re: VI Slow Down/performance

2004-04-08 Thread shoneill
Hi Glen,

Could it be that you're updating indicators on the main VI panel
through references in the sub-VI?  I know LV sometimes wires things up
that way automatically when you make a sub-VI out of an existing piece
of code, and as far as I know that uses a lot of CPU time...

Even so, 10 times slower is really a lot.

Hope this helps

Shane.



Re: VI Slow Down/performance

2004-04-08 Thread AstroMed Glen
Sorry - to see the difference in performance, set the speed switch
to the left position, then compare how quickly the X-axis increments.
The version without the sub-VI advances more than 10,000 points per
second, while the one with the sub-VI does more like 1,000.



Re: VI Slow Down/performance

2004-04-08 Thread Mads
I could not notice any speed difference. If the version without the
sub-VI is set to full speed, then yes, the one with the sub-VI will
slow down because the other is taking all the CPU time - but that
works the other way around too (if you set the sub-VI version to full
speed first).



Re: VI Slow Down/performance

2004-04-08 Thread Greg McKaskle
  I have a question about performance. I created a VI that creates
  some dummy waveforms, does some processing on them, and displays the
  results. Pretty basic. I decided to simplify the VI by putting all of
  the processing algorithms into a sub-VI. When I did this the VI slowed
  down significantly - by about five to ten times. The code in the sub-VI
  is exactly the same as when it was just lying in the top-level VI.
  I even saved two versions and compared them. The processing is some
  FFTs and a little complex math, but I don't see how that is important.
  Does anybody have an explanation as to why I am experiencing such a
  drastic slowdown, and maybe how to deal with it without keeping my
  code on the top level? Any help would be gladly appreciated. Thanks in
  advance.
 

I looked into it and I can sort of explain it to myself.  I'll bounce 
this off some others at work tomorrow so that I can explain it better.

In a nutshell, the time isn't being spent executing the subVI.  Using 
the profiler and letting both VIs run for about five seconds, the 
single-VI version accounts for about five seconds of profiled time, 
and much of that time is spent in the top-level diagram.  In fact, the 
case structure is only executed about 100 times.

When I run the subVI edition for about five seconds, there is some 
missing time.  The time spent in the VI is something like 1.5 seconds, 
including the time spent in the subVI.  Indeed, the VI did run slower, 
because it spent most of its time waiting for LV to schedule it.  At 
least that is my theory.  I played around a bit and figured out 
something that makes the subVI version run at the same speed.

I'll check back tomorrow.

Greg McKaskle




Re: VI Slow Down/performance

2004-04-08 Thread Greg McKaskle2
OK, I feel like I finally understand what is going on here, and it is
a bit of a weird one.  It has very little to do with the subVI itself,
except that the subVI reorganizes the code and provokes this.

The outer loop runs quite fast, and the case structure's contents
run very rarely and have little impact on the execution speed.  The
problem is that the contents of the case structure include the
property node that reads the history of the chart.  In the fast code,
the loop runs in the standard execution system and occasionally,
inside the case structure, it needs to transition execution to the UI
thread to get the history value.  It may stay in the UI thread a
while, or it may immediately switch back.  Either way, there are just
a few of these transitions.  Each of them adds overhead, but compared
to the work being done, the overhead is pretty small and acceptable.

In the code that runs about 8x slower, the UI-ness of reading the
history leaks out to the case structure itself.  The outer loop
starts running in the standard execution system; then, to execute the
case structure, including the empty diagram, it transitions to the UI
execution system.  After the loop it could stay in the UI for a while,
but you also have the asynchronous Wait MS Multiple, and that tends to
push execution back to the original thread.  With both of these in the
same loop, we get 750 times more execution-system swaps than in the
original code.  If the code in the loop did more work, the swaps would
probably shrink back into the noise, but at the moment the loop runs
fast enough that this extra cost means an 8x loop-rate penalty.
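
A loose analogy in Python (made-up names; this is not how LV's
execution systems are implemented) shows why per-iteration handoffs
between two threads can dominate a tiny loop body:

    import threading, queue, time

    N = 20_000

    def single_thread():
        x = 0
        for _ in range(N):
            x += 1               # tiny body, no handoffs
        return x

    def ping_pong():
        to_ui, from_ui = queue.Queue(), queue.Queue()

        def ui_thread():
            for _ in range(N):
                from_ui.put(to_ui.get() + 1)   # "UI" does the step

        t = threading.Thread(target=ui_thread)
        t.start()
        x = 0
        for _ in range(N):
            to_ui.put(x)         # hand off to the other thread...
            x = from_ui.get()    # ...and wait to get control back
        t.join()
        return x

    for fn in (single_thread, ping_pong):
        t0 = time.perf_counter()
        fn()
        print(fn.__name__, time.perf_counter() - t0, "s")

Each round trip costs two queue operations plus two thread wake-ups, a
fixed tax the small loop body cannot amortize - the same shape of
problem as the execution-system swaps above.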

Morals?
This is one of those unfortunate times when the static analysis LV
does on the diagram fails us and LV guesses wrong. It assumes that
since some of the code needs the UI thread, you are better off moving
most of it there and amortizing the cost.  Unfortunately, something
else isn't playing along, and we end up being penalized. The good news
is that this alignment of the stars doesn't happen very often, and it
shouldn't be something you really have to worry about. It is also
something we are constantly tweaking to get the best performance in
typical usage.

The thing that provokes this is mixing UI and data computation in the
same loop.  In particular, the chart's history buffer is being used as
the circular buffer for the analysis.  If the circular buffer were
kept shift-register style instead, the whole diagram would run in the
same execution system, and it would be faster still than even the fast
loop here.
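
For a sense of what the shift-register-style fix looks like in a text
language (Python, illustrative names; in LV the deque's role would be
played by a shift register, and the chart would only ever be written
to, never read back):

    from collections import deque

    HISTORY = 1024

    # The deque plays the role of a shift register holding the buffer
    # between loop iterations.
    history = deque(maxlen=HISTORY)

    def acquire_point(i):
        return float(i)          # stand-in for the dummy waveform

    for i in range(5000):
        history.append(acquire_point(i))   # O(1); old data drops off
        if i % 100 == 0 and len(history) == HISTORY:
            window = list(history)         # snapshot for FFT/analysis
            # ... run the analysis on `window` here ...

The analysis reads from program-owned data, so nothing in the hot loop
ever needs the UI thread.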

So I guess the thing to take away is that using the chart as a quick
and dirty circular buffer is sometimes too quick and too dirty and can
gum up the works.

Greg McKaskle