Scott,

OK, this is becoming a detailed discussion, but here goes:

Scott Hannahs wrote:
At 9:46 +0200 6/16/04, Uwe Frenz wrote:
> > > ... System is low on virtual memory. ...
> > If it is just a rare state it may well be worth just increasing the
> > system's virtual memory. This can be done by inserting some extra RAM
> > (always a good idea, but at some cost) or by changing some settings.
> > These, however, depend on the machine architecture (a PC, WS, Mac or
> > whatever) and on the OS (Win, Linux, OS X etc.). Changing the virtual
> > memory settings costs nothing extra and takes just a little time.
> Amount of "virtual" memory is independent of RAM unless one is
> drastically misusing the term virtual memory. Virtual memory is limited
> by DISK space and the basic addressing limitations of the CPU.
Maybe my knowledge is outdated, or maybe in Bill G's empire this term is used differently than in Steve J's.
_My_ understanding of virtual memory is that it is the sum of all available memory blocks for any given process running in a system. And that is almost all physical RAM plus the difference between maxSwapFileSize and actualSwapFileUsage. There are some memory blocks that can't be swapped; those are accounted for in the word 'almost' ;-)
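The definition above can be sketched as a bit of arithmetic. This is only an illustration of the formula, not a real memory query; the function name and all figures are made up for the example.

```python
# Sketch of the definition above: memory still available to a process is
# (roughly) free physical RAM plus the unused part of the swap file.
# Non-swappable blocks are ignored here, hence "almost" in the text.

def available_virtual_memory(free_ram, max_swap_size, used_swap):
    """Return the memory (in bytes) a process could still obtain."""
    return free_ram + (max_swap_size - used_swap)

# Made-up numbers: 512 MB free RAM, a 2048 MB swap file of which
# 768 MB is already in use.
MB = 1024 * 1024
avail = available_virtual_memory(512 * MB, 2048 * MB, 768 * MB)
print(avail // MB)  # 1792
```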


> Most CPUs can address at least 2 GB for a single process, so having a few
> GB of disk space available should make that available to LabVIEW.
In Win (except some special server versions) the available address space of i386 CPUs is still limited to 2 GB per process. I know of Win2003Server having the ability to address the full address space of 32-bit CPUs, which is 4 GB. This limits the performance of really big database systems and created the desire for 64-bit CPUs.
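The 2 GB vs. 4 GB figures follow directly from the pointer width; a quick sanity check of the arithmetic (the half-and-half user/kernel split is the standard Win32 default mentioned above):

```python
# Arithmetic behind the 2 GB / 4 GB limits: a 32-bit pointer can address
# 2**32 bytes; on standard 32-bit Windows this 4 GiB space is split in
# half, with only the lower 2 GiB usable by the process itself.
GIB = 2 ** 30

total_address_space = 2 ** 32               # reachable with 32-bit pointers
user_space_default = total_address_space // 2  # standard Win32 split

print(total_address_space // GIB)  # 4
print(user_space_default // GIB)   # 2
```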


> Adding RAM will make the use of the DISK for memory less frequent and
> give MUCH better performance, but it should run without failing.
Again a Win special: Windows itself always uses a dynamic swap file on the first HD (usually C:), AFAIK. This can be overridden with manual settings. Having a smaller system with just one HD sooner or later leads to a nearly filled and heavily fragmented HD. Then both swap space and swapping speed are dramatically reduced. This can be avoided by defining a static swap area (optimally on the fastest HD, after defragmentation).


> The only other limitation is running out of internal registers to manage RAM
Never heard of such a thing in the Win world ;-))

Your steps are a good guide for solving most such problems. Period.
But from the original post I got the impression of an error message that pops up only occasionally. That may just as well be caused by a full and fragmented HD.


> As a couple of steps:
> 1. Check that LabVIEW is really trying to use a lot of RAM. "Activity
> Monitor" works well for OS X and I assume that winders has
> a similar tool.
Win-ners have the Task Manager, showing among other things the memory and CPU usage ...
> 2. Find out where in LV you are chewing up massive amounts of memory
> and if this is necessary. The LV profiler works wonders!
> If you are chewing up a GB of RAM in a particular VI it will be
> very obvious!
Fully agree. Apart from this being much too slow and clumsy: if one expects such a huge amount of data, then working with much smaller pieces will usually dramatically increase program speed. The programmer knows which data must be kept and which data space can be reused. At least he should ;-))
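The chunking idea above can be sketched like this. It is only an illustration (the chunk size and data source are made up): instead of building one huge array, a small buffer is filled, processed, and reused, so memory use stays bounded no matter how long the input is.

```python
# Sketch of "working with much smaller pieces": sum a long stream of
# values while holding only chunk_size items in memory at a time,
# reusing the same small buffer instead of one huge array.

def chunked_sum(source, chunk_size=4096):
    """Sum values from an iterable, chunk_size items at a time."""
    total = 0
    chunk = []
    for value in source:
        chunk.append(value)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            chunk.clear()          # reuse the same small buffer
    return total + sum(chunk)      # leftover partial chunk

print(chunked_sum(range(1_000_000)))  # 499999500000
```

The same pattern applies in LabVIEW: process a file or acquisition in blocks and reuse the buffer, rather than accumulating everything into one ever-growing array.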


Greetings from Germany!
--
Uwe Frenz


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr. Uwe Frenz
Entwicklung
getemed Medizin- und Informationtechnik AG
Oderstr. 59
D-14513 Teltow

Tel.  +49 3328 39 42 0
Fax   +49 3328 39 42 99
[EMAIL PROTECTED]
WWW.Getemed.de



