Niels Larsen writes:
>
> The script below extracts 10 columns from a 2D
> raw array of 7688 columns (and 137050 rows),
> and then it accesses those 10 columns in some
> irrelevant way. The physical RAM consumption is
> then about 1 GB. This seems wrong; working on a
> tiny part of a dataset shouldn't trigger all of
> it to be loaded. I hope I am missing something,
> or else it will be a stopper for me, because the
> 2D arrays can be as large as the file system. PDL
> works underneath this...

Memory-mapping a file just allows you to access the
file contents by reading and writing memory locations
rather than through a sequence of file I/O calls
(read/write/seek).  If you wish to memory-map a 1TB
file, you'll need an OS and hardware that can support
a 1TB memory region (32-bit hardware only has a raw
address space of 4GB << 1TB!).
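
To make the distinction concrete, here's a minimal sketch
(in Python rather than PDL, since the mechanics are the same):
once a file is mapped, you index into it like memory, and the OS
pages data in on demand instead of you issuing read/seek calls.
The file name and contents below are made up for the demo.

```python
import mmap
import os
import tempfile

# Create a small demo file (stands in for the raw 2D array on disk).
path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    f.write(bytes(range(256)))

# Memory-map it: the file contents become addressable like a byte
# string; no explicit read()/seek() calls are needed, and the OS
# only pages in the parts you actually touch.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    val = mm[42]          # byte at offset 42, fetched via the mapping
    mm.close()

print(val)                # -> 42
os.remove(path)
```

The catch is exactly the one above: the mapping must fit in the
process's address space, which is what a 32-bit machine cannot
provide for a 1TB file.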

A 64-bit OS and hardware should be able to handle regions
of this size if the OS implementation supports it.
Someone with true 64-bit hardware/OS knowledge
will have to comment on specifics.  For 32-bit hardware,
or 64-bit hardware without OS support, you'll have to
work through the file with I/O calls rather than "mmap magic".
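
For what the I/O-call route looks like, here is a hedged sketch
(again in Python, with a made-up tiny row-major array of 8-byte
doubles standing in for the real data): you compute each element's
byte offset from its row and column indices and seek/read just
those bytes, so pulling one column never maps or loads the whole file.

```python
import os
import struct
import tempfile

# Hypothetical layout: row-major array of little-endian doubles,
# NROWS rows x NCOLS columns (tiny numbers here for the demo).
NROWS, NCOLS, ITEM = 5, 4, 8

# Write a demo file where element (r, c) holds the value r*10 + c.
path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    for r in range(NROWS):
        for c in range(NCOLS):
            f.write(struct.pack("<d", r * 10 + c))

def read_column(fname, col):
    """Fetch one column with seek/read; only NROWS * ITEM bytes move."""
    out = []
    with open(fname, "rb") as f:
        for r in range(NROWS):
            # Offset of element (r, col) in a row-major layout.
            f.seek((r * NCOLS + col) * ITEM)
            out.append(struct.unpack("<d", f.read(ITEM))[0])
    return out

col2 = read_column(path, 2)
print(col2)               # -> [2.0, 12.0, 22.0, 32.0, 42.0]
os.remove(path)
```

This trades the convenience of array-style indexing for one
seek/read pair per element, but it works regardless of address
space, which is the point when the file outgrows what mmap can map.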

Good luck,
Chris
_______________________________________________
Perldl mailing list
[email protected]
http://mailman.jach.hawaii.edu/mailman/listinfo/perldl
