Scott,
there is a package called ff that '... provides data structures that are
stored on disk but behave (almost) as if they were in RAM ...'
I hope it helps.
Peter
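A minimal sketch of what ff offers, assuming the ff package is installed (the file name in the commented line is made up for illustration):

```r
library(ff)

# Create a disk-backed numeric vector; only small chunks live in RAM.
x <- ff(vmode = "double", length = 1e6)

# Assign and read like an ordinary vector.
x[1:5] <- c(2.5, 3.1, 4.7, 5.2, 6.9)
print(x[1:5])

# For delimited files, read.table.ffdf() reads into a disk-backed data frame:
# df <- read.table.ffdf(file = "stomp_output.dat", header = TRUE)
```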
On Sat, Aug 9, 2014 at 6:31 PM, Waichler, Scott R scott.waich...@pnnl.gov
wrote:
Hi,
I have some very large (~1.1 GB) output files from a groundwater model called
STOMP that I want to read as efficiently as possible. For each variable there
are over 1 million values to read. Variables are not organized in columns;
instead they are written out in sections in the file,
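Since the values are written in labelled sections rather than columns, one approach is to read the file into memory once with readLines() and then pull each variable's block out by its header. This is only a sketch; the header pattern "X-Direction" is a guess at the real STOMP section labels, and the sample lines below are fabricated:

```r
# Extract the numeric values of one section from lines already in memory.
read_section <- function(lines, header_pattern) {
  start <- grep(header_pattern, lines)[1] + 1   # first data line after the header
  heads <- grep("[A-Za-z]", lines)              # any line containing a label
  end   <- c(heads[heads > start], length(lines) + 1)[1] - 1
  scan(text = lines[start:end], quiet = TRUE)   # parse all numbers at once
}

lines <- readLines(textConnection(
  "X-Direction Node Positions
   1.5 2.5 3.5
   4.5 5.5
   Y-Direction Node Positions
   10 20"))
x <- read_section(lines, "X-Direction")
print(x)  # 1.5 2.5 3.5 4.5 5.5
```

scan(text = ...) parses a vector of strings in one call, which is much faster than looping over lines.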
Informally abbreviating data is not recommended. I faked some, but I would appreciate it if you made your example reproducible next time.
All I really did for performance was use the data you read in rather than
re-scanning the file.
# generated by using dput()
lines <- c("X-Direction Node
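The "don't re-scan the file" point can be sketched as follows: read the lines once, split them into sections keyed by their headers, and extract every variable from memory. The section labels and values here are made up for illustration:

```r
# Split lines (read once from the file) into named sections, so each
# variable is parsed from memory instead of re-reading a 1.1 GB file.
lines <- c("X-Direction Node", "1 2 3",
           "Y-Direction Node", "4 5 6")
is_header <- grepl("[A-Za-z]", lines)
sections  <- split(lines[!is_header], cumsum(is_header)[!is_header])
names(sections) <- lines[is_header]
values <- lapply(sections, function(s) scan(text = s, quiet = TRUE))
print(values[["X-Direction Node"]])  # 1 2 3
```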