It does depend on what you're doing, as well. If you write the program
correctly, you shouldn't need a lot of memory. For instance, something like
this (which I see a lot):
open( FH, "file.txt" );
@LINES = <FH>;      # slurps the whole file into memory
close( FH );
foreach ( @LINES ) {
    &do_something();
}
Could be written like this:
open( FH, "file.txt" );
while( <FH> ) {     # same as while( defined( $_ = <FH> ) )
    &do_something();
}
close( FH );
This reads one line at a time, so it uses far less memory under nearly any
OS, and you can still do sorts, comparisons, etc. if you structure the code
for it. If you need more speed, you really should be going through DBI
rather than flat files. I've used XBase files for small stuff; there's a DBI
driver for them (check DBD::XBase on CPAN). Even gdbm would be better. The
nice thing about XBase, though, is that you can open the files directly in
Excel or OpenOffice Calc.
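For example, a minimal DBD::XBase sketch might look like this (the directory
path, table name, and column names here are hypothetical; DBD::XBase treats
a directory of .dbf files as a database and each file as a table):

```perl
use DBI;

# Connect to a directory containing .dbf files (path is an example).
my $dbh = DBI->connect( "dbi:XBase:/data/dbf", undef, undef,
                        { RaiseError => 1 } )
    or die $DBI::errstr;

# "inventory" here would be inventory.dbf in that directory.
my $sth = $dbh->prepare( "SELECT name, qty FROM inventory" );
$sth->execute();
while ( my @row = $sth->fetchrow_array ) {
    print "@row\n";
}
$dbh->disconnect;
```

Since it goes through DBI, switching to a real database later is mostly a
matter of changing the connect string.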
> I use somewhat large files in perl (most under 1 megabyte but over
> 100K) and I've never had a memory problem (fingers crossed), but I am
> wondering about using some rotating buffer techniques for when these
> data files start to get larger. What is the proper method for taking
> a file of say 10,000 lines, and slurping up just lines 2,200 to 2,300
> ???
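For the range question, you don't need a rotating buffer for files this
size: the same line-by-line loop works, using Perl's $. variable (the
current input line number) to skip ahead and bail out early. A minimal
sketch, with do_something() as a placeholder:

```perl
open( FH, "file.txt" ) or die "Can't open file.txt: $!";
while( <FH> ) {
    next if $. < 2_200;   # skip lines before the range
    last if $. > 2_300;   # stop reading once past it
    do_something( $_ );
}
close( FH );
```

Only one line is in memory at a time, and the "last" means you never read
past line 2,300, so a 10,000-line file costs no more than a 2,300-line one.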