Just read/lines %filename will cause REBOL to crash and burn on large files,
as I found out the hard way (the way I usually do find out stuff<g>). I
wrote some beautiful scripts to monitor various logs on my server using
read/lines on files. I was very proud of meself, as they worked very
slickly. Then, one day, they broke, and it was because of memory problems.
But the technique below zips through 10-megabyte logs very nicely. ... I'm
putting together a script now to delete my logs once they get too huge<g>,
but am mining some great info in the meantime.
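
Something along these lines should do the trick (just a rough sketch, mind
you; the file name and ten-megabyte cutoff below are only placeholders):

      logfile: %access.log                ; placeholder name
      if (size? logfile) > 10'000'000 [   ; arbitrary ten-megabyte cutoff
          write logfile ""                ; truncate the log in place
      ]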

--Ralph Roberts

>
> I have been reading really large files using this setup;
>
> file: read/lines %filename
>
>       foreach line file [do stuff]
>
> This seems to work well; however, what are the limitations of such a
> read? Does the entire file get read into memory when issuing the
> read/lines command?
>
> Seems to work for me anyhow....
>
> Francois
>
>
>
> On Thu, 24 Feb 2000 [EMAIL PROTECTED] wrote:
>
> >
> > > I need to read a BIG text file line by line but I
> > > don't know exactly how to do it.
> > >
> > > As I understand it, the following will read the entire
> > > file into a list of lines, and that is not what I want.
> > >
> > > lines: read/lines %textfile
> > >
> > > I want to read one line, process it before reading
> > > the next line and so on.
> > >
> >
> > Hi Peter:
> >
> > It's relatively easy to act on files larger than memory, one line at a
> > time. I believe BO at REBOL came up with the technique originally. I've
> > adapted it and use it for manipulating large log files on my internet
> > servers.
> >
> > Here it is, enjoy:
> >
> >       hugefile: open/direct/read/lines %huge_file
> >
> >       while [(line: pick hugefile 1) <> none] [
> >           ; do stuff to each line
> >       ]
> >
> >       close hugefile
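> >
> > With /direct the port is unbuffered, so pick fetches one line at a time
> > instead of pulling the whole file into memory; pick returns none at the
> > end, which ends the loop. For example, to tally hits in a web log (the
> > file name and search string here are just for illustration):
> >
> >       hits: 0
> >       weblog: open/direct/read/lines %access.log
> >       while [(line: pick weblog 1) <> none] [
> >           if find line "GET /index.html" [hits: hits + 1]
> >       ]
> >       close weblog
> >       print ["Total hits:" hits]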
> >
> >
> > --Ralph Roberts
> >
>
