I need to split large (1 GB) log files for a client into more manageable chunks. The routine works fine at first, then (apparently) starts to slow down. I suspect that I have not implemented 'seek' correctly and that the file is being read from zero each time, but I would appreciate anyone's insights into optimizing this utility...

Given tFilePath, write out 1 MB files, sequentially numbered...
 put the hilite of btn "Binary" into isBinary
 if isBinary then open file tFilePath for binary read
 else open file tFilePath for text read
 set the numberFormat to "####" -- pad chunk numbers to four digits
 seek to 0 in file tFilePath -- start at the top of the file
 put 0 into n -- chunk counter (must be initialized before "add")
 repeat
   set the cursor to busy
   add 1 to n
   seek relative 0 in file tFilePath -- no-op: a plain read already continues from the current position
   read from file tFilePath for 1000000
   put (the result is "eof") into isEOF -- capture now; the URL put below resets "the result"
   if it is empty then exit repeat
   if isBinary then put it into URL ("binfile:" & tDir & "/" & n & ".txt")
   else put it into URL ("file:" & tDir & "/" & n & ".txt")
   if isEOF or the result is not empty then exit repeat -- stop at EOF or on a write error
 end repeat
 close file tFilePath
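
For reference, here is a minimal sketch of the same loop rewritten to read each chunk at an explicit byte offset instead of seeking, so the read position cannot drift between iterations. The handler name splitFileAtOffsets and the kChunkSize constant are my own inventions; tFilePath and tDir are as above, and it assumes binary mode throughout, since byte offsets are only meaningful for binary reads:

 on splitFileAtOffsets tFilePath, tDir
   constant kChunkSize = 1000000 -- 1 MB per chunk
   set the numberFormat to "####" -- four-digit chunk names
   open file tFilePath for binary read
   put 0 into n
   repeat
     set the cursor to busy
     -- read the next chunk at an explicit offset (byte positions count from 1)
     read from file tFilePath at (n * kChunkSize) + 1 for kChunkSize
     put (the result is "eof") into isEOF -- capture before the URL put resets it
     if it is empty then exit repeat
     add 1 to n
     put it into URL ("binfile:" & tDir & "/" & n & ".txt")
     if isEOF then exit repeat
   end repeat
   close file tFilePath
 end splitFileAtOffsets

Since each read names its own offset, the two seek lines in the original can simply be dropped; if the slowdown persists with this version, the cause is somewhere other than the file position.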

Many thanks in advance.

/H