I think I would have script locals to hold the names of source and destination files, and then simply write to the destination file each time new data is read, closing it only when finished:

local sSourceFile
local sDestFile

on run
  open file sDestFile for update
  nextUpdate
end run

on nextUpdate
  put char 5 to 8 of URL ("file:" & sSourceFile) into tData -- or whatever range you need
  write tData to file sDestFile
  if (your exit condition) then
    endProcess
  else
    send "nextUpdate" to me in 300 milliseconds
  end if
end nextUpdate

on endProcess
  close file sDestFile
end endProcess


You get the idea...
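
On the question of reading only the specific bytes: instead of pulling the whole source file through the URL keyword each time, the read command can fetch a byte range directly, which saves re-reading a large file on every pass. Something along these lines (an untested sketch; the positions are just examples, and "read" leaves its result in "it"):

on nextUpdate
  open file sSourceFile for binary read
  read from file sSourceFile at 5 for 4 -- 4 bytes starting at byte 5
  close file sSourceFile
  write it to file sDestFile
  -- then test your exit condition and resend, as above
end nextUpdate

For small files the difference is unlikely to matter; for a large, growing source file it should.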


Best,

Mark

On 9 Mar 2007, at 21:22, David Glasgow wrote:

I plan to time it all with a 'send doit to me in tmills milliseconds' but I go a bit vague beyond that. Any suggestions about how to do the read-append as efficiently as possible?

Should I try to read the specific bytes I want, or read and append the lot, then pick out the parts I want later?

It may be a dumb question, but if the source file is in the same directory as the growing destination file, is it quicker than if the destination file is 'way over there', buried six deep in a different directory? Does an append get slower the larger the destination file becomes?

Finally, what is the most efficient way of making the data read conditional? I had thought about putting a 'repeat while' somewhere, but I am not sure where.

_______________________________________________
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution
