Recently, Todd Geist wrote:

> I had the need to search large binary files for a string. The files
> could be over a gigabyte in size, so I decided not to load the whole
> file into RAM but digest it in chunks instead. This is the routine I
> came up with; it seems to work very quickly, but I am wondering if
> some of you might be able to speed it up.
Not sure about the efficiency of your code, but I would also suggest using larger chunks. On a couple of occasions I've had to search a corrupted mail file over 3 GB in size for lost messages, and I've found Rev can easily (and quickly) handle 750,000-character chunks. Probably more; I just haven't done any testing.

I just checked using an older version of the MetaCard IDE, and it was able to read and search a 750,000-character chunk in 49 milliseconds. In my situation the search string is found in the 58th chunk (58 * 750,000 characters), which completes in just over 4 seconds. Not too shabby.

Regards,

Scott Rossi
Creative Director
Tactile Media, Multimedia & Design
-----
E: [EMAIL PROTECTED]
W: http://www.tactilemedia.com

_______________________________________________
use-revolution mailing list
use-revolution@lists.runrev.com
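[Editor's note: Todd's original Rev routine is not shown in this thread. As an illustration of the general chunked-search approach being discussed — read a fixed-size chunk, search it, repeat — here is a minimal sketch in Python rather than Revolution's scripting language. The function name, the default chunk size of 750,000 bytes (borrowed from Scott's figure), and the overlap handling are all illustrative assumptions; the overlap is needed so a match that straddles two chunks is not missed.]

```python
def find_in_file(path, needle, chunk_size=750_000):
    """Return the byte offset of the first occurrence of `needle` in the
    file at `path`, or -1 if absent. Reads fixed-size chunks so the whole
    file never has to fit in RAM (illustrative sketch, not Todd's routine)."""
    overlap = len(needle) - 1  # bytes carried over so boundary-spanning matches are found
    offset = 0                 # file offset where `buf` begins
    buf = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:          # end of file, no match
                return -1
            buf += chunk
            pos = buf.find(needle)
            if pos != -1:
                return offset + pos
            # keep only the last len(needle)-1 bytes for the next round
            if overlap:
                offset += len(buf) - overlap
                buf = buf[-overlap:]
            else:
                offset += len(buf)
                buf = b""
```

The carry-over of `len(needle) - 1` bytes is the one subtlety a naive chunked loop gets wrong: without it, a search string split across a chunk boundary is silently missed.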