>>>>> dean gaudet <[EMAIL PROTECTED]>
>>>>> wrote the following on Wed, 21 Dec 2005 19:15:42 -0800 (PST)
> 
> It looks like you're keeping data about every file in memory. I hacked 
> around that in mine by trimming the base component of every path before 
> adding it into the hash tables, so there are only as many hash keys as 
> there are directories, which is a bit more manageable.
> 
> If you want a more general solution, you might consider invoking external 
> sort(1). sort(1) is geared towards sorting inputs that could be as large 
> as /tmp or $TMPDIR allows without consuming all of memory, but it can be 
> slower on smallish inputs because you end up parsing strings multiple 
> times.
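
[Editor's note: a minimal Python sketch of the directory-trimming trick
dean describes above. The size<TAB>path input format is an assumption;
the thread doesn't show what the real script actually reads.]

    import os
    import sys
    from collections import defaultdict

    # Sum a size per directory rather than per file: strip the base
    # component of each path so the hash has one key per directory.
    # Assumed input: one "size<TAB>path" record per line on stdin.
    totals = defaultdict(int)
    for line in sys.stdin:
        size, path = line.rstrip("\n").split("\t", 1)
        totals[os.path.dirname(path)] += int(size)

    for directory in sorted(totals):
        print(f"{totals[directory]}\t{directory}")

Memory use is then proportional to the number of directories, not the
number of files, at the cost of no longer reporting individual files.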

OK, I used a more complicated algorithm, so it runs in constant space
while still being able to report individual files and not just
directories. I profiled it, but even after a few optimizations it's
still about as slow as the original Perl script.
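
[Editor's note: Ben doesn't show his algorithm here. Purely for
illustration, the external-sort route dean suggests can also report
individual files in constant space: pipe the records through sort(1),
which spills to $TMPDIR instead of filling memory, then stream over the
sorted output, where equal paths arrive adjacent and can be merged with
O(1) state. A sketch under the same assumed size<TAB>path input format,
not necessarily what either script does:]

    import subprocess
    import sys

    # Sort records by path externally, then merge adjacent duplicates
    # in a single streaming pass, emitting one line per file.
    sort = subprocess.Popen(
        ["sort", "-t", "\t", "-k", "2"],  # sort by the path field
        stdin=sys.stdin, stdout=subprocess.PIPE, text=True,
    )

    current, total = None, 0
    for line in sort.stdout:
        size, path = line.rstrip("\n").split("\t", 1)
        if path != current:
            if current is not None:
                print(f"{total}\t{current}")  # one record per file
            current, total = path, 0
        total += int(size)
    if current is not None:
        print(f"{total}\t{current}")
    sort.wait()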


-- 
Ben Escoto

