> Hi, not sure what you mean by memory limited

Well, by my calculations (roughly 500 kB on average per 10,000 files),
rdiff-backup would need about 64 MB for 1.3 million files, and about
1 GB for 20 million files. That could be a problem. I haven't actually
tested with that many files yet.
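That estimate is just linear extrapolation from the measured figure; a
throwaway Python sketch of the arithmetic (the 500 kB per 10,000 files
number is the average from my tests, not something rdiff-backup guarantees):

```python
# Back-of-envelope extrapolation: assume memory use grows linearly
# at the measured rate of ~500 kB per 10,000 files.
KB_PER_10K_FILES = 500  # measured average from my tests

def estimated_memory_kb(num_files):
    """Estimate rdiff-backup's memory use in kB, assuming linear growth."""
    return num_files / 10_000 * KB_PER_10K_FILES

print(estimated_memory_kb(1_300_000) / 1024)        # roughly 64 MB
print(estimated_memory_kb(20_000_000) / 1024 ** 2)  # roughly 1 GB
```

Of course the real growth may not stay linear past the file counts I tested.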

Also, my tests were for all the files in a single directory. It's
possible that rdiff-backup uses less memory when the files are spread
over a directory structure.

> , but I definitely back up lots
> of repos with rdiff-backup and have never noticed any problems with memory
> usage.

What is the largest number of files you back up at once with
rdiff-backup? Could you check how much RAM rdiff-backup uses for that
backup?
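If it helps, one way to check (on Linux, assuming /proc is available) is to
read the VmRSS line from the process's /proc status file while the backup is
running; a rough Python sketch:

```python
# Read the resident set size (VmRSS, in kB) of a process from /proc.
# Linux-only; the pid of the running rdiff-backup would go here.
import os
import re

def rss_kb(pid):
    """Return the resident set size of `pid` in kilobytes, or None."""
    with open(f"/proc/{pid}/status") as f:
        match = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
    return int(match.group(1)) if match else None

# Example: this process's own memory use.
print(rss_kb(os.getpid()), "kB")
```

Polling that a few times during a large backup would show whether the RSS
keeps climbing with the file count.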

> However transferring the same files with rsync takes much more RAM,
> as rsync loads the whole file list in memory, rdiff-backup doesn't seem to.

Newer versions of rsync (3.0.0 and later) use an incremental file list,
so they don't need to keep the entire file list in memory.

> Perhaps there is some ceiling you are not reaching in terms of counting
> files before more ram consumption.
> Just my 2c.

In my tests I went up to around 60,000 files, and the memory used was
still increasing with each additional set of 10,000. So there's no
reason to think it wouldn't keep growing.

David.


_______________________________________________
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki