Yup - already working on that :-D The prev.count() check does indeed
take a lot of time, since it appears to scan the whole list of previous
files for every file. Attached is a patch that turns this into a hash
lookup: the prev list is copied into a hash (prevHash) after it has
been completely filled, and prevHash.has_key() is then used for the
lookup. This avoids meddling with the creation of the prev list itself
:-) but I guess the prev list (which is no longer used once prevHash
has been filled) still occupies memory afterwards - that might be
another thing to look into. Anyway, creating the prev list itself
seems to be pretty quick (it only takes a few seconds).
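
For illustration, here is a minimal Python sketch of the idea (the
names find_changed and current_files are made up for this example;
prev and prevHash are the names the patch uses - this is not the
actual sbackupd code):

    def find_changed(current_files, prev):
        # Old approach: prev.count(name) scans the whole prev list for
        # every file, so the check is O(n*m) overall.
        # changed = [f for f in current_files if prev.count(f) == 0]

        # New approach: copy prev into a dict once it is completely
        # filled; dict membership tests are O(1) on average.
        prevHash = {}
        for name in prev:
            prevHash[name] = 1
        # The patch uses prevHash.has_key(name), which on Python 2 is
        # equivalent to "name in prevHash".
        return [name for name in current_files if name not in prevHash]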

The patched sbackupd has just finished five incremental backups, with
some changes to the test directory structure in between. Times range
from 20 seconds for an incremental backup of an unchanged directory
tree to 8 minutes for an incremental backup after changing all file
attributes in the tree with `chmod -R u-w sbackup-test/root/` ... I
guess that's fast enough for me at the moment (I will try this on the
real system in the next few days).


** Attachment added: "Patch to use hash instead of list for matching against 
previous files"
   http://librarian.launchpad.net/7578840/sbd-prev-as-hash.diff

-- 
sbackup is very slow when working on many files
https://bugs.launchpad.net/bugs/102577
