Attached is a logfile of sorts from backing up the 280000 files (it's
the output of "ls -l" on the target directory, taken once a minute). It
shows that creating the flist file took around 70 minutes, which is way
faster than the original case; but this run was done on a much newer
machine (K7 2600+, nforce2, 1 GB RAM). The actual tar'ring only took a
few minutes, it seems. So I guess it still spends too much time on the
flist creation. One could also plot the flist file's growth over time
(and would probably see the growth slowing down - that's what I saw on
the original machine), for example with something like the sketch below.
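
Here is a minimal sketch for pulling the flist size out of such a watch
log. It assumes (this is my guess about the format, not taken from the
attachment) that each minute's listing starts with ls -l's "total ..."
header line and that the flist file is literally named "flist":

    #!/usr/bin/env python
    # Sketch: extract the flist file's size from a periodic "ls -l" log.
    # Assumptions: each snapshot begins with ls -l's "total ..." line,
    # and the flist entry's file name is exactly "flist".
    import sys

    snapshot = 0
    for line in open(sys.argv[1]):
        parts = line.split()
        if not parts:
            continue
        # A new snapshot starts at every "total ..." header from ls -l.
        if parts[0] == "total":
            snapshot += 1
            continue
        # Regular "ls -l" entry: perms, links, owner, group, size, ..., name
        if len(parts) >= 9 and parts[-1] == "flist":
            print("minute %d: flist is %s bytes" % (snapshot, parts[4]))

Feeding the uncompressed sbackup-watch-1.txt into that should give one
size per minute, which is enough to see whether the growth levels off.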

Regarding the idea to create the flist from the tgz: I thought the flist
is also created to see which files actually changed and therefore need
to be included in the incremental copy... Would that still work when
only tar is used?
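
For what it's worth, here is how I imagine the incremental selection
could work without a separate flist pass, by comparing each file's mtime
against the time of the previous backup. This is only a sketch of the
idea; the function name and logic are my own and I don't know whether it
matches what sbackup actually does:

    #!/usr/bin/env python
    # Sketch (my assumption, not sbackup's actual code): pick files for an
    # incremental backup by comparing mtimes against the timestamp of the
    # previous backup, instead of building a full flist first.
    import os, time

    def changed_since(top, last_backup_time):
        """Yield paths under 'top' modified after the previous backup."""
        for dirpath, dirnames, filenames in os.walk(top):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.lstat(path).st_mtime > last_backup_time:
                        yield path
                except OSError:
                    pass  # file vanished while walking

    # Example: everything under /home changed in the last 24 hours
    for p in changed_since("/home", time.time() - 86400):
        print(p)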

I guess the basic algorithm in sbackup is good, but there are probably a
few operations in there that get very expensive when dealing with this
many files.

** Attachment added: "sbackup-watch-1.txt.gz"
   http://librarian.launchpad.net/7408359/sbackup-watch-1.txt.gz

-- 
sbackup is very slow when working on many files
https://bugs.launchpad.net/bugs/102577

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Reply via email to