Public bug reported:

Binary package hint: sbackup

When backing up a large directory structure (in my case ~280,000 files, 
probably many directories, ~10 GB of data in total), sbackup gets slower 
and slower while creating the list of files (i.e. while writing the fprops 
file). You can watch the fprops file grow by a few kB every few seconds.
Once the fprops file is complete, the actual tar'ing and gzip'ing shows no 
unusual performance (most CPU time goes to gzip, and some to I/O wait, as 
expected).

Details:
Creating the fprops file took 280 minutes in the end; for ~280,000 files 
that works out to 280,000 / (280 * 60 s), i.e. about 17 files per second 
on average. At the beginning of the backup I had measured about 60 files 
per second. During fprops creation the HDD LED rarely lit up, and top 
showed all CPU load going to the sbackupd process, so the slowdown is 
apparently not disk-bound. Also, when testing a backup with just the 
default config (i.e. backing up the system files), the first run took 
maybe a minute and an incremental backup around 15 seconds. The machine is 
a 400 MHz Pentium 1 with 128 MB RAM, an 8 GB system disk and a 40 GB data 
disk.

This happened with sbackup from Dapper (0.9-1, IIRC) and also with the
newest release (0.10.3-0.1).

Maybe there is an O(n^2) operation in the code? I've seen a line like "if
not parent in dirs_in ...", which appears to search the list (map?) of
known directories once for every file handled, so this check would get
slower as more entries are added. Is there a way to profile this?
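
To illustrate what I mean, here is a minimal sketch of the suspected 
pattern in Python (sbackup's language). Only "dirs_in" is taken from the 
line I quoted; the function names and loop structure are my own guess, not 
sbackup's actual code:

    import os

    def collect_parents_list(root):
        # Suspected pattern: "parent not in dirs_in" scans the whole
        # list once per file, so total work grows roughly as O(n^2).
        dirs_in = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                parent = dirpath
                if parent not in dirs_in:
                    dirs_in.append(parent)
        return dirs_in

    def collect_parents_set(root):
        # Same logic with a set: membership tests are O(1) on average,
        # so total work stays roughly linear in the number of files.
        dirs_in = set()
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                parent = dirpath
                if parent not in dirs_in:
                    dirs_in.add(parent)
        return dirs_in

As for profiling, something like the following should show where the time 
goes (assuming a Python new enough to ship cProfile; the older "profile" 
module has the same interface). The profiled call here is just my sketch 
above - you would point it at sbackupd's real listing routine instead:

    import cProfile
    import pstats

    cProfile.run("collect_parents_list('/data')", "fprops.prof")
    pstats.Stats("fprops.prof").sort_stats("cumulative").print_stats(20)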

** Affects: sbackup (Ubuntu)
     Importance: Undecided
         Status: Unconfirmed

-- 
sbackup is very slow when working on many files
https://bugs.launchpad.net/bugs/102577