On 10/24/07, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> Well, what surprises me is, that I can't hear it seeking...

Try running `iostat 3` or similar during a backup. Typical 7200 rpm IDE
disks can't do more than about 100-150 IOPS.
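
A rough sketch of what to run and what to look for (the 3-second
interval is arbitrary, adjust to taste):

    # print per-disk activity every 3 seconds while a backup is running
    iostat 3

In the output, "tps" is I/O transfers per second, and Blk_read/s /
Blk_wrtn/s are throughput in 512-byte blocks. If tps on the backuppc
disk is pinned around 100-150 while the block throughput stays low, the
disk is seek-bound rather than bandwidth-bound.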

> /dev/hda5 94% /mnt/data <--xfs, not used by backuppc
> /dev/hdb1 99% /mnt/data1 <--reiserfs, backuppc

Eek! Keeping your disks over 90% full is a bad idea. I really suspect
that your reiserfs partition is very heavily fragmented, and I have to
wonder whether the directory structures have become fragmented as well.
At least with XFS you can defragment it online.
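
For example, using the XFS device and mount point from your listing
above (xfs_fsr ships with the XFS userspace tools):

    # report the current fragmentation factor (read-only)
    xfs_db -r -c frag /dev/hda5
    # defragment the mounted filesystem online
    xfs_fsr /mnt/data

reiserfs has no equivalent online defragmenter, which is the annoying
part here.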

> > You should also check that each disk is running full speed by
> > running hdparm -tT /dev/hdX. You should be seeing at least
> > 30MB/s, probably
> > 40-50MB+.
>
> Well, it's just doing a backup and an emerge (xfs_fsr ;-)
>
> /dev/hda:
>  Timing cached reads:   154 MB in  2.02 seconds =  76.18 MB/sec
>  Timing buffered disk reads:   66 MB in  3.04 seconds =  21.72 MB/sec
>
> /dev/hdb:
>  Timing cached reads:   144 MB in  2.00 seconds =  71.90 MB/sec
>  Timing buffered disk reads:   60 MB in  3.00 seconds =  19.97 MB/sec
>
> Were you referring to the buffered or unbuffered speed?

Buffered speeds. Those numbers are low if the system was idle: you
should be seeing 30MB/s or more for those disks, probably at least
40MB/s. If the system wasn't idle, please rerun the test when it is.

Anyway, seeing the iostat data will let us know if the disks are maxed
out or not.
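
If your sysstat is new enough, the extended view is even more telling:

    # extended per-device statistics, 3-second interval
    iostat -x 3

A %util close to 100% (and a high await) on hdb during a backup would
confirm the disk itself is the bottleneck.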

-Dave
