Running a zpool scrub on our production pool is showing a scrub rate
of about 400KB/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
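
For reference, the scrub rate above is just what zpool status reports
on its scan line; roughly the following, with the pool name as a
stand-in for ours:

    # show scrub progress and the reported scan rate for the pool
    zpool status tank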

Both zpool iostat and iostat -xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
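
In case the exact invocations matter, this is roughly what we're
watching (5-second samples; asvc_t and %b are what I mean by service
times and busy percentages; pool name again a stand-in):

    # per-device extended statistics every 5 seconds
    iostat -xn 5
    # per-vdev breakdown of pool I/O every 5 seconds
    zpool iostat -v tank 5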

Load on the box is 0.59.

8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs, on OI 147.

Known hardware errors:
- 1 of 8 SAS lanes is down, though we've seen the same poor
performance when using the backup where all 8 lanes work.
- Target 44 occasionally throws an error (less than once a week). When
this happens the pool becomes unresponsive for a second, then
continues working normally.
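
For completeness, here is roughly how the error counts on those
targets can be checked (standard OI tools, nothing pool-specific):

    # cumulative soft/hard/transport error counts per device
    iostat -En
    # fault management error events logged on the box
    fmdump -e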

Read performance when we read back through the file system (cache
included, using dd with a 1 MB block size) is 1.6 GB/s. zpool iostat
shows numerous 500 MB/s reads while that test is running.
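
The read test itself is nothing fancy; roughly the following, with the
file path and count as placeholders:

    # sequential read through the file system, 1 MB blocks, output discarded
    dd if=/tank/testfile of=/dev/null bs=1048576 count=10000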

I'm willing to consider that hardware could be the culprit here, but I
would expect to see signs of it if that were the case. The lack of any
slow service times and the lack of any real disk I/O effort both seem
to point elsewhere.

I will provide any additional information people might find helpful
and will, if possible, test any suggestions.

Thanks in advance,
-Don