To follow up on this issue, at one point the stats were down to this:
extended device statistics
device     r/s   w/s    kr/s    kw/s  qlen  svc_t  %b
da0        0.0   0.0     0.0     0.0     0    0.0   0
da1        0.0   0.0     0.0     0.0     0    0.0   0
da2      127.9   0.0 […]
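
(For reference: output in this layout is what FreeBSD's iostat prints in extended mode. The exact invocation isn't given in the thread, but something along these lines produces it:

  # print one extended-statistics sample per second
  iostat -x 1

gstat gives a similar live per-device view of GEOM activity.)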
On 8. aug. 2013, at 00:08, Frank Leonhardt <fra...@fjl.co.uk> wrote:
> As a suggestion, what happens if you read from the drives directly? Boot into
> single-user mode and try reading a GB or two using /bin/dd. It might eliminate
> or confirm a problem with ZFS.

If not too inconvenient, it'd be very […]
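
A minimal form of that raw-read test, for anyone following along (the device name and size here are assumptions, not commands from the thread):

  # read 2 GiB straight off one disk, bypassing ZFS entirely
  dd if=/dev/da2 of=/dev/null bs=1m count=2048

If raw reads are fast on every drive, the problem is more likely in ZFS or above; if they are slow too, suspect the controller, cabling, or driver.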
Maybe one of your drives is bad, so it's constantly doing error correction?
On Tue, Aug 6, 2013 at 9:48 PM, J David <j.david.li...@gmail.com> wrote:
> We have a machine running 9.2-RC1 that's getting terrible disk I/O
> performance. Its performance has always been pretty bad, but it
> didn't really […]
On Wed, Aug 7, 2013 at 3:15 PM, James Gosnell <jamesgosn...@gmail.com> wrote:
> Maybe one of your drives is bad, so it's constantly doing error correction?

Not according to SMART; all the drives report no problems. Also, all
the drives seem to perform in lock-step for both reading and writing.
E.g. […]
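
The SMART check mentioned here is typically done with smartmontools; a sketch, assuming the port is installed and using da2 as a stand-in for each drive:

  # dump SMART health, attributes, and error logs for one drive
  smartctl -a /dev/da2

A drive quietly retrying reads will often show it in the attribute and error-log sections even while the overall health assessment still reports PASSED.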
On 07/08/2013 21:36, J David wrote:
> It feels like some sort of issue with the
> bus/controller/kernel/driver/ZFS that is affecting all the drives
> equally.
>
> Also, even "ls" takes forever (10-30 seconds for "ls -lh /"), but when
> it eventually does finish, "time ls -lh /" reports:
>
>   0.02 real […]
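
(A side note on that measurement, as a generic sketch rather than a command from the thread: FreeBSD's standalone time(1) can wrap both the slow first run and a repeat, so the two cases can be compared directly:

  /usr/bin/time -l ls -lh /   # first run, cold
  /usr/bin/time -l ls -lh /   # repeat, likely served from cache

The -l flag adds rusage details such as block I/O counts.)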
On Tue, Aug 6, 2013 at 9:48 PM, J David <j.david.li...@gmail.com> wrote:

We have a machine running 9.2-RC1 that's getting terrible disk I/O
performance. Its performance has always been pretty bad, but it
didn't really become clear how bad until we did a zpool replace on one
of the drives and realized it was going to take 3 weeks to rebuild a
1TB drive.

The hardware […]
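
For a rebuild like that, resilver progress and the estimate behind a figure like "3 weeks" can be watched with zpool status (the pool name "tank" is a placeholder):

  zpool status tank

During a resilver, the "scan:" line reports how much has been copied, the current rate, and an estimated time to completion.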