Scott Lawson writes:
> Also you may wish to look at the output of 'iostat -xnce 1' as well.
>
> You can post those to the list if you have a specific problem.
>
> You want to be looking for error counts increasing and specifically 'asvc_t'
> for the service times on the disks. A higher number for asvc_t may help to
> isolate poorly performing disks.
Running "iostat -nxce 1", I saw write sizes alternate between two raidz groups
in the same pool.
At one time, drives on controller 1 have writes 3-10 times larger than the ones on
controller 2:
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
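For a rough sanity check of that imbalance, something along these lines can total
write throughput per controller (only a sketch: it assumes the default -xnce column
layout with kw/s in column 4, cNtNdN device names, and it simply sums whatever
samples iostat prints):

  # Sum kw/s (column 4) per controller, grouped by the cN prefix of the device name.
  iostat -xnce 5 2 | awk '$NF ~ /^c[0-9]+t/ { sub(/t.*/, "", $NF); kw[$NF] += $4 }
      END { for (c in kw) print c, kw[c], "kw/s (summed)" }'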
Maybe you can run a DTrace probe using Chime?
http://blogs.sun.com/observatory/entry/chime
Initial Traces -> Device IO
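If you'd rather stay on the command line than use the Chime GUI, a plain io-provider
one-liner (just a sketch of the same idea) shows the distribution of I/O sizes per
device:

  # Aggregate I/O sizes per device name; Ctrl-C prints the histograms.
  dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'

A device whose writes are consistently much larger or smaller than its peers in the
same raidz group should stand out in the histograms.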
--
This message posted from opensolaris.org
Also you may wish to look at the output of 'iostat -xnce 1' as well.
You can post those to the list if you have a specific problem.
You want to be looking for error counts increasing and specifically 'asvc_t'
for the service times on the disks. A higher number for asvc_t may help to
isolate poorly performing disks.
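If watching the full output scroll by is tedious, an awk filter along these lines
can flag slow disks as they appear (a sketch only: it assumes asvc_t is column 8
and the device name is the last column of the -xnce layout, and the 30 ms threshold
is arbitrary):

  iostat -xnce 1 | awk 'NF > 10 && $1 ~ /^[0-9.]+$/ && $8 > 30 { print $NF, "asvc_t =", $8 }'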
You can try:
zpool iostat -v pool_name 1
This will show you IO on each vdev at one second intervals. Perhaps you will
see different IO behavior on any suspect drive.
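If the imbalance is intermittent, it can also help to capture a longer run to a file
and compare the vdevs afterwards, for example (the pool name 'tank' and the output
path are placeholders):

  # 600 one-second samples of per-vdev statistics, saved for later comparison.
  zpool iostat -v tank 1 600 > /var/tmp/zpool_iostat.log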
-Scott
--
This message posted from opensolaris.org
Hi,
I'd appreciate it if anyone can point me to how to identify poorly performing
disks that might have dragged down the performance of the pool. Also, the system
logged the following error about one of the drives. Does it show that the disk was
having a problem?
Aug 17 13:45:56 zfs1.domain.com scsi: [ID 107833 kern.