I would expect iostat output like this from a device which can handle
only a single queued I/O (eg. an IDE driver) where that I/O is
stuck.  There are 3 more I/Os in the wait queue waiting for the
active I/O to complete.  %w and %b measure the percent of time
during which an I/O was waiting in the queue or active at the device,
respectively.  svc_t is 0 because it is averaged over completed I/Os,
and the stuck I/O never finishes.
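That arithmetic can be sketched as a toy model (this is only an
illustration of how those columns could be derived over one sampling
interval, not the actual sd/kstat code; the function and field names
are made up):

```python
# Toy model of how iostat's columns follow from one sampling interval
# in which a single I/O is stuck at the device and three more are queued.
# Hypothetical names; mirrors the sd17 rows above, not real kstat code.

def iostat_row(interval_s, active_time_s, queued_time_s,
               n_active, n_queued, completions, total_service_ms):
    actv = n_active                              # avg I/Os active at the device
    wait = n_queued                              # avg I/Os in the wait queue
    pct_b = 100.0 * active_time_s / interval_s   # % of time device was busy
    pct_w = 100.0 * queued_time_s / interval_s   # % of time queue was non-empty
    # svc_t averages over *completed* transactions; a stuck I/O never
    # completes, so with zero completions the average is reported as 0.
    svc_t = total_service_ms / completions if completions else 0.0
    return dict(wait=wait, actv=actv, svc_t=svc_t, pct_w=pct_w, pct_b=pct_b)

# One I/O stuck for the whole 1-second interval, three queued behind it:
row = iostat_row(interval_s=1.0, active_time_s=1.0, queued_time_s=1.0,
                 n_active=1, n_queued=3, completions=0, total_service_ms=0.0)
print(row)  # → {'wait': 3, 'actv': 1, 'svc_t': 0.0, 'pct_w': 100.0, 'pct_b': 100.0}
```

Note how wait=3, actv=1, svc_t=0, %w=100, %b=100 matches the sd17
lines exactly: everything is pinned at 100% yet nothing completes.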

By default, most drivers will retry I/Os which don't seem to
finish, but the retry interval is often on the order of 60 seconds.
If a retry succeeds, then no message is logged to syslog, so you
might not see any messages.  But just to be sure, what do
fmdump (and fmdump -e) say about the system?  Are any messages
logged in /var/adm/messages?
 -- richard

Joe Little wrote:
> I was playing with a Gigabyte i-RAM card and found out it works great
> to improve overall performance when there are a lot of writes of small
> files over NFS to such a ZFS pool.
>
> However, I noted a frequent situation in periods of long writes over
> NFS of small files. Here's a snippet of iostat during that period.
> sd15/sd16 are two iscsi targets, and sd17 is the iRAM card (2GB)
>
>                  extended device statistics
> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> sd15      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> sd16      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> sd17      0.0    0.0    0.0    0.0  3.0  1.0    0.0 100 100
>                  extended device statistics
> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> sd15      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> sd16      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> sd17      0.0    0.0    0.0    0.0  3.0  1.0    0.0 100 100
> [same three rows repeated identically for seven more intervals]
>
> During this time no operations can occur. I've attached the iRAM disk
> via a 3124 card. I've never seen an svc_t of 0 combined with a disk
> that is 100% waiting and busy. Any clue what this might mean?
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   
