G'Day Victor,

On Thu, Jun 26, 2008 at 11:56:26AM -0700, victor wrote:
> Thank you for the suggestion; here are the results. I had to scale it down to
> a 34 MB file, as 128 MB took too long to write out. Compression is off.
> 
> [EMAIL PROTECTED]:~# ptime dd if=/dev/urandom of=foo bs=128k count=256
> 256+0 records in
> 256+0 records out
> 33554432 bytes (34 MB) copied, 29.7683 s, 1.1 MB/s
> 
> real       29.770
> user        0.000
> sys         1.335

Ok, that is indeed slow!

> [EMAIL PROTECTED]:~# iostat -xne 5
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     4.8    4.0  250.5  102.7  0.3  0.1   39.6    8.5   4   3   0   0   0   0 c5t0d0
>     0.6    6.8   37.3  280.7  8.9  2.4 1217.4  327.3  51  24   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     0.8   14.6    0.3 1020.9  3.1  0.2  204.1   11.8  18  18   0   0   0   0 c5t0d0
>     0.8   12.4    3.4 1002.3  8.8  0.4  666.2   31.7  80  42   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     0.0   20.8    0.0 1139.2  2.1  0.1   99.2    6.4  13  13   0   0   0   0 c5t0d0
>     0.8   18.8    3.4  664.0  4.8  0.2  242.8   12.3  53  24   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     1.6   11.2    0.6   50.0  0.3  0.1   20.2    3.9   5   5   0   0   0   0 c5t0d0
>     1.4   15.8    5.1  543.8  5.4  0.3  313.5   15.8  57  27   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     0.0   21.0    0.0 1104.7  5.3  0.3  253.0   13.6  31  28   0   0   0   0 c5t0d0
>     0.4   21.4    3.3 1104.7  7.6  0.4  350.2   16.2  87  35   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     1.6   26.0    0.6 1102.3  0.0  3.1    0.0  111.7   0  12   0   0   0   0 c5t0d0
>     1.6   25.6    6.8 1102.3  0.0  4.3    0.0  157.3   0  19   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0
>                             extended device statistics       ---- errors ---
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>     1.0   22.8   25.9 1102.3  0.1  4.0    2.2  169.1   5  17   0   0   0   0 c5t0d0
>     0.2   22.2   12.8 1102.3  0.0  7.8    0.0  346.7   0  45   0   0   0   0 c5t1d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0   0  10   0  10 c4t1d0

Hmm - assuming the first output is the summary since boot, that shows 6.8 w/s
for c5t1d0 at an asvc_t of 327.3 ms.  A 7200 RPM SATA drive should be faster
than that, although it's a bit hard to know for sure with only 6.8 w/s.
asvc_t measures the time for the disk subsystem to return the I/O, so this
looks like a disk or disk driver issue, not ZFS.  To check further, I'd test
that disk with something simple:

        window1# dd if=/dev/rdsk/c5t1d0 of=/dev/null bs=48k

        window2# iostat -xne 5

which performs a raw read test with about the same I/O size as your writes.
Performing a raw write test would be more interesting, but you'd need to
destroy the pool first - since it would overwrite the disk.
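
If you want to see the per-I/O latency at the driver level while the dd read
is running, a DTrace one-liner on the io provider can show it as a
distribution - roughly like this (just a sketch, not tested on your box;
Ctrl-C prints the results, in microseconds, per device):

        window3# dtrace -n 'io:::start { ts[arg0] = timestamp; }  /* start time per buf */
            io:::done /ts[arg0]/ {
                /* latency distribution in microseconds, keyed by device name */
                @[args[1]->dev_statname] = quantize((timestamp - ts[arg0]) / 1000);
                ts[arg0] = 0;
            }'

That measures the same interval asvc_t summarises, but as a distribution, so
a handful of very slow I/Os (retried bad sectors, say) would stand out from a
disk that is uniformly slow.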

I'd also check /var/adm/messages to see if anything else odd is going on.
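
For example (a quick sketch - the driver may log the disk under its sd
instance name rather than c5t1d0, so a broad grep is safer):

        # iostat -En
        # egrep -i 'warning|error|retry' /var/adm/messages

iostat -En dumps the per-device soft/hard/transport error counters and the
vendor/product details; it's also worth a glance at c4t1d0, which is showing
10 hard errors in your iostat output despite doing no I/O.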

Brendan

-- 
Brendan
[CA, USA]
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
