Danny Carroll wrote:

>  - I have seen sustained ~130MB/s reads from ZFS:
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> bigarray    1.29T  3.25T  1.10K      0   140M      0
> bigarray    1.29T  3.25T  1.00K      0   128M      0
> bigarray    1.29T  3.25T    945      0   118M      0
> bigarray    1.29T  3.25T  1.05K      0   135M      0
> bigarray    1.29T  3.25T  1.01K      0   129M      0
> bigarray    1.29T  3.25T    994      0   124M      0
> 
>            ad4              ad6              ad8             cpu
> KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
> 0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
> 0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
> 16.00  0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
> 16.00  2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70

> I'm curious if the ~130MB/s figure shown above is bandwidth from the array
> or a total of all the drives.  In other words, does it include reading
> the parity information?  I think it does not, since if I add up the iostat
> figures for all of the drives, the total is greater than what zfs reports
> by a factor of 5/4 (100MB/s in zpool iostat vs. 5 x 25MB/s per drive in
> standard iostat).

The numbers make sense: with 5 drives in RAID-Z, one drive's worth of each
stripe is parity, so the "real" (data) bandwidth is 4/5ths of the total disk
bandwidth. 5 x 25 MB/s = 125 MB/s at the disks corresponds to ~100 MB/s of
data, which is exactly the 5/4 ratio you observed. On the other hand, 25 MB/s
per drive is very slow for modern drives (assuming you're doing sequential
read/write tests). Are you having hardware problems?
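
If you want to rule out a slow or failing disk, a quick (and admittedly rough)
sanity check is a raw sequential read of each drive with dd while the pool is
otherwise idle; the device names below are just the ones from your iostat
output, so adjust them to match your system:

  # raw sequential read test, one drive at a time
  dd if=/dev/ad6 of=/dev/null bs=1m count=2000
  dd if=/dev/ad8 of=/dev/null bs=1m count=2000

A healthy modern SATA drive should manage well over 50 MB/s on a raw
sequential read, so anything close to 25 MB/s here would point at the
hardware (drive, cabling, or controller) rather than at ZFS.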

> Lastly, the Windows client which performed these tests was measuring
> local bandwidth at about 30-50MB/s.  I believe this figure to be
> incorrect (given how much I transferred in X seconds...)

Using Samba? Search the lists for Samba performance advice - the default
configuration is far from optimal.
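
As a rough starting point (these are commonly suggested knobs, not a tested
configuration for your setup), something like the following in smb.conf is
worth experimenting with:

  [global]
      # let the kernel push file data straight to the socket
      use sendfile = yes
      # allow large raw SMB reads/writes
      read raw = yes
      write raw = yes
      # bigger socket buffers often help on gigabit links
      socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072

Measure with a large file copy before and after each change; some of these
options help on one network/OS combination and hurt on another.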
