That's interesting - 'zpool iostat' shows quite a small read volume on any pool, but if I run 'zpool iostat -v' I can see that while the read volume reported for the pool is small, the read volume on each individual disk is quite large: summed across all disks in the pool it comes to more than 10x the pool-level figure. These numbers are consistent with what iostat reports. So now even zpool itself claims it issues over 10x more read traffic to the disks in the pool than to the pool itself.
Now - why??? It really hurts performance here...

bash-3.00# zpool iostat -v p1 1
                              capacity     operations    bandwidth
pool                        used  avail   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
p1                          749G  67.2G     58     90   878K   903K
  raidz                     749G  67.2G     58     90   878K   903K
    c4t500000E011909320d0      -      -     15     40   959K  87.3K
    c4t500000E011909300d0      -      -     14     40   929K  86.5K
    c4t500000E011903030d0      -      -     18     40  1.11M  86.8K
    c4t500000E011903300d0      -      -     13     32   823K  77.7K
    c4t500000E0119091E0d0      -      -     15     40   961K  87.3K
    c4t500000E0119032D0d0      -      -     14     40   930K  86.5K
    c4t500000E011903370d0      -      -     18     40  1.11M  86.8K
    c4t500000E011903190d0      -      -     13     32   828K  77.8K
    c4t500000E011903350d0      -      -     15     40   964K  87.3K
    c4t500000E0119095A0d0      -      -     14     40   934K  86.5K
    c4t500000E0119032A0d0      -      -     18     40  1.11M  86.8K
    c4t500000E011903340d0      -      -     13     32   821K  77.7K
-------------------------  -----  -----  -----  -----  -----  -----

                              capacity     operations    bandwidth
pool                        used  avail   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
p1                          749G  67.2G     49     44   897K  1.02M
  raidz                     749G  67.2G     49     44   897K  1.02M
    c4t500000E011909320d0      -      -     17     25  1.05M  96.4K
    c4t500000E011909300d0      -      -     15     25   972K  96.2K
    c4t500000E011903030d0      -      -     20     25  1.25M  96.3K
    c4t500000E011903300d0      -      -     14     25   853K  91.2K
    c4t500000E0119091E0d0      -      -     16     25  1017K  96.7K
    c4t500000E0119032D0d0      -      -     15     25   955K  96.7K
    c4t500000E011903370d0      -      -     19     25  1.21M  96.6K
    c4t500000E011903190d0      -      -     13     25   843K  91.0K
    c4t500000E011903350d0      -      -     16     25  1001K  96.5K
    c4t500000E0119095A0d0      -      -     15     25   974K  96.3K
    c4t500000E0119032A0d0      -      -     20     25  1.22M  96.5K
    c4t500000E011903340d0      -      -     14     25   855K  90.7K
-------------------------  -----  -----  -----  -----  -----  -----

^C
bash-3.00#
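
For reference, one quick way to quantify the gap is to sum the per-disk read bandwidth from a single no-interval 'zpool iostat -v' sample (the figures are then averages since import) and compare it with the pool line. This is only a rough sketch: the pool name 'p1', the column positions, and the K/M/G suffix handling are assumptions based on the output above.

zpool iostat -v p1 | nawk '
# convert the 878K / 1.11M style bandwidth figures to KB
function kb(s) {
    n = s + 0
    if (s ~ /M$/)       n *= 1024          # MB -> KB
    else if (s ~ /G$/)  n *= 1024 * 1024   # GB -> KB
    else if (s !~ /K$/) n /= 1024          # plain bytes -> KB
    return n
}
$1 == "p1"     { pool  = kb($6) }    # pool-level read bandwidth
$1 ~ /^c[0-9]/ { disks += kb($6) }   # per-disk read bandwidth, summed
END {
    if (pool > 0)
        printf("pool %.0fK/s   disks %.0fK/s   ratio %.1fx\n",
               pool, disks, disks / pool)
}'

Applied to the two samples above, the ratio works out to roughly 13x, which matches the "over 10x" observation.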
 
 