# zpool create t1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
# zpool create t2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
# zpool create t3 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0
# zpool create t4 c5t1d0 c5t2d0 c5t3d0 c5t5d0 c5t6d0 c5t7d0
# zpool create t5 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0
# zpool create t6 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0 c7t7d0

# zfs set atime=off t1
# zfs set atime=off t2
# zfs set atime=off t3
# zfs set atime=off t4
# zfs set atime=off t5
# zfs set atime=off t6
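
(For brevity, the six settings above could be collapsed into one loop; a sketch assuming a Bourne-compatible shell:)

# for p in t1 t2 t3 t4 t5 t6; do zfs set atime=off $p; done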

# dd if=/dev/zero of=/t1/q1 bs=512k&
[1] 903
# dd if=/dev/zero of=/t2/q1 bs=512k&
[2] 908
# dd if=/dev/zero of=/t3/q1 bs=512k&
[3] 909
# dd if=/dev/zero of=/t4/q1 bs=512k&
[4] 910
# dd if=/dev/zero of=/t5/q1 bs=512k&
[5] 911
# dd if=/dev/zero of=/t6/q1 bs=512k&
[6] 912
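
(The six writers could likewise be started in one loop, under the same shell assumption:)

# for i in 1 2 3 4 5 6; do dd if=/dev/zero of=/t$i/q1 bs=512k & done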

# zpool iostat 1
[...]
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
t1          20.1G  3.61T      0  3.19K      0   405M
t2          12.9G  3.61T      0  2.38K      0   302M
t3          8.51G  3.62T      0  2.79K  63.4K   357M
t4          5.19G  2.71T      0  1.39K  63.4K   170M
t5          1.96G  3.62T      0  2.65K      0   336M
t6          1.29G  3.62T      0  1.05K  63.4K   127M
----------  -----  -----  -----  -----  -----  -----
t1          20.1G  3.61T      0  3.77K      0   483M
t2          12.9G  3.61T      0  3.49K      0   446M
t3          8.51G  3.62T      0  2.36K  63.3K   295M
t4          5.19G  2.71T      0  2.84K      0   359M
t5          2.29G  3.62T      0     97  62.7K   494K
t6          1.29G  3.62T      0  4.03K      0   510M
----------  -----  -----  -----  -----  -----  -----

# iostat -xnzCM 1 | egrep "device| c[0-7]$"
[...]
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 5277.8    0.0  659.7  0.6 120.2    0.1   22.8   1 646 c0
    0.0 5625.7    0.0  703.2  0.1 116.7    0.0   20.7   0 691 c1
    0.0 4806.7    0.0  599.4  0.0 83.9    0.0   17.4   0 582 c4
    0.0 2457.4    0.0  307.2  3.3 134.9    1.3   54.9   2 600 c5
    0.0 3882.8    0.0  485.3  0.4 157.1    0.1   40.5   0 751 c7

So right now I'm getting up to 2.7 GB/s.
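
That figure is just the sum of the per-controller Mw/s column above: 659.7 + 703.2 + 599.4 + 307.2 + 485.3 = 2754.8 MB/s, i.e. about 2.7 GB/s (c6 reported no activity in that particular second, so -z suppressed its line). To total it per interval automatically, here is a sketch assuming the same iostat flags and the column layout shown above:

# iostat -xnzCM 1 | awk '
      /extended device statistics/ {      # header line marks a new interval
          if (n++) printf "total: %.1f MB/s\n", sum
          sum = 0
      }
      / c[0-7]$/ { sum += $4 }            # Mw/s is the 4th column
  '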

It's still jumpy (I provided only the peak outputs), but it's much better than one
large pool. Let's try the single-pool case again:

# zpool create test c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c4t0d0 c4t1d0 c4t2d0 \
    c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t1d0 c5t2d0 c5t3d0 c5t5d0 c5t6d0 c5t7d0 \
    c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0 c7t0d0 c7t1d0 c7t2d0 \
    c7t3d0 c7t4d0 c7t5d0 c7t6d0 c7t7d0

# zfs set atime=off test

# dd if=/dev/zero of=/test/q1 bs=512k&
# dd if=/dev/zero of=/test/q2 bs=512k&
# dd if=/dev/zero of=/test/q3 bs=512k&
# dd if=/dev/zero of=/test/q4 bs=512k&
# dd if=/dev/zero of=/test/q5 bs=512k&
# dd if=/dev/zero of=/test/q6 bs=512k&


# iostat -xnzCM 1 | egrep "device| c[0-7]$"
[...]
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0 1891.9    0.0  233.0 11.7 13.5    6.2    7.1   3 374 c0
    0.0 1944.9    0.0  239.5 10.9 14.0    5.6    7.2   3 350 c1
    7.0 1897.9    0.1  233.0 11.3 13.3    5.9    7.0   3 339 c4
   13.0 1455.9    0.2  178.5 13.2  6.1    9.0    4.2   3 226 c5
    0.0 1921.9    0.0  236.0  8.1 10.7    4.2    5.5   2 322 c6
    0.0 1919.9    0.0  236.0  7.8 10.5    4.1    5.5   2 321 c7

So it's about 1.3 GB/s (233.0 + 239.5 + 233.0 + 178.5 + 236.0 + 236.0 = 1356 MB/s),
roughly half of what I get with separate pools.


Looks like a scalability problem with a single large pool.