On 21/10/2009 03:54, Bob Friesenhahn wrote:

> I would be interested to know how many IOPS an OS like Solaris is able
> to push through a single device interface. The normal driver stack is
> likely limited as to how many IOPS it can sustain for a given LUN since
> the driver stack is optimized for high latency devices like disk drives.
> If you are creating a driver stack, the design decisions you make when
> requests will be satisfied in about 12ms would be much different than if
> requests are satisfied in 50us. Limitations of existing software stacks
> are likely reasons why Sun is designing hardware with more device
> interfaces and more independent devices.
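
To put those two latencies in perspective: a device that completes one request in about 12ms can sustain only ~83 IOPS per outstanding request (1/0.012), while a 50us device can sustain 20,000 (1/0.00005), roughly 240x as many completions per second for the driver stack to process.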


OpenSolaris 2009.06, 1KB READ I/O:
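
(A note on the method for anyone reproducing this: dd reads the raw character device sequentially in 1KB blocks, bypassing the filesystem and its caches; iostat's switches are -x for extended statistics, -n for logical device names, -z to suppress all-zero lines, -C to aggregate per controller, and -M to report throughput in MB/s. The egrep keeps only the headers and the per-controller c0..c3 lines.)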

# dd of=/dev/null bs=1k if=/dev/rdsk/c0t0d0p0&
# iostat -xnzCM 1|egrep "device|c[0123]$"
[...]
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 17497.3    0.0   17.1    0.0  0.0  0.8    0.0    0.0   0  82 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 17498.8    0.0   17.1    0.0  0.0  0.8    0.0    0.0   0  82 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 17277.6    0.0   16.9    0.0  0.0  0.8    0.0    0.0   0  82 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 17441.3    0.0   17.0    0.0  0.0  0.8    0.0    0.0   0  82 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 17333.9    0.0   16.9    0.0  0.0  0.8    0.0    0.0   0  82 c0
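
A quick sanity check on those figures: ~17,500 reads/s at 1KB each matches the ~17.1 MB/s in the Mr/s column, and with actv at 0.8, Little's law puts the average service time near 0.8/17500 ≈ 46us. A single dd issues its reads synchronously and so never has more than one request in flight, which makes ~17.5k IOPS about the ceiling for one device driven by one dd.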


Now let's see what it looks like for a single SAS connection, but with dd reading from 11x SSDs:

# dd of=/dev/null bs=1k if=/dev/rdsk/c0t0d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t1d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t2d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t4d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t5d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t6d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t7d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t8d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t9d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t10d0p0&
# dd of=/dev/null bs=1k if=/dev/rdsk/c0t11d0p0&
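
For the record, the same eleven dd's can be started with a single loop (equivalent to the commands above; c0t3d0 is absent from my list, so the targets are spelled out rather than generated from a range):

# for t in 0 1 2 4 5 6 7 8 9 10 11; do dd of=/dev/null bs=1k if=/dev/rdsk/c0t${t}d0p0 & done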

# iostat -xnzCM 1|egrep "device|c[0123]$"
[...]
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104243.3    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 968 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104249.2    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 968 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104208.1    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 967 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104245.8    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 966 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104221.9    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 968 c0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
 104212.2    0.0  101.8    0.0  0.2  9.7    0.0    0.1   0 967 c0


It looks like a single CPU core still hasn't been saturated and the bottleneck is in the device rather than in the OS/CPU. So the MPT driver in OpenSolaris 2009.06 can do at least 100,000 IOPS to a single SAS port.
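
Reading the controller line more closely (my interpretation of the numbers): ~104k IOPS over 11 SSDs is ~9.5k per device, well below the ~17.5k a single SSD managed on its own, which points at the shared SAS link rather than the drives or the host; actv of ~9.7 is simply the 11 synchronous dd's each keeping about one request in flight, and %b of ~968 is the -C controller line summing 11 devices at roughly 88% busy each. With wait and wsvc_t both at zero, there is no sign of host-side queuing.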

It also scales well: I ran the above dd's over 4x SAS ports at the same time and it scaled linearly, achieving well over 400k IOPS (consistent with 4 x ~104k ≈ 416k).


Hardware used: Sun Fire X4270, 2x Intel Xeon X5570 @ 2.93GHz, 4x SG-PCIE8SAS-E-Z SAS HBAs (fw 1.27.3.0), connected to a Sun Storage F5100 Flash Array.


--
Robert Milkowski
http://milek.blogspot.com
