A couple of points inline below ...

On Wed, Oct 26, 2011 at 10:56 PM, weiliam.hong <weiliam.h...@gmail.com> wrote:

> I have a fresh installation of OI151a:
> - SM X8DTH, 12GB RAM, LSI 9211-8i (latest IT-mode firmware)
> - pool_A : SG ES.2 Constellation (SAS)
> - pool_B : WD RE4 (SATA)
> - no settings in /etc/system

> Load generation via 2 concurrent dd streams:
> --------------------------------------------------
> dd if=/dev/zero of=/pool_A/bigfile bs=1024k count=1000000
> dd if=/dev/zero of=/pool_B/bigfile bs=1024k count=1000000

dd generates "straight line" data, all sequential.

>                capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> pool_A      15.5G  2.70T      0     50      0  6.29M
>   mirror    15.5G  2.70T      0     50      0  6.29M
>     c7t5000C50035062EC1d0      -      -      0     62      0  7.76M
>     c8t5000C50034C03759d0      -      -      0     50      0  6.29M
> ----------  -----  -----  -----  -----  -----  -----
> pool_B      28.0G  1.79T      0  1.07K      0   123M
>   mirror    28.0G  1.79T      0  1.07K      0   123M
>     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
>     c2t50014EE6ABB89957d0      -      -      0  1.02K      0   123M

What does `iostat -xnM c7t5000C50035062EC1d0 c8t5000C50034C03759d0
c1t50014EE057FCD628d0 c2t50014EE6ABB89957d0 1` show ? That will give
you much more insight into the OS <-> drive interface.

What does `fsstat /pool_A /pool_B 1` show ? That will give you much
more insight into the application <-> filesystem interface. In this
case "application" == "dd".

In my opinion, `zpool iostat -v` is somewhat limited in what you can
learn from it. The only thing I use it for these days is to see
distribution of data and I/O between vdevs.

> Questions:
> 1. Why do the SG SAS drives degrade to <10 MB/s while the WD RE4 drives
> remain consistent at >100 MB/s after 10-15 min?

Something changes to slow them down ? Sorry for the obvious retort :-)
See what iostat has to say. If the %b column is climbing, then you are
slowly saturating the drives themselves, for example.
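If it helps to watch just that column, here is a minimal sketch. The two sample lines are made up for illustration; the field positions are my assumption based on typical `iostat -xnM` output, where %b is the 10th field and the device name the 11th. For a real run you would pipe `iostat -xnM <devices> 1` into the awk program instead of the canned sample.

```shell
# Hypothetical iostat -xnM detail lines (r/s w/s Mr/s Mw/s wait actv
# wsvc_t asvc_t %w %b device) -- sample data, not real output
sample='0.0 50.0 0.0 6.3 0.0 9.8 0.0 196.0 0 98 c7t5000C50035062EC1d0
0.0 1045.0 0.0 123.0 0.0 2.1 0.0 2.0 0 34 c1t50014EE057FCD628d0'

# keep only the device name and its %b (busy) value, so a climbing %b
# is easy to spot across successive intervals
printf '%s\n' "$sample" | awk 'NF == 11 { printf "%-24s %%b=%s\n", $11, $10 }'
```

A %b pinned near 100 on the SAS drives while the SATA drives sit much lower would point at the drives (or their link) rather than ZFS.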

> 2. Why do the SG SAS drives show only 70+ MB/s when the published figures
> are > 100 MB/s (refer here)?

"published" where ? What does a "dd" to the device itself (no ZFS, no
FS at all) show ? For example, `dd if=/dev/zero
of=/dev/dsk/c7t5000C50035062EC1d0s0 bs=1024k count=1000000` (after you
destroy the zpool and use format to create an s0 of the entire disk).
This will test the device driver / HBA / drive with no FS or volume
manager involved. Use iostat to watch the OS <-> drive interface.
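As a rough sketch of that raw test, a small block-size sweep can also show whether transfer size matters. OUT is /dev/null below purely so the sketch is harmless to paste; substitute the raw device (the s0 slice created above) for the real run. Note the throughput summary line is a GNU dd feature; Solaris dd only prints record counts, so wrap the command in time(1) and divide bytes by seconds instead.

```shell
# Sketch: sequential-write sweep over a few block sizes.
# OUT=/dev/null is a safe stand-in; point it at the raw device for real.
OUT=/dev/null
for bs in 128k 512k 1024k; do
    printf 'bs=%s  ' "$bs"
    # GNU dd prints its summary on stderr; tail -1 keeps just that line
    dd if=/dev/zero of="$OUT" bs="$bs" count=64 2>&1 | tail -1
done
```

If the raw numbers hit the published figures, the problem is above the driver; if they do not, it is the drive, firmware, cabling, or HBA.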

> 3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
> driver is used. Are SAS and SATA drives handled differently ?

I assume there are (at least) four ports on the HBA ? I assume this
from the c7, c8, c1, c2 device names. That means that the drives
should _not_ be affecting each other. As another poster mentioned, the
behavior of the interface chip may change based on which drives are
seeing I/O, but I doubt that would be this big of a factor.
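That assumption can be checked mechanically: the leading `cN` of each device name is the controller instance, so four distinct values mean four separate paths. A minimal sketch using the names from the `zpool iostat` output above:

```shell
# strip everything from the 't' (target) onward, leaving the controller
# instance; distinct instances suggest distinct paths through the HBA
for d in c7t5000C50035062EC1d0 c8t5000C50034C03759d0 \
         c1t50014EE057FCD628d0 c2t50014EE6ABB89957d0; do
    echo "${d%%t*}  $d"
done | sort
```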

> This is a test server, so any ideas to try and help me understand greatly
> appreciated.

What do real benchmarks (iozone, filebench, orion) show ?
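For what it's worth, a starting point with iozone might look like the following; the flag choices are my assumptions, not a recommendation from this thread. Commands are echoed rather than executed so the sketch is safe to paste; drop the echo to run them.

```shell
# Sketch: sequential write/rewrite and read/reread passes (-i 0 -i 1) per
# pool; -s 24g is ~2x the machine's 12 GB of RAM so the ARC can't hide the
# disks, and -r 1m matches the dd record size used above.
for pool in /pool_A /pool_B; do
    echo iozone -i 0 -i 1 -s 24g -r 1m -f "$pool/iozone.tmp"
done
```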

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
