Yes, I noticed that thread a while back and have been doing a great deal of 
testing with various scsi_vhci options.
I'm disappointed that the thread hasn't moved further, since I also suspect 
the problem is related to mpt_sas, multipathing, or the expander.
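
For what it's worth, this is roughly how I've been checking the multipath 
state between tests (the device name below is just a placeholder, not one of 
my actual paths):

  # list all logical units claimed by scsi_vhci
  mpathadm list lu
  # show path count, path states and the current load-balance policy for one LU
  mpathadm show lu /dev/rdsk/cXtXXXXXXXXXXXXXXXXd0s2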

I was able to get aggregate writes up to 500MB/s out to the disks, but reads 
have not improved beyond an aggregate average of about 50-70MB/s for the pool.
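
To put numbers on that, I've mostly been watching per-vdev throughput while 
streaming a large file off the pool, along the lines of the sketch below 
(pool, dataset and file names are placeholders for my own):

  # stream a large file and watch per-vdev read throughput at the same time
  # (the file needs to be bigger than whatever is already sitting in the ARC)
  dd if=/tank/fs/bigfile of=/dev/null bs=1024k &
  zpool iostat -v tank 5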

I did not look much at read speeds during a lot of my previous testing because 
I thought write speeds were my issue... I've since realized that my userland 
write-speed problem copying from zpool to zpool was actually read-limited.
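
Watching both pools side by side while the copy runs makes that fairly 
obvious - the source pool sits at those read numbers while the destination 
spends most of its time waiting:

  # show read/write bandwidth for every pool (and vdev) every 5 seconds
  zpool iostat -v 5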

Since then I've tried mirrors, stripes, raidz, checked my drive caches, and 
tested recordsizes, volblocksizes, cluster sizes and combinations thereof; I've 
also tried vol-backed LUNs, file-backed LUNs, wcd=false, etc.
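
For the record, these are the sorts of commands I've been cycling through 
(pool, volume and LU names here are placeholders, not my real ones):

  # dataset/volume level tuning
  zfs set recordsize=32k tank/fs
  zfs create -V 100g -o volblocksize=64k tank/vol1
  # write-cache-disabled property on a COMSTAR LU (false = write cache enabled)
  stmfadm modify-lu -p wcd=false 600144F0XXXXXXXXXXXXXXXXXXXXXXXX
  # drive write cache can be checked/toggled from format's expert mode:
  # format -e -> (select disk) -> cache -> write_cache -> display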

Reads from disk are slow no matter what.  Of course, once the ARC is 
populated, the userland experience is blazing fast - because the disks are no 
longer being read.
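
To keep the ARC from hiding the disks during read tests, the cache can be 
taken out of the picture either by exporting and re-importing the pool between 
runs, or by temporarily keeping file data out of the ARC on the dataset under 
test (names are placeholders again):

  # evict the pool's cached data between runs
  zpool export tank ; zpool import tank
  # or cache only metadata while testing (set back to 'all' afterwards)
  zfs set primarycache=metadata tank/fs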


Seeing write speeds so much faster than reads strikes me as quite strange from 
a hardware perspective, though, since writes also invoke a read operation - do 
they not?

> This sounds very similar to another post last month.
> http://opensolaris.org/jive/thread.jspa?messageID=487453
> 
> The trouble appears to be below ZFS, so you might try asking on the
> storage-discuss forum.
>  -- richard
> On Jul 28, 2010, at 5:23 PM, Karol wrote:
> 
> > I appear to be getting between 2-9MB/s reads from individual disks in my
> > zpool as shown in iostat -v
> > I expect upwards of 100MBps per disk, or at least aggregate performance on
> > par with the number of disks that I have.
> > 
> > My configuration is as follows:
> > Two Quad-core 5520 processors
> > 48GB ECC/REG ram
> > 2x LSI 9200-8e SAS HBAs (2008 chipset)
> > Supermicro 846e2 enclosure with LSI sasx36 expander backplane
> > 20 seagate constellation 2TB SAS harddrives
> > 2x 8GB Qlogic dual-port FC adapters in target mode
> > 4x Intel X25-E 32GB SSDs available (attached via LSI sata-sas interposer)
> > mpt_sas driver
> > multipath enabled, all four LSI ports connected for 4 paths available:
> > f_sym, load-balance logical-block region size 11 on seagate drives
> > f_asym_sun, load-balance none, on intel ssd drives
> > 
> > currently not using the SSDs in the pools since it seems I have a deeper
> > issue here.
> > Pool configuration is four 2-drive mirror vdevs in one pool, and the same
> > in another pool. 2 drives are for OS and 2 drives aren't being used at the
> > moment.
> > 
> > Where should I go from here to figure out what's wrong?
> > Thank you in advance - I've spent days reading and testing but I'm not
> > getting anywhere.
> > 
> > P.S: I need the aid of some Genius here.
> 
> -- 
> Richard Elling
> rich...@nexenta.com   +1-760-896-4422
> Enterprise class storage for everyone
> www.nexenta.com
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
