Re: [zfs-discuss] ZFS read performance terrible

2010-08-01 Thread Karol
I can achieve 140MBps to individual disks until I hit a system ceiling of about 
1GBps, which I suspect may be all that the 4x SAS HBA connection on a 3Gbps SAS 
expander can handle (just a guess).
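
As a rough sanity check of that guess (assuming a 4-lane wide port at 3Gbps per 
lane):

  4 lanes x 3 Gbit/s              = 12 Gbit/s raw
  after 8b/10b encoding (x 0.8)   = 9.6 Gbit/s  ~= 1.2 GB/s usable
  minus SAS framing and expander overhead -> ~1 GB/s at the host looks plausible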

Anyway, with ZFS or SVM I can't get much beyond single-disk performance in total 
(if that), so I am thinking my hardware is OK and this is something else.

I wonder if my issue could have anything to do with:
http://opensolaris.org/jive/thread.jspa?messageID=33739

Anyway, I've already blown away my OSOL install to test Linux performance, so 
I can't test ZFS at the moment.  However, does anyone know if the above post 
could be related to sequential performance?  Toward the end they suggest 
increasing an sd tunable so that more data is sent to the device per request - 
if I understand it correctly - so that the hard drive has enough data 
to work with on every rotation.
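
(I don't know which tunable that thread had in mind; one tunable in that general 
area is maxphys, which caps the largest single I/O the kernel will issue. Purely 
as an illustration, it can be raised in /etc/system - the value below is an 
example, not a recommendation:)

  * /etc/system - illustrative example only; a reboot is required
  set maxphys=1048576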
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Richard Elling
On Jul 29, 2010, at 6:04 PM, Carol wrote:
> Richard,
>  
> I disconnected all but one path and disabled mpxio via stmsboot -d and my 
> read performance doubled.  I saw about 100MBps average from the pool. 

This is a start.  Something is certainly fishy in the data paths, but
it is proving to be difficult to pinpoint.  The only common factor I see
at this time is the SuperMicro JBOD chassis. It would be worthwhile
checking to see if there are firmware updates available for the
chassis or expanders.

>  
> BTW, single harddrive performance (single disk in a pool) is about 140MBps.
> What do you think? 

That is about right per disk.  I usually SWAG 100 +/- 50 MB/sec for HDD
media speed.
 -- richard

>  
> Thank you again for your help!
> 
> --- On Thu, 7/29/10, Richard Elling  wrote:
> 
> From: Richard Elling 
> Subject: Re: [zfs-discuss] ZFS read performance terrible
> To: "Carol" 
> Cc: "zfs-discuss@opensolaris.org" 
> Date: Thursday, July 29, 2010, 2:03 PM
> 
> On Jul 29, 2010, at 9:57 AM, Carol wrote:
> 
> > Yes I noticed that thread a while back and have been doing a great deal of 
> > testing with various scsi_vhci options.  
> > I am disappointed that the thread hasn't moved further since I also suspect 
> > that it is mpt_sas, multipath, or expander related.
> 
> The thread is in the ZFS forum, but the problem is not a ZFS problem.
> 
> I was able to get aggregate writes up to 500MBps out to the disks, but reads 
> > have not improved beyond an aggregate average of about 50-70MBps for the 
> > pool.
> 
> I find "zpool iostat" to be only marginally useful.  You need to look at the
> output of "iostat -zxCn" which will show the latency of the I/Os.  Check to
> see if the latency (asvc_t) is similar to the previous thread.
> 
> I did not look much at read speeds during a lot of my previous testing 
> > because I thought write speeds were my issue... And I've since realized 
> > that my userland write speed problem from zpool <-> zpool was actually read 
> > limited.
> 
> Writes are cached in RAM, so looking at iostat or zpool iostat doesn't offer
> the observation point you'd expect.
> 
> > Since then I've tried mirrors, stripes, raidz, checked my drive caches, 
> > tested recordsizes, volblocksizes, clustersizes, combinations therein, 
> > tried vol-backed luns, file-backed luns, wcd=false - etc.
> > 
> > Reads from disk are slow no matter what.  Of course - once the arc cache is 
> > populated, the userland experience is blazing - because the disks are not 
> > being read.
> 
> Yep, classic case of slow disk I/O.
> 
> Seeing write speeds so much faster than reads strikes me as quite strange 
> > from a hardware perspective, though, since writes also invoke a read 
> > operation - do they not?
> 
> In many cases, writes do not invoke a read.
> -- richard
> 
> 

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Brent Jones
On Thu, Jul 29, 2010 at 6:04 PM, Carol  wrote:

> Richard,
>
> I disconnected all but one path and disabled mpxio via stmsboot -d and my
> read performance doubled.  I saw about 100MBps average from the pool.
>
> BTW, single harddrive performance (single disk in a pool) is about 140MBps.
>
> What do you think?
>
> Thank you again for your help!
>
>

I somehow doubt nearline 2TB drives will do 140MB/sec. Maybe on the outer
tracks, and only at optimum block sizes, with sequential reads.
You mentioned you were using COMSTAR; depending on the initiator OS and on
the device you are mapping to create the LUN (/dev/dsk or /dev/rdsk), you
may be writing 512-byte blocks to ZFS.
Search older threads for rdsk vs. dsk performance with COMSTAR.
Also, see what block size you are using on the initiator, if in fact you are
using COMSTAR. I found absolutely terrible COMSTAR performance when the
initiator FS was using anything less than 8KB blocks; too many small sync
writes over iSCSI = death for storage performance.
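
As a hedged illustration of an rdsk-backed zvol LUN (pool/volume names and the 
block size are placeholders; check the COMSTAR docs for your build):

  zfs create -V 200G -o volblocksize=8k tank/lun0
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0    # raw device node, not .../dsk/...
  stmfadm add-view <GUID-from-sbdadm-output>   # export the LU to the initiators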

Go the usual route of looking at jumbo frames, flow control on the switches,
etc.
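
(For example, enabling jumbo frames on an OpenSolaris NIC - the link name is a 
placeholder and the switch must support the larger MTU:)

  dladm set-linkprop -p mtu=9000 ixgbe0
  dladm show-linkprop -p mtu ixgbe0     # confirm the effective value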

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Carol
Richard,
 
I disconnected all but one path and disabled mpxio via stmsboot -d and my read 
performance doubled.  I saw about 100MBps average from the pool. 
 
BTW, single harddrive performance (single disk in a pool) is about 140MBps. 
What do you think?  
 
Thank you again for your help!

--- On Thu, 7/29/10, Richard Elling  wrote:


From: Richard Elling 
Subject: Re: [zfs-discuss] ZFS read performance terrible
To: "Carol" 
Cc: "zfs-discuss@opensolaris.org" 
Date: Thursday, July 29, 2010, 2:03 PM


On Jul 29, 2010, at 9:57 AM, Carol wrote:

> Yes I noticed that thread a while back and have been doing a great deal of 
> testing with various scsi_vhci options.  
> I am disappointed that the thread hasn't moved further since I also suspect 
> that it is mpt_sas, multipath, or expander related.

The thread is in the ZFS forum, but the problem is not a ZFS problem.

> I was able to get aggregate writes up to 500MBps out to the disks, but reads 
> have not improved beyond an aggregate average of about 50-70MBps for the pool.

I find "zpool iostat" to be only marginally useful.  You need to look at the
output of "iostat -zxCn" which will show the latency of the I/Os.  Check to
see if the latency (asvc_t) is similar to the previous thread.

> I did not look much at read speeds during a lot of my previous testing because 
> I thought write speeds were my issue... And I've since realized that my 
> userland write speed problem from zpool <-> zpool was actually read limited.

Writes are cached in RAM, so looking at iostat or zpool iostat doesn't offer
the observation point you'd expect.

> Since then I've tried mirrors, stripes, raidz, checked my drive caches, 
> tested recordsizes, volblocksizes, clustersizes, combinations therein, tried 
> vol-backed luns, file-backed luns, wcd=false - etc.
> 
> Reads from disk are slow no matter what.  Of course - once the arc cache is 
> populated, the userland experience is blazing - because the disks are not 
> being read.

Yep, classic case of slow disk I/O.

> Seeing write speeds so much faster than reads strikes me as quite strange from 
> a hardware perspective, though, since writes also invoke a read operation - 
> do they not?

In many cases, writes do not invoke a read.
-- richard




  ___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Carol
> Yep. With round robin it's about 80 for each disk for asvc_t
Any ideas?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Carol
Yes I noticed that thread a while back and have been doing a great deal of 
testing with various scsi_vhci options.  
I am disappointed that the thread hasn't moved further since I also suspect 
that it is mpt_sas, multipath, or expander related.

I was able to get aggregate writes up to 500MBps out to the disks, but reads have 
not improved beyond an aggregate average of about 50-70MBps for the pool.

 I did not look much at read speeds during a lot of my previous testing because 
I thought write speeds were my issue... And I've since realized that my 
userland write speed problem from zpool <-> zpool was actually read limited.

Since then I've tried mirrors, stripes, raidz, checked my drive caches, tested 
recordsizes, volblocksizes, clustersizes, combinations therein, tried 
vol-backed luns, file-backed luns, wcd=false - etc.

Reads from disk are slow no matter what.  Of course - once the arc cache is 
populated, the userland experience is blazing - because the disks are not being 
read.


Seeing write speeds so much faster than reads strikes me as quite strange from a 
hardware perspective, though, since writes also invoke a read operation - do 
they not?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
I'm about to do some testing with that dtrace script..

However, in the meantime - I've disabled primarycache (set primarycache=none) 
since I noticed that it was easily caching /dev/zero and I wanted to do some 
tests within the OS rather than over FC.

I am getting the same results through dd.
Virtually the exact same numbers.
I imagine this particular fact is a testament to COMSTAR - of course, I suspect 
that if I ever get the disks pushing what they're capable of, then maybe I will 
notice some slight COMSTAR inefficiencies later on...  for now there don't seem 
to be any at this performance level.

Anyway - there seems to be a 523MBps (or so) overall throughput limit.  If two 
pools are writing, the aggregate total zpool throughput for all pools will not 
exceed about 523MBps.

That's of course not the biggest issue.
With the ARC cache disabled - some strange numbers are becoming apparent:
dd throughput hovers about 70MBps for reads, 800MBps for writes.
Meanwhile - zpool throughput shows:
 50-150MBps throughput for reads / 520MBps for writes.

If I set zfs_prefetch_disable, then zpool throughput for reads matches userland 
throughput - but stays in the 70-90MBps range.
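
(For reference, the knobs and the test referred to above - the pool and file 
names are placeholders, and the mdb write is live-only and reverts on reboot:)

  zfs set primarycache=none tank               # stop caching this dataset's data in the ARC
  echo zfs_prefetch_disable/W0t1 | mdb -kw     # turn off ZFS file-level prefetch
  dd if=/tank/testfile of=/dev/null bs=1024k   # sequential read of an existing file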

I am starting to think that there is a ZFS write ordering issue (which becomes 
apparent when you subsequently read the data) or zfs prefetch is completely 
off-key and unable to properly read ahead in order to saturate the read 
pipeline...

What do you all think?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Eff Norwood
Yes - the author was too smart for his own good: ssd is for SPARC, and on your 
system you use sd. Delete all the ssd lines. Here's that script, which will work 
for you provided it doesn't get wrapped or otherwise mangled by this HTML 
interface:

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Record the time each buffer is handed to the sd driver. */
fbt:sd:sdstrategy:entry
{
    start[(struct buf *)arg0] = timestamp;
}

/*
 * On I/O completion, look up the start time for this buffer and aggregate
 * the elapsed time per device (sd soft-state pointer).  The /100 divisor
 * leaves the latency in units of 100 ns.
 */
fbt:sd:sdintr:entry
/ start[(this->buf = (struct buf *)((struct scsi_pkt *)arg0)->pkt_private)] != 0 /
{
    this->un = ((struct sd_xbuf *)this->buf->b_private)->xb_un;
    @[this->un]  = lquantize((timestamp - start[this->buf])/100, 6, 60, 6);
    @q[this->un] = quantize((timestamp - start[this->buf])/100);
    start[this->buf] = 0;
}
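
To run it (saving it as, say, sd_latency.d - the filename is arbitrary):

  chmod +x sd_latency.d
  ./sd_latency.d     # let it run while the workload is active, then press
                     # Ctrl-C to print the per-device latency distributions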

I'll also try to attach the tarball of all three corrected scripts that I use.
-- 
This message posted from opensolaris.org

scripts.tar
Description: Binary data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
> You should look at your disk IO patterns which will
> likely lead you to find unset IO queues in sd.conf.
> Look at this
> http://blogs.sun.com/chrisg/entry/latency_bubble_in_yo
> ur_io as a place to start. 

Any idea why I would get this message from the dtrace script?

(I'm new to dtrace / opensolaris )

dtrace: failed to compile script ./ssdrwtime.d: 
line 1: probe description fbt:ssd:ssdstrategy:entry does not match any probes
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
Good idea.
I will keep this test in mind - I'd do it immediately except for the fact that 
it would be somewhat difficult to connect power to the drives considering the 
design of my chassis, but I'm sure I can figure something out if it comes to 
it...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Eff Norwood
You should look at your disk IO patterns, which will likely lead you to find 
unset IO queue depths in sd.conf. Look at this 
http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to 
start. The parameter you can try setting globally (a bad idea) is changed by 
running "echo zfs_vdev_max_pending/W0t2 | mdb -kw". The old default value was 
35, and the new one is 10. On my LSI 9211-8i with 16 SATA disks, I found the 
ideal value to be 2 for our workload.
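
(A hedged sketch of the two ways to set it - the mdb write takes effect 
immediately but does not survive a reboot, while the /etc/system line is 
persistent; the value 2 is just the example from above:)

  # live change on the running kernel (non-persistent)
  echo zfs_vdev_max_pending/W0t2 | mdb -kw
  # check the current value
  echo zfs_vdev_max_pending/D | mdb -k
  # persistent equivalent, placed in /etc/system (requires a reboot)
  set zfs:zfs_vdev_max_pending = 2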
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Alexander Lesle
Hello Karol,

you wrote at, 29. Juli 2010 02:23:

> I appear to be getting between 2-9MB/s reads from individual disks

It sounds to me like you have a hardware failure, because 2-9 MB/s 
is far below what any healthy disk should deliver.

> 2x LSI 9200-8e SAS HBAs (2008 chipset)
> Supermicro 846e2 enclosure with LSI sasx36 expander backplane
> 20 seagate constellation 2TB SAS harddrives
> 2x 8GB Qlogic dual-port FC adapters in target mode

You wrote that you have tested a single disk.  After that, I would disconnect 
all the components above the drives, start with a fanout cable, and build a 
small mirror to check it out.
And so on.

-- 
Regards
Alex



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Hi Robert -
I tried all of your suggestions but unfortunately my performance did not 
improve.

I tested single-disk performance and I get 120-140MBps read/write to a single 
disk.  As soon as I add an additional disk (mirror, stripe, raidz), my 
performance drops significantly.
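
(For comparison, the kind of single-disk vs. mirror test I mean - pool, device, 
and file names are placeholders:)

  zpool create testpool c5t0d0
  dd if=/dev/zero of=/testpool/bigfile bs=1024k count=8192    # write ~8 GB
  zpool export testpool && zpool import testpool              # so the read below isn't served from ARC
  dd if=/testpool/bigfile of=/dev/null bs=1024k               # sequential read
  # then destroy and repeat with: zpool create testpool mirror c5t0d0 c5t1d0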

I'm using 8Gbit FC. From a block standpoint, I suppose it's quite similar to 
iSCSI.
However, performance is the goal in my case - gigabit won't do what I need.
I need throughput with large files.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread StorageConcepts
Actually, writes being faster than reads is typical for a Copy on Write FS (or 
Write Anywhere). I usually describe it like this.

CoW in ZFS works like when you come home after a long day and you just want to 
go to bed. You take off one piece of clothing after another and drop it on the 
floor just where you are - this is very fast (and it actually is copy on write 
with a block allocation policy of "closest").

Then the next day when you have to get to work (in this example assuming that 
you wear the same underwear again - remember, not supported! :) - you have to 
pick up all the clothes one after another and move all across the room to get 
dressed. This takes time, and it is the same for reads.

So in CoW it is usual that writes are faster than reads (especially for 
RaidZ/RaidZ2, where each vdev can be viewed as one disk). For 100% synchronous 
writes (wcd=true), you should see the same write and read performance.

So for your setup I assume: 

4 x 2 disk mirror with Nearline SATA:

Write (sync, wcd=true) = 4 x 80 IOPS = 320 IOPS x 8 KB recordsize = 2.6 MB/sec; 
if you see more, that's ZFS optimizations already. If you see less, make sure 
you have proper partition alignment (otherwise 1 write can become 2).

Read = 8 x 100 IOPS (some more IOPS because of head optimization and elevator) 
= 800 IOPS x 8k = 6.4 MB/sec from disk. Same problem with partition alignment.

For 128k block size?

Write: 320 x 128k = ~41 MB/sec
Read: 102 MB/sec

ZFS needs caching (L2ARC, ZIL, etc.), otherwise it is slow - just as any other 
disk system for random I/O. For sequential I/O ZFS is not optimal because of 
CoW. Also, with iSCSI you have more fragmentation because of the small block 
updates.

So how to tune?

1) Use a dedicated ZIL (slog) device (this will make your writes more 
sequential, and so also optimize the reads) - see the sketch after this list
2) Use L2ARC
3) Make sure partition alignment is ok
4) Try to disable read-ahead on the client (otherwise you cause even more 
random I/O)
5) Use a larger block size (128k) to have some kind of implicit read-ahead 
(except for DB workloads)
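
(A minimal sketch of points 1 and 2, assuming a pool named "tank"; the SSD 
device names are placeholders:)

  zpool add tank log c2t0d0      # dedicated slog device for the ZIL
  zpool add tank cache c2t1d0    # L2ARC read cache device
  zpool status tank              # verify the log and cache vdevs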

Regards, 
Robert Heinzmann
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Elling
On Jul 29, 2010, at 9:57 AM, Carol wrote:

> Yes I noticed that thread a while back and have been doing a great deal of 
> testing with various scsi_vhci options.  
> I am disappointed that the thread hasn't moved further since I also suspect 
> that it is mpt_sas, multipath, or expander related.

The thread is in the ZFS forum, but the problem is not a ZFS problem.

> I was able to get aggregate writes up to 500MBps out to the disks, but reads 
> have not improved beyond an aggregate average of about 50-70MBps for the pool.

I find "zpool iostat" to be only marginally useful.  You need to look at the
output of "iostat -zxCn" which will show the latency of the I/Os.  Check to
see if the latency (asvc_t) is similar to the previous thread.
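
For example (the sampling interval is arbitrary, and the first report is the 
since-boot average):

  iostat -zxCn 5     # watch actv, wait and asvc_t per device in later intervals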

> I did not look much at read speeds during a lot of my previous testing because 
> I thought write speeds were my issue... And I've since realized that my 
> userland write speed problem from zpool <-> zpool was actually read limited.

Writes are cached in RAM, so looking at iostat or zpool iostat doesn't offer
the observation point you'd expect.

> Since then I've tried mirrors, stripes, raidz, checked my drive caches, 
> tested recordsizes, volblocksizes, clustersizes, combinations therein, tried 
> vol-backed luns, file-backed luns, wcd=false - etc.
> 
> Reads from disk are slow no matter what.  Of course - once the arc cache is 
> populated, the userland experience is blazing - because the disks are not 
> being read.

Yep, classic case of slow disk I/O.

> Seeing write speeds so much faster than reads strikes me as quite strange from 
> a hardware perspective, though, since writes also invoke a read operation - 
> do they not?

In many cases, writes do not invoke a read.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Yes I noticed that thread a while back and have been doing a great deal of 
testing with various scsi_vhci options.  
I am disappointed that the thread hasn't moved further since I also suspect 
that it is mpt_sas, multipath, or expander related.

I was able to get aggregate writes up to 500MBps out to the disks, but reads have 
not improved beyond an aggregate average of about 50-70MBps for the pool.

I did not look much at read speeds during a lot of my previous testing because I 
thought write speeds were my issue... And I've since realized that my userland 
write speed problem from zpool <-> zpool was actually read limited.

Since then I've tried mirrors, stripes, raidz, checked my drive caches, tested 
recordsizes, volblocksizes, clustersizes, combinations therein, tried 
vol-backed luns, file-backed luns, wcd=false - etc.

Reads from disk are slow no matter what.  Of course - once the arc cache is 
populated, the userland experience is blazing - because the disks are not being 
read.


Seeing write speeds so much faster than reads strikes me as quite strange from a 
hardware perspective, though, since writes also invoke a read operation - do 
they not?

> This sounds very similar to another post last month.
> http://opensolaris.org/jive/thread.jspa?messageID=4874
> 53
> 
> The trouble appears to be below ZFS, so you might try
> asking on the 
> storage-discuss forum.
>  -- richard
> On Jul 28, 2010, at 5:23 PM, Karol wrote:
> 
> > I appear to be getting between 2-9MB/s reads from
> individual disks in my zpool as shown in iostat -v 
> > I expect upwards of 100MBps per disk, or at least
> aggregate performance on par with the number of disks
> that I have.
> > 
> > My configuration is as follows:
> > Two Quad-core 5520 processors
> > 48GB ECC/REG ram
> > 2x LSI 9200-8e SAS HBAs (2008 chipset)
> > Supermicro 846e2 enclosure with LSI sasx36 expander
> backplane
> > 20 seagate constellation 2TB SAS harddrives
> > 2x 8GB Qlogic dual-port FC adapters in target mode
> > 4x Intel X25-E 32GB SSDs available (attached via
> LSI sata-sas interposer)
> > mpt_sas driver
> > multipath enabled, all four LSI ports connected for
> 4 paths available:
> > f_sym, load-balance logical-block region size 11 on
> seagate drives
> > f_asym_sun, load-balance none, on intel ssd drives
> > 
> > currently not using the SSDs in the pools since it
> seems I have a deeper issue here.
> > Pool configuration is four 2-drive mirror vdevs in
> one pool, and the same in another pool. 2 drives are
> for OS and 2 drives aren't being used at the moment.
> > 
> > Where should I go from here to figure out what's
> wrong?
> > Thank you in advance - I've spent days reading and
> testing but I'm not getting anywhere. 
> > 
> > P.S: I need the aid of some Genius here.
> > -- 
> > This message posted from opensolaris.org
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> >
> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> ss
> 
> -- 
> Richard Elling
> rich...@nexenta.com   +1-760-896-4422
> Enterprise class storage for everyone
> www.nexenta.com
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> ss
>
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Elling
This sounds very similar to another post last month.
http://opensolaris.org/jive/thread.jspa?messageID=487453

The trouble appears to be below ZFS, so you might try asking on the 
storage-discuss forum.
 -- richard

On Jul 28, 2010, at 5:23 PM, Karol wrote:

> I appear to be getting between 2-9MB/s reads from individual disks in my 
> zpool as shown in iostat -v 
> I expect upwards of 100MBps per disk, or at least aggregate performance on 
> par with the number of disks that I have.
> 
> My configuration is as follows:
> Two Quad-core 5520 processors
> 48GB ECC/REG ram
> 2x LSI 9200-8e SAS HBAs (2008 chipset)
> Supermicro 846e2 enclosure with LSI sasx36 expander backplane
> 20 seagate constellation 2TB SAS harddrives
> 2x 8GB Qlogic dual-port FC adapters in target mode
> 4x Intel X25-E 32GB SSDs available (attached via LSI sata-sas interposer)
> mpt_sas driver
> multipath enabled, all four LSI ports connected for 4 paths available:
> f_sym, load-balance logical-block region size 11 on seagate drives
> f_asym_sun, load-balance none, on intel ssd drives
> 
> currently not using the SSDs in the pools since it seems I have a deeper 
> issue here.
> Pool configuration is four 2-drive mirror vdevs in one pool, and the same in 
> another pool. 2 drives are for OS and 2 drives aren't being used at the 
> moment.
> 
> Where should I go from here to figure out what's wrong?
> Thank you in advance - I've spent days reading and testing but I'm not 
> getting anywhere. 
> 
> P.S: I need the aid of some Genius here.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Jahnel
>Hi r2ch

>The operations column shows about 370 operations for read - per spindle
>(Between 400-900 for writes)
>How should I be measuring iops? 

It seems to me, then, that your spindles are going about as fast as they can 
and you're just moving small block sizes.

There are lots of ways to test for iops, but for this purpose imo the 
operations column is fine. 
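
(One simple way to watch per-device IOPS, for what it's worth - the interval is 
arbitrary:)

  iostat -xn 5     # r/s + w/s per device = IOPS; kr/s and kw/s show bandwidth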

I think the next step would be to attach a couple of inexpensive SSDs as cache 
and ZIL to see what that does. Understand that it will only make a difference 
for data that is warm (reads) and for writes that require a commit.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
> Update to my own post.  Further tests more
> consistently resulted in closer to 150MB/s.
> 
> When I took one disk offline, it was just shy of
> 100MB/s on the single disk.  There is both an obvious
> improvement with the mirror, and a trade-off (perhaps
> the latter is controller related?).
> 
> I did the same tests on my work computer, which has
> the same 7200.12 disks (except larger), an i7-920,
> ICH10, and 12GB memory.  The mirrored pool
> performance was identical, but the individual disks
> performed at near 120MB/s when isolated.  Seems like
> the 150MB/s may be a wall, and all disks and
> controllers are definitely in SATA2 mode.  But I
> digress

You could be running into a hardware bandwidth bottleneck somewhere 
(controller, bus, memory, cpu, etc.) - however, my experience isn't exactly 
similar to yours, since I am not even getting 150MBps from 8 disks - so I am 
probably running into a 1) hardware issue, 2) driver issue, 3) ZFS issue, or 
4) configuration issue.

I have tried OSOL 2009.06, but the driver doesn't recognize my SAS controller.
I then went with OSOL b134 to get my controller recognized and hit the 
performance issues I am discussing now; currently I'm using the RC2 of Nexenta 
(OSOL b134 with backported fixes) with the same performance issues.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
Hi r2ch

The operations column shows about 370 operations for read - per spindle
(Between 400-900 for writes)
How should I be measuring iops?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Richard Jahnel
How many iops per spindle are you getting?

A rule of thumb I use is to expect no more than 125 iops per spindle for 
regular HDDs.
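
To put that rule of thumb in bandwidth terms (illustrative arithmetic only):

  125 IOPS x 8 KB   =  ~1 MB/s per spindle of random I/O
  125 IOPS x 128 KB = ~16 MB/s per spindle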

SSDs are a different story of course. :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss