Re: [zfs-discuss] ZFS read performance terrible

2010-08-01 Thread Karol
I can achieve 140MB/s to individual disks until I hit a roughly 1GB/s system ceiling, which I suspect may be all that the 4x SAS HBA connection to a 3Gb/s SAS expander can handle (just a guess). Anyway, with ZFS or SVM I can't do much beyond single-disk performance in total (if that). I am thinking
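For reference on the ceiling guess: a 4-lane 3Gb/s SAS wide port is 12Gb/s on the wire, and after 8b/10b encoding that leaves roughly 1.2GB/s of payload, so topping out around 1GB/s once protocol overhead is counted is plausible. A quick raw-device read test that bypasses ZFS entirely (the device path is just an example) would be something like:

  dd if=/dev/rdsk/c5t0d0p0 of=/dev/null bs=1024k count=4096

Running that against each disk individually gives a per-spindle baseline to compare the pool numbers against.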

Re: [zfs-discuss] Getting performance out of ZFS

2010-08-01 Thread Karol
I wonder if this has anything to do with it: http://opensolaris.org/jive/thread.jspa?messageID=33739
Anyway, I've already blown away my OSOL install to test Linux performance - so I can't test ZFS at the moment.

Re: [zfs-discuss] Getting performance out of ZFS

2010-08-01 Thread Karol
Horace - I've run more tests and come up with basically the same numbers as you. On OpenSolaris I get about the same from my drives (140MB/s) and hit a 1GB/s (almost exactly) top-end system bottleneck when pushing data to all drives. However, if I give ZFS more than one drive (mirror, s
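A simple way to compare single-disk and mirrored performance with everything else held constant is to start with a one-disk pool and attach the second disk afterwards (pool and device names here are just placeholders):

  zpool create testpool c5t0d0
  (run the read/write test)
  zpool attach testpool c5t0d0 c5t1d0
  (wait for the resilver to finish, then re-run the same test)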

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
I'm about to do some testing with that dtrace script. However, in the meantime I've disabled primarycache (set primarycache=none), since I noticed that it was easily caching the /dev/zero data and I wanted to do some tests within the OS rather than over FC. I am getting the same results through dd. Vi
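For anyone repeating this locally, the sequence I mean is roughly the following (pool/dataset and file names are just examples):

  zfs set primarycache=none tank/test
  dd if=/dev/zero of=/tank/test/bigfile bs=1024k count=8192
  dd if=/tank/test/bigfile of=/dev/null bs=1024k

With primarycache=none the read pass has to come from the disks instead of being served out of the ARC.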

Re: [zfs-discuss] Moved to new controller, pool now degraded

2010-07-30 Thread Karol
I had the same problem after disabling multipath, when some of my device names changed. I performed a zpool replace -f, then noticed that the pool was resilvering. Once it finished, it displayed the new device name, if I recall correctly. I could be wrong, but that's how I remember it.
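The commands were along these lines, if it helps (pool and device names are examples, not the exact ones I used):

  zpool replace -f tank c3t5d0 c5t5d0
  zpool status -v tank

zpool status shows the resilver progress and, once it completes, the new device name.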

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
> You should look at your disk IO patterns, which will likely lead you to find unset IO queues in sd.conf. Look at http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to start.
Any idea why I would get this message from the dtrace script? (I'm new to dtrace / open
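If it does turn out to be the queue-depth issue that post describes, one coarse knob is the sd/ssd max throttle in /etc/system (the value below is only a guess - the right number depends on the HBA and expander - and it takes effect after a reboot):

  set sd:sd_max_throttle=32
  set ssd:ssd_max_throttle=32

Per-device entries in sd.conf are the finer-grained route the quoted advice is pointing at.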

Re: [zfs-discuss] ZFS read performance terrible

2010-07-30 Thread Karol
Good idea. I will keep this test in mind - I'd do it immediately, except that it would be somewhat difficult to connect power to the drives given the design of my chassis, but I'm sure I can figure something out if it comes to it...

Re: [zfs-discuss] Getting performance out of ZFS

2010-07-30 Thread Karol
I believe I'm in a very similar situation to yours. Have you figured something out?

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Hi Robert - I tried all of your suggestions, but unfortunately my performance did not improve. I tested single-disk performance and I get 120-140MB/s read/write to a single disk. As soon as I add an additional disk (mirror, stripe, raidz), my performance drops significantly. I'm using 8Gbit F
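One check worth doing before blaming ZFS (device paths are examples) is to read from two raw disks at the same time and see whether each still manages ~140MB/s:

  dd if=/dev/rdsk/c5t0d0p0 of=/dev/null bs=1024k count=4096 &
  dd if=/dev/rdsk/c5t1d0p0 of=/dev/null bs=1024k count=4096 &
  wait

If the per-disk numbers hold up there but collapse as soon as the same disks are in one pool, the problem is above the HBA/expander path.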

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
y not?
> This sounds very similar to another post last month.
> http://opensolaris.org/jive/thread.jspa?messageID=487453
> The trouble appears to be below ZFS, so you might try asking on the storage-discuss forum.
> -- richard
> On Jul 28, 2010, at 5:23 PM, Ka

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
> Update to my own post. Further tests more consistently resulted in closer to 150MB/s.
> When I took one disk offline, it was just shy of 100MB/s on the single disk. There is both an obvious improvement with the mirror, and a trade-off (perhaps the latter is controller related?).
>

Re: [zfs-discuss] [osol-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Sorry - I said the 2 iostats were run at the same time; actually, the second was run after the first, during the same file copy operation.

Re: [zfs-discuss] [osol-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Hi Eric - thanks for your reply. Yes, zpool iostat -v. I've re-configured the setup into two pools for a test: 1st pool: 8-disk stripe vdev; 2nd pool: 8-disk stripe vdev. The SSDs are currently not in the pool, since I am not even reaching what the spinning rust is capable of - I believe I have a de
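For completeness, the test layout is just two plain striped pools, created along these lines (pool and device names are examples):

  zpool create tank1 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0
  zpool create tank2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0

No log or cache devices, so the numbers reported are from the spinning disks themselves.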

Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
Hi r2ch - The operations column shows about 370 operations for read, per spindle (between 400-900 for writes). How should I be measuring IOPS?
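For measuring IOPS, the two usual views are (pool name is an example):

  iostat -xnz 5
  zpool iostat -v tank 5

iostat -xnz reports r/s and w/s per device, and zpool iostat -v reports the operations columns per vdev; in both cases give an interval, otherwise the figures are cumulative averages rather than the current rate.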

[zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
I appear to be getting between 2-9MB/s reads from individual disks in my zpool, as shown in zpool iostat -v. I expect upwards of 100MB/s per disk, or at least aggregate performance on par with the number of disks that I have. My configuration is as follows: Two Quad-core 5520 processors, 48GB ECC/REG ra