So to give a little background on this, we have been benchmarking Oracle RAC on 
Linux vs. Oracle on Solaris.  In the Solaris test, we are using vxvm and vxfs.
We noticed that the same Oracle TPC benchmark at roughly the same transaction 
rate was causing twice as many disk I/Os to the back-end DMX4-1500.

So we concluded that either Oracle behaves very differently under RAC, or our 
filesystems are the culprit.  This testing is wrapping up (it all gets 
dismantled Monday), so we took the time to run a simulated disk I/O test with 
an 8K I/O size.


vxvm with vxfs:           2387 IOPS
vxvm with ufs:            4447 IOPS
ufs on the disk devices:  4540 IOPS
zfs:                      1232 IOPS

The only zfs tunings we have done are adding "set zfs:zfs_nocacheflush=1" to 
/etc/system and changing the recordsize to 8K to match the test.
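
For reference, those two changes look roughly like the following (the pool and 
filesystem names here are placeholders, not our actual ones):

   * in /etc/system (takes effect after a reboot):
   set zfs:zfs_nocacheflush = 1

   # per-filesystem recordsize, set to match the 8K benchmark I/O size:
   zfs set recordsize=8k testpool/fs1
   zfs get recordsize testpool/fs1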

I think the files we are using in the test were created before we changed the 
recordsize (recordsize only applies to files written after the change), so I 
deleted and recreated them and have started another test run...but does anyone 
have any other ideas?
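
In case it helps anyone else, the recreate step is just removing the old file 
and laying down a new one so it picks up the 8K recordsize; a rough sketch, 
with made-up paths and sizes:

   rm /testpool/fs1/io.tst
   mkfile 2g /testpool/fs1/io.tst    # new file is written with the 8K recordsize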

This is my first experience with ZFS on a commercial RAID array, and so far 
it's not that great.

For those interested, we are using the iorate command from EMC for the 
benchmark.  For each of the tests we have the same 13 LUNs presented.  In the 
non-ZFS tests, each LUN is its own volume and filesystem with a single file on 
that filesystem.  We are running 13 iorate processes in parallel (there is no 
CPU bottleneck in this either).
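
If anyone wants to sanity-check the "no CPU bottleneck" part on a similar 
setup, the standard Solaris tools are enough; something along these lines while 
the iorate processes are running (the 5-second interval is arbitrary):

   mpstat 5         # per-CPU utilization
   iostat -xnz 5    # per-LUN IOPS and service times, skipping idle devices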

For zfs, we put all of those LUNs into a single pool with no redundancy, 
created 13 filesystems, and are still running 13 iorate processes.
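
The pool layout is roughly the following (the pool name and ctd device names 
are made up here; in reality all 13 LUNs go on the zpool create line):

   zpool create testpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
   for i in 1 2 3 4 5 6 7 8 9 10 11 12 13; do
       zfs create -o recordsize=8k testpool/fs$i
   done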

We are running Solaris 10 U6.