JS wrote:
General Oracle zpool/zfs tuning notes, from my tests with Oracle 9i, the APS 
Memory Based Planner, and filebench. All tests were run on Solaris 10 Update 2 
and Update 3:

 -use an 8k recordsize on the ZFS file systems holding data files (matching 
Oracle's default 8k db_block_size)

definitely!
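For anyone following along, that's a one-liner (pool/dataset names here are 
hypothetical), and it needs to happen before the data files are created, since 
recordsize only affects files written afterwards:

    # match the ZFS recordsize to the 8k Oracle block size
    zfs set recordsize=8k dbpool/oradata
    # verify
    zfs get recordsize dbpool/oradata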

 -don't use zfs for redo logs - use ufs with directio and noatime (sketch 
below). Building redo logs on EMC RAID 10 pools presented as separate devices 
produced the most %busy headroom for the log volumes during high activity.

We are currently recommending separate (ZFS) file systems for redo logs.
Did you try that?  Or did you go straight to a separate UFS file system for
redo logs?
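For reference, the UFS setup described above would look something like this 
vfstab entry (the emcpower device name is hypothetical, in keeping with the 
EMC/PowerPath setup):

    # /etc/vfstab: UFS redo log fs with direct I/O, no atime updates
    /dev/dsk/emcpower0a  /dev/rdsk/emcpower0a  /u01/redo  ufs  2  yes  forcedirectio,noatime

or, as a one-off mount:

    mount -F ufs -o forcedirectio,noatime /dev/dsk/emcpower0a /u01/redo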

 -when using highly available SAN storage, export the disks as LUNs and use zfs 
to do your redundancy. Using array redundancy (say, 5 array-side mirrors that 
you then stripe together in a zpool) means that if any one of those mirrored 
devices gets too much io and triggers a bus reset, the whole machine can crap 
out and die. At that point it's better to export 10 raw disks and let zpool 
make your mirrors and your hot spares (sketch below). On Pillar storage, where 
you don't get direct access to the disk devices, I just made multiple luns and 
wasted a few extra blocks to give zpool local redundancy.
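A minimal sketch of that 10-disk layout (device names hypothetical): four 
2-way mirrors striped together, plus two hot spares:

    zpool create dbpool \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 \
        mirror c2t3d0 c3t3d0 \
        spare c2t4d0 c3t4d0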
-I found no big performance difference between PowerPath and MPxIO, though 
device names are easier to work with under PowerPath, and MPxIO is cheaper.
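If you want to try the MPxIO route, enabling it is one command on Solaris 10 
(it rewrites /etc/vfstab for you and asks for a reboot); the long scsi_vhci 
device names it produces are the naming downside mentioned above:

    # enable Sun's native multipathing (MPxIO)
    stmsboot -e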
-using the set_arc.sh script (mdb -k'ing a ceiling for the arc cache) to keep 
the arc cache low and leave a lot of memory wide open is essential for oracle 
performance. Its effectiveness is a little inconsistent, but I believe that's 
being looked into now. It'll be great when I can set the ceiling in /etc/system 
in Update 4 (examples below).
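For reference, this is the gist of what set_arc.sh pokes at; the 1 GB cap is 
just an example value, and the addresses vary by kernel, so use whatever 
address ::print reports:

    # Solaris 10 U2/U3: cap the ARC on a live kernel
    # mdb -kw
    > arc::print -a c_max        <- note the address it prints
    > <addr>/Z 0x40000000        <- write the 1 GB ceiling there
    > arc::print -a c            <- lower the current target too
    > <addr>/Z 0x40000000

and the Update 4 way, once zfs_arc_max becomes tunable from /etc/system:

    * /etc/system (Solaris 10 Update 4 and later): 1 GB ARC ceiling
    set zfs:zfs_arc_max = 0x40000000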
-sd:sd_max_throttle testing didn't show any great gain from values higher than 
20, the EMC-recommended setting.

This is not surprising. We see more issues with this when there is mixed
storage because other devices can be penalized by EMC's requirements.  Some
day, perhaps our grandchildren will have a protocol that does proper flow
control and we won't need this :-)
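For completeness, the throttle is an /etc/system tunable, and it applies to 
every device under the driver, which is exactly the mixed-storage penalty 
mentioned above:

    * /etc/system: cap outstanding commands per LUN at EMC's
    * recommended value
    set sd:sd_max_throttle=20
    * on systems where FC disks attach through ssd instead:
    set ssd:ssd_max_throttle=20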
 -- richard

-my best rule of thumb for creating zpools is to work out how many disks I'd 
normally use for the same Oracle setup built on EMC LUNs, then give about the 
same number of devices to the zpool. I'm still working on a better way to use 
only the storage I need while still getting top performance.

Specific performance notes for Oracle's APS and Memory Based Planner:
-after a point, IO and filesystem tuning don't seem to yield further 
performance gains.
-beyond the memory headroom the Memory Based Planner normally wants, excess 
memory doesn't seem to help; memory size and Oracle caching seem to matter 
less than wider memory bandwidth.
-faster processors were the best way to ensure direct performance gain in the 
Memory Based Planner tests.