On Sep 28, 2009, at 17:58, Glenn Fawcett wrote:

Been there, done that, got the T-shirt... A larger SGA will *always* be more efficient at servicing Oracle requests for blocks: you avoid going through all of Oracle's I/O code, and the lookup simply reduces to a hash.


Sounds like good advice when the system is single-purpose. But if that leads to memory being partitioned into multiple chunks (e.g. in consolidation scenarios), then putting your beans in the unified filesystem cache should not be overlooked.

-r



http://blogs.sun.com/glennf/entry/where_do_you_cache_oracle

al...@sun wrote:
Hi all,

There is no generic answer to:
Is it better to have a "small SGA + big ZFS ARC" or "large SGA + small ZFS ARC"?

We can answer:
Have a large enough SGA to get a good cache hit ratio (higher than 90% for OLTP). Give the ZFS ARC a few GB (not less than 500 MB; usually more than 16 GB is not useful).
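As a rough sketch of acting on both points (the 16 GB cap and the hit-ratio query are only illustrations; zfs_arc_max is an /etc/system tunable and needs a reboot):

# /etc/system: cap the ZFS ARC at 16 GB (value in bytes), run as root
echo 'set zfs:zfs_arc_max = 0x400000000' >> /etc/system

# Approximate the Oracle buffer cache hit ratio from v$sysstat
sqlplus -s / as sysdba <<'EOF'
SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
EOF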

Then you have to tune. We know that the ZFS cache helps database reads. The caching strategies of ZFS and Oracle are different, and they usually help each other.

There is no reason to avoid caching the same data twice.
Example:
An Oracle query asks for an index range scan. ZFS detects the sequential reads and starts prefetching, trying to cache the data that Oracle will probably ask for next.
When Oracle asks for it, the data is cached twice.
All the caches are dynamic.


The best-known recordsize settings for an OLTP environment are:

Dataset       Recordsize
Table data    8K (db_block_size)
Redo logs     128K
Index         8K (db_block_size)
Undo          128K
Temp          128K


We still recommend a distinct zpool for the redo logs.
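As a sketch only (the pool, dataset and device names below are invented for illustration), the table above and the separate redo pool could be set up like this:

# Per-dataset recordsize, matching the table above.
# Note: recordsize only affects files written after the property is set.
zfs set recordsize=8k   dbpool/oradata    # table data, db_block_size = 8K
zfs set recordsize=8k   dbpool/oraindex   # indexes
zfs set recordsize=128k dbpool/oraundo    # undo
zfs set recordsize=128k dbpool/oratemp    # temp

# Distinct zpool for the redo logs, with a 128K recordsize
zpool create redopool c1t2d0
zfs create -o recordsize=128k redopool/redo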

Regards.

Alain Chéreau


Enda O'Connor wrote:
Richard Elling wrote:
On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:

Hello,

Given the following configuration:

* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as the file system for Oracle data
* Oracle 10.2.0.4 with 1.7 TB of data and indexes
* 1800 concurrent users with PeopleSoft Financial
* 20000 PeopleSoft transactions per day
* HDS USP1100 with LUNs striped across 6 parity groups (450xRAID7+1), 48 disks in total
* 2x 4 Gbps FC with MPxIO

What is the best Oracle SGA size to avoid cache duplication between Oracle and ZFS?

Is it better to have a "small SGA + big ZFS ARC" or "large SGA + small ZFS ARC"?

Which cache gives better overall performance?

In general, it is better to cache closer to the consumer (application).

You don't mention what version of Solaris or ZFS you are using.
For later versions, the primarycache property allows you to control the
ARC usage on a per-dataset basis.
-- richard
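For instance (the dataset name here is made up), the ARC can be told to keep only metadata for the datafile dataset, leaving the caching of data blocks to the SGA:

# Cache only ZFS metadata for the datafiles; other datasets keep the default (all)
zfs set primarycache=metadata dbpool/oradata
zfs get primarycache dbpool/oradata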

Hi,
adding oracle-interest.
I would suggest some testing, but the standard recommendations to start with are: keep the ZFS recordsize equal to the db block size, and keep the Oracle log writer on its own pool (a 128K recordsize is recommended for that one, I believe), since the log writer is an I/O-limiting factor. Use the latest KUs for Solaris, as they contain some critical fixes for ZFS/Oracle, e.g. 6775697. A small SGA is not usually recommended, but of course a lot depends on the application layer as well. I can only say: test with the recommendations above and then deviate from there; perhaps keeping the ZIL on a separate low-latency device might help (again, only analysis can determine all that). Then remember that even with a large SGA etc., performance can sometimes still degrade, i.e. you might need to instruct Oracle to actually cache, via the ALTER TABLE ... CACHE command etc.
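Two of those knobs, sketched with invented pool, device and table names (test on your own setup before applying anything):

# Put the ZIL on a separate low-latency log device (slog)
zpool add dbpool log c3t0d0

# Tell Oracle to keep a given table's blocks in the buffer cache
sqlplus -s / as sysdba <<'EOF'
ALTER TABLE app.hot_lookup CACHE;
EOF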

Getting familiar with Statspack/AWR will be a must here :-) as only an analysis of Oracle from an Oracle point of view can really tell what is working as such.
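For reference, both kinds of report are produced from sqlplus using scripts shipped under $ORACLE_HOME (assuming a standard 10g install; AWR additionally requires the Diagnostics Pack license):

# AWR report (follow the prompts for snapshot range and output file)
sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql

# Statspack report (after installing Statspack with spcreate.sql)
sqlplus perfstat @?/rdbms/admin/spreport.sql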

Enda




_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
