On Mon, 2009-06-29 at 16:52 +0200, Bohmer, Andre ten wrote:
> The blocksize of the database is 16k; db_file_multiblock_read_count is
> 64.
> 
> The buffer cache is around 8 GB.  It varies a bit (automatic tuning).
> The SGA uses huge pages.
> 
> The whole dataset may fit in the cache.

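(Side note on those settings: with 16 KB blocks and
db_file_multiblock_read_count = 64, each multiblock read can request up
to 16 KB x 64 = 1 MB, so a full scan here is as much a sequential
bandwidth workload as an IOPS one.)
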
How does he know the whole dataset may fit in the cache?  How big is the
GRID_YIELD table?  This certainly doesn't seem like a system where the
entire dataset is in the cache; it looks like an I/O-bound full table
scan.  Is the system running other queries at the same time?
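
A quick way to check, assuming the table really is named GRID_YIELD and
this is 10g or later (a sketch only; adjust owner and names):

    -- Size of the segment on disk
    SELECT ROUND(SUM(bytes)/1024/1024) AS mb
      FROM dba_segments
     WHERE segment_name = 'GRID_YIELD';

    -- Current size of the buffer cache
    SELECT name, ROUND(bytes/1024/1024) AS mb
      FROM v$sgainfo
     WHERE name = 'Buffer Cache Size';

If the segment is anywhere near the size of the cache (or bigger), the
"whole dataset fits" assumption doesn't hold.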

> From the SAN specialist:
> 
> Storage system HP EVA8000
> 112 spindles in this specific disk group (~75 servers with 200
> vdisks in total).
> 500 GB LUNs, VRAID5.
> FC-SCSI 10K disks
> SAN load sample graphic (not specific to this Oracle system):

I clipped the picture, but it showed a SAN that was pretty much
IOPS-saturated.  The disk group had a maximum recommended load of 5,089
IOPS but was currently running at 5,233, with peaks of over 6,000.
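
As a sanity check on those numbers (rule-of-thumb figures of mine, not
from your SAN team): a 10K RPM FC spindle is good for roughly 120-150
random IOPS, and RAID5 costs about four back-end I/Os per random write:

    112 spindles x ~130 IOPS     ~= 14,500 raw back-end IOPS
    effective front-end IOPS      = reads + 4 x writes   (RAID5 penalty)

With ~75 servers sharing the same 112 spindles, a recommended ceiling
around 5,089 front-end IOPS for this disk group is entirely plausible,
and you are already past it.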

> Also collected some iostat samples during a test query, see attached
> file iostat_oracle_data.txt.

The iostat output is interesting because it appears to show a very
uneven pattern of reads and writes.  The writes are what I find
curious: why is the system performing so many writes?  Is this redo
log activity, or perhaps sorts being pushed out to the temp tablespace?
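
You can narrow that down from the Oracle side; a rough sketch using the
standard v$ views:

    -- Are sorts spilling to disk?
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('sorts (memory)', 'sorts (disk)');

    -- Physical writes against datafiles vs. tempfiles
    SELECT 'data' AS kind, SUM(phywrts) AS writes FROM v$filestat
    UNION ALL
    SELECT 'temp', SUM(phywrts) FROM v$tempstat;

If 'sorts (disk)' climbs while the query runs, the writes are sort
spill to temp; if not, I'd look at redo next (e.g. 'log file parallel
write' waits in v$system_event).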

No matter what, the system appears to be performing within reasonable
limits given the IOPS available on the backend storage; based on the
graph you sent, it is using all of the IOPS your storage array has.
You either need more spindles on a less loaded SAN, or you need to
cache more of the data on the Oracle side (which wouldn't help the
first run, but should help subsequent ones).  I really don't think
there is a problem with the RHEL server itself.
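
If you do go the caching route, v$db_cache_advice will estimate whether
a larger cache would actually cut the physical reads, and you could
consider pinning the hot table in the KEEP pool (a sketch, assuming the
table name; the KEEP pool needs db_keep_cache_size set, since automatic
SGA tuning doesn't manage it):

    -- Estimated physical-read factor at various cache sizes
    SELECT size_for_estimate AS mb, estd_physical_read_factor
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT' AND block_size = 16384;

    -- Move the table into the KEEP pool
    ALTER TABLE grid_yield STORAGE (BUFFER_POOL KEEP);

None of that helps the cold first run, but repeat runs should stop
hammering the SAN.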

Later,
Tom


