Some questions and a couple of comments regarding Dave's note:
1. Re the tuning from a blue collar DBA perspective: is it accurate to
paraphrase the described method as, 'No matter what might be causing the
performance problem, check this List Of Things first, using tools that
vary significantly from one platform to the next'?
2. Dave is multiplying Oracle's time statistics by 1/1000 (wrong)
instead of 1/100 (correct). Oracle is really reporting 'db file
sequential read' average latencies of .311cs = 0.003s (not 0.0003s), 'db
file scattered read' latencies of .506cs = 0.005s (not 0.0005s), 'db
file parallel write' latencies of 3.036cs = 0.030s (not 0.003s), and so
on. (Dave's I/O subsystem has consumed an average of 30ms for each 'db
file parallel write' call.)
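For the record, the conversion is nothing more than a division by 100, since these Oracle time statistics are reported in centiseconds. A quick sketch (the helper name is mine; the figures are the ones from the note above):

```python
def cs_to_seconds(cs):
    """Convert an Oracle centisecond time statistic to seconds (1 cs = 1/100 s)."""
    return cs / 100.0

# Dave's latencies, converted correctly:
db_file_sequential_read = cs_to_seconds(0.311)  # ~0.003 s, not 0.0003 s
db_file_scattered_read = cs_to_seconds(0.506)   # ~0.005 s, not 0.0005 s
db_file_parallel_write = cs_to_seconds(3.036)   # ~0.030 s, i.e. ~30 ms per call
```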
3. Note that it's only because the data are collected system-wide that
it is necessary to ignore the 'SQL*Net%' events. This is a waste,
though, because with properly time-scoped session-level data, the
'SQL*Net%' events constitute probably the easiest way to detect when you
have bad applications code (not the SQL, but the stuff that calls the
SQL).
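To illustrate the point: with properly time-scoped session-level data, the check amounts to asking what fraction of a session's waited time went to 'SQL*Net%' events. A minimal sketch, assuming you have already extracted (event name, seconds waited) pairs for one session over a properly scoped interval (the data shape and function name are mine, not anything Oracle ships):

```python
def sqlnet_share(session_events):
    """session_events: iterable of (event_name, seconds_waited) pairs for ONE
    session over a properly time-scoped interval.  Returns the fraction of
    waited time spent in 'SQL*Net%' events -- a large share points at the
    code that calls the SQL, not the SQL itself."""
    total = sum(t for _, t in session_events)
    if total == 0:
        return 0.0  # no recorded waits; nothing to attribute
    net = sum(t for e, t in session_events if e.startswith("SQL*Net"))
    return net / total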
4. 'db file sequential read' does *not* typically indicate a full-table
scan: since Oracle8.0, 'db file sequential read' events are almost always
single-block read calls (before that, the event could also indicate
multi-block reads of sort segment blocks into the PGA).
5. 'LGWR wait for redo copy' is *not* affected by the archiver not
keeping up. The alert log *is* a better way to detect this (because
'LGWR wait for redo copy' doesn't detect it at all). An even better way
is to look for occurrences of 'log file switch (archiving needed)'.
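The check itself is trivial once you have the event data in hand. A sketch, assuming you have pulled (event name, total waits) pairs from somewhere like V$SYSTEM_EVENT (the data shape and function name are mine):

```python
def archiving_needed_waits(system_events):
    """system_events: iterable of (event_name, total_waits) pairs, e.g.
    extracted from V$SYSTEM_EVENT (the pair shape here is an assumption).
    Nonzero waits on 'log file switch (archiving needed)' mean sessions
    have stalled because the archiver fell behind LGWR."""
    return sum(w for e, w in system_events
               if e == "log file switch (archiving needed)")
```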
6. 'latch free' -- Question: Does anybody know what total_timeouts
means for the 'latch free' event? I see nothing in v$latch that possibly
corresponds to something that could be called a timeout. And nothing
in either algorithm that Oracle uses to acquire a latch makes any sense
to call a timeout. (The two algorithms Oracle uses that I know about
are the spinning algorithm, and the latch wait posting algorithm.)
Consequently, I'm having a hard time coming up with what a high or low
rate of timeout/waits might mean. Thanks...
Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com
Upcoming events:
- Hotsos Clinic, Oct 1-3 San Francisco, Oct 15-17 Dallas, Dec 9-11
Honolulu
- 2003 Hotsos Symposium on Oracle® System Performance, Feb 9-12 Dallas
- Next event: NCOAUG Training Day, Aug 16 Chicago
-----Original Message-----
Morgan
Sent: Monday, August 12, 2002 9:53 PM
To: Multiple recipients of list ORACLE-L
Hi All,
Well, after 6 weeks of testing, here is the basic way
to operate SUN T3's as efficiently as possible.
Be prepared for arguments with High priests from the cult of
SAME.
SUN T3's are fiber attached hardware RAID 5 arrays with a modern cache.
The hardware engineers argue that if you need more I/Os/sec you just add
another array as a concatenated volume, the theory being that the
hardware is intelligent enough to use the cache to increase throughput.
It actually works as they claim. They never did explain why it wasn't a
single point of failure in the end, though.
My hardware was 3 4810's, each with 4-8 CPUs, 4-8 GB, and 2-4 bricks
per machine.
First insist that multiple bricks be mounted on at least 2 mount points
(D2 and D3). DO NOT USE the forcedirectio option. I don't know why, but
I have been unable to take less than a 40% throughput hit with it turned
on. And I don't care what other people say, no matter how much respect I
have for them.
Insist on at least one JBOD for oracle binaries and configs
Insist on at least one JBOD for redo logs (D1)
This is a bare minimum.
One set of redo on D1
One set of redo on D2
Archive logs, Rollback and Temp on D3
All data files where needed on D2
Next Level up
Add another JBOD for redo and move redo on to it
Move Rollback and Temp to D2
At this point, to get more throughput, you have to take the
JBOD to raw devices. Or try forcedirectio on these devices :)
If even better performance is needed, add more JBOD for rollback
and redo.
If more disk space is needed, get another brick.
Which leads me to the recent discussion on the proper way to tune.
Huh? Why make it so complex?
Tuning from a blue collar DBA perspective:
Assess the machine first
No matter what your ratios are or what you're waiting for:
sar to see if the machine is ever pinned
vmstat to see your queues and paging
iostat to see disk activity
top at timed intervals to catch rogue jobs
read your logs and config files
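The machine-assessment pass above can be sketched as a single snapshot script. The flags below are common batch-mode invocations and are assumptions -- as noted, the tools vary significantly from one platform to the next, so adjust per host:

```python
import shutil
import subprocess

# One snapshot per tool from the checklist above.  Flags are typical
# Linux/Solaris batch-mode invocations (an assumption -- adjust per platform).
CHECKS = [
    ("sar", ["sar", "-u", "1", "3"]),        # CPU: is the machine ever pinned?
    ("vmstat", ["vmstat", "1", "3"]),        # run queues and paging
    ("iostat", ["iostat", "-x", "1", "3"]),  # per-disk activity
    ("top", ["top", "-b", "-n", "1"]),       # rogue jobs (batch mode)
]

def run_checks(checks=CHECKS):
    """Run each available tool once; return {name: captured stdout}.
    Tools not installed on this host map to None instead of failing."""
    results = {}
    for name, cmd in checks:
        if shutil.which(cmd[0]) is None:
            results[name] = None  # tool not present on this platform
            continue
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = proc.stdout
    return results
```

Run it at timed intervals (cron, for instance) and diff the snapshots; the point is the same as above -- assess the machine first, before touching Oracle.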
Then talk to the users
Is the system slow or is it specific jobs?
log on run ratio reports