Dick,

The database size is 200 GB (datafiles), not including the redo logs. This is
an internet content delivery company, and the database structure is very
simple. Processes called "getters" continually insert HTTP sites into the
database, while other processes check whether each URL has expired and, based
on that, delete it. So we do a lot of inserts, updates (updating the expire
date), and deletes. The interesting thing is that we don't do backups, not
even cold backups, and no snapshots either. Since the content of the internet
keeps changing, there is no point in backing up; the major concern is
performance. We use raw disk on Sun (Solaris 5.6) or HP. The db_block_size is
8k, db_block_buffers is 50000, and the hit ratio is around 85%.
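For what it's worth, that 85% figure comes from the classic V$SYSSTAT
calculation (a sketch only; the statistic names are the standard ones from
the Oracle dynamic performance views):

```sql
-- Approximate buffer cache hit ratio from cumulative V$SYSSTAT counters
SELECT ROUND(1 - (phy.value / (cur.value + con.value)), 4) AS hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```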

Joan


-----Original Message-----
Sent: Thursday, February 01, 2001 9:33 AM
To: Joan Hsieh; Multiple recipients of list ORACLE-L


Joan,

    Something is indeed very fishy in London.  The first question I would
ask is: what is creating so much redo generation?  9 x 250 MB is roughly
2.2 GB of data changes inside of that hour.  One thought is that the database
is still in hot backup mode from a previously interrupted backup.  Another is
that several users are doing a lot of temporary table creation, but not in
the temp space.

        What is the size of this database?  Where does the incoming data come
from?  Are there a lot of very frequently refreshed snapshots?  Do they do a
full refresh vs. a fast refresh?  What is the db_block_buffers hit ratio
like?  A low hit ratio can indicate an excessively busy DBWR writing dirty
blocks to disk, which can cause lots of checkpoints.
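If it helps, the hot-backup hypothesis is easy to check; any datafile left in
backup mode shows up in V$BACKUP (standard view, shown here as a sketch):

```sql
-- Datafiles still in hot backup mode appear with STATUS = 'ACTIVE'
SELECT file#, status, time
  FROM v$backup
 WHERE status = 'ACTIVE';
```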

Dick Goulet

____________________Reply Separator____________________
Author: "Joan Hsieh" <[EMAIL PROTECTED]>
Date:       2/1/2001 6:05 AM

Dear Listers,

Our database in London has tremendous CF (control file) enqueue lock
contention. Since I am new here, I checked the parameters and found
log_checkpoint_interval set to 3200 and log_checkpoint_timeout set to the
default (1800 sec). So I suggested setting log_checkpoint_interval to
100000000 and log_checkpoint_timeout to 0. The second thing I found is that
we average 6 to 8 log switches per hour. We have 40 redo logs, each of them
250 MB. Our log buffer is set to 1 MB. I believed that after we changed the
parameters, the control file global enqueue lock contention would be
relieved, but it got worse; we see 98% control file enqueue waits now. I
think we have too many log switches (9 per hour now) and suggested increasing
the logs to 500 MB, but our principal DBA is not convinced; he thinks the log
buffer size should play a more important role.
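For clarity, the change I proposed amounts to this init.ora fragment (values
as above; the intent is to checkpoint only at log switch):

```
# Checkpoint tuning as proposed above
log_checkpoint_interval = 100000000  # redo blocks, set far beyond a log's size
log_checkpoint_timeout  = 0          # disable time-based checkpointing
```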

Any ideas,

Joan

--
Please see the official ORACLE-L FAQ: http://www.orafaq.com
--
Author: Joan Hsieh
  INET: [EMAIL PROTECTED]

Fat City Network Services    -- (858) 538-5051  FAX: (858) 538-5051
San Diego, California        -- Public Internet access / Mailing Lists
--------------------------------------------------------------------
To REMOVE yourself from this mailing list, send an E-Mail message
to: [EMAIL PROTECTED] (note EXACT spelling of 'ListGuru') and in
the message BODY, include a line containing: UNSUB ORACLE-L
(or the name of mailing list you want to be removed from).  You may
also send the HELP command for other information (like subscribing).
