RE: data file sizing question

2002-09-30 Thread Jamadagni, Rajendra

Thanks Tim,


I'll try to do this exercise today/tomorrow ... 


The reason I mentioned papers is that managers are easily impressed by things done by outsiders (they are considered experts; in-house knowledge is never sufficient). You probably know what I mean.

Raj
__
Rajendra Jamadagni  MIS, ESPN Inc.
Rajendra dot Jamadagni at ESPN dot com
Any opinion expressed here is personal and doesn't reflect that of ESPN Inc. 
QOTD: Any clod can have facts, but having an opinion is an art!


-Original Message-
From: Tim Gorman [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 30, 2002 10:18 AM
To: Multiple recipients of list ORACLE-L
Subject: Re: data file sizing question



Do your own testing.  Don't rely on papers.  Prove it yourself.  It's easy.


There are two types of "performance" implied in this discussion about extent allocation and deallocation:
  - performance of SQL statements like SELECT, INSERT, UPDATE, DELETE (i.e. DML)
  - performance of statements like CREATE, ALTER, DROP, and TRUNCATE (i.e. DDL)
There is no reason to suggest that the performance of DML might be affected by the number of extents, whether 1 extent or 500,000 extents.  Think about it.  Random, single-block reads (i.e. indexed scans) are completely unaffected by Oracle extent size and number;  they are block-level accesses, after all.  They care nothing about the concept of extent.  Sequential, multi-block reads (i.e. full table scans, fast full index scans) can only be affected if the extent size is extremely small, but are completely unaffected by the number of extents.  Extremely small extents can obviously affect a multi-block read if they consistently limit the number of blocks that can be read in one I/O.

Since testing this requires some non-trivial resources (i.e. test data and disk space) to prove, I'll leave the proving to those who have both (in addition to time).
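For what it's worth, a minimal sketch of such a DML test might look like the following (the tablespace names are hypothetical -- one built with very small uniform extents, one with very large ones -- and ALL_OBJECTS merely stands in for whatever volume of test data your disk space allows):

-- same data loaded into two tablespaces with very different extent sizes
create table scan_few  tablespace big_uniform_exts  as select * from all_objects;
create table scan_many tablespace tiny_uniform_exts as select * from all_objects;

set timing on
-- compare sequential multi-block read timings; repeat a few times so that
-- buffer-cache effects don't skew the comparison
select /*+ full(t) */ count(*) from scan_few  t;
select /*+ full(t) */ count(*) from scan_many t;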

This leaves DDL, which is mercifully easy to test on any environment using locally-managed tablespaces.  Do *not* do this type of testing in dictionary-managed tablespaces, as there is no point.  LMTs were created to alleviate the problems you'd be experiencing with DMTs...

Try an exercise like the following in SQL*Plus:
set timing on
create table bumpf (xxx number) tablespace <tablespace-name>;
begin
    for i in 1 .. <counter> loop
        execute immediate 'alter table bumpf allocate extent';
    end loop;
end;
/
drop table bumpf;
Re-run the test for different values of <counter>, all the way up to values like 250,000 or 500,000, if you like.  The timings for CREATE TABLE should be consistent, of course, as it is the exact same command each time.  The time spent in the PL/SQL loop should be roughly linear with the value of <counter>, the point being that each ALLOCATE EXTENT takes roughly the same amount of time.  You might observe an "elbow" in the plotted curve of timings at some point, which Rachel suggested is around 4,000 but which I think will vary depending on your environment.  On my laptop, I've seen the curve stay linear up into the 100,000s.  The time spent in DROP may not vary a great deal;  it should be roughly linear with the value of <counter>, but I find that it is much better than linear, which leads me to believe that some parts of a DROP/TRUNCATE operation are asynchronous.
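If you want the curve rather than a single end-to-end timing, a variation on the same loop (same <tablespace-name> and upper-bound assumptions as above) can print the elapsed time every 1,000 extents, which makes any "elbow" easy to spot from a single run:

set serveroutput on
create table bumpf (xxx number) tablespace <tablespace-name>;
declare
    t0 pls_integer := dbms_utility.get_time;   -- hundredths of a second
begin
    for i in 1 .. 100000 loop                  -- pick whatever upper bound you like
        execute immediate 'alter table bumpf allocate extent';
        if mod(i, 1000) = 0 then
            dbms_output.put_line(i || ' extents: ' ||
                (dbms_utility.get_time - t0) / 100 || ' sec');
        end if;
    end loop;
end;
/
drop table bumpf;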

Try it out!







Re: FILE SIZING

2001-12-07 Thread Jared . Still




1)  In our testing environment a log file switch is occurring every 2 minutes.
The size of the log files is 100M. What should be the size of the log files in
performance testing --- 500M or 1G? Is there any drawback in using large log
files? For performance testing we will be running in NOARCHIVELOG mode.

1 gig should work fine.  You may want to set log_checkpoint_timeout to a
value that forces a checkpoint every 20-30 minutes during periods of
inactivity.
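As a sketch, assuming an spfile (otherwise put the parameter in the init.ora), that could be as simple as:

-- value is in seconds; 1800 forces a checkpoint at least every 30 minutes
alter system set log_checkpoint_timeout = 1800;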

-

2)  Two of our tables are 175G and 160G. We are partitioning both tables into
10 partitions. What should be the ideal datafile size and initial extent size?
Are there any drawbacks in using big datafiles? We have always used about 2G
file sizes for small databases.

128 Meg extents would work nicely.  That's only about 150 extents per
partition.

--

3)  We will be using locally managed tablespaces. Which option is better
for a large database, uniform size or autoallocate?

Personally I prefer uniform size: no chance of fragmentation, which can
be an issue with large extents.
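A sketch of what such a tablespace might look like for one of the big partitioned tables (the name, file path, and sizes below are illustrative only):

create tablespace big_tab_data
    datafile '/u02/oradata/PERF/big_tab_data01.dbf' size 16000m
    extent management local uniform size 128m;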

Jared






RE: FILE SIZING

2001-12-07 Thread Jesse, Rich

If you're testing performance, wouldn't you want your test environment to
mirror what production would look like?  I would think that a performance
test of a 
NOARCHIVELOG db would have limited validity for an ARCHIVELOG mode
production, especially with 100M log switches every two minutes.
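A quick sanity check before trusting the numbers from such a test (nothing environment-specific assumed here):

-- confirms whether the test database is really running the same log mode
-- (ARCHIVELOG vs NOARCHIVELOG) as the production target
select log_mode from v$database;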

My $.02  :)

Rich Jesse  System/Database Administrator
[EMAIL PROTECTED] Quad/Tech International, Sussex, WI USA

-Original Message-
Sent: Friday, December 07, 2001 08:55
To: Multiple recipients of list ORACLE-L


Hi,


We have never worked on large databases, but we have to test our application
for scalability and performance. It would be great if DBAs handling large
databases could give some input on the following:
The environment is 9i on Sun Solaris 2.8. The database size will be about 375G.

1)  In our testing environment a log file switch is occurring every 2 minutes.
The size of the log files is 100M. What should be the size of the log files in
performance testing --- 500M or 1G? Is there any drawback in using large log
files? For performance testing we will be running in NOARCHIVELOG mode.
2)  Two of our tables are 175G and 160G. We are partitioning both tables into
10 partitions. What should be the ideal datafile size and initial extent size?
Are there any drawbacks in using big datafiles? We have always used about 2G
file sizes for small databases.
3)  We will be using locally managed tablespaces. Which option is better for a
large database, uniform size or autoallocate?

Thanks
--Harvinder



FILE SIZING

2001-12-07 Thread Harvinder Singh

Hi,


We have never worked on large databases, but we have to test our application
for scalability and performance. It would be great if DBAs handling large
databases could give some input on the following:
The environment is 9i on Sun Solaris 2.8. The database size will be about 375G.

1)  In our testing environment a log file switch is occurring every 2 minutes.
The size of the log files is 100M. What should be the size of the log files in
performance testing --- 500M or 1G? Is there any drawback in using large log
files? For performance testing we will be running in NOARCHIVELOG mode.
2)  Two of our tables are 175G and 160G. We are partitioning both tables into
10 partitions. What should be the ideal datafile size and initial extent size?
Are there any drawbacks in using big datafiles? We have always used about 2G
file sizes for small databases.
3)  We will be using locally managed tablespaces. Which option is better for a
large database, uniform size or autoallocate?

Thanks
--Harvinder



RE: Redo log file sizing

2001-12-07 Thread jeram

Hi

I will try to answer your questions:

The size of the redo log files depends on the transaction activity. You have
to monitor the checkpoint activity and the log switch activity; ideally, log
switches should happen no more often than about every 15 minutes per group.
You can get this information from v$loghist.
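A sketch of the kind of monitoring query meant here (v$log_history is the newer equivalent of v$loghist and works just as well):

-- log switches per hour over the last day; with switches at least
-- ~15 minutes apart you would expect no more than about 4 per hour
select trunc(first_time, 'HH24') as hour,
       count(*)                  as log_switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by 1;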

The OFA standard is a minimum of 2 groups per database; defining 3 groups is
fine, and each group should have a minimum of 2 (or 3) members.

Hope this can help you


Rgds/Jeram
-Original Message-
From: Herman Susantio
Sent: Thursday, December 06, 2001 10:20 PM
To: Multiple recipients of list ORACLE-L


Hi all,

Can anyone share how to do proper redo log file sizing?
Would a 1G redo log file be enough for 5 TB of datafiles?

And how do we define how many redo log files we need to have in one database?


Thanks & Regards
Herman






Re: Redo log file sizing

2001-12-07 Thread Connor McDonald

It's not the size of the database that counts, it's the
amount of changes (which generate redo) that matters.

The general consensus for redo logs is "big is
beautiful", but of course other things can alter that
(archiving frequency, population of a standby database,
etc).

1G would seem a sensible starting point.
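One rough way to put a number on "amount of changes" is to sample the 'redo size' statistic before and after a representative interval; the difference, divided by the elapsed seconds, gives the redo generation rate, and the log size can then be chosen so that a switch happens no more often than every 15-20 minutes:

-- run once, wait through a representative workload period, run again,
-- and subtract; the value is cumulative bytes of redo since startup
select name, value
from   v$sysstat
where  name = 'redo size';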

hth
connor

 --- Herman Susantio <[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> Can anyone share how to do proper redo log file sizing?
> Would a 1G redo log file be enough for 5 TB of datafiles?
> 
> And how do we define how many redo log files we need to
> have in one database?
> 
> Thanks & Regards
> Herman

=
Connor McDonald
http://www.oracledba.co.uk (mirrored at 
http://www.oradba.freeserve.co.uk)

"Some days you're the pigeon, some days you're the statue"





Redo log file sizing

2001-12-06 Thread Herman Susantio

Hi all,

Can anyone share how to do proper redo log file sizing?
Would a 1G redo log file be enough for 5 TB of datafiles?

And how do we define how many redo log files we need to have in one database?


Thanks & Regards
Herman


