Hi!
Also, there is (or was?) a tradeoff between size
and number of datafiles.
Before 8i, if I recall correctly, all datafile
headers had to be physically visited (header SCN changes were written)
before a checkpoint could complete. If you have thousands of files, this adds
significant overhead. From 8i onwards the same work still has to be done, but
with incremental checkpointing the datafile headers are updated in groups, thus
distributing the spike in I/O load over a longer period of time.
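If you want to see how far the file headers lag behind, you can query the checkpoint SCN recorded in each header. A quick sketch (the view v$datafile_header and these columns exist from 8i on):

```sql
-- Compare the checkpoint SCN recorded in each datafile header;
-- headers that incremental checkpointing hasn't caught up with
-- yet will show an older CHECKPOINT_CHANGE#.
SELECT file#, checkpoint_change#, checkpoint_time
  FROM v$datafile_header
 ORDER BY checkpoint_change#;
```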
In 6.0, I believe there was no checkpoint process
and the log writer was the one responsible for updating all file headers. When the log
buffer was full, all transactions were halted. Others can comment on this, because I
haven't worked with Oracle 6.
Another consideration with datafile sizes was recovery in
case of a corrupt block - it's faster to restore and recover a 1GB file than a
10GB file, but with 9i RMAN block-level recovery this problem is relieved as
well.
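For the curious, 9i block-level recovery looks roughly like this from the RMAN prompt (the datafile and block numbers here are made up; in practice you'd take them from the ORA-01578 message or from v$database_block_corruption):

```sql
-- Recover just the corrupt block instead of restoring
-- and recovering the whole datafile (RMAN, 9i+).
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 1234;
```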
And last, a lot of people have had problems with
file sizes over 2GB, even in the last few years. Some filesystems simply didn't
support files over 2GB, and others could corrupt the whole file when it was extended
past 2GB (8.1.5 on Linux, for example). That's why sizes like 1G or 2040 MB were used
a lot (by me ;)
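So in practice, growing a tablespace meant adding files kept safely under that mark, something like this (the tablespace name and path are just examples):

```sql
-- Add a datafile sized just under the 2GB filesystem limit;
-- 2040M leaves headroom below 2048M.
ALTER TABLESPACE users
  ADD DATAFILE '/u02/oradata/PROD/users02.dbf' SIZE 2040M;
```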
Tanel.