Hi!
 
Also, there is (or was?) a tradeoff between size and number of datafiles.
 
Before 8i, if I recall correctly, all datafile headers had to be physically visited (header SCN changes were written) before a checkpoint could finish. If you have thousands of files, this adds significant overhead. Starting from 8i the same work still has to be done, but with incremental checkpointing the datafile headers are updated in groups, thus distributing the jump in IO load over a longer period of time.
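You can actually see those header checkpoint SCNs yourself, for example with something like this (just a sketch, the exact columns of v$datafile_header may differ a bit between versions):

  -- checkpoint SCN and time as recorded in each datafile header
  SELECT file#, checkpoint_change#, checkpoint_time
    FROM v$datafile_header;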
 
In 6.0, I believe there was no checkpoint process and the log writer was the one responsible for updating all file headers. When the log buffer was full, all transactions were halted. Others can comment on this, because I haven't worked with Oracle 6.
 
Another thing with datafile sizes was recovery in case of a corrupt block - it's faster to restore and recover a 1GB file than a 10GB file, but with 9i RMAN block-level recovery this problem is relieved as well.
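For example, in a 9i RMAN script you can repair just the damaged block instead of restoring the whole file (the datafile and block numbers below are made up):

  # repair only the corrupt block, the rest of the file stays available
  BLOCKRECOVER DATAFILE 7 BLOCK 3;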
 
And last, a lot of people have had problems with file sizes over 2GB, even in the last few years. Some filesystems simply didn't support files over 2GB, and others could corrupt the whole file when it was extended over 2GB (8.1.5 on Linux, for example). That's why sizes like 1GB or 2040 MB were used a lot (by me ;)
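So if you want to autoextend but still stay safely under the 2GB mark, you can cap the file like this (the path here is just an example):

  -- let the file grow, but never past the 2GB danger zone
  ALTER TABLESPACE users
    ADD DATAFILE '/u02/oradata/orcl/users03.dbf' SIZE 1000M
    AUTOEXTEND ON NEXT 100M MAXSIZE 2040M;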
 
Tanel.
 

"Liu, Jack" <[EMAIL PROTECTED]> wrote:
Hi,
I want to increase a tablespace, and just want to know which way is better:
1. ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/oradata/orcl/users02.dbf'
SIZE 1M
AUTOEXTEND ON
NEXT 1M
MAXSIZE 100M;
 
2. ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/oradata/orcl/users02.dbf'
SIZE 100M;
 
Thanks,
 
Jack 
 


