executed against a test database (dwdsa) the response time is as expected. However, when it
is executed against the production instance (dwods) it is 2.5 to 3
times slower.
I have traced the session on both occasions and, reading
Cary Millsap's book, I have spotted that the WAIT for SQL*Net message from
client is very high for the PARSE call of an INSERT statement, whereas for the
test instance there is no delay.
I cannot however explain
12/26/2003 10:54 AM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
cc:
Subject: Re: undo and insert
What is ITL ?
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Wednesday, December 24, 2003 4:59 PM
Just the previous version of the changed columns,
plus an overhead of about 80 bytes which relates
to ITLs, linked lists, operation descriptions etc.
Bear in mind that undo relating to indexes is not
the same as undo relating to tables, though. An
update to an indexed column results in one index
And then there's the previous version of whichever
ITL entry gets taken by the transaction doing the
insert.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
The educated person is not the person
who can answer the questions, but the
person who can question the answers -- T. S
log files generated. Thank you.
zhu chao <[EMAIL PROTECTED]> wrote:
For insert, in order to roll back, Oracle will still have to get the rowid of the new
inserted rows, so that it can roll back when needed.
So there will still be undo.
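A quick way to see this for yourself is to check V$TRANSACTION from the session doing the insert; a sketch, assuming a scratch table T you can insert into:

```sql
-- In one session, leave the transaction open (do not commit yet):
INSERT INTO t SELECT * FROM all_objects WHERE ROWNUM <= 1000;

-- Undo blocks/records consumed by the current transaction:
SELECT used_ublk, used_urec
  FROM v$transaction;
-- used_urec is non-zero even for a pure insert.
```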
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Thursday, Dec
An undo segment is used to save the old value of data.
For an insert operation, there is no old data to be saved.
So, there should be no undo generated. Right?
Roger Xu
Database Administrator
Dr Pepper Bottling Company of Texas
(972)721-8337
Janne,
Thanks very much for your wonderfully detailed suggestion. I'll definitely use
it to analyze the redo.
David
From: Jan Korecki <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch load
; set serveroutput on size 10
SQL> set autotrace on STATISTICS
SQL> insert /*+ append */ into append_test select * from
[EMAIL PROTECTED];
35605 rows created.
Statistics
--
1213732 redo size
35605 rows processed
SQL> ro
checkpoint? Thanks for your inputs.
David
From: Mladen Gogala <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 12:49:28 -0800
What does it mean that redo log si
log size (100 MB) dominates the checkpoint frequency.
> The table has two indexes. We don't set them to unusable during inserting.
>
> David
>
>
> >From: Jan Korecki <[EMAIL PROTECTED]>
> >Reply-To: [EMAIL PROTECTED]
> >To: Multiple recipients of list ORA
t;[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 08:19:26 -0800
David Boyd wrote:
Hi All,
We have some batch loading jobs that truncate the tables first, then
insert i
We have 5 groups of redo log. Each group has two members. Each member is
100 MB.
David
From: Mladen Gogala <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Subject: Re: insert in batch loading
Date: Tue, 25 Nov 2003 07
David Boyd wrote:
Hi All,
We have some batch loading jobs that truncate the tables first, then
insert into the tables as select from tables through database link.
Those jobs run daily. Every time those jobs run, they cause
"cannot allocate new log, Checkpoint not complete"
How big are your redo logs? How many of them do you have?
On 11/25/2003 10:29:37 AM, David Boyd wrote:
> Hi All,
>
> We have some batch loading jobs that truncate the tables first, then insert
> into the tables as select from tables through database link. Those jobs run
> daily.
Hi All,
We have some batch loading jobs that truncate the tables first, then insert
into the tables as select from tables through database link. Those jobs run
daily. Every time those jobs run, they cause "cannot allocate new log,
Checkpoint not complete". All of tables
Hi ,
Does anyone have a sample code to insert/update/delete a BLOB into a table
from a
stored procedure.
Thanks
--
Please see the official ORACLE-L FAQ: http://www.orafaq.net
--
Author: Basavaraja, Ravindra
INET: [EMAIL PROTECTED]
Fat City Network Services-- 858-538-5051 http
At 04:44 PM 10/17/2003, you wrote:
Got it, thanks to both of you, I really appreciate it.
maa
MaryAnn,
To get back to the original question, use a MOD function on rownum to get
your commit point. Check the docs to make sure, but try
If mod(rownum, 1000) = 0
then
commit;
end if;
Daniel Fink
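A minimal PL/SQL sketch of the periodic-commit idea (table and column names follow the thread's RQMT/RESOURCE example; inside a cursor loop a row counter plays the role of ROWNUM):

```sql
DECLARE
  l_rows NUMBER := 0;
BEGIN
  FOR r IN (SELECT resource_id, classification FROM rqmt) LOOP
    INSERT INTO resource (resource_id, classification)
    VALUES (r.resource_id, r.classification);
    l_rows := l_rows + 1;
    IF MOD(l_rows, 1000) = 0 THEN
      COMMIT;            -- commit every 1000 rows
    END IF;
  END LOOP;
  COMMIT;                -- pick up the final partial batch
END;
/
```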
--
END IF;
Max = tbl_resource_id(tbl_resource_id.MAX)
FORALL j IN tbl_resource_id.FIRST .. tbl_resource_id.LAST
INSERT INTO RESOURCE VALUES
(tbl_resource_id(j), tbl_classification(j));
DBMS_OUTPUT.PUT_LINE ('TABLE Resource:
> > My bad. The SQL is not quite right: 'append' is a hint:
> > alter table resource nologging;
>
> dont know what nologging does.
>
>
> > insert /*+ append */ into resource
> > select * from rqmt;
>
> me no understand... me no see 1000 anywhere
MaryAnn,
To get back to the original question, use a MOD function on rownum to get
your commit point. Check the docs to make sure, but try
If mod(rownum, 1000) = 0
then
commit;
end if;
Daniel Fink
MaryAnn Atkinson wrote:
> I still dont get it...
> I dont know what I have done to have me
an I first asked the question...
--- [EMAIL PROTECTED] wrote:
> My bad. The SQL is not quite right: 'append' is a hint:
> alter table resource nologging;
dont know what nologging does.
> insert /*+ append */ into resource
> select * from rqmt;
me no understand... me no
Nologging suspends redo log entries generation FOR DIRECT OPERATIONS
ONLY! Direct operations do not use SQL, they use Lisp (just kidding).
Direct insert preformats blocks and appends blocks as a whole after
the highwater mark, without using SQL in the process.
On 10/17/2003 02:43:29 PM, MaryAnn
I still dont get it...
I dont know what I have done to have me confused more
than I first asked the question...
--- [EMAIL PROTECTED] wrote:
> My bad. The SQL is not quite right: 'append' is a hint:
> alter table resource nologging;
dont know what nologging does.
>
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
SELECT Count(*)
INTO RowCount
FROM RQMT;
IF RowCount > 0 THEN
INSERT /*+ APPEND */ INTO RESOURCE
SELECT Resource_Id, Classification
FROM RQMT;
RowCou
ioned as well */
>
> EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
> SELECT Count(*)
> INTO RowCount
> FROM RQMT;
>
> IF RowCount > 0 THEN
>
> INSERT /*+ APPEND */ INTO RESOURCE
> SELECT Resource_Id
I've fought
long and hard to explain why this rule doesn't make sense. One approach is
to consider the amount of rollback - if the row length is (perhaps) 100
bytes then a 1000-row insert is only looking to use about 10k of
rollback... How large is the rollback on your system?
Having said that,
My bad. The SQL is not quite right: 'append' is a hint:
alter table resource nologging;
insert /*+ append */ into resource
select * from rqmt;
Read up on direct load insert in the concepts manual,
along with nologging.
Bypass the redo and undo - no need for commits.
Just back
log into target database.
SQL> set long 32000 (or whatever if you have long datatype involved)
SQL> set arraysize 100
SQL> set copycommit 1000 <-- LOOKY!!
SQL> COPY FROM ${REMOTE_LOGIN}/[EMAIL PROTECTED] INSERT
${LOCAL_SCHEMA}.${THE_TABLE} USING ${QUERY};
In this case QUERY w
--- [EMAIL PROTECTED] wrote:
> That will work, slowly.
> You might like to try something like this
> insert into resource
> nologging
> select * from rqmt
> append;
How's that committing every 1000 records?
> Read up on the
That will work, slowly.
You might like to try something like this
insert into resource
nologging
select * from rqmt
append;
Read up on the 'append' and 'nologging' first.
Jared
Maryann Atkinson <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/16/2003 0
LE PARALLEL DML';
SELECT Count(*)
INTO RowCount
FROM RQMT;
IF RowCount > 0 THEN
INSERT /*+ APPEND */ INTO RESOURCE
SELECT Resource_Id, Classification
FROM RQMT;
RowCount := SQL%RowCount;
DBMS_OUTPUT.PUT_LINE ('TA
yeah, don't commit every 1000 records - do it in one shot; committing in batches is going to be much
slower.
why do you want to do it this way? I've done 100m inserts with just an insert select
and one commit.
>
> From: Maryann Atkinson <[EMAIL PROTECTED]>
> Date: 2003/10/16 Thu AM 11:54:33 EDT
I have 2 tables, Rqmt and Resource, same structure.
I need to take all almost-one-million records from Rqmt and
insert them to Resource. So far this worked ok:
DECLARE
RowCount NUMBER:= 0;
BEGIN
SELECT Count(*)
INTO RowCount
FROM RQMT;
IF RowCount > 0 T
: Tuesday, September 23, 2003
4:05 PM
To: Multiple recipients of list ORACLE-L
Subject: RE: Insert performance
I have already bumped
that up quite a bit as well, but I still get many waits.
Thanks,
Rick
Stephenson
Oracle
Database Administrator
Ovid
Technologies, Inc.
[EMAIL PROTECTED
22, 2003
11:10 PM
To: Multiple recipients of list
ORACLE-L
Subject: Re: Insert performance
At 09:44 PM 9/22/2003, you wrote:
Does anyone have any idea how to
improve performance for multiple inserts into a table that uses a sequence
generated primary key?
Is the sequence cache set to an
r
primary key values to be spread over different value ranges (either by using
two sequences, one is "main" seq, other is a "wrapping" cycling from 1 to 1000
for example - the result sequence would be wrapping seq * main seq, thus each
insert is going to different part in i
At 09:44 PM 9/22/2003, you wrote:
Does
anyone have any idea how to improve performance for multiple inserts into
a table that uses a sequence generated primary
key?
Is the sequence cache set to an appropriately large value?
Justin Cave
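If the cache turns out to be small, it can be raised without recreating the sequence; a sketch (the sequence name is a placeholder):

```sql
-- Check the current cache size:
SELECT sequence_name, cache_size
  FROM user_sequences
 WHERE sequence_name = 'MY_PK_SEQ';

-- Enlarge it; affects only future number allocations:
ALTER SEQUENCE my_pk_seq CACHE 1000;
```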
I have approximately 6 concurrent inserts per second into t
Rick,
I haven't tried this myself but you could consider a reverse key index (depending on
your version).
That way multiple inserts won't go to the same block.
However, (from Perf Tuning 101) you will incur much more IO than a normal index if you
do range scans, so you'd need to consider how yo
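The reverse-key option is just a keyword on the index definition (index and table names are placeholders):

```sql
-- Rebuild the existing index with reversed key bytes, so that
-- consecutive sequence values scatter across different leaf blocks:
ALTER INDEX my_table_pk REBUILD REVERSE;

-- Or at creation time:
CREATE INDEX my_table_pk_rev ON my_table (id) REVERSE;
```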
Does anyone have any idea how to improve performance for
multiple inserts into a table that uses a sequence generated primary key?
I have approximately 6 concurrent inserts per second into
this table which causes the primary key index to become a hot block. This
in turn causes "buffer
More information:
- Running the insert statement from SQL*Plus works fine.
- Normally this is run by executing a stored procedure that is in a
package. Specifically, a master procedure calls a series of procedures
within the package. The first 5 work fine, then this one doesn't complete.
-
bably would have accomplished
nothing.
[...]
> SQL> explain plan for
> 2 INSERT /*+ APPEND */INTO CURRJOBFACT NOLOGGING
> 3 ( bunch of columns )
> 21 SELECT /*+ use_hash(cjs, ps, md, a, o, x, wv, apc, wv1) */
> 22 CJS.JOBNBR,CJS.SOURCEFISCALYEAR,CJS.LIFETOUC
CBO
to consider HASH JOIN?
The original query had USE_HASH hints on the subqueries. Somewhere I
thought I recalled that you could only put hints on the outer SQL statement
-- is that true? So we tried adding the USE_HASH hint to the overall INSERT
statement, to no effect. Here is the relevant
You are welcome. Happens a lot. You see what should be there rather than
what IS there and wonder why it's not working as designed. Someone else,
uninvolved, comes along, takes one look at the thing, points out the error
and leaves you (me) feeling like an idiot.
At 12:54 PM 8/8/2003 -0800, yo
4: nam='db file scattered read' ela= 1 p1=197 p2=104157 p3=8
> WAIT #14: nam='db file scattered read' ela= 2 p1=197 p2=104165 p3=8
> WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104173 p3=8
> WAIT #14: nam='db file scattered read' ela= 1
But then it's not the same sql anymore and the access plan can be wildly
different. You need to use bind variables in your sqlplus session as well.
Unfortunately, even then it is not guaranteed that you'll get the same plan
as you get in the plsql proc.
At 06:44 AM 8/7/2003 -0800, you wrote:
Wo
197 p2=104181 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104189 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104197 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104205 p3=8
WAIT #14: nam='db file scattered read'
Not certain that this is the case, but could it be that statistics are missing for one
partition, thus occulting (as in 'undefined and something is undefined') statistics
for the other partitions and the table ? Not necessarily for the table you are trying
to insert into.
Not sure th
em down this far. It has
>kept me from rebuilding the table which probably would have accomplished
>nothing.
[...]
> > SQL> explain plan for
> > 2 INSERT /*+ APPEND */INTO CURRJOBFACT NOLOGGING
> > 3 ( bunch of columns )
> > 21 SELECT /
query had USE_HASH hints on the subqueries. Somewhere I
thought I recalled that you could only put hints on the outer SQL statement
-- is that true? So we tried adding the USE_HASH hint to the overall INSERT
statement, to no effect. Here is the relevant portion of the tkprof output.
Thanks again t
1 p1=197 p2=104173 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104181 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104189 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104197 p3=8
WAIT #14: nam='db file scattered read
QL statement
>-- is that true? So we tried adding the USE_HASH hint to the overall INSERT
>statement, to no effect. Here is the relevant portion of the tkprof output.
>Thanks again to everyone.
>
>Dennis Williams
>DBA, 80%OCP, 100% DBA
>Lifetouch, Inc.
>[EMAIL PROTECTED]
>
- Original Message -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Wednesday, August 06, 2003 6:39 PM
>
> We have a situation where a process can't insert into a partition of a
> partitioned table. The process just keeps running.
> - A
? So we tried adding the USE_HASH hint to the overall INSERT
statement, to no effect. Here is the relevant portion of the tkprof output.
Thanks again to everyone.
Dennis Williams
DBA, 80%OCP, 100% DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
INSERT /*+ APPEND use_hash(CJS, PS, APC, MD, C, WV1, WV) */INTO
We have a situation where a process can't insert into a partition of a
partitioned table. The process just keeps running.
- A stored procedure executes a SQL insert statement (listing below).
- It normally completes in 30 minutes, but now just runs for hours.
- Oracle 8.1.6 on Dec/Comp
ad' ela= 2 p1=197 p2=104165 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104173 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104181 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104189 p3=8
WAIT #14: nam='db fil
#14: nam='db file scattered read' ela= 1 p1=197 p2=104189 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104197 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104205 p3=8
WAIT #14: nam='db file scattered read' ela= 1 p1=197 p2=104213 p3
Dennis,
Could you please post the v$session_wait output. Do you have a 10046 trace?
Henry
-Original Message-
DENNIS WILLIAMS
Sent: Wednesday, August 06, 2003 11:40 AM
To: Multiple recipients of list ORACLE-L
We have a situation where a process can't insert into a partition of a
partit
ared
On Monday 28 July 2003 07:54, Jay Hostetter wrote:
> We have a developer that is inserting a large number of records using a VB
> program. An insert trigger exists on the table. This trigger checks a
> parent table for records. I know this trigger really is not needed, si
There are some relevant notes on MetaLink that may be of help.
Search on 'visual basic ora-1000'
Jared
On Monday 28 July 2003 07:54, Jay Hostetter wrote:
> We have a developer that is inserting a large number of records using a VB
> program. An insert trigger exists on
RE: Maximum Open Cursors on Insert Trigger
Just for completeness I'd put a cursor close statement in the exception clause ... I agree with you that for-update is a no-no in this context.
We have a developer that is inserting a large number of records using a VB program.
An insert trigger exists on the table. This trigger checks a parent table for
records. I know this trigger really is not needed, since a Foreign Key exists to
enforce referential integrity, so I plan to
Guido,
Where are you getting the data from - other database tables? Because you could
build a PL/SQL block that contains PL/SQL variables that can hold up to 32k
of data. You could build the string in the variable and insert it into the
LONG column.
Something like:
declare
str_var varchar2
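Filled out, the approach above might look like this (table and column names are placeholders; passing the string through a PL/SQL variable avoids the literal-length limit behind ORA-01704):

```sql
DECLARE
  str_var VARCHAR2(32000);
BEGIN
  -- Build the string in the variable (up to 32k in PL/SQL):
  str_var := '... long text assembled here ...';

  INSERT INTO my_table (id, long_col)
  VALUES (1, str_var);
  COMMIT;
END;
/
```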
ORACLE-L
Hi dear list members,
we run a database 8.1.6.2
I've run into a problem. I've got a table that
contains a LONG column (I know, I know).
I have to insert a string into it that has a length > 8500 Byte.
Whenever I issue the insert statement SQL/PLUS throws
the error 'ORA-0170
s,
> >
> > we run a database 8.1.6.2
> > I've run into a problem. I've got a table that
> > contains a LONG column (I know, I know).
> > I have to insert a string into it that has a
> length > 8500 Byte.
> > Whenever I issue the insert statemen
I've run into a problem. I've got a table that
contains a LONG column (I know, I know).
I have to insert a string into it that has a length > 8500 Byte.
Whenever I issue the insert statement SQL/PLUS throws
the error 'ORA-01704: string literal too long'.
Does anyone have an
age -
To: "Multiple recipients of list ORACLE-L" <[EMAIL PROTECTED]>
Sent: Wednesday, July 16, 2003 9:54 AM
> Hi dear list members,
>
> we run a database 8.1.6.2
> I've run into a problem. I've got a table that
> contains a LONG column (I know, I know).
&
> I've run into a problem. I've got a table that
> contains a LONG column (I know, I know).
> I have to insert a string into it that has a length > 8500 Byte.
> Whenever I issue the insert statement SQL/PLUS throws
> the error 'ORA-01704: string literal too long
Hi dear list members,
we run a database 8.1.6.2
I've run into a problem. I've got a table that
contains a LONG column (I know, I know).
I have to insert a string into it that has a length > 8500 Byte.
Whenever I issue the insert statement SQL/PLUS throws
the error 'ORA-01704:
those headers as
well.
6) Read the first free block on the list to memory. Insert data.
7) Read the appropriate branch and leaf blocks for each index
8) If space management is required, read the extent map(s) (dictionary
or bitmap)
As you can see, an insert of a single record can cause many reads.
ald
> > Sent: Monday, March 17, 2003 13:34
> > To: Multiple recipients of list ORACLE-L
> > Subject: Re: Using the /*+ append */ insert hint
> >
> >
> > APPEND came in at 8.0 so it will work there.
> >
> > The hint can be very useful, but it works
straint validations?
>
> RF
>
>
> -Original Message-
> To: Multiple recipients of list ORACLE-L
> Sent: 3/17/2003 11:22 AM
>
> In sqlarea I am finding some insert statements with high disk reads .
> why would an insert statement will generate disk reads ? An
insert
into ... select * from ..
-Original Message-
From: AK [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 17, 2003 12:22 PM
To: Multiple recipients of list ORACLE-L
Subject: insert stmt disk reads
In sqlarea I am finding some insert statements
with high disk reads . why
Do you have many indexes on your system ?
A table insert often results in index updates;
and for large tables with many indexes you
usually find that some of the index leaf blocks
have to be read from disk.
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk
Now available One-day tutorials
Constraint validations?
RF
-Original Message-
To: Multiple recipients of list ORACLE-L
Sent: 3/17/2003 11:22 AM
In sqlarea I am finding some insert statements with high disk reads.
Why would an insert statement generate disk reads? Any idea.
-ak
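To find the offending statements, a query along these lines against V$SQLAREA works (COMMAND_TYPE 2 is INSERT):

```sql
SELECT sql_text, disk_reads, executions,
       ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
  FROM v$sqlarea
 WHERE command_type = 2        -- INSERT statements only
 ORDER BY disk_reads DESC;
```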
:
insert stmt disk reads
In sqlarea I am finding some insert statements
with high disk reads . why would an insert statement will generate disk reads
? Any idea .
-ak
Partial List
Foreign Key Validation
Primary Key Validation
Reading blocks on the freelist for insert
Before Insert/After Insert Triggers
--
Daniel W. Fink
http://www.optimaldba.com
IOUG-A Live! April 27 - May 1, 2003 Orlando, FL
Sunday, April 27 8:30am - 4:30pm - Problem Solving with
In sqlarea I am finding some insert statements with
high disk reads. Why would an insert statement generate disk reads? Any
idea.
-ak
's name.
Stephane
-Original Message-
Sent: Monday, March 17, 2003 5:24 AM
To: Multiple recipients of list ORACLE-L
Hi all,
In a discussion with an Oracle rep last week it was suggested we use the /*+
append */ insert hint to allow some inserts to use direct-path. The
suggestion is i
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Toepke,
> Kevin M
> Sent: Monday, March 17, 2003 13:24
> To: Multiple recipients of list ORACLE-L
> Subject: RE: Using the /*+ append */ insert hint
>
>
> >From experience,
requires a
rethink of your backup strategy.
hth
connor
--- Grant Allen <[EMAIL PROTECTED]> wrote: > Hi
all,
>
> In a discussion with an Oracle rep last week it was
> suggested we use the /*+
> append */ insert hint to allow some inserts to use
> direct-path. The
> sug
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Connor
> McDonald
> Sent: Monday, March 17, 2003 13:34
> To: Multiple recipients of list ORACLE-L
> Subject: Re: Using the /*+ append */ insert hint
>
>
> APPEND came in at 8.0 so
>From experience, do not use the APPEND hint for singular inserts. You will
get tons of wasted space. Only use it for bulk inserts such as INSERT INTO
.. SELECT FROM, sqlldr, PL/SQL bulk inserts and the like.
Converting from bulk inserts without the append hint to bulk inserts with
the app
I can't say in regards to 7.3.x or 8.0.x, but in an 8.1.7.4, I've traced a complete
snapshot refresh and seen that Oracle is using an insert /*+ append */. Good, bad, or
otherwise, someone at Oracle believes in it.
I will say that it is very likely the hint will just be ignored if not
Hi all,
In a discussion with an Oracle rep last week it was suggested we use the /*+
append */ insert hint to allow some inserts to use direct-path. The
suggestion is interesting - the business logic won't have any problems with
the limitations this implies.
Has anyone had any experience
Current mode gets for index inserts is one
option. Another cause of CU gets is from
checking referential integrity. The block
holding the parent key is acquired in current
mode as the child row is inserted. There
is an optimisation to reduce this effect in
arrays/select inserts in Oracle 9, tho
I thought I'd repost to see if I could get a response. Anyone?
--- Paul Baumgartel <[EMAIL PROTECTED]> wrote:
> I'm looking at a client's tkprof output, showing among other things
> that the insertion of about 135,000 rows taking 450 seconds of CPU,
> and
> with current mode buffer gets numberin
I'm looking at a client's tkprof output, showing among other things
that the insertion of about 135,000 rows taking 450 seconds of CPU, and
with current mode buffer gets numbering almost 800,000. This is a
daily warehouse load process, and I know that indexes are left in place
during the load. Am
Thanks for the inputs.
Here are some important results.
1) The insert into the original table with 43 million records takes about 10 minutes;
the explain plan is attached in
the text file "original table".
insert into original table...
select ... from a,b
where a.col1=b.col1
Ignoring bulk insert, and assuming you are performing a single insert
statement, have you looked at the following (as already suggested):
1) What, and how many, indexes exist on the destination table. Each index
requires updating as records are inserted. If there are indexes not
required then
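Listing the destination table's indexes is a one-liner (the table name is a placeholder):

```sql
SELECT index_name, uniqueness, status
  FROM user_indexes
 WHERE table_name = 'MY_TABLE';
```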
ch and see what number
is best for u. considering my hardware etc, i got better performance with
5000 fetches at a time.
also, till 8.1.7. u could not have any stmts in the FORALL loop. only 1,
i.e. insert/update/delete. but with 9i u can have a pl/sql block & any
dynamic stmts too. moreover, with
me.
also, till 8.1.7. u could not have any stmts in the FORALL loop. only 1,
i.e. insert/update/delete. but with 9i u can have a pl/sql block & any
dynamic stmts too. moreover, with 9i u can also trap for individual
exception whereas with 8.1.7. if there was an error, entire bulk process
would r
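A sketch of the BULK COLLECT / FORALL pattern being described (column names follow the thread's RQMT/RESOURCE example; the LIMIT value is the knob worth benchmarking):

```sql
DECLARE
  CURSOR c IS SELECT resource_id, classification FROM rqmt;
  TYPE t_id  IS TABLE OF rqmt.resource_id%TYPE;
  TYPE t_cls IS TABLE OF rqmt.classification%TYPE;
  l_id  t_id;
  l_cls t_cls;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_id, l_cls LIMIT 5000;
    EXIT WHEN l_id.COUNT = 0;
    FORALL j IN 1 .. l_id.COUNT      -- one statement only, pre-9i
      INSERT INTO resource (resource_id, classification)
      VALUES (l_id(j), l_cls(j));
    COMMIT;                          -- commit per batch
  END LOOP;
  CLOSE c;
END;
/
```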
eleted the rows.
finally, try playing with the limit clause in the fetch and see what number
is best for u. considering my hardware etc, i got better performance with
5000 fetches at a time.
also, till 8.1.7. u could not have any stmts in the FORALL loop. only 1,
i.e. insert/update/delete. but with 9i
of course, u surely might have thought of APPEND hint &
dropping/disabling indexes etc.
-Original Message-
Ravindra
Sent: Friday, February 21, 2003 3:08 PM
To: Multiple recipients of list ORACLE-L
hi,
I have an insert statement that will insert about 40 records into a
table ha
list ORACLE-L
hi,
I have an insert statement that will insert about 40 records into a
table having 43million records.The values for the
insert statement are from a select statement that has a join.This query take
about 5-10minutes.What are the ways in
which we can speed up this process.the state
hi,
I have an insert statement that will insert about 40 records into a table having
43 million records. The values for the
insert statement are from a select statement that has a join. This query takes about
5-10 minutes. What are the ways in
which we can speed up this process? The statement
AME (Previously Existent) Data
Value , Does a DELETE & RE-Insert of the Same Row to the index happen
nevertheless ?
>
--
Please see the official ORACLE-L FAQ: http://www.orafaq.net
--
Author: Jonathan Lewis
INET: [EMAIL PROTECTED]
Fat City Network Services-- 858-538-5051 http://