Mladen Gogala wrote:
> 
> I believe that this would be the best solution:
> DECLARE
>      RowCount       NUMBER        := 0;
> 
> BEGIN
>      /* This will work if the RESOURCE table has the "parallel"
>      attribute set. In 8i, table needs to be partitioned as well */
> 
>      EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
>      SELECT Count(*)
>      INTO   RowCount
>      FROM   RQMT;
> 
>      IF RowCount > 0 THEN
> 
>          INSERT /*+ APPEND */ INTO RESOURCE
>              SELECT  Resource_Id, Classification
>              FROM RQMT;
> 
>          RowCount      := SQL%RowCount;
> 
>          DBMS_OUTPUT.PUT_LINE ('TABLE Resource: ' || RowCount ||
>              ' Rows transitioned.');
>          COMMIT;
>      ELSE
>          DBMS_OUTPUT.PUT_LINE ('TABLE Resource is empty. No data
> transitioned.');
>      END IF;
> 
>      EXCEPTION
>      WHEN OTHERS THEN
>          Raise NOT_LOGGED_ON;
> END;
> /
> On 10/16/2003 01:29:33 PM, [EMAIL PROTECTED] wrote:
> > Yeah, don't commit every 1000 records; do it in one shot. Committing
> > in batches is going to be much slower.
> >
> > Why do you want to do it this way? I've done 100m inserts with just an
> > insert select and one commit.
> > >
> > > From: Maryann Atkinson <[EMAIL PROTECTED]>
> > > Date: 2003/10/16 Thu AM 11:54:33 EDT
> > > To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
> > > Subject: insert and commit 1000 records at a time
> > >
> > > I have 2 tables, Rqmt and Resource, same structure.
> > >
> > > I need to take all almost-one-million records from Rqmt and
> > > insert them to Resource. So far this worked ok:
> > >
> > > DECLARE
> > >      RowCount       NUMBER        := 0;
> > >
> > > BEGIN
> > >      SELECT Count(*)
> > >      INTO   RowCount
> > >      FROM   RQMT;
> > >
> > >      IF RowCount > 0 THEN
> > >
> > >          INSERT INTO RESOURCE
> > >              SELECT  Resource_Id, Classification
> > >              FROM RQMT;
> > >
> > >          RowCount      := SQL%RowCount;
> > >
> > >          DBMS_OUTPUT.PUT_LINE ('TABLE Resource: ' || RowCount ||
> > >              ' Rows transitioned.');
> > >          COMMIT;
> > >      ELSE
> > >          DBMS_OUTPUT.PUT_LINE ('TABLE Resource is empty. No data
> > > transitioned.');
> > >      END IF;
> > >
> > >      EXCEPTION
> > >      WHEN OTHERS THEN
> > >          Raise;
> > > END;
> > > /
> > >
> > >
> > > But now I need to commit every 1000 records. Any suggestions as to
> > > what would be the best way? I don't think ROWNUM would help here,
> > > because it would pick the same 1000 records every time, causing
> > > primary key violation...
> > >
> > >
> > > thx
> > > maa
> > >
> > > --
> > > Please see the official ORACLE-L FAQ: http://www.orafaq.net
> > > --
> > > Author: Maryann Atkinson
> > >   INET: [EMAIL PROTECTED]
> > >


Why do you need the first COUNT(*)? If the table you have to insert
from is big, it's a waste of time. You can check SQL%ROWCOUNT after the
insert in all cases.
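To illustrate, here is a stripped-down sketch of the original block with
the preliminary COUNT(*) removed; the messages and column names are the
ones from the thread:

```sql
DECLARE
    RowCount NUMBER := 0;
BEGIN
    INSERT INTO RESOURCE
        SELECT Resource_Id, Classification
        FROM   RQMT;

    -- SQL%ROWCOUNT is 0 when RQMT was empty, so no separate
    -- COUNT(*) pass over the source table is needed.
    RowCount := SQL%ROWCOUNT;

    IF RowCount > 0 THEN
        DBMS_OUTPUT.PUT_LINE ('TABLE Resource: ' || RowCount ||
            ' Rows transitioned.');
    ELSE
        DBMS_OUTPUT.PUT_LINE ('TABLE Resource is empty. No data transitioned.');
    END IF;
    COMMIT;
END;
/
```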

Otherwise I fully agree with the implicit suggestion that you should
question the reason for committing every 1000 rows. It would force you
to adopt row-by-row logic, which will kill performance.
The most acceptable solution might perhaps be an OCI program, in which
you would fetch the rows 1000 at a time into arrays and insert them
likewise. I don't see any way to do something similar in PL/SQL, but
it's close to midnight here and I am getting pretty tired.
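For what it's worth, if intermediate commits really were unavoidable, a
PL/SQL approximation of that array technique is possible from 8i onward
using BULK COLLECT ... LIMIT with FORALL. This is only a sketch, using
the table and column names from the thread, and with the usual caveat
that committing inside the loop makes a clean restart after failure
harder:

```sql
DECLARE
    CURSOR c IS
        SELECT Resource_Id, Classification FROM RQMT;

    TYPE IdTab    IS TABLE OF RQMT.Resource_Id%TYPE    INDEX BY BINARY_INTEGER;
    TYPE ClassTab IS TABLE OF RQMT.Classification%TYPE INDEX BY BINARY_INTEGER;

    Ids     IdTab;
    Classes ClassTab;
BEGIN
    OPEN c;
    LOOP
        -- Array-fetch up to 1000 rows per round trip
        FETCH c BULK COLLECT INTO Ids, Classes LIMIT 1000;
        EXIT WHEN Ids.COUNT = 0;

        -- Array-insert the whole batch in one statement execution
        FORALL i IN 1 .. Ids.COUNT
            INSERT INTO RESOURCE (Resource_Id, Classification)
            VALUES (Ids(i), Classes(i));

        COMMIT;   -- one commit per batch of up to 1000 rows
    END LOOP;
    CLOSE c;
END;
/
```

It stays far faster than true row-by-row processing, but a single
INSERT ... SELECT with one commit, as suggested above, remains the
simplest and fastest option.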

-- 
Regards,

Stephane Faroult
Oriole Software
