Hello, W!

Wednesday, February 6, 2013, 11:03:13 PM, you wrote:

WO> THAT'S INCREDIBLE!!!!!!!

WO> Importing 3,600,000 records row by row takes about 14 hours. Importing
WO> 1,160,000 records using an external table takes seconds, less than a minute.

WO> Thanks very much to all for the advice.
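By the way, for anyone reading this in the archive: the external-table
trick looks roughly like the sketch below. This is a minimal sketch,
assuming the fdb Python driver; the file, table and column names are
hypothetical, the fixed-width data file must be readable by the server
process, and ExternalFileAccess in firebird.conf must allow its path.

  import fdb

  con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                    user='SYSDBA', password='masterkey')

  # Each line of the external file is read as fixed-width CHAR fields;
  # the trailing CHAR(2) absorbs the CR/LF line terminator.
  con.execute_immediate("""
      CREATE TABLE EXT_ORDERS EXTERNAL FILE '/data/orders.txt' (
          ORDER_ID CHAR(10),
          AMOUNT   CHAR(12),
          CRLF     CHAR(2)
      )""")
  con.commit()

  # A single INSERT ... SELECT moves all rows in one statement, which
  # is what makes this so much faster than row-by-row inserts.
  con.execute_immediate("""
      INSERT INTO ORDERS (ORDER_ID, AMOUNT)
      SELECT CAST(ORDER_ID AS INTEGER), CAST(AMOUNT AS NUMERIC(12,2))
      FROM EXT_ORDERS""")
  con.commit()
  con.close()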

The most common ways to speed up an import:

1. Do not use cached datasets to read the source data. The more records
you read, the more cache the client program allocates, and the slower
the import becomes. The source dataset should buffer only one row at a
time.

2. Do not commit on each insert. That's the rule: the more commits,
the slower the performance. If you can't control transactions during
the import, you'll fail. (See the first sketch after this list.)

3. A lot of indices on the target table slows down inserts and updates.
Turn those indices off (inactive) for the import, or at least turn off
the less selective ones. (See the second sketch after this list.)

4. Do not use datasets for the target. Inserts must be made with a
plain "insert ..." query and nothing else.

5. Use parameters to insert the data. A plain-text insert with literal
values spends more time on prepare/execute cycles than a single prepare
followed by a change-params/execute loop. (The first sketch below
combines items 2, 4 and 5.)
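
Put together, items 2, 4 and 5 look roughly like this. Again only a
minimal sketch, assuming the fdb Python driver; the table, the columns,
the commit interval and the rows() source are all hypothetical:

  import fdb

  BATCH = 10000  # commit interval (an assumption; tune for your setup)

  def rows():
      # Hypothetical source: yields one (order_id, amount) tuple at a
      # time, so the client never caches the whole result set (item 1).
      yield from [(1, 9.99), (2, 19.50)]  # stand-in data

  con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                    user='SYSDBA', password='masterkey')
  cur = con.cursor()

  # Prepare once (item 5); only the parameter values change afterwards.
  ins = cur.prep("INSERT INTO ORDERS (ORDER_ID, AMOUNT) VALUES (?, ?)")

  n = 0
  for row in rows():
      cur.execute(ins, row)   # plain parameterized insert (item 4)
      n += 1
      if n % BATCH == 0:      # commit per batch, not per row (item 2)
          con.commit()
  con.commit()                # commit the final partial batch
  con.close()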
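
And for item 3, switching an index off and back on is plain DDL. A
sketch with fdb and a hypothetical index name; note that indices
backing primary key or unique constraints cannot be deactivated this
way:

  import fdb

  con = fdb.connect(dsn='localhost:/data/mydb.fdb',
                    user='SYSDBA', password='masterkey')

  # IDX_ORDERS_CUSTOMER is a hypothetical index name.
  con.execute_immediate("ALTER INDEX IDX_ORDERS_CUSTOMER INACTIVE")
  con.commit()

  # ... run the import here ...

  # Reactivating rebuilds the index in one pass, which is much cheaper
  # than maintaining it on every single insert.
  con.execute_immediate("ALTER INDEX IDX_ORDERS_CUSTOMER ACTIVE")
  con.commit()
  con.close()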

-- 
Dmitry Kuzmenko, www.ib-aid.com
