I have an application in which I'd like to create a database as
quickly as possible. The application has a batch process at
startup that creates the data. I am using a single transaction
for all of the INSERT statements, and prepared statements so the
SQL text isn't reparsed for every row. Typical
table sizes are on the order of a few million rows. I can get
about 50,000 rows/second on my 3GHz Linux system, but would like to
do better.
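
For reference, the shape of what I'm doing is roughly this (a
minimal sketch using Python's built-in sqlite3 module purely for
illustration; the table name, schema, and data are placeholders,
not my actual application):

    import sqlite3

    # Autocommit mode, so BEGIN/COMMIT can be issued explicitly.
    conn = sqlite3.connect("startup.db", isolation_level=None)
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

    conn.execute("BEGIN")  # one transaction around every INSERT
    for i in range(1000000):
        # The module caches the compiled statement, so the SQL
        # text is parsed once and reused for each row.
        conn.execute("INSERT INTO items (value) VALUES (?)", ("row-%d" % i,))
    conn.execute("COMMIT")
    conn.close()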

What other things would be useful to speed up inserts? Something
like a batch insert, where I could do tens, hundreds, or thousands
of rows at a time, might work well, since I have that level of
data granularity (see the sketch after this paragraph). Is there
any bias toward particular key values that would optimize
insertions? I've been using autogenerated keys, and those seem to
perform about as fast as any other keys I've tried. Any other ideas?
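
By batch insert I mean something with the shape of the sketch
below (again Python's sqlite3 module, reusing the placeholder
table from the earlier sketch; executemany() is that module's way
of running one prepared statement over many parameter sets, which
may or may not map onto what the database engine itself offers):

    # executemany() binds each parameter tuple to one prepared
    # statement, the closest thing to a batch insert in this module.
    conn.execute("BEGIN")
    conn.executemany("INSERT INTO items (value) VALUES (?)",
                     (("row-%d" % i,) for i in range(1000000)))
    conn.execute("COMMIT")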

Thanks very much,
Chuck Pahlmeyer

