My data types are similar to those in your example. Using the suggestions on your page, I was able to increase performance a fair bit. I'm able to get near 240,000 inserts/second on my 3GHz Xeon Linux system with your example program.

I did find a place for optimization in the btree code which I'll mention in a separate post.

Thanks to all who replied.
--Chuck Pahlmeyer

Al Danial wrote:

On 7/21/05, Chuck Pahlmeyer - MTI <[EMAIL PROTECTED]> wrote:
I have an application in which I'd like to create a database as
quickly as possible. The application has a batch process at
startup which creates the data. I am using a single transaction
for all of the INSERT statements. I'm also using prepared statements
to alleviate some of the overhead for processing SQL text. Typical
table sizes are on the order of a few million rows. I can get
about 50,000 rows/second on my 3GHz Linux system, but would like to
do better.
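The pattern described above (one transaction around all INSERTs, plus a prepared statement reused for every row) can be sketched with Python's built-in sqlite3 module; the table name, columns, and row count here are illustrative, not from the original application:

```python
# Sketch of the batch-load pattern: a single transaction wrapping all
# INSERTs, with one parameterized statement reused for every row.
import sqlite3

# isolation_level=None gives manual transaction control, so the
# explicit BEGIN/COMMIT below really does bracket all the inserts.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE data (id INTEGER, x REAL, y REAL)")

rows = [(i, i * 0.5, i * 0.25) for i in range(100_000)]

conn.execute("BEGIN")
# executemany compiles the statement once and rebinds parameters,
# avoiding per-row SQL parsing overhead.
conn.executemany("INSERT INTO data VALUES (?, ?, ?)", rows)
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM data").fetchone()[0])
```

Without the enclosing transaction, each INSERT would commit (and hit the disk) individually, which is where most of the slowdown in naive loaders comes from.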

What are the data types of the columns?  For integers and floats
I've seen insert speeds of over 300,000 rows/second.  One thing that
helps a lot is building SQLite with optimization flags that are tweaked
to your CPU.  http://anchor.homelinux.org/SQLiteTuning shows some
good settings for Pentium4, Opteron, and Athlon.  That site also
gives specs for a system which can do nearly 310,000 inserts/sec
for a table having one integer and three floats.

Other things to look at:  disk drives with large densities (>= 300 GB)
and high RPM, different file systems (xfs has proven to be fast but
I bet the old ext2 may be faster still), different Linux kernels.

I've found a marginal (3%) performance boost by downloading the beta of GCC v4.1 and building SQLite with it (again, with all the optimization tweaks) instead of GCC 3.x.  But that's a lot of hassle for little
gain.            -- Al
