Thanks for the replies, everyone. Actually, I didn't include the code, but I do make a brief mention of using batch inserts with a transaction ("> //every dataInsertPs gets added to a batch and committed every 1000 records").
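For reference, the pattern I mean looks roughly like this. This is a minimal sketch, assuming the Xerial sqlite-jdbc driver is on the classpath; the `data` table and its columns are illustrative, not my actual schema:

```java
import java.sql.*;

public class BatchInsertDemo {
    public static void main(String[] args) throws SQLException {
        // Assumption: the Xerial sqlite-jdbc driver is available.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, value DOUBLE)");
            }
            // Turning off auto-commit makes JDBC open a transaction for you;
            // no explicit BEGIN statement is issued by application code.
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO data (value) VALUES (?)")) {
                for (int i = 1; i <= 2500; i++) {
                    ps.setDouble(1, i * 0.5);
                    ps.addBatch();
                    if (i % 1000 == 0) {
                        ps.executeBatch();
                        conn.commit(); // maps to a COMMIT; next statement starts a new txn
                    }
                }
                ps.executeBatch(); // flush the remaining partial batch
                conn.commit();
            }
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM data")) {
                rs.next();
                System.out.println(rs.getInt(1)); // 2500
            }
        }
    }
}
```

The key point is that `conn.setAutoCommit(false)` plus `conn.commit()` is the JDBC-level way of expressing the BEGIN/COMMIT pair, so the driver handles the transaction statements itself.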
I am using JDBC, so I do not use BEGIN and END statements. Do I need to use BEGIN and END *ALONG WITH* the JDBC API transaction commands? I don't think I do, since using JDBC transaction objects shows different insert times than not using them. Please let me know.

-Julian

On Wed, Oct 29, 2008 at 2:09 AM, Neville Franks <[EMAIL PROTECTED]> wrote:

> The most common reason, which comes up here time and again, is that the
> inserts are not wrapped in a transaction. See the BEGIN and END statements
> in the docs. You haven't mentioned whether you are using a transaction, so
> I may be misguided in my reply. But the sample code doesn't!
>
> Wednesday, October 29, 2008, 7:59:54 PM, you wrote:
>
> JB> Hi everyone,
>
> JB> First off, I'm a database and SQLite newbie. I'm inserting many, many
> JB> records and indexing over one of the double attributes. I am seeing
> JB> my insert times slowly degrade as the database grows in size until
> JB> it's unacceptable - less than 1 write per millisecond (other databases
> JB> have scaled well). I'm using an Intel Core 2 Duo with 2 GB of RAM and
> JB> an ordinary HDD.
> JB> ...
>
> --
> Best regards,
> Neville Franks, http://www.surfulater.com http://blog.surfulater.com
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users