On Tue, 14 Dec 2004, Christopher Petrilli wrote:

> On Tue, 14 Dec 2004 12:03:01 -0700 (MST), Ara.T.Howard
> <[EMAIL PROTECTED]> wrote:
>> On Tue, 14 Dec 2004, Christopher Petrilli wrote:

>>> Has anyone had any experience in storing a million or more rows in a
>>> SQLite3 database?  I've got a database that I've been building, which gets
>>> 250 inserts/second, roughly, and which has about 3M rows in it.  At that
>>> point, the CPU load is huge.
>>>
>>> Note that I've got syncing turned off, because I'm willing to accept the
>>> risks.
>>>
>>> Thoughts?
>>>
>>> Chris
>>>
>>> --
>>> | Christopher Petrilli
>>> | [EMAIL PROTECTED]

>> on linux perhaps?
>>
>>    cp ./db /dev/shm && a.out /dev/shm/db && mv /dev/shm/db ./db
>>
>> this will be fast.

> Right, but not really workable when total DB size is in gigabytes. :-)

ya never know - it's hard to beat the kernel with regards to io... gigabytes
of ram are cheap too, compared to a couple days of a programmer's time.

>> are you sure it's not YOUR 'building' code which is killing the cpu?  can
>> you gprof it?

> Yes, my code is using under 20% of the CPU.  The rest is basically blocked
> up in sqlite3 code and kernel time.  In order to eliminate all possibility
> of my code being the issue, I actually built a rig that prebuilds 10,000
> rows and inserts them in sequence repeatedly, putting new primary keys on
> them as it goes along.  So the system basically just runs in a loop doing
> sqlite calls.

this is probably a stupid question - but are the inserts inside of a transaction?
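
if they're not, something along these lines is what i mean - just a minimal
sketch, not your rig (the 'log' table, its columns, and the 'test.db'
filename are made up for illustration): prepare one INSERT, bind a new key
each time through the loop, and wrap the whole batch in a single
BEGIN/COMMIT via the c api:

   /* minimal sketch - canned table/columns, error checking mostly omitted */
   #include <stdio.h>
   #include <sqlite3.h>

   int main(void)
   {
     sqlite3      *db;
     sqlite3_stmt *stmt;
     int           i;

     if (sqlite3_open("test.db", &db) != SQLITE_OK) {
       fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
       return 1;
     }

     /* you already run with syncing off - shown here only for completeness */
     sqlite3_exec(db, "PRAGMA synchronous = OFF", 0, 0, 0);

     /* ignore the error if the table is already there */
     sqlite3_exec(db, "CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)",
                  0, 0, 0);

     /* one explicit transaction around the whole batch - without this,
        every INSERT runs in its own implicit transaction */
     sqlite3_exec(db, "BEGIN", 0, 0, 0);

     sqlite3_prepare(db, "INSERT INTO log (id, msg) VALUES (?, ?)", -1,
                     &stmt, 0);
     for (i = 0; i < 10000; i++) {
       sqlite3_bind_int(stmt, 1, i);            /* fresh primary key per row */
       sqlite3_bind_text(stmt, 2, "canned row", -1, SQLITE_STATIC);
       sqlite3_step(stmt);
       sqlite3_reset(stmt);                     /* reuse the prepared insert */
     }
     sqlite3_finalize(stmt);

     sqlite3_exec(db, "COMMIT", 0, 0, 0);
     sqlite3_close(db);
     return 0;
   }

build with something like 'gcc -o batch batch.c -lsqlite3'.  without the
explicit BEGIN/COMMIT each INSERT becomes its own implicit transaction,
which usually dominates bulk-load time - even with syncing off it tends to
be far slower than one big batch.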

-a
--
===============================================================================
| EMAIL   :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE   :: 303.497.6469
| When you do something, you should burn yourself completely, like a good
| bonfire, leaving no trace of yourself.  --Shunryu Suzuki
===============================================================================
