This is what I am inserting per record.
INSERT INTO table VALUES (1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
'2006012410052941', 12345, 0, 0, 0, 1, 1, 0);

Other than that, I do some updates on the last field, setting its
value to 1 or 2.
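The inserts and updates above can be batched so that many records share one commit (and one disk sync). A minimal sketch in Python's sqlite3 module, assuming a hypothetical table named `readings` with the same 17 columns as the INSERT shown; the column names and the batch size of 12 are made up for illustration:

```python
import sqlite3

# Hypothetical table mirroring the 17-value INSERT from the message;
# SQLite allows untyped columns, so names alone are enough here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings (
    a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, status)""")

record = (1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
          '2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

# One transaction per batch means one disk sync per batch instead of
# one per INSERT.
with conn:  # commits on success, rolls back on exception
    conn.executemany(
        "INSERT INTO readings VALUES (" + ",".join("?" * 17) + ")",
        [record] * 12)  # e.g. one second's worth of records

# The "update the last field" step, parameterized the same way
# (the WHERE column is part of the made-up schema).
with conn:
    conn.execute("UPDATE readings SET status = ? WHERE k = ?", (2, 12345))

print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # → 12
```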


-----Original Message-----
From: Robert Simpson [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 27, 2006 12:06 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Save my harddrive!

----- Original Message ----- 
From: "nbiggs" <[EMAIL PROTECTED]>
>
> My application generates about 12 records a second.  I have no problems
> storing the records into the database, but started thinking that if I
> commit every 12 records, will my hard drive eventually die from extreme
> usage?  During a 24-hour period, up to 1 million records will be
> generated and inserted.  At the end of the day, all the records will be
> deleted and the inserts will start again for another 24 hours.
>
> Can I store the records in memory, or just not commit as often, maybe
> once every 5 minutes, while still protecting my data in case of a PC
> crash or unexpected shutdown due to user ignorance?
>
> Does anyone have any ideas for this type of situation?

How large are these rows?  12 inserts a second is chump change if
they're small ... If you're inserting 100k blobs then you may want to
rethink things.

At 12 rows per second (given a relatively small row), 24 hours of usage
will still cause less hard-drive churn than a single reboot of your
machine.  Consider that a fast app can insert about 1 million rows into
a SQLite table in about 15 seconds.
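The kind of throughput described above comes from wrapping the bulk insert in a single transaction. A rough sketch of such a measurement in Python's sqlite3, with a made-up three-column table (absolute timings will of course vary by machine):

```python
import sqlite3
import time

# Bulk-insert many rows inside one transaction and time it.
# Table name and columns are invented for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b REAL, c TEXT)")

rows = [(i, i * 0.5, "x") for i in range(100_000)]

start = time.perf_counter()
with conn:  # one transaction, therefore one commit for all rows
    conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"{count} rows inserted in {elapsed:.2f}s")
```

Committing after every row would instead force a sync per insert, which is exactly the hard-drive wear the original question worries about.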

Robert
