I've noticed a similar thing happening.  The first 1/3rd loads quickly; the
remaining 2/3rds stagnates.  It appears that there is some kind of bottleneck
happening.  I thought it was the SAN.

My application begins a transaction, does all its inserts, and then
commits.  There could be millions of rows in the transaction.  Would it
perform better to commit in batches of, say, 250m or 500m?
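As a rough illustration, batched commits might look like the sketch below.  This is only a minimal example using Python's sqlite3 module; the table name `samples` and its columns are hypothetical, assuming only what the thread mentions (unix_time as the PRIMARY KEY, so every INSERT does a uniqueness check against that index).

```python
import sqlite3

def load_rows(conn, rows, batch_size=500_000):
    """Insert rows, committing after every batch_size rows instead of
    holding millions of inserts inside one giant transaction."""
    # Hypothetical schema -- the real one isn't shown in the thread.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS samples "
        "(unix_time INTEGER PRIMARY KEY, value REAL)"
    )
    cur = conn.cursor()
    n = 0
    for row in rows:
        cur.execute(
            "INSERT INTO samples (unix_time, value) VALUES (?, ?)", row
        )
        n += 1
        if n % batch_size == 0:
            # End the current transaction; sqlite3 implicitly starts
            # a new one on the next INSERT.
            conn.commit()
    conn.commit()  # commit the final partial batch
    return n

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load_rows(conn, ((t, float(t)) for t in range(1_000)), batch_size=250)
```

The right batch size is workload-dependent; the only firm rule is to keep each insert inside *some* explicit transaction rather than autocommitting row by row.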

Now's the time for me to make these changes, as the application is being
prep'd for production.

dvn

On Wed, Feb 8, 2012 at 4:29 PM, Simon Slavin <slav...@bigfraud.org> wrote:

>
> On 8 Feb 2012, at 10:22pm, Oliver Peters wrote:
>
> > It's the Primary Key that you're using, because for every INSERT it is
> checked whether unix_time is already present in a record.
> >
> > So the question is if you really need unix_time as a PK
>
> If you're batching your INSERTs up into transactions, try doing a VACUUM
> after each COMMIT.
>
> Simon.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>