No matter how big my database already is, inserts start at the same speed and
then keep getting slower.
That's why it is better to split 1000 files into 10 batches of 100 files each:
every 100-file batch is imported quickly. But I also get this file lock
error.
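A minimal sketch of the batching I mean, assuming a Python script with the
standard sqlite3 module (the thread does not say which language is used; the
"files" table and its "name"/"data" columns are placeholders of mine):

import sqlite3

def import_in_batches(db_path, file_names, batch_size=100):
    conn = sqlite3.connect(db_path)
    # Wait up to 5 seconds on a locked database instead of failing at once.
    conn.execute("PRAGMA busy_timeout = 5000")
    try:
        # Commit each batch separately, so every batch is its own short
        # transaction and starts from the fast end of the curve.
        for start in range(0, len(file_names), batch_size):
            batch = file_names[start:start + batch_size]
            with conn:  # BEGIN ... COMMIT around this batch
                conn.executemany(
                    "INSERT INTO files (name, data) VALUES (?, ?)",
                    ((name, open(name, "rb").read()) for name in batch),
                )
    finally:
        conn.close()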



Simon Slavin-3 wrote:
> 
> 
> On 9 Nov 2011, at 10:21pm, yqpl wrote:
> 
>> I'm starting a transaction,
>> then making a lot of inserts, and then committing.
>> I get about 30 inserts per second, but after a while it drops to
>> about 1-2 inserts per second. It takes about ~500 inserts to drop to
>> 1-2 inserts per second.
> 
> Here is a guess, but it is just a guess.
> 
> Your computer has a certain amount of memory, or disk cache, free.  While
> your whole SQLite database fits in that space, your process is fast.  Once
> the database gets bigger than that free space, the computer has to keep
> swapping bits of the file in and out, and this makes the process slower.
> So my guess is that the slowdown will be related not to the number of
> inserts so far, but to the total size of the database file so far.
> 
> Something that will help to clarify this is to get an idea of how big the
> files are that you are inserting.  There was a recent post to this list
> where the SQLite team had tested the relative speeds of holding entire
> image files in a SQLite database compared to holding just the paths of
> those files.
> 
> Simon.
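One way to check this guess would be to log the insert rate next to the
database file size while the import runs. A rough sketch, again assuming
Python with the sqlite3 module and the same made-up "files" table as above:

import os
import sqlite3
import time

def timed_import(db_path, rows, batch_size=100):
    # rows is an iterable of (name, data) tuples; commit one batch at a time
    # and print inserts/second together with the current database file size.
    conn = sqlite3.connect(db_path)
    done = 0
    t0 = time.time()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            with conn:  # one transaction per batch
                conn.executemany("INSERT INTO files (name, data) VALUES (?, ?)", batch)
            done += len(batch)
            batch = []
            rate = done / (time.time() - t0)
            size_mb = os.path.getsize(db_path) / 1e6
            print("%d rows, %.1f inserts/s, db ~%.1f MB" % (done, rate, size_mb))
    if batch:  # remaining rows that did not fill a whole batch
        with conn:
            conn.executemany("INSERT INTO files (name, data) VALUES (?, ?)", batch)
    conn.close()

If the inserts/second figure falls off as the file grows, regardless of how
many rows have gone in, that would point at the cache explanation above.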


