I did a test to check whether the indexes are what makes it slow. Instead of
inserting into the on-disk database I used a ":memory:" database, into which
I copied only the tables (so, I assume, without the indexes), and then ran
the same inserts. It behaves the same way.

Does that prove the slowdown is not caused by the indexes?
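
For reference, this is roughly the kind of test I mean (a minimal Python
sketch, not my real code; the table name "files" and its columns are just
placeholders):

    import sqlite3, time

    # In-memory database with a bare table and no indexes.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE files (name TEXT, data BLOB)")

    t0 = time.time()
    with con:  # one transaction for the whole batch
        for i in range(5000):
            con.execute("INSERT INTO files VALUES (?, ?)",
                        ("file%d" % i, b"x" * 10000))
    print("%.1f inserts/sec" % (5000 / (time.time() - t0)))

The insert rate falls off in the same way as it does with the on-disk
database.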



Richard Hipp-3 wrote:
> 
> On Wed, Nov 9, 2011 at 6:17 PM, yqpl <y...@poczta.onet.pl> wrote:
> 
>>
>> No matter how big my database is, the inserts start at the same speed and
>> keep getting slower.  That's why it is better to split 1000 files into 10
>> batches of 100: each batch of 100 is imported quickly.  But I also get this
>> file lock error.
>>
> 
> First create your tables without indices.  Then do your inserts.  After all
> the data is in the table, then do your CREATE INDEX statements.
> 
> Also avoid UNIQUE constraints since they generate indices.  Instead add
> CREATE UNIQUE INDEX after all inserts are done.
> 
> Inserting content into a table is fast because it can simply append.
> Inserting content into an index is slower since it has to look up the
> correct place in the middle of the index and insert it there, which
> involves a binary search, etc.
> 
> The latest release of SQLite contains an optimization that makes CREATE
> INDEX go up to 100x faster for very large tables.  So be sure to use SQLite
> version 3.7.8 or 3.7.9 to do the CREATE INDEX.
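
If I follow this advice, the import would then look roughly like this (a
sketch only; "files", its columns, and the index name are placeholders for
my real schema):

    import sqlite3

    rows = [("file%d" % i, b"...") for i in range(1000)]  # example data

    con = sqlite3.connect("import.db")
    # Plain table: no UNIQUE constraint, no indexes yet.
    con.execute("CREATE TABLE IF NOT EXISTS files (name TEXT, data BLOB)")

    with con:  # single transaction for the bulk insert
        con.executemany("INSERT INTO files VALUES (?, ?)", rows)

    # Build the (unique) index only after all the data is in.
    with con:
        con.execute("CREATE UNIQUE INDEX IF NOT EXISTS files_name_idx "
                    "ON files(name)")
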
> 
> 
>>
>>
>>
>> Simon Slavin-3 wrote:
>> >
>> >
>> > On 9 Nov 2011, at 10:21pm, yqpl wrote:
>> >
>> >> I'm starting a transaction, then making a lot of inserts, and committing.
>> >> I get about 30 inserts per second, but after a while it drops to about
>> >> 1-2 inserts per second.  It takes about ~500 inserts to drop to that
>> >> rate.
>> >
>> > Here is a guess, but it is just a guess.
>> >
>> > Your computer has a certain amount of memory, or disk cache, free.  While
>> > your whole SQLite database fits in that space, your process is fast.  Once
>> > the database gets bigger than that amount of free space, the computer has
>> > to keep swapping bits of the file in and out, and this makes the process
>> > slower.  So my guess is that the slowdown will be related not to the
>> > number of inserts so far, but to the total size of the database file so
>> > far.
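
One way I could check this guess is to log the insert rate next to the
database file size as the import runs. A rough sketch (placeholder table and
file names, blob size picked arbitrarily):

    import os, sqlite3, time

    con = sqlite3.connect("import.db")
    con.execute("CREATE TABLE IF NOT EXISTS files (name TEXT, data BLOB)")

    for batch in range(50):
        t0 = time.time()
        with con:  # one transaction per batch of 100 inserts
            con.executemany(
                "INSERT INTO files VALUES (?, ?)",
                (("f%d_%d" % (batch, i), b"x" * 100000) for i in range(100)))
        rate = 100 / (time.time() - t0)
        size_mb = os.path.getsize("import.db") / 1e6
        print("batch %d: %.1f inserts/sec, db %.1f MB" % (batch, rate, size_mb))

If the rate tracks the file size rather than the number of inserts so far,
that would point at this cache explanation.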
>> >
>> > Something that will help to clarify this is to get an idea of how big the
>> > files are that you are inserting.  There was a recent post to this list
>> > where the SQLite team had tested the relative speeds of holding entire
>> > image files in a SQLite database compared to holding just the paths of
>> > those files.
>> >
>> > Simon.
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org
