On 2 Sep 2011, at 12:30pm, Terry Cumaranatunge wrote:

> We have an application that creates many small databases (over 100K) to be
> able to control the size of the database and provide more deterministic
> performance. At process startup, it opens many of these databases in a loop
> to keep them open for faster transaction response times. The behavior we are
> noticing is that it takes progressively longer for each
> sqlite3_open_v2 call to complete as the number of databases kept
> open increases. These are some of the measurements:
> 
> 5000 DBs = 3 secs
> 10000 DBs = 11 secs
> 15000 DBs = 35 secs
> 20000 DBs = 75 secs
> 
> Many processes can concurrently open 5000 DBs at the same time, and it takes
> about the same time as a single process doing the work. So it doesn't
> appear to be an OS-level issue with the number of open files.

I suspect that your application is having to work through a big list each time 
you specify a database to work on.  This might be caused by your own code, or 
there might be something inside one of the SQLite routines that needs to keep a 
list of open connections.  How are you storing your connection handles?  In 
other words, what are you handing sqlite3_open_v2() for its sqlite3** value?  
Are you perhaps using a big array, or a linked list?
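The difference the questions above are probing can be sketched like this (using Python's built-in sqlite3 module purely for illustration, since the original application uses the C API; the file names are made up): if handles are kept in a hash map keyed by path, finding a connection stays O(1), whereas walking an array or linked list to find the right handle grows with the number of open databases.

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: keep connection handles in a dict keyed by
# database path, so looking one up later is O(1) rather than a scan.
conns = {}
tmpdir = tempfile.mkdtemp()
for i in range(100):
    path = os.path.join(tmpdir, f"db_{i}.sqlite")
    # sqlite3.connect() plays the role of sqlite3_open_v2() here.
    conns[path] = sqlite3.connect(path)

# Constant-time retrieval by path -- no list walk as the count grows.
some_path = os.path.join(tmpdir, "db_42.sqlite")
conn = conns[some_path]
conn.execute("CREATE TABLE IF NOT EXISTS t(x)")

for c in conns.values():
    c.close()
```

If instead every lookup scans a list of N handles, opening and then using the Kth database costs work proportional to K, which would produce exactly the superlinear startup times reported above.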

As you can probably guess, creating 20000 databases with the same structure is 
something one doesn't often do, since this is what table columns are for.  
Although you no doubt have your reasons for doing so, you might find that 
merging them all into one database actually saves you time and memory.  Of 
course, I don't know how hard it would be to change your program to run a test 
with this way of doing things.
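One way to run such a test (again in Python's sqlite3 module for brevity; the table and column names here are invented for illustration) is to fold the per-database identity into a key column and index it, so "open database N, then query" becomes a single indexed lookup on one connection:

```python
import sqlite3

# Hedged sketch of the merge idea: one database, with what used to be
# a separate database per partition folded into a partition_id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items(partition_id INTEGER, k TEXT, v TEXT)")
conn.execute("CREATE INDEX idx_part ON items(partition_id, k)")

# Populate 1000 "partitions" that would each have been its own file.
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [(n, "key", f"value-{n}") for n in range(1000)])

# One open() at startup, then indexed lookups instead of re-opening files.
row = conn.execute(
    "SELECT v FROM items WHERE partition_id = ? AND k = ?",
    (42, "key")).fetchone()
```

Whether this preserves the deterministic-performance goal depends on the workload, but it removes the per-open cost entirely.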

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
