Give it a try and see what happens; you only need to enable the shared 
cache once, before any connections are opened.
   
  I wouldn't expect the blocking to be any different with the shared cache 
enabled. But you should see reduced I/O load, since the cache becomes larger 
and is accessible to all threads.
   
   
  HTH,
  Ken

Doug <[EMAIL PROTECTED]> wrote:
I have a heavily threaded app (I know, evil) where there might be 50 threads
accessing 10 databases. Each thread always calls sqlite3_open when it
starts working with a database and sqlite3_close when it's done (so no
sharing of handles across threads). A thread might have two or more handles
open to separate databases at once, and separate threads can be working on
the same database at once. It's extremely rare for a second process to ever
access the databases. I'd guess typically 70% of the database activity is
INSERTs or UPDATEs, 25% simple single-table SELECTs, and the occasional
large SELECT (joining tables, etc.).



Right now every database connection has its own page cache, all the default
size. Some threads do very little work and don't use their full cache,
while others could definitely benefit from a larger one. I'd like to have
a single, quite large cache that the threads share, in the hope that the
'smaller' threads would use what they need and the 'larger' threads would be
able to take advantage of the larger cache available to them.



Given that, is this a good scenario for using the shared cache? I've read
http://www.sqlite.org/sharedcache.html but I'm not confident enough in my
understanding to know whether I'll run into more or less blocking. 



Thanks for any insight.



Doug





_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

