I just read the page on Shared-Cache Mode and it left me with some questions...

Q1: Is my understanding correct: Shared-Cache Mode is used within a single
process to get table-level locking, as opposed to the normal file-level locking?

How do I enable Shared-Cache Mode in the following situation?

SQLite is being used in an Apache module which uses the Apache DBD
API.  The DBD is a connection-pooling API.  In other words, the DBD
calls sqlite3_open_v2() and the module simply gets a connection from
the DBD.  Before the module code ever gets executed, the DBD creates 4
connections to the database.
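
For reference, the only two mechanisms I have found so far for getting a
connection into the shared cache are sketched below.  This is untested;
"app.db" and the flag combination are just placeholders for whatever the
DBD really passes to sqlite3_open_v2().

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db1 = 0, *db2 = 0;

        /* (a) Per-connection: pass SQLITE_OPEN_SHAREDCACHE to sqlite3_open_v2(). */
        if (sqlite3_open_v2("app.db", &db1,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_SHAREDCACHE,
                NULL) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db1));
        }

        /* (b) Process-wide: enable shared cache first; only connections
         * opened after this call are affected. */
        sqlite3_enable_shared_cache(1);
        if (sqlite3_open_v2("app.db", &db2,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                NULL) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db2));
        }

        sqlite3_close(db2);
        sqlite3_close(db1);
        return 0;
    }

Neither option is directly available to me, because the DBD owns the
sqlite3_open_v2() calls, which is what leads to the questions below.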

Q2: Is my understanding correct: the first time the module code gets a
connection and calls sqlite3_enable_shared_cache(1), the other three
connections will NOT be in the shared cache, but any future connections
will be?

Q3: Further, when the module code gets the second connection and calls
sqlite3_enable_shared_cache(1), will that connection be added to the same
shared cache?
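
In other words, my mental model of the sequence is something like the
untested sketch below; pool[], dbd_startup() and module_first_request()
are made-up names standing in for what the DBD and the module do.

    #include <sqlite3.h>

    /* pool[] stands in for the four connections the DBD opens before
     * any module code runs. */
    static sqlite3 *pool[4];

    static void dbd_startup(void)
    {
        /* The DBD opens its four connections up front; shared cache is
         * still disabled at this point. */
        for (int i = 0; i < 4; i++) {
            sqlite3_open_v2("app.db", &pool[i],
                            SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
        }
    }

    static void module_first_request(void)
    {
        /* Module code runs later and turns shared cache on.  My reading
         * is that pool[0..3] stay outside the shared cache and only
         * connections opened from here on will join it. */
        sqlite3_enable_shared_cache(1);
    }

    int main(void)
    {
        dbd_startup();          /* happens before the module sees anything    */
        module_first_request(); /* first time the module touches a connection */
        return 0;
    }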

Q4: My plan is that each and every time the module code gets a
connection, it simply calls sqlite3_enable_shared_cache(1) to make sure
that connection is in the shared cache, as in the sketch below.  Am I
correct in assuming that the cost of calling sqlite3_enable_shared_cache(1)
when shared cache is already enabled is very small?
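
Concretely, this is the pattern I have in mind (untested;
ensure_shared_cache() is a made-up helper name):

    #include <sqlite3.h>

    /* Turn shared cache on every time the module is about to ask the
     * DBD for a connection, on the assumption that repeating the call
     * once it is already enabled costs next to nothing. */
    static void ensure_shared_cache(void)
    {
        sqlite3_enable_shared_cache(1);
    }

    int main(void)
    {
        ensure_shared_cache();
        /* ... the module would now call the DBD acquire function and
         * use the connection it hands back ... */
        return 0;
    }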

Sam