I'm trying to use sqlite3 as a cache layer for queries on another
database. Actually, I want to replace an existing cache layer which is
very ad hoc and proprietary (and that sucks) with something
peer-reviewed / open source, etc. like SQLite.

So... I would like to have a process which periodically gets some
tables from a master database and stores them in an SQLite database.

Then some other processes would access the SQLite database to read and
use that cached data.

So it's the typical scenario of "one writer and many readers".

So the question is... regarding the locking mechanism...

What would be better during the "writer" process's "refresh" cycle for
each table:

1. Create a new table, load the new content (lots of inserts), and
then replace the old table by dropping the old one and renaming the
new one within a transaction.
or
2. Drop the table and then insert the new content within a transaction.
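For the record, option 1 can be sketched roughly like this (a minimal
sketch in Python, assuming a hypothetical cache table "users" and a
stand-in fetch_rows() in place of the real query against the master
database):

```python
import sqlite3

def fetch_rows():
    # Stand-in for the real query against the master database.
    return [(1, "alice"), (2, "bob")]

# isolation_level=None puts the connection in autocommit mode so we
# can manage the transaction explicitly.
conn = sqlite3.connect("cache.db", isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")

conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
try:
    conn.execute("DROP TABLE IF EXISTS users_new")
    conn.execute("CREATE TABLE users_new (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users_new VALUES (?, ?)", fetch_rows())
    # The swap: committed readers see either the old table or the new
    # one, never an empty or half-loaded table.
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_new RENAME TO users")
    conn.execute("COMMIT")
except Exception:
    conn.execute("ROLLBACK")
    raise

print(conn.execute("SELECT count(*) FROM users").fetchone()[0])
```

The point of BEGIN IMMEDIATE is that the writer reserves the database
at the start of the refresh instead of failing at COMMIT time.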

The objective is to block readers for the least amount of time possible
(probably using read uncommitted), and of course, not cause writer
starvation.

I don't understand well from the documentation how "read uncommitted"
behaves when the schema (the sqlite_master table) is modified, as
happens in option 1.
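In case it helps frame the question: as far as I can tell,
read_uncommitted only takes effect between connections that share the
same cache, so both sides would need to open the file with
cache=shared. A small sketch of that setup (file name and table are
made up):

```python
import sqlite3

# read_uncommitted only matters between connections sharing one cache,
# so both the writer and the reader open with cache=shared.
uri = "file:shared_cache.db?cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)
reader.execute("PRAGMA read_uncommitted = 1")

writer.execute("DROP TABLE IF EXISTS t")
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# A read-uncommitted reader skips the table read-lock, so instead of
# getting SQLITE_LOCKED it sees the writer's uncommitted row.
rows = reader.execute("SELECT x FROM t").fetchall()
print(rows)

writer.execute("COMMIT")
```

But what I can't tell is what such a reader observes while the DROP +
RENAME of option 1 is in flight.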

And yes, I've considered using a WAL journal, but the problem is I'm
worried about the WAL file growing out of control, as I can't
completely guarantee that ALL prepared statements would be reset at
the same time and thus allow a checkpoint to complete. The reason for
this is that I must satisfy a certain API beyond my control :S, for
backwards compatibility reasons.
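What I had in mind, if I went the WAL route, was having the writer
force a checkpoint after each refresh cycle, something like this
(sketch; file and table names are made up):

```python
import sqlite3

# The writer can try to bound WAL growth by checkpointing after each
# refresh. TRUNCATE resets the -wal file to zero bytes when no reader
# is blocking it; the returned "busy" flag is 0 when it succeeded.
conn = sqlite3.connect("wal_cache.db")
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO t VALUES (?)",
                     [(i,) for i in range(1000)])

busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(busy)
```

But that's exactly where the un-reset prepared statements scare me: a
lingering reader would keep reporting busy and the -wal file would
just keep growing.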

Any advice is welcome.

Thank you!

Alejandro
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
