I have two independent processes that open the same sqlite database.
One inserts data into a table and the other reads from the same table
and creates reports.

The reader process does something like "select * from table" and then
does some heavy processing between each call to sqlite3_step().  It can
be minutes between the prepare() and finalize().
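
For concreteness, the reader loop is something like this (mytable and
process_row() are placeholder names; error handling omitted):

  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db, "select * from mytable", -1, &stmt, NULL);
  /* The first sqlite3_step() takes a SHARED lock, and it is held
     until the statement completes or is reset/finalized -- i.e. for
     the whole of the loop below, blocking the writer throughout. */
  while (sqlite3_step(stmt) == SQLITE_ROW) {
      process_row(stmt);   /* the heavy per-row work */
  }
  sqlite3_finalize(stmt);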

If the writer process tries to update the table during that time, it
blocks (or times out), which is not what I want.

I've read the docs on locking, and it would appear that the behaviour
given by PRAGMA read_uncommitted would do what I want (I understand the
drawbacks), but I can't use shared-cache mode because the reader and
writer are separate processes.

The only solutions I can come up with are (rough sketches of each
follow below):

1) Run the "select * from table" and read all the rows into memory
   straight away, before processing.

2) Use the backup interface to copy the DB (maybe into memory) and then
   run the reader process on the copy.

3) Change the reader SQL to "select * from table where rowid=?" and then
   re-bind and re-query for each row (using "select max(rowid)" first).
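
For (1) that would look roughly like the following, where cache_row()
and process_cached_rows() are placeholders for code that copies the
column values out and does the heavy work afterwards:

  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db, "select * from mytable", -1, &stmt, NULL);
  while (sqlite3_step(stmt) == SQLITE_ROW)
      cache_row(stmt);       /* must copy values out: pointers from
                                sqlite3_column_text() etc. don't
                                survive the next sqlite3_step() */
  sqlite3_finalize(stmt);    /* read lock released here */
  process_cached_rows();     /* heavy work done with no lock held */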
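
For (2), the online backup API (in SQLite since 3.6.11, so 3.6.14.2
already has it) can copy the whole DB into a private in-memory copy;
the source is only locked while the copy runs, which for a 50 MB file
should be brief:

  sqlite3 *memdb;
  sqlite3_open(":memory:", &memdb);
  sqlite3_backup *bak = sqlite3_backup_init(memdb, "main", db, "main");
  if (bak) {
      sqlite3_backup_step(bak, -1);   /* -1 = copy all pages at once */
      sqlite3_backup_finish(bak);
  }
  /* ...run the slow report against memdb at leisure... */
  sqlite3_close(memdb);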
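
And a sketch of (3): each row is fetched with its own short-lived
statement step, so the lock is dropped between rows and the writer can
interleave.  The obvious caveats are gaps where rows have been deleted
and new rows appearing above the initial max(rowid):

  sqlite3_int64 maxid = 0;
  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db, "select max(rowid) from mytable", -1,
                     &stmt, NULL);
  if (sqlite3_step(stmt) == SQLITE_ROW)
      maxid = sqlite3_column_int64(stmt, 0);
  sqlite3_finalize(stmt);

  sqlite3_prepare_v2(db, "select * from mytable where rowid=?", -1,
                     &stmt, NULL);
  for (sqlite3_int64 id = 1; id <= maxid; id++) {
      sqlite3_bind_int64(stmt, 1, id);
      if (sqlite3_step(stmt) == SQLITE_ROW)
          process_row(stmt);  /* SQLITE_DONE means the rowid is gone */
      sqlite3_reset(stmt);    /* releases the lock between rows */
  }
  sqlite3_finalize(stmt);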

None of these are particularly attractive to me, so is there a better
way?

Typical db files (with other tables) are in the region of 50 MB, and
typical datasets are on the order of 100,000 rows.  I'm using Linux,
if that makes any difference, currently with SQLite 3.6.14.2, but I
don't have a problem upgrading if necessary.

TIA,

Paul.