Hi all,

 

I've got a database that is accessed by two processes on the same PC.
When I add a large number of records to a table from one process (all
records wrapped in a BEGIN DEFERRED / END pair so they're committed to
disk all at once; and we're talking about adding 500,000 recs to a DB
containing around 3 million recs, so it takes some time), the other
process gets stuck inside sqlite3_step() while executing a SELECT.  It
doesn't return SQLITE_BUSY, it just sits there waiting, and only
continues after the first process calls END to complete the transaction.

 

According to http://www.sqlite.org/lang_transaction.html, in deferred
mode SQLite takes a RESERVED lock on the DB when it starts a write
operation, but a RESERVED lock still allows other SHARED locks at the
same time, implying other processes should still be able to query the
database during the write transaction.
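For what it's worth, that documented behaviour is easy to reproduce in
miniature.  The sketch below uses Python's built-in sqlite3 module (which
wraps the same C library; the file and table names are made up) to show a
second connection happily reading old data while the first holds a RESERVED
lock inside an open deferred write transaction:

```python
import sqlite3, tempfile, os

# A throwaway on-disk database (file locking doesn't apply to :memory:).
path = os.path.join(tempfile.mkdtemp(), "test.db")

writer = sqlite3.connect(path, isolation_level=None)  # explicit transactions
reader = sqlite3.connect(path, isolation_level=None)

writer.execute("CREATE TABLE t(x)")
writer.execute("INSERT INTO t VALUES (1)")

# Open a deferred transaction and write inside it: the first INSERT
# promotes the writer's lock to RESERVED but keeps the changes in its
# own page cache.
writer.execute("BEGIN DEFERRED")
writer.execute("INSERT INTO t VALUES (2)")

# A SHARED lock coexists with RESERVED: the reader still sees 1 row.
during = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.execute("COMMIT")
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(during, after)  # 1 2
```

So with a small write transaction the reader is never blocked, which is
what makes the behaviour above puzzling.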

 

Does this sound like correct behaviour, or not?  If correct, should my
second process be getting SQLITE_BUSY returned from sqlite3_step()?

(Note that the first process is adding data to the same table that the
second process wants to do a SELECT query on).

 

If this is all correct... what would be the best way to allow me to add
(or delete!) lots of records from a table in one process whilst still
allowing the other process to query that table's data during the insert
time?  An in-memory database is obviously an option for the second
process, but that has an impact on RAM usage and requires the second
process to know when the database on disk has changed.
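One alternative I can think of, sketched below (table name and batch size
are made up), is to break the bulk insert into smaller transactions,
committing every few thousand rows: each COMMIT briefly drops the write
lock, giving the reading process a window to take its SHARED lock and run
its query.  It trades the all-or-nothing commit for concurrency; and if a
newer SQLite (3.7.0+) is an option, PRAGMA journal_mode=WAL lets readers
proceed even while a write transaction is open.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "bulk.db")
db = sqlite3.connect(path, isolation_level=None)
db.execute("CREATE TABLE t(x)")

rows = [(i,) for i in range(10_000)]   # stand-in for the 500,000 records
BATCH = 2_000                          # made-up batch size

# Commit every BATCH rows instead of one giant transaction: between
# batches no write lock is held, so another process's SELECT can run.
for start in range(0, len(rows), BATCH):
    db.execute("BEGIN")
    db.executemany("INSERT INTO t VALUES (?)", rows[start:start + BATCH])
    db.execute("COMMIT")

print(db.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10000
```

Of course that loses the atomicity of the single big transaction, which
may or may not matter here.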

 

Thanks,

Nick.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users