I'm writing a system in which I want operations performed on the
database to block when a lock cannot be acquired, and I'm looking at my
options. The system has multiple processes accessing a single SQLite
file containing a single database with a single table. I was
disappointed to find out yesterday that when a function in the API fails
to acquire a lock on the db, it doesn't block and queue the request; it
just returns an error. Since then I've come to realize that SQLite
doesn't have such a blocking feature. Is that correct?
    I was thinking that a good solution would be to have a lock file
and take POSIX locks on it (I'm doing this on Linux) whenever a process
is about to access the db in a way that might return an SQLITE_LOCKED
error. Is this a good solution for the system I have set up? Is there a
better one?

    To be clear, my idea of blocking is as follows: if a caller tries
to acquire a lock and it is not available, the request is put into a
queue and the caller stops consuming cycles. Locks are then granted
(when feasible) in the order the requests were queued. Simulating
blocking by retrying the API call on every SQLITE_LOCKED error doesn't
count, because lock requests are not queued and the busy loop can be
very expensive in cycles.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
