On Fri, 24 Sep 2004, Eddy Macnaghten wrote:

However, reading the documentation it seems that SQLite is not hot on
concurrent access through networks, or across platforms, due to the funnies
of fcntl, or incompatibilities between Windows and Linux and so on.

it works o.k. if your nfs impl is good (netapp, linux - not sun), but code
needs to be prepared to handle BUSY errors, because sqlite uses byte range
locks and therefore you may hold one lock only to find out you need some
'more' lock.  i do think this type of use is frowned upon though - i
certainly find it unnerving.
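
fwiw, handling BUSY can be as simple as a retry loop (or sqlite3_busy_timeout).
a minimal sketch in C - the retry count and backoff here are arbitrary numbers
i picked, not anything sqlite mandates:

  /* retry on SQLITE_BUSY/SQLITE_LOCKED; 50 tries of 100ms are arbitrary */
  #include <sqlite3.h>
  #include <unistd.h>

  int exec_with_retry(sqlite3 *db, const char *sql)
  {
    int rc, tries = 0;
    do {
      rc = sqlite3_exec(db, sql, 0, 0, 0);
      if (rc != SQLITE_BUSY && rc != SQLITE_LOCKED) break;
      usleep(100 * 1000);          /* back off 100ms, then try again */
    } while (++tries < 50);
    return rc;
  }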

To achieve this I have written a mechanism that creates a "lock" file, (in
the same directory as the database, with the same name with ".lock" after
it).  This is used by SQLite clients to register, and deregister locks (what
fcntl would do under the current Linux clients).  I have designed it so it
does not use fcntl, but could be used to perform the locking SQLite requires
to guarantee concurrency and integrity.

i've done the same thing.  what's your locking algorithm?  there are only a
few operations which are atomic on nfs, and they are not open(O_EXCL) or
mkdir, which people typically use.  i've written a robust nfs safe lockfile
api based on the atomicity of link(2), and a command line tool that is highly
configurable and supports automatic recovery from stale locks left over from
dead processes/machine reboots.  you can find it here

  http://raa.ruby-lang.org/project/lockfile/

you can use it from the command line like so

  ~ > rlock lockfile sqlite db 'insert into t values(42)'

this will acquire the lock (possibly cleaning up any stale locks), run the
command, then release the lock under all exit conditions except kill -9.
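
in case it helps, here's roughly the idea in C - a sketch only, the function
names are mine and the stale-lock recovery the ruby lib does is left out:

  /* link(2) based locking: create a uniquely named temp file, then link(2)
     it to the lock name.  link is atomic over nfs, so only one host wins. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int try_lock(const char *lockpath)
  {
    char tmp[1024], host[256];
    struct stat st;
    int fd, linked;

    gethostname(host, sizeof host);
    snprintf(tmp, sizeof tmp, "%s.%s.%d", lockpath, host, (int)getpid());

    fd = open(tmp, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) return -1;
    close(fd);

    linked = link(tmp, lockpath);         /* the atomic step */

    /* the nfs reply to link() can be lost, so confirm via the link count */
    if (linked < 0 && stat(tmp, &st) == 0 && st.st_nlink == 2) linked = 0;

    unlink(tmp);                          /* temp name no longer needed */
    return linked;                        /* 0 => locked, -1 => busy */
  }

  void unlock(const char *lockpath) { unlink(lockpath); }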

if nothing else the docs will be of use if your impl is not nfs safe.  of
course it may be, in which case i'd like to read your code if available. ;-)

another thing to consider is that any lockfile impl means readers must wait in
the same line as writers.  another solution i'm currently using is to have an
empty 'semaphore' file.  like so

  path/to/db
  path/to/db.lock

this file is empty and is used simply as the target of read or write fcntl
based locks by the applications that use the db.  note that you cannot use the
db itself to apply the lock to, since further lock requests will
up/down-grade that lock to a byte range one.  this works great and supports
single writer multiple reader semantics.
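
the nuts and bolts are just plain fcntl locks on that empty file, something
like the sketch below - function names are mine, and note that fcntl over nfs
still needs a working lockd:

  /* readers pass F_RDLCK, the single writer passes F_WRLCK; the lock is
     taken on the empty db.lock file, never on the db itself */
  #include <fcntl.h>
  #include <unistd.h>

  int lock_db(const char *lockpath, short type)
  {
    int fd = open(lockpath, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return -1;

    struct flock fl = { 0 };
    fl.l_type   = type;                   /* F_RDLCK or F_WRLCK */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;                      /* 0 == lock the whole file */

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* F_SETLKW blocks until granted */
      close(fd);
      return -1;
    }
    return fd;                            /* hold fd open while using the db */
  }

  void unlock_db(int fd) { close(fd); }   /* closing releases the lock */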

Of course, when this locking mechanism is in use there is a performance hit,
as extra network processing and read/writes are required each time a lock is
required, so this is not right for all scenarios, though the performance hit
would probably not be significant for what I want to use it for.

my system is essentially an nfs mounted priority queue of jobs to run for a
small cluster.  the cost of getting a job is typically a second or so for jobs
that run for hours or even days, so i came to the same conclusion you have -
this is fine for many real-world cases.

Ideally, I would like an indicator in the SQLite database file header record
to determine if this file locking is required, and set accordingly when the
database is created, and possibly be switched on and off with a PRAGMA
command or similar, or maybe its own utility.

this could be o.k. - but better IMHO would simply be a switch that allowed the
entire database to be locked read/write in a blocking fashion rather than
using byte range locks.  on the other hand this is an easy enough task to
handle in the application domain...  you are suggesting a pragma that would
indicate some sort of external lock is required to access the database?

What are people's views on this?

i think it's sufficiently common to merit discussion on best practices at least.

kind regards.

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it. | --Dogen
===============================================================================
