-----Original Message-----
From: D. Richard Hipp [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 14, 2004 10:51 AM
Cc: [EMAIL PROTECTED]
Subject: Re: [sqlite] Solving the "busy" problem with many reads and a single write


Wempa, Kristofer (Kris), ALABS wrote:
 > I'm new to this list and I didn't see this question answered in the
 > places I looked.  We have a system that uses a handful of commands to
 > read and write to an SQLITE database.  We already control the commands
 > so that only 1 thread in one process writes to the database file at
 > any point in time.  However, there may be multiple READS attempting to
 > access the same database.  My question is this.  Can I simply change
 > the SQLITE code so that the fcntl() call for a read lock does a
 > BLOCKING wait instead of a non-blocking wait?  I don't care if the
 > commands take longer to run, but I just don't want to cause a
 > deadlock.  Has anybody tried this?  What about changing the fcntl()
 > for both reads and writes to a BLOCKING wait?  Any help would be
 > appreciated.  We are currently getting too many "busy" failures when
 > reads are attempted.
 >

>Using the sqlite_busy_handler() or sqlite_busy_timeout() APIs might be
>safer, since they can be rigged to fail if the lock is held for too
>long.  You can try switching to blocking locks if you want and see
>what happens.
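
For reference, the busy-handler approach suggested above looks roughly
like this.  This is a minimal sketch using the SQLite 3 C API names
(sqlite3_busy_timeout / sqlite3_busy_handler); the sqlite_* calls named
above are the SQLite 2 equivalents.  The database file name, retry
count, and sleep interval are illustrative only.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Custom busy handler: called each time a lock attempt fails.
     * 'tries' counts how often it has been invoked for this lock.
     * Returning 0 makes the statement fail with SQLITE_BUSY;
     * returning nonzero makes SQLite retry the lock. */
    static int busy_cb(void *unused, int tries)
    {
        (void)unused;
        if (tries >= 100) return 0;   /* give up after ~100 attempts */
        sqlite3_sleep(10);            /* back off 10 ms, then retry  */
        return 1;
    }

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("app.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }

        /* Simplest form: retry automatically for up to 2 seconds,
         * then return SQLITE_BUSY to the caller. */
        sqlite3_busy_timeout(db, 2000);

        /* Or install the handler above for a custom retry policy: */
        /* sqlite3_busy_handler(db, busy_cb, NULL); */

        /* ... run the reads/writes here ... */

        sqlite3_close(db);
        return 0;
    }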


Our tool to query the database is written in shell and invokes the
sqlite command.  Rather than rewrite it in C to go through the API, I
figured I'd try changing the read locks to blocking in order to save
time.  Reads ONLY happen through this shell tool.  Writes only happen
through an API call.
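
Concretely, the change being discussed amounts to swapping F_SETLK for
F_SETLKW in the read-lock fcntl() call.  A standalone illustration
follows; this is not SQLite's actual locking code, and the function
name is made up:

    #include <fcntl.h>
    #include <unistd.h>

    /* Illustration only -- requests a shared (read) lock on the
     * whole file, either non-blocking or blocking. */
    static int get_read_lock(int fd, int blocking)
    {
        struct flock lk;
        lk.l_type   = F_RDLCK;    /* shared/read lock            */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;          /* length 0 = to end of file   */

        /* F_SETLK fails immediately (EAGAIN/EACCES) if a conflicting
         * lock is held; F_SETLKW sleeps until the lock is free.    */
        return fcntl(fd, blocking ? F_SETLKW : F_SETLK, &lk);
    }

One thing worth noting: even with F_SETLKW the kernel may detect a lock
cycle and fail the call with EDEADLK, so the return value still needs
to be checked rather than assuming the wait always succeeds.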

