Hey Guys,

I'm writing a web proxy in C for Linux that logs every connection that
goes through it to an SQLite database.  A PHP web GUI then lets the
user run queries against all of the data in the log.  Needless to say,
this database can get fairly large.

I use a 5-second busy timeout when writing to the log, but a read that
covers a whole lot of entries can sometimes take longer than 5 seconds.
These reads are done from PHP using the PDO driver for sqlite3.  My
understanding of the problem is that the PHP reader holds a SHARED lock
for longer than 5 seconds, so while the C program can acquire a PENDING
lock, it cannot get the EXCLUSIVE lock within 5 seconds and thus times
out.
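
For concreteness, the write path looks roughly like this (untested
sketch; the log table and column names here are made up for
illustration):

/* Untested sketch; the log table and columns are made up.  The busy
 * timeout is set once on the connection and applies to every write. */
#include <sqlite3.h>

int log_entry(sqlite3 *db, const char *host, const char *url)
{
    sqlite3_stmt *stmt;
    int rc;

    /* Wait up to 5 seconds for locks before failing with SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 5000);

    rc = sqlite3_prepare_v2(db,
            "INSERT INTO log (host, url) VALUES (?, ?)",
            -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    sqlite3_bind_text(stmt, 1, host, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, url, -1, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);  /* SQLITE_BUSY when the reader holds on */
    sqlite3_finalize(stmt);

    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}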

This presents two problems.  First and foremost, I somehow have to
handle the case where a write to the database times out.  Second,
since my web proxy is written to be non-blocking and runs in a single
thread in a single process, the 5 seconds it takes for the write to
fail are 5 seconds during which the proxy is not handling requests.

I'm curious to know how other people have handled this in their own
programs.  I have come up with two ideas so far:

1. Have all of the logging happen in a separate thread (first sketch
after this list).  Each request that comes in puts the data it needs
to log into a structure that gets appended to a queue.  When the
logging thread runs, it takes data off the queue and writes it to the
SQLite database.  The connection to the SQLite database would be owned
entirely by the logging thread, so the threading shouldn't affect
SQLite at all.  This fixes the problem of blocking for several seconds
(the worker thread can keep running even while the logging thread is
held up by the PHP reader), but it does not solve the problem of
handling a timeout of over 5 seconds.  I guess I could just retry the
write, or increase the timeout, but both feel a little hackish.

2. I could continue to write to the database from the single thread,
but if a write fails, add the data to a queue and continue (second
sketch after this list).  Then, when another piece of data has to be
logged and its write succeeds, empty the queue and write everything on
it into the SQLite DB.  This fixes both problems fairly nicely without
having to create a separate thread.  I could do away with the
busy-timeout handler entirely and just rely on the queue to handle
every instance of the SQLite DB being busy.  I could also run a bit of
code during idle periods to try to empty the queue, in case no other
logging requests come in for a few minutes.
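
Here is a rough, untested sketch of idea 1, assuming POSIX threads.
struct log_item is a made-up placeholder for whatever actually needs
logging, and log_entry() is the function from the sketch above:

/* Untested sketch of idea 1, assuming POSIX threads. */
#include <pthread.h>
#include <sqlite3.h>
#include <stdlib.h>

struct log_item {
    struct log_item *next;
    char *host;
    char *url;
};

static struct log_item *head, *tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

/* Called from the proxy's event loop: never touches sqlite, never
 * blocks for longer than it takes to grab the mutex. */
void enqueue_log(struct log_item *item)
{
    item->next = NULL;
    pthread_mutex_lock(&q_lock);
    if (tail)
        tail->next = item;
    else
        head = item;
    tail = item;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* The logging thread owns the sqlite connection outright. */
void *logger_main(void *arg)
{
    sqlite3 *db = arg;

    for (;;) {
        struct log_item *item;

        pthread_mutex_lock(&q_lock);
        while (head == NULL)
            pthread_cond_wait(&q_cond, &q_lock);
        item = head;
        head = item->next;
        if (head == NULL)
            tail = NULL;
        pthread_mutex_unlock(&q_lock);

        /* A busy database stalls only this thread, not the proxy. */
        log_entry(db, item->host, item->url);
        free(item->host);
        free(item->url);
        free(item);
    }
    return NULL;
}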
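
And a rough, equally untested sketch of idea 2, reusing struct
log_item and log_entry() from the sketches above but with no threads
at all: a failed write just parks the entry on an in-memory list that
gets drained on the next successful attempt.  With this in place the
busy timeout could even be set to 0 so a busy database fails
immediately instead of stalling the event loop:

/* Untested sketch of idea 2: same single thread, no pthreads. */
#include <sqlite3.h>
#include <stdlib.h>

static struct log_item *pending_head, *pending_tail;

static void park(struct log_item *item)
{
    item->next = NULL;
    if (pending_tail)
        pending_tail->next = item;
    else
        pending_head = item;
    pending_tail = item;
}

/* Returns 1 if the backlog was fully flushed, 0 if the DB is still
 * busy.  Also suitable for calling from an idle handler. */
static int drain_pending(sqlite3 *db)
{
    while (pending_head) {
        if (log_entry(db, pending_head->host, pending_head->url) != SQLITE_OK)
            return 0;  /* still busy; try again later */
        struct log_item *done = pending_head;
        pending_head = done->next;
        if (pending_head == NULL)
            pending_tail = NULL;
        free(done->host);
        free(done->url);
        free(done);
    }
    return 1;
}

void log_or_queue(sqlite3 *db, struct log_item *item)
{
    /* Flush older entries first so the log stays in order; if anything
     * is still stuck, or the new write itself fails, park the entry. */
    if (!drain_pending(db) || log_entry(db, item->host, item->url) != SQLITE_OK)
        park(item);
}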

Like I said, I'm interested to know how other people have handled such
situations.  Any ideas are greatly appreciated.

Thanks,
Mark Drago
