Jeff,

Apologies - I missed the start of this thread... but reading between the lines, you are seeing stale data through some DB handles and fresh data through others? I get the impression that you are replacing / updating the underlying db files from source control?

>> It's almost like the OS is keeping two distinct
>> versions of this file, when there is only one on the drive.

This is normal on *nix - files are really referenced by inode, and a directory just holds a name -> inode mapping. For example, you have a file named my.db whose directory entry points to inode 123. You copy a new file over it - my.db now points to inode 234. Any process that already had inode 123 open will keep working against inode 123, even though the name now resolves to 234; new processes will open 234. The OS keeps inode 123 around until every open handle on it is closed.
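
A minimal sketch of the effect in Perl (the file names are just for illustration, and this assumes the new file is renamed over the old name, which is how most deploy tools replace files): a handle opened before the replacement keeps reading the old inode, while a handle opened afterwards sees the new one.

    use strict;
    use warnings;

    # Write the "old" database file and open a handle on it.
    open my $fh, '>', 'my.db' or die $!;
    print {$fh} "old contents\n";
    close $fh;
    open my $old_handle, '<', 'my.db' or die $!;    # refers to the old inode

    # "Release" a new version by writing a new file and renaming it over the old name.
    open $fh, '>', 'my.db.new' or die $!;
    print {$fh} "new contents\n";
    close $fh;
    rename 'my.db.new', 'my.db' or die $!;          # the name now points to a new inode

    print scalar <$old_handle>;                     # "old contents" - still the old inode

    open my $new_handle, '<', 'my.db' or die $!;    # opens the new inode
    print scalar <$new_handle>;                     # "new contents"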

Databases do not like you playing with their underlying files while handles are open. If this is your problem, the best solution is probably to update the contents through the database interface: change your release process so that it connects to the database, truncates the table, and loads in all the new data (or uses a BCP-style bulk load if one is available).
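
For example, a reload step with DBI and DBD::SQLite might look something like this (the table and column names here are made up - adjust to whatever your schema actually is):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=my.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # Replace the table contents inside one transaction, so readers see
    # either the old data or the new data, never a half-loaded mix.
    $dbh->begin_work;
    $dbh->do('DELETE FROM config');        # SQLite has no TRUNCATE; DELETE does the job

    my $sth = $dbh->prepare('INSERT INTO config (key, value) VALUES (?, ?)');
    while (my $line = <STDIN>) {           # new data from your release process
        chomp $line;
        my ($key, $value) = split /\t/, $line, 2;
        $sth->execute($key, $value);
    }
    $dbh->commit;
    $dbh->disconnect;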

It is possible to install the new contents into the old inode (by overwriting the file in place rather than replacing it), but that will cause problems if the database engine does any caching / locking of its own. Alternatively, as Perrin mentioned, you could re-open the database connection (perhaps after stat()ing a flag file to detect that a new release has been installed).
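
A rough sketch of that flag-file idea (the path and the cached-handle logic are my own invention, not something from the earlier messages): the release process touches a flag file after installing the new db, and each request stat()s it and reconnects when the mtime changes.

    use strict;
    use warnings;
    use DBI;

    my $flag_file = '/path/to/my.db.flag';   # touched by the release process
    my ($dbh, $last_mtime);

    sub get_dbh {
        my $mtime = (stat $flag_file)[9] || 0;
        if (!$dbh or !defined $last_mtime or $mtime != $last_mtime) {
            $dbh->disconnect if $dbh;
            $dbh = DBI->connect('dbi:SQLite:dbname=my.db', '', '',
                                { RaiseError => 1, AutoCommit => 1 });
            $last_mtime = $mtime;
        }
        return $dbh;
    }

Because $dbh and $last_mtime persist across requests in a mod_perl child, each long-lived process gets off the stale inode shortly after a release.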

Just a thought,

Jeff

-------- Original Message --------
Subject: Re: SQLite and multiple process behavior
From: Perrin Harkins <[EMAIL PROTECTED]>
To: Jeff Nokes <[EMAIL PROTECTED]>
CC: modperl@perl.apache.org
Date: 18 June 2007 20:42:28

