> I am currently using Storable's lock_store and lock_retrieve to maintain
> a persistent data structure.  I use a session_id as a key and each data
> structure has a last modified time that I use to expire it.  I was under
> the impression that these two functions would be safe for concurrent
> access, but I seem to be getting 'bad hashes' returned after there is an
> attempt at concurrent access to the storable file.

(You're not using NFS, right?)

What are the specifics of your bad hashes?  Are they actually corrupted, or
do they just contain data that's different from what you expected?  The
lock_retrieve function only locks the file while it is being read, so
there's nothing to stop a different process from running in and updating the
file while you are manipulating the data in memory.  Then you store it and
overwrite whatever updates other processes may have done.  If you want to
avoid this, you'll need to do your own locking around the whole
read-modify-write cycle (a rough sketch follows), or just use Apache::Session.
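
Here's roughly what I mean by doing your own locking: take an exclusive
flock on a separate lock file and hold it across the entire
retrieve/modify/store cycle, using the plain (non-locking) Storable calls
inside.  The file name and session key below are just placeholders, not
anything from your setup:

  use strict;
  use warnings;
  use Fcntl qw(:flock);
  use Storable qw(retrieve nstore);

  my $file       = 'sessions.db';   # hypothetical Storable file
  my $session_id = 'abc123';        # hypothetical session key

  # Hold an exclusive lock for the whole read-modify-write cycle,
  # so no other process can update the file between our read and write.
  open my $lock_fh, '>', "$file.lock" or die "can't open lock file: $!";
  flock($lock_fh, LOCK_EX)           or die "can't lock: $!";

  my $data = -e $file ? retrieve($file) : {};
  $data->{$session_id}{last_modified} = time;
  nstore($data, $file);

  close $lock_fh;   # releases the lock
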
- Perrin
