Re: Sharing objects

2003-07-23 Thread Perrin Harkins
On Wed, 2003-07-23 at 18:21, Aleksandr Guidrevitch wrote:
 What are common patterns for sharing data between Apache processes?
 For example, I'd like to share some indexes. Also, I'd like to avoid a
 complex synchronization process (currently IPC::Shareable seems to be
 the right thing).

No, IPC::Shareable is slow.  You are better off with one of these:
MLDBM::Sync
Cache::Mmap
BerkeleyDB (with native locking)
Cache::FileCache
IPC::MM

- Perrin
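
For illustration, a minimal sketch of one of the options Perrin lists,
MLDBM::Sync; the file path and key names are only placeholders, and
DB_File plus Storable are one reasonable choice of DBM backend and
serializer:

  use MLDBM::Sync;                 # wraps a DBM file with locking
  use MLDBM qw(DB_File Storable);  # DBM backend + serializer for nested data
  use Fcntl qw(:DEFAULT);

  # one shared DBM file that every Apache child process can open
  my %shared;
  my $sync = tie %shared, 'MLDBM::Sync', '/tmp/indexes.dbm', O_CREAT|O_RDWR, 0640
      or die "Cannot tie /tmp/indexes.dbm: $!";

  # individual reads and writes are locked per operation
  $shared{some_index} = { foo => 1, bar => 2 };
  my $index = $shared{some_index};

  # group several operations under one lock if needed
  $sync->Lock;
  $shared{other_index} = [ 1, 2, 3 ];
  $sync->UnLock;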


Re: Sharing objects

2003-07-23 Thread Aleksandr Guidrevitch
Hello Perrin

No, IPC::Shareable is slow.  You are better off with one of these:
MLDBM::Sync
Cache::Mmap
BerkeleyDB (with native locking)
Cache::FileCache
Actually, I'm thinking of using Cache::FileCache as the storage backend.
But I need the cache keys to be sorted by various criteria. I want to
avoid re-reading the Cache::* keys and sorting them each time, and
instead share the sorted lists between Apache processes somehow (as they
could be huge). However, having an extra key like keys_sorted_by_* in
Cache::FileCache will probably solve the problem.

Thanks

Alex
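
The extra-key idea might look something like this minimal
Cache::FileCache sketch; the namespace, record layout, key names, and
sort criterion are only placeholders:

  use Cache::FileCache;

  my $cache = Cache::FileCache->new({
      namespace          => 'indexes',   # placeholder namespace
      default_expires_in => 600,
  });

  my @records = (
      { id => 'a1', name => 'Banana' },
      { id => 'b2', name => 'Apple'  },
  );

  # the writer stores each record under its own key ...
  $cache->set( $_->{id}, $_ ) for @records;

  # ... plus one extra entry holding the precomputed, sorted key list
  my @sorted_ids = map  { $_->{id} }
                   sort { $a->{name} cmp $b->{name} } @records;
  $cache->set( 'keys_sorted_by_name', \@sorted_ids );

  # readers fetch the ready-made list instead of re-reading and re-sorting
  my $ids      = $cache->get('keys_sorted_by_name') || [];
  my @in_order = map { $cache->get($_) } @$ids;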



Re: Sharing objects

2003-07-23 Thread Perrin Harkins
Aleksandr Guidrevitch wrote:
Actually, I'm thinking of using Cache::FileCache as the storage backend.
But I need the cache keys to be sorted by various criteria. I want to
avoid re-reading the Cache::* keys and sorting them each time, and
instead share the sorted lists between Apache processes somehow (as they
could be huge). However, having an extra key like keys_sorted_by_* in
Cache::FileCache will probably solve the problem.
It could, but if that's a large list it could be slow.  Cache::FileCache 
will serialize each cache value with Storable, and in this case the 
value will be one big array.

A different approach would be to store a separate BerkeleyDB index as a 
BTree with a custom sorting function, or to share your array as a 
BerkeleyDB recno database (serial records accessed like an array).

- Perrin
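
A rough sketch of both BerkeleyDB suggestions; the environment home,
filenames, sample data, and comparison function are only placeholders,
and the Concurrent Data Store flags are one simple way to get the
native locking mentioned earlier in the thread:

  use BerkeleyDB;

  # shared environment so every Apache child sees the same files;
  # DB_INIT_CDB gives simple multi-reader/single-writer locking
  my $env = BerkeleyDB::Env->new(
      -Home  => '/var/cache/bdb',
      -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
  ) or die "Cannot open environment: $BerkeleyDB::Error";

  # 1) a separate BTree index kept ordered by a custom sorting function
  my %by_name;
  tie %by_name, 'BerkeleyDB::Btree',
      -Filename => 'by_name.db',
      -Flags    => DB_CREATE,
      -Env      => $env,
      -Compare  => sub { lc($_[0]) cmp lc($_[1]) }   # placeholder ordering
      or die "Cannot open by_name.db: $BerkeleyDB::Error";

  $by_name{'Apple'}  = 'b2';
  $by_name{'Banana'} = 'a1';
  # iterating %by_name with each() now returns keys in the custom sort order

  # 2) or share the sorted list itself as a recno database tied to an array
  my @sorted_ids;
  tie @sorted_ids, 'BerkeleyDB::Recno',
      -Filename => 'sorted_ids.db',
      -Flags    => DB_CREATE,
      -Env      => $env
      or die "Cannot open sorted_ids.db: $BerkeleyDB::Error";

  push @sorted_ids, 'b2', 'a1';    # serial records, accessed by position
  my $first = $sorted_ids[0];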