Perrin Harkins wrote:
> 
> Adi Fairbank wrote:
> >
> > I am trying to squeeze more performance out of my persistent session cache.  In
> > my application, the Storable image size of my sessions can grow upwards of
> > 100-200K.  It can take on the order of 200ms for Storable to deserialize and
> > serialize this on my (lousy) hardware.
> >
> > I'm looking at RSE's MM and the Perl module IPC::MM as a persistent session
> > cache.  Right now IPC::MM doesn't support multi-dimensional Perl data
> > structures, nor blessed references, so I will have to extend it to support
> > these.
> 
> Is there a way you can do that without using Storable?

Right after I sent the message, I found myself asking the same question: if I
extended IPC::MM, how would it be any faster than Storable already is?
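For reference, the full-image cost that Storable pays on every request can be
measured with a sketch like this (the session shape and sizes here are made up,
just to get an image in the 100-200K range):

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);
use Time::HiRes qw(time);

# Build a session-like nested structure (hypothetical shape).
my %session = map { ("key$_" => { data => 'x' x 100, items => [1 .. 50] }) } 1 .. 200;

my $t0      = time;
my $image   = freeze(\%session);    # serialize the whole session...
my $copy    = thaw($image);         # ...and deserialize it again
my $elapsed = time - $t0;

printf "image: %d bytes, roundtrip: %.1f ms\n", length($image), $elapsed * 1000;
```

The point is that the whole image is paid for even if a request touches one key.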

Basically what I came up with off the top of my head was to try to map each Perl
hash to a mm_hash and each Perl array to a mm_btree_table, all the way down
through the multi-level data structure.  Every time you add a hashref to your
tied IPC::MM hash, it would create a new mm_hash and store the reference to that
child in the parent.  Ditto for arrayrefs, but use mm_btree_table.

If this is possible, then you could operate on the guts of a deep data structure
without completely serializing and deserializing it every time.
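The mapping idea can be sketched in plain Perl without committing to the IPC::MM
API: each nested hash or array gets its own flat store (standing in for an
mm_hash or mm_btree_table), and the parent records only a reference key to the
child. All names below (%stores, the "\0REF:" tag) are made up for illustration;
real code would create shared-memory tables via IPC::MM instead.

```perl
use strict;
use warnings;

my %stores;       # stand-in for the set of shared-memory tables
my $next_id = 0;

sub store_deep {
    my ($data) = @_;
    my $id = "store" . $next_id++;
    if (ref $data eq 'HASH') {
        # would be a new mm_hash
        $stores{$id} = { map { $_ => flatten($data->{$_}) } keys %$data };
    } elsif (ref $data eq 'ARRAY') {
        # would be a new mm_btree_table keyed by index
        $stores{$id} = { map { $_ => flatten($data->[$_]) } 0 .. $#$data };
    }
    return $id;
}

sub flatten {
    my ($val) = @_;
    # scalars are stored directly; refs become a tagged child-store key
    return ref $val ? "\0REF:" . store_deep($val) : $val;
}

my $root = store_deep({ user => { name => 'adi' }, hits => [1, 2, 3] });
```

With this layout, updating one leaf touches only the store that holds it, which
is the whole win over serializing the entire structure each time.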

> If not, maybe
> you should look at partitioning your data more, so that only the parts
> you really need for a given request are loaded and saved.

Good idea!  That would save a lot of time, and it would be easy to do with my
design.  Silly of me not to think of that.
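Partitioning falls out naturally if each top-level session key gets its own
Storable image, so a request that only touches one part never thaws the rest.
A minimal sketch, where %cache and the sub names are made up and stand in for
whatever shared store sits underneath:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);

my %cache;    # stand-in for the real shared cache

sub save_part {
    my ($session_id, $part, $data) = @_;
    $cache{"$session_id:$part"} = freeze($data);   # one image per part
}

sub load_part {
    my ($session_id, $part) = @_;
    my $image = $cache{"$session_id:$part"};
    return defined $image ? thaw($image) : undef;
}

save_part('sess1', 'cart',    { items => [qw(apple pear)] });
save_part('sess1', 'history', { pages => [1 .. 1000] });   # big, rarely needed

my $cart = load_part('sess1', 'cart');   # only the small cart image is thawed
```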

> 
> I'm pleased to see people using IPC::MM, since I bugged Arthur to put it
> on CPAN.  However, if it doesn't work for you there are other options
> such as BerkeleyDB (not DB_File) which should provide a similar level of
> performance.

Thanks... I'll look at BerkeleyDB.

-Adi