On Tue, Oct 28, 2003 at 02:37:29PM +0000, [EMAIL PROTECTED] wrote:
> Tim Bunce <[EMAIL PROTECTED]> wrote:
> > Michael Carman wrote:
> >> 
> >> I tried it, and it does help some. In my very unscientific test[1] it 
> >> ran about 20% faster. The size of the db file (on disk) was about 75% 
> >> smaller.
> > 
> > Thanks. 20% is certainly useful.
> 
> I ran some more tests, some of which might be more significant:
> 
>                    time(sec)   db size (kB)    peak RAM (MB)
> no coverage           15          ---             ~ 10  
> Data::Dumper+eval    246          245             ~ 23.4
> Storable             190           60             ~ 19.7
> no storage           184          ---             ~ 18
> 
> The 'no coverage' run is to provide a baseline.
> 
> For the 'no storage' test, I ran using Devel::Cover, but modified the 
> read() and write() methods to be essentially no-ops. I did this to 
> isolate the time overhead of coverage itself, as opposed to the time 
> spent reading and writing the db.
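
(The override you describe presumably amounted to something like the
lines below - read() and write() are the method names you mention; the
package name and everything else here is a guess on my part, untested:

    use Devel::Cover::DB;          # package name assumed, not verified
    no warnings 'redefine';
    # Stub out the db I/O so only the instrumentation cost remains.
    *Devel::Cover::DB::read  = sub { $_[0] };
    *Devel::Cover::DB::write = sub { $_[0] };
)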

Excellent. On memory, 23.4-18 vs 19.7-18 is 5.4 MB vs 1.7 MB, so
Storable adds only about 30% of the extra memory that Data::Dumper+eval
did. The time numbers are even better: the storage overhead drops from
246-184 = 62 secs to 190-184 = 6 secs, i.e. to roughly a tenth.
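
Most of that gap will be the string round-trip: Data::Dumper has to
stringify the whole structure and the text then has to be eval'd back
in, while Storable reads and writes a binary image directly. A rough,
untested sketch of that comparison on a made-up coverage-like hash
(not the real Devel::Cover structures) would look something like:

    use strict;
    use warnings;
    use Benchmark qw(timethese);
    use Data::Dumper;
    use Storable qw(nstore retrieve);

    # Toy stand-in for a coverage db: per-file arrays of execution counts.
    my %cover;
    for my $i (1 .. 50) {
        $cover{"file$i.pm"} = { statement => [ map { int rand 10 } 1 .. 2000 ] };
    }

    timethese(20, {
        dumper_eval => sub {
            # write: stringify the structure; read: eval the text back in
            open my $out, '>', 'cover.dumper' or die $!;
            print {$out} Data::Dumper->new([ \%cover ])->Indent(1)->Terse(1)->Dump;
            close $out;
            my $text = do { local $/; open my $in, '<', 'cover.dumper' or die $!; <$in> };
            my $db   = eval $text;
            die $@ if $@;
        },
        storable => sub {
            # write and read the same structure as a binary image
            nstore \%cover, 'cover.storable';
            my $db = retrieve 'cover.storable';
        },
    });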

> Storable looks like it's performing pretty well, with only a small 
> overhead. Eventually, I think that a transition to a real database 
> (where you can read/write only the portions of interest) would be good.

How would you define "portions of interest"?

Certainly some changes are needed in the higher-level processing.
But there's possibly no need for a "real database" (if you mean
DBI/SQL etc., which carry significant overheads). Multiple files, for
example, may suffice.
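
Something along these lines, say (sketch only, untested, layout made
up): one Storable image per covered source file, so a report on a
single module only has to retrieve that module's blob.

    use strict;
    use warnings;
    use File::Path qw(mkpath);
    use Storable qw(nstore retrieve);

    my $dbdir = 'cover_db/structure';           # hypothetical layout

    sub _blob_path {
        my ($file) = @_;
        (my $name = $file) =~ s{[/\\]}{-}g;     # flatten the path into a file name
        return "$dbdir/$name";
    }

    # Write only the files whose coverage changed in this run.
    sub write_file_cover {
        my ($file, $data) = @_;
        mkpath $dbdir unless -d $dbdir;
        nstore $data, _blob_path($file);
    }

    # Read back just the one file a report is interested in.
    sub read_file_cover {
        my ($file) = @_;
        my $path = _blob_path($file);
        return -e $path ? retrieve $path : undef;
    }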

Tim [who would really like to find the time...]
