On Wed, Aug 22, 2001 at 01:49:56PM -0700, Tony Payne wrote:
> Nice.  Here are my latest profiling results, which highlight a couple of
> routines which might also be candidates for XSing... possibly in v3?

Let's have a look...

> %Time ExclSec CumulS #Calls sec/call Csec/c  Name
>  0.06   4.705 67.230  15016   0.0003 0.0045  Template::Provider::__ANON__

Makes sense.  Template::Provider::__ANON__ is what the compiled template
subs are known by.  Most things happen from inside one of these.
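
For anyone curious, the compiled form is just an anonymous Perl sub
generated by the parser.  A much simplified sketch (not the exact code
it generates) looks something like this:

    sub {
        my $context = shift;             # Template::Context
        my $stash   = $context->stash;   # variable store
        my $output  = '';
        $output .= 'Hello ';
        $output .= $stash->get('name');  # [% name %]
        return $output;
    }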

>  0.03   2.727 10.496  70016   0.0000 0.0001  Template::Stash::XS::get

Yep, every variable access.  In v3, compile time constants will be
able to eliminate some, if not many, of these calls.
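
To illustrate the idea (the interface below is purely illustrative, not
a committed v3 design): a value that's known at compile time can be
folded into the generated code as a literal, so the runtime stash
lookup disappears entirely.

    # purely illustrative -- not a committed v3 interface
    my $tt = Template->new({
        CONSTANTS => { version => '3.00' },
    });

    # [% constants.version %] would then be compiled down to the literal
    # string '3.00', so no Stash::XS::get call is made for it at runtime;
    # a plain [% version %] would still be a stash lookup.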

>  0.03   2.572 50.012  14448   0.0002 0.0035  Template::Context::include
>  0.02   1.639  1.559  14732   0.0001 0.0001  Template::Stash::clone

These two go hand in hand.  include() does little more than wait around
for the stash to clone(), call a template sub, then return.  I think 
Perrin already mentioned that PROCESS is your friend over INCLUDE for
speed.  Another thought is a copy-on-write stash which only does the
clone when a variable gets updated.  This isn't possible with the 
current architecture because the stash can't clone itself, only the 
surrounding context can clone it (because the context is what holds
the refs to the original and the clone).  Still, for v3, it's a
definite possibility.
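
For reference, the difference in the templates is just the directive;
PROCESS skips the variable localisation (and hence the clone):

    [% INCLUDE header title = 'Home' %]   # clones the stash; variables
                                          # set inside header are thrown
                                          # away afterwards
    [% PROCESS header title = 'Home' %]   # no clone, so faster, but any
                                          # changes made in header persist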

>  0.02   1.405  2.703   1181   0.0012 0.0023  Petsmart::DBO::Product::lo...

S.E.P.    :-)

>  0.01   1.129  1.040  16297   0.0001 0.0001  Template::Iterator::get_next

Iterators are very slow compared to a regular for { } loop.  In v3,
FOREACH will be iterator driven, as it is currently, and FOR will be a
fast, iterator-free (no magic) loop for those who want it.
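
In terms of the generated Perl, the difference would be roughly this
(a simplified sketch of the compiled loops, not actual parser output):

    # iterator-driven FOREACH: a method call or two per element
    my ($value, $error) = $iterator->get_first();
    while (! $error) {
        # ... loop body ...
        ($value, $error) = $iterator->get_next();
    }

    # plain FOR: an ordinary Perl loop over the list, no iterator object
    for my $value (@$list) {
        # ... loop body ...
    }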

[ snip sep ]

>  0.01   0.749  0.747   1417   0.0005 0.0005  Template::Provider::load
>  0.01   0.719  1.121  15300   0.0000 0.0001  Template::Context::template

These two are related because template() calls fetch() which calls load().
I guess this shows up because of an underlying I/O bottleneck?
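
If it is I/O, the usual knobs to try are caching the compiled templates
on disk and statting the source files less often, along these lines
(the option values are illustrative only):

    my $tt = Template->new({
        COMPILE_DIR => '/tmp/tt_cache',  # cache compiled templates on disk
        COMPILE_EXT => '.ttc',
        STAT_TTL    => 60,               # only re-stat source files every 60s
    });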

[ more seps ]

The reassuring thing is that all the things that I expected to show up
there showed up there.  And for most, if not all of them, I think we've
got some good ideas about ways to work around some of the bottlenecks.


Cheers
A
