Andy Wardley <[EMAIL PROTECTED]> wrote:
> Yep, Perrin's right.  Version 1 compiled templates to tree form.  Items
> in the tree were scalars (plain text) or references to directive objects
> which performed some processing (like INCLUDE another template, and so
> on).
> 
> This is actually pretty efficient when you have a limited directive set,
> but doesn't scale very well.  For version 1.00 I was more concerned
> about getting it functioning correctly than running fast (it was already
> an order of magnitude or two faster than Text::MetaText, the predecessor,
> so I was happy).  

I'm developing yet another toolkit for templating under mod_perl (don't
flame me YET, it does things that are significantly different from
Mason, Embperl and others: namely complete separation of data and
code, good multilingual support, and a reverse-include-based (aka OO
without code) component model).

For the first version, I'm doing the tree thing too; a page gets
compiled to an object, which is a hashref with properties; the contents
themselves are stored in arrayrefs, whose elements are either
scalar-refs (plain text), or element arrayrefs (method name &
arguments).
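As a rough sketch, the compiled structure looks something like this
(the property names and the 'insert_var' method name are made up for
illustration, not taken from the actual toolkit):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A compiled page: a hashref with properties, whose contents are an
# arrayref mixing scalar-refs (plain text) and element arrayrefs
# (method name & arguments).
my $text1 = "Hello, ";
my $text2 = "!\n";
my $page = {
    name     => 'greeting',            # a property of the page object
    contents => [
        \$text1,                       # scalar-ref: plain text
        [ 'insert_var', 'username' ],  # arrayref: method name & args
        \$text2,
    ],
};
```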

The reason for these scalar-refs is that they let me clone objects by
making new hashrefs, so that one object can alter the list without
affecting the other, while the strings themselves stay shared.
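The cloning trick could be sketched like this (a shallow copy under my
own assumptions about the structure, not the toolkit's actual code):

```perl
use strict;
use warnings;

my $text = "shared text\n";
my $page = { title => 'original', contents => [ \$text ] };

# Shallow-copy the hashref and the contents arrayref; the scalar-refs
# inside still point at the same underlying strings, so the text is
# shared while each clone owns its own list.
sub clone_page {
    my ($p) = @_;
    return { %$p, contents => [ @{ $p->{contents} } ] };
}

my $copy = clone_page($page);
push @{ $copy->{contents} }, [ 'include', 'footer' ];  # alters the copy only
```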

The evaluation function goes through all these things, calling the
methods by name, as in 'my $addtxt = $self->$method', and putting the
results together.  One important requirement is that no functions or
callbacks are allowed to print, because there can be global filters on
sections, which run at the end of evaluation.
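A minimal sketch of that evaluation loop, assuming a made-up 'insert_var'
directive and a page-level filter (again, names are hypothetical, not
from the real toolkit):

```perl
use strict;
use warnings;

package Page;

sub new { my ($class, %args) = @_; return bless { %args }, $class }

# Hypothetical directive: look a variable up in the page's data.
sub insert_var {
    my ($self, $name) = @_;
    return defined $self->{data}{$name} ? $self->{data}{$name} : '';
}

# Walk the contents, dispatching directives by method name. Nothing
# prints; everything is concatenated, so a global filter can run over
# the whole section at the end of evaluation.
sub evaluate {
    my ($self) = @_;
    my $out = '';
    for my $item (@{ $self->{contents} }) {
        if (ref $item eq 'SCALAR') {
            $out .= $$item;
        } else {
            my ($method, @args) = @$item;
            $out .= $self->$method(@args);   # dynamic method call
        }
    }
    $out = $self->{filter}->($out) if $self->{filter};
    return $out;
}

package main;

my ($hi, $bang) = ("hello, ", "!");
my $page = Page->new(
    data     => { user => 'espel' },
    contents => [ \$hi, [ 'insert_var', 'user' ], \$bang ],
    filter   => sub { uc $_[0] },   # an example global section filter
);
print $page->evaluate, "\n";
```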

[ compiling to perl code is good ]
> Apart from the benefits of speed, this also means that you can cache
> compiled templates to disk (i.e. write the Perl to a file).  Thus you
> can run a web server directly from template components compiled to Perl
> and you don't even need to load the template parser.

With tree objects, I do exactly that; there's a memory cache of 30 page
components using Tie::Cache, and a disk cache of compiled components,
using Storable.
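The disk-cache side could look roughly like this with core Storable
(the in-memory layer would be a hash tied to Tie::Cache with a
30-entry limit; I've left it out here since Tie::Cache comes from
CPAN, and the file name is just a temp file for the sketch):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);
use File::Temp qw(tempfile);

# A small compiled tree, as described earlier.
my $text = "plain text\n";
my $tree = { name => 'page', contents => [ \$text, [ 'include', 'header' ] ] };

# Write the compiled component to disk once...
my (undef, $cachefile) = tempfile(UNLINK => 1);
store($tree, $cachefile);

# ...and on a later request, read it straight back: no parsing needed,
# and the thawed structure keeps the same shape, scalar-refs included.
my $cached = retrieve($cachefile);
print ${ $cached->{contents}[0] };
```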

Like Andy, I'm interested in speed too, but design, features and
reliability come first.  Preliminary benchmarks show that speed is quite
reasonable; I'm getting 60 requests per second for a test page that
calls components from 4 files (but no db connections), on my desktop PC
(pIII-550 w/ Linux).  My impression is that reading a compiled tree from
a Storable file should be faster than compiling perl code, and that
dynamic method calls aren't particularly slow either.  On the other
hand, compiling to perl code could remove some of the overhead in the
evaluation functions, by including only the support for the things that
are actually used in that component.

So, if there's a serious argument that compiling to perl code is better,
I'm interested.  So far, I'm a bit doubtful that it's worth it, esp. for
a large site (where you'd spend a lot of time on the perl compilation
phase, reading perl files over and over; unless someone figures out a
way to
store bytecode?).

Btw, I'm doing this for a website (www.iagora.com), and it's already
stable enough that the production server runs it;  I just got the
approval to release it as Open Source... as soon as I find the time!
I'll post a pointer here before announcing it on sites like
freshmeat.net, in a week or two at most.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html
