Serge Knystautas wrote:
Besides, Velocity is extremely flexible, and it's trivial to stick in a
memory-based LRU template cache. JDK 1.4 provides an LRU map, so I
can create that with a cap instead of the default hashtable, and I've
got an impl that I would hope is as reliable as what Attila
implemented.
It should be trivial. The thing is designed to be pluggable for this
reason.
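The "JDK 1.4 LRU map" Serge mentions is LinkedHashMap in access order with a removeEldestEntry override. A minimal sketch of that cap-limited cache (class and key/value names here are illustrative, not Velocity API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LinkedHashMap that evicts the least-recently-used
// entry once the cap is exceeded -- the stock JDK way to get an LRU map.
public class LruTemplateCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruTemplateCache(int capacity) {
        // accessOrder = true: get() promotes an entry to most-recent
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruTemplateCache<String, String> cache = new LruTemplateCache<>(2);
        cache.put("a.vm", "ast-a");
        cache.put("b.vm", "ast-b");
        cache.get("a.vm");          // touch a.vm, so b.vm is now eldest
        cache.put("c.vm", "ast-c"); // over capacity: evicts b.vm
        System.out.println(cache.containsKey("b.vm")); // false
        System.out.println(cache.containsKey("a.vm")); // true
    }
}
```

Note this map is not thread-safe; a real resource cache would wrap it with Collections.synchronizedMap or equivalent.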
But my real goal is to cache without using memory...
And compute w/o cycles!
The holy grail of computing.
Really though, something like caching the parsed templates (i.e., the
AST nodes) to disk
Well, you're talking about storing the AST in some format on disk, but
there is already a way of storing a VTL AST on disk and that is the way
it is stored already, as a .vtl file. I mean, it is not obvious to me
offhand why re-instantiating the AST from the disk in your scenario is
going to be that much more efficient than just reparsing the template
from its canonical VTL format.
I mean, at the end of the day, you're just reading the template (in some
format) off the disk again. Okay, you could have a binary format for
storage that is a bit more efficient than the VTL text format, but how
much difference would it make?
In terms of overall Velocity development, how much value added could
there be?
Yes, it is a fair point as to whether the serializable API would be
faster, but I would hope so. Given the number of pages, I have to
turn off memory caching, so every page requires parsing.
My benchmarks calculated 37ms/MB to read a file (just read into a
string), 754ms/MB to compile (create a Template object), and 31ms/MB
to merge it with a simple context. Obviously if I can eat into that
700+ms/MB rate by serializing the ASTNodes or something else, then
that'd be great.
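Per-MB rates like these come from dividing elapsed time by input size. A sketch of that measurement shape (compile() is a placeholder workload, not Velocity's actual parse path):

```java
// Illustrates how a figure like "754ms/MB to compile" is derived:
// elapsed milliseconds divided by input size in megabytes.
public class RateBench {
    // Placeholder standing in for real template compilation.
    static void compile(String source) {
        source.chars().count();
    }

    public static void main(String[] args) {
        String source = "x".repeat(1 << 20); // 1 MB of template text
        long start = System.nanoTime();
        compile(source);
        double elapsedMs = (System.nanoTime() - start) / 1e6;
        double mb = source.length() / (1024.0 * 1024.0);
        System.out.printf("%.1f ms/MB%n", elapsedMs / mb);
    }
}
```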
There should be no reason why not. (Thinking, thinking...) I've oft
wanted to modify the VelocityEngine and add something like
Template t = compileTemplate(Reader r)
and also maybe a helper to serialize and de-serialize a Template instance.
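If the AST were Serializable, the save/load helpers could be plain Java object serialization. A hypothetical sketch (stock Velocity AST nodes do not guarantee Serializable, so the String used in the example below is just a stand-in for a serializable AST):

```java
import java.io.*;

// Hypothetical persistence helper for a parsed template: write the
// AST to disk with ObjectOutputStream, read it back with
// ObjectInputStream. Assumes the object graph implements Serializable.
public class TemplateStore {
    public static void save(Serializable ast, File f) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(ast);
        }
    }

    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T load(File f)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(f))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("tpl", ".ser");
        save("pretend-this-is-an-ast", tmp); // String is Serializable
        String back = load(tmp);
        System.out.println(back.equals("pretend-this-is-an-ast")); // true
        tmp.delete();
    }
}
```

Whether deserializing that graph beats reparsing the .vtl text is exactly the open question in this thread; the helper only makes the experiment easy to run.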
And beyond serialization, if Velocity templates could get translated
into bytecode, then we've changed velocity from interpreted to
compiled templating, much like what Resin is doing with PHP
(http://wiki.caucho.com/Quercus). I'm not expecting someone else to
write it... Just wondering if it seemed at all feasible, and asking
whether other people have dealt with this.
We thought about that years ago, and came to the conclusion that, absent
some kind of dynamic optimization like a JIT (which I'd argue you could
do now), you wouldn't get much.
The code in each node of the AST needs to be executed, and it already is
bytecode. So I never saw what you'd be able to wring out of it, other
than saving the method calls where one node in an AST invokes its children.
Maybe a JIT would help. Dunno.
geir