I think Velocity has one global hash table for macros from the *.vm
libraries, and that table is more or less static for the lifetime of the
Velocity engine.
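For reference, that engine-wide list of library files is set through the
velocimacro.library property (comma-separated; VM_global_library.vm is the
default). A sketch, with mymacros.vm as a made-up filename:

```
velocimacro.library = VM_global_library.vm, mymacros.vm
```

This is read once when the engine initializes, which is why the table stays
fixed for the engine's lifetime.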

I wish there were a mechanism to control the list of *.vm files and their
lookup order for each individual merge (thread). That would make it possible
to overload macros based on the context.
Unfortunately, no such feature is available.

I think the 1.7 behavior is (more or less):

When a template reference is encountered (e.g. #parse("x")), it is looked up
in the resource cache; if it is found there (subject to the expiration
checks, etc.), the cached AST is used.
If it is not found, the template is loaded from the file, actually parsed,
and put into the cache. During the parsing, the macros defined in the
template are put into the macro manager cache, which is organized as:
"defining template name (namespace)" => "macro name" => AST macro code
The AST is then rendered in the current context running the #parse.
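That two-level mapping could be modeled roughly like this (a sketch, not
Velocity's actual code: the class and method names are mine, and a plain
String stands in for the parsed macro AST):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of the macro manager cache described above:
// "defining template name (namespace)" -> "macro name" -> macro body.
class MacroManagerCache {
    private final Map<String, Map<String, String>> byNamespace = new ConcurrentHashMap<>();

    // Called while a template is being parsed, once per #macro definition.
    void register(String namespace, String macroName, String astBody) {
        byNamespace
            .computeIfAbsent(namespace, ns -> new ConcurrentHashMap<>())
            .put(macroName, astBody);
    }

    // Returns the macro body, or null if that namespace does not define it.
    String find(String namespace, String macroName) {
        Map<String, String> macros = byNamespace.get(namespace);
        return macros == null ? null : macros.get(macroName);
    }
}
```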

When the time comes to call a macro, a lookup process runs. It can be
influenced by some properties, but the most general case is:

1. Look it up in the global *.vm libraries; if found, use that.
2. Look it up in the "namespace" of the template calling the macro; if
found, use that.
3. Walk back through the "list" of the #parse-d templates, looking it up in
each namespace on the stack.
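The three-step order above could be sketched like this (illustrative only;
the real lookup goes through Velocity's macro manager and the
property-controlled variations are omitted, and all names here are mine):

```java
import java.util.List;
import java.util.Map;

// Sketch of the lookup order described above; not Velocity's actual code.
class MacroLookup {
    private final Map<String, String> globalLibrary;           // macros from the *.vm libraries
    private final Map<String, Map<String, String>> namespaces; // namespace -> macro name -> body

    MacroLookup(Map<String, String> globalLibrary,
                Map<String, Map<String, String>> namespaces) {
        this.globalLibrary = globalLibrary;
        this.namespaces = namespaces;
    }

    // parseStack: the #parse-d templates, walked back from the current one.
    String resolve(String macroName, String callerNamespace, List<String> parseStack) {
        // 1. The global *.vm libraries win.
        String body = globalLibrary.get(macroName);
        if (body != null) return body;
        // 2. Then the namespace of the template calling the macro.
        body = inNamespace(callerNamespace, macroName);
        if (body != null) return body;
        // 3. Then each namespace on the #parse stack, in order.
        for (String ns : parseStack) {
            body = inNamespace(ns, macroName);
            if (body != null) return body;
        }
        return null; // unresolved macro
    }

    private String inNamespace(String ns, String name) {
        Map<String, String> macros = namespaces.get(ns);
        return macros == null ? null : macros.get(name);
    }
}
```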

The stack can actually get very long, too. For example:

#foreach($templ in [1..5])
  #parse("${templ}.vtl")
#end

#mymacro()

The lookup list here would contain:

1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl

This is true even for cases where the name is the same:

#foreach($item in [1..5])
  #parse('item.vtl')
#end

The lookup list here would contain:

item.vtl, item.vtl, item.vtl, item.vtl, item.vtl

There is no attempt to optimize the lookup list and collapse the duplicates.

Unfortunately, 1.7 also had some nasty concurrency bugs in this area,
related to clearing a namespace of all its macros and repopulating it again
on each parse, which did not work at all with multiple threads:
one thread could clear the namespace while another was doing a lookup, etc.
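A deterministic toy illustration of that race (invented names, not
Velocity's actual code): if a lookup thread runs in the window between the
clear and the repopulation, the macro is simply gone.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the clear-then-repopulate pattern described above.
// On each re-parse, thread A wipes the namespace and then puts the macros
// back; a lookup from thread B that lands in between finds nothing.
class NamespaceRepopulation {
    private final Map<String, String> namespace = new HashMap<>();

    NamespaceRepopulation() {
        namespace.put("mymacro", "macro body");
    }

    // Thread A, step 1: clear the namespace before re-parsing.
    void clearForReparse() {
        namespace.clear();
    }

    // Thread A, step 2: re-add the macros as the template is re-parsed.
    void repopulate() {
        namespace.put("mymacro", "macro body");
    }

    // Thread B: an ordinary macro lookup.
    String lookup(String name) {
        return namespace.get(name);
    }
}
```

Run single-threaded the window is harmless, but with two threads it is
exactly the hole described above, and the unsynchronized HashMap makes the
behavior even less predictable.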

I think there was an effort to redesign that part in 2.0, but I have not
looked at that yet.

Alex

On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
bradley.wag...@hannonhill.com> wrote:

> Hi,
>
> We recently made some changes to our software to use just a single
> VelocityEngine as per recommendations on this group.
>
> We ran into an issue where macros were all of a sudden being shared
> across template renders because we had not
> specified: velocimacro.permissions.allow.inline.local.scope = true.
> However, we also had not ever turned on caching in our props file
> with: class.resource.loader.cache = true.
>
> Does this mean that macros are cached separately from whatever is being
> cached by class.resource.loader.cache? Is there any way to control that
> caching, or is setting velocimacro.permissions.allow.inline.local.scope =
> true the only way?
>
> One side effect of our recent changes is that the app seems to have an
> increased mem footprint. We're not *sure* it can be attributed to velocity
> but I was trying to see what kinds of things Velocity could be hanging on
> to and how much memory they might be taking up.
>
> Thanks!
>