One more question. One of the other areas where we are making changes is how
macros are called. Right now the lookup process is fairly rigid, and it can be
inefficient as well. Consider a couple of examples:

1. Scenario one - overload macro A in the template. This works now because the
lookup checks the namespace where the call is made first.

macros.vtl
========
#macro(B)
#end

#macro(A)
    #B()
#end
========

template.vtl
=========
#parse('macros.vtl')
#macro(A)
   ## overloaded macro A
#end
#A() ## this calls the local macro A, which is expected.
=========

2. Scenario two - overload macro B - this does not work now.

template.vtl
=========
#parse('macros.vtl')
#macro(B)
   ## overloaded macro B
#end
#A() ## this calls macro A, which then does NOT call the overloaded B, since
     ## the lookup prefers the B from the same namespace where A is defined.
=========

We have created a custom directive, #call, which is similar to RuntimeMacro
but allows more control over the lookup process. It also allows calls like:
#call('self::macroName')
#call('nameSpace::macroName')
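
For reference, a directive like this can be plugged in through the standard
"userdirective" property in velocity.properties. Below is a rough sketch of
the skeleton, assuming Velocity 1.7's Directive API; the class name and the
lookup helper are placeholders, not our actual implementation.

CallDirective.java (sketch)
========
import java.io.IOException;
import java.io.Writer;

import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.Node;

// Registered via: userdirective = com.example.CallDirective
public class CallDirective extends Directive
{
    public String getName() { return "call"; }

    public int getType() { return LINE; }

    public boolean render(InternalContextAdapter context, Writer writer, Node node)
        throws IOException
    {
        // The first argument is the macro reference, e.g. "self::macroName"
        // or "nameSpace::macroName". Error handling is omitted here.
        String target = (String) node.jjtGetChild(0).value(context);
        String[] parts = target.split("::", 2);
        String namespace = parts[0];
        String macroName = parts[1];

        // Resolve the macro in the requested namespace instead of relying on
        // the default lookup order, then render it with the remaining
        // arguments. The resolution itself is engine-specific and omitted.
        return renderMacro(namespace, macroName, context, writer, node);
    }

    // Placeholder for the namespace-aware lookup and invocation.
    private boolean renderMacro(String namespace, String macroName,
                                InternalContextAdapter context, Writer writer,
                                Node node) throws IOException
    {
        throw new UnsupportedOperationException("illustration only");
    }
}
========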

The way every #parse(d) template is appended to the "macroLibraries" list can
be very bad for macro lookup speed:
#foreach( $item in $items )
    #parse('item.vtl')
#end

You end up with many "item.vtl" entries added to the list, which then cause a
lot of duplicate macro lookups across the same namespaces.
We have fixes that avoid adding duplicate sequences of template names to the
macroLibraries list; a rough sketch of the idea follows.
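
To illustrate the kind of fix we mean, here is a simple sketch. The class and
method names are made up; the real change lives inside the engine's #parse
handling.

MacroLibraryList.java (sketch)
========
import java.util.List;

// Illustration only: append a template name to the macro-libraries list only
// if it is not already there, so a #parse inside a loop does not grow the
// list on every iteration. Lookup cost then stays proportional to the number
// of distinct libraries rather than the number of #parse calls.
final class MacroLibraryList
{
    static void addLibrary(List<String> macroLibraries, String templateName)
    {
        if (!macroLibraries.contains(templateName))
        {
            macroLibraries.add(templateName);
        }
    }
}
========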

Do you guys have any plans to address this area?

We are also inclined to always use namespaces for macros, to always allow
local macros to replace globals, and to keep the macros in a ConcurrentHashMap
in the Template object.
This way VelocimacroManager can be dropped altogether: there is no issue with
keeping the VelocimacroManager in sync while templates are reloaded, there are
fewer synchronization problems, and so on.
The "isFromLib" flag for macros would also no longer be needed, so all the hair
around maintaining it during the "init" phase could be removed as well.
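
A rough sketch of the direction we have in mind is below. This is an
illustration only: the class and method names are placeholders, not Velocity's
actual Template API.

TemplateMacros.java (sketch)
========
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustration only: each Template would keep its own macros in a
// ConcurrentHashMap, and lookup would fall back to the global library
// templates. No VelocimacroManager, no global synchronized blocks, and no
// "isFromLib" flag to maintain during init.
final class TemplateMacros
{
    /** Placeholder for whatever object represents a parsed macro body. */
    interface MacroDefinition { }

    private final ConcurrentMap<String, MacroDefinition> localMacros =
        new ConcurrentHashMap<String, MacroDefinition>();
    private final TemplateMacros globalLibrary; // may be null

    TemplateMacros(TemplateMacros globalLibrary)
    {
        this.globalLibrary = globalLibrary;
    }

    void addMacro(String name, MacroDefinition macro)
    {
        // A local definition simply replaces any earlier one. Reloading a
        // template rebuilds this map, so there is nothing external to keep
        // in sync.
        localMacros.put(name, macro);
    }

    MacroDefinition getMacro(String name)
    {
        // Local macros always win over globals.
        MacroDefinition macro = localMacros.get(name);
        if (macro == null && globalLibrary != null)
        {
            macro = globalLibrary.getMacro(name);
        }
        return macro;
    }
}
========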

Alex

On Fri, Mar 6, 2015 at 2:48 AM, Claude Brisson <cla...@renegat.net> wrote:

> Hi.
>
> We'll publish the 2.0 one day, it's slowly moving...
>
> Thank you for your proposal. Of course, it is interesting for the project
> if you can share your 1.7 bugfixes as well as your synchronization work on
> the 2.0 branch.
>
> The process is rather easy: patches are to be submitted on JIRA:
> https://issues.apache.org/jira/browse/VELOCITY/
>
> Very small patches (too small to be considered intellectual property)
> don't need anything else. For more substantial patches, some people say you
> should at least add a sentence like "feel free to use this patch under the
> Apache License" in the issue.
>
>
>   Claude
>
>
>
> On 04/03/2015 17:48, Alex Fedotov wrote:
>
>> Hello Guys,
>>
>> We use Velocity quite a bit and are considering moving forward and
>> deploying the trunk build in production even though there never was a 2.0
>> release. We currently use a custom 1.7 build with a couple of bugs fixed
>> by us.
>>
>> One area where we are going to make some changes before it can go in
>> production is the synchronization in the VelocimacroFactory and
>> VelocimacroManager classes. It appears to be a mix of ConcurrentHashMaps
>> and global synchronized blocks. Our code is highly threaded and loads quite a
>> few templates dynamically at run-time, so these global locks
>> would likely become a synchronization hotspot.
>>
>> Our plan is to see if we can clean up the synchronization in this area: keep
>> it robust, but remove the global synchronized blocks.
>>
>> If you guys have any interest in accepting our contribution back, please let
>> me know what the process should be.
>>
>> Thank you,
>> Alex
>>
>>
>
