It's not really a Velocity-specific suggestion - I would just dump the heap
and trace the instances to the garbage collection roots. Eclipse MAT or
YourKit can do it, as can probably a lot of other Java tools.
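
For example, something along these lines (the exact flags depend on your JDK,
and the pid here is just a placeholder):

    jmap -dump:live,format=b,file=heap.hprof <pid>

then open the resulting .hprof in MAT and use its path-to-GC-roots view on the
suspect Token instances.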

On Tue, Jul 31, 2012 at 11:37 AM, Bradley Wagner <
bradley.wag...@hannonhill.com> wrote:

> Whoops, was misreading the API. It's actually that tempTemplateName
> variable.
>
> On Tue, Jul 31, 2012 at 11:21 AM, Bradley Wagner <
> bradley.wag...@hannonhill.com> wrote:
>
> > A StringWriter:
> >
> > String template = ... the string containing the dynamic template to generate ...
> >
> > // wrap the template string in a reader and prepare a writer for the output
> > StringWriter writer = new StringWriter();
> > StringReader reader = new StringReader(template);
> >
> > // create a temporary template name
> > String tempTemplateName = "velocityTransform-" + System.currentTimeMillis();
> >
> > // ask Velocity to evaluate it
> > VelocityEngine engine = getEngine();
> > boolean success = engine.evaluate(context, writer, tempTemplateName, reader);
> >
> > if (!success)
> > {
> >     LOG.debug("Velocity could not evaluate template with content: \n" + template);
> >     return null;
> > }
> > LOG.debug("Velocity successfully evaluated template with content: \n" + template);
> > String strResult = writer.getBuffer().toString();
> > return strResult;
> >
> > On Tue, Jul 31, 2012 at 11:10 AM, Nathan Bubna <nbu...@gmail.com> wrote:
> >
> >> What do you use for logTag (template name) when you are using evaluate()?
> >>
> >> On Tue, Jul 31, 2012 at 8:01 AM, Bradley Wagner
> >> <bradley.wag...@hannonhill.com> wrote:
> >> > Doing both. In the other case we're using a classpath resource loader to
> >> > evaluate templates like this:
> >> >
> >> > VelocityContext context = ... a context that we're building each time ...
> >> > VelocityEngine engine = ... our single engine ...
> >> > Template template = engine.getTemplate(templatePath);
> >> > StringWriter writer = new StringWriter();
> >> > template.merge(context, writer);
> >> >
> >> > However, we only have 7 of those static templates in our whole system.
> >> >
> >> > On Tue, Jul 31, 2012 at 10:52 AM, Nathan Bubna <nbu...@gmail.com>
> >> wrote:
> >> >>
> >> >> And you're sure you're only using VelocityEngine.evaluate?  Not
> >> >> loading templates through the resource loader?  Or are you doing both?
> >> >>
> >> >> On Mon, Jul 30, 2012 at 2:51 PM, Bradley Wagner
> >> >> <bradley.wag...@hannonhill.com> wrote:
> >> >> > Nathan,
> >> >> >
> >> >> > Tokens are referenced by
> >> >> > org.apache.velocity.runtime.parser.node.ASTReference which seem to be
> >> >> > referenced by arrays of org.apache.velocity.runtime.parser.node.Nodes.
> >> >> > Most of the classes referencing these things are AST classes in the
> >> >> > org.apache.velocity.runtime.parser.node package.
> >> >> >
> >> >> > Here's our properties file:
> >> >> >
> >> >> > runtime.log.logsystem.class = org.apache.velocity.runtime.log.Log4JLogChute,org.apache.velocity.runtime.log.AvalonLogSystem
> >> >> > runtime.log.logsystem.log4j.logger=com.hannonhill.cascade.velocity.VelocityEngine
> >> >> >
> >> >> > runtime.log.error.stacktrace = false
> >> >> > runtime.log.warn.stacktrace = false
> >> >> > runtime.log.info.stacktrace = false
> >> >> > runtime.log.invalid.reference = true
> >> >> >
> >> >> > input.encoding=UTF-8
> >> >> > output.encoding=UTF-8
> >> >> >
> >> >> > directive.foreach.counter.name = velocityCount
> >> >> > directive.foreach.counter.initial.value = 1
> >> >> >
> >> >> > resource.loader = class
> >> >> >
> >> >> > class.resource.loader.description = Velocity Classpath Resource Loader
> >> >> > class.resource.loader.class = org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
> >> >> >
> >> >> > velocimacro.permissions.allow.inline.local.scope = true
> >> >> >
> >> >> > Thanks!
> >> >> > Bradley
> >> >> >
> >> >> > On Mon, Jul 30, 2012 at 4:47 PM, Nathan Bubna <nbu...@gmail.com>
> >> wrote:
> >> >> >>
> >> >> >> On Mon, Jul 30, 2012 at 1:30 PM, Bradley Wagner
> >> >> >> <bradley.wag...@hannonhill.com> wrote:
> >> >> >> > Thanks for the input.
> >> >> >> >
> >> >> >> > What we're seeing is that Velocity seems to be holding on to a lot
> >> >> >> > of org.apache.velocity.runtime.parser.Token objects (around 5 million).
> >> >> >> > We allow people to write arbitrary Velocity templates in our system
> >> >> >> > and are evaluating them with:
> >> >> >> >
> >> >> >> > VelocityEngine.evaluate(Context context, Writer writer, String logTag,
> >> >> >> > Reader reader)
> >> >> >> >
> >> >> >> > I was under the impression that Templates evaluated this way are
> >> >> >> > inherently not cacheable. Is that the case? If that's not true, is
> >> >> >> > there a way to control the cache Velocity is using for these?
> >> >> >>
> >> >> >> me too.  just out of curiosity, what properties are you using for
> >> >> >> configuration?  and can you tell any more about what class is holding
> >> >> >> onto those Tokens?
> >> >> >>
> >> >> >> > On Thu, Jul 19, 2012 at 10:26 AM, Alex Fedotov <a...@kayak.com>
> >> >> >> > wrote:
> >> >> >> >
> >> >> >> >> I think that Velocity has one global hash table for macros from the
> >> >> >> >> *.vm libraries and that is more or less static for the lifetime of
> >> >> >> >> the Velocity engine.
> >> >> >> >>
> >> >> >> >> I wish there was a mechanism to control the list of the *.vm files
> >> >> >> >> and their order of lookup for each individual merge (thread). This
> >> >> >> >> would facilitate macro overloads based on the context.
> >> >> >> >> Unfortunately this feature is not available.
> >> >> >> >>
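> >> >> >> >> (For anyone configuring that global table: the library list is
> >> >> >> >> declared once, engine-wide, with something like the line below --
> >> >> >> >> the file names here are just placeholders:)
> >> >> >> >>
> >> >> >> >> velocimacro.library = VM_global_library.vm, my_macros.vm
> >> >> >> >>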
> >> >> >> >> I think the 1.7 behavior is (more or less):
> >> >> >> >>
> >> >> >> >> When a template reference is found (i.e. #parse("x")) it is looked up
> >> >> >> >> in the resource cache and if found there (with all the expiration
> >> >> >> >> checks, etc.) the parsed AST tree is used.
> >> >> >> >> If not found, the template is loaded from the file, actually parsed
> >> >> >> >> and put into the cache. During the actual parsing process the macros
> >> >> >> >> that are defined in the template are put into the macro manager cache,
> >> >> >> >> which is organized as:
> >> >> >> >> "defining template name (name space)" => "macro name" => AST macro code
> >> >> >> >> The AST is then rendered in the current context running #parse.
> >> >> >> >>
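> >> >> >> >> (Side note: whether that resource-cache hit happens at all depends on
> >> >> >> >> the loader's cache flag -- with the "class" loader name used earlier
> >> >> >> >> in this thread that would be something like:)
> >> >> >> >>
> >> >> >> >> class.resource.loader.cache = true
> >> >> >> >>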
> >> >> >> >> When the time comes to call a macro there is a lookup process which
> >> >> >> >> can be influenced by some props, but the most general case is:
> >> >> >> >>
> >> >> >> >> 1. Lookup in the global *.vm files, if found use that.
> >> >> >> >> 2. Lookup in the same "name space" that calls the macro, if found use that.
> >> >> >> >> 3. Going back through the "list" of the #parse-d templates, lookup in
> >> >> >> >> each name space on the stack.
> >> >> >> >>
> >> >> >> >> The stack can actually be very long too, for example:
> >> >> >> >>
> >> >> >> >> #foreach($templ in [1..5])
> >> >> >> >>   #parse("${templ}.vtl")
> >> >> >> >> #end
> >> >> >> >>
> >> >> >> >> #mymacro()
> >> >> >> >>
> >> >> >> >> The lookup list here would contain:
> >> >> >> >>
> >> >> >> >> 1.vtl, 2.vtl, 3.vtl, 4.vtl, 5.vtl
> >> >> >> >>
> >> >> >> >> This is true even for cases where the name is the same:
> >> >> >> >>
> >> >> >> >> #foreach($item in [1..5])
> >> >> >> >>   #parse('item.vtl')
> >> >> >> >> #end
> >> >> >> >>
> >> >> >> >> The lookup list here would contain:
> >> >> >> >>
> >> >> >> >> item.vtl, item.vtl, item.vtl, item.vtl, item.vtl
> >> >> >> >>
> >> >> >> >> There is no attempt to optimize the lookup list and collapse the
> >> >> >> >> duplicates.
> >> >> >> >>
> >> >> >> >> Unfortunately 1.7 also had some nasty concurrency bugs there that had
> >> >> >> >> to do with clearing the name space of all the macros and repopulating
> >> >> >> >> it again on each parse which did not work at all with multiple threads.
> >> >> >> >> One thread could clear the name space while another was doing a
> >> >> >> >> lookup, etc.
> >> >> >> >>
> >> >> >> >> I think there was an effort to redesign that part in 2.0, but I have
> >> >> >> >> not looked at that yet.
> >> >> >> >>
> >> >> >> >> Alex
> >> >> >> >>
> >> >> >> >> On Wed, Jul 18, 2012 at 5:42 PM, Bradley Wagner <
> >> >> >> >> bradley.wag...@hannonhill.com> wrote:
> >> >> >> >>
> >> >> >> >> > Hi,
> >> >> >> >> >
> >> >> >> >> > We recently made some changes to our software to use just a single
> >> >> >> >> > VelocityEngine as per recommendations on this group.
> >> >> >> >> >
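> >> >> >> >> > (Roughly, the single-engine setup looks like the sketch below --
> >> >> >> >> > the properties file name is just a placeholder and error handling
> >> >> >> >> > is omitted; the classes are java.util.Properties and
> >> >> >> >> > org.apache.velocity.app.VelocityEngine:)
> >> >> >> >> >
> >> >> >> >> > Properties props = new Properties();
> >> >> >> >> > props.load(getClass().getResourceAsStream("/velocity.properties"));
> >> >> >> >> > VelocityEngine engine = new VelocityEngine();
> >> >> >> >> > engine.init(props);
> >> >> >> >> > // this one engine instance is then reused for every render
> >> >> >> >> >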
> >> >> >> >> > We ran into an issue where macros were all of a sudden being shared
> >> >> >> >> > across template renders because we had not specified:
> >> >> >> >> > velocimacro.permissions.allow.inline.local.scope = true.
> >> >> >> >> > However, we also had not ever turned on caching in our props file
> >> >> >> >> > with: class.resource.loader.cache = true.
> >> >> >> >> >
> >> >> >> >> > Does this mean that macros are cached separately from whatever is
> >> >> >> >> > being cached in the class.resource.loader.cache cache? Is there any
> >> >> >> >> > way to control that caching, or is the only way just using this
> >> >> >> >> > property: velocimacro.permissions.allow.inline.local.scope = true?
> >> >> >> >> >
> >> >> >> >> > One side effect of our recent changes is that the app seems to have
> >> >> >> >> > an increased mem footprint. We're not *sure* it can be attributed to
> >> >> >> >> > velocity but I was trying to see what kinds of things Velocity could
> >> >> >> >> > be hanging on to and how much memory they might be taking up.
> >> >> >> >> >
> >> >> >> >> > Thanks!
> >> >> >> >> >
> >> >> >> >>
> >> >> >
> >> >> >
> >> >
> >> >
> >>
> >
> >
>
