Byron Foster-2 wrote:
> 
> Top 10 offending methods.  Some thoughts:  It seems like the  
> context.gets are taking some time.  We could use a VelocityReference  
> that would be a delegate for String, but the hashCode value would be  
> saved, so it wouldn't have to recalculate on every get. ASTReference  
> would then use VelocityReference.
> 

Hmm. java.lang.String already caches the hash value internally, so that
wouldn't help. Besides, the time it takes to compute the hash is
extremely small compared to other operations (e.g. ASTReference.execute())
that happen inside VMProxyContext.get().
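For reference, OpenJDK's String lazily computes its hash on the first
hashCode() call and caches it; here's a simplified sketch of that pattern
(illustrative names, not the actual JDK source):

```java
// Simplified sketch of the lazy hash caching done inside java.lang.String.
// Names are illustrative; this is not the actual JDK source.
final class CachingString {
    private final char[] value;
    private int hash; // cached hash; 0 means "not yet computed"

    CachingString(String s) {
        this.value = s.toCharArray();
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0 && value.length > 0) {
            for (char c : value) {
                h = 31 * h + c; // same polynomial as String.hashCode()
            }
            hash = h; // subsequent calls just return the cached value
        }
        return h;
    }
}
```

Because of this cache, wrapping keys in a VelocityReference to memoize the
hash would just duplicate work the JDK already does.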


Byron Foster-2 wrote:
> 
> ASTReference could register every reference in init() in an identity  
> table.  When the put method was called on the VelocityContext it would  
> first look it up with the identity table (probably just a hash), if  
> found it would use the String from the identity table.  HashMap get  
> works much faster in this case because equality can be determined from  
> str1 == str2 instead of str1.equals(str2), as it is done now.
> 

I don't quite see what you mean by this. How do you get a list of "every
reference" already in the init method? HashMap lookups are already very fast,
and attempts to optimize them just add complexity for very little gain, if
any. Personally, I value simplicity and maintainability more.
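For concreteness, the identity-table idea described above could be sketched
with interned keys and a java.util.IdentityHashMap (hypothetical names; this
is not Velocity code):

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch of the proposed identity-table lookup: intern each
// reference name once at init() time, then look values up with == instead
// of equals(). Not actual Velocity code.
final class IdentityContext {
    private final Map<String, Object> values = new IdentityHashMap<>();

    // Called once per reference at init() time; intern() returns a
    // canonical String instance so later == comparisons can succeed.
    String register(String referenceName) {
        return referenceName.intern();
    }

    void put(String key, Object value) {
        values.put(key.intern(), value);
    }

    Object get(String internedKey) {
        return values.get(internedKey); // IdentityHashMap compares with ==
    }
}
```

Even then, the gain over HashMap's equals()-based lookup would likely be
marginal, since String.equals already short-circuits on reference identity.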
 
The first rule of optimization is: don't do it. The second rule is: don't do
it yet. And if you still plan to do it, make sure you remove the worst
bottleneck while keeping the code simple and maintainable. HashMap lookups
aren't the most costly operations in this case.

-- 
View this message in context: 
http://www.nabble.com/More-Performance-tp21291494p21294298.html
Sent from the Velocity - Dev mailing list archive at Nabble.com.

