On 1/5/09 9:33, Jarkko Viinamäki wrote:
Hmm. java.lang.String already caches the hash value internally so
that wouldn't do anything. Besides, the time it takes to compute the
hash is extremely small compared to other operations (e.g.
ASTReference.execute()) that happen inside the VMProxyContext.get
method.

You're right, nice.
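As a sketch of the point above: java.lang.String computes its hash lazily on the first hashCode() call and stores it in a private field, so repeated calls just return the cached value. The caching itself isn't directly observable without reflection, but the documented hash formula (s[0]*31^(n-1) + ... + s[n-1]) can be reproduced; the class and method names here are illustrative, not from Velocity.

```java
public class StringHashDemo {
    // Recompute String's documented hash: s[0]*31^(n-1) + ... + s[n-1]
    static int manualHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        String key = "ASTReference";
        // The first call computes and caches; later calls return the cached field.
        int first = key.hashCode();
        int second = key.hashCode();
        System.out.println(first == second);          // true
        System.out.println(first == manualHash(key)); // true
    }
}
```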

Byron Foster-2 wrote:
ASTReference could register every reference in init() in an
identity table. When put() is called on the VelocityContext, the
key would first be looked up in the identity table (probably just a
hash); if found, the String from the identity table would be used
instead. HashMap.get works much faster in this case because
equality can be determined from str1 == str2 instead of
str1.equals(str2), as is done now.
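The proposal above might look something like the following sketch. The registry name and methods are hypothetical, not Velocity API; the point is that once every key is mapped to one canonical String instance, lookups can rely on reference equality and even use an IdentityHashMap.

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityTableSketch {
    // Hypothetical registry mapping each reference name to its canonical String.
    static final Map<String, String> canonical = new HashMap<>();

    // Returns the canonical instance for a name, registering it on first use.
    static String register(String name) {
        return canonical.computeIfAbsent(name, n -> n);
    }

    public static void main(String[] args) {
        String registered = register("foo");
        // A key built at runtime is a distinct object with equal contents.
        String runtimeKey = new String("foo");
        System.out.println(registered == runtimeKey); // false
        // Resolving through the registry yields the canonical instance,
        // so later comparisons can use == instead of equals().
        String resolved = canonical.get(runtimeKey);
        System.out.println(resolved == registered);   // true
        // An IdentityHashMap keyed on canonical instances then works:
        Map<String, Object> values = new IdentityHashMap<>();
        values.put(registered, 42);
        System.out.println(values.get(resolved));     // 42
    }
}
```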

I don't quite see what you mean by this. How do you get a list of
"every reference" already in the init method? HashMap lookups are
already very fast, and attempts to optimize them just add
complexity for very little gain, if any. Personally I value
simplicity and maintainability more.

In my convoluted way I was saying we should intern the strings.
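For context, String.intern() already provides exactly this canonical-instance behavior via the JVM's string pool, so no hand-rolled registry is needed. A minimal demonstration:

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = new String("velocity"); // fresh object on the heap
        String b = "velocity";             // compile-time literal, already interned
        System.out.println(a == b);        // false: distinct objects
        System.out.println(a.equals(b));   // true: same contents
        // intern() returns the canonical instance from the string pool,
        // so reference comparison succeeds afterwards.
        System.out.println(a.intern() == b); // true
    }
}
```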

The first rule of optimizing is: don't do it. The second one is:
don't do it yet. And if you still plan to do it, make sure you get
rid of the worst bottleneck while ensuring that the code is still
simple and maintainable. HashMap lookups aren't the most costly
operations in this case.

Performance matters, especially in a library, and in my experience it correlates directly with the quality of the package. I agree insofar as I certainly don't advocate code obscurity for the sake of marginal performance gains.

I sprinkled intern() calls around in appropriate places, and benchmarking shows about a 5% increase in speed. I'd call a 5% bump plus lower memory usage a good day.
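The kind of effect described above can be probed with a crude single-JVM timing sketch like the one below (not a rigorous benchmark; JMH would be the proper tool, and numbers will vary). The key name is made up. The mechanism: String.equals short-circuits on this == anObject, so an interned key hits that fast path inside HashMap.get, while an equal-but-distinct key forces a character-by-character comparison.

```java
import java.util.HashMap;
import java.util.Map;

public class InternLookupBench {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        String internedKey = "someTemplateReference".intern();
        map.put(internedKey, 1);
        // Equal contents, distinct object: equals() must compare characters.
        String freshKey = new String("someTemplateReference");

        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 5_000_000; i++) sum += map.get(freshKey);
        long freshNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < 5_000_000; i++) sum += map.get(internedKey);
        long internedNanos = System.nanoTime() - start;

        System.out.println("fresh=" + freshNanos + "ns interned=" + internedNanos + "ns");
        System.out.println(sum); // sanity check: 10000000
    }
}
```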
