Performance is very difficult to determine with simple tests, so what I've been doing is using QA suite stress tests that run for several minutes as my benchmarks. Because I test on different hardware, the individual results are only comparable on that same hardware.

I run jvisualvm to identify hotspots, then manually inspect the code for possible improvements. For example, the modification might be as simple as replacing a call to array.length inside a loop with a cached field. I don't make any performance improvements that make the code harder to read, just simple things like replacing string concatenation in loops.
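For illustration only, here is a minimal sketch of the two kinds of change described above: caching array.length in a field instead of reading it on every iteration, and replacing string concatenation in a loop with a StringBuilder. It is not River code, and the class and field names are hypothetical.

    // Hypothetical example of the micro-optimisations described above;
    // not from the River codebase, just an illustration of the pattern.
    public class NameList {

        private final String[] names;
        private final int count;   // caches names.length so the loop below
                                    // doesn't read it on every iteration

        public NameList(String[] names) {
            this.names = names;
            this.count = names.length;
        }

        // Before: reads names.length on each iteration and creates a new
        // String object for every '+=' concatenation.
        public String joinNaive() {
            String result = "";
            for (int i = 0; i < names.length; i++) {
                result += names[i] + ",";
            }
            return result;
        }

        // After: uses the cached length and accumulates into a StringBuilder,
        // avoiding the intermediate String objects.
        public String join() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < count; i++) {
                sb.append(names[i]).append(',');
            }
            return sb.toString();
        }
    }

Neither change alters behaviour, which keeps that kind of edit easy to review.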

Generally, though, my focus isn't on performance; it's on correctness and long-standing Jini issues that users have identified.

Despite how good people think our code is, we have inherited a lot of legacy code that is very hard to reason about.



Patricia Shanahan wrote:
On 4/6/2013 7:26 PM, Greg Trasuk wrote:
...
Once we have a stable set of regression tests, then OK, we could
think about improving performance or using Maven repositories as the
codebase server.
...

I think there is something else you need before it would be a good idea to release any changes for the sake of performance: some examples of workloads whose performance you want to improve, and that are in fact improved by the changes.

I've worked on many performance campaigns, at several different companies including Cray Research and Sun Microsystems. Each campaign has been based on one or more benchmarks. The benchmarks could be industry-standard benchmarks, or could be sanitized versions of user workloads.

I've had many opportunities to compare measurements to expert predictions, including my own predictions, about what needs to be changed to improve performance. In no case have the expert predictions matched measurement.

Based on that experience I had serious concerns about working on River performance with no benchmarks. Are there any River users who care enough about performance to help with creating benchmarks, and running them in highly parallel environments? If not, maybe performance changes should not be a priority.

Without benchmarks, changes intended to improve performance may carry functional risk for no gain, or even for a performance regression.

Patricia