It's odd that J2DBench showed a difference since nothing you did should
have affected those benchmarks. I don't think it has any benchmark
which shows the impact of pipeline validation, so your standalone test
is the only one I think addresses the issue.
Rather than setting up a machine for a 72-hour run (not sure a run of
what, though - J2DBench?), I'd rather see you either try doing the check
with a virtual method call (Pipe.needsLoops) or just go back to the
old style of initializing them in the validation branches that we know
need them; we can revisit this mechanism some other time, when we have
the time to really figure out how to make it cheap. In particular, I
have some ideas for how to make validation incredibly cheap at the cost
of a few K of lookup tables per SurfaceData, but I think the scale of
that is beyond us for now...
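A minimal sketch of the two check styles being compared. Pipe.needsLoops is the name from the thread; everything else here (LoopPipe, OtherPipe, ValidationSketch) is a hypothetical stand-in, not the real Java2D pipeline classes:

```java
// Sketch of the two validation-check styles under discussion.
// "Pipe" and "needsLoops" follow the names in the thread; LoopPipe,
// OtherPipe, and ValidationSketch are illustrative stand-ins only.
interface Pipe {
    // Virtual-call style: each pipe reports whether it needs loops.
    boolean needsLoops();
}

class LoopPipe implements Pipe {
    public boolean needsLoops() { return true; }
}

class OtherPipe implements Pipe {
    public boolean needsLoops() { return false; }
}

public class ValidationSketch {
    // Style 1: type check (the instanceof approach Mario mentions below).
    static boolean needsLoopsInstanceof(Pipe p) {
        return p instanceof LoopPipe;
    }

    // Style 2: virtual method call (Jim's Pipe.needsLoops suggestion).
    static boolean needsLoopsVirtual(Pipe p) {
        return p.needsLoops();
    }

    public static void main(String[] args) {
        Pipe a = new LoopPipe(), b = new OtherPipe();
        System.out.println(needsLoopsInstanceof(a) + " " + needsLoopsVirtual(a));
        System.out.println(needsLoopsInstanceof(b) + " " + needsLoopsVirtual(b));
    }
}
```

Both return the same answers; the performance question is which the JIT handles more cheaply at the validation site.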
...jim
Mario Torre wrote:
On 15/07/2009 23:41, Jim Graham wrote:
Numbers that small aren't statistically significant. Our J2DBench
benchmark calibrates each test to run a number of iterations that result
in at least 2.5 seconds of run time. Try upping your loop iterations by
a factor of 100 and you'll get numbers with better accuracy and
precision...
...jim
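The calibration rule Jim describes can be sketched as follows; the class and method names are illustrative, not J2DBench's actual code, assuming "calibrate" simply doubles the iteration count until one timed batch takes at least 2.5 seconds:

```java
// Hedged sketch of J2DBench-style calibration: grow the iteration count
// until a single timed batch runs for at least 2.5 seconds.
// Calibrate, runBatch, and the workload are illustrative only.
public class Calibrate {
    static final long TARGET_MS = 2500;

    // Volatile sink so the JIT cannot eliminate the workload entirely.
    static volatile double sink;

    static long runBatch(Runnable work, long iterations) {
        long start = System.nanoTime();
        for (long i = 0; i < iterations; i++) work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    static long calibrate(Runnable work) {
        long iters = 1;
        while (true) {
            long ms = runBatch(work, iters);
            if (ms >= TARGET_MS) return iters;
            iters *= 2;  // double until the batch is long enough
        }
    }

    public static void main(String[] args) {
        long iters = calibrate(() -> sink = Math.sqrt(sink + 1));
        System.out.println("calibrated iterations: " + iters);
    }
}
```

With batches this long, start-up jitter and timer granularity become a negligible fraction of each measurement.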
Hi Jim,
I multiplied the small test's loop iterations by 100, and this is the result:
Patched JDK:
warmed up run time in ms: 3226
total time in ms: 9586
Clean JDK:
warmed up run time in ms: 3039
total time in ms: 9172
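For what it's worth, the relative overhead implied by the numbers above works out to roughly 6% (warmed-up) and 4.5% (total); a quick check:

```java
// Relative slowdown of the patched JDK vs. the clean JDK,
// computed from the four timings reported above.
public class Overhead {
    public static void main(String[] args) {
        System.out.printf("warmed-up: %+.1f%%%n", 100.0 * (3226 - 3039) / 3039);
        System.out.printf("total:     %+.1f%%%n", 100.0 * (9586 - 9172) / 9172);
    }
}
```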
I also ran the more meaningful Java2DBench. I had no time to run an
extensive benchmark, so I limited it to what I think is the very bare
minimum:
http://cr.openjdk.java.net/~neugens/100068/comparision-100068-0.1/Summary_Report.html
So it looks like this approach does indeed have an impact, although it
is well within 10%.
Should I try the other approaches? I hardly see how a method call can be
faster than an instanceof check, but maybe I'm missing something obvious.
If you want, I can try to set up a machine at work to run a full 72-hour
test, but that will take some time ( > 72*2 hours :).
Cheers,
Mario