Hi Jon,
On 12/05/2014 01:52 PM, Jonathan Gibbons wrote:
On 12/05/2014 11:46 AM, Staffan Friberg wrote:
Hi,
On 12/04/2014 10:23 PM, joe darcy wrote:
Hello,
On 12/4/2014 4:34 PM, David Holmes wrote:
Hi Staffan,
On 2/12/2014 10:08 AM, Staffan Friberg wrote:
Hi,
Hopefully this is the right list for this discussion.
As part of adding Microbenchmarks to the OpenJDK source tree, I'm
trying to understand how best to add the benchmark sources to the
existing OpenJDK tree structure.
Is there a reason this needs to be inside the OpenJDK instead of a
stand-alone project? If it ends up in its own repo and the repo is
not needed by anything else, then it is already like a stand-alone
project.
I think David is raising a good question here. A related question is:
if we want to update/fix the microbenchmarks, in how many release
trains will we want to make that fix? If the expected answer is much
greater than one, that would seem to me to argue for a separate
forest in the overall OpenJDK effort, not bundled into a particular
release.
For example, in the past, the webrev.ksh script was included in the
JDK forest. That was an improvement over every engineer having his
or her personal fork, but it still made keeping webrev updated
unnecessarily difficult, since any changes would need to be made
multiple times, and there is nothing fundamentally binding a version
of webrev to a particular JDK release. Likewise, even though (to my
knowledge) jtreg is only used for testing the JDK, jtreg has its own
repository and release schedule separate from any JDK release.
So if the microbenchmarks are, to a first approximation,
version-agnostic, then they should probably have a forest not
associated with a JDK release. If they are tightly bound to a
release, that argues for putting them into the JDK forest itself.
The reasons are similar to those for co-locating functional tests: a
stable test base that does not change after FC, and tests that
support new features and all work with the JDK you are developing.
Looking at some of the goals in the JEP, here is why I think
co-location helps.
* Stable and tuned benchmarks, targeted for continuous performance
testing
o A stable and non-moving suite after the Feature Complete
milestone of a feature release, and for non-feature releases
To fulfill this, having a separate repository would force us to branch
and create builds in sync with the JDK we want to test. As the number
of JDK versions increases, so will the number of versions of the
microbenchmark suite, making it complex for anyone adding or running
tests to keep track of which version to use for a specific JDK branch.
As an example, for a stable branch we probably do not want to update
the JMH version unless there is a critical bug, since doing so can
invalidate any earlier results, causing us to lose history that is
critical when tracking a release. During development of a new
release, however, we probably do want to update it to take advantage
of new features in JMH, and we might even require new JMH features to
support changes in the JDK.
Fixing bugs in microbenchmarks will, as with any bug fixes, sometimes
require backporting, but we would face similar struggles in a
separate repository if we want to keep the suite stable for multiple
releases and maintain multiple branches to support that.
JMH, like jtreg, is already a separate repository and will stay that
way. The part to be co-located is the benchmarks themselves, which,
like the tests written using jtreg, benefit from being co-located.
* Simplicity
o Easy to add new benchmarks
o Easy to update tests as APIs and options change, are deprecated,
or are removed during development
o Easy to find and run a benchmark
Having the suite co-located reduces the steps for anyone developing a
new feature to push benchmarks at the same time as the feature, in
the same way that functional tests can be pushed.
Any benchmark relying on a feature that is being removed can easily
be deleted without concern about reducing the test coverage of older
releases that still provide that feature.
Having it in a completely separate repository will, I fear, cause the
microbenchmarks to be out of sight and out of mind, which is the
opposite of the intention of creating the suite. We want more people
to write and run benchmarks in a simple way. Co-location will allow
anyone to simply add a benchmark and run make to build it for their
current branch and test the feature they are working on, and those
benchmarks will be picked up directly for regression testing once
pushed.
Hope this helps to further clarify the reason for having the suite as
part of the OpenJDK source tree.
Regards,
Staffan
Staffan,
You make a good point for keeping it within the OpenJDK source tree.
But you can still use a separate repository in the forest, so that it
is versioned, tracked, and copied along with the other trees in the
forest.
-- Jon
Agree, and I believe we have reached that conclusion as the best option
in the "other part" of the thread.
I liked the idea of starting off in the top repository as Magnus
suggested, and moving out later if required; but as Mark confirmed,
the historical diffs would still be around, and keeping the top
repository focused on Makefiles and other metadata makes sense to me
as well.
Regards,
Staffan