Some of these tests just compare two different ways of doing the same thing
against each other (the testAVersusBPerformance tests). Those wouldn't make
sense as proper performance tests; they could probably be removed, although
they might occasionally come in handy for reference. Others look like
micro-benchmarks that wouldn't be helpful as proper performance tests, but
some could probably be turned into proper performance tests with baselines,
etc. In particular, benchmarks of slicer, resolver, and reconciler
performance would be really useful to guard against regressions.
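
For what it's worth, a conversion along those lines might look roughly like
the sketch below, using the org.eclipse.test.performance framework so that
results are recorded and compared against a baseline. The class and the
slicing call are hypothetical placeholders; a real test would reuse whatever
fixture data the existing micro-benchmark already sets up.

import org.eclipse.test.performance.Dimension;
import org.eclipse.test.performance.PerformanceTestCase;

public class SlicerPerformanceTest extends PerformanceTestCase {

    public void testSlicerPerformance() {
        // Track this scenario in the performance summary by elapsed time.
        tagAsSummary("p2 slicer", Dimension.ELAPSED_PROCESS);
        for (int i = 0; i < 10; i++) { // repeat runs for stable numbers
            startMeasuring();
            sliceFixtureMetadata();    // the operation being measured
            stopMeasuring();
        }
        commitMeasurements();
        // Fails the test if the committed measurements regress against the
        // stored reference data for this scenario.
        assertPerformance();
    }

    private void sliceFixtureMetadata() {
        // Hypothetical placeholder: a real test would run the Slicer over
        // the same metadata the current testSlicerPerformance uses.
    }
}

The same pattern would work for resolver and reconciler scenarios, assuming
the performance builds are configured to store reference measurements for
assertPerformance() to compare against.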

John

On Thu, Sep 1, 2011 at 8:38 AM, Thomas Watson <[email protected]> wrote:

>  While investigating a recent p2 test failure in the Indigo SR1 (3.7.1)
> build, I noticed that the failing test had "Performance" in its name. I
> found the following tests that also have "Performance" in the name:
>
> testParserPerformance
> testMatchQueryVersusExpressionPerformance
> testMatchQueryVersusIndexedExpressionPerformance
> testMatchQueryVersusIndexedExpressionPerformance2
> testMatchQueryVersusMatchIteratorPerformance
> testCapabilityQueryPerformance
> testIUPropertyQueryPerformance
> testSlicerPerformance
> testPermissiveSlicerPerformance
>
> So that made me wonder: are these really performance tests? Should they
> be run as part of the performance test bucket so that their stats can be
> tracked against previous releases, letting us see whether their
> performance has improved or regressed?
>
>
> Tom