[I got sick of the thread subject - it blended into every other JIRA thread... ]

There is a 4th option - don't mix performance infrastructure with unit testing at all.

I'm all for getting "PerformanceTest" out of the class hierarchy, and not having unit tests yammer out to the console if we can avoid it. (I do testing in a console, and don't really care about the output, but it will skew the performance numbers, as console i/o is relatively expensive...)

That said, I do believe in the importance of having performance numbers to help detect regressions.

George outlined how to use standard JUnit mechanisms to do this. IMO, they are good because they are the canonical way of doing it with JUnit, but they are also a bit invasive.
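(For reference, the shape this usually takes with plain JUnit - my guess at the pattern, not necessarily what George showed - is a timing base class like the one below. That's exactly the invasive part: every test class has to extend it.)

    import junit.framework.TestCase;

    public abstract class TimedTestCase extends TestCase {

        private long start;

        protected void setUp() throws Exception {
            super.setUp();
            start = System.currentTimeMillis();
        }

        protected void tearDown() throws Exception {
            long elapsed = System.currentTimeMillis() - start;
            // A real version would report somewhere cheaper than the console.
            System.err.println(getName() + ": " + elapsed + "ms");
            super.tearDown();
        }
    }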

Some other options :

1) This problem seems to me to be one of the three use cases in the universe for aspects (the other two being logging and caching, of course...). So that's one area we might investigate - we would add an interceptor around each test/suite/whatever to do whatever perf measurement we need. We might be able to use it to turn debug logging on and off as well, in a cheap and uninvasive way. (A rough sketch follows after this list.)

2) TestNG - I do want to give this a hard look, as it's annotation-based, and see if there's something in there (or coming) for this. TestNG will also run JUnit tests as-is, so playing with it is going to be easy. (A sketch of what that might look like is also below.)
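To make option 1 concrete, here's a minimal AspectJ-style sketch of the interceptor idea - the aspect name and pointcut are my assumptions, not an agreed design. It times any public no-arg test* method without the tests themselves changing at all:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TestTimingAspect {

        // Wrap any public no-arg test* method in a class named *Test.
        @Around("execution(public void *..*Test.test*())")
        public Object timeTest(ProceedingJoinPoint jp) throws Throwable {
            long start = System.nanoTime();
            try {
                return jp.proceed(); // run the real test body
            } finally {
                long ms = (System.nanoTime() - start) / 1000000;
                // A real version would record this somewhere cheap
                // rather than printing, for the i/o-skew reason above.
                System.err.println(jp.getSignature() + ": " + ms + "ms");
            }
        }
    }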
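And for option 2, TestNG's @Test annotation already carries invocationCount and timeOut attributes, which gets part of the way there with no base class at all. The class name, method name, and bounds below are made up for illustration:

    import org.testng.annotations.Test;

    public class ParserPerfTest {

        // Run the method 5 times; fail any invocation over 2 seconds.
        @Test(invocationCount = 5, timeOut = 2000)
        public void parseLargeDocument() {
            // ... exercise the code under test ...
        }
    }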

geir


Mikhail Loenko wrote:
To summarize, we have 3 options:

1. Keep PerformanceTest as a superclass. Set printAllowed to false by default.
2. Remove PerformanceTest. Introduce a simple Logger that does not print by
default.
3. Move the performance functionality to a Decorator.

#1 is the least liked. #3, as I wrote before, does not work.

So I can submit a script that goes through the tests, replacing
"extends PerformanceTest" with "extends TestCase" and
"import PerformanceTest" with "import Logger",
and putting "Logger." before logln() and the other log functions.
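For illustration, a minimal sketch of that rewrite as a small Java program - the source root and the exact patterns are assumptions, and a real script would want to be more careful about false matches:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class MigrateTests {

        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : "src/test/java");
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(MigrateTests::rewrite);
            }
        }

        static void rewrite(Path file) {
            try {
                String src = new String(Files.readAllBytes(file));
                String out = src
                        .replace("extends PerformanceTest", "extends TestCase")
                        .replace("import PerformanceTest", "import Logger")
                        // Only call sites should match, since logln() and friends
                        // were inherited from PerformanceTest, not declared locally.
                        .replaceAll("\\b(logln|log)\\(", "Logger.$1(");
                if (!out.equals(src)) {
                    Files.write(file, out.getBytes());
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }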

Thanks,
Mikhail


On 1/19/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:

Mikhail Loenko wrote:
On 1/19/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:
Mikhail Loenko wrote:
The problem is the unstable execution time of Java programs:

If you repeatedly run the same Java program on the same computer
under the same conditions, execution time may vary by 20% or even more.
Why?  Given that computers are pretty deterministic, I'd argue that you
don't actually have the same conditions from run to run.
Did you run experiments, or is that a theoretical conclusion :) ?
Have done experiments.  I never claimed that the conditions are the same
every run.  That's the issue, I think.

geir

Try creating an application that runs for 20 seconds and running it several times.

Frankly, I do not know exactly why. But I know of a number of factors that
could cause this dispersion - for example, service threads and the GC both
affect execution time.
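A minimal version of that experiment might look like the following - the workload is arbitrary, just enough allocation to give the collector something to do. Compile it, run it several times from a shell, and compare the printed times:

    public class TimingExperiment {

        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            long checksum = 0;
            for (int i = 0; i < 20000000; i++) {
                // Short-lived objects keep the garbage collector busy.
                checksum += ("run-" + i).hashCode();
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("checksum=" + checksum + ", elapsed=" + elapsed + "ms");
        }
    }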
Thanks,
Mikhail


geir



