I agree with the need for performance tests.

From my own experience, I'd say you'd want to be able to run those tests in
isolation, but also together to get a big-picture view of a change: spaces
being what they are, it's incredibly easy for an optimisation that improves
one test to cripple another.

On 22 December 2010 19:08, Patricia Shanahan <[email protected]> wrote:

> On 12/22/2010 10:57 AM, [email protected] wrote:
> ...
>
>> This is the biggest concern, I think. As such, I'd be interested in
>> seeing performance runs, to back up the intuition. Then, at least,
>> we'd know precisely what trade-off we're talking about.
>>
>> The test would need to cover both small batches and large, both in
>> multiples of the batch-size/takeMultipleLimit and for numbers off of
>> those multiples, with transactions and without.
>>
>
> I think we need a lot of performance tests, some way to organize them, and
> some way to retain their results.
>
> I propose adding a "performance" folder to the River trunk, with
> subdirectories "src" and "results". src would contain benchmark source code.
> results would contain benchmark output.
>
> System level tests could have their own package hierarchy, under
> org.apache.impl, but reflecting what is being measured. Unit level tests
> would need to follow the package hierarchy for the code being tested, to get
> package access. The results hierarchy would mirror that src hierarchy for
> the tests.
>
> Any ideas, alternatives, changes, improvements?
>
> Patricia
>
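The test matrix described above (batch sizes on and off multiples of the
batch-size/takeMultipleLimit, with and without transactions) could be driven
by a small harness. Here is a minimal sketch of that idea — the class name,
the limit value, and the dummy workload are all illustrative assumptions,
not River code; a real benchmark would invoke the space's take-multiple
operation where the placeholder loop sits:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical harness: enumerates batch sizes that land exactly on
// multiples of a configured limit (standing in for takeMultipleLimit),
// plus sizes just off those multiples, and times a workload for each
// combination of batch size and transaction setting.
public class PerfMatrix {

    // For each multiple m*limit, also test m*limit - 1 and m*limit + 1.
    static List<Integer> batchSizes(int limit, int multiples) {
        List<Integer> sizes = new ArrayList<>();
        for (int m = 1; m <= multiples; m++) {
            sizes.add(m * limit - 1);  // just under a multiple
            sizes.add(m * limit);      // exactly on a multiple
            sizes.add(m * limit + 1);  // just over a multiple
        }
        return sizes;
    }

    // Placeholder workload; a real run would perform the space operation
    // under test here, inside or outside a transaction depending on txn.
    static long runOnce(int batch, boolean txn) {
        long t0 = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < batch * 1000; i++) sink += i; // dummy work
        if (sink < 0) throw new IllegalStateException("overflow");
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        // Limit of 100 and 2 multiples are arbitrary example values.
        for (int batch : batchSizes(100, 2)) {
            for (boolean txn : new boolean[] {false, true}) {
                long ns = runOnce(batch, txn);
                System.out.printf("batch=%d txn=%b ns=%d%n", batch, txn, ns);
            }
        }
    }
}
```

Emitting one line per (batch size, transaction) pair like this would also
make the output easy to file under the proposed results hierarchy.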
