My understanding is that: 1) there isn't any specific relationship
between the iterations, and 2) the final output is a summary over all
iterations. The idea is that randomness might affect results on any
particular iteration, but by running multiple times (20, I think?) and
then aggregating the statistics over the repeated trials, hopefully
the noise gets smoothed out and only the real impact of the change
being tested shows.
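As a rough illustration of the aggregation idea (not luceneutil's actual code — the sample numbers and helper below are made up), you take each run's measurement as one noisy sample and summarize across runs:

```python
# Hypothetical sketch: summarize repeated benchmark trials so that
# per-run noise averages out and the real change becomes visible.
import statistics

# Assumed example data: QPS measured on each repeated trial
baseline = [102.1, 98.7, 101.4, 99.9, 100.3]
candidate = [110.5, 107.9, 109.2, 111.0, 108.4]

def summarize(samples):
    """Mean across trials, with sample stdev as a rough noise estimate."""
    return statistics.mean(samples), statistics.stdev(samples)

base_mean, base_sd = summarize(baseline)
cand_mean, cand_sd = summarize(candidate)

# Relative change of the aggregated means, not of any single iteration
pct = 100.0 * (cand_mean - base_mean) / base_mean
print(f"baseline:  {base_mean:.1f} +/- {base_sd:.1f} QPS")
print(f"candidate: {cand_mean:.1f} +/- {cand_sd:.1f} QPS ({pct:+.1f}%)")
```

The point is that the reported comparison comes from the means (and spread) over all trials, so no single iteration's result is "the" answer.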

Cheers,
-Greg

On Tue, Dec 21, 2021 at 6:00 PM 364367207 <[email protected]> wrote:
>
> Hi Lucene Community,
> When using luceneutil to run some benchmarks, its output shows several
> results comparing the baseline and my_modified_version. It seems to
> iterate many times().
>
> So my questions: 1) Is there any relationship between the results of
> different iterations? 2) Is the last iteration's result the final
> benchmark result?
>
> Thanks~
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>

