Hi,

On Wed, Jul 3, 2013 at 11:54 AM, Jukka Zitting <jukka.zitt...@gmail.com> wrote:
> On Wed, Jul 3, 2013 at 11:22 AM, Thomas Mueller <muel...@adobe.com> wrote:
>> I usually look at "N" first :-)
>
> It's also a good measure.

Actually not that good, as only the lower limit on the amount of time
over which those N iterations happen is defined, so for example it's
not possible to compute an accurate mean execution time from the
reported N. Also, the N figure covers the before/afterTest()
methods, which are not included in the other statistics and which
aren't really within the scope of the functionality that a benchmark
intends to measure. The reason I originally included N in the output
was to give an idea of the statistical significance of the other
figures.

Perhaps we should replace the median (50%) or the 10th percentile (not
a very useful figure) with an exactly calculated mean execution time,
as that would better represent the information for which N currently
only acts as a rough proxy.
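
For example, something along these lines (just a rough sketch, not the
actual benchmark harness code; names like runTest() are placeholders)
would let us report an exact mean next to the percentiles, computed
only over the measured operation:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class TimingStats {

        public static void main(String[] args) {
            List<Long> durations = new ArrayList<>();

            // Time only the measured operation, excluding any
            // before/after setup, so N and the statistics cover
            // exactly the same work.
            for (int i = 0; i < 1000; i++) {
                long start = System.nanoTime();
                runTest();   // placeholder for the operation under test
                durations.add(System.nanoTime() - start);
            }

            Collections.sort(durations);
            long sum = 0;
            for (long d : durations) {
                sum += d;
            }
            // Exact mean of the measured operation, unlike total time / N
            double mean = (double) sum / durations.size();
            long median = durations.get(durations.size() / 2);
            long p90 = durations.get((int) (durations.size() * 0.9));

            System.out.printf("N=%d mean=%.0fns median=%dns 90%%=%dns%n",
                    durations.size(), mean, median, p90);
        }

        private static void runTest() {
            // stand-in for the benchmarked operation
            Math.sqrt(Math.random());
        }
    }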

BR,

Jukka Zitting
