New issue 2201: Benchmark results unintelligible
https://bitbucket.org/pypy/pypy/issues/2201/benchmark-results-unintelligible

Pete Vine:

Benchmarking two pypy binaries side by side (--changed option) prints the 
following example result to the console:

```
 ### ai ###
Min: 0.490526 -> 0.479404: 1.0232x faster
Avg: 0.494283 -> 0.486747: 1.0155x faster
Not significant
Stddev: 0.00369 -> 0.00644: 1.7443x larger
```

Assuming the results are presented in their running order, there's no indication 
of whether a **higher** value is better or worse, so it's impossible to tell 
which binary was actually faster!
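For what it's worth, if the tool follows the usual perf.py convention (left value = baseline binary, right value = changed binary, values are times in seconds, so lower is better and "faster" describes the *changed* binary), a line like the ones above could be produced by a sketch like this. The convention itself is exactly the assumption the report is asking the output to state, so treat it as unconfirmed:

```python
# Hypothetical sketch of a perf.py-style comparison line.
# ASSUMPTION: `base` is the baseline binary's time, `changed` is the
# new binary's time, both in seconds -- lower is better, and "faster"
# refers to the second (changed) binary.

def compare(base: float, changed: float) -> str:
    if changed < base:
        # changed binary took less time -> it is faster by base/changed
        return f"{base:.6f} -> {changed:.6f}: {base / changed:.4f}x faster"
    # changed binary took more time -> it is slower by changed/base
    return f"{base:.6f} -> {changed:.6f}: {changed / base:.4f}x slower"

print("Min:", compare(0.490526, 0.479404))
# -> Min: 0.490526 -> 0.479404: 1.0232x faster
```

Under that reading, the sample output means the second (changed) binary was the faster one, but nothing in the printed line says so explicitly.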




_______________________________________________
pypy-issue mailing list
[email protected]
https://mail.python.org/mailman/listinfo/pypy-issue