On Sat, Jun 11, 2016 at 07:43:18PM -0400, Random832 wrote:
> On Fri, Jun 10, 2016, at 21:45, Steven D'Aprano wrote:
> > If you express your performances as speeds (as "calculations per
> > second") then the harmonic mean is the right way to average them.
>
> That's true in so far as you get the same result as if you were to take
> the arithmetic mean of the times
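The equivalence Random832 describes can be checked directly. A minimal sketch using the stdlib `statistics` module (`harmonic_mean` needs Python 3.6+); the numbers are made up for illustration:

```python
from statistics import harmonic_mean, mean

times = [2.0, 4.0, 8.0]            # seconds per benchmark run
speeds = [1.0 / t for t in times]  # "calculations per second"

# The harmonic mean of the speeds is the reciprocal of the
# arithmetic mean of the times, so the two averages agree.
assert abs(harmonic_mean(speeds) - 1.0 / mean(times)) < 1e-12
```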
On Fri, Jun 10, 2016 at 1:13 PM, Victor Stinner wrote:
> Hi,
>
> In recent weeks, I did research on how to get stable and reliable
> benchmarks, especially for the corner case of microbenchmarks. The
> first result is a series of articles; here are the first three:
On Fri, Jun 10, 2016 at 11:22:42PM +0200, Victor Stinner wrote:
> 2016-06-10 20:47 GMT+02:00 Meador Inge:
> > Apologies in advance if this is answered in one of the links you posted, but
> > out of curiosity was geometric mean considered?
> >
> > In the compiler world this is a very common way of aggregating performance
> > results.
On Sat, Jun 11, 2016 at 12:06:31AM +0200, Victor Stinner wrote:
> > Victor, if you could calculate the sample skewness of your results I think
> > that would be very interesting!
>
> I'm good at copy/pasting code, but less good at computing statistics :-)
> Would you be interested in writing a pull request, or
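For reference, the sample skewness being asked for is straightforward to compute from the raw timings. A sketch of the adjusted Fisher-Pearson estimator (the one most statistics packages report, e.g. `scipy.stats.skew(bias=False)`), using only the stdlib:

```python
from statistics import mean

def sample_skewness(data):
    # Adjusted Fisher-Pearson sample skewness: third standardised
    # moment g1, corrected for small-sample bias.
    n = len(data)
    m = mean(data)
    m2 = sum((x - m) ** 2 for x in data) / n  # 2nd central moment
    m3 = sum((x - m) ** 3 for x in data) / n  # 3rd central moment
    g1 = m3 / m2 ** 1.5
    return ((n * (n - 1)) ** 0.5 / (n - 2)) * g1

# Benchmark timings are typically right-skewed, since noise only
# ever adds time; a positive value confirms that.
print(sample_skewness([1.0, 1.0, 1.0, 10.0]))
```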
Hi,
2016-06-10 20:37 GMT+02:00 Kevin Modzelewski via Python-Dev:
> Hi all, I wrote a blog post about this.
> http://blog.kevmod.com/2016/06/benchmarking-minimum-vs-average/
Oh nice, it's even better to have different articles explaining the
problem of using the minimum
2016-06-10 20:47 GMT+02:00 Meador Inge:
> Apologies in advance if this is answered in one of the links you posted, but
> out of curiosity was geometric mean considered?
>
> In the compiler world this is a very common way of aggregating performance
> results.
FYI I chose to
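For aggregating ratios (per-benchmark speedups normalised to a baseline), the geometric mean Meador mentions can be sketched as follows (`statistics.geometric_mean` needs Python 3.8+; the speedup figures are invented for illustration):

```python
from statistics import geometric_mean

# Per-benchmark speedups relative to a baseline interpreter.
# The geometric mean is the conventional aggregate for ratios:
# unlike the arithmetic mean, a 2x speedup on one benchmark and a
# 2x slowdown on another cancel out to exactly 1.0.
speedups = [1.10, 0.95, 1.30]
print(geometric_mean(speedups))
```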
On 11 June 2016 at 04:09, Victor Stinner wrote:
> We should design a CLI command to do timeit+compare at once.
http://judge.readthedocs.io/en/latest/ might offer some inspiration.
There's also ministat -
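No such combined command exists in the stdlib; a rough sketch of the idea on top of `timeit` (using the same best-of-N rule the `timeit` CLI applies) might look like:

```python
import timeit

def compare(stmt_a, stmt_b, number=10_000, repeat=5):
    # Hypothetical one-shot "timeit + compare": time both statements
    # and keep the best (minimum) of `repeat` runs for each.
    best_a = min(timeit.repeat(stmt_a, number=number, repeat=repeat))
    best_b = min(timeit.repeat(stmt_b, number=number, repeat=repeat))
    return best_a, best_b

a, b = compare("sum(range(100))", "sum([i for i in range(100)])")
print(f"A: {a:.4f}s  B: {b:.4f}s  ({'A' if a < b else 'B'} is faster)")
```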
On 6/10/2016 12:09 PM, Victor Stinner wrote:
> 2016-06-10 17:09 GMT+02:00 Paul Moore:
> > Also, the way people commonly use
> > micro-benchmarks ("hey, look, this way of writing the expression goes
> > faster than that way") doesn't really address questions like "is the
> > difference statistically significant".
On Fri, Jun 10, 2016 at 6:13 AM, Victor Stinner wrote:
> The second result is a new perf module which includes all "tricks"
> discovered in my research: compute average and standard deviation,
> spawn multiple worker child processes, automatically calibrate the
> number
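A minimal sketch of two of those tricks together, spawning fresh worker processes and reporting mean ± standard deviation (the real perf module's API and JSON output differ; this is only the shape of the idea):

```python
import subprocess
import sys
from statistics import mean, stdev

# One sample per fresh interpreter: each worker gets its own hash
# randomisation, memory layout, and warm-up state, like separate runs.
WORKER = ("import timeit; "
          "print(min(timeit.repeat('sum(range(100))', number=1000, repeat=3)))")

samples = []
for _ in range(3):
    proc = subprocess.run([sys.executable, "-c", WORKER],
                          capture_output=True, text=True, check=True)
    samples.append(float(proc.stdout))

print(f"{mean(samples):.6f} +- {stdev(samples):.6f} s")
```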
Hi all, I wrote a blog post about this.
http://blog.kevmod.com/2016/06/benchmarking-minimum-vs-average/
We can rule out any argument that one (minimum or average) is strictly
better than the other, since there are cases that make either one better.
It comes down to our expectation of the
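The trade-off is easy to see in a toy noise model (an illustrative simulation, not real benchmark data): when noise can only add time, the minimum tracks the true cost while the mean absorbs the typical noise.

```python
import random
from statistics import mean

random.seed(0)
TRUE_COST = 100.0  # the "real" time per run, in arbitrary units

# Additive, non-negative noise: interruptions only slow a run down.
samples = [TRUE_COST + random.expovariate(1 / 5) for _ in range(1000)]

print(min(samples))   # approaches the true cost from above
print(mean(samples))  # true cost plus the average noise (~5 units here)
```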
On Fri, 10 Jun 2016 at 10:11 Steven D'Aprano wrote:
> On Fri, Jun 10, 2016 at 05:07:18PM +0200, Victor Stinner wrote:
> > I started to work on visualisation. IMHO it helps to understand the
> > problem.
> >
> > Let's create a large dataset: 500 samples (100 processes x 5 samples):
> > ---
> > $ python3 telco.py --json-file=telco.json -p 100 -n 5
> > ---
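A quick way to eyeball such a dataset without a plotting library is a text histogram. A sketch over stand-in numbers (the layout of telco.json is not assumed here; a real run would load the samples from that file):

```python
from collections import Counter

def text_histogram(samples, bins=5):
    # Bucket each sample and draw one '#' per hit: poor man's matplotlib.
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    for i in range(bins):
        print(f"{lo + i * width:8.2f} | {'#' * counts[i]}")
    return counts

# Stand-in timings in ms; real data would come from the JSON file above.
text_histogram([22.4, 22.5, 22.5, 22.6, 23.1, 25.0])
```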
On 6/10/2016 11:07 AM, Victor Stinner wrote:
> I started to work on visualisation. IMHO it helps to understand the problem.
> Let's create a large dataset: 500 samples (100 processes x 5 samples):
As I finished my response to Steven, I was thinking you should do
something like this to get real
On 6/10/2016 9:20 AM, Steven D'Aprano wrote:
> On Fri, Jun 10, 2016 at 01:13:10PM +0200, Victor Stinner wrote:
> > Hi,
> > In recent weeks, I did research on how to get stable and reliable
> > benchmarks, especially for the corner case of microbenchmarks. The
> > first result is a series of articles; here are the
2016-06-10 17:09 GMT+02:00 Paul Moore:
> Also, the way people commonly use
> micro-benchmarks ("hey, look, this way of writing the expression goes
> faster than that way") doesn't really address questions like "is the
> difference statistically significant".
If you use the
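One standard way to answer "is the difference statistically significant?" for two sets of timings is Welch's t statistic. A stdlib-only sketch (in practice one would get a proper p-value from `scipy.stats.ttest_ind` with `equal_var=False`; the timing lists are invented):

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    # Welch's t statistic for two independent samples with (possibly)
    # unequal variances; |t| well above ~2 suggests a real difference.
    return (mean(a) - mean(b)) / sqrt(stdev(a) ** 2 / len(a) +
                                      stdev(b) ** 2 / len(b))

fast = [10.1, 10.2, 10.1, 10.3, 10.2]  # timings for variant A (s)
slow = [11.0, 11.2, 10.9, 11.1, 11.3]  # timings for variant B (s)
print(welch_t(fast, slow))
```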
On 10 June 2016 at 15:34, David Malcolm wrote:
>> The problem is that random noise can only ever slow the code down, it
>> cannot speed it up.
[...]
> Isn't it possible that under some circumstances the 2nd process could
> prefetch memory into the cache in such a way that the
On Fri, 2016-06-10 at 23:20 +1000, Steven D'Aprano wrote:
> On Fri, Jun 10, 2016 at 01:13:10PM +0200, Victor Stinner wrote:
> > Hi,
> >
> > In recent weeks, I did research on how to get stable and reliable
> > benchmarks, especially for the corner case of microbenchmarks. The
> > first result is a
On Fri, Jun 10, 2016 at 01:13:10PM +0200, Victor Stinner wrote:
> Hi,
>
> In recent weeks, I did research on how to get stable and reliable
> benchmarks, especially for the corner case of microbenchmarks. The
> first result is a series of articles; here are the first three:
Thank you for this! I am
Hi,
In recent weeks, I did research on how to get stable and reliable
benchmarks, especially for the corner case of microbenchmarks. The
first result is a series of articles; here are the first three:
https://haypo.github.io/journey-to-stable-benchmark-system.html