From: Fabio Zadrozny <[email protected]>
Date: Tuesday, December 1, 2015 at 1:36 AM
To: David Stewart <[email protected]>
Cc: "R. David Murray" <[email protected]>,
  "[email protected]" <[email protected]>
Subject: Re: [Python-Dev] Avoiding CPython performance regressions


On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C
<[email protected]> wrote:

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
<[email protected] on behalf of [email protected]> wrote:

>
>There's also an Intel project posted about here recently that checks
>individual benchmarks for performance regressions and posts the results
>to python-checkins.

The description of the project is at https://01.org/lp - Python results are 
indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to 
the Romanian National Day holiday!)

There is also a graphic dashboard at http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious: which benchmark set are you running? From the 
graphs it seems to have a really high standard deviation, so I'd like to know 
whether that's really due to changes in the CPython codebase, to issues in the 
benchmark set, or to how the benchmarks are run... (it doesn't seem to be the 
suite from https://hg.python.org/benchmarks/, right?).

Fabio – my advice is to check out the daily emails sent to 
python-checkins. An example is 
https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If 
you still have questions, Stefan can answer (he is copied).

The graphs are really just a manager-level indicator of trends, which I find 
very useful (I have them running continuously on one of the monitors in my 
office), but core developers might want to see the day-to-day effect of their 
changes. (Particularly if they thought a change was going to improve 
performance; it's nice to see if you get community confirmation.)

We run a subset of https://hg.python.org/benchmarks/ nightly, and run the full 
set when we are evaluating our performance patches.
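
For anyone who wants to reproduce that kind of run, here is a minimal sketch 
of a nightly driver using perf.py, the comparison runner that ships in that 
repository (the interpreter paths and the benchmark subset below are 
placeholders for illustration, not our actual configuration):

    #!/usr/bin/env python
    """Sketch of a nightly run against hg.python.org/benchmarks.

    Paths and the subset below are illustrative only.
    """
    import subprocess

    BENCHMARKS_DIR = "benchmarks"   # checkout of hg.python.org/benchmarks
    BASELINE = "/opt/python-baseline/bin/python"   # placeholder path
    NIGHTLY = "/opt/python-nightly/bin/python"     # placeholder path
    SUBSET = "nbody,json_dump,django"              # hypothetical subset

    # perf.py compares two interpreters and reports per-benchmark deltas;
    # -b restricts the run to a comma-separated list of benchmarks.
    subprocess.check_call(
        ["python", "perf.py", "-b", SUBSET, BASELINE, NIGHTLY],
        cwd=BENCHMARKS_DIR,
    )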

Some of the "benchmarks" really do have a high standard deviation, which makes 
them not very useful for measuring incremental performance improvements, IMHO. 
I like to see the deviation spelled out so I can tell whether or not I should 
be worried about a particular delta.
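
To make that concrete, this is the kind of rule-of-thumb check I mean (a 
sketch only; the two-sigma threshold is an arbitrary choice for illustration, 
not what our reports actually use):

    import statistics

    def delta_is_worrisome(baseline_times, new_times, sigmas=2.0):
        """Treat a delta as real only if it exceeds `sigmas` standard
        deviations of the baseline's run-to-run noise."""
        noise = statistics.stdev(baseline_times)
        delta = statistics.mean(new_times) - statistics.mean(baseline_times)
        return abs(delta) > sigmas * noise

    # A 2% slowdown on a benchmark whose noise is ~3% isn't actionable;
    # the same slowdown against ~0.5% noise probably is.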

Dave
_______________________________________________
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev