I suggest limiting it to one "dot" per week, since CodeSpeed (the website
used to browse the benchmark results) is more or less limited to 50 dots
(it can display more if you show only a single benchmark).

Previously, it was closer to one "dot" per month, which made it possible
to display a timeline spanning 5 years. In my experience, significant
performance changes are rare, happening perhaps once every 3 months, so a
granularity of 1 day is not needed.

We may want to consider the tool "asv", which has a nice web UI for
browsing results. It also provides a tool that automatically runs a
bisection to identify which commit introduced a speedup or slowdown.
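
For illustration, the bisection that asv automates boils down to a binary
search over commits; here is a minimal pure-Python sketch of the idea with
made-up commit names and timings (asv actually builds and benchmarks each
commit it probes, rather than reading from a table like this):

```python
# Sketch of the bisection idea behind asv's automatic regression search.
# Timings and commit ids below are hypothetical, for illustration only.

def find_regression(commits, bench, threshold):
    """Binary-search for the first commit whose benchmark time exceeds
    `threshold`, assuming a single step change in performance."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if bench(commits[mid]) > threshold:
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is after mid
    return commits[lo]

# Hypothetical per-commit timings, in seconds.
timings = {"a1": 1.0, "b2": 1.0, "c3": 1.2, "d4": 1.2, "e5": 1.2}
commits = list(timings)
culprit = find_regression(commits, timings.__getitem__, 1.1)
print(culprit)  # -> c3
```

The log-time search is what makes this practical when each benchmark run
takes an hour or two: 1000 commits need only about 10 probes.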

Last time I checked, asv had a simpler way of running benchmarks than
pyperf; it doesn't spawn multiple processes, for example. I don't know
whether it would be possible to plug pyperf into asv.
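
As a rough illustration of the difference: a single-process measurement
looks like the stdlib-only sketch below, whereas pyperf spawns several
worker processes and aggregates their results to reduce noise. The
workload here is a toy, chosen to echo the unpack_sequence benchmark:

```python
import timeit

def workload():
    # Toy benchmark body: unpack a sequence, echoing bm_unpack_sequence.
    a, b, c, d = (1, 2, 3, 4)
    return a + b + c + d

# timeit runs everything in the current process; pyperf would instead
# spawn multiple worker processes and combine their results.
runs = timeit.repeat(workload, number=100_000, repeat=5)
best = min(runs) / 100_000  # seconds per iteration
print(f"best: {best * 1e9:.1f} ns per loop")
```

Taking the minimum of several repeats is the usual timeit convention for
getting the least-noisy estimate from an in-process run.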

Victor

On Wed, 14 Oct 2020 at 17:03, Pablo Galindo Salgado
<pablog...@gmail.com> wrote:
>
> > Would it be possible to rerun the tests with the current
> > setup for, say, the last 1000 revisions or perhaps a subset of these
> > (e.g. every 10th revision) to try to binary-search for the revision
> > which introduced the change ?
>
> Every run takes 1-2 hours, so doing 1000 would certainly be time-consuming :)
>
> That's why, from now on, I am trying to invest in daily builds for master,
> so we can answer that exact question if we detect regressions in the future.
>
>
> On Wed, 14 Oct 2020 at 15:04, M.-A. Lemburg <m...@egenix.com> wrote:
>>
>> On 14.10.2020 16:00, Pablo Galindo Salgado wrote:
>> >> Would it be possible to get the data for older runs back, so that
>> > it's easier to find the changes which caused the slowdown ?
>> >
>> > Unfortunately no. The reason is that that data was misleading, because
>> > different points were computed with different versions of pyperformance,
>> > and therefore with different packages (and therefore different code), so
>> > the points could not be compared among themselves.
>> >
>> > Also, past data didn't include 3.9 commits, because the data gathering
>> > was not automated and it hadn't run in a long time :(
>>
>> Makes sense.
>>
>> Would it be possible to rerun the tests with the current
>> setup for, say, the last 1000 revisions or perhaps a subset of these
>> (e.g. every 10th revision) to try to binary-search for the revision
>> which introduced the change ?
>>
>> > On Wed, 14 Oct 2020 at 14:57, M.-A. Lemburg <m...@egenix.com> wrote:
>> >
>> >     Hi Pablo,
>> >
>> >     thanks for pointing this out.
>> >
>> >     Would it be possible to get the data for older runs back, so that
>> >     it's easier to find the changes which caused the slowdown ?
>> >
>> >     Going to the timeline, it seems that the system only has data
>> >     for Oct 14 (today):
>> >
>> >     https://speed.python.org/timeline/#/?exe=12&ben=regex_dna&env=1&revs=1000&equid=off&quarts=on&extr=on&base=none
>> >
>> >     In addition to unpack_sequence, the regex_dna test has slowed
>> >     down a lot compared to Py3.8.
>> >
>> >     https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_unpack_sequence.py
>> >     https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_regex_dna.py
>> >
>> >     Thanks.
>> >
>> >     On 14.10.2020 15:16, Pablo Galindo Salgado wrote:
>> >     > Hi!
>> >     >
>> >     > I have updated the branch benchmarks in the pyperformance server and
>> >     > now they include 3.9. There are some benchmarks that are faster, but
>> >     > on the other hand some benchmarks are substantially slower, pointing
>> >     > at a possible performance regression in 3.9 in some aspects. In
>> >     > particular, some tests like "unpack sequence" are almost 20% slower.
>> >     > As there are some other tests where 3.9 is faster, it is not fair to
>> >     > conclude that 3.9 is slower, but this is something we should look
>> >     > into, in my opinion.
>> >     >
>> >     > You can check the benchmarks I am talking about by:
>> >     >
>> >     > * Going here: https://speed.python.org/comparison/
>> >     > * In the left bar, selecting "lto-pgo latest in branch '3.9'" and
>> >     >   "lto-pgo latest in branch '3.8'"
>> >     > * To better read the plot, I would recommend selecting a
>> >     >   "Normalization" to the 3.8 branch (this is in the top part of the
>> >     >   page) and checking the "horizontal" checkbox.
>> >     >
>> >     > These benchmarks are very stable: I have executed them several times
>> >     > over the weekend, yielding the same results, and, more importantly,
>> >     > they are being executed on a server specially prepared for running
>> >     > reproducible benchmarks: CPU affinity, CPU isolation, CPU pinning
>> >     > for NUMA nodes, fixed CPU frequency, the CPU governor set to
>> >     > performance mode, IRQ affinity disabled for the benchmarking CPU
>> >     > nodes, etc., so you can trust these numbers.
>> >     >
>> >     > I kindly suggest that everyone interested in trying to improve 3.9
>> >     > (and master) performance review these benchmarks and try to identify
>> >     > the problems and fix them, or find what changes introduced the
>> >     > regressions in the first place. All benchmarks are the ones executed
>> >     > by the pyperformance suite (https://github.com/python/pyperformance),
>> >     > so you can execute them locally if you need to.
>> >     >
>> >     > ---
>> >     >
>> >     > On a related note, I am also working on the speed.python.org server
>> >     > to provide more automation and ideally some integrations with GitHub
>> >     > to detect performance regressions. For now, I have done the
>> >     > following:
>> >     >
>> >     > * Recomputed the benchmarks for all branches (except master) using
>> >     >   the same version of pyperformance, so they can be compared with
>> >     >   each other. This can only be seen in the "Comparison" tab:
>> >     >   https://speed.python.org/comparison/
>> >     > * I am setting up daily builds of the master branch so we can detect
>> >     >   performance regressions with daily granularity. These daily builds
>> >     >   will be located in the "Changes" and "Timeline" tabs
>> >     >   (https://speed.python.org/timeline/).
>> >     > * Once the daily builds are working as expected, I plan to work on
>> >     >   automatically commenting on PRs or on bpo if we detect that a
>> >     >   commit has introduced a notable performance regression.
>> >     >
>> >     > Regards from sunny London,
>> >     > Pablo Galindo Salgado.
>> >     >
>> >     > _______________________________________________
>> >     > python-committers mailing list -- python-committ...@python.org
>> >     > To unsubscribe send an email to python-committers-le...@python.org
>> >     > https://mail.python.org/mailman3/lists/python-committers.python.org/
>> >     > Message archived at
>> >     > https://mail.python.org/archives/list/python-committ...@python.org/message/G3LB4BCAY7T7WG22YQJNQ64XA4BXBCT4/
>> >     > Code of Conduct: https://www.python.org/psf/codeofconduct/
>> >     >
>> >
>> >     --
>> >     Marc-Andre Lemburg
>> >     eGenix.com
>> >
>> >     Professional Python Services directly from the Experts (#1, Oct 14 2020)
>> >     >>> Python Projects, Coaching and Support ...    https://www.egenix.com/
>> >     >>> Python Product Development ...        https://consulting.egenix.com/
>> >     ________________________________________________________________________
>> >
>> >     ::: We implement business ideas - efficiently in both time and costs :::
>> >
>> >        eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
>> >         D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
>> >                Registered at Amtsgericht Duesseldorf: HRB 46611
>> >                    https://www.egenix.com/company/contact/
>> >                          https://www.malemburg.com/
>> >
>>



-- 
Night gathers, and now my watch begins. It shall not end until my death.
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MPCXQ6JGAF5KOROXYMGSE57MUGCSEWN4/
Code of Conduct: http://python.org/psf/codeofconduct/
