On 15.10.2020 15:50, Victor Stinner wrote:
> On Wed, Oct 14, 2020 at 17:59, Antoine Pitrou wrote:
>> unpack-sequence is a micro-benchmark. (...)
>
> I suggest removing it.
>
> I removed other similar micro-benchmarks from pyperformance in the
> past, since they can easily be misunderstood and misleading. For
> curious people, I'm keeping a collection [...]

On 14.10.2020 16:14, Antoine Pitrou wrote:
> On 14/10/2020 at 15:16, Pablo Galindo Salgado wrote:
>> Hi!
>>
>> I have updated the branch benchmarks in the pyperformance server and now
>> they include 3.9. There are some benchmarks that are faster but on the
>> other hand some benchmarks are substantially slower [...]

> Would it be possible instead to run git-bisect for only a _particular_
> benchmark? It seems that may be all that's needed to track down particular
> regressions. Also, if e.g. git-bisect is used it wouldn't be every e.g.
> 10th revision but rather O(log(n)) revisions.

That only works if there is a si [...]

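A rough sketch of how a per-benchmark bisection could be wired up with
"git bisect run". The workload, threshold and build commands below are
illustrative assumptions, not something agreed on in this thread:

    # bisect_bench.py -- hypothetical driver for "git bisect run python3 bisect_bench.py",
    # run from a CPython checkout.  Exit 0 = good (fast), 1 = bad (slow), 125 = skip.
    import subprocess
    import sys

    THRESHOLD_SECONDS = 0.08   # placeholder: take it from a known-good revision's timing
    WORKLOAD = ("import timeit; print(timeit.timeit("
                "'a, b, c, d, e, f, g, h, i, j = seq', "
                "'seq = tuple(range(10))', number=1_000_000))")

    # Rebuild the interpreter for the revision that git bisect checked out.
    build = subprocess.run("./configure -q && make -s -j4", shell=True)
    if build.returncode != 0:
        sys.exit(125)          # tell git bisect to skip unbuildable revisions

    # Time the single micro-workload with the freshly built interpreter.
    result = subprocess.run(["./python", "-c", WORKLOAD],
                            capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(125)

    sys.exit(0 if float(result.stdout) < THRESHOLD_SECONDS else 1)

Driven by "git bisect start <bad> <good>" followed by
"git bisect run python3 bisect_bench.py", this only needs O(log n)
build-and-measure steps rather than one per revision.
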
On Wed, Oct 14, 2020 at 8:03 AM Pablo Galindo Salgado wrote:
>> Would it be possible to rerun the tests with the current
>> setup for say the last 1000 revisions or perhaps a subset of these
>> (e.g. every 10th revision) to try to binary search for the revision which
>> introduced the change?
>
> Every run takes 1-2 h, so doing 1000 would certainly be time-consuming :) [...]

On 14.10.2020 17:59, Antoine Pitrou wrote:
> On 14/10/2020 at 17:25, M.-A. Lemburg wrote:
>>
>> Well, there's a trend here:
>>
>> [...]
>>
>> Those two benchmarks were somewhat faster in Py3.7 and got slower in 3.8
>> and then again in 3.9, so this is more than just an artifact.
>
> unpack-sequence is a micro-benchmark. It's useful if you want t [...]

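For readers unfamiliar with it, a minimal sketch of the kind of operation a
micro-benchmark like unpack-sequence isolates, i.e. nothing but sequence
unpacking in a tight loop (an illustration, not the actual pyperformance
workload):

    # Illustrative only: time a bare tuple-unpacking statement, which is
    # roughly what an unpack-sequence style micro-benchmark measures.
    import timeit

    elapsed = timeit.timeit(
        stmt="a, b, c, d, e, f, g, h, i, j = seq",  # the operation under test
        setup="seq = tuple(range(10))",
        number=1_000_000,
    )
    print(f"{elapsed * 1e3:.1f} ns per unpack")

A small change in how the interpreter handles that one bytecode pattern moves
this number a lot while barely registering in macro-benchmarks, which is why
such figures are easy to misread.
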
I suggest limiting it to one "dot" per week, since CodeSpeed (the website
used to browse the benchmark results) is more or less limited to 50 dots (it
can display more if you only display a single benchmark).
Previously, it was closer to one "dot" per month, which made it possible to
display a timeline spanning over 5 years. In [...]

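As a rough illustration of the trade-off (the 50-dot figure is the limit
mentioned above; the rest is simple arithmetic):

    # How much history fits under a ~50-dot display limit at different cadences.
    DOTS = 50
    print(f"weekly dots:  ~{DOTS * 7 / 365:.1f} years of history")   # ~1.0 year
    print(f"monthly dots: ~{DOTS * 30 / 365:.1f} years of history")  # ~4.1 years
    # Single-benchmark views can show more dots, hence the longer timelines above.
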
> I wouldn't worry about a small regression on a micro- or mini-benchmark
> while the overall picture is stable.

Absolutely, I agree it is not something to *worry* about, but I think it makes
sense to investigate, as the possible fix may be trivial. Part of the reason I
wanted to recompute them was because th [...]

> Would it be possible to rerun the tests with the current
> setup for say the last 1000 revisions or perhaps a subset of these
> (e.g. every 10th revision) to try to binary search for the revision which
> introduced the change?

Every run takes 1-2 h, so doing 1000 would certainly be time-consuming :)
Tha [...]

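A back-of-the-envelope comparison of the options discussed above, taking the
1-2 h per full run at roughly 1.5 h (these are only the thread's own rough
figures, not measurements):

    # Rough cost of locating the regressing revision among ~1000 candidates.
    import math

    HOURS_PER_RUN = 1.5          # middle of the 1-2 h per pyperformance run
    REVISIONS = 1000

    full_sweep  = REVISIONS * HOURS_PER_RUN                        # every revision
    every_tenth = (REVISIONS // 10) * HOURS_PER_RUN                # the "every 10th" idea
    bisect      = math.ceil(math.log2(REVISIONS)) * HOURS_PER_RUN  # git-bisect, O(log n)

    print(full_sweep, every_tenth, bisect)   # 1500.0, 150.0, 15.0 hours

Bisecting on a single benchmark, as suggested earlier in the thread, would
also shrink the per-step cost well below 1.5 h, since only one benchmark
needs to run.
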
On 14/10/2020 at 15:16, Pablo Galindo Salgado wrote:
> Hi!
>
> I have updated the branch benchmarks in the pyperformance server and now
> they include 3.9. There are some benchmarks that are faster but on the
> other hand some benchmarks are substantially slower, pointing at a
> possible perf [...]

On 14.10.2020 16:00, Pablo Galindo Salgado wrote:
>> Would it be possible to get the data for older runs back, so that
>> it's easier to find the changes which caused the slowdown?
>
> Unfortunately no. The reason is that the old data was misleading, because
> different points were computed with a different version of pyperformance
> and therefore with differ [...]

Hi Pablo,
thanks for pointing this out.
Would it be possible to get the data for older runs back, so that
it's easier to find the changes which caused the slowdown?
Going to the timeline, it seems that the system only has data
for Oct 14 (today):
https://speed.python.org/timeline/#/?exe=12&ben

> The performance figures in the Python 3.9 "What's New"

Those are also micro-benchmarks, which can have no effect at all on
macro-benchmarks. The ones I am linking are almost all macro-benchmarks, so,
unfortunately, the ones in Python 3.9 "What's New" are not lying and they
seem to be correlated [...]

The performance figures in the Python 3.9 "What's New" (here -
https://docs.python.org/3/whatsnew/3.9.html#optimizations) did look
oddly like a lot of things went slower, to me. I assumed I'd misread
the figures, and moved on, but maybe I was wrong to do so...
Paul
On Wed, 14 Oct 2020 at 14:17, P [...]