On Fri, 19 Mar 2021 at 00:05, Ben Gamari wrote:
Hi all,
I will be performing a GitLab upgrade starting in approximately one
hour. While I generally try to do this with more notice and out of
working hours, in this case there is an exploitable GitLab vulnerability
that deserves swift action. Thank you for your patience.
Cheers,
- Ben
Simon Peyton Jones via ghc-devs writes:
> > We need to do something about this, and I'd advocate for just not making
> > stats fail with marge.
>
> Generally I agree. One point you don’t mention is that our perf tests
> (which CI forces us to look at assiduously) are often pretty weird
> cases.
Karel Gardas writes:
> On 3/17/21 4:16 PM, Andreas Klebinger wrote:
>> Now that isn't really an issue anyway I think. The question is rather is
>> 2% a large enough regression to worry about? 5%? 10%?
>
> 5-10% is still around system noise even on a lightly loaded workstation.
> Not sure if CI is n
My guess is most of the "noise" is not run time but the compiled code
changing in hard-to-predict ways.
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1776/diffs for
example was a very small PR that took *months* of on-off work to get
passing metrics tests. In the end, binding `is_boot`
I left the wiggle room for things like longer wall time causing more time
events in the IO Manager/RTS, which can be a thermal/HW issue.
They're small and indirect, though.
-davean
On Thu, Mar 18, 2021 at 1:37 PM Sebastian Graf wrote:
To be clear: All performance tests that run as part of CI measure
allocations only. No wall clock time.
Those measurements are (mostly) deterministic and reproducible between
compiles of the same worktree and not impacted by thermal issues/hardware
at all.
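A tolerance-window acceptance check on an allocation count, as described above, can be sketched in a few lines of Haskell. This is a hypothetical illustration, not GHC's actual test-driver code (the real logic lives in the testsuite driver); the names `checkAllocations` and `MetricResult` are invented for this example:

```haskell
-- Hypothetical sketch: accept a measured allocation count if its
-- relative deviation from the recorded baseline stays within a
-- tolerance window (e.g. 0.02 for +/-2%).

data MetricResult = Pass | Fail Double
  deriving (Show, Eq)

checkAllocations :: Double -> Integer -> Integer -> MetricResult
checkAllocations tolerance baseline measured
  | deviation <= tolerance = Pass
  | otherwise              = Fail deviation
  where
    deviation = abs (fromIntegral measured - fromIntegral baseline)
              / fromIntegral baseline

main :: IO ()
main = do
  print (checkAllocations 0.02 1000000 1015000)  -- 1.5% drift, within window
  print (checkAllocations 0.02 1000000 1100000)  -- 10% regression, rejected
```

Because allocation counts are (mostly) deterministic, such a check is far more stable in CI than any wall-clock threshold would be; the open question in this thread is only how wide the window should be.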
On Thu, 18 Mar 2021 at 18:09, … wrote:
That really shouldn't be near system noise for a well constructed
performance test. You might be seeing things like thermal issues, etc
though - good benchmarking is a serious subject.
Also we're not talking wall clock tests, we're talking specific metrics.
The machines do tend to be bare metal, bu
I think the 0.8.1.2 release should be fine for 9.0.2. Could you, Ben,
confirm that?
- Oleg
On 18.3.2021 18.32, Judah Jacobson wrote:
Hi Oleg, I apologize for the delay in response. I have merged your
Data.List PR and released haskeline-0.8.1.2 containing that change. I will
also look into making releases corresponding to ghc-9.0.*.
On Thu, Mar 18, 2021 at 8:14 AM Oleg Grenrus wrote:
Hi Judah,
I'm sending you an email in case you haven't noticed the GitHub notifications.
I have a PR https://github.com/judah/haskeline/pull/153 (now open for
two months).
It's blocking work on Data.List specialization.
Also, Ben has pinged you on https://github.com/judah/haskeline/issues/154
to