Hi all,
The GHC team would like to acknowledge some ongoing CI instabilities in the GHC project:
- regular failures of the i386 job, possibly related to the bump to Debian 12 <https://gitlab.haskell.org/ghc/ghc/-/commit/203830065b81fe29003c1640a354f11661ffc604> (although the job still
Dear GHC devs
Is GHC's CI stuck in some way? My !12928 has been scheduled by Marge over
10 times now, and each time the commit has failed. Ten seems... a lot.
Thanks
Simon
Hello,
This is the first (and, perhaps, last[1]) monthly update on GitLab CI. This
month in particular deserves its own little email for the following reasons:
1. Some of the Darwin runners recently spontaneously self-upgraded,
introducing toolchain changes that broke CI. All the fixes are now
Final update: both problems are solved!
Now we just need to wait for the wheels of time to do their magic. The
final patch is still waiting to get picked up and merged.
The queue for CI jobs is still a bit longer than usual right now, but I
think it's legitimate. There are simply more open MRs
The root of the second problem was !10723, which started failing on its own
pipeline after being rebased.
I’m pushing a fix.
- Rodrigo
Two things are negatively impacting GHC CI right now:
Darwin runner capacity is down to one machine, since the other three are
paused. The problem and solution are known[1], but until the fix is
implemented in GHC, expect pipelines to get backed up. I will work on a
patch this morning
[1]: https
I think there's a problem with jobs restarting, on my renamer MR
<https://gitlab.haskell.org/ghc/ghc/-/merge_requests/8686> there were 5
full pipelines running at once. I had to cancel some of them, but also it
seems some got cancelled by some new CI pipelines restarting.
All GHC CI pipelines seem stalled, sadly
e.g. https://gitlab.haskell.org/ghc/ghc/-/merge_requests/10123/pipelines
Can someone unglue it?
Thanks!
Simon
* FreeBSD CI revival: https://gitlab.haskell.org/groups/ghc/-/epics/5
These epics have no deadline and their purpose is to track the evolution of our workload for certain "big" tasks that go beyond a single ticket. They are also useful as they are an (albeit impr
(Adding ghc-devs)
Are these fragile tests?
1. T14346 got a "bad file descriptor" on Darwin
2. linker_unload got some gold errors on Linux
Neither of these has been reported to me before, so I don't know much about them. Nor have I looked deeply (or at all) at the tests themselves, yet.
I haven't looked yet to see if they always fail in the same place, but I'll do that soon. The first example I looked at, however, has the line
As a consequence of showing up on the dashboard, the jobs get restarted. Since they fail consistently, they keep getting restarted. Since the jobs keep getting restarted, the pipelines stay alive. When I checked just now, there were 8 nightly runs still running. :) Thus I'm going to cancel the still-running nightly-i386-linux-deb9-validate jobs and let the pipelines die in peace. You can still find all examples of failed jobs on the dashboard:
https://grafana.gitlab.haskell.org/d/167r9v6nk/ci-spurious-failures?orgId=2=now-90d=now=5m=cannot_allocate
To prevent future problems, it would be good if someone could help me look into this. Otherwise I'll just disable the job. :(
Dear devs
I used to use the build *x86_64-linux-deb10-int_native-validate* as the
place to look for compiler/bytes-allocated changes in perf/compiler. But
now it doesn't show those results any more, only runtime/bytes-allocated in
perf/should_run.
- Should we not run perf/compiler in every
Simon Peyton Jones writes:
> Matthew, Ben, Bryan
>
> CI is failing in "lint-ci-config"..
>
> See https://gitlab.haskell.org/ghc/ghc/-/merge_requests/8916
> or https://gitlab.haskell.org/ghc/ghc/-/merge_requests/7847
>
I'll investigate.
Cheers,
- Ben
Matthew, Ben, Bryan
CI is failing in "lint-ci-config"..
See https://gitlab.haskell.org/ghc/ghc/-/merge_requests/8916
or https://gitlab.haskell.org/ghc/ghc/-/merge_requests/7847
What's up?
Simon
Hello again,
Thanks to everyone who pointed out spurious failures over the last few
weeks. Here's the current state of affairs and some discussion on next
steps.
Dashboard
I made a dashboard for tracking spurious failures:
https://grafana.gitlab.haskell.org/d/167r9v6nk/ci
Hi all,
I'd like to get some data on weird CI failures. Before clicking "retry" on a
spurious failure, please paste the url for your job into the spreadsheet you'll
find linked at https://gitlab.haskell.org/ghc/ghc/-/issues/21591.
Sorry for the slight misdirection. I wanted the s
Hi all,
Currently Windows CI is experiencing a high amount of instability, so if your patch fails for this reason then don't worry.
We are attempting to fix it.
Cheers,
Matt
Devs
As you'll see from this pipeline record
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/7105/pipelines
CI consistently fails once a single commit has trailing whitespace, even if
it is fixed in a subsequent commit
- dce2054d
<https://gitlab.haskell.org/ghc/ghc/-/com
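For the curious, the kind of check the whitespace lint performs is easy to picture. Below is a minimal Haskell sketch, purely illustrative and not the actual script CI runs; it reports every line that ends in whitespace and exits non-zero if it found any:
import Control.Monad (forM, when)
import Data.Char (isSpace)
import System.Environment (getArgs)
import System.Exit (exitFailure)
-- A line offends if it is non-empty and its last character is whitespace.
offending :: String -> [Int]
offending contents =
  [ n | (n, l) <- zip [1 ..] (lines contents)
      , not (null l)
      , isSpace (last l) ]
main :: IO ()
main = do
  files <- getArgs
  bad <- forM files $ \f -> do
    ns <- offending <$> readFile f
    mapM_ (\n -> putStrLn (f ++ ":" ++ show n ++ ": trailing whitespace")) ns
    pure (not (null ns))
  -- Fail the job if any file had trailing whitespace.
  when (or bad) exitFailure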
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/7231 also looks relevant.
Richard
Hi,
the new (or “new”?) handling of perf numbers, where CI just magically
records and compares them, without us having to manually edit the
`all.T` files, is a big improvement, thanks!
However, I found the choice of the base commit to compare against
unhelpful. Assume master is at commit M
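The behaviour being asked for here, comparing against the nearest ancestor on master, is what git calls the merge base. A hypothetical sketch of computing it by shelling out to git; this illustrates the idea only and is not how CI actually selects its base commit:
import System.Process (readProcess)
-- Nearest common ancestor of HEAD and origin/master.
perfBaseline :: IO String
perfBaseline = do
  out <- readProcess "git" ["merge-base", "HEAD", "origin/master"] ""
  pure (takeWhile (/= '\n') out)
main :: IO ()
main = perfBaseline >>= putStrLn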
Thanks, this is all great news
ÉRDI Gergő writes:
> Hi,
>
> I'm seeing three build failures in CI:
>
Hi,
> 1. On perf-nofib, it fails with:
>
Don't worry about this one for the moment. This job is marked as
accepting of failure for a reason (hence the job state being an orange
exclamation mark r
There is a fix somewhere from Ben, so it's just a question of time until it's properly fixed.
The other two I'm afraid I have no idea. I'll see to restart them. (You can't?)
Hi,
I'm seeing three build failures in CI:
1. On perf-nofib, it fails with:
== make boot -j --jobserver-fds=3,4 --no-print-directory;
in /builds/cactus/ghc/nofib/real/smallpt
/builds/cactus/ghc/ghc/bin/ghc -M -dep
Hi all,
There is a configuration issue with the Darwin builders which has meant that for the last 6 days CI has been broken if you have pushed from a fork, because the majority of Darwin builders are only configured to work with branches pushed to the main project. These failures manifest
Hi all,
At this point CI should be fully functional on the stable and master
branches again. However, do note that older base commits may refer to
Docker images that are sadly no longer available. Such cases can be
resolved by simply rebasing.
Cheers,
Hi all,
As you may have realized, CI has been a bit of a disaster over the last few days. It appears that this is just the most recent chapter in our on-going troubles with Docker image storage, being due to an outage of our upstream storage service [1]. Davean and I have started to implement
Hi there!
You might have seen failed or stuck or pending Darwin builds. The CI builders we were generously donated have ~250GB of disk space (which should be absolutely adequate for what we do), but macOS Big Sur does some odd reservation of 200GB in /System/Volumes/Data; this is despite automatic
Thanks Moritz for that update.
The latest is that currently Darwin CI is disabled and the merge train is unblocked (*choo choo*).
I am testing Moritz's patches to speed up CI and will merge them in shortly to get Darwin coverage back.
Cheers,
Matt
(or some similar directory) call for each and every possible path.
Switching to hadrian will cut down the time from ~5hs to ~2hs. At some point we had make builds <90min by just killing all DYLD_LIBRARY_PATH logic we ever had, but that broke bindists.
The CI has time values attached and some summ
Hi all,
The darwin pipelines are gumming up the merge pipeline as they are
taking over 4 hours to complete on average.
I am going to disable them -
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5785
Please can someone give me access to one of the M1 builders so I can
debug why the tests
Thanks for this update! Glad to know this effort is going well.
One quick question: suppose I am editing something in `base`. My understanding is that my edit will be linted. How can I run hlint locally so that I can easily respond to trouble before CI takes a crack? And where would I learn this information (that is, how to run hlint locally)?
Thanks!
Richard
Hello fellow devs,
this email is an activity report on the integration of the HLint[0] tool
in the Continuous Integration (CI) pipelines.
On Jul. 5, 2020 I opened a discussion ticket[1] on the topic of code
linting in the several components of the GHC code-base. It has served as
a reference
> What about the case where the rebase *lessens* the improvement? That is,
> you're expecting these 10 cases to improve, but after a rebase, only 1
> improves. That's news! But a blanket "accept improvements" won't tell you.
I don't think that scenario currently triggers a C
After the idea of letting marge accept unexpected perf improvements, and looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759, which failed because of a single test for a single build flavour crossing the improvement threshold where CI fails after rebasing, I wondered:
When would accepting an unexpected perf improvement ever backfire?
In practice I either have a patch that I expect to improve
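For readers following the thread, the logic under debate is a symmetric tolerance window around a stored baseline; "accepting" an improvement means adopting the new number as the baseline. A hypothetical Haskell sketch (checkMetric is an invented name; the real check lives in the testsuite driver):
-- Compare a measured metric against a baseline with a relative tolerance.
data Verdict = Ok | UnexpectedImprovement | UnexpectedRegression
  deriving Show
checkMetric :: Double    -- relative tolerance, e.g. 0.02 for 2%
            -> Integer   -- stored baseline (e.g. bytes allocated)
            -> Integer   -- measured value
            -> Verdict
checkMetric tol baseline measured
  | delta >  tol = UnexpectedRegression
  | delta < -tol = UnexpectedImprovement  -- auto-accepting these is the proposal
  | otherwise    = Ok
  where
    delta :: Double
    delta = (fromIntegral measured - fromIntegral baseline)
          / fromIntegral baseline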
I left the wiggle room for things like longer wall time causing more time events in the IO Manager/RTS, which can be a thermal/HW issue.
They're small and indirect though.
-davean
To be clear: All performance tests that run as part of CI measure
allocations only. No wall clock time.
Those measurements are (mostly) deterministic and reproducible between
compiles of the same worktree and not impacted by thermal issues/hardware
at all.
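To make "allocations, not wall clock" concrete: any Haskell program can count its own allocated bytes through GHC.Stats when run with +RTS -T. A small illustrative sketch (the testsuite has its own machinery; allocationsOf is an invented helper):
import Control.Exception (evaluate)
import Data.Word (Word64)
import GHC.Stats (RTSStats (allocated_bytes), getRTSStats)
-- Bytes allocated while running an action (difference of the cumulative counter).
allocationsOf :: IO a -> IO (a, Word64)
allocationsOf act = do
  before <- allocated_bytes <$> getRTSStats
  r <- act
  after <- allocated_bytes <$> getRTSStats
  pure (r, after - before)
main :: IO ()
main = do
  (_, n) <- allocationsOf (evaluate (sum [1 .. 1000000 :: Integer]))
  putStrLn (show n ++ " bytes allocated")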
On 3/17/21 4:16 PM, Andreas Klebinger wrote:
> Now that isn't really an issue anyway I think. The question is rather is
> 2% a large enough regression to worry about? 5%? 10%?
5-10% is still around system noise even on a lightly loaded workstation. Not sure if CI is not run on some shared cloud resources where it may be even higher.
I've done a simple experiment of pinning ghc compiling ghc-cabal and I've been able
While I fully agree with this, we should *always* want to know if a small synthetic benchmark regresses by a lot. Or in other words, we don't want CI to accept such a regression for us ever; the developer of a patch should need to explicitly OK it. Otherwise we just slow down a lot of seldom-used code paths by a lot.
Now that isn't really an issue anyway I think. The question is rather: is 2% a large enough regression to worry about? 5%? 10%?
Yes, I think the counterpoint of "automating what Ben does" so people besides Ben can do it is very important. In this case, I think a good thing we could do is asynchronously build more of master post-merge, such as use the perf stats to automatically bisect anything that is fishy, including
Re: Performance drift: I opened
https://gitlab.haskell.org/ghc/ghc/-/issues/17658 a while ago with an idea
of how to measure drift a bit better.
It's basically an automatically checked version of "Ben stares at
performance reports every two weeks and sees that T9872 has regressed by
10% since 9.0"
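The automated version of that check can be very small: compare the newest measurement against a fixed reference point and flag when the relative drift exceeds a threshold. A hypothetical sketch (names invented; #17658 has the actual design discussion):
-- True if the newest measurement drifted more than `thresh` from `ref`,
-- e.g. driftedBy 0.10 t9872AtRelease laterMeasurements.
driftedBy :: Double    -- threshold, e.g. 0.10 for 10%
          -> Double    -- reference measurement (e.g. at the 9.0 release)
          -> [Double]  -- subsequent measurements, oldest first
          -> Bool
driftedBy thresh ref series = case reverse series of
  []          -> False
  (newest:_)  -> abs (newest - ref) / ref > thresh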
On Mar 17, 2021, at 6:18 AM, Moritz Angermann wrote:
> But what do we expect of patch authors? Right now if five people write
> patches to GHC, and each of them eventually manage to get their MRs green,
> after a long review, they finally see it assigned to marge, and then it
practitioners care about more than the micro benchmarks.
Again, I'm absolutely not in favour of GHC regressing; it's slow enough as it is. I just think CI should be assisting us and not holding development back.
Cheers,
Moritz
Ah, so it was really two identical pipelines (one for the branch where
Margebot batches commits, and one for the MR that Margebot creates before
merging). That's indeed a non-trivial amount of purely wasted
computer-hours.
Taking a step back, I am inclined to agree with the proposal of not
> We need to do something about this, and I'd advocate for just not making
> stats fail with marge.
Generally I agree. One point you don't mention is that our perf tests (which CI forces us to look at assiduously) are often pretty weird cases. So there is at least a danger that these more
*why* is a very good question. The MR fixing it is here:
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275
Then I have a question: why are there two pipelines running on each merge
batch?
No it wasn't. It was about the stat failures described in the next
paragraph. I could have been more clear about that. My apologies!
> and if either of both (see below) failed, marge's merge would fail as well.
Re: "see below": is this referring to a missing part of your email?
Hi there!
Just a quick update on our CI situation. Ben, John, Davean and I had a discussion on CI yesterday, about what we can do about it, as well as some minor notes on why we are frustrated with it. This is an open invitation to anyone who in earnest wants to work on CI. Please come forward
reusing a build incrementally, is not building at all.
Incremental CI can cut multiple hours to < mere minutes, especially with the test suite being embarrassingly parallel. There is simply no way optimizations to the compiler, independent from sharing a cache between CI runs, can get anywhere close to that return on investment.
I rather agree
I'm not opposed to some effort going into this, but I would strongly oppose putting all our effort there. Incremental CI can cut multiple hours to < mere minutes, especially with the test suite being embarrassingly parallel. There is simply no way optimizations to the compiler independent f
There are some good ideas here, but I want to throw out another one: put all
our effort into reducing compile times. There is a loud plea to do this on
Discourse
<https://discourse.haskell.org/t/call-for-ideas-forming-a-technical-agenda/1901/24>,
and it would both solve these CI pr
Recompilation avoidance
I think in order to cache more in CI, we first have to invest some time in
fixing recompilation avoidance in our bootstrapped build system.
I just tested on a hadrian perf ticky build: Adding one line of *comment*
in the compiler causes
- a (pretty slow, yet
I am also wary of us deferring checking whole platforms and whatnot. I think that's just kicking the can down the road, and will result in more variance and uncertainty. It might be alright for those authoring PRs, but it will make Ben's job keeping the
Before getting into these complex trade-offs, I think we should focus on the cornerstone issue that CI isn't incremental.
1. Building and testing happen together. When tests fail spuriously, we also have to rebuild GHC in addition to re-running the tests. That's pure waste. https
> bit constrained by powerful and fast CI machines but probably bearable
> for the time being. I doubt anyone really looks at those jobs anyway as
> they are permitted to fail.
For the record, I look at this once in a while to make sure that they haven't broken (and usually pick of
Apologies for the latency here. This thread has required a fair amount of
reflection.
wibbles.
The CI script Ben wrote, and generously used to help set up the new builder, seems to assume an older Git install, and thus a path was broken which, thanks to GitLab, led to the brilliant error of just stalling.
Next up: because we use msys2's pacman to provision the Windows builders
At this point I believe we have ample Linux build capacity. Darwin looks pretty good as well; the ~4 M1s we have should in principle also be able to build x86_64-darwin at acceptable speeds, although on Big Sur only.
The aarch64-Linux story is a bit constrained by powerful and fast CI machines
Hi Moritz,
I, too, had my gripes with CI turnaround times in the past. Here's a
somewhat radical proposal:
- Run "full-build" stage builds only on Marge MRs. Then we can assign to
Marge much earlier, but probably have to do a bit more of (manual)
bisecting of spoiled Mar
Friends,
I've been looking at CI recently again, as I was facing CI turnaround times of 9-12 hours, and this just keeps dragging out and making progress hard.
The pending pipeline currently has 2 darwin and 15 windows builds waiting. Windows builds on average take ~220 minutes. We have five builders
tl;dr. GHC's CI capacity will be reduced due to a loss of sponsorship, particularly in Windows runner capacity. Help wanted in finding additional capacity.
Hi all,
For many years Google X has generously donated Google Compute Engine
resources to GHC's CI infrastructure. We all
Ben
This sounds like a good decision to me, thanks.
Is there a possibility to have a slow CI-on-windows job (not part of the "this must pass before merging" step), which will slowly, but reliably, fail if the Windows build fails? E.g. does it help to make the build be 100%
Hi everyone,
After multiple weeks of effort struggling to get Windows CI into a stable
condition I'm sorry to say that we're going to need to revert to
allowing it to fail for a bit longer. The status quo is essentially
holding up the entire merge queue and we still seem quite far from
resolving
Hi
After pushing some commits, while running a CI pipeline, I persistently
get the following error for the validate-i386-linux-deb9 step:
$ git submodule update --init --recursive
fatal: Unable to create
'/builds/RolandSenn/ghc/.git/modules/libraries/Cabal/index.lock': File
exists.
Another git
Try rebasing!
Due to some unfortunate circumstances the performance tests (T9630 and
haddock.base) became fragile. This should be fixed, but you need to
rebase off of the latest master (at least
c931f2561207aa06f1750827afbb68fbee241c6f) for the tests to pass.
Happy Hacking,
David
Yeah. That’s my current theory. It doesn’t help that the queue length isn’t visible.
Carter Schonwald writes:
> Cool. I recommend irc and devs list plus urls / copies of error messages.
>
> Hard to debug timeout if we don’t have the literal url or error messages
> shared!
>
For what it's worth I suspect these timeouts are simply due to the fact
that we are somewhat lacking in
Subject: Re: CI on forked projects: Darwin woes
Thanks! I'll send a note if it starts happening again.
On 5/12/19 7:23 AM, Carter Schonwald wrote:
[ . . . ]
> Next time you hit a failure could you share with the devs list and/or
> #ghc IRC?
--
K