Re: Searching tickets
I tend to use google with the extra keyword site: gitlab.haskell.org/ghc/ghc/-/issues, as it often has better search results than gitlab itself. It doesn't work too well for very recent tickets though (e.g. haskeline one as you mentioned), but otherwise it works for me. Cheers, Cheng On Tue, Jan 24, 2023 at 7:22 PM Ben Gamari wrote: > Simon Peyton Jones writes: > > > I keep finding that 'search' in Gitlab misses things. > > > > Example > > https://gitlab.haskell.org/ghc/ghc/-/issues/22715 mentions "Haskeline" > > (just look on that page) > > > > Yet when I go to > > https://gitlab.haskell.org/ghc/ghc/-/issues/ > > and search for "Haskeline", this ticket isn't reported. > > > > What am I doing wrong? > > > I belive that the search bar at the top of > https://gitlab.haskell.org/ghc/ghc/-/issues/ only searches the issues' > title and description. However, in this particular ticket Haskeline is > only mentioned in a comment. > > It is possible to search in comments via > https://gitlab.haskell.org/search?group_id=2&project_id=1&scope=notes. > Sadly, searching for "Haskeline" here turns up over 300 results and none > of which are the ticket you are looking for; I'm sure that the ticket is > in the returned results somewhere, but I don't think this would be an > efficient way to find it (although being able to sort the result set by > date would make this much easier). > > For this reason I routinely edit issue labels and descriptions to ensure > that they mention useful keywords. However, even then locating tickets > can be challenging. > > Cheers, > > - Ben > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Almost all tests fail after 08bf28819b
The fix just landed on d198a19ae08fec797121e3907ca93c5840db0c53 on master. On Thu, Nov 24, 2022 at 8:11 PM Matthew Farkas-Dyck via ghc-devs < ghc-devs@haskell.org> wrote: > Ah, it seems i was in error earlier... i can reproduce this now. ☹ > > +1 for CI job testing static build > > --- Original Message --- > On Wednesday, November 23rd, 2022 at 20:46, Erdi, Gergo > wrote: > > > > > > > > PUBLIC > > > > Nope, still getting the same error after deleting all of _build. I'm > also on AMD64 Linux. I've tried with GHC 9.2.5 and 9.4.3. For reference, my > exact command line (after deleting _build) is: > > > > ./boot && ./configure && ./hadrian/build-stack --flavour=devel2 -j10 > test --only="ann01" > > > > -Original Message- > > From: Matthew Farkas-Dyck strake...@proton.me > > > > Sent: Wednesday, November 23, 2022 2:29 PM > > To: Erdi, Gergo gergo.e...@sc.com > > > > Cc: ghc-devs@haskell.org > > Subject: Re: Almost all tests fail after 08bf28819b > > > > I had the same problem. Deleting the _build directory and rebuilding > solved it for me. > > > > I'm also on amd64 Linux, by the by. > > > > This email and any attachments are confidential and may also be > privileged. If you are not the intended recipient, please delete all copies > and notify the sender immediately. You may wish to refer to the > incorporation details of Standard Chartered PLC, Standard Chartered Bank > and their subsidiaries at https: //www.sc.com/en/our-locations > > > > Where you have a Financial Markets relationship with Standard Chartered > PLC, Standard Chartered Bank and their subsidiaries (the "Group"), > information on the regulatory standards we adhere to and how it may affect > you can be found in our Regulatory Compliance Statement at https: // > www.sc.com/rcs/ and Regulatory Compliance Disclosures at http: // > www.sc.com/rcs/fm > > > > Insofar as this communication is not sent by the Global Research team > and contains any market commentary, the market commentary has been prepared > by the sales and/or trading desk of Standard Chartered Bank or its > affiliate. It is not and does not constitute research material, independent > research, recommendation or financial advice. Any market commentary is for > information purpose only and shall not be relied on for any other purpose > and is subject to the relevant disclaimers available at https: // > www.sc.com/en/regulatory-disclosures/#market-disclaimer. > > > > Insofar as this communication is sent by the Global Research team and > contains any research materials prepared by members of the team, the > research material is for information purpose only and shall not be relied > on for any other purpose, and is subject to the relevant disclaimers > available at https: // > research.sc.com/research/api/application/static/terms-and-conditions. > > > > Insofar as this e-mail contains the term sheet for a proposed > transaction, by responding affirmatively to this e-mail, you agree that you > have understood the terms and conditions in the attached term sheet and > evaluated the merits and risks of the transaction. We may at times also > request you to sign the term sheet to acknowledge the same. > > > > Please visit https: //www.sc.com/en/regulatory-disclosures/dodd-frank/ > for important information with respect to derivative products. > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Almost all tests fail after 08bf28819b
Sorry, my oversight. I'll open an MR to fix the static configuration :/ On Thu, Nov 24, 2022 at 10:36 AM Sylvain Henry wrote: > With devel2 only the static libs are built. Hence the RTS linker is used. > > > https://gitlab.haskell.org/ghc/ghc/-/commit/08bf28819b78e740550a73a90eda62cce8d21c#de77f4916e67137b0ad5e12cc6c2704c64313900 > made some symbols public (newArena, arenaAlloc, arenaFree) but they weren't > added to rts/RtsSymbols.c so the RTS linker isn't aware of them. > > We should have a CI job testing the static configuration. Wait, there is > one, and tests have been failing with this error too: > https://gitlab.haskell.org/ghc/ghc/-/jobs/1242286#L2859 Too bad it was a > job allowed to fail :) > > > > On 24/11/2022 05:46, Erdi, Gergo via ghc-devs wrote: > > PUBLIC > > Nope, still getting the same error after deleting all of _build. I'm also on > AMD64 Linux. I've tried with GHC 9.2.5 and 9.4.3. For reference, my exact > command line (after deleting _build) is: > > ./boot && ./configure && ./hadrian/build-stack --flavour=devel2 -j10 test > --only="ann01" > > -Original Message- > From: Matthew Farkas-Dyck > Sent: Wednesday, November 23, 2022 2:29 PM > To: Erdi, Gergo > Cc: ghc-devs@haskell.org > Subject: Re: Almost all tests fail after 08bf28819b > > I had the same problem. Deleting the _build directory and rebuilding solved > it for me. > > I'm also on amd64 Linux, by the by. > > This email and any attachments are confidential and may also be privileged. > If you are not the intended recipient, please delete all copies and notify > the sender immediately. You may wish to refer to the incorporation details of > Standard Chartered PLC, Standard Chartered Bank and their subsidiaries at > https: //www.sc.com/en/our-locations > > Where you have a Financial Markets relationship with Standard Chartered PLC, > Standard Chartered Bank and their subsidiaries (the "Group"), information on > the regulatory standards we adhere to and how it may affect you can be found > in our Regulatory Compliance Statement at https: //www.sc.com/rcs/ and > Regulatory Compliance Disclosures at http: //www.sc.com/rcs/fm > > Insofar as this communication is not sent by the Global Research team and > contains any market commentary, the market commentary has been prepared by > the sales and/or trading desk of Standard Chartered Bank or its affiliate. It > is not and does not constitute research material, independent research, > recommendation or financial advice. Any market commentary is for information > purpose only and shall not be relied on for any other purpose and is subject > to the relevant disclaimers available at https: > //www.sc.com/en/regulatory-disclosures/#market-disclaimer. > > Insofar as this communication is sent by the Global Research team and > contains any research materials prepared by members of the team, the research > material is for information purpose only and shall not be relied on for any > other purpose, and is subject to the relevant disclaimers available at https: > //research.sc.com/research/api/application/static/terms-and-conditions. > > Insofar as this e-mail contains the term sheet for a proposed transaction, by > responding affirmatively to this e-mail, you agree that you have understood > the terms and conditions in the attached term sheet and evaluated the merits > and risks of the transaction. We may at times also request you to sign the > term sheet to acknowledge the same. > > Please visit https: //www.sc.com/en/regulatory-disclosures/dodd-frank/ for > important information with respect to derivative products. > ___ > ghc-devs mailing > listghc-devs@haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Consistent CI failure in job nightly-i386-linux-deb9-validate
When hadrian builds the binary-dist job, invoking tar and xz is already the last step and there'll be no other ongoing jobs. But I do agree with reverting, this minor optimization I proposed has caused more trouble than its worth :/ On Thu, Sep 29, 2022 at 9:25 AM Bryan Richter wrote: > > Matthew pointed out that the build system already parallelizes jobs, so it's > risky to force parallelization of any individual job. That means I should > just revert. > > On Wed, Sep 28, 2022 at 2:38 PM Cheng Shao wrote: >> >> I believe we can either modify ci.sh to disable parallel compression >> for i386, or modify .gitlab/gen_ci.hs and .gitlab/jobs.yaml to disable >> XZ_OPT=-9 for i386. >> >> On Wed, Sep 28, 2022 at 1:21 PM Bryan Richter >> wrote: >> > >> > Aha: while i386-linux-deb9-validate sets no extra XZ options, >> > nightly-i386-linux-deb9-validate (the failing job) sets "XZ_OPT = 9". >> > >> > A revert would fix the problem, but presumably so would tweaking that >> > option. Does anyone have information that would lead to a better decision >> > here? >> > >> > >> > On Wed, Sep 28, 2022 at 2:02 PM Cheng Shao wrote: >> >> >> >> Sure, in which case pls revert it. Apologies for the impact, though >> >> I'm still a bit curious, the i386 job did pass in the original MR. >> >> >> >> On Wed, Sep 28, 2022 at 1:00 PM Bryan Richter >> >> wrote: >> >> > >> >> > Yep, it seems to mostly be xz that is running out of memory. (All >> >> > recent builds that I sampled, but not all builds through all time.) >> >> > Thanks for pointing it out! >> >> > >> >> > I can revert the change. >> >> > >> >> > On Wed, Sep 28, 2022 at 11:46 AM Cheng Shao wrote: >> >> >> >> >> >> Hi Bryan, >> >> >> >> >> >> This may be an unintended fallout of !8940. Would you try starting an >> >> >> i386 pipeline with it reversed to see if it solves the issue, in which >> >> >> case we should revert or fix it in master? >> >> >> >> >> >> On Wed, Sep 28, 2022 at 9:58 AM Bryan Richter via ghc-devs >> >> >> wrote: >> >> >> > >> >> >> > Hi all, >> >> >> > >> >> >> > For the past week or so, nightly-i386-linux-deb9-validate has been >> >> >> > failing consistently. >> >> >> > >> >> >> > They show up on the failure dashboard because the logs contain the >> >> >> > phrase "Cannot allocate memory". >> >> >> > >> >> >> > I haven't looked yet to see if they always fail in the same place, >> >> >> > but I'll do that soon. The first example I looked at, however, has >> >> >> > the line "xz: (stdin): Cannot allocate memory", so it's not GHC >> >> >> > (alone) causing the problem. >> >> >> > >> >> >> > As a consequence of showing up on the dashboard, the jobs get >> >> >> > restarted. Since they fail consistently, they keep getting >> >> >> > restarted. Since the jobs keep getting restarted, the pipelines stay >> >> >> > alive. When I checked just now, there were 8 nightly runs still >> >> >> > running. :) Thus I'm going to cancel the still-running >> >> >> > nightly-i386-linux-deb9-validate jobs and let the pipelines die in >> >> >> > peace. You can still find all examples of failed jobs on the >> >> >> > dashboard: >> >> >> > >> >> >> > https://grafana.gitlab.haskell.org/d/167r9v6nk/ci-spurious-failures?orgId=2&from=now-90d&to=now&refresh=5m&var-types=cannot_allocate >> >> >> > >> >> >> > To prevent future problems, it would be good if someone could help >> >> >> > me look into this. Otherwise I'll just disable the job. :( >> >> >> > ___ >> >> >> > ghc-devs mailing list >> >> >> > ghc-devs@haskell.org >> >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Consistent CI failure in job nightly-i386-linux-deb9-validate
I believe we can either modify ci.sh to disable parallel compression for i386, or modify .gitlab/gen_ci.hs and .gitlab/jobs.yaml to disable XZ_OPT=-9 for i386. On Wed, Sep 28, 2022 at 1:21 PM Bryan Richter wrote: > > Aha: while i386-linux-deb9-validate sets no extra XZ options, > nightly-i386-linux-deb9-validate (the failing job) sets "XZ_OPT = 9". > > A revert would fix the problem, but presumably so would tweaking that option. > Does anyone have information that would lead to a better decision here? > > > On Wed, Sep 28, 2022 at 2:02 PM Cheng Shao wrote: >> >> Sure, in which case pls revert it. Apologies for the impact, though >> I'm still a bit curious, the i386 job did pass in the original MR. >> >> On Wed, Sep 28, 2022 at 1:00 PM Bryan Richter >> wrote: >> > >> > Yep, it seems to mostly be xz that is running out of memory. (All recent >> > builds that I sampled, but not all builds through all time.) Thanks for >> > pointing it out! >> > >> > I can revert the change. >> > >> > On Wed, Sep 28, 2022 at 11:46 AM Cheng Shao wrote: >> >> >> >> Hi Bryan, >> >> >> >> This may be an unintended fallout of !8940. Would you try starting an >> >> i386 pipeline with it reversed to see if it solves the issue, in which >> >> case we should revert or fix it in master? >> >> >> >> On Wed, Sep 28, 2022 at 9:58 AM Bryan Richter via ghc-devs >> >> wrote: >> >> > >> >> > Hi all, >> >> > >> >> > For the past week or so, nightly-i386-linux-deb9-validate has been >> >> > failing consistently. >> >> > >> >> > They show up on the failure dashboard because the logs contain the >> >> > phrase "Cannot allocate memory". >> >> > >> >> > I haven't looked yet to see if they always fail in the same place, but >> >> > I'll do that soon. The first example I looked at, however, has the line >> >> > "xz: (stdin): Cannot allocate memory", so it's not GHC (alone) causing >> >> > the problem. >> >> > >> >> > As a consequence of showing up on the dashboard, the jobs get >> >> > restarted. Since they fail consistently, they keep getting restarted. >> >> > Since the jobs keep getting restarted, the pipelines stay alive. When I >> >> > checked just now, there were 8 nightly runs still running. :) Thus I'm >> >> > going to cancel the still-running nightly-i386-linux-deb9-validate jobs >> >> > and let the pipelines die in peace. You can still find all examples of >> >> > failed jobs on the dashboard: >> >> > >> >> > https://grafana.gitlab.haskell.org/d/167r9v6nk/ci-spurious-failures?orgId=2&from=now-90d&to=now&refresh=5m&var-types=cannot_allocate >> >> > >> >> > To prevent future problems, it would be good if someone could help me >> >> > look into this. Otherwise I'll just disable the job. :( >> >> > ___ >> >> > ghc-devs mailing list >> >> > ghc-devs@haskell.org >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Consistent CI failure in job nightly-i386-linux-deb9-validate
Sure, in which case pls revert it. Apologies for the impact, though I'm still a bit curious, the i386 job did pass in the original MR. On Wed, Sep 28, 2022 at 1:00 PM Bryan Richter wrote: > > Yep, it seems to mostly be xz that is running out of memory. (All recent > builds that I sampled, but not all builds through all time.) Thanks for > pointing it out! > > I can revert the change. > > On Wed, Sep 28, 2022 at 11:46 AM Cheng Shao wrote: >> >> Hi Bryan, >> >> This may be an unintended fallout of !8940. Would you try starting an >> i386 pipeline with it reversed to see if it solves the issue, in which >> case we should revert or fix it in master? >> >> On Wed, Sep 28, 2022 at 9:58 AM Bryan Richter via ghc-devs >> wrote: >> > >> > Hi all, >> > >> > For the past week or so, nightly-i386-linux-deb9-validate has been failing >> > consistently. >> > >> > They show up on the failure dashboard because the logs contain the phrase >> > "Cannot allocate memory". >> > >> > I haven't looked yet to see if they always fail in the same place, but >> > I'll do that soon. The first example I looked at, however, has the line >> > "xz: (stdin): Cannot allocate memory", so it's not GHC (alone) causing the >> > problem. >> > >> > As a consequence of showing up on the dashboard, the jobs get restarted. >> > Since they fail consistently, they keep getting restarted. Since the jobs >> > keep getting restarted, the pipelines stay alive. When I checked just now, >> > there were 8 nightly runs still running. :) Thus I'm going to cancel the >> > still-running nightly-i386-linux-deb9-validate jobs and let the pipelines >> > die in peace. You can still find all examples of failed jobs on the >> > dashboard: >> > >> > https://grafana.gitlab.haskell.org/d/167r9v6nk/ci-spurious-failures?orgId=2&from=now-90d&to=now&refresh=5m&var-types=cannot_allocate >> > >> > To prevent future problems, it would be good if someone could help me look >> > into this. Otherwise I'll just disable the job. :( >> > ___ >> > ghc-devs mailing list >> > ghc-devs@haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Consistent CI failure in job nightly-i386-linux-deb9-validate
Hi Bryan, This may be an unintended fallout of !8940. Would you try starting an i386 pipeline with it reversed to see if it solves the issue, in which case we should revert or fix it in master? On Wed, Sep 28, 2022 at 9:58 AM Bryan Richter via ghc-devs wrote: > > Hi all, > > For the past week or so, nightly-i386-linux-deb9-validate has been failing > consistently. > > They show up on the failure dashboard because the logs contain the phrase > "Cannot allocate memory". > > I haven't looked yet to see if they always fail in the same place, but I'll > do that soon. The first example I looked at, however, has the line "xz: > (stdin): Cannot allocate memory", so it's not GHC (alone) causing the problem. > > As a consequence of showing up on the dashboard, the jobs get restarted. > Since they fail consistently, they keep getting restarted. Since the jobs > keep getting restarted, the pipelines stay alive. When I checked just now, > there were 8 nightly runs still running. :) Thus I'm going to cancel the > still-running nightly-i386-linux-deb9-validate jobs and let the pipelines die > in peace. You can still find all examples of failed jobs on the dashboard: > > https://grafana.gitlab.haskell.org/d/167r9v6nk/ci-spurious-failures?orgId=2&from=now-90d&to=now&refresh=5m&var-types=cannot_allocate > > To prevent future problems, it would be good if someone could help me look > into this. Otherwise I'll just disable the job. :( > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: When do we get loads/stores with volatile semantics (if we get them at all)?
Hi Travis, - Do we have a type like #Addr whose loads/stores have volatile semantics? Nope - Do we have any primops with volatile semantics at all? We have atomicReadIntArray#, atomicWriteIntArray# prim ops, the read/write on MutableByteArray# will introduce proper memory barriers. However there's no Addr# equivalent yet. - Are there any tricks one could use to get volatile semantics out of base’s peek/poke functions? It's always possible to write cbits for volatile load/store. To avoid the overhead of unsafe foreign ccalls, the proper thing to do would be to add primops, and lower them to MO_AtomicRead/MO_AtomicWrite at the Cmm level. Cheers, Cheng On Fri, Sep 9, 2022 at 7:14 PM Travis Whitaker wrote: > > Hello Haskell Friends, > > Recently I noticed some strange behavior in a program that uses peek/poke to > manipulate memory mapped hardware registers, that smells a lot like missing > volatile semantics to me. It hadn’t occurred to me that peek/poke might not > have volatile semantics (they return an IO, and that IO has to happen, > right?), but naturally once they’re lowered all such bets are off. This made > me wonder: > > - Do we have a type like #Addr whose loads/stores have volatile semantics? > > - Do we have any primops with volatile semantics at all? > > - Are there any tricks one could use to get volatile semantics out of base’s > peek/poke functions? > > Poking around, I’m afraid the answer to all three of these is “no.” If so, > I’d be very interested in contributing volatile load/store primops and > working out some way to make them easily available from Foreign.Storable. > Does no such thing currently exist or am I missing something? > > Thanks for all your ongoing efforts, > > Travis > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Test suite
Simon, I believe you can use `--docs=none`, at least it'll prevent running haddock. On Thu, Jul 28, 2022 at 3:46 PM Simon Peyton Jones wrote: > > Doing hadrian/build test > takes absolutely ages. It seems that it is building haddock, running > haddock, building check_exact, and other things. Eg right now it is doing > Run Haddock BuildPackage: libraries/parsec/src/Text/Parsec.hs (and 24 more) > => _build/doc/html/libraries/parsec-3.1.15.0/parsec.haddock > > But I didn't ask to do Haddock. I just wanted to run the testsuite. How can > I do that? > > I would prefer never to build haddock. > > Thanks > > Simon > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: What to do with gmp wasm fixes
Hi Sylvain, > Couldn't the changes be upstreamed into libgmp directly? GMP is a very old project with long release cycles, last release dates back to 2020. So there's no guarantee a GMP release will be ready by the time 9.6 branch is cut. Even if the patch is upstreamed, there's no proper CI to avoid wasm breakage. > Or are the changes specific to GHC? Not really. It's due to the timing & future-proof issue above, so even if we do send patch to gmp, we need to prepare for situation as if we still need to do patching in our build. > I'm not sure if it's still the case, but in the past we applied some > patches to gmp before building it (to use fPIC and to remove the docs). > So it should be possible to do it for wasm. We still patch gmp tarball to remove the docs. Yes, as long as GHC HQ doesn't push back the idea of including a patch for wasm, I'll send a PR to gmp-tarballs. > I would ensure that everything works with > ghc-bignum's native backend before worrying about using gmp. Both gmp/native already works for wasm32. As long as we figure out the plan to include gmp patches, we intend to provide both gmp/native bindists. As for benchmarking, may be worthwhile at some point, but we got tons of other stuff on our plate right now. On Mon, May 23, 2022 at 11:36 AM Sylvain Henry wrote: > > Hi Cheng, > > Couldn't the changes be upstreamed into libgmp directly? Other projects > may benefit from being able to compile libgmp into wasm. Or are the > changes specific to GHC? > > > - Send a PR to gmp-tarballs, including our patch (doesn't alter > behavior on native archs) and the updated tarball build script > > I'm not sure if it's still the case, but in the past we applied some > patches to gmp before building it (to use fPIC and to remove the docs). > So it should be possible to do it for wasm. > > > - Give up gmp completely, only support native bignum for wasm32. > > That's the solution we will use for the JS backend. For wasm, it would > be great to compare performance between both native and gmp ghc-bignum > backends. libgmp uses some asm code when it is directly compiled to > x86-64 asm for example and afaict passing through wasm will make it use > less optimized codes. It may make the gmp backend less relevant: only > benchmarks will tell. I would ensure that everything works with > ghc-bignum's native backend before worrying about using gmp. > > Cheers, > Sylvain > > > On 20/05/2022 13:43, Cheng Shao wrote: > > Hi all, > > > > The ghc wasm32-wasi build needs to patch gmp. Currently, our working > > branch uses a fork of gmp-tarballs that includes the patch into the > > tarball, but at some point we need to upstream it. What's the best way > > to add these fixes? > > > > - Send a PR to gmp-tarballs, including our patch (doesn't alter > > behavior on native archs) and the updated tarball build script > > - Don't touch gmp-tarballs, use "system" gmp, so the wasm32-wasi gmp > > build process is decoupled from ghc build > > - Give up gmp completely, only support native bignum for wasm32. > > > > Cheers. > > Cheng > > ___ > > ghc-devs mailing list > > ghc-devs@haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
What to do with gmp wasm fixes
Hi all, The ghc wasm32-wasi build needs to patch gmp. Currently, our working branch uses a fork of gmp-tarballs that includes the patch into the tarball, but at some point we need to upstream it. What's the best way to add these fixes? - Send a PR to gmp-tarballs, including our patch (doesn't alter behavior on native archs) and the updated tarball build script - Don't touch gmp-tarballs, use "system" gmp, so the wasm32-wasi gmp build process is decoupled from ghc build - Give up gmp completely, only support native bignum for wasm32. Cheers. Cheng ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Wiki page for WebAssembly support in GHC
Hi all, We're planning to add WebAssembly support to GHC. This work will be delivered by me and Norman Ramsey, the former Asterius team at Tweag. In the past few months, we did a strategic shift and focused on growing GHC towards Asterius, leveraging existing RTS as much as possible. We've made decent progress since then, and we expect to target 9.6 release much like GHCJS. We've created the https://gitlab.haskell.org/ghc/ghc/-/wikis/WebAssembly-backend wiki page, as a central point for communication. You're welcome to take a look and ask questions (either in this thread or editing that page in place), we'll answer the questions in the FAQ section. Cheers, Cheng ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Thoughts on async RTS API?
Hi Alex, Thanks for reminding. hs_try_put_mvar() wouldn't work for our use case. If the C function finishes work and calls hs_try_put_mvar() synchronously, in Haskell takeMVar wouldn't block at all, which is all fine. However, if the C function is expected to call hs_try_put_mvar() asynchronously, the non-threaded RTS will hang! Here's a minimal repro. It works with non-threaded RTS at first, but if you change scheduleCallback() in C so hs_try_put_mvar() is only invoked in a detached pthread, then the program hangs. The proposed async RTS API and related scheduler refactorings can't be avoided, if the MVar is intended to be fulfilled in an async manner, using the non-threaded RTS, on a platform with extremely limited syscall capabilities. ```haskell import Control.Concurrent import Control.Exception import Foreign import Foreign.C import GHC.Conc main :: IO () main = makeExternalCall >>= print makeExternalCall :: IO CInt makeExternalCall = mask_ $ do mvar <- newEmptyMVar sp <- newStablePtrPrimMVar mvar fp <- mallocForeignPtr withForeignPtr fp $ \presult -> do (cap,_) <- threadCapability =<< myThreadId scheduleCallback sp cap presult takeMVar mvar peek presult foreign import ccall "scheduleCallback" scheduleCallback :: StablePtr PrimMVar -> Int -> Ptr CInt -> IO () ``` ```c #include "HsFFI.h" #include "Rts.h" #include "RtsAPI.h" #include #include struct callback { HsStablePtr mvar; int cap; int *presult; }; void* callback(struct callback *p) { usleep(1000); *p->presult = 42; hs_try_putmvar(p->cap,p->mvar); free(p); return NULL; } void scheduleCallback(HsStablePtr mvar, HsInt cap, int *presult) { pthread_t t; struct callback *p = malloc(sizeof(struct callback)); p->mvar = mvar; p->cap = cap; p->presult = presult; // pthread_create(&t, NULL, callback, p); // pthread_detach(t); callback(p); } ``` On Thu, Dec 16, 2021 at 12:10 PM Alexander V Vershilov wrote: > > Hello, replying off-the thread as it would be basically an offtopic. > > But you can achieve the solution using MVars only. > The idea is that you can call mkStablePtr on the MVar that way it will > not be marked as dead, so RTS will not exit. > Then you can use hs_try_put_mvar in C thread to call the thread back. > > On Wed, 15 Dec 2021 at 05:07, Cheng Shao wrote: > > > > Hi devs, > > > > To invoke Haskell computation in C, we need to call one of rts_eval* > > functions, which enters the scheduler loop, and returns only when the > > specified Haskell thread is finished or killed. We'd like to enhance > > the scheduler and add async variants of the rts_eval* functions, which > > take C callbacks to consume the Haskell thread result, kick off the > > scheduler loop, and the loop is allowed to exit when the Haskell > > thread is blocked. Sync variants of RTS API will continue to work with > > unchanged behavior. > > > > The main intended use case is async foreign calls for the WebAssembly > > target. When an async foreign call is made, the Haskell thread will > > block on an MVar to be fulfilled with the call result. But the > > scheduler will eventually fail to find work due to empty run queue and > > exit with error! We need a way to gracefully exit the scheduler, so > > the RTS API caller can process the async foreign call, fulfill that > > MVar and resume Haskell computation later. > > > > Question I: does the idea of adding async RTS API sound acceptable by > > GHC HQ? To be honest, it's not impossible to workaround lack of async > > RTS API: reuse the awaitEvent() logic in non-threaded RTS, pretend > > each async foreign call reads from a file descriptor and can be > > handled by the POSIX select() function in awaitEvent(). But it'd > > surely be nice to avoid such hacks and do things the principled way. > > > > Question II: how to modify the scheduler loop to implement this > > feature? Straightforward answer seems to be: check some RTS API > > non-blocking flag, if present, allow early exit due to empty run > > queue. > > > > Thanks a lot for reading this, I appreciate any suggestions or > > questions :) > > > > Best regards, > > Cheng > > ___ > > ghc-devs mailing list > > ghc-devs@haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > -- > Alexander ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Thoughts on async RTS API?
> While the idea here sounds reasonable, I'm not sure I quite understand > how this will be used in Asterius's case. Specifically, I would be > worried about the lack of fairness in this scheme: no progress will be > made on any foreign call until all Haskell evaluation has blocked. > Is this really the semantics that you want? Asterius runtime scheduler divides work into individual "tick"s. Each tick does some work, much like one single iteration in the while(1) scheduler loop. Ticks are not synchronously invoked by previous ticks, instead they are started asynchronously and placed inside the host event loop, fully interleaved with other host events. This way, Haskell concurrency works with host concurrency without requiring host multi-threading. It's possible to wait for run queue to be emptied, then process all blocking foreign calls in one batch, similar to awaitEvent() logic in non-threaded RTS. It's also possible to exit scheduler and resume it many more times, similar to current Asterius scheduler. Both semantics can be implemented, to guarantee fairness, the latter sounds more preferrable. The key issue is finding a way to break up the current while(1) loop in schedule() in a principled way. > `schedule` is already a very large function with loops, gotos, > mutability, and quite complex control flow. I would be reluctant > to add to this complexity without first carrying out some > simplification. Instead of adding yet another bail-out case to the loop, > I would probably rather try to extract the loop body into a new > function. That is, currently `schedule` is of the form: > > // Perform work until we are asked to shut down. > Capability *schedule (Capability *initialCapability, Task *task) { > Capability *cap = initialCapability; > while (1) { > scheduleYield(&cap, task); > > if (emptyRunQueue(cap)) { > continue; > } > > if (shutting_down) { > return cap; > } > > StgTSO *t = popRunQueue(cap); > > if (! t.can_run_on_capability(cap)) { > // Push back on the run queue and loop around again to > // yield the capability to the appropriate task > pushOnRunQueue(cap, t); > continue; > } > > runMutator(t); > > if (needs_gc) { > scheduleDoGC(); > } > } > } > > I might rather extract this into something like: > > enum ScheduleResult { > NoWork, // There was no work to do > PerformedWork, // Ran precisely one thread > Yield, // The next thread scheduled to run cannot run on the > // given capability; yield. > ShuttingDown,// We were asked to shut down > } > > // Schedule at most one thread once > ScheduleResult scheduleOnce (Capability **cap, Task *task) { > if (emptyRunQueue(cap)) { > return NoWork; > } > > if (shutting_down) { > return ShuttingDown; > } > > StgTSO *t = popRunQueue(cap); > > if (! t.can_run_on_capability(cap)) { > pushOnRunQueue(cap, t); > return Yield; > } > > runMutator(t); > > if (needs_gc) { > scheduleDoGC(); > } > > return PerformedWork; > } > > This is just a sketch but I hope it's clear that with something like > this this you can easily implement the existing `schedule` function, as > well as your asynchronous variant. > Thanks for the sketch! I definitely agree we should simplify schedule() in some way instead of adding ad-hoc bail out case. The ScheduleResult type and scheduleOnce() function looks good to me, although I need to do a lot more experiments to confirm. Cheers, Cheng ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Thoughts on async RTS API?
Hi devs, To invoke Haskell computation in C, we need to call one of rts_eval* functions, which enters the scheduler loop, and returns only when the specified Haskell thread is finished or killed. We'd like to enhance the scheduler and add async variants of the rts_eval* functions, which take C callbacks to consume the Haskell thread result, kick off the scheduler loop, and the loop is allowed to exit when the Haskell thread is blocked. Sync variants of RTS API will continue to work with unchanged behavior. The main intended use case is async foreign calls for the WebAssembly target. When an async foreign call is made, the Haskell thread will block on an MVar to be fulfilled with the call result. But the scheduler will eventually fail to find work due to empty run queue and exit with error! We need a way to gracefully exit the scheduler, so the RTS API caller can process the async foreign call, fulfill that MVar and resume Haskell computation later. Question I: does the idea of adding async RTS API sound acceptable by GHC HQ? To be honest, it's not impossible to workaround lack of async RTS API: reuse the awaitEvent() logic in non-threaded RTS, pretend each async foreign call reads from a file descriptor and can be handled by the POSIX select() function in awaitEvent(). But it'd surely be nice to avoid such hacks and do things the principled way. Question II: how to modify the scheduler loop to implement this feature? Straightforward answer seems to be: check some RTS API non-blocking flag, if present, allow early exit due to empty run queue. Thanks a lot for reading this, I appreciate any suggestions or questions :) Best regards, Cheng ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: GSOC Idea: Bytecode serialization and/or Fat Interface files
I believe Josh has already been working on 2 some time ago? cc'ing him to this thread. I'm personally in favor of 2 since it's also super useful for prototyping whole-program ghc backends, where one can just read all the CgGuts from the .hi files, and get all codegen-related Core for free. Cheers, Cheng On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal wrote: > > Hi all, > > This is following up on this recent discussion on the list concerning fat > interface files: > https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html > > Now that we have been accepted as a GSOC organisation, I think > it would be a good project idea for a sufficiently motivated and > advanced student. This is a call for mentors (and students as > well!) who would be interested in this project > > The problem is the following: > > Haskell Language Server (and ghci with `-fno-code`) have very > fast startup times for codebases which don't make use of Template > Haskell, and thus don't require any code-gen to typecheck. This > is because they can simply read the cached iface files generated by a > previous compile and don't need to re-invoke the typechecker. > > However, as soon as TH is involved, we are forced to retypecheck and > compile files, since it is not possible to restart the code-gen process > starting with only a iface file. I can think of two ways to address this > problem: > > 1. Allow bytecode to be serialized > > 2. Serialize desugared Core into iface files (fat interfaces), so that > (byte)code-gen can be restarted from this point and doesn't need > > (1) might be challenging, but offers a few more advantages over (2), > in that we can reduce the work done to load TH-heavy codebases to just > a load of the cached bytecode objects from disk, and could make the > load process (and times) for these codebases directly comparable to > their TH-free cousins. > > It would also make ghci startup a lot faster with a warm cache of > bytecode objects, bringing ghci startup times in line with those of > -fno-code > > However (2) might be much easier to achieve and offers many > of the same advantages, in that we would not need to re-run > the compiler frontend or core-to-core optimisation phases. > There is also already a (slightly bitrotted) implementation > of (2) thanks to the work of Edward Yang. > > If any of this sounds exciting to you as a student or a mentor, please > get in touch. > > In particular, I think (2) is a feasible project that can be completed > with minimal mentoring effort. However, I'm only vaguely familiar with > the details of the byte code generator, so if (1) is a direction we want > to pursue, we would need a mentor familiar with the details of this part > of GHC. > > Cheers, > Zubin > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: GHC's internal confusion about Ints and Words
Indeed STG to Cmm lowering drops the correct size information for ccall arguments, there's even a TODO comment that has been around for quite a few years: https://gitlab.haskell.org/ghc/ghc/-/blob/master/compiler/GHC/StgToCmm/Foreign.hs#L83 This has been an annoyance for Asterius as well. When we try to translate a CmmUnsafeForeignCall node to a wasm function call, a CInt argument (which should be i32 in wasm) can be mistyped as i64 which causes a validation error. We have to insert wrap/extend opcodes based on the callee function signature, but if we preserve correct argument size in Cmm (or at least enrich the hints to include it), we won't need such a hack. On Tue, Oct 20, 2020 at 4:05 PM Moritz Angermann wrote: > > Yes, that's right. I'm not sure it's in core though, as the width information > still seems to be available in Stg. However the lowering from > stg into cmm widens it. > > On Tue, Oct 20, 2020 at 9:57 PM Carter Schonwald > wrote: >> >> ... are you talking about Haskell Int and word? Those are always the same >> size in bits and should match native point size. That is definitely an >> assumption of ghc >> >> It sounds like some information that is dropped after core is needed to >> correctly do something in stg/cmm in the context of the ARM64 ncg that was >> recently added to handle cint being 32bit in this context ? >> >> >> On Tue, Oct 20, 2020 at 5:49 AM Moritz Angermann >> wrote: >>> >>> Alright, let me expand a bit. I've been looking at aarch64 NCG for ghc. >>> The Linux side of things is looking really good, >>> so I've moved onto the macOS side (I'm afraid I don't have any Windows >>> aarch64 hardware, nor much windows knowledge >>> to even attempt a Windows version yet). >>> >>> When calling C functions, the usual approach is to pass the first few >>> arguments in registers, and then arguments that exceed >>> the argument passing slots on the stack. The Arm AArch64 Procedure Call >>> Standard (aapcs) for C does this by assigning 8byte >>> slots to each overflow argument on the stack. A company I won't name, has >>> decided to implement a slightly different variation of >>> the Procedure Call Standard, often referred to as darwinpcs. This deviates >>> from the aapcs for vargs, as well as for handling of >>> spilled arguments on the stack. >>> >>> The aapcs allows us to generate calls to C functions without knowing the >>> actual prototype of the function, as all arguments are >>> simply spilled into 8byte slots on the stack. The darwinpcs however >>> requires us to know the size of the arguments, so we can >>> properly pack them onto the stack. Ints have 4 bytes, so we need to pack >>> them into 4byte slots. >>> >>> In the process library we have this rather fun foreign import: >>> foreign import ccall unsafe "runInteractiveProcess" >>> c_runInteractiveProcess >>> :: Ptr CString >>> -> CString >>> -> Ptr CString >>> -> FD >>> -> FD >>> -> FD >>> -> Ptr FD >>> -> Ptr FD >>> -> Ptr FD >>> -> Ptr CGid >>> -> Ptr CUid >>> -> CInt -- reset child's SIGINT & SIGQUIT >>> handlers >>> -> CInt -- flags >>> -> Ptr CString >>> -> IO PHANDLE >>> >>> with the corresponding C declaration: >>> >>> extern ProcHandle runInteractiveProcess( char *const args[], >>> char *workingDirectory, >>> char **environment, >>> int fdStdIn, >>> int fdStdOut, >>> int fdStdErr, >>> int *pfdStdInput, >>> int *pfdStdOutput, >>> int *pfdStdError, >>> gid_t *childGroup, >>> uid_t *childUser, >>> int reset_int_quit_handlers, >>> int flags, >>> char **failed_doing); >>> This function thus takes 14 arguments. We pass only the first 8 arguments >>> in registers, and the others on the stack. >>> Argument 12 and 13 are of type int. On linux using the aapcs, we can pass >>> those in 8byte slots on the stack. That is >>> both of them are effectively 64bits wide when passed. However for >>> darwinpcs, it is expected that these adhere to their >>> size and are packed as such. Therefore Argument 12 and 13 need to be passed >>> as 4byte slots each on the stack. >>> >>> This yields a moderate 8byte saving on the stack for the same function call >>> on darwinpcs compared to aapcs. >>> >>> Now onto GHC. When we generate function calls for foreign C functions, we >>> deal with something like: >>> >>> genCCall >
Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released
> that I know we have musl (x86_64, aarch64) based ghcs in production. These ghc bindists (at least the one produced on gitlab ci) have dynamically linked ghc executables iirc. Last time I tried to turn off DYNAMIC_GHC_PROGRAM when building ghc on alpine, the produced ghc will panic on TH code. That's on the 8.8 branch though. On Tue, Sep 29, 2020 at 4:21 PM Moritz Angermann wrote: > > This sent me down an interesting path. You are right that dlopen on returns > NULL with musl on x86_64, and dlerror will subsequently produce "Dynamic > loading not supported" if asked to compile with -static. I think GHC has > code to fallback to archives in the case where loading shared objects fails, > but I can't find the code right now. It still means you'd need to have > static sqlite (in this case) and other libraries around. > > I'm still a bit puzzled, and I think I'm missing something. It remains, that > I know we have musl (x86_64, aarch64) based ghcs in production. I wonder if > there is something we got right by accident, that makes this work smoothly > for us. Warrants more investigation. > > Cheers, > Moritz > > On Tue, Sep 29, 2020 at 7:45 PM Moritz Angermann > wrote: >> >> Happy to give this a try later today. Been using fully static musl builds >> (including cross compilation) for x86_64 for a while now; and did not (yet?) >> run into that SQLite issue. But did have it use shared objects in iserv. >> >> On Tue, 29 Sep 2020 at 7:18 PM, Cheng Shao wrote: >>> >>> Hi Moritz, >>> >>> >>> >>> > However dlopen with musl on x86 seems fine. >>> >>> >>> >>> Here's a dlopen example that segfaults if linked with -static: >>> >>> >>> >>> #include >>> >>> #include >>> >>> >>> >>> int main() { >>> >>> void *h = dlopen("/usr/lib/libsqlite3.so", RTLD_NOW); >>> >>> char *f = dlsym(h, "sqlite3_version"); >>> >>> printf("%s\n", f); >>> >>> return 0; >>> >>> } >>> >>> >>> >>> On Tue, Sep 29, 2020 at 1:04 PM Moritz Angermann >>> >>> wrote: >>> >>> > >>> >>> > No. Not necessarily. We can perfectly fine load archives and the >>> > pre-linked ghci objects. However dlopen with musl on x86 seems fine. On >>> > arm it’s not implemented, and just throws an error message. There is a >>> > -dynamic flag in HEAD, which disables GHC even trying to load dynamic >>> > libraries and always assuming there is no dynamic linking facility, even >>> > if configure reports the existence of dlopen... >>> >>> > >>> >>> > On Tue, 29 Sep 2020 at 6:54 PM, Cheng Shao wrote: >>> >>> >> >>> >>> >> Hi Ben, >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> > We will likely transition the Alpine binary distribution to be fully >>> >>> >> >>> >>> >>statically-linked, providing a convenient, distribution-independent >>> >>> >> >>> >>> >>packaging option for Linux users. >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> iirc for statically linked executables, musl doesn't even support >>> >>> >> >>> >>> >> dlopen, so wouldn't this mean such a bindist would fail for all >>> >>> >> >>> >>> >> LoadDLL ghci commands? >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> Cheers, >>> >>> >> >>> >>> >> Cheng >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari wrote: >>> >>> >> >>> >>> >> > >>> >>> >> >>> >>> >> > Hello all, >>> >>> >> >>> >>> >> > >>> >>> >> >>> >>> >> > The GHC team is very pleased to announce the availability of the first >>> >>> >> >>> >>> >
Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released
Hi Moritz, > However dlopen with musl on x86 seems fine. Here's a dlopen example that segfaults if linked with -static: #include #include int main() { void *h = dlopen("/usr/lib/libsqlite3.so", RTLD_NOW); char *f = dlsym(h, "sqlite3_version"); printf("%s\n", f); return 0; } On Tue, Sep 29, 2020 at 1:04 PM Moritz Angermann wrote: > > No. Not necessarily. We can perfectly fine load archives and the pre-linked > ghci objects. However dlopen with musl on x86 seems fine. On arm it’s not > implemented, and just throws an error message. There is a -dynamic flag in > HEAD, which disables GHC even trying to load dynamic libraries and always > assuming there is no dynamic linking facility, even if configure reports the > existence of dlopen... > > On Tue, 29 Sep 2020 at 6:54 PM, Cheng Shao wrote: >> >> Hi Ben, >> >> >> >> > We will likely transition the Alpine binary distribution to be fully >> >>statically-linked, providing a convenient, distribution-independent >> >>packaging option for Linux users. >> >> >> >> iirc for statically linked executables, musl doesn't even support >> >> dlopen, so wouldn't this mean such a bindist would fail for all >> >> LoadDLL ghci commands? >> >> >> >> Cheers, >> >> Cheng >> >> >> >> On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari wrote: >> >> > >> >> > Hello all, >> >> > >> >> > The GHC team is very pleased to announce the availability of the first >> >> > alpha release in the GHC 9.0 series. Source and binary distributions are >> >> > available at the usual place: >> >> > >> >> > https://downloads.haskell.org/ghc/9.0.1-alpha1/ >> >> > >> >> > This first alpha comes quite a bit later than expected. However, we have >> >> > done a significant amount of testing on this pre-release and therefore >> >> > hope to be able to move forward quickly with a release candidate next >> >> > week and with a final release in mid-October. >> >> > >> >> > GHC 9.0.1 will bring a number of new features: >> >> > >> >> > * A first cut of the new LinearTypes language extension [1], allowing >> >> >use of linear function syntax and linear record fields. >> >> > >> >> > * A new bignum library (ghc-bignum), allowing GHC to be more easily >> >> >used with integer libraries other than GMP. >> >> > >> >> > * Improvements in code generation, resulting in considerable >> >> >performance improvements in some programs. >> >> > >> >> > * Improvements in pattern-match checking, allowing more precise >> >> >detection of redundant cases and reduced compilation time. >> >> > >> >> > * Implementation of the "simplified subsumption" proposal [2] >> >> >simplifying the type system and paving the way for QuickLook >> >> >impredicativity in GHC 9.2. >> >> > >> >> > * Implementation of the QualifiedDo extension [3], allowing more >> >> >convenient overloading of `do` syntax. >> >> > >> >> > * Improvements in compilation time. >> >> > >> >> > And many more. See the release notes [4] for a full accounting of the >> >> > changes in this release. >> >> > >> >> > Do note that there are a few things that we expect will change before >> >> > the final release: >> >> > >> >> > * We expect to sort out a notarization workflow for Apple Darwin, >> >> >allowing our binary distributions to be used on macOS Catalina >> >> >without hassle. >> >> > >> >> >Until this has been sorted out Catalina users can exempt the >> >> >current macOS binary distribution from the notarization requirement >> >> >themselves by running `xattr -cr .` on the unpacked tree before >> >> >running `make install`. >> >> > >> >> > * We will likely transition the Alpine binary distribution to be fully >> >> >statically-linked, providing a convenient, distribution-independent >> >> >packaging option for Linux users. >> >> > >> >> > * We will be merging a robust solution for
Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released
Hi Ben, > We will likely transition the Alpine binary distribution to be fully statically-linked, providing a convenient, distribution-independent packaging option for Linux users. iirc for statically linked executables, musl doesn't even support dlopen, so wouldn't this mean such a bindist would fail for all LoadDLL ghci commands? Cheers, Cheng On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari wrote: > > Hello all, > > The GHC team is very pleased to announce the availability of the first > alpha release in the GHC 9.0 series. Source and binary distributions are > available at the usual place: > > https://downloads.haskell.org/ghc/9.0.1-alpha1/ > > This first alpha comes quite a bit later than expected. However, we have > done a significant amount of testing on this pre-release and therefore > hope to be able to move forward quickly with a release candidate next > week and with a final release in mid-October. > > GHC 9.0.1 will bring a number of new features: > > * A first cut of the new LinearTypes language extension [1], allowing >use of linear function syntax and linear record fields. > > * A new bignum library (ghc-bignum), allowing GHC to be more easily >used with integer libraries other than GMP. > > * Improvements in code generation, resulting in considerable >performance improvements in some programs. > > * Improvements in pattern-match checking, allowing more precise >detection of redundant cases and reduced compilation time. > > * Implementation of the "simplified subsumption" proposal [2] >simplifying the type system and paving the way for QuickLook >impredicativity in GHC 9.2. > > * Implementation of the QualifiedDo extension [3], allowing more >convenient overloading of `do` syntax. > > * Improvements in compilation time. > > And many more. See the release notes [4] for a full accounting of the > changes in this release. > > Do note that there are a few things that we expect will change before > the final release: > > * We expect to sort out a notarization workflow for Apple Darwin, >allowing our binary distributions to be used on macOS Catalina >without hassle. > >Until this has been sorted out Catalina users can exempt the >current macOS binary distribution from the notarization requirement >themselves by running `xattr -cr .` on the unpacked tree before >running `make install`. > > * We will likely transition the Alpine binary distribution to be fully >statically-linked, providing a convenient, distribution-independent >packaging option for Linux users. > > * We will be merging a robust solution for #17760 which will introduce >a new primitive, `keepAlive#`, to the `base` library, subsuming >most uses of `touch#`. > > As always, do test this release and open tickets for whatever issues you > encounter. To help with this, we will be publishing a blog post > describing use of our new `head.hackage` infrastructure to ease testing > of larger projects with Hackage dependencies later this week. > > Cheers, > > - Ben > > > [1] > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0111-linear-types.rst > [2] > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0287-simplify-subsumption.rst > [3] > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0216-qualified-do.rst > [4] > https://downloads.haskell.org/ghc/9.0.1-alpha1/docs/html/users_guide/9.0.1-notes.html > ___ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Weird "missing hi file" problem with a serializable Core patch
Hi Ben, The -ddump-if-trace output is attached here. The error is produced when compiling GHC.Types in ghc-prim. > Note that interface files are written after the Core pipeline is run. Sorry for the confusion, I didn't mean the Core simplifier pipeline. I mean the "Core -> Iface -> Core" roundtrip I tried to perform using the output of CorePrep. By the time we do CorePrep, the hi files should already have been written. On Wed, Sep 16, 2020 at 11:48 PM Ben Gamari wrote: > > Cheng Shao writes: > > > Hi all, > > > > Following a short chat in #ghc last week, I did a first attempt of > > reusing existing Iface logic to implement serialization for > > codegen-related Core. The implementation is included in the attached > > patch (~100 loc). As a quick and dirty validation of whether it works, > > I also modified the codegen pipeline logic to do a roundtrip: after > > CorePrep, the Core bits are converted to Iface, then we immediately > > convert it back and use it for later compiling. > > > > With the patch applied, stage-1 GHC would produce a "missing hi file" > > error like: > > > > : Bad interface file: _build/stage1/libraries/ghc-prim/build/GHC/Types.hi > > _build/stage1/libraries/ghc-prim/build/GHC/Types.hi: > > openBinaryFile: does not exist (No such file or directory) > > > Hi Cheng, > > Which module is being compiled when this error is produced? Could you > provide -ddump-if-trace output for the failing compilation? > > > The error surprises me, since by the time we perform the Core-to-Core > > roundtrip, the .hi file should already have been written to disk. Is > > there anything obviously wrong with the implementation? I'd appreciate > > any pointers or further questions, thanks a lot! > > > Note that interface files are written after the Core pipeline is run. > > Cheers, > > - Ben > | Run Ghc CompileHs Stage1: libraries/ghc-prim/GHC/Types.hs => _build/stage1/libraries/ghc-prim/build/GHC/Types.o FYI: cannot read old interface file: _build/stage1/libraries/ghc-prim/build/GHC/Types.hi: openBinaryFile: does not exist (No such file or directory) loadHiBootInterface GHC.Types Reading [boot] interface for ghc-prim:GHC.Types; reason: Need the hi-boot interface for GHC.Types to compare against the Real Thing readIFace _build/stage1/libraries/ghc-prim/build/GHC/Types.hi-boot Considering whether to load GHC.Prim Reading interface for ghc-prim:GHC.Prim; reason: GHC.Prim is directly imported updating EPS updating EPS newGlobalBinder GHC.Types TyCon libraries/ghc-prim/GHC/Types.hs:(527,1)-(532,26) TyCon newGlobalBinder GHC.Types TyCon libraries/ghc-prim/GHC/Types.hs:(527,14)-(532,26) TyCon newGlobalBinder GHC.Types TypeLitSort libraries/ghc-prim/GHC/Types.hs:(523,1)-(524,29) TypeLitSort newGlobalBinder GHC.Types TypeLitSymbol libraries/ghc-prim/GHC/Types.hs:523:20-32 TypeLitSymbol newGlobalBinder GHC.Types TypeLitNat libraries/ghc-prim/GHC/Types.hs:524:20-29 TypeLitNat newGlobalBinder GHC.Types KindRep libraries/ghc-prim/GHC/Types.hs:(515,1)-(521,49) KindRep newGlobalBinder GHC.Types KindRepTyConApp libraries/ghc-prim/GHC/Types.hs:515:16-46 KindRepTyConApp newGlobalBinder GHC.Types KindRepVar libraries/ghc-prim/GHC/Types.hs:516:16-35 KindRepVar newGlobalBinder GHC.Types KindRepApp libraries/ghc-prim/GHC/Types.hs:517:16-41 KindRepApp newGlobalBinder GHC.Types KindRepFun libraries/ghc-prim/GHC/Types.hs:518:16-41 KindRepFun newGlobalBinder GHC.Types KindRepTYPE libraries/ghc-prim/GHC/Types.hs:519:16-38 KindRepTYPE newGlobalBinder GHC.Types KindRepTypeLitS libraries/ghc-prim/GHC/Types.hs:520:16-48 KindRepTypeLitS newGlobalBinder GHC.Types KindRepTypeLitD libraries/ghc-prim/GHC/Types.hs:521:16-49 KindRepTypeLitD newGlobalBinder GHC.Types KindBndr libraries/ghc-prim/GHC/Types.hs:503:1-19 KindBndr newGlobalBinder GHC.Types TrName libraries/ghc-prim/GHC/Types.hs:(498,1)-(500,18) TrName newGlobalBinder GHC.Types TrNameS libraries/ghc-prim/GHC/Types.hs:499:5-17 TrNameS newGlobalBinder GHC.Types TrNameD libraries/ghc-prim/GHC/Types.hs:500:5-18 TrNameD newGlobalBinder GHC.Types Module libraries/ghc-prim/GHC/Types.hs:(494,1)-(496,22) Module newGlobalBinder GHC.Types Module libraries/ghc-prim/GHC/Types.hs:(494,15)-(496,22) Module newGlobalBinder GHC.Types Void# libraries/ghc-prim/GHC/Types.hs:467:1-18 Void# newGlobalBinder GHC.Types VecElem libraries/ghc-prim/GHC/Types.hs:(454,1)-(463,28) VecElem newGlobalBinder GHC.Types Int8ElemRep libraries/gh
Re: Weird "missing hi file" problem with a serializable Core patch
Thanks Brandon, I checked the strace log and before the error is written, there's a log entry: openat(AT_FDCWD, "_build/stage1/libraries/ghc-prim/build/GHC/Types.hi", O_RDONLY|O_NOCTTY|O_NONBLOCK) = -1 ENOENT (No such file or directory) So it looks like GHC is indeed looking at the correct hi path, not the doubled path. On Wed, Sep 16, 2020 at 9:03 PM Brandon Allbery wrote: > > Without looking at the implementation, it looks to me like the filename is > doubled for some reason. This may suggest places to look. > > On Wed, Sep 16, 2020 at 2:57 PM Cheng Shao wrote: >> >> Hi all, >> >> Following a short chat in #ghc last week, I did a first attempt of >> reusing existing Iface logic to implement serialization for >> codegen-related Core. The implementation is included in the attached >> patch (~100 loc). As a quick and dirty validation of whether it works, >> I also modified the codegen pipeline logic to do a roundtrip: after >> CorePrep, the Core bits are converted to Iface, then we immediately >> convert it back and use it for later compiling. >> >> With the patch applied, stage-1 GHC would produce a "missing hi file" >> error like: >> >> : Bad interface file: _build/stage1/libraries/ghc-prim/build/GHC/Types.hi >> _build/stage1/libraries/ghc-prim/build/GHC/Types.hi: >> openBinaryFile: does not exist (No such file or directory) >> >> The error surprises me, since by the time we perform the Core-to-Core >> roundtrip, the .hi file should already have been written to disk. Is >> there anything obviously wrong with the implementation? I'd appreciate >> any pointers or further questions, thanks a lot! >> >> Best regards, >> Cheng >> ___ >> ghc-devs mailing list >> ghc-devs@haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > brandon s allbery kf8nh > allber...@gmail.com ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Weird "missing hi file" problem with a serializable Core patch
Hi all, Following a short chat in #ghc last week, I did a first attempt of reusing existing Iface logic to implement serialization for codegen-related Core. The implementation is included in the attached patch (~100 loc). As a quick and dirty validation of whether it works, I also modified the codegen pipeline logic to do a roundtrip: after CorePrep, the Core bits are converted to Iface, then we immediately convert it back and use it for later compiling. With the patch applied, stage-1 GHC would produce a "missing hi file" error like: : Bad interface file: _build/stage1/libraries/ghc-prim/build/GHC/Types.hi _build/stage1/libraries/ghc-prim/build/GHC/Types.hi: openBinaryFile: does not exist (No such file or directory) The error surprises me, since by the time we perform the Core-to-Core roundtrip, the .hi file should already have been written to disk. Is there anything obviously wrong with the implementation? I'd appreciate any pointers or further questions, thanks a lot! Best regards, Cheng diff --git a/compiler/ExtCore.hs b/compiler/ExtCore.hs new file mode 100644 index 00..48f7792cf3 --- /dev/null +++ b/compiler/ExtCore.hs @@ -0,0 +1,82 @@ +{-# OPTIONS_GHC -Wall #-} + +module ExtCore where + +import Data.Traversable +import qualified GHC.CoreToIface as GHC +import qualified GHC.Iface.Syntax as GHC +import qualified GHC.IfaceToCore as GHC +import qualified GHC.Plugins as GHC +import GHC.Prelude +import qualified GHC.Tc.Utils.Monad as GHC +import qualified GHC.Utils.Binary as GHC + +data ExtCoreBind + = NonRec GHC.IfExtName GHC.IfaceExpr + | Rec [(GHC.IfExtName, GHC.IfaceExpr)] + +instance GHC.Binary ExtCoreBind where + put_ bh (NonRec bndr rhs) = +GHC.putByte bh 0 *> GHC.put_ bh bndr *> GHC.put_ bh rhs + put_ bh (Rec pairs) = GHC.putByte bh 1 *> GHC.put_ bh pairs + + get bh = do +h <- GHC.getByte bh +case h of + 0 -> NonRec <$> GHC.get bh <*> GHC.get bh + 1 -> Rec <$> GHC.get bh + _ -> fail "instance Binary ExtCoreBind" + +toExtCoreBinds :: GHC.CoreProgram -> [ExtCoreBind] +toExtCoreBinds = map toExtCoreBind + +toExtCoreBind :: GHC.CoreBind -> ExtCoreBind +toExtCoreBind (GHC.NonRec b r) = + NonRec (unIfaceExt $ GHC.toIfaceVar b) (GHC.toIfaceExpr r) +toExtCoreBind (GHC.Rec prs) = + Rec [(unIfaceExt $ GHC.toIfaceVar b, GHC.toIfaceExpr r) | (b, r) <- prs] + +tcExtCoreBindsDriver :: + GHC.HscEnv -> GHC.Module -> [ExtCoreBind] -> IO GHC.CoreProgram +tcExtCoreBindsDriver hsc_env this_mod ext_core_binds = do + (_, maybe_result) <- +GHC.initTc hsc_env GHC.HsSrcFile False this_mod (error "lazy RealSrcSpan") $ + initIfaceExtCore $ +tcExtCoreBinds ext_core_binds + case maybe_result of +Just r -> pure r +_ -> fail "tcExtCoreBindsDriver" + +tcExtCoreBinds :: [ExtCoreBind] -> GHC.IfL GHC.CoreProgram +tcExtCoreBinds = traverse tcExtCoreBind + +tcExtCoreBind :: ExtCoreBind -> GHC.IfL GHC.CoreBind +tcExtCoreBind (NonRec bndr rhs) = do + bndr' <- fmap unCoreVar $ GHC.tcIfaceExpr $ GHC.IfaceExt bndr + rhs' <- GHC.tcIfaceExpr rhs + pure $ GHC.NonRec bndr' rhs' +tcExtCoreBind (Rec pairs) = do + let (bndrs, rhss) = unzip pairs + bndrs' <- for bndrs $ fmap unCoreVar . GHC.tcIfaceExpr . GHC.IfaceExt + rhss' <- for rhss GHC.tcIfaceExpr + pure $ GHC.Rec $ zip bndrs' rhss' + +initIfaceExtCore :: GHC.IfL a -> GHC.TcRn a +initIfaceExtCore thing_inside = do + tcg_env <- GHC.getGblEnv + let this_mod = GHC.tcg_mod tcg_env + if_env = +GHC.IfGblEnv + { GHC.if_doc = GHC.empty, +GHC.if_rec_types = Just (this_mod, pure (GHC.tcg_type_env tcg_env)) + } + if_lenv = GHC.mkIfLclEnv this_mod GHC.empty GHC.NotBoot + GHC.setEnvs (if_env, if_lenv) thing_inside + +unIfaceExt :: GHC.IfaceExpr -> GHC.IfExtName +unIfaceExt (GHC.IfaceExt bndr) = bndr +unIfaceExt _ = error "unIfaceExt" + +unCoreVar :: GHC.CoreExpr -> GHC.CoreBndr +unCoreVar (GHC.Var bndr) = bndr +unCoreVar _ = error "unCoreVar" diff --git a/compiler/GHC/Driver/Main.hs b/compiler/GHC/Driver/Main.hs index 90a07d7490..61bfc96693 100644 --- a/compiler/GHC/Driver/Main.hs +++ b/compiler/GHC/Driver/Main.hs @@ -183,6 +183,8 @@ import GHC.Iface.Ext.Types ( getAsts, hie_asts, hie_module ) import GHC.Iface.Ext.Binary ( readHieFile, writeHieFile , hie_file_result, NameCacheUpdater(..)) import GHC.Iface.Ext.Debug ( diffFile, validateScopes ) +import ExtCore + #include "HsVersions.h" @@ -1409,9 +1411,12 @@ hscGenHardCode hsc_env cgguts location output_filename = do --- -- PREPARE FOR CODE GENERATION -- Do saturation and convert to A-normal form -(prepd_binds, local_ccs) <- {-# SCC "CorePrep" #-} +(prepd_binds', local_ccs) <- {-# SCC "CorePrep" #-} corePrepPgm hsc_env this_mod location core_binds data_tycons + +prepd_binds <- tcExtCoreBindsDriver hsc_env this_mod $ toExtCoreBinds prepd_binds' + - Convert to
Re: Is "cml_cont" of CmmCall used in practice?
Hi Ömer, See e.g. `lowerSafeForeignCall` and `blockCode` > which set the field with `Just`. The former seems to be related with > foreign > calls so perhaps try compiling a FFI package. I tried compiling a module with a `foreign import ccall safe` declaration, yet the output raw Cmm still doesn't use `cml_cont`. `CmmLayoutStack` uses that field > for code generation (I don't understand the details yet). > Thanks for pointing that out, I'll check its implementation. I also found another evidence of the field not used by codegens: in `PprC` which is used by unregisterised builds, only the `cml_target` field is used. So for now I assume that the assertion of `cml_cont = Nothing` holds for final Cmm output, and backend writers need not be concerned with it. Regards, Shao Cheng ___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs