Re: Annotating instances

2023-12-04 Thread Moritz Angermann
I see. That’s where the confusion comes from. Hlint uses them to allow
ignoring specific Hlint warnings:

{-# ANN module "HLint: ignore Use string literal" #-}

{- HLINT ignore "Use string literal" -}

and similar. One could maybe argue they should have never been ANN pragmas
to begin with.

Examples taken from this SO question:
https://stackoverflow.com/questions/19237695/haskell-how-to-tell-hlint-not-to-warning-use-string-literal
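For reference, a sketch of how those comment-form directives can be scoped (the hint name and the `greeting` binder are illustrative; HLint accepts an optional binder name between `ignore` and the hint):

```haskell
module Demo where

-- Module-wide: suppress this hint everywhere in the file
{- HLINT ignore "Use string literal" -}

-- Scoped: suppress the hint only for the named definition
{- HLINT ignore greeting "Use string literal" -}
greeting :: String
greeting = ['h', 'i']
```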

On Mon, 4 Dec 2023 at 8:07 PM, Simon Peyton Jones <
simon.peytonjo...@gmail.com> wrote:

> > I don’t think they do anything specific.
>
> Now I am truly baffled!  If they don't do anything, why would they be a
> module at all!  Surely they do something?
>
> Simon
>
> On Mon, 4 Dec 2023 at 11:58, Moritz Angermann 
> wrote:
>
>> I don’t think they do anything specific. They just function as a marker
>> for Hlint to find when parsing the source files. Here is one of the original
>> issues we had:
>> https://github.com/ndmitchell/hlint/issues/1251
>>
>> Simply by not being ANN, it doesn’t trigger the Template Haskell machinery
>> and thus does not cause compilation slowdowns or iserv needs (e.g. rendering
>> the module impossible to cross compile for stage1 cross compilers with no
>> TH support).
>>
>> On Mon, 4 Dec 2023 at 7:45 PM, Simon Peyton Jones <
>> simon.peytonjo...@gmail.com> wrote:
>>
>>> Luckily Hlint also supports HLINT instead, which removes the TH pipeline.
>>>>
>>>
>>> Where is this described/documented?   All I can see here
>>> <https://github.com/ndmitchell/hlint#readme> is
>>>
>>>> For {-# HLINT #-} pragmas GHC may give a warning about an unrecognised
>>>> pragma, which can be suppressed with -Wno-unrecognised-pragmas.
>>>>
>>> which mentions HLINT pragmas but says nothing about what they do.
>>>
>>> Simon
>>>
>>> On Mon, 4 Dec 2023 at 09:05, Moritz Angermann <
>>> moritz.angerm...@gmail.com> wrote:
>>>
>>>> Any ANN annotation triggers the TH pipeline and makes them really
>>>> painful to work with, in non-stage2 settings. Lots of Hlint annotations use
>>>> ANN and then you have iserv triggered for each module that has an ANN
>>>> annotation.
>>>>
>>>> Luckily Hlint also supports HLINT instead, which removes the TH pipeline.
>>>>
>>>> That alone is enough for me personally to recommend against using ANN
>>>> if there is an alternative option, to anyone who asks me.
>>>>
>>>> On Mon, 4 Dec 2023 at 5:01 PM, Simon Peyton Jones <
>>>> simon.peytonjo...@gmail.com> wrote:
>>>>
>>>>> The whole ANN mechanism
>>>>> <https://ghc.gitlab.haskell.org/ghc/doc/users_guide/extending_ghc.html?highlight=ann#source-annotations> is,
>>>>> at root, a good idea. It is pretty general, and allows annotations to be
>>>>> arbitrary expressions, provided they are in Typeable and Data.  And they
>>>>> are serialised across modules.
>>>>>
>>>>> In practice though, I'm not sure how widely used they are. I'm not
>>>>> sure why. I'd love to hear of counter-examples.
>>>>>
>>>>> Only top level binders can be annotated; but there is no reason in
>>>>> principle that you should not annotate instance declarations.  I don't
>>>>> think it'd be too hard to implement.
>>>>>
>>>>> Simon
>>>>>
>>>>> On Sat, 2 Dec 2023 at 14:51, Jaro Reinders 
>>>>> wrote:
>>>>>
>>>>>> Hi GHC devs,
>>>>>>
>>>>>> I'm working on a GHC plugin which implements a custom instance
>>>>>> resolution
>>>>>> mechanism:
>>>>>>
>>>>>> https://github.com/noughtmare/transitive-constraint-plugin
>>>>>>
>>>>>> Currently, I need to place instances in a specific order in a
>>>>>> specific file to
>>>>>> recognize them and use them in my plugin. I think my life would be a
>>>>>> lot easier
>>>>>> if I could put annotations on instances. I imagine a syntax like this:
>>>>>>
>>>>>>  data MyInstanceTypes = Refl | Trans deriving Eq
>>>>>>
>>>>>>  class f <= g where
>>>>>>inj :: f x -> g x
>>>>>>

Re: Annotating instances

2023-12-04 Thread Moritz Angermann
I don’t think they do anything specific. They just function as a marker for
Hlint to find when parsing the source files. Here is one of the original
issues we had:
https://github.com/ndmitchell/hlint/issues/1251

Simply by not being ANN, it doesn’t trigger the Template Haskell machinery
and thus does not cause compilation slowdowns or iserv needs (e.g. rendering
the module impossible to cross compile for stage1 cross compilers with no
TH support).

On Mon, 4 Dec 2023 at 7:45 PM, Simon Peyton Jones <
simon.peytonjo...@gmail.com> wrote:

> Luckily Hlint also supports HLINT instead, which removes the TH pipeline.
>>
>
> Where is this described/documented?   All I can see here
> <https://github.com/ndmitchell/hlint#readme> is
>
>> For {-# HLINT #-} pragmas GHC may give a warning about an unrecognised
>> pragma, which can be suppressed with -Wno-unrecognised-pragmas.
>>
> which mentions HLINT pragmas but says nothing about what they do.
>
> Simon
>
> On Mon, 4 Dec 2023 at 09:05, Moritz Angermann 
> wrote:
>
>> Any ANN annotation triggers the TH pipeline and makes them really painful
>> to work with, in non-stage2 settings. Lots of Hlint annotations use ANN and
>> then you have iserv triggered for each module that has an ANN annotation.
>>
>> Luckily Hlint also supports HLINT instead, which removes the TH pipeline.
>>
>> That alone is enough for me personally to recommend against using ANN if
>> there is an alternative option, to anyone who asks me.
>>
>> On Mon, 4 Dec 2023 at 5:01 PM, Simon Peyton Jones <
>> simon.peytonjo...@gmail.com> wrote:
>>
>>> The whole ANN mechanism
>>> <https://ghc.gitlab.haskell.org/ghc/doc/users_guide/extending_ghc.html?highlight=ann#source-annotations> is,
>>> at root, a good idea. It is pretty general, and allows annotations to be
>>> arbitrary expressions, provided they are in Typeable and Data.  And they are
>>> serialised across modules.
>>>
>>> In practice though, I'm not sure how widely used they are. I'm not sure
>>> why. I'd love to hear of counter-examples.
>>>
>>> Only top level binders can be annotated; but there is no reason in
>>> principle that you should not annotate instance declarations.  I don't
>>> think it'd be too hard to implement.
>>>
>>> Simon
>>>
>>> On Sat, 2 Dec 2023 at 14:51, Jaro Reinders 
>>> wrote:
>>>
>>>> Hi GHC devs,
>>>>
>>>> I'm working on a GHC plugin which implements a custom instance
>>>> resolution
>>>> mechanism:
>>>>
>>>> https://github.com/noughtmare/transitive-constraint-plugin
>>>>
>>>> Currently, I need to place instances in a specific order in a specific
>>>> file to
>>>> recognize them and use them in my plugin. I think my life would be a
>>>> lot easier
>>>> if I could put annotations on instances. I imagine a syntax like this:
>>>>
>>>>  data MyInstanceTypes = Refl | Trans deriving Eq
>>>>
>>>>  class f <= g where
>>>>inj :: f x -> g x
>>>>
>>>>  instance {-# ANN instance Refl #-} f <= f where
>>>>inj = id
>>>>
>>>>  instance {-# ANN instance Trans #-}
>>>>  forall f g h. (f <= g, g <= h) => f <= h
>>>>where
>>>>  inj = inj @g @h . inj @f @g
>>>>
>>>> Using this information I should be able to find the right instances in
>>>> a more
>>>> reliable way.
>>>>
>>>> One more thing I was thinking about is to make it possible to remove
>>>> these
>>>> instances from the normal resolution algorithm and only allow them to
>>>> be used
>>>> by my plugin.
>>>>
>>>> Do you think this would be easy to implement and useful? Or are there
>>>> other
>>>> ways to achieve this?
>>>>
>>>> Cheers,
>>>>
>>>> Jaro
>>>> ___
>>>> ghc-devs mailing list
>>>> ghc-devs@haskell.org
>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>>
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>
>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Annotating instances

2023-12-04 Thread Moritz Angermann
Any ANN annotation triggers the TH pipeline and makes them really painful
to work with, in non-stage2 settings. Lots of Hlint annotations use ANN and
then you have iserv triggered for each module that has an ANN annotation.

Luckily Hlint also supports HLINT instead, which removes the TH pipeline.

That alone is enough for me personally to recommend against using ANN if
there is an alternative option, to anyone who asks me.
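To make the difference concrete, a small sketch (module contents illustrative): the `ANN` payload is a real Haskell expression that GHC must compile and evaluate via the TH/iserv machinery, while the `HLINT` form is just a comment that Hlint picks up when parsing the source:

```haskell
module Demo where

-- An ANN payload is an ordinary expression: GHC type-checks, compiles and
-- evaluates it through the interpreter pipeline -- hence the iserv cost.
{-# ANN module "HLint: ignore Use string literal" #-}

-- This is only a comment: GHC ignores it entirely, no TH pipeline involved;
-- Hlint reads it straight from the source text.
{- HLINT ignore "Use string literal" -}
```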

On Mon, 4 Dec 2023 at 5:01 PM, Simon Peyton Jones <
simon.peytonjo...@gmail.com> wrote:

> The whole ANN mechanism is,
> at root, a good idea. It is pretty general, and allows annotations to be
> arbitrary expressions, provided they are in Typeable and Data.  And they are
> serialised across modules.
>
> In practice though, I'm not sure how widely used they are. I'm not sure
> why. I'd love to hear of counter-examples.
>
> Only top level binders can be annotated; but there is no reason in
> principle that you should not annotate instance declarations.  I don't
> think it'd be too hard to implement.
>
> Simon
>
> On Sat, 2 Dec 2023 at 14:51, Jaro Reinders 
> wrote:
>
>> Hi GHC devs,
>>
>> I'm working on a GHC plugin which implements a custom instance resolution
>> mechanism:
>>
>> https://github.com/noughtmare/transitive-constraint-plugin
>>
>> Currently, I need to place instances in a specific order in a specific
>> file to
>> recognize them and use them in my plugin. I think my life would be a lot
>> easier
>> if I could put annotations on instances. I imagine a syntax like this:
>>
>>  data MyInstanceTypes = Refl | Trans deriving Eq
>>
>>  class f <= g where
>>inj :: f x -> g x
>>
>>  instance {-# ANN instance Refl #-} f <= f where
>>inj = id
>>
>>  instance {-# ANN instance Trans #-}
>>  forall f g h. (f <= g, g <= h) => f <= h
>>where
>>  inj = inj @g @h . inj @f @g
>>
>> Using this information I should be able to find the right instances in a
>> more
>> reliable way.
>>
>> One more thing I was thinking about is to make it possible to remove
>> these
>> instances from the normal resolution algorithm and only allow them to be
>> used
>> by my plugin.
>>
>> Do you think this would be easy to implement and useful? Or are there
>> other
>> ways to achieve this?
>>
>> Cheers,
>>
>> Jaro
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Upcoming GHC 9.6.4 release

2023-11-24 Thread Moritz Angermann
Thanks! Backport-freeze for mid-December sounds absolutely reasonable!

-Moritz

On Fri, 24 Nov 2023 at 9:39 PM, Zubin Duggal  wrote:

> Thanks for raising this Moritz. We decided to do the release in December
> before we are busy with other matters in January like the 9.10 fork.
> However, we do see the burden this would place on distributors to react
> just a week before the holidays.
>
> We have decided to push the release date to the first week of January
> 2024.
>
> Please note that we still plan to finalise the release branch by
> mid-December so what I said originally about marking patches for
> backport ASAP still applies.
>
> On 23/11/24 20:26, Moritz Angermann wrote:
> >Can I suggest the release to be pushed into 2024?
> >
> >Releasing it on the 15th or a few days later will put a significant burden
> >onto integrators and distributors to react to the ghc release, especially
> >right before the holiday season for quite some of them.
> >
> >Cheers,
> >  Moritz
> >
> >On Fri, 24 Nov 2023 at 8:17 PM, Zubin Duggal 
> wrote:
> >
> >> Hi all,
> >>
> >> We are planning a release in the 9.6 series before Christmas, with a
> >> preliminary ETA of 15th December.
> >>
> >> Release tracking ticket:
> https://gitlab.haskell.org/ghc/ghc/-/issues/24017
> >> Please use this ticket to request any submodule bumps or for any other
> >> discussion related to the release.
> >>
> >> If you would like any patches to be considered for inclusion in this
> >> release please ensure that the corresponding Merge Requests are marked
> >> with the ~"backport needed:9.6" label.
> >>
> >> The current set of all MRs being considered for inclusion can be viewed
> at
> >>
> >>
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests?scope=all&state=all&label_name[]=backport%20needed%3A9.6
> >>
> >> Cheers,
> >> Zubin
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Upcoming GHC 9.6.4 release

2023-11-24 Thread Moritz Angermann
Can I suggest the release to be pushed into 2024?

Releasing it on the 15th or a few days later will put a significant burden
onto integrators and distributors to react to the ghc release, especially
right before the holiday season for quite some of them.

Cheers,
  Moritz

On Fri, 24 Nov 2023 at 8:17 PM, Zubin Duggal  wrote:

> Hi all,
>
> We are planning a release in the 9.6 series before Christmas, with a
> preliminary ETA of 15th December.
>
> Release tracking ticket: https://gitlab.haskell.org/ghc/ghc/-/issues/24017
> Please use this ticket to request any submodule bumps or for any other
> discussion related to the release.
>
> If you would like any patches to be considered for inclusion in this
> release please ensure that the corresponding Merge Requests are marked
> with the ~"backport needed:9.6" label.
>
> The current set of all MRs being considered for inclusion can be viewed at
>
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests?scope=all&state=all&label_name[]=backport%20needed%3A9.6
>
> Cheers,
> Zubin
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Reinstallable - base

2023-10-17 Thread Moritz Angermann
Something I haven’t gotten around to but only preliminary experiments with
is dynamically built iserv binaries.

Using -fexternal-interpreter can decouple the symbols the interpreter sees
and those the compiler sees (They can even be of different architectures).
iserv could be linked against the base the project wants to use, whereas
GHC itself could use a different base. I’m not sure this covers everything,
but it covers at least the case where we don’t need to load two different
packages into the same process.

Wrt TH, I’m a bit behind on reading all the prior work to solve this,
but conceptually I still believe template-haskell itself should not expose
the internal ast, but only a combinator API to it.

Regarding DSO’s: let’s please not make the existence of DSO a hard
dependency. There are platforms for which we don’t have DSO capabilities,
and where we are forced to use the in-memory loader and linker.
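As a sketch of what that decoupling looks like from the command line (the flags are real GHC flags; the iserv path is purely illustrative):

```shell
# Run TH splices in a separate iserv process instead of inside GHC's own
# address space:
ghc -fexternal-interpreter Main.hs

# In principle the interpreter can be a different binary -- e.g. one linked
# against the base the project wants (the path here is made up):
ghc -fexternal-interpreter -pgmi /opt/iserv/bin/iserv Main.hs
```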

On Wed, 18 Oct 2023 at 4:17 AM, Simon Peyton Jones <
simon.peytonjo...@gmail.com> wrote:

> (Meta-question: on reflection, would this discussion perhaps be better on
> a ticket? But where?  GHC's repo?  Or HF's?)
>
> The difficulty is that, as a normal Haskell library, ghc itself will be
>> compiled against a particular verson of base. Then when Template Haskell is
>> used (with the internal interpreter), code will be dynamically loaded into
>> a process that already has symbols for ghc's version of base, which means
>> it is not safe for the code to depend on a different version of base.
>
>
> I'm not understanding the difficulty yet.
>
> Let's say that
>
>- An old library mylib (which uses TH) depends on base-4.7.
>- A new GHC, say GHC 9.10, depends on a newer version of base-4.9,
>which in turn depends on ghc-internal-9.10.
>- At the same time, though, we release base-4.7.1, which depends on
>ghc-internal-9.10, and exposes the base-4.7 API.
>
> At this point we use ghc-9.10 to compile mylib, against base-4.7.1.   (Note
> that the ghc-9.10 binary includes a compiled form of `base-4.9`.)
>
>- That produces compiled object files, such as mylib:M.o.
>- To run TH we need to link them with the running binary
>- So we need to link the compiled `base-4.7.1` as well.  No problem:
>it contains very little code; it is mostly a shim for ghc-internal-9.10
>
> So the only thing we need is the ability to have a single linked binary
> that includes (the compiled form for) two different versions/instantiations
> of `base`.   I think that's already supported: each has a distinct
> "installed package id".
>
> What am I missing?
>
> Simon
>
>
>
> On Tue, 17 Oct 2023 at 16:54, Adam Gundry  wrote:
>
>> Hi Simon,
>>
>> Thanks for starting this discussion, it would be good to see progress in
>> this direction. As it happens I was discussing this question with Ben
>> and Matt over dinner last night, and unfortunately they explained to me
>> that it is more difficult than I naively hoped, even once wired-in and
>> known-key things are moved to ghc-internal.
>>
>> The difficulty is that, as a normal Haskell library, ghc itself will be
>> compiled against a particular version of base. Then when Template
>> Haskell is used (with the internal interpreter), code will be
>> dynamically loaded into a process that already has symbols for ghc's
>> version of base, which means it is not safe for the code to depend on a
>> different version of base. This is rather like the situation with TH and
>> cross-compilers.
>>
>> Adam
>>
>>
>>
>> On 17/10/2023 11:08, Simon Peyton Jones wrote:
>> > Dear GHC devs
>> >
>> > Given the now-agreed split between ghc-internal and base
>> > , what
>> > stands in the way of a "reinstallable base"?
>> >
>> > Specifically, suppose that
>> >
>> >   * GHC 9.8 comes out with base-4.9
>> >   * The CLC decides to make some change to `base`, so we get base-4.10
>> >   * Then GHC 9.10 comes out with base-4.10
>> >
>> > I think we'd all like it if someone could use GHC 9.10 to compile a
>> > library L that depends on base-4.9 and either L doesn't work at all
>> with
>> > base-4.10, or L's dependency bounds have not yet been adjusted to allow
>> > base-4.10.
>> >
>> > We'd like to have a version of `base`, say `base-4.9.1` that has the
>> > exact same API as `base-4.9` but works with GHC 9.10.
>> >
>> > Today, GHC 9.10 comes with a specific version of base, /and you can't
>> > change it/. The original reason for that was, I recall, that GHC knows
>> > the precise place where (say) the type Int is declared, and it'll get
>> > very confused if that data type definition moves around.
>> >
>> > But now we have `ghc-internal`, all these "things that GHC magically
>> > knows" are in `ghc-internal`, not `base`.
>> >
>> > *Hence my question: what (now) stops us making `base` behave like any
>> > other library*?  That would be a big step forward, because it would
>> mean
>> > that a newer GHC could compile old libraries against 

Re: How do you keep tabs on commits that fix issues?

2023-09-28 Thread Moritz Angermann
I usually end up looking at the source I’m actually compiling and checking
if the expected changes are in the source or not. If not, I end up
backporting stuff to the source at hand. As this is often for compilers
that are way past their end of life cycle (e.g. 8.10), there seems little
point in the overhead of upstreaming it.

I do agree with Alan that cherry-pick -x is often the right approach to keep
track of where things originally came from. I also agree with Andreas that
having the ticket ids in the commit message helps when searching.
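The two habits above fit together; a throwaway demo (the issue number, commit message, and tag name are all made up):

```shell
set -e
# Locate a fix by the issue id in its commit message, then see which
# release tags contain it.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "Fix parser crash (fixes #1234)"
sha=$(git rev-parse HEAD)
git tag ghc-9.6.4-release

git log --oneline --grep='#1234'   # finds the fixing commit
git tag --contains "$sha"          # prints: ghc-9.6.4-release
# When backporting, `git cherry-pick -x <sha>` appends
# "(cherry picked from commit <sha>)" so provenance survives rebasing.
```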

Moritz

On Fri, 29 Sep 2023 at 6:27 AM, Andreas Klebinger 
wrote:

> Personally I try to include fixes #1234 in the commit so then I can just
> check which tags contain a commit mentioning the issue.
>
> If the issue isn't mentioned in the commit I usually look at the issue
> -> look for related mrs -> look for the commit with the fix -> grep for
> the commit message of the commit or look for the marge MR mentioned on
> the mr.
>
> Am 28/09/2023 um 08:56 schrieb Bryan Richter via ghc-devs:
> > I am not sure of the best ways for checking if a certain issue has
> > been fixed on a certain release. My past ways of using git run into
> > certain problems:
> >
> > The commit (or commits!) that fix an issue get rewritten once by Marge
> > as they are rebased onto master, and then potentially a second time as
> > they are cherry-picked onto release branches. So just following the
> > original commits doesn't work.
> >
> > If a commit mentions the issue it fixes, you might get some clues as
> > to where it has ended up from GitLab. But those clues are often
> > drowning in irrelevant mentions: each failed Marge batch, for
> > instance, of which there can be many.
> >
> > The only other thing I can think to do is look at the original merge
> > request, pluck out the commit messages, and use git to search for
> > commits by commit message and check each one for which branches
> > contain it. But then I also need to know the context of the fix to
> > know whether I should also be looking for other, logically related
> > commits, and repeat the dance. (Sometimes fixes are only partially
> > applied to certain releases, exacerbating the need for knowing the
> > context.) This seems like a mechanism that can't rely on trusting the
> > author of the original set of patches (which may be your past self)
> > and instead requires a deep understanding to be brought to bear every
> > time you would want to double check the situation. So it's not very
> > scalable and I wouldn't expect many people to be able to do it.
> >
> > Are there better mechanisms already available? As I've said before, I
> > am used to a different git workflow and I'm still learning how to use
> > the one used by GHC. I'd like to know how others handle it.
> >
> > Thanks!
> >
> > -Bryan
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Inquiry about Supporting RISC-V 32 in Haskell

2023-06-08 Thread Moritz Angermann
>>
>> Hello Shiwei!
>> Apologies, my Chinese is not very good, so I will write in English.
>> I assume you've found my email from the ghc source repo? Yes, I have
>> extensive experience adding compiler backends to GHC, including GHC's
>> internal static linker.
>> Recently a friend and I have started working on RV64 support in GHC (See
>> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/10367). So we have
>> some prior experience
>> with RISC-V.
>>
>> I understand you want to have support for RV32 in ghc? Are you looking
>> for _native_ RV32 support GHC the compiler running natively on RISC-V 32
>> hardware? Or are you
>> primarily interested in _cross compilation_ to RV32 from e.g. RV64,
>> x86_64, aarch64?
>>
>> Also which platform are you interested in? Linux?
>>
>> Happy to provide more concrete feedback based on your exact requirements
>> and vision. Ultimately GHC is a community project and everyone can
>> contribute. I can certainly
>> provide guidance and assistance here.
>>
>> Cheers,
>>  Moritz
>>
>> On Wed, 7 Jun 2023 at 18:16, 卢诗炜  wrote:
>>
>>> Dear Moritz Angermann,
>>>
>>> Hello! I am writing on behalf of Hunan Compiler Technology Co., Ltd. to
>>> inquire about the possibility of supporting RISC-V 32 in Haskell, or
>>> whether you're already supporting it.
>>> We are a technology company based in China and we would like to
>>> contribute to the Haskell community by providing support for this
>>> architecture.
>>>
>>> As you may know, RISC-V is an open-source instruction set architecture
>>> that is gaining popularity in the industry. It is important for us to
>>> ensure that our products are compatible with this architecture, and we
>>> believe that supporting RISC-V in Haskell would be a valuable contribution
>>> to the community.
>>>
>>>
>>> We are willing to work closely with the community to understand the
>>> requirements and challenges of supporting RISC-V in Haskell. We have a team
>>> of experienced engineers who are familiar with both Haskell and RISC-V, and
>>> we are confident that we can provide high-quality support for this
>>> architecture.
>>>
>>>
>>> We would appreciate any guidance or feedback from the community on this
>>> matter. Please let us know if there are any specific requirements or
>>> challenges that we should be aware of. We look forward to working with the
>>> Haskell community to support RISC-V.
>>>
>>>
>>> Thank you for your time and consideration.
>>>
>>>
>>> Sincerely, Shiwei LU, Hunan Compiler Technology Co., Ltd.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: does llvm 16 work with ghc 9.6.1 ?

2023-04-24 Thread Moritz Angermann
Hi George,

while I personally haven’t tried, I’d encourage you to just try. Unless
they changed their textual IR (they don’t do that often anymore), it could
just work.

Whether or not you run into bugs for the specific target you are looking
at, is hard to say without knowing the target.

My suggestion would be to just try building your configuration with the
llvm backend against llvm16, and run validate if you can.
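Concretely, one low-effort first experiment (the GHC flags are real; the versioned tool names assume llvm16 binaries are on PATH as `opt-16`/`llc-16`):

```shell
# Compile something small with the LLVM backend, pointing GHC at the
# llvm16 tools explicitly:
ghc -fllvm -pgmlo opt-16 -pgmlc llc-16 Hello.hs

# Or bake the tools in when configuring a GHC build, then validate:
./configure LLC=llc-16 OPT=opt-16
./validate
```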

Cheers,
  Moritz

On Tue, 25 Apr 2023 at 4:50 AM, George Colpitts 
wrote:

> Hi
>
> Does anybody know if  llvm 16 works with ghc 9.6.1 ?
>
> Thanks
> George
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: GitLab maintenance has started

2022-11-24 Thread Moritz Angermann
The zw3rk runners should be back.

On Thu, 24 Nov 2022 at 17:41, Bryan Richter via ghc-devs <
ghc-devs@haskell.org> wrote:

> Hello again,
>
> After a few surprises, I have finished my migration.
>
> Unfortunately, CI IS STILL BROKEN.
>
> All CI runners disabled themselves while GitLab was in maintenance
> mode. (That's not very helpful, but never mind.)
>
> As I do not have the access or privileges required to restart the
> runners, they will stay dead until the proper people can restart them.
>
> Sorry for the trouble!
>
> -Bryan
>
> On Thu, Nov 24, 2022 at 8:15 AM Bryan Richter 
> wrote:
> >
> > Hi all,
> >
> > I've started the migration, so the maintenance period has officially
> started.
> >
> > GitLab itself will go read-only soon.
> >
> > As a reminder, I scheduled 4 hours for the maintenance window.
> >
> > I'll send updates at major points of the migration.
> >
> > -Bryan
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


How do you build GHC 9.2.4 cross compilers for windows?

2022-11-15 Thread Moritz Angermann
Hi,

Does anyone know how to build a 9.2.4 windows cross compiler? Hadrian seems
to get in the way a lot?

This is the essence of what we tried to get working:

./configure --enable-bootstrap-with-devel-snapshot
--build=x86_64-apple-darwin --host=x86_64-apple-darwin
--target=x86_64-w64-mingw32
hadrian --flavour=default+no_dynamic_ghc --docs=no-sphinx -j

Can someone please advise how this is supposed to be built now?

Cheers,
 Moritz
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Tier 1 architectures

2022-09-22 Thread Moritz Angermann
Thanks Ben!

Just FYI: We do have folks actively deploying to iOS and Android  at
simplex chat.

I do agree that we want this to be in the user guide though; as it’s quite
version dependent.

Cheers,
  Moritz

On Fri, 23 Sep 2022 at 3:22 AM, Ben Gamari  wrote:

> Simon Peyton Jones  writes:
>
> > Ben, Matthew, Moritz, and friends
> >
> > Is this wiki page about architectures still accurate?
> > https://gitlab.haskell.org/ghc/ghc/-/wikis/platforms
> >
> Hi Simon,
>
> Indeed there were a few inaccuracies on that page; I have fixed
> these and generally cleaned up the page.
>
> > For example, ARM is not Tier 1, or "apple silicon".
> >
> > Yet I know some of our developers have invested lots of effort in other
> > architectures, so maybe those efforts are not reflected here.
> >
> Fairly recently there has been work on RISC-V (rv64) and PowerPC
> (ppc64le), as well as some work on s390x via LLVM. However, I wouldn't
> consider any of these Tier 1.
>
> > Relevant is Moritz's post about 32-bit architectures
> > <
> https://discourse.haskell.org/t/running-project-built-on-raspberry-pi-with-cabal-gives-weird-errors/2429/6
> >
> > .
> >
> > We should in due course add Javascript and Web Assembly as Tier 1 back
> ends?
> >
> Indeed, that is the plan although 9.6 will rather ship these as Tier 2
> targets.
>
> > Are we saying "if your customer bases uses Tier 2 architectures, you
> can't
> > rely on GHC from one release to the next"?  I wonder if there are
> companies
> > for which Tier-2 architectures are mission-critical.  Mis-aligned
> > expectations cause upset.
> >
> I have heard that some people are using amd64/FreeBSD, although that can
> very
> nearly be promoted to a Tier 1 now. Bodigrim once mentioned that he was
> considering deploying Haskell on s390x, although I'm not sure what
> became of that. Otherwise I would be quite surprised if any commercial
> customers are relying on any of the other Tier 2 or Tier 3 platforms.
>
> > I mention all this because it is relevant to our stability guarantees.
> > Every time we release we should point to this list.
> >
> My sense is that this list should ideally rather live in the users guide
> since it changes from release to release.
>
> Cheers,
>
> - Ben
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: GHC development asks too much of the host system

2022-07-19 Thread Moritz Angermann
Hi Hecate,

I don't think this is entirely fair in either direction.  So sharing my
personal experience
might shed some light.  I've often worked on GHC on fairly weak machines.
However
the ability to use HLS on GHC or even the ability to load GHC into GHCi are
fairly
recent additions.

I don't run the full test-suite either much.

The general development experience has more been closer to this:
- pick an issue I want to work on
- checkout the relevant branch (or master)
- kick off a ghc build (hadrian)
- start looking for the relevant code in GHC to address this.
- build a tiny reproducer (if possible, or run the relevant test from the
test-suite if available) -- once the initial ghc is built.
- hack on the codebase; rebuild (subsequent rebuilds are fairly fast)
- retry the reproducer, iterate until done.
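In command form, that loop looks roughly like this (the flavour and paths are one plausible choice, not the only one, and `Repro.hs` is a stand-in for whatever the reproducer is):

```shell
# One-time setup
git clone --recursive https://gitlab.haskell.org/ghc/ghc.git && cd ghc
./boot && ./configure

# Initial build: slow
hadrian/build -j --flavour=quick

# ...hack on compiler/..., then rebuild incrementally (much faster):
hadrian/build -j --flavour=quick

# Try the reproducer with the freshly built compiler
# (note: _build/stage1/bin/ghc is the stage2 compiler):
_build/stage1/bin/ghc Repro.hs
```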

Most of my development has been without much code-level help and at most a
syntax highlighter. This is decidedly different from the experience you can
have working on haskell libraries with the availability of ghcid, hls, ...
Would it be nice if ghc development were that nice as well? I'd assume so;
I've just never even tried.

Cheers,
 Moritz


On Tue, 19 Jul 2022 at 18:21, Hécate  wrote:

> Hello ghc-devs,
>
> I hadn't made significant contributions to the GHC code base in a while,
> until a few days ago, where I discovered that my computer wasn't able to
> sustain running the test suite, nor handle HLS well.
>
> Whether it is my OS automatically killing the process due to oom-killer
> or just the fact that I don't have a war machine, I find it too bad and
> I'm frankly discouraged.
> This is not the first time such feedback emerges, as the documentation
> task force for the base library was unable to properly onboard some
> people from third-world countries who do not have access to hardware
> we'd consider "standard" in western Europe or some parts of North
> America. Or at least "standard" until even my standard stuff didn't cut
> it anymore.
>
> So yeah, I'll stay around but I'm afraid I'm going to have to focus on
> projects for which the feedback loop is not on the scale of hours, as
> this is a hobby project.
>
> Hope this will open some eyes.
>
> Cheers,
> Hécate
>
> --
> Hécate ✨
> : @TechnoEmpress
> IRC: Hecate
> WWW: https://glitchbra.in
> RUN: BSD
>


Re: The GHC(i)/RTS linker and Template Haskell

2022-05-31 Thread Moritz Angermann
Hi Alexis,

let me try to provide the high-level view. I'm sorry if I'm going a bit
overboard on details you already know. But let's
start by clearing up a misconception first: no, GHCi does not always
require dynamic linking.

At the very abstract level we have a compiler that knows how to turn
various inputs into object code. This includes C, Cmm, Haskell, and
assembly (.c, .S, .cmm, .hs -> .o). Thus a complete Haskell package ends up
as a bunch of object code files. We know that for dynamic linkers, we may
need slightly different arguments (e.g. PIC).

We next roll these up into archives (.a) and occasionally a pre-linked
object file (e.g. link the whole set of object files into one object file,
and resolve internal references); as well as a dynamic (shared object,
dylib) file.

For GHCi's purposes (and TH), we ultimately want to call a Haskell function
to produce some AST to splice in. This Haskell
function might be defined not in a different package but in the same one,
so we'll have to deal with some in-flight packages anyway.
We may have some Byte Code Object (BCO) glue code to invoke the Haskell
function, which GHCi will interpret during evaluation. However, that
function can depend on a large dependency tree, and we don't have BCOs for
everything. I still think it would be nice to have an abstract machine and
an Intermediate Representation/ByteCode; that's a much larger project
though. Also, until recently BCOs couldn't even encode unboxed types/sums.

So given the BCO glue code, we really want to call into object code (also
for performance). You can instruct GHCi to prefer object code as well via
-fobject-code.

This now leads us to the need of getting the object code somehow into
memory and running it. The dynamic system linker approach would be to turn
the object code with the function we want to call into a shared library,
and just hand that over to the linker (e.g. dlopen).

However, GHC has for a long time grown its own in-memory static linker. As
such it has the capability to load object files (.o) and resolve them on the
fly. There is no need for system shared libraries or a system linker, nor to
deal with potential bugs in that linker. It also means we can link on
platforms that don't have a system linker or only a severely restricted one
(e.g. iOS).

So from a high level you can look at GHC's RTS linker as a special feature
of GHC that lets us do without a system-provided dynamic linker when there
is none available, or when using it is undesirable.

Whether stuff is loaded through the internal or external interpreter makes
nearly no difference. You _can_ load different ABIs through the external
iserv (as that iserv can be built against a different ABI).

Hope this helps a bit? Feel free to ask more questions.

Cheers,
 Moritz

On Wed, 1 Jun 2022 at 03:38, Alexis King  wrote:

> Hi all,
>
> I’ve recently been trying to better understand how and where time is spent
> at compile-time when running Template Haskell splices, and one of the areas
> I’ve been struggling to figure out is the operation of the linker. From
> reading the source code, here’s a summary of what I think I’ve figured out
> so far:
>
>- TH splices are executed using the GHCi interpreter, though it may be
>internal or external (if -fexternal-interpreter is used).
>
>- Regardless of which mode is used, TH splices need their dependencies
>loaded into the interpreter context before they can be run. This is handled
>by the call to loadDecls in hscCompileCoreExpr', which in turn calls
>loadDependencies in GHC.Linker.Loader.
>
>- loadDependencies loads packages and modules in different ways.
>Package dependencies are just loaded via the appropriate built shared
>libraries, but modules from the current package have to be loaded a
>different way, via loadObjects (also in GHC.Linker.Loader).
>
> Here, however, is where I get a bit lost. GHC has two strategies for
> loading individual objects, which it chooses between depending on whether
> the current value of interpreterDynamic is True. But I don’t actually
> understand what interpreterDynamic means! The Haddock comment just says
> that it determines whether or not the “interpreter uses the Dynamic way”,
> but I don’t see why that matters. My understanding was that GHCi *always*
> requires dynamic linking, since it is, after all, loading code dynamically.
> Under what circumstances would interpreterDynamic ever be False?
>
> Furthermore, I don’t actually understand precisely how and why this
> influences the choice of loading strategy. In the case that
> interpreterDynamic is True, GHC appears to convert the desired dyn_o object
> into a shared library by calling the system linker, then loads that, which
> can be very slow but otherwise works. However, when interpreterDynamic is
> False, it loads the object directly. Both paths eventually call into “the
> RTS linker”, implemented in rts/Linker.c, to actually load the resulting
> object.
>
> I have found 

Re: gitlab spam

2022-05-16 Thread Moritz Angermann
The second one is an issue if it consumes CI resources. Ideally we’d allow
only “blessed” repos to consume CI. The issue with this is that
(random) new users then can’t fork GHC and have CI run against their changes.

I’d still very much like to see a solution to this; it is a security
concern.

Moritz

On Tue, 17 May 2022 at 1:27 AM, Ben Gamari  wrote:

> Bryan and I discussed this in person but I'll repeat what I said there
> here:
>
> In short, there are two kinds of spam:
>
>  * user creation without the creation of any other content
>  * spam content (primarily projects and snippets)
>
> My sense is that the former has thus far been harmless and consequently
> we shouldn't lose any sleep over it. On the other hand, spam
> content is quite problematic and we should strive to eliminate it.
> Once every few months I take a bit of time and do some cleaning (with
> some mechanical help [1]). It's also helpful when users use GitLab's
> "Report Abuse" feature to flag spam accounts as these cases are very
> easy to handle.
>
> Cheers,
>
> - Ben
>
>
> [1]
> https://gitlab.haskell.org/bgamari/ghc-utils/-/blob/master/gitlab-utils/gitlab_utils/spam_util.py
>


Re: the linters are killing me slowly

2022-02-09 Thread Moritz Angermann
Just to add to this. I think we should *optimize* for people not working
full time on GHC.
Anything that's going to be smooth for people working only a few hours a
week on GHC
will implicitly improve the situation for those working more hours on GHC
as well. In other words, what is pleasant for someone with little time
should also be pleasant for someone with lots of time.

I don't see why marge would be an issue though? If we allow all failures,
why would
someone assign something to marge that still *has* failures? Isn't the idea
that you
assign to marge once you've cleaned up all failures?

I mean ideally we'd want to get a summary of all failures (without the
noise).

The only drawback I can see is this: if we allow all failures, and then end
up with lots
of MRs, we might run into build constraints (e.g. you are going to wait
hours for your
MR to even be picked up by the fleet of CI machines). I don't see this
happening
immediately.  And maybe if this happens we can get ~7k EUR / year from the
HF?
That's about as much as the builders I provide cost per year.

Cheers,
 Moritz


On Thu, 10 Feb 2022 at 10:04, Ben Gamari  wrote:

> Richard Eisenberg  writes:
>
> > Hi devs,
> >
> Hi Richard,
>
> > Can we please, please not have the linters stop more useful output
> > during CI? Over the past few months, I've lost several days of
> > productivity due to the current design.
>
> Mmm, yes, this doesn't sound good. I'm sorry it's been such a hassle.
>
> > Why several days? Because I typically end up with only 1.5-2 hours for
> > GHC work in a day, and when I have to spend half of that recreating
> > test results (which, sometimes, don't work, especially if I use
> > Hadrian, as recommended), I often decide to prioritize other tasks
> > that I have a more reasonable chance of actually finishing.
> >
> > It was floated some time ago that "Draft" MRs could skip linters. But
> > I actually have a non-Draft MR now, and it failed the new Notes
> > linter. (Why, actually, is that even a separate pass? I thought there
> > was a test case for that, which I'm thrilled with.)
> >
> It's a separate pass to help the contributor distinguish
>
> > It just feels to me that the current workflow is optimized for those
> > of us who work on GHC close to 100% of the time. This is not the way
> > to get new contributors.
> >
> Yes, I am sympathetic with this concern. One alternative design that we
> could try is to rather allow linters to fail *except* in Marge jobs.
> This would mean that we would need to be very careful not to pass jobs
> with failing lints to Marge as doing so would spoil the entire Marge
> batch. However, it would also mean that it would make contribution in a
> less-than-full-time setting a bit easier. How does this sound?
>
> If we had more devops capacity we could mitigate the Marge-spoilage
> problem by teaching Marge not to consider MRs which are failing lints.
> However, at the moment I don't think we have the bandwidth to implement
> this.
>
> Cheers,
>
> - Ben


Re: DWARF support

2021-11-17 Thread Moritz Angermann
Thanks Carter!

Yes, I completely forgot about the unwinding libraries.

Sorry. My bad!

Best,
 Moritz

On Wed 17. Nov 2021 at 21:08, Carter Schonwald 
wrote:

> My understanding is that the platform specific part of ghc dwarf support
> atm is the stack walking to generate dwarf data in stack traces. This is
> because the dwarf stack walking Libs that are relatively mature are mostly
> centered around elf?
>
> It should still be possible with some work to use perf and gdb style
> tools, though the complications are
>
> a) you have to make sure all the Libs are built with dwarf
>
> b) there’s some complications around loading / placing the dwarf files
> adjacent to the object code files on Darwin (at least last time I checked
> which was years ago following the wiki entry johan tibbel wrote up I
> think?)
>
> C) scheduler yields make gdb stepping through a program a tad more
> annoying, I think the “setting the yield timer to zero” is the work around
>
> D) the “source” you step through is essentially the c— z-encoded code? So
> you still need to do some puzzling out of stuff
>
>
> On Wed, Nov 17, 2021 at 7:28 AM Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
>
>> Hi Richard,
>>
>> I’m not sure using platform native AND  the term DWARF would help rather
>> than add to confusion.  Let me still try to
>> help a bit with context here.
>>
>> For Linux and most BSDs, we have settled on the Executable and Linking
>> Format (ELF) as the container format for
>> your machine code.  And you might see where the inspiration for DWARF
>> might come from.
>>
>> For macOS, we have mach object (Mach-O) as the container format. It's
>> distinctly different from ELF, and also the
>> reason why Linux/BSD and macOS are sometimes substantially different wrt
>> executable packaging and linking.
>>
>> For Windows we have Portable Executable (PE) as the container format.
>>
>> My recollection is that we implemented DWARF in the NCG only for ELF.
>> I've always wanted to scratch an itch
>> and try to make it work for mach-o as well, but never got around to it
>> (yet?).  The NCGs have flags that specify
>> if we want to emit debug info or not.  I believe most codegens except for
>> x86_64/elf ignore that flag.
>>
>> This is a non-trivial engineering effort to get done properly, I believe.
>> And we all spend time on many other things.
>>
>> Depending on how familiar you are with development on macOS, you might
>> know the notion of dSYM folders,
>> where macOS usually separates the application binary into the binary, and
>> then stores the (d)ebug (SYM)bols in
>> a separate folder. Those are iirc DWARF objects in the end.
>>
>> Hope this helps a bit; my recollection might be a bit rusty.
>>
>> Best,
>>  Moritz
>>
>>
>>
>> On Wed 17. Nov 2021 at 20:02, Richard Eisenberg 
>> wrote:
>>
>>> Hi devs,
>>>
>>> I was intrigued by Bodigrim's comment about HasCallStack in base (
>>> https://github.com/haskell/core-libraries-committee/issues/5#issuecomment-970942580)
>>> that there are other alternatives, such as DWARF. Over the years, I had
>>> tuned out every time I saw the word DWARF: it was (and is!) an unknown
>>> acronym and seems like a low-level detail. But Bodigrim's comment made me
>>> want to re-think this stance.
>>>
>>> I found Ben's series of blog posts on DWARF, starting with
>>> https://www.haskell.org/ghc/blog/20200403-dwarf-1.html. These are very
>>> helpful! In particular, they taught me that DWARF = platform-native
>>> debugging metadata. Is that translation accurate? If so, perhaps we should
>>> use both names: if I see that GHC x.y.z has DWARF support, I quickly scroll
>>> to the next bullet. If I see that GHC x.y.z has support for platform-native
>>> debugging metadata and is now compatible with e.g. gdb, I'm interested.
>>>
>>> Going further, I have a key question for my use case: is this support
>>> available on Mac? The first post in the series describes support for "Linux
>>> and several BSDs" and the last post says that "Windows PDB support" is
>>> future work. (Is "PDB" platform-native debugging metadata for Windows? I
>>> don't know.) But I don't see any mention of Mac. What's the status here?
>>>
>>> It would be very cool if this conversation ends with me making a video
>>> on how a few simple GHC flags can allow us to, say, get a stack trace on a
>>> pattern-match failure in a Haskell program.
>>>
>>> Thanks!
>>> Richard
>>>


Re: DWARF support

2021-11-17 Thread Moritz Angermann
Hi Richard,

I’m not sure using platform native AND  the term DWARF would help rather
than add to confusion.  Let me still try to
help a bit with context here.

For Linux and most BSDs, we have settled on the Executable and Linking
Format (ELF) as the container format for
your machine code.  And you might see where the inspiration for DWARF
might come from.

For macOS, we have mach object (Mach-O) as the container format. It's
distinctly different from ELF, and also the
reason why Linux/BSD and macOS are sometimes substantially different wrt
executable packaging and linking.

For Windows we have Portable Executable (PE) as the container format.

My recollection is that we implemented DWARF in the NCG only for ELF.  I've
always wanted to scratch an itch
and try to make it work for mach-o as well, but never got around to it
(yet?).  The NCGs have flags that specify
if we want to emit debug info or not.  I believe most codegens except for
x86_64/elf ignore that flag.

This is a non-trivial engineering effort to get done properly, I believe.
And we all spend time on many other things.

Depending on how familiar you are with development on macOS, you might know
the notion of dSYM folders,
where macOS usually separates the application binary into the binary, and
then stores the (d)ebug (SYM)bols in
a separate folder. Those are iirc DWARF objects in the end.

Hope this helps a bit; my recollection might be a bit rusty.

Best,
 Moritz



On Wed 17. Nov 2021 at 20:02, Richard Eisenberg  wrote:

> Hi devs,
>
> I was intrigued by Bodigrim's comment about HasCallStack in base (
> https://github.com/haskell/core-libraries-committee/issues/5#issuecomment-970942580)
> that there are other alternatives, such as DWARF. Over the years, I had
> tuned out every time I saw the word DWARF: it was (and is!) an unknown
> acronym and seems like a low-level detail. But Bodigrim's comment made me
> want to re-think this stance.
>
> I found Ben's series of blog posts on DWARF, starting with
> https://www.haskell.org/ghc/blog/20200403-dwarf-1.html. These are very
> helpful! In particular, they taught me that DWARF = platform-native
> debugging metadata. Is that translation accurate? If so, perhaps we should
> use both names: if I see that GHC x.y.z has DWARF support, I quickly scroll
> to the next bullet. If I see that GHC x.y.z has support for platform-native
> debugging metadata and is now compatible with e.g. gdb, I'm interested.
>
> Going further, I have a key question for my use case: is this support
> available on Mac? The first post in the series describes support for "Linux
> and several BSDs" and the last post says that "Windows PDB support" is
> future work. (Is "PDB" platform-native debugging metadata for Windows? I
> don't know.) But I don't see any mention of Mac. What's the status here?
>
> It would be very cool if this conversation ends with me making a video on
> how a few simple GHC flags can allow us to, say, get a stack trace on a
> pattern-match failure in a Haskell program.
>
> Thanks!
> Richard
>


Re: Two questions about building GHC from sources

2021-09-22 Thread Moritz Angermann
(1) I would hope so. If not, that's a bug.
(2) hadrian supports --build-root=, such that you can have different build
product trees based on one source tree. One of the nice features of hadrian
is that it finally separates build and source directories; there
are still the inplace files, which I believe should go. Yet anything under
the --build-root, which defaults to _build, should be hermetic in there.
Whether or not you can just *move* those directories around is a bit
complicated, primarily because there are facilities in Cabal (the Paths
module), as well as potential use of non-${pkgroot}-prefixed package
config files, that hardcode paths. Again, the Paths module from Cabal
should just be removed at some point, and missing ${pkgroot} values could
be fixed. My guess is that we've done quite a lot to make it relocatable,
so just try to move it somewhere else and see if things break when calling
it from there; they might, but if they do, it's a bug.


On Wed, Sep 22, 2021 at 3:27 PM Norman Ramsey  wrote:

> I've got two questions about building:
>
>  1. If my bootstrap compiler changes (e.g., from 8.10.5 to 9.0.1), is
> Hadrian smart enough to rebuild everything?  If not, how do I
> force it to start over?
>
>  2. At the moment I'm not installing my GHC; I'm using elements
> directly from the build tree.  If I build GHC and then move the
> whole tree to the whole location, will everything be happy?
> Or do I need to do something?  (Again, using Hadrian.)
>
>
> Norman


Re: Problem building GHC

2021-09-20 Thread Moritz Angermann
Those errors in both logs seem fairly odd. The first one appears to be
mostly a missing libgmp; the second one looks like hsc2hs's template file
is missing, although hadrian should have a rule for that.

You can try to clean your tree and rebuild.

git clean -xfd
git submodule foreach git clean -xfd
git submodule update --init --recursive

./boot && ./configure
cabal v2-update
hadrian/build -j --flavour=Quick/boot

In general the instructions on https://ghc.dev are pretty good.

Cheers,
 Moritz

On Mon, Sep 20, 2021 at 8:50 AM Solomon Bothwell 
wrote:

> Hi. I'm a Haskell developer and I have recently started exploring and
> building GHC, mainly with the hope of writing some small MRs for base.
>
> I use nixos and ghc.nix to setup a build environment. Last week I was able
> to build `ghc-8.10.7-release` with no trouble. I'm not sure what may have
> happened but I am unable to build either `ghc-8-10.7-release` or `master`
> this week. I tried deleting my fork and creating a new one but am getting
> the same results.
>
> For `master` I get this error:
> https://gist.github.com/ssbothwell/9693a919c521decf52503e9152b879b6
> For `ghc-8.10.7-release` I get this error:
> https://gist.github.com/ssbothwell/fb5c418b4ee509cf0ad95d4337381e9e
>
> I hope this is the right venue to be asking for help on this. Would it be
> better for me to write an issue on the ghc.nix github page?
>
> Thanks,
> Solomon.


Re: GHC plugin authors?

2021-09-01 Thread Moritz Angermann
> With regards to point 2. if I understand correctly, the gitlab CI creates
> bindists as build artifacts, is that correct?
> I guess it would be helpful if that were advertised more prominently, so
> it's easier to test a new branch without having to build GHC.


Yes, they are. Let's take this MR for example (I just picked a random green
one)
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/6452
If you look at the associated CI pipeline:
https://gitlab.haskell.org/ghc/ghc/-/pipelines/40166
You'll find the x86_64-deb-debug validate build here:
https://gitlab.haskell.org/ghc/ghc/-/jobs/774465
Which on the right side has "Job artifacts", clicking "browse" there leads
us to: https://gitlab.haskell.org/ghc/ghc/-/jobs/774465/artifacts/browse
from where we can obtain
https://gitlab.haskell.org/ghc/ghc/-/jobs/774465/artifacts/file/ghc-x86_64-deb9-linux-debug.tar.xz

Hope this helps; it's not exactly trivial, but at least this should give
directions on how to get the artifacts.
There are also nightly builds which have the artifacts attached as well.

On Wed, Sep 1, 2021 at 10:46 AM Christiaan Baaij 
wrote:

> Hi Richard,
>
> So there's multiple reasons/aspects as to why I haven't responded
> 1. Sam's question/seeking feedback on changing the return type for
> constraint solver plugins is something Sam and I discussed at HIW using
> ICFPs airmeet instance, as in, it was something I suggested.
>
> And the TL;DR of the next three reasons: I can't find enough time / need to
> get better at time management / need to hand over work
> 2. While I've had to maintain the
> ghc-typelits-{natnormalize;knownnat;extra} constraint solver plugins as
> part of maintaining my (and our company's) livelihood, I usually only try
> the next version of the GHC module collection (API) when there's a binary
> dist of like an alpha or RC.
> 3. I guess I've become conditioned to just do the regular impedance
> matching for every new major GHC release, as can be witnessed by the
> following CPP extravaganza:
> https://github.com/clash-lang/ghc-typelits-natnormalise/blob/3f3ae60a796061b7a496f64ba71f4d57dedd01db/src/GHC/TypeLits/Normalise.hs#L181-L316
> 4. I'm a day-time-only programmer, and with our company growing I've had
> less time for coding and maintaining software.
>
> With regards to point 2. if I understand correctly, the gitlab CI creates
> bindists as build artifacts, is that correct?
> I guess it would be helpful if that were advertised more prominently, so
> it's easier to test a new branch without having to build GHC.
>
> With regards to point 3. The "blessed" GHC API, i.e. the module called
> "GHC", "Plugins", "TcPlugins" (I forgot what their new names are in the
> post ghc 9.0 era), were never enough to get everything done that needed to
> be done.
> That meant one had to reach out to the GHC module collection, which, for
> good reasons (the type and constraint system is always under heavy
> development), is changing with every major GHC release.
> I guess (constraint solving) plugin creators whose day-job or hobby
> doesn't include maintaining plugins got burned out by having to keep up
> with these changes.
> So I wonder how many constraint-solving plugins currently compile against
> GHC 9.0, let alone GHC 9.2; and as a consequence I wonder how many people
> are interested in these API changes (since they stopped maintaining their
> plugins).
>
> Perhaps they are interested though! But simply not registered to this
> mailing list. So asking for feedback on both the haskell discourse and the
> haskell sub-reddit certainly wouldn't hurt!
>
> Finally, with regards to point 4. Constraint-solving plugins certainly
> required some time to get into.
> So often my colleagues look to me whenever there's a new alpha or RC out
> to upgrade the plugins.
> But Sam's changes for solving type family constraints certainly give the
> impression that things will be easier going forward!
> So now is probably the best time to hand over the baton to one of my
> colleagues and hopefully they can provide the asked-for feedback.
>
> On Wed, 1 Sept 2021 at 09:21, Moritz Angermann 
> wrote:
>
>> Hi Richard,
>>
>> I believe that this is mostly due to plugin development happening to
>> satisfy a plugin need.  I doubt there is a grand unified vision for
>> plugins.  And I don't have one either.  I've dabbled with codegen plugins a
>> long time ago, these days I'm primarily concerned with plugins having a
>> chance to work in cross compilation settings, and even that is still a very
>> uncharted area, but Luite has come up with a hack and Sylvain is making
>> progress :-) We still don't have the cabal side fixes, where we'd need some
>> `plugin-depends` stan

Re: GHC plugin authors?

2021-09-01 Thread Moritz Angermann
Hi Richard,

I believe that this is mostly due to plugin development happening to
satisfy a plugin need.  I doubt there is a grand unified vision for
plugins.  And I don't have one either.  I've dabbled with codegen plugins a
long time ago, these days I'm primarily concerned with plugins having a
chance to work in cross compilation settings, and even that is still a very
uncharted area, but Luite has come up with a hack and Sylvain is making
progress :-) We still don't have the cabal side fixes, where we'd need some
`plugin-depends` stanza, but all that only makes sense, once we have the
fundamentals for plugins disentangled in ghc.

I agree that a discussion on discourse might help.  But we won't know
without trying.

On Tue, Aug 31, 2021 at 9:34 PM Richard Eisenberg 
wrote:

> Hi all,
>
> I have seen a few posts from Sam Derbyshire here asking for feedback about
> plugin API design, and the responses have been minimal. This poses a design
> challenge, because the GHC folk who design the interface are sometimes
> distinct from the people who use the interface. We're trying to be good,
> seeking feedback from real, live clients. Is there a better way to do so
> than this mailing list? Example: we could create a Category on
> discourse.haskell.org, if that would reach the audience better. Or we
> could make a repo with issue trackers somewhere simply to track plugin
> design. What would work?
>
> (I recognize that I'm asking in a perhaps-ineffective channel for advice,
> but I really don't have a better idea right now. Maybe some of you plugin
> authors are here and will point us in a better direction.)
>
> Thanks,
> Richard


Re: CI build failures

2021-07-27 Thread Moritz Angermann
You can safely ignore the x86_64-darwin failure. I can get you the juicy
details over a beverage some time. It boils down to some odd behavior using
rosetta2 on AArch64 Mac mini’s to build x86_64 GHCs. There is a fix
somewhere from Ben, so it’s just a question of time until it’s properly
fixed.

The other two I’m afraid I have no idea about. I’ll see to restarting them.
(You can’t?)

On Tue 27. Jul 2021 at 18:10, ÉRDI Gergő  wrote:

> Hi,
>
> I'm seeing three build failures in CI:
>
> 1. On perf-nofib, it fails with:
>
> == make boot -j --jobserver-fds=3,4 --no-print-directory;
>   in /builds/cactus/ghc/nofib/real/smallpt
> 
> /builds/cactus/ghc/ghc/bin/ghc  -M -dep-suffix "" -dep-makefile .depend
> -osuf o -O2 -Wno-tabs -Rghc-timing -H32m -hisuf hi -packageunboxed-ref
> -rtsopts smallpt.hs
> : cannot satisfy -package unboxed-ref
>  (use -v for more information)
>
> (e.g. https://gitlab.haskell.org/cactus/ghc/-/jobs/743141#L1465)
>
> 2. On validate-x86_64-darwin, pretty much every test fails because of the
> following extra stderr output:
>
> +
> +:
> +warning: Couldn't figure out C compiler information!
> + Make sure you're using GNU gcc, or clang
>
> (e.g. https://gitlab.haskell.org/cactus/ghc/-/jobs/743129#L3655)
>
> 3. On validate-x86_64-linux-deb9-integer-simple, T11545 fails on memory
> consumption:
>
> Unexpected stat failures:
> perf/compiler/T11545.run  T11545 [stat decreased from
> x86_64-linux-deb9-integer-simple-validate baseline @
> 5f3991c7cab8ccc9ab8daeebbfce57afbd9acc33] (normal)
>
> This one is interesting because there is already a commit that is supposed
> to fix this:
>
> commit efaad7add092c88eab46e00a9f349d4675bbee06
> Author: Matthew Pickering 
> Date:   Wed Jul 21 10:03:42 2021 +0100
>
>  Stop ug_boring_info retaining a chain of old CoreExpr
>
>  [...]
>
>  -
>  Metric Decrease:
>  T11545
>  -
>
> But still, it's failing.
>
> Can someone kick these build setups please?
>
> --
>
>.--= ULLA! =-.
> \ http://gergo.erdi.hu   \
>  `---= ge...@erdi.hu =---'


[CI] macOS builds

2021-06-05 Thread Moritz Angermann
Hi there!

You might have seen failed, stuck, or pending darwin builds. The CI
builders we were generously donated have ~250GB of disk space (which should
be absolutely adequate for what we do), but macOS Big Sur does some odd
reservation of 200GB in /System/Volumes/Data, despite automatic
updates being disabled and Time Machine being disabled.

It used to happen only when the system was expecting an update to be
performed, and the 200GB were freed after the update was done. After the
latest update to 11.4, however, it seems to have not freed that space. This
leaves the CI machine with ~50GB for the system + build tools + gitlab
checkouts and builds, and they frequently run out of space :-/

If someone knows how to prevent the system from doing stupid stuff like
this (my hunch is that it keeps a pre-update backup of the system for
disaster recovery), please come forward; my Google searches haven't
revealed anything useful yet.
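If the hunch about a pre-update backup is right, the reserved space may be
held by APFS local snapshots, which `tmutil` can list and delete. A sketch,
assuming the usual `com.apple.TimeMachine.<date>.local` naming; the sample
output below is made up for illustration, since this only really runs on
macOS:

```shell
# On the builder one would run:  tmutil listlocalsnapshots /
# Assumed sample output (made up for illustration):
sample='com.apple.TimeMachine.2021-06-04-120000.local
com.apple.TimeMachine.2021-06-05-040000.local'

# Extract the date stamps that `tmutil deletelocalsnapshots` expects:
dates=$(printf '%s\n' "$sample" \
  | sed -n 's/^com\.apple\.TimeMachine\.\([0-9-]*\)\.local$/\1/p')
echo "$dates"

# Then, per date stamp, reclaim the space:
#   sudo tmutil deletelocalsnapshots "$date"
```

No idea yet whether Big Sur's 200GB reservation actually shows up as such
snapshots; this is just the first thing I'd check.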

I have filed a TSI with Apple (still had a few on my developer account),
but I don't expect them to come back to me before the end of June. Next
week is WWDC, and there will be a massive backlog of issues that queued up
leading up to, and during the WWDC.  I've also only had very marginal
success with them resolving issues that were not "you wrote this program
wrong".

If everything fails, maybe the solution is to attach some USB-C SSDs to the
Macs and run the GitLab builds exclusively on those disks. I'm a bit
concerned about performance, but we would have to see.

Any ideas are welcome; please also feel free to hit me up on
libera.chat#ghc or the Haskell Foundation Slack.

Cheers,
 Moritz


Re: GHC and the future of Freenode

2021-05-21 Thread Moritz Angermann
Fair point. Looking at the last few days, I'd say it's closer to 100% on
Libera. Lots of people just switched. Quite surprising.

On Fri, 21 May 2021 at 7:14 PM, Jens Petersen  wrote:

> My vote goes for Matrix.
>
> I am not a heavy user yet, but I hope this episode helps to drive more
> people to it away from irc.
> Having half the people on Freenode and the other half on Libera seems the
> worst possible outcome in the short- to mid-term.
> The Fedora project also has plans to move to Matrix as its main group chat
> messaging platform.
>
> Jens


Re: Darwin CI Status

2021-05-19 Thread Moritz Angermann
Matt has access to the M1 builder in my closet now. The darwin performance
issue has mainly been present since Big Sur, and is (afaik) primarily due
to the number of DYLD_LIBRARY_PATH entries we pass to GHC invocations. The
system linker spends the majority of its time in the kernel, stat'ing and
calling getdirentries (or some similar directory call) for each and every
possible path.

Switching to Hadrian will cut the time from ~5h to ~2h. At some point we
had make builds under 90 minutes by simply killing all the
DYLD_LIBRARY_PATH logic we ever had, but that broke bindists.

The CI now has time values attached and a summary at the end, which
highlights time spent in system and in user mode. This was up to 80% sys /
20% user, went to something like 20% sys / 80% user after nuking all the
DYLD_LIBRARY_PATH entries, and with Hadrian it's closer to ~25% sys / 75%
user.

Of note, this is mostly due to time spent during the *test-suite*, not the
actual build. For the actual build, make and Hadrian are comparable, though
I've oddly seen Hadrian have a much higher variance in how long it takes to
*build* GHC, whereas the make build was more consistent.

The test-suite quite notoriously calls GHC *a lot of times*, which makes
any linker issue due
to DYLD_LIBRARY_PATH (and similar lookups) much worse.
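A back-of-the-envelope sketch of that multiplication; all numbers below are
assumptions for illustration (real counts would have to be measured, e.g.
with dtruss):

```shell
# Worst case, dyld probes every DYLD_LIBRARY_PATH entry for every dynamic
# library, on every GHC start; the testsuite multiplies that by thousands
# of invocations. All numbers here are made up for illustration.
invocations=7000   # rough order of testsuite GHC starts
libraries=30       # dynamic libraries per start
lean=$((  invocations *  2 * libraries ))  # nearly empty search path
bloat=$(( invocations * 40 * libraries ))  # the pile of paths we pass today
echo "probes: lean=$lean bloat=$bloat"
```

Even with made-up numbers, the point stands: the search-path length enters
the cost multiplicatively, so the testsuite amplifies it enormously.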

If we finally split building and testing, I believe we'd see this more
clearly. Maybe this is motivation enough for someone to come forward and
break build/test into two CI steps?
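To make the split concrete, here is what a two-stage setup could look like
in GitLab CI. This is a sketch only: the job names, paths, and the
`--test-compiler` wiring are assumptions for illustration, not the real
ghc/ci configuration.

```yaml
# Sketch only; job names and paths are hypothetical.
stages:
  - build
  - test

build-x86_64-darwin:
  stage: build
  script:
    - ./boot && ./configure
    - hadrian/build binary-dist
  artifacts:
    paths:
      - _build/bindist/        # hand the bindist to the test job

test-x86_64-darwin:
  stage: test
  needs: [build-x86_64-darwin]
  script:
    # run the testsuite against the already-built compiler
    - hadrian/build test --test-compiler=_build/bindist/bin/ghc
```

With `needs:` the test job starts as soon as its build finishes, and build
and test would finally show up as separate timing numbers.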

Cheers,
 Moritz

On Wed, May 19, 2021 at 4:14 PM Matthew Pickering <
matthewtpicker...@gmail.com> wrote:

> Hi all,
>
> The darwin pipelines are gumming up the merge pipeline as they are
> taking over 4 hours to complete on average.
>
> I am going to disable them -
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5785
>
> Please can someone give me access to one of the M1 builders so I can
> debug why the tests are taking so long. Once I have fixed the issue
> then I will enable the pipelines.
>
> Cheers,
>
> Matt
>


Re: Options for targeting Windows XP?

2021-03-29 Thread Moritz Angermann
>
>* Upstream changes into Cabal to make your new compiler a first-class
>  citizen. This is what GHCJS did.


Just a word of caution: please don't do this. It leads to a non-negligible
maintenance burden on your side and on the Cabal side. Rather, try as hard
as you can to make your compiler behave like GHC with respect to Cabal, or
add generic support for more powerful compilers to Cabal. Adding special
handling for one additional compiler will just result in bitrot, odd quirks
that only happen with that one compiler, and a maintenance nightmare for
everyone involved.

We will be ripping out the GHCJS custom logic from Cabal, and I've also
advised the Asterius author not to go down that route.

My suggestion--if I may--is to try to build a C-like toolchain around your
compiler: one that has some notion of a compiler, archiver, and linker,
where those could be empty shell wrappers, or no-ops, depending on your
target.

Cheers,
 Moritz

On Tue, Mar 30, 2021 at 11:08 AM Ben Gamari  wrote:

> Clinton Mead  writes:
>
> > Thanks again for the detailed reply Ben.
> >
> > I guess the other dream of mine is to give GHC a .NET backend. For my
> > problem it would be the ideal solution, but it looks like other attempts
> in
> > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with
> > updates to GHC. So I'm sure it's not trivial.
> >
> > It would be quite lovely though if I could generate .NET + Java + even
> > Python bytecode from GHC.
> >
> > Whilst not solving my immediate problem, perhaps my efforts are best
> spent
> > in giving GHC a plugin architecture for backends (or if one already
> > exists?) trying to make a .NET backend.
> >
> This is an interesting (albeit ambitious, for the reasons others have
> mentioned) idea. In particular, I think the CLR has a slightly advantage
> over the JVM as a Haskell target in that it has native tail-call
> support [1]. This avoids a fair amount of complexity (and performance
> overhead) that Eta had to employ to work around this lack in the JVM.
>
> I suspect that writing an STG -> CLR IR wouldn't itself be difficult.
> The hard part is dealing with the primops, RTS, and core libraries.
>
> [1]
> https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/56c08k0k(v=vs.95)?redirectedfrom=MSDN
>
> > I believe "Csaba Hruska" is working in this space with GRIN, yes?
>
> Csaba is indeed using GHC's front-end and Core pipeline to feed his own
> compilation pipeline. However, I believe his approach is currently quite
> decoupled from GHC. This may or may not complicate the ability to
> integrate with the rest of the ecosystem (e.g. Cabal; Csaba, perhaps you
> could
> comment here?)
>
>
> >
> > I read SPJs paper on Implementing Lazy Functional Languages on Stock
> > Hardware: The Spineless Tagless G-machine
> > <
> https://www.microsoft.com/en-us/research/publication/implementing-lazy-functional-languages-on-stock-hardware-the-spineless-tagless-g-machine/
> >
> > which
> > implemented STG in C and whilst it wasn't trivial, it didn't seem
> > stupendously complex (even I managed to roughly follow it). I thought to
> > myself also, implementing this in .NET would be even easier because I can
> > hand off garbage collection to the .NET runtime so there's one less thing
> > to worry about. I also, initially, don't care _too_ much about
> performance.
> >
> Indeed, STG itself is reasonably straightforward. Implementing tagged
> unions in the CLR doesn't even look that hard (F# does it, afterall).
> However, there are plenty of tricky bits:
>
>  * You still need to implement a fair amount of RTS support for a full
>implementation (e.g. light-weight threads and STM)
>
>  * You need to shim-out or reimplement the event manager in `base`
>
>  * What do you do about the many `foreign import`s used by, e.g.,
>`text`?
>
>  * How do you deal with `foreign import`s elsewhere?
>
> > Of course, there's probably a whole bunch of nuance. One actually needs
> to,
> > for example, represent all the complexities of GADTs into object
> orientated
> > classes, maybe converting sum types to inheritance hierarchies with
> Visitor
> > Patterns. And also you'd actually have to make sure to do one's best to
> > ensure exposed Haskell functions look like something sensible.
> >
> > So I guess, given I have a bit of an interest here, what would be the
> best
> > approach if I wanted to help GHC develop more backends and into an
> > architecture where people can add backends without forking GHC? Where
> could
> > I start helping that effort? Should I contact "Csaba Hruska" and get
> > involved in GRIN? Or is there something that I can start working on in
> GHC
> > proper?
> >
> At the moment we rather lack a good model for how new backends should
> work. There are quite a few axes to consider here:
>
>  * How do core libraries (e.g. `text`) work? Various choices are:
>
>* Disregard the core libraries (along with most of Hackage) and just
>  take the 

Re: Options for targeting Windows XP?

2021-03-26 Thread Moritz Angermann
I believe there is a bit of a misconception about what requires a new
backend. GHC has a number of different intermediate representations from
which one can take off to build a backend; the STG and Cmm ones are the
most popular. All our native code generators and the LLVM code gen take off
from the Cmm one. Whether or not that is the correct input representation
for your target largely depends on the target and the design of the code
generator. GHCJS takes off from STG, and so does Csaba's GRIN work via the
external STG, I believe. IIRC Asterius takes off from Cmm. I don't remember
the details for Eta.

Why fork? Do you want to deal with GHC and GHC's development? If not,
fork. Do you want to keep up with GHC's development? Then maybe don't
fork. Do you think your compiler can stand on its own and doesn't follow
GHC much, except for being a Haskell compiler? By all means, fork.

Eta is a bit special here: Eta forked off and basically started
customising their Haskell compiler specifically for the JVM, which also
allowed them to make radical changes to GHC that would not have been
permissible in mainline GHC. (Mainline GHC tries to support multiple
platforms and architectures at all times; breaking any of them isn't really
an option that can be taken lightly.) Eta also started having Etlas, a
custom Cabal, ... I'd still like to see a lot from Eta and its ecosystem
re-integrated into GHC. There have to be good ideas there that can be
brought back. It just needs someone to go look and do the work.

GHCJS is being aligned more with GHC right now precisely to eventually
re-integrate it with GHC.

Asterius went down the same path, likely inspired by GHCJS, but I think I
was able to convince the author that eventual upstreaming should be the
goal and the project should try to stay as close as possible to GHC for
that reason.

Now if you consider adding a codegen backend, this can be done, but again
depends on your exact target. I'd love to see a CLR target, yet I don't
know enough about CLR to give informed suggestions here.

If you have a toolchain that functions sufficiently similarly to a stock C
toolchain (or you can easily make your toolchain look sufficiently similar
to one), most of it will just work. If you can separate your build into
compilation of source to some form of object code, aggregation of object
code (archives), and some form of linking (objects and archives into shared
objects or executables), you can likely plug your toolchain into GHC (and
Cabal) and have it work, once you have taught GHC how to produce object
code for your target language.
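On the "plug in your toolchain" point: GHC already exposes `-pgm*` flags
for swapping out individual toolchain pieces. A minimal sketch, where the
`my-target-*` tools are hypothetical placeholders (we only print the
command line rather than run it):

```shell
# Real GHC flags, hypothetical toolchain names: -pgmc sets the C compiler,
# -pgma the assembler, -pgml the linker. Only echoed here, not executed.
GHC_TOOLCHAIN_OPTS="-pgmc my-target-cc -pgma my-target-as -pgml my-target-ld"
cmd="ghc $GHC_TOOLCHAIN_OPTS -o Main Main.hs"
echo "$cmd"
```

Cabal has matching `--with-PROG` options, so once the wrappers behave
sufficiently like their stock C counterparts, both layers can be pointed
at them.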

If your toolchain does stuff differently, a bit more work is involved in
teaching GHC (and Cabal) about that.

This all only gives you *Haskell* though. You still need the runtime
system. If you have a C -> target compiler, you can try to re-use GHC's
RTS. This is what the WebGHC project did: they re-used GHC's RTS and
implemented a shim for Linux syscalls, so that the RTS thinks it's running
on some musl-like Linux. You most likely want something proper here
eventually, but this might be a first stab at getting something working.

Next you'll have to deal with c-bits: Haskell packages that link against C
parts. This is going to be challenging, not impossible but challenging, as
much of the Haskell ecosystem expects the ability to compile C files and
use those for low-level system interaction.

You can use Hackage overlays to build a set of patched packages once you
have your codegen working. At that point you could start patching ecosystem
packages to work on your target until your changes are upstreamed, and
provide your users with a Hackage overlay (essentially Hackage plus patches
for specific packages).

Hope this helps.

You'll find most of us on irc.freenode.net#ghc

On Fri, Mar 26, 2021 at 1:29 PM Clinton Mead  wrote:

> Thanks again for the detailed reply Ben.
>
> I guess the other dream of mine is to give GHC a .NET backend. For my
> problem it would be the ideal solution, but it looks like other attempts in
> this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with
> updates to GHC. So I'm sure it's not trivial.
>
> It would be quite lovely though if I could generate .NET + Java + even
> Python bytecode from GHC.
>
> Whilst not solving my immediate problem, perhaps my efforts are best spent
> in giving GHC a plugin architecture for backends (or if one already
> exists?) trying to make a .NET backend.
>
> I believe "Csaba Hruska" is working in this space with GRIN, yes?
>
> I read SPJs paper on Implementing Lazy Functional Languages on Stock
> Hardware: The Spineless Tagless G-machine
> 
>  which
> implemented STG in C and whilst it wasn't trivial, it didn't seem
> stupendously complex (even I managed to roughly follow 

Re: On CI

2021-03-24 Thread Moritz Angermann
Yes, this is exactly one of the issues that marge might run into as well:
the aggregate ends up performing differently from the individual ones. Now
we have marge to ensure that at least the aggregate builds together, which
is the whole point of these merge trains: not to end up in a situation
where two patches that are fine on their own produce a broken merged state
that doesn't build anymore.

Now we have marge to ensure every commit is buildable. Next we should run
regression tests on all commits on master (and that includes each and
every one that marge brings into master). Then we need visualisation that
tells us how performance metrics go up/down over time, so we can drill
down into commits if they yield interesting results either way.

Now let's say you had a commit that should have made GHC 50% faster across
the board, but somehow, after aggregation with other patches, this didn't
happen anymore. We'd still expect this to show up somehow in each of the
individual commits on master, right?

On Wed, Mar 24, 2021 at 8:09 PM Richard Eisenberg  wrote:

> What about the case where the rebase *lessens* the improvement? That is,
> you're expecting these 10 cases to improve, but after a rebase, only 1
> improves. That's news! But a blanket "accept improvements" won't tell you.
>
> I'm not hard against this proposal, because I know precise tracking has
> its own costs. Just wanted to bring up another scenario that might be
> factored in.
>
> Richard
>
> > On Mar 24, 2021, at 7:44 AM, Andreas Klebinger 
> wrote:
> >
> > After the idea of letting marge accept unexpected perf improvements and
> > looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759
> > which failed because of a single test, for a single build flavour
> > crossing the
> > improvement threshold where CI fails after rebasing I wondered.
> >
> > When would accepting an unexpected perf improvement ever backfire?
> >
> > In practice I either have a patch that I expect to improve performance
> > for some things
> > so I want to accept whatever gains I get. Or I don't expect improvements
> > so it's *maybe*
> > worth failing CI for in case I optimized away some code I shouldn't or
> > something of that
> > sort.
> >
> > How could this be actionable? Perhaps having a set of indicators for CI of
> > "Accept allocation decreases"
> > "Accept residency decreases"
> >
> > Would be saner. I have personally *never* gotten value out of the
> > requirement
> > to list the individual tests that improve. Usually a whole lot of them do.
> > Some cross
> > the threshold so I add them. If I'm unlucky I have to rebase and a new
> > one might
> > make it across the threshold.
> >
> > Being able to accept improvements (but not regressions) wholesale might
> be a
> > reasonable alternative.
> >
> > Opinions?
> >


Re: GHC 8.10 backports?

2021-03-24 Thread Moritz Angermann
More like abandoned backport attempt :D

On Wed, Mar 24, 2021 at 7:29 PM Andreas Klebinger 
wrote:

> Yes, only changing the rule did indeed cause regressions when not
> including the string changes. I don't think it's worth having one
> without the other.
>
> But it seems you already backported this?
> See https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5263
>
> Cheers
> Andreas
> Am 22/03/2021 um 07:02 schrieb Moritz Angermann:
>
> The commit message from
> https://gitlab.haskell.org/ghc/ghc/-/commit/f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495,
>
> makes the changes to string seem required. Applying the commit on its own
> doesn't apply cleanly and pulls in quite a
> bit of extra dependent commits. Just applying the elem rules appears
> rather risky. Thus, while I agree that it would be a nice fix to have, the
> amount of necessary code changes makes me rather uncomfortable for a minor
> release :-/
>
> On Mon, Mar 22, 2021 at 1:58 PM Gergő Érdi  wrote:
>
>> Thanks, that makes it less appealing. In the original thread, I got no
>> further replies after my email announcing my "discovery" of that commit, so
>> I thought that was the whole story.
>>
>> On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni 
>> wrote:
>>
>>> On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote:
>>>
>>> > I'd love to have this in a GHC 8.10 release:
>>> > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html
>>>
>>> This is already in 9.0, 9.2 and master, but it is a rather non-trivial
>>> change, given all the new work that went into the String case.  So I am
>>> not sure it is small/simple enough to make for a compelling backport.
>>>
>>> There's a lot of recent activity in this space.  See also
>>> <https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5259>, which is not
>>> yet merged into master, and might still be eta-reduced one more step).
>>>
>>> I don't know whether such optimisation tweaks (not a bugfix) are in
>>> scope for backporting, we certainly need to be confident they'll not
>>> cause any new problems.  FWIW, 5259 is dramatically simpler...
>>>
>>> Of course we also have
>>> <https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4890> in much the
>>> same territory, but there we're still blocked on someone figuring out
>>> what's going on with the 20% compile-time hit with T13056, and whether
>>> that's acceptable or not...
>>>
>>> --
>>> Viktor.


Re: GHC 8.10 backports?

2021-03-23 Thread Moritz Angermann
Thanks! I’ll make sure not to forget that one.
I’m afraid 8.10 will be delayed yet again a bit
as we find ourselves in docker purgatory.

On Tue, 23 Mar 2021 at 2:18 PM, Phyx  wrote:

> Hi,
>
> I currently have https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5055
> marked for backports but don't know if it was done or not.
>
> Thanks,
> Tamar
>
> Sent from my Mobile
>
> On Mon, Mar 22, 2021, 04:33 Moritz Angermann 
> wrote:
>
>> Hi there!
>>
>> Does anyone have any backports they'd like to see for consideration for
>> 8.10.5?
>>
>> Cheers,
>>  Moritz


Re: GHC 8.10 backports?

2021-03-22 Thread Moritz Angermann
The commit message from
https://gitlab.haskell.org/ghc/ghc/-/commit/f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495
,
makes the changes to String seem required. Applying the commit on its own
doesn't apply cleanly and pulls in quite a bit of extra dependent commits.
Just applying the elem rules appears rather risky. Thus, while I agree that
it would be a nice fix to have, the amount of necessary code changes makes
me rather uncomfortable for a minor release :-/

On Mon, Mar 22, 2021 at 1:58 PM Gergő Érdi  wrote:

> Thanks, that makes it less appealing. In the original thread, I got no
> further replies after my email announcing my "discovery" of that commit, so
> I thought that was the whole story.
>
> On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni 
> wrote:
>
>> On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote:
>>
>> > I'd love to have this in a GHC 8.10 release:
>> > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html
>>
>> This is already in 9.0, 9.2 and master, but it is a rather non-trivial
>> change, given all the new work that went into the String case.  So I am
>> not sure it is small/simple enough to make for a compelling backport.
>>
>> There's a lot of recent activity in this space.  See also
>> , which is not
>> yet merged into master, and might still be eta-reduced one more step).
>>
>> I don't know whether such optimisation tweaks (not a bugfix) are in
>> scope for backporting, we certainly need to be confident they'll not
>> cause any new problems.  FWIW, 5259 is dramatically simpler...
>>
>> Of course we also have
>>  in much the
>> same territory, but there we're still blocked on someone figuring out
>> what's going on with the 20% compile-time hit with T13056, and whether
>> that's acceptable or not...
>>
>> --
>> Viktor.


Re: GitLab is down: urgent

2021-03-21 Thread Moritz Angermann
Davean has resurrected gitlab for now.

On Mon, Mar 22, 2021 at 12:31 PM Moritz Angermann <
moritz.angerm...@gmail.com> wrote:

> It appears as if gitlab went down again, this time since around 4AM UTC on
> Monday.
>
> On Sun, Mar 21, 2021 at 9:49 PM Ben Gamari  wrote:
>
>> Moritz Angermann  writes:
>>
>> > Just a heads up everyone. Gitlab appears down again.  This seemed to
>> have
>> > happened around Sunday, 4AM UTC.
>> >
>> > Everyone have a blissful Sunday!
>> >
>> It is back up, again. It appears that GitLab's backup retention logic is
>> now either broken or has changed since it is now failing to delete old
>> backups, resulting in a full disk. I'll be keeping an eye on this until we
>> sort out the root cause.
>>
>> Cheers,
>>
>> - Ben
>>
>


GHC 8.10 backports?

2021-03-21 Thread Moritz Angermann
Hi there!

Does anyone have any backports they'd like to see for consideration for
8.10.5?

Cheers,
 Moritz


Re: GitLab is down: urgent

2021-03-21 Thread Moritz Angermann
It appears as if gitlab went down again, this time since around 4AM UTC on
Monday.

On Sun, Mar 21, 2021 at 9:49 PM Ben Gamari  wrote:

> Moritz Angermann  writes:
>
> > Just a heads up everyone. Gitlab appears down again.  This seemed to have
> > happened around Sunday, 4AM UTC.
> >
> > Everyone have a blissful Sunday!
> >
> It is back up, again. It appears that GitLab's backup retention logic is
> now either broken or has changed since it is now failing to delete old
> backups, resulting in a full disk. I'll be keeping an eye on this until we
> sort out the root cause.
>
> Cheers,
>
> - Ben
>


Re: GitLab is down: urgent

2021-03-20 Thread Moritz Angermann
Just a heads up everyone. Gitlab appears down again.  This seemed to have
happened around Sunday, 4AM UTC.

Everyone have a blissful Sunday!

On Sat, 20 Mar 2021 at 6:05 PM, Giorgio Marinelli 
wrote:

> I can also help (~UTC+1), I've a long history and experience in
> systems management and engineering.
>
> Best,
>
> Giorgio Marinelli
> https://marinelli.dev/cv
>
> On Sat, 20 Mar 2021 at 00:31, Moritz Angermann
>  wrote:
> >
> > I can try to step up and be backup on the other side of the planet. Ben
> and I are almost 12hs apart exactly.
> >
> > On Sat, 20 Mar 2021 at 1:32 AM, Richard Eisenberg 
> wrote:
> >>
> >>
> >>
> >> On Mar 19, 2021, at 12:44 PM, howard.b.gol...@gmail.com wrote:
> >>
> >> I would like to help however I can. I already maintain the Haskell
> >> wiki, and I would like to improve and document its configuration using
> >> devops techniques, preferably consistent with gitlab.haskell.org.
> >>
> >>
> >> Thanks, Howard!
> >>
> >> I will try to take you up on your offer to help: do you think you could
> start this documentation process more broadly? That is, not just covering
> the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to
> document the wiki's configuration consistently with gitlab.haskell.org,
> but I don't know that the latter is documented!)
> >>
> >> Ideally, I would love to know what services haskell.org hosts, who
> runs them, and what happens if those people become unavailable. There's a
> zoo of services out there, and knowing who does what would be invaluable.
> >>
> >> Of course, anyone can start this process, but it takes someone willing
> to stick with it and see it through for a few weeks. Since Howard boldly
> stepped forward, I nominate him. :)
> >>
> >> Thanks,
> >> Richard


Re: GitLab is down: urgent

2021-03-19 Thread Moritz Angermann
I can try to step up and be backup on the other side of the planet. Ben and
I are almost 12hs apart exactly.

On Sat, 20 Mar 2021 at 1:32 AM, Richard Eisenberg  wrote:

>
>
> On Mar 19, 2021, at 12:44 PM, howard.b.gol...@gmail.com wrote:
>
> I would like to help however I can. I already maintain the Haskell
> wiki, and I would like to improve and document its configuration using
> devops techniques, preferably consistent with gitlab.haskell.org.
>
>
> Thanks, Howard!
>
> I will try to take you up on your offer to help: do you think you could
> start this documentation process more broadly? That is, not just covering
> the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to
> document the wiki's configuration consistently with gitlab.haskell.org,
> but I don't know that the latter is documented!)
>
> Ideally, I would love to know what services haskell.org hosts, who runs
> them, and what happens if those people become unavailable. There's a zoo of
> services out there, and knowing who does what would be invaluable.
>
> Of course, anyone can start this process, but it takes someone willing to
> stick with it and see it through for a few weeks. Since Howard boldly
> stepped forward, I nominate him. :)
>
> Thanks,
> Richard


Re: On CI

2021-03-17 Thread Moritz Angermann
I am not advocating dropping perf tests for merge requests; I just want
them not to be fatal for marge batches. Yes, this means that a bunch of
unrelated merge requests could each be fine with respect to the perf checks
per merge request, while the aggregate fails them. And then the next MR
against the merged aggregate would start failing. Even that is a pretty bad
situation, imo.

I honestly don't have a good answer; I just see marge work on batches, over
and over and over again, just to fail. Eventually marge should figure out a
subset of the merges that fits into the perf window, but that might take up
to 10 tries, so up to ~30+ hours, during which no merge request lands in
GHC. I find that rather unacceptable.
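To put a number on that, a rough sketch with assumed timings (one full
batch pipeline is taken to be ~3 hours here; adjust to taste):

```shell
# Both numbers are assumptions for illustration, not measurements.
attempts=10        # batches marge burns through before one finally passes
pipeline_hours=3   # assumed length of one full validation pipeline
stall=$(( attempts * pipeline_hours ))
echo "worst-case stall: ${stall}h with no MR landing"
```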

I think we need better visualisation of the perf regressions that happen on
master. Ben has some WIP for this, and I think John said there might be
some way to add a nice (maybe Reflex) UI to it. If we can see regressions
on master easily, and go from "oh, at this point in time GHC got worse" to
"this is the commit", we might be able to figure it out.

But what do we expect of patch authors? Right now, if five people write
patches to GHC, and each of them eventually manages to get their MR green
after a long review, they finally see it assigned to marge, and then it
starts failing. Their patch on its own was fine, but the aggregate with
other people's code leads to regressions. So we now expect all the patch
authors together to try to figure out what happened? Figuring out why
something regressed is hard enough, and we only have very few people who
are actually capable of debugging this. Thus I believe it would end up with
Ben, Andreas, Matthew, Simon, ... or someone else from GHC HQ anyway to
figure out why it regressed, be it at the review stage, while dissecting a
marge aggregate, or on master.

Thus I believe in most cases we'd have to look at the regressions anyway,
and right now we just make working on GHC a rather convoluted and
depressing job. Raising the barrier to entry by also requiring everyone to
have absolutely stellar perf-regression-hunting skills is quite a
challenge.

There is also the question of whether our synthetic benchmarks actually
measure real-world performance. Do the micro benchmarks translate into the
same regressions in, say, building aeson, vector, or Cabal? The latter is
what most practitioners care about more than the micro benchmarks.

Again, I'm absolutely not in favour of GHC regressing; it's slow enough as
it is. I just think CI should be assisting us, not holding development
back.

Cheers,
 Moritz

On Wed, Mar 17, 2021 at 5:54 PM Spiwack, Arnaud 
wrote:

> Ah, so it was really two identical pipelines (one for the branch where
> Margebot batches commits, and one for the MR that Margebot creates before
> merging). That's indeed a non-trivial amount of purely wasted
> computer-hours.
>
> Taking a step back, I am inclined to agree with the proposal of not
> checking stat regressions in Margebot. My high-level opinion on this is
> that perf tests don't actually test the right thing. Namely, they don't
> prevent performance drift over time (if a given test is allowed to degrade
> by 2% every commit, it can take a 100% performance hit in just 35 commits).
> While it is important to measure performance, and to avoid too egregious
> performance degradation in a given commit, it's usually performance over
> time which matters. I don't really know how to apply it to collaborative
> development, and help maintain healthy performance. But flagging
> performance regressions in MRs, while not making them block batched merges
> sounds like a reasonable compromise.
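The drift arithmetic above is easy to verify: a tolerated 2% regression per commit compounds multiplicatively, so roughly 35 commits suffice to double compile times. A quick check:

```python
def drift_factor(commits: int, per_commit_tolerance: float) -> float:
    """Total slowdown factor if every commit exhausts its tolerated regression."""
    return (1.0 + per_commit_tolerance) ** commits

# 35 commits at +2% each compound to roughly a 2x slowdown, i.e. a ~100% hit
print(f"{drift_factor(35, 0.02):.3f}")
```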
>
>
> On Wed, Mar 17, 2021 at 9:34 AM Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
>
>> *why* is a very good question. The MR fixing it is here:
>> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275
>>
>> On Wed, Mar 17, 2021 at 4:26 PM Spiwack, Arnaud 
>> wrote:
>>
>>> Then I have a question: why are there two pipelines running on each
>>> merge batch?
>>>
>>> On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann <
>>> moritz.angerm...@gmail.com> wrote:
>>>
>>>> No it wasn't. It was about the stat failures described in the next
>>>> paragraph. I could have been more clear about that. My apologies!
>>>>
>>>> On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud <
>>>> arnaud.spiw...@tweag.io> wrote:
>>>>
>>>>>
>>>>> and if either of both (see below) failed, marge's merge would fail as
>>>>>> well.
>>>>>>
>>>>>
>>>>> Re: “see below” is this referring to a missing part of your email?
>>>>>
>>>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: On CI

2021-03-17 Thread Moritz Angermann
*why* is a very good question. The MR fixing it is here:
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275

On Wed, Mar 17, 2021 at 4:26 PM Spiwack, Arnaud 
wrote:

> Then I have a question: why are there two pipelines running on each merge
> batch?
>
> On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
>
>> No it wasn't. It was about the stat failures described in the next
>> paragraph. I could have been more clear about that. My apologies!
>>
>> On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud 
>> wrote:
>>
>>>
>>> and if either of both (see below) failed, marge's merge would fail as
>>>> well.
>>>>
>>>
>>> Re: “see below” is this referring to a missing part of your email?
>>>
>>


Re: On CI

2021-03-17 Thread Moritz Angermann
No it wasn't. It was about the stat failures described in the next
paragraph. I could have been more clear about that. My apologies!

On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud 
wrote:

>
> and if either of both (see below) failed, marge's merge would fail as well.
>>
>
> Re: “see below” is this referring to a missing part of your email?
>


On CI

2021-03-16 Thread Moritz Angermann
Hi there!

Just a quick update on our CI situation. Ben, John, Davean and I discussed
CI yesterday: what we can do about it, as well as some minor notes on why
we are frustrated with it. This is an open invitation to anyone who in
earnest wants to work on CI. Please come forward and help! We'd be glad to
have more people involved!

First the good news: over the last few weeks we've seen that we *can*
improve CI performance quite substantially, and the goal is now to have MRs
go through CI within at most 3hs. There are some ideas on how to make this
even faster, especially on wide (high core count) machines; however, that
will take a bit more time.

Now to the thornier issue: stat failures. We do not want GHC to regress,
and I believe everyone is on board with that mission. Yet we have just
witnessed a train of marge trials all failing due to a -2% regression in a
few tests, and we've thus been blocked from getting stuff into master for
at least another day. This is (in my opinion) not acceptable! We just had
five days of nothing working because master was broken and subsequently all
CI pipelines kept failing; we have thus effectively wasted a week. We can
mitigate the latter part by enforcing marge for all merges to master (and
with faster pipeline turnaround times this might be more palatable than
with 9-12h turnaround times -- when you need to get something done! ha!),
but that won't help us with cases where marge can't find a set of buildable
MRs, because she just keeps hitting a combination of MRs that somehow
together increase or decrease metrics.

We have three knobs to adjust:
- Make GHC build faster / make the testsuite run faster.
  There is some rather interesting work going on about parallelizing
  (earlier) during builds. We've also seen that we've wasted enormous
  amounts of time during darwin builds in the kernel, because of a bug in
  the test driver.
- Use faster hardware.
  We've seen that just this can cut windows build times from 220min to
  80min.
- Reduce the number of builds.
  We used to build two pipelines for each marge merge, and if either of the
  two (see below) failed, marge's merge would fail as well. So not only did
  we build twice as much as we needed, we also doubled our chances of
  hitting bogus build failures.

We need to do something about this, and I'd advocate for just not making
stats fail with marge. Build errors of course, but stat failures, no. We
should then have a separate dashboard (Ben has some old code lying around
for this, which someone would need to pick up and polish, ...) that tracks
GHC's performance for each commit to master, with easy access from the
dashboard to the offending commit. We will also need to consider the
implications of synthetic micro benchmarks, as opposed to, say, building
Cabal or other packages, which reflect more of the real-world experience of
users using GHC.
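What "not making stats fail with marge" could look like on the CI side can be sketched as a small gate: metric changes outside tolerance fail a regular MR pipeline but only warn in a batch pipeline. The function name, tolerance, and batch-detection convention below are illustrative assumptions, not how GHC's CI is actually configured:

```python
import sys

def gate_stat_change(test: str, delta_percent: float,
                     tolerance: float = 1.0, batch: bool = False) -> bool:
    """Return True iff the pipeline should fail on this metric change.

    Hypothetical policy: stat changes outside the tolerance fail regular MR
    pipelines, but only warn in batch (marge) pipelines, where the offending
    MR can't be identified anyway -- those would instead be tracked on a
    per-commit dashboard.
    """
    if abs(delta_percent) <= tolerance:
        return False  # within the accepted window, never block
    if batch:
        print(f"warning: {test} changed by {delta_percent:+.1f}% (not blocking batch)")
        return False
    print(f"error: {test} changed by {delta_percent:+.1f}%", file=sys.stderr)
    return True
```

The design choice here is that the same measurement is always taken; only the decision to block differs by pipeline kind, so the dashboard data stays complete.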

I will try to provide a data-driven report on GHC's CI on a bi-weekly or
monthly basis going forward (we will have to see what the cost of writing
it up, and its usefulness, is). My sincere hope is that it will help us
better understand our CI situation, instead of just having some vague
complaints about it.

Cheers,
 Moritz


Re: Build failure -- missing dependency? Help!

2021-03-15 Thread Moritz Angermann
Hi Viktor,

- I believe the "test spaces" part is important and would need to be fixed;
if spaces break things, that is not desirable.

- For the relocations part, I'm happy to offer guidance and help for anyone
who wants to take a stab at it; right now I'm not in a position where I
could take this on myself, I'm afraid.

Cheers,
 Moritz

On Tue, Mar 16, 2021 at 12:29 AM Viktor Dukhovni 
wrote:

> On Mon, Mar 15, 2021 at 06:44:20AM -0400, Viktor Dukhovni wrote:
>
> > ..., the FreeBSD "validate --legacy"
> > successfully builds GHC.  [ The tests seem to all be failing, perhaps
> > the test driver scripts are not portable to FreeBSD, but previously
> > the compiler was not building. ]
>
> FWIW, the tests seem to fail for two reasons:
>
> 1.  The "install   dir" and "test   space" directories don't
> appear to be handled correctly.  I had to drop the spaces.
>
> 2.  On FreeBSD many tests run into the dreaded:
>
> unhandled ELF relocation(RelA) type 19
>
> Can anyone versed in Elf internals help with:
>
> https://gitlab.haskell.org/ghc/ghc/-/issues/19086
>
> --
> Viktor.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: GSOC Idea: Bytecode serialization and/or Fat Interface files

2021-03-12 Thread Moritz Angermann
I'd be happy to mentor anyone on either of these. The CI part is going to
be grueling, demotivational work with very long pauses in between, which is
why I didn't propose it yet.

I agree with John: I'm a bit skeptical that a student would be able to help
or pull anything off in the current state of things, with multiple parties
already actively involved, without being relegated to a spectator's
position.

On Sat, Mar 13, 2021 at 9:34 AM John Ericson 
wrote:

> Yes, see
> https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output
> where we (Obsidian) and IOHK have been planning together.
>
> I must say, I am a bit skeptical about a GSOC student being able to take this on
> successfully. I thought Fendor did a great job with multiple home units,
> for example, but we have still to finish merging all his work! The driver
> is perhaps the biggest cesspool of technical debt in GHC, and it will take
> a while to untangle let alone implement new features.
>
> I forget what the rules are for more incremental or multifaceted projects,
> but I would prefer an approach of trying to untangle things with no
> singular large goal. Or maybe we can involve a student with efforts to
> improve CI, attacking the root cause for why it's so hard to land things in
> the first place .
>
> John
> On 3/12/21 7:11 PM, Moritz Angermann wrote:
>
> Yes there is also John resumable compilation ideas. And the current
> performance work obsidian systems does.
>
> On Sat, 13 Mar 2021 at 6:21 AM, Cheng Shao  wrote:
>
>> I believe Josh has already been working on 2 some time ago? cc'ing him
>> to this thread.
>>
>> I'm personally in favor of 2 since it's also super useful for
>> prototyping whole-program ghc backends, where one can just read all
>> the CgGuts from the .hi files, and get all codegen-related Core for
>> free.
>>
>> Cheers,
>> Cheng
>>
>> On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal 
>> wrote:
>> >
>> > Hi all,
>> >
>> > This is following up on this recent discussion on the list concerning
>> fat
>> > interface files:
>> https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html
>> >
>> > Now that we have been accepted as a GSOC organisation, I think
>> > it would be a good project idea for a sufficiently motivated and
>> > advanced student. This is a call for mentors (and students as
>> > well!) who would be interested in this project
>> >
>> > The problem is the following:
>> >
>> > Haskell Language Server (and ghci with `-fno-code`) have very
>> > fast startup times for codebases which don't make use of Template
>> > Haskell, and thus don't require any code-gen to typecheck. This
>> > is because they can simply read the cached iface files generated by a
>> > previous compile and don't need to re-invoke the typechecker.
>> >
>> > However, as soon as TH is involved, we are forced to retypecheck and
>> > compile files, since it is not possible to restart the code-gen process
>> > starting with only a iface file. I can think of two ways to address this
>> > problem:
>> >
>> > 1. Allow bytecode to be serialized
>> >
>> > 2. Serialize desugared Core into iface files (fat interfaces), so that
>> > (byte)code-gen can be restarted from this point and doesn't need
>> >
>> > (1) might be challenging, but offers a few more advantages over (2),
>> > in that we can reduce the work done to load TH-heavy codebases to just
>> > a load of the cached bytecode objects from disk, and could make the
>> > load process (and times) for these codebases directly comparable to
>> > their TH-free cousins.
>> >
>> > It would also make ghci startup a lot faster with a warm cache of
>> > bytecode objects, bringing ghci startup times in line with those of
>> > -fno-code
>> >
>> > However (2) might be much easier to achieve and offers many
>> > of the same advantages, in that we would not need to re-run
>> > the compiler frontend or core-to-core optimisation phases.
>> > There is also already a (slightly bitrotted) implementation
>> > of (2) thanks to the work of Edward Yang.
>> >
>> > If any of this sounds exciting to you as a student or a mentor, please
>> > get in touch.
>> >
>> > In particular, I think (2) is a feasible project that can be completed
>> > with minimal mentoring effort. However, I'm only vaguely familiar with
>> > the details of the byte code generator, so 

Re: GSOC Idea: Bytecode serialization and/or Fat Interface files

2021-03-12 Thread Moritz Angermann
Yes there is also John resumable compilation ideas. And the current
performance work obsidian systems does.

On Sat, 13 Mar 2021 at 6:21 AM, Cheng Shao  wrote:

> I believe Josh has already been working on 2 some time ago? cc'ing him
> to this thread.
>
> I'm personally in favor of 2 since it's also super useful for
> prototyping whole-program ghc backends, where one can just read all
> the CgGuts from the .hi files, and get all codegen-related Core for
> free.
>
> Cheers,
> Cheng
>
> On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal 
> wrote:
> >
> > Hi all,
> >
> > This is following up on this recent discussion on the list concerning fat
> > interface files:
> https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html
> >
> > Now that we have been accepted as a GSOC organisation, I think
> > it would be a good project idea for a sufficiently motivated and
> > advanced student. This is a call for mentors (and students as
> > well!) who would be interested in this project
> >
> > The problem is the following:
> >
> > Haskell Language Server (and ghci with `-fno-code`) have very
> > fast startup times for codebases which don't make use of Template
> > Haskell, and thus don't require any code-gen to typecheck. This
> > is because they can simply read the cached iface files generated by a
> > previous compile and don't need to re-invoke the typechecker.
> >
> > However, as soon as TH is involved, we are forced to retypecheck and
> > compile files, since it is not possible to restart the code-gen process
> > starting with only a iface file. I can think of two ways to address this
> > problem:
> >
> > 1. Allow bytecode to be serialized
> >
> > 2. Serialize desugared Core into iface files (fat interfaces), so that
> > (byte)code-gen can be restarted from this point and doesn't need
> >
> > (1) might be challenging, but offers a few more advantages over (2),
> > in that we can reduce the work done to load TH-heavy codebases to just
> > a load of the cached bytecode objects from disk, and could make the
> > load process (and times) for these codebases directly comparable to
> > their TH-free cousins.
> >
> > It would also make ghci startup a lot faster with a warm cache of
> > bytecode objects, bringing ghci startup times in line with those of
> > -fno-code
> >
> > However (2) might be much easier to achieve and offers many
> > of the same advantages, in that we would not need to re-run
> > the compiler frontend or core-to-core optimisation phases.
> > There is also already a (slightly bitrotted) implementation
> > of (2) thanks to the work of Edward Yang.
> >
> > If any of this sounds exciting to you as a student or a mentor, please
> > get in touch.
> >
> > In particular, I think (2) is a feasible project that can be completed
> > with minimal mentoring effort. However, I'm only vaguely familiar with
> > the details of the byte code generator, so if (1) is a direction we want
> > to pursue, we would need a mentor familiar with the details of this part
> > of GHC.
> >
> > Cheers,
> > Zubin
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: On CI

2021-02-18 Thread Moritz Angermann
I'm glad to report that my math was off. But it was off only because I
assumed that we'd successfully build all windows configurations, which we
of course don't. Thus some builds fail faster.

Sylvain also provided a windows machine temporarily, until it expired.
This led to a slew of new windows wibbles. The CI script Ben wrote, and
generously used to help set up the new builder, seems to assume an older
Git install, and thus a path was broken, which -- thanks to gitlab -- led
to the brilliant error of just stalling. Next up, because we use msys2's
pacman to provision the windows builders, and pacman essentially gives us
symbols for packages to install, we ended up getting a newer autoconf onto
the new builder (and I assume this will happen with any other builders we
add as well). This new autoconf (which I've also run into on the M1s)
doesn't like our configure.ac/aclocal.m4 anymore and barfs; I wasn't able
to figure out how to force pacman to install an older version and *not*
give it some odd version suffix (which prevents it from working as a
drop-in replacement).

In any case we *must* update our autoconf files. So I guess the time is now.


On Wed, Feb 17, 2021 at 6:58 PM Moritz Angermann 
wrote:

> At this point I believe we have ample Linux build capacity. Darwin looks
> pretty good as well; the ~4 M1s we have should in principle also be able to
> build x86_64-darwin at acceptable speeds, although on Big Sur only.
>
> The aarch64-linux story is a bit constrained by the availability of
> powerful and fast CI machines, but probably bearable for the time being. I
> doubt anyone really looks at those jobs anyway, as they are permitted to
> fail. If aarch64 became a bottleneck, I’d be inclined to just disable them.
> With the NCG coming soon this will likely become much more bearable as
> well, even though we might want to run the nightly llvm builds.
>
> To be frank, I don’t see 9.2 happening in two weeks with the current CI.
>
> If we subtract aarch64-linux and windows builds we could probably do a
> full run in less than three hours maybe even less. And that is mostly
> because we have a serialized pipeline. I have discussed some ideas with Ben
> on prioritizing the first few stages by the faster ci machines to
> effectively fail fast and provide feedback.
>
> But yes. Working on ghc right now is quite painful due to long and
> unpredictable CI times.
>
> Cheers,
>  Moritz
>
> On Wed, 17 Feb 2021 at 6:31 PM, Sebastian Graf 
> wrote:
>
>> Hi Moritz,
>>
>> I, too, had my gripes with CI turnaround times in the past. Here's a
>> somewhat radical proposal:
>>
>>- Run "full-build" stage builds only on Marge MRs. Then we can assign
>>to Marge much earlier, but probably have to do a bit more of (manual)
>>bisecting of spoiled Marge batches.
>>   - I hope this gets rid of a bit of the friction of small MRs. I
>>   recently caught myself wanting to do a bunch of small, independent, but
>>   related changes as part of the same MR, simply because it's such a 
>> hassle
>>   to post them in individual MRs right now and also because it steals so 
>> much
>>   CI capacity.
>>- Regular MRs should still have the ability to easily run individual
>>builds of what is now the "full-build" stage, similar to how we can run
>>optional "hackage" builds today. This is probably useful to pin down the
>>reason for a spoiled Marge batch.
>>- The CI capacity we free up can probably be used to run a perf build
>>(such as the fedora release build) on the "build" stage (the one where we
>>currently run stack-hadrian-build and the validate-deb9-hadrian build), in
>>parallel.
>>- If we decide against the latter, a micro-optimisation could be to
>>cache the build artifacts of the "lint-base" build and continue the build
>>in the validate-deb9-hadrian build of the "build" stage.
>>
>> The usefulness of this approach depends on how many MRs cause metric
>> changes on different architectures.
>>
>> Another frustrating aspect is that if you want to merge an n-sized chain
>> of dependent changes individually, you have to
>>
>>- Open an MR for each change (initially the last change will be
>>comprised of n commits)
>>- Review first change, turn pipeline green   (A)
>>- Assign to Marge, wait for batch to be merged   (B)
>>- Review second change, turn pipeline green
>>- Assign to Marge, wait for batch to be merged
>>- ... and so on ...
>>
>> Note that (A) incurs many context switches for the dev and the latency of
>> *at least* one run of CI.
>> And then (B) in

Re: On CI

2021-02-17 Thread Moritz Angermann
At this point I believe we have ample Linux build capacity. Darwin looks
pretty good as well; the ~4 M1s we have should in principle also be able to
build x86_64-darwin at acceptable speeds, although on Big Sur only.

The aarch64-linux story is a bit constrained by the availability of
powerful and fast CI machines, but probably bearable for the time being. I
doubt anyone really looks at those jobs anyway, as they are permitted to
fail. If aarch64 became a bottleneck, I’d be inclined to just disable them.
With the NCG coming soon this will likely become much more bearable as
well, even though we might want to run the nightly llvm builds.

To be frank, I don’t see 9.2 happening in two weeks with the current CI.

If we subtract aarch64-linux and windows builds we could probably do a full
run in less than three hours maybe even less. And that is mostly because we
have a serialized pipeline. I have discussed some ideas with Ben on
prioritizing the first few stages by the faster ci machines to effectively
fail fast and provide feedback.

But yes. Working on ghc right now is quite painful due to long and
unpredictable CI times.

Cheers,
 Moritz

On Wed, 17 Feb 2021 at 6:31 PM, Sebastian Graf  wrote:

> Hi Moritz,
>
> I, too, had my gripes with CI turnaround times in the past. Here's a
> somewhat radical proposal:
>
>- Run "full-build" stage builds only on Marge MRs. Then we can assign
>to Marge much earlier, but probably have to do a bit more of (manual)
>bisecting of spoiled Marge batches.
>   - I hope this gets rid of a bit of the friction of small MRs. I
>   recently caught myself wanting to do a bunch of small, independent, but
>   related changes as part of the same MR, simply because it's such a 
> hassle
>   to post them in individual MRs right now and also because it steals so 
> much
>   CI capacity.
>- Regular MRs should still have the ability to easily run individual
>builds of what is now the "full-build" stage, similar to how we can run
>optional "hackage" builds today. This is probably useful to pin down the
>reason for a spoiled Marge batch.
>- The CI capacity we free up can probably be used to run a perf build
>(such as the fedora release build) on the "build" stage (the one where we
>currently run stack-hadrian-build and the validate-deb9-hadrian build), in
>parallel.
>- If we decide against the latter, a micro-optimisation could be to
>cache the build artifacts of the "lint-base" build and continue the build
>in the validate-deb9-hadrian build of the "build" stage.
>
> The usefulness of this approach depends on how many MRs cause metric
> changes on different architectures.
>
> Another frustrating aspect is that if you want to merge an n-sized chain
> of dependent changes individually, you have to
>
>- Open an MR for each change (initially the last change will be
>comprised of n commits)
>- Review first change, turn pipeline green   (A)
>- Assign to Marge, wait for batch to be merged   (B)
>- Review second change, turn pipeline green
>- Assign to Marge, wait for batch to be merged
>- ... and so on ...
>
> Note that (A) incurs many context switches for the dev and the latency of
> *at least* one run of CI.
> And then (B) incurs the latency of *at least* one full-build, if you're
> lucky and the batch succeeds. I've recently seen batches that were
> resubmitted by Marge at least 5 times due to spurious CI failures and
> timeouts. I think this is a huge factor for latency.
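The cost of the per-change workflow Sebastian describes can be put into a rough model: each link of an n-chain pays at least one review/CI round (A) plus one Marge batch (B), and spurious failures multiply the batch cost. A sketch with made-up numbers:

```python
def chain_latency_hours(n_changes: int, ci_hours: float,
                        batch_hours: float, avg_batch_attempts: float = 1.0) -> float:
    """Lower bound on wall-clock time to land n dependent changes one by one.

    Each change waits for one CI round (A) and one or more Marge batch
    attempts (B); resubmissions due to spurious failures scale the batch cost.
    """
    return n_changes * (ci_hours + batch_hours * avg_batch_attempts)

# three stacked MRs, 3h CI rounds, 3h batches, each batch resubmitted twice on average
print(chain_latency_hours(3, 3.0, 3.0, avg_batch_attempts=2.0))
```

Even with optimistic 3-hour pipelines, retried batches dominate the total, which matches the observation that Marge resubmissions are a huge latency factor.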
>
> Although after (A), I should just pop the the patch off my mental stack,
> that isn't particularly true, because Marge keeps on reminding me when a
> stack fails or succeeds, both of which require at least some attention from
> me: Failed 2 times => Make sure it was spurious, Succeeds => Rebase next
> change.
>
> Maybe we can also learn from other projects like Rust, GCC or clang, which
> I haven't had a look at yet.
>
> Cheers,
> Sebastian
>
> Am Mi., 17. Feb. 2021 um 09:11 Uhr schrieb Moritz Angermann <
> moritz.angerm...@gmail.com>:
>
>> Friends,
>>
>> I've been looking at CI recently again, as I was facing CI turnaround
>> times of 9-12hs; and this just keeps dragging out and making progress hard.
>>
>> The pending pipeline currently has 2 darwin, and 15 windows builds
>> waiting. Windows builds on average take ~220minutes. We have five builders,
>> so we can expect this queue to be done in ~660 minutes assuming perfect
>> scheduling and good performance. That is 11hs! The next windows build can
>> be started in 11hs. Please check my math and tell me I'm wrong!
>>
>> If you submit a MR today, with some luck, you'l

On CI

2021-02-17 Thread Moritz Angermann
Friends,

I've been looking at CI recently again, as I was facing CI turnaround times
of 9-12hs; and this just keeps dragging out and making progress hard.

The pending pipeline currently has 2 darwin, and 15 windows builds waiting.
Windows builds on average take ~220minutes. We have five builders, so we
can expect this queue to be done in ~660 minutes assuming perfect
scheduling and good performance. That is 11hs! The next windows build can
be started in 11hs. Please check my math and tell me I'm wrong!
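The queue estimate above can be reproduced mechanically, under the same perfect-scheduling assumption stated in the text:

```python
import math

def queue_drain_minutes(jobs: int, builders: int, minutes_per_job: int) -> int:
    """With perfect scheduling, jobs drain in ceil(jobs / builders) sequential waves."""
    return math.ceil(jobs / builders) * minutes_per_job

# 15 pending windows builds, 5 builders, ~220 minutes each
minutes = queue_drain_minutes(15, 5, 220)
print(minutes, minutes / 60)  # -> 660 11.0
```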

If you submit a MR today, with some luck, you'll be able to know if it will
be mergeable some time tomorrow. At which point you can assign it to marge,
and marge, if you are lucky and the set of patches she tries to merge
together is mergeable, will merge your work into master probably some time
on Friday. If a job fails, well you have to start over again.

What are our options here? Ben has been pretty clear about not wanting a
broken commit for windows to end up in the tree, and I'm there with him.

Cheers,
 Moritz


Re: Stop holding hadrian back with backwards compatibility

2021-02-11 Thread Moritz Angermann
Tamar,

thanks so much for the backstory and the tickets. I’ll go dig down this
path a bit more.

Cheers,
 Moritz

On Thu, 11 Feb 2021 at 5:31 PM, Phyx  wrote:

> Hi, just leaving my two cents, feel free to ignore.
>
> > I almost suggested that this had to be the reason for the back-compat
> design
>
> You're right, but not for backwards compat of Hadrian vs Make, but for
> compat with RTS versions.
> I could be wrong, but my understanding is the current design in Make is
> just an artifact of getting something that works on all OSes without much
> pain, but has proven to be suboptimal in a very important use case (slight
> detour time):
>
> You have to make a choice of which RTS to use at compile time.  Which is
> quite bad.  Because it means that you can't swap between two RTS flavors
> with the same ABI. It also means building presents a problem, you want your
> compiler at the end of stage1 to use your new rts, not the one of the
> stage0 compiler.
>
> You can't have multiple versions of the RTS in one library, but if you
> have the full name as a dependency the dynamic loader happily loads you
> multiple copies.
>
> To solve this issue the design was made to not declare the RTS as a
> dependency of any haskell library, i.e. there's no DT_NEEDED entry for it
> on ELF operating systems.  Which means before you load a Haskell produced
> dynamic library on Linux you need to LD_PRELOAD an rts. It's clunky, but it
> works, it allows you to switch between debug and non-debug rts at
> initialization time.
>
> On Windows, this problem was punted, because everything is statically
> linked.  But the problem exists that you can have multiple DLLs with
> different RTS and ABIs.  This is fine as long as the DLLs have no
> dependencies on each other. Once they do... you have a big problem.  This
> is one of the primary blockers of shared library support on Windows.
>
> I.. don't know whatever wacky solution MacOS uses so can't comment there.
>
> Now back to the original question about version 1.0, this has nothing to
> do with Make at all. Make based system only implemented the scheme that was
> wanted. It's not like any Make system design issues forced this scheme. Now
> over the years, assumptions that the RTS is always version 1.0 could have
> crept into the build system.  But I don't believe this to have been by design,
> just convenience. Right now, the design only requires you to know the GHC
> version, which is available in all makefiles.  Knowing the RTS version
> would be difficult, but the point is that in a proper design you don't need
> to know the version.
>
> Almost half a decade ago a plan was made to replace this scheme with one
> that would work on all OSes and would allow us to solve these issues. The
> design was made and debated here
> https://gitlab.haskell.org/ghc/ghc/-/issues/10352
>
> The actual solution isn't as simple as just adding the rts version to the
> library name or add it only to the build system, in fact this would be the
> wrong approach as it makes it impossible to observe backwards compatibility
> between GHC releases.
> i.e. without it, you'd need to have GHC 9.0.1 installed to run GHC 9.0.1
> programs, you can't run using GHC 9.2.x rts if the version changed.
>
> Typically ELF based platforms solve this by a combination of SONAME and
> symbol versioning.  Windows solves this by a combination of SxS Assembly
> versioning or mingw style SONAME.
>
> All of which require you to have the same filename for the libraries, but
> use a different path to disambiguate:
>
> lib/ghc-${ver}/rts-1.0/libHSrts-ghc${ver}.so
>
> lib/ghc-${ver}/rts-1.0/thr/libHSrts-ghc${ver}.so
>
> lib/ghc-${ver}/rts-1.0/debug/libHSrts-ghc${ver}.so
>
> lib/ghc-${ver}/rts-1.0/l/libHSrts-ghc${ver}.so
>
> lib/ghc-${ver}/rts-1.0/thr_l/libHSrts-ghc${ver}.so
>
> for each RTS with the same ABI. profiling libs for instance have a
> different ABI and can't use this scheme.
>
> So what has taken so long to implement this? Well.. time. As it turns out,
> getting this scheme to work required a lot of foundational work in GHC
> (Particularly on Windows where dynamic linking design wasn't optimal, but
> both GHC and the dynamic linker are happy now).
>
> On Linux it took a while to get SONAME support in cabal
> https://github.com/haskell/cabal/issues/4052 so we don't have to hack
> around in the build system.
>
> But anyway this is why the current scheme exists, and why just adding an
> rts version isn't really sufficient, especially if the name propagates to
> the shared lib.
>
> TL;DR;
>
> If we are going to change the build system, we should do it properly.
>
> The current scheme exists because GHC does not observe any mechanism to
> support multiple runtimes with the same ABI and does not really have a
> backwards compatibility story.
>
> Kind Regards,
>
> Tamar
>
> On Wed, Feb 10, 2021 at 11:00 PM Richard Eisenberg 
> wrote:
>
>>
>>
>> On Feb 10, 2021, at 8:50 AM, Simon Peyton Jones 
>> wrote:
>>
>> build with hadrian, and 

Re: Stop holding hadrian back with backwards compatibility

2021-02-10 Thread Moritz Angermann
My understanding of this backwards-compat logic is that it's only there to
allow you to do stuff like: build with hadrian, and then continue using
make with the artifacts (partially) built by hadrian. I think this is a
horrible idea in and of itself, even if I can somewhat see the appeal as a
gateway drug, in which you'd slowly have hadrian take over parts that make
used to do, and use make for the stuff that doesn't work (yet) in hadrian.

However, I don't think constraining hadrian to work within the make
framework makes much sense. We should be permitted to explore new (and
better) solutions that do not align with how the make-based build system
did things, if that allows for a less complex build system or faster builds
or ...

Cheers,
 Moritz

On Wed, Feb 10, 2021 at 9:28 PM Richard Eisenberg  wrote:

> This sounds very reasonable on the surface, but I don't understand the
> consequences of this proposal. What are these consequences? Will this break
> `make`? (It sounds like it won't, given that the change is to Hadrian.)
> Does this mean horrible things will happen if I use `make` and `hadrian` in
> the same tree? (I have never done this, other than with hadrian/ghci, which
> seems to have its own working directory.) Basically: for someone who uses
> the build system but does not work on it, how does this affect me? (Maybe
> not at all!)
>
> I would explicitly like to endorse the direction of travel toward Hadrian
> and away from `make`.
>
> Richard
>
> > On Feb 10, 2021, at 8:05 AM, Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
> >
> > Hi,
> >
> > so we've finally run into a case where we need to bump the rts version.
> This has a great ripple effect.  There is some implicit assumption that
> rts-1.0 will always be true. Of course that was a lie, but a lie we lived
> with for a long time.
> >
> > Now, hadrian tries *really* hard to replicate some of the Make based
> build system's idiosyncrasies; this includes creating versionless symlinks
> for the rts. E.g. libHSrts -> libHSrts-1.0. There is a great deal of
> logic just to achieve this, and of course it all crumbles now.
> >
> > I'd therefore like to float and propose the idea that we agree to *not*
> bother (too?) much, with hadrian going forward, about backwards
> compatibility with the make-based build system and the warts that grew
> in it over the years.
> >
> > Yes, I can probably fix this, and add even more code to this burning
> pile of complexity, but why?  The next person will assume libHSrts does not
> need to be versioned and continue with this mess.
> >
> > Let's have Hadrian be a clean cut in some areas (it already is, it does
> away with the horrible abomination that ghc-cabal is--which only serves the
> purpose of translating cabal descriptions into make readable files), and
> not be bogged down by backwards compatibility.
> >
> > This is thus my call for voicing concern over the upkeep of legacy
> support, or I'll take silence as the collective support for making hadrian
> *not* be held back by backwards compatibility. (This would mean in this
> case, that I'd just delete the backwards compat code instead of adding even
> more to it).
> >
> > I hope we all still want Hadrian to replace Make; if not, and we want to
> keep Make, why are we concerning ourselves with Hadrian in the first place?
> If we are intending to ditch Make, let's not be held back by it.
> >
> > Cheers,
> >  Moritz
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>


Stop holding hadrian back with backwards compatibility

2021-02-10 Thread Moritz Angermann
Hi,

so we've finally run into a case where we need to bump the rts version.
This has a great ripple effect.  There is some implicit assumption that
rts-1.0 will always be true. Of course that was a lie, but a lie we lived
with for a long time.

Now, hadrian tries *really* hard to replicate some of the Make based build
system's idiosyncrasies; this includes creating versionless symlinks for the
rts. E.g. libHSrts -> libHSrts-1.0. There is a great deal of logic
just to achieve this, and of course it all crumbles now.

I'd therefore like to float and propose the idea that we agree to *not*
bother (too?) much, with hadrian going forward, about backwards
compatibility with the make-based build system and the warts that grew in
it over the years.

Yes, I can probably fix this, and add even more code to this burning pile
of complexity, but why?  The next person will assume libHSrts does not need
to be versioned and continue with this mess.

Let's have Hadrian be a clean cut in some areas (it already is, it does
away with the horrible abomination that ghc-cabal is--which only serves the
purpose of translating cabal descriptions into make readable files), and
not be bogged down by backwards compatibility.

This is thus my call for voicing concern over the upkeep of legacy support,
or I'll take silence as the collective support for making hadrian *not* be
held back by backwards compatibility. (This would mean in this case, that
I'd just delete the backwards compat code instead of adding even more to
it).

I hope we all still want Hadrian to replace Make; if not, and we want to
keep Make, why are we concerning ourselves with Hadrian in the first place?
If we are intending to ditch Make, let's not be held back by it.

Cheers,
 Moritz


Re: Has ghc-9.0 for windows changed to require installation?

2021-02-08 Thread Moritz Angermann
Thanks for flagging this. This would be the opposite of the direction I've
been advocating for: that we get bindists for Linux and macOS that work by
simply unpacking them.

On Mon, 8 Feb 2021 at 10:05 PM, Takenobu Tani  wrote:

> Hi devs,
>
> The ghc binary for Windows needs `make install` since ghc-9.0 [1].
> Is this an intended change?
>
> Previously, the ghc-8.10.4 binary for Windows [2] didn't need `make
> install`:
> we only expanded the tar file and could then execute `bin/ghcii.sh`.
>
> [1]:
> https://downloads.haskell.org/ghc/9.0.1/ghc-9.0.1-x86_64-unknown-mingw32.tar.xz
> [2]:
> https://downloads.haskell.org/ghc/8.10.4/ghc-8.10.4-x86_64-unknown-mingw32.tar.xz
>
> Regards,
> Takenobu
>


Re: GHC's internal confusion about Ints and Words

2020-10-22 Thread Moritz Angermann
Hi *,

so, after some discussion with Simon and Simon, as well as Ben, we are all
in agreement that using sized hints
is a band-aid solution for the real underlying problem.  The underlying
problem is that we have CInt ~ Int32,
and we represent Int32 as I32# Int#.  The proper solution would thus
likely be to represent Int32 as I32# Int32#.

After some trial and error (mostly by being too aggressive in changing Ints
to sized ones, unnecessarily -- thanks
Ben for helping me stay on course!), I've produced what mostly amounts to
this patch[1].

It also requires some additional narrow/extend calls to a few
Data.Array.Base signatures to make them typecheck.

However I've got plenty of failures in the testsuite now. Hooray!

Most of them are of this form:

*** Core Lint errors : in result of Desugar (before optimization) ***
T12010.hsc:34:1: warning:
Argument value doesn't match argument type:
Fun type: Int# -> Int#
Arg type: Int32#
Arg: ds_d1B3
In the RHS of c_socket :: CInt -> CInt -> CInt -> IO CInt
In the body of lambda with binder ds_d1AU :: Int32
In the body of lambda with binder ds_d1AV :: Int32
In the body of lambda with binder ds_d1AW :: Int32
In a case alternative: (I32# ds_d1AY :: Int32#)
In a case alternative: (I32# ds_d1B0 :: Int32#)
In a case alternative: (I32# ds_d1B2 :: Int32#)
In the body of lambda with binder ds_d1B5 :: State# RealWorld
In a case alternative: ((#,#) ds_d1B4 :: State# RealWorld,
  ds_d1B3 :: Int32#)
Substitution: [TCvSubst
 In scope: InScope {}
 Type env: []
 Co env: []]

(full log at
https://gist.github.com/angerman/3d6e1e3da5299b9365125ee9e0a2c40f)

Some other minor ones are tests that now need explicit narrowing/extending
where they didn't before.

As well as this beauty:

-- RHS size: {terms: 16, types: 0, coercions: 0, joins: 0/0}
i32 :: Int32
[GblId,
 Cpr=m1,
 Unf=Unf{Src=, TopLvl=True, Value=True, ConLike=True,
 WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 23 10}]
i32
  = GHC.Int.I32#
  (GHC.Prim.narrowInt32#
 (GHC.Prim.andI#
(GHC.Prim.extendInt32#
   (GHC.Prim.narrowInt32#
  (GHC.Prim.extendInt32# (GHC.Prim.narrowInt32# 1#
(GHC.Prim.extendInt32#
   (GHC.Prim.narrowInt32#
  (GHC.Prim.notI#
 (GHC.Prim.extendInt32#
(GHC.Prim.narrowInt32#
   (GHC.Prim.extendInt32# (GHC.Prim.narrowInt32#
1#)

This clearly needs some clean up.

Apart from that the rest seems to be mostly working. Any input would be
appreciated. I'll need to do the same for
Word as well I'm afraid.

Cheers,
 Moritz
--
[1]:
https://gitlab.haskell.org/ghc/ghc/-/commit/acb5ce792806bc3c1e1730c6bdae853d2755de16?merge_request_iid=3641

On Tue, Oct 20, 2020 at 10:34 PM Cheng Shao  wrote:

> Indeed STG to Cmm lowering drops the correct size information for
> ccall arguments, there's even a TODO comment that has been around for
> quite a few years:
>
> https://gitlab.haskell.org/ghc/ghc/-/blob/master/compiler/GHC/StgToCmm/Foreign.hs#L83
>
> This has been an annoyance for Asterius as well. When we try to
> translate a CmmUnsafeForeignCall node to a wasm function call, a CInt
> argument (which should be i32 in wasm) can be mistyped as i64 which
> causes a validation error. We have to insert wrap/extend opcodes based
> on the callee function signature, but if we preserve correct argument
> size in Cmm (or at least enrich the hints to include it), we won't
> need such a hack.
>
> On Tue, Oct 20, 2020 at 4:05 PM Moritz Angermann
>  wrote:
> >
> > Yes, that's right. I'm not sure it's in core though, as the width
> information still seems to be available in Stg. However the lowering from
> > stg into cmm widens it.
> >
> > On Tue, Oct 20, 2020 at 9:57 PM Carter Schonwald <
> carter.schonw...@gmail.com> wrote:
> >>
> >> ... are you talking about Haskell Int and word? Those are always the
same size in bits and should match native pointer size. That is definitely an
> assumption of ghc
> >>
> >> It sounds like some information that is dropped after core is needed to
> correctly do something in stg/cmm in the context of the ARM64 ncg that was
recently added to handle CInt being 32-bit in this context?
> >>
> >>
> >> On Tue, Oct 20, 2020 at 5:49 AM Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
> >>>
> >>> Alright, let me expand a bit.  I've been looking at aarch64 NCG for
> ghc.  The Linux side of things is looking really good,
> >>> so I've moved onto the macOS side (I'm afraid I don't have any Windows
> aarch64 hardware, nor much windows knowledge

Re: Fat interface files?

2020-10-21 Thread Moritz Angermann
Right, my understanding is that they are not sufficient, however, as Michael
laid out here
https://gitlab.haskell.org/ghc/ghc/-/wikis/Core-interface-section#unfoldings

This should be linked together better. We'll improve this.

On Wed, Oct 21, 2020 at 5:56 PM Simon Peyton Jones 
wrote:

> Thanks Moritz
>
>
>
> That wiki page is about extensible interface files in general. It says
> nothing about specifically putting Core terms into interface files.
>
>
>
> For the Core part, since GHC already puts Core into unfoldings, the simple
> thing is just to expose all unfoldings, no?
>
>
>
> Simon
>
>
>
> *From:* ghc-devs  *On Behalf Of *Moritz
> Angermann
> *Sent:* 21 October 2020 10:36
> *To:* Ben Gamari 
> *Cc:* Edward Yang (ezy...@cs.stanford.edu) ;
> ghc-devs@haskell.org
> *Subject:* Re: Fat interface files?
>
>
>
> Just to make sure we are aware of all the ongoing efforts. We've been
> working on embedding Core into interface files as well.
>
> Josh has updated the Wiki page here
> https://gitlab.haskell.org/ghc/ghc/-/wikis/Extensible-Interface-Files.
>
>
>
> Cheers,
>
>  Moritz
>
>
>
> On Wed, Oct 21, 2020 at 12:06 AM Ben Gamari  wrote:
>
> Hi Edward,
>
> While chatting with the ghc-ide folks recently I realized that it would
> be useful to be able to preserve Core such that compilation can be
> restarted (e.g. to be pushed down the bytecode pipeline to evaluate TH
> splices).
>
> As I recall this is precisely what you implemented in your "fat
> interface file" work. Do you recall what the state of this work was? Do
> you have a branch with last-known-good work? Do you recall any tricky
> questions that remained outstanding?
>
> Cheers,
>
> - Ben
>


Re: Fat interface files?

2020-10-21 Thread Moritz Angermann
Just to make sure we are aware of all the ongoing efforts. We've been
working on embedding Core into interface files as well.
Josh has updated the Wiki page here
https://gitlab.haskell.org/ghc/ghc/-/wikis/Extensible-Interface-Files.

Cheers,
 Moritz

On Wed, Oct 21, 2020 at 12:06 AM Ben Gamari  wrote:

> Hi Edward,
>
> While chatting with the ghc-ide folks recently I realized that it would
> be useful to be able to preserve Core such that compilation can be
> restarted (e.g. to be pushed down the bytecode pipeline to evaluate TH
> splices).
>
> As I recall this is precisely what you implemented in your "fat
> interface file" work. Do you recall what the state of this work was? Do
> you have a branch with last-known-good work? Do you recall any tricky
> questions that remained outstanding?
>
> Cheers,
>
> - Ben
>


Re: GHC's internal confusion about Ints and Words

2020-10-20 Thread Moritz Angermann
Yes, that's right. I'm not sure it's in core though, as the width
information still seems to be available in Stg. However the lowering from
stg into cmm widens it.

On Tue, Oct 20, 2020 at 9:57 PM Carter Schonwald 
wrote:

> ... are you talking about Haskell Int and word? Those are always the same
> size in bits and should match native pointer size. That is definitely an
> assumption of ghc
>
> It sounds like some information that is dropped after core is needed to
> correctly do something in stg/cmm in the context of the ARM64 ncg that was
> recently added to handle CInt being 32-bit in this context?
>
>
> On Tue, Oct 20, 2020 at 5:49 AM Moritz Angermann <
> moritz.angerm...@gmail.com> wrote:
>
>> Alright, let me expand a bit.  I've been looking at aarch64 NCG for ghc.
>> The Linux side of things is looking really good,
>> so I've moved onto the macOS side (I'm afraid I don't have any Windows
>> aarch64 hardware, nor much windows knowledge
>> to even attempt a Windows version yet).
>>
>> When calling C functions, the usual approach is to pass the first few
>> arguments in registers, and then arguments that exceed
>> the argument passing slots on the stack.  The Arm AArch64 Procedure Call
>> Standard (aapcs) for C does this by assigning 8byte
>> slots to each overflow argument on the stack.  A company I won't name,
>> has decided to implement a slightly different variation of
>> the Procedure Call Standard, often referred to as darwinpcs.  This
>> deviates from the aapcs for vargs, as well as for handling of
>> spilled arguments on the stack.
>>
>> The aapcs allows us to generate calls to C functions without knowing the
>> actual prototype of the function, as all arguments are
>> simply spilled into 8byte slots on the stack.  The darwinpcs however
>> requires us to know the size of the arguments, so we can
>> properly pack them onto the stack.  Ints have 4 bytes, so we need to pack
>> them into 4byte slots.
>>
>> In the process library we have this rather fun foreign import:
>> foreign import ccall unsafe "runInteractiveProcess"
>>   c_runInteractiveProcess
>> ::  Ptr CString
>> -> CString
>> -> Ptr CString
>> -> FD
>> -> FD
>> -> FD
>> -> Ptr FD
>> -> Ptr FD
>> -> Ptr FD
>> -> Ptr CGid
>> -> Ptr CUid
>> -> CInt -- reset child's SIGINT & SIGQUIT
>> handlers
>> -> CInt -- flags
>> -> Ptr CString
>> -> IO PHANDLE
>>
>> with the corresponding C declaration:
>>
>> extern ProcHandle runInteractiveProcess( char *const args[],
>>  char *workingDirectory,
>>  char **environment,
>>  int fdStdIn,
>>  int fdStdOut,
>>  int fdStdErr,
>>  int *pfdStdInput,
>>  int *pfdStdOutput,
>>  int *pfdStdError,
>>  gid_t *childGroup,
>>  uid_t *childUser,
>>  int reset_int_quit_handlers,
>>  int flags,
>>  char **failed_doing);
>> This function thus takes 14 arguments. We pass only the first 8 arguments
>> in registers, and the others on the stack.
>> Argument 12 and 13 are of type int.  On linux using the aapcs, we can
>> pass those in 8byte slots on the stack. That is
>> both of them are effectively 64bits wide when passed.  However for
>> darwinpcs, it is expected that these adhere to their
>> size and are packed as such. Therefore Argument 12 and 13 need to be
>> passed as 4byte slots each on the stack.
>>
>> This yields a moderate 8byte saving on the stack for the same function
>> call on darwinpcs compared to aapcs.
>>
>> Now onto GHC.  When we generate function calls for foreign C functions,
>> we deal with something like:
>>
>> genCCall
>> :: ForeignTarget  -- function to call
>> -> [CmmFormal]-- where to put the result
>> -> [CmmActual]-- arguments (of mixed type)
>> -> BlockId-- The block we are in
>> -> NatM (Inst

Re: GHC's internal confusion about Ints and Words

2020-10-20 Thread Moritz Angermann
Type BitsCat W64))),SignedHint W32)
,(CmmReg (CmmLocal (LocalReg s6Gu (CmmType BitsCat W64))),AddrHint)
,(CmmReg (CmmLocal (LocalReg s6Gw (CmmType BitsCat W64))),AddrHint)
,(CmmReg (CmmLocal (LocalReg s6Gy (CmmType BitsCat W64))),AddrHint)
,(CmmReg (CmmLocal (LocalReg s6Cp (CmmType BitsCat W64))),AddrHint)
,(CmmReg (CmmLocal (LocalReg s6FU (CmmType BitsCat W64))),AddrHint)
,(CmmReg (CmmLocal (LocalReg s6GA (CmmType BitsCat W64))),SignedHint W32)
,(CmmReg (CmmLocal (LocalReg s6GR (CmmType BitsCat W64))),SignedHint W32)
,(CmmReg (CmmLocal (LocalReg s6GM (CmmType BitsCat W64))),AddrHint)]

Thus, while we *do* know the right size from STG (which is what the Hints
are computed from), we lose this information when lowering
into Cmm, where we represent them with W64. This is what I was alluding to
in the previous email. In primRepCmmType, and mkIntCLit, we set their type
to 64bit for Ints; which on this platform does not hold.

Now I've gone ahead and effectively assumed Cmm is lying to me when
generating Foreign Function Calls, and rely on the (new) sized
hints to produce the appropriate argument packing on the stack.  However I
believe the correct way would be for GHC not to conflate Ints
and Words in Cmm; implicitly assuming they are the same width.  Sadly it's
not as simple as having primRepCmmType and mkIntCLit produce 32bit types. I
fear GHC internally assumes "Int" means 64bit Integer, and then just
happens to make the Int ~ CInt assumption.

Cheers,
 Moritz

On Tue, Oct 20, 2020 at 3:33 PM Simon Peyton Jones 
wrote:

> Moritz
>
>
>
> I’m afraid I don’t understand any of this.  Not your fault, but  I just
> don’t have enough context to know what you mean.
>
>
>
> Is there a current bug?  If so, can you demonstrate it?   If not, what is
> the problem you want to solve?  Examples are always helpful.
>
>
>
> Maybe it’s worth opening a ticket too?
>
>
>
> Thanks!
>
>
>
> Simon
>
>
>
> *From:* ghc-devs  *On Behalf Of *Moritz
> Angermann
> *Sent:* 20 October 2020 02:51
> *To:* ghc-devs 
> *Subject:* GHC's internal confusion about Ints and Words
>
>
>
> Hi there!
>
>
>
> So there is a procedure calling convention that, for reasons I do not
> fully understand but which seem historically grown, uses packed arguments
> for those that are spilled onto the stack. On top of that, CInt is 32 bits,
> Word is 64 bits. This provides the following spectacle:
>
>
>
> While we know in STG that the CInt is 32bits wide, when lowered into Cmm,
> it's represented as I64 in the arguments to the C function.  Thus packing
> based on the format of the Cmm type would yield 8 bytes. And now, all
> further packed arguments have the wrong offset (by four).
>
>
>
> Specifically in GHC.Cmm.Utils we find:
>
> primRepCmmType :: Platform -> PrimRep -> CmmType
>
> primRepCmmType platform IntRep = bWord platform
>
>
>
> mkIntCLit :: Platform -> Int -> CmmLit
> mkIntCLit platform i = CmmInt (toInteger i) (wordWidth platform)
>
>
>
> The naive idea to just fix this and make them return cIntWidth instead,
> seemingly produces the correct Cmm expressions at a local level, but
> produces a broken compiler.
>
>
>
> A second approach could be to extend the Hints into providing sizes, and
> using those during the foreign call generation to pack spilled arguments.
> This however appears to be more of a patching up of some fundamental
> underlying issue, instead of rectifying it properly.
>
>
>
> Maybe I'll have to go down the Hint path, it does however break current Eq
> assumptions, as they are sized now, and what was equal before, is only
> equal now if they represent the same size.
>
>
>
> From a cursory glance at the issues with naively fixing the width for Int,
> it seems that GHC internally assumes sizeof(Int) = sizeof(Word).  Maybe
> there is a whole level of HsInt vs CInt discrimination missing?
>
>
>
> Cheers,
>
>  Moritz
>


GHC's internal confusion about Ints and Words

2020-10-19 Thread Moritz Angermann
Hi there!

So there is a procedure calling convention that, for reasons I do not fully
understand but which seem historically grown, uses packed arguments for
those that are spilled onto the stack. On top of that, CInt is 32 bits, Word
is 64 bits. This provides the following spectacle:

While we know in STG that the CInt is 32bits wide, when lowered into Cmm,
it's represented as I64 in the arguments to the C function.  Thus packing
based on the format of the Cmm type would yield 8 bytes. And now, all
further packed arguments have the wrong offset (by four).

Specifically in GHC.Cmm.Utils we find:

primRepCmmType :: Platform -> PrimRep -> CmmType
primRepCmmType platform IntRep = bWord platform

mkIntCLit :: Platform -> Int -> CmmLit
mkIntCLit platform i = CmmInt (toInteger i) (wordWidth platform)

The naive idea to just fix this and make them return cIntWidth instead,
seemingly produces the correct Cmm expressions at a local level, but
produces a broken compiler.

A second approach could be to extend the Hints into providing sizes, and
using those during the foreign call generation to pack spilled arguments.
This however appears to be more of a patching up of some fundamental
underlying issue, instead of rectifying it properly.

Maybe I'll have to go down the Hint path, it does however break current Eq
assumptions, as they are sized now, and what was equal before, is only
equal now if they represent the same size.

From a cursory glance at the issues with naively fixing the width for Int,
it seems that GHC internally assumes sizeof(Int) = sizeof(Word).  Maybe
there is a whole level of HsInt vs CInt discrimination missing?

Cheers,
 Moritz


Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released

2020-09-29 Thread Moritz Angermann
This sent me down an interesting path.  You are right that dlopen
returns NULL with musl on x86_64, and dlerror will subsequently produce
"Dynamic loading not supported" when compiled with -static.  I think
GHC has code to fall back to archives in the case where loading
shared objects fails, but I can't find the code right now.  It still means
you'd need to have static sqlite (in this case) and other libraries around.

I'm still a bit puzzled, and I think I'm missing something.  It remains
that I know we have musl (x86_64, aarch64) based ghcs in production.  I
wonder if there is something we got right by accident, that makes this work
smoothly for us.  Warrants more investigation.

Cheers,
 Moritz

On Tue, Sep 29, 2020 at 7:45 PM Moritz Angermann 
wrote:

> Happy to give this a try later today. Been using fully static musl builds
> (including cross compilation) for x86_64 for a while now; and did not
> (yet?) run into that SQLite issue. But did have it use shared objects in
> iserv.
>
> On Tue, 29 Sep 2020 at 7:18 PM, Cheng Shao  wrote:
>
>> Hi Moritz,
>>
>>
>>
>> > However dlopen with musl on x86 seems fine.
>>
>>
>>
>> Here's a dlopen example that segfaults if linked with -static:
>>
>>
>>
>> #include <stdio.h>
>> #include <dlfcn.h>
>> #include 
>>
>>
>>
>> int main() {
>>
>>   void *h = dlopen("/usr/lib/libsqlite3.so", RTLD_NOW);
>>
>>   char *f = dlsym(h, "sqlite3_version");
>>
>>   printf("%s\n", f);
>>
>>   return 0;
>>
>> }
>>
>>
>>
>> On Tue, Sep 29, 2020 at 1:04 PM Moritz Angermann
>>
>>  wrote:
>>
>> >
>>
>> > No. Not necessarily. We can perfectly fine load archives and the
>> pre-linked ghci objects. However dlopen with musl on x86 seems fine. On arm
>> it’s not implemented, and just throws an error message. There is a -dynamic
>> flag in HEAD, which disables GHC even trying to load dynamic libraries and
>> always assuming there is no dynamic linking facility, even if configure
>> reports the existence of dlopen...
>>
>> >
>>
>> > On Tue, 29 Sep 2020 at 6:54 PM, Cheng Shao  wrote:
>>
>> >>
>>
>> >> Hi Ben,
>>
>> >>
>>
>> >>
>>
>> >>
>>
>> >> > We will likely transition the Alpine binary distribution to be fully
>>
>> >>
>>
>> >>statically-linked, providing a convenient, distribution-independent
>>
>> >>
>>
>> >>packaging option for Linux users.
>>
>> >>
>>
>> >>
>>
>> >>
>>
>> >> iirc for statically linked executables, musl doesn't even support
>>
>> >>
>>
>> >> dlopen, so wouldn't this mean such a bindist would fail for all
>>
>> >>
>>
>> >> LoadDLL ghci commands?
>>
>> >>
>>
>> >>
>>
>> >>
>>
>> >> Cheers,
>>
>> >>
>>
>> >> Cheng
>>
>> >>
>>
>> >>
>>
>> >>
>>
>> >> On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari  wrote:
>>
>> >>
>>
>> >> >
>>
>> >>
>>
>> >> > Hello all,
>>
>> >>
>>
>> >> >
>>
>> >>
>>
>> >> > The GHC team is very pleased to announce the availability of the
>> first
>>
>> >>
>>
>> >> > alpha release in the GHC 9.0 series. Source and binary distributions
>> are
>>
>> >>
>>
>> >> > available at the usual place:
>>
>> >>
>>
>> >> >
>>
>> >>
>>
>> >> > https://downloads.haskell.org/ghc/9.0.1-alpha1/
>>
>> >>
>>
>> >> >
>>
>> >>
>>
>> >> > This first alpha comes quite a bit later than expected. However, we
>> have
>>
>> >>
>>
>> >> > done a significant amount of testing on this pre-release and
>> therefore
>>
>> >>
>>
>> >> > hope to be able to move forward quickly with a release candidate next
>>
>> >>
>>
>> >> > week and with a final release in mid-October.
>>
>> >>
>>
>> >> >
>>
>> >>
>>
>> >> > GHC 9.0.1 will bring a number of new f

Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released

2020-09-29 Thread Moritz Angermann
Happy to give this a try later today. Been using fully static musl builds
(including cross compilation) for x86_64 for a while now; and did not
(yet?) run into that SQLite issue. But did have it use shared objects in
iserv.

On Tue, 29 Sep 2020 at 7:18 PM, Cheng Shao  wrote:

> Hi Moritz,
>
>
>
> > However dlopen with musl on x86 seems fine.
>
>
>
> Here's a dlopen example that segfaults if linked with -static:
>
>
>
> #include <stdio.h>
>
> #include <dlfcn.h>
>
>
>
> int main() {
>
>   void *h = dlopen("/usr/lib/libsqlite3.so", RTLD_NOW);
>
>   char *f = dlsym(h, "sqlite3_version");
>
>   printf("%s\n", f);
>
>   return 0;
>
> }
>
>
>
> On Tue, Sep 29, 2020 at 1:04 PM Moritz Angermann
>
>  wrote:
>
> >
>
> > No. Not necessarily. We can perfectly fine load archives and the
> pre-linked ghci objects. However dlopen with musl on x86 seems fine. On arm
> it’s not implemented, and just throws an error message. There is a -dynamic
> flag in HEAD, which disables GHC even trying to load dynamic libraries and
> always assuming there is no dynamic linking facility, even if configure
> reports the existence of dlopen...
>
> >
>
> > On Tue, 29 Sep 2020 at 6:54 PM, Cheng Shao  wrote:
>
> >>
>
> >> Hi Ben,
>
> >>
>
> >>
>
> >>
>
> >> > We will likely transition the Alpine binary distribution to be fully
>
> >>
>
> >>statically-linked, providing a convenient, distribution-independent
>
> >>
>
> >>packaging option for Linux users.
>
> >>
>
> >>
>
> >>
>
> >> iirc for statically linked executables, musl doesn't even support
>
> >>
>
> >> dlopen, so wouldn't this mean such a bindist would fail for all
>
> >>
>
> >> LoadDLL ghci commands?
>
> >>
>
> >>
>
> >>
>
> >> Cheers,
>
> >>
>
> >> Cheng
>
> >>
>
> >>
>
> >>
>
> >> On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari  wrote:
>
> >>
>
> >> >
>
> >>
>
> >> > Hello all,
>
> >>
>
> >> >
>
> >>
>
> >> > The GHC team is very pleased to announce the availability of the first
>
> >>
>
> >> > alpha release in the GHC 9.0 series. Source and binary distributions
> are
>
> >>
>
> >> > available at the usual place:
>
> >>
>
> >> >
>
> >>
>
> >> > https://downloads.haskell.org/ghc/9.0.1-alpha1/
>
> >>
>
> >> >
>
> >>
>
> >> > This first alpha comes quite a bit later than expected. However, we
> have
>
> >>
>
> >> > done a significant amount of testing on this pre-release and therefore
>
> >>
>
> >> > hope to be able to move forward quickly with a release candidate next
>
> >>
>
> >> > week and with a final release in mid-October.
>
> >>
>
> >> >
>
> >>
>
> >> > GHC 9.0.1 will bring a number of new features:
>
> >>
>
> >> >
>
> >>
>
> >> >  * A first cut of the new LinearTypes language extension [1], allowing
>
> >>
>
> >> >use of linear function syntax and linear record fields.
>
> >>
>
> >> >
>
> >>
>
> >> >  * A new bignum library (ghc-bignum), allowing GHC to be more easily
>
> >>
>
> >> >used with integer libraries other than GMP.
>
> >>
>
> >> >
>
> >>
>
> >> >  * Improvements in code generation, resulting in considerable
>
> >>
>
> >> >performance improvements in some programs.
>
> >>
>
> >> >
>
> >>
>
> >> >  * Improvements in pattern-match checking, allowing more precise
>
> >>
>
> >> >detection of redundant cases and reduced compilation time.
>
> >>
>
> >> >
>
> >>
>
> >> >  * Implementation of the "simplified subsumption" proposal [2]
>
> >>
>
> >> >simplifying the type system and paving the way for QuickLook
>
> >>
>
> >> >impredicativity in GHC 9.2.
>
> >>
>
> >> >
>
> >>
>
> >> >  * Implementation of the QualifiedDo

Re: [ANNOUNCE] Glasgow Haskell Compiler 9.0.1-alpha1 released

2020-09-29 Thread Moritz Angermann
No. Not necessarily. We can perfectly fine load archives and the pre-linked
ghci objects. However dlopen with musl on x86 seems fine. On arm it’s not
implemented, and just throws an error message. There is a -dynamic flag in
HEAD, which disables GHC even trying to load dynamic libraries and always
assuming there is no dynamic linking facility, even if configure reports
the existence of dlopen...

On Tue, 29 Sep 2020 at 6:54 PM, Cheng Shao  wrote:

> Hi Ben,
>
>
>
> > We will likely transition the Alpine binary distribution to be fully
>
>statically-linked, providing a convenient, distribution-independent
>
>packaging option for Linux users.
>
>
>
> iirc for statically linked executables, musl doesn't even support
>
> dlopen, so wouldn't this mean such a bindist would fail for all
>
> LoadDLL ghci commands?
>
>
>
> Cheers,
>
> Cheng
>
>
>
> On Mon, Sep 28, 2020 at 9:15 PM Ben Gamari  wrote:
>
> >
>
> > Hello all,
>
> >
>
> > The GHC team is very pleased to announce the availability of the first
>
> > alpha release in the GHC 9.0 series. Source and binary distributions are
>
> > available at the usual place:
>
> >
>
> > https://downloads.haskell.org/ghc/9.0.1-alpha1/
>
> >
>
> > This first alpha comes quite a bit later than expected. However, we have
>
> > done a significant amount of testing on this pre-release and therefore
>
> > hope to be able to move forward quickly with a release candidate next
>
> > week and with a final release in mid-October.
>
> >
>
> > GHC 9.0.1 will bring a number of new features:
>
> >
>
> >  * A first cut of the new LinearTypes language extension [1], allowing
>
> >use of linear function syntax and linear record fields.
>
> >
>
> >  * A new bignum library (ghc-bignum), allowing GHC to be more easily
>
> >used with integer libraries other than GMP.
>
> >
>
> >  * Improvements in code generation, resulting in considerable
>
> >performance improvements in some programs.
>
> >
>
> >  * Improvements in pattern-match checking, allowing more precise
>
> >detection of redundant cases and reduced compilation time.
>
> >
>
> >  * Implementation of the "simplified subsumption" proposal [2]
>
> >simplifying the type system and paving the way for QuickLook
>
> >impredicativity in GHC 9.2.
>
> >
>
> >  * Implementation of the QualifiedDo extension [3], allowing more
>
> >convenient overloading of `do` syntax.
>
> >
>
> >  * Improvements in compilation time.
>
> >
>
> > And many more. See the release notes [4] for a full accounting of the
>
> > changes in this release.
>
> >
>
> > Do note that there are a few things that we expect will change before
>
> > the final release:
>
> >
>
> >  * We expect to sort out a notarization workflow for Apple Darwin,
>
> >allowing our binary distributions to be used on macOS Catalina
>
> >without hassle.
>
> >
>
> >Until this has been sorted out Catalina users can exempt the
>
> >current macOS binary distribution from the notarization requirement
>
> >themselves by running `xattr -cr .` on the unpacked tree before
>
> >running `make install`.
>
> >
>
> >  * We will likely transition the Alpine binary distribution to be fully
>
> >statically-linked, providing a convenient, distribution-independent
>
> >packaging option for Linux users.
>
> >
>
> >  * We will be merging a robust solution for #17760 which will introduce
>
> >a new primitive, `keepAlive#`, to the `base` library, subsuming
>
> >most uses of `touch#`.
>
> >
>
> > As always, do test this release and open tickets for whatever issues you
>
> > encounter. To help with this, we will be publishing a blog post
>
> > describing use of our new `head.hackage` infrastructure to ease testing
>
> > of larger projects with Hackage dependencies later this week.
>
> >
>
> > Cheers,
>
> >
>
> > - Ben
>
> >
>
> >
>
> > [1]
> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0111-linear-types.rst
>
> > [2]
> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0287-simplify-subsumption.rst
>
> > [3]
> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0216-qualified-do.rst
>
> > [4]
> https://downloads.haskell.org/ghc/9.0.1-alpha1/docs/html/users_guide/9.0.1-notes.html
>
> > ___
>
> > ghc-devs mailing list
>
> > ghc-devs@haskell.org
>
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> ___
>
> ghc-devs mailing list
>
> ghc-devs@haskell.org
>
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: How is GHC.Prim.unpackInt8X64# meant to be used?

2020-09-26 Thread Moritz Angermann
I think as long as it's bounded it's ok.

On Sat, Sep 26, 2020 at 8:52 PM Ben Gamari  wrote:

> I think it would be worth trying to add tuples up to width 64. The only
> real cost here is the interface file size of GHC.Tuple and if adding a
> 63-wide tuple really does induce a crash then that is a bug in its own
> right that deserves investigation.
>
> - Ben
>
> On September 26, 2020 8:26:32 AM EDT, Ryan Scott 
> wrote:
>>
>> I had a feeling that this might be the case. Unfortunately, this
>> technology preview is actively blocking progress on !4097, which leaves me
>> at a loss for what to do. I can see two ways forward:
>>
>> 1. Remove unpackInt8X64# and friends.
>> 2. Reconsider whether the tuple size limit should apply to unboxed
>> tuples. Perhaps this size limit only makes sense for boxed tuples? This
>> comment [1] suggests that defining a boxed tuple of size greater than 62
>> induces a segfault, but it's unclear to me if the same thing happens for
>> unboxed tuples.
>>
>> Ryan S.
>> -
>> [1]
>> https://gitlab.haskell.org/ghc/ghc/-/blob/a1f34d37b47826e86343e368a5c00f1a4b1f2bce/libraries/ghc-prim/GHC/Tuple.hs#L170
>>
>> On Sat, Sep 26, 2020 at 7:54 AM Ben Gamari  wrote:
>>
>>> On September 25, 2020 6:21:23 PM EDT, Ryan Scott <
>>> ryan.gl.sc...@gmail.com> wrote:
>>> ...
>>> >However, I discovered recently that there are places where GHC *does*
>>> >use
>>> >unboxed tuples with arity greater than 62. For example, the
>>> >GHC.Prim.unpackInt8X64# [2] function returns an unboxed tuple of size
>>> >64. I
>>> >was confused for a while about how this was even possible, but I
>>> >realized
>>> >later than GHC only enforces the tuple size limit in expressions and
>>> >patterns [3]. Simply having a type signature with a large unboxed tuple
>>> >is
>>> >fine in and of itself, and since unpackInt8X64# is implemented as a
>>> >primop,
>>> >no large unboxed tuples are ever used in the "body" of the function.
>>> >(Indeed, primops don't have function bodies in the conventional sense.)
>>> >Other functions in GHC.Prim that use unboxed tuples of arity 64 include
>>> >unpackWord8X64# [4], packInt8X64# [5], and packWord8X64# [6].
>>> >
>>> >But this makes me wonder: how on earth is it even possible to *use*
>>> >unpackInt8X64#?
>>>
>>>
>>> I strongly suspect that the answer here is "you can't yet no one has
>>> noticed until now." The SIMD operations were essentially introduced as a
>>> technology preview and therefore never had proper tests added. Only a
>>> subset of these operations have any tests at all and I doubt anyone has
>>> attempted to use the 64-wide operations, which are rather specialized.
>>>
>>> Cheers,
>>>
>>> - Ben
>>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: How is GHC.Prim.unpackInt8X64# meant to be used?

2020-09-26 Thread Moritz Angermann
Luite is currently working on unboxed tuple support in the interpreter.
This will also be limited, as getting a generic solution for arbitrary
sized tuples raises a lot of complications.

Thus from a practical point of view, I’d go for (1) ;-)

We’ll need to rethink and get SIMD proper support at some point though, the
lack of such is rather sad.

On Sat, 26 Sep 2020 at 8:27 PM, Ryan Scott  wrote:

> I had a feeling that this might be the case. Unfortunately, this
> technology preview is actively blocking progress on !4097, which leaves me
> at a loss for what to do. I can see two ways forward:
>
> 1. Remove unpackInt8X64# and friends.
> 2. Reconsider whether the tuple size limit should apply to unboxed tuples.
> Perhaps this size limit only makes sense for boxed tuples? This comment [1]
> suggests that defining a boxed tuple of size greater than 62 induces a
> segfault, but it's unclear to me if the same thing happens for unboxed
> tuples.
>
> Ryan S.
> -
> [1]
> https://gitlab.haskell.org/ghc/ghc/-/blob/a1f34d37b47826e86343e368a5c00f1a4b1f2bce/libraries/ghc-prim/GHC/Tuple.hs#L170
>
> On Sat, Sep 26, 2020 at 7:54 AM Ben Gamari  wrote:
>
>> On September 25, 2020 6:21:23 PM EDT, Ryan Scott 
>> wrote:
>> ...
>> >However, I discovered recently that there are places where GHC *does*
>> >use
>> >unboxed tuples with arity greater than 62. For example, the
>> >GHC.Prim.unpackInt8X64# [2] function returns an unboxed tuple of size
>> >64. I
>> >was confused for a while about how this was even possible, but I
>> >realized
>> >later than GHC only enforces the tuple size limit in expressions and
>> >patterns [3]. Simply having a type signature with a large unboxed tuple
>> >is
>> >fine in and of itself, and since unpackInt8X64# is implemented as a
>> >primop,
>> >no large unboxed tuples are ever used in the "body" of the function.
>> >(Indeed, primops don't have function bodies in the conventional sense.)
>> >Other functions in GHC.Prim that use unboxed tuples of arity 64 include
>> >unpackWord8X64# [4], packInt8X64# [5], and packWord8X64# [6].
>> >
>> >But this makes me wonder: how on earth is it even possible to *use*
>> >unpackInt8X64#?
>>
>> I strongly suspect that the answer here is "you can't yet no one has
>> noticed until now." The SIMD operations were essentially introduced as a
>> technology preview and therefore never had proper tests added. Only a
>> subset of these operations have any tests at all and I doubt anyone has
>> attempted to use the 64-wide operations, which are rather specialized.
>>
>> Cheers,
>>
>> - Ben
>>
>
> ___
>
> ghc-devs mailing list
>
> ghc-devs@haskell.org
>
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Native Code Generator for AArch64

2020-09-26 Thread Moritz Angermann
Hi there!

As some may know I've been working on a native code generation backend for
aarch64[1].  When Ben initially wrote about The state of GHC on ARM[2], I
was quite skeptical whether a native code generator would really be what we
should be doing.  And the claim that it would take a week or two might have
been underestimating the complexity a bit, as well as the time needed to
debug crashing programs.

The idea of a NCG however intrigued me.  I did work on an alternative llvm
backend once, so I did know a bit about the code gen backend.  I also knew
a bit about aarch64 assembly from working on the rts linker for aarch64.

So here we are today, with an aarch64 ncg for ghc[3], that has some basic
optimizations included, but does not beat the llvm codegen yet in runtime
performance. It is however substantially faster than the llvm codegen for
compile time performance.

I have performed nofib benchmarks for:
- full llvm build vs full native build[4]
- llvm nofib, with native libraries, vs full native build[5]
to discriminate effects of compiling just the nofib programs vs. the impact
the libraries have.

I've only had time to take a cursory look over the generated assembly for
the CSD test, and the llvm codegen seems to be able to produce quite
different assembly, thus there seem to be some good optimizations llvm
manages to exploit. I'll have to investigate this more closely and probably look
at the llvm IR we generate and the intermediate optimization steps llvm
manages to apply to it, as the llvm assembly doesn't resemble the ncg
assembly much.

I plan to look at aarch64/mach-o and performance over the coming weeks.

I hope we can get this in for 9.2.

Cheers,
 Moritz

--
[1]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3641
[2]: https://www.haskell.org/ghc/blog/20200515-ghc-on-arm.html
[3]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3641
[4]: https://gist.github.com/9d93454b832b769b5bdb4e731a10c068
[5]: https://gist.github.com/acc4dab7836f1f509716ac398a94d949
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Parser depends on DynFlags, depends on Hooks, depends on TcM, DsM, ...

2020-09-18 Thread Moritz Angermann
I'm not certain anything in HEAD actually breaks any plugin today. But the
whole idea of plugins having full access to what currently is "DynFlags" is
not something I believe we can sustain. @Sylvain Henry  is
currently cleaning up a lot of unnecessary DynFlags usage. I'm not against
keeping the necessary infrastructure for hooks and other interfaces with
plugins, but I'd like to advocate towards not expecting DynFlags to keep
existing for eternity. If we assume a subset of what used to be in DynFlags
to be relevant to Plugins, let's collect that in say PluginHooks, but let's
keep that interface minimal. And maybe that can be specified to stay stable.

DynFlags is our state kitchen sink in GHC, and it is everywhere. The state
is threaded through everything and the module is gargantuan. So far there
seems to be broad support for removing this wart.

Cheers,
Moritz

On Fri, Sep 18, 2020 at 5:52 PM Adam Gundry  wrote:

> On 14/09/2020 13:02, Moritz Angermann wrote:
> > I believe this to already be broken in HEAD. DynFlags already got quite
> > an overhaul/break. I'd rather we drop supporting DynFlagPlugins. And
> > offer alternative stable interfaces. Though to be honest, I believe our
> > Plugin story is rather poor so far.
> >
> > Do you happen to know of DynFlagPlugins, Adam?
>
> A few have been mentioned in the thread now. What specifically do you
> believe is broken in HEAD regarding DynFlags plugins, and is there an
> issue for it? AFAICS the hooks-plugin test which corresponds to the
> user's guide text is still there.
>
> I think it is important to retain the ability for plugins to manipulate
> both DynFlags and Hooks, whether the latter are separated out of the
> former or not. Both have legitimate use cases, and plugins necessarily
> involve using unstable interfaces (at least until someone designs a
> stable interface). I agree that the current state of plugins/hooks is
> somewhat ad-hoc and could do with more effort put into the design (like
> much else in the GHC API!) but that doesn't mean we should remove things
> that work already.
>
> Slightly tangential note: discussing this with Alp I learned about the
> log_action/dump_action/trace_action fields of DynFlags, which also seem
> to violate Simon's "We should think of DynFlags as an abstract syntax
> tree." And indeed it would be useful for plugins to be able to override
> log_action, especially combined with #18516, as then we would have a
> nice story for plugins overriding error message generation to allow for
> domain-specific error messages.
>
> Cheers,
>
> Adam
>
>
> > On Mon, Sep 14, 2020 at 7:09 PM Adam Gundry  > <mailto:a...@well-typed.com>> wrote:
> >
> > I'm supportive of the goal, but a complication with removing hooks
> from
> > DynFlags is that GHC currently supports "DynFlags plugins" that allow
> > plugins to install custom hooks
> > (
> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#dynflags-plugins
> ).
> > I guess this can be worked around, perhaps by passing hooks
> separately
> > to DynFlags and providing a separate plugin interface to modify
> hooks.
> > But doing so will necessarily break existing plugins.
> >
> > Adam
> >
> >
> > On 14/09/2020 11:25, Simon Peyton Jones via ghc-devs wrote:
> > > I thought I’d sent a message about this DynFlags thing, but I can’t
> > > trace it now.   So here’s a resend.
> > >
> > >
> > >
> > > Currently
> > >
> > >   * The DynFlags record includes Hooks
> > >   * Hooks in contains functions, that mention TcM, DsM etc
> > >
> > >
> > >
> > > This is bad.  We should think of DynFlags as an *abstract syntax
> > tree*.
> > > That is, the result of parsing the flag strings, yes, but not much
> > > more.  So for hooks we should have an algebraic data type
> representing
> > > the hook /specification/, but it should not be the hook functions
> > > themselves.  HsSyn, for example, after parsing, is just a tree with
> > > strings in it.  No TyCons, Ids, etc. That comes much later.
> > >
> > >
> > >
> > > So DynFlags should be a collection of algebraic data types, but
> should
> > > not depend on anything else.
> > >
> > >
> > >
> > > I think that may cut a bunch of awkward loops.
> > >
> > >
> > >
> > > Simon
> > >
&g

Re: Parser depends on DynFlags, depends on Hooks, depends on TcM, DsM, ...

2020-09-14 Thread Moritz Angermann
I believe this to already be broken in HEAD. DynFlags already got quite an
overhaul/break. I'd rather we drop supporting DynFlagPlugins. And
offer alternative stable interfaces. Though to be honest, I believe our
Plugin story is rather poor so far.

Do you happen to know of DynFlagPlugins, Adam?

Cheers,
 Moritz

On Mon, Sep 14, 2020 at 7:09 PM Adam Gundry  wrote:

> I'm supportive of the goal, but a complication with removing hooks from
> DynFlags is that GHC currently supports "DynFlags plugins" that allow
> plugins to install custom hooks
> (
> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#dynflags-plugins
> ).
> I guess this can be worked around, perhaps by passing hooks separately
> to DynFlags and providing a separate plugin interface to modify hooks.
> But doing so will necessarily break existing plugins.
>
> Adam
>
>
> On 14/09/2020 11:25, Simon Peyton Jones via ghc-devs wrote:
> > I thought I’d sent a message about this DynFlags thing, but I can’t
> > trace it now.   So here’s a resend.
> >
> >
> >
> > Currently
> >
> >   * The DynFlags record includes Hooks
> >   * Hooks in contains functions, that mention TcM, DsM etc
> >
> >
> >
> > This is bad.  We should think of DynFlags as an *abstract syntax tree*.
> > That is, the result of parsing the flag strings, yes, but not much
> > more.  So for hooks we should have an algebraic data type representing
> > the hook /specification/, but it should not be the hook functions
> > themselves.  HsSyn, for example, after parsing, is just a tree with
> > strings in it.  No TyCons, Ids, etc. That comes much later.
> >
> >
> >
> > So DynFlags should be a collection of algebraic data types, but should
> > not depend on anything else.
> >
> >
> >
> > I think that may cut a bunch of awkward loops.
> >
> >
> >
> > Simon
> >
> >
> >
> > *From:*Simon Peyton Jones
> > *Sent:* 10 September 2020 14:17
> > *To:* Sebastian Graf ; Sylvain Henry
> > 
> > *Cc:* ghc-devs 
> > *Subject:* RE: Parser depends on DynFlags, depends on Hooks, depends on
> > TcM, DsM, ...
> >
> >
> >
> > And for sure the **parser** should not depend on the **desugarer** and
> > **typechecker**.   (Which it does, as described below.)
> >
> >
> >
> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#dynflags-plugins
> > S
> >
> >
> >
> > *From:*ghc-devs  > > *On Behalf Of *Sebastian Graf
> > *Sent:* 10 September 2020 14:12
> > *To:* Sylvain Henry mailto:sylv...@haskus.fr>>
> > *Cc:* ghc-devs mailto:ghc-devs@haskell.org>>
> > *Subject:* Parser depends on DynFlags, depends on Hooks, depends on TcM,
> > DsM, ...
> >
> >
> >
> > Hey Sylvain,
> >
> >
> >
> > In https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3971
> > <
> https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.haskell.org%2Fghc%2Fghc%2F-%2Fmerge_requests%2F3971=02%7C01%7Csimonpj%40microsoft.com%7C0c3760e72fad4200d39408d8558b3871%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637353404753453548=fVpIzJgaqFfWaJ5ppCE5daHwdETTQF03o1h0uNtDxGA%3D=0
> >
> > I had to fight once more with the transitive dependency set of the
> > parser, the minimality of which is crucial for ghc-lib-parser
> > <
> https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fhackage.haskell.org%2Fpackage%2Fghc-lib-parser=02%7C01%7Csimonpj%40microsoft.com%7C0c3760e72fad4200d39408d8558b3871%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637353404753463506=HZMaqK6t7PLifc26wf%2BqcUef4Ko%2BQcaPRx4o7XLcVq8%3D=0
> >
> > and tested by the CountParserDeps test.
> >
> >
> >
> > I discovered that I need to make (parts of) `DsM` abstract, because it
> > is transitively imported from the Parser for example through Parser.y ->
> > Lexer.x -> DynFlags -> Hooks -> {DsM,TcM}.
> >
> > Since you are our mastermind behind the "Tame DynFlags" initiative, I'd
> > like to hear your opinion on where progress can be/is made on that front.
> >
> >
> >
> > I see there is https://gitlab.haskell.org/ghc/ghc/-/issues/10961
> > <
> https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.haskell.org%2Fghc%2Fghc%2F-%2Fissues%2F10961=02%7C01%7Csimonpj%40microsoft.com%7C0c3760e72fad4200d39408d8558b3871%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637353404753463506=sn9zv1MO8p%2FSbwsm1NDaSiUaumE%2FvTo4NkGreYOjITA%3D=0
> >
> > and https://gitlab.haskell.org/ghc/ghc/-/issues/11301
> > <
> https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.haskell.org%2Fghc%2Fghc%2F-%2Fissues%2F11301=02%7C01%7Csimonpj%40microsoft.com%7C0c3760e72fad4200d39408d8558b3871%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637353404753463506=vFTEuEzIQLJTtpu7%2BuwFnOEWMPv8eY%2B%2FvgbrrV18uss%3D=0
> >
> > which ask a related, but different question: They want a DynFlags-free
> > interface, but I even want a DynFlags-free *module*.
> >
> >
> >
> > Would you say it's reasonable to abstract the definition of `PState`
> > over the `DynFlags` type? I think it's only used for 

non-threaded rts and CollectGarbage

2020-09-09 Thread Moritz Angermann
Hi there!

in the non-threaded rts we use itimer to do light weight scheduling of
threads via SIGALRM signals.  I'm seeing quite a bit of heap corruption on
aarch64, and it appears that I also see a lot of signal handling in the GC,
for example during evacuate.

Is there a fundamental reason why we can't just disable the timer during GC?

Cheers,
 Moritz
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Creative ideas on how to debug heap corruption

2020-08-31 Thread Moritz Angermann
Thanks everyone. I have indeed been trying to get somewhere with sanity
checking. That used to help quite a bit for the deadstripping stuff that
happened on iOS a long time ago, but that was also much more deterministic.
Maybe I'll try to see if running it through qemu will give me some more
determinism. That at least gives somewhat predictable allocations. It could
still end up being some annoying memory-ordering issue that the llvm backend
just happened not to run into, whether by luck or its optimisation passes.

On Mon, Aug 31, 2020 at 10:29 PM Csaba Hruska 
wrote:

> Fuzzing:
>
>1. generate simple random stg programs
>2. compile and run with RTS sanity checking enabled
>3. compare the program result between different backends
>
> The fuzzer should cover all codegen cases and all code in RTS. Maybe this
> could be checked by the existing tools.
>
> On Mon, Aug 31, 2020 at 4:19 PM George Colpitts 
> wrote:
>
>> +Moritz
>>
>> On Mon, Aug 31, 2020 at 11:17 AM George Colpitts <
>> george.colpi...@gmail.com> wrote:
>>
>>> I assume you're familiar with the following from
>>> https://www.aosabook.org/en/ghc.html and that this facility is still
>>> there. Just in case you are not:
>>>
>>> So, the debug RTS has an optional mode that we call *sanity checking*.
>>> Sanity checking enables all kinds of expensive assertions, and can make the
>>> program run many times more slowly. In particular, sanity checking runs a
>>> full scan of the heap to check for dangling pointers (amongst other
>>> things), before *and* after every GC. The first job when investigating
>>> a runtime crash is to run the program with sanity checking turned on;
>>> sometimes this will catch the invariant violation well before the program
>>> actually crashes.
>>>
>>>
>>> On Mon, Aug 31, 2020 at 11:08 AM Csaba Hruska 
>>> wrote:
>>>
>>>> Dump the whole heap into file during GC traversal or taking the whole
>>>> allocated area. hmm, maybe this is the same as core dump.
>>>>
>>>> On Mon, Aug 31, 2020 at 11:00 AM Ben Lippmeier 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> > On 31 Aug 2020, at 5:54 pm, Moritz Angermann <
>>>>> moritz.angerm...@gmail.com> wrote:
>>>>> >
>>>>> > If anyone has some create ideas, I'd love to hear them.  I've been
>>>>> wondering
>>>>> > if just logging allocations (offset, range, type) would help
>>>>> figuring out what we
>>>>> > expected to be there; and then maybe try to break on the allocation,
>>>>> (and
>>>>> > subsequent writes).
>>>>> >
>>>>> > I'm sure some have been down this road before.
>>>>>
>>>>> Force a GC before every allocation, and make the GC check the validity
>>>>> of the objects before it moves anything. I think this used to be possible
>>>>> by compiling the runtime system in debug mode.
>>>>>
>>>>> The usual pain of heap corruption is that once the heap is corrupted
>>>>> it may be several GC cycles before you get the actual crash, and in the
>>>>> meantime the objects have all been moved around. The GC walks over all the
>>>>> objects by nature, so get it to validate the heap every time it does, then
>>>>> force it to run as often as you possibly can.
>>>>>
>>>>> A user space approach is to use a library like vacuum or packman that
>>>>> also walks over the heap objects directly.
>>>>>
>>>>> http://hackage.haskell.org/package/vacuum-2.2.0.0/docs/GHC-Vacuum.html
>>>>> https://hackage.haskell.org/package/packman
>>>>>
>>>>> Ben.
>>>>>
>>>>>
>>>>>
>>>>> ___
>>>>> ghc-devs mailing list
>>>>> ghc-devs@haskell.org
>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>>>
>>>> ___
>>>> ghc-devs mailing list
>>>> ghc-devs@haskell.org
>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>>
>>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Creative ideas on how to debug heap corruption

2020-08-31 Thread Moritz Angermann
Hi there!

as some of you may know, I've been working on an aarch64 native code
generator.  Now I've hit a situation where my stage2 compiler somehow
corrupts my heap.  Initially I thought this would likely be missing memory
barriers, however they are emitted.  This doesn't mean it can't be, but at
least it's not as simple as "they are just missing".

The crashes I see are non-deterministic; in fact, I sometimes even manage
to compile a Hello World module without crashes.  Other times it crashes
with unknown closure errors, or it just crashes.  But it always crashes
during GC.  Changing the nursery size makes it crash a bit more frequently,
but nothing obvious sticks out yet.

If anyone has some creative ideas, I'd love to hear them.  I've been
wondering if just logging allocations (offset, range, type) would help
figure out what we expected to be there; and then maybe try to break on
the allocation (and subsequent writes).

I'm sure some have been down this road before.

Cheers,
 Moritz
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Call for GHC Maintainers

2020-08-18 Thread Moritz Angermann
Hi there!

So a few more discussions have come up.  And they have mainly centered
around the question of quality assurance.  Cutting GHC releases is time
consuming and not trivial.  And those people would need to take ownership
of those releases and stand by them.  How do we ensure that backports do
not inadvertently break a working compiler?  I'm completely against
preventing new contributors from helping with making releases on the
grounds that things can go wrong.  This would inevitably just end up
preventing people from even trying, and how do you get good at something
if you can't even try to get good at it?

So the question then is: what can we do to improve/ensure quality of
releases?
We certainly have the test-suite, but that might have holes, and
backporting the
test-suite will only work so far. Language features that change
stdout/stderr
will inevitably be fixed in newer test-suites to accomodate newer
compilers, but
will not work with older compilers.

However, we have a large body of public libraries on hackage.  And a curated
set of packages per compiler in the form of Stackage LTS sets.  We have
something
slightly similar for HEAD with the hackage head overlay.  For older
compilers
we can rely on something more mature!

Thus, we could build some automation to test a compiler against an
existing set of packages and run their test-suites. There will inevitably
be failures, but we'd only be interested in looking at the difference
anyway. If the same set of tests fail that previous compilers failed at, I
don't think that should be much of a concern. If
fewer tests fail, it would indicate something might have been fixed, or the
test
now surfaces some new behaviour that we might want to look at.  Worst case
would be new tests that fail but didn't before.  This should raise red
flags and
either have a *very* good argument for why the backport is still the right
thing to
do and the test-failures are actually faulty tests, or the backport should
just not
be performed.

In the end it will be about striking a balance between fixing bugs and not
regressing, with a higher priority on not regressing.  However, if we
can't detect that we regress, we have to assume we don't, as we'd
otherwise be unable
to even make any releases.

I'd be happy to discuss this further and set up some nix-based test
harness for this, as time permits (with Windows tests being run through a
cross-compilation and Wine-based setup).

Cheers,
 Moritz

On Sat, Aug 15, 2020 at 3:31 PM Moritz Angermann 
wrote:

> Hi there!
>
> Thanks everyone for showing interest. I've started a wiki page here:
> https://gitlab.haskell.org/ghc/ghc/-/wikis/ghc-maintainers
> Please add yourself to the release you'd like to maintain. I've tried
> to come up with a plan on how to actually look at this problem,
> and it appears to me that we want a list of Merge Requests that are
> considered for backporting, and then see to which GHC we
> backport them.  So essentially a matrix with GHC releases / merge
> requests, and values being either empty or the commit in which
> the MR was backported.
>
> To get the existing matrix we might try to extract this from the git
> history? Does anyone have a good idea how to do this properly?
> The alternative would be to go through all existing MRs, and check for
> backports, which would be quite tedious, and an automated
> solution (at least to get the initial matrix would be good?).  In
> general I believe there to be value in a matrix of backports for easy
> lookup.
>
> Then we'll need a good way to flag new incoming MRs for backports, and
> have the release maintainers look at them, and their
> applicability/suitability for a given release.
>
> Finally, let's not kid ourselves here, this will require some time
> investment, taking ownership and coordination. I don't think we need
> to rush releases, but we should make sure that releases are of good
> quality.
>
> Cheers,
>  Moritz
>
> On Tue, Aug 11, 2020 at 11:29 PM Hemanth Kapila 
> wrote:
> >
> > Thanks for the note.
> >
> > I will be happy to pitch in.
> >
> > Thanks,
> > Hemanth
> >
> > On Tue, 11 Aug 2020, 07:40 Moritz Angermann, 
> wrote:
> >>
> >> Hi there!
> >>
> >> As it stands right now, Ben is the one who works tirelessly trying to
> >> cut releases. Not just for the most recent version, but also for
> >> previous versions. Most recently 8.10.2, but we have 9.0 coming up as
> >> well.
> >>
> >> I know that there are some people who deeply care for personal or
> >> professional reasons for older releases, 8.4, 8.6, 8.8, ... Some of
> >> them have stacks of patches applied, or proprietary extensions. I'd
> >> argue that most of those applied patches are backports of bug fixes
> >> and ra

Re: The curious case of #367: Infinite loops can hang Concurrent Haskell

2020-08-17 Thread Moritz Angermann
I'll investigate why we end up generating the loops, and will report
back if I find anything that looks awfully off. I don't dispute that
there might be legitimate reasons to generate code like this.  From a
user perspective, however, I'd be grateful if the compiler warned
me about this. Maybe it was my intention, but maybe it wasn't.  Of
course, as this might only catch a subset of potential infinite loops,
it's not a comprehensive check.
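For reference, the shape of such a check can be sketched like this (a simplified model only, not the actual NCG code in Ppr.hs; the `Instr` type and all names are invented for illustration):

```haskell
-- Simplified model of the dead-loop check: flag a basic block whose last
-- instruction branches back to its own label, with only straight-line
-- instructions before it (no call or heap check that could yield).
data Instr
  = Branch String   -- unconditional branch to a label
  | Call String     -- a call; a potential yield point
  | Op String       -- any other straight-line instruction
  deriving (Eq, Show)

-- A basic block is a label plus its instructions.
isUninterruptibleLoop :: (String, [Instr]) -> Bool
isUninterruptibleLoop (lbl, instrs) =
  case reverse instrs of
    Branch target : body -> target == lbl && all straightLine body
    _                    -> False
  where
    straightLine (Op _) = True
    straightLine _      = False

main :: IO ()
main = mapM_ (print . isUninterruptibleLoop)
  [ ("_cCO", [Branch "_cCO"])                     -- the loop from #367
  , ("_czf", [Op "mov x17, x18", Branch "_czf"])  -- also a loop
  , ("_ok",  [Call "yield", Branch "_ok"])        -- has a yield point
  ]
```

As the message says, this only catches self-branching blocks; loops through several blocks would slip past a check of this shape.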

I'll report back if I find anything in the tests that looks off.
Otherwise assume the tests do indeed intend to generate infinite
loops.

Cheers,
 Moritz

On Mon, Aug 17, 2020 at 8:38 PM Simon Peyton Jones
 wrote:
>
> My question is earlier: why do we generate code that we will never get out of
> again?
>
>
>
> Ah, well, if you say, for example
>
> f x = f x
>
> then it seems reasonable to generate an infinite loop.   I don’t know if
> that’s what’s happening here, but it seems reasonable in principle.
>
>
>
> I’m unsure about what you are proposing to change.
>
>
> Simon
>
>
>
> From: Moritz Angermann 
> Sent: 17 August 2020 13:28
> To: Simon Peyton Jones 
> Cc: ghc-devs 
> Subject: Re: The curious case of #367: Infinite loops can hang Concurrent 
> Haskell
>
>
>
>
>
>
>
> On Mon, 17 Aug 2020 at 8:14 PM, Simon Peyton Jones  
> wrote:
>
> |  and the question then becomes, do we want to investigate if we can
>
> |  a) detect this is dead code
>
> |  b) remove it in Cmm or higher, or flat out prevent it from being
>
> |  generated.
>
> |  c) we don't care about producing this code, and hope the linker will
>
> |  eliminate it.
>
>
>
> I'm still puzzled.  Why do you think _cCO is dead?  What alternative code are
> you thinking we might generate?
>
>
>
> My question is earlier: why do we generate code that we will never get out of
> again? The generated code is effectively: while(true);.
>
>
>
> This code does not have to be dead, and there may very well be reasons why we 
> want to generate an infinite loop that can only be terminated from the 
> outside. Maybe it’s just my naive expectation that the user more likely did 
> not want to generate this code.
>
>
>
> Once _cCO is entered, there is no way out for the application.
>
>
>
> Cheers,
>
>  Moritz
>
>
>
>
>
>
>
> S
>
>
>
> |  -Original Message-
>
> |  From: Moritz Angermann 
>
> |  Sent: 17 August 2020 10:30
>
> |  To: Simon Peyton Jones 
>
> |  Cc: ghc-devs 
>
> |  Subject: Re: The curious case of #367: Infinite loops can hang
>
> |  Concurrent Haskell
>
> |
>
> |  Hi Simon,
>
> |
>
> |  sure, I could have been a bit clearer:
>
> |
>
> |  Code we currently generate is:
>
> |  ```
>
> |   _cCO:
>
> |  bl _cCO
>
> |  ```
>
> |
>
> |  or
>
> |
>
> |  ```
>
> |   _czf:
>
> |  mov x17, x18
>
> |  bl _czf
>
> |  ```
>
> |
>
> |  and the question then becomes, do we want to investigate if we can
>
> |  a) detect this is dead code
>
> |  b) remove it in Cmm or higher, or flat out prevent it from being
>
> |  generated.
>
> |  c) we don't care about producing this code, and hope the linker will
>
> |  eliminate it.|
>
> |  Cheers,
>
> |Moritz
>
> |
>
> |  On Mon, Aug 17, 2020 at 5:18 PM Simon Peyton Jones
>
> |   wrote:
>
> |  >
>
> |  > Moritz
>
> |  >
>
> |  > I'm not getting this.
>
> |  >
>
> |  > |  So, my question then is this: are we fine with ghc generating
>
> |  this
>
> |  > |  code? Or, if we are not, do we want to figure out if we can
>
> |  eliminate
>
> |  > |  it?
>
> |  >
>
> |  > What exactly is "this code" and "it"?
>
> |  >
>
> |  > You could be asking
>
> |  >
>
> |  > * Should we switch off -fomit-yields by default?
>
> |  > * Should we implement -fno-omit-yields in a cleverer way that
>
> |  generates less code?
>
> |  >
>
> |  > Or you could be asking something else again.
>
> |  >
>
> |  > Your deadlock-detection patch (which is presumably not in GHC) is
>
> |  very special-case: it detects some infinite loops, but only some.
>
> |  I'm not sure what role it plays in your thinking.
>
> |  >
>
> |  > Simon
>
> |  >
>
> |  >
>
> |  > |  -Original Message-
>
> |  > |  From: ghc-devs  On Behalf Of Moritz
>
> |  > |  Angermann
>
> |  > |  Sent: 17 August 2020 09:40
>
> |  

Re: The curious case of #367: Infinite loops can hang Concurrent Haskell

2020-08-17 Thread Moritz Angermann
On Mon, 17 Aug 2020 at 8:14 PM, Simon Peyton Jones 
wrote:

> |  and the question then becomes, do we want to investigate if we can
>
> |  a) detect this is dead code
>
> |  b) remove it in Cmm or higher, or flat out prevent it from being
>
> |  generated.
>
> |  c) we don't care about producing this code, and hope the linker will
>
> |  eliminate it.
>
>
>
> I'm still puzzled.  Why do you think _cCO is dead?  What alternative code
> are you thinking we might generate?


My question is earlier: why do we generate code that we will never get
out of again? The generated code is effectively: while(true);.

This code does not have to be dead, and there may very well be reasons why
we want to generate an infinite loop that can only be terminated from the
outside. Maybe it’s just my naive expectation that the user more likely did
not want to generate this code.

Once _cCO is entered, there is no way out for the application.

Cheers,
 Moritz


>
>
>
> S
>
>
>
> |  -Original Message-
>
> |  From: Moritz Angermann 
>
> |  Sent: 17 August 2020 10:30
>
> |  To: Simon Peyton Jones 
>
> |  Cc: ghc-devs 
>
> |  Subject: Re: The curious case of #367: Infinite loops can hang
>
> |  Concurrent Haskell
>
> |
>
> |  Hi Simon,
>
> |
>
> |  sure, I could have been a bit clearer:
>
> |
>
> |  Code we currently generate is:
>
> |  ```
>
> |   _cCO:
>
> |  bl _cCO
>
> |  ```
>
> |
>
> |  or
>
> |
>
> |  ```
>
> |   _czf:
>
> |  mov x17, x18
>
> |  bl _czf
>
> |  ```
>
> |
>
> |  and the question then becomes, do we want to investigate if we can
>
> |  a) detect this is dead code
>
> |  b) remove it in Cmm or higher, or flat out prevent it from being
>
> |  generated.
>
> |  c) we don't care about producing this code, and hope the linker will
>
> |  eliminate it.|
>
> |  Cheers,
>
> |Moritz
>
> |
>
> |  On Mon, Aug 17, 2020 at 5:18 PM Simon Peyton Jones
>
> |   wrote:
>
> |  >
>
> |  > Moritz
>
> |  >
>
> |  > I'm not getting this.
>
> |  >
>
> |  > |  So, my question then is this: are we fine with ghc generating
>
> |  this
>
> |  > |  code? Or, if we are not, do we want to figure out if we can
>
> |  eliminate
>
> |  > |  it?
>
> |  >
>
> |  > What exactly is "this code" and "it"?
>
> |  >
>
> |  > You could be asking
>
> |  >
>
> |  > * Should we switch off -fomit-yields by default?
>
> |  > * Should we implement -fno-omit-yields in a cleverer way that
>
> |  generates less code?
>
> |  >
>
> |  > Or you could be asking something else again.
>
> |  >
>
> |  > Your deadlock-detection patch (which is presumably not in GHC) is
>
> |  very special-case: it detects some infinite loops, but only some.
>
> |  I'm not sure what role it plays in your thinking.
>
> |  >
>
> |  > Simon
>
> |  >
>
> |  >
>
> |  > |  -Original Message-
>
> |  > |  From: ghc-devs  On Behalf Of Moritz
>
> |  > |  Angermann
>
> |  > |  Sent: 17 August 2020 09:40
>
> |  > |  To: ghc-devs 
>
> |  > |  Subject: The curious case of #367: Infinite loops can hang
>
> |  Concurrent
>
> |  > |  Haskell
>
> |  > |
>
> |  > |  Hi there!
>
> |  > |
>
> |  > |  While working on a NCG, I eventually came across #367[0], which
>
> |  makes GHC
>
> |  > |  produce
>
> |  > |  code that looks similar to this:
>
> |  > |
>
> |  > |  ```
>
> |  > |  label:
>
> |  > |[non-branch-instructions]*
>
> |  > |branch-instruction label
>
> |  > |  ```
>
> |  > |
>
> |  > |  so essentially an uninterruptible loop. The solution for GHC to
>
> |  > |  produce code that
>
> |  > |  can be interrupted is to pass -fno-omit-yields.
>
> |  > |
>
> |  > |  So far so good. Out of curiosity, I did add a small piece of code
>
> |  to
>
> |  > |  detect this to my NCG
>
> |  > |  to complain if code like the above was generated[1].
>
> |  > |
>
> |  > |  Three weeks ago, I kind of maneuvered myself into a memory blow
>
> |  up
>
> |  > |  corner, and then
>
> |  > |  life happened, but this weekend I managed to find some time to
>
> |  revert
>
> |  > |  some memory
>
> |  > |  blow up and continue working on the NCG.  Turns out I can build a
>
> |  > |  stage2 "quick" f

Re: The curious case of #367: Infinite loops can hang Concurrent Haskell

2020-08-17 Thread Moritz Angermann
Hi Simon,

sure, I could have been a bit clearer:

Code we currently generate is:
```
 _cCO:
bl _cCO
```

or

```
 _czf:
mov x17, x18
bl _czf
```

and the question then becomes, do we want to investigate if we can
a) detect this is dead code
b) remove it in Cmm or higher, or flat out prevent it from being generated.
c) we don't care about producing this code, and hope the linker will
eliminate it.

Cheers,
  Moritz

On Mon, Aug 17, 2020 at 5:18 PM Simon Peyton Jones
 wrote:
>
> Moritz
>
> I'm not getting this.
>
> |  So, my question then is this: are we fine with ghc generating this
> |  code? Or, if we are not, do we want to figure out if we can eliminate
> |  it?
>
> What exactly is "this code" and "it"?
>
> You could be asking
>
> * Should we switch off -fomit-yields by default?
> * Should we implement -fno-omit-yields in a cleverer way that generates less 
> code?
>
> Or you could be asking something else again.
>
> Your deadlock-detection patch (which is presumably not in GHC) is very 
> special-case: it detects some infinite loops, but only some.   I'm not sure 
> what role it plays in your thinking.
>
> Simon
>
>
> |  -Original Message-
> |  From: ghc-devs  On Behalf Of Moritz
> |  Angermann
> |  Sent: 17 August 2020 09:40
> |  To: ghc-devs 
> |  Subject: The curious case of #367: Infinite loops can hang Concurrent
> |  Haskell
> |
> |  Hi there!
> |
> |  While working on a NCG, I eventually came across #367[0], which makes GHC
> |  produce
> |  code that looks similar to this:
> |
> |  ```
> |  label:
> |[non-branch-instructions]*
> |branch-instruction label
> |  ```
> |
> |  so essentially an uninterruptible loop. The solution for GHC to
> |  produce code that
> |  can be interrupted is to pass -fno-omit-yields.
> |
> |  So far so good. Out of curiosity, I did add a small piece of code to
> |  detect this to my NCG
> |  to complain if code like the above was generated[1].
> |
> |  Three weeks ago, I kind of maneuvered myself into a memory blow up
> |  corner, and then
> |  life happened, but this weekend I managed to find some time to revert
> |  some memory
> |  blow up and continue working on the NCG.  Turns out I can build a
> |  stage2 "quick" flavour
> |  of the NCG without dynamic support just fine.  I never saw the
> |  deadlock detection code fire.
> |
> |  Now I did leave the test suite running yesterday night, and when
> |  looking through the
> |  test suite results, there were quite a few failures. Curiously a lot of
> |  them were due to
> |  ghc missing dynamic support (doh!).  But also quite a few that failed
> |  due to the deadlock
> |  detection.
> |
> |  T12485, hs_try_putmvar003, ds-wildcard, ds001, read029, T2817, tc011,
> |  tc021, T4524
> |
> |  So, my question then is this: are we fine with ghc generating this
> |  code? Or, if we are not, do we want to figure out if we can eliminate
> |  it? The issue 367 goes into quite a bit of detail why this is tricky
> |  to handle generally.
> |
> |  Or should we add -fno-omit-yields to the test-cases? The ultimate
> |  option is to just turn off the
> |  detection, and I'm fine with doing so. However I'd rather ask if
> |  anyone sees value in detecting
> |  this or not.
> |
> |  Cheers,
> |   Moritz
> |
> |  --
> |  [0]: https://gitlab.haskell.org/ghc/ghc/-/issues/367
> |  [1]: https://gitlab.haskell.org/ghc/ghc/-/blob/46fba2c91e1c4d23d46fa2d9b18dcd000c80363d/compiler/GHC/CmmToAsm/AArch64/Ppr.hs#L134-159
> |  ___
> |  ghc-devs mailing list
> |  ghc-devs@haskell.org
> |  http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


The curious case of #367: Infinite loops can hang Concurrent Haskell

2020-08-17 Thread Moritz Angermann
Hi there!

While working on a NCG, I eventually came across #367[0], which makes GHC produce
code that looks similar to this:

```
label:
  [non-branch-instructions]*
  branch-instruction label
```

so essentially an uninterruptible loop. The solution for GHC to
produce code that
can be interrupted is to pass -fno-omit-yields.

So far so good. Out of curiosity, I did add a small piece of code to
detect this to my NCG
to complain if code like the above was generated[1].

Three weeks ago, I kind of maneuvered myself into a memory blow up
corner, and then
life happened, but this weekend I managed to find some time to revert
some memory
blow up and continue working on the NCG.  Turns out I can build a
stage2 "quick" flavour
of the NCG without dynamic support just fine.  I never saw the
deadlock detection code fire.

Now I did leave the test suite running yesterday night, and when
looking through the
test suite results, there were quite a few failures. Curiously a lot of
them were due to
ghc missing dynamic support (doh!).  But also quite a few that failed
due to the deadlock
detection.

T12485, hs_try_putmvar003, ds-wildcard, ds001, read029, T2817, tc011,
tc021, T4524

So, my question then is this: are we fine with ghc generating this
code? Or, if we are not, do we want to figure out if we can eliminate
it? The issue 367 goes into quite a bit of detail why this is tricky
to handle generally.

Or should we add -fno-omit-yields to the test-cases? The ultimate
option is to just turn off the
detection, and I'm fine with doing so. However I'd rather ask if
anyone sees value in detecting
this or not.

Cheers,
 Moritz

--
[0]: https://gitlab.haskell.org/ghc/ghc/-/issues/367
[1]: 
https://gitlab.haskell.org/ghc/ghc/-/blob/46fba2c91e1c4d23d46fa2d9b18dcd000c80363d/compiler/GHC/CmmToAsm/AArch64/Ppr.hs#L134-159
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Call for GHC Maintainers

2020-08-15 Thread Moritz Angermann
Hi there!

Thanks everyone for showing interest. I've started a wiki page here:
https://gitlab.haskell.org/ghc/ghc/-/wikis/ghc-maintainers
Please add yourself to the release you'd like to maintain. I've tried
to come up with a plan on how to actually look at this problem,
and it appears to me that we want a list of Merge Requests that are
considered for backporting, and then see to which GHC we
backport them.  So essentially a matrix with GHC releases / merge
requests, and values being either empty or the commit in which
the MR was backported.

To get the existing matrix we might try to extract this from the git
history? Does anyone have a good idea how to do this properly?
The alternative would be to go through all existing MRs and check for
backports, which would be quite tedious, so an automated
solution (at least to build the initial matrix) would be good.  In
general I believe there to be value in a matrix of backports for easy
lookup.
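To sketch what extracting this from git history could look like: `git cherry-pick -x` records a "(cherry picked from commit <sha>)" trailer in the commit message, so one could scrape each release branch for those trailers. The parsing below is a toy sketch only (it assumes log output pre-flattened to one line per commit; none of this is existing GHC tooling):

```haskell
import Data.Char (isSpace)
import Data.List (stripPrefix)
import Data.Maybe (mapMaybe)

-- One pre-flattened log line:
--   "<sha-on-release-branch> (cherry picked from commit <orig-sha>)"
parseLine :: String -> Maybe (String, String)
parseLine line = do
  let (sha, rest) = break isSpace line
  inner <- stripPrefix "(cherry picked from commit " (dropWhile isSpace rest)
  pure (sha, takeWhile (/= ')') inner)

-- One column of the matrix: which original (master) commits landed on
-- this branch, and as which backport commit.
backportColumn :: [String] -> [(String, String)]
backportColumn = mapMaybe parseLine
```

Running this over each release branch would give one column of the release/MR matrix; backports made without `-x` carry no trailer and would still need the manual pass over MRs described above.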

Then we'll need a good way to flag new incoming MRs for backports, and
have the release maintainers look at them and assess their
applicability/suitability for a given release.

Finally, let's not kid ourselves here, this will require some time
investment, taking ownership and coordination. I don't think we need
to rush releases, but we should make sure that releases are of good quality.

Cheers,
 Moritz

On Tue, Aug 11, 2020 at 11:29 PM Hemanth Kapila  wrote:
>
> Thanks for the note.
>
> I will be happy to pitch in.
>
> Thanks,
> Hemanth
>
> On Tue, 11 Aug 2020, 07:40 Moritz Angermann,  
> wrote:
>>
>> Hi there!
>>
>> As it stands right now, Ben is the one who works tirelessly trying to
>> cut releases. Not just for the most recent version, but also for
>> previous versions. Most recently 8.10.2, but we have 9.0 coming up as
>> well.
>>
>> I know that there are some people who deeply care for personal or
>> professional reasons for older releases, 8.4, 8.6, 8.8, ... Some of
>> them have stacks of patches applied, or proprietary extensions. I'd
>> argue that most of those applied patches are backports of bug fixes
>> and rarely language features, as language features will break
>> compatibility (due to ghc, base, and other library versions anyway).
>>
>> I would therefore like to drum up a group of people who will take care
>> (ideally 2+ per release) of backporting and making minor patch
>> releases. This does not have to go on forever, but it would take much
>> needed load off of Ben to focus on what ever happens in ghc HEAD.
>>
>> So what would this work actually look like? It would consist of
>> - going through the list of MRs and tagging those which are relevant
>> for backporting to a certain release.
>> - backport MRs where the MR does not cleanly apply.
>> - fixup any test-suite failures.
>> - agree on a date to cut/make the release.
>>
>> This is not a permanent commitment. I hope we can attract more people
>> to the ghc release managers.
>>
>> I'm looking forward to a great many responses. And I'm sure Ben will be
>> able to help mentor us through cutting the first releases. I'll
>> volunteer to be part of the 8.6 branch maintainers for now.
>>
>> Cheers,
>>  Moritz
>>
>> PS: There is a slightly related discussion about release cadence and
>> versions and how other projects deal with this in this ticket:
>> https://gitlab.haskell.org/ghc/ghc/-/issues/18222
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Call for GHC Maintainers

2020-08-10 Thread Moritz Angermann
Hi there!

As it stands right now, Ben is the one who works tirelessly trying to
cut releases. Not just for the most recent version, but also for
previous versions. Most recently 8.10.2, but we have 9.0 coming up as
well.

I know that there are some people who deeply care for personal or
professional reasons for older releases, 8.4, 8.6, 8.8, ... Some of
them have stacks of patches applied, or proprietary extensions. I'd
argue that most of those applied patches are backports of bug fixes
and rarely language features, as language features will break
compatibility (due to ghc, base, and other library versions anyway).

I would therefore like to drum up a group of people who will take care
(ideally 2+ per release) of backporting and making minor patch
releases. This does not have to go on forever, but it would take much
needed load off of Ben to focus on what ever happens in ghc HEAD.

So what would this work actually look like? It would consist of
- going through the list of MRs and tagging those which are relevant
for backporting to a certain release.
- backport MRs where the MR does not cleanly apply.
- fixup any test-suite failures.
- agree on a date to cut/make the release.

This is not a permanent commitment. I hope we can attract more people
to join the ghc release managers.

I'm looking forward to a great many responses. And I'm sure Ben will be
able to help mentor us through cutting the first releases. I'll
volunteer to be part of the 8.6 branch maintainers for now.

Cheers,
 Moritz

PS: There is a slightly related discussion about release cadence and
versions and how other projects deal with this in this ticket:
https://gitlab.haskell.org/ghc/ghc/-/issues/18222
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Question about binary distributions

2020-08-07 Thread Moritz Angermann
Hi Mathieu,

you can! See http://hackage.mobilehaskell.org/; it's been one of the
design goals I had when I was hacking on hadrian. The whole configure
&& make install shenanigans were just too much.
Initially I wanted to drop that outright, but was convinced that
configure and make install is something distributions want (they
install into different locations), as well as some people who prefer
to install ghc into custom locations.

GHC has for a while now had relocatable support on our major
platforms, which means you don't even need that wrapper script anymore
as long as the bin and lib folders are next to each other and your
operating system can find the path of the executable. I'm told AIX
can't if the executable is a symlink.

I'm all for having "unpack and run" bindists with an optional
configure && make install phase for those who want it. As we are
moving over to hadrian as the primary build system I think this should
work,
but might have regressed?

Cheers,
 Moritz

On Fri, Aug 7, 2020 at 9:50 PM Mathieu Boespflug  wrote:
>
> Hi all,
>
> GHC currently has 3 tier-1 platforms: Linux, macOS and Windows. I'll focus 
> the dicussion below on these three platforms. The binary distributions for 
> Linux and macOS are designed to be unpacked, then the user types ./configure 
> && make install. This is not the case for Windows.
>
> On all platforms it's possible to create "relocatable" installations, such 
> that GHC doesn't really care where it's installed, and commands will still 
> work if the install directory changes location on the filesystem. So my 
> question is, why do we have a ./configure step on Linux and macOS? Why could 
> we not have bindists for all platforms that work like the Windows one? I.e. a 
> binary distribution that you just unpack, in any directory of your choice, 
> without any configuration or installation step.
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Using a development snapshot of happy

2020-08-02 Thread Moritz Angermann
This dependency on alex and happy to boot ghc has been annoying, but
wasn't that terrible until a while ago, when some
ghc versions needed happy <= 1.19.11 and others happy >= 1.19.12. If
happy was part of ghc, this would not have been an issue.
As such I'd be on board with adding happy *and* alex as submodules
into the `utils` folder, thereby reducing the external
boot dependencies of ghc!

I believe
- `compiler/ghc.cabal.in` would need to get a `build-tool-depends:`
stanza for happy and alex,
- `utils/genprimopcode/genprimopcode.cabal` same
- `utils/hpc/hpc-bin.cabal` same for happy only.
For `hadrian`, you'd need to make it aware of happy and alex packages
in `hadrian/src/Packages.hs`.
(Just follow other "util"s in there, e.g. unlit or touchy).

In general hadrian should follow cabal dependencies properly.
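Concretely, such a stanza might look roughly like this (a sketch only; the exact component names and version bounds are placeholders, not checked against the real cabal files):

```cabal
-- e.g. in compiler/ghc.cabal.in (similarly for genprimopcode;
-- hpc-bin would need only happy):
library
    ...
    build-tool-depends:
        happy:happy >= 1.19,
        alex:alex   >= 3.2
```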

cheers,
 Moritz

On Sun, Aug 2, 2020 at 3:43 PM Vladislav Zavialov  wrote:
>
> Hi ghc-devs,
>
> I’m working on the unification of parsers for terms and types, and one of the 
> things I’d really like to make use of is a feature I implemented in ‘happy’ 
> in October 2019 (9 months ago):
>
>   https://github.com/simonmar/happy/pull/153
>
> It’s been merged upstream, but there has been no release of ‘happy’, despite 
> repeated requests:
>
>   1. I asked for a release in December: 
> https://github.com/simonmar/happy/issues/164
>   2. Ben asked for a release a month ago: 
> https://github.com/simonmar/happy/issues/168
>
> I see two solutions here:
>
>   a) Find a co-maintainer for ‘happy’ who could make releases more frequently 
> (I understand the current maintainers probably don’t have the time to do it).
>   b) Use a development snapshot of ‘happy’ in GHC
>
> Maybe we need to do both, but one reason I’d like to see (b) in particular 
> happen is that I can imagine introducing more features to ‘happy’ for use in 
> GHC, and it’d be nice not to wait for a release every time. For instance, 
> there are some changes I’d like to make to happy/alex in order to implement 
> #17750
>
> So here are two questions I have:
>
>   1. Are there any objections to this idea?
>   2. If not, could someone more familiar with the build process guide me as 
> to how this should be implemented? Do I add ‘happy’ as a submodule and change 
> something in the ./configure script, or is there more to it? Do I need to 
> modify make/hadrian, and if so, then how?
>
> Thanks,
> - Vlad
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: How should we treat changes to the GHC API?

2020-07-31 Thread Moritz Angermann
I think this is the core issue here:
> What should GHC’s extensibility interface be like?   Plugins and all that.  
> What is a good design for (say) extensible interface files?  What “hooks” 
> should the GHC API afford?  This is more than just “what arguments should 
> this function take”… it’s a matter of fundamental design.   But design 
> questions like this belong in the GHC-API world (not the core GHC world) 
> because they are all about extension points.
I don't think we know this a priori, and it will be discovered over
time as more and more producers and consumers start making use of it.
Is the current design the best one? I have my doubts. Is it a first
approximation? I think so.

I think these features are more discovered than designed. It's easier
to iterate over concrete implementations in this area than over
abstract ideas.

I would propose having some EXPERIMENTAL markers for these kinds of features.

I do agree that a group of people who feel strongly about this should
be listed in the CODEOWNERS file for the respective parts of the
codebase, and take an active part in code review.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HEAD doesn't build. Totally stalled.

2020-07-20 Thread Moritz Angermann
The revert MR is here: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3714
It's kind of ironic that it's stuck in CI limbo, whereas the initial MR wasn't.

> I'm surprised gitlab presubmit merge did not detect the build breakage.
So am I!

As laid out, I believe a better solution is to have a mapping from
symbols to the libraries that may provide them, and have GHC consult
that when the linker tries to link
arbitrary objects and encounters those symbols. Another strategy that Tamar
employed to great success on the windows side, is to just increase the
set of libraries
GHC tries to load by default, and thus get rid of the annoying list of
symbols in the
RTS.

I hope the above MR will pass now (after another rebase); and I can
find some time to
implement a better solution soon.

Cheers,
 Moritz

On Mon, Jul 20, 2020 at 4:28 PM Sergei Trofimovich  wrote:
>
> On Fri, 17 Jul 2020 10:45:37 +0800
> Moritz Angermann  wrote:
>
> > Well, we actually *do* test for __SSP__ in HEAD:
> > https://github.com/ghc/ghc/blob/master/rts/RtsSymbols.c#L1170
> > Which currently lists:
> > #if !defined(mingw32_HOST_OS) && !defined(DYNAMIC) &&
> > (defined(_FORTIFY_SOURCE) || defined(__SSP__))
>
> I believe it's a https://gitlab.haskell.org/ghc/ghc/-/issues/18442
>
> It breaks for me as well.
>
> It triggers if one has gcc compiler with any of 2 properties:
>
> 1. gcc is built with --enable-default-ssp (sets __SSP__ for all compilations)
> 2. gcc defaults to _FORTIFY_SOURCE
>
> Note that presence or absence of __stack_chk_guard is indicated
> by neither of these and instead is present when gcc is built with
> --enable-libssp (use gcc's __stack_* functions instead gcc's direct TLS
> instructions with one glibc fallback.)
>
> Gentoo does both [1.] and [2.] by default. I believe Debian does at least
> [2.] by default. I'm surprised gitlab presubmit merge did not detect the
> build breakage.
>
> What do macros [1] and [2.] mean for glibc-linux:
>
> - _FORTIFY_SOURCE only affects glibc headers to change memcpy()
>   calls to memcpy_chk() to add overflow checks. It does not affect
>   symbol exports available by libc. __stack_* symbols are always present.
>   Parts of libc or other libraries we link ghc with coult already call 
> __stack_*
>   function as they could already be built with _FORTIFY_SOURCE. Regardless
>   of how ghc is being built: with _FORTIFY_SOURCE or without.
>
> - __SSP__  indicates code generation of stack canary placement by gcc
>   (-fstack-protector-* options, or default override with gcc's 
> --enable-default-ssp)
>
>   If target is not a gcc's libssp target (a.k.a. --disable-libssp), a default 
> for all
>   linux-glibc targets) then gcc never uses -lssp and uses gcc's builtin 
> instructions
>   instead of __stack_chk_guard helpers. In this mode __stack_chk_guard is not
>   present in any libraries installed by gcc or glibc. The only symbol 
> provided by glibc
>   is __stack_chk_fail (which arguably should not be exposed at all as it's an
>   unusual contract between glibc/gcc: https://gcc.gnu.org/PR93509)
>
> --enable-libssp for gcc does bring in __stack_chk_guard. Library is present 
> and could
> use __stack_chk_guard in libraries ghc depends on regardless of
> -fstack-protector-* options used to build ghc. I believe --enable-libssp is 
> used only
> on mingw.
>
> What I'm trying to say is that presence of __stack_chk_guard is orthogonal
> to either __SSP__ define or _FORTIFY_SOURCE ghc uses today..
>
> It's rather a function of how gcc toolchain was built: --enable-libssp or not.
>
> > But this still seems ill-conceived.  And while Simon is the only
> > one I'm aware of for whom this breaks, we need to find a better
> > solution. As such, we will revert the commits.
> >
> > Why do we do all this symbol nonsense in the RTS to begin with?  It
> > has to do with the static linker we have in GHC. Loading arbitrary
> > archives means we need to be able to resolve all kinds of symbols
> > that objects might refer to. For regular dependencies this will work:
> > if the dependencies are listed in the package configuration file, the
> > linker will know which dependencies to link. This gets a bit annoying
> > for libraries that the compiler will automagically provide. libgcc,
> > libssp, librt, ...
> >
> > The solution so far was simply to have the RTS depend on these
> > symbols, and keep a list of them around. That way when the linker
> > built the RTS we'd get it to link all these symbols into the RTS, and
> > we could refer to them in the linker. Essentially looking them up in
> > the linked binary (ghc, or iserv).
> >
> > This is a rather tricky problem, and 

Re: HEAD doesn't build. Totally stalled.

2020-07-16 Thread Moritz Angermann
Well, we actually *do* test for __SSP__ in HEAD:
https://github.com/ghc/ghc/blob/master/rts/RtsSymbols.c#L1170
Which currently lists:
#if !defined(mingw32_HOST_OS) && !defined(DYNAMIC) &&
(defined(_FORTIFY_SOURCE) || defined(__SSP__))

But this still seems ill-conceived.  And while Simon is the only
one I'm aware of for whom this breaks, we need to find a better
solution. As such, we will revert the commits.

Why do we do all this symbol nonsense in the RTS to begin with?  It
has to do with our static linker we have in GHC. Loading arbitrary
archives, means we need to be able to resolve all kinds of symbols
that objects might refer to. For regular dependencies this will work
if the dependencies are listed in the package configuration file, the
linker will know which dependencies to link. This gets a bit annoying
for libraries that the compiler will automagically provide. libgcc,
libssp, librt, ...

The solution so far was simply to have the RTS depend on these
symbols, and keep a list of them around. That way when the linker
built the RTS we'd get it to link all these symbols into the RTS, and
we could refer to them in the linker. Essentially looking them up in
the linked binary (ghc, or iserv).

This is a rather tricky problem, and almost all solutions we came up
with are annoying in one or more dimensions.  After some discussion on
IRC last night, we'll go forward trying the following solution:

We'll keep a file in the lib folder (similar to the settings,
llvm-targets, ...) that is essentially a lookup table of Symbol ->
[Library]. If we encounter an unknown symbol, and we have it in our
lookup table, we will try to load the named libraries, hoping for them
to contain the symbol we are looking for. If everything fails we'll
bail.

For the example symbols that prompted this issue (they are emitted
when stack-smashing-protector hardening is enabled, which seems to be
the default on most Linux distributions today, and is likely why I
couldn't reproduce this easily):

[("__stack_chk_guard", ["ssp"])]

would tell the compiler to try to locate (through the usual library
location means) the library called "ssp", if it encounters the symbol
"__stack_chk_guard".
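
The proposed Symbol -> [Library] lookup could be sketched in Haskell roughly as
follows. The names (`symbolMap`, `librariesFor`) and the simple list-based
representation are illustrative assumptions, not GHC's actual implementation:

```haskell
-- Hypothetical lookup table, as it might be read from a file in the
-- lib folder (alongside settings, llvm-targets, ...).
symbolMap :: [(String, [String])]
symbolMap =
  [ ("__stack_chk_guard", ["ssp"])
  , ("__stack_chk_fail",  ["ssp"])
  ]

-- When the linker hits an unknown symbol, consult the table for
-- libraries to try loading; if the symbol is absent, we have to bail.
librariesFor :: String -> Either String [String]
librariesFor sym =
  maybe (Left ("unknown symbol: " ++ sym)) Right (lookup sym symbolMap)
```

The compiler would then try to locate each named library through the usual
library-location means before giving up.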

Isn't this what the dynamic linker is supposed to solve? Why do we
have to do all this on our own? Can't we just use the dynamic linker?
Yes, and no. Yes we can use the dynamic linker, and we even do. But
not all platforms have a working, or usable linker. iOS for example
has a working dynamic linker, but user programs can't use it. musl libc
reports "Dynamic loading not supported" when calling dlopen on arm.

Thus I'm reluctant to drop the static linker outright for the dynamic linker.

Cheers,
 Moritz

On Fri, Jul 17, 2020 at 2:45 AM Phyx  wrote:
>
> But, where do you actually check for __SSP__
>
> The guard just checks for not windows and not dynamic 
> https://github.com/ghc/ghc/commit/686e72253aed3880268dd6858eadd8c320f09e97#diff-03f5bc5a50fd8ae13e902782c4392c38R1157
>
> shouldn't it just be checking for defined(__SSP__) instead? This check is 
> currently only correct if the distro has turned stack protector on by default.
>
>
> Regards,
> Tamar
>
> On Thu, Jul 16, 2020 at 3:46 PM Moritz Angermann  
> wrote:
>>
>> I’ve tried to reproduce this and it turns out, I fail to. You are somehow 
>> building the rts either with _FORTIFY_SOURCE or __SSP__, but then your 
>> linker ends up not passing -lssp or the equivalent for your tool chain.
>>
>> At this point I’m tempted to add an additional ARM arch guard. While that 
>> would be conceptually wrong, it would reduce the cases where this could go 
>> wrong to a rarely used platform. Maybe @Ben Gamari has an idea?
>>
>> On Thu, 16 Jul 2020 at 10:25 PM, Simon Peyton Jones  
>> wrote:
>>>
>>> Moritz
>>>
>>> How’s it going getting this patch committed?
>>>
>>> It’s painful manually applying a fix, but then NOT committing that to 
>>> master by mistake
>>>
>>>
>>>
>>> Thanks
>>>
>>> s
>>>
>>>
>>>
>>> From: Moritz Angermann 
>>> Sent: 14 July 2020 12:14
>>> To: Simon Peyton Jones 
>>> Cc: ghc-devs@haskell.org
>>> Subject: Re: HEAD doesn't build. Totally stalled.
>>>
>>>
>>>
>>> For some reason you end up with RTS_SSP_SYMBOLS defined, I believe, and 
>>> then the RTS wants the __stack_chk symbols, which it can’t find when linking.
>>>
>>>
>>>
>>> Replacing
>>>
>>> #if !defined(mingw32_HOST_OS) && !defined(DYNAMIC)
>>>
>>> #define RTS_SSP_SYMBOLS\
>>>
>>>   Sym

Re: HEAD doesn't build. Totally stalled.

2020-07-16 Thread Moritz Angermann
I’ve tried to reproduce this and it turns out, I fail to. You are somehow
building the rts either with _FORTIFY_SOURCE or __SSP__, but then your
linker ends up not passing -lssp or the equivalent for your tool chain.

At this point I’m tempted to add an additional ARM arch guard. While that
would be conceptually wrong, it would reduce the cases where this could go
wrong to a rarely used platform. Maybe @Ben Gamari has an idea?

On Thu, 16 Jul 2020 at 10:25 PM, Simon Peyton Jones 
wrote:

> Moritz
>
> How’s it going getting this patch committed?
>
> It’s painful manually applying a fix, but then NOT committing that to
> master by mistake
>
>
>
> Thanks
>
> s
>
>
>
> *From:* Moritz Angermann 
> *Sent:* 14 July 2020 12:14
> *To:* Simon Peyton Jones 
> *Cc:* ghc-devs@haskell.org
> *Subject:* Re: HEAD doesn't build. Totally stalled.
>
>
>
> For some reason you end up with RTS_SSP_SYMBOLS defined, I believe, and
> then the RTS wants the __stack_chk symbols, which it can’t find when linking.
>
>
>
> Replacing
>
> #if !defined(mingw32_HOST_OS) && !defined(DYNAMIC)
>
> #define RTS_SSP_SYMBOLS\
>
>   SymI_NeedsProto(__stack_chk_guard)   \
>
>   SymI_NeedsProto(__stack_chk_fail)
>
> #else
>
> #define RTS_SSP_SYMBOLS
>
> #endif
>
> With just
>
>
>
> #define RTS_SSP_SYMBOLS
>
>
>
> Should do. I hope.
>
>
>
> Currently only on mobile phone :-/
>
>
>
> Cheers,
>
>  Moritz
>
>
>
> On Tue, 14 Jul 2020 at 7:06 PM, Simon Peyton Jones 
> wrote:
>
> thanks.  What specifically do I comment out?
>
>
>
> *From:* Moritz Angermann 
> *Sent:* 14 July 2020 12:00
> *To:* Simon Peyton Jones 
> *Cc:* ghc-devs@haskell.org
> *Subject:* Re: HEAD doesn't build. Totally stalled.
>
>
>
> This was my fault. Not sure why this wasn’t caught in CI.
>
> It’s due to the addition of the symbols here
>
>
>
>
> https://github.com/ghc/ghc/commit/686e72253aed3880268dd6858eadd8c320f09e97#diff-03f5bc5a50fd8ae13e902782c4392c38R1159
>
>
>
> You should be able to just comment them out. I’ll prepare a proper fix.
>
>
>
> Cheers,
>
>  Moritz
>
>
>
> On Tue, 14 Jul 2020 at 6:41 PM, Simon Peyton Jones via ghc-devs <
> ghc-devs@haskell.org> wrote:
>
> I’m getting this failure in a clean HEAD build. Any ideas? I’m totally
> stalled because I can’t build GHC any more.
>
> I’m using Windows Subsystem for Linux (WSL).
>
> Help help!
>
> Thanks
>
> Simon
>
> /home/simonpj/code/HEAD-9/rts/dist/build/libHSrts_thr_p.a(RtsSymbols.thr_p_o):
> RtsSymbols.c:rtsSyms: error: undefined reference to '__stack_chk_guard'
>
> collect2: error: ld returned 1 exit status
>
> `cc' failed in phase `Linker'. (Exit code: 1)
>
> utils/iserv/ghc.mk:105: recipe for target
> 'utils/iserv/stage2_p/build/tmp/ghc-iserv-prof' failed
>
> make[1]: *** [utils/iserv/stage2_p/build/tmp/ghc-iserv-prof] Error 1
>
> make[1]: *** Waiting for unfinished jobs
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HEAD doesn't build. Totally stalled.

2020-07-14 Thread Moritz Angermann
For some reason you end up with RTS_SSP_SYMBOLS defined, I believe, and
then the RTS wants the __stack_chk symbols, which it can’t find when linking.

Replacing

#if !defined(mingw32_HOST_OS) && !defined(DYNAMIC)
#define RTS_SSP_SYMBOLS\
  SymI_NeedsProto(__stack_chk_guard)   \
  SymI_NeedsProto(__stack_chk_fail)
#else
#define RTS_SSP_SYMBOLS
#endif

With just


#define RTS_SSP_SYMBOLS


Should do. I hope.
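
An alternative, raised elsewhere in the thread, would be to key the guard on
__SSP__ itself rather than disabling it wholesale. A sketch of that idea (an
assumption about the intended fix, not necessarily what GHC ended up shipping):

```c
/* Only expose the stack-protector symbols when the RTS itself was
 * built with -fstack-protector*, i.e. when __SSP__ is defined, so a
 * build without SSP never references __stack_chk_guard.  Sketch only. */
#if !defined(mingw32_HOST_OS) && !defined(DYNAMIC) && defined(__SSP__)
#define RTS_SSP_SYMBOLS \
  SymI_NeedsProto(__stack_chk_guard) \
  SymI_NeedsProto(__stack_chk_fail)
#else
#define RTS_SSP_SYMBOLS
#endif
```

This would make the symbol list track how the RTS was actually compiled,
instead of assuming the distro's default hardening settings.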

Currently only on mobile phone :-/

Cheers,
 Moritz

On Tue, 14 Jul 2020 at 7:06 PM, Simon Peyton Jones 
wrote:

> thanks.  What specifically do I comment out?
>
>
>
> *From:* Moritz Angermann 
> *Sent:* 14 July 2020 12:00
> *To:* Simon Peyton Jones 
> *Cc:* ghc-devs@haskell.org
> *Subject:* Re: HEAD doesn't build. Totally stalled.
>
>
>
> This was my fault. Not sure why this wasn’t caught in CI.
>
> It’s due to the addition of the symbols here
>
>
>
>
> https://github.com/ghc/ghc/commit/686e72253aed3880268dd6858eadd8c320f09e97#diff-03f5bc5a50fd8ae13e902782c4392c38R1159
>
>
>
> You should be able to just comment them out. I’ll prepare a proper fix.
>
>
>
> Cheers,
>
>  Moritz
>
>
>
> On Tue, 14 Jul 2020 at 6:41 PM, Simon Peyton Jones via ghc-devs <
> ghc-devs@haskell.org> wrote:
>
> I’m getting this failure in a clean HEAD build. Any ideas? I’m totally
> stalled because I can’t build GHC any more.
>
> I’m using Windows Subsystem for Linux (WSL).
>
> Help help!
>
> Thanks
>
> Simon
>
> /home/simonpj/code/HEAD-9/rts/dist/build/libHSrts_thr_p.a(RtsSymbols.thr_p_o):
> RtsSymbols.c:rtsSyms: error: undefined reference to '__stack_chk_guard'
>
> collect2: error: ld returned 1 exit status
>
> `cc' failed in phase `Linker'. (Exit code: 1)
>
> utils/iserv/ghc.mk:105: recipe for target
> 'utils/iserv/stage2_p/build/tmp/ghc-iserv-prof' failed
>
> make[1]: *** [utils/iserv/stage2_p/build/tmp/ghc-iserv-prof] Error 1
>
> make[1]: *** Waiting for unfinished jobs
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: HEAD doesn't build. Totally stalled.

2020-07-14 Thread Moritz Angermann
This was my fault. Not sure why this wasn’t caught in CI.
It’s due to the addition of the symbols here

https://github.com/ghc/ghc/commit/686e72253aed3880268dd6858eadd8c320f09e97#diff-03f5bc5a50fd8ae13e902782c4392c38R1159

You should be able to just comment them out. I’ll prepare a proper fix.

Cheers,
 Moritz

On Tue, 14 Jul 2020 at 6:41 PM, Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org> wrote:

> I’m getting this failure in a clean HEAD build. Any ideas? I’m totally
> stalled because I can’t build GHC any more.
>
> I’m using Windows Subsystem for Linux (WSL).
>
> Help help!
>
> Thanks
>
> Simon
>
> /home/simonpj/code/HEAD-9/rts/dist/build/libHSrts_thr_p.a(RtsSymbols.thr_p_o):
> RtsSymbols.c:rtsSyms: error: undefined reference to '__stack_chk_guard'
>
> collect2: error: ld returned 1 exit status
>
> `cc' failed in phase `Linker'. (Exit code: 1)
>
> utils/iserv/ghc.mk:105: recipe for target
> 'utils/iserv/stage2_p/build/tmp/ghc-iserv-prof' failed
>
> make[1]: *** [utils/iserv/stage2_p/build/tmp/ghc-iserv-prof] Error 1
>
> make[1]: *** Waiting for unfinished jobs
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: A small documentation PR on github

2020-07-03 Thread Moritz Angermann
The performance of GH is still better than GL. Reading the code on GH is
faster and easier to navigate than GL. This might be an artifact of my
location? The GL UI feels a lot more sluggish. Though GH is doing their
part with service downtimes recently as well.

Making a small change on GH to a file is almost comically trivial. Press
Edit, make the change, commit and open the PR. All from within the browser
in a few seconds. Wasn’t this this primary motivation for allowing
documentation PRs on GH?

On Sat, 4 Jul 2020 at 2:18 AM, Ben Gamari  wrote:

> Alexander Kjeldaas  writes:
>
> > Hi devs!
> >
> > I created a small documentation PR for the GHC FFI on github and noticed
> > that there's another one-liner PR from May 2019 that was not merged.
> >
> > https://github.com/ghc/ghc/pull/260
> > https://github.com/ghc/ghc/pull/255
> >
> > Just checking that simple PRs are still accepted on github.
> >
> An excellent point. In my mind the move to GitLab has addressed the
> principal reason why we started accepting small PRs on GitHub. My sense
> is that we should move these PRs to GitLab and formally stop accepting
> PRs via GitHub.
>
> If there is no objection I will do this in three days.
>
> Cheers,
>
> - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


GHCs dependencies (libraries) and maintenance

2020-06-01 Thread Moritz Angermann
Hi there!

so this comes up periodically, and I think we need to discuss this.  This is not
related to anything right now, so if you wonder if I'm writing this because of
something that just happened that I'm involved and you might have missed
something, you probably did not.  It came up on the #ghc IRC channel a
few day ago.

GHC depends on quite a set of libraries, and ships those in releases. Whenever
a new GHC release is cut, all these dependencies need to be on hackage and
have release versions.  We do not want to produce a GHC release which depends
on in-flight packages.  In-flight might happen for example due to GHC having to
patch dependencies to make them work with HEAD.

Everyone who maintains any kind of software online knows that maintenance can
be a drag, and then life happens, and what not.  There are many very responsive
maintainers and we all owe them a great amount of gratitude for their
relentless work keeping those libraries up to date and responding to questions,
patches, ...

I therefore would like to float the following idea to make the GHC release
process a bit more reliable.  GHCHQ (that is, those in charge of producing
GHC releases for us all) would be made co-maintainers on each library GHC
depends on, to guarantee that GHC can move forward in the worst of
circumstances.  I would hope that in almost all cases GHCHQ would never have
to actively maintain any of the dependencies; they deal with GHC already, so
let's try to keep it that way.  However, GHCHQ could, after a 14-day
notification period, exercise the co-maintainership and cut releases (and
upload them to hackage), should the maintainer not be able to do so on their
own for various reasons.

I'd like to see this as an insurance policy for GHC's continuous
development.  The only alternative that I see would be that GHCHQ starts
forking dependencies and initiates the hackage maintainer-takeover protocol,
which would cause additional delays and incur an extra burden on the GHC
maintainers.

I hope we can all agree that libraries that end up being dependencies of GHC
should be held in high regard, as they form the very foundation GHC is built
upon.  As such it should be an honour to have GHCHQ as a co-maintainer for
one's library, as it signifies the importance of the library for the
continuous development of GHC.

Again, I don't expect much to change, except for GHCHQ becoming
co-maintainers of the libraries GHC depends on.  The baseline expectation
will remain as it is.  However, we will have ensured the frictionless
development of GHC going forward.

Cheers,
 Moritz
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: 8.12 plans

2020-05-22 Thread Moritz Angermann
I have a few aarch64 linker patches I'll need to open MRs for. They
are mostly 5-10
line changes, but will give us a working linker :-)

On Wed, May 6, 2020 at 2:13 AM Ben Gamari  wrote:
>
> Hi everyone,
>
> The time is again upon us to start thinking about release planning for
> the next major release: GHC 8.12. In keeping with our 6-month release
> schedule, I propose the following schedule:
>
>  * Mid-June 2020: Aim to have all major features in the tree
>  * Late-June 2020: Cut the ghc-8.12 branch
>  * June - August 2020: 3 alpha releases
>  * 1 September 2020: beta release
>  * 25 September 2020: Final 8.12.1 release
>
> So, if you have any major features which you would like to merge for
> 8.12, now is the time to start planning how to wrap them up in the next
> month or so. As always, do let me know if you think this may be
> problematic and we can discuss options.
>
> Cheers,
>
> - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Why is libraries/ghci built with stage 0 compiler instead of stage 1?

2019-10-18 Thread Moritz Angermann
It can run TH using iserv, which depends on lib:ghci, iirc.

On Fri, 18 Oct 2019 at 7:24 PM, Ömer Sinan Ağacan 
wrote:

> Stage 1 compiler doesn't have interpreter, and doesn't run plugins or TH,
> so I
> think GHCi stuff should not be used by stage 1 compiler, but for some
> reason the
> "ghci" library (libraries/ghci) is built with stage 0 compiler instead of
> stage
> 1. Anyone know what this is?
>
> Thanks,
>
> Ömer
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Building GHC 8.4.3 Cross-Compiler Linux x86 -> Linux ARM

2019-05-28 Thread Moritz Angermann
Hi Michael,

any reason you want to build specifically 8.4.3?  And any specific
reason you want to use 8.0.2 to build it?  Anyway, you likely don't
want to build unregisterised, but rather use the LLVM backend.

You want to set the BuildFlavour to quick-cross, e.g. with

sed -E "s/^#BuildFlavour[ ]+= quick-cross$/BuildFlavour = quick-cross/" \
  < mk/build.mk.sample > mk/build.mk
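
Put together, a cross-compiler build might look roughly like this. This is a
sketch; the target triple, toolchain prefix, and configure flags are
assumptions that vary per setup and GHC version:

```sh
# Hypothetical end-to-end flow; adjust the target triple and CC for
# your toolchain.
./boot
./configure --target=arm-linux-gnueabihf CC=arm-linux-gnueabihf-gcc

# Select the quick-cross build flavour, as described above.
sed -E "s/^#BuildFlavour[ ]+= quick-cross$/BuildFlavour = quick-cross/" \
  < mk/build.mk.sample > mk/build.mk

make -j
```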

Once you have your compiler built, you'll likely want to use toolchain
wrapper[1] to make things a bit easier.

If you just want to try to build some simple code for a raspberry pi, you
could also try the experimental pre-built cross compilers from[2]

Cheers,
 Moritz

--

[1]: https://github.com/zw3rk/toolchain-wrapper
[2]: http://hackage.mobilehaskell.org/

> On May 28, 2019, at 5:25 AM, Ben Gamari  wrote:
> 
> Michael Dunn  writes:
> 
>> Ben,
>> 
>> I saw that you responded to my question in #ghc on freenode last
>> weekend (mdunnio), and I missed your message.
>> 
> Michael,
> 
> I would be happy to help. I'm CCing ghc-devs so others may benefit from
> this discussion.
> 
>> I'm trying to build a cross compiler using ghc 8.0.1 to build ghc
>> 8.4.3. You mentioned that you have experience.
>> 
>> Have you tried on newer versions of GHC? I've looked at a few guides
>> (mostly https://gitlab.haskell.org/ghc/ghc/wikis/building/cross-compiling)
>> and they all are using pre-8.0. The problem I'm running into now just
>> seems to be that I can't build the base packages (ghc-pkg
>> specifically).
>> 
>> My configure command is:
>> 
>> ./configure --target=arm-linux-gnueabihf CC=arm-linux-gnueabihf-gcc
>> --enable-unregisterised
>> 
> This is helpful to know but you didn't specify what your `make`
> invocation looks like or your mk/build.mk. It would be very helpful to
> know both of these things.
> 
> Presumably you at least had to set `HADDOCK_DOCS=NO` since otherwise the
> build system fails very early on.
> 
> Cheers,
> 
> - Ben
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Discussion: Hadrian's defaults

2019-03-15 Thread Moritz Angermann
Hi Arnaud,

> On Mar 15, 2019, at 8:32 PM, Spiwack, Arnaud  wrote:
> 
> On Thu, Mar 14, 2019 at 7:20 PM Herbert Valerio Riedel  
> wrote:
> I don't have the ticket number at my fingertips but it should be fairly easy 
> to find.
> 
> I'm afraid it doesn't appear to be. Could you share your arguments in this 
> thread?
This was the last one that led to the current `-c` state:
- https://github.com/snowleopard/hadrian/issues/457
There is also
- https://github.com/snowleopard/hadrian/issues/655

if you look through the issues on snowleopard/hadrian and sort by comment 
frequency
you'll likely find quite a lot of further discussion about not making configure 
and
boot the default.

> 
> On Fri, Mar 15, 2019 at 3:10 AM Moritz Angermann  
> wrote:
> It's magically conflating two different phases with `-c`. The configure phase 
> and
> the build phase. Making this the default means it's always magic. I don't 
> like magic!
> 
> Unfortunately, I really don't understand what you are saying. What's magic 
> about combining the phases?

We have two phases:

Phase 1: autoconf

  This phase is essentially a code-generation phase, where specific
  templates are instantiated with configure-time values.  It can again be
  split into two specific subproblems:

  - Generation of the configure script from the configure.ac and aclocal.m4
    files using autoconf.
  - Generation of code using the configure script, by computing
    configure-time values and filling them into the `.in` files, producing
    the files that lack the `.in` extension.

Phase 2: building

  This has been traditionally the job of make, and this is what hadrian should
  replace.


By subsuming the configure phase (by invoking ./configure) from hadrian we
lose the phase distinction, and if the `-c` flag is optional, users will
*not even see* a flag that indicates that the system will run `./configure`
for them.  This is the magic I'm referring to and to which I strongly
object.  If we can retire autoconf and do the whole configuration in
hadrian, that story may change.  But as long as we are using an
autoconf-based configuration we should *not* run it magically.  The `-c`
flag is at least there to show that hadrian is explicitly instructed to run
configure.
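
Concretely, the distinction argued for here is between the explicit
two-phase flow and the conflated one (commands sketched from the thread;
flags elided):

```sh
# Explicit: configuration and build are visibly separate phases.
./boot --hadrian
./configure
./hadrian/build.sh -j

# Conflated: -c makes hadrian invoke ./configure itself, hiding the
# configure phase behind the build step.
./hadrian/build.sh -c -j
```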

./configure supports its own set of flags; if hadrian subsumes those, we'd
need some generic way of passing flags to ./configure, at which point I have
to ask why we do this in the first place and call ./configure from within
hadrian at all.

Unless you want to reconfigure ghc, or hack on its autoconf part, you are
likely going to run only the following:

./boot --hadrian
./configure 
./hadrian/build.sh -j ...
./hadrian/build.sh -j ...
./hadrian/build.sh -j ...
./hadrian/build.sh -j ...
...

the configure step is required, and should be explicit. That is where you
configure your ghc build: set host/build/target values and other configure
flags that influence how you want your ghc to be configured. Hadrian is
there to build that configuration.  Mixing both may be convenient, but it
hides the fact that there is a ./configure step.  I consider this hiding to
be magic that is meant to benefit the user but obscures what's really going
on.  And again, I don't like magic.

Cheers,
 Moritz

PS: we also don't hide the `./configure` step in the usual
`./configure  && make -j` instructions when building other software,
even though you could surely hack that into your Makefile if you wanted to.
Why start with ghc now?


signature.asc
Description: Message signed with OpenPGP
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Discussion: Hadrian's defaults

2019-03-14 Thread Moritz Angermann


> On Mar 15, 2019, at 2:19 AM, Herbert Valerio Riedel  
> wrote:
> On Thu, Mar 14, 2019 at 4:20 PM Spiwack, Arnaud  
> wrote:
>   • The -c option should be the default.
> Very strong -1 from me on this one; I've been quite vocal on the Hadrian 
> issue tracker early on and multiple times against having Hadrian invoke 
> ./configure at all, even more so against having it do so by default. I don't 
> have the ticket number at my fingertips but it should be fairly easy to find.

I'm with Herbert here. I think that the `-c` flag should be mandatory if you 
want
hadrian to invoke autoconf magic.

I believe the confusion might stem from the newcomers guide[1]?  I'd rather see 
the
newcomers guide *not* use `-c`, and instead make it obvious to call `boot` and
`configure`.  These are essential steps and hiding them makes them less obvious.

`boot` does
- (1) checks that url rewrites are in place.
- (2) checks that all bootpackages are available
- (3) run autoreconf as needed
- (4) and generates a bunch of make files for the make based build system.

(4) can be disabled by passing `--hadrian` to `boot`.  (1) is needed due to
the relative submodules, I believe.

`configure` generates the necessary configuration files based on the configure 
flags
passed. And hadrian does the actual build step that `make` used to do.

It's magically conflating two different phases with `-c`. The configure phase 
and
the build phase. Making this the default means it's always magic. I don't like 
magic!

Cheers,
 Moritz

--
[1]: 
https://github.com/tdammers/ghc-wiki/blob/wip/newcomers/newcomers-tutorial.md


signature.asc
Description: Message signed with OpenPGP
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Why can't we re-install the template-haskell package?

2019-03-07 Thread Moritz Angermann
Hi Ryan, hi Richard,

> My (limited) understanding is that template-haskell is not reinstallable for 
> the same reasons that base and ghc-prim aren't reinstallable: the GHC 
> codebase directly wires in several definitions several functions, types, and 
> data constructors from template-haskell. See THNames [1]. If one were to, 
> say, use GHC 8.6 but install a different version of template-haskell than the 
> one that came bundled with 8.6, then it's almost certain that lots of code in 
> THNames would no longer work, since they could be referencing identifiers 
> that no longer exist (if the new version of template-haskell doesn't have 
> them).

Right, I'm mostly concerned about re-installing the *same* version again.  For 
the motivation:
template-haskell depends on pretty, deepseq and array.  Let's assume there is 
some off-by-one issue
in array that only affects my application right now.  Now I'm going to patch my 
array package, but
if my application depends on template-haskell, I end up with two different 
array packages in my dependency
tree that are not identical.  What I'd ideally like to do here is to re-install 
deepseq, pretty and
template-haskell based on my fixed array package.  Now I don't have two 
different array packages in my
dependencies anymore; it would however require me to be able to rebuild (the 
same version) of template-haskell.


> I have to admit I don't have a strong grasp on what "reinstallable" implies. 
> Does a package get the same hash after reinstalling? What could make a 
> package not reinstallable? Why aren't packages reinstallable today? Why isn't 
> ghc-prim reinstallable?

We can't re-install packages that depend on build-time values.  The RTS and
ghc-prim right now include files that are generated during the build
process, and they have no capability on their own to create those files; as
such, re-installing them is not possible right now.

> My concern stems from the fact that ghc is interlinked with TH in at least 
> two ways:
> - GHC imports definitions from template-haskell. But this is the same as the 
> way GHC is involved with, say, `base`.
> - GHC also wires in some template-haskell definitions. This is the aspect I 
> am worried about. Is `base` reinstallable? If so, then perhaps 
> template-haskell could be, too.


Now especially with TH I might see an issue when *running/using* TH, as at that 
point the compiler
and the produced code have to interact.  That is, we are compiling splices 
against a different
template-haskell package than the compiler is built against.  This is where I 
see *upgrading* (that
is building a newer version of Template Haskell) could be an issue, but I feel 
I don't fully grasp
why rebuilding the same version should pose an issue.

If we go one step further and use iserv (-fexternal-interpreter) outright, then 
I think we could just
rebuild iserv against the re-installed template-haskell, and would not even run 
into the above
mentioned issue. Might this potentially even allow us to upgrade 
template-haskell?

Cheers,
 Moritz


signature.asc
Description: Message signed with OpenPGP
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Why can't we re-install the template-haskell package?

2019-03-06 Thread Moritz Angermann
Dear friends,

as I'm working on making lib:ghc re-installable [1][2].

Ideally I'd prefer we only had to freeze rts + ghc-prim
and maybe base (which would implicate the integer library).

However, we apparently can't re-install Template Haskell
I'm told.  Not just not upgrade it, but not even re-install
it (same version). I've attached some rough dependency graph
(which I hope is correct for ghc 8.8). Fixing template-haskell
implies ghc-boot-th, pretty, deepseq and array.

Can someone shed some light onto the details here? What
are the fundamental issues we can't reinstall template-haskell?

Cheers,
 Moritz

--
[1]: https://gitlab.haskell.org/ghc/ghc/merge_requests/490
[2]: https://gitlab.haskell.org/ghc/ghc/merge_requests/506

ghc-8.8-deps.pdf
Description: Adobe PDF document


signature.asc
Description: Message signed with OpenPGP
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Hackage and ghc shipped package contents mismatch

2019-01-17 Thread Moritz Angermann
Dear friends,

the other day I've run into an issue where I could not build a package.  It 
turned out that
the underlying reason was that the packages ghc ships with are not necessarily 
the same
as they are on hackage. Ryan was kind enough to open Ticket 16199[1].

My use case is that I reduce the set of packages I use from the package 
database that ghc
ships, and build the others myself.  Thus if packages in the package database 
as shipped
with ghc do not match the ones found on hackage for the SAME VERSION, I run 
into issues.

I've since devised a script to compute the difference between packages in GHC 
and the ones
on hackage with the same version[2]. 

I've then set out to compute the differences for ghc8.4 and ghc8.6 using the 
following
approach:

for release in ghc-8.4.1-release ghc-8.4.2-release ghc-8.4.3-release ghc-8.4.4-release \
               ghc-8.6.4-release ghc-8.6.2-release ghc-8.6.3-release; do
  git checkout $release
  git reset --hard HEAD && git clean -xffd
  git submodule update --init --recursive && git clean -xffd
  ../verify-packages.sh
  mkdir -p ../$release
  find package-diffs -not -empty -type f -exec cp {} ../$release \;
done

I've compiled them [3] for inspection (and because I need to patch my
Hackage packages to match the ones that GHC ships).

I'm sorry to report that I found discrepancies between the packages we
ship with GHC and the ones on Hackage for each GHC release in the 8.4
and 8.6 series. Some changes are minor, such as version bumps, and
could be represented with revisions on Hackage as well. Others are not
so minor: the one over which I tripped was transformers-0.5.5.0; the
diff [4] is 20K.

I therefore propose that we make sure to only ship packages with GHC
that match their respective versions on Hackage.

Cheers,
 Moritz

--
[1]: http://ghc.haskell.org/trac/ghc/ticket/16199
[2]: https://gitlab.haskell.org/ghc/ghc/merge_requests/139
[3]: https://github.com/angerman/haskell.nix/tree/master/patches
[4]: 
https://github.com/angerman/haskell.nix/blob/master/patches/ghc863/transformers-0.5.5.0.patch


Re: MR does not merge

2019-01-16 Thread Moritz Angermann
I wonder if GitLab could have a feature like what bors offers:
validate and merge, or [rebase, validate and merge]+. Thus it would
eventually merge the MR, or reject it due to a conflict or validation
failure.

Sent from my iPhone

> On 16 Jan 2019, at 10:55 PM, Matthew Pickering  
> wrote:
> 
> There is problem with the interaction between "merge when validated"
> and "fast forward merge only" option.
> 
> If anyone commits to master between clicking the button and validation
> finishing then the merge will fail as the patch needs to be rebased
> before it can be merged.
> 
> I'm not sure what the plan to deal with this is.
> 
> On Wed, Jan 16, 2019 at 2:49 PM Simon Peyton Jones via ghc-devs
>  wrote:
>> 
>> Ben
>> 
>> Six days ago I submitted this MR
>> 
>> https://gitlab.haskell.org/ghc/ghc/merge_requests/109
>> 
>> Just tiny refactorings.  I said “merge when validated”
>> 
>> But six days later, it still appears not to have merged.  What’s up?  I was 
>> expecting it to merge in a matter of an hour or two.
>> 
>> Thanks
>> 
>> Simon
>> 


Re: GitLab forks and submodules

2019-01-10 Thread Moritz Angermann
Alright, let me add an example of something that is really painful
with submodules.

Say I have a custom ghc fork, angerman/ghc, because I really don't want
to overload CI with all my stupidity and I *know* I'd forget to mark
every commit with [skip ci] or something.

Now I need to modify a bunch of submodules as well, say
- libraries/bytestring
- libraries/unix

And next I want to have someone else collaborate on this with me, either
for testing or contributing or what not.

So I'm going to give them the following commands to run:

git clone --recursive https://gitlab.haskell.org/ghc/ghc
(cd ghc && git remote add angerman https://gitlab.haskell.org/angerman/ghc)
(cd ghc && git fetch --all)
(cd ghc/libraries/bytestring && git remote add angerman https://github.com/angerman/bytestring && git fetch --all)
(cd ghc/libraries/unix && git remote add angerman https://github.com/angerman/unix && git fetch --all)
(cd ghc && git checkout angerman/awesome/sauce)
(cd ghc && git submodule update --init --recursive)

instead of

git clone --recursive https://gitlab.haskell.org/angerman/ghc --branch awesome/sauce

Of course that would require me to change the absolute paths for
bytestring and unix in my repo. So maybe it's only 5 instead of 7
commands that I'd need to remember, tell people, and type, and ...
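
For reference, the trade-off under discussion lives in `.gitmodules`:
a relative submodule URL resolves against whatever remote the
superproject was cloned from, while an absolute URL pins the submodule
to one host regardless of the fork. The two entries below are
illustrative alternatives, not GHC's exact file:

```ini
; Relative URL: a clone of gitlab.haskell.org/angerman/ghc will look
; for the submodule next to the fork, e.g. under angerman/, which
; usually does not exist.
[submodule "libraries/bytestring"]
	path = libraries/bytestring
	url = ../packages/bytestring.git

; Absolute URL: every clone fetches from the canonical upstream,
; no matter where the superproject itself came from.
[submodule "libraries/bytestring"]
	path = libraries/bytestring
	url = https://gitlab.haskell.org/ghc/packages/bytestring.git
```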

Cheers,
 Moritz

> On Jan 8, 2019, at 11:16 PM, Carter Schonwald  
> wrote:
> 
> Depending on the patch,  the ci feedback may be fundamental.  Eg some of the 
> native code gen hackery im doing impacts a whole bunch of configurations I 
> can’t do locally.
> 
> We could also have a wip/no-ci prefix ?
> 
> Either way it’s certainlu true that we have finite resources and should 
> endeavor to use them thoughtfully
> 
> On Tue, Jan 8, 2019 at 5:32 AM Matthew Pickering 
>  wrote:
> I agree with Omer that we shouldn't encourage people to push wip branches to 
> ghc/ghc. It wastes resources and pollutes the repo with lots of branches that 
> will invariably not be deleted.
> 
> I would rather we use absolute paths in the submodule file as I have spent 
> far longer than I expected trying to get git to use the right submodule in 
> the past when operating on forks.
> 
> Matt
> 
> 
> On Tue, 8 Jan 2019, 10:09 Gabor Greif wrote:
> You can specify `[skip ci]` in the commit message if you don't want to
> run the pipeline. When you are done, just amend your commit with the
> finalised note.
> 
> Gabor
> 
> On 1/8/19, Ömer Sinan Ağacan  wrote:
> >> As I mention in the documentation, those with commits bits should feel
> >> free to push branches to ghc/ghc.
> >
> > This is sometimes not ideal as it wastes GHC's CI resources. For example I
> > make
> > a lot of WIP commits to my work branches, and I don't want to keep CI
> > machines
> > busy for those.
> >
> > Ömer
> >
> > Ben Gamari wrote on Tue, 8 Jan 2019 at 04:53:
> >>
> >> Moritz Angermann  writes:
> >>
> >> > Can’t we have absolute submodule paths? Wouldn’t that alleviate the
> >> > issue?
> >> >
> >> Perhaps; I mentioned this possibility in my earlier response. It's not
> >> clear which trade-off is better overall, however.
> >>
> >> > When we all had branches on ghc/ghc this
> >> > was not an issue.
> >> >
> >> As I mention in the documentation, those with commits bits should feel
> >> free to push branches to ghc/ghc.
> >>
> >> Cheers,
> >>
> >> - Ben





Re: GitLab forks and submodules

2019-01-07 Thread Moritz Angermann
Can’t we have absolute submodule paths? Wouldn’t that alleviate the issue?

When we all had branches on ghc/ghc this 
was not an issue.

Sent from my iPhone

> On 8 Jan 2019, at 5:24 AM, Ben Gamari  wrote:
> 
> Simon Peyton Jones via ghc-devs  writes:
> 
>> Would it be worth describing this workflow explicitly in our "How to
>> use GitLab for GHC development" page?
>> 
> Yes, indeed it would. I have asked David, who is currently looking at
> revising our contributor documentation, to do so.
> 
> Cheers,
> 
> - Ben
> 


GitLab forks and submodules

2019-01-06 Thread Moritz Angermann
Hi *,

so what do we do with submodules? If you point someone to a fork of ghc, say:

  gitlab.haskell.org/foo/ghc

and they try to check it out, they will run into issues because foo
didn't clone all the submodules. So how is one supposed to clone a
forked ghc repository?

Cheers,
 Moritz




Re: Treatment of unknown pragmas

2018-10-17 Thread Moritz Angermann
Does this need to be *this* hardcoded? Or could we just parse the
pragma and compare it to a list of known pragmas, read from a file (or
a settings value?).

The change in question does:

-pragmas = options_pragmas ++ ["cfiles", "contract"]
+pragmas = options_pragmas ++ ["cfiles", "contract", "hlint"]

to `compiler/parser/Lexer.x`, and as such is somewhat hardcoded. So we
already ignore a bunch of `option_` pragmas and those three.

And I see


<0,option_prags> {
 "{-#"  { warnThen Opt_WarnUnrecognisedPragmas (text "Unrecognised pragma")
   (nested_comment lexToken) }
}

which I believe handles the unrecognisedPragmas case.

Can't we have an ignored-pragmas value in the settings that just lists
all the pragmas we want to ignore, instead of hardcoding them in the
Lexer?

That at least feels to me like a less invasive (and easier to adapt)
approach, one that might be less controversial. Yes, it's just moving
the goal posts, but it moves the logic into a runtime value instead of
a compile-time value.
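
As a toy sketch of that idea (purely illustrative; the name
`ignoredPragmas` and reading the list from settings are assumptions,
not GHC's actual API), the lexer could consult a runtime list instead
of a compiled-in one:

```haskell
import Data.Char (toLower)

-- Hypothetical: in GHC this list would be read from the settings file
-- at startup; here it is a constant for illustration only.
ignoredPragmas :: [String]
ignoredPragmas = ["cfiles", "contract", "hlint"]

-- Compare case-insensitively, since pragma names are matched without
-- regard to case.
isIgnoredPragma :: String -> Bool
isIgnoredPragma name = map toLower name `elem` ignoredPragmas

main :: IO ()
main = mapM_ (print . isIgnoredPragma) ["HLINT", "hlint", "NOINLINE"]
```

The point is only that the membership test stays the same; what changes
is where the list comes from.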

Cheers,
Moritz

> On Oct 17, 2018, at 4:05 PM, Simon Marlow  wrote:
> 
> Simon - GHC provides some protection against mistyped pragma names, in the 
> form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} pragmas. 
> If tools decide to use their own pragma syntax, they don't benefit from this. 
> That's one downside, in addition to the others that Neil mentioned.
> 
> You might say we shouldn't care about mistyped pragma names. If the user 
> accidentally writes {- HLNIT -} and it is silently ignored, that's not our 
> problem. OK, but we cared about it enough for the pragmas that GHC 
> understands to add the special warning, and it's reasonable to expect that 
> HLint users also care about it. 
> 
> (personally I have no stance on whether we should have this warning, there 
> are upsides and downsides. But that's where we are now.)
> 
> Cheers
> Simon
> 
> On Tue, 16 Oct 2018 at 23:34, Simon Peyton Jones  
> wrote:
> I’m still not understanding what’s wrong with
> 
> 
> 
> {- HLINT blah blah -}
> 
> 
> 
> GHC will ignore it.  HLint can look at it.  Simple.
> 
> 
> 
> I must be missing something obvious.
> 
> 
> 
> Simon
> 
> 
> 
> From: ghc-devs  On Behalf Of Simon Marlow
> Sent: 16 October 2018 21:44
> To: Neil Mitchell 
> Cc: ghc-devs 
> Subject: Re: Treatment of unknown pragmas
> 
> 
> 
> I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed 
> like the least worst option taking into account the various issues that have 
> already been described in this thread. I'm OK with adding HLINT; after all we 
> already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, 
> CFILES (a Hugs relic), and several more that GHC ignores.
> 
> 
> 
> We can either
> 
> (a) not protect people from mistyped pragmas, or
> 
> (b) protect people from mistyped pragma names, but then we have to bake in 
> the set of known pragmas
> 
> 
> 
> We could choose to have a different convention for pragmas that GHC doesn't 
> know about (as Ben suggests), but then of course we don't get any protection 
> for mistyped pragma names when using that convention.
> 
> 
> 
> Cheers
> 
> Simon
> 
> 
> 
> 
> 
> On Tue, 16 Oct 2018 at 21:12, Neil Mitchell  wrote:
> 
>> A warning flag is an interesting way to deal with the issue. On the
>> other hand, it's not great from an ergonomic perspective; afterall, this
>> would mean that all users of HLint (and any other tool requiring special
> 
> Yep, this means every HLint user has to do an extra thing. I (the
> HLint author) now have a whole pile of "how do I disable warnings in
> Stack", and "what's the equivalent of this in Nix". Personally, it ups
> the support level significantly that I wouldn't go this route.
> 
> I think it might be a useful feature in general, as new tools could
> use the flag to prototype new types of warning, but I imagine once a
> feature gets popular it becomes too much fuss.
> 
>>> I think it makes a lot of sense to have a standard way for third-parties
>>> to attach string-y information to Haskell source constructs. While it's
>>> not strictly speaking necessary to standardize the syntax, doing
>>> so minimizes the chance that tools overlap and hopefully reduces
>>> the language ecosystem learning curve.
>> 
>> This sounds exactly like the existing ANN pragma, which is what I've wanted 
>> LiquidHaskell to move towards for a long time. What is wrong with using the 
>> ANN pragma?
> 
> Significant compilation performance penalty and extra recompilation.
> ANN pragmas is what HLint currently uses.
> 
>> I'm a bit skeptical of this idea. Afterall, adding cases to the
>> lexer for every tool that wants a pragma seems quite unsustainable.
> 
> I don't find this argument that convincing. Given the list already
> includes CATCH and DERIVE, the bar can't have been _that_ high to
> entry. And yet, the list remains pretty short. My guess is the demand
> is pretty low - we're 

Re: Does it sound a good idea to implement "backend plugins"?

2018-10-04 Thread Moritz Angermann
A long time ago, I tried to inject plugin logic to allow some control
over the driver pipeline (phase ordering) and to hook various code-gen
related functions.

See https://phabricator.haskell.org/D535

At that time I ran into issues that might simply not exist with
plugins anymore today, but I haven't looked.

The whole design wasn't quite right and injected everything into the
DynFlags. Also, GHC wanted to be able to compile the plugin on the fly,
but I needed the plugin to be loaded very early during the startup
phase to exert enough control over the rest of the pipeline through the
plugin.

Cheers,
 Moritz

Sent from my iPhone

On 5 Oct 2018, at 1:52 AM, Shao, Cheng  wrote:

>> Adding "pluggable backends" to spin up new targets seems to require quite a 
>> bit of additional infrastructure for initialising a library directory and 
>> package database. But there are probably more specific use cases that need 
>> inspecting/modifying STG or Cmm where plugins would already be useful in 
>> practice.
> 
> I think setting up a new global libdir/pkgdb is beyond the scope of
> backend plugins. The user shall implement his/her own boot script to
> configure for the new architecture, generate relevant headers, run
> Cabal's Setup program to launch GHC with the plugin loaded.
> 
>> Hooks (or rather their locations in the pipeline) are rather ad hoc by 
>> nature, but for Asterius a hook that takes Cmm and takes over from there 
>> seems like a reasonable approach given the current state of things. I think 
>> the Cmm hook you implemented (or something similar) would be perfectly 
>> acceptable to use for now.
> 
> For the use case of asterius itself, indeed Hooks already fit the use
> case for now. But since we seek to upstream our newly added features
> in our ghc fork back to ghc hq, we should upstream those changes early
> and make them more principled. Compared to Hooks, I prefer to move to
> Plugins entirely since:
> 
> * Plugins are more composable, you can load multiple plugins in one
> ghc invocation. Hooks are not.
> * If I implement the same mechanisms in Plugins, this can be
> beneficial to other projects. Currently, in asterius, everything works
> via a pile of hacks upon hacks in ghc-toolkit, and it's not good for
> reuse.
> * The newly added backend plugins shouldn't have visible
> correctness/performance impact if they're not used, and it's just a
> few local modifications in the ghc codebase.
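
[To illustrate the composability point above with a toy sketch: the
types and field names below are placeholders, not GHC's actual Plugin
or Cmm API. A hook slot admits exactly one override, whereas a list of
plugin passes can be chained in sequence.]

```haskell
-- Placeholder IR type; GHC's real Cmm declarations are far richer.
newtype CmmDecl = CmmDecl String deriving (Eq, Show)

-- A hypothetical backend plugin contributing one Cmm-to-Cmm pass.
newtype BackendPlugin = BackendPlugin
  { cmmPass :: [CmmDecl] -> IO [CmmDecl] }

-- Unlike a single hook, several plugins compose: each pass runs on the
-- output of the previous one.
runCmmPasses :: [BackendPlugin] -> [CmmDecl] -> IO [CmmDecl]
runCmmPasses plugins decls =
  foldl (\acc p -> acc >>= cmmPass p) (pure decls) plugins

-- Two example passes: one drops empty declarations, one annotates.
dropEmpty, annotate :: BackendPlugin
dropEmpty = BackendPlugin (pure . filter (\(CmmDecl s) -> s /= ""))
annotate  = BackendPlugin (pure . map (\(CmmDecl s) -> CmmDecl ("ann:" ++ s)))

main :: IO ()
main = runCmmPasses [dropEmpty, annotate] [CmmDecl "f", CmmDecl ""] >>= print
```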
> 
>>> On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng  wrote:
>>> 
>>> Hi all,
>>> 
>>> I'm thinking of adding "backend plugins" in the current Plugins
>>> mechanism which allows one to inspect/modify the IRs post simplifier
>>> pass (STG/Cmm), similar to the recently added source plugins for HsSyn
>>> IRs. This can be useful for anyone creating a custom GHC backend to
>>> target an experimental platform (e.g. the Asterius compiler which
>>> targets WebAssembly), and previously in order to retrieve those IRs
>>> from the regular pipeline, we need to use Hooks which is somewhat
>>> hacky.
>>> 
>>> Does this sound a good idea to you? If so, I can open a trac ticket
>>> and a Phab diff for this feature.
>>> 
>>> Best,
>>> Shao Cheng


Re: Non-Reinstallable packages

2018-08-06 Thread Moritz Angermann
Dear friends,

we have a set of non-reinstallable packages with GHC, these
include iirc template-haskell, and some other.  I've got
a few questions concerning those:

- do we have a complete up-to-date list of those?
- why can't we reinstall them (let's assume we use the
 identical version for now; and don't upgrade)
- does this also hold if we essentially build a stage3
 compiler with packages?

Our usual build process is:
1. take a bootstrap compiler, which doesn't need to have
   the same version as the final compiler.
2. build the libraries necessary to build the stage1 compiler,
   while ensuring we build some extra libraries as well,
   so we don't have to rely on those shipped with the bootstrap
   compiler.
3. use the stage1 compiler to build all libraries we want to ship
   with the stage2 compiler; and build the stage2 compiler.

Now I do understand that the stage1 compiler could potentially be
tainted by the boot-strap compiler and as such yield different
libraries compared to what the stage2 compiler would yield.

Shouldn't rebuilding any library with the stage1 compiler yield the
same libraries these days?

If the bootstrap compiler is the same version as the one we build,
shouldn't the stage2 compiler be capable of building good enough
libraries as well, so that we can reinstall them?

What I ideally would like to have is a minimal compiler:
ghc + rts; then keep building all the libraries from the ground up.

A potential problem I see is that if we use dynamic libraries and
get into TH, we could run into issues where we want to link libraries
that are different from the ones the ghc binary links against.
Would this also hold if we used `-fexternal-interpreter` only?

Cheers,
Moritz




