Re: GHC development asks too much of the host system

2022-07-19 Thread Rodrigo Mesquita
Dear Ben,

The list of tips you put together is quite nice.

I suggest we add it to hadrian’s wiki page under a “Tips for making your life 
easier” section (as is, it is already useful! at least I learned something new).

Cheers
Rodrigo

> On 19 Jul 2022, at 21:11, Ben Gamari  wrote:
> 
> Hécate  writes:
> 
>> Hello ghc-devs,
>> 
>> I hadn't made significant contributions to the GHC code base in a while, 
>> until a few days ago, where I discovered that my computer wasn't able to 
>> sustain running the test suite, nor handle HLS well.
>> 
>> Whether it is my OS automatically killing the process due to oom-killer 
>> or just the fact that I don't have a war machine, I find it too bad and 
>> I'm frankly discouraged.
> 
> Do you know which process was being killed? There is one testsuite test
> that I know of which has quite a considerable memory footprint
> (T16992) due to its nature; otherwise I would expect a reasonably recent
> machine to pass the testsuite without much trouble. It's particularly
> concerning if this is a new regression; is this the first time you have
> observed this particular failure?
> 
>> This is not the first time such feedback emerges, as the documentation 
>> task force for the base library was unable to properly onboard some 
>> people from third-world countries who do not have access to hardware 
>> we'd consider "standard" in western Europe or some parts of North 
>> America. Or at least "standard" until even my standard stuff didn't cut 
>> it anymore.
>> 
>> So yeah, I'll stay around but I'm afraid I'm going to have to focus on 
>> projects for which the feedback loop is not on the scale of hours, as 
>> this is a hobby project.
>> 
>> Hope this will open some eyes.
>> 
> Hi Hécate,
> 
> I would reiterate that the more specific feedback you can offer, the
> better.
> 
> To share some of my own experience: I have access to a variety of hardware,
> some of which is quite powerful. However, I find that I end up doing
> much of my development on my laptop which, while certainly not a slouch
> (being a Ryzen 4750U), is also not a monster. In particular, while a
> fresh build takes nearly twice as long on my laptop as on some of the
> other hardware I have, I nevertheless find ways to make it worthwhile
> (due to the ease of iteration compared to ssh). If you routinely have
> multi-hour iteration times then something isn't right.
> 
> In particular, I think there are a few tricks which make life far
> easier:
> 
> 
> * Be careful about doing things that would incur
>   significant amounts of rebuilding. This includes:
> 
>* After modifying `compiler/ghc.cabal.in` (e.g. to add a new
>  module to GHC), update `compiler/ghc.cabal` manually instead of
>  rerunning `configure`.
> 
>* Be careful about pulling/rebasing. I generally pick a base commit to
>  build off of and rebase sparingly: having to stop what I'm doing to
>  wait for a full rebuild is an easy way to lose momentum.
> 
>* Avoid switching branches; I generally have a GHC tree per on-going
>  project.
> 
> * Take advantage of Hadrian's `--freeze1` flag
> 
> * Use `hadrian/ghci` to typecheck changes
> 
> * Use the stage1 compiler instead of stage2 to smoke-test changes when
>   possible (specifically, using the script generated by Hadrian's
>   `_build/ghc-stage1` target).
> 
> * Use the right build flavour for the task at hand: If I don't need a
>   performant compiler and am confident that I can get by without
>   thorough testsuite validation, I use `quick`. Otherwise, plan ahead
>   for what you need (e.g. `default+assertions+debug_info` or
>   `validate`)
> 
> * Run the fraction of the testsuite that is relevant to your change.
>   Hadrian's `--test-way` and `--only` flags are your friends (see the
>   example invocations after this list).
> 
> * Take advantage of CI. At the moment we have a fair amount of CI
>   capacity. If you think that your change is close to working, you can
>   open an MR and start a build locally. If it fails, iterate on just the
>   failing testcases locally.
> 
> * Task-level parallelism. Admittedly, this is harder when you are
>   working as a hobby, but I often have two or three projects on-going
>   at a time. While one tree is building I try to make progress on
>   another.
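> 
> To make some of these concrete, here are example invocations (the test name
> and source file below are illustrative, not taken from a real change):
> 
>   # Quick flavour, keeping the stage-1 compiler frozen while iterating:
>   hadrian/build -j --flavour=quick --freeze1
> 
>   # Load the compiler into GHCi to typecheck changes:
>   hadrian/ghci
> 
>   # Smoke-test with the stage-1 compiler:
>   _build/ghc-stage1 -O Foo.hs
> 
>   # Run only the tests relevant to the change, in a single way:
>   hadrian/build -j test --only="T12345" --test-way=normal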
> 
> I don't use HLS so I may be insulated from some of the pain in this
> regard. However, I do know that Matt is a regular user and he
> disables most plugins.
> 
> I would also say that, sadly, GHC is comparable to other similarly-sized
> compilers in its build time: A build of LLVM (not even clang) takes ~50
> minutes on my 8-core desktop; impressively, rustc takes ~7 minutes
> although it is a considerably smaller compiler (being just a front-end).
> By contrast, GHC takes around 20 minutes. I know that this doesn't
> make the cost any easier to bear and I would love to bring this number
> down, but ultimately there are only so many hours in the day.
> 
> I think one underexplored approach to addressing the build-time 

Re: GHC 9.6.1 rejects previously working code

2023-04-12 Thread Rodrigo Mesquita
Indeed, this is included in the GHC 9.6.x Migration Guide.

Unfortunately, I’m also not sure there is a solution for this particular case, 
where (T m) is only a Monad if m is an instance of MonadIO.
As Tom explained, under transformers 0.6 `T` is no longer a monad transformer.

A few workarounds I can think of:

- Drop the `MonadTrans T` instance, and use an instance `MonadIO m => MonadIO 
(T m)` instead (sketched below).
  Rationale: if you always require `m` to be `MonadIO`, perhaps the ability to 
lift IO actions directly into `T m` with `liftIO` is sufficient.

- Add the `MonadIO` constraint to the `m` field of `T`, GADT style: `data T m a 
where T :: MonadIO m => m a -> T m a`
  Rationale: you would no longer need `MonadIO` in the `Monad` instance, which 
may make it possible to define a `MonadTrans` instance.

- Define your own `lift` function, independent of `MonadTrans`.
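
A minimal sketch of the first workaround (the concrete shape of `T` below is an 
assumption, since the original definition isn’t shown; here it is just a 
newtype over the inner monad):

    import Control.Monad.IO.Class (MonadIO (..))

    newtype T m a = T { runT :: m a }

    instance Functor m => Functor (T m) where
      fmap f (T ma) = T (fmap f ma)

    instance Applicative m => Applicative (T m) where
      pure = T . pure
      T mf <*> T ma = T (mf <*> ma)

    -- The Monad instance keeps its MonadIO constraint, as in the original code:
    instance MonadIO m => Monad (T m) where
      T ma >>= k = T (ma >>= runT . k)

    -- Instead of a MonadTrans instance (which transformers 0.6 requires to
    -- work for every Monad m), lift IO actions through MonadIO:
    instance MonadIO m => MonadIO (T m) where
      liftIO = T . liftIO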

Good luck!
Rodrigo

> On 12 Apr 2023, at 10:10, Tom Ellis 
>  wrote:
> 
> On Wed, Apr 12, 2023 at 02:32:43PM +0530, Harendra Kumar wrote:
>> instance MonadIO m => Monad (T m) where
>>   return = pure
>>   (>>=) = undefined
>> 
>> instance MonadTrans T where
>>   lift = undefined
> 
> I guess it's nothing to do with 9.6 per se, but rather the difference
> between
> 
> * 
> https://hackage.haskell.org/package/transformers-0.5.6.2/docs/Control-Monad-Trans-Class.html#t:MonadTrans
> 
> * 
> https://hackage.haskell.org/package/transformers-0.6.1.0/docs/Control-Monad-Trans-Class.html#t:MonadTrans
> 
> I'm not sure I can see any solution for this.  A monad transformer `T`
> must give rise to a monad `T m` regardless of what `m` is.  If `T m`
> is only a monad when `MonadIO m` then `T` can't be a monad transformer
> (under transformers 0.6).
> 
> Tom
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: CI *sad face*

2023-06-27 Thread Rodrigo Mesquita
The root of the second problem was !10723, which started failing on its own 
pipeline after being rebased.
I’m pushing a fix.

- Rodrigo

> On 28 Jun 2023, at 06:41, Bryan Richter via ghc-devs  
> wrote:
> 
> Two things are negatively impacting GHC CI right now:
> 
> Darwin runner capacity is down to one machine, since the other three are 
> paused. The problem and solution are known[1], but until the fix is 
> implemented in GHC, expect pipelines to get backed up. I will work on a patch 
> this morning
> 
> [1]: https://gitlab.haskell.org/ghc/ghc/-/issues/23561
> 
> The other problem is one I just noticed, and I don't have any good info about 
> it yet. The symptom is that Marge batch merges are failing reliably. Three 
> patches that do fine individually somehow cause a type error in the 
> hadrian-ghc-in-ghci job when combined[2]. The only clue is the error itself, 
> which complains of an out-of-scope data constructor "ArchJavaScript" in the 
> file compiler/GHC/Driver/Main.hs. A cursory look at the individual patches 
> doesn't shed any light. I just rebased all of them to see if I can shake the 
> error out of them that way. Any knowledge that can be brought to bear would 
> be appreciated
> 
> [2]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/10745#note_507418
> 
> -Bryan
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Can't build nofib

2023-07-12 Thread Rodrigo Mesquita
From the error message it looks like you’re using ghc-9.6 (and base 4.18) while 
nofib requires base < 4.17.
I’d say as a temporary workaround you can likely run your invocation 
additionally with --allow-newer, and hope that doesn’t break. Otherwise you 
could downgrade to 9.4 or bump the version manually in the cabal file of nofib?
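
For example, something along these lines (untested; it is just the original 
invocation with the flag added):

    (cd nofib; cabal v2-run --allow-newer=base -- nofib-run --compiler=`pwd`/../_build/stage1/bin/ghc --output=`date -I`)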

Rodrigo

> On 12 Jul 2023, at 12:38, Simon Peyton Jones  
> wrote:
> 
> Friends
> 
> With a clean HEAD I can't build nofib.  See below.  What should I do?
> 
> Thanks
> 
> Simon
> 
> (cd nofib; cabal v2-run -- nofib-run 
> --compiler=`pwd`/../_build/stage1/bin/ghc --output=`date -I`)
> Resolving dependencies...
> Error: cabal: Could not resolve dependencies:
> [__0] trying: nofib-0.1.0.0 (user goal)
> [__1] next goal: base (dependency of nofib)
> [__1] rejecting: base-4.18.0.0/installed-4.18.0.0 (conflict: nofib =>
> base>=4.5 && <4.17)
> [__1] skipping: base-4.18.0.0, base-4.17.1.0, base-4.17.0.0 (has the same
> characteristics that caused the previous version to fail: excluded by
> constraint '>=4.5 && <4.17' from 'nofib')
> [__1] rejecting: base-4.16.4.0, base-4.16.3.0, base-4.16.2.0, base-4.16.1.0,
> base-4.16.0.0, base-4.15.1.0, base-4.15.0.0, base-4.14.3.0, base-4.14.2.0,
> base-4.14.1.0, base-4.14.0.0, base-4.13.0.0, base-4.12.0.0, base-4.11.1.0,
> base-4.11.0.0, base-4.10.1.0, base-4.10.0.0, base-4.9.1.0, base-4.9.0.0,
> base-4.8.2.0, base-4.8.1.0, base-4.8.0.0, base-4.7.0.2, base-4.7.0.1,
> base-4.7.0.0, base-4.6.0.1, base-4.6.0.0, base-4.5.1.0, base-4.5.0.0,
> base-4.4.1.0, base-4.4.0.0, base-4.3.1.0, base-4.3.0.0, base-4.2.0.2,
> base-4.2.0.1, base-4.2.0.0, base-4.1.0.0, base-4.0.0.0, base-3.0.3.2,
> base-3.0.3.1 (constraint from non-upgradeable package requires installed
> instance)
> [__1] fail (backjumping, conflict set: base, nofib)
> After searching the rest of the dependency tree exhaustively, these were the
> goals I've had most trouble fulfilling: base, nofib
> 
> make: *** [/home/simonpj/code/Makefile-spj:39: nofib] Error 1
> 
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Can't build nofib

2023-07-12 Thread Rodrigo Mesquita
I would recommend --allow-newer rather than rebuilding with 9.4. In retrospect, 
9.4 implies base == 4.17, but nofib seems to only allow < 4.17, which would 
leave 9.4 out.

Rodrigo

> On 12 Jul 2023, at 12:48, Simon Peyton Jones  
> wrote:
> 
> Thanks.  That is very unfortunate: ./configure does not issue any complaint.
> 
> I upgraded from 9.2 because GHC won't compile with 9.2 any more.  But now you 
> are saying that nofib won't build with 9.6?  So that leaves 9.4 only.
> 
> Well I can install 9.4 and rebuild everything.  But really, it would be good 
> if configure complained if you are using a boot compiler that won't work.  
> That's what configure is for!
> 
> Simon
> 
> On Wed, 12 Jul 2023 at 12:41, Rodrigo Mesquita  <mailto:rodrigo.m.mesqu...@gmail.com>> wrote:
>> From the error message it looks like you’re using ghc-9.6 (and base 4.18) 
>> while nofib requires base < 4.17.
>> I’d say as a temporary workaround you can likely run your invocation 
>> additionally with --allow-newer, and hope that doesn’t break. Otherwise you 
>> could downgrade to 9.4 or bump the version manually in the cabal file of 
>> nofib?
>> 
>> Rodrigo
>> 
>>> On 12 Jul 2023, at 12:38, Simon Peyton Jones >> <mailto:simon.peytonjo...@gmail.com>> wrote:
>>> 
>>> Friends
>>> 
>>> With a clean HEAD I can't build nofib.  See below.  What should I do?
>>> 
>>> Thanks
>>> 
>>> Simon
>>> 
>>> (cd nofib; cabal v2-run -- nofib-run 
>>> --compiler=`pwd`/../_build/stage1/bin/ghc --output=`date -I`)
>>> Resolving dependencies...
>>> Error: cabal: Could not resolve dependencies:
>>> [__0] trying: nofib-0.1.0.0 (user goal)
>>> [__1] next goal: base (dependency of nofib)
>>> [__1] rejecting: base-4.18.0.0/installed-4.18.0.0 (conflict: nofib =>
>>> base>=4.5 && <4.17)
>>> [__1] skipping: base-4.18.0.0, base-4.17.1.0, base-4.17.0.0 (has the same
>>> characteristics that caused the previous version to fail: excluded by
>>> constraint '>=4.5 && <4.17' from 'nofib')
>>> [__1] rejecting: base-4.16.4.0, base-4.16.3.0, base-4.16.2.0, base-4.16.1.0,
>>> base-4.16.0.0, base-4.15.1.0, base-4.15.0.0, base-4.14.3.0, base-4.14.2.0,
>>> base-4.14.1.0, base-4.14.0.0, base-4.13.0.0, base-4.12.0.0, base-4.11.1.0,
>>> base-4.11.0.0, base-4.10.1.0, base-4.10.0.0, base-4.9.1.0, base-4.9.0.0,
>>> base-4.8.2.0, base-4.8.1.0, base-4.8.0.0, base-4.7.0.2, base-4.7.0.1,
>>> base-4.7.0.0, base-4.6.0.1, base-4.6.0.0, base-4.5.1.0, base-4.5.0.0,
>>> base-4.4.1.0, base-4.4.0.0, base-4.3.1.0, base-4.3.0.0, base-4.2.0.2,
>>> base-4.2.0.1, base-4.2.0.0, base-4.1.0.0, base-4.0.0.0, base-3.0.3.2,
>>> base-3.0.3.1 (constraint from non-upgradeable package requires installed
>>> instance)
>>> [__1] fail (backjumping, conflict set: base, nofib)
>>> After searching the rest of the dependency tree exhaustively, these were the
>>> goals I've had most trouble fulfilling: base, nofib
>>> 
>>> make: *** [/home/simonpj/code/Makefile-spj:39: nofib] Error 1
>>> 
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org <mailto:ghc-devs@haskell.org>
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>> 

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Why do the reverse binder swap transformation?

2023-07-14 Thread Rodrigo Mesquita
Dear GHC devs,

I’m wondering about the reverse binder swap transformation, the one in which we 
substitute occurrences of the case binder by occurrences of the scrutinee (when 
the scrut. is a variable):

case x of z { r -> e }
===>
case x of z { r -> e[x/z] }

My question is: why do we do this transformation? An example in which this 
transformation is beneficial would be great too.

The Note I’ve found about it, Note [Binder-swap during float-out], wasn’t 
entirely clear to me:

4. Note [Binder-swap during float-out]
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In the expression
        case x of wild { p -> ...wild... }
   we substitute x for wild in the RHS of the case alternatives:
        case x of wild { p -> ...x... }
   This means that a sub-expression involving x is not "trapped" inside the RHS.
   And it's not inconvenient because we already have a substitution.

   Note that this is EXACTLY BACKWARDS from what the simplifier does.
   The simplifier tries to get rid of occurrences of x, in favour of wild,
   in the hope that there will only be one remaining occurrence of x, namely
   the scrutinee of the case, and we can inline it.

Many thanks,
Rodrigo

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Why do the reverse binder swap transformation?

2023-07-14 Thread Rodrigo Mesquita
That’s a great example, it’s much clearer now.

I’ve improved the note and added this example to it. It’s !10875.

Thanks,
Rodrigo

> On 14 Jul 2023, at 16:53, Simon Peyton Jones  
> wrote:
> 
> Consider
> 
> f x = letrec go y = case x of z { (a,b) -> ...(expensive z)... }
>       in ...
> 
> If we do the reverse binder-swap we get
> 
> f x = letrec go y = case x of z { (a,b) -> ...(expensive x)... }
>       in ...
> 
> and now we can float out:
> 
> f x = let t = expensive x
>       in letrec go y = case x of z { (a,b) -> ...(t)... }
>              in ...
> 
> Now (expensive x) is computed once, rather than once each time around the
> 'go' loop.
> 
> Would you like to elaborate the Note to explain this better?
> 
> Simon
> 
> 
> On Fri, 14 Jul 2023 at 16:30, Rodrigo Mesquita  <mailto:rodrigo.m.mesqu...@gmail.com>> wrote:
>> Dear GHC devs,
>> 
>> I’m wondering about the reverse binder swap transformation, the one in which 
>> we substitute occurrences of the case binder by occurrences of the scrutinee 
>> (when the scrut. is a variable):
>> 
>> case x of z { r -> e }
>> ===>
>> case x of z { r -> e[x/z] }
>> 
>> My question is: why do we do this transformation? An example in which this 
>> transformation is beneficial would be great too.
>> 
>> The Note I’ve found about it, Note [Binder-swap during float-out], wasn’t 
>> entirely clear to me:
>> 
>> 4. Note [Binder-swap during float-out]
>>    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>    In the expression
>>         case x of wild { p -> ...wild... }
>>    we substitute x for wild in the RHS of the case alternatives:
>>         case x of wild { p -> ...x... }
>>    This means that a sub-expression involving x is not "trapped" inside the RHS.
>>    And it's not inconvenient because we already have a substitution.
>> 
>>    Note that this is EXACTLY BACKWARDS from what the simplifier does.
>>    The simplifier tries to get rid of occurrences of x, in favour of wild,
>>    in the hope that there will only be one remaining occurrence of x, namely
>>    the scrutinee of the case, and we can inline it.
>> 
>> Many thanks,
>> Rodrigo
>> 
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org <mailto:ghc-devs@haskell.org>
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Getting all variables in scope in CoreM

2023-08-06 Thread Rodrigo Mesquita
Dear GHC devs,

I’m trying to invoke the GHC.Core.Lint linting functions from a Core GHC plugin.
These functions take a LintConfig that can mostly be constructed from DynFlags, 
the exception being

    l_vars :: ![Var] -- ^ Ids that should be treated as being in scope

My question is then “How can I get all variables in scope in the module 
underlying this CoreM computation, to pass in the LintConfig?”.

My ultimate goal is to run LintM to determine the usage environment of a given 
core expression.
In that sense, the “Id out of scope” errors aren’t that important to me, but 
they do make the linting action fail.
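
For concreteness, here is the kind of thing I have in mind as a starting point 
(the function name is mine, I’m assuming a Core-to-Core pass that has the 
ModGuts in hand, and the exact re-exports may differ between GHC versions; 
imported Ids referenced by the expression would presumably still need to be 
added separately):

    import GHC.Plugins

    -- Top-level binders of the module being compiled, as a first
    -- approximation of "everything in scope" for l_vars.
    topLevelVars :: ModGuts -> [Var]
    topLevelVars guts = bindersOfBinds (mg_binds guts)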

Thanks in advance!
Rodrigo
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Configure errors

2023-08-07 Thread Rodrigo Mesquita
Thanks for pointing this out, Simon

What you’ve pasted is the trace of the ghc-toolchain program.
We should probably lower the verbosity after !10976 lands, but in the meantime 
it’s mostly just useful for debugging CI.

At the end of the configure step there might be a message that starts with 
“Don’t worry! This will not affect your build in any way”. That’s as 
unalarming as I could make it :).

If you do see the warning, it’s due to a discrepancy between the output 
produced by configure and the one produced by ghc-toolchain:
We’re fixing all the discrepancies caught by CI in !10976 — after which we’ll 
always validate these discrepancies in CI, to ensure ghc-toolchain is kept up 
to date with configure, while configure still configures toolchains.

I’ve also been busy writing a blog post about this. It should come out soon enough.

Rodrigo

> On 7 Aug 2023, at 10:50, Simon Peyton Jones  
> wrote:
> 
> Rodrigo
> 
> I'm getting lots of errors from ./configure, see below. 
> 
> Seems to be something to do with your toolchain stuff?  I'm lost.  Should I 
> worry? If not, could they be made to look less alarming somehow?
> 
> Simon
> 
> Entering: checking for C compiler
>   checking for -Qunused-arguments support...
>   Entering: checking for -Qunused-arguments support
> Execute: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Command failed: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Exited with code 1
> 
>   found for -Qunused-arguments support: Cc {ccProgram = Program {prgPath = 
> "/usr/bin/gcc", prgFlags = []}}
>   checking whether Cc supports --target...
>   Entering: checking whether Cc supports --target
> Execute: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.c
> Command failed: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.c
> Exited with code 1
> 
>   found whether Cc supports --target: Cc {ccProgram = Program {prgPath = 
> "/usr/bin/gcc", prgFlags = []}}
>   checking whether Cc works...
>   Entering: checking whether Cc works
> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>   found whether Cc works: ()
>   checking for C99 support...
>   Entering: checking for C99 support
> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>   found for C99 support: ()
>   checking whether cc supports extra via-c flags...
>   Entering: checking whether cc supports extra via-c flags
> Execute: /usr/bin/gcc -c -fwrapv -fno-builtin -Werror -x c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.c
>   found whether cc supports extra via-c flags: ()
> found for C compiler: Cc {ccProgram = Program {prgPath = "/usr/bin/gcc", 
> prgFlags = []}}
> checking for C++ compiler...
> Entering: checking for C++ compiler
>   x86_64-unknown-linux-g++ not found in search path
>   x86_64-unknown-linux-clang++ not found in search path
>   x86_64-unknown-linux-c++ not found in search path
>   checking whether C++ supports --target...
>   Entering: checking whether C++ supports --target
> Execute: /usr/bin/g++ -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> Command failed: /usr/bin/g++ -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> Exited with code 1
> 
>   found whether C++ supports --target: Cxx {cxxProgram = Program {prgPath = 
> "/usr/bin/g++", prgFlags = []}}
>   Execute: /usr/bin/g++ -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> found for C++ compiler: Cxx {cxxProgram = Program {prgPath = "/usr/bin/g++", 
> prgFlags = []}}
> checking for C preprocessor...

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Configure errors

2023-08-07 Thread Rodrigo Mesquita
The trace is akin to the configure trace — it shows invocations of the 
toolchain while trying to determine properties of said toolchain, e.g. which flags 
are supported.

For example

>   checking for -Qunused-arguments support...
>   Entering: checking for -Qunused-arguments support
> Execute: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Command failed: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Exited with code 1
>   found for -Qunused-arguments support: Cc {ccProgram = Program {prgPath = 
> "/usr/bin/gcc", prgFlags = []}}

That is the trace of invoking the C compiler with -Qunused-arguments, checking 
whether the C compiler supports such an option.
That command exited with code 1, likely because the compiler indeed doesn’t 
support -Qunused-arguments.
That’s fine; it means we won’t pass -Qunused-arguments to your C compiler.

Rodrigo

> On 7 Aug 2023, at 10:50, Simon Peyton Jones  
> wrote:
> 
> Rodrigo
> 
> I'm getting lots of errors from ./configure, see below. 
> 
> Seems to be something to do with your toolchain stuff?  I'm lost.  Should I 
> worry? If not, could they be made to look less alarming somehow?
> 
> Simon
> 
> Entering: checking for C compiler
>   checking for -Qunused-arguments support...
>   Entering: checking for -Qunused-arguments support
> Execute: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Command failed: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
> /tmp/tmp0/test.o.c
> Exited with code 1
> 
>   found for -Qunused-arguments support: Cc {ccProgram = Program {prgPath = 
> "/usr/bin/gcc", prgFlags = []}}
>   checking whether Cc supports --target...
>   Entering: checking whether Cc supports --target
> Execute: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.c
> Command failed: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.c
> Exited with code 1
> 
>   found whether Cc supports --target: Cc {ccProgram = Program {prgPath = 
> "/usr/bin/gcc", prgFlags = []}}
>   checking whether Cc works...
>   Entering: checking whether Cc works
> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>   found whether Cc works: ()
>   checking for C99 support...
>   Entering: checking for C99 support
> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>   found for C99 support: ()
>   checking whether cc supports extra via-c flags...
>   Entering: checking whether cc supports extra via-c flags
> Execute: /usr/bin/gcc -c -fwrapv -fno-builtin -Werror -x c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.c
>   found whether cc supports extra via-c flags: ()
> found for C compiler: Cc {ccProgram = Program {prgPath = "/usr/bin/gcc", 
> prgFlags = []}}
> checking for C++ compiler...
> Entering: checking for C++ compiler
>   x86_64-unknown-linux-g++ not found in search path
>   x86_64-unknown-linux-clang++ not found in search path
>   x86_64-unknown-linux-c++ not found in search path
>   checking whether C++ supports --target...
>   Entering: checking whether C++ supports --target
> Execute: /usr/bin/g++ -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> Command failed: /usr/bin/g++ -Werror --target=x86_64-unknown-linux -c -o 
> /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> Exited with code 1
> 
>   found whether C++ supports --target: Cxx {cxxProgram = Program {prgPath = 
> "/usr/bin/g++", prgFlags = []}}
>   Execute: /usr/bin/g++ -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.cpp
> found for C++ compiler: Cxx {cxxProgram = Program {prgPath = "/usr/bin/g++", 
> prgFlags = []}}
> checking for C preprocessor...

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Configure errors

2023-08-07 Thread Rodrigo Mesquita
I agree.

The alarming trace is due to the verbosity being too high.
I’ve opened #23794 to track lowering the verbosity. Then we should get a 
simpler trace, like the configure one.

Rodrigo

> On 7 Aug 2023, at 11:11, Simon Peyton Jones  
> wrote:
> 
> But the other tests look like
> 
> checking for gnutar... no
> checking for gtar... no
> checking for tar... /usr/bin/tar
> checking for gpatch... no
> checking for patch... /usr/bin/patch
> checking for autoreconf... /usr/bin/autoreconf
> 
> Can't you say
> 
> checking for -Qunused-arguments... no
> 
> You can explain this to me, now, and that helps me, today.  But I'm trying to 
> save you from having to explain it to many future GHC devs, and/or save them 
> time in hunting for answers to the same question.
> 
> No rush 
> 
> Simon
> 
> On Mon, 7 Aug 2023 at 11:07, Rodrigo Mesquita  <mailto:rodrigo.m.mesqu...@gmail.com>> wrote:
>> The trace is akin to the configure trace — it shows invocations of the 
>> toolchain in trying to determine properties of said toolchain e.g. which 
>> flags are supported.
>> 
>> For example
>> 
>>>   checking for -Qunused-arguments support...
>>>   Entering: checking for -Qunused-arguments support
>>> Execute: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
>>> /tmp/tmp0/test.o.c
>>> Command failed: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
>>> /tmp/tmp0/test.o.c
>>> Exited with code 1
>>>   found for -Qunused-arguments support: Cc {ccProgram = Program {prgPath = 
>>> "/usr/bin/gcc", prgFlags = []}}
>> 
>> Is the trace of invoking the C compiler with -Qunused-arguments, checking 
>> whether the C compiler supports such an option.
>> That command exited with code 1 likely because the compiler doesn’t indeed 
>> support -Qunused-arguments.
>> That’s fine, it means we won’t pass -Qunused-arguments to your C compiler.
>> 
>> Rodrigo
>> 
>>> On 7 Aug 2023, at 10:50, Simon Peyton Jones >> <mailto:simon.peytonjo...@gmail.com>> wrote:
>>> 
>>> Rodrigo
>>> 
>>> I'm getting lots of errors from ./configure, see below. 
>>> 
>>> Seems to be something to do with your toolchain stuff?  I'm lost.  Should I 
>>> worry? If not, could they be made to look less alarming somehow?
>>> 
>>> Simon
>>> 
>>> Entering: checking for C compiler
>>>   checking for -Qunused-arguments support...
>>>   Entering: checking for -Qunused-arguments support
>>> Execute: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
>>> /tmp/tmp0/test.o.c
>>> Command failed: /usr/bin/gcc -Qunused-arguments -c -o /tmp/tmp0/test.o 
>>> /tmp/tmp0/test.o.c
>>> Exited with code 1
>>> 
>>>   found for -Qunused-arguments support: Cc {ccProgram = Program {prgPath = 
>>> "/usr/bin/gcc", prgFlags = []}}
>>>   checking whether Cc supports --target...
>>>   Entering: checking whether Cc supports --target
>>> Execute: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c -o 
>>> /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>>> Command failed: /usr/bin/gcc -Werror --target=x86_64-unknown-linux -c 
>>> -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>>> Exited with code 1
>>> 
>>>   found whether Cc supports --target: Cc {ccProgram = Program {prgPath = 
>>> "/usr/bin/gcc", prgFlags = []}}
>>>   checking whether Cc works...
>>>   Entering: checking whether Cc works
>>> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>>>   found whether Cc works: ()
>>>   checking for C99 support...
>>>   Entering: checking for C99 support
>>> Execute: /usr/bin/gcc -c -o /tmp/tmp0/test.o /tmp/tmp0/test.o.c
>>>   found for C99 support: ()
>>>   checking whether cc supports extra via-c flags...
>>>   Entering: checking whether cc supports extra via-c flags
>>> Execute: /usr/bin/gcc -c -fwrapv -fno-builtin -Werror -x c -o 
>>> /tmp/tmp0/test.o /tmp/tmp0/test.c
>>>   found whether cc supports extra via-c flags: ()
>>> found for C compiler: Cc {ccProgram = Program {prgPath = "/usr/bin/gcc", 
>>> prgFlags = []}}
>>> checking for C++ compiler...
>>> Entering: checking for C++ compiler
>>>   x86_64-unknown-linux-g++ not found in search path
>>>   x86_64-unknown-linux-clang++ not found in search path
>>>   x86_64-unknown-linux-c++ not found in search path
>>>   chec

Re: Problem building 9.4.7 on Fedora 36 (bytestring/cbits/is-valid-utf8.c)

2023-08-08 Thread Rodrigo Mesquita
Unfortunately you’re not the only developer facing these build errors.
They've been reported in #23810 and #23789.

It might be worth pasting your workaround there too.

Thanks,
Rodrigo

> On 8 Aug 2023, at 16:33, Viktor Dukhovni  wrote:
> 
> The build was failing because rts/OSThreads.h (via Rts.h, included from
> libraries/bytestring/cbits/is-valid-utf8.c) had no definition of
> `clockid_t`.  This type is not exposed when _POSIX_C_SOURCE is
> not defined to a sufficiently high value:
> 
>SYNOPSIS
>   #include 
> 
>   int clock_getres(clockid_t clockid, struct timespec *res);
> 
>   int clock_gettime(clockid_t clockid, struct timespec *tp);
>   int clock_settime(clockid_t clockid, const struct timespec *tp);
> 
>   Link with -lrt (only for glibc versions before 2.17).
> 
>   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
> 
>   clock_getres(), clock_gettime(), clock_settime():
>   _POSIX_C_SOURCE >= 199309L
> 
> My quick-and-dirty work-around was:
> 
> --- a/libraries/bytestring/cbits/is-valid-utf8.c
> +++ b/libraries/bytestring/cbits/is-valid-utf8.c
> @@ -27,6 +27,10 @@ LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 
> ARISING IN ANY WAY
> OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> SUCH DAMAGE.
> */
> +#undef _POSIX_C_SOURCE
> +#define _POSIX_C_SOURCE 200809L
> +#undef _XOPEN_SOURCE
> +#define _XOPEN_SOURCE   700
> #pragma GCC push_options
> #pragma GCC optimize("-O2")
> #include 
> 
> 
> There's surely a better solution.
> 
> -- 
>Viktor.
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


PMC: addConCt and newtypes bottom info

2023-10-27 Thread Rodrigo Mesquita
Dear Sebastian and GHC devs,

Regarding this bit from the function addConCt in the GHC.HsToCore.Pmc.Solver 
module,

Nothing -> do
  let pos' = PACA alt tvs args : pos
  let nabla_with bot' =
        nabla{ nabla_tm_st = ts{ts_facts = addToUSDFM env x (vi{vi_pos = pos', vi_bot = bot'})} }
  -- Do (2) in Note [Coverage checking Newtype matches]
  case (alt, args) of
    (PmAltConLike (RealDataCon dc), [y]) | isNewDataCon dc ->
      case bot of
        MaybeBot -> pure (nabla_with MaybeBot)
        IsBot    -> addBotCt (nabla_with MaybeBot) y
        IsNotBot -> addNotBotCt (nabla_with MaybeBot) y
    _ -> assert (isPmAltConMatchStrict alt)
         pure (nabla_with IsNotBot) -- strict match ==> not ⊥

My understanding is that given some x which we know e.g. cannot be bottom, if 
we learn that x ~ N y, where N is a newtype (NT), we move our knowledge of x 
not being bottom to the underlying NT Id y, since forcing the newtype in a 
pattern is equivalent to forcing the underlying NT Id.
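
For a tiny example of what I mean (my own illustration, not from the GHC 
sources):

    newtype N a = N a

    f :: N Bool -> Bool
    f (N y) = y
    -- Matching on N is free at runtime: forcing the scrutinee x is the same
    -- as forcing y, so x is bottom exactly when y is.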

Additionally, we reset x’s BottomInfo to MaybeBot.
However, I don’t understand why this reset is necessary: couldn’t we keep x’s 
BotInfo as it is, while setting y’s BotInfo to the same info?
An example where resetting this info on the newtype-match is 
important/necessary would be excellent.

FWIW, I built and tested the PMC of a devel2-flavour GHC with

MaybeBot -> pure (nabla_with MaybeBot)
IsBot    -> addBotCt (nabla_with IsBot) y
IsNotBot -> addNotBotCt (nabla_with IsNotBot) y

And it worked without warnings or errors…

Thanks in advance!
Rodrigo
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: CountDeps

2024-07-02 Thread Rodrigo Mesquita
Hey Simon

CountDepsParser and CountDepsAST list the closure of the Parser’s and the AST’s 
module dependencies, respectively.

This test helps keep in check our goal of making the AST not depend on GHC.* 
modules (the overarching goal of #21592).
In your case, you’ve extended the closure by adding dependencies on modules 
which are already in the GHC.* namespace, which is perfectly fine.

Rodrigo

> On 2 Jul 2024, at 09:06, Simon Peyton Jones  
> wrote:
> 
> Does anyone know what the CountDeps test does?
> 
> I'm getting the failure below in my branch. Should I just accept it?
> 
> I think it's because GHC.Core.FamInstEnv now depends on 
> GHC.Builtin.Types.Literals, a very reasonable dependency
> 
> Thanks
> 
> Simon
> 
> +++ "/builds/ghc/ghc/tmp/ghctest-su6yq239/test 
> spaces/testsuite/tests/count-deps/CountDepsAst.run/CountDepsAst.run.stdout.normalised"
>  2024-07-01 18:37:33.372548372 +
> @@ -2,6 +2,7 @@
> GHC.Builtin.Names
> GHC.Builtin.PrimOps
> GHC.Builtin.Types
> +GHC.Builtin.Types.Literals
> GHC.Builtin.Types.Prim
> GHC.Builtin.Uniques
> GHC.ByteCode.Types
> --- "/builds/ghc/ghc/tmp/ghctest-su6yq239/test 
> spaces/testsuite/tests/count-deps/CountDepsParser.run/CountDepsParser.stdout.normalised"
>  2024-07-01 18:37:33.393548751 +
> +++ "/builds/ghc/ghc/tmp/ghctest-su6yq239/test 
> spaces/testsuite/tests/count-deps/CountDepsParser.run/CountDepsParser.run.stdout.normalised"
>  2024-07-01 18:37:33.393548751 +
> @@ -2,6 +2,7 @@
> GHC.Builtin.Names
> GHC.Builtin.PrimOps
> GHC.Builtin.Types
> +GHC.Builtin.Types.Literals
> GHC.Builtin.Types.Prim
> GHC.Builtin.Uniques
> GHC.ByteCode.Types
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Question about profiling reports

2024-08-05 Thread Rodrigo Mesquita
Hi Celeste,

You may already be aware, but there’s an alternative format to visualise 
profiles graphically.
Instead of `-p`, use `-pj` to produce the profiling report in JSON format, and 
then load the resulting .prof file into speedscope.app 
<https://www.speedscope.app/>.
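
For example (assuming the program is built with profiling enabled; the module 
name here is just a placeholder):

    ghc -prof -fprof-auto -rtsopts Main.hs
    ./Main +RTS -pj -RTS
    # Main.prof now contains the JSON report; open it in https://www.speedscope.app/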

Cheers,
Rodrigo Mesquita

> On 30 Jul 2024, at 15:57, Celeste Hollenbeck  
> wrote:
> 
> Hi GHC Team,
> 
> I have a question for a research project. I'm looking at GHC's profiler, and 
> the documentation says a profiling report displays "a break-down by cost 
> centre of the most costly functions in the program". Here's an example of the 
> report that I'm talking about:
> 
> 
> MAIN      MAIN                   102        0    0.0    0.0   100.0  100.0
>  CAF      GHC.IO.Handle.FD       128        0    0.0    0.0     0.0    0.0
>  CAF      GHC.IO.Encoding.Iconv  120        0    0.0    0.0     0.0    0.0
>  CAF      GHC.Conc.Signal        110        0    0.0    0.0     0.0    0.0
>  CAF      Main                   108        0    0.0    0.0   100.0  100.0
>   main    Main                   204        1    0.0    0.0   100.0  100.0
>    fib    Main                   205  2692537  100.0  100.0   100.0  100.0
> 
> This example is under Section 8.1 of the GHC User's Guide 
> https://downloads.haskell.org/ghc/latest/docs/users_guide/profiling.html#:~:text=GHC's%20profiling%20system%20assigns%20costs,to%20the%20enclosing%20cost%20centre.
> 
> It looks like the numbers often add up to less than 100% for the %time, but I 
> don't see any documentation on a threshold for what makes a cost centre 
> "costly"—so I assume that "costly" means that it takes up any time 
> whatsoever, and any cost centres that take up any time at all are included in 
> the report? So perhaps the numbers under %time don't add up to 100% all the 
> time because of rounding error or perhaps garbage collection? Or something 
> else that is not profiled?
> 
> Is that correct?
> 
> Thanks,
> 
> Celeste
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: GitLab Approval

2024-08-06 Thread Rodrigo Mesquita
Hello Jose,

I’ve approved your account.
Welcome! Feel free to reach out if you have any questions.

Cheers,
Rodrigo Mesquita

> On 6 Aug 2024, at 13:56, Jose Lane  wrote:
> 
> I am looking to get my gitlab account approved. user name: Forist2034
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org <mailto:ghc-devs@haskell.org>
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs